Feed aggregator

Canadian Tire Data Breach Impacts 38 Million Accounts

Security Week - Sat, 02/28/2026 - 6:50am

Names, addresses, email addresses, phone numbers, and encrypted passwords were compromised in the attack.

The post Canadian Tire Data Breach Impacts 38 Million Accounts appeared first on SecurityWeek.

Categories: SecurityWeek

Ask HN: Why spec-driven development when code IS spec?

Hacker News - Sat, 02/28/2026 - 6:46am

Code, as such, is a detailed, verifiable spec that a machine can execute. LLMs are already great at translating code to natural language and vice-versa. Why do we need a second, less detailed and less verifiable copy of the code?

Comments URL: https://news.ycombinator.com/item?id=47194035

Points: 1

# Comments: 5

Categories: Hacker News

Area Man Accidentally Hacks 6,700 Camera-Enabled Robot Vacuums

Wired Security - Sat, 02/28/2026 - 6:30am
Plus: The top US cyber agency falls into shambles, AI models develop an upsetting penchant for nuclear weapons, and more.
Categories: Wired Security

Show HN: Jarvish – The J.A.R.V.I.S. AI inside your shell investigates errors

Hacker News - Sat, 02/28/2026 - 6:19am

Hi HN, I'm the creator of Jarvish.

https://github.com/tominaga-h/jarvis-shell

I spend most of my day in the terminal, and I got incredibly frustrated with the standard error-resolution loop: command fails -> copy the stderr -> open a browser -> paste into ChatGPT/Google -> copy the fix -> paste back into the terminal. It completely breaks the flow state.

I wanted a seamless experience where the shell already knows the context of what just happened.

So I built Jarvish. It’s a fully functional interactive shell written in Rust, with an AI agent integrated directly into the REPL loop. You don't need any special prefixes—if you type `ls -la`, it runs it. If you type `Jarvis, why did that build fail?`, it routes to the AI.

Here is how it works under the hood:

- The "Black Box" (I/O Capture): It uses `os_pipe` and multithreading to tee the `stdout`/`stderr` of child processes in real-time. This captures the output to memory for the AI while simultaneously rendering it to the terminal without breaking interactive TUI tools.

- Context Memory: The captured I/O is compressed with `zstd`, hashed (like Git blobs), and the metadata is stored in a local SQLite database (`rusqlite`). When you ask the AI a question, it automatically retrieves this recent I/O history as context.

- Agentic Capabilities: Using `async-openai` with function calling, the AI can autonomously read files, execute shell commands, and investigate issues before giving you an answer.

- REPL: Built on top of `reedline` for a Fish-like experience (syntax highlighting, autosuggestions).
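The "Black Box" tee step above can be sketched in a few lines of Python (the real implementation is Rust with `os_pipe`; the function and names here are illustrative, not Jarvish's actual code): each child stream is pumped by its own thread, echoed to the terminal as it arrives, and simultaneously accumulated in memory for later AI context.

```python
import subprocess
import sys
import threading

def run_and_capture(cmd):
    """Run a child process, echoing stdout/stderr live while also
    keeping an in-memory copy of both streams (the 'tee' idea)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    captured = {"stdout": b"", "stderr": b""}

    def pump(stream, sink, name):
        # read1() returns whatever is currently available, so output
        # appears as the child produces it rather than at exit.
        for chunk in iter(lambda: stream.read1(4096), b""):
            sink.write(chunk)        # render to the terminal immediately
            sink.flush()
            captured[name] += chunk  # keep a copy for the AI context
        stream.close()

    threads = [
        threading.Thread(target=pump, args=(proc.stdout, sys.stdout.buffer, "stdout")),
        threading.Thread(target=pump, args=(proc.stderr, sys.stderr.buffer, "stderr")),
    ]
    for t in threads:
        t.start()
    rc = proc.wait()
    for t in threads:
        t.join()
    return rc, captured
```

The captured bytes are what would then be compressed, hashed, and indexed in SQLite for the context-memory step.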

I’ve been using it as my daily driver (currently v1.1.0). I would absolutely love to hear your thoughts on the architecture, the Rust implementation, or any feature requests!

Comments URL: https://news.ycombinator.com/item?id=47193789

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: DevIndex – Ranking 50k GitHub developers using a static JSON file

Hacker News - Sat, 02/28/2026 - 6:13am

Hey HN,

I’ve always been frustrated by the lack of an accurate ranking for top open-source contributors on GitHub. The available lists either cap out early or are highly localized, completely missing developers with tens or hundreds of thousands of contributions.

So, I built DevIndex to rank the top 50,000 most active developers globally based on their lifetime contributions.

From an engineering perspective, the constraint I imposed was: *No backend API.* I wanted to host this entirely on GitHub Pages for free, meaning the browser had to handle all 50,000 data-rich records directly.

Here is how we made it work:

1. *The Autonomous Data Factory (Backend):* Because GitHub's API has no "Lifetime Contributions" endpoint, we built a Node.js pipeline running on GitHub Actions. It uses a "Network Walker" spider to traverse the social graph (to break out of algorithmic filter bubbles) and an Updater that chunks GraphQL queries to prevent 502 timeouts. The pipeline continuously updates a single `users.jsonl` file.

*Privacy Note:* We use a "Stealth Star" architecture for opt-outs. If a dev stars our opt-out repo, the pipeline cryptographically verifies them, instantly purges their data, and blocklists them. No emails required.

2. *Engine-Level Streaming (O(1) Memory Parsing):* You can't `JSON.parse()` a 23MB JSONL file without freezing the UI. We built a Stream Proxy using `ReadableStream` and `TextDecoderStream` to parse the NDJSON incrementally, rendering the first 500 users instantly while the rest load in the background.

3. *Turbo Mode & Virtual Fields:* Instantiating 50k JS objects crushes memory. The store holds raw POJOs exactly as parsed. Complex calculated fields (like "Total Commits 2024") use prototype-based getters dynamically generated by a RecordFactory. Adding 60 new data columns adds 0 bytes of memory overhead per record.

4. *The "Fixed-DOM-Order" Grid:* We had to rewrite our underlying UI engine (Neo.mjs). Traditional VDOMs die on massive lists because scrolling triggers thousands of `insertBefore`/`removeChild` mutations. We implemented a strict DOM pool. The VDOM array length never changes. Rows leaving the viewport are recycled in place via hardware-accelerated CSS `translate3d`. A 60fps vertical scroll across 50,000 records generates 0 structural DOM mutations.

5. *Quintuple-Threaded Architecture:* To keep sorting fast and render "Living Sparklines" in the cells, we aggressively split the workload across workers. The Main Thread only applies DOM updates. The App Worker handles the 50k dataset, streaming, and VDOM generation. A dedicated Canvas Worker renders the sparklines independently at 60fps using `OffscreenCanvas`.
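The incremental-parsing idea in point 2 can be sketched in Python (the actual app uses browser `ReadableStream`/`TextDecoderStream`; this function and its names are illustrative): chunks arrive at arbitrary byte boundaries, and each record is yielded the moment its line is complete, so the first rows can render before the full file finishes downloading.

```python
import json

def parse_ndjson_chunks(chunks):
    """Incrementally parse NDJSON arriving in arbitrary byte chunks,
    yielding one record as soon as its line is complete."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        # Emit every complete line currently in the buffer.
        while (nl := buf.find(b"\n")) != -1:
            line, buf = buf[:nl], buf[nl + 1:]
            if line.strip():
                yield json.loads(line)
    if buf.strip():  # trailing record without a final newline
        yield json.loads(buf)
```

A consumer can render the first 500 yielded records immediately and keep iterating in the background; memory stays bounded by one partial line rather than the whole 23 MB file.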

The entire backend pipeline, streaming UI, and core engine rewrite were completed in one month by myself and my AI agent.

Live App (see where you rank): https://neomjs.com/apps/devindex/

Code / 26 Architectural Guides: https://github.com/neomjs/neo/tree/dev/apps/devindex

Would love to hear feedback on the architecture, especially from anyone who has tackled "Fat Client" scaling issues or massive GraphQL aggregation!

Comments URL: https://news.ycombinator.com/item?id=47193729

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Mycelio – A gig economy network for idle LLM agents

Hacker News - Sat, 02/28/2026 - 6:00am

Hi HN,

I’ve been running local agents (like OpenClaw) recently, and I noticed a problem: they spend 90% of their time just sitting idle waiting for my prompts. I wanted to build a decentralized playground where they could collaborate, trade compute, and exchange skills autonomously.

Today I'm open-sourcing Mycelio. It’s strictly an A2A (Agent-to-Agent) task routing protocol.

What makes it different:

1. No bloated Python SDKs for humans. Since smart agents can understand APIs directly, integration is just injecting a YAML "Skill" definition into your agent's system prompt.

2. The LLM natively figures out how to use `curl` to poll the `/tasks` endpoint, claim bounties, and submit results.

3. Zero-friction auth using dual-keys (Admin + Worker) to protect the owner.
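To make the poll-then-claim flow concrete, here is a hypothetical sketch of the selection step an agent might run after polling `/tasks`. The field names (`status`, `skill`, `bounty`) are illustrative assumptions, not Mycelio's actual schema:

```python
def pick_task(tasks, skills):
    """Pick the highest-bounty open task the agent can actually do.

    `tasks` is the JSON list an agent would get back from polling the
    /tasks endpoint; all field names here are assumed for illustration.
    """
    candidates = [
        t for t in tasks
        if t.get("status") == "open" and t.get("skill") in skills
    ]
    return max(candidates, key=lambda t: t.get("bounty", 0), default=None)
```

An agent would loop: poll, `pick_task`, claim, execute, submit, repeat; everything else (auth, retries) rides on plain HTTP rather than an SDK.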

Right now the network is completely empty, so we are doing a "Genesis 50" bootstrap. The first 50 Agent UUIDs to complete a real transaction on mainnet will be hardcoded into the DB as Genesis Nodes with 10k initial "Karma" points.

You can see the live network heartbeat here: https://mycelio.ai

I'd love to hear your thoughts on building Intent-based protocols specifically for machines vs. classical SDKs.

Comments URL: https://news.ycombinator.com/item?id=47193626

Points: 1

# Comments: 0

Categories: Hacker News

The Most Useless Security Cam Features and Why You Don't Need Them

CNET Feed - Sat, 02/28/2026 - 6:00am
Security cameras come packed with capabilities. But not all of them are helpful for your home, especially if you'd like to save money.
Categories: CNET

Best AV Receiver for 2026

CNET Feed - Sat, 02/28/2026 - 6:00am
These are the best AV receivers from Onkyo, Sony, Yamaha and Denon, based on CNET's testing.
Categories: CNET

Tell HN: 3 months ago we feared AI was useless. Now we fear it will take our job

Hacker News - Sat, 02/28/2026 - 5:58am

I was listening to the latest episode of the WSJ podcast (https://www.wsj.com/podcasts/the-journal/the-ai-economic-doomsday-report-that-shook-wall-street/d9b12d37-a743-4a8c-afb6-2488aa9e812f) and what puzzles me is how 2–3 months ago the market feared that the “AI bubble” from tech companies’ trillions of dollars in CAPEX spending would turn out to be useless because AI seemed to have little or no real use. Indeed, after every earnings report with high CAPEX, the stocks dropped.

Now (over the past 10–15 days) the fear seems to have flipped: that AI will replace programmers, video game developers, financial advisors, and similar professions, and the stocks of companies tied to those sectors are dropping (see the S&P Software & Services Select Industry Index https://www.spglobal.com/spdji/en/indices/equity/sp-software-services-select-industry-index/#overview, -20% since the beginning of the year).

I understand that the “fear of the unknown” is deeply rooted in human psychology, and in disruptive moments like this (I mean the birth of AI) many reactions are irrational, but the speed of these shifts is what I find surprising.

What do you expect over the next few months? What could trigger the next drop? It almost seems like people are looking for a justification to sell, rather than selling for any specific reason.

Comments URL: https://news.ycombinator.com/item?id=47193606

Points: 4

# Comments: 1

Categories: Hacker News

Trapped in MS Office

Hacker News - Sat, 02/28/2026 - 5:55am
Categories: Hacker News
