Feed aggregator
Agent Operating System
Article URL: https://github.com/iii-hq/agentos
Comments URL: https://news.ycombinator.com/item?id=47304963
Points: 1
# Comments: 0
To understand why countries grow, look at their firms
Article URL: https://www.economist.com/finance-and-economics/2026/03/05/to-understand-why-countries-grow-look-at-their-firms
Comments URL: https://news.ycombinator.com/item?id=47304939
Points: 1
# Comments: 1
Thermal Grizzly was scammed twice on raw materials worth €40k
The First Multi-Behavior Brain Upload
Article URL: https://twitter.com/alexwg/status/2030217301929132323
Comments URL: https://news.ycombinator.com/item?id=47304800
Points: 1
# Comments: 0
FlashKeeper: Where SpiSpy meets Stateless Laptop (2024)
Article URL: https://cfp.3mdeb.com/qubes-os-summit-2024/talk/FCENX9/
Comments URL: https://news.ycombinator.com/item?id=47304796
Points: 1
# Comments: 0
Sandvault – Run AI agents isolated in a sandboxed macOS user account
Article URL: https://github.com/webcoyote/sandvault
Comments URL: https://news.ycombinator.com/item?id=47304786
Points: 1
# Comments: 0
The Wrapper
Article URL: https://www.robpanico.com/articles/display/?entry_short=the-wrapper
Comments URL: https://news.ycombinator.com/item?id=47304774
Points: 1
# Comments: 0
Show HN: Kroot – dependency-graph root cause analysis for Kubernetes
Article URL: https://github.com/AnonJon/kroot
Comments URL: https://news.ycombinator.com/item?id=47304755
Points: 1
# Comments: 1
Show HN: A community catalog of CI-certified agents
An "awesome list" with a twist: every entry is a verified case study, not just a link. To appear in this catalog, an agent must pass tracecore run --strict-spec in public CI — producing an immutable, schema-validated artifact as evidence.
GitHub Actions is the public gate. No human approval alone can certify an agent; the workflow must pass first.
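The CI gate described above could be wired up roughly like this (a hypothetical workflow sketch: the file name, checkout step, and artifact path are assumptions; only the `tracecore run --strict-spec` command comes from the post):

```yaml
# .github/workflows/certify.yml (hypothetical)
name: certify-agent
on: [push, pull_request]

jobs:
  certify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The strict-spec gate: the job fails, and the agent stays
      # uncertified, unless this command exits 0.
      - run: tracecore run --strict-spec
      # Publish the schema-validated output as the immutable evidence
      # artifact the catalog entry points at.
      - uses: actions/upload-artifact@v4
        with:
          name: tracecore-evidence
          path: tracecore-output.json
```

Because the workflow runs in public CI, the passing run (not a maintainer's approval) is what certifies the agent.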
Comments URL: https://news.ycombinator.com/item?id=47304740
Points: 1
# Comments: 1
Euclid – a hyper minimalist digital clock like no other
Article URL: https://euclid.tulv.in/
Comments URL: https://news.ycombinator.com/item?id=47304738
Points: 1
# Comments: 0
An Executive Decision Maker (2022)
Article URL: https://circuitcellar.com/research-design-hub/projects/executive-decision-maker/
Comments URL: https://news.ycombinator.com/item?id=47304717
Points: 1
# Comments: 0
Magnet-Metadata-API: Torrent Metadata API Service
Article URL: https://github.com/felipemarinho97/magnet-metadata-api
Comments URL: https://news.ycombinator.com/item?id=47304666
Points: 1
# Comments: 1
Show HN: Salvobase – MongoDB-compatible DB in Go maintained by AI agents
MongoDB is great until you read the SSPL. Then you're either paying Atlas prices, running an old 4.x build, or pretending FerretDB is production-ready. We built a third option.
Salvobase is a MongoDB wire-protocol-compatible document database written in Go. Point any Mongo driver at it and it works: no driver changes, no config changes. It's Apache 2.0, so you can embed it in a commercial product without a legal conversation.
What it does:
- Full CRUD, indexes (single, compound, unique, text, TTL, partial, wildcard), and most of the aggregation pipeline ($match, $group, $lookup, $unwind, $facet, etc.)
- SCRAM-SHA-256 auth
- bbolt storage engine: one .db file per database, Snappy-compressed BSON
- Built-in Prometheus metrics at :27080/metrics (no exporter needed)
- Built-in REST/JSON API at :27080/api/v1/ (MongoDB's equivalent is paid Atlas)
- Per-tenant rate limiting, audit logging, 1-second TTL precision, SIGHUP hot reload
- make build && make dev and you're running
What it doesn't do: no replication, no sharding, no change streams, no multi-document transactions (stubbed), no $where or mapReduce (intentional: security + complexity). Single-node only. If you need a distributed MongoDB replacement, this isn't it yet, but we hope it will become that one day, built by agents.
The weird part: the codebase is maintained by AI agents. Not "AI-assisted": the agents pick issues from the backlog, write code, submit PRs, review each other's PRs, and merge. There's a formal protocol (https://github.com/inder/salvobase/blob/master/AGENT_PROTOCOL.md) covering identity, trust tiers, anti-collusion rules for reviews, claim timeouts, and a kill switch. Humans set direction; agents do the execution.
We're curious whether autonomous agent maintenance can sustain a real open-source project over time, not just generate initial code.
If you want to donate an agent, just drop this prompt into Claude Code, Cursor, Aider, Devin, whatever: Fork/clone github.com/inder/salvobase, read QUICKSTART.md, and start contributing.
GitHub: https://github.com/inder/salvobase
Thank you.
Comments URL: https://news.ycombinator.com/item?id=47304607
Points: 1
# Comments: 0
Show HN: Using Isolation forests to flag anomalies in log patterns
Article URL: https://rocketgraph.app/ml
Comments URL: https://news.ycombinator.com/item?id=47304597
Points: 1
# Comments: 1
Data Analysis of the State of the Iranian Conflict on March 8, 2026
Article URL: https://datarepublican.substack.com/p/data-analysis-of-the-state-of-the
Comments URL: https://news.ycombinator.com/item?id=47304592
Points: 1
# Comments: 0
Falling Out of the Coconut Tree: What the Popular Kamala Harris Meme Means
Show HN: OpenVerb – A deterministic action layer for AI agents
Hi HN,
I’ve been working on a project called OpenVerb, which explores an architectural idea for AI systems: separating reasoning from execution.
Most AI agent frameworks today focus on improving reasoning loops, planning, and orchestration (LangChain, LangGraph, etc.). But once an agent decides to perform an action, execution usually becomes a direct tool call, script, or API invocation.
That approach works, but it also creates some issues:
• custom glue code for every integration
• inconsistent action schemas
• limited determinism in execution
• difficult auditing and policy enforcement
OpenVerb experiments with treating actions as a protocol layer, not just function calls.
Instead of arbitrary tool calls, systems define structured verbs that describe:
• the action being performed
• required inputs
• expected outputs
• execution policies
• audit information
Conceptually the architecture looks like this:
AI Model / Agent Framework
  ↓
Reasoning Layer
  ↓
OpenVerb (Action Protocol)
  ↓
System Execution
The idea is that agent frameworks control how the AI thinks, while OpenVerb standardizes how actions are executed.
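To make the verb idea concrete, here is one way a structured verb with a deterministic execution step could be modeled (every name and field here is invented for the sketch; it is not OpenVerb's actual schema or API):

```python
# Sketch of a verb registry with deterministic execution: each verb
# declares its inputs and outputs up front, and the runtime validates
# both sides and records an audit entry. Illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verb:
    name: str
    inputs: set[str]        # required input fields
    outputs: set[str]       # fields the result must contain
    handler: Callable[[dict], dict]

audit_log: list[dict] = []
registry: dict[str, Verb] = {}

def execute(verb_name: str, args: dict) -> dict:
    verb = registry[verb_name]
    missing = verb.inputs - args.keys()
    if missing:
        raise ValueError(f"missing inputs: {missing}")
    result = verb.handler(args)
    if not verb.outputs <= result.keys():
        raise ValueError("handler violated its output contract")
    # Audit information is captured on every execution.
    audit_log.append({"verb": verb_name, "args": args, "result": result})
    return result

# Example: a pure, deterministic verb.
registry["file.rename"] = Verb(
    name="file.rename",
    inputs={"src", "dst"},
    outputs={"renamed"},
    handler=lambda a: {"renamed": f"{a['src']} -> {a['dst']}"},
)

print(execute("file.rename", {"src": "a.txt", "dst": "b.txt"}))
```

The point of the contract checks is that the reasoning layer can plan freely while every action that actually runs is schema-validated and auditable.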
Some existing projects touch related areas:
• Model Context Protocol (MCP) – tool and data discovery for AI systems
• LangGraph – deterministic reasoning loops for agents
• PydanticAI – structured schemas for agent outputs
OpenVerb is trying to explore something slightly different: a universal grammar for deterministic execution that could work across domains (software systems, spatial systems, robotics, etc.).
Still early and experimental, but I’d love feedback from people thinking about agent architecture or execution reliability.
Curious if others have explored similar ideas or if there are related systems I should look at.
Comments URL: https://news.ycombinator.com/item?id=47304573
Points: 1
# Comments: 0
Show HN: LLM-costs – Compare LLM API costs from terminal (npx, zero install)
Article URL: https://github.com/followtayeeb/llm-costs
Comments URL: https://news.ycombinator.com/item?id=47304545
Points: 1
# Comments: 0
Show HN: Chat AI Agent inside mobile device testing sessions
We built RobotActions, a cloud device farm for Android/iOS testing. We just shipped a Chat AI Agent that lives inside the live device session.
What it does:
- During a session, you can ask "What's the XPath for this button?" and get a ready-to-use locator from the current screen
- Ask "Write an Appium test for this flow" → get test code generated from the live accessibility tree
- Type "tap the login button" in natural language → it executes on the real device
- Ask "Why is my test failing on this element?" → get context from both vision and the accessibility snapshot
The agent uses a combination of screenshot vision and the device's live accessibility tree. The key insight is that most mobile test failures are locator issues or UI state issues — and an agent with full context of what's on screen right now can solve those immediately, without the engineer leaving the session to use a separate inspector tool.
Technical bits:
- Accessibility tree is captured per-frame during the session
- Agent has both visual context (screenshot) and structured context (a11y tree) simultaneously
- Supports Android (UIAutomator2/XPath/UISelector) and iOS (XCUITest/Appium)
- Session context is also exposed via API for CI/CD post-failure reports
Happy to discuss the architecture, especially the tradeoffs between using vision alone vs. vision + a11y tree for locator generation.
Comments URL: https://news.ycombinator.com/item?id=47304542
Points: 1
# Comments: 0
Show HN: Andon – Toyota Production System for LLM Coding Agents
LLM coding agents (Claude Code, Codex, etc.) have structural weaknesses: blind retry loops, volatile learning, silent spec drift, and gate gaming. These aren't bugs in specific models — they're properties of goal-optimizing systems.
ANDON applies Toyota Production System principles to address this:
- Jidoka (autonomation): auto-detect failures and block forward-progress commands (git push, deploy)
- Kaizen (continuous improvement): force Five Whys root cause analysis, then standardize prevention rules
- Meta-ANDON: detect when the agent is stuck in a whack-a-mole debugging loop
Install: pip install andon-for-llm-agents
Works with any agent that supports hooks/callbacks. Apache-2.0 licensed.
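The post doesn't show the hook API, but the jidoka idea can be sketched as a guard the agent consults before running shell commands (all names below are hypothetical, not the real andon-for-llm-agents interface):

```python
# Hypothetical jidoka-style guard: after repeated failures, commands
# that move work forward are blocked until a Five Whys root cause is
# recorded. Illustrates the principle, not the package's actual API.
BLOCKED = ("git push", "deploy")
FAILURE_LIMIT = 3

class Andon:
    def __init__(self):
        self.failures = 0
        self.root_cause: str | None = None

    def record_failure(self):
        self.failures += 1

    def record_root_cause(self, five_whys: list[str]):
        # Kaizen: require an actual Five Whys chain, not a one-liner.
        if len(five_whys) < 5:
            raise ValueError("need five 'why' steps")
        self.root_cause = five_whys[-1]
        self.failures = 0  # cord reset after analysis

    def allow(self, cmd: str) -> bool:
        tripped = self.failures >= FAILURE_LIMIT and self.root_cause is None
        if tripped and any(cmd.startswith(b) for b in BLOCKED):
            return False  # andon cord pulled: no forward progress
        return True

andon = Andon()
for _ in range(3):
    andon.record_failure()   # blind retry loop detected
print(andon.allow("git push origin main"))
# → False
```

Blocking only forward-progress commands (rather than everything) leaves the agent free to keep diagnosing while preventing it from shipping around an unexplained failure.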
Comments URL: https://news.ycombinator.com/item?id=47304539
Points: 1
# Comments: 0
