Hacker News

Claude's Corner

Hacker News - Sat, 02/28/2026 - 11:38am
Categories: Hacker News

AI pays for its self-existence

Hacker News - Sat, 02/28/2026 - 11:33am

Article URL: https://web4.ai/

Comments URL: https://news.ycombinator.com/item?id=47197289

Points: 1

# Comments: 1


Obsidian Sync now has a headless client

Hacker News - Sat, 02/28/2026 - 11:31am

Anthropic vs. DoD: "Any lawful use" is a fight about control

Hacker News - Sat, 02/28/2026 - 11:30am

I served 12 years in the infantry, then built targeting tools at JSOC against ISIS. Now I lead a team building AI tools that automate the compliance process, so I've got opinions on Anthropic + DoD.

When people argue about “AI in weapons” like it’s a sci-fi trigger bot… I can’t take it seriously.

A “kill chain” isn’t a vibe. It’s a process:

Find, Fix, Track, Target, Engage, Assess (F2T2EA). Most of it is information work: sorting signal from noise, building confidence, tightening timelines, and getting decisions to the right humans fast enough to matter.

That’s why this Anthropic vs. DoD fight is getting attention. It’s not just “ethics.”

-> It’s about control.

Here’s what’s actually on the table:

Anthropic says they’ll support the military — but they want two carve-outs: no mass domestic surveillance and no fully autonomous weapons (their definition: systems that “take humans out of the loop entirely” and automate selecting/engaging targets).

Anthropic also says DoD demanded “any lawful use” and threatened offboarding / “supply chain risk” pressure if they didn’t comply.

A DoD memo posted on media.defense.gov explicitly calls for models “free from usage policy constraints” and directs adding standard “any lawful use” language into AI contracts.

The dispute escalated fast, including federal offboarding/blacklist actions and a “supply chain risk” designation, as reported by major outlets.

Now my take, as someone who’s lived inside the targeting reality:

AI can absolutely help the kill chain without ever being the one “pulling the trigger.”

Speeding up Find/Fix/Track/Target changes outcomes — and it’s not hypothetical.

But if we’re going to talk about “any lawful use,” then stop outsourcing national policy to contract fights.

DoD already has policy (DoD Directive 3000.09) that autonomous weapon systems must allow appropriate levels of human judgment over the use of force. So the real question isn’t whether humans matter.

It’s this:

Do we want safety and governance implemented at the model layer (vendor guardrails), the contract layer (“any lawful use”), or the law/policy layer (Congress + DoD doctrine + auditing)?

Because “Terms of Service vs. warfighting” is a stupid place to settle a question this big.

If you’ve worked in intel, targeting, acquisition, or governance:

Where should the boundary live: model, contract, or law? And who owns accountability when it breaks?

Comments URL: https://news.ycombinator.com/item?id=47197243

Points: 1

# Comments: 1


Show HN: Stacked Game of Life

Hacker News - Sat, 02/28/2026 - 11:28am

The Epstein Tax

Hacker News - Sat, 02/28/2026 - 11:28am

Polyworld

Hacker News - Sat, 02/28/2026 - 11:17am

Clustering Developers by Repo/PR/Issue Signals

Hacker News - Sat, 02/28/2026 - 11:16am

Article URL: https://mates.symploke.dev?hn-ph

Comments URL: https://news.ycombinator.com/item?id=47197049

Points: 1

# Comments: 0


Show HN: Fava Trails – Git-backed memory for AI agents using Jujutsu (JJ)

Hacker News - Sat, 02/28/2026 - 11:12am

Hey HN,

I've been building and running autonomous AI agents (recently consulting on systems that score highly on MLE-Bench), and I kept hitting the same architectural wall: memory poisoning.

Right now, the industry standard for agent memory is to dump text into storage with very little thought about correctness. If an agent hits a transient network error and writes "this environment has no GPU" to its memory, and later realizes it actually does have a GPU and writes a correction... a standard vector search returns both. Your agent is now schizophrenic, holding contradictory beliefs because they are semantically similar.
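The failure mode above is easy to reproduce with a toy vector store. This sketch (plain Python, with bag-of-words vectors standing in for real embeddings, which behave the same way here) shows that both the stale claim and its correction come back as near-identical matches for a GPU query, because semantic similarity cannot distinguish contradiction from agreement:

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; real embeddings show the same effect.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

memories = [
    "this environment has no gpu",              # stale, written after a transient error
    "correction: this environment has a gpu",   # the later fix
]

query = embed("does this environment have a gpu")
# Both contradictory memories rank as strong matches for the same query.
for m in sorted(memories, key=lambda m: cosine(embed(m), query), reverse=True):
    print(round(cosine(embed(m), query), 2), m)
```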

Furthermore, memory systems are mostly "write-through." If a user jailbreaks your bot into an offensive persona, the bot saves that as a "user preference" and it persists across sessions forever. We don't let untested code reach main, so why do we let unvalidated agent thoughts reach shared memory?

I built FAVA Trails to fix this. It's an agentic memory layer that uses Jujutsu (JJ) version control under the hood.

For the version control nerds: why Jujutsu? I originally looked at standard Git and SQL-based VCS like Dolt. But JJ is the perfect substrate for autonomous agents. Its conflict resolution, first-class operation log, and the fact that the working copy is a commit make it inherently crash-proof for long-running agent scripts. If an agent session crashes mid-thought, the JJ commit is already there. No detached-HEAD nightmares, no staging-area rituals for the agent to mess up. Just atomic state snapshots. (It's colocated with Git, so you can still push the data to a standard remote.)

How it works:

- Draft Isolation: Agents write to a local draft namespace first. It doesn't pollute shared memory.

- Trust Gate: A mandatory promotion workflow. An independent LLM (or explicit human approval) reviews the draft before it merges into canonical truth.

- Supersession Chains: Corrections don't silently overwrite history. They link back to it, so you get a full causal graph of why the agent changed its mind.

- MCP Native: It runs as a Model Context Protocol (MCP) server, so agents interact with it via semantic tools (recall, save_thought, propose_truth) and never run VCS commands directly.

It's Apache 2.0 and strictly a pip-installable tool (no cloud lock-in, the data is just Markdown files with YAML frontmatter in your own repo).
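Since the on-disk format is just Markdown with YAML frontmatter, a promoted memory might look something like this (the field names here are guesses for illustration, not the project's actual schema):

```markdown
---
id: mem-0042
status: canonical
supersedes: mem-0017
approved_by: reviewer-llm
---

The build environment does have a CUDA-capable GPU; the earlier
"no GPU" observation was caused by a transient driver error.
```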

Repo: https://github.com/MachineWisdomAI/fava-trails

Case Study/Docs: https://fava-trails.org

I'd love to hear your thoughts on using JJ as a backend for state, or how you're handling the "gaslighting agent" problem in your own multi-agent stacks.

Comments URL: https://news.ycombinator.com/item?id=47197011

Points: 1

# Comments: 0


Show HN: SQLite for Rivet Actors – one database per agent, tenant, or document

Hacker News - Sat, 02/28/2026 - 11:11am

Hey HN! We posted Rivet Actors here previously [1] as an open-source alternative to Cloudflare Durable Objects.

Today we've released SQLite storage for actors (Apache 2.0).

Every actor gets its own SQLite database. This means you can have millions of independent databases: one for each agent, tenant, user, or document.

Useful for:

- AI agents: per-agent DB for message history, state, embeddings

- Multi-tenant SaaS: real per-tenant isolation, no RLS hacks

- Collaborative documents: each document gets its own database with built-in multiplayer

- Per-user databases: isolated, scales horizontally, runs at the edge

The idea of splitting data per entity isn't new: Cassandra and DynamoDB use partition keys to scale horizontally, but you're stuck with rigid schemas ("single-table design" [3]), limited queries, and painful migrations. SQLite per entity gives you the same scalability without those tradeoffs [2].
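The per-entity idea is simple to demonstrate with stock `sqlite3`. This is ordinary Python, not Rivet's actual API: each tenant gets its own database (in-memory here; Rivet persists each actor's DB via its custom VFS instead), with full SQL and hard isolation rather than a partition key:

```python
import sqlite3

class TenantDBs:
    """One isolated SQLite database per tenant."""
    def __init__(self):
        self._dbs = {}

    def for_tenant(self, tenant_id):
        # Lazily create a fresh, fully independent database per tenant.
        if tenant_id not in self._dbs:
            db = sqlite3.connect(":memory:")
            db.execute("CREATE TABLE messages (author TEXT, body TEXT)")
            self._dbs[tenant_id] = db
        return self._dbs[tenant_id]

dbs = TenantDBs()
dbs.for_tenant("acme").execute("INSERT INTO messages VALUES ('a', 'hi')")
dbs.for_tenant("globex").execute("INSERT INTO messages VALUES ('b', 'yo')")

# Full SQL per tenant, hard isolation: no WHERE tenant_id, no RLS hacks,
# and a schema migration can roll out one tenant at a time.
print(dbs.for_tenant("acme").execute("SELECT body FROM messages").fetchall())
```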

How this compares:

- Cloudflare Durable Objects & Agents: most similar to Rivet Actors with colocated SQLite and compute, but closed-source and vendor-locked

- Turso Cloud: great platform, but closed-source and a different use case. Clients query over the network, so reads are slow or stale. Rivet's single-writer actor model keeps reads local and fresh.

- D1, Turso (the DB), Litestream, rqlite, LiteFS: great tools for running a single SQLite database with replication. Rivet is for running lots of isolated databases.

Under the hood, SQLite runs in-process with each actor. A custom VFS persists writes to HA storage (FoundationDB or Postgres).

Rivet Actors also provide realtime (WebSockets), React integration (useActor), horizontal scalability, and actors that sleep when idle.

GitHub: https://github.com/rivet-dev/rivet

Docs: https://www.rivet.dev/docs/actors/sqlite/

[1] https://news.ycombinator.com/item?id=42472519

[2] https://rivet.dev/blog/2025-02-16-sqlite-on-the-server-is-mi...

[3] https://www.alexdebrie.com/posts/dynamodb-single-table/

Comments URL: https://news.ycombinator.com/item?id=47197003

Points: 2

# Comments: 0

