Feed aggregator

Claude's Corner

Hacker News - Sat, 02/28/2026 - 11:38am
Categories: Hacker News

AI pays for its self-existence

Hacker News - Sat, 02/28/2026 - 11:33am

Article URL: https://web4.ai/

Comments URL: https://news.ycombinator.com/item?id=47197289

Points: 1

# Comments: 1

Categories: Hacker News

Obsidian Sync now has a headless client

Hacker News - Sat, 02/28/2026 - 11:31am
Categories: Hacker News

Anthropic vs. DoD: "Any lawful use" is a fight about control

Hacker News - Sat, 02/28/2026 - 11:30am

I served 12 years in the infantry, then built targeting tools at JSOC during the fight against ISIS. Now I lead a team building AI tools that automate the compliance process. I’ve got opinions on Anthropic + DoD.

When people argue about “AI in weapons” like it’s a sci-fi trigger bot… I can’t take it seriously.

A “kill chain” isn’t a vibe. It’s a process: Find, Fix, Track, Target, Engage, Assess (F2T2EA). Most of it is information work: sorting signal from noise, building confidence, tightening timelines, and getting decisions to the right humans fast enough to matter.

That’s why this Anthropic vs. DoD fight is getting attention. It’s not just “ethics.”

-> It’s about control.

Here’s what’s actually on the table:

Anthropic says they’ll support the military — but they want two carve-outs: no mass domestic surveillance and no fully autonomous weapons (their definition: systems that “take humans out of the loop entirely” and automate selecting/engaging targets).

Anthropic also says DoD demanded “any lawful use” and threatened offboarding / “supply chain risk” pressure if they didn’t comply.

A DoD memo posted on media.defense.gov explicitly calls for models “free from usage policy constraints” and directs adding standard “any lawful use” language into AI contracts.

The dispute escalated fast, including federal offboarding/blacklist actions and a “supply chain risk” designation, as reported by major outlets.

Now my take, as someone who’s lived inside the targeting reality:

AI can absolutely help the kill chain without ever being the one “pulling the trigger.”

Speeding up Find/Fix/Track/Target changes outcomes — and it’s not hypothetical.
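To make the claim concrete: here is a minimal sketch (all names and thresholds are illustrative assumptions, not any real system) of the F2T2EA stages as a pipeline where AI assists the information stages but the Engage decision is gated on an explicit human approval.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    FIND = auto()
    FIX = auto()
    TRACK = auto()
    TARGET = auto()
    ENGAGE = auto()
    ASSESS = auto()

@dataclass
class Candidate:
    track_id: str
    confidence: float          # model-estimated confidence, 0.0-1.0 (assumed)
    human_approved: bool = False

# Stages where AI may filter, rank, and prioritize (information work).
AI_ASSISTED = {Stage.FIND, Stage.FIX, Stage.TRACK, Stage.TARGET}

def may_advance(candidate: Candidate, stage: Stage) -> bool:
    """Return whether a candidate may advance past a stage.

    AI triage can speed up the information stages; Engage is never
    automated and requires a recorded human decision.
    """
    if stage in AI_ASSISTED:
        return candidate.confidence >= 0.8  # illustrative triage threshold
    if stage is Stage.ENGAGE:
        return candidate.human_approved     # human-in-the-loop gate
    return True                             # Assess: after-action review
```

The point of the sketch is structural: speeding up Find through Target changes outcomes without the model ever holding the Engage decision.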

But if we’re going to talk about “any lawful use,” then stop outsourcing national policy to contract fights.

DoD policy (Directive 3000.09, Autonomy in Weapon Systems) already requires that autonomous weapon systems allow commanders and operators to exercise appropriate levels of human judgment over the use of force. So the real question isn’t whether humans matter.

It’s this:

Do we want safety and governance implemented at the model layer (vendor guardrails), the contract layer (“any lawful use”), or the law/policy layer (Congress + DoD doctrine + auditing)?

Because “Terms of Service vs. warfighting” is a stupid place to settle a question this big.

If you’ve worked in intel, targeting, acquisition, or governance:

Where should the boundary live: model, contract, or law? And who owns accountability when it breaks?

Comments URL: https://news.ycombinator.com/item?id=47197243

Points: 1

# Comments: 1

Categories: Hacker News
