Feed aggregator
Gemini 3.1 Flash Lite Preview
Article URL: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-1-flash-lite
Comments URL: https://news.ycombinator.com/item?id=47234849
Points: 1
# Comments: 1
Show HN: Sai – Your always-on co-worker
Article URL: https://www.simular.ai/sai
Comments URL: https://news.ycombinator.com/item?id=47234838
Points: 1
# Comments: 0
AI Domains Not Resolving
Noticed this when trying to download Claude Code. The claude.ai domain is currently not reliably resolving from many major resolvers (1.1.1.1, 8.8.8.8, 9.9.9.9). The same appears to be true for other .ai domains. Is there something going on with the Anguilla nameservers?
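A quick way to reproduce the observation is to query each of the public resolvers mentioned above and then check the .ai TLD delegation directly. This is my own diagnostic sketch, not from the post; it assumes `dig` (from bind-utils/dnsutils) is installed and degrades gracefully if it is not.

```shell
#!/bin/sh
# Cross-resolver check: query claude.ai against each public resolver,
# then list the nameservers delegated for the .ai ccTLD.
if command -v dig >/dev/null 2>&1; then
    for resolver in 1.1.1.1 8.8.8.8 9.9.9.9; do
        echo "== A record for claude.ai via $resolver =="
        dig +short +time=2 +tries=1 @"$resolver" claude.ai A || echo "(query failed)"
    done
    echo "== Nameservers delegated for the .ai TLD =="
    dig +short +time=2 +tries=1 ai. NS || echo "(query failed)"
else
    echo "dig not installed; skipping checks"
fi
```

If the per-resolver queries time out while the `ai. NS` lookup still answers, the problem is more likely with the TLD's authoritative servers than with any single recursive resolver.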
Comments URL: https://news.ycombinator.com/item?id=47234828
Points: 1
# Comments: 0
The context window is not your database
Article URL: https://hornet.dev/blog/the-context-window-is-not-your-database
Comments URL: https://news.ycombinator.com/item?id=47234781
Points: 2
# Comments: 0
LexisNexis confirms data breach as hackers leak stolen files
Article URL: https://www.bleepingcomputer.com/news/security/lexisnexis-confirms-data-breach-as-hackers-leak-stolen-files/
Comments URL: https://news.ycombinator.com/item?id=47234764
Points: 2
# Comments: 0
Show HN: SysNav – An Intelligent Cockpit for DevOps (Local-First)
Hey HN, I'm Ravi, and I've spent the last few months building *SysNav* (https://sysnav.ai). It's an intelligent terminal workspace designed specifically for SREs who want the speed of a native shell but the context-awareness of an AI when things break.
**The Problem**

I found myself constantly context-switching: run `kubectl`, see an error, copy it to an LLM, sanitize PII manually, paste, get a generic fix, modify it for my variables... it was friction-heavy.

**The Solution: SysNav**

I built SysNav as an "Assistant for DevOps and Ops". It wraps your terminal in a native "Intelligent Cockpit" interface:

1. **Ask Mode (Safe)**: You ask "Why is the redis pod crashing?". SysNav inspects the screen output (using a local PTY buffer), anonymizes PII, and sends *only* that context to the LLM. It returns a suggested command. **It creates a human-in-the-loop safety barrier.**

2. **Agent Mode (Autonomous)**: If you trust it, you can give it a goal ("Find all zombie processes and kill them"), and it will execute the steps one by one, stopping if it encounters an unexpected error.

**Security Architecture (The "Paranoia" Check)**

This was the most important part for me. SysNav uses a **Local-First Architecture**:

* **SSH Keys & Env Vars**: Never leave your machine. They are stored locally by the Electron app.
* **Context**: We only send the text buffer of the terminal (logs/errors) to the inference layer. A redact-PII step runs locally before any network request.

**Status**

We are currently in **Public Beta**. The core terminal is "fast enough" (<100ms startup), and the AI context awareness is saving me about 30 minutes per incident.

I'd love for you to roast the architecture or tell me what features would make you actually trust an AI in your production environment.

Check it out: https://sysnav.ai (Download available for macOS/Linux/Windows)
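A local redact-PII pass of the kind described can be as simple as a set of regex substitutions applied to the terminal buffer before any network call. This is my own illustrative sketch; the patterns and placeholder names are assumptions, not SysNav's actual implementation.

```python
import re

# Hypothetical redaction rules: scrub IP addresses, email addresses, and
# obvious secret assignments from a terminal buffer before it leaves the
# machine. Applied in order; each pattern maps to a replacement string.
PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<REDACTED_IP>"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<REDACTED_EMAIL>"),
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"), r"\1=<REDACTED>"),
]

def redact(buffer: str) -> str:
    """Return the terminal buffer with PII-like strings replaced."""
    for pattern, replacement in PATTERNS:
        buffer = pattern.sub(replacement, buffer)
    return buffer
```

Regexes like these are a pragmatic first line of defense, but they will miss PII that doesn't match a known shape, which is why keeping the raw buffer local in the first place matters.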
Comments URL: https://news.ycombinator.com/item?id=47234763
Points: 1
# Comments: 0
Show HN: Mind-mem – Zero-infra agent memory with 19 MCP tools (BM25+vector+RRF)
Article URL: https://github.com/star-ga/mind-mem
Comments URL: https://news.ycombinator.com/item?id=47234761
Points: 1
# Comments: 1
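The "RRF" in the title above refers to Reciprocal Rank Fusion, a standard way to merge a BM25 keyword ranking with a vector-search ranking without having to calibrate their incompatible score scales. A minimal sketch of the technique (my own, not mind-mem's code):

```python
def rrf_fuse(rankings, k=60):
    """Merge ranked lists of doc ids (best first) via Reciprocal Rank
    Fusion: score(d) = sum over lists containing d of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a BM25 ranking with a vector ranking.
bm25_hits = ["a", "b"]
vector_hits = ["b", "c"]
fused = rrf_fuse([bm25_hits, vector_hits])  # "b" appears in both lists, so it wins
```

The constant `k` (commonly 60) damps the advantage of top ranks so that a document appearing in several lists beats one ranked first in only a single list.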
What Military Drones Can Teach Self-Driving Cars
Article URL: https://spectrum.ieee.org/military-drones-self-driving-cars
Comments URL: https://news.ycombinator.com/item?id=47234759
Points: 1
# Comments: 0
Book Review: Why Are the Prices So Damn High? (2019)
Article URL: https://srconstantin.wordpress.com/2019/06/28/book-review-why-are-the-prices-so-damn-high/
Comments URL: https://news.ycombinator.com/item?id=47234758
Points: 2
# Comments: 0
From Fargo to Zebra
Article URL: https://cendyne.dev/posts/2026-02-27-from-fargo-to-zebra.html
Comments URL: https://news.ycombinator.com/item?id=47234756
Points: 1
# Comments: 0
A New Map of Human Experience [video]
Article URL: https://www.youtube.com/watch?v=r0QY2_Ej32Q
Comments URL: https://news.ycombinator.com/item?id=47234755
Points: 1
# Comments: 0
Mac Themes Garden
Article URL: https://macthemes.garden/
Comments URL: https://news.ycombinator.com/item?id=47234754
Points: 1
# Comments: 0
As a YouTube Creator, I'm Buzzed About Honor's Robot Camera Phone
They seized $4.8m in crypto… then gave the master key to the internet
Pentagon ditches Anthropic AI over “security risk” and OpenAI takes over
On Friday the US Pentagon cut ties with Anthropic, the company behind Claude AI. Defense Secretary Pete Hegseth designated the San Francisco-based company a “supply-chain risk to national security.”
The supply-chain risk designation means that no contractor, supplier, or partner doing business with the US military can deal with Anthropic. The label previously applied only to foreign adversaries like Huawei, though, and using it against a US company marks a rare escalation in a government-industry dispute. According to reports, President Donald Trump also ordered every federal agency to stop using Anthropic’s technology.
What Anthropic wouldn’t budge on
Anthropic called the designation “unlawful and politically motivated” and said it intends to challenge it in court.
At the center of the dispute is how far Anthropic believes its models should be allowed to go inside military systems. Anthropic, the first frontier AI company deployed on the military’s classified networks, wanted two contractual restrictions on its AI model Claude, as outlined in its response to the Pentagon’s announcement: the Pentagon could not use its tech for mass domestic surveillance of Americans, and could not employ it in fully autonomous weapons.
The Pentagon had previously demanded that all AI vendors agree to “all lawful purposes” language as part of their contracts. Anthropic told ABC that what the Pentagon finally offered left the door open for the government to violate the company’s no-surveillance and no-weapons clauses.
Defense Secretary Hegseth responded with a statement cancelling Anthropic’s $200m Pentagon contract, awarded last July. He accused Anthropic of attempting to seize veto power over military operations and called the company’s position fundamentally incompatible with American principles.
Anthropic’s CEO Dario Amodei called the government’s response retaliatory and punitive and promised to challenge the designation in court.
Legal scholars suggest the AI company could have a strong case, questioning whether Hegseth can meet the statutory requirements for such a designation, which is intended to protect military systems from adversarial sabotage rather than to resolve a commercial disagreement over contract terms.
Dan W. Ball, senior fellow at the American Foundation for Innovation, called the Pentagon’s move “attempted corporate murder,” arguing that Google, Amazon, and NVIDIA would have to detach themselves from Anthropic if Hegseth got his way. Amazon is Anthropic’s primary cloud computing provider, though Anthropic also uses Google’s data centers extensively. Both companies are investors in Anthropic, as is NVIDIA, which also partners with the AI company on GPU engineering. If the Pentagon’s designation restricts federal contractors from integrating Anthropic technology into defense-related systems, those partners could be required to separate or ringfence any federal-facing work involving the company.
OpenAI steps in
In a whirlwind of policy changes by the US military, the Pentagon also signed a deal with ChatGPT creator OpenAI on Friday evening, just a few hours after dropping Anthropic.
OpenAI CEO Sam Altman said the agreement preserved the same principles Anthropic had been blacklisted for defending.
The difference, according to Altman, is the enforcement mechanism. Instead of hard contractual prohibitions, OpenAI accepted the “all lawful purposes” framework but layered on architectural controls: cloud-only deployment, a proprietary safety stack the Pentagon agreed not to override, and cleared engineers embedded forward. OpenAI said these protections made the company confident that the Pentagon couldn’t cross the red lines it shares with Anthropic.
Altman reportedly said Anthropic’s approach differed because it relied on specific contract language rather than existing legal protections, adding Anthropic “may have wanted more operational control than we did.”
The morning after
The policy dispute did not immediately change how existing systems were operating. According to reporting by The Wall Street Journal and Axios, US Central Command used Anthropic’s AI during Operation Epic Fury, a coordinated US–Israeli operation targeting Iran. The outlets reported that the system was used for intelligence assessment, target analysis, and operational modeling.
Claude remained in use because it was already embedded in certain classified military systems. As a senior defense official previously told Axios:
“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
Hegseth announced a six-month period during which the Pentagon will remove Anthropic’s AI from its systems.
Consumers vote with their feet
The dispute has also prompted reactions from some AI industry employees and users. More than 875 employees across Google and OpenAI signed an open letter backing Anthropic’s stance. According to the letter:
“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”
A consumer campaign organized under the name QuitGPT is urging users to stop using ChatGPT, with a protest planned at OpenAI’s HQ this week. Claude, meanwhile, rocketed to the top of Apple’s App Store over the weekend.
Social Media Workers Are Burnt Out and Relying on AI to Help. It's a Mixed Bag
Fig Security Launches With $38 Million to Bolster SecOps Resilience
The company was founded in March 2025 and it has now emerged from stealth mode.
The post Fig Security Launches With $38 Million to Bolster SecOps Resilience appeared first on SecurityWeek.
Bank and payments giant complete first payment initiated by artificial intelligence in a controlled environment
Free Software Needs Free Tools: Making Your Project Open
Article URL: https://cfp.cfgmgmtcamp.org/ghent2026/talk/LHWU8T/
Comments URL: https://news.ycombinator.com/item?id=47233925
Points: 1
# Comments: 0
