Hacker News
Little red dot (astronomical object)
Article URL: https://en.wikipedia.org/wiki/Little_red_dot_(astronomical_object)
Comments URL: https://news.ycombinator.com/item?id=47222465
Points: 1
# Comments: 0
Open consultation: Growing up in the online world
Article URL: https://www.gov.uk/government/consultations/growing-up-in-the-online-world-a-national-consultation
Comments URL: https://news.ycombinator.com/item?id=47222457
Points: 1
# Comments: 0
Compiling English Security Policies into Deterministic Agent Guardrails
Article URL: https://starlog.is/articles/ai-agents/provos-ironcurtain/
Comments URL: https://news.ycombinator.com/item?id=47222449
Points: 1
# Comments: 0
I Fuzzed, and Vibe Fixed, the Vibed C Compiler
Article URL: https://john.regehr.org/writing/claude_c_compiler.html
Comments URL: https://news.ycombinator.com/item?id=47222448
Points: 1
# Comments: 0
Cryptographic receipts for CMMC compliance evidence
The problem I kept seeing: defense contractors would spend months preparing for a CMMC assessment — policies, screenshots, control mappings, the whole thing — and then the C3PAO would ask who last modified a document and when. No audit trail. Assessment over.

About 15–30% of first-time CMMC assessments fail. I'd guess a big chunk of those aren't failing because the security controls aren't in place. They're failing because there's no way to prove the evidence is authentic.

So I built Solymus. Every artifact you upload gets SHA-256 hashed, signed with KMS (ECDSA_SHA_256 — signing the digest, not the payload, because of the 4KB limit), and sealed into a daily Merkle chain. Each artifact gets a public /verify/{id} endpoint — no auth required — so the assessor can check it themselves.

One thing worth knowing: right after upload, merkle_status shows "pending". It upgrades to "linked" after midnight UTC when the attestation job runs. The KMS signature is valid immediately — Merkle is the additional daily seal. Took me a while to realize I needed to document that clearly, or people assumed it was broken.

Free tier at prolixotech.com. Happy to go into the crypto implementation if anyone's curious.
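A minimal sketch of the digest-then-sign and daily Merkle seal described above, using only the standard library. The function names and the pairwise-hash tree shape are assumptions for illustration; the actual KMS signing step is omitted, since the post only specifies that KMS signs the 32-byte digest rather than the payload.

```python
import hashlib

def artifact_digest(payload: bytes) -> bytes:
    # SHA-256 the artifact; per the post, KMS signs this 32-byte digest
    # (ECDSA_SHA_256), not the payload, because of the 4KB message limit.
    return hashlib.sha256(payload).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Daily seal: pairwise-hash the day's artifact digests up to one root.
    if not leaves:
        raise ValueError("no artifacts to seal")
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last leaf
            level = level + [level[-1]]
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# The day's uploads (stand-in bytes), sealed after midnight UTC.
day = [artifact_digest(b"policy.pdf bytes"),
       artifact_digest(b"screenshot.png bytes"),
       artifact_digest(b"control-mapping.xlsx bytes")]
root = merkle_root(day)
```

This matches the described status flow: each artifact's digest signature verifies on its own immediately, while inclusion in the day's root is only checkable once the attestation job has run.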
Comments URL: https://news.ycombinator.com/item?id=47222430
Points: 1
# Comments: 0
Making large Postgres migrations practical
Article URL: https://clickhouse.com/blog/practical-postgres-migrations-at-scale-peerdb
Comments URL: https://news.ycombinator.com/item?id=47221602
Points: 1
# Comments: 0
Show HN: Product Model – A structured grammar for bridging PRDs and code
Hi HN,
I built Product Model, an open-source MDX-based grammar that gives product specs the same rigor as code — typed blocks, validation, version control, and a JSON AST output.
The problem: product intent lives in Google Docs and Notion. Implementation lives in code. Between them there's no structured layer, so requirements drift, edge cases disappear, and "that's not what I meant" becomes the most expensive sentence every sprint.
I wanted something more natural than code but more structured than a Google Doc — a format that both humans and AI agents can read, write, and reason about. That's what led me to build this.
Product Model lets PMs author .product.mdx files using blocks like Feature, Policy, Logic, Definition, and Constraint. Tooling validates the grammar, checks cross-references, and builds a machine-readable AST. Think of it as a type system for product requirements.
It comes with a CLI for validation and builds, and a visual "Studio" editor so you never have to touch raw MDX if you don't want to.
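The "type system for product requirements" idea can be sketched as a tiny validator. The block kinds (Feature, Policy, Constraint) come from the post, but the field shapes and rules below are invented for illustration; the real grammar and AST live in the repo.

```python
# Hypothetical sketch: each block kind allows certain fields, and
# cross-references between blocks must resolve -- the two checks the
# post says the tooling performs on .product.mdx files.
ALLOWED = {
    "Feature":    {"id", "title", "constraints"},
    "Policy":     {"id", "rule"},
    "Constraint": {"id", "text"},
}

def validate(blocks: list[dict]) -> list[str]:
    errors, ids = [], {b.get("id") for b in blocks}
    for b in blocks:
        kind = b.get("kind")
        if kind not in ALLOWED:
            errors.append(f"unknown block kind: {kind}")
            continue
        extra = set(b) - ALLOWED[kind] - {"kind"}
        if extra:
            errors.append(f"{b.get('id')}: unexpected fields {sorted(extra)}")
        for ref in b.get("constraints", []):   # cross-reference check
            if ref not in ids:
                errors.append(f"{b.get('id')}: dangling reference {ref}")
    return errors

spec = [
    {"kind": "Feature", "id": "F1", "title": "Login", "constraints": ["C1"]},
{"kind": "Constraint", "id": "C1", "text": "Max 3 attempts"},
]
```

An edge case named in a Constraint block either resolves or fails validation, which is the structured layer the post argues is missing between docs and code.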
Repo: https://github.com/pmTouchedTheCode/product-model
Would love feedback on the grammar design and whether this matches real PM workflows you've seen.
Comments URL: https://news.ycombinator.com/item?id=47221585
Points: 1
# Comments: 0
Google tests new Learning Hub powered by goal-based actions
Article URL: https://www.testingcatalog.com/google-tests-new-learning-hub-powered-by-goal-based-actions/
Comments URL: https://news.ycombinator.com/item?id=47221568
Points: 1
# Comments: 0
The Vanishing Giants [Coaling Towers] of America's Steam Age
Article URL: https://thereader.mitpress.mit.edu/the-vanishing-giants-of-americas-steam-age/
Comments URL: https://news.ycombinator.com/item?id=47221565
Points: 1
# Comments: 0
The Vegetarian Offset
Article URL: https://hydroindulgence.com/s
Comments URL: https://news.ycombinator.com/item?id=47221556
Points: 1
# Comments: 1
Grabchars 2.0 – get keystrokes direct, first update in 36 years
Article URL: https://github.com/DanielSmith/grabchars
Comments URL: https://news.ycombinator.com/item?id=47221552
Points: 1
# Comments: 1
History Rhymes: Large Language Models Off to a Bad Start?
Article URL: https://michaeljburry.substack.com/p/history-rhymes-large-language-models
Comments URL: https://news.ycombinator.com/item?id=47221537
Points: 1
# Comments: 0
GitHub – Maderix/ANE: Training Neural Networks on Apple Neural Engine
Article URL: https://github.com/maderix/ANE
Comments URL: https://news.ycombinator.com/item?id=47221528
Points: 2
# Comments: 0
Show HN: Engram – Give your terminal an eidetic memory with local AI
Article URL: https://github.com/TLJQ/engram
Comments URL: https://news.ycombinator.com/item?id=47221514
Points: 2
# Comments: 0
Malus: Clean room engineering of any open-source dependency
Article URL: https://malus.sh/blog.html
Comments URL: https://news.ycombinator.com/item?id=47221504
Points: 1
# Comments: 1
Euros
Article URL: https://my-notes.dragas.net/2026/02/22/179-euros/
Comments URL: https://news.ycombinator.com/item?id=47221480
Points: 1
# Comments: 0
5 Takeaways on America's Boom in Billionaires
Article URL: https://www.nytimes.com/2026/03/02/us/billionaire-boom-takeaways.html
Comments URL: https://news.ycombinator.com/item?id=47221474
Points: 1
# Comments: 1
Ask HN: What is your AI workflow for software projects?
Currently my workflow is as follows:
- Pull down any related repos into a root dir
- Tell Claude (Claude Code) to create a markdown file with details on the different repos and how they relate to each other.
- I'll create a markdown file explaining the changes I need made.
- Ask it to think through the problem and create a detailed change plan in a separate markdown file.
- Have Claude create a detailed Todo list.
- I'll review everything and, if it's good, have it kick off the work. It should log any issues it runs into in the change file rather than ask permission, and provide me with the steps required to test the changes.
At that point I essentially take back over, review the code, and engage in a two-way conversation to dial the changes in.
This is a natural workflow that developed over time but I never truly stopped to think through it. Am I on an island of one with this process? If so, how are you all using the tools?
Comments URL: https://news.ycombinator.com/item?id=47221470
Points: 1
# Comments: 0
Texas AG Ken Paxton calls Conduent breach of 25M Americans' PII "largest hack in US history"
Article URL: https://www.extremetech.com/internet/data-breach-exposes-25-million-americans-in-what-texas-calls-the-largest
Comments URL: https://news.ycombinator.com/item?id=47221467
Points: 1
# Comments: 0
Show HN: Btrc – I built a language with AI in a few weeknights that outputs C11
btrc is a statically-typed language that transpiles to C11. It adds classes, generics (monomorphized), type inference, lambdas, f-strings, collections (Vector, Map, Set), threads, GPU compute via WebGPU, ARC memory management, exception handling, and a standard library — while generating strict C11 with no runtime, no GC, and no VM.
The compiler is a 6-stage Python pipeline (lexer → parser → analyzer → IR gen → optimizer → C emitter) driven by a formal EBNF grammar and an algebraic AST spec (Zephyr ASDL). The generated C is readable and linkable with any C11 compiler. It ships with a VS Code extension (LSP with completions, diagnostics, go-to-def, hover) and 930 tests.
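The 6-stage pipeline above can be sketched as a simple stage chain. The stage names come from the post; the toy function bodies are stand-ins, since the real lexer, parser, and AST are derived from the EBNF grammar and ASDL spec.

```python
# Toy skeleton of a lexer -> parser -> analyzer -> IR gen -> optimizer
# -> C emitter pipeline: each stage is a function, and compilation is
# their left-to-right composition.
from functools import reduce

def lex(source):      return source.split()                 # token stream
def parse(tokens):    return ("program", tokens)            # AST
def analyze(ast):     return ast                            # type checks (no-op here)
def lower(ast):       return [("push", t) for t in ast[1]]  # IR
def optimize(ir):     return ir                             # e.g. constant folding
def emit_c(ir):       return "/* C11 */ " + "; ".join(f"{op} {a}" for op, a in ir)

STAGES = [lex, parse, analyze, lower, optimize, emit_c]

def compile_btrc(source: str) -> str:
    return reduce(lambda value, stage: stage(value), STAGES, source)

c_out = compile_btrc("let x = 1")
```

Keeping the stages as plain functions is also what makes the described editor reuse possible: an LSP server can run just the front half of the chain for diagnostics.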
I've wanted to build something like this for about 10 years, but it was always too ambitious to do on the side. With AI, I was able to build it over a handful of evenings after work.
A few things I find fun / interesting / noteworthy:
- The EBNF grammar and ASDL spec are the single source of truth — the lexer, parser, and AST node classes are all derived from them, not hardcoded
- Generics are monomorphized (like C++ templates / Rust), so zero runtime overhead but binary size grows per type combination
- ARC handles most memory management including cycle detection and cleanup on exceptions — no GC, deterministic destruction
- The entire stdlib (collections, math, datetime, IO, threading) is written in btrc itself
- @gpu functions transpile to WGSL compute shaders with auto-generated WebGPU boilerplate — array params become storage buffers, scalars become uniforms
- The generated C is meant to be readable — you can step through it in gdb/lldb and it mostly makes sense (just verbose with lots of underscores)
- There's a 3D game engine example (ball + WASD + jump + shadows + raymarching) that's ~570 lines of btrc across 11 small modules
- The VS Code extension reuses the compiler's own lexer, parser, and analyzer — diagnostics match exactly what the compiler reports
- btrc inherits C's memory model wholesale — no borrow checker, no lifetime analysis. You can absolutely still shoot yourself in the foot
- The whole thing was built in a few evenings after work with heavy AI assistance, which honestly felt like the most interesting part of the project
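The monomorphization bullet above can be illustrated with a toy template-stamping pass in the compiler's own language, Python. The btrc-to-C type mapping and function names here are invented for illustration.

```python
# Toy monomorphizer: one generic definition is stamped into one concrete
# C function per instantiated type, so calls dispatch statically with
# zero runtime overhead -- at the price of binary size growing per type
# combination, as the post notes.
TEMPLATE = """\
{ctype} max_{tag}({ctype} a, {ctype} b) {{
    return a > b ? a : b;
}}"""

def monomorphize(instantiations: set[str]) -> str:
    # Assumed mapping from btrc-style type names to C11 types.
    ctypes = {"Int": "int64_t", "Float": "double"}
    return "\n".join(TEMPLATE.format(ctype=ctypes[t], tag=t.lower())
                     for t in sorted(instantiations))

c_src = monomorphize({"Int", "Float"})
```

This is the same trade-off C++ templates and Rust generics make, as opposed to a single boxed implementation shared across types.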
Comments URL: https://news.ycombinator.com/item?id=47221459
Points: 1
# Comments: 0
