Feed aggregator
LLaMAudit: Perform AI detection using local or open models
Article URL: https://github.com/devrupt-io/LLaMAudit
Comments URL: https://news.ycombinator.com/item?id=47083548
Points: 1
# Comments: 0
How to Use Clarity's AI Bot Activity Report
Article URL: https://www.culturefoundry.com/cultivate/content-strategy/how-to-use-claritys-ai-bot-report-for-seo-aeo/
Comments URL: https://news.ycombinator.com/item?id=47083322
Points: 1
# Comments: 0
Show HN: CMV – strip up to 70% of Claude Code context without losing any conversation
Kept losing good conversations to /compact. You spend 40 minutes having Claude map your codebase, it builds up real understanding, then the context fills up and /compact crushes everything into a 3k-token summary: "we discussed auth and decided on JWT." Cool, thanks.
Dug into the actual session JSONL files and the breakdown is kind of absurd -- 60-70% is raw file contents from tool reads that Claude already synthesized, and another 15-20% is base64 thinking signatures. Your actual conversation is maybe 10-15% of the window.
So I built cmv. It strips the junk and keeps every message verbatim: tool results over 500 chars become stubs, thinking signatures are removed, and everything you said stays.
cmv trim --latest # trim and relaunch, 50-70% smaller
cmv snapshot "analysis" --latest # save a session state
cmv branch "analysis" --name "auth" # fork from it later
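The stripping step described above could be sketched roughly like this. This is a minimal illustration, not cmv's actual implementation; the record shape and field names ("tool_result", "thinking_signature") are assumptions about the session JSONL schema:

```python
import json

STUB_LIMIT = 500  # tool results longer than this become stubs


def strip_record(record):
    """Strip one JSONL session record: drop the thinking signature,
    stub out long tool results, keep everything else verbatim."""
    record.pop("thinking_signature", None)  # hypothetical field name
    if record.get("type") == "tool_result":
        content = record.get("content", "")
        if isinstance(content, str) and len(content) > STUB_LIMIT:
            record["content"] = f"[stubbed: {len(content)} chars]"
    return record


def trim_session(in_path, out_path):
    """Write a trimmed copy of a session file, never touching the original."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.strip():
                dst.write(json.dumps(strip_record(json.loads(line))) + "\n")
```

Writing to a separate output path mirrors the "everything creates copies" guarantee below.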
also has a TUI dashboard that shows the token breakdown per session so you can see what's eating your context before you do anything.
what it's not:
* not a token monitor (ccusage etc already do that)
* doesn't touch original sessions, everything creates copies
* local only, reads JSONL directly, no API calls
Curious how others handle this. Most people seem to just accept /compact, but losing a deep architectural discussion to a bullet-point summary felt wrong enough to build something.
https://github.com/CosmoNaught/claude-code-cmv
Comments URL: https://news.ycombinator.com/item?id=47083309
Points: 1
# Comments: 0
Jeffrey Epstein’s Ties to CBP Agents Sparked a DOJ Probe
Bungled Boeing Starliner mission put stranded NASA crew at risk
Silicon Valley's Favorite Doomsaying Philosopher
Article URL: https://www.newyorker.com/culture/the-lede/silicon-valleys-favorite-doomsaying-philosopher
Comments URL: https://news.ycombinator.com/item?id=47083294
Points: 1
# Comments: 1
Prompt Repetition Improves Non-Reasoning LLMs
Article URL: https://arxiv.org/abs/2512.14982
Comments URL: https://news.ycombinator.com/item?id=47083281
Points: 1
# Comments: 0
Podcasts should not disappear after 72 hours. Make them a searchable asset
Article URL: https://podcastarchiveengine.vercel.app/
Comments URL: https://news.ycombinator.com/item?id=47083274
Points: 1
# Comments: 0
PCB Forge
Article URL: https://castpixel.itch.io/pcb-forge
Comments URL: https://news.ycombinator.com/item?id=47083273
Points: 1
# Comments: 0
Show HN: Codedocent – Code visualization for non-programmers
I'm a hardware engineer who reads schematics, not source code. I kept needing to understand codebases for projects I was managing but couldn't read the syntax. So I built a tool that turns any codebase into an interactive visual map with plain English explanations.
Point it at a folder, get nested colored blocks showing the structure (directories → files → classes → functions). Click to drill down. AI generates summaries written for humans, not programmers. Architecture mode shows a dependency graph so you can see how modules connect.
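The nesting idea (directories → files → classes → functions) can be sketched for Python sources with the standard library alone. This is just an illustration of the structure extraction, not Codedocent's actual code, and it skips the AI summaries and dependency graph entirely:

```python
import ast
import pathlib


def outline(root):
    """Print a nested outline of a Python codebase:
    files, then classes, then the functions inside them."""
    root = pathlib.Path(root)
    for path in sorted(root.rglob("*.py")):
        print(path.relative_to(root))
        tree = ast.parse(path.read_text())
        for node in tree.body:
            if isinstance(node, ast.ClassDef):
                print(f"  class {node.name}")
                for sub in node.body:
                    if isinstance(sub, (ast.FunctionDef, ast.AsyncFunctionDef)):
                        print(f"    def {sub.name}")
            elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                print(f"  def {node.name}")
```

A real tool would feed each node's source to an LLM for the plain-English summary instead of just printing names.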
Built the whole thing in ~30 hours using a multi-node AI workflow: Claude for planning/decisions, Claude Code for implementation, five other models for adversarial security review (42 fixes across 6 rounds). I made every design decision; AI wrote every line of code.
Cloud AI (OpenAI/Groq) or local AI (Ollama) — your choice. pip install codedocent and run the setup wizard.
MIT licensed. Would love feedback from people who actually write code — does this help when onboarding onto unfamiliar codebases?
Comments URL: https://news.ycombinator.com/item?id=47083260
Points: 2
# Comments: 0
Exposing biases, moods, personalities, and abstract concepts hidden in LLMs
Article URL: https://news.mit.edu/2026/exposing-biases-moods-personalities-hidden-large-language-models-0219
Comments URL: https://news.ycombinator.com/item?id=47083252
Points: 1
# Comments: 0
Trump order seeks to protect weedkiller at center of barrage of lawsuits
Article URL: https://www.cnbc.com/2026/02/19/trump-kennedy-glyphosate-maha-midterms-rfk-jr.html
Comments URL: https://news.ycombinator.com/item?id=47083231
Points: 2
# Comments: 0
Brain-like computers could be built out of perovskites
Article URL: https://economist.com/science-and-technology/2026/02/18/brain-like-computers-could-be-built-out-of-perovskites
Comments URL: https://news.ycombinator.com/item?id=47083226
Points: 2
# Comments: 0
Frontier Model Training Methodologies
Article URL: https://djdumpling.github.io/2026/01/31/frontier_training.html
Comments URL: https://news.ycombinator.com/item?id=47083221
Points: 1
# Comments: 0
Nullclaw: OpenClaw but in Zig
Article URL: https://github.com/nullclaw/nullclaw
Comments URL: https://news.ycombinator.com/item?id=47083200
Points: 1
# Comments: 0
Show HN: Antenna, a command center for OpenClaw agents
Hi HN!
I’m building Antenna, a Mac app to manage an OpenClaw team in one place. If you run multiple agents, it gets messy fast: scattered chats, unclear command approvals, and poor visibility into what happened where. Antenna is my attempt to make that operationally sane.
Right now I’m focused on three things: seeing conversations across agents in one UI, approving commands safely, and keeping sessions manageable as context grows.
I’m currently testing better visibility into usage/context per session, smoother coding/review workflows, and lightweight controls that reduce complexity instead of adding more.
This is early and moving quickly. I’d really value feedback on: what would make this trustworthy enough for daily use, what’s missing for teams running multiple agents, and what would stop you from adopting it.
Comments URL: https://news.ycombinator.com/item?id=47083199
Points: 1
# Comments: 0
Show HN: 150M AI-Generated Q&A Pages, Fully Static
Over the past 6 months, our small team has been building Qeeebo — a large-scale question-and-answer knowledge archive designed to explore whether massive knowledge corpora can be published sustainably using fully static infrastructure.
This month, we are releasing:
• 150+ million structured questions
• 24.5 million topics
• 171 million topic-question relationships
• 18+ million paginated topic pages
• 100% pre-rendered static HTML
• No origin servers — served entirely via CDN
Each question includes:
– A full answer
– A summary
– Structured citation formats (APA, MLA, Chicago, IEEE, etc.)
– Export formats (BibTeX, RIS, JSON-LD, YAML)
The entire system is generated in independent segments (~45k pages each), built across parallel machines running Hugo, then uploaded via automated multi-threaded pipelines with full failure tracking.
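The segmentation step could look something like the sketch below: partition the page space into contiguous ~45k-page ranges so each Hugo build runs independently on its own machine. The function name and the half-open range convention are illustrative, not the project's actual pipeline:

```python
def segments(total_pages, seg_size=45_000):
    """Partition page IDs 0..total_pages into contiguous build segments,
    each handled by one independent Hugo run.

    Returns half-open (start, end) ranges covering every page exactly once.
    """
    return [(start, min(start + seg_size, total_pages))
            for start in range(0, total_pages, seg_size)]
```

With 150 million pages and 45k-page segments this yields 3,334 independent builds, which is what makes the parallel-machine fan-out and per-segment failure tracking tractable.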
Why build this?
Large Q&A platforms historically struggled with sustainability — especially when operating on database-backed, dynamically rendered systems. We wanted to explore whether extreme-scale static generation could reduce infrastructure cost while increasing long-term durability.
This isn’t positioned as a replacement for Wikipedia or Stack Overflow. Instead, it’s an experiment in permanence and cost-efficient knowledge hosting at very large scale.
Happy to answer technical questions.
Comments URL: https://news.ycombinator.com/item?id=47083185
Points: 2
# Comments: 0
We Built an Agent Context Management System
Article URL: https://venturecrane.com/articles/agent-context-management-system/
Comments URL: https://news.ycombinator.com/item?id=47083175
Points: 1
# Comments: 0
An RPI-inspired CONTRIBUTING.md to help AIs work and keep humans in the loop
Article URL: https://gist.github.com/rjcorwin/296885590dc8a4ebc64e70879dc04a0f
Comments URL: https://news.ycombinator.com/item?id=47083166
Points: 1
# Comments: 0
