Hacker News
Logic MSO – Oscilloscope with Python Support
Article URL: https://saleae.com/logic-mso
Comments URL: https://news.ycombinator.com/item?id=47049094
Points: 1
# Comments: 0
Why AI writing is so generic, boring, and dangerous: Semantic ablation
Article URL: https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/
Comments URL: https://news.ycombinator.com/item?id=47049088
Points: 1
# Comments: 0
Show HN: Wit-ts – A type-level WIT parser for TypeScript
I wrote a parser (runtime and type-level) for WebAssembly Interface Types (https://component-model.bytecodealliance.org/design/wit.html).
const wit = [
  "record user { name: string, age: u32 }",
  "variant api-error { not-found, unauthorized(string) }",
  "get-user: func(id: u64) -> user;",
  "create-post: func(author: user, post: post) -> result;",
] as const;

type Client = WitClient<typeof wit>;
// Client["get-user"]: (id: bigint) => Promise<{ name: string; age: number }>
// Client["create-post"]: (author: ...) => Promise<["ok", {...}] | ["err", ["not-found"] | ["unauthorized", string]]>

Why did I do this? Good question. I originally did this work as part of this project: https://sdk.kontor.network/. Kontor is a new Bitcoin metaprotocol that uses WITs to define smart contract interfaces.
I carved wit-ts out of the project, removed some domain-specific pieces, refactored some internals, and extended it to be compatible with a broader subset of the WIT specification. Technically there are some valid WIT types that would not be handled cleanly here (e.g. recursive types).
Tremendous debt is owed to the https://github.com/wevm/abitype project, which does the same thing for Ethereum ABIs and was the direct inspiration for the type-level approach.
Comments URL: https://news.ycombinator.com/item?id=47049085
Points: 1
# Comments: 0
Where Does Gold Come From?
Article URL: https://connordempsey.substack.com/p/where-does-gold-actually-come-from
Comments URL: https://news.ycombinator.com/item?id=47049080
Points: 2
# Comments: 0
Show HN: My 16MB vibe-coded voice cloning app
I vibe-coded this text-to-speech app in an hour last weekend. It uses the new open-weight Qwen models, so it's fully local. It supports both instruct mode and voice cloning.
And since it's built with Electrobun, it's only 16MB and uses TypeScript for the main and browser views.
Comments URL: https://news.ycombinator.com/item?id=47049077
Points: 1
# Comments: 0
Intelligent AI Delegation
Article URL: https://arxiv.org/abs/2602.11865
Comments URL: https://news.ycombinator.com/item?id=47049042
Points: 1
# Comments: 0
Show HN: Boolean-query-parser – From a 4-hour hack to 3k downloads
Article URL: https://github.com/Piergiuseppe/boolean-query-parser
Comments URL: https://news.ycombinator.com/item?id=47049035
Points: 1
# Comments: 1
RCT: Vaporized cannabis versus placebo for acute migraine
Article URL: https://headachejournal.onlinelibrary.wiley.com/doi/10.1111/head.70025
Comments URL: https://news.ycombinator.com/item?id=47049033
Points: 1
# Comments: 0
Show HN: Local Voice Assistant
Several weeks ago I built a fully local voice assistant demo with a FastAPI backend and a simple HTML front-end. All the models (ASR / LLM / TTS) are open weight and run locally, i.e. no data is sent to the Internet or any API. It's intended to demonstrate how easy it is to run a fully local AI setup on affordable commodity hardware, while also demonstrating the uncanny valley and teasing out the ethical considerations of such a setup: it allows you to perform voice cloning.
Link: https://github.com/acatovic/ova
Models used:
- ASR: NVIDIA parakeet-tdt-0.6b-v3 (600M)
- LLM: Mistral ministral-3 3b (4-bit quantized)
- TTS (Simple): Hexgrad Kokoro (82M)
- TTS (With Voice Cloning): Qwen3-TTS
It implements a classic ASR -> LLM -> TTS architecture:
1. Frontend captures user's audio and sends a blob of bytes to the backend /chat endpoint
2. Backend parses the bytes, extracts sample rate (SR) and channels, then:
2.1. Transcribes the audio to text using an automatic speech recognition (ASR) model
2.2. Sends the transcribed text to the LLM, i.e. "the brain"
2.3. Sends the LLM response to a text-to-speech (TTS) model
2.4. Performs normalization of TTS output, converts it to bytes, and sends the bytes back to frontend
3. The frontend plays the response audio back to the user
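The backend flow above can be sketched as a minimal stdlib-only Python stub. The three model functions here are placeholders standing in for the real ASR/LLM/TTS models, and `chat` mirrors what the /chat endpoint does with the audio bytes (the actual project uses FastAPI):

```python
import struct

# Placeholder model calls -- stand-ins for parakeet / ministral / Kokoro.
def transcribe(audio: bytes, sample_rate: int) -> str:  # step 2.1 (ASR)
    return "what time is it"

def generate_reply(prompt: str) -> str:                 # step 2.2 (LLM)
    return f"You asked: {prompt}"

def synthesize(text: str) -> list[float]:               # step 2.3 (TTS)
    return [0.0] * 16                                   # fake waveform samples

def chat(audio: bytes, sample_rate: int = 16_000) -> bytes:
    """Mirror of the /chat endpoint: ASR -> LLM -> TTS -> bytes."""
    text = transcribe(audio, sample_rate)
    reply = generate_reply(text)
    samples = synthesize(reply)
    # Step 2.4: normalize to [-1, 1] and pack as little-endian 32-bit floats
    peak = max(abs(s) for s in samples) or 1.0
    normalized = [s / peak for s in samples]
    return struct.pack(f"<{len(normalized)}f", *normalized)
```

Swapping the stubs for real model inference keeps the same shape: each step consumes the previous step's output, so the pipeline stays a straight line from audio in to audio out.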
I've had a number of people try it out with great success, and you can take it in any direction, e.g. give it more capabilities so it can offload "hard" tasks to larger models or agents, enable voice streaming, or give it skills or knowledge.
Enjoy!
Comments URL: https://news.ycombinator.com/item?id=47049030
Points: 2
# Comments: 0
Sentinel – watch over your Tailscale network and notify of changes
Article URL: https://github.com/jaxxstorm/sentinel
Comments URL: https://news.ycombinator.com/item?id=47049028
Points: 1
# Comments: 0
Temporal Raises $300M Series D to Make Agentic AI Real for Companies
Article URL: https://temporal.io/news/temporal-raises-300M-to-make-agentic-ai-real-for-companies
Comments URL: https://news.ycombinator.com/item?id=47049026
Points: 3
# Comments: 0
Show HN: MAKO – Open protocol for LLM-optimized web content (93% fewer tokens)
Article URL: https://makospec.vercel.app/en
Comments URL: https://news.ycombinator.com/item?id=47049011
Points: 1
# Comments: 1
Show HN: Cai – AI actions on your clipboard, runs locally (macOS, open source)
I've spent a lot of time copy-pasting and switching between apps to summarize text, create events, proofread emails, look up addresses — always the same follow-up steps after copying.
So I built Cai. It sits in the menu bar. Press Option+C, it detects what you copied and shows relevant actions. Ships with llama-server (Ministral 3B) so it works out of the box, or connect Ollama/LM Studio if you already use them.
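The "detect what you copied" step can be illustrated with a small rule-based classifier. This is a hypothetical sketch, not Cai's actual detection logic; the categories, patterns, and action names are all illustrative:

```python
import re

# Ordered rules: first matching pattern decides the content type.
RULES = [
    ("url",   re.compile(r"^https?://\S+$")),
    ("email", re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")),
    ("event", re.compile(r"\b\d{4}-\d{2}-\d{2}\b")),  # text containing a date
]

# Actions offered per detected content type (illustrative names).
ACTIONS = {
    "url":   ["Summarize page", "Open"],
    "email": ["Draft reply", "Add contact"],
    "event": ["Create calendar event"],
    "text":  ["Summarize", "Proofread"],  # fallback for plain text
}

def actions_for(clipboard: str) -> list[str]:
    """Return the action list for the detected clipboard content type."""
    text = clipboard.strip()
    for kind, pattern in RULES:
        if pattern.search(text):
            return ACTIONS[kind]
    return ACTIONS["text"]
```

Rule order matters: a more specific pattern (full-string URL match) is tried before a looser one (a date anywhere in the text), so ambiguous clips fall through to the most generic actions.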
Free and open source. Would love feedback on what content types or actions to add next.
Comments URL: https://news.ycombinator.com/item?id=47048991
Points: 1
# Comments: 0
Show HN: Kremis – Deterministic memory graph for AI agents (Rust)
Hi HN — I built Kremis, an experimental deterministic memory substrate for AI agents.
I was tired of "black-box" memory where you can't trace why an agent "remembers" or "hallucinates" something. Kremis is an attempt to fix this by using a deterministic graph engine instead of probabilistic embeddings for core state.
Key features:
- Zero Hidden State: Every query result is a concrete path in a graph. You can audit exactly why the AI reached a conclusion.
- Strict Determinism: Same input leads to the same graph state. No randomness or floating-point drift in the core logic.
- ACID Reliable: Built on redb for crash-safe persistent storage.
How to use it: It ships as a Rust library, a CLI/HTTP API, and an MCP Server. You can plug it directly into Claude Desktop or Cursor to give your AI assistants a verifiable memory.
Development was heavily AI-assisted, and I'm sharing it today to get technical feedback from the Rust and AI community on the architecture.
I'd value your thoughts on:
1. Does a deterministic graph feel like a viable path for long-term agent memory?
2. How can I improve the query ergonomics for complex traversals?
Thanks for any feedback!
Comments URL: https://news.ycombinator.com/item?id=47048981
Points: 1
# Comments: 0
How AI Finds Fuzzy Duplicates in Large Datasets
Article URL: https://futuresearch.ai/semantic-deduplication/
Comments URL: https://news.ycombinator.com/item?id=47048374
Points: 2
# Comments: 1
AI Safety and Corporate Power – Remarks Given – United Nations Security Council
Article URL: https://jack-clark.net/2023/07/18/ai-safety-and-corporate-power-remarks-given-at-the-un-security-council/
Comments URL: https://news.ycombinator.com/item?id=47048361
Points: 1
# Comments: 0
EU also investigating as Grok generated 23,000 CSAM images in 11 days
Article URL: https://9to5mac.com/2026/02/17/eu-also-investigating-as-grok-generated-23000-csam-images-in-11-days/
Comments URL: https://news.ycombinator.com/item?id=47048355
Points: 1
# Comments: 0
Open Source Is Getting Used to Death
Article URL: https://julien.danjou.info/blog/open-source-is-getting-used-to-death/
Comments URL: https://news.ycombinator.com/item?id=47048347
Points: 1
# Comments: 0
Show HN: cc-costline – See your Claude Code spend right in the statusline
I've been using Claude Code as my daily driver and had no easy way to track spending over time. The built-in statusline shows session stats, but nothing about historical cost or how close I am to hitting usage limits.
cc-costline replaces Claude Code's statusline with one that shows rolling cost totals, usage limit warnings, and optionally your rank on the ccclub leaderboard — all in a single line:
```
14.6k ~ $2.42 / 40% by Opus 4.6 | 5h: 45% / 7d: 8% | 30d: $866
```
What each segment means:
- `14.6k ~ $2.42 / 40% by Opus 4.6` — session tokens, cost, context window usage, model
- `5h: 45% / 7d: 8%` — Claude's 5-hour and 7-day usage limits (color-coded: green → orange → red)
- `30d: $866` — rolling 30-day total (configurable to 7d or both)
Setup is one command:
```
npm i -g cc-costline && cc-costline install
```
Comments URL: https://news.ycombinator.com/item?id=47048321
Points: 1
# Comments: 0
Convert to it – universal online file converter
Article URL: https://github.com/p2r3/convert
Comments URL: https://news.ycombinator.com/item?id=47048309
Points: 1
# Comments: 1
