Hacker News
Show HN: Lacune, Go test coverage TUI
I’ve been using Zed for a while and missed inline code coverage visualization. Since a coverage extension doesn’t seem to be coming anytime soon, I built Lacune, a TUI for tracking uncovered code in real time.
Comments URL: https://news.ycombinator.com/item?id=46956000
Points: 1
# Comments: 0
MCP Knife: A CLI Swiss Army Knife for MCP Servers
Article URL: https://vivekhaldar.com/articles/mcp-knife-cli-swiss-army-knife-for-mcp-servers/
Comments URL: https://news.ycombinator.com/item?id=46955876
Points: 1
# Comments: 0
US plans Big Tech carve-out from next wave of chip tariffs
Article URL: https://www.ft.com/content/e6f7f69a-2552-45f5-ae4c-6f1135e5cde1
Comments URL: https://news.ycombinator.com/item?id=46955869
Points: 1
# Comments: 0
Show HN: MCP Orchestrator – Spawn parallel AI sub-agents from one prompt
I built an open-source MCP server (TypeScript/Node.js) that lets you spawn up to 10 parallel sub-agents using Copilot CLI or Claude Code CLI.
Key features:
- Context passing to each agent (full file, summary, or grep mode)
- Smart timeout selection based on MCP servers requested
- Cross-platform (macOS, Linux, Windows)
- Headless & programmatic — designed for AI-to-AI orchestration
Example: give one prompt like "research job openings at Stripe, Google, and Meta" — the orchestrator fans it out to 3 parallel agents, each with their own MCP servers (e.g., Playwright for browser), and aggregates results.
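The fan-out-and-aggregate flow described above can be sketched in a few lines. This is a minimal Python stand-in, not the package itself; `run_agent` is a hypothetical placeholder for shelling out to a real CLI agent:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for invoking one sub-agent; the real orchestrator
# shells out to Copilot CLI or Claude Code CLI per subtask.
def run_agent(subtask: str) -> str:
    return f"result for: {subtask}"

def fan_out(subtasks: list[str], max_agents: int = 10) -> list[str]:
    # Cap parallelism at the orchestrator's stated limit of 10 sub-agents.
    with ThreadPoolExecutor(max_workers=min(len(subtasks), max_agents)) as pool:
        # Each subtask runs in its own worker; results come back in input order.
        return list(pool.map(run_agent, subtasks))

results = fan_out(
    ["Stripe job openings", "Google job openings", "Meta job openings"]
)
```

The real tool additionally scopes MCP servers (e.g., Playwright) per agent before merging results.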
Install: npm i @ask149/mcp-orchestrator
This is a solo side project. Would love feedback on:
- What CLI backends to support next (Aider, Open Interpreter, local LLM CLIs?)
- Ideas for improving the context-passing system
- What MCP server integrations would be most useful
PRs and issues welcome — check CONTRIBUTING.md in the repo.
Comments URL: https://news.ycombinator.com/item?id=46955848
Points: 2
# Comments: 0
Show HN: Agx – A Kanban board that runs your AI coding agents
agx is a kanban board where each card is a task that AI agents actually execute.
agx new "Add rate limiting to the API"
That creates a card. Drag it to "In Progress" and an agent picks it up. It works through stages — planning, coding, QA, PR — and you watch it move across the board.
The technical problems this solves:
The naive approach to agent persistence is replaying conversation history. It works until it doesn't:
1. Prompt blowup. 50 iterations in, you're stuffing 100k tokens just to resume. Costs explode. Context windows overflow.
2. Tangled concerns. State, execution, and orchestration mixed together. Crash mid-task? Good luck figuring out where you were.
3. Black box execution. No way to inspect what the agent decided or why it's stuck.
agx uses clean separation instead:
- Control plane (PostgreSQL + pg-boss): task state, stage transitions, job queue
- Data plane (CLI + providers): actual execution, isolated per task
- Artifact storage (filesystem): prompts, outputs, decisions as readable files
Agents checkpoint after every iteration. Resuming loads state from the database, not by replaying chat. A 100-iteration task resumes at the same cost as a 5-iteration one.
What you get:
- Constant-cost resume, no context stuffing
- Crash recovery: agent wakes up exactly where it left off
- Full observability: query the DB, read the files, tail the logs
- Provider agnostic: Claude Code, Gemini, Ollama all work
Everything runs locally. PostgreSQL auto-starts via Docker. The dashboard is bundled with the CLI.
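The checkpoint/resume idea above, reduced to a toy sketch. This is not agx's actual schema (agx uses PostgreSQL + pg-boss); sqlite stands in, and the table layout is invented for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE tasks (id TEXT PRIMARY KEY, stage TEXT, iteration INTEGER, state TEXT)"
)

def checkpoint(task_id, stage, iteration, state):
    # Overwrite the single row for this task after every agent iteration.
    db.execute(
        "INSERT OR REPLACE INTO tasks VALUES (?, ?, ?, ?)",
        (task_id, stage, iteration, state),
    )

def resume(task_id):
    # Resume reads one row -- constant cost whether the task is at
    # iteration 5 or 100, instead of replaying the whole chat history.
    return db.execute(
        "SELECT stage, iteration, state FROM tasks WHERE id = ?", (task_id,)
    ).fetchone()

# 100 iterations of work, each checkpointed...
for i in range(100):
    checkpoint("task-1", "coding", i, f"summary after iteration {i}")

# ...but resuming is still a single-row read.
stage, iteration, state = resume("task-1")
```

The design choice is that the database row holds a compact summary of state, so resume cost does not grow with conversation length.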
Comments URL: https://news.ycombinator.com/item?id=46955833
Points: 2
# Comments: 0
OpenClaw Partners with VirusTotal for Skill Security
Article URL: https://openclaw.ai/blog/virustotal-partnership
Comments URL: https://news.ycombinator.com/item?id=46955832
Points: 2
# Comments: 0
Players discover that World of Warcraft is powered by invisible bunnies
Why Every Business Must Engage with AI – and How to Do It Right
AI is no longer an experimental technology. It’s becoming a baseline capability for modern businesses. The real question most teams should be asking is not “should we use AI?” but “how deeply should we engage with it?”
I’ve talked to many founders, CTOs, and operators over the past couple of years. The hesitation around AI usually comes from two places:
Teams that haven’t really tried AI and feel comfortable sticking with existing workflows.
Teams that rushed into AI, spent money, got disappointing results, and walked away.
Both often conclude: “AI isn’t for us.” That conclusion is understandable — but increasingly risky.
Many organizations still rely on manual or semi-manual processes: document handling, internal knowledge search, reporting, customer support triage. Everything appears to “work,” but it’s slow, hard to scale, and dependent on headcount rather than leverage.
AI isn’t magic, but it is a force multiplier. Ignoring it means accepting structural inefficiency while competitors gradually improve speed, quality, and decision-making.
One misconception I see a lot: that engaging with AI means building custom models or hiring a large ML team. In practice, AI today is closer to what spreadsheets or search once were — general-purpose tools that most teams can benefit from without deep specialization.
Instead of treating AI adoption as a yes/no decision, it’s more useful to think in levels.
Level 1: AI literacy. Every company should be here. This is about enabling people, not systems: using tools like ChatGPT for research, drafting, summarization, and analysis; teaching teams how to verify outputs; and setting clear rules around sensitive data. Low risk, high return.
Level 2: AI-assisted workflows. Here AI becomes part of everyday processes without replacing humans. Examples include internal AI assistants over documentation, AI-supported customer support, content generation, or analytics help. This is where many teams see the best ROI with relatively low complexity.
Level 3: AI-driven systems. At this level, AI is embedded into products or core operations: RAG systems, agent workflows, forecasting, personalization. This requires clean data, evaluation, and operational discipline. Many failures happen here not because AI doesn’t work, but because teams skip the earlier foundations.
The biggest risk isn’t “doing AI wrong.” It’s not building AI fluency at all while the rest of the market moves forward.
Once AI systems are in production, new problems appear: cost control, reliability, hallucinations, latency, silent regressions. At that point, AI stops being a demo and becomes infrastructure.
For teams already dealing with production AI systems, we’ve been thinking a lot about observability and reliability in this space. Some of that work is shared here: https://optyxstack.com/ai
Curious how others on HN think about the “depth” question when it comes to AI adoption.
Comments URL: https://news.ycombinator.com/item?id=46955823
Points: 1
# Comments: 0
Show HN: PicoClaw – lightweight OpenClaw-style AI bot in one Go binary
I’m building PicoClaw: a lightweight OpenClaw-style personal AI bot that runs as a single Go binary. OpenClaw (Moltbot / Clawdbot) is a great product. I wanted something with a simpler, more “single-binary” architecture that’s easy to read and hack on.
Repo: https://github.com/mosaxiv/picoclaw
Comments URL: https://news.ycombinator.com/item?id=46955793
Points: 2
# Comments: 0
Flood Fill vs. The Magic Circle
Article URL: https://www.robinsloan.com/winter-garden/magic-circle/
Comments URL: https://news.ycombinator.com/item?id=46955772
Points: 1
# Comments: 0
Show HN: A CLI tool to automate Git workflows using AI agents
Hi HN,
I built a CLI tool to automate common git workflows using AI agents (e.g. creating branches, summarizing context, and preparing PRs).
Supported platforms:
- GitHub (via gh)
- GitLab (via glab)
Supported AI agents:
- Claude Code
- Gemini CLI
- Cursor Agent
- Codex CLI
Design goals:
- Agent-agnostic (same commands across different AI agents)
- No MCP or custom prompts required
- Minimal setup (from install to first PR in minutes)
Repo: https://github.com/leochiu-a/git-pr-ai
Feedback and questions welcome.
Comments URL: https://news.ycombinator.com/item?id=46955761
Points: 2
# Comments: 0
Use AI to find movies and TV shows on your streaming services
Article URL: https://pickalready.com
Comments URL: https://news.ycombinator.com/item?id=46955757
Points: 2
# Comments: 0
Spec driven development doesn't work if you're too confused to write the spec
GenAI Go SDK for AI
Article URL: https://50984e11.maruel-ca.pages.dev/post/genai-v0.1.0/
Comments URL: https://news.ycombinator.com/item?id=46955737
Points: 1
# Comments: 0
Show HN: I built an AI-powered late-night call-in radio show from my RV
I live in an RV in the desert and I built a system that generates AI callers who phone into my late-night talk show. Each caller has a unique voice, name, backstory, job, vehicle, and opinions. They know the local weather, road conditions, and what's happening in the towns around southern New Mexico. Some are recurring characters who call back with updates on their lives.
The stack:
- FastAPI backend running the show control panel
- OpenRouter for LLM (caller personalities, dialog, topics) — mostly Grok and MiniMax
- ElevenLabs / Inworld for TTS with 25+ distinct voices
- Caller personality system with memory — regulars remember past conversations
- Live phone integration via SignalWire so real people can call in too
- Post-production pipeline: stem recording, gap removal, voice compression, music ducking, EBU R128 loudness normalization
- Self-hosted on Castopod, episodes served from BunnyCDN
The callers aren't scripted. The LLM generates their personality and topic, then we have a real conversation: I respond as the host, and the AI generates their replies in real time with TTS. The result sounds like actual late-night radio — someone calls at 2 AM to argue about Pluto's planetary status, another calls about their divorce, another has a conspiracy theory about fusion energy. Real callers can dial in live and get mixed in with the AI characters. Nobody knows who's real.
Listen: https://lukeattheroost.com
RSS: Spotify, Apple Podcasts, YouTube
Call in: 208-439-LUKE
It's a solo project — happy to answer questions about the architecture.
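The music-ducking step in a post-production pipeline like this can be sketched roughly as a sidechain-style gain envelope: drop the music level while the voice is loud, restore it in the gaps. A toy pure-Python version, with made-up threshold and gain values (the real pipeline works on recorded stems):

```python
def duck(music, voice, threshold=0.1, duck_gain=0.3):
    # For each sample pair, attenuate the music whenever the voice
    # signal exceeds the threshold; pass it through otherwise.
    out = []
    for m, v in zip(music, voice):
        gain = duck_gain if abs(v) > threshold else 1.0
        out.append(m * gain)
    return out

music = [0.5, 0.5, 0.5, 0.5]
voice = [0.0, 0.8, 0.9, 0.0]  # caller speaks on samples 1-2
mixed = duck(music, voice)     # music dips while the caller talks
```

A production ducker would smooth the gain changes (attack/release) rather than switching per sample, but the core idea is the same.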
Comments URL: https://news.ycombinator.com/item?id=46955730
Points: 1
# Comments: 0
HeartMuLa: Open-source music foundation model achieving commercial-grade quality
Article URL: https://heart-mula.com
Comments URL: https://news.ycombinator.com/item?id=46955724
Points: 1
# Comments: 1
An emotional app to figure out your next step
Article URL: https://www.heyecho.app/
Comments URL: https://news.ycombinator.com/item?id=46955722
Points: 2
# Comments: 0
Show HN: I built a macOS tool for network engineers – it's called NetViews
Hi HN — I’m the developer of NetViews, a macOS utility I built because I wanted better visibility into what was actually happening on my wired and wireless networks.
I live in the CLI, but for discovery and ongoing monitoring, I kept bouncing between tools, terminals, and mental context switches. I wanted something faster and more visual, without losing technical depth — so I built a GUI that brings my favorite diagnostics together in one place.
About three months ago, I shared an early version here and got a ton of great feedback. I listened: a new name (it was PingStalker), a longer trial, and a lot of new features. Today I’m excited to share NetViews 2.3.
NetViews started because I wanted to know if something on the network was scanning my machine. Once I had that, I wanted quick access to core details—external IP, Wi-Fi data, and local topology. Then I wanted more: fast, reliable scans using ARP tables and ICMP.
As a Wi-Fi engineer, I couldn’t stop there. I kept adding ways to surface what’s actually going on behind the scenes.
Discovery & Scanning: * ARP, ICMP, mDNS, and DNS discovery to enumerate every device on your subnet (IP, MAC, vendor, open ports). * Fast scans using ARP tables first, then ICMP, to avoid the usual “nmap wait”.
Wireless Visibility: * Detailed Wi-Fi connection performance and signal data. * Visual and audible tools to quickly locate the access point you’re associated with.
Monitoring & Timelines: * Connection and ping timelines over 1, 2, 4, or 8 hours. * Continuous “live ping” monitoring to visualize latency spikes, packet loss, and reconnects.
Low-level Traffic (but only what matters): * Live capture of DHCP, ARP, 802.1X, LLDP/CDP, ICMP, and off-subnet chatter. * mDNS decoded into human-readable output (this took months of deep dives).
Under the hood, it’s written in Swift. It uses low-level BSD sockets for ICMP and ARP, Apple’s Network framework for interface enumeration, and selectively wraps existing command-line tools where they’re still the best option. The focus has been on speed and low overhead.
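The "ARP tables first, then ICMP" approach mentioned above rests on a simple observation: hosts your machine has already talked to are in the kernel's ARP cache, so you can enumerate them instantly before pinging anything. A rough Python sketch of the first half (the sample text is invented; real `arp -a` output varies by OS):

```python
import re

# Matches lines like: "router.lan (192.168.1.1) at a4:2b:b0:11:22:33 on en0 ..."
ARP_LINE = re.compile(r"\((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-f:]+)", re.IGNORECASE)

def parse_arp_table(text: str) -> dict[str, str]:
    # Map each cached IP to its MAC; these hosts need no ICMP probe at all.
    return {m.group("ip"): m.group("mac") for m in ARP_LINE.finditer(text)}

sample = """\
router.lan (192.168.1.1) at a4:2b:b0:11:22:33 on en0 ifscope [ethernet]
? (192.168.1.42) at 3c:22:fb:aa:bb:cc on en0 ifscope [ethernet]
"""
hosts = parse_arp_table(sample)
```

Addresses not found in the cache would then fall through to an ICMP sweep, which is where the usual scan latency comes from.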
I’d love feedback from anyone who builds or uses network diagnostic tools: - Does this fill a gap you’ve personally hit on macOS? - Are there better approaches to scan speed or event visualization that you’ve used? - What diagnostics do you still find yourself dropping to the CLI for?
Details and screenshots: https://netviews.app There’s a free trial and paid licenses; I’m funding development through direct sales rather than ads or subscriptions. Licenses include free upgrades.
Happy to answer any technical questions about the implementation, Swift APIs, or macOS permission model.
Comments URL: https://news.ycombinator.com/item?id=46955712
Points: 3
# Comments: 0
We chose a pipeline over speech-to-speech for evaluative voice AI
Article URL: https://productfit.substack.com/p/why-speech-to-speech-apis-fail-when
Comments URL: https://news.ycombinator.com/item?id=46955710
Points: 2
# Comments: 0
Show HN: BlazeMQ – 52KB Kafka-compatible broker in C++20, zero dependencies
I built a message broker that speaks the Kafka wire protocol, so any Kafka client (librdkafka, kafka-python, kcat, etc.) works without code changes.
The entire binary is 52KB. No JVM, no ZooKeeper, no third-party libraries — just C++20 with kqueue/epoll. Starts in <10ms, uses 0% CPU when idle. I built this because running Kafka locally for development is painful — gigabytes of RAM, slow startup, ZooKeeper/KRaft configuration. I just wanted something that accepts produce requests and gets out of the way. Technical details: - Single-threaded event loop (kqueue on macOS, epoll on Linux) - Memory-mapped log segments (1GB pre-allocated, sequential I/O) - Lock-free SPSC/MPSC ring buffers with cache-line alignment - Kafka protocol v0-v3 including flexible versions (ApiVersions, Metadata, Produce) - Auto-topic creation on first produce or metadata request The most interesting bug I hit: librdkafka sends ApiVersions v3, which uses Kafka's "flexible versions" encoding. But there's a special exception in the protocol — ApiVersions responses must NOT include header tagged_fields for backwards compatibility. One extra byte shifted every subsequent field, causing librdkafka to compute a ~34GB malloc that crashed immediately. Current limitations: no consumer groups, no replication, single-threaded, no auth. It's v0.1.0 — consume support is next. MIT licensed, runs on macOS (Apple Silicon + Intel) and Linux.
Comments URL: https://news.ycombinator.com/item?id=46955708
Points: 1
# Comments: 0
