Hacker News


Show HN: Radiant – Radial Menu Launcher for macOS Inspired by Blender's Pie Menu

Tue, 02/10/2026 - 8:54am

Hi HN, I built Radiant.

I use Blender's Pie Menu a lot and like how spatial positioning turns into muscle memory — hold a key, move toward a direction, release. After a while you stop thinking about it. I wanted that same interaction model in Figma, VS Code, and the rest of macOS, so I built a system-wide version.

Radiant is a radial and list menu launcher for macOS. You organize actions into menus, trigger them with a hotkey, and pick by direction or position.

Some design decisions I'd be happy to discuss:

- 8 fixed slots per radial menu — a deliberate constraint for spatial memory. More slots = slower selection (Fitts's Law); fewer = not enough utility. List menus handle the "I need 20+ items" case.
- Three close modes: release-to-confirm (Blender-style), click-to-confirm, and toggle (menu stays open for multiple actions)
- App-specific profiles that auto-switch based on the frontmost application
- Built-in macro system — chain keystrokes, delays, text input, and system actions without external tools
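To make the 8-slot direction-picking concrete, here is a generic sketch (in Python, not Radiant's actual Swift code) of mapping a pointer drag vector to one of 8 radial slots, with a dead zone to avoid accidental picks:

```python
import math
from typing import Optional

# Generic sketch of direction-to-slot selection for an 8-slot radial
# menu (not Radiant's actual implementation). Slot 0 is "up"; slots
# advance clockwise every 45 degrees. A dead zone near the hotkey
# press point avoids accidental picks.

def slot_for_drag(dx: float, dy: float, dead_zone: float = 12.0) -> Optional[int]:
    if math.hypot(dx, dy) < dead_zone:
        return None  # pointer hasn't committed to a direction yet
    # Screen coordinates: y grows downward, so -dy points "up".
    angle = math.degrees(math.atan2(dx, -dy)) % 360
    # Each slot owns a 45-degree wedge centered on its direction.
    return round(angle / 45) % 8
```

With 8 wedges of 45 degrees each, every direction is a coarse, easily repeatable gesture, which is the spatial-memory argument above.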

Technical details:

- Native Swift/SwiftUI, no Electron
- CGEventTap for global keyboard/mouse monitoring
- Accessibility API for keystroke injection
- All data stored locally in UserDefaults, no telemetry
- JSON config with import/export for sharing presets

URL: https://radiantmenu.com

Would love to hear your thoughts.

Comments URL: https://news.ycombinator.com/item?id=46959736

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Shuffled - Daily word puzzle game

Tue, 02/10/2026 - 8:50am

Hi HN!

I built a word game last week called Shuffled. It's a daily puzzle where you drag letters around a grid to form words before running out of moves. It's designed for quick play, and everyone gets the same set of puzzles each day.

I’d love any feedback on how the difficulty feels and any UX rough edges.

Comments URL: https://news.ycombinator.com/item?id=46959704

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: The Control and Memory Layer for AI Agents

Tue, 02/10/2026 - 8:50am

We launched OpenSink after building AI agents and noticing some painful patterns: the agents do work that ends up somewhere in Slack or email, or gets lost entirely. If you use multiple platforms, your agents are scattered around, you can't change them without redeploying, and you have little to no visibility into what they do. Here are the building blocks:

- Memory (via Sinks): persistent, searchable memory for agents that survives restarts.
- Sessions: see what an agent did during a run, with a structured timeline.
- Input Requests: the agent asks for human input, waits for a response, and continues execution. Used to build human-in-the-loop agents with low effort.
- Configurations: easily tweak your agent's configuration without redeploying the code.

Works with any AI agent platform, or custom code. Launching with docs, examples, and two open-source OpenClaw skills for Memory and Activities.
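To illustrate the "Input Requests" pattern, here is a toy, in-memory sketch in Python. This is not OpenSink's real API; all names below are illustrative:

```python
# Toy sketch of the human-in-the-loop "Input Request" pattern:
# the agent parks a question, a human answers out-of-band, and the
# agent reads the answer back and continues. All names illustrative.

class InputRequests:
    def __init__(self):
        self._pending = {}  # request_id -> question awaiting a human
        self._answers = {}  # request_id -> the human's reply

    def ask(self, request_id, question):
        # The agent parks a question and pauses this branch of work.
        self._pending[request_id] = question

    def respond(self, request_id, answer):
        # Called from a UI or chat surface by a human reviewer.
        self._answers[request_id] = answer
        self._pending.pop(request_id, None)

    def wait(self, request_id):
        # A real client would block or poll; here we just read back.
        return self._answers.get(request_id)
```

A real implementation would persist requests and block (or poll) in `wait`, but the shape of the interaction is the same.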

Website: https://opensink.com
Docs: https://docs.opensink.com

Any feedback is valuable at this point, and thank you for reading so far ^^

Comments URL: https://news.ycombinator.com/item?id=46959703

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Vela – Modern programming language compiling to native code via LLVM

Tue, 02/10/2026 - 8:50am

Hello Hacker News,

I’m a computer science student, and I’m building my own programming language called Vela as a learning project.

I'm building Vela to better understand how programming languages work internally: lexer, parser, AST, type systems, and eventually LLVM compilation.

The language is in early stages but already has:

- Custom syntax with type inference
- Parser and basic interpreter
- Pattern matching and pipeline operators
- Roadmap for async/await and parallel execution
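For readers unfamiliar with pipeline operators, here is one common desugaring, sketched in Python (Vela's actual semantics may differ):

```python
# Illustrative only: one common desugaring of a pipeline operator,
# where "x |> f |> g" means g(f(x)) — each stage's output feeds the
# next stage. Vela's actual semantics may differ.

def run_pipeline(value, *stages):
    for stage in stages:
        value = stage(value)  # feed each stage's output to the next
    return value
```

For example, `3 |> increment |> to_string` would evaluate as `to_string(increment(3))`.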

I'd especially love feedback on:

- Language design decisions (syntax choices, features)
- Code architecture (currently Python frontend + planned LLVM backend)
- Type system implementation
- What features to prioritize vs. what to cut

Current status: The fundamentals work, but there are definitely bugs and missing pieces. If you've built a language before, your advice would be invaluable. If anyone wants to contribute, review code, or just point out where I'm doing things wrong, it would help a lot.

Comments URL: https://news.ycombinator.com/item?id=46959698

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Early detection of LLM hallucinations via structural dissonance

Tue, 02/10/2026 - 8:49am

Hi HN,

I've been exploring a different angle on hallucination detection.

Most approaches react after the fact — fact-checking, RAG, or token probabilities. But hallucinated outputs often show structural warning signs before semantic errors become obvious.

I built ONTOS, a research prototype that monitors structural coherence using IDI (Internal Dissonance Index).

ONTOS acts as an 'External Structural Sensor' for LLMs.

It is model-agnostic and non-invasive, designed to complement existing safety layers and alignment frameworks without needing access to internal weights or costly retraining.

Core idea: Track both local continuity (sentence-to-sentence) and global context drift, then detect acceleration of divergence between them in embedding space.

Analogy: Like noticing a piano performance becoming rhythmically unstable before wrong notes are played. Individual tokens may look fine, but the structural "tempo" is collapsing.
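The dual-scale idea can be sketched in a few lines of Python. This is a toy illustration, not the actual IDI formula: "local jump" is the cosine distance between consecutive sentence embeddings, "global drift" is the cosine distance from a running mean of the context, and the second difference of the gap between them approximates the acceleration the post describes:

```python
import math

# Toy dual-scale monitor (not the actual IDI formula).
# local jump  = cosine distance between consecutive embeddings
# global drift = cosine distance from a running context mean
# The second difference of (drift - local) approximates acceleration.

def cos_dist(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (math.hypot(*a) * math.hypot(*b))

def gap_acceleration(embeddings):
    mean = list(embeddings[0])
    gaps = []
    for i in range(1, len(embeddings)):
        local = cos_dist(embeddings[i - 1], embeddings[i])
        drift = cos_dist(mean, embeddings[i])
        gaps.append(drift - local)
        # incremental update of the running context mean
        mean = [(m * i + e) / (i + 1) for m, e in zip(mean, embeddings[i])]
    # second difference of the gap series
    return [gaps[i] - 2 * gaps[i - 1] + gaps[i - 2] for i in range(2, len(gaps))]
```

On a perfectly coherent sequence the gap series stays flat and the acceleration is near zero; a sudden widening between local and global scales shows up as a spike before any single step looks anomalous.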

What's in the repo:

• Dual-scale monitoring: local jumps vs. global drift
• Pre-crash detection: IDI triggers on acceleration, not just deviation
• Black-box compatible: no access to model internals needed

Key limitations:

• Detects structural instability, not factual truth
• Sentence-level demos (not token-level yet)
• Research prototype, not production-ready

What I'd love feedback on:

• Does structural monitoring feel more robust than semantic similarity alone?
• What edge cases exist where hallucinations are structurally perfect?
• Are there fundamental blockers to using this as an external safety sensor?

GitHub: https://github.com/yubainu/SL-CRF

Critical feedback welcome — early-stage exploration.

Comments URL: https://news.ycombinator.com/item?id=46959695

Points: 1

# Comments: 1

Categories: Hacker News

America's $1T AI Gamble

Tue, 02/10/2026 - 8:48am
Categories: Hacker News

Show HN: Octrafic – AI agent for API testing from your terminal

Tue, 02/10/2026 - 8:47am

I built a CLI tool that acts as an AI agent for API testing. Think Claude Code, but for testing APIs – you describe what you want to test, and it autonomously generates test cases, runs them, and reports back. Written in Go, open source, no GUI. It fits into your existing terminal workflow. I was tired of manually writing and updating API tests, so I built something that handles that loop for me. GitHub: https://github.com/Octrafic/octrafic-cli

Feedback welcome.

Comments URL: https://news.ycombinator.com/item?id=46959665

Points: 1

# Comments: 0

Categories: Hacker News

Accelerando, but Janky

Tue, 02/10/2026 - 8:47am
Categories: Hacker News

Show HN: Model Tools Protocol (MTP) – Forget MCP, bash is all you need

Tue, 02/10/2026 - 8:47am

Recently I was trying to use an MCP server to pull data from a service, but hit a limitation: the MCP didn't expose the data I needed, even though the service's REST API supported it. So I wrote a quick CLI wrapper around the API. Worked great, except Claude Code had no structured way to know what my CLI does or how to call it. For `gh` or `curl` the model can learn from the extensive training data, but for a tool I just wrote, it was stabbing in the dark.

MCP solves this discovery problem, but it does it by rebuilding tool interaction from scratch: server processes, JSON-RPC transport, client-host handshakes. It got discovery right but threw out composability to get there. You can't pipe one MCP tool into another or run one in a cron job without a host process. Pulling a Confluence page, checking Jira for duplicates, and filing a ticket is three inference round-trips for work that should be a bash one-liner. I also seem to endlessly get asked to re-login to my MCPs, something `gh` CLI never asks me to do.

I think the industry took a wrong turn here. We didn't need a new execution model for tools, we needed to add one capability to the execution model we already had. That's what Model Tools Protocol (MTP) is: a spec for making any CLI self-describing so LLMs can discover and use it.

MTP does that with a single convention: your CLI responds to `--mtp-describe` with a JSON schema describing its commands, args, types, and examples. No server, no transport, no handshake. I wrote SDKs for Click (Python), Commander.js (TypeScript), Cobra (Go), and Clap (Rust) that introspect the types and help strings your framework already has, so adding `--mtp-describe` to an existing CLI is a single function call.
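The convention is simple enough to hand-roll without an SDK. Here is a minimal Python sketch of a CLI that answers `--mtp-describe` with a JSON description of itself; the exact schema fields below are illustrative, not quoted from the official spec:

```python
import argparse
import json
import sys

# Hand-rolled sketch of the MTP convention: intercept --mtp-describe
# and emit a JSON self-description; otherwise behave as a normal CLI.
# The real SDKs introspect your framework (Click, Commander, Cobra,
# Clap); the schema fields here are illustrative, not the official spec.

DESCRIBE = {
    "name": "greet",
    "commands": [{
        "name": "greet",
        "args": [{"name": "--name", "type": "string", "required": True}],
        "examples": ["greet --name Ada"],
    }],
}

def main(argv):
    if "--mtp-describe" in argv:
        print(json.dumps(DESCRIBE, indent=2))  # machine-readable self-description
        return 0
    parser = argparse.ArgumentParser(prog="greet")
    parser.add_argument("--name", required=True)
    args = parser.parse_args(argv)
    print(f"Hello, {args.name}!")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

An LLM (or any client) can run the tool once with `--mtp-describe`, parse the JSON, and from then on invoke it like any other shell command.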

I don't think MCP should disappear, so there's a bidirectional bridge. `mtpcli serve` exposes any `--mtp-describe` CLI as an MCP server, and `mtpcli wrap` goes the other direction, turning MCP servers into pipeable CLIs. The ~2,500 MCP servers out there become composable CLI tools you can script and run in CI without an LLM in the loop.

The real payoff is composition: your custom CLI, a third-party MCP server, and jq in a single pipeline, no tokens burned. I'll post a concrete example in the comments.

Try it:

npm i -g @modeltoolsprotocol/mtpcli && mtpcli --mtp-describe

I know it's unlikely this will take off as I can't compete with the great might of Anthropic, but I very much welcome collaborators on this. PRs are welcome on the spec, additional SDKs, or anything else. Happy building!

Spec and rationale: <https://github.com/modeltoolsprotocol/modeltoolsprotocol>

CLI tool: <https://github.com/modeltoolsprotocol/mtpcli>

SDKs: TypeScript (<https://github.com/modeltoolsprotocol/typescript-sdk>) | Python (<https://github.com/modeltoolsprotocol/python-sdk>) | Go (<https://github.com/modeltoolsprotocol/go-sdk>) | Rust (<https://github.com/modeltoolsprotocol/rust-sdk>)

Comments URL: https://news.ycombinator.com/item?id=46959655

Points: 4

# Comments: 2

Categories: Hacker News

Show HN: Sign Any PDF Free – No account, no watermarks, no limits

Tue, 02/10/2026 - 8:44am

I got tired of paying $15-50/month to sign PDFs, so I built a free alternative.

How it works: Upload PDF, draw or type your signature, place it, download. Done.

No account required. No document limits. No watermarks. No "you've used your 3 free signatures" gotchas.

Monetized through ads (same model as Photopea).

Tech: Node.js + Express, pdf-lib for manipulation, vanilla JS frontend. Processing happens client-side for privacy.

Happy to answer questions about the implementation or business model.

Comments URL: https://news.ycombinator.com/item?id=46959637

Points: 1

# Comments: 0

Categories: Hacker News
