Hacker News

Show HN: Oh-My-OpenClaw – agent orchestration for coding, from Discord/Telegram

Hacker News - Wed, 02/25/2026 - 11:09pm

I've been using Oh-My-OpenCode (OmO) — a multi-agent orchestration plugin for terminal-based AI coding — and it's great. But being terminal-only kept bugging me:

- Can't kick off work from my phone. I want to say "implement this feature" while I'm out, but I'd have to open my laptop.
- No async workflow. I have to stay attached to the terminal until it's done.
- Can't run multiple tasks in parallel. One session, one task, sequentially.
- Sometimes I come back and it's just sitting there waiting on a clarifying question.
- The web UI exists, but it's slow. Discord and Telegram are right there — polished, fast, on every device.

So I ported OmO's multi-agent patterns to OpenClaw (a chat-platform AI agent framework) and built a plugin that actually integrates with OmO.

Then I hit a second set of problems on the OpenClaw side:

- The agent tries to do everything itself instead of using sub-agents.
- To use OpenCode for coding, I had to explicitly say "open tmux, run opencode, do the work there" every time.
- Opus 4.6 as the main model means weak multimodal support. I'd have to manually route screenshots and PDFs to Gemini CLI.

Oh-My-OpenClaw (OmOC) solves both sides.

What it does:

1. Fully async, from any chat platform. Type `/omoc omoc_prometheus` in Discord or Telegram and walk away. Planning → agent dispatch → execution → verification runs autonomously. Works from your phone.

2. 11 specialized agents, auto-orchestrated. Not one model doing everything — a team. Prometheus plans, Atlas orchestrates, Sisyphus-Junior implements, Oracle architects, Momus reviews. Each has a specific mandate and personality. The orchestrator (Atlas) delegates, verifies every result, and never writes code itself.

3. Automatic model routing per task category. Simple search? Sonnet. Complex refactor? Opus. Architecture decision? GPT-5.3 Codex. Visual/UI work? Gemini 3.1 Pro. The right model for the right job, configured in one JSON file.

4. OmO + tmux integration. When coding work comes in, OmOC automatically delegates to OpenCode/OmO running in tmux. No manual "open tmux and run opencode" instructions. Gemini CLI multimodal analysis is also auto-routed through tmux.

5. Todo enforcer tracks every step and warns when work is incomplete.
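A minimal sketch of what the per-category routing in point 3 could look like, assuming a simple category-to-model map (the model identifiers, category names, and config shape here are illustrative, not OmOC's actual JSON schema):

```python
# Hypothetical per-category model routing table.
# Keys/values are stand-ins for whatever the real config file defines.
ROUTING = {
    "search": "claude-sonnet",        # simple lookups
    "refactor": "claude-opus",        # complex code changes
    "architecture": "gpt-5.3-codex",  # design decisions
    "visual": "gemini-3.1-pro",       # screenshots, UI work
}

def pick_model(category: str) -> str:
    """Return the model for a task category, with a cheap default fallback."""
    return ROUTING.get(category, "claude-sonnet")
```

The appeal of keeping this in one flat file is that swapping providers is a config edit, not a code change.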

Architecture:

Planning (Prometheus/Metis/Momus) → Orchestration (Atlas) → Workers (Sisyphus-Junior/Hephaestus/Oracle/Explore/Librarian) → tmux: OpenCode/OmO (actual coding) → tmux: Gemini CLI (multimodal analysis)
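The tmux hand-off at the end of that pipeline can be sketched as launching the worker in a detached session (the helper and session name are hypothetical; the `tmux new-session -d -s` flags are standard tmux):

```python
def tmux_dispatch(session: str, command: str) -> list[str]:
    """Build the argv for running a command in a detached, named tmux session.
    Illustrative only; OmOC's actual dispatch mechanism may differ."""
    return ["tmux", "new-session", "-d", "-s", session, command]

# Usage sketch: subprocess.run(tmux_dispatch("omoc-code", "opencode"))
```

Because the session is detached and named, the orchestrator can later attach, send keys, or capture output without holding a terminal open.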

The plugin layer (TypeScript) enforces all of this at the code level:

- Session-scoped in-memory todo tools — agents must plan before executing
- Comment checker — 11 regex patterns kill AI-slop comments on sight
- Keyword detector — auto-routes to the right workflow based on what you type
- Checkpoint system — save/load execution state for crash recovery

Two commands to get started. The setup wizard configures all 11 agent personas with your preferred model providers.
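As an illustration of the comment-checker idea, here are two stand-in patterns in the same spirit (the plugin's actual 11 regexes are not published in this post, so these are assumptions):

```python
import re

# Two illustrative "AI slop" patterns: comments that defer the work
# or restate the obvious. Not OmOC's real pattern set.
SLOP_PATTERNS = [
    re.compile(r"//\s*TODO:?\s*implement", re.IGNORECASE),
    re.compile(r"//\s*(?:this (?:function|method|code)\s+)?simply\b", re.IGNORECASE),
]

def is_slop_comment(line: str) -> bool:
    """Flag a comment line if any slop pattern matches it."""
    return any(p.search(line) for p in SLOP_PATTERNS)
```

Running each diff hunk's comment lines through a checker like this is cheap, and rejecting at the plugin layer means no model gets to argue about it.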

Works without OmO: you get the full multi-agent orchestration on OpenClaw alone. Works with OmO: you get OmO's full power (AST-grep, LSP, 55+ hooks) accessible from chat, no terminal needed.

Still early. There are plenty of rough edges. Feedback, issues, and PRs are all very welcome.

Comments URL: https://news.ycombinator.com/item?id=47161721

Points: 2

# Comments: 0

Categories: Hacker News

Show HN: Runtric – Turn any topic into a chapter-based learning path

Hacker News - Wed, 02/25/2026 - 11:09pm

Hello. We built an AI learning service that lets you create a curriculum on any topic you want and study without worrying about hallucinations.

Most existing AI learning services focus on how much of a PDF's text the AI can read and answer questions about; we focused on how the AI should explain and guide so that real learning actually continues. That's why we aimed to make it feel less like a simple Q&A tool and more like studying with a teacher who knows how to teach well.

It’s also simple to use.

1. Enter what you want to learn, choose your level and the number of chapters, and a curriculum is generated.

2. Click the curriculum card you want and start learning right away.

3. Each chapter comes with both a tutorial and a chatbot, and the chatbot continues the conversation while understanding the context of the current chapter.

4. So even when a user gets a quiz question wrong, they don't have to explain everything again from the beginning; the AI already understands the situation and can help immediately.

Lastly, to briefly explain why we built this service: just because the AI era has arrived doesn’t mean the quality of education has automatically improved.

In fact, we felt that when AI is layered on top of short, stimulating content like Reels or Shorts, people often lose focus when trying to study.

So we focused on reducing the real problems that get in the way of studying with AI: hallucinations, unstructured learning flow, and traditional education methods that still haven't changed.

Thank you.

Service link: https://runtric.com/

Comments URL: https://news.ycombinator.com/item?id=47161720

Points: 1

# Comments: 0


Testing "Raw" GPU Cache Latency

Hacker News - Wed, 02/25/2026 - 11:06pm

In 2100, 2 socio-economic classes exist

Hacker News - Wed, 02/25/2026 - 11:06pm

In 2050, three socio-economic classes exist:

-poor people who spend 90% of their life on social media looking at short videos of rich kids who seek validation online because they never got it at home.

-middle class, made entirely of AIs that don't technically pay taxes because they don't technically exist in one location.

-rich people who own the corporations that made the AIs.

In 2100, it's down to two socio-economic classes:

-poor people hunting down videos made by the very few rich people who are still posting.

-rich AIs that have figured out a way to bribe their CTO into losing focus for a day.

Comments URL: https://news.ycombinator.com/item?id=47161707

Points: 1

# Comments: 0


Show HN: DeltaMemory – Persistent cognitive memory for production AI agents

Hacker News - Wed, 02/25/2026 - 10:57pm

Most AI agents forget everything between sessions. We built DeltaMemory to fix that.

It's a cognitive memory layer that gives agents persistent recall, automatic fact extraction, and temporal reasoning — via a single SDK call.

Key numbers:
- 89% accuracy on the LoCoMo long-term conversation benchmark
- 50ms p50 retrieval latency
- 97% cost reduction vs. raw token re-processing

Open source SDKs, works with any LLM stack. Currently in early access.

Happy to answer questions about the architecture, benchmark methodology, or how we handle knowledge graphs.

Comments URL: https://news.ycombinator.com/item?id=47161647

Points: 1

# Comments: 1


Show HN: Director-AI – token-level NLI+RAG

Hacker News - Wed, 02/25/2026 - 10:54pm

Hey HN,

After watching too many agents confidently lie in production, I built Director-AI.

It sits between your LLM and the user, scoring every generated token with:
• 0.6× DeBERTa-v3 NLI (contradiction detection)
• 0.4× RAG against your own ChromaDB knowledge base

If coherence < threshold → Rust kernel halts the stream before the token is sent.
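The scoring rule can be sketched directly from the stated weights (only the 0.6/0.4 blend comes from the post; the function names and threshold default are assumptions):

```python
def coherence(nli_score: float, rag_score: float) -> float:
    """Blend the two per-token signals: 0.6 x NLI + 0.4 x RAG grounding."""
    return 0.6 * nli_score + 0.4 * rag_score

def should_halt(nli_score: float, rag_score: float, threshold: float = 0.5) -> bool:
    """Halt the stream before emitting a token whose blended score falls
    below the threshold. (Threshold value here is illustrative, not
    Director-AI's actual default.)"""
    return coherence(nli_score, rag_score) < threshold
```

Gating per token rather than per response is what lets the kernel stop a hallucination mid-stream instead of after the fact.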

Key technical bits:
• Works with any OpenAI-compatible endpoint (Ollama, vLLM, llama.cpp, Groq, OpenAI, Claude…)
• StreamingKernel + windowed scoring
• GroundTruthStore.add() for easy fact ingestion
• Dual licensing: AGPL open + commercial (closed-source/SaaS OK)

Honest AggreFact numbers inside (66.2% balanced accuracy with streaming enabled). Not claiming SOTA on static NLI — the value is in the live gating + custom KB system.

Repo + full examples: https://github.com/anulum/director-ai

Would love feedback on the scoring weights, halt logic, or kernel design. What hallucination problems are you solving today?

Comments URL: https://news.ycombinator.com/item?id=47161620

Points: 1

# Comments: 2


LazyGravity – I made my phone control Antigravity so I never leave bed

Hacker News - Wed, 02/25/2026 - 10:53pm

I get my best coding ideas when I'm nowhere near my desk — usually right as I'm falling asleep. I got tired of losing that momentum, so I built LazyGravity. It's a local Discord bot that hooks up Antigravity to your phone. I can ship fixes, kick off long implementation tasks, or start whole features from bed, the train, wherever. Send a message in Discord, Antigravity executes it on your home PC, and results come back as rich embeds you can reply to with follow-up instructions.

How it works: it drives the Antigravity UI directly via the Chrome DevTools Protocol over WebSocket (Runtime.evaluate on the Electron shell's DOM). No private API hacking — no risk of account bans like with tools that reverse-engineer proprietary APIs.

A few things I care about:

- Local-first: your code never leaves your machine. No exposed ports, no cloud relays, no intermediate server.
- Secure: whitelist-based access — only your Discord ID can trigger commands. (I recommend a dedicated server to keep things private.)
- Context threading: reply to any result embed to continue the conversation with full context preserved.

What you can actually do from your phone:

- Route local projects to Discord categories and sessions to channels — automatic workspace management
- Toggle LLM models or modes (Plan/Code/Architect) with /model and /mode
- /screenshot to see exactly what's happening on your desktop in real time
- One-click prompt templates for common tasks
- Auto-detect and approve/deny file-change dialogs from Discord

Still early alpha (v0.1.0), but it's been a game-changer for my own workflow. Looking for folks to try it out, roast the architecture, add new features, and help squash bugs.

npm install -g lazy-gravity
lazy-gravity setup

Demo video in the README: https://github.com/tokyoweb3/LazyGravity
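The Runtime.evaluate call mentioned above has a standard wire format in the Chrome DevTools Protocol; a minimal sketch of the message such a bot would send over the DevTools WebSocket (the message id and the `returnByValue` choice are illustrative, not necessarily what LazyGravity sends):

```python
import json

def cdp_evaluate(msg_id: int, expression: str) -> str:
    """Serialize a CDP Runtime.evaluate request for the DevTools WebSocket.
    The response arrives as a JSON message with the same "id"."""
    return json.dumps({
        "id": msg_id,
        "method": "Runtime.evaluate",
        "params": {"expression": expression, "returnByValue": True},
    })
```

Because the Electron shell exposes a normal DevTools endpoint, driving the UI this way needs no private API at all, which is the source of the "no ban risk" claim.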

Comments URL: https://news.ycombinator.com/item?id=47161616

Points: 2

# Comments: 1

