Hacker News

Show HN: OpenClawHub – A Lib for AI agent workflows so you don't have to

Hacker News - Tue, 03/03/2026 - 8:09pm

Hi HN,

I spent months building AI agents for clients. The same problems came up every time:

- Prompts work in dev, fail in prod
- Hard to maintain across 100+ users
- Reinventing workflows others already solved

So I built OpenClawHub: a curated library of production-ready AI agent workflows.

What it is:

- 20+ battle-tested agent templates (daily reports, PR reviews, SEO, etc.)
- Each template: YAML + setup steps + example output
- Community-reviewed, not a free-for-all like ClawHub
- Focus: practical automations, not skill dumps

How it's different from ClawHub:

- ClawHub = npm for skills (building blocks)
- OpenClawHub = npm for complete workflows (batteries included)

Typical workflow:

1) Browse templates
2) Copy the YAML
3) Add your API keys
4) Deploy

Real examples from the repo:

- Daily work report generator (saves 2 hours/day)
- GitHub PR review bot (catches issues before merge)
- Email triage automation (priority scoring + drafting)
- SEO blog post generator (research → outline → publish)

Built for:

- Solo founders wearing 10 hats
- Dev teams drowning in manual processes
- Anyone who wants AI agents that actually work

Tech stack: OpenAI Codex, Claude Code, Trae, browser-use, etc.

Link: https://openclawhub.uk

Feedback welcome. What workflows would save you 10 hours/week?

Comments URL: https://news.ycombinator.com/item?id=47241645

Points: 1

# Comments: 0

Categories: Hacker News

Haptics: Tactile Feedback for the Mobile Web

Hacker News - Tue, 03/03/2026 - 7:36pm

Article URL: https://haptics.lochie.me/

Comments URL: https://news.ycombinator.com/item?id=47241350

Points: 1

# Comments: 1

Categories: Hacker News

Show HN: I built a human rights evaluator for HN (content vs. site behavior)

Hacker News - Tue, 03/03/2026 - 7:26pm

My health challenges limit how much I can work. I've come to think of Claude Code as an accommodation engine — not in the medical-paperwork sense, but in the literal one: it gives me the capacity to finish things that a normal work environment doesn't. Observatory was built in eight days because that kind of collaboration became possible for me. (I even used Claude Code to write this post — but am only posting what resonates with me.) Two companion posts: on the recursive methodology (https://blog.unratified.org/2026-03-03-recursive-methodology...) and what 806 evaluated stories reveal (https://blog.unratified.org/2026-03-03-what-806-stories-reve...).

I built Observatory to automatically evaluate Hacker News front-page stories against all 31 provisions of the UN Universal Declaration of Human Rights — starting with HN because its human-curated front page is one of the few feeds where a story's presence signals something about quality, not just virality. It runs every minute: https://observatory.unratified.org. Claude Haiku 4.5 handles full evaluations; Llama 4 Scout and Llama 3.3 70B on Workers AI run a lighter free-tier pass.

The observation that shaped the design: rights violations rarely announce themselves. An article about a company's "privacy-first approach" might appear on a site running twelve trackers. The interesting signal isn't whether an article mentions privacy — it's whether the site's infrastructure matches its words.

Each evaluation runs two parallel channels. The editorial channel scores what the content says about rights: which provisions it touches, direction, evidence strength. The structural channel scores what the site infrastructure does: tracking, paywalls, accessibility, authorship disclosure, funding transparency. The divergence — SETL (Structural-Editorial Tension Level) — is often the most revealing number. "Says one thing, does another," quantified.

Every evaluation separates observable facts from interpretive conclusions (the Fair Witness layer, same concept as fairwitness.bot — https://news.ycombinator.com/item?id=44030394). You get a facts-to-inferences ratio and can read exactly what evidence the model cited. If a score looks wrong, follow the chain and tell me where the inference fails.

Per our evaluations across 805 stories: only 65% identify their author — one in three HN stories without a named author. 18% disclose conflicts of interest. 44% assume expert knowledge (a structural note on Article 26). Tech coverage runs nearly 10× more retrospective than prospective: past harm documented extensively; prevention discussed rarely.

One story illustrates SETL best: "Half of Americans now believe that news organizations deliberately mislead them" (fortune.com, 652 HN points). Editorial: +0.30. Structural: −0.63 (paywall, tracking, no funding disclosure). SETL: 0.84. A story about why people don't trust media, from an outlet whose own infrastructure demonstrates the pattern.

The structural channel for free Llama models is noisy — 86% of scores cluster on two integers. The direction I'm exploring: TQ (Transparency Quotient) — binary, countable indicators that don't need LLM interpretation (author named? sources cited? funding disclosed?). Code is open source: https://github.com/safety-quotient-lab/observatory — the .claude/ directory has the cognitive architecture behind the build.
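For concreteness, here is a minimal sketch of what a count-based TQ could look like. The indicator names and the equal weighting are illustrative placeholders, not Observatory's actual checklist:

```python
# Sketch of a Transparency Quotient (TQ): the fraction of binary,
# directly observable indicators a page satisfies. No LLM needed —
# each indicator is a countable yes/no fact about the page.
# Indicator names below are hypothetical, not Observatory's real set.
INDICATORS = (
    "author_named",
    "sources_cited",
    "funding_disclosed",
    "publication_date_present",
    "corrections_policy_linked",
)

def transparency_quotient(observed: dict[str, bool]) -> float:
    """Count satisfied indicators; missing keys count as unsatisfied."""
    satisfied = sum(1 for name in INDICATORS if observed.get(name, False))
    return satisfied / len(INDICATORS)
```

A page satisfying two of the five placeholder indicators would score 0.4 — crucially, a score you can audit by re-counting, with no model interpretation in the loop.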

Find a story whose score looks wrong, open the detail page, follow the evidence chain. The most useful feedback: where the chain reaches a defensible conclusion from defensible evidence and still gets the normative call wrong. That's the failure mode I haven't solved. My background is math and psychology (undergrad), a decade in software — enough to build this, not enough to be confident the methodology is sound. Expertise in psychometrics, NLP, or human rights scholarship especially welcome. Methodology, prompts, and a 15-story calibration set are on the About page.

Thanks!

Comments URL: https://news.ycombinator.com/item?id=47241255

Points: 2

# Comments: 2

Categories: Hacker News

Universal-3 Pro Streaming

Hacker News - Tue, 03/03/2026 - 7:17pm
Categories: Hacker News

Show HN: Dracula-AI – A lightweight, async SQLite-backed Gemini wrapper

Hacker News - Tue, 03/03/2026 - 7:16pm

I'm an 18-year-old CS student from Turkey. I've been building Dracula, a Python wrapper for the Google Gemini API. I initially built it because I wanted a simpler mini-SDK that handled conversational memory, function calling, and streaming out of the box, without the boilerplate of the official SDK.

Recently, I got some well-deserved technical criticism from early users: using JSON files to store chat history was a memory-bloat disaster waiting to happen; forcing a PyQt6 dependency on server-side bots was a terrible design choice; and lacking a retry mechanism meant random 503s from Google crashed the whole app.

So, I went back to the drawing board and completely rewrote the core architecture for v0.8.0. Here is what I changed to make it production-ready:

Swapped JSON for SQLite: I implemented a local database system (using sqlite3 for sync and aiosqlite for async). It now handles massive chat histories without eating RAM, and tracks usage stats safely.
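As a rough sketch of the idea (hypothetical schema and class names, not Dracula's actual internals), the sync side boils down to an append-only table queried with LIMIT, so only recent turns ever reach RAM:

```python
import sqlite3

# Hypothetical schema illustrating SQLite-backed chat history;
# Dracula's real table layout may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS messages (
    id      INTEGER PRIMARY KEY AUTOINCREMENT,
    session TEXT NOT NULL,
    role    TEXT NOT NULL,   -- 'user' or 'model'
    content TEXT NOT NULL,
    created REAL DEFAULT (julianday('now'))
);
CREATE INDEX IF NOT EXISTS idx_session ON messages(session);
"""

class History:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.executescript(SCHEMA)

    def append(self, session, role, content):
        with self.db:  # implicit transaction keeps concurrent writes safe
            self.db.execute(
                "INSERT INTO messages (session, role, content) VALUES (?, ?, ?)",
                (session, role, content),
            )

    def last(self, session, n=20):
        """Fetch only the n most recent turns — unlike a JSON file,
        the full history never has to be loaded into memory."""
        rows = self.db.execute(
            "SELECT role, content FROM messages WHERE session = ? "
            "ORDER BY id DESC LIMIT ?",
            (session, n),
        ).fetchall()
        return rows[::-1]  # back to chronological order
```

The async side is the same shape with aiosqlite's awaitable connection in place of sqlite3.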

True Async Streaming: Fixed a generator bug that was blocking the asyncio event loop. Streaming now yields chunks natively in real-time.

Exponential Backoff: Added an under-the-hood auto-retry mechanism that gracefully handles 429 rate limits and 503/502 server drops.
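The mechanism is the standard exponential-backoff-with-jitter pattern; this is a simplified illustration of the technique (ApiError and the function names are placeholders, not Dracula's exact code):

```python
import random
import time

RETRYABLE = {429, 502, 503}  # rate limits and transient server drops

class ApiError(Exception):
    """Placeholder for an API error carrying an HTTP status code."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def call_with_backoff(request, max_retries=5, base_delay=1.0):
    """Retry `request` (a zero-arg callable) on retryable statuses,
    sleeping 1s, 2s, 4s, ... plus jitter between attempts."""
    for attempt in range(max_retries):
        try:
            return request()
        except ApiError as e:
            # Re-raise immediately on non-retryable errors (e.g. 401),
            # or once the retry budget is exhausted.
            if e.status not in RETRYABLE or attempt == max_retries - 1:
                raise
            # Jitter spreads out retries so clients don't stampede.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

The doubling delay gives Google's servers room to recover, while the jitter keeps many clients from retrying in lockstep.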

Zero Bloat: Split the dependencies. "pip install dracula-ai" installs just the core for FastAPI/Discord bots. "pip install dracula-ai[ui]" brings in the desktop interface.

Here is a quick example of the async streaming:

    import os
    import asyncio

    from dracula import AsyncDracula

    async def main():
        async with AsyncDracula(api_key=os.getenv("GEMINI_API_KEY")) as ai:
            async for chunk in ai.stream("Explain quantum computing"):
                print(chunk, end="", flush=True)

    asyncio.run(main())

Building this has been a huge learning curve for me regarding database migrations, event loops, and package management. I would love for the HN community to look at the code, review the async architecture, and tell me what I did wrong (or right!).

GitHub: https://github.com/suleymanibis0/dracula
PyPI: https://pypi.org/project/dracula-ai/

Thanks for reading!

Comments URL: https://news.ycombinator.com/item?id=47241149

Points: 1

# Comments: 0

Categories: Hacker News
