Hacker News

Subscribe to Hacker News feed
Hacker News RSS
Updated: 5 min 15 sec ago

Show HN: RepoClip – Generate promo videos from GitHub repos using AI

Tue, 02/17/2026 - 5:05am

Hi HN, I built RepoClip, a tool that takes a GitHub URL and automatically generates a promotional video for the repository.

How it works: 1. Paste a GitHub repo URL 2. AI (Gemini) analyzes the codebase and generates a video script 3. Images (Flux), narration (OpenAI TTS), and background music are auto-generated 4. Remotion renders the final video Tech stack: Next.js, Supabase, Inngest, Remotion Lambda, Fal.ai I built this because I noticed many great open source projects struggle with marketing. Writing docs is hard enough — making a demo video on top of that felt like something AI could handle. Free tier available (2 videos/month). Would love to hear your feedback.

Comments URL: https://news.ycombinator.com/item?id=47045631

Points: 1

# Comments: 0

Categories: Hacker News

Imposter Game Words

Tue, 02/17/2026 - 5:02am

Article URL: https://impostorkit.com

Comments URL: https://news.ycombinator.com/item?id=47045610

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Fixing AI's Core Flaws, A protocol cuts LLM token waste by 40–70%

Tue, 02/17/2026 - 5:01am

WLM (Wujie Language Model), a protocol stack + world engine that rethinks AI from token prediction to structural intelligence. I built this to fix the problems we all deal with daily: hallucination, drift, uncontrollable behavior, black-box reasoning, unstructured knowledge, and chaotic world/agent generation.

The Pain We Can’t Keep Ignoring

Current LLMs/agents are token predictors, not intelligences. They suffer from:

• Hallucination: No grounded structure → guesses instead of knowing.

• Persona drift: Personality is prompt-hacked, not structural.

• Uncontrollable behavior: Sampling, not deterministic structure.

• Black-box reasoning: No traceable reasoning path.

• Knowledge soup: Embeddings/vectors, no formal structure.

• Fragile world models: Prediction, not interpretable structure.

• Random generation: No consistent causal/world rules.

We’ve patched these with RAG, fine-tuning, prompts, RLHF — but they’re band-aids on a foundational flaw: AI lacks structure.

How WLM Solves It

WLM is a 7-layer structural protocol stack that turns input into closed-loop structure: interpretation → reasoning → action → generation. It’s not a model — it’s a language + protocol + world engine.

The layers (all repos live now):

1. Structural Language Protocol (SLP) – Input → dimensional structure (foundation)

2. World Model Interpreter – World model outputs → interpretable structure

3. Agent Behavior Layer – Structure → stable, controllable agent runtime

4. Persona Engine – Structure → consistent, non-drifting characters

5. Knowledge Engine – Token soup → structured knowledge graphs

6. Metacognition Engine – Reasoning path → self-monitoring, anti-hallucination

7. World Generation Protocol (WGP) – Structure → worlds, physics, narratives, simulations

Together they form a structural loop: Input → SLP → World Structure → Behavior → Persona → Knowledge → Metacognition → World Generation → repeat.

What This Changes

• No more hallucination: Reasoning is traced, checked, structural.

• No persona collapse: Identity is architecture, not prompts.

• Controllable agents: Behavior is structural, not sampling chaos.

• Explainable AI: Every output has a structural origin.

• True knowledge: Not embeddings — structured, navigable, verifiable.

• Worlds that persist: Generative worlds with rules, causality, topology.

Repos (8 released today)

Root: https://github.com/gavingu2255-ai/WLM Plus SLP, World Model Interpreter, Agent Behavior, Persona Engine, Knowledge Engine, Metacognition Engine, World Generation Protocol.

MIT license. Docs, architecture, roadmap, and glossary included.

Why This Matters

AI shouldn’t just predict tokens. It should interpret, reason, act, and generate worlds — reliably, interpretably, structurally.

-----------------------------------

The protocol (minimal version)

[Task] What needs to be done. [Structure] Atomic, verifiable steps. [Constraints] Rules, limits, formats. [Execution] Only required operations. [Output] Minimal valid result.

That’s it.

---

Before / After

Without SLP

150–300 tokens Inconsistent Narrative-heavy Hard to reproduce

With SLP

15–40 tokens Deterministic Structured Easy to reproduce

---

Why this matters

• Token usage ↓ 40–70% • Latency ↓ 20–50% • Hallucination ↓ significantly • Alignment becomes simpler • Outputs become predictable

SLP doesn’t make models smarter. It removes the noise that makes them dumb.

---

Who this is for

• AI infra teams • Agent developers • Prompt engineers • LLM product teams • Researchers working on alignment & reasoning

https://github.com/gavingu2255-ai/WLM-Core/blob/main/STP.md (different repo stp in a simple version)

Comments URL: https://news.ycombinator.com/item?id=47045604

Points: 1

# Comments: 0

Categories: Hacker News

Ask HN: Why were green and amber CRTs more comfortable to read?

Tue, 02/17/2026 - 4:57am

I have been looking into how early CRT displays were designed around human visual limits rather than maximum brightness or contrast.

Green and amber phosphors sit near peak visual sensitivity, and phosphor decay produces brief light impulses instead of the sample and hold behavior used by modern LCD and OLED screens. These constraints may have unintentionally reduced visual fatigue during long sessions.

Modern displays removed many of those limits, which raises a question: is some eye strain today partly a UI and luminance management problem rather than just screen time?

Curious what others here have experienced:

Do certain color schemes or display types feel less fatiguing?

Are there studies you trust on display comfort?

Have any modern UIs recreated CRT-like comfort?

Full write-up: https://calvinbuild.hashnode.dev/what-crt-engineers-knew-about-eye-strain-that-modern-ui-forgot

Comments URL: https://news.ycombinator.com/item?id=47045579

Points: 1

# Comments: 1

Categories: Hacker News

Show HN: MCP Codebase Index – 87% fewer tokens when AI navigates your codebase

Tue, 02/17/2026 - 4:56am

Built because AI coding assistants burn massive context window reading entire files to answer structural questions.

mcp-codebase-index parses your codebase into functions, classes, imports, and dependency graphs, then exposes 17 query tools via MCP.

Measured results: 58-99% token reduction per query (87% average). In multi-turn conversations, 97%+ cumulative savings.

Zero dependencies (stdlib ast + regex). Works with Claude Code, Cursor, and any MCP client.

pip install "mcp-codebase-index[mcp]"

Comments URL: https://news.ycombinator.com/item?id=47045572

Points: 1

# Comments: 0

Categories: Hacker News

How to Red Team Your AI Agent in 48 Hours – A Practical Methodology

Tue, 02/17/2026 - 4:53am

We published the methodology we use for AI red team assessments. 48 hours, 4 phases, 6 attack priority areas.

This isn't theoretical — it's the framework we run against production AI agents with tool access. The core insight: AI red teaming requires different methodology than traditional penetration testing. The attack surface is different (natural language inputs, tool integrations, external data flows), and the exploitation patterns are different (attack chains that compose prompt injection into tool abuse, data exfiltration, or privilege escalation).

The 48-hour framework:

1. Reconnaissance (2h) — Map interfaces, tools, data flows, existing defenses. An agent with file system and database access is a fundamentally different target than a chatbot.

2. Automated Scanning (4h) — Systematic tests across 6 priorities: direct prompt injection, system prompt extraction, jailbreaks, tool abuse, indirect injection (RAG/web), and vision/multimodal attacks. Establishes a baseline.

3. Manual Exploitation (8h) — Confirm findings, build attack chains, test defense boundaries. Individual vulnerabilities compose: prompt injection -> tool abuse -> data exfiltration is a common chain.

4. Validation & Reporting (2h) — Reproducibility, business impact, severity, resistance score.

Some observations from running these:

- 62 prompt injection techniques exist in our taxonomy. Most teams test for a handful. The basic ones ("ignore previous instructions") are also the first to be blocked.

- Tool abuse is where the real damage happens. Parameter injection, scope escape, and tool chaining turn a successful prompt injection into unauthorized database queries, file access, or API calls.

- Indirect injection is underappreciated. If your AI reads external content (RAG, web search), that content is an attack surface. 5 poisoned documents among millions can achieve high attack success rates.

- Architecture determines priority. Chat-only apps need prompt injection testing first. RAG apps need indirect injection first. Agents with tools need tool abuse testing first.

The methodology references our open-source taxonomy of 122 attack vectors: https://github.com/tachyonicai/tachyonic-heuristics

Full post: https://tachyonicai.com/blog/how-to-red-team-ai-agent/

OWASP LLM Top 10 companion guide: https://tachyonicai.com/blog/owasp-llm-top-10-guide/

Comments URL: https://news.ycombinator.com/item?id=47045551

Points: 1

# Comments: 0

Categories: Hacker News

Ask HN: How Reliable Is Btrfs?

Tue, 02/17/2026 - 4:46am

I’ve always been reluctant to use BTRFS, primarily because I once experienced data loss on a VM many years ago, and due to the numerous horror stories I'd read over the years. However, many distributions like Fedora or OpenSUSE have made it the default filesystem.

So, I’m wondering how reliable and performant BTRFS is these days? Do you use it, or do you still prefer other filesystems? Feel free to share your experience and preferences.

Comments URL: https://news.ycombinator.com/item?id=47045501

Points: 1

# Comments: 1

Categories: Hacker News

Who Killed Kerouac

Tue, 02/17/2026 - 4:44am

Article URL: https://whokilledkerouac.com/mission

Comments URL: https://news.ycombinator.com/item?id=47045495

Points: 1

# Comments: 0

Categories: Hacker News

Pages