Feed aggregator

Dear Agent: Prove It

Hacker News - Wed, 02/11/2026 - 11:23pm
Categories: Hacker News

Distributed Llama

Hacker News - Wed, 02/11/2026 - 11:07pm
Categories: Hacker News

GLM-5 was trained entirely on Huawei chips

Hacker News - Wed, 02/11/2026 - 11:06pm

Article URL: https://glm5.net/

Comments URL: https://news.ycombinator.com/item?id=46984799

Points: 4

# Comments: 1

Categories: Hacker News

Show HN: Prompt Builder – A block-based editor for composing AI prompts

Hacker News - Wed, 02/11/2026 - 11:05pm

Hi HN,

I built Prompt Builder (https://www.promptbuilder.space/) because I was frustrated managing complex AI prompts in plain text boxes.

The core idea: instead of one giant textarea, you compose prompts from draggable, reorderable blocks. Each block can have its own XML/custom tags and visibility toggles (so you can A/B test sections), and blocks can be duplicated.

Key technical details:
- Live compiled preview: the right pane shows the exact string being sent, updating as you drag/toggle blocks
- Dynamic variables: define {{var_name}} once, use across blocks, change in one place
- Multi-model execution: send the same compiled prompt to different model APIs and compare outputs
- Voice transcription + optional translation (useful for non-English prompt authoring)
- Editable response pane: modify the model's output in place
- Folder organization, search, and password-protected shareable links
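The block-to-string compilation described above (ordered blocks, per-block tags, visibility toggles, shared {{var_name}} variables) can be sketched in a few lines. This is not Prompt Builder's actual code — just a minimal Python illustration of the idea, with all names (`Block`, `compile_prompt`) being hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Block:
    """One prompt block: body text, an optional XML-style tag, a visibility toggle."""
    body: str
    tag: str = ""
    visible: bool = True

def compile_prompt(blocks: list[Block], variables: dict[str, str]) -> str:
    """Join visible blocks in order, wrapping tagged blocks in their tag
    and substituting shared {{var_name}} placeholders."""
    parts = []
    for block in blocks:
        if not block.visible:
            continue  # toggled-off blocks are excluded from the compiled string
        text = block.body
        for name, value in variables.items():
            text = text.replace("{{" + name + "}}", value)
        if block.tag:
            text = f"<{block.tag}>\n{text}\n</{block.tag}>"
        parts.append(text)
    return "\n\n".join(parts)
```

Toggling a block off and recompiling gives you the A/B variant, and changing one entry in `variables` updates every block that references it.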

I'd love feedback on the UX and whether the block metaphor resonates with how you think about prompt structure. What's missing?

Comments URL: https://news.ycombinator.com/item?id=46984792

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Membrane, revisable memory for long lived AI agents

Hacker News - Wed, 02/11/2026 - 10:42pm

Membrane is a general-purpose memory substrate for agentic systems.

Most agent memory today is either context-window state or an append-only retrieval store. That allows recall but not learning: facts become stale, procedures drift, and agents cannot revise knowledge safely without losing provenance or auditability.

Membrane treats memory as something that can evolve over time. It ingests raw experience, promotes it into structured knowledge, and allows records to be superseded, forked, contested, merged, or retracted with evidence. The goal is to let long-lived agents improve while remaining predictable and inspectable.

The core ideas are simple.

- Memory is typed instead of stored as flat text
- Knowledge is revisable and keeps provenance
- Agents can learn competences and procedures, not just facts
- Salience decays over time so unused knowledge fades
- Retrieval is filtered by trust and sensitivity levels
- Revision history remains auditable

Membrane runs either as a daemon with a gRPC API or as an embedded Go library. Storage uses SQLite with optional SQLCipher encryption. The repository includes an evaluation suite covering ingestion, revision, consolidation, and retrieval ordering.

Membrane intentionally does not implement vector similarity search. Retrieval backends and agent policy are separated from the memory layer so the system can remain deterministic and inspectable.

I built this while experimenting with long-lived agents that need something closer to learning than what RAG systems currently provide. Feedback on architecture, edge cases, and real-world use cases would be helpful.

If you have any suggestions or want to contribute, anything is welcome :)

Comments URL: https://news.ycombinator.com/item?id=46984655

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: 10-min AI threat model (STRIDE and MAESTRO), assumption-driven

Hacker News - Wed, 02/11/2026 - 10:38pm

Hi HN, I built an assumption-driven AI security assessment for teams shipping AI features without a dedicated security team yet.

You paste your AI use case (what it does, data types, vendors, deployment). In ~10 minutes you get a PDF report by email containing:
- Trust boundaries + data flows + a threat model diagram (explicitly marked as conceptual/assumption-based)
- Threats mapped to STRIDE + MAESTRO (agentic AI)
- A risk rating (impact/likelihood) + 5×5 risk matrix
- Recommended security controls and compliance mappings (example: EU AI Act, NIST AI 600-1)
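The impact/likelihood rating on a 5×5 matrix is just a product of two 1–5 scores bucketed into qualitative bands. The band thresholds below are a common convention, not necessarily the ones this tool uses — a small Python sketch for intuition:

```python
def risk_rating(impact: int, likelihood: int) -> tuple[int, str]:
    """Score a threat on a 5x5 matrix: impact x likelihood (each 1-5),
    then bucket the product into a qualitative band (assumed thresholds)."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must each be 1-5")
    score = impact * likelihood
    if score >= 15:
        band = "High"
    elif score >= 6:
        band = "Medium"
    else:
        band = "Low"
    return score, band
```

So a threat rated impact 4, likelihood 4 scores 16 and lands in the High band, while impact 2, likelihood 2 scores 4 and stays Low.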

Important: we make assumptions (ex: AWS deployment, common patterns) and we call them out in the report so you can correct them.

Link: https://raxit.ai/assessment

Would love feedback on what’s wrong, what’s missing, and what would make this actually useful in a real security review.

Comments URL: https://news.ycombinator.com/item?id=46984636

Points: 1

# Comments: 0

Categories: Hacker News
