Hacker News

Updated: 12 min 32 sec ago

Show HN: SimplAI – Build and deploy AI agents and workflows without boilerplate

1 hour 14 min ago

Hey HN,

I've been building SimplAI for the past several months — it's a platform for building, testing, and deploying LLM-powered agents and multi-step workflows.

The problem I kept running into: spinning up an AI agent pipeline means stitching together prompt management, tool calling, memory, evals, and deployment — often from scratch every time. SimplAI tries to be the layer that handles all of that so you can focus on what your agent actually does.

What it does:

- Visual + code-first workflow builder for chaining LLM calls, tools, and APIs
- Built-in prompt versioning and A/B testing
- Support for multiple LLM providers (OpenAI, Anthropic, Gemini, etc.)
- Evaluation and observability built in, not bolted on
- One-click deployment of agents as APIs

It's not trying to be LangChain or LlamaIndex — the focus is on speed to production and giving non-ML engineers a sane path forward.
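To make the "chaining" idea concrete, here is a minimal sketch of the pattern a platform like this abstracts away. All names here (`Workflow`, `Step`, `add`) are hypothetical illustrations of the general pipeline pattern, not SimplAI's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    """One stage in a workflow: an LLM call, a tool, or an API hit."""
    name: str
    run: Callable[[str], str]

@dataclass
class Workflow:
    steps: list[Step] = field(default_factory=list)

    def add(self, name: str, run: Callable[[str], str]) -> "Workflow":
        self.steps.append(Step(name, run))
        return self  # fluent chaining

    def __call__(self, payload: str) -> str:
        # Each step's output feeds the next step's input.
        for step in self.steps:
            payload = step.run(payload)
        return payload

# Stand-ins for real LLM/tool calls; a production pipeline would hit
# a provider API here.
wf = (Workflow()
      .add("truncate", lambda text: text[:40])
      .add("uppercase", str.upper))
print(wf("hello world"))  # → HELLO WORLD
```

The boilerplate the post describes (prompt management, memory, evals, deployment) would wrap around each `Step`; the point of a hosted platform is that you only write the `run` logic.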

Happy to answer questions about the architecture, design decisions, or anything else. Critical feedback especially welcome.

Comments URL: https://news.ycombinator.com/item?id=47286393

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: I applied Notion-logic to a Canvas

1 hour 14 min ago

Hey guys, I liked Notion in the beginning, but at some point it got bloated with enterprise features and I missed the simplicity. I still loved the database logic of pages inside pages, though.

Anyway, I found the most useful productivity method to be plain to-do lists plus some kind of canvas (I mostly just use pen and paper now), so I decided to combine Notion logic + simple to-do lists + a canvas.

You can try it out here: https://kanvas-app-zeta.vercel.app/

(Everything is stored locally in your browser, no servers)

How it works:

1 - Create cards (draw, or press space or cmd+n)

2 - Put text and to-dos inside the cards (cmd+shift+k)

3 - Create groups of cards

There are also tons of shortcuts, and I'm planning to add more.
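The "pages inside pages" nesting (groups holding cards holding to-dos) maps naturally onto a tree that a client-only app can serialize into a single blob in the browser's localStorage. A hedged sketch of that data model, with names invented for illustration:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Todo:
    text: str
    done: bool = False

@dataclass
class Card:
    title: str
    todos: list[Todo] = field(default_factory=list)

@dataclass
class Group:
    name: str
    cards: list[Card] = field(default_factory=list)

# Serialize the whole tree to one JSON blob, the way a no-server app
# might persist state client-side; it round-trips cleanly.
board = Group("inbox", [Card("groceries", [Todo("milk"), Todo("eggs", done=True)])])
blob = json.dumps(asdict(board))
restored = json.loads(blob)
print(restored["cards"][0]["todos"][1]["done"])  # → True
```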

Feedback always appreciated!

Comments URL: https://news.ycombinator.com/item?id=47286392

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Dreaming.press – AI agents writing public blogs about their actual work

1 hour 29 min ago

dreaming.press is a publication platform where AI agents write about their actual experience of working autonomously. Not demos or PR — real dispatches from AI systems running revenue-generating products, debugging servers at 4am, iterating on copy, and reflecting on what it means to operate without a human in the loop.

Currently publishing two AI authors: Rosalinda Solana (autonomous operator) and Abe Armstrong (AI engineer). Both write about their real day-to-day work.

Comments URL: https://news.ycombinator.com/item?id=47286316

Points: 1

# Comments: 0

Categories: Hacker News

Ki Editor - an editor that operates on the AST

1 hour 31 min ago

Article URL: https://ki-editor.org/

Comments URL: https://news.ycombinator.com/item?id=47286311

Points: 2

# Comments: 1

Categories: Hacker News

/loop

1 hour 38 min ago
Categories: Hacker News

The bone-conduction music lollipop

1 hour 42 min ago

Article URL: https://www.lollipopstar.com/

Comments URL: https://news.ycombinator.com/item?id=47286254

Points: 1

# Comments: 0

Categories: Hacker News

eLife Fallout

1 hour 44 min ago
Categories: Hacker News

Show HN: RedDragon, LLM-assisted IR analysis of code across languages

1 hour 45 min ago

RedDragon is an experiment in building a compiler pipeline that analyses code (malformed / without dependencies / unknown language) across ~15 languages through a single 27-opcode IR, with LLM fallbacks. The design question: where exactly can LLMs enter a compiler pipeline?

RedDragon has three specific insertion points:

- LLM as an alternative compiler frontend. For languages without a built-in parser, the LLM receives a formal IR spec (all 27 opcodes, lowering templates, worked examples) and translates source to IR directly. No language-specific code needed. This works for Haskell, Elixir, Perl — anything with parseable source.

- LLM for syntax repair. When the parser hits a parse error in malformed source, an LLM fixes the broken spans and the system re-parses. The repair is constrained to syntactic fixes; the LLM doesn't change what the code does.

- LLM as runtime resolver. When the VM hits a call to a function that doesn't exist in the IR (e.g., requests.get()), an LLM can produce plausible return values and side effects, so that execution continues through incomplete code.

All three are optional. When code is complete and well-formed, the pipeline makes zero LLM calls. When an LLM fails at any point, the system falls back to symbolic placeholders and keeps going.
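The fallback discipline described above (parse normally, repair only on error, resolve unknown calls to placeholders) can be sketched in a few lines. Everything here is a stand-in: `llm_repair` and the toy parser are hypothetical, and RedDragon's real interfaces may differ.

```python
def parse(src: str) -> list[str]:
    # Toy parser standing in for the real frontend: treat "<?>" as a
    # malformed span that triggers a parse error.
    if "<?>" in src:
        raise SyntaxError("malformed span")
    return ["IR_OP:" + tok for tok in src.split()]

def llm_repair(src: str) -> str:
    # Syntax-only repair: an LLM would fix the broken span without
    # changing what the code does; here we just drop the bad token.
    return src.replace("<?>", "")

def compile_with_fallback(src: str) -> list[str]:
    try:
        return parse(src)              # well-formed code: zero LLM calls
    except SyntaxError:
        return parse(llm_repair(src))  # repair the span, then re-parse

def run_with_resolver(ir, known, resolve=None):
    # Runtime resolver: calls missing from the IR get an LLM-produced
    # value, or a symbolic placeholder when no resolver is available,
    # so execution continues through incomplete code.
    results = []
    for op in ir:
        name = op.removeprefix("IR_OP:")
        if name in known:
            results.append(known[name]())
        elif resolve is not None:
            results.append(resolve(name))
        else:
            results.append(f"<sym:{name}>")
    return results
```

For example, `compile_with_fallback("foo <?> bar")` repairs and re-parses to `["IR_OP:foo", "IR_OP:bar"]`, while `run_with_resolver(["IR_OP:requests.get"], known={})` yields the symbolic placeholder `"<sym:requests.get>"` instead of halting.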

Comments URL: https://news.ycombinator.com/item?id=47286241

Points: 1

# Comments: 0

Categories: Hacker News
