Hacker News

Updated: 9 min 2 sec ago

Show HN: I treated my CV like a data product: evidence.json, MCP endpoint, llms.txt

Wed, 02/25/2026 - 5:38am

Most job applications disappear into ATS black boxes. I started wondering: what if my CV was structured well enough that whatever AI sits between me and a recruiter could actually parse it correctly, instead of mangling a PDF?

I'm not a developer. I built this over a few weeks with Codex and Claude Code.

What I ended up with: https://vassiliylakhonin.github.io/

The interesting design decisions:

Instead of just a PDF, I have six machine-readable JSON files:

- resume.json — standard JSON Resume format
- evidence.json — maps each claimed metric to its source and verification method. The theory: AI candidate evaluation will increasingly distinguish evidenced claims from unverified ones.
- availability.json, capabilities.json, engage.json, verification.json — availability signals, capability profile, intake schema, identity cross-references
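The post doesn't show the actual schema; a hypothetical evidence.json entry in the spirit described (every field name and value here is invented for illustration):

```json
{
  "claims": [
    {
      "metric": "Cut monthly reporting cycle from 5 days to 2",
      "source": "internal BI dashboard export, Q3 2024",
      "verification": "reference check with former manager"
    }
  ]
}
```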

llms.txt points crawlers to the pages that matter. robots.txt explicitly allows GPTBot and OAI-SearchBot. JSON-LD (schema.org ProfilePage/Person) on the homepage.
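As described, the robots.txt rules allowing OpenAI's crawlers would look roughly like this (a sketch of the standard syntax; the actual file may differ):

```
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /
```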

The most experimental piece: a live MCP server on Railway. In principle, an AI recruiting agent could call it as a tool and get structured answers about my background without scraping HTML. I haven't seen anyone else do this for a personal CV, which either means it's ahead of the curve or completely pointless.
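MCP tool calls are JSON-RPC 2.0 messages, so the agent side of such an interaction reduces to building a small request body. A sketch of what a recruiting agent might send to a CV server (the tool name get_experience and its arguments are invented for illustration; the post doesn't name the server's tools):

```python
import json

def build_tools_call(tool_name, arguments, request_id=1):
    """Build an MCP-style JSON-RPC 2.0 'tools/call' request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool and arguments; a real agent would first discover
# available tools via the 'tools/list' method.
payload = build_tools_call("get_experience", {"topic": "reporting systems"})
print(json.dumps(payload, indent=2))
```

The point of the design is exactly this: the agent gets structured answers back, with no HTML scraping or PDF vision step in between.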

The honest version: I have no idea if any of this actually works. I don't know whether recruiter tooling parses llms.txt or JSON-LD from personal sites, or whether everything still flows through LinkedIn scraping and PDF vision models. I built it because structured reporting systems are literally my job, and this felt like the right way to represent that.

Repo: https://github.com/vassiliylakhonin/vassiliylakhonin.github....

Curious: is anyone building sourcing or screening agents that consume structured data from candidate-owned sites? Or does all candidate data still enter the pipeline through LinkedIn and uploaded PDFs?

Comments URL: https://news.ycombinator.com/item?id=47149842

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Ideon – open-source spatial canvas for project context

Wed, 02/25/2026 - 5:28am

Hi HN,

Ideon is a self-hosted workspace that maps project resources (repositories, notes, links, checklists) on a spatial canvas instead of hierarchical lists.

The goal is to preserve the "mental model" of a project visually, reducing context-switching friction when returning to development after a break.

Stack:

- Next.js (App Router) + TypeScript
- PostgreSQL + Prisma
- Docker Compose for self-hosting
- AGPLv3 license

Key features:

- Spatial organization of resources (drag & drop blocks)
- Direct GitHub integration (live issue tracking on canvas)
- Markdown notes with real-time sync
- Fully self-hostable

It's designed to run on a cheap VPS or a home server. I'm looking for feedback on the spatial approach compared to traditional linear project management tools.
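A self-hosted Next.js + PostgreSQL stack like this typically boils down to a two-service compose file; a hypothetical sketch (service names, image tags, and env vars are assumptions, not Ideon's actual config):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: ideon
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: ideon
    volumes:
      - db-data:/var/lib/postgresql/data
  app:
    build: .
    environment:
      # Prisma reads the connection string from DATABASE_URL.
      DATABASE_URL: postgres://ideon:change-me@db:5432/ideon
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  db-data:
```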

Docs: https://www.theideon.com/docs

Comments URL: https://news.ycombinator.com/item?id=47149766

Points: 1

# Comments: 1

Categories: Hacker News

Lightweight OpenClaw Written in C#

Wed, 02/25/2026 - 5:27am

Article URL: https://github.com/AkiKurisu/DotBot

Comments URL: https://news.ycombinator.com/item?id=47149765

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: ImagineIf – Collaborative storytelling where AI visualizes each segment

Wed, 02/25/2026 - 5:25am

Solo dev here. Built a platform where people write short story segments starting with "Imagine if..." and others continue. AI generates an image for each part, stories branch based on community votes.

Stack: React Native/Expo, FastAPI, MariaDB, FLUX-dev via Replicate, Groq for text.

Curious what you think — especially about the cold start problem with collaborative platforms.

Comments URL: https://news.ycombinator.com/item?id=47149756

Points: 2

# Comments: 0

Categories: Hacker News

100M-Row Challenge with PHP

Wed, 02/25/2026 - 5:24am
Categories: Hacker News

Show HN: Gryt – self-hosted, open-source Discord-style voice chat

Wed, 02/25/2026 - 5:22am

This weekend I finally shipped Gryt, a project I’ve been building since 2022 — an open-source, self-hostable Discord-style app focused on reliable voice chat + text.

I’m the creator. I started it after getting fed up with Discord disconnects/paywalls and wanted something self-hosted and auditable.

I started on this in 2022 and had an early proof-of-concept working back then (auth + friends list), but I quickly realized WebRTC voice isn’t something you can duct-tape together. I spent a big chunk of the next couple years learning the stack (ICE/DTLS-SRTP, NAT traversal, SFU design), then came back and built a proper end-to-end architecture and polished it to the point where I felt comfortable releasing it publicly.

Repo: https://github.com/Gryt-chat/gryt
Quick start: https://docs.gryt.chat/docs/guide/quick-start
Web client: https://app.gryt.chat

Comments URL: https://news.ycombinator.com/item?id=47149736

Points: 3

# Comments: 0

Categories: Hacker News

GitHub website is down

Wed, 02/25/2026 - 5:22am

GitHub has become increasingly critical infrastructure. More companies should probably set up their own private Linux servers to host private Git repos.
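A minimal private Git host really is just a bare repository reachable over SSH; a local sketch of the idea (all paths are throwaway examples):

```shell
set -e
tmp=$(mktemp -d)

# On the "server": a bare repository is just a directory, no daemon needed.
git init --bare "$tmp/project.git"

# On the "client": clone it (over SSH this would be user@host:path),
# commit, and push.
git clone "$tmp/project.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email "dev@example.com"
git config user.name "Dev"
echo "hello" > README.md
git add README.md
git commit -q -m "initial commit"
git push -q origin HEAD

# The server side now holds the commit.
git -C "$tmp/project.git" log --oneline
```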

Comments URL: https://news.ycombinator.com/item?id=47149733

Points: 1

# Comments: 1

Categories: Hacker News

A thought on quantum error correction: accuracy without replay feels fragile

Wed, 02/25/2026 - 5:19am

I’ve been thinking about quantum error correction work lately, and one thing keeps bothering me.

A lot of effort goes into reporting decoder accuracy improvements, but much less into whether those results are replayable over time or safe to compare after assumptions change.

In practice, small shifts in noise behavior, detector mapping, or measurement stability can quietly invalidate earlier conclusions. Often everything still “looks reasonable,” so regressions go unnoticed until much later.

It feels similar to early distributed systems work, before reproducibility, rollback, and auditability became normal engineering expectations.

I’m curious how people here think about:

- replaying historical syndrome data against newer decoders
- surfacing stability or confidence in decoder outputs, not just accuracy
- deciding when results shouldn't be compared at all

Is this already well handled in some parts of the field, or is it still an open gap?

Comments URL: https://news.ycombinator.com/item?id=47149713

Points: 1

# Comments: 1

Categories: Hacker News

My AI Coding Workflow

Wed, 02/25/2026 - 4:45am
Categories: Hacker News
