Hacker News

Show HN: Mindweave – AI-powered personal knowledge hub with semantic search

Hacker News - Mon, 02/16/2026 - 12:20am

Hi HN,

I built Mindweave to solve a problem I kept running into — I'd save bookmarks, notes, and links across different apps, then never find them again when I actually needed them.

Mindweave lets you capture notes, links, and files in one place. The interesting part is what happens after:

- Semantic search — find content by meaning, not just keywords. "That article about improving deep work" finds it even if those words don't appear in the content. Powered by pgvector cosine similarity on Gemini embeddings.

- AI auto-tagging — Gemini generates tags on save, so you don't have to organize anything manually.

- Knowledge Q&A — ask questions about your saved content using RAG. Retrieves relevant pieces, feeds them as context to Gemini, returns a grounded answer.

Stack: Next.js 15 (App Router, Server Actions), PostgreSQL 16 + pgvector, Google Gemini (text-embedding-004, 768d), Drizzle ORM, Auth.js v5, Tailwind/shadcn. Deployed on Cloud Run.
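The retrieval behind both the semantic search and the Q&A can be sketched in miniature. This in-memory version is illustrative only (the production path runs the same cosine ranking inside Postgres via pgvector's distance operator); the `Doc` shape and `topK` helper are my assumptions, not Mindweave's API:

```typescript
type Doc = { id: number; text: string; embedding: number[] };

// Cosine similarity: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored docs against a query embedding and keep the best k,
// the same shape as `ORDER BY embedding <=> $query LIMIT k` in pgvector.
function topK(docs: Doc[], query: number[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) =>
      cosineSimilarity(y.embedding, query) -
      cosineSimilarity(x.embedding, query))
    .slice(0, k);
}
```

For the Q&A path, the top-k texts would then be concatenated into the Gemini prompt as grounding context.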

A few things I found interesting while building this:

- pgvector inside Postgres is surprisingly capable at this scale. No need for a separate vector DB.

- The biggest UX challenge was handling edge cases in similarity scores — zero-magnitude embeddings produce NaN from cosine distance, and PostgreSQL treats float8 NaN as greater than all other numbers, so those rows pass through WHERE filters silently.

- AI tagging removes more friction than I expected. The difference between "I'll tag this later" and "it's already tagged" is the difference between a system you use and one you abandon.
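To make the NaN edge case concrete, here is a guarded cosine similarity. The guard shown (mapping a zero-magnitude operand to similarity 0) is one possible fix, not necessarily the one shipped:

```typescript
// Cosine similarity with an explicit zero-magnitude guard.
function safeCosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  const denom = Math.sqrt(na) * Math.sqrt(nb);
  // Without the guard, 0/0 yields NaN. In JS every comparison with NaN
  // is false, but Postgres orders float8 NaN above all other values,
  // so a NaN score sails through `WHERE score > threshold` in SQL.
  return denom === 0 ? 0 : dot / denom;
}
```

On the SQL side, the equivalent defense is to reject zero-norm embeddings at write time so the distance computation can never produce NaN in the first place.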

Live at www.mindweave.space. Source at https://github.com/abhid1234/MindWeave

LinkedIn: https://www.linkedin.com/posts/activity-7428965058388590592-...

Would love feedback, especially on the semantic search UX and the RAG implementation.

Comments URL: https://news.ycombinator.com/item?id=47031164

Points: 1

# Comments: 0

Categories: Hacker News

Too Much Hype?

Hacker News - Mon, 02/16/2026 - 12:13am

Does anyone else feel bummed that the big takeaways from AI lately have been "software is dead," etc.? I often wake up with overwhelming gratitude that so many important problems get to be solved, and that the barrier to entry for solving them has dropped so much.

There is so much interesting work to be done, but people seem fixated on the negatives.

Comments URL: https://news.ycombinator.com/item?id=47031137

Points: 1

# Comments: 0

Categories: Hacker News

Seedance 2.0

Hacker News - Mon, 02/16/2026 - 12:06am
Categories: Hacker News

Show HN: Tool that spams job listings with honeypot resumes to detect ghost jobs

Hacker News - Mon, 02/16/2026 - 12:01am

Most job boards are just aggregators.

They scrape, paste, and forget. I built Oitii to actually analyze the data before showing it to users.

I developed a scoring system (0-100) to filter out low-effort and ghost listings.

1. Hiring Freeze Cross-Check: We verify the company's current financial health against real-time layoff data and hiring freeze trackers before indexing the job. If they just laid off 20% of engineering, we warn you.

2. Smart Salary Synthesis: We never show "Undisclosed." If the DB has gaps, we parse job.title for seniority keywords (e.g., "Staff" vs. "Mid-Level") and synthesize high-fidelity estimates based on current market rates.

3. The "Trap" Detector: Our engine flags logical contradictions in the JD. For example, if the title says "Entry Level" but the description demands "3+ years of experience," it gets a massive quality penalty.

4. Active Ping & Honeypots: We don't just trust the post. We use proxy applications to track whether resumes are actually being opened (pixel tracking). If the "View Rate" is 0% over two weeks, the job is marked as dead.

5. The "Growth Signal" Audit (Cross-Platform Fingerprinting): We cross-reference the listing against the company's direct career page and historical aggregator data to catch "investor fluff."

The Logic: We identify jobs that are reposted on aggregators (to look like the company is growing for VCs) but have been removed or never existed on the company's main ATS.

Zombie Detection: If a role has a high repost velocity (e.g., refreshed every 10 days) but no interview movement, it is flagged as a marketing asset, not a job opening.
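For a concrete sense of how one of these checks could score, here is a minimal sketch of check #3, the "Trap" Detector. The `trapPenalty` name, the keyword list, and the flat 40-point penalty are all illustrative, not the production weights:

```typescript
// Flag the "Entry Level title, senior requirements" contradiction.
// Returns a penalty to subtract from the 0-100 quality score.
function trapPenalty(title: string, description: string): number {
  const entryLevel = /entry[- ]level|junior|graduate/i.test(title);
  // Match "3+ years", "5 yrs", etc. in the description.
  const m = description.match(/(\d+)\s*\+?\s*(?:years?|yrs?)/i);
  const yearsRequired = m ? parseInt(m[1], 10) : 0;
  // An "entry level" posting demanding 3+ years is a contradiction.
  return entryLevel && yearsRequired >= 3 ? 40 : 0;
}
```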

It's built with Python, Next.js, and Supabase.

I’d love feedback on the scoring weights.

Comments URL: https://news.ycombinator.com/item?id=47031077

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Untranslated Einstein paper available in English for the first time

Hacker News - Sun, 02/15/2026 - 11:44pm

As far as I can tell, this paper (in which Einstein solves a decades-old question about Crookes radiometers) has never been available in English! Einstein's work entered the public domain on 1 January 2026, which meant that I could finally release this translation that I did during my PhD in 2019!

I have a blog post that gives a little more context: https://adaptive-machine-patterns.com/blog.html#einstein (alt. link: http://archive.today/381Pl). I am new to blogging, so advice is welcome.

The preprint is hosted at Cambridge University: https://www.repository.cam.ac.uk/handle/1810/398349 and it has a DOI: https://doi.org/10.17863/CAM.127224

Comments URL: https://news.ycombinator.com/item?id=47030977

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Jsiphon – Streaming JSON parser with delta tracking and ambiguity trees

Hacker News - Sun, 02/15/2026 - 11:40pm

Hi HN, I built Jsiphon to solve a common frustration with LLM streaming: you ask for structured JSON output, but can't use any of it until the entire stream finishes.

If you've used JSON mode (OpenAI, Anthropic, etc.), you've hit this — you want {"answer": "...", "sources": [...]}, but JSON.parse() fails on every incomplete chunk.

LLM responses are inherently append-only (tokens arrive left to right, never go back), so Jsiphon leans into that with three ideas:

1) Append-only parsing — Feed in {"msg": "Hel and get {msg: "Hel"} immediately. Values are only extended, never removed or mutated.

2) Delta tracking — Each snapshot contains only what's new. For a chat bubble, just append delta.content to the DOM — when the LLM produces the next chunk "lo, World!", we immediately get {msg: "lo, World!"}. No need to re-parse the partial JSON or rerender the full tree.

3) Ambiguity tree — A tree that mirrors the shape of your data and tracks which subtrees are finalized at every depth. For example, if you're streaming {"header": {"title": "...", "date": "..."}, "body": "..."}, you can check isAmbiguous(ambiguous.header.title) to use the title the moment it's done, even while header.date and body are still streaming. This isn't a flat "is the whole thing done?" flag — it's per-node stability tracking that propagates up, so isAmbiguous(ambiguous.header) turns false only when all of header's children are finalized.
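Ideas (1) and (2) can be illustrated without the library's internals. This toy sketch handles only flat objects with string values; it is not jsiphon's parser, just the shape of append-only parsing and suffix deltas:

```typescript
// Toy append-only parse: close any dangling string and the object so
// that a truncated chunk like `{"msg": "Hel` parses as {msg: "Hel"}.
function parsePartial(chunk: string): Record<string, string> {
  let s = chunk;
  // If we are inside an unterminated string literal, close it.
  const quotes = (s.match(/(?<!\\)"/g) ?? []).length;
  if (quotes % 2 === 1) s += '"';
  // Drop a trailing comma, then close the object if needed.
  s = s.replace(/,\s*$/, "");
  if (!s.trim().endsWith("}")) s += "}";
  return JSON.parse(s);
}

// Idea (2): because values are append-only, the delta for a string
// field is always a pure suffix of the new value.
function delta(prev: string, next: string): string {
  return next.slice(prev.length);
}
```

The append-only guarantee is what makes the append-to-DOM pattern safe: a value can only grow, so yesterday's rendered prefix never has to be retracted.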

Existing partial JSON parsers like partial-json and gjp-4-gpt do a great job at the core parsing problem — turning broken JSON into usable objects. Jsiphon builds on that foundation and takes it one step further: instead of just parsing, it gives you a streaming data pipeline where append-only snapshots, per-field deltas, and multi-depth ambiguity tracking all come out of a single async iteration. If you've been using partial-json and wished you knew which fields were done vs still streaming without polling the whole object, that's exactly the gap this fills.

Zero dependencies, never throws on invalid input, handles junk text before/after the JSON root (which LLMs sometimes produce).
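The junk-text point deserves a sketch of its own. A naive way to locate the root, illustrative only (it ignores braces inside string literals, which a real parser must handle):

```typescript
// Naive junk-stripper: keep everything from the first "{" to the last
// "}". This sketch does not skip braces inside string literals; it
// exists only to show the shape of the problem.
function stripJunk(raw: string): string | null {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  return start !== -1 && end > start ? raw.slice(start, end + 1) : null;
}
```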

GitHub: https://github.com/webtoon-today/jsiphon

npm install jsiphon

Would love feedback on the API design — especially the ambiguity tree. Tracking per-node stability across arbitrary nesting depth was the trickiest part. Curious if anyone sees a cleaner approach.

Disclosure: I'm a native Korean speaker. I used Claude to help structure and translate this post into English. The ideas and code are mine.

Comments URL: https://news.ycombinator.com/item?id=47030961

Points: 1

# Comments: 0

Categories: Hacker News
