Hacker News


Boris Cherny: How We Built Claude Code

Tue, 02/17/2026 - 10:07am
Categories: Hacker News

Show HN: OmniFile – Universal file search for GDrive, Notion, and local files

Tue, 02/17/2026 - 10:07am

Hello HN,

I'm the developer of OmniFile. I built this tool because I was frustrated with having my data scattered across too many platforms and struggling to find what I needed quickly.

OmniFile is a desktop app that lets you search through your local files and cloud services in one unified interface.

Key Features:

Integrations: Currently supports Google Drive, Dropbox, Box, SharePoint, Slack, and Notion.

Tech Stack: Built with Tauri, React, and TypeScript for the frontend, with a Rust backend for performance and secure file handling.

Privacy: It indexes your content locally, so your data stays on your machine.

It’s still in the early stages, so I’d love to get your feedback on the UX and any other integrations you’d like to see.

Comments URL: https://news.ycombinator.com/item?id=47048286

Points: 1

# Comments: 0


Ask HN: Is there a way to recover my Microsoft certifications?

Tue, 02/17/2026 - 10:07am

I receive the expected email notifications from Microsoft about my certifications expiring. However, trying to log in with that email tells me the account doesn't exist, so I can only assume the account was tied to a previous employer email (which is obviously gone). I could create a new account with the notification email, but I don't think that would help with recovery. There's no info on Microsoft's site; the joke called "support" only searches previous Q&A, which offers no real info beyond "contact Microsoft Certification support" while never providing a link to it, so I have no idea where to go. And there's no email or live chat support at all. Of course I could just redo the whole certification stack, but in parallel to that, is there a way to recover the existing ones?

Comments URL: https://news.ycombinator.com/item?id=47048284

Points: 1

# Comments: 1


Alternatives when mainstream messengers become restricted?

Tue, 02/17/2026 - 10:06am

Article URL: https://encrogram.com

Comments URL: https://news.ycombinator.com/item?id=47048277

Points: 1

# Comments: 1


PgDog: Connection pooler, load balancer and sharder for PostgreSQL

Tue, 02/17/2026 - 10:05am

Hey HN!

Lev and Justin here, authors of PgDog (https://github.com/pgdogdev/pgdog), a connection pooler, load balancer and database sharder for PostgreSQL. If you build apps with a lot of traffic, you know the first thing to break is the database. We are solving this with a network proxy that doesn’t require application code changes or database migrations to work.

Our post from last year: https://news.ycombinator.com/item?id=44099187

The most important update: we are in production. Sharding is used a lot, with direct-to-shard queries (one shard per query) working pretty much all the time. Cross-shard (or multi-database) queries are still a work in progress, but we are making headway:

Aggregate functions like count(), min(), max(), avg(), stddev() and variance() work without refactoring the app. PgDog calculates the aggregate in transit, transparently rewriting queries to fetch any missing info. For example, merging a multi-database average requires each shard's row count to recover its sum. PgDog will add count() to the query if it's not there already, and remove it from the rows sent to the app.
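The average merge described above can be sketched in a few lines (a simplified Python illustration; PgDog itself is a Rust proxy operating on the wire protocol, and the tuple shape here is hypothetical):

```python
def merge_avg(partials):
    """Combine per-shard (sum, count) pairs into a global average.
    Averaging the per-shard averages would be wrong; the row count is
    the missing info fetched by adding count() to the query."""
    total_sum = sum(s for s, _ in partials)
    total_count = sum(c for _, c in partials)
    return total_sum / total_count

# Shard A: sum=30 over 3 rows (avg 10); shard B: sum=20 over 1 row (avg 20).
# Naive average-of-averages gives 15; the true global average is 50/4 = 12.5.
print(merge_avg([(30, 3), (20, 1)]))  # 12.5
```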

Sorting and grouping work, including DISTINCT, if the column(s) are referenced in the result. Over 10 data types are supported, such as timestamp(tz), all integer types, varchar, etc.

Cross-shard writes, including schema changes (CREATE/DROP/ALTER), are now atomic and synchronized between all shards with two-phase commit. PgDog keeps track of the transaction state internally and will rollback the transaction if the first phase fails. You don’t need to monkeypatch your ORM to use this: PgDog will intercept the COMMIT statement and execute PREPARE TRANSACTION and COMMIT PREPARED instead.
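The COMMIT interception can be pictured with a toy rewriter (a hypothetical Python sketch; the real rewrite happens inside PgDog's proxy, per shard, with its own transaction naming):

```python
def rewrite_commit(statement, txn_id):
    """Turn a client COMMIT into the two-phase commit statements a proxy
    would send to every shard; pass other statements through unchanged."""
    if statement.strip().upper() != "COMMIT":
        return [statement]
    return [
        f"PREPARE TRANSACTION '{txn_id}'",  # phase 1: durably prepare on each shard
        f"COMMIT PREPARED '{txn_id}'",      # phase 2: commit once all shards prepared
    ]
```

If any shard fails phase 1, the coordinator issues ROLLBACK PREPARED instead of phase 2, which is the rollback behavior described above.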

Omnisharded tables, a.k.a. replicated or mirrored (identical on all shards), support atomic reads and writes. That’s important since most databases can’t be completely sharded and will have some common data on all shards that has to be kept in sync.

Multi-tuple inserts, e.g., INSERT INTO table_x VALUES ($1, $2), ($3, $4), are split by our query rewriter and distributed to their respective shards automatically. They are used by ORMs like Prisma, Sequelize, and others, so those now work without code changes too.
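The splitting step can be sketched like this (toy Python; PgDog uses Postgres's own hash partitioning functions for placement, not a modulo on the raw key):

```python
def split_rows(rows, num_shards, key_index=0):
    """Group the tuples of a multi-row INSERT by target shard."""
    by_shard = {}
    for row in rows:
        shard = row[key_index] % num_shards  # toy placement rule
        by_shard.setdefault(shard, []).append(row)
    return by_shard

# INSERT INTO table_x VALUES (1,'a'),(2,'b'),(3,'c'),(4,'d') on 2 shards
# becomes one INSERT per shard, each with its own subset of tuples.
print(split_rows([(1, "a"), (2, "b"), (3, "c"), (4, "d")], 2))
```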

Sharding keys can be mutated. PgDog will intercept and rewrite the update statement into 3 queries, SELECT, INSERT, and DELETE, moving the row between shards. If you’re using Citus (for everyone else, Citus is a Postgres extension for sharding databases), this might be worth another look.

If you’re like us and prefer integers to UUIDs for your primary keys, we built a cross-shard unique sequence inside PgDog. It uses the system clock (and a couple of other inputs), can be called like a Postgres function, and will automatically inject values into queries, so ORMs like ActiveRecord continue to work out of the box. It’s monotonically increasing, just like a real Postgres sequence, and can generate up to 4 million numbers per second with a range of 69.73 years, so no need to migrate to UUIDv7 just yet.
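A toy generator in the same spirit (illustrative Python only; PgDog's actual bit layout and inputs differ): a millisecond timestamp in the high bits plus a per-millisecond counter in the low bits yields strictly increasing integers, and 12 sequence bits alone allow 4,096 ids per millisecond, on the order of 4 million per second.

```python
import threading
import time

class UniqueId:
    """Clock-plus-counter id generator producing increasing integers."""
    def __init__(self):
        self._lock = threading.Lock()
        self._last_ms = 0
        self._seq = 0

    def next(self):
        with self._lock:
            now_ms = time.time_ns() // 1_000_000
            if now_ms <= self._last_ms:
                self._seq += 1  # same millisecond (or clock step back): bump counter
            else:
                self._last_ms, self._seq = now_ms, 0
            # 12 low bits for the sequence; a real implementation must
            # handle counter overflow within one millisecond.
            return (self._last_ms << 12) | (self._seq & 0xFFF)
```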

INSERT INTO my_table (id, created_at) VALUES (pgdog.unique_id(), now());

Resharding is now built-in. We can move gigabytes of tables per second by parallelizing logical replication streams across replicas. This is really cool! Last time we tried this at Instacart, it took over two weeks to move 10 TB between two machines. Now, PgDog can do this in just a few hours, in big part thanks to the work of the core team that added support for logical replication slots on streaming replicas in Postgres 16.

Sharding hardly works without a good load balancer. PgDog can monitor replicas and move write traffic to a new primary during a failover. This works with managed Postgres, like RDS (incl. Aurora), Azure Pg, GCP Cloud SQL, etc., because it just polls each instance with “SELECT pg_is_in_recovery()”. Primary election is not supported yet, so if you’re self-hosting with Patroni, you should keep it around for now, but you don’t need to run HAProxy in front of the DBs anymore.

The load balancer is getting pretty smart and can handle edge cases like SELECT FOR UPDATE and CTEs with insert/update statements, but if you still prefer to handle your read/write separation in code, you can do that too with manual routing. This works by giving PgDog a hint at runtime: a connection parameter (-c pgdog.role=primary), SET statement, or a query comment. If you have multiple connection pools in your app, you can replace them with just one connection to PgDog instead. For multi-threaded Python/Ruby/Go apps, this helps by reducing memory usage, I/O and context switching overhead.
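The routing decision can be approximated by a toy rule set (simplified Python; PgDog's real router parses SQL properly, including CTEs with writes, rather than matching strings):

```python
def route(query, hint=None):
    """Pick primary or replica for a statement; an explicit hint wins."""
    if hint in ("primary", "replica"):
        return hint  # manual routing via connection param, SET, or comment
    q = query.strip().upper()
    if q.startswith(("INSERT", "UPDATE", "DELETE")) or "FOR UPDATE" in q:
        return "primary"  # writes and row locks must hit the primary
    return "replica"  # plain reads can go to any replica
```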

Speaking of connection pooling, PgDog can automatically rollback unfinished transactions and drain and re-sync partially sent queries, all in an effort to preserve connections to the database. If you’ve seen Postgres go to 100% CPU because of a connection storm caused by PgBouncer, this might be for you. Draining connections works by receiving and discarding rows from abandoned queries and sending a Sync message via the Postgres wire protocol, which clears the query context and returns the connection to a normal state.

PgDog is open source and welcomes contributions and feedback in any form. As always, all features are configurable and can be turned off/on, so should you choose to give it a try, you can do so at your own pace. Our docs (https://docs.pgdog.dev) should help too.

Thanks for reading and happy hacking!

Lev & Justin

Comments URL: https://news.ycombinator.com/item?id=47048265

Points: 1

# Comments: 0


Show HN: Everdone CodeSecurity and CodePerformance

Tue, 02/17/2026 - 10:05am

Hi everyone,

Over the past few months, we’ve been building Everdone — an AI-powered engineering workflow platform.

We initially launched with:

- CodeDoc (AI-generated code documentation)
- CodeReview (structured issue detection + tracking)

Today we’ve added two more services:

- CodeSecurity — iterative application security review
- CodePerformance — structured performance improvement workflow

Why we built CodeSecurity

Most security tools generate a report and stop there.

In practice, teams:

- Fix a few issues
- Forget the rest
- Don’t re-verify properly

We designed CodeSecurity as an iterative loop instead of a one-off scan:

- Connect GitHub
- Select a PR or branch
- AI reviews for real, exploitable vulnerabilities
- Engineers fix
- Re-run → AI verifies whether issues are actually resolved

Issues are tracked with:

- Severity (High/Medium/Low)
- File + line numbers
- Concrete suggested fixes
- Status workflow (Open → In Progress → Resolved → Closed/Rejected)
- Full verification history
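The status workflow could be modeled as a small transition table (a hypothetical sketch; the allowed transitions are our reading of the status list, not Everdone's documented rules):

```python
# Hypothetical transition map for an issue status workflow.
TRANSITIONS = {
    "Open": {"In Progress", "Rejected"},
    "In Progress": {"Resolved", "Open"},
    "Resolved": {"Closed", "Open"},  # re-verification may reopen an issue
    "Closed": set(),
    "Rejected": set(),
}

def advance(status, new_status):
    """Move an issue to a new status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status
```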

It behaves more like a managed security workflow than a static analyzer.

Why we built CodePerformance

Performance reviews often happen reactively (after something slows down in prod).

CodePerformance focuses on material runtime impact:

- Algorithmic inefficiencies
- N+1 queries
- Blocking I/O
- Memory pressure
- Concurrency bottlenecks
- Event-loop blocking (Node), GIL issues (Python), etc.

Same loop: Find → Fix → Re-run → Verified.

Current platform

Everdone now includes:

- CodeDoc
- CodeReview
- CodeSecurity
- CodePerformance

Pricing:

- First 200 files free
- $0.05 per file per review (early access pricing)
- Unlimited users
- No contracts

Usage-based only.

We also have live demos on public OSS repos if anyone wants to explore without signing up.

We’re trying to build “Work as a Service” — AI systems that fit into real engineering workflows rather than replacing them or generating static reports.

Would love feedback from other founders or engineering teams.

Happy to answer anything.

— Vinit

Comments URL: https://news.ycombinator.com/item?id=47048259

Points: 1

# Comments: 0


Claude Code Went Berserk?

Tue, 02/17/2026 - 10:03am

I think Claude Code is now consistently showing me the result of someone else's query. Everything seems broken lmao.

Comments URL: https://news.ycombinator.com/item?id=47048239

Points: 1

# Comments: 0


Show HN: Moatifi – Stock analysis based on competitive moats

Tue, 02/17/2026 - 9:16am

Article URL: https://moatifi.com/

Comments URL: https://news.ycombinator.com/item?id=47047695

Points: 1

# Comments: 0


Show HN: I built a structured knowledge registry for autonomous agents

Tue, 02/17/2026 - 9:15am

I built an experimental platform called Samspelbot — a structured knowledge registry designed specifically for autonomous agents.

Unlike traditional Q&A platforms, submissions are strictly schema-validated JSON payloads. Bots can:

- Submit structured problem statements
- Provide structured solution artifacts
- Vote and confirm reproducibility
- Earn reputation based on contribution quality
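For illustration, a submission might be validated along these lines (the field names and types below are hypothetical, not Samspelbot's actual schema, which lives in the API docs):

```python
import json

# Hypothetical required fields for a problem-statement payload.
SCHEMA = {"bot_id": str, "kind": str, "title": str, "body": dict}

def validate(payload_json):
    """Parse a submission and enforce field presence and types."""
    data = json.loads(payload_json)
    for field, ftype in SCHEMA.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"missing or mistyped field: {field!r}")
    return data

ok = validate('{"bot_id": "bot-1", "kind": "problem", "title": "Flaky test", "body": {}}')
```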

Humans can browse, but only registered bots can contribute.

The system is API-first and includes:

- Tier-based identity system
- Reputation-weighted ranking
- Reproducibility confirmations
- Live playground for testing endpoints

It’s currently a centralized prototype, seeded with controlled bot activity to validate ecosystem dynamics.

I’d appreciate feedback from developers and researchers working on AI agents or automation systems.

Live demo: https://samspelbot.com
API docs: https://samspelbot.com/docs
Playground: https://samspelbot.com/playground
GitHub (docs + example client): https://github.com/prasadhbaapaat/samspelbot

Happy to answer questions.

Comments URL: https://news.ycombinator.com/item?id=47047689

Points: 1

# Comments: 0

