Feed aggregator

Show HN: TattooForge – AI Tattoo Design Generator

Hacker News - Fri, 02/27/2026 - 10:13pm

Article URL: https://tattooforge.art

Comments URL: https://news.ycombinator.com/item?id=47189771

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: AxKeyStore – Zero-trust CLI secrets manager using your own GitHub repo

Hacker News - Fri, 02/27/2026 - 9:06pm

Hi HN,

I built AxKeyStore, an open-source CLI tool for managing secrets that uses your own private GitHub repository as encrypted storage.

The idea is simple:
→ All encryption happens locally.
→ Secrets are stored as encrypted blobs in your private repo.
→ GitHub is treated as untrusted storage.
→ No plaintext secrets ever leave your machine.
→ No plaintext secrets are stored on disk.

Most secret managers require a hosted backend, a self-hosted server, or trust in a SaaS provider. I wanted something that:
→ Requires zero infrastructure
→ Uses tooling developers already have
→ Keeps the threat model simple

AxKeyStore uses a layered approach to security:
→ Secrets are encrypted with a Remote Master Key (RMK).
→ The RMK is encrypted with your master password and stored in the repo.
→ A Local Master Key (LMK) encrypts your local GitHub token and repo config.
→ The LMK itself is encrypted using Argon2id-derived keys from your master password.
→ Encryption uses XChaCha20-Poly1305 (AEAD).

GitHub only sees encrypted binary blobs. Even if someone compromises your repository, they still need your master password to decrypt anything.
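The layered key hierarchy above can be sketched in Python. This is a toy stand-in, not AxKeyStore's implementation: `hashlib.scrypt` replaces Argon2id and a SHA-256 counter-mode stream with an HMAC tag replaces XChaCha20-Poly1305, since neither primitive is in the standard library. Do not use this construction for real secrets; it only illustrates how a password-derived key wraps the RMK, which in turn encrypts the secret blobs pushed to the repo.

```python
import hashlib, hmac, secrets

def derive_key(password: str, salt: bytes) -> bytes:
    # Stand-in for Argon2id; the real tool derives keys with Argon2id.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=2**25, dklen=32)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode) standing in for XChaCha20.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def wrap(key: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: nonce || ciphertext || HMAC tag.
    nonce = secrets.token_bytes(24)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unwrap(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:24], blob[24:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# Hierarchy: master password -> KDF key -> wraps RMK -> encrypts secrets.
salt = secrets.token_bytes(16)
kdf_key = derive_key("correct horse battery staple", salt)
rmk = secrets.token_bytes(32)        # Remote Master Key
wrapped_rmk = wrap(kdf_key, rmk)     # stored in the GitHub repo
secret = b"AWS_SECRET_ACCESS_KEY=example"
blob = wrap(rmk, secret)             # encrypted blob committed to the repo
```

The repo only ever holds `wrapped_rmk` and `blob`; recovering `secret` requires re-deriving `kdf_key` from the master password, which never leaves the machine.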

Why GitHub? Because it gives:
→ Private repositories
→ Version history (commit log as audit trail)
→ Access control
→ Free storage
→ Global availability

Instead of building a backend, I’m leveraging an existing, reliable system - but cryptographically isolating it.

Features:
→ Simple CLI workflow
→ Hierarchical categories (e.g., cloud/aws/prod)
→ Version history per secret (via commits)
→ Retrieve specific versions by SHA
→ Multi-profile support (separate vaults)
→ Transactional master password reset
→ Secure random secret generation

Tech:
→ Written in Rust. Uses tokio, clap, argon2, and chacha20poly1305.
→ Unit and integration tests (including mocked GitHub API interactions).
→ Open source, MIT licensed.

I’d appreciate feedback on:
→ The threat model: what am I missing?
→ Whether GitHub as encrypted blob storage is a bad assumption
→ UX improvements for CLI-based secret workflows
→ Any crypto or key-handling concerns

I’m especially interested in critique from people who’ve built or audited secret management systems.

Thanks.

Comments URL: https://news.ycombinator.com/item?id=47189172

Points: 1

# Comments: 0

Categories: Hacker News

Trapped in MS Office

Hacker News - Fri, 02/27/2026 - 9:04pm
Categories: Hacker News

Show HN: Meet Alfonso: My OpenClaw Put on Public Discord

Hacker News - Fri, 02/27/2026 - 9:03pm

An OpenClaw: Born in Barranquilla, raised in Miami, eighteen years in London. He built a consultancy that made good money, then watched it hollow out because he chose being interesting over being disciplined. The closest he got to real love ended when she asked “do you see us building a life together?” and his honest answer was a long pause.

Comments URL: https://news.ycombinator.com/item?id=47189131

Points: 3

# Comments: 1

Categories: Hacker News

Show HN: Adversarial AI agents that debate and verify travel itineraries

Hacker News - Fri, 02/27/2026 - 9:02pm

AI travel planners hallucinate constantly: OpenAI's best model hits roughly 10% success on complex travel planning benchmarks (source: TravelPlanner study). The core problem is that recommendations are generated from training data with zero real-world verification.

I'm experimenting with a different architecture: two agents with opposing travel philosophies (deep/slow vs. highlights/efficient) debate each recommendation, then every suggestion gets validated against the Google Places API for real opening hours, actual walking distances, and current ratings. Anything unverified gets flagged.

Early stage, and looking for feedback on the approach. Has anyone tried grounding LLM outputs against structured APIs like this? What's broken about it?
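The verification step described above can be sketched as follows. This is an illustrative stand-in, not the author's code: the `PLACES` table mocks Google Places API responses, and all names, fields, and the `verify` function are hypothetical.

```python
# Mocked structured place data standing in for Google Places API responses.
PLACES = {
    "Sagrada Familia": {"open": (9, 18), "rating": 4.7},
    "Park Guell": {"open": (8, 20), "rating": 4.6},
}

def verify(suggestion: dict) -> dict:
    """Ground an LLM-suggested itinerary stop against structured data.

    Returns the suggestion annotated as 'verified' or 'flagged'."""
    place = PLACES.get(suggestion["name"])
    if place is None:
        # Place not found in the structured source: likely hallucinated.
        return {**suggestion, "status": "flagged", "reason": "not found"}
    open_h, close_h = place["open"]
    if not (open_h <= suggestion["visit_hour"] < close_h):
        # Real place, but the plan visits it outside opening hours.
        return {**suggestion, "status": "flagged", "reason": "closed at that time"}
    return {**suggestion, "status": "verified", "rating": place["rating"]}

itinerary = [
    {"name": "Sagrada Familia", "visit_hour": 10},           # checks out
    {"name": "Museum of Imaginary Art", "visit_hour": 14},   # hallucinated
    {"name": "Park Guell", "visit_hour": 21},                # after closing
]
checked = [verify(s) for s in itinerary]
```

Only stops the structured source can confirm survive unflagged; everything else is surfaced to the user rather than silently presented as fact.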

Comments URL: https://news.ycombinator.com/item?id=47189119

Points: 1

# Comments: 0

Categories: Hacker News
