Feed aggregator

Show HN: SalaryScript – The FAANG Negotiation Playbook

Hacker News - Wed, 02/18/2026 - 7:56pm

Hey HN,

I’m the founder of SalaryScript, and I wanted to share a project I’ve been working on for over a year: a comprehensive negotiation playbook tailored specifically for big tech roles.

I’ve navigated the FAANG hiring gauntlet multiple times myself, with stints at several of these companies, so I know firsthand how opaque and high-stakes these negotiations can be. I wrote SalaryScript based on that experience, on insights from a dozen FAANG-alumni colleagues, and on lessons from recruiters I’ve worked with over the years. We’ve pooled perspectives from engineers, PMs, and execs who have collectively negotiated millions in total compensation.

Think of it as a “golden rule” book for tech negotiations. It covers everything from decoding offer letters and leveraging competing offers to scripting responses for common recruiter tactics, handling equity cliffs, and navigating relocation packages. No fluff — just battle-tested scripts, templates, and strategies that have helped people increase their total comp by 30–50% on average.

One contributor, a former Meta engineering manager, shared how he turned a $300K offer into $550K by reframing the RSU vesting conversation. Another, an ex-Apple product lead, broke down the psychology of why “total comp” framing often beats focusing on base salary alone.

Since launching last year at an introductory price of $19, we’ve crossed 1,000 sales. Demand has been strong, and as word has spread, we’ve gradually raised the price.

If you’re prepping for interviews or staring at an offer right now, this might be the edge you need.

If you’re interested, you can check it out at salaryscript.com.

Would love your thoughts, war stories, or questions. What’s the biggest negotiation win (or fail) you’ve had in tech?

Cheers

Comments URL: https://news.ycombinator.com/item?id=47068559

Points: 1

# Comments: 0

Categories: Hacker News

GPLv2 and Installation Requirements

Hacker News - Wed, 02/18/2026 - 7:55pm

Article URL: https://lwn.net/Articles/1052842/

Comments URL: https://news.ycombinator.com/item?id=47068548

Points: 1

# Comments: 0

Categories: Hacker News

Brain file format for AI agents – one file, any LLM, sub-millisecond queries

Hacker News - Wed, 02/18/2026 - 7:54pm

Every AI agent has amnesia. Claude doesn't remember your last conversation. GPT doesn't know what you decided last week. The current solutions — vector databases, markdown files, key-value stores — lose structure, can't track reasoning chains, and lock you to one provider.

I built AgenticMemory: a binary graph format where every cognitive event (facts, decisions, inferences, corrections) is a node with typed edges (caused_by, supports, supersedes). One .amem file holds your agent's entire knowledge graph. Works with any LLM.

Key numbers:

• 276 ns to add a node
• 3.4 ms to traverse 5 levels deep in a 100K-node graph
• 9 ms similarity search across 100K nodes
• ~24 MB for a year of daily use
• a lifetime of memory fits in under 1 GB

Built in Rust. Zero dependencies. Python SDK: pip install agentic-brain. Rust CLI: cargo install agentic-memory.
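For intuition, here is a minimal Python sketch of the typed-edge idea: cognitive events as nodes, relations as typed edges, bounded-depth traversal. The names (`Node`, `Memory`, the edge types) are illustrative only; the real .amem format is a binary Rust implementation, not this in-memory toy.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: int
    kind: str   # "fact", "decision", "inference", "correction"
    text: str
    edges: list = field(default_factory=list)  # (edge_type, target_id) pairs

class Memory:
    """Toy knowledge graph with typed edges between cognitive events."""

    def __init__(self):
        self.nodes = {}
        self._next_id = 0

    def add(self, kind, text):
        node = Node(self._next_id, kind, text)
        self.nodes[node.id] = node
        self._next_id += 1
        return node.id

    def link(self, src, edge_type, dst):
        # e.g. "caused_by", "supports", "supersedes"
        self.nodes[src].edges.append((edge_type, dst))

    def traverse(self, start, depth):
        """Collect node ids reachable from `start` within `depth` hops."""
        seen, frontier = {start}, [start]
        for _ in range(depth):
            frontier = [dst for n in frontier
                        for _, dst in self.nodes[n].edges if dst not in seen]
            seen.update(frontier)
        return seen

mem = Memory()
fact = mem.add("fact", "User prefers Rust for systems work")
decision = mem.add("decision", "Scaffold the CLI in Rust")
mem.link(decision, "caused_by", fact)
```

A reasoning chain then falls out of the edges: walking `caused_by` links from a decision recovers the facts that produced it.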

https://github.com/agentic-revolution/agentic-memory

Comments URL: https://news.ycombinator.com/item?id=47068547

Points: 1

# Comments: 0

Categories: Hacker News

Ask HN: Do you build your own X?

Hacker News - Wed, 02/18/2026 - 7:50pm

I have developed software for many years now, and this hit me today. Let me use web development as an example:

When starting out, developing a todo-list app is so interesting. After that come an expense tracker, a digital clock, a quote generator, an invoice generator... the craft and understanding become clearer over time.

Fast forward to a time when building a React+Vite+Tailwind+Redux app is easy, then going full-stack with Next.js+Express.js+MySQL/MongoDB and beyond.

In my experience, there have been many times when I just look at an app, click around once or twice, and figure out how it works. Dashing to the devtools Source and Network tabs to verify my instincts, I'm right about 90% of the time. Another thing is suddenly realizing that I could combine X with Y to help users do A, B, and C.

QUESTION: Do you build or try to develop those app ideas? Throttle or ignore them? Validate the idea but never publish the result? Build for fun? Do you move on since you have already figured it out? What do you do in this case? Do you build your own X?

Personally, I have failed to resist trying. Seeing it work out gives me joy.

Comments URL: https://news.ycombinator.com/item?id=47068525

Points: 2

# Comments: 0

Categories: Hacker News

The History of Sushi

Hacker News - Wed, 02/18/2026 - 7:47pm
Categories: Hacker News

Apple May Be Adding Support for Conversational AI in CarPlay

CNET Feed - Wed, 02/18/2026 - 7:35pm
Hidden in the latest developer guide for iOS 26.4 is support for "voice-based conversational apps" in CarPlay.
Categories: CNET

Smashing Security podcast #455: Face off: Meta’s Glasses and America’s internet kill switch

Graham Cluley Security Blog - Wed, 02/18/2026 - 7:30pm
Could America turn off Europe's internet? That’s one of the questions that Graham and special guest James Ball explore as they discuss tech sovereignty. Could Gmail, cloud services, and critical infrastructure really become geopolitical leverage? And is anyone actually building a Plan B? Plus, we explore whether Meta is quietly plotting to turn its smart glasses into face-recognising surveillance specs. With reports of internal memos suggesting the company plans to launch controversial features while everyone’s distracted by political chaos, we ask: is this innovation the public really wants... or something far creepier? All of this, and much more, in episode 455 of the award-winning "Smashing Security" podcast with cybersecurity veteran Graham Cluley, joined this week by journalist and author James Ball.
Categories: Graham Cluley

Show HN: Local "incident bundle" for AI/agent failures (offline rep and CI JSON)

Hacker News - Wed, 02/18/2026 - 7:01pm

Hi, I built a small local-first CLI toolkit for debugging AI/agent incidents.

Problem I kept hitting: building agents is fast, but when something breaks, handing off “one failing run” is messy (screenshots, scattered logs, partial configs, access to a tracing UI, accidental secrets/PII in payloads).

What this does: run your agent on a case suite and generate a portable evidence pack you can open offline and attach to a GitHub issue/ticket:

report.html (offline viewer)

compare-report.json (machine-readable summary for CI gating: none | require_approval | block)

evidence files referenced via a manifest (so you can verify completeness/integrity)
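A sketch of how the compare-report.json decision could drive CI. The `"gate"` field name and the exit-code mapping are my assumptions for illustration, not the tool's documented schema:

```python
import json

# Map the bundle's gate decision to a CI exit code. The values mirror
# the three levels mentioned above: none | require_approval | block.
# (Field name "gate" and the specific codes are assumptions.)
GATE_EXIT = {"none": 0, "require_approval": 78, "block": 1}

def ci_gate(report_path):
    """Return the exit code a CI step would use for this report."""
    with open(report_path) as f:
        report = json.load(f)
    gate = report.get("gate", "none")
    # Unknown gate values fail closed rather than silently passing.
    return GATE_EXIT.get(gate, 1)
```

A pipeline step would call `sys.exit(ci_gate("compare-report.json"))`, with the nonstandard code 78 reserved for runs that need a human sign-off.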

It’s intentionally self-hosted/local-only: no backend, no accounts, nothing leaves your environment unless you export the pack.

Redaction note: in the “production” pipeline, redaction is applied in the runner before artifacts are written (the agent is not required to support a special header). There’s also a strict mode that scans all manifest-referenced files for residual markers as a safety gate.
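A rough sketch of what that strict-mode safety gate might look like, assuming a manifest of `{path, sha256}` entries and a couple of example secret patterns. Both the manifest schema and the marker regex are my assumptions, not the actual implementation:

```python
import hashlib
import json
import re
from pathlib import Path

# Example residual-secret markers: an AWS-style access key id and a PEM
# private-key header. A real scanner would carry a larger pattern set.
MARKERS = re.compile(rb"(AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----)")

def strict_check(manifest_path):
    """Verify each manifest-referenced file's hash, then scan its bytes
    for residual secret markers. Returns a list of (path, reason)."""
    manifest = json.loads(Path(manifest_path).read_text())
    problems = []
    for entry in manifest["files"]:  # assumed: [{"path": ..., "sha256": ...}]
        data = Path(entry["path"]).read_bytes()
        if hashlib.sha256(data).hexdigest() != entry["sha256"]:
            problems.append((entry["path"], "hash mismatch"))
        if MARKERS.search(data):
            problems.append((entry["path"], "residual secret marker"))
    return problems
```

An empty result means the pack is internally consistent and clean of the known markers; anything else would block the export.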

I’m not trying to replace tracing/observability tools — this is meant to be the “handoff unit” when sharing a link or granting UI access isn’t viable.

Questions for HN:

If you’ve had to share a single failing run with another engineer/vendor, what was the missing piece that caused the most back-and-forth?

What would you consider “minimum viable contents” vs a “bundle monster”?

Comments URL: https://news.ycombinator.com/item?id=47068139

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Sentinel – Cryptographic proof for AI decisions (zkML, on-chain)

Hacker News - Wed, 02/18/2026 - 6:58pm

Hey HN,

We kept running into the same problem: when an AI system makes a consequential decision, there's no neutral way to verify what it actually saw and did. Logs are editable. Screenshots lie. "Trust us" doesn't hold up in regulated industries or litigation.

So we built SENTINEL. Three products, one protocol:

*SENTINEL Score* -- unjailbreakable safety layer. Text in, float out (0.0-1.0). Nothing else crosses the wire. No tokens, no text, no PHI. Can't be jailbroken because there's nothing to jailbreak. HIPAA environments, military, credit bureaus -- they need a number, not a chatbot.
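To make the "text in, float out" contract concrete, here is a hypothetical client sketch. The endpoint route, payload shape, and response field name are guesses for illustration, not SENTINEL's documented API; check sentinel.biotwin.io for the real interface:

```python
import json
from urllib import request

def parse_score(body: bytes) -> float:
    """Validate the single float the service is described as returning."""
    score = float(json.loads(body)["score"])  # assumed field name
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"score out of range: {score}")
    return score

def safety_score(text: str, api_key: str,
                 base: str = "https://sentinel.biotwin.io") -> float:
    """Send text, get back only a number: nothing else crosses the wire."""
    req = request.Request(
        f"{base}/v1/score",  # assumed route
        data=json.dumps({"text": text}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return parse_score(resp.read())
```

The range check is the point of the design: a caller in a HIPAA or credit-bureau environment can treat anything that is not a float in [0.0, 1.0] as a hard failure.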

*SENTINEL Proof* -- neutral ground for AI disputes. Every evaluation ZK-proofed, timestamped, committed on Base mainnet. Neither vendor nor client controls the record. When it goes to court, the transaction hash is the evidence. AI companies could build this themselves -- they won't. "You're storing our conversations?" is a PR disaster. Neutral third party solves that.

*AEGIS* -- AI security that closes the loop. 72B parameter model. Finds vulnerabilities, writes the patch. Not a report. Scan, identify, patch, verify.

8 contracts live on Base mainnet. Free API key at sentinel.biotwin.io -- no credit card, no sales call.

We use SENTINEL Proof inside Bio-Twin (AI drug safety pre-screening) to make AI decisions defensible for pharma labs and attorneys. The Disney $2.75M CA AG settlement last week is a concrete example of why this matters.

Happy to talk on-chain architecture, zkML proof generation at scale, or EU AI Act compliance.

Comments URL: https://news.ycombinator.com/item?id=47068103

Points: 1

# Comments: 0

Categories: Hacker News
