Feed aggregator

Waymo Begins Fully Autonomous Operations With 6th-Generation Tech

CNET Feed - Thu, 02/12/2026 - 6:01pm
For now, the self-driving car company is only providing trips to employees.
Categories: CNET

Ralph Giles Passed Away (Xiph.org | Rust@Mozilla | Ghostscript)

Hacker News - Thu, 02/12/2026 - 5:58pm

It's with much sadness that we announce the passing of our friend and colleague Ralph Giles, or rillian as he was known on IRC.

Ralph began contributing to Xiph.org in 2000 and became a core Ghostscript developer in 2001[1]. Ralph made many contributions to the royalty-free media ecosystem, whether as project lead on Theora, as release manager for multiple Xiph libraries, or as maintainer of Xiph infrastructure that has been used across the industry by codec engineers and researchers[2]. He was also the first to ship Rust code in Firefox[3] during his time at Mozilla, which was a major milestone for both the language and Firefox itself.

Ralph was a great contributor and a kind colleague, and he will be greatly missed.

Official Announcement: https://www.linkedin.com/feed/update/urn:li:activity:7427730451626262530

[1]: http://www.wizards-of-os.org/archiv/sprecher/g_h/ralph_giles.html

[2]: https://media.xiph.org/

[3]: https://medium.com/mozilla-tech/deploying-rust-in-a-large-codebase-7e50328074e8

Comments URL: https://news.ycombinator.com/item?id=46996490

Points: 2

# Comments: 0

Categories: Hacker News

Resurrecting _why's Dream

Hacker News - Thu, 02/12/2026 - 5:58pm
Categories: Hacker News

The Holy Order of Clean Code – A Claude Skill

Hacker News - Thu, 02/12/2026 - 5:55pm

Article URL: https://church.btas.dev

Comments URL: https://news.ycombinator.com/item?id=46996451

Points: 1

# Comments: 0

Categories: Hacker News

YouTube Music Adds AI-Generated Playlists

CNET Feed - Thu, 02/12/2026 - 5:51pm
The new feature is available now, as long as you're a Premium YouTube Music subscriber.
Categories: CNET

Ask HN: Best practices for AI agent safety and privacy

Hacker News - Thu, 02/12/2026 - 5:47pm

tl;dr looking for any links, resources or tips around best practices for data security, privacy, and agent guardrails when using Claude (or others).

My journey over the past few years has gone from borderline AI skeptic about its use in coding to trying Claude Code a month ago and being unlikely ever to go back to coding big changes without it. Most queries I would have used search for in the past now go through AI models as a first step.

However, one thing that concerns me is whether I am using best practices around agent safety and code protection. I have turned off the “Help improve Claude” toggle in the web panel for Claude settings. Do we believe that’s enough to really stop them (the companies who took any data they could find to make this tool) from using or training on our code? Are all the companies and people using this product just entrusting their proprietary code bases to these AI companies? Is it enough for me to be on the $20/mo Claude Pro plan or do I have to pony up for a Teams plan to protect my data? Which companies do we trust more in this space?

In terms of agent guardrails, I have set up Claude CLI on a cloud VPS Ubuntu host, as its own user that has access to read and modify the code, but no commit ability, git credentials, or access to data on my personal machines. The repos are in a directory with group write access, and my personal user account does all commits and pushes, to ensure that Claude has no tangible way to destroy any data that isn't backed up offsite in git. I don't provide any of the environment variable credentials necessary to actually run the software, or access to any real data, so testing and QA are still things I do manually, after pushing the changes to another machine.
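The permission layout the poster describes can be sketched as a few shell commands. This is a minimal, hypothetical sketch, not the poster's actual commands: the directory path and group name are made up, and in a real setup the user/group creation steps would be run as root with a dedicated agent account.

```shell
# Sketch of a group-writable repo directory that an unprivileged agent
# user could edit, while commits/pushes stay with the personal account.
# (Path is illustrative; real setup would use adduser/groupadd as root.)
REPO_DIR="$(mktemp -d)/repos"
mkdir -p "$REPO_DIR"

# Group-writable plus setgid, so files created inside inherit the group:
chmod 2775 "$REPO_DIR"

# The agent user would get no ~/.git-credentials and no SSH keys,
# so it can modify working trees but cannot push or rewrite remotes.
stat -c '%a' "$REPO_DIR"   # -> 2775
```

The key idea is that write access to the working tree and the ability to push to the remote are separated across two Unix users, so the remote git history serves as an offsite backup the agent cannot touch.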

I use it iteratively on individual features or bug fixes. I still have to go back and forth with it (or drop into my editor) a decent amount when it makes mistakes or to encourage better architectural decisions, but it is overall quite fun and exciting for me to use (at this early stage of learning and exploration) and seems to speed up development for my use case in a major way (solo dev SaaS site with web, iOS, and Android native apps + many little, half-finished side projects and ideas).

Does HN have any links or resources that round up the state-of-the-art best practices around AI use for those who are cautious and don't want to give it the keys to the kingdom, but are trying to take advantage of this new coding frontier in a safe way? What commands or settings would typically be considered safe to always allow, so it doesn't need to ask for permission as often? What security or privacy toggles do I want to consider in Claude (or other agents)? Is it good to subscribe to a couple of services and have one review the other's code as a first step? I hit usage limits on the $20 Claude Pro; should I go to Max or spread horizontally across different AI models? Thanks for any tips!

Comments URL: https://news.ycombinator.com/item?id=46996368

Points: 1

# Comments: 0

Categories: Hacker News

Ask HN: If your OpenClaw could do 1 thing it currently can't, what would it be?

Hacker News - Thu, 02/12/2026 - 5:46pm

Hey guys

What’s one specific thing you wish your OpenClaw agent could do today, but can’t?

Not vague stuff like “pay for things.” I mean: which concrete use case?

For example:

- “Automatically renew my AWS credits if usage drops below $100 and pay with a virtual card.”

- “Find the cheapest nonstop flight to NYC next month, hold it, and ask me before paying.”

Comments URL: https://news.ycombinator.com/item?id=46996357

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: EncroGram – Messaging When You Assume Everything Will Be Looked At

Hacker News - Thu, 02/12/2026 - 5:07pm

Hi HN,

I’m not using EncroGram because I like clean UI or new apps. I’m using it because I assume that sooner or later, anything I touch might be examined — devices, servers, logs, timelines.

Most messaging apps focus on encrypting content. That’s table stakes now. What matters in practice is everything around the content: identifiers, metadata, backups, correlations, and the quiet assumptions built into the system.

EncroGram caught my attention because it seems to start from a different premise: reduce what exists in the first place. No accounts tied to phone numbers or emails. No analytics. No long-term server-side message storage by design. Fewer moving parts, fewer promises, fewer things to trust.

I’m not under the illusion that this makes communication safe or anonymous. Devices get seized. Networks get monitored. People make mistakes. Nothing here changes that. What it changes, at least in theory, is how much residual data is created by default, and how much of that data lives outside the endpoint.

From my point of view, that’s the real question: not “is it secure?”, but “what’s left behind when things go wrong?”

This doesn’t feel like a mainstream product, and it probably shouldn’t be. It feels more like an experiment in being explicit about trade-offs and threat models instead of hiding them behind UX polish and legal language.

I’m posting this because I’m interested in technical criticism and discussion. Where does a low-retention or stateless approach actually help, and where does it fail? What assumptions does it still rely on that users like me might underestimate?

Not here to promote it — just interested in informed perspectives on the design choices.

Comments URL: https://news.ycombinator.com/item?id=46995951

Points: 1

# Comments: 0

Categories: Hacker News
