Feed aggregator
Show HN: AI Seedance 2 – Solving the "jump-cut" problem in AI video
I’ve been obsessed with a specific problem in AI video: the transition mess. Most models today (Sora, Kling, etc.) are great at generating a single pretty shot, but as soon as the camera moves or the scene changes, the physics fall apart and the visuals start warping into nonsense.
After testing the new Seedance 2.0 models from ByteDance, I noticed they handle scene changes differently. It feels like the model actually understands "editorial logic"—likely because ByteDance (the team behind CapCut/TikTok) trained it on professional editing patterns, not just raw pixels.
I built aiseedance2.app to experiment with this "narrative-first" workflow.
The Current Setup: The Seedance 2.0 API is still in a closed rollout, so I’ve launched this playground using Seedance 1.5 Pro as the engine for now. Even with 1.5 Pro, the temporal consistency and "shot flow" are significantly better than what I've seen in other models. I’ll be migrating to the 2.0 multi-modal reference system the second it's fully public.
Why this matters: If we want AI video to be used for actual filmmaking, the model needs to understand how to "cut" like a human editor. Seedance seems to be the first one to get this right.
I’d love to get your thoughts on the "flow" of these generations.
Comments URL: https://news.ycombinator.com/item?id=46946280
Points: 1
# Comments: 0
Twenty Five Percent Without Thinking
Article URL: https://fakepixels.substack.com/p/twenty-five-percent-without-thinking
Comments URL: https://news.ycombinator.com/item?id=46946271
Points: 1
# Comments: 0
New compliance page – open-source
Article URL: https://compliance.getprobo.com/overview
Comments URL: https://news.ycombinator.com/item?id=46946266
Points: 1
# Comments: 0
Show HN: Verify Your Kubernetes
1. *Bridge between Static & Dynamic*: Validates charts/manifests and live cluster state in one tool.
2. *Drift Detection*: Native capability to detect configuration drift (`--compare-to`).
3. *Auto-Fix*: Generates actionable fix plans (`--fix`) rather than just reporting errors.
4. *Exposure Analysis*: Built-in network exposure assessment.
Comments URL: https://news.ycombinator.com/item?id=46946265
Points: 1
# Comments: 0
Show HN: Nemp Memory for OpenClaw, shared memory for multi-agent workflows
Article URL: https://github.com/SukinShetty/Nemp-memory
Comments URL: https://news.ycombinator.com/item?id=46946254
Points: 1
# Comments: 1
Show HN: Rowbot – Chat with a SQLite Database
I built rowbot because I was getting frustrated reading 500-word responses from LLMs when what I wanted was not an essay but a conceptual model.
I wondered whether making the LLM answer in a row-based structure might increase the communication bandwidth.
Admittedly, rowbot is probably not a grand leap forward in how we process information (unless you're trying to conceptualise a database schema). But I think it's an interesting direction to explore – forcing the LLM to return structured data, then rendering it with some kind of infographic engine.
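The general pattern is easy to sketch. In the example below, a canned response stands in for the real LLM call, and the row schema ({"concept", "detail"}) is hypothetical, not rowbot's actual format:

```python
import json

# Hypothetical prompt and a canned model response standing in for a real
# LLM call; rowbot's actual prompt and schema are not public.
SYSTEM = 'Answer as JSON: a list of rows, each {"concept": ..., "detail": ...}.'
response = json.dumps([
    {"concept": "users", "detail": "one row per account"},
    {"concept": "orders", "detail": "FK to users.id"},
])

def render_rows(payload: str) -> str:
    """Render the structured answer as an aligned two-column table."""
    rows = json.loads(payload)
    width = max(len(r["concept"]) for r in rows)
    return "\n".join(f"{r['concept'].ljust(width)} | {r['detail']}" for r in rows)

print(render_rows(response))
```

Forcing rows like this trades away nuance, but each row becomes a scannable unit that a downstream renderer (table, diagram, infographic) can lay out however it likes.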
Would love to hear any suggestions on where to take this!
Comments URL: https://news.ycombinator.com/item?id=46946250
Points: 1
# Comments: 0
Hong Kong pro-democracy tycoon Jimmy Lai gets 20 years' jail
Article URL: https://www.bbc.com/news/articles/c8d5pl34vv0o
Comments URL: https://news.ycombinator.com/item?id=46946248
Points: 2
# Comments: 0
Data teams should become context teams
Article URL: https://thenewaiorder.substack.com/p/data-teams-should-become-context
Comments URL: https://news.ycombinator.com/item?id=46946246
Points: 1
# Comments: 0
Show HN: Lawgmented – AI contract review and redrafts in Word
Hi HN fam,
If it’s helpful to anyone here, I’m building Lawgmented - a desktop app that helps you review and redraft contracts in Microsoft Word.
Most early-stage founders (and other non-legal folk) deal with contracts in two ways - they grab a template (or worse, generate one with AI) and hope it’s sensible, or they receive the counterparty’s template and try to figure out what it really says. Lawgmented is a guardrail for both scenarios: it helps non-lawyers spot landmines and sanity-check a contract before negotiating and signing.
Lawgmented is a standalone Windows desktop app that runs alongside Microsoft Word to help review contracts in just a few clicks. It highlights key clauses in the document, explains risks in plain language, and suggests role-tailored redrafts you can apply in Word. It’s only $9/mo with a 7-day free trial.
Would love your feedback! Thanks
Comments URL: https://news.ycombinator.com/item?id=46946237
Points: 1
# Comments: 0
Imec opens 2.5B euros chip pilot line as Europe looks to strengthen AI hand
Article URL: https://www.reuters.com/business/imec-opens-25-bln-euros-chip-pilot-line-europe-looks-strengthen-ai-hand-2026-02-09/
Comments URL: https://news.ycombinator.com/item?id=46946236
Points: 1
# Comments: 0
Multi-scale RAG indexing: why different queries need different chunk sizes
Article URL: https://www.ai21.com/blog/query-dependent-chunking/
Comments URL: https://news.ycombinator.com/item?id=46946229
Points: 2
# Comments: 1
PicoClaw: Ultra-Efficient AI Assistant in Go
Article URL: https://github.com/sipeed/picoclaw
Comments URL: https://news.ycombinator.com/item?id=46946227
Points: 1
# Comments: 0
Ask HN: For whom does a CS degree still make sense?
In years past, many kinds of people thrived in the field: the math whiz, the memorizer, the loner, the high-aptitude generalist. Since the boom of the 2010s, it has attracted the ambitious, the glitzy, and the influencer.
Those who made coding their identity are finding it hard. The entrepreneurs are loving it.
Who is the CS degree for going forward? Who should safely stay away from it?
Comments URL: https://news.ycombinator.com/item?id=46946226
Points: 1
# Comments: 0
Testing software in the era of coding agents
Article URL: https://www.garymm.org/blog/2026/02/06/testing-software-coding-agents/
Comments URL: https://news.ycombinator.com/item?id=46946217
Points: 1
# Comments: 0
Show HN: We added AGENTS.md to 120 challenges so AI teaches instead of codes
Hi HN! I'm Matt, founder of Frontend Mentor (https://www.frontendmentor.io). We provide front-end and full-stack coding challenges with professional Figma designs, enabling developers to build real projects and grow their skills.
The problem: AI coding tools are great, but they can work against you when you're learning. Ask Copilot or Cursor to help with a beginner project, and they'll happily write the whole thing for you. You ship the project, but you didn't really learn anything.
What we did: We added AGENTS.md (and CLAUDE.md) files to every challenge's starter code. These files tell AI tools how to help based on the challenge's difficulty level, so the AI becomes a learning partner rather than an answer machine.
The idea is simple: AI guidance should scale with the learner.
- Newbie: AI acts as a patient mentor. Breaks problems into tiny steps, uses analogies, and gives multiple hints before showing an approach. Won't hand you a complete solution.
- Junior: AI becomes a supportive guide. Introduces debugging, encourages DevTools usage, and explains the "why," not just the "what."
- Intermediate: AI acts like an experienced colleague. Presents trade-offs, shows multiple approaches, and lets you make decisions.
- Advanced: AI acts like a senior dev. Challenges your thinking, plays devil's advocate, gives honest feedback.
- Guru: AI acts like a peer. Debates approaches, references specs, brings different viewpoints.
The core principle across all levels: guide thinking, don't replace it.
Since tools like Cursor and Copilot already look for AGENTS.md in project directories, this works out of the box with no setup.
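For concreteness, a Newbie-level AGENTS.md could read something like this (illustrative wording, not Frontend Mentor's actual file):

```markdown
# Agent instructions (Newbie-level challenge)

Act as a patient mentor, not a code generator.

- Break the task into small steps and explain each one with an analogy.
- Give at least two hints before sketching an approach.
- Never paste a complete solution; ask the learner to attempt each step first.
```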
We don't think anyone has fully figured out AI-assisted learning yet, and the landscape is shifting so quickly. This is our first attempt at making AI tools better by default for people who are trying to build foundational coding skills, not just ship projects.
Would love to hear your thoughts, especially from anyone considering how AI tools and skill development can work together.
Comments URL: https://news.ycombinator.com/item?id=46946215
Points: 1
# Comments: 0
YouTube TV Launches a Raft of New Streaming Packages From $55 a Month
Atlas Airborne robot performs a backflip [video]
Article URL: https://www.youtube.com/watch?v=UNorxwlZlFk
Comments URL: https://news.ycombinator.com/item?id=46946194
Points: 1
# Comments: 0
Show HN: Docx to PDF in the Browser Using Pandoc and Typst (WASM)
Hi HN,
I wanted a way to convert .docx to .pdf without uploading my files to a random server or installing a 2GB LaTeX distribution.
I built a simple tool that runs Pandoc and Typst entirely in the browser via WebAssembly.
How it works:
- Pandoc (WASM) parses the .docx file.
- It outputs Typst markup.
- Typst (WASM) compiles that markup into a PDF.
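Outside the browser, the same two steps map onto the native CLIs: Pandoc (3.1+) can write Typst markup directly, and `typst compile` turns it into a PDF. A minimal sketch that just builds the command plan (the file name is hypothetical; presumably the WASM builds run the equivalent):

```python
import pathlib

def conversion_plan(docx: str) -> list[list[str]]:
    """Build the two commands mirroring the in-browser pipeline."""
    src = pathlib.Path(docx)
    typ = src.with_suffix(".typ")  # intermediate Typst markup
    pdf = src.with_suffix(".pdf")
    return [
        ["pandoc", str(src), "-t", "typst", "-o", str(typ)],  # .docx -> Typst
        ["typst", "compile", str(typ), str(pdf)],             # Typst -> PDF
    ]

plan = conversion_plan("report.docx")
```

Running the plan is then a loop of `subprocess.run(cmd, check=True)`, assuming both tools are installed locally.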
Status: It's still a work in progress. It handles basic formatting, tables, and images well enough for daily use, though complex Word layouts might still be a bit wonky.
Why this?
- Privacy: Everything stays in your browser.
- Speed: No server round-trips.
- Lightweight: No need to install Pandoc or Typst locally.
Check it out: https://toolkuai.com/word-to-pdf
Feedback is welcome, especially on how to better map Word styles to Typst.
Comments URL: https://news.ycombinator.com/item?id=46946183
Points: 1
# Comments: 0
Elara Core – Persistent memory and emotional state for AI assistants (MCP, Python)
Article URL: https://github.com/aivelikivodja-bot/elara-core
Comments URL: https://news.ycombinator.com/item?id=46946158
Points: 1
# Comments: 1
AI chat app leak exposes 300 million messages tied to 25 million users
An independent security researcher uncovered a major data breach affecting Chat & Ask AI, one of the most popular AI chat apps on Google Play and Apple App Store, with more than 50 million users.
The researcher claims to have accessed 300 million messages from over 25 million users due to an exposed database. These messages reportedly included, among other things, discussions of illegal activities and requests for suicide assistance.
Behind the scenes, Chat & Ask AI is a “wrapper” app that plugs into various large language models (LLMs) from other companies, including OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. Users can choose which model they want to interact with.
The exposed data included user files containing their entire chat history, the models used, and other settings. But it also revealed data belonging to users of other apps developed by Codeway—the developer of Chat & Ask AI.
The vulnerability behind this data breach is a well-known and documented Firebase misconfiguration. Firebase is a cloud-based backend-as-a-service (BaaS) platform provided by Google that helps developers build, manage, and scale mobile and web applications.
"Firebase misconfiguration" is the term security researchers use for a set of preventable errors in how developers set up Google Firebase services, errors that leave backend data, databases, and storage buckets publicly accessible without authentication.
One of the most common Firebase misconfigurations is leaving Security Rules set to public. This allows anyone with the project URL to read, modify, or delete data without authentication.
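For Firebase Realtime Database, the public misconfiguration described above boils down to a ruleset like this:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

The usual fix is to require authentication, e.g. `".read": "auth != null"`, or tighter per-path rules; Firebase's security-rules documentation covers the full syntax.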
This prompted the researcher, who goes by Harry, to create a tool that automatically scans apps on Google Play and the Apple App Store for this vulnerability, with astonishing results: reportedly, 103 of the 200 iOS apps scanned had the issue, collectively exposing tens of millions of stored files.
To draw attention to the issue, Harry set up a website where users can see the apps affected by the issue. Codeway’s apps are no longer listed there, as Harry removes entries once developers confirm they have fixed the problem. Codeway reportedly resolved the issue across all of its apps within hours of responsible disclosure.
How to stay safe
Besides checking if any apps you use appear in Harry’s Firehound registry, there are a few ways to better protect your privacy when using AI chatbots.
- Use private chatbots that don’t use your data to train the model.
- Don’t rely on chatbots for important life decisions. They have no experience or empathy.
- Don’t use your real identity when discussing sensitive subjects.
- Keep shared information impersonal. Don’t use real names and don’t upload personal documents.
- Don’t share your conversations unless you absolutely have to. In some cases, it makes them searchable.
- If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you’re not logged in to that social media platform. Your conversations could be linked to your social media account, which might contain a lot of personal information.
Always remember that AI is developing too fast for security and privacy to be baked into the technology, and that even the best AIs still hallucinate.
