Surveillance Self-Defense: 2024 in Review
This year, we celebrated the 15th anniversary of our Surveillance Self-Defense (SSD) guide. How’d we celebrate? We kept at it—continuing to work on, refine, and update one of the longest-running security and privacy guides on the internet.
Technology changes quickly enough as it is, but so does the language we use to describe that technology. In order for SSD to thrive, it needs careful attention throughout the year. So, we like to think of SSD as a garden, always in need of a little watering, maybe some trimming, and the occasional mowing down of dead technologies.
Brushing Up on the Basics
A large chunk of SSD exists to explain concepts around digital security in the hopes that you can take that knowledge to make your own decisions about your specific needs. As we often say, security is a mindset, not a purchase. But in order to foster that mindset, you need some basic knowledge. This year, we set out to refine some of this guidance in the hopes of making it easier to read and useful for a variety of skill levels. The guides we updated included:
- Choosing Your Tools
- Communicating with Others
- Keeping Your Data Safe
- Seven Steps to Digital Security
- Why Metadata Matters
- What Is Fingerprinting?
- How do I Protect Myself Against Malware?
If you’re looking for something a bit longer, then some of our more complicated guides are practically novels. This year, we updated a few of these.
We went through our Privacy Breakdown of Mobile Phones and updated it with more recent examples when applicable, and included additional tips at the end of some sections for actionable steps you can take. Phones continue to be one of the most privacy-invasive devices we own, and getting a handle on what they’re capable of is the first step to figuring out what risks you may face.
Our Attending a Protest guide is something we revisit every year (sometimes a couple times a year) to ensure it’s as accurate as possible. This year was no different, and while there were no sweeping changes, we did update the included PDF guide and added screenshots where applicable.
We also slightly reworked our How to: Understand and Circumvent Network Censorship guide to frame it more as instructional guidance, and included new features and tools for getting around censorship, like using a proxy in messaging tools.
New Guides
We added two new guides to SSD this year. First up was How to: Detect Bluetooth Trackers, our guide to locating unwanted Bluetooth trackers—like Apple AirTags or Tile—that someone may use to track your location. Both Android and iOS have added features for detecting these sorts of trackers, but the wide array of products on the market means detection doesn’t always work as expected.
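For readers who want to see what these trackers look like over the air, here is a minimal sketch of a scan, assuming the third-party bleak Python library and the publicly documented pattern that Find My broadcasts carry Apple’s 0x004C company identifier with a 0x12 payload type. Real trackers rotate their identifiers and vary their payloads, which is part of why built-in detection doesn’t always behave.
```python
# A minimal sketch of scanning for possible Find My-style trackers.
# Assumes the third-party "bleak" library (pip install bleak); the
# company ID and payload type below reflect publicly documented Find My
# broadcasts and are illustrative, not an official detection method.
import asyncio
from bleak import BleakScanner

APPLE_COMPANY_ID = 0x004C    # Bluetooth SIG company identifier for Apple
FIND_MY_PAYLOAD_TYPE = 0x12  # first byte of "offline finding" payloads

def on_advertisement(device, adv):
    payload = adv.manufacturer_data.get(APPLE_COMPANY_ID)
    if payload and payload[0] == FIND_MY_PAYLOAD_TYPE:
        # RSSI gives a rough sense of proximity: higher means closer
        print(f"Possible Find My tracker at {device.address} (RSSI {adv.rssi})")

async def main():
    scanner = BleakScanner(on_advertisement)
    await scanner.start()
    await asyncio.sleep(30.0)  # listen for 30 seconds
    await scanner.stop()

asyncio.run(main())
```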
We also put together a guide for the iPhone’s Lockdown Mode. While not a feature that everyone needs to consider, it has proven helpful in some cases, and knowing what those circumstances are is an important step in deciding if it’s a feature you need to enable.
But How do I?
As the name suggests, our Tool Guides are all about learning how to best protect what you do on your devices. This might be setting up two-factor authentication, turning on encryption on your laptop, or setting up something like Apple’s Advanced Data Protection. These guides tend to need a yearly look to ensure they’re up to date. For example, Signal saw the launch of usernames, so we went in and made sure that was added to the guide. Here’s what we updated this year, with a peek under the hood of those two-factor codes after the list:
- How to: Avoid Phishing Attacks
- How to: Enable Two-factor Authentication
- How to: Encrypt Your Computer
- How to: Encrypt Your iPhone
- How to: Use Signal
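As promised, here is that peek under the hood of app-based two-factor authentication. The sketch below is a bare-bones version of the standard TOTP algorithm (RFC 6238) that authenticator apps implement; the base32 secret is a made-up demo value, not anything from a real account.
```python
# A bare-bones TOTP (RFC 6238) sketch: the six-digit code is an HMAC of
# the current 30-second time window, keyed with the shared secret you
# scan as a QR code during setup. Demo secret only; not a real account.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)  # time-window index
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints a fresh code every 30 seconds
```
Because your phone and the service derive the same code from the same shared secret and clock, the codes keep working even when your phone is offline.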
Surveillance Self-Defense isn’t just a website, it’s also a general approach to privacy and security. To that end, we often use our blog to tackle more specific questions or respond to news.
This year, we talked about the risks you might face using your state’s digital driver’s license, and whether or not the promise of future convenience is worth the risks of today.
We dove into an attack on VPNs called TunnelVision, which showed how someone on a local network could intercept some VPN traffic. We reiterated our advice that VPNs—at least from providers who've worked to mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.
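For the curious, here is a minimal sketch (not a working exploit) of the mechanism the TunnelVision researchers described: a rogue DHCP server on the local network hands out RFC 3442 classless static routes (DHCP option 121) that are more specific than the VPN’s catch-all default route, so matching traffic leaves the tunnel. The gateway address below is a hypothetical example.
```python
# A sketch of the DHCP option 121 payload at the heart of TunnelVision
# (CVE-2024-3661). Two /1 routes cover all of IPv4 and, being more
# specific than the VPN's 0.0.0.0/0 route, win the routing decision,
# steering traffic to an attacker-controlled gateway instead.
import math
from ipaddress import IPv4Address, IPv4Network

def encode_option_121(routes):
    """Encode (destination_cidr, next_hop) pairs per RFC 3442."""
    out = bytearray()
    for cidr, next_hop in routes:
        net = IPv4Network(cidr)
        out.append(net.prefixlen)  # descriptor: prefix length
        # only the significant octets of the destination are sent
        out += net.network_address.packed[: math.ceil(net.prefixlen / 8)]
        out += IPv4Address(next_hop).packed  # router to use for this route
    return bytes(out)

attacker_gateway = "192.168.1.254"  # hypothetical LAN address
payload = encode_option_121([
    ("0.0.0.0/1", attacker_gateway),
    ("128.0.0.0/1", attacker_gateway),
])
print(payload.hex())  # 0100c0a801fe0180c0a801fe
```
Reported mitigations include running the VPN inside a Linux network namespace or ignoring option 121 while the VPN is up, which is why provider-side fixes matter.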
Location data privacy was still a major issue this year, with horrific abuses of this data popping up in the news constantly. We showed how and why you should disable location sharing in apps that don’t need access to function.
As mentioned above, our SSD guide on protesting is a perennial always in need of pruning, but sometimes you need to plant a whole new flower, as was the case when we decided to write up tips for protesters on campuses around the United States.
Every year, we fight for more privacy and security, but until we get it, stronger control over our data and a better understanding of how technology works are our best defense.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
EU Tech Regulation—Good Intentions, Unclear Consequences: 2024 in Review
For a decade, the EU has served as the regulatory frontrunner for online services and new technology. Over the past two EU mandates (terms), the EU Commission has introduced many regulations covering all sectors, with Big Tech at the center of its focus. As the EU seeks to regulate the world’s largest tech companies, the world is taking notice, and debates about the landmark Digital Markets Act (DMA) and Digital Services Act (DSA) have spread far beyond Europe.
The DSA’s focus is the governance of online content. It requires increased transparency in content moderation while holding platforms accountable for their role in disseminating illegal content.
For “very large online platforms” (VLOPs), the DSA imposes a complex challenge: addressing “systemic risks” (those arising from their platforms’ underlying design and rules, as well as from how these services are used by the public). Measures to address these risks often pull in opposite directions. VLOPs must tackle illegal content and address public security concerns while simultaneously upholding fundamental rights, such as freedom of expression, and while also considering impacts on electoral processes and more nebulous issues like “civic discourse.” Striking this balance is no mean feat, and the role of regulators and civil society in guiding and monitoring this process remains unclear.
As you can see, the DSA is trying to walk a fine line between addressing safety concerns and the priorities of the market. It imposes uniform rules on platforms that are meant to ensure fairness for individual users, but without constraining the platforms’ operations so much that they can’t innovate and thrive.
The DMA, on the other hand, concerns itself entirely with the macro level – not with the rights of users, but with the obligations of, and restrictions on, the largest, most dominant platforms.
It targets a group of “gatekeeper” platforms that control other businesses’ access to digital markets. For these gatekeepers, the DMA imposes a set of rules that are supposed to ensure “contestability” (that is, making sure that upstarts can contest gatekeepers’ control and maybe overthrow their power) and “fairness” for digital businesses.
Together, the DSA and DMA promise a safer, fairer, and more open digital ecosystem.
As 2024 comes to a close, important questions remain: How effectively have these laws been enforced? Have they delivered actual benefits to users?
Fairness Regulation: Ambition and High-Stakes Clashes
There’s a lot to like in the DMA’s rules on fairness, privacy and choice...if you’re a technology user. If you’re a tech monopolist, those rules are a nightmare come true.
Predictably, the DMA was inaugurated with a no-holds-barred dirty fight between the biggest US tech giants and European enforcers.
Take commercial surveillance giant Meta: the company’s mission is to relentlessly gather, analyze and abuse your personal information, without your consent or even your knowledge. In 2016, the EU passed its landmark privacy law, called the General Data Protection Regulation. The GDPR was clearly intended to halt Facebook’s romp through the most sensitive personal information of every European.
In response, Facebook simply pretended the GDPR didn’t say what it clearly said, and went on merrily collecting Europeans’ information without their consent. Facebook’s defense for this is that they were contractually obliged to collect this information, because their terms and conditions represented a promise to users to show them surveillance ads, and if they didn’t gather all that information, they’d be breaking that promise.
The DMA strengthens the GDPR by clarifying the blindingly obvious point that a privacy law exists to protect your privacy. That means that Meta’s services – Facebook, Instagram, Threads, and its “metaverse” (snicker) – are no longer allowed to plunder your private information. They must get your consent.
In response, Meta announced that it would create a new paid tier for people who don’t want to be spied on, and thus anyone who continues to use the service without paying for it is “consenting” to be spied on. The DMA explicitly bans these “Pay or OK” arrangements, but then, the GDPR banned Meta’s spying, too. Zuckerberg and his executives are clearly expecting that they can run the same playbook again.
Apple, too, is daring the EU to make good on its threats. Ordered to open up its iOS devices (iPhones, iPads and other mobile devices) to third-party app stores, the company cooked up a Kafkaesque maze of junk fees, punitive contractual clauses, and unworkable conditions and declared itself to be in compliance with the DMA.
For all its intransigence, Apple is getting off extremely light. In an absurd turn of events, Apple’s iMessage system was exempted from the DMA’s interoperability requirements (which would have forced Apple to allow other messaging systems to connect to iMessage and vice-versa). The EU Commission decided that Apple’s iMessage – a dominant platform that the company CEO openly boasts about as a source of lock-in – was not a “gatekeeper platform.”
Platform regulation: A delicate balance
For regulators and the public, the growing power of online platforms has sparked concerns: how can we address harmful content while also protecting platforms from being pushed to over-censor, so that freedom of expression isn’t on the firing line?
EFF has advocated for fundamental principles like “transparency,” “openness,” and “technological self-determination.” In our European work, we always emphasize that new legislation should preserve, not undermine, the protections that have served the internet well. Keep what works, fix what is broken.
In the DSA, the EU got it right, with a focus on platforms’ processes rather than on speech control. The DSA has rules for reporting problematic content, structuring terms of use, and responding to erroneous content removals. That’s the right way to do platform governance!
But that doesn’t mean we’re not worried about the DSA’s new obligations for tackling illegal content and systemic risks, broad goals that could easily lead to enforcement overreach and censorship.
In 2024, our fears were realized, when the DSA’s ambiguity as to how systemic risks should be mitigated created a new, politicized enforcement problem. Then-Commissioner Thierry Breton sent a letter to Twitter, saying that under the DSA, the platform had an obligation to remove content related to far-right xenophobic riots in the UK, as well as content concerning an upcoming meeting between Donald Trump and Elon Musk. This letter sparked widespread concern that the DSA was a tool to allow bureaucrats to decide which political speech could and could not take place online. Breton’s letter sidestepped key safeguards in the DSA: the Commissioner ignored the question of “systemic risks” and instead focused on individual pieces of content, then blurred the DSA’s critical line between “illegal” and “harmful” content. The letter also ignored the territorial limits of the DSA, demanding content takedowns that reached outside the EU.
Make no mistake: online election disinformation and misinformation can have serious real-world consequences, both in the U.S. and globally. This is why EFF supported the EU Commission’s initiative to gather input on measures platforms should take to mitigate risks linked to disinformation and electoral processes. Together with ARTICLE 19, we submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommended that the guidelines prioritize best practices instead of policing speech, and that DSA risk assessment and mitigation compliance evaluations prioritize ensuring respect for fundamental rights.
The typical way many platforms address organized or harmful disinformation is by removing content that violates community guidelines, a measure trusted by millions of EU users. But despite concerns raised by EFF and other civil society groups, a new EU law, the EU Media Freedom Act, imposes a 24-hour content moderation exemption for media, effectively forcing platforms to host media content. While EFF successfully pushed for crucial changes and stronger protections, we remain concerned about the real-world challenges of enforcement.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Celebrating Digital Freedom with EFF Supporters: 2024 in Review
“EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.” It can be a tough job. A lot of our time is spent fighting bad things that are happening in the world or fixing things that have been broken for a long time.
But this work is important, and we've accomplished great things this year! Thanks to your help, we pushed the USPTO to withdraw harmful patent review proposals, fought for the public's right to access police drone footage, and continue to see more and more of the web encrypted thanks to Certbot and Let’s Encrypt.
Of course, the biggest reason EFF is able to fight for privacy and free expression online is support from EFF members. Public support is not only the reason we can operate but is also a great motivator to wake up and advocate for what’s right—especially when we get to hang out with some really cool folks! And with that, I’d like to reminisce.
EFF's Bay Area Festivities
Early in the year we held our annual Spring Members’ Speakeasy. We invited supporters in the Bay Area to join us at Babylon Burning, where all of EFF’s t-shirts, hoodies, and much of our swag are made. There, folks got a fun opportunity to hand-print their own tote bags, and I got to see t-shirts that even I had never seen before. Side note: EFF has a lot of mechas on members’ t-shirts.
The EFF team had a great time with EFF supporters at events throughout the year. Of course, my mind was blown seeing the questions EFF gamemasters (including the Cybertiger) came up with for both Tech Trivia and Cyberlaw Trivia. What was even more impressive was seeing how many answers teams got right at both events. During Cyberlaw Trivia, one team was able to recite 22 digits of pi, winning the tiebreaker question and the coveted first place prize!
Beating the Heat in Las Vegas
Next came one of my favorite summer pastimes: beating the heat in Las Vegas, where we get to see thousands of EFF supporters for the summer security conferences—BSidesLV, Black Hat, and DEF CON. This year, over one thousand people signed up to support the digital freedom movement in just that one week. The support EFF receives during the summer security conferences always amazes me, and it’s a joy to say hi to everyone who stops by to see us. We received an award from DEF CON and even speed-ran a legal case, ensuring a security researcher’s ability to give their talk at the conference.
While the lawyers were handling the legal case at DEF CON, a subgroup of us had a blast participating in the EFF Benefit Poker Tournament. Forty-six supporters and friends played for money, glory, and the future of the web—all while using these new EFF playing cards! In the end, only one winner could beat the celebrity guests, including Cory Doctorow and Deviant (even winning the literal shirt off of Deviant's back).
EFFecting Change
This year we also launched a new livestream series: EFFecting Change. With our initial three events, we covered recent Supreme Court cases and how they affect the internet, keeping yourself safe when seeking reproductive care, and how to protest with privacy in mind. We’ve seen a lot of support for these events and are excited to continue them next year. Oh, and no worries if you missed one—they’re all recorded here!
Congrats to Our 2024 EFF Award Winners
We wanted to end the year in style, of course, with our annual EFF Awards. This year we gave awards to 404 Media, Carolina Botero, and Connecting Humanity—and you can watch the keynote if you missed it. We’re grateful to honor and lift up the important work of these award winners.
And It's All Thanks to You
There was so much more to this year too. We shared campfire tales from digital freedom legends, the Encryptids; poked fun at bogus copyright law with our latest membership t-shirt; and hosted even more events throughout the country.
As 2025 approaches, it’s important to reflect on all the good work that we’ve done together in the past year. Yes, there’s a lot going on in the world, and times may be challenging, but with support from people like you, EFF is ready to keep up the fight—no matter what.
Many thanks to all of the EFF members who joined forces with us this year! If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Fighting For Progress On Patents: 2024 in Review
The rights we have in the offline world–to speak freely, create culture, play games, build new things and do business–must be available to us online, as well. This core belief drives EFF’s work to fight the misuse of the patent system.
Despite significant progress we’ve made over the last decade, patents, and in particular vague software patents, remain a serious threat to online rights. The median patent lawsuit isn't filed by what Americans would recognize as an ‘inventor,’ but by an anonymous limited liability company that provides no products or services, and instead uses patents to threaten others over alleged infringement. In other words, a patent troll. In the tech sector, more than 85% of patent lawsuits are filed by these “non-practicing entities.”
That’s why at EFF, we continue to help individuals and organizations fight patent threats related to everyday activities like using CAPTCHAs and picture menus, tracking packages or vehicles, teaching languages, holding online contests, or playing simple games online.
Here’s where the fight stands as we move into 2025.
Defending the Public’s Right To Challenge Bad Patents
In 2012, recognizing the persistent problem of an overburdened patent office issuing countless dubious patents each year, Congress established a system called “inter partes reviews” (IPRs) to review and challenge patents. While far from perfect, IPRs have led to the cancellation of thousands of patents that should never have been granted in the first place.
It’s no surprise that big patent owners and patent trolls have long sought to dismantle the IPR system. After unsuccessful attempts to persuade federal courts to dismantle IPRs, they shifted tactics in the past 18 months, attempting to convince the U.S. Patent and Trademark Office (USPTO) to undermine the IPR system by changing the rules on who can use it.
EFF opposed these proposed changes, urging our supporters to file public comments. This effort was a resounding success. After reviewing thousands of comments, including nearly 1,000 inspired by EFF’s call to action, the USPTO withdrew its proposal.
Stopping Congress From Re-Opening The Door To The Worst Patents
The patent system, particularly in the realm of software, is broken. For more than 20 years, the U.S. Patent Office has issued patents on basic cultural or business practices, often with little more than the addition of computer jargon or trivial technical elements.
The Supreme Court addressed this issue a decade ago with its landmark decision in a case called Alice v. CLS Bank, ruling that simply adding computer language to these otherwise generic patents isn’t enough to make them valid. However, Alice hasn’t fully protected us from patent trolls. Even with this decision, the cost of challenging a patent can run into hundreds of thousands of dollars, enabling patent trolls to make “nuisance” demands for amounts of $100,000 or less. But Alice has dampened the severity and frequency of patent troll claims and allowed many more businesses to fight back when needed.
So we weren’t surprised when some large patent owners tried again this year to overturn Alice, with the introduction of the Patent Eligibility Restoration Act (PERA), which would bring the worst patents back into the system. PERA would also have overturned the Supreme Court ruling that prevents the patenting of human genes. EFF opposed PERA at every stage, and late this year, its supporters abandoned their efforts to pass it through the 118th Congress. We know they will try again next year–we’ll be ready.
Shining Light On Secrecy In Patent Litigation
Litigation in the U.S. is supposed to be transparent, particularly in patent cases involving technologies that impact millions of internet users daily. Unfortunately, this is not always the case. In Entropic Communications LLC v. Charter Communications, filed in the U.S. District Court for the Eastern District of Texas, overbroad sealing of documents has obscured the case from public view. EFF intervened in the case to protect the public’s right to access federal court records, as the claims made by Entropic could have wide-reaching implications for anyone using cable modems to connect to the internet.
Our work to ensure transparency in patent disputes is ongoing. In 2016, EFF intervened in another overly-sealed patent case in the Eastern District of Texas. In 2022, we did the same in California, securing an important transparency ruling. That same year, we supported a judge’s investigation into patent owners in Delaware, which ultimately resulted in referrals for criminal investigation. The judge’s actions were upheld on appeal this year.
It remains far too easy for patent trolls to extort and exploit individuals and companies simply for creating or using software. In 2025, EFF will continue fighting for a patent system that’s open, fair, and transparent.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
We Stood Up for Access to the Law and Congress Listened: 2024 in Review
Ever since they lost in court, a number of industry giants have pushed a bill that purports to be about increasing access to the law. In fact, it would give them enormous power over the public’s ability to access, share, teach, and comment on the law.
This sounds crazy—no one should be able to own the law. But these industry associations claim there’s a glaring exception to the rule: safety and building codes. The key distinction, they insist, is how these particular laws are developed. Often, when it comes to creating the best practices for an industry, a group of experts comes together to draft model standards. Many of those standards are then “incorporated by reference” into law, making them legal mandates just as surely as the U.S. tax code.
But unlike most U.S. laws, the industry associations that convene the experts claim that they own a copyright in the results, which means they get to control – and charge for – access to them.
The consequences aren’t hard to imagine. If you are a journalist trying to figure out whether a bridge that collapsed violated legal safety standards, you have to get the standards from the industry association, and pay for them. If you are a renter who wants to know whether your apartment complies with the fire code, you face the same barrier. And so on.
Many organizations are working to remedy the situation, making standards available online for free (or, in some cases, for free but with a “premium” version that offers additional services on top). Courts around the country have affirmed their right to do so.
Which brings us to the “Protecting and Enhancing Public Access to Codes Act,” or “Pro Codes.” The Act would require industry associations to make standards incorporated by reference into law available for free to the public. But here’s the kicker: in exchange, Congress would affirm that the associations have a legitimate copyright in those laws.
This is a bad deal for the public. First, access would mean read-only, and subject to licensing limits. We already know what that looks like: currently, the associations that make their codes available to the public online do so through clunky, disorganized, siloed websites, largely inaccessible to the print-disabled, and subject to onerous contractual terms (like a requirement to give up your personal information). The public can’t copy, print, or even link to specific portions of the codes. In other words, you can look at the law (as long as you aren’t print-disabled and you know exactly what to look for), but you can’t share it, compare it, or comment on it. That’s fundamentally against the public interest, as many have said. It gives private parties a windfall to do badly what others, like EFF client Public Resource, already do better and for free.
Second, it’s solving a nonexistent problem. The many volunteers who develop these codes neither need nor want a copyright incentive. The industry associations don’t need it either—they make plenty of profit through trainings, membership fees, and selling standards that haven’t been incorporated into law.
Third, it’s unconstitutional under the First, Fifth, and Fourteenth Amendments, which guarantee the public’s right to read, share, and discuss the law.
We’re pleased that members of Congress have recognized the many problems with this bill. Many of you wrote to your members to raise concerns, and when it was brought to a vote in committee, members registered those concerns. While it passed out of the House Judiciary Committee, the House of Representatives was then asked to vote on the bill “on suspension,” meaning it could avoid debate and pass if two-thirds of the House voted yes. In theory, that procedure is meant to make it easier to pass uncontroversial bills.
Because you wrote in, and because experts sent letters explaining the problems, enough members of Congress recognized that Pro Codes is not uncontroversial, and the bill did not get the two-thirds support it needed. It is not a small deal to allow industry giants to own parts of the law.
This year, we are glad that so many people lent their time and energy to understanding the wolf in sheep’s clothing that the Pro Codes Act really was. And we hope that standards development organizations (SDOs) take note that they cannot pull the wool over everyone’s eyes. Not while we’re keeping watch.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Related Cases: Freeing the Law with Public.Resource.Org
Police Surveillance in San Francisco: 2024 in Review
From a historic ban on police using face recognition, to landmark CCOPS legislation, to the first ban in the United States of police deploying deadly force via robot, for several years San Francisco has been leading the way on necessary reforms over how police use technology.
Unfortunately, 2024 was a far cry from those victories.
While EFF continues to fight for common-sense police reforms in our own backyard, this year saw a change in city politics toward something darker and more unaccountable than we’ve seen in a while.
In the spring of this year, we opposed Proposition E, a ballot measure that allows the San Francisco Police Department (SFPD) to effectively experiment with any piece of surveillance technology for a full year without any approval or oversight. This gutted the city’s 2019 Surveillance Technology Ordinance, which required city departments like the SFPD to obtain approval from the city’s elected governing body before acquiring or using specific surveillance technologies. We understood how dangerous Prop E was to democratic control and transparency, and even went as far as to fly a plane over San Francisco asking voters to reject the measure. Unfortunately, despite a strong opposition campaign, Prop E passed in the March 5, 2024 election.
Soon thereafter, we were reminded of the importance of passing democratic control and transparency laws at all levels of government, not just local. AB 481 is a California law requiring law enforcement agencies to get approval from their local elected governing body before purchasing military equipment, including drones. In its haste to purchase drones after Prop E passed, the SFPD knowingly violated this state law in order to begin purchasing more surveillance equipment. AB 481 has no real enforcement mechanism, which means concerned residents have to wave their arms around and implore the police to follow the law. But we complained loudly enough that the California Attorney General’s office issued a bulletin reminding law enforcement agencies of their obligations under AB 481.
EFF is an organization proudly based in San Francisco. Our fight to make it a place where technology aids, rather than hinders, safety and equity for all people will continue–even if that means calling attention to the SFPD’s casual lawbreaking or helping to defend the privacy laws that made this city a shining example of 21st-century governance.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
The Atlas of Surveillance Expands Its Data on Police Surveillance Technology: 2024 in Review
EFF’s Atlas of Surveillance is one of the most useful resources for those who want to understand the use of police surveillance by local law enforcement agencies across the United States. This year, as the police surveillance industry has shifted, expanded, and doubled down on its efforts to win new cop customers, our team has been busily adding new spyware and equipment to this database. We also saw many great uses of the Atlas from journalists, students, and researchers, as well as a growing number of contributors. The Atlas of Surveillance currently captures more than 11,700 deployments of surveillance tech and remains the most comprehensive database of its kind. To learn more about each of the technologies, please check out our Street-Level Surveillance Hub, an updated and expanded version of which was released at the beginning of 2024.
Removing Amazon Ring
We started off with a big change: the removal of our set of Amazon Ring relationships with local police. In January, Amazon announced that it would no longer facilitate warrantless requests for doorbell camera footage through the company’s Neighbors app — a move EFF and other organizations had long called for. Though police can still get access to Ring camera footage by getting a warrant, or through other legal means, we decided that tracking Ring relationships in the Atlas no longer served its purpose, so we removed that set of information. People should keep in mind that law enforcement can still connect to individual Ring cameras directly through access facilitated by Fusus and other platforms.
Adding third-party platforms
In 2024, we added an important growing category of police technology: the third-party investigative platform (TPIP). This is a designation we created for the growing group of software platforms that pull data from other sources and share it with law enforcement, facilitating analysis of police and other data via artificial intelligence and other tools. Common examples include LexisNexis Accurint and Thomson Reuters Clear.
New Fusus data
404 Media released a report last January on the use of Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. Their investigation revealed that more than 200,000 cameras across the country are part of the Fusus system, and we were able to add dozens of new entries into the Atlas.
New and updated ALPR data
EFF has been investigating the use of automated license plate readers (ALPRs) across California for years, and we’ve filed hundreds of California Public Records Act requests with departments around the state as part of our Data Driven project. This year, we were able to update all of our entries in California related to ALPR data.
In addition, we were able to add more than 300 new law enforcement agencies nationwide using Flock Safety ALPRs, thanks to a data journalism scraping project from the Raleigh News & Observer.
Redoing drone data
This year, we reviewed and cleaned up a lot of the data we had on the police use of drones (also known as unmanned aerial vehicles, or UAVs). A chunk of our data on drones was based on research done by the Center for the Study of the Drone at Bard College, which became inactive in 2020, so we reviewed and updated any entries that depended on that resource.
We also added new drone data from Illinois, Minnesota, and Texas.
We’ve been watching Drone as First Responder programs since their inception in Chula Vista, CA, and this year we saw vendors like Axon, Skydio, and Brinc make a big push for more police departments to adopt these programs. We updated the Atlas to contain cities where we know such programs have been deployed.
Other cool uses of the Atlas
The Atlas of Surveillance is designed for use by journalists, academics, activists, and policymakers, and this was another year where people made great use of the data.
The Atlas of Surveillance was regularly featured in news outlets throughout the country, including in MIT Technology Review’s reporting on drones and the Auburn Reporter’s coverage of ALPR use in Washington. It also became the focus of podcasts and was featured in the book “Resisting Data Colonialism – A Practical Intervention.”
Educators and students around the world cited the Atlas of Surveillance as an important source in their research. One of our favorite projects was from a senior at Northwestern University, who used the data to make a cool visualization of surveillance technologies in use. At a January 2024 conference at the IT University of Copenhagen, Bjarke Friborg of the project Critical Understanding of Predictive Policing (CUPP) featured the Atlas of Surveillance in his presentation, “Engaging Civil Society.” The Atlas was also cited in multiple academic papers, including in the Annual Review of Criminology and in a forthcoming paper from Professor Andrew Guthrie Ferguson at American University Washington College of Law titled “Video Analytics and Fourth Amendment Vision.”
Thanks to our volunteers
The Atlas of Surveillance would not be possible without our partners at the University of Nevada, Reno’s Reynolds School of Journalism, where hundreds of students each semester collect data that we add to the Atlas. This year we also worked with students at California State University Channel Islands and Harvard University.
The Atlas of Surveillance will continue to track the growth of surveillance technologies. We’re looking forward to working with even more people who want to bring transparency and community oversight to police use of technology. If you’re interested in joining us, get in touch.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
The U.S. Supreme Court Continues its Foray into Free Speech and Tech: 2024 in Review
As we said last year, the U.S. Supreme Court has taken an unusually active interest in internet free speech issues over the past couple years.
All five pending cases at the end of last year, covering three issues, were decided this year, with varying degrees of First Amendment guidance for internet users and online platforms. We posted some takeaways from these recent cases.
We additionally filed an amicus brief in a new case before the Supreme Court challenging the Texas age verification law.
Public Officials Censoring Comments on Government Social Media Pages
Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – DECIDED
The Supreme Court considered a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The threshold question in these cases was what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions.
The Supreme Court crafted a two-part fact-intensive test to determine if a government official’s speech on social media counts as “state action” under the First Amendment. The test includes two required elements: 1) the official “possessed actual authority to speak” on the government’s behalf, and 2) the official “purported to exercise that authority when he spoke on social media.” As we explained, the court’s opinion isn’t as generous to internet users as we asked for in our amicus brief, but it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.
Following the Supreme Court’s decision, the Lindke case was remanded back to the Sixth Circuit. We filed an amicus brief in the Sixth Circuit to guide the appellate court in applying the new test. The court then issued an opinion in which it remanded the case back to the district court to allow the plaintiff to conduct additional factual development in light of the Supreme Court’s new state action test. The Sixth Circuit also importantly held in relation to the first element that “a grant of actual authority to speak on the state’s behalf need not mention social media as the method of speaking,” which we had argued in our amicus brief.
Government Mandates for Platforms to Carry Certain Online Speech
Cases: NetChoice v. Paxton and Moody v. NetChoice – DECIDED
The Supreme Court considered whether laws in Florida and Texas violated the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. As we argued in our amicus brief urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs.
In a win for free speech, the Supreme Court held that social media platforms have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited. However, the court declined to strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. The court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged.
Government Coercion in Social Media Content Moderation
Case: Murthy v. Missouri – DECIDED
The Supreme Court considered the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech (often called “jawboning”). But the court had not previously applied this principle to government communications with social media sites about user posts. In our amicus brief, we urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible.
Unfortunately, the Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases on “standing” grounds, because none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. Thus, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion.
However, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo. In that case, the National Rifle Association alleged that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. The Supreme Court endorsed a multi-factored test that many of the lower courts had adopted to answer the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: 1) word choice and tone, 2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), 3) whether the speech was perceived as a threat, and 4) whether the speech refers to adverse consequences.
Some Takeaways From These Three Sets of Cases
The O’Connor-Ratcliff and Lindke cases about social media blocking looked at the government’s role as a social media user. The NetChoice cases about content moderation looked at the government’s role as a regulator of social media platforms. And the Murthy case about jawboning looked at the government’s mixed role as a regulator and user.
Three key takeaways emerged from these three sets of cases (across five total cases):
First, internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves.
Second, the Supreme Court recognized that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see them, they decide to amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. The court moved beyond the idea that content moderation is largely passive and indifferent.
Third, the cases confirm that traditional First Amendment rules apply to social media. Thus, when government controls the comments section of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. And online platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.
Government-Mandated Age Verification
Case: Free Speech Coalition v. Paxton – PENDING
Last but not least, we filed an amicus brief urging the Supreme Court to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age (see our Year in Review post on age verification). Under HB 1181, passed in 2023, any website that Texas decides is composed of one-third or more of “sexual material harmful to minors” must collect age-verifying personal information from all visitors. We argued that the law places undue burdens on adults seeking to access lawful online speech. First, the law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law requires websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier, for example. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. The court’s decision could have major consequences for the freedom of adults to safely and anonymously access protected speech online.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It: 2024 in Review
People’s ability to speak online, share ideas, and advocate for change is enabled by the countless online services that host everyone’s views.
Despite the central role these online services play in our digital lives, lawmakers and courts spent the last year trying to undermine a key U.S. law, Section 230, that enables services to host our speech. EFF was there to fight back on behalf of all internet users.
Section 230 (47 U.S.C. § 230) is not an accident. Congress passed the law in 1996 because it recognized that for users’ speech to flourish online, services that hosted their speech needed to be protected from legal claims based on any particular user’s speech. The law embodies the principle that everyone, including the services themselves, should be responsible for their own speech, but not the speech of others. This critical but limited legal protection reflects a careful balance by Congress, which at the time recognized that promoting more user speech outweighed the harm caused by any individual’s unlawful speech.
EFF helps thwart effort to repeal Section 230
Members of Congress introduced a bill in May this year that would have repealed Section 230 in 18 months, on the theory that the deadline would motivate lawmakers to come up with a different legal framework in the meantime. Yet the lawmakers behind the effort provided no concrete alternatives to Section 230, nor did they identify any specific parts of the law they believed needed to be changed. Instead, the lawmakers were motivated by their and the public’s justifiable dissatisfaction with the largest online services.
As we wrote at the time, repealing Section 230 would be a disaster for internet users and the small, niche online services that make up the diverse forums and communities that host speech about nearly every interest, religious and political persuasion, and topic. Section 230 protects bloggers, anyone who forwards an email, and anyone who reposts or otherwise recirculates the posts of other users. The law also protects moderators who remove or curate other users’ posts.
Moreover, repealing Section 230 would not have hurt the biggest online services, given that they have astronomical amounts of money and resources to handle the deluge of legal claims that would be filed. Instead, repealing Section 230 would have solidified the dominance of the largest online services. That’s why Facebook has long run a campaign urging Congress to weaken Section 230 – a cynical effort to use the law to cement its dominance.
Thankfully, the bill did not advance, in part because internet users wrote to members of Congress objecting to the proposal. We hope lawmakers in 2025 put their energy instead toward enacting a meaningful and comprehensive consumer data privacy law, or passing laws that enable greater interoperability and competition between social media services. Those efforts would go a long way toward ending Big Tech’s dominance without harming users’ online speech.
EFF stands up for users’ speech in courts
Congress was not the only government branch that sought to undermine Section 230 in the past year. Two different courts issued rulings this year that jeopardize people’s ability to read other people’s posts and to make use of basic features of online services that benefit all users.
In Anderson v. TikTok, the U.S. Court of Appeals for the Third Circuit issued a deeply confused opinion, ruling that Section 230 does not apply to the automated system TikTok uses to recommend content to users. The court reasoned that because online services have a First Amendment right to decide how to present their users’ speech, TikTok’s decisions to recommend certain content reflects its own speech and thus Section 230’s protections do not apply.
We filed a friend-of-the-court brief in support of TikTok’s request for the full court to rehear the case, arguing that the decision was wrong on both the First Amendment and Section 230. We also pointed out how the ruling would have far-reaching implications for users’ online speech. The court unfortunately denied TikTok’s rehearing request, and we are waiting to see whether the service will ask the Supreme Court to review the case.
In Neville v. Snap, Inc., a California trial court refused to apply Section 230 in a lawsuit that claims basic features of the service, such as disappearing messages, “Stories,” and the ability to befriend mutual acquaintances, amounted to defectively designed products. The trial court’s ruling departs from a long line of other court decisions that ruled that these claims essentially try to plead around Section 230 by claiming that the features are the problem, rather than the illegal content that users created with a service’s features.
We filed a friend-of-the-court brief in support of Snap’s effort to get a California appellate court to overturn the trial court’s decision, arguing that the ruling threatens the ability of all internet users to rely on basic features of a given service. If a platform faces liability for a feature that some might misuse to cause harm, the platform is unlikely to offer that feature to users, despite the fact that the majority of people use it for legal and expressive purposes. Unfortunately, the appellate court denied Snap’s petition in December, meaning the case continues before the trial court.
EFF supports effort to empower users to customize their online experiences
While lawmakers and courts are often focused on Section 230’s protections for online services, relatively little attention has been paid to another provision in the law that protects those who make tools that allow users to customize their experiences online. Yet Congress included this protection precisely because it wanted to encourage the development of software that people can use to filter out certain content they’d rather not see or otherwise change how they interact with others online.
That is precisely the goal of a tool being developed by Ethan Zuckerman, a professor at the University of Massachusetts Amherst, known as Unfollow Everything 2.0. The browser extension would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.
Zuckerman filed a lawsuit against Facebook seeking a court ruling that Unfollow Everything 2.0 was immune from legal claims from Facebook under Section 230(c)(2)(B). EFF filed a friend-of-the-court brief in support, arguing that Section 230’s user-empowerment tool immunity is unique and incentivizes the development of beneficial tools for users, including traditional content filtering, tailoring content on social media to a user’s preferences, and blocking unwanted digital trackers to protect a user’s privacy.
The district court hearing the case unfortunately dismissed the case, but its ruling did not reach the merits of whether Section 230 protected Unfollow Everything 2.0. The court gave Zuckerman an opportunity to re-file the case, and we will continue to support his efforts to build user-empowering tools.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
EFF in the Press: 2024 in Review
EFF’s attorneys, activists, and technologists were media rockstars in 2024, informing the public about important issues that affect privacy, free speech, and innovation for people around the world.
Perhaps the single most exciting media hit for EFF in 2024 was “Secrets in Your Data,” the NOVA PBS documentary episode exploring “what happens to all the data we’re shedding” and “the latest efforts to maximize benefits – without compromising personal privacy.” EFFers Hayley Tsukayama, Eva Galperin, and Cory Doctorow were among those interviewed.
One big-splash story in January demonstrated just how in-demand EFF can be when news breaks. Amazon’s Ring home doorbell unit announced that it would disable its Request For Assistance tool, the program that had let police seek footage from users on a voluntary basis – an issue on which EFF, and Matthew Guariglia in particular, have done extensive work. Matthew was quoted in Bloomberg, the Associated Press, CNN, The Washington Post, The Verge, The Guardian, TechCrunch, WIRED, Ars Technica, The Register, TechSpot, The Focus, American Wire News, and the Los Angeles Business Journal. The Bloomberg, AP, and CNN stories in turn were picked up by scores of media outlets across the country and around the world. Matthew also did interviews with local television stations in New York City, Oklahoma City, Allentown, PA, San Antonio, TX and Norfolk, VA. Matthew and Jason Kelley were quoted in Reason, and EFF was cited in reports by the New York Times, Engadget, The Messenger, the Washington Examiner, Silicon UK, Inc., the Daily Mail (UK), AfroTech, and KFSN ABC30 in Fresno, CA, as well as in an editorial in the Times Union of Albany, NY.
Other big stories for us this year – with similar numbers of EFF media mentions – included congressional debates over banning TikTok and censoring the internet in the name of protecting children, state age verification laws, Google’s backpedaling on its Privacy Sandbox promises, the Supreme Court’s NetChoice and Murthy rulings, the arrest of Telegram’s CEO, and X’s tangles with Australia and Brazil.
EFF is often cited in tech-oriented media, with 34 mentions this year in Ars Technica, 32 mentions in The Register, 23 mentions in WIRED, 23 mentions in The Verge, 20 mentions in TechCrunch, 10 mentions in The Record from Recorded Future, nine mentions in 404 Media, and six mentions in Gizmodo. We’re also all over the legal media, with 29 mentions in Law360 and 15 mentions in Bloomberg Law.
But we’re also a big presence in major U.S. mainstream outlets, cited 38 times this year in the Washington Post, 11 times in the New York Times, 11 times in NBC News, 10 times in the Associated Press, 10 times in Reuters, 10 times in USA Today, and nine times in CNN. And we’re being heard by international audiences, with mentions in outlets including Germany’s Heise and Deutsche Welle, Canada’s Globe & Mail and Canadian Broadcasting Corp., Australia’s Sydney Morning Herald and Australian Broadcasting Corp., the United Kingdom’s Telegraph and Silicon UK, and many more.
We’re being heard in local communities too. For example, we talked about the rapid encroachment of police surveillance with media outlets in Sarasota, FL; the San Francisco Bay Area; Baton Rouge, LA; Columbus, OH; Grand Rapids, MI; San Diego, CA; Wichita, KS; Buffalo, NY; Seattle, WA; Chicago, IL; Nashville, TN; and Sacramento, CA, among other localities.
EFFers also spoke their minds directly in op-eds placed far and wide, including:
- Street Sheet, Feb. 15: “No on E: Endangering Accountability and Privacy” (Nash Sheard)
- 48 Hills, Feb. 27: “San Franciscans know a lot about tech. That’s why they should vote No on E” (Jason Kelley and Matthew Guariglia)
- AllAfrica, March 8: “Rihanna, FIFA, Guinness, Marvel, Nike - All Could Be Banned in Ghana” (Daly Barnett, Paige Collings, and Dave Maass)
- The Advocate, May 13: “Why I'm protecting privacy in our connected world” (Erica Portnoy)
- Teen Vogue, June 19: “The Section 230 Sunset Act Would Cut Off Young People’s Access to Online Communities” (Jason Kelley)
- UOL, Aug. 5: “ONU pode fechar pacto global de vigilância arbitrária; o que fará o Brasil?” [“The UN may seal a global pact of arbitrary surveillance; what will Brazil do?”] (Veridiana Alimonti and Michel Roberto de Souza)
- Byline Times, Aug. 16: “Keir Starmer Wants Police to Expand Use of Facial Recognition Technology Across UK – He Should Ban it Altogether” (Paige Collings)
- Slate, Aug. 22: “Expanded Police Surveillance Will Get Us ‘Broken Windows’ on Steroids” (Matthew Guariglia)
- Just Security, Aug. 27: “The UN Cybercrime Convention: Analyzing the Risks to Human Rights and Global Privacy” (Katitza Rodriguez)
- Context, Sept. 17: “X ban in Brazil: Disdainful defiance meets tough enforcement” (Veridiana Alimonti)
- AZ Central/Arizona Republic, Sept. 19: “Police drones could silently video your backyard. That's a problem” (Hannah Zhao)
- Salon, Oct. 3: “Congress knew banning TikTok was a First Amendment problem. It did so anyway” (Brendan Gilligan)
- Deseret News, Nov. 30: “Opinion: Students’ tech skills should be nurtured, not punished” (Bill Budington and Alexis Hancock)
And if you’re seeking some informative listening during the holidays, EFFers joined a slew of podcasts in 2024, including:
- National Constitution Center’s We the People, Jan. 25: “Unpacking the Supreme Court’s Tech Term” (David Greene)
- What the Hack? with Adam Levin, Feb. 6: “EFF’s Eva Galperin Is Not the Pope of Fighting Stalkerware (But She Is)”
- WSJ’s The Future of Everything, Feb. 9: “How Face Scans and Fingerprints Could Become Your Work Badge” (Hayley Tsukayama)
- Fighting Dark Patterns, Feb. 14: “Dark Patterns and Digital Freedom Today. A conversation with Cindy Cohn.”
- 2600’s Off the Hook, Feb. 21: episode on Appin’s efforts to intimidate journalists and media outlets out of reporting on the company’s alleged hacking history (David Greene and Cooper Quintin)
- CISO Series’ Defense in Depth, Feb. 22: “When Is Data an Asset and When Is It a Liability?” (F. Mario Trujillo)
- KCRW’s Scheer Intelligence, March 15: “The banning of TikTok is an attack on the free market” (David Greene)
- Inside Job Boards and Recruitment Marketplaces, March 22: “Is Glassdoor now violating user privacy and anonymity?” (Aaron Mackey)
- Firewalls Don’t Stop Dragons, April 15: “Protecting Kids Online” (Joe Mullin)
- Future Nonprofit, May 7: “Empowerment in Action: Nash Sheard - Building a Strong Bond for Change and Collaboration”
- Mindplex Podcast, May 17: “Is the TikTok Ban Unconstitutional?” (David Greene)
- Bioneers: Revolution From the Heart of Nature, Aug. 8: “None of Your Business: Claiming Our Digital Privacy Rights, Reclaiming Democracy” (Cindy Cohn)
- m/Oppenheim Nonprofit Report, Aug. 27: “Digital Privacy with Electronic Frontier Foundation” (Cindy Cohn)
- Malwarebytes’ Lock and Code, Sept. 9: “What the arrest of Telegram's CEO means, with Eva Galperin”
- Financial Times’ Tech Tonic, Sept. 9: “The Telegram case: Privacy vs security” (Eva Galperin)
- Command Prompt's More Than A Refresh, Sept. 10: “Cooper Quintin, Senior Staff Technologist @ The EFF”
- Mindplex Podcast, Sept. 16: “Pavel Durov's Arrest & Telegram's Encryption Issues” (David Greene)
Defending Encryption in the U.S. and Abroad: 2024 in Review
EFF supporters get that strong encryption is tied to one of our most basic rights: the right to have a private conversation. In the digital world, privacy is impossible without strong encryption.
That’s why we’ve always got an eye out for attacks on encryption. This year, we pushed back—successfully—against anti-encryption laws proposed in the U.S., the U.K., and the E.U. And we had a stark reminder of just how dangerous backdoor access to our communications can be.
U.S. Bills Pushing Mass File-Scanning Fail To Advance
The U.S. Senate’s EARN IT bill is a wrongheaded proposal that would push companies away from using encryption and towards scanning our messages and photos. There’s no reason to enact such a proposal, which technical experts agree would turn our phones into bugs in our pockets.
We were disappointed when EARN IT was voted out of committee last year, even though several senators made clear they wanted to see additional changes before they would support the bill. Since then, however, the bill has gone nowhere. That’s because so many people, including more than 100,000 EFF supporters, have voiced their opposition.
People increasingly understand that encryption is vital to our security and privacy. And when politicians demand that tech companies install dangerous scanning software whether users like it or not, it’s clear to us all that they are attacking encryption, no matter how much obfuscation takes place.
EFF has long encouraged companies to adopt policies that support encryption, privacy, and security by default. When companies do the right thing, EFF supporters will side with them. EFF and other privacy advocates pushed Meta for years to make end-to-end encryption the default option in Messenger. When Meta implemented the change, it was sued by Nevada’s Attorney General. EFF filed a brief in that case arguing that Meta should not be forced to make its systems less secure.
UK Backs Off Encryption-Breaking Language
In the U.K., we fought against the misguided Online Safety Act, which included language that would have let the U.K. government strong-arm companies away from using encryption. After pressure from EFF supporters and others, the U.K. government gave last-minute assurances that the bill wouldn’t be applied to encrypted messages. Ofcom, the U.K. agency in charge of implementing the Online Safety Act, has since said that the Act will not apply to end-to-end encrypted messages. That’s an important commitment, and we have urged Ofcom to make it even clearer in its written guidance.
EU Residents Do Not Want “Chat Control”
Some E.U. politicians have sought to advance a message-scanning bill even more extreme than the U.S. anti-encryption bills. We’re glad to say that the EU proposal, dubbed “Chat Control” by its opponents, has also stalled because of strong opposition.
Even though the European Parliament last year adopted a compromise proposal that would protect our rights to encrypted communications, a few key member states at the EU Council spent much of 2024 pushing the old, privacy-smashing version of Chat Control. But the proposal hasn’t advanced. In a public hearing earlier this month, 10 EU member states, including Germany and Poland, made clear they would not vote for it.
Courts in the E.U., like the public at large, increasingly recognize that private communication online is a human right, and that the encryption required to facilitate it cannot simply be stripped away. The European Court of Human Rights recognized this in a milestone judgment earlier this year, Podchasov v. Russia, which specifically held that weakening encryption puts the human rights of all internet users at risk.
A Powerful Reminder on Backdoors
All three of the above proposals are based on a flawed idea: that it’s possible to give some form of special access to people’s private data that will never be exploited by a bad actor. But that has never been true: there is no backdoor that works only for the “good guys.”
In October, the U.S. public learned about a major breach of telecom systems by Salt Typhoon, a sophisticated Chinese government-backed hacking group. The hackers infiltrated the same systems that major ISPs like Verizon, AT&T, and Lumen Technologies had set up to give U.S. law enforcement and intelligence agencies “lawful access” to user data. The full extent of the damage is still unknown, but the hack swept in people under surveillance by U.S. agencies and reached far beyond them.
If there’s any upside to a terrible breach like Salt Typhoon, it’s that it is waking some officials up to the fact that encryption is vital to both individual and national security. Earlier this month, a top U.S. cybersecurity official said “encryption is your friend,” a welcome break from the anti-encryption messaging we at EFF have heard from government agencies over the years. Unfortunately, other agencies, including the FBI, continue to push the idea that strong encryption can be coupled with easy access for law enforcement.
Whatever happens, EFF will continue to stand up for our right to use encryption to have secure and private online communications.
2024 Year in Review
It is our end-of-year tradition at EFF to look back at the last 12 months of digital rights. This year, the number and diversity of our reflections attest that 2024 was a big year.
If there is something uniting all the disparate threads of work EFF has done this year, it is this: that law and policy should be careful, precise, practical, and technologically neutral. We do not care if a cop is using a glass pressed against your door or the most advanced microphone: they need a warrant.
For example, much of the public discourse this year was taken up by generative AI. This issue seemed to be a Rorschach test for everyone’s anxieties about technology, be they privacy, replacement of workers, surveillance, or intellectual property. Ultimately, it matters little what the specific technology is: whenever technology is used against our rights, EFF will oppose that use. This is a future-proof way of protecting us. If we have privacy protections, labor protections, and protections against government invasions, then it does not matter what technology takes over the public imagination; we will have recourse against its harms.
But AI was only one of the issues we took on this past year. We’ve worked on ensuring that the EU’s new rules regarding large online platforms respect human rights. We’ve filed countless briefs in support of free expression online and represented plaintiffs in cases where bad actors have sought to silence them, including citizen journalists who were targeted for posting clips of city council meetings online.
With your help, we have let the United States Congress know that Americans are for protecting the free press and against laws that would cut kids off from vital sources of information. We’ve spoken to legislators, reporters, and the public to make sure everyone is informed about the benefits and dangers of new technologies, new proposed laws, and legal precedent.
Even all of that does not capture everything we did this year. And we did not—indeed, we cannot—do it without you. Your support keeps the lights on and ensures we are not speaking just for EFF as an organization but for our thousands of tireless members. Thank you, as always.
We will update this page with new stories about digital rights in 2024 every day between now and the new year.
Defending Encryption in the U.S. and Abroad
EFF in the Press
The U.S. Supreme Court Continues its Foray into Free Speech and Tech
The Atlas of Surveillance Expands Its Data on Police Surveillance Technology
EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It
We Stood Up for Access to the Law and Congress Listened
Police Surveillance in San Francisco
Fighting For Progress On Patents
Celebrating Digital Freedom with EFF Supporters
Surveillance Self-Defense
EU Tech Regulation—Good Intentions, Unclear Consequences
The Growing Intersection of Reproductive Rights and Digital Rights
Electronic Frontier Alliance Fought and Taught Locally
Global Age Verification Measures
While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas
State Legislatures Are The Frontline for Tech Policy
Fighting Automated Oppression
Exposing Surveillance at the U.S.-Mexico Border
Federal Regulators Limit Location Brokers from Selling Your Whereabouts
Fighting Online ID Mandates
AI and Policing
Kids Online Safety Act Continues to Threaten Our Rights Online
Deepening Government Use of AI and E-Government Transition in Latin America
Decentralization Reaches a Turning Point
EFF Tells Appeals Court To Keep Copyright’s Fair Use Rules Broad And Flexible
It’s critical that copyright be balanced with limitations that support users’ rights, and perhaps no limitation is more important than fair use. Critics, humorists, artists, and activists all must have rights to re-use and re-purpose source material, even when it’s copyrighted.
Yesterday, EFF weighed in on another case that could shape the future of our fair use rights. In Sedlik v. Von Drachenberg, a Los Angeles tattoo artist created a tattoo based on a well-known photograph of Miles Davis taken by photographer Jeffrey Sedlik. A jury found that Von Drachenberg, the tattoo artist, did not infringe the photographer’s copyright because her version was different enough from the photo that it didn’t meet the legal threshold of “substantially similar.” After the trial, the judge considered additional arguments brought by Sedlik and upheld the jury’s findings.
On appeal, Sedlik has made arguments that, if upheld, could narrow fair use rights for everyone. The appeal brief suggests that only secondary users who make “targeted” use of a copyrighted work have strong fair use defenses, relying on an incorrect reading of the Supreme Court’s decision in Andy Warhol Foundation v. Goldsmith.
Such a reading would upend decades of Supreme Court precedent making clear that “targeted” fair uses get no special treatment over “untargeted” uses. As made clear in Warhol, the copying done by fair users must simply be “reasonably necessary” to achieve a new purpose. The principle of protecting new artistic expressions and new innovations is what led the Supreme Court to protect video cassette recording as fair use in 1984. It also contributed to the 2021 decision in Google v. Oracle, which held that Google’s copying of computer programming conventions created for desktop computers, in order to make it easier to develop for modern smartphones, was fair use.
Sedlik argues that if a secondary user could have chosen another work, this means they did not “target” the original work, and thus the user should have a lessened fair use case. But that has never been the rule. As the Supreme Court explained, Warhol could have created art about a product other than Campbell’s Soup; but his choice to copy the famous Campbell’s logo was fully justified because it was “well known to the public, designed to be reproduced, and a symbol of an everyday item for mass consumption.”
Fair users always select among various alternatives, for both aesthetic and practical reasons. A film professor might know of several films that expertly demonstrate a technique, but will inevitably choose just one to show in class. A news program alerting viewers to developing events may have access to many recordings of the event from different sources, but will choose just one, or a few, based on editorial judgments. Software developers must make decisions about which existing software to analyze or to interoperate with in order to build on existing technology.
Penalizing these non-“targeted” fair uses would lead to absurd results, and we urge the 9th Circuit to reject this argument.
Finally, Sedlik also argues that the tattoo artist’s social media posts are necessarily “commercial” acts, which would push the tattoo art further away from fair use. Artists’ use of social media to document their processes and work has become ubiquitous, and such an expansive view of commerciality would render the concept meaningless. That’s why multiple appellate courts have already rejected such a view; the 9th Circuit should do so as well.
In order for innovation and free expression to flourish in the digital age, fair use must remain a flexible rule that allows for diverse purposes and uses.