Electronic Frontier Foundation
Digital identification can invade our privacy and aggravate existing social inequities. Designed wrong, it might be a big step towards national identification, in which every time we walk through a door or buy coffee, a record of the event is collected and aggregated. Also, any system that privileges digital identification over traditional forms will disadvantage people already at society’s margins.
So, we’re troubled by proposed rules on “mobile driver’s licenses” (or “mDLs”) from the U.S. Department of Homeland Security. And we’ve joined with the ACLU and EPIC to file comments that raise privacy and equity concerns about these rules. The stakes are high, as the comments explain:
By making it more convenient to show ID and thus easier to ask for it, digital IDs would inevitably make demands for ID more frequent in American life. They may also lead to the routine use of automated or “robot” ID checks carried out not by humans but by machines, causing such demands to proliferate even more. Depending on how a digital ID is designed, it could also allow centralized tracking of all ID checks, and raise other privacy issues. And we would be likely to see demands for driver’s license checks become widespread online, which would enormously expand the tracking information such ID checks could create. In the worst case, this would make it nearly impossible to engage in online activities that aren’t tied to our verified, real-world identities, thus hampering the ability to engage in constitutionally protected anonymous speech and facilitating privacy-destroying persistent tracking of our activities and associations.
Longer-term, if digital IDs replace physical documents entirely, or if physical-only document holders are placed at a disadvantage, that could have significant implications for equity and fairness in American life. Many people do not have smartphones, including many from our most vulnerable communities. Studies have found that 15 percent of the population does not own a smartphone, including almost 40 percent of people over 65 and 24 percent of people who make less than $30,000 a year.
Finally, we are concerned that the DHS proposal layers REAL ID requirements onto mDLs. REAL ID has many privacy problems, which should not be carried over into mDLs. Moreover, an mDL issued by a state DMV would already address forgery and cloning concerns, without the need for REAL ID and its privacy problems.
The U.S. Senate is on the cusp of approving an infrastructure package, which passed a critical first vote last night by 67-32. Negotiations on the final bill are ongoing, but late yesterday NBC News obtained the draft broadband provisions. There is a lot to like in them, some of which will depend on decisions by state governments and the Federal Communications Commission (FCC), and some drawbacks. Assuming that what was released makes it into the final bill, here is what to expect.

Not Enough Money to Close the Digital Divide Across the U.S.
We have long advocated, backed by evidence, for a plan that would connect every American to fiber. It is a vital part of any nationwide communications policy that intends to actually function in the 21st century. The future is clearly heading toward more symmetrical uses that will require more bandwidth at very low latency. Falling short of that will inevitably create a new digital divide, this one between those with 21st-century access and those without. Fiber-connected people will head toward the cheaper, symmetrical multi-gigabit era while others are stuck on capacity-constrained, expensive legacy wires. This “speed chasm” will create a divide between those who can participate in an increasingly remote, telecommuting world and those who cannot.
Most estimates put the price tag of universal fiber at $80 to $100 billion, but this bipartisan package proposes only $40 billion in total for construction. It’s pretty obvious that this shortfall will deprive many areas of the funding they need to deliver fiber, or really any broadband access, to the millions of Americans in need of access.
Congress can rectify this shortfall in the future with additional infusions of funding, as well as a stronger emphasis on treating fiber as infrastructure rather than purely as a broadband service. But it should be clear what it means not to do so now. Some states will do very well under this proposal, with the federal effort complementing already existing state efforts. For example, California already has a state universal fiber effort underway that recruits all local actors to work with the state to deliver fiber infrastructure. More federal dollars will simply augment an already very good thing there. But other states may, unfortunately, be duped into building out or subsidizing slow networks that will inevitably need to be replaced. That will cost the state and federal governments more money in the end. This isn’t fated to happen, but it’s a risk invited by the legislation’s adoption of 100/20 Mbps as the build-out metric instead of 100/100 Mbps.

Protecting the Cable Monopolies Instead of Giving Us What We Need
Lobbyists for the slow legacy internet access companies descended on Capitol Hill with a range of arguments trying to dissuade Congress from creating competition in neglected markets, which in turn would force existing carriers to provide better service. Everyone will eventually need access to fiber-optic infrastructure. Our technical analysis has made clear that fiber is the superior medium for 21st-century broadband, which is why government infrastructure policy needs to be oriented around pushing fiber into every community.
Even major wireless industry players now agree that fiber is “inextricably linked” with future high-speed wireless connectivity. But all of this was very inconvenient for existing legacy monopolies. Most notably, cable providers stood to lose if too many people got faster, cheaper internet from someone else. The legislation includes provisions that effectively insulate the underinvested cable monopoly markets from federal dollars. That, arguably, is the worst outcome here.
By defining internet access as the ability to get 100/20 Mbps service, the draft language allows cable monopolies to argue that anyone with access to ancient, insufficient service does not need federal money to build new infrastructure. That means communities stuck on nearly decade-old DOCSIS 3.0 broadband are shielded from having federal dollars used to build fiber. Copper-DSL-only areas, and areas entirely without broadband, will likely take the lion’s share of the $40 billion made available. In addition to rural areas, pockets of urban markets where people still lack broadband will qualify. This leads to an absurd result: people on inferior, overpriced cable services will be treated as just as well served as neighbors who get federally funded fiber.

The Future-Proofing Criteria Is Essential to Help Avoid Wasting These Investments
The proposal establishes a priority (not a mandate) for future-proof infrastructure, which is essential to keep the 100/20 Mbps speed, or something close to it, from becoming the standard. Legacy industry was fond of telling Congress to be “technology neutral” in its policy, when really it was asking Congress to create a program that subsidized obsolete connections by lowering the bar. The future-proofing provision helps avoid that outcome by establishing federal priorities for the broadband projects being funded (see below).
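To make concrete what rides on where the bar is set, here is a toy eligibility check. The area names and speeds are invented for illustration; this is a sketch of the logic, not the bill’s actual funding formula:

```python
def underserved(areas, down_req, up_req):
    """Return areas whose best available service misses the threshold,
    and which would therefore qualify for build-out funding."""
    return [name for name, (down, up) in sorted(areas.items())
            if down < down_req or up < up_req]

# Hypothetical best-available speeds in Mbps (download, upload).
areas = {
    "copper-DSL tract": (25, 3),
    "DOCSIS 3.0 cable tract": (100, 20),
    "fiber tract": (940, 940),
}

print(underserved(areas, 100, 20))    # only the DSL tract qualifies
print(underserved(areas, 100, 100))   # the aging cable tract qualifies too
```

Under a 100/20 Mbps definition, the decade-old cable tract counts as “served” and is walled off from funding; under 100/100 Mbps, it qualifies for an upgrade alongside the DSL-only area.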
This is where things will be challenging in the years to come. The Biden Administration has been crystal clear about the link between fiber infrastructure and future-proofing, per the Treasury guidelines that implemented the broadband provisions of the American Rescue Plan. But the bipartisan bill gives a lot of discretion to the states to distribute the funds. Without a doubt, the same lobby that descended on Congress to argue against 100/100 Mbps will attempt to con state governments into believing any infrastructure will deliver these goals. That is just not true as a matter of physics. States that understand this will push fiber, and they are given the flexibility to do so here.

Digital Discrimination Rules
Under the section titled “digital discrimination,” the bill requires the FCC to establish what it means to have equal access to broadband and, more importantly, what a carrier would have to do to violate such a requirement. This provision carries major possibilities but depends on whom the president nominates to run the FCC, as they will be responsible for setting the rules. Done right, it can set the stage for addressing digital redlining in certain urban communities and push fiber on equitable terms.
If the FCC gets the regulation right, the most direct beneficiaries are likely to be city broadband users who have been left behind. Even in big cities with profitable markets, people lack service. For example, per San Francisco’s own internal analysis, approximately 100,000 residents lack broadband (most of whom are low-income and predominantly people of color), even though they are surrounded by Comcast and AT&T fiber deployments in that same city. Numerous studies show the same in various other major cities, which is why EFF has called for a ban on digital redlining at both the state and federal levels.
Court documents recently reviewed by VICE have revealed that ShotSpotter, a company that makes and sells audio gunshot detection to cities and police departments, may not be as accurate or reliable as the company claims. In fact, the documents reveal that employees at ShotSpotter may be altering alerts generated by the technology in order to justify arrests and buttress prosecutors’ cases. For many reasons, including the concerns raised by these recent reports, police must stop using technologies like ShotSpotter.
Acoustic gunshot detection relies on a series of sensors, often placed on lamp posts or buildings. When a gun is fired, the sensors detect the specific acoustic signature of a gunshot and send the time and location to the police. Location is estimated by measuring the differences in the time it takes the sound to reach sensors in different places.
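The time-difference-of-arrival geometry can be sketched in a few lines. This is an illustrative toy, not ShotSpotter’s actual algorithm: the sensor positions, search grid, and speed of sound are all assumptions, and a real system must also cope with echoes, noise, and clock error:

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (an assumption)

def locate_by_tdoa(sensors, arrival_times, grid_step=2.0, extent=200):
    """Brute-force TDOA localization: scan a grid of candidate points and
    pick the one whose predicted inter-sensor arrival-time differences best
    match the observed differences. Absolute clock offsets cancel out,
    because only pairwise differences are compared."""
    pairs = list(itertools.combinations(range(len(sensors)), 2))
    observed = {(i, j): arrival_times[i] - arrival_times[j] for i, j in pairs}
    best, best_err = None, float("inf")
    steps = int(extent / grid_step)
    for gx in range(-steps, steps + 1):
        for gy in range(-steps, steps + 1):
            x, y = gx * grid_step, gy * grid_step
            # Predicted arrival time (up to a shared constant) at each sensor.
            t = [math.hypot(x - sx, y - sy) / SPEED_OF_SOUND
                 for sx, sy in sensors]
            err = sum((t[i] - t[j] - observed[i, j]) ** 2 for i, j in pairs)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

With four hypothetical sensors at known positions and noise-free timestamps, the scan recovers the source location to within the grid spacing; real deployments solve the same geometry with least-squares rather than a grid search.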
According to ShotSpotter, the largest vendor of acoustic gunshot detection technology, this information is then verified by human acoustic experts to confirm the sound is gunfire, and not a car backfire, firecracker, or other sounds that could be mistaken for gunshots. The sensors themselves can only determine whether there is a loud noise that somewhat resembles a gunshot. It’s still up to people listening on headphones to say whether or not shots were fired.
In a recent statement, ShotSpotter denied the VICE report and claimed that the technology is “100% reliable.” Absolute claims like these are always dubious. And according to the testimony of a ShotSpotter employee and expert witness in court documents reviewed by VICE, claims about the accuracy of the classification come from the marketing department of the company—not from engineers.
Moreover, ShotSpotter presents a real and disturbing threat to people who live in cities covered in these AI-augmented listening devices—which all too often are over-deployed in majority Black and Latine neighborhoods. A recent study of Chicago showed how, over the span of 21 months, ShotSpotter sent police to dead-end reports of shots fired over 40,000 times. This shows—again—that the technology is not as accurate as the company’s marketing department claims. It also means that police officers routinely are deployed to neighborhoods expecting to encounter an armed shooter, and instead encounter innocent pedestrians and neighborhood residents. This creates a real risk that police officers will interpret anyone they encounter near the projected site of the loud noises as a threat—a scenario that could easily result in civilian casualties, especially in over-policed communities.
In addition to its history of false positives, the danger it poses to pedestrians and residents, and the company's dubious record of altering data at the behest of police departments, there is also a civil liberties concern posed by the fact that these microphones intended to detect gunshots can also record human voices.
Yet people in public places—for example, having a quiet conversation on a deserted street—are often entitled to a reasonable expectation of privacy, without overhead microphones unexpectedly recording their conversations. Federal and state eavesdropping statutes (sometimes called wiretapping or interception laws) typically prohibit the recording of private conversations absent consent from at least one person in that conversation.
In at least two criminal trials, prosecutors sought to introduce as evidence audio of voices recorded on acoustic gunshot detection systems. In the California case People v. Johnson, the court admitted it into evidence. In the Massachusetts case Commonwealth v. Denison, the court did not, ruling that a recording of “oral communication” is prohibited “interception” under the Massachusetts Wiretap Act.
It’s only a matter of time before police and prosecutors’ reliance on ShotSpotter leads to tragic consequences. It’s time for cities to stop using ShotSpotter.
Body bags claiming that “disinformation kills” line the streets today in front of Facebook’s Washington, D.C. headquarters. A group of protesters, affiliated with “The Real Facebook Oversight Board” (an organization that is, confusingly, not affiliated with Facebook or its Oversight Board), is urging Facebook’s shareholders to ban so-called misinformation “superspreaders”—that is, a specific number of accounts that have been deemed responsible for the majority of disinformation about the COVID-19 vaccines.
Disinformation about the vaccines is certainly contributing to their slow uptake in various parts of the U.S. as well as other countries. This disinformation spreads through a variety of channels: local communities, family WhatsApp groups, FOX television hosts, and yes, Facebook. The activists pushing for Facebook to remove these “superspreaders” are not wrong: while Facebook does currently ban some COVID-19 mis- and disinformation, urging the company to enforce its own rules more evenly is a tried-and-true tactic.
But while disinformation “superspreaders” are easy to identify based on the sheer amount of information they disseminate, tackling disinformation at a systemic level is not an easy task, and some of the policy proposals we’re seeing have us concerned. Here’s why.

1. Disinformation is not always simple to identify.
In the United States, it was only a few decades ago that the medical community deemed homosexuality a mental illness. It took serious activism and societal debate for the medical community to come to an understanding that it was not. Had Facebook been around—and had we allowed it to be arbiter of truth—that debate might not have flourished.
Here’s a more recent example: There is much debate amongst the contemporary medical community as to the causes of ME/CFS, a chronic illness for which a definitive cause has not been determined—and which, just a few years ago, was thought by many not to be real. The Centers for Disease Control notes this and acknowledges that some healthcare providers may not take the illness seriously. Many sufferers of ME/CFS use platforms like Facebook and Twitter to discuss their illness and find community. If those platforms were to crack down on that discussion, relying on the views of the providers that deny the gravity of the illness, those who suffer from it would suffer more greatly.

2. Tasking an authority with determining disinfo has serious downsides.
As we’ve seen from the first example, there isn’t always agreement between authorities and society as to what is truthful—nor are authorities inherently correct.
In January, German newspaper Handelsblatt published a report stating that the Oxford-AstraZeneca vaccine was not efficacious for older adults, citing an anonymous government source and claiming that the German government’s vaccination scheme was risky.
AstraZeneca denied the claims, and no evidence that the vaccine was ineffective for older adults was procured, but it didn’t matter: Handelsblatt’s reporting set off a series of events that led to AstraZeneca’s reputation in Germany suffering considerably.
Finally, it’s worth pointing out that even the CDC itself—the authority tasked with providing information about COVID-19—has gotten a few things wrong, most recently in May when it lifted its recommendation that people wear masks indoors, an event that was followed by a surge in COVID-19 cases. That shift was met with rigorous debate on social media, including from epidemiologists and sociologists—debate that was important for many individuals seeking to understand what was best for their health. Had Facebook relied on the CDC to guide its misinformation policy, that debate may well have been stifled.

3. Enforcing rules around disinformation is not an easy task.
We know that enforcing terms of service and community standards is a difficult task even for the most resourced, even for those with the best of intentions—like, say, a well-respected, well-funded German newspaper. But if a newspaper, with layers of editors, doesn’t always get it right, how can content moderators—who by all accounts are low-wage workers who must moderate a certain amount of content per hour—be expected to do so? And more to the point, how can we expect automated technologies—which already make a staggering number of errors in moderation—to get it right?
The fact is, moderation is hard at any level and impossible at scale. Certainly, companies could do better when it comes to repeat offenders like the disinformation “superspreaders,” but the majority of content, spread across hundreds of languages and jurisdictions, will be much more difficult to moderate—and as with nearly every category of expression, plenty of good content will get caught in the net.
All of us deserve the basic protection against government searches and seizures that the Constitution provides, including the requirement that law enforcement get a warrant before it can access our communications. But currently, the FBI has a backdoor into our communications, a loophole that Congress can and should close.
This week, Congress will vote on the Commerce, Justice, Science and Related Agencies Appropriations bill (H.R. 4505). Among many other things, this bill contains all the funding for the Department of Justice for Fiscal Year 2022 along with certain restrictions on how the DOJ is allowed to spend taxpayer funds. Reps. Lofgren, Massie, Jayapal, and Davidson have offered an amendment to the bill that would prohibit the use of taxpayer funds for warrantless surveillance of U.S. persons under Section 702 of the FISA Amendments Act. We strongly support this amendment.
Section 702 of the Foreign Intelligence Surveillance Act (FISA) requires tech and telecommunications companies to provide the U.S. government with access to emails and other communications to aid in national security investigations--ostensibly when U.S. persons are in communication with foreign surveillance targets abroad, or when wholly foreign communications transit the U.S. But this wide-sweeping dragnet approach to intelligence collection gives the government access to a large amount of “incidental” communications--that is, millions of untargeted communications of U.S. persons that are swept up with the intended data. Once that data is collected, the FBI can currently bypass the Fourth Amendment’s warrant requirement and sift through these “incidental,” non-targeted communications of Americans--effectively using Section 702 as a “backdoor” around the Constitution. The FISA Court has told the FBI that this violates Americans’ Fourth Amendment rights, but that has not stopped the practice and, frustratingly, the court has failed to take steps to ensure that it stops.
This amendment would not only forbid the DOJ from engaging in this activity, it would also send a powerful signal to the intelligence agencies that Congress is serious about reform.
Tell your member of Congress to support this amendment today.
The DOJ is opposing this amendment, saying that it would inhibit their investigations and make them less successful in rooting out kidnappings and child trafficking. We’ve heard this argument before, and it’s just not convincing.
The FBI has a wide range of investigatory tools. It gives a scary list of potential investigations that it says might be impacted by removing its backdoor, but for every single one of them, the FBI can get a warrant or use other investigatory tools like National Security Letters. What the DOJ elides in protesting this narrow amendment is that the FBI has gotten used to searching through already-collected communications of Americans—overbroadly collected for foreign intelligence purposes—for domestic law enforcement purposes. But it is not the purpose of Section 702 to save the FBI the trouble of getting a warrant (FISA or otherwise) for domestic investigations, as the law and the Constitution require, before it collects needed information from telecommunications and internet service providers. The FBI is in no way prohibited from using its long-standing, powerful investigatory tools by this amendment; it just can no longer piggyback on admittedly overbroad foreign intelligence collection.
The government also elides that what it wants is to take advantage of Section 702’s massive, well-documented over-collection as a kind of time machine. There is a possibility that information collected by the NSA will be deleted before the FBI can get a warrant, but the FBI has not submitted any public (or, as far as we can tell, classified) evidence that this is a major problem in practice or would have resulted in thwarted prosecutions, as opposed to just requiring a bit more effort by the FBI. But protecting Americans’ privacy is worth making the FBI follow the Constitution, even if it is a bit more effort.
The U.S. Supreme Court has denied domestic law enforcement the power of a general warrant: first collecting a broad swath of Americans’ communications, then later sorting through it for whatever it may need. That is what the FBI is defending here, it is what the FISC raised concerns about, and it is what this amendment will rightfully stop.
Tell your member of Congress to support this amendment today.
To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.
To celebrate 30 years of defending online freedom, EFF held a candid live discussion with net neutrality pioneer and EFF board member Gigi Sohn, who served as Counselor to the Chairman of the Federal Communications Commission and co-founded the leading advocacy organization Public Knowledge. Joining the chat were EFF Senior Legislative Counsel Ernesto Falcon and Associate Director of Policy and Activism Katharine Trendacosta. You can watch the full conversation here.
In my perfect world, everyone’s connected to a future-proof, fast, affordable—and open—internet.
On July 28, we’ll be holding our final EFF30 Fireside Chat—a "Founders Edition." EFF's Executive Director, Cindy Cohn will be joined by some of our founders and early board members, Esther Dyson, Mitch Kapor, and John Gilmore, to discuss everything from EFF's origin story and its role in digital rights to where we are today.
EFF30 Fireside Chat: Founders Edition
Wednesday, July 28, 2021 at 5 pm Pacific Time
Streaming Discussion with Q&A
THIS EVENT IS LIVE AND FREE! $10 DONATION APPRECIATED
The conversation began with a comparison between the policy battles of the 1990s, the 2000s, and today: “What was happening was that the copyright industry--Hollywood, the recording industry, the book publishers--saw this technology that gave people power to control their own experience and what they wanted to see, and what they wanted to listen to, and it flipped them out...we really need[ed] an organization in Washington that’s dedicated to a free and open internet, that’s free of copyright gatekeepers, and free of ISP gatekeepers.” This was the founding of Public Knowledge, an organization that fights, alongside EFF, to protect the open internet.
Many think of net neutrality—the idea that Internet service providers (ISPs) should treat all data that travels over their networks fairly, without improper discrimination in favor of particular apps, sites, or services—as a fairly recent issue. But it actually started in the late 1990s, Sohn explained. The battle, in many ways, began in earnest in Portland, when the city’s consumer protection agency told AT&T that its cable modem service was going to be regulated under Title VI of the Communications Act of 1934. This led to a court case in which the Ninth Circuit determined that cable modem service was actually a telecommunications service, falling under Title II of the Communications Act, and should be regulated similarly to telephone service. Watch the full conversation for a deep dive into net neutrality’s history.
Moving to the topic of broadband access, Katharine Trendacosta described it along the lines of net neutrality: “It’s not a partisan issue. Most Americans support net neutrality. Most Americans need internet access.” And the increased need for access during the pandemic wasn’t a blip: “This is always what the future was going to look like.”
But crises like the pandemic do show the dangerous cracks that exist due to the current lack of broadband regulation. For example, Sohn explained, the Santa Clara fire department was throttled during the Mendocino Complex fire, and had nowhere to go to fix the problem. And over the last year, “the former FCC chairman had to beg the companies not to cut off people’s service during the pandemic. The FCC couldn’t say ‘you must,’ they had to say ‘Mother, may I?’” To put it bluntly, said Ernesto Falcon, as access is more critical than ever, the lack of authority leaves many people without recourse: “Three-quarters of Americans now think of broadband as the same as electricity and water in terms of its importance in everyday life--and the idea that you would have an unregulated monopoly selling you water, who wants that? No one wants that.”
In the regulatory vacuum, Sohn said, the states are the new battleground for getting net neutrality and broadband access to everyone--and they are well poised to fight that fight. Several states have passed net neutrality laws, including California (the ISPs, of course, are fighting back with lawsuits). And though the federal government has failed to properly expand broadband access, states can do, and some have done, much better:
The FCC and other agencies have spent about 50 billion dollars trying to build broadband everywhere and they’ve failed miserably. They invested in slow technologies, they weren’t careful with where they built, we have slow networks, and by one count we have 42 million Americans that don’t have access to any network at all. We need to be much much smarter. It’s not only about who gets the money, or how much, or for what, but it’s also how it’s given out. And that’s one of the reasons why I’m favorable towards giving a big chunk to the states. They’ll have a better idea of where the need is.
This chat was recorded just weeks before California Governor Newsom signed a massive, welcome, multi-billion-dollar public fiber package into law in late July.
The conversation then turned to questions from the audience, which tackled ways to kickstart competition in the ISP market, how to convince politicians to make an expensive fiber-optic investment, and ultimately, what the role of government should be in an area where it has (so far) failed. You can, and should, watch the entire Fireside Chat here. Whatever you take away from this wide-ranging discussion of open internet issues, we hope you’ll help us work towards Sohn’s vision of a world where “everyone’s connected to a future-proof, fast, affordable—and open—internet.” This is a vision that EFF shares, and one that we believe can exist—if we fight for it.
Check out additional recaps of EFF's 30th anniversary conversation series, and don't miss our final program where we'll delve into the dawn of digital activism with EFF’s early leaders on July 28, 2021: EFF30 Fireside Chat: Founders Edition.
Washington D.C.—The Electronic Frontier Foundation (EFF) filed a Freedom of Information Act (FOIA) lawsuit against the U.S. Postal Service and its inspection agency seeking records about a covert program to secretly comb through online posts of social media users before street protests, raising concerns about chilling the privacy and expressive activity of internet users.
Under an initiative called the Internet Covert Operations Program, analysts at the U.S. Postal Inspection Service (USPIS), the Postal Service’s law enforcement arm, sorted through massive amounts of data created by social media users to surveil what they were saying and sharing, according to media reports. Internet users’ posts on Facebook, Twitter, Parler, and Telegram were likely swept up in the surveillance program.
USPIS has not disclosed details about the program or any records responding to EFF’s FOIA request asking for information about the creation and operation of the surveillance initiative. In addition to those records, EFF is also seeking records on the program’s policies and analysis of the information collected, and communications with other federal agencies, including the Department of Homeland Security (DHS), about the use of social media content gathered under the program.
“We’re filing this FOIA lawsuit to shine a light on why and how the Postal Service is monitoring online speech. This lawsuit aims to protect the right to protest,” said Houston Davidson, EFF public interest legal fellow. “The government has never explained the legal justifications for this surveillance. We’re asking a court to order the USPIS to disclose details about this speech-monitoring program, which threatens constitutional guarantees of free expression and privacy.”
Media reports revealed that a government bulletin dated March 16 was distributed across DHS’s state-run security threat centers, alerting law enforcement agencies that USPIS analysts monitored “significant activity regarding planned protests occurring internationally and domestically on March 20, 2021.” Protests around the country were planned for that day, and locations and times were being shared on Parler, Telegram, Twitter, and Facebook, the bulletin said.
“Monitoring and gathering people’s social media activity chills and suppresses free expression,” said Aaron Mackey, EFF senior staff attorney. “People self-censor when they think their speech is being monitored and could be used to target them. A government effort to scour people’s social media accounts is a threat to our civil liberties.”
EFF, ACLU Urge Appeals Court to Revive Challenge to Los Angeles’ Collection of Scooter Location Data
San Francisco—The Electronic Frontier Foundation and the ACLU of Northern and Southern California today asked a federal appeals court to reinstate a lawsuit they filed on behalf of electric scooter riders challenging the constitutionality of Los Angeles’ highly privacy-invasive collection of detailed trip data and real-time locations and routes of scooters used by thousands of residents each day.
The Los Angeles Department of Transportation (LADOT) collects from operators of dockless vehicles like Lyft, Bird, and Lime information about every single scooter trip taken within city limits. It uses software it developed to gather location data through Global Positioning System (GPS) trackers on scooters. The system doesn’t capture the identity of riders directly, but collects with precision riders’ location, routes, and destinations to within a few feet, which can easily be used to reveal the identities of riders.
A lower court erred in dismissing the case, EFF and the ACLU said in a brief filed today in the U.S. Court of Appeals for the Ninth Circuit. The court incorrectly determined that the practice, unprecedented in both its invasiveness and scope, didn’t violate the Fourth Amendment. The court also abused its discretion, failing in its duty to credit the plaintiff’s allegations as true, by dismissing the case without allowing the riders to amend the lawsuit to fix defects in the original complaint, as federal rules require.
“Location data can reveal detailed, sensitive, and private information about riders, such as where they live, who they work for, who their friends are, and when they visit a doctor or attend political demonstrations,” said EFF Surveillance Litigation Director Jennifer Lynch. “The lower court turned a blind eye to Fourth Amendment principles. And it ignored Supreme Court rulings establishing that, even when location data like scooter riders’ GPS coordinates are automatically transmitted to operators, riders are still entitled to privacy over the information because of the sensitivity of location data.”
The city has never presented a justification for this dragnet collection of location data, including in this case, and has said it’s an “experiment” to develop policies for motorized scooter use. Yet the lower court decided on its own that the city needs the data and disregarded plaintiff Justin Sanchez’s statements that none of Los Angeles’ potential uses for the data necessitates collection of all riders’ granular and precise location information en masse.
“LADOT’s approach to regulating scooters is to collect as much location data as possible, and to ask questions later,” said Mohammad Tajsar, senior staff attorney at the ACLU of Southern California. “Instead of risking the civil rights of riders with this data grab, LADOT should get back to the basics: smart city planning, expanding poor and working people’s access to affordable transit, and tough regulation on the private sector.”
The lower court also incorrectly dismissed Sanchez’s claims that the data collection violates the California Electronic Communications Privacy Act (CalECPA), which prohibits the government from accessing electronic communications information without a warrant or other legal process. The court’s mangled and erroneous interpretation of CalECPA—that only courts that have issued or are in the process of issuing a warrant can decide whether the law is being violated—would, if allowed to stand, severely limit the ability of people subjected to warrantless collection of their data to ever sue the government.
“The Ninth Circuit should overturn dismissal of this case because the lower court made numerous errors in its handling of the lawsuit,” said Lynch. “The plaintiffs should be allowed to file an amended complaint and have a jury decide whether the city is violating riders’ privacy rights.”
Why should you care about data brokers? Reporting this week about a Substack publication outing a priest with location data from Grindr shows once again how easy it is for anyone to take advantage of data brokers’ stores to cause real harm.
This is not the first time Grindr has been in the spotlight for sharing user information with third-party data brokers. The Norwegian Consumer Council singled it out in its 2020 "Out of Control" report, before the Norwegian Data Protection Authority fined Grindr earlier this year. At the time, the report specifically warned that the app’s data-mining practices could put users at serious risk in places where homosexuality is illegal.
But Grindr is just one of countless apps engaging in this exact kind of data sharing. The real problem is the many data brokers and ad tech companies that amass and sell this sensitive data without anything resembling real users’ consent.
Apps and data brokers claim they are only sharing so-called “anonymized” data. But that’s simply not possible. Data brokers sell rich profiles with more than enough information to link sensitive data to real people, even if the brokers don’t include a legal name. In particular, there’s no such thing as “anonymous” location data. Data points like one’s home or workplace are identifiers themselves, and a malicious observer can connect movements to these and other destinations. In this case, that includes gay bars and private residences.
Another piece of the puzzle is the ad ID, another so-called “anonymous" label that identifies a device. Apps share ad IDs with third parties, and an entire industry of “identity resolution” companies can readily link ad IDs to real people at scale.
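To make concrete how little “anonymization” protects here, consider a minimal sketch. Every name, place, ad ID, and the `directory` dataset below are hypothetical, and real identity-resolution pipelines are far more sophisticated, but the join works on the same principle:

```python
from collections import Counter

# Hypothetical "anonymized" location pings, tagged only with an ad ID.
# No legal name is ever collected.
pings = [
    ("ad-7f3a", "123 Elm St"),    # night after night: likely home
    ("ad-7f3a", "123 Elm St"),
    ("ad-7f3a", "123 Elm St"),
    ("ad-7f3a", "Acme Corp HQ"),  # weekday daytime: likely workplace
    ("ad-7f3a", "Acme Corp HQ"),
]

# A broker or "identity resolution" firm holds a separate dataset mapping
# (home, work) pairs to real identities (public records, loyalty cards, etc.).
directory = {("123 Elm St", "Acme Corp HQ"): "Jane Doe"}

def reidentify(ad_id, pings, directory):
    """Infer home and work from visit frequency, then join to the directory."""
    places = Counter(place for pid, place in pings if pid == ad_id)
    top_two = tuple(place for place, _ in places.most_common(2))
    return directory.get(top_two)

print(reidentify("ad-7f3a", pings, directory))  # the ad ID resolves to "Jane Doe"
```

The point is that the “anonymous” identifier only has to be joined once against any dataset containing a name; after that, every ping tagged with that ad ID is attributed to a person.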
All of this underlines just how harmful a collection of mundane-seeming data points can become in the wrong hands. We’ve said it before and we’ll say it again: metadata matters.
That’s why the U.S. needs comprehensive data privacy regulation more than ever. This kind of abuse is not inevitable, and it must not become the norm.
Council of Europe’s Actions Belie its Pledges to Involve Civil Society in Development of Cross Border Police Powers Treaty
As the Council of Europe’s flawed cross border surveillance treaty moves through its final phases of approval, time is running out to ensure cross-border investigations occur with robust privacy and human rights safeguards in place. The innocuously named “Second Additional Protocol” to the Council of Europe’s (CoE) Cybercrime Convention seeks to set a new standard for law enforcement investigations—including those seeking access to user data—that cross international boundaries, and would grant a range of new international police powers.
But the treaty’s drafting process has been deeply flawed, with civil society groups, defense attorneys, and even data protection regulators largely sidelined. We are hoping that the CoE's Parliamentary Assembly (PACE), which is next in line to review the draft Protocol, will give us the opportunity to present our privacy and human rights concerns, and will take them seriously as it formulates its opinion and recommendations before the CoE’s final body of approval, the Council of Ministers, decides the Protocol’s fate. According to the Terms of Reference for the preparation of the Draft Protocol, the Council of Ministers may consider inviting parties “other than member States of the Council of Europe to participate in this examination.”
The CoE relies on committees to generate the core draft of treaty texts. In this instance, the CoE’s Cybercrime Committee (T-CY) Plenary negotiated and drafted the Protocol’s text with the assistance of a drafting group consisting of representatives of State Parties. The process, however, has been fraught with problems. To begin with, T-CY’s Terms of Reference for the drafting process drove a lengthy, non-inclusive procedure that relied on closed sessions (Article 4.3 T-CY Rules of Procedures). While the Terms of Reference allow the T-CY to invite individual subject matter experts on an ad hoc basis, key voices such as data protection authorities, civil society experts, and criminal defense lawyers were mostly sidelined. Instead, the process has been largely commandeered by law enforcement, prosecutors and public safety officials (see here, and here).
Earlier in the process, in April 2018, EFF, CIPPIC, EDRI and 90 civil society organizations from across the globe requested the COE Secretariat General provide more transparency and meaningful civil society participation as the treaty was being negotiated and drafted—and not just during the CoE’s annual and somewhat exclusive Octopus Conferences. However, since T-CY began its consultation process in July 2018, input from external stakeholders has been limited to Octopus Conference participation and some written comments. Civil society organizations were not included in the plenary groups and subgroups where text development actually occurs, nor was our input meaningfully incorporated.
Compounding matters, the T-CY’s final online consultation, where the near-final draft text of the Protocol was first presented to external stakeholders, provided only a 2.5-week window for input. The draft text included many new and complex provisions, including the Protocol’s core privacy safeguards, but excluded key elements such as the explanatory text that would normally accompany those safeguards. As was flagged by civil society, privacy regulators, and even the CoE’s own data protection committee, two and a half weeks is not enough time to provide meaningful feedback on such a complex international treaty. More than anything, this short consultation window gave the impression that T-CY’s external consultations were merely performative.
Despite these myriad shortcomings, the Council of Ministers (the CoE’s final statutory decision-making body, comprising member States’ Foreign Affairs Ministers) responded to our process concerns by arguing that external stakeholders had been consulted during the Protocol’s drafting process. Even more oddly, the Council of Ministers justified the demonstrably curtailed final consultation period by invoking its desire to complete the Protocol by the 20th anniversary of the CoE’s Budapest Cybercrime Convention (that is, by November 2021).
With great respect, we kindly disagree with Ministers’ response. If T-CY wished to meet its November 2021 deadline, it had many options open to it. For instance, it could have included external stakeholders from civil society and from privacy regulators in its drafting process, as it had been urged to do on multiple occasions.
More importantly, this is a complex treaty with wide ranging implications for privacy and human rights in countries across the world. It is important to get it right, and ensure that concerns from civil society and privacy regulators are taken seriously and directly incorporated into the text. Unfortunately, as the text stands, it raises many substantive problems, including the lack of systematic judicial oversight in cross-border investigations and the adoption of intrusive identification powers that pose a direct threat to online anonymity. The Protocol also undermines key data protection safeguards relating to data transfers housed in central instruments like the European Union’s Law Enforcement Directive and the General Data Protection Regulation.
The Protocol now stands with the CoE’s PACE, which will issue an opinion on the Protocol and might recommend some additional changes to its substantive elements. It will then fall to the CoE’s Council of Ministers to decide whether to accept any of PACE’s recommendations and adopt the Protocol, a step which we still anticipate will occur in November. Together with CIPPIC, EDRI, Derechos Digitales, and NGOs around the world, we hope that PACE takes our concerns seriously, and that the Council produces a treaty that puts privacy and human rights first.
As part of a larger redesign, the payment app Venmo has discontinued its public “global” feed. That means the Venmo app will no longer show you strangers’ transactions—or show strangers your transactions—all in one place. This is a big step in the right direction. But, as the redesigned app rolls out to users over the next few weeks, it’s unclear what Venmo’s defaults will be going forward. If Venmo and parent company PayPal are taking privacy seriously, the app should make privacy the default, not just an option still buried in the settings.
Currently, all transactions and friends lists on Venmo are public by default, painting a detailed picture of who you live with, where you like to hang out, who you date, and where you do business. It doesn’t take much imagination to come up with all the ways this could cause harm to real users, and the gallery of Venmo privacy horrors is well-documented at this point.
However, Venmo apparently has no plans to make transactions private by default at this point. That would squander the opportunity it has right now to finally be responsive to the concerns of Venmo users, journalists, and advocates like EFF and Mozilla. We hope Venmo reconsiders.
There’s nothing “social” about sharing your credit card statement with your friends.
Even a seemingly positive move from “public” to “friends-only” defaults would maintain much of Venmo’s privacy-invasive status quo. That’s in large part because of Venmo’s track record of aggressively hoovering up users’ phone contacts and Facebook friends to populate their Venmo friends lists. Venmo’s installation process nudges users towards connecting their phone contacts and Facebook friends to Venmo. From there, the auto-syncing can continue silently and persistently, stuffing your Venmo friends list with people you did not affirmatively choose to connect with on the app. In some cases, there is no option to turn this auto-syncing off. There’s nothing “social” about sharing your credit card statement with a random subset of your phone contacts and Facebook friends, and Venmo should not make that kind of disclosure the default.
It’s also unclear if Venmo will continue to offer a “public” setting now that the global feed is gone. Public settings would still expose users’ activities on their individual profile pages and on Venmo’s public API, leaving them vulnerable to the kind of targeted snooping that Venmo has become infamous for.
We were pleased to see Venmo recently take the positive step of giving users settings to hide their friends lists. Throwing out the creepy global feed is another positive step. Venmo still has time to make transactions and friends lists private by default, and we hope it makes the right choice.
If you haven’t already, change your transaction and friends list settings to private by following the steps in this post.
On June 17th, the best legal minds in the Bay Area gathered together for a night filled with tech law trivia—but there was a twist! With in-person events still on the horizon, EFF's 13th Annual Cyberlaw Trivia Night moved to a new browser-based virtual space, custom built in Gather. This 2D environment allowed guests to interact with other participants using video, audio, and text chat, based on proximity in the room.
EFF's staff joined forces to craft the questions, pulling details from the rich canon of privacy, free speech, and intellectual property law to create four rounds of trivia for this year's seven competing teams.
As the evening began, contestants explored the virtual space and caught up with each other, but the time for trivia would soon be at hand! After welcoming everyone to the event, our intrepid Quiz Master Kurt Opsahl introduced our judges Cindy Cohn, Sophia Cope, and Mukund Rathi. Attendees were then asked to meet at their team's private table, allowing them to freely discuss answers without other teams being able to overhear, and so the trivia began!
Everyone got off to a great start for the General Round 1 questions, featuring answers that ranged from winged horses to Snapchat filters. For the Intellectual Property Round 2, the questions proved more challenging, but the teams quickly rallied for the Privacy & Free Speech Round 3. With no clear winners so far, teams entered the final 4th round hoping to break away from the pack and secure 1st place.
But a clean win was not to be!
Durie Tangri's team "The Wrath of (Lina) Khan" and Fenwick's team "The NFTs: Notorious Fenwick Trivia" were still tied for first! Always prepared for such an occurrence, the teams headed into a bonus Tie-Breaker round to settle the score. Or so we thought...
After extensive deliberation, the judges arrived at their decision and announced "The Wrath of (Lina) Khan" had the closest to correct answer and were the 1st place winners, with the "The NFTs: Notorious Fenwick Trivia" coming in 2nd, and Ridder, Costa & Johnstone's team "We Invented Email" coming in 3rd. Easy, right? No!
Fenwick appealed to the judges, arguing that under Official "Price is Right" Rules, that the answer closest to correct without going over should receive the tie-breaker point: cue more extensive deliberation (lawyers). Turns out...they had a pretty good point. Motion for Reconsideration: Granted!
But what to do when the winners had already been announced?
Two first place winners, of course! Which also meant that Ridder, Costa & Johnstone's team "We Invented Email" moved into the 2nd place spot, and Facebook's team "Whatsapp" were the new 3rd place winners! Whew! Big congratulations to both winners, enjoy your bragging rights!
EFF's legal interns also joined in the fun, and their team name "EFF the Bluebook" followed the proud tradition of having an amazing team name, despite The Rules stating they were unable to formally compete.
EFF hosts the Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for their users. Among the many firms that continue to dedicate their time, talent, and resources to the cause, we would especially like to thank Durie Tangri LLP; Fenwick; Ridder, Costa & Johnstone LLP; and Wilson Sonsini Goodrich & Rosati LLP for sponsoring this year’s Bay Area event.
If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist. Interested lawyers reading this post can go here to join the Cooperating Attorneys list.
Are you interested in attending or sponsoring an upcoming Trivia Night? Please email firstname.lastname@example.org for more information.
The Indian government’s new Intermediary Guidelines and Digital Media Ethics Code (“2021 Rules”) pose huge problems for free expression and Internet users’ privacy. They include dangerous requirements for platforms to identify the origins of messages and pre-screen content, which fundamentally breaks strong encryption for messaging tools. Though WhatsApp and others are challenging the rules in court, the 2021 Rules have already gone into effect.
Three UN Special Rapporteurs—the Rapporteurs for Freedom of Expression, Privacy, and Association—heard and in large part affirmed civil society’s criticism of the 2021 Rules, acknowledging that they did “not conform with international human rights norms.” Indeed, the Rapporteurs raised serious concerns that Rule 4 of the guidelines may compromise the right to privacy of every internet user, and called on the Indian government to carry out a detailed review of the Rules and to consult with all relevant stakeholders, including NGOs specializing in privacy and freedom of expression.
The 2021 Rules contain two provisions that are particularly pernicious: the Rule 4(4) Content Filtering Mandate and the Rule 4(2) Traceability Mandate.

Content Filtering Mandate
Rule 4(4) compels content filtering, requiring that providers are able to review the content of communications, which not only fundamentally breaks end-to-end encryption, but creates a system for censorship. Significant social media intermediaries (i.e. Facebook, WhatsApp, Twitter, etc.) must “endeavor to deploy technology-based measures,” including automated tools or other mechanisms, to “proactively identify information” that has been forbidden under the Rules. This cannot be done without breaking the higher-level promises of secure end-to-end encrypted messaging.
Client-side scanning has been proposed as a way to enforce content blocking without technically breaking end-to-end encryption. That is, the user’s own device could use its knowledge of the unencrypted content to enforce restrictions by refusing to transmit, or perhaps to display, certain prohibited information, without revealing to the service provider who was attempting to communicate or view that information. That’s wrong. Client-side scanning puts a robot spy in the room. A spy in a place where people are talking privately means the conversation is no longer private, and a robot spy is a spy just as much as a human one.
As we explained last year, client-side scanning inherently breaks the higher-level promises of secure end-to-end encrypted communications. If the provider controls what's in the set of banned materials, they can test against individual statements, so a test against a set of size 1, in practice, is the same as being able to decrypt a message. And with client-side scanning, there's no way for users, researchers, or civil society to audit the contents of the banned materials list.
The Indian government frames the mandate as directed toward terrorism, obscenity, and the scourge of child sexual abuse material, but the mandate is actually much broader. It also imposes proactive and automatic enforcement of the 2021 Rules’ Section 3(1)(d) content takedown provisions, requiring the proactive blocking of material previously held to be “information which is prohibited under any law,” including specifically laws for the protection of “the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation,” and incitement to any such act. This includes the widely criticized Unlawful Activities Prevention Act, which has reportedly been used to arrest academics, writers, and poets for leading rallies and posting political messages on social media.
This broad mandate is all that is necessary to automatically suppress dissent, protest, and political activity that a government does not like, before it can even be transmitted. The Indian government's response to the Rapporteurs dismisses this concern, writing “India's democratic credentials are well recognized. The right to freedom of speech and expression is guaranteed under the Indian Constitution.”
The response misses the point. Even if a democratic state applies this incredible power to preemptively suppress expression only rarely and within the bounds of internationally recognized rights to freedom of expression, Rule 4(4) puts in place the toolkit for an authoritarian crackdown, automatically enforced not only in public discourse, but even in private messages between two people.
Part of a commitment to human rights in a democracy requires civic hygiene, refusing to create the tools of undemocratic power.
Moreover, rules like these give comfort and credence to authoritarian efforts to enlist intermediaries to assist in their crackdowns. If this Rule were available to China, word for word, it could be used to require social media companies to block images of Winnie the Pooh as it happened in China from being transmitted, even in direct “encrypted” messages.
Automated filters also violate due process, reversing the burden of censorship. As the three UN Special Rapporteurs made clear, a “general monitoring obligation that will lead to monitoring and filtering of user-generated content at the point of upload ... would enable the blocking of content without any form of due process even before it is published, reversing the well-established presumption that States, not individuals, bear the burden of justifying restrictions on freedom of expression.”

Traceability Mandate
The traceability provision, in Rule 4(2), requires any large social media intermediary that provides messaging services to “enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. The Decryption Rules allow authorities to request the interception or monitoring of any decrypted information generated, transmitted, received, or stored in any computer resource.
The Indian government responded to the Rapporteur report, claiming to honor the right to privacy:
“The Government of India fully recognises and respects the right of privacy, as pronounced by the Supreme Court of India in K.S. Puttaswamy case. Privacy is the core element of an individual's existence and, in light of this, the new IT Rules seeks information only on a message that is already in circulation that resulted in an offence.”
This narrow view of Rule 4(2) is fundamentally mistaken. Implementing the Rule requires the messaging service to collect information about all messages, even before the content is deemed a problem, allowing the government to conduct surveillance with a time machine. This changes the security model and prevents implementing the strong encryption that is a fundamental backstop for protecting human rights in the digital age.

The Danger to Encryption
Both the traceability and filtering mandates endanger encryption, calling for companies to know detailed information about each message that their encryption and security designs would otherwise allow users to keep private. Strong end-to-end encryption means that only the sender and the intended recipient know the content of communications between them. Even if the provider only compares two encrypted messages to see if they match, without directly examining the content, this reduces security by allowing more opportunities to guess at the content.
It is no accident that the 2021 Rules are attacking encryption. Riana Pfefferkorn, Research Scholar at the Stanford Internet Observatory, wrote that the rules were intentionally aimed at end-to-end encryption since the government would insist on software changes to defeat encryption protections:
Speaking anonymously to The Economic Times, one government official said the new rules will force large online platforms to “control” what the government deems to be unlawful content: Under the new rules, “platforms like WhatsApp can’t give end-to-end encryption as an excuse for not removing such content,” the official said.
The 2021 Rules’ unstated requirement to break encryption goes beyond the mandate of the Information Technology (IT) Act, which authorized the 2021 Rules. India’s Centre for Internet & Society’s detailed legal and constitutional analysis of the Rules explains: “There is nothing in Section 79 of the IT Act to suggest that the legislature intended to empower the Government to mandate changes to the technical architecture of services, or undermine user privacy.” Yet both would be required to comply with the Rules.
There are better solutions. For example, WhatsApp found a way to discourage massive chain forwarding of messages without itself knowing the content. The app notes the number of times a message has been forwarded inside the message itself, and changes its behavior based on that count. Since the forwarding count is inside the encrypted message, the WhatsApp server and company don’t see it. So your app might refuse to forward a chain letter, because the message itself shows it was massively forwarded, but the company can’t look at the encrypted message and know its content.
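The design can be sketched roughly as follows. The XOR “cipher” and the `FORWARD_LIMIT` value here are stand-ins for illustration, not WhatsApp's actual protocol; the point is only that the counter travels inside the encrypted payload, so the server relays opaque bytes and never learns the count:

```python
import json

# Stand-in "encryption": a toy XOR, NOT a real cipher. It only models the
# property that the server cannot read the payload without the key.
def encrypt(plaintext: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in plaintext)

def decrypt(ciphertext: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in ciphertext)

FORWARD_LIMIT = 5  # hypothetical app-enforced cap on heavily forwarded messages

def forward(ciphertext: bytes, key: int):
    """Runs on the recipient's device when they tap 'forward'."""
    msg = json.loads(decrypt(ciphertext, key))
    if msg["forward_count"] >= FORWARD_LIMIT:
        return None  # the app refuses; the server never learns why
    msg["forward_count"] += 1
    return encrypt(json.dumps(msg).encode(), key)

key = 0x5A
wire = encrypt(json.dumps({"text": "hello", "forward_count": 0}).encode(), key)
# The server relays `wire` as opaque bytes; the count is invisible to it.
wire = forward(wire, key)
print(json.loads(decrypt(wire, key))["forward_count"])  # prints 1
```

The enforcement lives entirely at the endpoints: the provider gains a lever against viral abuse without ever being able to read, or be compelled to report, the content.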
Likewise, empowering users to report content can mitigate many of the harms that inspired India's 2021 Rules. The key principle of end-to-end encryption is that a message gets securely to its destination, without interception by eavesdroppers. This does not prevent the recipient from reporting abusive or unlawful messages, including the now-decrypted content and the sender’s information. An intermediary can facilitate such user reporting and still provide the strong encryption necessary for a free society. Furthermore, there are cryptographic techniques that let a user report abuse in a way that identifies the abusive or unlawful content while preventing forged complaints and preserving the privacy of people not directly involved.
The 2021 Rules endanger encryption, weakening the privacy and security of ordinary people throughout India, while creating tools that could all too easily be misused against fundamental human rights and that could inspire authoritarian regimes around the world. The Rules should be withdrawn and reconsidered with the input of civil society and international human rights advocates, to ensure they help protect and preserve fundamental rights in the digital age.
Years ago, we noted that despite being one of the world’s largest economies, the state of California had no broadband plan for universal, affordable, high-speed access. It is clear that access that meets our needs requires fiber optic infrastructure, yet most Californians were stuck with slow monopoly broadband thanks to laws backed by the very cable companies providing that terrible service. For example, under a state law the large private ISPs supported in 2017, the state was literally deploying obsolete copper DSL connections to rural communities instead of building out fiber optics. But all of that is finally coming to an end thanks to your efforts.
Today, Governor Newsom signed into law one of the largest state investments in public fiber in the history of the United States. No longer will the state of California simply defer to the whims of AT&T and cable for broadband access; now every community is being given its shot to choose its broadband destiny.

How Did We Get a New Law?
California’s new broadband infrastructure program was made possible through a combination of persistent statewide activism from all corners, political leadership by people such as Senator Lena Gonzalez, and investment funding from the American Rescue Plan passed by Congress. All of these things led up to the moment when Governor Newsom introduced his multi-billion-dollar broadband budget that is being signed into law today. Make no mistake, every single time you picked up the phone or emailed to tell your legislator to vote for affordable, high-speed access for all people, it made a difference, because it set the stage for today.
Arguably, what pushed us to this moment was the image of kids doing homework in fast-food parking lots during the pandemic. It made it undeniable that internet access was neither universal nor adequate in speed and capacity. That moment, captured and highlighted by Monterey County Supervisor Luis Alejo, a former member of the California State Assembly, forced a reckoning with the failures of the current broadband ecosystem. Coupled with the COVID-19 pandemic also forcing schools to burn countless millions of public dollars renting out inferior mobile hotspots, Sacramento finally had enough and voted unanimously to change course.

What is California’s New Broadband Infrastructure Program and Why is it a Revolution?
California’s new broadband program approaches the problem on multiple fronts. It empowers local public entities, local private actors, and the state government itself to be the source of the solution. The state government will build open-access fiber capacity to all corners of the state. This will ensure that every community has multi-gigabit capacity available to suit their current and future broadband needs. Low-interest financing under the state’s new $750 million “Loan Loss Reserve” program will enable municipalities and county governments to issue broadband bonds to finance their own fiber. An additional $2 billion is available in grants for unserved pockets of the state for private and public applicants.
The combination of these three programs provides solutions that were off the table before the governor signed this law. For example, a rural community can finance a portion of its own fiber network with low-interest loans and bonds, seek grants for the most expensive unserved pockets, and connect with the state’s own fiber network at affordable prices. In a major city, a small private ISP or local school district can apply for a grant to provide broadband to an unserved low-income neighborhood. Even in high-tech cities such as San Francisco, an estimated 100,000 residents lack broadband access in low-income areas, proving that access is a widespread, systemic problem, not just a rural one, and that it requires an all-hands-on-deck approach.
The revolution here is the fact that the law does not rely on AT&T, Frontier Communications, Comcast, and Charter to solve the digital divide. Quite simply, the program makes very little of the total $6 billion budget available to these large private ISPs, who have already received so much money and still failed to deliver a solution. This is an essential first step towards reaching near-universal fiber access, because it was never going to happen through the large private ISPs, who are tethered to fast profits and short-term investor expectations that prevent them from pursuing universal fiber access. What the state needed was to empower local partners in the communities themselves who will take on the long-term infrastructure challenge.
If you live in California, now is the time to talk to your mayor and city council about your future broadband needs. Now is the time to talk to your local small businesses about the future the state has enabled if they need to improve their broadband connectivity. Now is the time to talk to your school district about what they can do to improve community infrastructure for local students. Maybe you yourself have the will and desire to build your own local broadband network through this law.
All of these things are now possible because for the first time in state history there is a law in place that lets you decide the broadband future.
Pegasus Project Shows the Need for Real Device Security, Accountability and Redress for those Facing State-Sponsored Malware
People all around the world deserve the right to have a private conversation. Communication privacy is a human right, a civil liberty and one of the centerpieces of a free society. And while we all deserve basic communications privacy, the journalists, NGO workers and human rights and democracy activists among us are especially at risk, since they are often at odds with powerful governments.
So it is no surprise that people around the world are angry to learn that surveillance software sold by NSO Group to governments has been found on cellphones worldwide. Thousands of NGO workers, human rights and democracy activists, along with government employees and many others, have been targeted and spied upon. That anger is justified, and we are thankful for the work done by Amnesty International, the countless journalists at Forbidden Stories, and Citizen Lab to bring this awful situation to light.
"A commitment to giving their own citizens strong security is the true test of a country’s commitment to cybersecurity."
Like many others, EFF has warned for years of the danger of the misuse of powerful state-sponsored malware. Yet the stories just keep coming about malware being used to surveil and track journalists and human rights defenders who are then murdered—including Jamal Khashoggi and Cecilio Pineda-Birto. And still we have failed to ensure real accountability for the governments and companies responsible.
What can be done to prevent this? How do we create accountability and ensure redress? It’s heartening that both South Africa and Germany have recently banned dragnet communications surveillance, in part because there was no way to protect the essential private communications of journalists and privileged communications of lawyers. All of us deserve privacy, but lawyers, journalists and human rights defenders are at special risk because of their often adversarial relationship with powerful governments. Of course, the dual-use nature of targeted surveillance like the malware that NSO sells is trickier, since it is allowable under human rights law when it is deployed under proper “necessary and proportionate” limits. But that doesn’t mean we are helpless. In fact, we have suggestions on both prevention and accountability.
First, and beyond question, we need real device security. While all software can be buggy and malware often takes advantage of those bugs, we can do much better. To do better, we need the full support of our governments. It’s just shameful that in 2021 the U.S. government as well as many foreign governments in the Five Eyes and elsewhere are more interested in their own easy, surreptitious access to our devices than they are in the actual security of our devices. A commitment to giving their own citizens strong security is the true test of a country’s commitment to cybersecurity. By this measure, the countries of the world, especially those who view themselves as leaders in cybersecurity, are currently failing.
It now seems painfully obvious that we need international cooperation in support of strong encryption and device security. Countries should be holding themselves and each other to account when they pressure device manufacturers to dumb down or back door our devices, and when they hoard zero days and other attacks rather than ensuring that those security holes are promptly fixed. We also need governments to hold each other to the “necessary and proportionate” requirement of international human rights law for evaluating surveillance, and these limits must apply whether that surveillance is done for law enforcement or national security purposes. And the US, EU, and others must put diplomatic pressure on the countries where these immoral spyware companies are headquartered to stop them from selling hacking gear to countries that use it to commit human rights abuses. At this point, many of these companies -- Cellebrite, NSO Group, and Candiru/Saitu -- are headquartered in Israel, and it’s time that both governments and civil society focus attention there.
Second, we can create real accountability by bringing laws and remedies around the world up to date to ensure that those impacted by state-sponsored malware have the ability to bring suit or otherwise obtain a remedy. Those who have been spied upon must be able to get redress from both the governments who do the illegal spying and the companies that knowingly provide them with the specific tools to do so. The companies whose good names are tarnished by this malware deserve to be able to stop it too. EFF has supported all of these efforts, but more is needed. Specifically:
We supported WhatsApp’s litigation against NSO Group to stop it from spoofing WhatsApp as a strategy for infecting unsuspecting victims. The Ninth Circuit is currently considering NSO’s appeal.
We sought direct accountability for foreign governments who spy on Americans in the U.S. in Kidane v. Ethiopia. We argued that foreign countries who install malware on Americans’ devices should be held to account, just as the U.S. government would be if it violated the Wiretap Act or any of the other many applicable laws. We were stymied by a cramped reading of the law in the D.C. Circuit -- the court wrongly decided that the fact that the malware was sent from Ethiopia rather than from inside the U.S. triggered sovereign immunity. That dangerous ruling should be corrected by other courts, or Congress should clarify that foreign governments don’t have a free pass to spy on people in America. NSO Group says that U.S. telephone numbers (those that start with +1) are not allowed to be tracked by its service, but Americans can and do have foreign-based telephones, and regardless, everyone in the world deserves human rights and redress. Countries around the world should step up to make sure their laws cover state-sponsored malware attacks that occur in their jurisdiction.
We also have supported those who are seeking accountability from companies directly, including the Chinese religious minority who have been targeted using a specially-built part of the Great Firewall of China created by American tech giant Cisco.
"The truth is, too many democratic or democratic-leaning countries are facilitating the spread of this malware because they want to be able to use it against their own enemies."
Third, we must increase the pressure on these companies to make sure they are not selling to repressive regimes, and continue naming and shaming those that do. EFF’s Know Your Customer framework is a good place to start, as was the State Department’s draft guidance (which apparently was never finalized). And these promises must have real teeth. Apparently we were right in 2019 that NSO Group’s unenforceable announcement that it was holding itself to the “highest standards of ethical business” was largely a toothless public relations move. Yet while NSO is rightfully on the hot seat now, it is not the only player in this immoral market. Companies that sell dangerous equipment of all kinds must take steps to understand and limit its misuse, and surveillance and malware tools sold to governments are no different.
Fourth, we support former United Nations Special Rapporteur for Freedom of Expression David Kaye in calling for a moratorium on the governmental use of these malware technologies. While this is a longshot, we agree that the long history of misuse, and the growing list of resulting extrajudicial killings of journalists and human rights defenders, along with other human rights abuses, justifies a full moratorium.
These are just the start of possible remedies and accountability strategies. Other approaches may be reasonable too, but each must recognize that, at least right now, the intelligence and law enforcement communities of many countries are not defining “cybersecurity” to include actually protecting us, much less the journalists and NGOs and activists that do the risky work to keep us informed and protect our rights. We also have to understand that unless done carefully, regulatory responses like further triggering U.S. export restrictions could result in less security for the rest of us while not really addressing the problem. The NSO Group was reportedly able to sell to the Saudi regime with the permission and encouragement of the Israeli government under that country’s export regime. The truth is, too many democratic or democratic-leaning countries are facilitating the spread of this malware because they want to be able to use it against their own enemies.
Until governments around the world get out of the way and actually support security for all of us, including accountability and redress for victims, these outrages will continue. Governments must recognize that intelligence agency and law enforcement hostility to device security is dangerous for their own citizens because a device cannot tell if the malware infecting it is from the good guys or the bad guys. This fact is just not going to go away.
We must have strong security at the start, and strong accountability after the fact if we want to get to a world where all of us can enjoy communications security. Only then will our journalists, human rights defenders and NGOs be able to do their work without fear of being tracked, watched and potentially murdered simply because they use a mobile device.
We’ve added one more day to EFF's summer membership drive! Over 900 supporters have answered the call to get the internet right by defending privacy, free speech, and innovation. It’s possible if you’re with us. Will you join EFF?
Through Wednesday, anyone can join EFF or renew their membership for as little as $20 and get a pack of issue-focused Digital Freedom Analog Postcards. Each one represents part of the fight for our digital future, from releasing free expression chokepoints to opposing biometric surveillance to compelling officials to be more transparent. We made this special-edition snail mail set to further connect you with friends or family, and to help boost the signal for a better future online—it's a team effort!
New and renewing members at the Copper level and above can also choose our Stay Golden t-shirt. It highlights your resilience through darkness and our power when we work together. And it's pretty darn fashionable, too.
Analog or digital—what matters is connection. Technology has undeniably become a significant piece of nearly all our communications, whether we are paying bills, working, accessing healthcare, or talking to loved ones. These familiar things require advanced security protocols, unrestricted access to an open web, and vigilant public advocacy. So if the internet is a portal to modern life, then our tech must also embrace civil liberties and human rights.
Boost the Signal & Free the Tubes
Why do you support internet freedom? You can advocate for a better online future just by connecting with the people around you. Here’s some sample language you can share with your circles:
Staying connected has never been more important. Help me support EFF and the fight for every tech user’s right to privacy, free speech, and digital access. https://eff.org/greetings
Twitter | Facebook | Email
It’s up to all of us to strengthen the best parts of the internet and create the future we want to live in. With people now coming of age only knowing a world connected to the web, EFF is using its decades of expertise in law and technology to stand up for the rights and freedoms that sustain modern democracy. Thank you for being part of this important work.
Support Online Rights For All
In an amicus brief filed Friday, EFF and the Internet Archive argued to the Ninth Circuit Court of Appeals that the Supreme Court’s recent decision in Van Buren v. United States shows that the federal computer crime law does not criminalize the common and useful practice of scraping publicly available information on the internet.
The case, hiQ Labs, Inc. v. LinkedIn Corp., began when LinkedIn attempted to stop its competitor, hiQ Labs, from scraping publicly available data posted by users of LinkedIn. hiQ Labs sued and, on appeal, the Ninth Circuit held that the Computer Fraud and Abuse Act (CFAA) does not prohibit this scraping.
LinkedIn asked the Supreme Court to reverse the decision. Instead, the high court sent the case back to the Ninth Circuit and asked it to take a second look, this time with the benefit of Van Buren.
Our brief points out that Van Buren instructed lower courts to use the “technical meanings” of the CFAA’s terms—not property law or generic, non-technical definitions. It’s a computer crime statute, after all. The CFAA prohibits accessing a computer “without authorization”—from a technical standpoint, that presumes there is an authorization system like a password requirement or other authentication stage.
But when any of the billions of internet users access any of the hundreds of millions of public websites, they do not risk violating federal law. There is no authentication stage between the user and the public website, so “without authorization” is an inapt concept. Van Buren used a “gates-up-or-down” analogy, and for a publicly available website, there is no gate to begin with—or at the very least, the gate is up. Our brief explains that neither LinkedIn’s cease-and-desist letter to hiQ nor its attempts to block its competitor’s IP addresses are the kind of technological access barrier required to invoke the CFAA.
Lastly, our brief acknowledges LinkedIn’s concerns about how unbridled scraping may harm privacy online and invites the company to join growing advocacy efforts to adopt consumer and biometric privacy laws. These laws will directly address the collection of people’s sensitive information without their consent and won’t criminalize legitimate activity online.
Related Cases: hiQ v. LinkedIn
Claiming that “right-wing voices are being censored,” Republican-led legislatures in Florida and Texas have introduced legislation to “end Big Tech censorship.” They say that the dominant tech platforms block legitimate speech without ever articulating their moderation policies, that they are slow to admit their mistakes, and that there is no meaningful due process for people who think the platforms got it wrong.
They’re right.
So is everyone else
But it’s not just conservatives who have their political speech blocked by social media giants. It’s Palestinians and other critics of Israel, including many Israelis. And it’s queer people, of course. We have a whole project tracking people who’ve been censored, blocked, downranked, suspended and terminated for their legitimate speech, from punk musicians to peanuts fans, historians to war crimes investigators, sex educators to Christian ministries.
The goat-rodeo
Content moderation is hard at any scale, but even so, the catalog of big platforms’ unforced errors makes for sorry reading. Experts who care about political diversity, harassment and inclusion came together in 2018 to draft the Santa Clara Principles on Transparency and Accountability in Content Moderation but the biggest platforms are still just winging it for the most part.
The Florida and Texas social media laws are deeply misguided and nakedly unconstitutional, but we get why people are fed up with Big Tech’s ongoing goat-rodeo of content moderation gaffes.
So what can we do about it?
Let’s start with talking about why platform censorship matters. In theory, if you don’t like the moderation policies at Facebook, you can quit and go to a rival, or start your own. In practice, it’s not that simple.
First of all, the internet’s “marketplace of ideas” is severely lopsided at the platform level, consisting of a single gargantuan service (Facebook), a handful of massive services (YouTube, Twitter, Reddit, TikTok, etc) and a constellation of plucky, struggling, endangered indieweb alternatives.
If none of the big platforms want you, you can try to strike out on your own. Setting up your own rival platform requires that you get cloud services, anti-DDoS, domain registration and DNS, payment processing and other essential infrastructure. Unfortunately, every one of these sectors has grown increasingly concentrated, and with just a handful of companies dominating every layer of the stack, there are plenty of weak links in the chain and if just one breaks, your service is at risk.
But even if you can set up your own service, you’ve still got a problem: everyone you want to talk about your disfavored ideas with is stuck in one of the Big Tech silos. Economists call this the “network effect,” when a service gets more valuable as more users join it. You join Facebook because your friends are there, and once you’re there, more of your friends join so they can talk to you.
Setting up your own service might get you a more nuanced and welcoming moderation environment, but it’s not going to do you much good if your people aren’t willing to give up access to all their friends, customers and communities by quitting Facebook and joining your nascent alternative, not least because there’s a limit to how many services you can be active on.
Network effects
If all you think about is network effects, then you might be tempted to think that we’ve arrived at the end of history, and that the internet was doomed to be a winner-take-all world of five giant websites filled with screenshots of text from the other four.
But not just network effects
But network effects aren’t the only idea from economics we need to pay attention to when it comes to the internet and free speech. Just as important is the idea of “switching costs,” the things you have to give up when you switch away from one of the big services - if you resign from Facebook, you lose access to everyone who isn’t willing to follow you to a better place.
Switching costs aren’t an inevitable feature of large communications systems. You can switch email providers and still connect with your friends; you can change cellular carriers without even having to tell your friends because you get to keep your phone number.
The high switching costs of Big Tech are there by design. Social media may make signing up as easy as a greased slide, but leaving is another story. It's like a roach motel: users check in but they’re not supposed to check out.
Interop vs. switching costs
Enter interoperability, the practice of designing new technologies that connect to existing ones. Interoperability is why you can access any website with any browser, and read Microsoft Office files using free/open software like LibreOffice, cloud software like Google Docs, or desktop software like Apple iWork.
An interoperable social media giant - one that allowed new services to connect to it - would bust open that roach motel. If you could leave Facebook but continue to connect with the friends, communities and customers who stayed behind, the decision to leave would be much simpler. If you don’t like Facebook’s rules (and who does?) you could go somewhere else and still reach the people that matter to you, without having to convince them that it’s time to make a move.
The ACCESS Act
That’s where laws like the proposed ACCESS Act come in. While not perfect, this proposal to force the Big Tech platforms to open up their walled gardens to privacy-respecting, consent-seeking third parties is a way forward for anyone who chafes against Big Tech’s moderation policies and their uneven, high-handed application.
Some tech platforms are already moving in that direction. Twitter says it wants to create an “app store for moderation,” with multiple services connecting to it, each offering different moderation options. We wish it well! Twitter is well-positioned to do this - it’s one tenth the size of Facebook and needs to find ways to grow.
But the biggest tech companies show no sign of voluntarily reducing their switching costs. The ACCESS Act is the most important interoperability proposal in the world, and it could be a game-changer for all internet users.
Save users' rights under Section 230, save the internet
Unfortunately for all of us, many of the people who don’t like Big Tech’s moderation think the way to fix it is to eliminate Section 230, a law that makes people who post illegal content responsible for their own speech, while allowing anyone who hosts expressive speech to remove offensive, harassing or otherwise objectionable content.
That means that conservative Twitter alternatives can delete floods of pornographic memes without being sued by their users. It means that online forums can allow survivors of workplace harassment to name their abusers without worrying about libel suits.
If hosting speech makes you liable for what your users say, then only the very biggest platforms can afford to operate, and then only by resorting to shoot-first/ask-questions-later automated takedown systems.
Kumbaya
There’s not much that the political left and right agree on these days, but there’s one subject that reliably crosses the political divide: frustration with monopolists’ clumsy handling of online speech.
For the first time, there’s a law before Congress that could make Big Tech more accountable and give internet users more control over speech and moderation policies. The promise of the ACCESS Act is an internet where if you don’t like a big platform’s moderation policies, if you think they’re too tolerant of abusers or too quick to kick someone off for getting too passionate during a debate, you can leave, and still stay connected to the people who matter to you.
Killing CDA 230 won’t fix Big Tech (if that was the case, Mark Zuckerberg wouldn’t be calling for CDA 230 reform). The ACCESS Act won’t either, by itself -- but by making Big Tech open up to new services that are accountable to their users, the ACCESS Act takes several steps in the right direction.
“You can record all you want. I just know it can’t be posted to YouTube,” said an Alameda County sheriff’s deputy to an activist. “I am playing my music so that you can’t post on YouTube.” The tactic didn’t work—the video of his statement can in fact, as of this writing, be viewed on YouTube. But it’s still a shocking attempt to thwart activists’ First Amendment right to record the police—and a practical demonstration that cops understand what too many policymakers do not: copyright can offer an easy way to shut down lawful expression.
This isn’t the first time this year this has happened. It’s not even the first time in California this year. Filming police is an invaluable tool, for basically anyone interacting with them. It can provide accountability and evidence of what occurred outside of what an officer says occurred. Given this country’s longstanding tendency to believe police officers’ word over almost anyone else’s, video of an interaction can go a long way to getting to the truth.
Very often, police officers would prefer not to be recorded, but there’s not much they can do about that legally, given strong First Amendment protections for the right to record. But some officers are trying to get around this reality by making it harder to share recordings on many video platforms: they play music so that copyright filters will flag the video as potentially infringing. Copyright allows these cops to brute force their way past the First Amendment.
Large rightsholders—the major studios and record labels—and their lobbyists have done a very good job of divorcing copyright from debates about speech. The debate over the merits of the Digital Millennium Copyright Act (DMCA) is cast as “artists versus Big Tech.” But we must not forget that, at its core, copyright is a restriction on, as well as an engine for, expression.
Many try to cast the DMCA just as a tool to protect the rights of artists, since in theory it is meant to stop infringement. But the law is also a tool that makes it incredibly simple to remove lawful speech from the internet. The fair use doctrine ensures that copyright can exist in harmony with the First Amendment. But often, the debate gets wrapped up in who has the right to make a living doing what kind of art, and it becomes easy to forget how mechanisms to enforce copyright can actually restrict lawful speech.
Forgetting all of this serves the purpose of those who advocate for the broader use of copyright filters on the internet. And where those filters are voluntarily deployed by companies, they replace a fair use analysis. So a filter that automatically blocks a video for playing a few seconds of a song becomes a useful tool for police officers who do not want to be subject to video-based accountability. What’s the harm in automating the identification and removal of things that have copyrighted material in them? The harm is that you are often removing lawful speech.
It’s as easy to play a song out of your phone as it is to film with it. Easier, even. And copyright filters work by checking if something in an uploaded video matches any of the copyrighted material in its database. A few seconds of a certain song in the audio of a video could prevent that video from being uploaded. That’s the thing the cops in these stories are recognizing. And while it’s funny to see a cop playing Taylor Swift and claiming we can’t watch a video on YouTube that we are actually watching on YouTube, how many of these stories aren’t we hearing about? We know, without a doubt, that YouTube’s filter, Content ID, is very sensitive to music. And some singers and companies have YouTube’s filter set to automatically remove, rather than just demonetize, uploads with parts of their songs in them. Since YouTube is so dominant when it comes to video sharing, knowing how to game Content ID can be very effective in silencing others.
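The matching described above can be illustrated with a toy sketch. This is not how Content ID actually works -- real filters use robust acoustic fingerprints that survive noise and re-encoding -- but it shows, under that simplifying assumption, why a clip containing only a few seconds of a song in a long recording is enough to trigger a match. All names here are illustrative.

```python
# Toy model of an audio-matching filter: hash fixed-size windows of the
# audio and flag an upload when enough window hashes match a reference
# track. Real systems use perceptual fingerprints, not exact hashes.

WINDOW = 4  # samples per fingerprint window (real systems use seconds of audio)

def fingerprints(samples):
    """Hash each aligned fixed-size window of the audio into a set."""
    return {
        hash(tuple(samples[i:i + WINDOW]))
        for i in range(0, len(samples) - WINDOW + 1, WINDOW)
    }

def is_flagged(upload, reference_db, threshold=2):
    """Flag the upload if enough of its windows match any reference track."""
    prints = fingerprints(upload)
    return any(len(prints & ref) >= threshold for ref in reference_db)

# Reference database: fingerprints of one copyrighted song.
song = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3]
db = [fingerprints(song)]

# A long recording that happens to contain a short excerpt of the song
# (e.g. music played near a camera) still gets flagged...
recording = [0] * 40 + song[:8] + [0] * 40
print(is_flagged(recording, db))          # prints True

# ...while an unrelated recording does not.
print(is_flagged([7, 7, 7, 7] * 10, db))  # prints False
```

Note that the filter never asks why the song is there: a protest video with a radio in the background and a pirated music upload look identical to it, which is exactly the gap the officers in these stories are exploiting.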
When a story like this gets press attention, the video at issue won’t disappear because everyone recognizes the importance of the speech at issue. Neither the platform nor the record label is going to take down the video of the cop playing Taylor Swift. But countless videos never make it past the filters, and so never get public attention. Many activists don’t know what to do about a copyright claim. They may not want to share their name and contact information, as is required for both DMCA counternotices and challenges to Content ID. Or, when faced with the labyrinthine structure of YouTube’s appeals system, they may just give up.
As the saying goes, we don’t know what we don’t know. Hopefully, these stories help others recognize and fight this devious tactic. If you have similar stories of police officers using this tactic, please let EFF know by emailing email@example.com.
It’s no longer science fiction or unreasonable paranoia. Now, it needs to be said: No, police must not be arming land-based robots or aerial drones. That’s true whether these mobile devices are remote controlled by a person or autonomously controlled by artificial intelligence, and whether the weapons are maximally lethal (like bullets) or less lethal (like tear gas).
Police currently deploy many different kinds of moving and task-performing technologies. These include flying drones, remote control bomb-defusing robots, and autonomous patrol robots. While these different devices serve different functions and operate differently, none of them--absolutely none of them--should be armed with any kind of weapon.
Mission creep is very real. Time and time again, technologies given to police to use only in the most extreme circumstances make their way onto streets during protests or to respond to petty crime. For example, cell site simulators (often called “Stingrays”) were developed for use in foreign battlefields, brought home in the name of fighting “terrorism,” then used by law enforcement to catch immigrants and a man who stole $57 worth of food. Likewise, police have targeted BLM protesters with face surveillance and Amazon Ring doorbell cameras.
Today, scientists are developing an AI-enhanced autonomous drone, designed to find people during natural disasters by locating their screams. How long until police use this technology to find protesters shouting chants? What if these autonomous drones were armed? We need a clear red line now: no armed police drones, period.
The Threat is Real
There are already law enforcement robots and drones of all shapes, sizes, and levels of autonomy patrolling the United States as we speak: autonomous Knightscope robots prowling for “suspicious behavior” and collecting images of license plates and phone identifying information, Boston Dynamics robotic dogs accompanying police on calls in New York or checking the temperature of unhoused people in Honolulu, and Predator surveillance drones flying over BLM protests in Minneapolis.
We are moving quickly towards arming such robots and letting autonomous artificial intelligence determine whether or not to pull the trigger.
According to a Wired report earlier this year, the U.S. Defense Advanced Research Projects Agency (DARPA) in 2020 hosted a test of autonomous robots to see how quickly they could react in a combat simulation and how much human guidance they would need. News of this test comes only weeks after the federal government’s National Security Commission on Artificial Intelligence recommended the United States not sign international agreements banning autonomous weapons. “It is neither feasible nor currently in the interests of the United States,” asserts the report, “to pursue a global prohibition of AI-enabled and autonomous weapon systems.”
In 2020, the Turkish military deployed Kargu, a fully autonomous armed drone, to hunt down and attack Libyan battlefield adversaries. Autonomous armed drones have also been deployed (though not necessarily used to attack people) by the Turkish military in Syria, and by the Azerbaijani military in Armenia. While we have yet to see autonomous armed robots or drones deployed in a domestic law enforcement context, wartime tools used abroad often find their way home.
The U.S. government has become increasingly reliant on armed drones abroad. Many police departments seem to purchase every expensive new toy that hits the market. The Dallas police have already killed someone by strapping a bomb to a remote-controlled bomb-disarming robot.
So activists, politicians, and technologists need to step in now, before it is too late. We cannot allow a lag between the development of this technology and the creation of policies governing whether police may buy, deploy, or use armed robots. Rather, we must ban police from arming robots, whether in the air or on the ground, whether automated or remotely controlled, whether lethal or less lethal, and in any other yet unimagined configuration.

No Autonomous Armed Police Robots
Whether they’re armed with a taser, a gun, or pepper spray, autonomous robots would make split-second decisions about taking a life or inflicting serious injury based solely on computer programs.
But police technologies malfunction all the time. For example, false positives are frequently generated by face recognition technology, audio gunshot detection, and automatic license plate readers. When this happens, the technology deploys armed police to a situation where they may not be needed, often leading to wrongful arrests and excessive force, especially against people of color erroneously identified as criminal suspects. If the malfunctioning police technology were armed and autonomous, that would create a far more dangerous situation for innocent civilians.
When, inevitably, a robot unjustifiably injures or kills someone, who will be held responsible? Holding police accountable for wrongfully killing civilians is already hard enough. In the case of a bad automated decision, who gets held responsible? The person who wrote the algorithm? The police department that deployed the robot?
Autonomous armed police robots might become one more way for police to skirt or redirect the blame for wrongdoing and avoid making any actual changes to how police function. Debate might bog down in whether to tweak the artificial intelligence guiding a killer robot’s decision making. Further, technology deployed by police is usually created and maintained by private corporations. A transparent investigation into a wrongful killing by an autonomous machine might be blocked by assertions of the company’s supposed need for trade secrecy in its proprietary technology, or by finger-pointing between police and the company. Meanwhile, nothing would be done to make people on the streets any safer.
MIT Professor and cofounder of the Future of Life Institute Max Tegmark told Wired that AI weapons should be “stigmatized and banned like biological weapons.” We agree. Although its mission is much more expansive than the concerns of this blog post, you can learn more about what activists have been doing around this issue by visiting the Campaign to Stop Killer Robots.
Even where police have remote control over armed drones and robots, the grave dangers to human rights are far too great. Police routinely over-deploy powerful new technologies in already over-policed Black, Latinx, and immigrant communities. Police also use them too often as part of the United States’ immigration enforcement regime, and to monitor protests and other First Amendment-protected activities. We can expect more of the same with any armed robots.
Moreover, armed police robots would probably increase the frequency of excessive force against suspects and bystanders. A police officer on the scene generally will have better information about unfolding dangers and opportunities to de-escalate, compared to an officer miles away looking at a laptop screen. In addition, a remote officer might have less empathy for the human target of mechanical violence.
Further, hackers will inevitably try to commandeer armed police robots. They have already succeeded at taking control of police surveillance cameras. The last thing we need is foreign governments or organized criminals seizing command of armed police robots and aiming them at innocent people.
Armed police robots are especially menacing at protests. The capabilities of police to conduct crowd control by force are already too great. Just look at how the New York City Police Department has had to pay out hundreds of thousands of dollars to settle a civil lawsuit concerning police using a Long Range Acoustic Device (LRAD) punitively against protestors. Police must never deploy taser-equipped robots or pepper-spray-spewing drones against a crowd. Armed robots would discourage people from attending protests. We must de-militarize our police, not further militarize them.
We need a flat-out ban on armed police robots, even if their use might at first appear reasonable in uncommon circumstances. In Dallas in 2016, police strapped a bomb to a bomb-disposal robot in order to kill a gunman hiding inside a parking garage who had already killed five police officers and shot seven others. Normalizing armed police robots poses too great a threat to the public to allow their use even in such extreme circumstances. Police have proven time and time again that technologies meant only for the most dire situations inevitably become commonplace, even at protests.

Conclusion
Whether controlled by an artificial intelligence or a remote human operator, armed police robots and drones pose an unacceptable threat to civilians. It’s exponentially harder to remove a technology from the hands of police than prevent it from being purchased and deployed in the first place. That’s why now is the time to push for legislation to ban police deployment of these technologies. The ongoing revolution in the field of robotics requires us to act now to prevent a new era of police violence.