Electronic Frontier Foundation

Make a Pledge for EFF Through CFC Today!

EFF - 1 hour 26 min ago

The pledge period for the Combined Federal Campaign (CFC) is underway and EFF needs your help! Last year, U.S. government employees raised over $19,000 for EFF through the CFC, helping us fight for privacy, free speech, and security on the Internet so that we can help create a better digital future.

The Combined Federal Campaign is the world's largest and most successful annual charity campaign for U.S. federal employees and retirees. Since its inception in 1961, the CFC fundraiser has raised more than $8.4 billion for local, national, and international charities. This year's campaign runs from September 21 to January 15, 2021. Be sure to make your pledge before the campaign ends!

U.S. government employees can give to EFF by going to GiveCFC.org and clicking DONATE to give via payroll deduction, credit/debit, or an e-check! Be sure to use our CFC ID #10437. You can also scan the QR code below! 

Even though EFF is celebrating its 30th anniversary, we're still fighting hard to protect online privacy, free expression, and innovation. We've taken significant steps toward Internet freedom, including leading the call to reject the EARN IT Act, which threatens to break encryption and undermine free speech online; tracking how COVID-19 is affecting digital rights around the world; and mobilizing over 800 nonprofits and 64,000 individuals to stop the sale of the entire .ORG registry to a private equity firm.

Government employees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Become an EFF member today by using our CFC ID #10437 when you make a pledge!

EFF Files Amicus Brief Arguing That Law Enforcement Access to Wi-Fi Derived Location Data Violates the Fourth Amendment

EFF - Wed, 10/28/2020 - 6:27pm

With increasing frequency, law enforcement is using unconstitutional digital dragnet searches to attempt to identify unknown suspects in criminal cases. In Commonwealth v. Dunkins, currently pending before the Pennsylvania Supreme Court, EFF and the ACLU are challenging a new type of dragnet: law enforcement’s use of WiFi data to retrospectively track individuals’ precise physical location.

Phones, computers, and tablets connect to WiFi networks—and in turn, the Internet—through a physical access point. Since a single access point can only service a limited number of devices within a certain range, WiFi networks that have many users and cover larger geographic areas have multiple stationary access points. When a device owner moves through a WiFi network with multiple access points, their device seamlessly switches to the nearest available point. This means that an access point can serve as a proxy for a device owner’s physical location. As an access point records a unique identifier for each device that connects to it, along with the time the device connected, access point logs can reveal a device’s precise location over time.
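
To make that concrete, here is a minimal sketch (not taken from the case record) of how access point connection logs can be turned into a location trail. The log format, field names, and values are assumptions for illustration only.

```python
# Hypothetical access point log entries: (device identifier, access point, timestamp).
# The identifiers, access point names, and times below are invented.
from collections import defaultdict

ap_logs = [
    ("aa:bb:cc:dd:ee:01", "dorm-3rd-floor-east", "2020-02-02T02:14:00"),
    ("aa:bb:cc:dd:ee:01", "dorm-2nd-floor-west", "2020-02-02T02:31:00"),
    ("aa:bb:cc:dd:ee:02", "library-lobby",       "2020-02-02T02:20:00"),
]

def location_trails(logs):
    """Group connection records by device and sort them by time, yielding a
    per-device sequence of (timestamp, access point) -- a proxy for where the
    device's owner was at each moment."""
    trails = defaultdict(list)
    for device, access_point, timestamp in logs:
        trails[device].append((timestamp, access_point))
    return {device: sorted(events) for device, events in trails.items()}

for device, events in location_trails(ap_logs).items():
    print(device, events)
```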

In Dunkins, police were investigating a robbery that occurred in the middle of the night in a dorm at Moravian College in eastern Pennsylvania. To identify a suspect, police obtained logs of every device that connected to the 80-90 access points in the dorm—about one access point for every other dorm room—around the time of the robbery. From there, police identified devices belonging to several dozen students. They then narrowed their list to include only non-residents. That produced a list of three devices: two appeared to belong to women and one appeared to belong to a man who later turned out to be Dunkins. Since police believed the suspect was a man, they focused their investigation on that device. They then obtained records of Dunkins’ phone for five hours on the night of the robbery, showing each WiFi access point on campus that his phone connected to during that time. Dunkins was ultimately charged with the crime. 

We argued in our brief that searches like this violate the Fourth Amendment. The WiFi log data can reveal sensitive location information, so it is essentially identical to the cell phone location records that the Supreme Court ruled in Carpenter require a warrant. Just like cell phone records, the WiFi logs offered the police the ability to retrospectively track a person’s movement, including inside constitutionally protected spaces like students’ dorm rooms. And just as the Carpenter court recognized that cell phones are essential for participation in modern life, accessing a college WiFi network is equally indispensable to college life. 

Additionally, we argued that even if police had obtained a warrant, such a warrant would be invalid. The Fourth Amendment requires law enforcement to obtain a warrant based on probable cause before searching a particular target. But in this case, police only knew that a crime occurred—they did not have a suspect or even a target device identifier. Assessing virtually the same situation in the context of a geofence warrant, two federal judges recently ruled that the government’s application to obtain location records from a certain place during a specific time period failed to satisfy the Fourth Amendment’s particularity and probable cause requirements. 

The police’s tactics in this case illustrate exactly why indiscriminate searches are a threat to a free society. In acquiring and analyzing the records of everyone in the dorm, the police not only violated the defendant’s rights but also improperly learned the location of every student who was in the dormitory in the middle of the night. In particular, police determined that two women wholly unconnected to the robbery were not in their own dorm rooms on the night of the crime. That’s exactly the type of dragnet surveillance the Fourth Amendment protects against. 

The outcome of this case could have far-reaching consequences. In Pennsylvania and across the nation, public WiFi networks are everywhere. And for poor people and people of color, free public WiFi is often a crucial lifeline. Those communities should not be at a greater risk of surveillance than people who have the means to set up their own private networks. We hope the court will realize what’s at stake here and rule that these types of warrantless searches are illegal.

Related Cases: Carpenter v. United States

The Last Smash and Grab at the Federal Communications Commission

EFF - Tue, 10/27/2020 - 6:02pm

AT&T and Verizon have secured arguably one of their biggest regulatory benefits from the Federal Communications Commission (FCC), with the agency ending the last remnants of telecom competition law. In return for this massive gift from the federal government, they will give the public absolutely nothing. 

A Little Bit of Telecom History 

When the Department of Justice successfully broke up the AT&T monopoly into regional companies, it needed Congress to pass a law to open up those regional companies (known as Incumbent Local Exchange Carriers, or ILECs) to competition. To do that, Congress passed the Telecommunications Act of 1996, which established bedrock competition law and reaffirmed the non-discrimination policies that net neutrality is based on. The law created a new industry that would interoperate with the ILECs. These companies, called Competitive Local Exchange Carriers (CLECs), already existed locally at the time: many were selling early dial-up Internet access over AT&T telephone lines along with local phone services. As broadband came to market, CLECs used the copper wires of ILECs to sell competitive DSL services. In the early years the policy worked and competition sprang forth, with thousands of new companies and a massive new wave of competition-driven investment in the telecom sector in general (see chart below).

Source: Data assembled by the California Public Utilities Commission in 2005

Congress intervened to create the competitive market through the FCC, but at the same time gave the FCC the keys (through a process called “forbearance”) to eliminate the regulatory interventions should competition take root on its own. The FCC began applying forbearance just a few years after the passage of the ‘96 Act, with arguably its most significant decision coming in 2005, when it ruled that the fiber wires being deployed by ILECs (with the entry of Verizon FiOS) did not have to be shared with CLECs the way copper wires were. Many states followed course, though with notable resistance, because it was questionable whether future networks could meet the goals of affordability and universality through market forces alone, without strong competition policy. One California Public Utilities Commissioner expressed concern that “within a few years there may not be any ZIP codes left in California with more than five or six providers.”

What Little Competition Remains Today

The ILECs today, essentially AT&T and Verizon, no longer deploy fiber broadband and have abandoned competition with cable companies to pursue wireless services. Without access to ILEC fiber, CLECs began to deploy their own fiber networks, financed with the revenues they earned from copper DSL customers, even in rural markets. But for the last two years AT&T and Verizon have been trying to put a stop to that by asking the FCC to use its forbearance power to eliminate copper sharing as well, which they achieved today. Worse yet, AT&T is already positioning itself to abandon its copper DSL lines rather than upgrade them to fiber in markets across the country, leaving people with a cable monopoly or undependable cell phone service for Internet access. In other words, with today’s decision the FCC will effectively make the digital divide worse for hundreds of thousands of Americans. 

Broadband competition has been in a long, slow decline for well over a decade since the FCC’s 2005 decision. In the years that followed, the industry rapidly consolidated, with smaller companies being snuffed out or closing up shop. Virtually every prediction the FCC made about the market in 2005 has failed to pan out, and the end result is that a huge number of Americans now face regional monopolies just as their need for high-speed access has grown dramatically during the pandemic. It is time to rethink the approach, but today we got a double down instead. 

The FCC’s Decision is Not Final and a Future FCC Can Chart a New Course

The decline of competition didn’t happen exclusively at the hands of the current FCC, but the signs of regional monopolization were obvious at the start of 2017, when the agency decided to abandon its authority over the industry and repeal net neutrality rules. By approving today’s AT&T/Verizon petition to end the 1996 Act’s final remnants of competition policy, rather than improving and modernizing them to promote competition in high-speed access, the FCC has decided that big ISPs know best. But we can see what is going to happen next. AT&T will work hard to eliminate any small competitors on its copper lines, because those competitors impair its ability to retire the copper. All the while, AT&T itself will not provide a fiber replacement, worsening the digital divide across the country. The right policy would make sure every American gets a fiber line rather than being disconnected. None of this has to happen, though, and EFF will work hard in the states and in D.C. to bring back competition in broadband access.

Facebook’s Election-Week War on Accountability is Wrong, Wrong, Wrong

EFF - Tue, 10/27/2020 - 10:17am

A legacy of the 2016 U.S. election is the controversy about the role played by paid, targeted political ads, particularly ads that contain disinformation or misinformation. Political scientists and psychologists disagree about how these ads work, and what effect they have. It's a pressing political question, especially on the eve of another U.S. presidential race, and the urgency only rises abroad, where acts of horrific genocide have been traced to targeted social media disinformation campaigns.

The same factors that make targeted political ads tempting to bad actors and dirty tricksters are behind much of the controversy. Ad-targeting, by its very nature, is opaque. The roadside billboard bearing a politician's controversial slogan can be pointed at and debated by all. Targeted ads can show different messages to different users, making it possible for politicians to "say the quiet part out loud" without their most extreme messaging automatically coming to light. Without being able to see the ads, we can't properly debate their effect.

Enter Ad Observatory, a project of the NYU Online Transparency Project, at the university's engineering school. Ad Observatory recruits Facebook users to shed light on political (and other) advertising by running a browser plugin that "scrapes" (makes a copy of) the ads they see when using Facebook. These ads are collected by the university and analyzed by the project's academic researchers; they also make these ads available for third party scrutiny. The project has been a keystone of many important studies and the work of accountability journalists.

With the election only days away, the work of the Ad Observatory is especially urgent. Facebook publishes its own “Ad Library," but the NYU researchers explain that the company’s data set is “complicated to use, untold numbers of political ads are missing, and a significant element is lacking: how advertisers choose which specific demographics and groups of people should see their ad—and who shouldn't.” They have cataloged many instances in which Facebook has failed to live up to its promises to clearly label ads and fight disinformation.

But rather than embrace the Ad Observatory as a partner that rectifies the limitations of its own systems, Facebook has sent a legal threat to the university, demanding that the project shut down and delete the data it has already collected. Facebook’s position is that collecting data using "automated means" (including scraping) is a violation of its Terms of Service, even when the NYU Ad Observatory is acting on behalf of Facebook's own users, and even in furtherance of the urgent mission of fighting political disinformation during an election that U.S. politicians call the most consequential and contested in living memory.

Facebook’s threats are especially chilling because of its history of enforcing its terms of service using the Computer Fraud and Abuse Act (CFAA). The CFAA makes it a federal crime to access a computer connected to the Internet “without authorization," but it fails to define these terms. It was passed with the aim of outlawing computer break-ins, but some jurisdictions have converted it into a tool to enforce private companies’ computer use policies, like terms of service, which are typically wordy, one-sided contracts that virtually no one reads.

In fact, Facebook is largely responsible for creating terrible legal precedent on scraping and the CFAA in a 2016 Ninth Circuit Court of Appeals decision called Facebook v. Power Ventures. The case involved a dispute between Facebook and a social media aggregator, which Facebook users had voluntarily signed up for. Facebook did not want its users engaging with this service, so it sent Power Ventures a cease and desist letter alleging a violation of its terms of service and tried to block Power Ventures’ IP address. Even though the Ninth Circuit had previously decided that a violation of terms of service alone was not a CFAA violation, the court found that Power Ventures did violate the CFAA when it continued to provide its services after receiving the cease and desist letter. So the Power Ventures decision allows platforms to not only police their platforms against any terms of service violations they deem objectionable, but to turn even minor transgressions against a one-sided contract of adhesion into a violation of federal law that carries potentially serious civil and criminal liability.

More recently, the Ninth Circuit limited the scope of Power Ventures somewhat in HiQ v. LinkedIn. The court clarified that scraping public websites cannot be a CFAA violation regardless of personalized cease and desist letters sent to scrapers. However, that still leaves any material you have to log in to see—like most posts on Facebook—off limits to scraping if the platform decides it doesn’t like the scraper.

Decisions like Power Ventures potentially give Facebook a veto over a wide swath of beneficial outside research. That’s such a problem that some lawyers have argued interpreting the CFAA to criminalize terms of service violations would actually be unconstitutional. And at least one court has taken those concerns to heart, ruling in Sandvig v. Barr that the CFAA did not bar researchers who wanted to create multiple Facebook “tester" accounts to research how algorithms unlawfully discriminate based on characteristics like race or gender. The Sandvig decision should be a warning to Facebook that shutting down important civic research like the Ad Observatory is a serious misuse of the CFAA, which might even violate the First Amendment.

Over the weekend, Facebook executive Rob Leathern posted the official rationale for the multinational company's attack on a public university's researchers: he claimed that "Collecting personal data via scraping tools is an industry-wide problem that’s bad for people’s privacy & unsafe regardless of who is doing it. We protect people's privacy by not only prohibiting unauthorized scraping in our terms, we have teams dedicated to finding and preventing it. And under our agreement with the FTC, we report violations like these as privacy incidents...[W]e want to make sure that providing more transparency doesn't come at the cost of privacy."

Leathern is making a critical mistake here: he is conflating secrecy with privacy. Secrecy is when you (and possibly a few others) know something that everyone else does not get to know. Privacy is when you get to decide who knows what about you. As Facebook's excellent white paper on the subject explains: "What you share and who you share it with should be your decision."

Leathern's blanket condemnation of scraping is just as disturbing as his misunderstanding of privacy. Scraping is a critical piece of competitive compatibility, the process whereby new products and services are designed to work with existing ones without cooperation from the companies that made those services. Scraping is a powerful pro-competitive move that allows users and the companies that serve them to overturn the dominance of monopolists (that’s why it was key to forcing U.S. banks to adopt standards that let their customers manage their accounts in their own way). In the end, scraping is just an automated way of copying and pasting: the information that is extracted by the Ad Observer plugins is the same data that Mr. Leathern’s users could manually copy and paste into the Ad Observatory databases.
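
To see why scraping is functionally copy-and-paste, here is a toy sketch. It is not the Ad Observer plugin's actual code (the plugin is a browser extension); the HTML snippet and the "sponsored" markup are invented for illustration.

```python
# Toy example: programmatically copying out the sponsored posts a user's
# browser has already received -- the same text the user could select and
# paste by hand. Requires the third-party beautifulsoup4 package.
from bs4 import BeautifulSoup

page_html = """
<div class="feed">
  <div class="post sponsored">
    <p>Vote for Candidate X!</p>
    <span class="paid-for">Paid for by Example PAC</span>
  </div>
  <div class="post"><p>A friend's vacation photos</p></div>
</div>
"""

soup = BeautifulSoup(page_html, "html.parser")
for ad in soup.select("div.post.sponsored"):
    # Extract only the ad text and disclaimer, nothing about other users.
    print(ad.get_text(" ", strip=True))
```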

As with so many technological questions, the ethics of scraping depend on much more than what the drafter of any terms of service thinks is in its own best interest.

Facebook is very wrong here.

First, they are wrong on the law. The Computer Fraud and Abuse Act should not be on their side. Violating the company's terms of service to perform a constitutionally protected watchdog role is lawful.

Second, they are wrong on the ethics. There is no privacy benefit to users in prohibiting them from choosing to share the political ads they are served with researchers and journalists.

Finally, they are wrong on the facts. Mr. Leathern's follow-up tweets claim that the Ad Observatory's plugins collect "data about friends or others who view the ads." That is a statement with no apparent factual basis, as is made abundantly clear from the project's FAQ and privacy policy.

That's a lot of wrong. And worse, it's a lot of wrong on the eve of an historic election, a wrongness that will chill other projects contemplating their own accountability investigations into dominant tech platforms.

Defending Fair Use in the Omegaverse

EFF - Mon, 10/26/2020 - 8:02pm

Copyright law is supposed to promote creativity, not stamp out criticism. Too often, copyright owners forget that – especially when they have a convenient takedown tool like the Digital Millennium Copyright Act (DMCA).

EFF is happy to remind them – as we did this month on behalf of Internet creator Lindsay Ellis. Ellis had posted a video about a copyright dispute between authors in a very particular fandom niche: the Omegaverse realm of wolf-kink erotica. The video tells the story of that dispute in gory and hilarious detail, while breaking down the legal issues and proceedings along the way. Techdirt called it “truly amazing.” We agree. But feel free to watch “Into the Omegaverse: How a Fanfic Trope Landed in Federal Court,” and decide for yourself.

The dispute described in the video began with a series of takedown notices to online platforms making highly dubious allegations of copyright infringement. According to these notices, one Omegaverse author, Zoey Ellis (no relation), had infringed the copyright of another, Addison Cain, by copying common thematic aspects of characters in the Omegaverse genre, i.e., tropes. As Ellis’ video explains, these themes not only predate Cain’s works, but are uncopyrightable as a matter of law. Further litigation ensued, and Ellis’ video explains what happened and the opinions she formed based on the publicly available records of those proceedings. Some of those opinions are scathingly critical of Ms. Cain. But the First Amendment protects scathing criticism. So does copyright law: criticism and parody are classic examples of fair use authorized by law. Still, as we have written many times, DMCA abuse targeting such fair uses remains a pervasive and persistent problem. 

Nevertheless, it didn’t take long for Cain to send (through counsel) outlandish allegations of copyright infringement and defamation. Soon after, Patreon and YouTube received DMCA notices from email addresses associated with Cain that raised the same allegations.

That’s when EFF stepped in. The video is a classic fair use. It uses a relatively small amount of a copyrighted work for purposes of criticism and parody in an hour-long video that consists overwhelmingly of Ellis’ original content. In short, the copyright claims were deficient as a matter of law. 

The defamation claims were also deficient, but their presence alone was cause for concern: defamation claims have no place in a DMCA notice.  If you want defamatory content taken down, you should seek a court order and satisfy the First Amendment’s rigorous requirements. Platform providers should be extremely skeptical of DMCA notices that include such claims and examine them carefully.

We explained these points in a letter to Cain’s counsel. We hoped the reminder would be well-taken. We were wrong. In response, Cain’s counsel accused EFF of colluding with the Organization for Transformative Works to undermine her client and demanded apologies from both EFF and Ellis.

As we explain in today’s response, that’s not going to happen. EFF has fought for years to protect the rights of content creators like Lindsay Ellis and we will not apologize for our commitment to this work. Nor will Ellis apologize for exercising her right to speak critically about public figures and their work. It’s past time to put an end to this entire matter.

 

Content Moderation and the U.S. Election: What to Ask, What to Demand

EFF - Mon, 10/26/2020 - 7:41pm

With the upcoming U.S. elections, major U.S.-based platforms have stepped up their content moderation practices, likely hoping to avoid the blame heaped upon them after the 2016 election, when many held them responsible for siloing users into ideological bubbles—and, in Facebook’s case, for the Cambridge Analytica imbroglio. It’s not clear that social media played a more significant role than many other factors, including traditional media. But the techlash is real enough.

So we can’t blame them for trying, nor can we blame users for asking them to. Online disinformation is a problem that has had real consequences in the U.S. and all over the world—it has been correlated to ethnic violence in Myanmar and India and to Kenya’s 2017 elections, among other events.

But it is equally true that content moderation is a fundamentally broken system. It is inconsistent and confusing, and as layer upon layer of policy is added to a system that employs both human moderators and automated technologies, it is increasingly error-prone. Even well-meaning efforts to control misinformation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression.

We have been watching closely as Facebook, YouTube, and Twitter, while disclaiming any interest in being “the arbiters of truth,” have all adjusted their policies over the past several months to try to arbitrate lies—or at least flag them. And we’re worried, especially when we look abroad. Already this year, an attempt by Facebook to counter election misinformation targeting Tunisia, Togo, Côte d’Ivoire, and seven other African countries resulted in the accidental removal of accounts belonging to dozens of Tunisian journalists and activists, some of whom had used the platform during the country’s 2011 revolution. While some of those users’ accounts were restored, others—mostly belonging to artists—were not.

Back in the U.S., Twitter recently blocked a New York Post article about presidential candidate Joe Biden’s son on the grounds that it was based on hacked materials, then lifted the block two days later. After placing limits on political advertising in early September, Facebook promised not to change its policies further ahead of the elections. Three weeks later it announced changes to its political advertising policies and then blocked a range of expression it had previously permitted. In both cases, users—especially users who care about their ability to share and access political information—are left to wonder what might be blocked next.

Given the ever-changing moderation landscape, it’s hard to keep up. But there are some questions users and platforms can ask about every new iteration, whether or not an election is looming. Not coincidentally, many of these overlap with the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of practices created by EFF and a small group of organizations and advocates that social media platforms should undertake to provide transparency about why and how often they take down users’ posts, photos, videos, and other content.

Is the Approach Narrowly Tailored or a Categorical Ban?

Outright censorship should not be the only answer to disinformation online. When tech companies ban an entire category of content, they have a history of overcorrecting and censoring accurate, useful speech—or, even worse, reinforcing misinformation. Any restrictions on speech should be both necessary and proportionate.

Moreover, online platforms have other ways to address the rapid spread of disinformation. For example, flagging or fact-checking content that may be of concern carries its own problems–again, it means someone—or some machine—has decided what does and does not require further review, and who is and is not an accurate fact-checker. Nonetheless, this approach has the benefit of leaving speech available for those who wish to receive it.

When a company does adopt a categorical ban, we should ask: Can the company explain what makes that category exceptional? Are the rules to define its boundaries clear and predictable, and are they backed up by consistent data? Under what conditions will other speech that challenges established consensus be removed?  Who decides what does or does not qualify as “misleading” or “inaccurate”? Who is tasked with testing and validating the potential bias of those decisions?

Does It Empower Users?

Platforms must address one of the root causes behind disinformation’s spread online: the algorithms that decide what content users see and when. And they should start by empowering users with more individualized tools that let them understand and control the information they see.

Algorithms used by Facebook’s Newsfeed or Twitter’s timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary content, despite the efforts of platforms like Facebook to tweak their algorithms to “disincentivize” or “downrank” it. Features designed to help people find content they’ll like can too easily funnel them into a rabbit hole of disinformation.

Users shouldn’t be held hostage to a platform’s proprietary algorithm. Instead of serving everyone “one algorithm to rule them all” and giving users just a few opportunities to tweak it, platforms should open up their APIs to allow users to create their own filtering rules for their own algorithms. News outlets, educational institutions, community groups, and individuals should all be able to create their own feeds, allowing users to choose who they trust to curate their information and share their preferences with their communities.
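
As a rough illustration of what user-authored filtering could look like, here is a hypothetical sketch. No platform currently exposes its feed this way; the item fields, trusted-source list, and ranking rule are all invented.

```python
# Hypothetical: a platform exposes raw feed items over an API, and the user
# (or a community they trust) applies their own rules instead of the
# platform's engagement-driven ranking. All data below is invented.
raw_feed = [
    {"id": 1, "source": "local-news.example", "topic": "election", "engagement": 950},
    {"id": 2, "source": "unverified.example", "topic": "election", "engagement": 12000},
    {"id": 3, "source": "friend",             "topic": "personal", "engagement": 40},
]

trusted_sources = {"local-news.example", "friend"}  # chosen by the user

def my_feed(items):
    """Keep only items from sources the user trusts, and rank by lowest
    engagement first -- the opposite of an amplification-driven default."""
    chosen = [item for item in items if item["source"] in trusted_sources]
    return sorted(chosen, key=lambda item: item["engagement"])

for item in my_feed(raw_feed):
    print(item["id"], item["source"], item["topic"])
```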

In addition, platforms should examine the parts of their infrastructure that are acting as a megaphone for dangerous content and address that root cause of the problem rather than censoring users.

During an election season, the mistaken deletion of accurate information and commentary can have outsize consequences. Absent exigent circumstances, companies must notify the user and give them an opportunity to appeal before the content is taken down. If they choose to appeal, the content should stay up until the question is resolved. Smaller platforms dedicated to serving specific communities may want to take a more aggressive approach. That’s fine, as long as Internet users have a range of meaningful options with which to engage.

Is It Transparent?

The most important parts of the puzzle here are transparency and openness. Transparency about how a platform’s algorithms work, and tools to allow users to open up and create their own feeds, are critical for wider understanding of algorithmic curation, the kind of content it can incentivize, and the consequences it can have.

In other words, actual transparency should allow outsiders to see and understand what actions are performed, and why. Meaningful transparency inherently implies openness and accountability, and cannot be satisfied by simply counting takedowns. That is to say that there is a difference between corporately sanctioned ‘transparency,’ which is inherently limited, and meaningful transparency that empowers users to understand Facebook’s actions and hold the company accountable.

Is the Policy Consistent With Human Rights Principles?

Companies should align their policies with human rights norms. In a paper published last year, David Kaye—the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression—recommends that companies adopt policies that allow users to “develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law.” We agree, and we’re joined in that opinion by a growing international coalition of civil liberties and human rights organizations.

Content Moderation Is No Silver Bullet

We shouldn’t look to content moderators to fix problems that properly lie with flaws in the electoral system. You can’t tech your way out of problems the tech didn’t create. And even where content moderation has a role to play, history tells us to be wary. Content moderation at scale is impossible to do perfectly, and nearly impossible to do well, even under the most transparent, sensible, and fair conditions – which is one of many reasons none of these policy choices should be legal requirements. It inevitably involves difficult line-drawing and will be riddled with both mistakes and a ton of decisions that many users will disagree with. However, there are clear opportunities to make improvements and it is far past time for platforms to put these into practice.  

Why Getting Paid for Your Data Is a Bad Deal

EFF - Mon, 10/26/2020 - 2:42pm

One bad privacy idea that won’t die is the so-called “data dividend,” which imagines a world where companies have to pay you in order to use your data.

Sound too good to be true? It is.

Let’s be clear: getting paid for your data—probably no more than a handful of dollars at most—isn’t going to fix what’s wrong with privacy today. Yes, a data dividend may sound at first blush like a way to get some extra money and stick it to tech companies. But that line of thinking is misguided, and falls apart quickly when applied to the reality of privacy today. In truth, the data dividend scheme hurts consumers, benefits companies, and frames privacy as a commodity rather than a right.

EFF strongly opposes data dividends and policies that lay the groundwork for people to think of the monetary value of their data rather than view it as a fundamental right. You wouldn’t place a price tag on your freedom to speak. We shouldn’t place one on our privacy, either.

Think You’re Sticking It to Big Tech? Think Again

Supporters of data dividends correctly recognize one thing: when it comes to privacy in the United States, the companies that collect information currently hold far more power than the individual consumers continually tapped for that information.

But data dividends do not meaningfully correct that imbalance. Here are three questions to help consider the likely outcomes of a data dividend policy:

  • Who will determine how much you get paid to trade away your privacy?
  • What makes your data valuable to companies?
  • What does the average person gain from a data dividend, and what do they lose?

Data dividend plans are thin on details regarding who will set the value of data. Logically, however, companies have the most information about the value they can extract from our data. They also have a vested interest in setting that value as low as possible. Legislation in Oregon to value health data would have allowed companies to set that value, leaving little chance that consumers would get anywhere near a fair shake. Even if a third party, such as a government panel, were tasked with setting a value, the companies would still be the primary sources of information about how they plan to monetize data.

Which brings us to a second question: why and in what ways do companies value data? Data is the lifeblood of many industries. Some of that data is organized by consumer and then used to deliver targeted ads. But it’s also highly valuable to companies in the aggregate—not necessarily on an individual basis. That’s one reason why data collection can often be so voracious. A principal point of collecting data is to identify trends—to sell ads, to predict behavior, etc.— and it’s hard to do that without getting a lot of information. Thus, any valuation that focuses solely on individualized data, to the exclusion of aggregate data, will be woefully inadequate. This is another reason why individuals aren’t well-positioned to advocate for good prices for themselves.

Even for companies that make a lot of money, the average revenue per user may be quite small. For example, Facebook earned some $69 billion in revenue in 2019. For the year, it averaged about $7 revenue per user, globally, per quarter. Let’s say that again: Facebook is a massive, global company with billions of users, but each user only offers Facebook a modest amount in revenue. Profit per user will be much smaller, so there is no possibility that legislation will require companies to make payouts on a revenue-per-customer basis. As a result, the likely outcome of a data dividend law (even as applied to an extremely profitable company like Facebook) would be that each user receives, in exchange for their personal information over the course of an entire year, a very small piece of the pie—perhaps just a few dollars.
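
For readers who want the back-of-the-envelope arithmetic: the revenue figure comes from the paragraph above, while the roughly 2.5 billion monthly users is an outside estimate of Facebook's 2019 user base, used here only for illustration.

```python
# Rough per-user arithmetic. The $69B annual revenue is from the text above;
# the ~2.5 billion monthly users is an assumed 2019 figure for illustration.
annual_revenue = 69e9      # dollars, 2019
monthly_users = 2.5e9      # approximate global users (assumption)

per_user_year = annual_revenue / monthly_users
per_user_quarter = per_user_year / 4

print(f"Revenue per user per year:    ${per_user_year:.2f}")     # ~ $27.60
print(f"Revenue per user per quarter: ${per_user_quarter:.2f}")  # ~ $6.90
# Profit per user -- and thus any plausible "dividend" check -- would be smaller still.
```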

Those small checks in exchange for intimate details about you are not a fairer trade than we have now. The companies would still have nearly unlimited power to do what they want with your data. That would be a bargain for the companies, who could then wipe their hands of concerns about privacy. But it would leave users in the lurch.

All that adds up to a stark conclusion: if where we’ve been is any indication of where we’re going, there won’t be much benefit from a data dividend. What we really need are stronger privacy laws that protect how businesses process our data—something we can, and should, do as a separate and more protective measure.

Whatever the Payout, The Cost Is Too High

And what do we lose by agreeing to a data dividend? We stand to lose a lot. Data dividends will likely be most attractive to those for whom even a small bit of extra money would do a lot. Those vulnerable people—low-income Americans and often communities of color—should not be incentivized to pour more data into a system that already exploits them and uses data to discriminate against them. Privacy is a human right, not a commodity. A system of data dividends would contribute to a society of privacy “haves” and “have-nots.”

Also, as we’ve said before, a specific piece of information can be priceless to a particular person and yet command a very low market price. Public information feeds a lot of the data ecosystem. But even non-public data, such as your location data, may cost a company less than a penny to buy—and cost you your physical safety if it falls into the wrong hands. Likewise, companies currently sell lists of 1,000 people with conditions such as anorexia, depression, and erectile dysfunction for $79 per list—or eight cents per listed person. Such information in the wrong hands could cause great harm.

There is no simple way to set a value for data. If someone asked how much they should pay you to identify where you went to high school, you’d probably give that up for free. But if a mortgage company uses that same data to infer that you’re in a population that is less likely to repay a mortgage—as a Berkeley study found was true for Black and Latinx applicants—it could cost you the chance to buy a home.

Pay-For-Privacy

Those who follow our work know that EFF also opposes “pay-for-privacy” schemes, referring to offers from a company to give you a discount on a good or service in exchange for letting them collect your information.

In a recent example of this, AT&T said it will introduce mobile plans that knock between $5 and $10 off people’s phone bills if they agree to watch more targeted ads on their phone.  "I believe there's a segment of our customer base where, given a choice, they would take some load of advertising for a $5 or $10 reduction in their mobile bill," AT&T Chief Executive Officer John Stankey said to Reuters in September.

Again, there are people for whom $5 or $10 per month would go a long way to make ends meet. That also means, functionally, that similar plans would prey on those who can’t afford to protect themselves. We should be enacting privacy policies that protect everyone, not exploitative schemes that treat lower-income people as second-class citizens.

Pay-for-privacy and data dividends are two sides of the same coin. Some data dividend proponents, such as former presidential candidate Andrew Yang, draw a direct line between the two. Once you recognize that data have some set monetary value, as schemes such as AT&T’s do, it paves the way for data dividends. EFF opposes both of these ideas, as both would lead to an exchange of data that would endanger people and commodify privacy.

It Doesn’t Have to Be Like This

Advocacy of a data dividend—or pay-for-privacy—as the solution to our privacy woes admits defeat. It yields to the incorrect notion that privacy is dead, and is worth no more than a coin flipped your way by someone who holds all the cards.

It undermines privacy to encourage people to accept the scraps of an exploitative system. This further lines the pockets of those who already exploit our data, and exacerbates unfair treatment of people who can’t afford to pay for their basic rights.

There is no reason to concede defeat to these schemes. Privacy is not dead—theoretically or practically, despite what people who profit from abusing your privacy want you to think. As Dipayan Ghosh has said, privacy nihilists ignore a key part of the data economy, “[your] behavioral data are temporally sensitive.” Much of your information has an expiration date, and companies that rely on it will always want to come back to the well for more of it. As the source, consumers should have more of the control.

That’s why we need to change the system and redress the imbalance. Consumers should have real control over their information, and ways to stand up and advocate for themselves. EFF’s top priorities for privacy laws include granting every person the right to sue companies for violating their privacy, and prohibiting discrimination against those who exercise their rights.

It’s also why we advocate strongly for laws that make privacy the default—requiring companies to get your opt-in consent before using your information, and to minimize how they process your data to what they need to serve your needs. That places meaningful power with the consumer — and gives you the choice to say “no.” Allowing a company to pay you for your data may sound appealing in theory. In practice, unlike in meaningful privacy regimes, it would strip you of choice, hand all your data to the companies, and give you pennies in return.

Data dividends run down the wrong path to exercising control, and would dig us deeper into a system that reduces our privacy to just another cost of doing business. Privacy should not be a luxury. It should not be a bargaining chip. It should never have a price tag.

EU vs Big Tech: Leaked Enforcement Plans and the Dutch-French Counterproposal

EFF - Fri, 10/23/2020 - 3:08pm

At the end of September, multiple press outlets published a leaked set of antimonopoly enforcement proposals for the new EU Digital Markets Act, which EU officials say they will finalize this year.

The proposals confront the stark fact that the Internet has been thoroughly dominated by a handful of giant, U.S.-based firms, which compete on a global stage with a few giant Chinese counterparts and a handful of companies from Russia and elsewhere. The early promise of a vibrant, dynamic Internet, where giants were routinely toppled by upstarts helmed by outsiders, seems to have died, strangled by a monopolistic moment in which the Internet has decayed into "a group of five websites, each consisting of screenshots of text from the other four."

Anti-Monopoly Laws Have Been Under-Enforced

The tech sector is not exceptional in this regard: from professional wrestling to eyeglasses to movies to beer to beef and poultry, global markets have collapsed into oligarchies, with each sector dominated by a handful of companies (or just one).

Fatalistic explanations for the unchecked rise of today's monopolized markets—things like network effects and first-mover advantage—are not the whole story. If these factors completely accounted for tech's concentration, then how do we explain wrestling's concentration? Does professional wrestling enjoy network effects too?

A simpler, more parsimonious explanation for the rise of monopolies across the whole economy can be found in the enforcement of anti-monopoly law, or rather, the lack thereof, especially in the U.S. For about forty years, the U.S. and many other governments have embraced a Reagan-era theory of anti-monopoly called "the consumer welfare standard." This ideology, associated with Chicago School economic theorists, counsels governments to permit monopolistic behavior – mergers between large companies, "predatory acquisitions" of small companies that could pose future threats, and the creation of vertically integrated companies that control large parts of their supply chain – so long as there is no proof that this will lead to price-rises in the immediate aftermath of these actions.

For four decades, successive U.S. administrations from both parties, and many of their liberal and conservative counterparts around the world, have embraced this ideology and have sat by as firms have grown not by selling more products than their competitors, or by making better products than their competitors, but rather by ceasing to compete altogether by merging with one another to create a "kill zone" of products and services that no one can compete with.

After generations in ascendancy, the consumer welfare doctrine is finally facing a serious challenge, and not a moment too soon. In the U.S., both houses of Congress held sweeping hearings on tech companies' anticompetitive conduct, and the House's bold report on its lengthy, deep investigation into tech monopolism signaled a political establishment ready to go beyond consumer welfare and return to a more muscular, pre-Reagan form of competition enforcement anchored in the idea that monopolies are bad for society, and that we should prevent them because they hurt workers and consumers, and because they distort politics and smother innovation -- and not merely because they sometimes make prices go up.

A New Set of Anti-Monopoly Tools for the European Union

These new EU leaks are part of this trend, and in them, we find a made-in-Europe suite of antimonopoly enforcement proposals that are, by and large, very welcome indeed. The EU defines a new, highly regulated sub-industry within tech called a "gatekeeper platform" -- a platform that exercises "market power" within its niche (the precise definition of this term is hotly contested). For these gatekeepers, the EU proposes a long list of prohibitions:

  • A ban on platforms' use of customer transaction data unless that data is also made available to the companies on the platform (so Amazon would have to share the bookselling data it uses in its own publishing efforts with the publishers that sell through its platform, or stop using that data altogether)
  • Platforms will have to obtain users' consent before combining data about their use of the platform with other data from third parties
  • A ban on "preferential ranking" of platforms' own offerings in their search results: if you search for an address, Google will have to show you the best map preview for that address, even if that's not Google Maps
  • Platforms like iOS and Android can't just pre-load their devices exclusively with their own apps, nor could Google require Android manufacturers to preinstall Google's preferred apps, and not other apps, on Android devices
  • A ban on devices that use "technical measures" (that's what lawyers call DRM -- any technology that stops you from doing what you want with your stuff) to prevent you from removing pre-installed apps.
  • A ban on contracts that force businesses to offer their wares everywhere on the same terms as the platform demands -- for example, if platforms require monthly subscriptions, a business could offer the same product for a one-time payment somewhere else.
  • A ban on contracts that punish businesses on platforms for telling their customers about ways to use their products without using the platform (so a mobile game could inform you that you can buy cheaper power-ups if you use the company's website instead of the app)
  • A ban on systems that don't let you install unapproved apps (AKA "side-loading")
  • A ban on gag-clauses in contracts that prohibit companies from complaining about the way the platform runs its business
  • A ban on requiring that you use a specific email provider to use a platform (think of the way that Android requires a Gmail address)
  • A requirement that users be able to opt out of signing into services operated by the platform they're using -- so you could sign into YouTube without being signed into Gmail

On top of those rules, there's a bunch of "compliance" systems to make sure they're not being broken:

  • Ad platforms will have to submit to annual audits that will help advertisers understand who saw their ads and in what context
  • Ad platforms will have to submit to annual audits disclosing their "cross-service tracking" of users and explaining how this complies with the GDPR, the EU’s privacy rules
  • Gatekeepers will have to produce documents on demand from regulators to demonstrate their compliance with rules
  • Gatekeepers will have to notify regulators of any planned mergers, acquisitions or partnerships
  • Gatekeepers will have to pay employees to act as compliance officers, watchdogging their internal operations

In addition to all this, the leak reveals a "greylist" of activities that regulators will intervene to stop:

  • Any action that prevents sellers on a platform from acquiring "essential information" that the platform collects on their customers
  • Collecting more data than is needed to operate a platform
  • Preventing sellers on a platform from using the data that the platform collects on their customers
  • Anything that creates barriers preventing businesses on a platform or their customers from migrating to a rival's platform
  • Keeping an ad platform's click and search data secret -- platforms will have to sell this data on a "fair, reasonable and non-discriminatory" basis
  • Any steps that stop users from accessing a rival's products or services on a platform
  • App store policies that ban third-party sellers from replicating an operating system vendor's own apps
  • Locking users into a platform's own identity service
  • Platforms that degrade quality of service for competitors using the platform
  • Locking platform sellers into using the platform's payment-processor, delivery service or insurance
  • Platforms that offer discounts on their services to some businesses but not others
  • Platforms that block interoperability for delivery, payment and analytics
  • Platforms that degrade connections to rivals' services
  • Platforms that "mislead" users into switching from a third-party's services to the platform's own
  • Platforms that practice "tying" – forcing users to access unrelated third-party apps or services (think of an operating system vendor that requires you to get a subscription to a partner's antivirus tools).

One worrying omission from this list: interoperability rules for dominant companies. The walled gardens with which dominant platforms imprison their users are a serious barrier to new competitors. Forcing those platforms to install gateways – ways for users of new services to communicate with the friends and services they left behind when they switched – would go a long way toward reducing the power of the dominant companies. That is a more durable remedy than passing rules to force those dominant actors to use their power wisely.

That said, there's plenty to like about these proposals, but the devil is in the details.

In particular, we're concerned that all the rules in the world do no good if they are not enforced. Whether a company has "degraded service" to a rival is hard to determine from the outside -- can we be certain that service problems are a deliberate act of sabotage? What about companies' claims that these are just normal technical issues arising from providing service to a third party whose servers and network connections are out of its control?

Harder still is telling whether a search result unduly preferences a platform's products over rivals': the platforms will say (they do say) that they link to their own services ahead of others because they rank their results by quality, and their weather reports, stores, maps, or videos are simply better than everyone else's. Creating an objective metric of the "right" way to present search results is certain to be contentious, even among people of goodwill who agree that the platform's own services aren't best.

What to do then? Well, as economists like to say, "incentives matter." Companies preference their own offerings in search, retail, pre-loading, and tying because they have those offerings. A platform that competes with its customers has an incentive to cheat on any rules of conduct in order to preference its products over the competing products offered by third parties.

Traditional antimonopoly law recognized this obvious economic truth, and responded to it with a policy called "structural separation": this was an industry-by-industry ban on certain kinds of vertical integration. For example, rail companies were banned from operating freight companies that competed with the freighters who used the rails; banks were banned from owning businesses that competed with the businesses they loaned money to. The theory of structural separation is that in some cases, dominant companies simply can't be trusted not to cheat on behalf of their subsidiaries, and catching them cheating is really hard, so we just remove the temptation by banning them from operating subsidiaries that benefit from cheating.

A structural separation regime for tech -- say, one that prevented store-owners from competing with the businesses that sold things in their store, or one that prevented search companies from running ad-companies that would incentivize them to distort their search results -- would take the pressure off of many of the EU's most urgent (and hardest-to-enforce) rules. Not only would companies who broke those rules fail to profit by doing so, but detecting their cheating would be a lot easier.

Imposing structural separation is not an easy task. Given the degree of vertical integration in the tech sector today, structural separation would mean unwinding hundreds of mergers, spinning off independent companies, or requiring independent management and control of subsidiaries. The companies will fight this tooth-and-nail.

But despite this, there is political will for separation. The Dutch and French governments have both signaled their displeasure with the leaked proposal, insisting that it doesn't go far enough, signing a (non-public) position paper that calls for structural separation, with breakups "on the table."

Whatever happens with these proposals, the direction of travel is clear. Monopolies are once again being recognized as a problem in and of themselves, regardless of their impact on short-term prices. It's a welcome, long overdue change.

Why Open Access Is Necessary for Makers

EFF - Fri, 10/23/2020 - 2:39pm

This is an Open Access Week guest post by Jordan Bunker, prototype engineer and open access advocate.

After the world went into lockdown for COVID-19, Makers were suddenly confined to their workshops. Rather than idly wait it out, many of them decided to put their tools and skills to use, developing low-cost, rapid production methods for much-needed PPE and DIY ventilators in an effort to address the worldwide shortage.

It might sound outlandish to think that hobbyists and weekend warriors would be able to design and build devices that contribute to bending the curve of the pandemic, but there’s a rich history of similar work. The “iron lung,” the first modern negative-pressure ventilator, began as a side project of Harvard engineer Philip Drinker. It was powered by an electric motor and air pumps from vacuum cleaners. By 1928, Philip Drinker and Louis Shaw had finished designs, and production of the “Drinker respirator” began, saving lives during the Polio epidemic.

In the 1930s, John Emerson, a high-school drop-out and self-taught inventor, improved on the design of Drinker’s iron lung, releasing a model that was quieter, lighter, more efficient, and half the price of the Drinker respirator. Drinker and Harvard eventually sued Emerson, claiming patent infringement. After defending against these claims, Emerson went on to become a key manufacturer of these life-saving devices, a development that was applauded by healthcare providers of the time.

It’s not enough to just have the tools and know-how; you also need insight and context. Emerson’s machine shop was located in Harvard Square, where he built research devices for the local Boston medical schools. Without a doubt, that high-bandwidth access to researchers and users of his devices assisted in his innovations. In order to develop or improve existing technology, the modern Maker community needs access to the same sort of information.

Open access to research is critical to the process of developing new things. Making the methods and results of research freely available to all preserves the ability to fruitfully investigate and improve upon existing methods and devices. The first step in fixing or improving a system is understanding how it works, and what the mechanisms at play are. Impediments like paywalls or subscriptions decrease the likelihood that research is shared and severely handicap the innovation process. In his book Democratizing Innovation, MIT professor Eric von Hippel makes the case that if “innovations are not diffused, multiple users with very similar needs will have to invest to (re)develop very similar innovations, which would be a poor use of resources from the social welfare point of view.” If the purpose of academic research is to push the boundaries of human knowledge, there is no justifiable case to be made for restricting access to that knowledge.

From its earliest days on the Internet, the Maker community has embraced the culture of information sharing. Through project documentation, YouTube videos, and free 3D printer STL files, Makers share their methods and innovations freely and openly, enriching the community with each new project. As a result, millions of people have been able to learn new skills, develop new products, and become contributors to the open body of knowledge freely available online.

In 2010, I became frustrated that there was so much information locked in the “ivory tower” of academic research journals, and I wanted to do my part in liberating some of it. After reading a research paper from a UIUC materials science lab (which my university library thankfully had access to), I set out to replicate the results at our local hackerspace. After parsing the jargon, I deciphered their methods and was able to successfully make the conductive ink described in the paper. In keeping with the Maker ethos of sharing, I wrote a blog post describing how I did it using low-tech tools.

In 2020, there are now many Makers who are doing the same thing. YouTubers like Applied Science, The Thought Emporium, NileRed, and Breaking Taps routinely post videos on methods gleaned from research papers, sharing how they replicated the results, even filling in gaps in the papers with their own experiments, methods, successes, and failures.

These hobbyists aren’t just sharing information; they’re also providing a much-needed service to the academic community: replication. With countless academic papers being published every year, there’s a growing “replication crisis,” where many of the studies published have been impossible to reproduce. In 2016, a poll of 1,500 scientists revealed that 70% had failed to reproduce at least one other scientist's experiment, and 50% had even failed to reproduce one of their own experiments. Opening access to research allows Makers to participate in the process, addressing the need for replication.

This democratic, open access approach to the development, discovery, and distribution of research enables academics and non-academics alike to test, replicate, improve, and submit their findings in a highly transparent way. It allows all to participate in broadening the horizons of scientific research, regardless of whether they are enrolled at (or employed by) a university. Open access allows anyone who learns or discovers something new to share that information, even if they haven’t spent thousands of dollars and years of their life earning credentials.

Whether it’s for medical device innovation, materials science methods, or any other body of human knowledge, it’s time for open access to research to be the default. The promise of the Internet is free and open access to information for and from all, and information gleaned from research should be no different. Making researchers (or Makers) pay both to publish and to access research is an antiquated system that has no place on the modern Internet. It serves only to profit publishers and actively hinders innovation and critical research replication. It’s time for the academic community to shed this vestigial appendage and embrace the open access ethos that Makers have engendered online.

EFF is proud to celebrate Open Access Week.

EFF Files Comment Opposing the Department of Homeland Security's Massive Expansion of Biometric Surveillance

EFF - Thu, 10/22/2020 - 6:34pm

EFF, joined by several leading civil liberties and immigrant rights organizations, recently filed a comment calling on the Department of Homeland Security (DHS) to withdraw a proposed rule that would exponentially expand biometrics collection from both U.S. citizens and noncitizens who apply for immigration benefits and would allow DHS to mandate the collection of face data, iris scans, palm prints, voice prints, and DNA. DHS received more than 5,000 comments in response to the proposed rule, and five U.S. Senators also demanded that DHS abandon the proposal.    

DHS’s biometrics database is already the second largest in the world. It contains biometrics from more than 260 million people. If DHS’s proposed rule takes effect, DHS estimates that it would nearly double the number of people added to that database each year, to over 6 million people. And, equally important, the rule would expand both the types of biometrics DHS collects and how DHS uses them.  

What the Rule Would Do

Currently, DHS requires applicants for certain, but not all, immigration benefits to submit fingerprints, photographs, or signatures. DHS’s proposed rule would change that regime in three significant ways.   

First, the proposed rule would make mandatory biometrics submission the default for anyone who submits an application for an immigration benefit. In addition to adding millions of non-citizens, this change would sweep in hundreds of thousands of U.S. citizens and lawful permanent residents who file applications on behalf of family members each year. DHS also proposes to lift its restrictions on the collection of biometrics from children to allow the agency to mandate collection from children under the age of 14. 

Second, the proposed rule would expand the types of biometrics DHS can collect from applicants. The rule would explicitly give DHS the authority to collect palm prints, photographs “including facial images specifically for facial recognition, as well as photographs of physical or anatomical features such as scars, skin marks, and tattoos,” voice prints, iris images, and DNA. In addition, by proposing a new and expansive definition of the term “biometrics,” DHS is laying the groundwork to collect behavioral biometrics, which can identify a person through the analysis of their movements, such as their gait or the way they type. 

Third, the proposed rule would expand how DHS uses biometrics. The proposal states that a core goal of DHS’s expansion of biometrics collection would be to implement “enhanced and continuous vetting,” which would require immigrants “be subjected to continued and subsequent evaluation to ensure they continue to present no risk of causing harm subsequent to their entry.” This type of enhanced vetting was originally contemplated in Executive Order 13780, which also banned nationals of Iran, Libya, Somalia, Sudan, Syria, and Yemen from entering the United States. While DHS offers few details about what such a program would entail, it appears that DHS would collect biometric data as part of routine immigration applications in order to share that data with other law enforcement agencies and monitor individuals indefinitely.

The Rule Is Fatally Flawed and Must Be Stopped 

EFF and our partners oppose this proposed rule on multiple grounds. It fails to take into account the serious privacy and security risks of expanding biometrics collection; it threatens First Amendment activity; and it does not adequately address the risk of error in the technologies and databases that store biometric data. Lastly, DHS has failed to provide sufficient justification for these drastic changes, and the proposed changes exceed DHS’s statutory authority.

Privacy and Security Threats

The breadth of the information DHS wants to collect is massive. DHS’s new definition of biometrics would allow for virtually unbounded biometrics collection in the future, creating untold threats to privacy and personal autonomy. This is especially true of behavioral biometrics, which can be collected without a person’s knowledge or consent, expose highly personal and sensitive information about a person beyond mere identity, and allow for tracking on a mass scale. Notably, both Democratic and Republican members of Congress have condemned China’s similar use of biometrics to track the Uyghur Muslim population in Xinjiang.

Of the new types of biometrics DHS plans to collect, DNA collection presents unique threats to privacy. Unlike other biometrics such as fingerprints, DNA contains our most private and personal information. DHS plans to collect DNA specifically to determine genetic family relationships and will store that relationship information with each DNA profile, thus allowing the agency to identify and map immigrant families and, over time, whole immigrant communities. DHS suggests that it will store DNA data indefinitely and makes clear that it retains the authority to share this data with law enforcement. Sharing this data with law enforcement only increases the risk that those required to give samples will be erroneously linked to a crime, while exacerbating problems related to the disproportionate number of people of color whose samples are included in government DNA databases.

The government’s increased collection of highly sensitive personal data is troubling not only because of the ways the government might use it, but also because that data could end up in the hands of bad actors. Put simply, DHS has not demonstrated that it can keep biometrics safe. For example, just last month, DHS’s Office of Inspector General (OIG) found that the agency’s inadequate security practices enabled bad actors to steal nearly 200,000 travelers’ face images from a subcontractor’s computers. A Government Accountability Office report similarly “identified long-standing challenges in CBP’s efforts to develop and implement [its biometric entry and exit] system.” There have also been serious security breaches from insiders at USCIS. And other federal agencies have had similar challenges in securing biometric data: in 2015, sensitive data on more than 25 million people stored in Office of Personnel Management databases was stolen. And, as the multiple security breaches of India’s Aadhaar national biometric database have shown in the international context, these breaches can make millions of individuals subject to fraud and identity theft.

The risk of security breaches to children’s biometrics is especially acute. A recent U.S. Senate Commerce Committee report collects a number of studies that “indicate that large numbers of children in the United States are victims of identity theft.” Breaches of children’s biometric data further exacerbate this security risk because biometrics cannot be changed. As a recent UNICEF report explains, the collection of children’s biometric information exposes them to “lifelong data risks” that cannot presently be evaluated. Never before has biometric information been collected from birth, and we do not know how the data collected today will be used in the future.

First Amendment Risks

This massive collection of biometric data—and the danger that it could be leaked—places a significant burden on First Amendment activity. By collecting and retaining biometric data like face recognition and sharing it broadly with federal, state, and local agencies, as well as with contractors and foreign governments, DHS lays the groundwork for a vast surveillance and tracking network that could impact individuals and communities for years to come. DHS could soon build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like at the border, but anywhere there are cameras. This burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups that are the most likely to encounter DHS. 

If immigrants and their U.S. citizen and permanent resident family members know the government can request, retain, and share with other law enforcement agencies their most intimate biometric information at every stage of the immigration lifecycle, many may self-censor and refrain from asserting their First Amendment rights. Studies show that surveillance systems and the overcollection of data by the government chill expressive and religious activity. For example, in 2013, a study involving Muslims in New York and New Jersey found that excessive police surveillance in Muslim communities had a significant chilling effect on First Amendment-protected activities.

Problems with Biometric Technology

DHS’s decision to move forward with biometrics expansion is also questionable because the agency fails to consider the lack of reliability of many biometric technologies and the databases that store this information. One of the methods DHS proposes to employ to collect DNA, known as Rapid DNA, has been shown to be error prone. Meanwhile, studies have found significant error rates across face recognition systems for people with darker skin, and especially for Black women. 

Moreover, it remains far from clear that collecting more biometrics will make DHS’s already flawed databases any more accurate. In fact, in a recent case challenging the reliability of DHS databases, a federal district court found that independent investigations of several DHS databases highlighted high error rates within the systems. For example, in 2017, the DHS OIG found that the database used for information about visa overstays was wrong 42 percent of the time. Other databases used to identify lawful permanent residents and people with protected status had a 30 percent error rate.

DHS’s Flawed Justification

DHS has offered little justification for this massive expansion of biometric data collection. In the proposed rule, DHS suggests that the new system will “provide DHS with the improved ability to identify and limit fraud.” However, the scant evidence that DHS offers to demonstrate the existence of fraud cannot justify its expansive changes. For example, DHS purports to justify its collection of DNA from children based on the fact that there were “432 incidents of fraudulent family claims” between July 1, 2019 and November 7, 2019 along the southern border. Not only does DHS fail to define what constitutes a “fraudulent family,” but it also leaves out that during that same period, an estimated 100,000 family units crossed the southern border, meaning that the so-called “fraudulent family” units made up less than one-half of one percent of all family crossings. And we’ve seen this before: the Trump administration has a troubling record of raising false alarms about fraud in the immigration context.

In addition, DHS does not address the privacy costs discussed in depth above. The proposed rule merely notes that “[t]here could be some unquantified impacts related to privacy concerns for risks associated with the collection.” And of course, the changes would come at a considerable financial cost to taxpayers, at a time when USCIS is already experiencing fiscal challenges. Even with the millions of dollars in new fees USCIS will collect, the rule is estimated to cost anywhere from $2.25 billion to $5 billion over the next 10 years. DHS also notes that additional costs could arise.

Beyond DHS’s Mandate

Congress has not given DHS the authority to expand biometrics collection in this manner. When Congress has wanted DHS to use biometrics, it has said so clearly. For example, after 9/11, Congress directed DHS to “develop a plan to accelerate the full implementation of an automated biometric entry and exit data system.” But DHS can point to no such authorization in this instance. In fact, Congress is actively considering measures to restrict the government’s use of biometrics. It is not a federal agency’s place to preempt that debate. Elected lawmakers must resolve these important matters through the democratic process before DHS can put forward a proposal like the proposed rule, which seeks to perform an end run around the democratic process.

What’s Next

If DHS makes this rule final, Congress has the power to block it from taking effect. We hope that DHS will take our comments seriously. But if it doesn’t, Congress will be hearing from us and our members.

Related Cases: Federal DNA Collection; FBI's Next Generation Identification Biometrics Database; DNA Collection

Victory! EFF Wins Appeal for Access to Wiretap Application Records

EFF - Thu, 10/22/2020 - 5:11pm

Imagine learning that you were wiretapped by law enforcement, but couldn’t get any information about why. That’s what happened to retired California Highway Patrol officer Miguel Guerrero, and EFF sued on his behalf to get more information about the surveillance. This week, a California appeals court ruled in his case that people who are targets of wiretaps are entitled to inspect the wiretap materials, including the application, order, and intercepted communications, if a judge finds that such access would be in the interests of justice. This is a huge victory for transparency and accountability in California courts.

This case arose from the grossly disproportionate volume of wiretaps issued by the Riverside County Superior Court in 2014 and 2015. In those years, that single, suburban county issued almost twice as many wiretaps as the rest of California combined, and accounted for almost one-fifth of all state and federal wiretaps issued nationwide. After journalists exposed Riverside County’s massive surveillance campaign, watchdog groups and even a federal judge warned that the sheer scale of the wiretaps suggested that the applications and authorizations violated federal law.

Guerrero learned from family members that his phone number was the subject of a wiretap order in 2015. Guerrero, a former law enforcement officer, has no criminal record, and was never arrested or charged with any crime in relation to the wiretap. And, although the law requires that targets of wiretaps receive notice within 90 days of the wiretap’s conclusion, he never received any such notice. He wanted to see the records both to inform the public and to assess whether to bring an action challenging the legality of the wiretap.

When we first went to court, the judge ruled that targets of wiretaps can unseal the wiretap application and order only by proving “good cause” for disclosure. The court then found that neither Guerrero’s desire to pursue a civil action nor the grossly disproportionate volume of wiretaps established good cause for disclosure, commenting that the number of wiretaps was “nothing more than routine.” The court further rejected our argument that the public has a First Amendment right of access to the wiretap order and application.

We appealed, and the Court of Appeal agreed that the trial court erred. The appeals court made clear that, under California law, the target of a wiretap need not show good cause. Instead, the target of a wiretap need only demonstrate that disclosure of the wiretap order and application is “in the interest of justice”—which unlike the good cause standard, does not include any presumption of secrecy.

Importantly, the court provided guidance for how to assess the “interest of justice” in this context, becoming one of the first courts in the nation to interpret this standard. As the court explained, the “interest of justice” analysis requires a court to consider the requester’s interest in access, the government’s interest in secrecy, the interests of other intercepted persons, and, significantly, the public interest. In considering the public interest, the court explained, courts should consider the huge volume of wiretaps approved in Riverside County. The court specifically rejected the trial court’s assessment that those statistics, on their own, were irrelevant without an independent showing of nefarious conduct.

The case now returns to the trial court, where the judge must apply the Court of Appeal’s analysis. We hope Mr. Guerrero will finally get some answers.

Related Cases: Riverside wiretaps

EFF Urges Vallejo’s Top Officials to End Unconstitutional Practice of Blocking Critics on Social Media

EFF - Thu, 10/22/2020 - 4:40pm
Elected Officials Can’t Block People Whose Views They Dislike

San Francisco—The Electronic Frontier Foundation (EFF) told the City of Vallejo that its practice of blocking people and deleting comments on social media because it doesn't like their messages is illegal under the First Amendment, and demanded that it stop engaging in such viewpoint discrimination, unblock all members of the public, and let them post comments.

In a letter today to Vallejo Mayor Bob Sampayan and the city council written on behalf of Open Vallejo, an independent news organization, and the Vallejo community at large, EFF said that when the government creates a social media page and uses it to speak to the public about its policies and positions, the page becomes a government forum. Under the First Amendment, the public has a right to receive and comment on messages in the forum. Blocking or deleting users’ comments based on the viewpoints expressed is unconstitutional viewpoint discrimination.

“Courts have made clear that government officials, even the president of the United States, can’t delete or block comments because they dislike the viewpoints conveyed,” said Naomi Gilens, Frank Stanton Legal Fellow at EFF. “Doing so is unconstitutional. We’ve asked that all official social media pages of Vallejo officials and pages of all city offices and departments unblock all members of the public and allow them to post comments.”

Open Vallejo discovered the practice of deleting comments and blocking users during an investigation of the social media practices of council members, other city officials, and the City of Vallejo itself.

EFF sided with members of the public blocked by President Trump on Twitter who sued him and members of his communications team in July 2017. We filed an amicus brief arguing that it’s common practice for government offices large and small to use social media to communicate to and with the public. All members of the public, regardless of whether government officials dislike their posts or tweets, have a right to receive and comment on government messages, some of which may deal with safety directions during fires, earthquakes, or other emergencies.

The district court agreed, ruling that President Trump’s practice violates the First Amendment. A federal appeals court upheld the ruling. Two other federal Courts of Appeals have ruled in separate cases that viewpoint discrimination on government social media pages is illegal.

We urge Vallejo to bring its social media practices in line with the Constitution, and have requested that city officials respond to our demand by Nov. 6.

For the letter:
https://www.eff.org/document/city-vallejo-demand-letter

For more on social media and the First Amendment:
https://www.eff.org/deeplinks/2017/11/when-officials-tweet-about-government-business-they-dont-get-pick-and-choose-who

Contact: Naomi Gilens, Frank Stanton Fellow, naomi@eff.org

Open Access Should Include Open Courts

EFF - Thu, 10/22/2020 - 1:51pm

It is a fundamental precept, at least in the United States, that the public should have access to the courts–including court records–and any departure from that rule must be narrow and well-justified. In a nation bound by the rule of law, the public must have the ability to know the law and how it is being applied. But for most of our nation’s history, that right didn’t mean much if you didn’t have the ability to get to the courthouse yourself.

In theory, the PACER (Public Access to Court Electronic Records) system should have changed all that. Though much-maligned for its user-unfriendly design, for more than 25 years PACER has made it possible for parties and the public to find all kinds of legal documents, from substantive briefs and judicial opinions to minor things like a notice of appearance. For those with the skill to navigate its millions of documents, PACER is a treasure trove of information about our legal system–and how it can be abused.

But using PACER takes more than skill–it takes money. Subject to some exceptions, the PACER system charges 10 cents a page to download a document, and that cost can add up fast. The money is supposed to cover the cost of running the system, but has been diverted to cover other expenses. And either way, those fees are an unfair barrier to access. Open access activists have tried for years to remedy the problem, and have managed to free up access to some of those records. The government itself made some initial forays in the right direction a decade ago, but then retreated, claiming privacy and security concerns. A team of researchers has developed software, called RECAP, that helps users automatically search for free copies of documents, and helps build up a free alternative database. Nonetheless, today most of PACER remains locked behind a paywall.
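For readers who want to see whether a document has already escaped the paywall, the RECAP archive can be queried before paying PACER fees. Below is a minimal sketch in Python; the CourtListener search endpoint, its parameters, and the result field names shown here are assumptions for illustration and should be checked against the current API documentation rather than taken as the project's official interface.

```python
# Minimal sketch: check the free RECAP archive before paying PACER's per-page fee.
# Assumptions (not confirmed here): CourtListener exposes a REST search endpoint
# at the URL below, accepts "q" and "type" query parameters (with "r" assumed to
# select RECAP results), and returns JSON with a "results" list. Verify against
# the current CourtListener API documentation before relying on any of this.
import requests

def search_recap(query, limit=5):
    """Return up to `limit` search hits from the (assumed) RECAP search endpoint."""
    url = "https://www.courtlistener.com/api/rest/v3/search/"
    params = {"q": query, "type": "r"}
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])[:limit]

if __name__ == "__main__":
    # Field names like "caseName" and "absolute_url" are illustrative guesses.
    for hit in search_recap("wiretap application"):
        print(hit.get("caseName"), "->", hit.get("absolute_url"))
```

Even a rough check like this can spare a researcher the per-page charge when someone has already contributed the document to the free archive.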

It’s past time to tear that paywall down, and a bill now working its way through Congress, the bipartisan Open Courts Act of 2020, aims to do just that. The bill would provide public access to federal court records and improve the federal courts’ online records system, eliminating PACER's paywall in the process. EFF and a coalition of civil liberties organizations, transparency groups, retired judges, and law libraries have joined together to push Congress and the U.S. Federal Courts to eliminate the paywall and expand access to these vital documents. In a letter (PDF) addressed to the Director of the Administrative Office of United States Courts, which manages PACER, the coalition calls on the AO not to oppose this important legislation.

Passage of the bill would be a huge victory for transparency, due process and democratic accountability. This Open Access Week, EFF urges Congress and the courts to support this important legislation and remove the barriers that make PACER a gatekeeper to information, rather than the open path to public records that it ought to be.


EFF is proud to celebrate Open Access Week.

Related Cases: Freeing the Law with Public.Resource.Org

Open Education and Artificial Scarcity in Hard Times

EFF - Thu, 10/22/2020 - 1:44pm

The sudden move to remote education by universities this year has forced the inevitable: the move to online education. While most universities won’t be fully remote, having course materials online was already becoming the norm before the COVID-19 pandemic, and this year it has become mandatory for millions of educators and students. As academia recovers from this crisis, and hopefully prepares for the next one, the choices we make will send us down one of two paths. We can move towards a future of online education that replicates the artificial scarcity of traditional publishing, or take a path that fosters an abundance of free materials by embracing the principles of open access and open education.

The well-worn, hefty, out-of-date textbook you may have bought some years ago was likely obsolete the moment you had a reliable computer and an Internet connection. Traditional textbook publishers already know this, and tout that they have embraced the digital era and have ebooks and e-rentals available—sometimes even at a discount. Despite some state laws discouraging the practice, publishers try to bundle their digital textbooks into “online learning systems,” often at the expense of the student. However, the cost and time needed to copy and send thousands of digital textbooks are trivial compared to their physical equivalents.

To make matters worse, these online materials are often locked down with DRM that prevents buyers from sharing or reselling books, devastating the secondhand textbook market in turn. This creates the absurd situation of ebooks, which are almost free to reproduce, being effectively more expensive than a physical book you plan to resell. Fortunately for all of us, this scarcity is constructed, and there exists a more equitable and intuitive alternative.

Right now there is a global collaborative effort among the world's educators and librarians to provide high-quality, free, and up-to-date education materials to all with little restriction. This of course is the global movement towards open education resources (OER). While this tireless effort of thousands of academics may seem complicated, it revolves around a simple idea: Education is a fundamental human right, so if technology enables us to share, reproduce, and update educational materials so effectively that we can give them away for free—it’s our moral duty to do so.

This cornucopia of syllabuses, exams, textbooks, video lectures, and much more is already available and awaiting eager educators and students. This is thanks to the power of open licensing, typically the Creative Commons Attribution license (CC BY), which is the standard for open educational resources. Open licensing preserves your freedom to retain, reuse, revise, remix, and redistribute educational materials. Much like free and open source licensing for code, these licenses help foster a collaborative ecosystem where people can freely use, improve, and recreate useful tools.

Yet, most college students are still stuck on the path of prohibitively expensive and often outdated books from traditional publishers. While this situation is bad enough on its own, the COVID-19 pandemic has heightened the absurd and contradictory nature of this status quo. The structural equity offered by supporting OER is as clear and urgent as ever. Open Education, like all of open access more broadly, is a human rights issue.

The Squeeze on Students and Instructors

How do college students cope with being assigned highly priced textbooks? Some are fortunate enough to buy them outright, or can at least scrape together enough to rent everything they need. When physical books are available, students can share copies, resell them, and buy used ones. Artificial book scarcity has fortunately already been addressed in many communities with well-funded libraries. Unfortunately, the need to reduce social contact during the pandemic has made these physical options more difficult, if not impossible, to orchestrate. That leaves the most vulnerable students with the easiest and by far most common solution for navigating the predicament: hope that you don’t actually need the book and avoid the purchase altogether.

Unsurprisingly, a student's performance is highly impacted by access to educational materials, and trying to catch up late in the semester is rarely viable. In short, these wholly artificial barriers to accessing necessary educational materials force the most vulnerable students to choose between risking their grades, their health, or their wallet. Fortunately, a growing number of institutions are embracing OER, saving their students millions of dollars while making it possible for every student to succeed without any undue costs.

Instructors at universities have been feeling the pressure, too. With little support at most institutions, they were asked to prepare a fully online course for the fall, sometimes in addition to an in-person course plan. Studying this sudden pivot, Bay View Analytics estimates that 97% of institutions had faculty teaching online for the first time, with 56% of instructors needing to adopt teaching methods they had never tried before.

Adapting a course to work online is not a trivial amount of work. Integrating technology into education often requires special training in pedagogy and an awareness of the digital divide and emerging privacy concerns. Even without such training, drawing on pre-made courses that can be freely adapted is an age-old academic practice that can relieve instructors of this burden. This informal system of sharing among instructors may have provided some confidence, so long as they knew others who had taught similar courses online, but this is where the power of OER can really take hold.

Instead of limiting instructors to the materials of people they know, the global community around OER offers a much broader variety of syllabuses and assignments for different teaching styles. As OER continues to grow, instructors will be more resilient and better able to choose among the best materials the global OER community has to offer.

Building Towards Education Equity

Despite the many benefits of open access and open education, most instructors have still never heard of OER. This means a simple first step away from an expensive and locked-down system of education is to make the benefits of OER more widely known. While pushing for the broader utilization of OER, we must advocate for systemic changes to make sure OER is supported on every campus.

For this task, supporting public and private libraries is essential. Despite years of austerity cuts, many academic libraries have established hubs of well-curated OER, tailored to the needs of their institutions. As just one example, OASIS is a hub of OER at the State University of New York at Geneseo, where librarians maintain materials from over 500 institutions. As a greater number of educational materials utilize open licenses, it will be essential for librarians to help instructors navigate the growing number of options.

State legislatures are also increasingly introducing bills to address this issue, and we should all push our legislatures to do what’s right. Public funding should save students money and save teachers time, not deepen the divide between those who can and those who can’t access resources.

This is a lesson we cannot forget as we recover from the current crisis. Structural inequity and artificial scarcity are nothing new, and they will still be there on the other side of the COVID-19 pandemic. Traditional publishers have restrained education too much for too long. A future where education is adaptable, collaborative, and free is already within reach. Now is the time to claim it.

EFF is proud to celebrate Open Access Week 2020.

Peru’s Third Who Defends Your Data? Report: Stronger Commitments from ISPs, But Imbalances and Gaps to Bridge

EFF - Wed, 10/21/2020 - 4:55pm

Hiperderecho, Peru’s leading digital rights organization, today launched its third ¿Quién Defiende Tus Datos? (Who Defends Your Data?)--a report that seeks to hold telecom companies accountable for their users’ privacy. The new Peruvian edition shows improvements compared to 2019’s evaluation.

Movistar and Claro commit to requiring a warrant before handing over both users’ communications content and metadata to the government. The two companies also earned credit for defending users’ privacy in Congress or for challenging government requests; no company scored a star in this category last year. Claro stands out with detailed law enforcement guidelines, including an explanatory chart of the procedures the company follows when responding to law enforcement requests for communications data. However, Claro should be more specific about the type of communications data covered by the guidelines. All companies have received full stars for their privacy policies, while only three did so in the previous report. Overall, Movistar and Claro are tied in the lead. Entel and Bitel lag behind, with the former holding a slight advantage.

¿Quién Defiende Tus Datos? is part of a series across Latin America and Spain carried out in collaboration with EFF and inspired by our Who Has Your Back? project. This year’s edition evaluates the four largest Internet Service Providers (ISPs) in Peru: Telefónica-Movistar, Claro, Entel, and Bitel.

Hiperderecho assessed Peruvian ISPs on seven criteria concerning privacy policies, transparency, user notification, judicial authorization, defense of human rights, digital security, and law enforcement guidelines. In contrast to last year, the report has added two new categories: whether ISPs publish law enforcement guidelines and whether they commit to users’ digital security. The full report is available in Spanish, and here we outline the main results:

Regarding transparency reports, Movistar leads the way, earning a full star, while Claro receives a partial star. To earn credit, a report had to provide useful data about how many requests the company received and how many times it complied. It should also include details about the government agencies that made the requests and their justifications. For the first time, Claro has provided statistical figures on government demands that require the “lifting of the secrecy of communication (LST).” However, Claro has failed to clarify which types of data (IP addresses and other technical identifiers) are protected under this legal regime. Since Peru's Telecommunications Law and its regulation protect under communications secrecy both the content and personal information obtained through the provision of telecom services, we assume Claro might include both. Yet, as a best practice, the ISP should be more explicit about the type of data, including technical identifiers, protected under communication secrecy. As Movistar does, Claro should also break down its statistics on government requests into content interception and metadata requests.

Movistar and Claro have published their law enforcement guidelines. While Movistar only released a general global policy applicable to its subsidiaries, Claro stands out with detailed guidelines for Peru, including an explanatory chart of the company’s procedures for responding to law enforcement requests for communications data. On the downside, the document broadly refers to "lifting the secrecy of communication" requests without defining what that entails. It should give users greater insight into which kinds of data are included in the outlined procedures and whether they mostly concern authorities' access to communications content or also cover specific metadata requests.

Entel, Bitel, Claro, and Movistar have published privacy policies applicable to their services that are easy to understand. All of the ISPs’ policies provide information about the collected data (such as name, address, and records related to the service provision) and the cases in which the company shares personal data with third parties. Claro and Movistar receive full credit in the judicial authorization category for having policies or other documents indicating their commitment to request a judicial order before handing over communications data unless the law mandates otherwise. Similarly, Entel states that it shares users' data with the government in compliance with the law. Peruvian law grants the specialized police investigation unit the power to request access to metadata from telecom operators in the specific emergencies set out by Legislative Decree 1182, subject to subsequent judicial review.

Latin American countries still have a long way to go in shedding light on government surveillance practices. Publishing meaningful transparency reports and law enforcement guidelines are two critical measures that companies should commit to. User notification is the third. In Peru, none of the ISPs have committed to notifying users of a government request at the earliest moment allowed by law. Yet, Movistar and Claro have provided further information on their reasons for this refusal and their interpretation of the law.

In the digital security category, all companies have received credit for using HTTPS on their websites and for providing users with secure methods, such as two-step authentication, in their online channels. All companies but Bitel have scored for the promotion of human rights. While Entel receives a partial score for joining local multi-stakeholder forums, Movistar and Claro earn full stars in this category. Among other actions, Movistar has sent comments to Congress in favor of users’ privacy, and Claro has challenged a disproportionate request issued by the country’s tax administration agency (SUNAT) before Peru’s data protection authority.

We are glad to see that Peru’s third report shows significant progress, but much remains to be done to protect users’ privacy. Entel and Bitel have to catch up with the larger regional providers. And Movistar and Claro can also go further to complete their full set of stars. Hiperderecho will remain vigilant through its ¿Quién Defiende Tus Datos? reports.

Open Access Must Be the Rule, Not the Exception

EFF - Wed, 10/21/2020 - 2:05pm
Not Just for COVID-19, But for the Next Crisis Too

The COVID-19 pandemic demands that governments, scientific researchers, and industry work together to bring life-saving technology to the public regardless of who can afford it. But even as we take steps to make medical technology and treatments available to everyone, we shouldn’t forget that more crises will come after COVID-19. There will be future public health disasters; in fact, experts expect pandemics to become more frequent. As climate change continues to threaten human life, there will be other kinds of disasters too. A patch for the current crisis is not enough; we need a fundamental change in how scientific research is funded, published, and licensed. As we celebrate Open Access Week, let’s remember that open access must be the rule, not the exception.

We wrote earlier this year about the Open COVID Pledge, a promise that a company can make not to assert its patents or copyrights against anyone helping to fight COVID-19. Companies that take the pledge agree to license their patents and/or copyrights under a license that allows for “diagnosing, preventing, containing, and treating COVID-19.” When we last wrote about the Open COVID Pledge, it had just been introduced and had only a few adopters—most notably, tech giant Intel. Since then, many big tech companies have taken the pledge, including Facebook, Uber, Amazon, and Microsoft. And the list of licensed technology on the Open COVID Pledge website continues to grow.

While EFF applauds those companies for recognizing the urgency of the moment, open licenses and pledges are only the beginning of the discussion about how we can remove legal obstacles to sharing urgently needed innovation. As we’ve discussed before, one way is to harness the power of existing patent law. There’s a provision that lets the US government use or authorize others to use any invention “described in and covered by a patent of the United States” in exchange for reasonable compensation. In other words, the government could license itself or others to use any patented technology to diagnose, treat, or stop the spread of COVID-19. (If a patent-owner wanted to sue for infringement, it would sue the United States, not the licensee.) The government can do that under current law, with no need to get a bill through legislative gridlock.

But that’s not enough either. Rising to the many challenges facing society today requires going to the source—how scientific research is funded and published as well as the legal entanglements that can come with that research. The good news is that the open access community has made progress. Although Congress has failed multiple times to pass a comprehensive open access law, current Executive Branch policies require that federally funded scientific research be made available to the public no later than a year after publication in a scientific journal.

Of course, a year is a very long time: think back to where the world was a year ago and ask yourself if all of the last year’s research should be locked behind a paywall. (Fortunately, major publishers have done the right thing and dropped their paywalls on COVID-19-related research.)

Unfortunately, while it’s now possible for anyone to read most government-funded scientific research, that same body of research can become fuel for abuse of the patent system. Take the notorious patent troll My Health, which sued numerous companies for infringement of its stupid patent on telehealth technology. My Health’s patent didn’t originate at a private company; a university applied for and got it years earlier. This is a story that’s become all too common: due to academia’s embrace of ever more aggressive patenting and licensing practices, patents that emerge out of scientific research create obstacles for the public, thus undermining the point of having strong open access policies in the first place.

That’s why EFF has urged research universities to adopt policies not to license their inventions to patent trolls. It’s also why we’ve urged the government to consider the harm that patent trolls bring to the public when reviewing licenses for inventions that arise from government-funded research. There is simply no place for patent trolls in government-funded research.

A pandemic can expose and intensify existing inequities, whether racial injustice or disparities in education. The COVID-19 pandemic has also demonstrated the urgent need to remove barriers that keep government-funded research from benefiting everyone. Scientific research that the public pays for must be made available to everyone, without a paywall, without an embargo period. At the same time, the government, private funders, and research institutions must take steps to ensure that the research they fund doesn’t become ammunition for abusive litigation. Not just for the current crisis, but for the next one too.

EFF is proud to celebrate Open Access Week.

EFF to Supreme Court: American Companies Complicit in Human Rights Abuses Abroad Should Be Held Accountable

EFF - Wed, 10/21/2020 - 1:45pm

For years EFF has been calling for U.S. companies that act as “repression’s little helpers” to be held accountable, and now we’re telling the U.S. Supreme Court. Despite all the ways that technology has been used as a force for good–connecting people around the world, giving voice to the less powerful, and facilitating knowledge sharing—technology has also been used as a force multiplier for repression and human rights violations, a dark side that cannot be denied.

Today EFF filed a brief urging the Supreme Court to preserve one of the few tools of legal accountability that exist for companies that intentionally aid and abet foreign repression, the Alien Tort Statute (ATS). We told the court about what we and others have been seeing over the past decade or so: surveillance, communications, and database systems, just to name a few, have been used by foreign governments—with the full knowledge of and assistance by the U.S. companies selling those technologies—to spy on and track down activists, journalists, and religious minorities who have been imprisoned, tortured, and even killed.

Specifically, we asked the Supreme Court today to rule that U.S. corporations can be sued by foreigners under the ATS and taken to court for aiding and abetting gross human rights abuses. The court is reviewing an ATS lawsuit brought by former child slaves from Côte d’Ivoire who claim two American companies, Nestle and Cargill, aided in the abuse they suffered by providing financial support to the cocoa farms where they were forced to work. The ATS allows noncitizens to bring a civil claim in U.S. federal court against a defendant that violated human rights laws. The companies are asking the court to rule that corporations cannot be held accountable under the law, and that only individuals can.

We were joined in the brief by leading organizations tracking the sale of surveillance technology: Access Now, Article 19, Privacy International, the Center for Long-Term Cybersecurity, and Ronald Deibert, director of the Citizen Lab at the University of Toronto. We told the court that the Nestle case does not just concern chocolate and children. The outcome will have profound implications for millions of Internet users and other citizens of countries around the world. Why? Because providing sophisticated surveillance and censorship products and services to foreign governments is big business for some American tech companies. The fact that their products are clearly being used as tools of oppression seems not to matter. Here are a few examples we cite in our brief:

Cisco custom-built the so-called “Great Firewall” in China, also known as the “Golden Shield,” which enables the government to conduct Internet surveillance and censorship against its citizens. Company documents have revealed that, as part of its marketing pitch to China, Cisco built a specific “Falun Gong module” into the Golden Shield that helped Chinese authorities efficiently identify and locate members of the Falun Gong religious minority, who were then apprehended and subjected to torture, forced conversion, and other human rights abuses. Falun Gong practitioners sued Cisco under the ATS in a case currently pending in the U.S. Court of Appeals for the Ninth Circuit. EFF has filed briefs siding with the plaintiffs throughout the case.

Ning Xinhua, a pro-democracy activist from China, just last month sued the successor companies, founder, and former CEO of Yahoo! under the ATS for sharing his private emails with the Chinese government, which led to his arrest, imprisonment, and torture.

Recently, the government of Belarus used technology from Sandvine, a U.S. network equipment company, to block much of the Internet during the disputed presidential election in August (the company canceled its contract with Belarus because of the censorship). The company’s technology is also used by Turkey, Syria, and Egypt against Internet users to redirect them to websites that contain spyware or block their access to political, human rights, and news content.

We also cited a case against IBM in which we filed a brief in support of the plaintiffs, victims of apartheid, who sued under the ATS on claims that the tech giant aided and abetted the human rights abuses they suffered at the hands of the South African government. IBM created a customized computer-based national identification system that facilitated the “denationalization” of the country’s Black population. Its customized technology enabled efficient identification, racial categorization, and forced segregation, furthering the systemic oppression of South Africa’s native population. Unfortunately, the case was dismissed by the U.S. Court of Appeals for the Second Circuit.

The Supreme Court has severely limited the scope of the ATS in several rulings over the years. The court is now being asked to essentially grant immunity from the ATS to U.S. corporations. That would be a huge mistake. Companies that provide products and services to customers that clearly intend to, and do, use them to commit gross human rights abuses must be held accountable for their actions. We don’t think companies should be held liable just because their technologies ended up in the hands of governments that use them to hurt people. But when technology corporations custom-make products for governments that are plainly using them to commit human rights abuses, they cross a moral, ethical, and legal line.

We urge the Supreme Court to hold that U.S. courts are open when a U.S. tech company decides to put profits over basic human rights, and people in foreign countries are seriously harmed or killed by those choices.


Related Cases: Doe I v. Cisco

EU Parliament Paves the Way for an Ambitious Internet Bill

EFF - Wed, 10/21/2020 - 3:00am

The European Union has made the first step towards a significant overhaul of its core platform regulation, the e-Commerce Directive.

In order to inspire the European Commission, which is currently preparing a proposal for a Digital Services Act Package, the EU Parliament has voted on three related reports (from the IMCO, JURI, and LIBE committees), which address the legal responsibilities of platforms regarding user content, include measures to keep users safe online, and set out special rules for very large platforms that dominate users’ lives.

Clear EFF's Footprint

Ahead of the votes, together with our allies, we argued to preserve what works for a free Internet and innovation: retaining the E-Commerce Directive’s approach of limiting platforms’ liability over user content and its ban on Member States imposing obligations to track and monitor users’ content. We also stressed that it is time to fix what is broken: to imagine a version of the Internet where users have a right to remain anonymous, enjoy substantial procedural rights in the context of content moderation, have more control over how they interact with content, and have a true choice over the services they use through interoperability obligations.

It’s a great first step in the right direction that all three EU Parliament reports have taken up EFF's suggestions. There is an overall agreement that platform intermediaries have a pivotal role to play in ensuring the availability of content and the development of the Internet. Platforms should not be held responsible for ideas, images, videos, or speech that users post or share online. They should not be forced to monitor and censor users’ content and communication--for example, using upload filters. The reports also make a strong call to preserve users’ privacy online and to address the problem of targeted advertising. Another important aspect of what made the E-Commerce Directive a success is the “country of origin” principle. It states that within the European Union, companies must adhere to the law of their domicile rather than that of the recipient of the service. There is no appetite from the side of the Parliament to change this principle.

Even better, the reports echo EFF’s call to stop ignoring the walled gardens big platforms have become. Large Internet companies should no longer nudge users to stay on a platform that disregards their privacy or jeopardizes their security, but should instead enable users to communicate with friends across platform boundaries. Unfair trading, preferential display of platforms’ own downstream services, and transparency about how users’ data are collected and shared: the EU Parliament seeks to tackle these and other issues that have become the new “normal” for users when browsing the Internet and communicating with their friends. The reports also echo EFF’s concerns about automated content moderation, which is incapable of understanding context. In the future, users should receive meaningful information about algorithmic decision-making and learn if terms of service change. Also, the EU Parliament supports procedural justice for users who see their content removed or their accounts disabled.

Concerns Remain 

The focus on fundamental rights protection and user control is a good starting point for the ongoing reform of Internet legislation in Europe. However, there are also a number of pitfalls and risks. There is a suggestion that platforms should report illegal content to enforcement authorities, and there are open questions about public electronic identity systems. Also, the general focus on consumer shopping issues, such as liability provisions for online marketplaces, may clash with digital rights principles: the Commission itself acknowledged in a recent internal document that “speech can also be reflected in goods, such as books, clothing items or symbols, and restrictive measures on the sale of such artefacts can affect freedom of expression.” Then, the general idea of also covering digital service providers established outside the EU could turn out to be a problem to the extent that platforms are held responsible for removing illegal content. Recent cases (Glawischnig-Piesczek v Facebook) have demonstrated the perils of worldwide content takedown orders.

It’s Your Turn Now @EU_Commission

The EU Commission is expected to present a legislative package on 2 December. During the public consultation process, we urged the Commission to protect freedom of expression and to give control to users rather than the big platforms. We are hopeful that the EU will work towards a free and interoperable Internet and not follow in the footsteps of harmful Internet bills such as the German NetzDG law or the French Avia Bill, which EFF helped to strike down. It’s time to make it right. To preserve what works and to fix what is broken.

Members of Congress Join the Fight for Protest Surveillance Transparency

EFF - Tue, 10/20/2020 - 6:54pm

Three members of Congress have joined the fight for the right to protest by sending a letter asking the Privacy and Civil Liberties Oversight Board (PCLOB) to investigate federal surveillance of protesters. We commend these elected officials for doing what they can to help ensure our constitutional right to protest and for taking the interests and safety of protesters to heart.

It often takes years, if not longer, to learn the full scope of government surveillance used against demonstrators involved in a specific action or protest movement. In the four months since the murder of George Floyd sparked a new round of Black-led protests against police violence, there has been a slow and steady trickle of revelations about law enforcement agencies deploying advanced surveillance technology at protests around the country. For example, we learned recently that the Federal Bureau of Investigation sent a team specializing in cellular phone exploitation to Portland, site of some of the largest and most sustained protests. Before that, we learned about federal, state, and local aerial surveillance conducted over protests in at least 15 cities. Now, Rep. Anna Eshoo, Rep. Bobby Rush, and Sen. Ron Wyden have asked the PCLOB to dig deeper.

The PCLOB is an independent agency in the executive branch, created in 2004, that undertakes far-ranging investigations into issues related to privacy and civil liberties, including mass surveillance of the Internet and cellular communications, facial recognition technology at airports, and terrorism watchlists. In addition to asking the PCLOB to investigate who used what surveillance where, and how it negatively impacted the First Amendment right to protest, Eshoo, Rush, and Wyden ask the PCLOB to investigate and enumerate the legal authorities under which agencies are surveilling protests and whether agencies have followed the required processes for domestic use of intelligence equipment. The letter continues:

“PCLOB should investigate what legal authorities federal agencies are using to surveil protesters to help Congress understand if agencies’ interpretations of specific provisions of federal statutes or of the Constitution are consistent with congressional intent. This will help inform whether Congress needs to amend existing statutes or consider legislation to ensure agency actions are consistent with congressional intent.”

We agree with these members of Congress that government surveillance of protesters is a threat to all of our civil liberties and an affront to a robust, active, and informed democracy. With more protests guaranteed in the coming weeks and months, Congress and the PCLOB must act swiftly to protect our right to protest, investigate how much harm government surveillance has caused, and identify illegal behavior by these agencies.

In the meantime, if you plan on protesting, make sure you’ve reviewed EFF’s surveillance self-defense guide for protesters.

Video Hearing Wednesday: Advocacy Orgs Go to Court to Block Trump’s Retaliation Against Fact-Checking

EFF - Tue, 10/20/2020 - 6:31pm
Lawsuit Challenges Executive Order Pressuring Social Media Companies to Ignore President’s False Claims

San Francisco – On Wednesday, October 21 at 11 am PT/2 pm ET, voter advocacy organizations will ask a district court to block an unconstitutional Executive Order that retaliates against online services for fact-checking President Trump’s false posts about voting and the upcoming election. Information on attending the video hearing can be found on the court’s website.

The plaintiffs—Common Cause, Free Press, MapLight, Rock the Vote, and Voto Latino—are represented by the Electronic Frontier Foundation (EFF), Protect Democracy, and Cooley LLP. At Wednesday’s hearing, Cooley partner Kathleen Hartnett will argue that the president’s Executive Order should not be enforced until the lawsuit is resolved.

Trump signed the “Executive Order on Preventing Online Censorship” in May, after a well-publicized fight with Twitter. First, the president tweeted false claims about the reliability of online voting, and then Twitter appended a link inviting readers to “get the facts about mail-in ballots.” Two days later, Trump signed the order, which directs government agencies to begin law enforcement actions against online services for any supposedly “deceptive acts” of moderation that aren’t in “good faith.” The order also instructs the government to withhold advertising from social media companies that act in “bad faith,” and to kickstart a process to gut platforms’ legal protections under Section 230. Section 230 is the law that allows online services—like Twitter, Facebook, and others—to host and moderate diverse forums of users’ speech without being held liable for their users’ content.

The plaintiffs in this case filed the lawsuit because they want to make sure that voting information found online is accurate, and they want social media companies to be able to take proactive steps against misinformation. Instead, the Executive Order chills social media companies from moderating the president’s content in a way that he doesn’t like—an unconstitutional violation of the First Amendment.

WHAT:
Rock the Vote v. Trump

WHEN:
Wednesday
October 21
2 pm

HOW:
To attend the hearing and see the guidelines for watching the video stream, visit the court’s website.

Contact: Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org
