Electronic Frontier Foundation
Organizing for Digital Rights in the Pacific Northwest
Recently I traveled to Portland, Oregon, to speak at the PDX People’s Digital Safety Fair, meet up with five groups in the Electronic Frontier Alliance (EFA), and attend BSides PDX 2024. Portland’s first-ever Digital Safety Fair was a success, and five of our six EFA organizations in the area participated: Personal Telco Project, Encode Justice Oregon, PDX Privacy, TA3M Portland, and Community Broadband PDX. I was able to reaffirm our support for these organizations and table with most of them as they met local people interested in digital rights. We distributed EFF toolkits as a resource and made sure EFA brochures and stickers had a presence on all their tables. A few of these organizations were also present at BSides PDX, and it was great to see them leading in the local infosec and cybersecurity community.
PDX Privacy’s mission is to bring about transparency and control in the acquisition and use of surveillance systems in the Portland Metro area, whether personal data is captured by the government or by commercial entities. Transparency is essential to ensure privacy protections, community control, fairness, and respect for civil rights.
TA3M Portland is an informal meetup designed to connect software creators and activists who are interested in censorship, surveillance, and open technology.
The Oregon Chapter of Encode Justice, the world’s first and largest youth movement for human-centered artificial intelligence, works to mobilize policymakers and the public for guardrails to ensure AI fulfills its transformative potential. Its mission is to ensure we encode justice and safety into the technologies we build.
(l to r) Pictured here with PDX Privacy’s Seth, Boaz, and new president, Nate. Pictured with Chris Bushick, legendary Portland privacy advocate of TA3M PDX. Pictured with the leaders of Encode Justice Oregon.
There's growing momentum in the Seattle and Portland areas
Community Broadband PDX focuses on expanding Portland’s existing dark-fiber broadband network to all residents, creating an open-access model in which the city owns the fiber and it is controlled by local nonprofits and cooperatives, not large ISPs.
Personal Telco is dedicated to the idea that users have a central role in how their communications networks are operated. It does this by building networks that it shares with its communities, and by helping educate others in how they can do the same.
At the People’s Digital Safety Fair I spoke in the main room on the campaign to bring high-speed broadband to Portland, which is led by Community Broadband PDX and the Personal Telco Project. I made a direct call to action for those in attendance to join the campaign. My talk culminated with, “What kind of ACTivist would I be if I didn’t implore you to take an ACTion? Everybody pull out your phones.” Then I guided the room to the website for Community Broadband PDX and to the ‘Join Us’ page, where people in that moment signed up to join the campaign, spread the word to their neighbors, and get organized by the Community Broadband PDX team. You can reach out to them at cbbpdx.org and personaltelco.net. You can get in touch with all the groups mentioned in this blog through the hyperlinks above, or use our EFA allies directory to see who’s organizing in your area.
(l to r) BSidesPDX 2024 swag and stickers. A photo of me speaking at the People’s Digital Safety Fair on broadband access in PDX. Pictured with Jennifer Redman, president of Community Broadband PDX and former broadband administrator for the city of Portland, OR. A picture of the Personal Telco table with printed EFF toolkits and EFA brochures on hand. Pictured with Ted, Russell Senior, and Drew of Personal Telco Project. Lastly, it’s always great to see a member and active supporter of EFF interacting with one of our EFA groups.
It’s very exciting to see what members of the EFA are doing in Portland! I also went up to Seattle and met with a few organizations, including one now in talks to join the EFA. With new EFA friends in Seattle, and existing EFA relationships fortified, I'm excited to help grow our presence and support in the Pacific Northwest, and have new allies with experience in legislative engagement. It’s great to see groups in the Pacific Northwest engaged and expanding their advocacy efforts, and even greater to stand by them as they do!
Electronic Frontier Alliance members get support from a community of like-minded grassroots organizers from across the US. If your group defends our digital rights, consider joining today. https://efa.eff.org
Speaking Freely: Anriette Esterhuysen
This interview took place in April 2024 at NetMundial+10 in São Paulo, Brazil. It has been edited for length and clarity.
Anriette Esterhuysen is a human rights defender and computer networking trailblazer from South Africa. She has pioneered the use of Information and Communications Technologies (ICTs) to promote social justice in South Africa and throughout the world, focusing on affordable Internet access. She was the executive director of the Association for Progressive Communications (APC) from 2000 to 2017. In November 2019 Anriette was appointed by the Secretary-General of the United Nations to chair the Internet Governance Forum’s Multistakeholder Advisory Group.
Greene: Can you go ahead and introduce yourself for us?
Esterhuysen: My name is Anriette Esterhuysen, I am from South Africa, and I’m currently sitting here with David in São Paulo, Brazil. My closest association remains with the Association for Progressive Communications (APC), where I was executive director from 2000 to 2017. I continue to work for APC as a consultant in the capacity of Senior Advisor on Internet Governance and convenor of the annual African School on Internet Governance (AfriSIG).
Greene: Can you tell us more about the African School on Internet Governance (AfriSIG)?
AfriSIG is fabulous. It differs from internet governance capacity building provided by the technical community in that it aims to build critical thinking. It also does not gloss over the complex power dynamics that are inherent to multistakeholder internet governance. It tries to give participants a hands-on experience of how different interest groups and sectors approach internet governance issues.
AfriSIG started as a result of Titi Akinsanmi, a young Nigerian doing postgraduate studies in South Africa, approaching APC and saying, “Look, you’ve got to do something. There’s a European School of Internet Governance, there’s one in Latin America, and where is there more need for capacity building than in Africa?” She convinced me and my colleague Emilar Vushe Gandhi, APC’s Africa policy coordinator at the time, to organize an African internet governance school in 2013, and it has taken place every year since. It has evolved over time into a partnership between APC, the African Union Commission, and Research ICT Africa.
It is a residential leadership-development and learning event that takes place over five days. We bring together people who are already working in internet or communications policy in some capacity. We create space for conversation between people from government, civil society, parliaments, regulators, the media, business, and the technical community on what in Africa are often referred to as “sensitive topics”. This can be anything from LGBTQ rights to online freedom of expression, corruption, authoritarianism, and accountable governance. We try to create a safe space for deep-diving into the reasons for the dividing lines between, for example, government and civil society in Africa. It’s very delicate. I love doing it because I feel that it transforms people’s thinking and the way they see one another and one another’s roles. At the end of the process, it is common for a government official to say they now understand better why civil society demands media freedom, and how transparency can be useful in protecting the interests of public servants. And civil society activists have a better understanding of the constraints that state officials face in their day-to-day work. It can be quite a revelation for individuals from civil society to be confronted with the fact that in many respects they have greater freedom to act and speak than civil servants do.
Greene: That’s great. Okay now tell me, what does free speech mean to you?
I think of it as freedom of expression. It’s fundamental. I grew up under Apartheid in South Africa and was active in the struggle for democracy. There is something deeply wrong with being surrounded by injustice, cruelty and brutality and not being allowed to speak about it. Even more so when one's own privilege comes at the expense of the oppressed, as was the case for white South Africans like myself. For me, freedom of expression is the most profound part of being human. You cannot change anything, deconstruct it, or learn about it at a human level without the ability to speak freely about what it is that you see, or want to understand. The absence of freedom of expression entrenches misinformation, a lack of understanding of what is happening around you. It facilitates willful stupidity and selective knowledge. That’s why it’s so smart of repressive regimes to stifle freedom of expression. By stifling free speech you disempower the victims of injustice from voicing their reality, on the one hand, and, on the other, you entrench the unwillingness of those who are complicit with the injustice to confront that they’re part of it.
It is impossible to shift a state of repression and injustice without speaking out about it. That is why people who struggle for freedom and justice speak about it, even if doing so gets them imprisoned, assassinated or executed. Change starts through people, the media, communities, families, social movements, and unions, speaking about what needs to change.
Greene: Having grown up in Apartheid, is there a single personal experience or a group of personal experiences that really shaped your views on freedom of expression?
I think I was fortunate in the sense that I grew up with a mother who—based on her Christian beliefs—came to see Apartheid as being wrong. She was working as a social worker for the main state church—the Dutch Reformed Church (DRC)—at the time of the Cottesloe Consultation, convened in Johannesburg by the World Council of Churches (WCC) shortly after the Sharpeville Massacre. An outcome statement from this consultation, and later deliberations by the WCC in Geneva, condemned the DRC for its racism. In response the DRC decided to leave the WCC. At a church meeting my mother attended, she listened to the debate, including someone in the church hierarchy who spoke against this decision and challenged the church for its racist stance. His words made sense to her. She spoke to him after the meeting and soon joined the organization he had started to oppose Apartheid, the Christian Institute. His name was Beyers Naudé, and he became an icon of the anti-Apartheid struggle and an enemy of the apartheid state. Apparently, my first protest march was in a pushchair at a rally in 1961 to oppose the rightwing National Party government’s decision for South Africa to leave the Commonwealth.
There’s no single moment that shaped my view of freedom of expression. The thing about living in the context of that kind of racial segregation and repression is that you see it every day. It’s everywhere around you, but like Nazi Germany, people—white South Africans—chose not to see it, or if they did, to find ways of rationalizing it.
Censorship was both a consequence of and a building block of the Apartheid system. There was no real freedom of expression. But because we had courageous journalists, and a broad-based political movement—above ground and underground—that opposed the regime, there were spaces where one could speak/listen/learn. The Congress of Democrats established in the 1950s after the Communist Party was banned was a social justice movement in which people of different faiths and political ideologies (Jewish, Christian and Muslim South Africans alongside agnostics and communists) fought for justice together. Later in the 1980s, when I was a student, this broad front approach was revived through the United Democratic Front. Journalists did amazing things. When censorship was at its height during the State of Emergency in the 1980s, newspapers would go to print with columns of blacked-out text—their way of telling the world that they were being censored.
I used to type up copy filed over the phone or on cassette by reporters for the Weekly Mail when I was a student. We had to be fast because everything had to be checked by the paper’s lawyers before going to print. Lack of freedom of expression was legislated. The courage of editors and individual journalists to defy this—and if they could not, to make it obvious—made a huge impact on me.
Greene: Is there a time when you, looking back, would consider that you were personally censored?
I was very much personally censored at school. I went to an Afrikaans secondary school. And I remember how, after coming back from a vacation, my math teacher—with whom I had no personal relationship—walked past me in class and asked me how my holiday on Robben Island was. I thought, why is he asking me that? A few days later I heard from a teacher I was friendly with that there had been a special staff meeting about me. They felt I was very politically outspoken in class and the school hierarchy needed to take action. No actual action was taken... but I felt watched, and through that, censored, even if not silenced.
I felt that because for me, being white, it was easier to speak out than for black South Africans, it would be wrong not to do so. As a teenager, I had already made that choice. It was painful from a social point of view because I was very isolated, I didn’t have many friends, I saw the world so differently from my peers. In 1976 when the Soweto riots broke out I remember someone in my class saying, “This is exactly what we’ve been waiting for because now we can just kill them all.” This is probably also why I feel a deep connection with Israel/Palestine. There are many dimensions to the Apartheid analogy. The one that stands out for me is how, as was the case in South Africa too, those with power—Jewish Israelis—dehumanize and villainize the oppressed—Palestinians.
Greene: At some point did you decide that you want human rights more broadly and freedom of expression to be a part of your career?
I don’t think it was a conscious decision. I think it was what I was living for. It was the raison d’être of my life for a long time. After high school, I had secured places at two universities: one for a science degree and the other for a degree in journalism. But I ended up going to a different university, making the choice based on the strength of its student movement. The struggle against Apartheid was expressed and conceptualized as a struggle for human rights. The Constitution of democratic South Africa was crafted by human rights lawyers, and in many respects it is a localized interpretation of the Universal Declaration.
Later, in the late 1980s, when I started working on access to information through the use of Information and Communication Technologies (ICTs), it felt like an extension of the political work I had done as a student and in my early working life. APC, which I joined as a member—not staff—in the 1990s, was made up of people from other parts of the world who had been fighting their own struggles for freedom—Latin America, Asia, and Central/Eastern Europe—all with very similar hopes about how the use of these technologies could enable freedom and solidarity.
Greene: So fast forward to now, currently do you think the platforms promote freedom of expression for people or restrict freedom of expression?
Not a simple question. Still, I think the net effect is more freedom of expression. The extent of online freedom of expression is uneven and it’s distorted by the platforms in some contexts. Just look at the biased pro-Israel way in which several platforms moderate content. Enabling hate speech in contexts of conflict can definitely have a silencing effect. By not restricting hate in a consistent manner, they end up restricting freedom of expression. But I think it’s disingenuous to say that overall the internet does not increase freedom of expression. And social media platforms, despite their problematic business models, do contribute. They could of course do it so much better, fairly and consistently, and for not doing that they need to be held accountable.
Greene: We can talk about some of the problems and difficulties. Let’s start with hate speech. You said it’s a problem we have to tackle. How do we tackle it?
You’re talking to a very cynical old person here. I think that social media amplifies hate speech. But I don’t think they create the impulse to hate. Social media business models are extractive and exploitative. But we can’t fix our societies by fixing social media. I think that we have to deal with hate in the offline world. Channeling energy and resources into trying to grow tolerance and respect for human rights in the online space is not enough. It’s just dealing with the symptoms of intolerance and populism. We need to work far harder to hold people, particularly those with power, accountable for encouraging hate (and disinformation). Why is it easy to get away with online hate in India? Because Modi likes hate. It’s convenient for him, it keeps him in political power. Trump is another example of a leader that thrives on hate.
What’s so problematic about social media platforms is the monetization of this. That is absolutely wrong and should be stopped—I can say all kinds of things about it. We need to have a multi-pronged approach. We need market regulation, perhaps some form of content regulation, and new ways of regulating advertising online. We need access to data on what happens inside these platforms. Intervention is needed, but I do not believe that content control is the right way to do it. It is the business model that is at the root of the problem. That’s why I get so frustrated with this huge global effort by governments (and others) to ensure information integrity through content regulation. I would rather they spend the money on strengthening independent media and journalism.
Greene: We should note we are currently at an information integrity conference today. In terms of hate speech, are there hazards to having hate speech laws?
South Africa has hate speech laws, which I believe are necessary. Racial hate speech continues to be a problem in South Africa, as does xenophobic hate speech. We have an election coming on May 29 [2024], and I was listening to talk radio on election issues; hearing how political parties use xenophobic tropes in their campaigns was terrifying. “South Africa has to be for South Africans.” “Nigerians run organized crime.” “All drugs come from Mozambique,” and so on. Dangerous speech needs to be called out. Norms are important. But I think that establishing legalized content regulation is risky. In contexts without robust protection for freedom of expression, such regulation can easily be abused by states to stifle political speech.
Greene: Societal or legal norms?
Both. Legal norms are necessary because social norms can be so inconsistent, volatile. But social norms shape people’s everyday experience and we have to strive to make them human rights aware. It is important to prevent the abuse of legal norms—and states are, sadly, pretty good at doing just that. In the case of South Africa hate speech regulation works relatively well because there are strong protections for freedom of expression. There are soft and hard law mechanisms. The South African Human Rights Commission developed a social media charter to counter harmful speech online as a kind of self-regulatory tool. All of this works—not perfectly of course—because we have a constitution that is grounded in human rights. Where we need to be more consistent is in holding politicians accountable for speech that incites hate.
Greene: So do we want checks and balances built into the regulatory scheme itself, or do you just want it to exist within a government scheme that has checks and balances built in?
I don’t think you need new global rule sets. I think the existing international human rights framework provides what we need and just needs to be strengthened and its application adapted to emerging tech. One of the reasons why I don’t think we should be obsessive about restricting hate speech online is because it is a canary in a coal mine. In societies where there’s a communal or religious conflict or racial hate, removing its manifestation online could be a missed opportunity to prevent explosions of violence offline. That is not to say that there should not be recourse and remedy for victims of hate speech online. Or that those who incite violence should not be held accountable. But I believe we need to keep the bar high in how we define hate speech—basically as speech that incites violence.
South Africa is an interesting case because we have very progressive laws when it comes to same-sex marriage, same-sex adoption, relationships, insurance, spousal recognition, medical insurance and so on, but there’s still societal prejudice, particularly in poor communities. That is why we need a strong rights-oriented legal framework.
Greene: So that would be another area where free speech can be restricted, not just in a legal sense but, you think, in a higher-level principles sense.
Right. Perhaps what I am trying to say is that there is speech that incites violence and it should be restricted. And then there is speech that is hateful and discriminatory, and this should be countered, called out, and challenged, but not censored. When you’re talking about the restriction—or not even the restriction but the recognition and calling out of—harmful speech it’s important not just to do that online. In South Africa stopping xenophobic speech online or on public media platforms would be relatively simple. But it’s not going to stop xenophobia in the streets. To do that we need other interventions. Education, public awareness campaigns, community building, and change in the underlying conditions in which hate thrives which in our case is primarily poverty and unemployment, lack of housing and security.
Greene: This morning someone speaking at this event about misinformation said, “The vast majority of misinformation is online.” Certainly in the US, researchers say that’s not true; most of it is on cable news. But it struck me that someone who is considered an expert should know better. We have information ecosystems, and online does not exist separately.
It’s not separate. Agree. There’s such a strong tendency to look at online spaces as an alternative universe. Even in countries with low internet penetration, there’s a tendency to focus on the online components of these ecosystems. Another example would be child online protection. Most child abuse takes place in the physical world, and most child abusers are close family members, friends or teachers of their victims—but there is a global obsession with protecting children online. It is a shortsighted and ‘cheap’ approach and it won’t work. Not for dealing with misinformation or for protecting children from abuse.
Greene: Okay, our last question we ask all of our guests. Who is your free speech hero?
Desmond Tutu. I have many free speech heroes but Bishop Tutu is a standout because he could be so charming about speaking his truths. He was fearless in challenging the Apartheid regime. But he would also challenge his fellow Christians. One of his best lines was, “If LGBT people are not welcome in heaven, I’d rather go to the other place.” And then the person I care about and fear for every day is Egyptian blogger Alaa Abd el-Fattah. I remember walking at night through the streets of Cairo with him in 2012. People kept coming up to him, talking to him, and being so obviously proud to be able to do so. His activism is fearless. But it is also personal, grounded in love for his city, his country, his family, and the people who live in it. For Alaa freedom of speech, and freedom in general, was not an abstract or a political goal. It was about freedom to love, to create art, music, literature and ideas in a shared way that brings people joy and togetherness.
Greene: Well now I have a follow-up question. You said you think free speech is undervalued these days. In what ways and how do we see that?
We see it manifested in the absence of tolerance, in the increase in people claiming that their freedoms are being violated by the expression of those they disagree with, or who criticize them. It’s as if we’re trying to establish these controlled environments where we don’t have to listen to things that we think are wrong, or that we disagree with. As you said earlier, information ecosystems have offline and online components. Getting to the “truth” requires a mix of different views, disagreement, fact-checking, and holding people who deliberately spread falsehoods accountable for doing so. We need people to have the right to free speech, and to counter-speech. We need research and evidence gathering, investigative journalism, and, most of all, critical thinking. I’m not saying there shouldn't be restrictions on speech in certain contexts, but do it because the speech is illegal or actively inciteful. Don’t do it because you think it will achieve so-called information integrity. And especially, don’t do it in ways that undermine the right to freedom of expression.
Oppose The Patent-Troll-Friendly PREVAIL Act
Good news: the Senate Judiciary Committee has dropped one of the two terrible patent bills it was considering, the patent-troll-enabling Patent Eligibility Restoration Act (PERA).
Bad news: the committee is still pushing the PREVAIL Act, a bill that would hamstring the U.S.’s most effective system for invalidating bad patents. PREVAIL is a windfall for patent trolls, and Congress should reject it.
Tell Congress: No New Bills For Patent Trolls
One of the most effective tools to fight bad patents in the U.S. is a little-known but important system called inter partes review, or IPR. Created by Congress in 2011, the IPR process addresses a major problem: too many invalid patents slip through the cracks at the U.S. Patent and Trademark Office. While not an easy or simple process, IPR is far less expensive and time-consuming than the alternative—fighting invalid patents in federal district court.
That’s why small businesses and individuals rely on IPR for protection. More than 85% of tech-related patent lawsuits are filed by non-practicing entities, also known as “patent trolls”—companies that don’t have products or services of their own, but instead make dozens, or even hundreds, of patent claims against others, seeking settlement payouts.
So it’s no surprise that patent trolls are frequent targets of IPR challenges, often brought by tech companies. Eliminating these worst-of-the-worst patents is a huge benefit to small companies and individuals that might otherwise be unable to afford an IPR challenge themselves.
For instance, Apple used an IPR-like process to invalidate a patent owned by the troll Ameranth, which claimed rights over using mobile devices to order food. Ameranth had sued over 100 restaurants, hotels, and fast-food chains. Once the patent was invalidated, after an appeal to the Federal Circuit, Ameranth’s barrage of baseless lawsuits came to an end.
PREVAIL Would Ban EFF and Others From Filing Patent Challenges
The IPR system isn’t just for big tech—it has also empowered nonprofits like EFF to fight patents that threaten the public interest.
In 2013, a patent troll called Personal Audio LLC claimed that it had patented podcasting. The patent, titled “System for disseminating media content representing episodes in a serialized sequence,” became the basis for the company’s demand for licensing fees from podcasters nationwide. Personal Audio filed lawsuits against three podcasters and threatened countless others.
EFF took on the challenge, raising over $80,000 through crowdfunding to file an IPR petition. The Patent Trial and Appeal Board agreed: the so-called “podcasting patent” should never have been granted. EFF proved that Personal Audio’s claims were invalid, and our victory was upheld all the way to the Supreme Court.
The PREVAIL Act would block such efforts. It limits IPR petitions to parties directly targeted by a patent owner, shutting out groups like EFF that protect the broader public. If PREVAIL becomes law, millions of people indirectly harmed by bad patents—like podcasters threatened by Personal Audio—will lose the ability to fight back.
PREVAIL Tilts the Field in Favor of Patent Trolls
The PREVAIL Act would make life easier for patent trolls at every step of the process. It is shocking that the Senate Judiciary Committee is using the few remaining hours it will be in session this year to advance a bill that undermines the rights of innovators and the public.
Patent troll lawsuits target individuals and small businesses for simply using everyday technology. Everyone who can meet the legal requirements of an IPR filing should have the right to challenge invalid patents. Use our action center today and tell Congress: that’s not a right we’re willing to give up.
Tell Congress: Reject the PREVAIL Act
More on the PREVAIL Act:
- EFF’s blog on how the PREVAIL Act takes rights away from the public
- Our coalition opposition letter to the Senate Judiciary Committee opposing PREVAIL
- Read why patient rights and consumer groups also oppose PREVAIL
The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable
The Biden White House has released a memorandum on “Advancing the United States’ Leadership in Artificial Intelligence,” which includes, among other things, a directive for the national security apparatus to become a world leader in the use of AI. Under direction from the White House, the national security state is expected to take up this leadership position by poaching great minds from academia and the private sector and, most disturbingly, by leveraging already-functioning private AI models for national security objectives.
Private AI systems like those operated by tech companies are incredibly opaque. People are uncomfortable—and rightly so—with companies that use AI to decide all sorts of things about their lives, from how likely they are to commit a crime, to their eligibility for a job, to issues involving immigration, insurance, and housing. Right now, as you read this, for-profit companies are leasing their automated decision-making services to all manner of companies and employers, and most of those affected will never know that a computer made a choice about them, and will never be able to appeal that decision or understand how it was made.
But it can get worse: combining private AI with national security secrecy threatens to make an already secretive system even more unaccountable and opaque. The constellation of organizations and agencies that make up the national security apparatus is notoriously secretive. EFF has had to fight in court a number of times in an attempt to make public even the most basic frameworks of global dragnet surveillance and the rules that govern it. Combining these two will create a Frankenstein’s monster of secrecy, unaccountability, and decision-making power.
While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing tremendous amounts of data, what information a model retains and how it arrives at its conclusions will become central to how the national security state thinks about issues. This means the state will likely argue not only that AI training data may need to be classified, but also that companies must, under penalty of law, keep the governing algorithms secret as well.
As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.” As the U.S. national security state attempts to leverage powerful commercial AI to give it an edge, a number of questions remain unanswered about how much that ever-tightening relationship will impact much-needed transparency and accountability for private AI and for-profit automated decision-making systems.
Now's The Time to Start (or Renew) a Pledge for EFF Through the CFC
The Combined Federal Campaign (CFC) pledge period is underway and runs through January 15, 2025! If you're a U.S. federal employee or retiree, be sure to show your support for EFF by using our CFC ID 10437.
Not sure how to make a pledge? No problem—it’s easy! First, head over to GiveCFC.org and click “DONATE.” Then you can search for EFF using our CFC ID 10437 and make a pledge via payroll deduction, credit/debit card, or e-check. If you have a renewing pledge, you can increase your support there as well!
The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year, members of the CFC community raised nearly $34,000 to support EFF’s work advocating for privacy and free expression online. That support has helped us:
- Push the Fifth Circuit Court of Appeals to find that geofence warrants are “categorically” unconstitutional.
- Launch Digital Rights Bytes, a resource dedicated to teaching people how to take control of the technology they use every day.
- Call out unconstitutional age-verification and censorship laws across the U.S.
- Continue to develop and maintain our privacy-enhancing tools, like Certbot and Privacy Badger.
Federal employees and retirees greatly impact our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437 when you make a pledge today!
Speaking Freely: Marjorie Heins
*This interview has been edited for length and clarity.
Marjorie Heins is a writer, former civil rights/civil liberties attorney, and past director of the Free Expression Policy Project (FEPP) and the American Civil Liberties Union's Arts Censorship Project. She is the author of "Priests of Our Democracy: The Supreme Court, Academic Freedom, and the Anti-Communist Purge," which won the Hugh M. Hefner First Amendment Award in Book Publishing in 2013, and "Not in Front of the Children: Indecency, Censorship, and the Innocence of Youth," which won the American Library Association's Eli Oboler Award for Best Published Work in the Field of Intellectual Freedom in 2002.
Her most recent book is "Ironies and Complications of Free Speech: News and Commentary From the Free Expression Policy Project." She has written three other books and scores of popular and scholarly articles on free speech, censorship, constitutional law, copyright, and the arts. She has taught at New York University, the University of California, San Diego, Boston College Law School, and the American University of Paris. Since 2015, she has been a volunteer tour guide at the Metropolitan Museum of Art in New York City.
Greene: Can you introduce yourself and the work you’ve done on free speech and how you got there?
Heins: I’m Marjorie Heins, I’m a retired lawyer. I spent most of my career at the ACLU. I started in Boston, where we had a very small office, and we sort of did everything—some sex discrimination cases, a lot of police misconduct cases, occasionally First Amendment. Then, after doing some teaching and a stint at the Massachusetts Attorney General’s office, I found myself in the national office of the ACLU in New York, starting a project on art censorship. This was in response to the political brouhaha over the National Endowment for the Arts starting around 1989/1990.
Culture wars and attacks on some of the grants made by the NEA became a big hot-button issue. The ACLU was able to raise a little foundation money to hire a lawyer to work on some of these cases. And one case that was already filed when I got there was National Endowment for the Arts v. Finley. It was basically a challenge by four theater performance artists whose grants had been recommended by the peer panel but then ultimately vetoed by the director after a lot of political pressure because their work was very much “on the edge.” So I joined the legal team in that case, the Finley case, and it had a long and complicated history. Then, by the mid-1990s we were faced with the internet. And there were all these scares over pornography on the internet poisoning the minds of our children. So the ACLU got very involved in challenging censorship legislation that had been passed by Congress, and I worked on those cases.
I left the ACLU in 1998 to write a book about what I had learned about censorship. I was curious to find out more about the history primarily of obscenity legislation—the censorship of sexual communications. So it’s a scholarly book called “Not in front of the Children.” Among the things I discovered is that the origins of censorship of sexual content, sexual communications, come out of this notion that we need to protect children and other “vulnerable beings.” And initially that included women and uneducated people, but eventually it really boiled down to children—we need censorship basically of everybody in order to protect children. So that’s what Not in front of the Children was all about.
And then I took my foundation contacts—because at the ACLU if you have a project you have to raise money—and started a little project, a little think tank which became affiliated with the National Coalition Against Censorship called the Free Expression Policy Project. And at that point we weren’t really doing litigation anymore, we were doing a lot of friend of the court briefs, a lot of policy reports and advocacy articles about some of the values and competing interests in the whole area of free expression. And one premise of this project, from the start, was that we are not absolutists. So we didn’t accept the notion that because the First Amendment says “Congress shall make no law abridging the freedom of speech,” then there’s some kind of absolute protection for something called free speech and there can’t be any exceptions. And, of course, there are many exceptions.
So the basic premise of the Free Expression Policy Project was that some exceptions to the First Amendment, like obscenity laws, are not really justified because they are driven by different ideas about morality and a notion of moral or emotional harm rather than some tangible harm that you can identify like, for example, in the area of libel and slander or invasion of privacy or harassment. Yes, there are exceptions. The default, the presumption, is free speech, but there could be many reasons why free speech is curtailed in certain circumstances.
The Free Expression Policy Project continued for about seven years. It moved to the Brennan Center for Justice at NYU Law School for a while, and, finally, I ran out of ideas and funding. I kept up the website for a little while longer, then ultimately ended the website. Then I thought, “okay, there’s a lot of good information on this website and it’s all going to disappear, so I’m going to put it into a book.” Oh, I left out the other book I worked on in the early 2000s – about academic freedom, the history of academic freedom, called “Priests of Our Democracy: The Supreme Court, Academic Freedom, and the Anti-Communist Purge.” This book goes back in history even before the 1940s and 1950s Red Scare and the effect that it had on teachers and universities. And then this last book is called “Ironies and Complications of Free Speech: News and Commentary From the Free Expression Policy Project,” which is basically an anthology of the best writings from the Free Expression Policy Project.
And that’s me. That’s what I did.
Greene: So we have a ton to talk about because a lot of the things you’ve written about are either back in the news and regulatory cycle or never left it. So I want to start with your book “Not in Front of the Children” first. I have at least one copy and I’ve been referring to it a lot and suggesting it because we’ve just seen a ton of efforts to try and pass new child protection laws to protect kids from online harms. And so I’m curious—first there was a raft of efforts around TikTok being bad for kids, now we’re seeing a lot of efforts aimed at shielding kids from harmful material online. Do you think there is a throughline from concerns back in mid-19th-century England? Is it still the same debate, or is there something different about these online harms?
Both are true I think. It’s the same and it’s different. What’s the same is that using the children as an argument for basically trying to suppress information, ideas, or expression that somebody disapproves of goes back to the beginning of censorship laws around sexuality. And the subject matters have changed, the targets have changed. I’m not too aware of new proposals for internet censorship of kids, but I’m certainly aware of what states—of course, Florida being the most prominent example—have done in terms of school books, school library books, public library books, and education from not only k-12 but also higher education in terms of limiting the subject matters that can be discussed. And the primary target seems to be anything to do with gay or lesbian sexuality and anything having to do with a frank acknowledgement of American slavery or Jim Crow racism. The argument in Florida, and this is explicit in the law, is that it would make white kids feel bad, so let’s not talk about it. So in that sense the two targets that I see now—we’ve got to protect the kids against information about gay and lesbian people and information about the true racial history of this country—are a little different from the 19th century and even much of the 20th century.
Greene: One of the things I see is that the harms motivating the book bans and school restrictions are the same harms that are motivating at least some of the legislators who are trying to pass these laws. And notably a lot of the laws only address online harmful material without being specific about subject matter. We’re still seeing some that are specifically about sexual material, but a lot of them, including the Kids Online Safety Act really just focus on online harms more broadly.
I haven’t followed that one, but it sounds like it might have a vagueness problem!
Greene: One of the things I get concerned about with the focus on design is that, like, a state Attorney General is not going to be upset if the design has kids reading a lot of bible verses or tomes about being respectful to your parents. But they will get upset and prosecute people if the design feature is recommending to kids gender-affirming care or whatever. I just don’t know if there’s a way of protecting against that in a law.
Well, as we all know, when we’re dealing with commercial speech there’s a lot more leeway in terms of regulation, and especially if ads are directed at kids. So I don’t have a problem with government legislation in the area of restricting the kinds of advertising that can be directed at kids. But if you get out of the area of commercial speech and to something that’s kind of medical, could you have constitutional legislation that prohibited websites from directing kids to medically dangerous procedures? You’re sort of getting close to the borderline. If it’s just information then I think the legislation is probably going to be unconstitutional even if it’s related to kids.
Greene: Let’s shift to academic freedom. Which is another fraught issue. What do you think of the current debates now over both restrictions on faculty and universities restricting student speech?
Academic freedom is under the gun from both sides of the political spectrum. For example, Diversity, Equity, and Inclusion (DEI) initiatives, although they seem well-intentioned, have led to some pretty troubling outcomes. So that when those college presidents were being interrogated by the members of Congress (in December 2023), they were in a difficult position, among other reasons, because at least at Harvard and Penn it was pretty clear there were instances of really appalling applications of this idea of Diversity, Equity, and Inclusion – both to require a certain kind of ideological approach and to censor or punish people who didn’t go along with the party line, so to speak.
The other example I’m thinking of, and I don’t know if Harvard and Penn do this – I know that the University of California system does it or at least it used to – everybody who applies for a faculty position has to sign a diversity statement, like a loyalty oath, saying that these are the principles they agree with and they will promise to promote.
And you know you have examples, I mean I may sound very retrograde on this one, but I will not use the pronoun “they” for a singular person. And I know that would mean I couldn’t get a faculty job! And I’m not sure if my volunteer gig at the Met museum is going to be in trouble because they, very much like universities, have given us instructions, pages and pages of instructions, on proper terminology – what terminology is favored or disfavored or should never be used, and “they” is in there. You can have circumlocutions so you can identify a single individual without using he or she if that individual – I mean you can’t even know what the individual’s preference is. So that’s another example of academic freedom threats from I guess you could call the left or the DEI establishment.
The right in American politics has a lot of material, a lot of ammunition to use when they criticize universities for being too politically correct and too “woke.” On the other hand, you have the anti-woke law in Florida which is really, as I said before, directed against education about the horrible racial history of this country. And some of those laws are just – whatever you may think about the ability of state government and state education departments to dictate curriculum and to dictate what viewpoints are going to be promoted in the curriculum – the Florida anti-woke law and don’t say gay law really go beyond I think any kind of discretion that the courts have said state and local governments have to determine curriculum.
Greene: Are you surprised at all that we’re seeing that book bans are as big of a thing now as they were twenty years ago?
Well, nothing surprises me. But yes, I would not have predicted that there were going to be the current incarnations of what you can remember from the old days, groups like the American Family Association, the Christian Coalition, the Eagle Forum, the groups that were “culture warriors” who were making a lot of headlines with their arguments forty years ago against even just having art that was done by gay people. We’ve come a long way from that, but now we have Moms for Liberty and present-day incarnations of the same groups. The homophobia agenda is a little more nuanced, it’s a little different from what we were seeing in the days of Jesse Helms in Congress. But the attacks on drag performances, this whole argument that children are going to be groomed to become drag queens or become gay—that’s a little bit of a different twist, but it’s basically the same kind of homophobia. So it’s not surprising that it’s being churned up again if this is something that politicians think they can get behind in order to get elected. Or, let me put it another way, if the Moms for Liberty type groups make enough noise and seem to have enough political potency, then politicians are going to cater to them.
And so the answer has to be groups on the other side that are making the free expression argument or the intellectual freedom argument or the argument that teachers and professors and librarians are the ones who should decide what books are appropriate. Those groups have to be as vocal and as powerful in order to persuade politicians that they don’t have to start passing censorship legislation in order to get votes.
Greene: Going back to the college presidents and being grilled on the hill, you wrote that maybe there was, in response to the genocide question, which I think they were most sharply criticized there, that there was a better answer that they could have given. Could you talk about that?
I think in that context, both for political reasons and for reasons of policy and free speech doctrine, the answer had to be that if students on campus are calling for genocide of Jews or any other ethnic or religious group that should not be permitted on campus and that amounts to racial harassment. Of course, I suppose you could imagine scenarios where two antisemitic kids in the privacy of their dorm room said this and nobody else heard it—okay, maybe it doesn’t amount to racial harassment. But private colleges are not bound by the First Amendment. They all have codes of civility. Public colleges are bound by the First Amendment, but not the same standards as the public square. So I took the position that in that circumstance the presidents had to answer, “Yes, that would violate our policies and subject a student to discipline.” But that’s not the same as calling for the intifada or calling for even the elimination of the state of Israel as having been a mistake 75 years ago. So I got a little pushback on that little blog post that I wrote. And somebody said, “I’m surprised a former ACLU lawyer is saying that calling for genocide could be punished on a college campus.” But you know, the ACLU has many different political opinions within both the staff and Board. There were often debates on different kinds of free speech issues and where certain lines are drawn. And certainly on issues of harassment and when hate speech becomes harassment—under what circumstances it becomes harassment. So, yes, I think that’s what they should have said. A lot of legal scholars, including David Cole of the ACLU, said they gave exactly the right answer, the legalistic answer, that it depends on the context. In that political situation that was not the right answer.
Greene: It was awkward. They did answer as if they were having an academic discussion and not as if they were talking to members of Congress.
Well they also answered as if they were programmed. I mean Claudine Gay repeated the exact same words that probably somebody had told her to say at least twice if not more. And that did not look very good. It didn’t look like she was even thinking for herself.
Greene: I do think they were anticipating the followup question of, “Well isn’t saying ‘From the River to the Sea’ a call for genocide and how come you haven’t punished students for that?” But as you said, that would then lead into a discussion of how we determine what is or is not a call for genocide.
Well they didn’t need a followup question because to Elise Stefanik, “Intifada” or “from the river to the sea” was equivalent to a call for genocide, period, end of discussion. Let me say one more thing about these college hearings. What these presidents needed to say is that it’s very scary when politicians start interrogating college faculty or college presidents about curriculum, governance, and certainly faculty hires. One of the things that was going on there was they didn’t think there were enough conservatives on college faculties, and that was their definition of diversity. You have to push back on that, and say it is a real threat to academic freedom and all of the values that we talk about that are important at a university education when politicians start getting their hands on this and using funding as a threat and so forth. They needed to say that.
Greene: Let’s pull back and talk about free speech principles more broadly. Why is, after many years of work in this area, why do you think free expression is important?
What is the value of free expression more globally? [laughs] A lot of people have opined on that.
Greene: Why is it important to you personally?
Well I define it pretty broadly. So it doesn’t just include political debate and discussion and having all points of view represented in the public square, which used to be the narrower definition of what the First Amendment meant, certainly according to the Supreme Court. But the Court evolved. And so it’s now recognized, as it should be, that free expression includes art. The movies—it doesn’t even have to be verbal—it can be dance, it can be abstract painting. All of the arts, which feed the soul, are part of free expression. And that’s very important to me because I think it enriches us. It enriches our intellects, it enriches our spiritual lives, our emotional lives. And I think it goes without saying that political expression is crucial to having a democracy, however flawed it may be.
Greene: You mentioned earlier that you don’t consider yourself to be a free speech absolutist. Do you consider yourself to be a maximalist or an enthusiast? What do you see as being sort of legitimate restrictions on any individual’s freedom of expression?
Well, we mentioned this at the beginning. There are a lot of exceptions to the First Amendment that are legitimate, and certainly, when I started at the ACLU I thought that defamation laws and libel and slander laws violated the First Amendment. Well, I’ve changed my opinion. Because there’s real harm that gets caused by libel and slander. As we know, the Supreme Court has put some First Amendment restrictions around those torts, but they’re important to have. Threats are a well-recognized exception to the freedom of speech, and the kind of harm caused by threats, even if they’re not followed through on, is pretty obvious. Incitement becomes a little trickier because where do you draw the lines? But at some point an incitement to violent action I think can be restricted for obvious reasons of public safety. And then we have restrictions on false advertising but, of course, if we’re not in the commercial context, the Supreme Court has told us that lies are protected by the First Amendment. That’s probably wise just in terms of not trying to get the government and the judicial process involved in deciding what is a lie and what isn’t. But of course that’s done all the time in the context of defamation and commercial speech. Hate speech is something, as we know, that’s prohibited in many parts of Europe but not here. At least not in the public square as opposed to employment contexts or educational contexts. Some people would say, “Well, that’s dictated by the First Amendment and they don’t have the First Amendment over there in Europe, so we’re better.” But having worked in this area for a long time and having read many Supreme Court decisions, it seems to me the First Amendment has been subjected to the same kind of balancing test that they use in Europe when they interpret their European Convention on Human Rights or their individual constitutions. They just have different policy choices.
And the policy choice to prohibit hate speech given the history of Europe is understandable. Whether it is effective in terms of reducing racism, Islamophobia, antisemitism… is there more of that in Europe than there is here? Hard to know. It’s probably not that effective. You make martyrs out of people who are prosecuted for hate speech. But on the other hand, some of it is very troubling. In the United States Holocaust denial is protected.
Greene: Can you talk a little bit about your experience being a woman advocating for First Amendment rights for sexual expression during a time when there was at least some form of feminist movement saying that some types of sexualization of women were harmful to women?
That drove a wedge right through the feminist movement for quite a number of years. There’s still some of that around, but I think less. The battle against pornography has been pretty much a losing battle.
Greene: Are there lessons from that time? You were clearly on one side of it, are there lessons to be learned from that when we talk about sort of speech harms?
One of the policy reports we did at the Free Expression Policy Project was on media literacy as an alternative to censorship. Media literacy can be expanded to encompass a lot of different kinds of education. So if you had decent sex education in this country and kids were able to think about the kinds of messages that you see in commercial pornography and amateur pornography, in R-rated movies, in advertising—I mean the kind of sexist messages and demeaning messages that you see throughout the culture—education is the best way of trying to combat some of that stuff.
Greene: Okay, our final question that we ask everyone. Who is your free speech hero?
When I started working on “Priests of our Democracy” the most important case, sort of the culmination of the litigation that took place challenging loyalty programs and loyalty oaths, was a case called Keyishian v. Board of Regents. This is a case in which Justice Brennan, writing for a very slim majority of five Justices, said academic freedom is “a special concern of the First Amendment, which does not tolerate laws that cast a pall of orthodoxy over the classroom.” Harry Keyishian was one of the five plaintiffs in this case. He was one of five faculty members at the University of Buffalo who refused to sign what was called the Feinberg Certificate, which was essentially a loyalty oath. The certificate required all faculty to say “I’ve never been a member of the Communist Party and if I was, I told the President and the Dean all about it.” He was not a member of the Communist Party, but as Harry said much later in an interview – because he had gone to college in the 1950s and he saw some of the best professors being summarily fired for refusing to cooperate with some of these Congressional investigating committees – fast forward to the Feinberg Certificate loyalty oath: he said his refusal to sign was his “revenge on the 1950s.” And so he becomes the plaintiff in this case that challenges the whole Feinberg Law, this whole elaborate New York State law that basically required loyalty investigations of every teacher in the public system. So Harry became my hero. I start my book with Harry. The first line in my book is, “Harry Keyishian was a junior at Queens College in the fall of 1952 when the Senate Internal Security Subcommittee came to town.” And he’s still around. I think he just had his 90th birthday!
On Alaa Abd El Fattah’s 43rd Birthday, the Fight For His Release Continues
Today marks prominent British-Egyptian coder, blogger, activist, and political prisoner Alaa Abd El Fattah’s 43rd birthday—his eleventh behind bars. Alaa should have been released on September 29, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. Since September 29, Alaa’s mother, mathematician Leila Soueif, has been on hunger strike, while she and the rest of his family have worked to engage the British government in securing Alaa’s release.
Last November, an international counsel team acting on behalf of Alaa’s family filed an urgent appeal to the UN Working Group on Arbitrary Detention. EFF joined 33 other organizations in supporting the submission and urging the UNWGAD to promptly issue its opinion on the matter. Last week, we signed another letter once again urging the UNWGAD to issue an opinion.
Despite his ongoing incarceration, Alaa’s writing and his activism have continued to be honored worldwide. In October, he was announced as the joint winner of the PEN Pinter Prize alongside celebrated writer Arundhati Roy. His 2021 collection of essays, You Have Not Yet Been Defeated, has been re-released as part of Fitzcarraldo Editions’ First Decade Collection. Alaa is also the 2023 winner of PEN Canada’s One Humanity Award and the 2022 winner of EFF’s own EFF Award for Democratic Reform Advocacy.
EFF once again calls for Alaa Abd El Fattah’s immediate and unconditional release and urges the UN Working Group on Arbitrary Detention to promptly issue its opinion on his incarceration. We further urge the British government to take action to secure his release.
"Why Is It So Expensive To Repair My Devices?"
Now, of course, we’ve all dropped a cell phone, picked it up, and realized that we’ve absolutely destroyed its screen. Right? Or is it just me...? Either way, you’ve probably seen how expensive it can be to repair a device, whether it be a cell phone, laptop, or even a washing machine.
Device repair doesn’t need to be expensive, but companies have made repair a way to siphon more money from your pocket to theirs. It doesn’t need to be this way, and with our new site—Digital Rights Bytes—we lay out how we got here and what we can do to fix this issue.
Check out our short one-minute video explaining why device repair has become so expensive and what you can do to defend your right to repair. If you’re hungry to learn more, we’ve broken up some key takeaways into small byte-sized pieces you can even share with your family and friends.
Digital Rights Bytes also has answers to other common questions, including whether your phone is actually listening to you, whether you really own the digital media you buy, and more. Got any additional questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes so we can find it!
EFF Is Ready for What's Next | EFFector 36.14
Don't be scared of your backlog of digital rights news. Instead, check out EFF's EFFector newsletter! It's the one-stop shop for keeping up with the latest in the fight for online freedoms. This time we cover our expectations and preparations for the next U.S. presidential administration, surveillance towers at the U.S.-Mexico border, and EFF's new report on the use of AI in Latin America.
It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive or by clicking the button below:
EFFECTOR 36.14 - EFF IS READY FOR WHAT'S NEXT
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
Tell Congress To Stop These Last-Minute Bills That Help Patent Trolls
This week, the Senate Judiciary Committee is set to use its limited time in the lame-duck session to vote on a bill that would make the patent system even worse.
The Patent Eligibility Restoration Act (S. 2140), or PERA, would undo vital limits on computer technology patents that the Supreme Court established in the landmark 2014 Alice v. CLS Bank decision. Alice barred patent applicants from obtaining patents simply by adding generic computer language to abstract ideas.
Tell Congress: No New Bills For Patent Trolls
While Alice hasn’t fully fixed the problems of the patent system, or patent trolling, it has led to the rejection of hundreds of terrible software patents, including patents on crowdfunding, tracking packages, photo contests, watching online ads, computer bingo, upselling, and many others.
PERA would not only revive these dangerous technology patents, but also expand patenting of human genes—a type of patent the Supreme Court essentially blocked in 2013.
The Senate Judiciary Committee is also scheduled to vote on the PREVAIL Act (S. 2220), which seeks to severely limit the public’s ability to challenge bad patents at the patent office. These challenges are among the most effective tools for eliminating patents that never should have been granted in the first place.
Passing these bills would sell out the public interest to a narrow group of patent holders. EFF stands together with a broad coalition of patient rights groups, consumer rights organizations, think tanks, startups, and business organizations to oppose these harmful bills.
This week, we need to show Congress that everyday users and creators won’t support laws that foster more patent abuse. Help us send a clear message to your representatives in Congress today.
Tell Congress to reject PERA and PREVAIL
The U.S. Senate must reject bills like these that would allow the worst patent scams to expand and thrive.
Speaking Freely: Tanka Aryal
*This interview took place in April 2024 at NetMundial+10 in São Paulo, Brazil and has been edited for length and clarity.
Tanka Aryal is the President of Digital Rights Nepal. He is an attorney practicing at the Supreme Court of Nepal. He has worked to promote digital rights, the right to information, freedom of expression, civic space, accountability, and internet freedom nationally for the last 15 years. Mr. Aryal holds two LLM degrees in International Human Rights Law, from Kathmandu School of Law and Central European University Hungary. Additionally, he completed different degrees from Oxford University UK and Tokiwa University Japan. Mr. Aryal has worked as a consultant and staff member with different national and international organizations including FHI 360, International Center for Not-for-profit Law (ICNL), UNESCO, World Bank, ARTICLE 19, United Nations Development Programme (UNDP), ISOC, and the United Nations Department of Economic and Social Affairs (UNDESA/DPADM). Mr. Aryal led a right to information campaign throughout the country for more than four years as the Executive Director of Citizens’ Campaign for Right to Information.
Greene: Can you introduce yourself? And can you tell me what kind of work your organization does on freedom of speech in particular?
I am Tanka Aryal, I’m from Nepal and I represent Digital Rights Nepal. Looking at my background of work, I have been working on freedom of expression for the last twenty years. Digital Rights Nepal is a new organization that started during COVID when a number of issues came up particularly around freedom of expression online and the use of different social media platforms expressing the ideas of every individual representing different classes, castes, and groups of society. The majority of work done by my organization is particularly advocating for freedom of expression online as well as data privacy and protection. This is the domain we work in mainly, but in the process of talking about and advocating for freedom of expression we also talk about access to information, online information integrity, misinformation, and disinformation.
Greene: What does free speech mean to you personally?
It’s a very heavy question! I know it’s not an absolute right—it has limitations. But I feel like if I am not doing any harm to other individuals or it’s not a mass security type of thing, there should not be interference from the government, platforms, or big companies. At the same time, there are a number of direct and indirect undue influences from the political wings or the Party who is running the government, which I don’t like. No interference in my thoughts and expression—that is fundamental for me with freedom of expression.
Greene: Do you consider yourself to be passionate about freedom of expression?
Oh yes. What I’ve realized is, if I consider the human life, existence starts once you start expressing yourself and dealing and communicating with others. So this is the very fundamental freedom for every human being. If this part of rights is taken away then your life, my life, as a human is totally incomplete. That’s why I’m so passionate about this right. Because this right has created a foundation for other rights as well. For example, if I speak out and demand my right to education or the right to food, if my right to speak freely is not protected, then those other rights are also at risk.
Greene: Do you have a personal experience that shaped how you feel about freedom of expression?
Yes. I don’t mean this in a legal sense, but my personal understanding is that if you are participating in any forum, unless you express your ideas and thoughts, then you are hardly counted. This is the issue of existence and making yourself exist in society and in community. What I realized was that when you express your ideas with the people and the community, then the response is better and sometimes you get to engage further in the process. If I would like to express myself, if there are no barriers, then I feel comfortable. In a number of cases in my life and journey dealing with the government and media and different political groups, if I see some sort of barriers or external factors that limit my speaking, that really hampers me. I realize that it really matters.
Greene: In your opinion, what is the state of freedom of expression in Nepal right now?
It’s really difficult. It’s not one of those absolute types of things. There are some indicators of where we stand. For instance, where we stand on the Corruption Index, where we stand on the Freedom of Expression Index. If I compare the state of freedom of expression in Nepal, it’s definitely better than the surrounding countries like India, Bangladesh, Pakistan, and China. But, learning from these countries, my government is trying to be more restrictive. Some laws and policies have been introduced that limit freedom of expression online. For instance, TikTok is banned by the government. We have considerably good conditions, but still there is room to improve in a way that you can have better protections for expression.
Greene: What was the government’s thinking with banning TikTok?
There are a number of interpretations. Before banning TikTok the government was seen as pro-China. Once the government banned TikTok—India had already banned it—that decision supported a narrative that the government is leaning to India rather than China. You know, this sort of geopolitical interpretation. A number of other issues were there, too. Platforms were not taking measures even for some issues that shouldn’t have come through the platforms. So the government took the blanket approach in a way to try to promote social harmony and decency and morality. Some of the content published on TikTok was not acceptable, in my opinion, as a consumer myself. But the course of correction could have been different, maybe regulation or other things. But the government took the shortcut way by banning TikTok, eliminating the problem.
Greene: So a combination of geopolitics and that they didn’t like what people were watching on TikTok?
Actually there are a number of narratives told by different blocs of people, people with different ideas and different political wings. It was said that the government—the Maoist leader is the prime minister—considers the very rural people as their vote bank. The government sees them as less literate, brain-washed types of people. “Okay, this is my vote bank, no one can sort of toss it.” Then once TikTok became popular, its users were the very rural people, women, marginalized people. So they started using TikTok, asking questions to the government and things like that. It was said that the Maoist party was not happy with that. “Okay, now our vote bank is going out of our hands, so we better block TikTok and keep them in our control.” So that is the narrative that was also discussed.
Greene: It’s similar in the US, we’re dealing with this right now. Similarly, I think it’s a combination of the geopolitics just with a lot of anti-China sentiment in the US as well as a concern around, “We don’t like what the kids are doing on TikTok and China is going to use it to serve political propaganda and brainwash US users.”
In the case of the US and India, TikTok was banned for national security. But in our case, the government never said, “Okay, TikTok is banned for our national security.” Rather, they were focusing on content that the government wasn’t happy with.
Greene: Right, and let me credit Nepal there for their candor, though I don’t like the decision. Because I personally don’t think the United States government’s national security excuse is very convincing either. But what types of speech or categories of content or topics are really targeted by regulators right now for restriction?
To be honest, the elected leaders, maybe the President, the Prime Minister, the powerholders don’t like the questions being posed to them. That is a general thing. Maybe the Mayor, maybe the Prime Minister, maybe a Minister, maybe a Chief Minister of one province—the powerholders don’t like being questioned. That is one type of speech made by the people—asking questions, asking for accountability. So that is one set of targets. Similarly, some speech that’s for the protection of the rights of the individual in many cases—like hate speech against Dalit, and women, and the LGBTQIA community—so any sort of speech or comments, any type of content, related to this domain is an issue. People don’t have the capacity to listen to even very minor critical things. If anybody says, “Hey, Tanka, you have these things I would like to be changed from your behavior.” People can say these things to me. As a public position holder I should have that ability to listen and respond accordingly. But politicians say, “I don’t want to listen to any sort of criticism or critical thoughts about me.” Particularly the political nature of the speech which seeks accountability and raises transparency issues, that is mostly targeted.
Greene: You said earlier that as long as your speech doesn’t harm someone there shouldn’t be interference. Are there certain harms that are caused by speech that you think are more serious or that really justify regulation or laws restricting them?
It’s a very tricky one. Even if regulation is justified, if one official can ban something blanketly, it should go through judicial scrutiny. We tend to not have adequate laws. There are a number of gray areas. Those gray areas have been manipulated and misused by the government. In many cases, misused by, for example, the police. What I understood is that our judiciary is sometimes very sensible and very sensitive about freedom of expression. However, in many cases, if the issue is related to the judiciary itself they are very conservative. Two days back I read in a newspaper that there was a sting operation around one judge engaging [in corruption] with a business. And some of the things came into the media. And the judiciary was so reactive! It was not blamed on the whole judiciary, but the judiciary asked online media to remove that content. There were a number of discussions. Like without further investigation or checking the facts, how can the judiciary give that order to remove that content? Okay, one official thought that this is wrong content, and if the judiciary has the power to take it down, that’s not right and that can be misused any time. I mean, the judiciary is really good if the issues are related to other parties, but if the issue is related to the judiciary itself, the judiciary is conservative.
Greene: You mentioned gray areas and you mentioned some types of hate speech. Is that a gray area in Nepal?
Yeah, actually, we don’t have that much confidence in law. What we have is the Electronic Transactions Act. Section 47 says that content online can not be published if the content harms others, and so on. It’s very abstract. So that law can be misused if the government really wanted to drag you into some sort of very difficult position.
We have been working toward and have provided input on a new law that’s more comprehensive, that would define things in proper ways that have less of a chance of being misused by the police. But it could not move ahead. The bill was drafted in the past parliament. It took lots of time, we provided input, and then after five years it could not move ahead. Then parliament dissolved and the whole thing became null. The government is not that consultative. Unlike how here we are talking [at NetMundial+10] with multi-stakeholder participation—the government doesn’t bother. They don’t see an incentive for engaging civil society. Rather, they see civil society as troublemakers: let’s keep them away and pass the law. That is the idea they are practicing. We don’t have very clear laws, and because we don’t have clear laws some people really violate fundamental principles. Say someone was attacking my privacy or I was facing defamation issues. The police are very shorthanded; they can’t arrest that person even if they’re doing something really bad. In the meantime, the police, if they have a good political nexus and they just want to drag somebody, they can misuse it.
Greene: How do you feel about private corporations being gatekeepers of speech?
It’s very difficult. Even during election time the Election Commission issued an Election Order of Conduct, you could see how foolish they are. They were giving the mandate to the ISPs that, “If there is a violation of this Order of Conduct, you can take it down.” That sort of blanket power given to them can be misused any time. So if you talk about our case, we don’t have that many giant corporations, of course Meta and all the major companies are there. Particularly the government has given certain mandates to ISPs, and in many cases even the National Press Council was asking the ISP Association and the Nepal Telecommunications Authority (NTA) that regulates all ISPs. Without having a very clear mandate to the Press Council, without having a clear mandate to NTA, they are exercising power to instruct the ISPs, “Hey, take this down. Hey, don’t publish this.” So that’s the sort of mechanism and the practice out there.
Greene: You said that Digital Rights Nepal was founded during the pandemic. What was the impetus for starting the organization?
We were totally trapped at home, working from home, studying from home, everything from home. I had worked for a nonprofit organization in the past, advocating for freedom of expression and more, and when we were at home during COVID a number of issues came out about online platforms. Some people were able to exercise their rights because they have access to the internet, but some people didn’t have access to the internet and were unable to exercise freedom of expression. So we recognized there are a number of issues and there is a big digital divide. There are a number of regulatory gray areas in this sector. Looking at the number of kids who were compelled to do online school, their data protection and privacy was another issue. We were engaging in these e-commerce platforms to buy things and there aren’t proper regulations. So we thought there are a number of issues and nobody working on them, so let’s form this initiative. It didn’t come all of the sudden, but our working background was there and that situation really made us realize that we needed to focus our work on these issues.
Greene: Okay, our final question. Who is your free speech hero?
It depends. In my context, in Nepal, there are a couple of people that don’t hesitate to express their ideas even if it is controversial. There’s also Voltaire’s saying, “I defend your freedom of expression even if I don’t like the content.” He could be one of my free speech heroes. Because sometimes people are hypocrites. They say, “I try to advocate freedom of expression if it applies to you and the government and others, but if any issues come to harm me I don’t believe in the same principle.” Then people don’t defend freedom of expression. I have seen a number of people showing their hypocrisy once the time came where the speech is against them. But for me, like Voltaire says, even if I don’t like your speech I’ll defend it until the end because I believe in the idea of freedom of expression.
Creators of This Police Location Tracking Tool Aren't Vetting Buyers. Here's How To Protect Yourself
404 Media, along with Haaretz, Notus, and Krebs On Security recently reported on a company that captures smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices’ (and, by proxy, individuals’) locations. The dangers that this tool presents are especially grave for those traveling to or from out-of-state reproductive health clinics, places of worship, and the border.
The tool, called Locate X, is run by a company called Babel Street. Locate X is designed for law enforcement, but an investigator working with Atlas Privacy, a data removal service, was able to gain access to Locate X by simply asserting that they planned to work with law enforcement in the future.
With an incoming administration adversarial to those most at risk from location tracking using tools like Locate X, the time is ripe to bolster our digital defenses. Now more than ever, attorneys general in states hostile to reproductive choice will be emboldened to use every tool at their disposal to incriminate those exercising their bodily autonomy. Locate X is a powerful tool they can use to do this. So here are some timely tips to help protect your location privacy.
First, a short disclaimer: these tips provide some level of protection against mobile device-based tracking. This is not an exhaustive list of techniques, devices, or technologies that can help restore one’s location privacy. Your security plan should reflect how specifically targeted you are for surveillance. Additional steps, such as researching and mitigating the on-board devices included with your car, or sweeping for physical GPS trackers, may be prudent steps which are outside the scope of this post. Likewise, more advanced techniques such as flashing your device with a custom-built privacy- or security-focused operating system may provide additional protections which are not covered here. The intent is to give some basic tips for protecting yourself from mobile device location tracking services.
Disable Mobile Advertising Identifiers
Services like Locate X are built atop an online advertising ecosystem that incentivizes collecting troves of information from your device and delivering it to platforms to micro-target you with ads based on your online behavior. A linchpin connecting the distinct information (in this case, location) delivered to an app or website at one point in time with the information delivered to a different app or website at the next point in time is a unique identifier such as the mobile advertising identifier (MAID). Essentially, MAIDs allow advertising platforms, and the data brokers they sell to, to “connect the dots” between an otherwise disconnected scatterplot of points on a map, resulting in a cohesive picture of the movement of a device through space and time.
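To make the “connect the dots” point concrete, here is a minimal, hypothetical sketch (toy data, invented MAID values) of how a constant identifier lets a broker join otherwise unrelated location pings into a single per-device track:

```python
# Toy sketch with made-up data: how a constant advertising ID (MAID)
# lets otherwise-unrelated location pings be linked into one trajectory.
from collections import defaultdict

# Each ping arrives separately, possibly from different apps at different times.
pings = [
    {"maid": "ab12", "lat": 45.52, "lon": -122.68, "ts": 1700000000},
    {"maid": "cd34", "lat": 47.61, "lon": -122.33, "ts": 1700000100},
    {"maid": "ab12", "lat": 45.53, "lon": -122.66, "ts": 1700000200},
    {"maid": "ab12", "lat": 45.55, "lon": -122.62, "ts": 1700000400},
]

def link_by_maid(pings):
    """Group pings by MAID and sort each group by timestamp,
    yielding a time-ordered movement track per device."""
    tracks = defaultdict(list)
    for p in pings:
        tracks[p["maid"]].append(p)
    for track in tracks.values():
        track.sort(key=lambda p: p["ts"])
    return dict(tracks)

tracks = link_by_maid(pings)
# Device "ab12" now has a time-ordered path of three points. If the MAID
# were deleted or reset, these pings could not be joined this way.
```

This is why deleting or resetting the advertising ID, as described below, matters: without that stable key, the individual data points are far harder to stitch into a trajectory.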
As a result of significant pushback by privacy advocates, both Android and iOS provided ways to disable advertising identifiers from being delivered to third-parties. As we described in a recent post, you can do this on Android following these steps:
With the release of Android 12, Google began allowing users to delete their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy > Privacy > Ads. Tap “Delete advertising ID,” then tap it again on the next page to confirm. This will prevent any app on your phone from accessing it in the future.
The Android opt out should be available to most users on Android 12, but may not be available on older versions. If you don’t see an option to “delete” your ad ID, you can use the older version of Android’s privacy controls to reset it and ask apps not to track you.
And on iOS:
Apple requires apps to ask permission before they can access your IDFA. When you install a new app, it may ask you for permission to track you.
Select “Ask App Not to Track” to deny it IDFA access.
To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking.
In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your IDFA.
You can set the “Allow apps to Request to Track” switch to the “off” position (the slider is to the left and the background is gray). This will prevent apps from asking to track in the future. If you have granted apps permission to track you in the past, this will prompt you to ask those apps to stop tracking as well. You also have the option to grant or revoke tracking access on a per-app basis.
Apple has its own targeted advertising system, separate from the third-party tracking it enables with IDFA. To disable it, navigate to Settings > Privacy > Apple Advertising and set the “Personalized Ads” switch to the “off” position to disable Apple’s ad targeting.
Audit Your Apps’ Trackers and Permissions
In general, the more apps you have, the more intractable your digital footprint becomes. A separate app you’ve downloaded for flashlight functionality may also come pre-packaged with trackers delivering your sensitive details to third parties. That’s why it’s advisable to limit the number of apps you download and instead use your pre-existing apps or operating system to, say, find the bathroom light switch at night. It isn't just good for your privacy: any new app you download also increases your “attack surface,” or the possible paths hackers might have to compromise your device.
We get it though. Some apps you just can’t live without. For these, you can at least audit what trackers the app communicates with and what permissions it asks for. Both Android and iOS have a page in their Settings apps where you can review permissions you've granted apps. Not all of these are only “on” or “off.” Some, like photos, location, and contacts, offer more nuanced permissions. It’s worth going through each of these to make sure you still want that app to have that permission. If not, revoke or dial back the permission. To get to these pages:
On Android: Open Settings > Privacy & Security > Privacy Controls > Permission Manager.
On iPhone: Open Settings > Privacy & Security.
If you're inclined to do so, there are tricks for further research. For example, you can look up the trackers embedded in Android apps using an excellent service called Exodus Privacy. As of iOS 15, you can check on the device itself by turning on the system-level app privacy report in Settings > Privacy > App Privacy Report. From that point on, browsing to that menu will allow you to see exactly what permissions an app uses, how often it uses them, and what domains it communicates with. You can investigate any given domain by pasting it into a search engine and seeing what’s been reported about it. Pro tip: to exclude results from that domain itself and only include what other domains say about it, many search engines like Google support the syntax -site:www.example.com.
Disable Real-Time Tracking with Airplane Mode
To prevent an app from having network connectivity and sending out your location in real time, you can put your phone into airplane mode. Although it won’t prevent an app from storing your location and delivering it to a tracker sometime later, most apps (even those filled with trackers) won’t bother with this extra complication. It is important to keep in mind that this will also prevent you from reaching out to friends and using most apps and services that you depend on. Because of these trade-offs, you likely will not want to keep Airplane Mode enabled all the time, but it may be useful when you are traveling to a particularly sensitive location.
Some apps are designed to allow you to navigate even in airplane mode. Tapping your profile picture in Google Maps will drop down a menu with Offline maps. Tapping this will allow you to draw a boundary box and pre-download an entire region, which you can do even without connectivity. As of iOS 18, you can do this on Apple Maps too: tap your profile picture, then “Offline Maps,” and “Download New Map.”
Other apps, such as Organic Maps, allow you to download large maps in advance. Since GPS itself determines your location passively (no transmissions need be sent, only received), connectivity is not needed for your device to determine its location and keep it updated on a map stored locally.
Keep in mind that you don’t need to be in airplane mode the entire time you’re navigating to a sensitive site. One strategy is to navigate to some place near your sensitive endpoint, then switch airplane mode on, and use offline maps for the last leg of the journey.
Separate Devices for Separate Purposes
Finally, you may want to bring a separate, clean device with you when you’re traveling to a sensitive location. We know this isn’t an option available to everyone. Not everyone can afford to purchase a separate device just for the times they may have heightened privacy concerns. If possible, though, this can provide some level of protection.
A separate device doesn’t necessarily mean a separate data plan: navigating offline as described in the previous step may bring you to a place you know Wi-Fi is available. It also means any persistent identifiers (such as the MAID described above) are different for this device, along with different device characteristics which won’t be tied to your normal personal smartphone. Going through this phone and keeping its apps, permissions, and browsing to an absolute minimum will avoid an instance where that random sketchy game you have on your normal device to kill time sends your location to its servers every 10 seconds.
One good (though more onerous) practice that would remove any persistent identifiers like long-lasting cookies or MAIDs is resetting your purpose-specific smartphone to factory settings after each visit to a sensitive location. Just remember to re-download your offline maps and restore your privacy settings afterwards.
Further Reading
Our own Surveillance Self-Defense site, along with many other resources, is available to provide more guidance on protecting your digital privacy. Often, general privacy tips are applicable to protecting your location data from being divulged, as well.
The underlying situation that makes invasive tools like Locate X possible is the online advertising industry, which incentivizes a massive siphoning of user data to micro-target audiences. Earlier this year, the FTC showed some appetite to pursue enforcement action against companies brokering the mobile location data of users. We applauded this enforcement, and hope it will continue into the next administration. But regulatory authorities only have the statutory mandate and ability to punish the worst examples of abuse of consumer data. A piecemeal solution is limited in its ability to protect citizens from the vast array of data brokers and advertising services profiting off of surveilling us all.
Only a federal privacy law with a strong private right of action which allows ordinary people to sue companies that broker their sensitive data, and which does not preempt states from enacting even stronger privacy protections for their own citizens, will have enough teeth to start to rein in the data broker industry. In the meantime, consumers are left to their own devices (pun not intended) in order to protect their most sensitive data, such as location. It’s up to us to protect ourselves, so let’s make it happen!
Celebrating the Life of Aaron Swartz: Aaron Swartz Day 2024
Aaron Swartz was a digital rights champion who believed deeply in keeping the internet open. His life was cut short in 2013, after federal prosecutors charged him under the Computer Fraud and Abuse Act (CFAA) for systematically downloading academic journal articles from the online database JSTOR. Facing the prospect of a long and unjust sentence, Aaron died by suicide at the age of 26. EFF was proud to call Aaron a friend and ally.
Today, November 8, would have been his 38th birthday. On November 9, the organizers of Aaron Swartz Day are celebrating his life with a guest-packed podcast featuring those carrying on the work around issues close to his heart. Hosts Lisa Rein and Andre Vinicus Leal Sobral will speak to:
- Ryan Shapiro, co-founder of the national security transparency non-profit Property of the People
- Nathan Dyer of SecureDrop, Newsroom Support Engineer for the Freedom of the Press Foundation
- Tracey Jaquith, Founding Coder and TV Architect at the Internet Archive
- Tracy Rosenberg, co-founder of the Aaron Swartz Day Police Surveillance Project and Oakland Privacy
- Brewster Kahle, founder of the Internet Archive
- Ryan Sternlicht, VR developer, educator, researcher, advisor, and maker
- Grant Smith Ellis, Chairperson of the Board, MassCann and Legal Intern at the Parabola Center
- Michael “Mek” Karpeles, Open Library, Internet Archive
The podcast will start at 2 p.m. PT/10 p.m. UTC. Please see the official Aaron Swartz Day and International Hackathon page for full details.
If you're a programmer or developer engaged in cutting-edge exploration of technology, please check out EFF's Coders' Rights Project.
EFF to Second Circuit: Electronic Device Searches at the Border Require a Warrant
EFF, along with ACLU and the New York Civil Liberties Union, filed an amicus brief in the U.S. Court of Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.
The case, U.S. v. Kamaldoss, involves the criminal prosecution of a man whose cell phone and laptop were forensically searched after he landed at JFK airport in New York City. While a manual search involves a border officer tapping or mousing around a device, a forensic search involves connecting another device to the traveler’s device and using software to extract and analyze the data to create a detailed report of the device owner’s activities and communications. In part based on evidence obtained during the forensic device searches, Mr. Kamaldoss was subsequently charged with prescription drug trafficking.
The district court upheld the forensic searches of his devices because the government had reasonable suspicion that the defendant “was engaged in efforts to illegally import scheduled drugs from abroad, an offense directly tied to at least one of the historic rationales for the border exception—the disruption of efforts to import contraband.”
The number of warrantless device searches at the border and the significant invasion of privacy they represent is only increasing. In Fiscal Year 2023, U.S. Customs and Border Protection (CBP) conducted 41,767 device searches.
The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.
The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country.
In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.
Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.
In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.
First, physical contraband (like drugs) can’t be found in digital data. Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.
Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution.
If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).
In our brief, we also highlighted three other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Smith (2023), which we wrote about last year; U.S. v. Sultanov (2024); and U.S. v. Fox (2024). We plan to file briefs in their appeals, as well, in the hope that the Second Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.
EFF to Court: Reject X’s Effort to Revive a Speech-Chilling Lawsuit Against a Nonprofit
This post was co-written by EFF legal intern Gowri Nayar.
X’s lawsuit against the nonprofit Center for Countering Digital Hate is intended to stifle criticism and punish the organization for its reports criticizing the platform’s content moderation practices, and a previous ruling dismissing the lawsuit should be affirmed, EFF and multiple organizations argued in a brief filed this fall.
X sued the Center for Countering Digital Hate (“CCDH”) in federal court in 2023 in response to its reports, which concluded that X’s practices have facilitated an environment of hate speech and misinformation online. Although X’s suit alleges, among other things, breach of contract and violation of the Computer Fraud and Abuse Act, the case is really about X trying to hold CCDH liable for the public controversy surrounding its moderation practices. At bottom, X is claiming that CCDH damaged the platform by critically reporting on it.
CCDH sought to throw out the case on the merits and under California’s anti-SLAPP statute, a law that allows for the early dismissal of lawsuits filed in retaliation for someone exercising their free speech rights (such suits are known as Strategic Lawsuits Against Public Participation, or SLAPPs). In March, the district court ruled in favor of CCDH, dismissed the case, and found that the lawsuit was a SLAPP.
As the district judge noted, X’s suit “is about punishing the Defendants for their speech.” The court correctly rejected X’s contract and CFAA theories, seeing them for what they were: grievances with CCDH’s criticisms masquerading as legal claims.
X appealed the ruling to the U.S. Court of Appeals for the Ninth Circuit earlier this year. In September, EFF, along with the ACLU, ACLU of Northern California, and the Knight First Amendment Institute at Columbia University, filed an amicus brief in support of CCDH.
The amicus brief argues that the Ninth Circuit should not allow X to make use of state contract law and a federal anti-hacking statute to stifle CCDH’s speech. Through this lawsuit, X wants to punish CCDH for publishing reports that highlighted how X’s policies and practices are allowing misinformation and hate speech to thrive on its platform. We also argue against the enforcement of X’s anti-scraping provisions because of how vital scraping is to modern journalism and research.
Lastly, we called on the court to reject X’s interpretation of the CFAA because it relied on a legal theory that has already been rejected by courts—including the Ninth Circuit itself—in earlier cases. Allowing the CFAA to be used to criminalize all instances of unauthorized access would run counter to those decisions and would render illegal large categories of everyday activities, such as sharing passwords with friends and family.
Ruling in favor of X in this lawsuit would set a very dangerous precedent for free speech rights and allow powerful platforms to exert undue control over information online. We hope the Ninth Circuit affirms the lower court decision and dismisses this meritless lawsuit.
The 2024 U.S. Election is Over. EFF is Ready for What's Next.
The dust of the U.S. election is settling, and we want you to know that EFF is ready for whatever’s next. Our mission to ensure that technology serves you—rather than silencing, tracking, or oppressing you—does not change. Some of what’s to come will be in uncharted territory. But we have been preparing for whatever this future brings for a long time. EFF is at its best when the stakes are high.
No matter what, EFF will take every opportunity to stand with users. We’ll continue to advance our mission of user privacy, free expression, and innovation, regardless of the obstacles. We will hit the ground running.
During the previous Trump administration, EFF didn’t just hold the line. We pushed digital rights forward in significant ways, both nationally and locally. We supported those protesting in the streets, with expanded Surveillance Self-Defense guides and our Security Education Companion. The first offers information on how to protect yourself while you exercise your First Amendment rights, and the second gives tips on how to help your friends and colleagues stay safe.
Along with our allies, we fought government use of face surveillance, passing municipal bans on the dangerous technology. We urged the Supreme Court to expand protections for your cell phone data, and in Carpenter v. United States, it did so—recognizing that location information collected by cell providers creates a “detailed chronicle of a person’s physical presence compiled every day, every moment over years.” Now, police must get a warrant before obtaining a significant amount of this data.
EFF is at its best when the stakes are high.
But we also stood our ground when governments and companies tried to take away the hard-fought protections we’d won in previous years. We stopped government attempts to backdoor private messaging with “ghost” and “client-side scanning” measures that obscured their intentions to undermine end-to-end encryption. We defended Section 230, the common sense law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on. And when the COVID pandemic hit, we carefully analyzed and pushed back on measures that would have gone beyond what was necessary to keep people safe and healthy by invading our privacy and inhibiting our free speech.
Every time policymakers or private companies tried to undermine your rights online during the last Trump administration, from 2017 to 2021, we were there—just as we continued to be under President Biden. In preparation for the next four years, here’s just some of the groundwork we’ve already laid:
- Border Surveillance: For a decade we’ve been revealing how the hundreds of millions of dollars pumped into surveillance technology along the border impacts the privacy of those who live, work, or seek refuge there, and of the thousands of others transiting through our border communities each day. We’ve defended the rights of people whose devices have been searched or seized upon entering the country. We’ve mapped out the network of automated license plate readers installed at checkpoints and land entry points, and the more than 465 surveillance towers along the U.S.-Mexico border. And we’ve advocated for sanctuary data policies restricting how ICE can access criminal justice and surveillance data.
- Surveillance Self-Defense: Protecting your private communications will only become more critical, so we’ve been expanding both the content and the translations of our Surveillance Self-Defense guides. We’ve written clear guidance for staying secure that applies to everyone, but is particularly important for journalists, protesters, activists, LGBTQ+ youths, and other vulnerable populations.
- Reproductive Rights: Long before Roe v. Wade was overturned, EFF was working to minimize the ways that law enforcement can obtain data from tech companies and data brokers. After the Dobbs decision was handed down, we supported multiple laws in California that shield both reproductive and transgender health data privacy, even for people outside of California. But there’s more to do, and we’re working closely with those involved in the reproductive justice movement to make more progress.
- Transition Memo: When the next administration takes over, we’ll be sending a lengthy, detailed policy analysis to the incoming administration on everything from competition to AI to intellectual property to surveillance and privacy. We provided a similarly thoughtful set of recommendations on digital rights issues after the last presidential election, helping to guide critical policy discussions.
We’ve prepared much more too. The road ahead will not be easy, and some of it is not yet mapped out, but one of the reasons EFF is so effective is that we play the long game. We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we work for the user.
We’re not merely fighting against individual companies or elected officials or even specific administrations. We are fighting for you. That won’t stop no matter who’s in office.
AI in Criminal Justice Is the Trend Attorneys Need to Know About
The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.
The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.
Face recognition, license plate readers, and gunshot detection systems all operate using forms of AI, enabling broad, privacy-eroding surveillance that has led to wrongful arrests and jail time through false positives. Data streams from these tools—combined with public records, geolocation tracking, and other data from mobile phones—are being shared between policing agencies and used to build increasingly detailed law enforcement profiles of people, whether or not they’re under investigation. AI software is then used to draw black-box inferences and connections among these data points. A growing number of police departments have been eager to add AI to their arsenals, largely encouraged by extensive marketing from the companies developing and selling this equipment and software.
“As AI facilitates mass privacy invasion and risks routinizing—or even legitimizing—inequalities and abuses, its influence on law enforcement responsibilities has important implications for the application of the law, the protection of civil liberties and privacy rights, and the integrity of our criminal justice system,” EFF Investigative Researcher Beryl Lipton wrote.
The ABA’s 2024 State of Criminal Justice publication is available from the ABA in book or PDF format.
EFF Lawsuit Discloses Documents Detailing Government’s Social Media Surveillance of Immigrants
Despite rebranding a federal program that surveils the social media activities of immigrants and foreign visitors under a more benign name, the government agreed to spend more than $100 million to continue monitoring people’s online activities, records disclosed to EFF show.
Thousands of pages of government procurement records and related correspondence show that the Department of Homeland Security and its component Immigration and Customs Enforcement largely continued an effort, originally called extreme vetting, to try to determine whether immigrants posed any threat by monitoring their social media and internet presence. The only real change appeared to be rebranding the program to be known as the Visa Lifecycle Vetting Initiative.
The government disclosed the records to EFF after we filed suit in 2022 to learn what had become of a program proposed by President Donald Trump. The program continued under President Joseph Biden. Regardless of the name used, DHS’s program raises significant free expression and First Amendment concerns because it chills the speech of those seeking to enter the United States and allows officials to target and punish them for expressing views they don’t like.
Yet that appears to be a major purpose of the program, the released documents show. For example, the terms of the contracting request specify that the government sought a system that could:
analyze and apply techniques to exploit publicly available information, such as media, blogs, public hearings, conferences, academic websites, social media websites such as Twitter, Facebook, and LinkedIn, radio, television, press, geospatial sources, internet sites, and specialized publications with intent to extract pertinent information regarding individuals.
That document and another one make explicit that one purpose of the surveillance and analysis is to identify “derogatory information” about visa applicants and other visitors. The vague phrase is broad enough to potentially capture any online expression that is critical of the U.S. government or its actions.
EFF has called on DHS to abandon its online social media surveillance program because it threatens to unfairly label individuals as a threat or otherwise discriminate against them on the basis of their speech. This could include denying people access to the United States for speaking their mind online. It’s also why EFF has supported a legal challenge to a State Department practice requiring people applying for a visa to register their social media accounts with the government.
The documents released in EFF’s lawsuit also include a telling passage about the controversial program and the government’s efforts to sanitize it. In an email discussing the lawsuit against the State Department’s social media moniker collection program, an ICE official describes the government’s need to rebrand the program, “from what ICE originally referred to as the Extreme Vetting Initiative.”
The official wrote:
On or around July 2017 at an industry day event, ICE sought input from the private sector on the use of artificial intelligence to assist in visa applicant vetting. In the months that followed there was significant pushback from a variety [of] channels, including Congress. As a result, on or around May 2018, ICE modified its strategy and rebranded the concept as the Visa Lifecycle Vetting Project.
Other documents detail the specifics of the contract and bidding process that resulted in DHS awarding $101,155,431.20 to SRA International, Inc., a government contractor that has operated under a different name since merging with another contractor and is now owned by General Dynamics.
The documents also detail an unsuccessful effort by a competitor to overturn DHS’s decision to award the contract to SRA, though much of the content of that dispute is redacted.
All of the documents released to EFF are available on DocumentCloud.
Judge’s Investigation Into Patent Troll Results In Criminal Referrals
In 2022, three companies with strange names and no clear business purpose beyond patent litigation filed dozens of lawsuits in Delaware federal court, accusing businesses of all sizes of patent infringement. Some of these complaints claimed patent rights over basic aspects of modern life; one, for example, involved a patent that pertains to the process of clocking in to work through an app.
These companies (Mellaconic IP, Backertop Licensing, and Nimitz Technologies) seemed to be typical examples of “patent trolls”: companies whose primary business is suing others over patents or demanding licensing fees rather than providing actual products or services.
However, the cases soon took an unusual turn. The Delaware federal judge overseeing the cases, U.S. District Judge Colm Connolly, sought more information about the patents and their ownership. One of the alleged owners was a food-truck operator who had been promised “passive income,” but was entitled to only a small portion of any revenue generated from the lawsuits. Another owner was the spouse of an attorney at IP Edge, the patent-assertion company linked to all three LLCs.
Following an extensive investigation, the judge determined that attorneys associated with these shell companies had violated legal ethics rules. He pointed out that the attorneys may have misled Hau Bui, the food-truck owner, about his potential liability in the case. Judge Connolly wrote:
[T]he disparity in legal sophistication between Mr. Bui and the IP Edge and Mavexar actors who dealt with him underscore that counsel's failures to comply with the Model Rules of Professional Conduct while representing Mr. Bui and his LLC in the Mellaconic cases are not merely technical or academic.
Judge Connolly also concluded that IP Edge, the patent-assertion company behind hundreds of patent lawsuits and linked to the three LLCs, was the “de facto owner” of the patents asserted in his court, but that it attempted to hide its involvement. He wrote, “IP Edge, however, has gone to great lengths to hide the ‘we’ from the world,” with “we” referring to IP Edge. Connolly further noted, “IP Edge arranged for the patents to be assigned to LLCs it formed under the names of relatively unsophisticated individuals recruited by [IP Edge office manager] Linh Deitz.”
The judge referred three IP Edge attorneys to the Supreme Court of Texas’ Unauthorized Practice of Law Committee for engaging in “unauthorized practices of law in Texas.” Judge Connolly also sent a letter to the Department of Justice, suggesting an investigation into “individuals associated with IP Edge LLC and its affiliate Mavexar LLC.”
Patent Trolls Tried To Shut Down This Investigation
The attorneys involved in this wild patent trolling scheme challenged Judge Connolly’s authority to proceed with his investigation. However, because transparency in federal courts is essential and applicable to all parties, including patent assertion entities, EFF and two other patent reform groups filed a brief in support of the judge’s investigation. The brief argued that “[t]he public has a right—and need—to know who is controlling and benefiting from litigation in publicly-funded courts.” Companies targeted by the patent trolls, as well as the Chamber of Commerce, filed their own briefs supporting the investigation.
The appeals court sided with us, upholding Judge Connolly’s authority to proceed, which led to the referral of the involved attorneys to the disciplinary counsel of their respective bar associations.
After this damning ruling, one of the patent troll companies and its alleged owner made a final effort at appealing this outcome. In July of this year, the U.S. Court of Appeals for the Federal Circuit ruled that investigating Backertop Licensing LLC and ordering its alleged owner to testify was “an appropriate means to investigate potential misconduct involving Backertop.”
In EFF’s view, these types of investigations into the murky world of patent trolling are not only appropriate but should happen more often. Now that the appeals court has ruled, let’s take a look at what we learned about the patent trolls in this case.
Patent Troll Entities Linked To French Government
One of the patent trolling entities, Nimitz Technologies LLC, asserted a single patent, U.S. Patent No. 7,848,328, against 11 companies. When the judge required Nimitz’s supposed owner, a man named Mark Hall, to testify in court, Hall could not describe anything about the patent or explain how Nimitz acquired it. He didn’t even know the name of the patent (“Broadcast Content Encapsulation”). When asked what technology was covered by the patent, he said, “I haven’t reviewed it enough to know,” and when asked how he paid for the patent, Hall replied, “no money exchanged hands.”
The exchange between Hall and Judge Connolly went as follows:
Q. So how do you come to own something if you never paid for it with money?
A. I wouldn't be able to explain it very well. That would be a better question for Mavexar.
Q. Well, you're the owner?
A. Correct.
Q. How do you know you're the owner if you didn't pay anything for the patent?
A. Because I have the paperwork that says I'm the owner.
(Nov. 27, 2023 Opinion, pages 8-9.)
The Nimitz patent originated from the Finnish cell phone company Nokia, which later assigned it and several other patents to France Brevets, a French sovereign investment fund, in 2013. France Brevets, in turn, assigned the patent to a US company called Burley Licensing LLC, an entity linked to IP Edge, in 2021. Hau Bui (the food truck owner) signed on behalf of Burley, and Didier Patry, then the CEO of France Brevets, signed on behalf of the French fund.
France Brevets was an investment fund formed in 2009 with €100 million in seed money from the French government to manage intellectual property. It was set to receive 35% of any revenue related to “monetizing and enforcement” of the patent, with Burley agreeing to file at least one patent infringement lawsuit within a year and to collect a “total minimum Gross Revenue of US $100,000” within 24 months, or the patent rights would revert to France Brevets.
Burley Licensing LLC, run by IP Edge personnel, then created Nimitz Technologies LLC, a company with no assets except for the single patent. They obtained a mailing address for it from a Staples in Frisco, Texas, and assigned the patent to the LLC in August 2021, while the obligations to France Brevets remained unchanged until the fund shut down in 2022.
The Bigger Picture
It’s troubling that patent lawsuits are often funded by entities with no genuine interest in innovation, such as private equity firms. However, it’s even more concerning when foreign government-backed organizations like France Brevets manipulate the US patent system for profit. In this case, a Finnish company sold its patents to a French government fund, which used US-based IP lawyers to file baseless lawsuits against American companies, including well-known establishments like Reddit and Bloomberg, as well as smaller ones like Tastemade and Skillshare.
Judges should enforce rules requiring transparency about third-party funding in patent lawsuits. When ownership is unclear, it’s appropriate to insist that the real owners show up and testify—before dragging dozens of companies into court over dubious software patents.
Related documents:
- Memorandum and Order referring counsel to disciplinary bodies (Nov. 23, 2023)
- Federal Circuit Opinion affirming the order requiring Lori LaPray to appear “for testimony regarding potential fraud on the court,” as well as the District Court’s order of monetary sanction against Ms. LaPray for subsequently failing to appear
The Human Toll of ALPR Errors
This post was written by Gowri Nayar, an EFF legal intern.
Imagine driving to get your nails done with your family and all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gunpoint. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.
And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March of 2009. She had just dropped her sister off at a BART train station, when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her, at gunpoint, to exit her vehicle and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.
It turns out the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.
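The mechanics behind these misreads are simple: an ALPR reduces each camera frame to a text string via optical character recognition, then checks that string against the hot list, typically as an exact match. The following sketch is purely illustrative (the plate numbers are made up, and no vendor's actual code is shown), but it captures how a single misrecognized character is enough to produce a false hit:

```python
# Hypothetical hot list of wanted plates; real systems hold thousands of entries.
HOT_LIST = {"7XYZ123"}

def check_plate(ocr_reading: str) -> bool:
    """Return True if the OCR'd plate string matches a hot-list entry."""
    return ocr_reading in HOT_LIST

# The innocent driver's actual plate does not match the hot list...
assert check_plate("3XYZ123") is False
# ...but an OCR error confusing '3' with '7' yields a stolen-vehicle alert.
assert check_plate("7XYZ123") is True
```

Because the match is a bare string comparison, nothing in the system itself flags that a hot-list entry describes a different vehicle type or a plate from another state; that sanity check falls entirely to the officers acting on the alert.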
In both of these dangerous episodes, the motorists were Black. ALPR technology can exacerbate our already discriminatory policing system, among other reasons because too many police officers react recklessly to information provided by these readers.
Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gunpoint to lie on his stomach to be handcuffed, only to later realize that their license plate reader had mistaken an ‘H’ for an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gunpoint and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on their license plates. One study found that ALPRs misread the state on 1-in-10 plates (not counting other reading errors).
Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the “hot list” of stolen vehicles for officers to recover.
Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house and handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used for the shooting had a missing fog light.
Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.
Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.
While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.
Since 2012, EFF has been fighting the threats that ALPR technology poses to safety, privacy, and other rights through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.