FTC Rightfully Acts Against So-Called “AI Weapon Detection” Company Evolv
(Fri, 06 Dec 2024)
The Federal Trade Commission has entered into a settlement with self-styled
“weapon detection” company Evolv, to resolve the FTC’s claim that the company “knowingly
and repeatedly” engaged in “unlawful” acts of making misleading claims about its technology. Essentially, Evolv’s technology, which is deployed in schools, subways, and stadiums,
does far less than the company has been claiming.
The FTC alleged in its complaint that despite the lofty claims made by Evolv,
the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal
detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects
that are not weapons.” A typical contract for Evolv costs tens of thousands of dollars per year—five times the cost of traditional metal
detectors. One district in Kentucky spent $17 million to outfit its schools with the software.
The settlement requires Evolv to notify the many schools that use this technology to keep weapons out of classrooms that they are allowed to cancel their contracts. It also blocks
the company from making any representations about its technology’s:
ability to detect weapons
ability to ignore harmless personal items
ability to detect weapons while ignoring harmless personal items
ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags
The company is also prohibited from making statements regarding:
Weapons detection accuracy, including in comparison to the use of metal detectors
False alarm rates, including comparisons to the use of metal detectors
The speed at which visitors can be screened, as compared to the use of metal detectors
Labor costs, including comparisons to the use of metal detectors
Testing, or the results of any testing
Any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other
automated systems or tools.
If the company can’t say these things anymore…then what do they even have left to sell?
There’s a reason so many people accuse artificial intelligence of being “snake oil.” Time and again, a company takes public money to power “AI” surveillance, only for
taxpayers to learn it does
no such thing. “Just Walk Out” stores actually relied on people watching you on camera to determine what you purchased. Gunshot
detection software that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots “rarely produces evidence of a gun-related
crime.” There’s a lot of well-justified suspicion about what’s really going on within the black box of corporate secrecy in which artificial intelligence so often
operates.
Even when artificial intelligence used by the government isn’t “snake oil,” it often does more harm than good. AI systems can introduce or exacerbate harmful biases that have
massive negative impacts on people’s lives. AI systems have been implicated in falsely accusing people of welfare fraud, increasing racial bias in jail sentencing as well as in policing and crime prediction, and falsely identifying people as suspects based on facial
recognition.
Now politicians, schools, police departments, and private venues have been duped again, this time by Evolv, a company that sells “weapon detection” technology
which, it claimed, would use AI to scan people entering a stadium, school, or museum and alert authorities if it recognized the shape of a weapon on a
person.
Even before the new FTC action, there were indications that this technology was not an effective solution to weapon-based violence. From July to October, New York City rolled out
a trial of Evolv technology at 20 subway stations in an attempt to keep people from bringing weapons onto the transit system. Out of 2,749 scans there were 118 false positives, a false-positive rate of more than 4 percent. Twelve knives and no guns were
recovered.
Make no mistake, false positives are dangerous. Falsely telling officers to
expect an armed individual is a recipe for an
unarmed person to be injured or even killed.
Cities, performance venues, schools, and transit systems are understandably eager to do something about violence, but throwing money at the problem by buying unproven technology
is not the answer. It actually takes resources and funding away from more proven, systematic approaches. We applaud the FTC for standing up to the lucrative security theater
technology industry.
This Bill Could Put A Stop To Censorship By Lawsuit
(Thu, 05 Dec 2024)
For years now, deep-pocketed individuals and corporations have been turning to civil lawsuits to silence their opponents. These Strategic Lawsuits Against Public Participation, or SLAPPs, aren’t designed to win on the merits, but rather to harass journalists,
activists, and consumers into silence by suing them over their protected speech. While 34 states have laws to protect against
these abuses, there is still no protection at a federal level.
Today, Reps. Jamie Raskin (D-MD) and Kevin Kiley (R-CA) introduced the bipartisan
Free Speech Protection Act. This bill is the best chance
we’ve seen in many years to secure strong federal protection for journalists, activists, and everyday people who have been subject to harassing meritless lawsuits.
TAKE ACTION: Tell Congress we don't want a weaponized court system
The Free Speech Protection Act is a long overdue tool to protect against the use of SLAPP lawsuits as legal weapons that benefit the wealthy and powerful. This bill will help everyday
Americans of all political stripes who speak out on local and national issues.
Individuals or companies who are publicly criticized (or even simply discussed) will sometimes use SLAPP suits to intimidate their critics. Plaintiffs who file these suits don’t
need to win on the merits, and sometimes they don’t even intend to see the case through. But the stress of the lawsuit and the costly legal defense alone can silence or chill the free
speech of defendants.
State anti-SLAPP laws work. But since state laws are often not applicable in federal court, people and companies can still maneuver to manipulate the court system, filing cases in
federal court or in states with weak or nonexistent anti-SLAPP laws.
SLAPPs All Around
SLAPP lawsuits in federal court are increasingly being used to target activists and online critics. Here are a few recent examples:
Coal Ash Company Sued Environmental Activists
In 2016, activists in Uniontown, Alabama—a poor, predominantly Black town with a median per capita income of around $8,000—were sued for $30 million by a Georgia-based
company that put hazardous coal ash into Uniontown’s residential landfill. The activists were sued over statements on their website and Facebook page, which said things like
the landfill “affected our everyday life,” and, “You can’t walk outside, and you cannot breathe.” The plaintiff settled the case after the ACLU stepped in to defend the activist
group.
Shiva Ayyadurai Sued A Tech Blog That Reported On Him
In 2016, technology blog Techdirt published articles disputing Shiva Ayyadurai’s claim to
have “invented email.” Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit in federal court. Masnick, an EFF Award winner, fought back in court and
his reporting remains online, but the legal fees had a big effect on his business. With a strong federal anti-SLAPP law, more writers and publishers will be able to fight back against
bullying lawsuits without resorting to crowd-funding.
Logging Company Sued Greenpeace
In 2016, environmental non-profit Greenpeace was sued along with several individual activists by Resolute Forest Products. Resolute sued over blog post statements such as Greenpeace’s allegation that Resolute’s
logging was “bad news for the climate.” (After four years of litigation, Resolute was ordered to pay nearly $1 million
in fees to Greenpeace—because a judge found that California’s strong anti-SLAPP law should apply.)
Congressman Sued His Twitter Critics And Media Outlets
In 2019, anonymous Twitter accounts were sued by Rep. Devin
Nunes, then a congressman representing parts of Central California. Nunes used lawsuits to attempt to unmask and punish two Twitter users who used the handles
@DevinNunesMom and @DevinCow to criticize his actions as a politician. Nunes filed these actions in a state court in Henrico County, Virginia. The location had little connection to
the case, but Virginia’s weak anti-SLAPP law has enticed many plaintiffs there.
Over the next few years, Nunes went on to sue many other journalists who published critical articles about him, using state and federal courts to sue CNN, The Washington Post, his hometown paper The Fresno Bee, MSNBC, a group of his own
constituents, and others. Nearly all of these lawsuits were dropped or dismissed by courts. If a federal anti-SLAPP law were in place, more defendants would have a chance of
dismissing such lawsuits early and recouping their legal fees.
Fast Relief From SLAPPs
The Free Speech Protection Act gives defendants in SLAPP suits a powerful tool to defend themselves.
The bill would allow a defendant sued for speaking out on a matter of public concern to file a special motion to dismiss, which the court must generally decide on within 90
days. If the court grants the speaker-defendant’s motion, the claims are dismissed. In many situations, defendants who prevail on an anti-SLAPP motion will be entitled to have the
plaintiff reimburse them for their legal fees.
TAKE ACTION: Tell Congress to pass the Free Speech Protection Act
EFF has been defending the rights of online speakers for more than 30 years. A strong federal anti-SLAPP law will bring us
closer to the vision of an internet that allows anyone to speak out and organize for change, especially when they speak against those with more power and resources. Anti-SLAPP laws
enhance the rights of all. We urge Congress to pass the Free Speech Protection Act.
Let's Answer the Question: "Why is Printer Ink So Expensive?"
(Thu, 05 Dec 2024)
Did you know that most printer ink isn’t even expensive to make? Why
then is it so expensive to refill the ink on your printer?
The answer is actually pretty simple: monopolies, weird laws, and companies exploiting their users for profit. If this sounds mildly infuriating and makes
you want to learn ways to fight back, then head over to our new site, Digital
Rights Bytes! We’ve even created a short video to explain what the heck is going on here.
We’re answering the common tech questions that may be bugging you. Whether you’re hoping to learn something new or want to share resources with your family
and friends, Digital Rights Bytes can be your one-stop shop to learn more about the technology you use every day.
Digital Rights Bytes also has answers to other common questions about device repair, ownership of your digital media, and more. If you’ve got additional
questions you’d like us to tackle in the future, let us know on your favorite social platform using
the hashtag #DigitalRightsBytes!
Location Tracking Tools Endanger Abortion Access. Lawmakers Must Act Now.
(Wed, 04 Dec 2024)
EFF wrote recently about Locate
X, a deeply troubling location tracking tool that allows users to see the precise whereabouts of individuals based on the locations of their smartphones.
Developed and sold by the data surveillance company Babel Street, Locate X collects smartphone location data from a variety of sources and collates that data into an easy-to-use tool
to track devices. The tool features a navigable map with red dots, each representing an individual device. Users can then follow the location of specific devices as they move about
the map.
Locate X and other similar
services are able to do this by taking advantage of our largely unregulated location data market.
Unfettered location tracking puts us all at risk. Law enforcement agencies can purchase their way around warrant requirements
and bad actors can pay for services that
make it easier to engage in stalking and harassment. Location tracking tools especially threaten groups vulnerable to targeting, such as immigrants, the LGBTQ+ community, and even U.S. intelligence personnel abroad. Crucially, in a
post-Dobbs United States, location surveillance also poses a serious danger to abortion-seekers across the country.
EFF has warned before about how the
location data market threatens reproductive rights. The recent reports on Locate X illustrate even more starkly how the collection and sale of location data endangers patients in
states with abortion bans and restrictions.
In late October, 404 Media
reported that privacy advocates from Atlas Privacy, a data removal company, were able to get their hands on Locate
X and use it to track an individual device’s location data as it traveled across state lines to visit an abortion clinic. Although the tool was designed for law enforcement, the
advocates gained access by simply asserting that they planned to work with law enforcement in the future. They were then able to use the tool to track an individual device as it
traveled from an apparent residence in Alabama, where there is a complete abortion ban, to a reproductive health clinic in Florida, where abortion is banned after 6 weeks of
pregnancy.
Following this report, we published a
guide to help people shield themselves from tracking tools like Locate X. While we urge everyone to take appropriate technical precautions for their situation, it’s
far past time to address the issue at its source. The onus shouldn’t be on individuals to protect themselves from such invasive surveillance. Tools like Locate X only exist because
U.S. lawmakers have failed to enact legislation that would protect our location data from being bought and sold to the highest bidder.
Thankfully, there’s still time to reshape the system, and there are a number of laws legislators could pass today to help protect us from mass location surveillance. Remember:
when our location information is for sale, so is our safety.
Blame Data Brokers and the Online Advertising Industry
There are a vast array of apps available for your smartphone that request access to your location. Sharing this information, however, may allow your location data to be
harvested and sold to shadowy companies known as data brokers. Apps request access to device location to provide various features, but once access has been granted, apps can mishandle
that information and are free to share and sell your whereabouts to third parties, including data brokers. These companies collect data showing the precise movements of hundreds of
millions of people without their knowledge or meaningful consent. They then make this data available to anyone willing to pay, whether that’s a private company like Babel Street (and
anyone they in turn sell to) or government agencies, such as law enforcement, the military, or
ICE.
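To make this concrete, below is a minimal sketch of the kind of record this market trades in. The field names are our own invention for illustration, not any real broker's schema, but public reporting describes records that pair a phone's advertising identifier with timestamped latitude/longitude fixes:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical shape of one brokered location fix. Any single record
    # looks innocuous; billions of them, keyed by a stable advertising
    # ID, become a tracking tool.
    @dataclass
    class LocationPing:
        ad_id: str         # mobile advertising identifier (resettable, but rarely reset)
        lat: float         # latitude of the device when observed
        lon: float         # longitude of the device when observed
        seen_at: datetime  # when an in-app SDK recorded the fix
        source_app: str    # the app whose embedded SDK harvested it

    ping = LocationPing(
        ad_id="38400000-8cf0-11bd-b23e-10b96e40000d",  # Android documentation's example ID
        lat=33.5186,
        lon=-86.8104,
        seen_at=datetime(2024, 10, 1, 14, 30, tzinfo=timezone.utc),
        source_app="com.example.flashlight",  # a made-up app name
    )

Group a few thousand of these by ad_id and sort by seen_at, and you have a movement history for a person.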
This puts everyone at risk. Our location data reveals far more than most people realize, including where we live and work, who we spend time with, where we worship, whether
we’ve attended protests or political gatherings, and when and where we seek medical care—including reproductive healthcare.
Without massive troves of commercially available location data, invasive tools like Locate X would not exist.
For years, EFF has warned about the risk of law enforcement or
bad actors using commercially available location data to track and punish abortion seekers. Multiple data brokers have specifically targeted and sold location information tied to
reproductive healthcare clinics. The data broker SafeGraph, for example, classified Planned Parenthood as a “brand” that could be tracked,
allowing investigators at Motherboard to purchase data for over 600 Planned Parenthood facilities across the U.S.
Meanwhile, the data broker Near sold the location data of abortion-seekers
to anti-abortion groups, enabling them to send targeted anti-abortion ads to people who visited clinics. And location data firm Placer.ai even once offered heat maps showing approximately where visitors to
Planned Parenthood clinics lived. Sale to private actors is disturbing given that several states have introduced and passed
abortion “bounty hunter” laws, which allow private citizens to enforce abortion restrictions by suing abortion-seekers for cash.
Government officials in abortion-restrictive states are also targeting location information (and
other personal data) about people who visit abortion clinics. In Idaho, for example, law enforcement
used cell phone data to charge a mother and son with kidnapping for aiding an abortion-seeker who traveled across state lines to receive care. While police can obtain this data by
gathering evidence and requesting a warrant based on probable cause, the data broker industry allows them to bypass legal requirements and buy this information en masse, regardless of
whether there’s evidence of a crime.
Lawmakers Can Fix This
So far, Congress and many states have failed to enact legislation that would meaningfully rein in the data broker industry and protect our location information. Locate X is
simply the end result of such an unregulated data ecosystem. But it doesn’t have to be this way. There are a number of laws that Congress and state legislators could pass right now
that would help protect us from location tracking tools.
1. Limit What Corporations Can Do With Our Data
A key place to start? Stronger consumer privacy protections. EFF has consistently pushed for legislation that would limit
the ability of companies to harvest and monetize our data. If we enforce strict rules on how location data is collected, shared, and sold, we can stop it from ending up in the hands
of private surveillance companies and law enforcement without our consent.
We urge legislators to consider comprehensive, across-the-board data privacy
laws. Companies should be required to minimize the collection and processing of location data to only what is strictly necessary to offer the service the user
requested (see, for example, the recently-passed Maryland Online Data Privacy Act).
Companies should also be prohibited from processing a person’s data, except with their informed, voluntary, specific, opt-in consent.
We also support reproductive health-specific data privacy laws, like Rep. Sara Jacobs’ proposed “My Body My Data” Act. Laws like this would create important protections for a variety of
reproductive health data, even beyond location data. Abortion-specific data privacy laws can provide some protection against the specific problem posed by Locate X. But to fully
protect against location tracking tools, we must legally limit processing of all location data and not just data at sensitive locations, such as
reproductive healthcare clinics.
While a limited law might provide some help, it would not offer foolproof protection. Imagine this scenario: someone travels from Alabama to New York for abortion care. With a
data privacy law that protects only sensitive, reproductive health locations, Alabama police could still track that person’s device on the journey to New York. Upon reaching the
clinic in New York, their device would disappear into a sensitive location blackout bubble for a couple of hours, then reappear outside of the bubble where police could resume
tracking as the person heads home. In this situation, it would be easy to infer where the person was during those missing two hours, giving Alabama police the lead they need.
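That inference is trivial to automate. Here is a toy sketch, using entirely invented data, of how a redacted sensitive location still announces itself as a conspicuous gap between two nearby pings:

    from datetime import datetime, timedelta

    # One device's trace; pings inside a protected "sensitive location"
    # bubble are suppressed, as a location-specific privacy law might require.
    pings = [
        (datetime(2024, 10, 1, 9, 0), "residence, Alabama"),
        (datetime(2024, 10, 1, 18, 0), "highway, Georgia"),
        (datetime(2024, 10, 2, 9, 55), "street near clinic, New York"),
        # 10:00 to 12:05 -- no pings: device inside the blackout bubble
        (datetime(2024, 10, 2, 12, 5), "street near clinic, New York"),
        (datetime(2024, 10, 2, 20, 0), "highway, Pennsylvania"),
    ]

    # Flag long silences bracketed by pings at the same spot: the redaction
    # itself reveals a roughly two-hour visit to whatever sits there.
    for (t1, place1), (t2, place2) in zip(pings, pings[1:]):
        gap = t2 - t1
        if gap >= timedelta(hours=1) and place1 == place2:
            print(f"{gap} unaccounted for near '{place1}'")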
The best solution is to minimize all location data, no exceptions.
2. Limit How Law Enforcement Can Get Our Data
Congress and state legislatures should also pass laws limiting law enforcement’s ability to access our location data without proper legal safeguards.
Much of our mobile data, like our location data, is information law enforcement would typically need a court order to access. But thanks to the data broker industry, law
enforcement can skip the courts entirely and simply head to the commercial market. The U.S. government has turned this loophole into a way to gather personal data on individuals without a search
warrant.
Lawmakers must close this loophole—especially if they’re serious about protecting abortion-seekers from hostile law enforcement in abortion-restrictive states. A key way to do
this is for Congress to pass the Fourth Amendment Is
Not For Sale Act, which was originally introduced by Senator Ron Wyden in 2021 and took the important and historic step of passing the U.S. House of Representatives earlier this year.
Another crucial step is to ban law enforcement from sending “geofence warrants” to corporate holders of location data. Unlike
traditional warrants, a geofence warrant doesn’t start with a particular suspect or even a device or account; instead police request data on every device in a given geographic area
during a designated time period, regardless of whether the device owner has any connection to the crime under investigation. This could include, of course, an abortion
clinic.
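In database terms, a geofence warrant is a dragnet query. This simplified sketch (invented schema, for illustration only) shows why: the filter is a place and a time window, never a suspect:

    def geofence_sweep(records, lat_min, lat_max, lon_min, lon_max, start, end):
        """Return every device ID seen inside the box during the window.

        Each record is assumed to be a dict with 'device_id', 'lat',
        'lon', and 'seen_at' keys -- a stand-in for a provider's real
        location store.
        """
        return {
            r["device_id"]
            for r in records
            if lat_min <= r["lat"] <= lat_max
            and lon_min <= r["lon"] <= lon_max
            and start <= r["seen_at"] <= end
        }

    # Everyone inside the box is swept in -- patients, staff, and
    # passers-by alike -- with no connection to any crime required.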
Notably, geofence warrants are very popular with law enforcement. Between 2018 and 2020, Google alone received more than 5,700 demands of this type from states that now have anti-abortion and
anti-LGBTQ legislation on the books.
Several federal and state courts have already found individual geofence warrants to be
unconstitutional and some have even ruled they are “categorically prohibited by the Fourth
Amendment.” But instead of waiting for remaining courts to catch up, lawmakers should take action now, pass legislation banning geofence
warrants, and protect all of us–abortion-seekers included–from this form of dragnet surveillance.
3. Make Your State a Data Sanctuary
In the wake of the Dobbs decision, many states stepped up to serve as health care sanctuaries for people seeking abortion care that they
could not access in their home states. To truly be a safe refuge, these states must also be data sanctuaries. A state that has data about people who
sought abortion care must protect that data, and not disclose it to adversaries who would use it to punish them for seeking that healthcare. California has already passed laws to this effect, and more
states should follow suit.
What You Can Do Right Now
Even before lawmakers act, there are steps you can take to better shield your location data from tools like Locate X. As noted above, we published a Locate X-specific guide several weeks ago.
There are also additional tips on EFF’s Surveillance
Self-Defense site, as well as many other resources available to provide more
guidance in protecting your digital privacy. Many general privacy practices also offer strong protection against location tracking.
But don’t stop there: we urge you to make your voice heard and contact your representatives. While these precautions offer immediate protection, only stronger laws will ensure
comprehensive location privacy in the long run.
Top Ten EFF Digital Security Resources for People Concerned About the Incoming Trump Administration
(Wed, 04 Dec 2024)
In the wake of the 2024 election in the United States, many people are concerned about tightening up their digital privacy and security practices. As always, we recommend that
people start making their security plan by understanding their risks. For most people in the
U.S., the threats that they face and the methods by which they are likely to be surveilled or harassed have not changed, but the consequences of digital privacy or security failures
may become much more serious, especially for vulnerable populations such as journalists, activists, LGBTQ+ people, people seeking or providing abortion-related care, Black or
Indigenous people, and undocumented immigrants.
EFF has decades of experience in providing digital privacy and security resources, particularly for vulnerable people. We’ve written a lot of resources over the years and here
are the top ten that we think are most useful right now:
1. Surveillance Self-Defense
https://ssd.eff.org/
Our Surveillance Self-Defense guides are a great place to start your journey of securing yourself against digital threats. We know that it can be a bit overwhelming, so we
recommend starting with our guide on making a security plan so you can familiarize yourself with
the basics and decide on your specific needs. Or, if you’re planning to head out to a protest soon and want to know the most important ways to protect yourself, check out our guide
to Attending a Protest. Many people in the groups most likely to be targeted in the upcoming
months will need advice tailored to their specific threat models, and for that we recommend the Security Scenarios module as a quick way to find the right information for your particular
situation.
2. Street-Level Surveillance
https://sls.eff.org/
If you are creating your security plan for the first time, it’s helpful to know which technologies might realistically be used to spy on you. If you’re going to be out on the
streets protesting or even just existing in public, it’s important to identify which threats to take seriously. Our Street-Level Surveillance team has spent years studying the
technologies that law enforcement uses and has made this handy website where you can find information about technologies including drones, face recognition, license plate readers,
stingrays, and more.
3. Atlas Of Surveillance
https://atlasofsurveillance.org/
Once you have learned about the different types of surveillance technologies police can acquire from our Street-Level Surveillance guides, you might want to know which
technologies your local police department has already bought. You can find that in our Atlas of Surveillance, a crowd-sourced map of police surveillance technologies in the United
States.
4. Doxxing: Tips To Protect Yourself Online & How to Minimize Harm
https://www.eff.org/deeplinks/2020/12/doxxing-tips-protect-yourself-online-how-minimize-harm
Surveillance by governments and law enforcement is far from the only kind of threat that people face online. We expect to see an increase in doxxing and harassment of vulnerable
populations by vigilantes, emboldened by the incoming administration’s threatened policies. This guide is our thinking around the precautions you may want to take if you are
likely to be doxxed and how to minimize the harm if you’ve been doxxed already.
5. Using Your Phone in Times of Crisis
https://www.eff.org/deeplinks/2022/03/using-your-phone-times-crisis
Using your phone in general can be a cause for anxiety for many people. We have a short guide on what considerations you should make when you are using your phone in times of
crisis. This guide is specifically written for people in war zones, but may also be useful more generally.
6. Surveillance Self-Defense for Campus Protests
https://www.eff.org/deeplinks/2024/06/surveillance-defense-campus-protests
One prediction we can safely make for 2025 is that campus protests will continue to be important. This blog post is our latest thinking about how to put together your security
plan before you attend a protest on campus.
7. Security Education Companion
https://www.securityeducationcompanion.org/
For those who are already comfortable with Surveillance Self-Defense, you may be getting questions from your family, friends, or community about what to do now. You may even
consider giving a digital security training session to people in your community, and for that you will need guidance and training materials. The Security Education Companion has
everything you need to get started putting together a training plan for your community, from recommended lesson plans and materials to guides on effective teaching.
8. Police Location Tracking
https://www.eff.org/deeplinks/2024/11/creators-police-location-tracking-tool-arent-vetting-buyers-heres-how-protect
One police surveillance technology we are especially concerned about is location tracking services. These are data brokers that get your phone's location, usually through the
same invasive ad networks that are baked into almost every app, and sell that information to law enforcement. This can include historical maps of where a specific device has been, or
a list of all the phones that were at a specific location, such as a protest or abortion clinic. This blog post goes into more detail on the problem and provides a guide on how to
protect yourself and keep your location private.
9. Should You Really Delete Your Period Tracking App?
https://www.eff.org/deeplinks/2022/06/should-you-really-delete-your-period-tracking-app
As soon as the Supreme Court overturned Roe v. Wade, one of the most popular bits of advice going around the internet was to “delete your period
tracking app.” Deleting your period tracking app may feel like an effective countermeasure in a world where seeking abortion care is increasingly risky and criminalized, but it’s not
advice that is grounded in the reality of the ways in which governments and law enforcement currently gather evidence against people who are prosecuted for their pregnancy outcomes.
This blog post provides some more effective ways of protecting your privacy and sensitive information.
10. Why We Can’t Just Tell You Which Messenger App to Use
https://www.eff.org/deeplinks/2018/03/why-we-cant-give-you-recommendation
People are always asking us to give them a recommendation for the best end-to-end encrypted messaging app. Unfortunately, this is asking for a simple answer to an extremely
nuanced question. While the short answer is “probably Signal most of the time,” the long answer goes into why that is not always the case. Since we wrote this in 2018, some companies
have come and gone, but our thinking on this topic hasn’t changed much.
Bonus external guide
https://digitaldefensefund.org/learn
Our friends at the Digital Defense Fund have put together an excellent collection of guides aimed at particularly vulnerable people who are thinking about digital security for
the first time. They have a comprehensive collection of links to other external guides as well.
***
EFF is committed to keeping our privacy and security advice accurate and up-to-date, reflecting the needs of a variety of vulnerable populations. We hope these resources will
help you keep yourself and your community safe in dangerous times.
Speaking Freely: Aji Fama Jobe
(Tue, 03 Dec 2024)
*This interview has been edited for length and clarity.
Aji Fama Jobe is a digital creator, IT consultant, blogger, and tech community leader from The Gambia. She helps run Women TechMakers Banjul, an organization that provides
visibility, mentorship, and resources to women and girls in tech. She also serves as an Information Technology Assistant with the World Bank Group where she focuses on resolving IT
issues and enhancing digital infrastructure. Aji Fama is a dedicated advocate working to leverage technology to enhance the lives and opportunities of women and girls in Gambia and
across Africa.
Greene: Why don’t you start off by introducing yourself?
My name is Aji Fama Jobe. I’m from Gambia and I run an organization called Women TechMakers Banjul that provides resources to women and girls in Gambia, particularly in the
Greater Banjul area. I also work with other organizations that focus on STEM and digital literacy and aim to impact more regions and more people in the world. Gambia is made up of six
different regions and we have host organizations in each region. So we go to train young people, especially women, in those communities on digital literacy. And that’s what I’ve been
doing for the past four or five years.
Greene: So this series focuses on freedom of expression. What does freedom of expression mean to you personally?
For me it means being able to express myself without being judged. Because most of the time—and especially on the internet because of a lot of cyber bullying—I tend to think a
lot before posting something. It’s all about, what will other people think? Will there be backlash? And I just want to speak freely. So for me it means to speak freely without being
judged.
Greene: Do you feel like free speech means different things for women in the Gambia than for men? And how do you see this play out in the work that you do?
In the Gambia we have freedom of expression, the laws are there, but the culture is the opposite of the laws. Society still frowns on women who speak out, not just in the
workspace but even in homes. Sometimes men say a woman shouldn’t speak loud or there’s a certain way women should express. It’s the culture itself that makes women not speak up in
certain situations. In our culture it’s widely accepted that you let the man or the head of the family—who’s normally a man, of course—speak. I feel like freedom of speech is really
important when it comes to the work we do. Because women should be able to speak freely. And when you speak freely it gives you that confidence that you can do something. So it’s a
larger issue. What our organization does on free speech is address the unconscious bias in the tech space that impacts working women. I work as an IT consultant and sometimes when
we’re trying to do something technical people always assume IT specialists are men. So sometimes we just want to speak up and say, “It’s IT woman, not IT guy.”
Greene: We could say that maybe socially we need to figure this out, but now let me ask you this. Do you think the government has a role in regulating online speech?
Those in charge of policy enforcement don’t understand how to navigate these online pieces. It’s not just about putting the policies in place. They need to train people how to
navigate this thing or how to update these policies in specific situations. It’s not just about what the culture says. The policy is the policy and people should follow the rules, not
just as civilians but also as policy enforcers and law enforcement. They need to follow the rules, too.
Greene: What about the big companies that run these platforms? What’s their role in regulating online speech?
With cyber-bullying I feel like the big companies need to play a bigger role in trying to bring down content sometimes. Take Facebook for example. They don’t have many people
that work in Africa and understand Africa with its complexities and its different languages. For instance, in the Gambia we have 2.4 million people but six or seven languages. On the
internet people use local languages to do certain things. So it’s hard to moderate on the platform’s end, but also they need to do more work.
Greene: So six local languages in the Gambia? Do you feel there’s any platform that has the capability to moderate that?
In the Gambia? No. We have some civil society that tries to report content, but it’s just civil society and most of them do it on a voluntary basis, so it’s not that strong. The
only thing you can do is report it to Facebook. But Facebook has bigger countries and bigger issues to deal with, and you end up waiting in a lineup of those issues and then the
damage has already been done.
Greene: Okay, let’s shift gears. Do you consider the current government of the Gambia to be democratic?
I think it is pretty democratic because you can speak freely after 2016, unlike under our last
president. I was born in an era when people were not able to speak up. So I can only compare the last regime and the current one. I think now it’s more democratic because people are able to speak out online. I can remember back
before the elections of 2016 that if you said certain things online you had to
move out of the country. Before 2016 people who were abroad would not come back to Gambia for fear of facing reprisal for content they had posted online. Since 2016 we have seen
people we hadn’t seen for like ten or fifteen years. They were finally able to come back.
Greene: So you lived in the country under a non-democratic regime with the prior administration. Do you have any personal stories you could tell about life before 2016 and feeling
like you were censored? Or having to go outside of the country to write something?
Technically it was a democracy but the fact was you couldn’t speak freely. What you said could get you in trouble—I don’t consider that a democracy.
During the last regime I was in high school. One thing I realized was that there were certain political things teachers wouldn’t discuss because they had to protect themselves.
At some point I realized things changed because before 2016 we didn’t say the president’s name. We would give him nicknames, but the moment the guy left power we felt free to say his
name directly. I experienced censorship from not being able to say his name or talk about him. I realized there was so much going on when the Truth, Reconciliation and Reparations Commission (TRRC) happened and people
finally had the confidence to go on TV and speak about their stories.
As a young person I learned that what you see is not everything that’s happening. There were a lot of things that were happening but we couldn’t see because the media was
restricted. The media couldn’t publish certain things. When he left and through the TRRC we learned about what happened. A lot of people lost their lives. Some had to flee. Some people
lost their mom or dad or some got raped. I think that opened my world. Even though I’m not politically inclined or in the political space, what happened there impacted me. Because we
had a political moment where the president didn’t accept the elections, and a lot of people fled and went to Senegal. I stayed like three or four months and the whole country was on
lockdown. So that was my experience of what happens when things don’t go as planned when it comes to the electoral process. That was my personal experience.
Greene: Was there news media during that time? Was it all government-controlled or was there any independent news media?
We had some independent news media, but those were from Gambians outside of the country. The media that was inside the country couldn’t publish anything against the government.
If you wanted to know what was really happening, you had to go online. At some point, WhatsApp was blocked so we had to move to Telegram and other social media. At some point, because my dad was in Iraq, I had to download a VPN so I could talk to him and tell him what was happening in the country, because my mom and I were there. That’s why
when people censor the internet I’m really keen on that aspect, because I’ve experienced it.
Greene: What made you start doing the work you’re doing now?
First, when I started doing computer science—I have a computer science background—there was no one there to tell me what to do or how to do it. I had to navigate things for
myself or look for people to guide me. I just thought, we don’t have to repeat the same thing for other people. That’s why we started Women TechMakers. We try to guide people and
train them. We want employers to focus on skills instead of gender. So we get to train people, we have a lot of book plans and online resources that we share with people. If you want
to go into a certain field we try to guide you and send you resources. That’s one of the things we do. Just for people to feel confident in their skills. And everyday people say to
me, “Because of this program I was able to get this thing I wanted,” like a job or an event. And that keeps me going. Women get to feel confident in their skills and in the places
they work, too. Companies are always looking for diversity and inclusion. Like, “oh I have two female developers.” At the end of the day you can say you have two developers and
they’re very good developers. And yeah, they’re women. It’s not like they’re hired because they’re women, it’s because they’re skilled. That’s why I do what I do.
Greene: Is there anything else you wanted to say about freedom of speech or about preserving online open spaces?
I work with a lot of technical people who think freedom of speech is not their issue. But what I keep saying to people is that you think it’s not your issue until you experience
it. But freedom of speech and digital rights are everybody’s issues. Because at the end of the day if you don’t have that freedom to speak freely online or if you are not protected
online we are all vulnerable. It should be everybody’s responsibility. It should be a collective thing, not just government making policies. But also people need to be aware of what
they’re posting online. The words you put out there can make or break someone, so it’s everybody’s business. That’s how I see digital rights and freedom of expression. As a collective
responsibility.
Greene: Okay, our last question that we ask everybody. Who is your free speech hero?
My mom’s elder sister. She passed away in 2015, but her name is Mariama Jaw and she was in the political space even during the time when people were not able to
speak. She was my hero because I went to political rallies with her and she would say what people were not willing to say. Not just in political spaces, but in general conversation,
too. She’s somebody who would tell you the truth no matter what would happen, whether her life was in danger or not. I got so much inspiration from her because a lot of women don’t go
into politics or do certain things and they just want to get a husband, but she went against all odds and she was a politician, a mother and sister to a lot of people, to a lot of
women in her community.
Today’s Double Feature: Privacy and Free Speech
(Tue, 03 Dec 2024)
It’s Power Up Your Donation Week! Right now, your contribution to the Electronic Frontier Foundation will go twice as far to protect digital privacy, security, and
free speech rights for everyone. Will you donate today to get a free 2X match?
Thanks to a fund made by a group of dedicated supporters, your donation online gets an automatic match up to $307,200 through December 10! This means every dollar you
give equals two dollars to fight surveillance,
oppose censorship, defend encryption, promote open
access to information, and much more. EFF makes every cent count.
Lights, Laptops, Action!
Who has time to decode tech policy, understand the law, then figure out how to change things for the users? EFF does. The purpose of every attorney, activist, and technologist at EFF
is to watch your back and make technology better. But you are the superstar who makes it possible with your support.
[Image: 'Fix Copyright' member shirt inspired by Steamboat Willie entering the public domain.]
With the help of people like you, EFF has been able to help unravel legal and ethical questions surrounding the rise of AI; keep policymakers on the road to net neutrality; encourage the Fifth Circuit Court of Appeals to rule that
location-based geofence warrants are
unconstitutional; and explain why banning TikTok
and passing laws like the Kids Online Safety Act (KOSA) will not achieve
internet safety.
The world struggles to get tech right, but EFF’s experts advocate for you every day of the year. Take action by renewing your EFF membership! You can set the stage for civil
liberties and human rights online for everyone. Please give today and let your donation go twice as far for digital rights!
Already an EFF Member?
Strengthen the community when you help us spread the word about Power Up Your Donation Week! Here’s some sample language that you can share:
Donate to EFF this week for an instant match! Double your impact on digital privacy, security, and free speech rights for everyone. https://eff.org/power-up
Bluesky | Email | Facebook | LinkedIn | X (More at eff.org/social)
Each of us has the power to help in the movement for internet freedom. Our future depends on forging a web where we can have private conversations and explore the world online with
confidence, so I thank you for your moral support and hope to have you on EFF's side as a member, too.
________________________
EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible
as allowed by law.
Amazon and Google Must Keep Their Promises on Project Nimbus
(Mon, 02 Dec 2024)
When a company makes a promise, the public should be able to rely on it. Today, nearly every person in the U.S. is a customer of either Amazon or Google—and many of us are
customers of both technology giants. Both of these companies have made public promises that they will ensure their
technologies are not being used to facilitate human rights violations. These promises are not just corporate platitudes; they’re commitments to every customer and to society at
large.
It’s a reasonable thing to ask if these promises are being kept. And it’s especially important since Amazon and Google have been increasingly implicated by reports that their technologies,
specifically their joint cloud computing initiative called Project Nimbus, are being used to facilitate mass surveillance and human rights violations of Palestinians in the Occupied
Territories of the West Bank, East Jerusalem, and Gaza. This was the basis of our public call in August 2024 for the
companies to come clean about their involvement.
But we didn’t just make a public call. We sent letters directly to the Global
Head of Public Policy at Amazon and to Google’s Global Head of
Human Rights in late September. We detailed what these companies have promised and asked them to tell us by November 1, 2024 how they were complying. We hoped that
they could clear up the confusion, or at least explain where we, or the reporting we were relying on, were wrong.
But instead, they failed to respond. This is unfortunate, since it leads us to question how serious they were in their promises. And it should lead you to question that
too.
Project Nimbus: Technology at the Expense of Human Rights
Project Nimbus provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target
civilians under pervasive surveillance in the Occupied Palestinian Territories. This is more than a technical collaboration—it’s a human rights crisis in the making as evidenced by
data-driven targeting programs like Project Lavender and Where’s Daddy, which have
reportedly led to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers, and ordinary families.
Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation.
The consequences are serious. Vulnerable communities in Gaza and the West Bank suffer violations of their human rights, including their rights to privacy, freedom of movement,
and free association, all of which can be fostered and furthered by pervasive surveillance. These documented violations underscore the ethical responsibility of Amazon and Google,
whose technologies are at the heart of this surveillance scheme.
Amazon and Google’s Promises
Amazon and Google have made public commitments to align with the UN Guiding Principles
on Business and Human Rights and their own
AI ethics frameworks. These
frameworks are supposed to ensure that their technologies do not contribute to harm. But their silence on these pressing concerns speaks volumes, undermining trust in their supposed
dedication to these principles and casting doubt on their sincerity.
Unanswered Letters, Unanswered Accountability
When we sent letters to Amazon and Google, it was with direct, actionable questions about their
involvement in Project Nimbus. We asked for transparency about their contracts, clients, and risk assessments. We called for evidence that due diligence had been conducted and
demanded explanations of the steps taken to prevent their technologies from facilitating abuse.
Our core demands were straightforward and tied directly to the companies’ commitments:
Disclose the scope of their involvement in Project Nimbus.
Provide evidence of risk assessments tied to this project.
Explain how they are addressing credible reports of misuse.
Despite these reasonable and urgent requests, which are tied directly to the companies’ stated legal and ethical commitments, both companies have remained silent, and their
silence isn’t just an insufficient response—it’s an alarming one.
Why Transparency Cannot Wait
Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation. For both of these companies, it’s an obligation they have promised to the rest
of us. For global companies that wield immense power, silence in the face of abuse is inexcusable.
The Fight for Accountability
EFF is making these letters public to highlight the human rights obligations Amazon and Google have undertaken and to raise reasonable questions they should answer in light of
public reports about the misuse of their technologies in the Occupied Palestinian Territories. We aren’t the first ones to raise concerns, but having raised these questions publicly,
and having now given the companies a chance to clarify, we are increasingly concerned about their complicity.
Google and Amazon have promised all of us—their customers and noncustomers alike—that they would take steps to ensure that their technologies support a future where technology
empowers rather than oppresses. It’s increasingly clear that those promises are being ignored, if not entirely broken. EFF will continue to push for transparency and
accountability.
One Down, Many to Go with Pre-Installed Malware on Android
(Wed, 27 Nov 2024)
Last year, we investigated a Dragon Touch children’s tablet (KidzPad Y88X 10) and confirmed that it was linked to a string of fully compromised Android TV Boxes that
also had multiple reports
of malware, adware, and a sketchy firmware update channel. Since then, Google has taken the (now former) tablet
distributor off of its list of Play Protect certified
phones and tablets. The burden of catching this type of threat should not be placed on the consumer. Due diligence by manufacturers, distributors, and resellers is
the only way to tackle this issue of pre-installed compromised devices making their way into the hands of unknowing customers. But in order to mitigate this issue, regulation and
transparency need to be a part of the strategy.
As of October, Dragon Touch is no longer selling any tablets on its website. However, there is lingering inventory still out there in places like Amazon and Newegg. Storefronts sometimes exist only on reseller sites for better customer reach, but considering Dragon Touch also
wiped its blog of any mention of its tablets, we suspect a
little more than a strategy shift happened here.
We wrote a guide to
help parents set up their kid’s Android devices safely, but it’s difficult to choose which device to purchase to begin with. Advising people to simply buy a more expensive iPad or
Amazon Fire Tablet doesn’t change the fact that people are going to purchase low-budget devices. Lower-budget devices could be just as reputable if the ecosystem provided a path for better
accountability.
Who is Responsible?
There are some tools in development for consumer education, like the newly developed, voluntary Cyber Trust Mark by the FCC. This
label would aim to inform consumers of a device’s capabilities and guarantee that minimum security standards were met for an IoT device. However, expecting the consumer to shoulder the burden of checking for
pre-installed malware is absolutely ridiculous. Responsibility should fall to regulators, manufacturers, distributors, and resellers to check for this kind of threat.
More often than not, you can search for low-budget Android devices on retailers like Amazon or Newegg and find storefront pages with little transparency about who runs the store
and whether or not the products come from a reputable distributor. This is true for more than just Android devices, but considering how many products are created for and with the Android
ecosystem, working on this problem could mean better security for thousands of products.
Yes, it is difficult to track hundreds to thousands of distributors and all of their products. It is hard to keep up with rapidly developing threats in the supply chain. You
can’t possibly know of every threat out there.
With all due respect to giant resellers, especially the multi-billion-dollar ones: tough luck. This is what you inherit when you want to “sell everything.” You also
inherit the responsibility and risk of each market you encroach on or supplant.
Possible Remedy: Firmware Transparency
Thankfully, there is hope on the horizon and tools exist to monitor compromised firmware.
Last year, Google presented Android Binary
Transparency in response to pre-installed malware. This would help track compromised firmware using two components (sketched in code after this list):
An append-only log of firmware information that is immutable, globally observable, consistent, and auditable, with these properties assured cryptographically.
A network of participants that invest in witnesses, log health, and standardization.
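To illustrate the first component, here is a minimal sketch of an append-only log of firmware hashes. The hash chain below is our simplification, not Google's design: Android Binary Transparency, like Certificate Transparency, uses Merkle trees, which additionally support efficient inclusion and consistency proofs.

    import hashlib

    # A toy append-only firmware log. Publishing each new head commits
    # the log operator to the entire history up to that point.
    class FirmwareLog:
        def __init__(self):
            self.entries = []  # list of (firmware_hash, chain_head) pairs

        def append(self, firmware_image: bytes) -> str:
            fw_hash = hashlib.sha256(firmware_image).hexdigest()
            prev_head = self.entries[-1][1] if self.entries else ""
            head = hashlib.sha256((prev_head + fw_hash).encode()).hexdigest()
            self.entries.append((fw_hash, head))
            return head  # witnesses record this value

        def contains(self, firmware_image: bytes) -> bool:
            fw_hash = hashlib.sha256(firmware_image).hexdigest()
            return any(entry_hash == fw_hash for entry_hash, _ in self.entries)

    # A witness who saved yesterday's head can detect tampering: rewriting
    # any old entry changes every later head, so quietly substituting a
    # malicious firmware build becomes globally observable.

The second component, the network of witnesses, is what turns that observability into accountability: no single party can rewrite the log's history without someone noticing.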
Google is not the first to think of this
concept; it largely draws lessons from the success of Certificate
Transparency. Yet better support for Android images directly from the Android ecosystem would definitely help. This would provide an ecosystem of transparency in which
manufacturers and developers that build on the Android Open Source Project (AOSP) are just as respected as higher-priced brands.
We love open source here at EFF and would like to continue to see innovation and availability in devices that aren’t necessarily created by bigger, more expensive names. But
there needs to be an accountable ecosystem for these products so that pre-installed malware can be detected more easily and doesn’t land in consumers’ hands. Right now you
can verify your Pixel device if you have a little technical skill. We
would like verification to be done by regulators and/or distributors instead of asking consumers to crack out their command lines to verify themselves.
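The command-line ritual amounts to something like the following sketch: hash the downloaded image and compare it against a published value. The file name and digest here are placeholders, not real artifacts.

    import hashlib

    def verify_image(path: str, expected_sha256: str) -> bool:
        """Hash a factory image in 1 MiB chunks and compare to a published digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256

    # The expected value would come from the vendor's site or, better,
    # an append-only transparency log that witnesses have been watching.
    if verify_image("factory-image.zip", "placeholder-digest-published-by-vendor"):
        print("image matches the published hash")
    else:
        print("mismatch: do not flash this image")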
It would be ideal to see existing programs like Android Play Protect certification run a log like this on open-source log implementations, like Trillian. This way, security
researchers, resellers, and regulating bodies could begin to monitor and query information on different Android Original Equipment Manufacturers (OEMs).
There are tools that exist to verify firmware, but right now this ecosystem is a wishlist of sorts. At EFF, we like to imagine what could be better. While a hosted comprehensive log of
Android OEMs doesn’t currently exist, the tools to create it do. Some early participants for accountability in the Android realm include F-Droid’s Android SDK Transparency Log and the Guardian Project’s
(Tor) Binary Transparency
Log.
Time would be better spent solving this problem systemically than researching whether every new electronic evil rectangle or IoT device has malware or not.
A complementary solution to binary transparency is the Software Bill of Materials (SBOM). Think of this as a “list of ingredients” that makes up software. This is another idea
that is not very new, but it has gathered more institutional and government
support. The components listed in an SBOM could highlight issues or vulnerabilities that were reported for certain components of a piece of software. Without binary transparency, though,
researchers, verifiers, auditors, etc. could still be left attempting to extract firmware from devices
that haven’t listed their images. If manufacturers readily provided these images, SBOMs could be generated more easily and help create a less opaque market of electronics. Low
budget or not.
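As a rough sketch of the idea, an SBOM for a low-budget tablet's firmware might look like the following. The fields are loosely inspired by formats such as CycloneDX and SPDX but conform to neither, and every component name here is invented.

    # A toy "list of ingredients" for one firmware image.
    sbom = {
        "device": "example-tablet-y88x",
        "firmware_sha256": "placeholder-digest",
        "components": [
            {"name": "android-framework", "version": "13.0.0"},
            {"name": "vendor-ota-client", "version": "2.1"},
            {"name": "preinstalled-launcher", "version": "5.4"},
        ],
    }

    # An auditor or reseller could cross-check each entry against
    # vulnerability and malware reports before the device ships.
    flagged = {"vendor-ota-client"}  # components with known bad reports, say
    suspect = [c for c in sbom["components"] if c["name"] in flagged]
    print(suspect)  # -> [{'name': 'vendor-ota-client', 'version': '2.1'}]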
We are glad to see some movement from last year’s investigations, right in time for Black Friday. More can be done, and we hope to see not only devices with shady components taken down more swiftly
when reported, but also better support for proactive detection. Regardless of how much someone can spend, everyone deserves a safe, secure device that
doesn’t have malware crammed into it.
Tell the Senate: Don’t Weaponize the Treasury Department Against Nonprofits
(Wed, 27 Nov 2024)
Last week the House of Representatives passed a dangerous bill that would allow the
Secretary of Treasury to strip a U.S. nonprofit of its tax-exempt status. If it passes the Senate and is signed into law, H.R. 9495 would give broad and easily abused new powers to
the executive branch. Nonprofits would not have a meaningful opportunity to defend themselves, and could be targeted without disclosing the reasons or evidence for the
decision.
This bill is an existential threat to nonprofits of all stripes. Future administrations could weaponize the powers in this bill to target nonprofits on either end of the
political spectrum. Even if they are not targeted, the threat alone could chill the activities of some nonprofit organizations.
The bill’s authors have combined this attack on nonprofits, originally written as H.R. 6408, with other legislation that would prevent the IRS from imposing fines and penalties
on hostages while they are held abroad. These are separate matters. Congress should separate these two bills to allow a meaningful vote on this dangerous expansion of executive power.
No administration should be given this much power to target nonprofits without due process.
tell your senator
Protect nonprofits
Over 350 civil liberties, religious, reproductive health, immigrant rights, human rights, racial justice, LGBTQ+, environmental, and educational organizations signed a letter opposing the bill as written. Now, we need your help.
Tell the Senate not to pass H.R. 9495, the
so-called “Stop Terror-Financing and Tax Penalties on American Hostages Act.”
EFF Tells the Second Circuit a Second Time That Electronic Device Searches at the Border Require a Warrant
(Tue, 26 Nov 2024)
EFF, along with ACLU and the New York Civil Liberties Union, filed a second amicus brief in the U.S. Court of
Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument
EFF has been making in the courts and Congress for nearly a decade.
The case, U.S. v. Smith, involved a traveler who was stopped at Newark airport after returning
from a trip to Jamaica. He was detained by border officers at the behest of the FBI and his cell phone was forensically searched. He had been under investigation for his involvement
in a conspiracy to control the New York area emergency mitigation services (“EMS”) industry, which included (among other things) insurance fraud and extortion. He was subsequently
prosecuted and sought to have the evidence from his cell phone thrown out of court.
As we wrote about last year, the district court
made history in holding that border searches of cell phones require a warrant and therefore warrantless device searches at the border violate the Fourth Amendment. However, the judge
allowed the evidence to be used in Mr. Smith’s prosecution because, the judge concluded, the officers had a “good faith” belief that they were legally permitted to search his phone
without a warrant.
The number of warrantless device searches at the border is only increasing, as is the significant invasion of privacy they represent. In Fiscal Year 2023, U.S. Customs and Border
Protection (CBP) conducted 41,767
device searches.
The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless
“routine” searches of luggage, vehicles, and other items crossing the border.
The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as
drugs, weapons, and other prohibited items, thereby blocking their entry into the country.
In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California
(2014) should govern the analysis here—and that the district court was correct in applying Riley. In
that case, the Supreme Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests
in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant exception does not apply, and that
police need to get a warrant to search an arrestee’s phone.
Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data
points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial
status, health conditions, and family and professional associations.
In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose
of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered
access to travelers’ electronic devices.
First, physical contraband (like drugs) can’t be found in digital data.
Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given
the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.
As the Smith court stated, “Stopping the cell phone from entering the country would not … mean stopping the data contained on it from entering the country” because any data
that can be found on a cell phone—even digital contraband—“very likely does exist not just on the phone device itself, but also on faraway computer servers potentially located within
the country.”
Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for
general law enforcement (i.e., investigating non-border-related domestic crimes, as was the case of the FBI investigating Mr. Smith’s involvement in the EMS conspiracy) are too
“untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution.
If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be
justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s
rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at
the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held
that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form
of photos or files).
In our brief, we also highlighted two other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Sultanov (2024) and U.S. v. Fox (2024). We plan
to file briefs in their appeals, as well. Earlier this month, we filed a brief in another Second Circuit border search case, U.S. v. Kamaldoss. We hope that the Second Circuit will rise to
the occasion in one of these cases and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.
Looking for the Answer to the Question, "Do I Really Own the Digital Media I Paid For?"
(Tue, 26 Nov 2024)
Sure, buying your favorite video game, movie, or album online is super convenient. I personally love being able to pre-order a game and play it the night of
release, without needing to go to a store.
But something you may not have thought about before making your purchase is the difference between owning a physical copy and a digital copy of that media. Unfortunately, there are quite a few rights you give up by purchasing a digital copy of your favorite game, movie, or album! On our new site, Digital Rights Bytes, we outline the differences between owning physical and digital media, and why we need to break down that barrier.
Digital Rights Bytes answers this and other common questions about technology that may be getting on your nerves, with short videos featuring adorable animals. You can also read up on what EFF is doing to ensure
you actually own the digital media you pay for, and how you can take action, too.
Got other questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag
#DigitalRightsBytes.
Organizing for Digital Rights in the Pacific Northwest
(Fri, 22 Nov 2024)
Recently I traveled to Portland, Oregon to speak at the PDX People’s Digital Safety Fair, meet up
with five groups in the Electronic Frontier Alliance, and attend BSides PDX 2024. Portland’s first ever Digital Safety Fair was a success and five of our six EFA organizations in the
area participated: Personal Telco Project, Encode
Justice Oregon, PDX Privacy, TA3M Portland, and Community Broadband
PDX. I was able to reaffirm our support with these organizations, and table with most of them as they met local people interested in digital rights. We distributed
EFF toolkits as a resource, and we made sure EFA brochures and stickers had a presence on all their tables. A few of these organizations were also present at BSides PDX, and it was
great seeing them being leaders in the local infosec and cybersecurity community.
PDX Privacy’s mission is to bring about transparency and control in the acquisition and use of surveillance systems in the
Portland Metro area, whether personal data is captured by the government or by commercial entities. Transparency is essential to ensure privacy protections, community control,
fairness, and respect for civil rights.
TA3M Portland is an informal meetup designed to connect software creators and activists
who are interested in censorship, surveillance, and open technology.
The Oregon Chapter of Encode Justice, the world’s first and largest youth movement for human-centered
artificial intelligence, works to mobilize policymakers and the public for guardrails to ensure AI fulfills its transformative potential. Its mission is to ensure we encode justice
and safety into the technologies we build.
(l to r) Pictured here with PDX Privacy's Seth, Boaz, and new president, Nate. Pictured with Chris Bushick, legendary Portland privacy advocate of TA3M PDX. Pictured with the leaders of Encode Justice Oregon.
Community Broadband PDX’s focus is on expanding the existing dark fiber broadband network in Portland to all residents, creating an
open-source model where the city owns the fiber, and it's controlled by local nonprofits and cooperatives, not large ISPs.
Personal Telco is dedicated to the idea that users have a central role in how their communications networks are operated. It does this by building networks that members share with their communities, and by helping to educate others in how they can do the same.
At the People’s Digital Safety Fair I spoke in the main room on the campaign to
bring high-speed broadband to Portland, which is led by Community Broadband PDX and the Personal Telco Project. I made a direct call to action for those in attendance
to join the campaign. My talk culminated with, “What kind of ACTivist would I be if I didn’t implore you to take an ACTion? Everybody pull out your phones.” Then I guided the room to
the website for Community Broadband PDX and to the ‘Join Us’ page where people in that moment signed up to join the campaign, spread the word with their neighbors, and get organized
by the Community Broadband PDX team. You can reach out to them at cbbpdx.org and personaltelco.net. You can get in touch with all the groups mentioned in this blog with their hyperlinks above, or use our
EFA allies directory to see who’s organizing in your area.
(l to r) BSidesPDX 2024 swag and stickers. A photo of me speaking at the People’s Digital Privacy Fair on broadband access in PDX. Pictured with Jennifer Redman, President of
Community Broadband PDX and former broadband administrator for the city of Portland, OR. A picture of the Personal Telco table with EFF toolkits printed and EFA brochures on hand.
Pictured with Ted, Russell Senior, and Drew of Personal Telco Project. Lastly, it's always great to see a member and active supporter of EFF interacting with one of our EFA
groups.
It’s very exciting to see what members of the EFA are doing in Portland! I also went up to Seattle and met with a few organizations, including one now in talks to join the EFA.
With new EFA friends in Seattle, and existing EFA relationships fortified, I'm excited to help grow our presence and support in the Pacific Northwest, and have new allies with
experience in legislative engagement. It’s great to see groups in the Pacific Northwest engaged and expanding their advocacy efforts, and even greater to stand by them as they
do!
Electronic Frontier Alliance members get support from a community of like-minded grassroots organizers from across the US. If your group defends our digital rights, consider
joining today. https://efa.eff.org
Speaking Freely: Anriette Esterhuysen
(Fri, 22 Nov 2024)
This interview took place in April 2024 at NetMundial+10 in São Paulo, Brazil. It has been edited for length and clarity.
Anriette Esterhuysen is a human rights defender and computer networking trailblazer from South Africa. She has pioneered the use of Information and Communications Technologies (ICTs) to promote social justice in South Africa and throughout the world, focusing on affordable Internet access. She was the executive director of the Association for Progressive Communications from 2000 to 2017. In November 2019 Anriette was appointed by the Secretary-General of the United Nations to chair the Internet Governance Forum's Multistakeholder Advisory Group.
Greene: Can you go ahead and introduce yourself for us?
Esterhuysen: My name is Anriette Esterhuysen, I am from South Africa and I'm currently sitting here with David in São Paulo, Brazil. My closest association remains with
the Association for Progressive Communications where I was executive director from 2000 to 2017. I continue to work
for APC as a consultant in the capacity of Senior Advisor on Internet Governance and convenor of the annual African School on Internet
Governance (AfriSIG).
Greene: Can you tell us more about the African School on Internet Governance (AfriSIG)?
AfriSIG is fabulous. It differs from internet governance capacity building provided by the technical community in that it aims to build critical thinking. It also does not gloss
over the complex power dynamics that are inherent to multistakeholder internet governance. It tries to give participants a hands-on experience of how different interest groups and
sectors approach internet governance issues.
AfriSIG started as a result of Titi Akinsanmi, a young Nigerian doing postgraduate studies in South Africa, approaching APC and saying, “Look, you’ve got to do something.
There’s a European School of Internet Governance, there’s one in Latin America, and where is there more need for capacity-building than in Africa?” She convinced me and my
colleague Emilar Vushe Gandhi, APC Africa Policy Coordinator at the time, to organize an African internet
governance school in 2013 and since then it has taken place every year. It has evolved over time into a partnership between APC and the African Union Commission and Research ICT
Africa.
It is a residential leadership development and learning event that takes place over 5 days. We bring together people who are already working in internet or communications policy
in some capacity. We create space for conversation between people from government, civil society, parliaments, regulators, the media, business and the technical community on what in
Africa are often referred to as “sensitive topics”. This can be anything from LGBTQ rights to online freedom of expression, corruption, authoritarianism, and accountable governance.
We try to create a safe space for deep diving the reasons for the dividing lines between, for example, government and civil society in Africa. It’s very delicate. I love doing it
because I feel that it transforms people’s thinking and the way they see one another and one another’s roles. At the end of the process, it is common for a government official to say
they now understand better why civil society demands media freedom, and how transparency can be useful in protecting the interests of public servants. And civil society activists have
a better understanding of the constraints that state officials face in their day-to-day work. It can be quite a revelation for individuals from civil society to be confronted with the
fact that in many respects they have greater freedom to act and speak than civil servants do.
Greene: That’s great. Okay now tell me, what does free speech mean to you?
I think of it as freedom of expression. It’s fundamental. I grew up under Apartheid in South Africa and was active in the struggle for democracy. There is something deeply wrong
with being surrounded by injustice, cruelty and brutality and not being allowed to speak about it. Even more so when one's own privilege comes at the expense of the oppressed, as was
the case for white South Africans like myself. For me, freedom of expression is the most profound part of being human. You cannot change anything, deconstruct it, or learn about it at
a human level without the ability to speak freely about what it is that you see, or want to understand. The absence of freedom of expression entrenches misinformation, a lack of
understanding of what is happening around you. It facilitates willful stupidity and selective knowledge. That’s why it’s so smart of repressive regimes to stifle freedom of
expression. By stifling free speech you disempower the victims of injustice from voicing their reality, on the one hand, and, on the other, you entrench the unwillingness of those who
are complicit with the injustice to confront that they’re part of it.
It is impossible to shift a state of repression and injustice without speaking out about it. That is why people who struggle for freedom and justice speak about it, even if
doing so gets them imprisoned, assassinated or executed. Change starts through people, the media, communities, families, social movements, and unions, speaking about what needs to
change.
Greene: Having grown up in Apartheid, is there a single personal experience or a group of personal experiences that really shaped your views on freedom of expression?
I think I was fortunate in the sense that I grew up with a mother who—based on her Christian beliefs—came to see Apartheid as being wrong. She was working as a social worker for
the main state church—the Dutch Reformed Church (DRC)—at the time of the Cottesloe
Consultation convened in Johannesburg by the World Council of Churches (WCC) shortly
after the Sharpeville Massacre. An outcome statement from this consultation, and later
deliberations by the WCC in Geneva, condemned the DRC for its racism. In response, the DRC decided to leave the WCC. At a church meeting my mother attended, she listened to the debate and to someone in the church hierarchy who spoke against this decision and challenged the church over its racist stance. His words made sense to her. She spoke to him after the
meeting and soon joined the organization he had started to oppose Apartheid, the Christian
Institute. His name was Beyers Naudé and he became an icon of the
anti-Apartheid struggle and an enemy of the apartheid state. Apparently, my first protest march was in a pushchair at a rally in 1961 to oppose the rightwing National Party
government's decision for South Africa to leave the Commonwealth.
There’s no single moment that shaped my view of freedom of expression. The thing about living in the context of that kind of racial segregation and repression is that you see it
every day. It’s everywhere around you, but like Nazi Germany, people—white South Africans—chose not to see it, or if they did, to find ways of rationalizing it.
Censorship was both a consequence of and a building block of the Apartheid system. There was no real freedom of expression. But because we had courageous journalists, and a
broad-based political movement—above ground and underground—that opposed the regime, there were spaces where one could speak/listen/learn. The Congress of Democrats established in the 1950s after the Communist Party was banned was a
social justice movement in which people of different faiths and political ideologies (Jewish, Christian and Muslim South Africans alongside agnostics and communists) fought for
justice together. Later in the 1980s, when I was a student, this broad front approach was revived through the United Democratic Front. Journalists did amazing things. When censorship was at its
height during the State of Emergency in the 1980s, newspapers would go to print with columns of blacked-out text—their way of telling the world that they were being censored.
I used to type up copy filed over the phone or on cassettes by reporters for the Weekly Mail when I was a student. We had to be fast because everything had to be checked by the paper's lawyers before going to print. Lack of freedom of expression was legislated. The courage of editors and individual journalists to defy this, and, when they could not, to make the censorship obvious, made a huge impact on me.
Greene: Is there a time when you, looking back, would consider that you were personally censored?
I was very much personally censored at school. I went to an Afrikaans secondary school. And I kind of have a memory of when, after coming back from a vacation, my math teacher—who I had no personal relationship with—walked past me in class and asked me how my holiday on Robben
Island was. I thought, why is he asking me that? A few days later I heard from a teacher I was friendly with that there was a special staff meeting about me. They
felt I was very politically outspoken in class and the school hierarchy needed to take action. No actual action was taken... but I felt watched, and through that, censored, even if
not silenced.
I felt that because for me, being white, it was easier to speak out than for black South Africans, it would be wrong not to do so. As a teenager, I had already made that choice.
It was painful from a social point of view because I was very isolated, I didn’t have many friends, I saw the world so differently from my peers. In 1976 when the Soweto riots broke out I remember someone in my class saying, “This is exactly what we’ve been waiting for
because now we can just kill them all.” This is probably also why I feel a deep connection with Israel/Palestine. There are many dimensions to the Apartheid analogy. The one
that stands out for me is how, as was the case in South Africa too, those with power—Jewish Israelis—dehumanize and villainize the oppressed: Palestinians.
Greene: At some point did you decide that you want human rights more broadly and freedom of expression to be a part of your career?
I don't think it was a conscious decision. I think it was what I was living for. It was the raison d'être of my life for a long time. After high school, I had secured places at two universities: at one for a science degree, and at the other for a degree in journalism. But I ended up going to a different university, making the choice based on the strength of
its student movement. The struggle against Apartheid was expressed and conceptualized as a struggle for human rights. The Constitution of democratic South Africa was crafted by human
rights lawyers and in many respects it is a localized interpretation of the Universal Declaration.
Later, in the late 1980s, when I started working on access to information through the use of Information and Communication Technologies (ICTs), it felt like an extension of the political work I had done as a student and in my early working life. APC, which I joined as a member—not staff—in the 1990s, was made up of people from other parts of the world who had been fighting their own struggles for freedom—Latin America, Asia, and Central/Eastern Europe. All with very similar hopes about how the use of these technologies could enable freedom and solidarity.
Greene: So fast forward to now, currently do you think the platforms promote freedom of expression for people or restrict freedom of expression?
Not a simple question. Still, I think the net effect is more freedom of expression. The extent of online freedom of expression is uneven and it’s distorted by the platforms in
some contexts. Just look at the biased pro-Israel way in which several platforms moderate content. Enabling hate speech in contexts of conflict can definitely have a silencing effect.
By not restricting hate in a consistent manner, they end up restricting freedom of expression. But I think it’s disingenuous to say that overall the internet does not increase
freedom of expression. And social media platforms, despite their problematic business models, do contribute. They could of course do it so much better, fairly and consistently, and
for not doing that they need to be held accountable.
Greene: We can talk about some of the problems and difficulties. Let’s start with hate speech. You said it’s a problem we have to tackle. How do we tackle it?
You’re talking to a very cynical old person here. I think that social media amplifies hate speech. But I don’t think they create the impulse to hate. Social media business
models are extractive and exploitative. But we can’t fix our societies by fixing social media. I think that we have to deal with hate in the offline world. Channeling energy and
resources into trying to grow tolerance and respect for human rights in the online space is not enough. It’s just dealing with the symptoms of intolerance and populism. We need to
work far harder to hold people, particularly those with power, accountable for encouraging hate (and disinformation). Why is it easy to get away with online hate in India? Because
Modi likes hate. It’s convenient for him, it keeps him in political power. Trump is another example of a leader that thrives on hate.
What’s so problematic about social media platforms is the monetization of this. That is absolutely wrong and should be stopped—I can say all kinds of things about it. We need to
have a multi-pronged approach. We need market regulation, perhaps some form of content regulation, and new ways of regulating advertising online. We need access to data on what
happens inside these platforms. Intervention is needed, but I do not believe that content control is the right way to do it. It is the business model that is at the root of the
problem. That’s why I get so frustrated with this huge global effort by governments (and others) to ensure information integrity through content regulation. I would rather they
spend the money on strengthening independent media and journalism.
Greene: We should note we are at an information integrity conference today. In terms of hate speech, are there hazards to having hate speech laws?
South Africa has hate speech laws which I believe are necessary. Racial hate speech continues to be a problem in South Africa. So is xenophobic hate speech. We have an election
coming on May 29 [2024] and I was listening to talk radio on election issues and hearing how political parties use xenophobic tropes in their campaigns was terrifying. “South Africa
has to be for South Africans.” “Nigerians run organized crime.” “All drugs come from Mozambique,” and so on. Dangerous speech needs to be called out. Norms are important.
But I think that establishing legalized content regulation is risky. In contexts without robust protection for freedom of expression, such regulation can easily be abused by states to
stifle political speech.
Greene: Societal or legal norms?
Both. Legal norms are necessary because social norms can be so inconsistent, volatile. But social norms shape people’s everyday experience and we have to strive to make
them human rights aware. It is important to prevent the abuse of legal norms—and states are, sadly, pretty good at doing just that. In the case of South Africa hate speech regulation
works relatively well because there are strong protections for freedom of expression. There are soft and hard law mechanisms. The South African Human Rights Commission developed a social media charter to counter harmful
speech online as a kind of self-regulatory tool. All of this works—not perfectly of course—because we have a constitution that is grounded in human rights. Where we need to be more
consistent is in holding politicians accountable for speech that incites hate.
Greene: So do we want checks and balances built into the regulatory scheme or are you just wanting it existing within a government scheme that has checks and balances built
in?
I don’t think you need new global rule sets. I think the existing international human rights framework provides what we need and just needs to be strengthened and its
application adapted to emerging tech. One of the reasons why I don’t think we should be obsessive about restricting hate speech online is because it is a canary in a coal mine. In
societies where there’s a communal or religious conflict or racial hate, removing its manifestation online could be a missed opportunity to prevent explosions of violence
offline. That is not to say that there should not be recourse and remedy for victims of hate speech online. Or that those who incite violence should not be held accountable. But
I believe we need to keep the bar high in how we define hate speech—basically as speech that incites violence.
South Africa is an interesting case because we have very progressive laws when it comes to same-sex marriage, same-sex adoption, relationships, insurance, spousal recognition,
medical insurance and so on, but there’s still societal prejudice, particularly in poor communities. That is why we need a strong rights-oriented legal framework.
Greene: So that would be another area where free speech can be restricted and not just from a legal sense but you think from a higher level principles sense.
Right. Perhaps what I am trying to say is that there is speech that incites violence and it should be restricted. And then there is speech that is hateful and discriminatory,
and this should be countered, called out, and challenged, but not censored. When you’re talking about the restriction—or not even the restriction but the recognition and calling
out of—harmful speech it’s important not just to do that online. In South Africa stopping xenophobic speech online or on public media platforms would be relatively simple. But it’s
not going to stop xenophobia in the streets. To do that we need other interventions. Education, public awareness campaigns, community building, and change in the underlying
conditions in which hate thrives which in our case is primarily poverty and unemployment, lack of housing and security.
Greene: This morning someone speaking at this event about misinformation said, “The vast majority of misinformation is online.” And certainly in the US, researchers say that's not true; most of it is on cable news. But it struck me that someone who is considered an expert should know better. We have information ecosystems, and online does not exist separately.
It’s not separate. Agree. There’s such a strong tendency to look at online spaces as an alternative universe. Even in countries with low internet penetration, there’s a tendency
to focus on the online components of these ecosystems. Another example would be child online protection. Most child abuse takes place in the physical world, and most child abusers are
close family members, friends or teachers of their victims—but there is a global obsession with protecting children online. It is a shortsighted and ‘cheap’ approach and it
won’t work. Not for dealing with misinformation or for protecting children from abuse.
Greene: Okay, our last question we ask all of our guests. Who is your free speech hero?
Desmond Tutu. I have many free speech heroes but Bishop Tutu is a standout because he could be so charming
about speaking his truths. He was fearless in challenging the Apartheid regime. But he would also challenge his fellow Christians. One of his best lines was, “If LGBT people are
not welcome in heaven, I’d rather go to the other place.” And then the person I care about and fear for every day is Egyptian blogger Alaa Abd el-Fattah. I remember walking at night through the streets of Cairo with him in 2012. People kept
coming up to him, talking to him, and being so obviously proud to be able to do so. His activism is fearless. But it is also personal, grounded in love for his city, his country, his
family, and the people who live in it. For Alaa freedom of speech, and freedom in general, was not an abstract or a political goal. It was about freedom to love, to create art, music,
literature and ideas in a shared way that brings people joy and togetherness.
Greene: Well now I have a follow-up question. You said you think free speech is undervalued these days. In what ways and how do we see that?
We see it manifested in the absence of tolerance, in the increase in people claiming that their freedoms are being violated by the expression of those they disagree with, or who
criticize them. It’s as if we’re trying to establish these controlled environments where we don’t have to listen to things that we think are wrong, or that we disagree with. As you
said earlier, information ecosystems have offline and online components. Getting to the “truth” requires a mix of different views, disagreement, fact-checking, and holding people who
deliberately spread falsehoods accountable for doing so. We need people to have the right to free speech, and to counter-speech. We need research and evidence gathering, investigative
journalism, and, most of all, critical thinking. I'm not saying there shouldn't be restrictions on speech in certain contexts, but do it because the speech is illegal or actively incites violence. Don't do it because you think it will achieve so-called information integrity. And especially, don't do it in ways that undermine the right to freedom of expression.
Oppose The Patent-Troll-Friendly PREVAIL Act
(Wed, 20 Nov 2024)
Update 11/21/2024: The Senate Judiciary Committee voted 11-10 in favor of PREVAIL, and several senators expressed concerns about the bill. Thanks
to EFF supporters who spoke out! We will continue to oppose this misguided bill.
Good news: the Senate Judiciary Committee has dropped one of the two terrible patent bills it was considering, the patent-troll-enabling Patent Eligibility Restoration Act (PERA).
Bad news: the committee is still pushing the PREVAIL
Act, a bill that would hamstring the U.S.’s most effective system for invalidating bad patents. PREVAIL is a windfall for patent trolls, and Congress should reject
it.
Take Action
Tell Congress: No New Bills for Patent Trolls
One of the most effective tools to fight bad patents in the U.S. is a little-known but important system called inter partes review, or IPR. Created by Congress in 2011, the IPR
process addresses a major problem: too many invalid patents slip through the cracks at the U.S. Patent and Trademark Office. While not an easy or simple process, IPR is far less
expensive and time-consuming than the alternative—fighting invalid patents in federal district court.
That’s why small businesses and individuals rely on IPR for protection. More than 85% of tech-related patent lawsuits are filed by non-practicing entities, also known as “patent
trolls”—companies that don’t have products or services of their own, but instead make dozens, or even hundreds, of patent claims against others, seeking settlement payouts.
So it’s no surprise that patent trolls are frequent targets of IPR challenges, often brought by tech companies. Eliminating these worst-of-the-worst patents is a huge benefit to small
companies and individuals that might otherwise be unable to afford an IPR challenge themselves.
For instance, Apple used an IPR-like process to invalidate a patent owned by the troll Ameranth, which claimed rights over using mobile devices to order food. Ameranth had sued
over 100 restaurants, hotels, and fast-food chains. Once the patent was invalidated, after an appeal to the Federal Circuit, Ameranth’s barrage of baseless lawsuits came to
an end.
PREVAIL Would Ban EFF and Others From Filing Patent Challenges
The IPR system isn’t just for big tech—it has also empowered nonprofits like EFF to fight patents that threaten the public interest.
In 2013, a patent troll called Personal Audio LLC claimed that it had patented podcasting.
The patent, titled “System for disseminating media content representing episodes in a serialized sequence,” became the basis for the company's demand for licensing fees from podcasters
nationwide. Personal Audio filed lawsuits against three podcasters and
threatened countless others.
EFF took on the challenge, raising over $80,000 through crowd-funding to file an IPR
petition. The Patent Trial and Appeal Board agreed: the so-called “podcasting patent” should never have been granted. EFF proved that Personal Audio's claims were invalid, and our
victory was upheld all the way to the Supreme
Court.
The PREVAIL Act would block such efforts. It limits IPR petitions to parties directly targeted by a patent owner, shutting out groups like EFF that protect the broader public.
If PREVAIL becomes law, millions of people indirectly harmed by bad patents—like podcasters threatened by Personal Audio—will lose the ability to fight back.
PREVAIL Tilts the Field in Favor of Patent Trolls
The PREVAIL Act will make life easier for patent trolls at every step of the process. It is shocking that the Senate Judiciary Committee is using the few remaining hours it will
be in session this year to advance a bill that undermines the rights of innovators and the public.
Patent troll lawsuits target individuals and small businesses for simply using everyday technology. Everyone who can meet the legal requirements of an IPR filing should have the right to challenge invalid patents. Use our action center and tell Congress: that's not a right we want to give up.
Take Action
Tell Congress: Reject the PREVAIL Act
More on the PREVAIL Act:
EFF’s blog on how the PREVAIL Act takes
rights away from the public
Our coalition opposition letter to the Senate
Judiciary Committee opposing PREVAIL
Read why patients' rights and consumer groups also oppose PREVAIL
The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable
(Tue, 19 Nov 2024)
The Biden White House has released a
memorandum on “Advancing United States' Leadership in Artificial Intelligence,” which includes, among other things, a directive for the national security apparatus to become a world leader in the use of AI. Under direction from the White House, the national security state is expected to take up this leadership position by poaching
great minds from academia and the private sector and, most disturbingly, leveraging already functioning private AI models for national security objectives.
Private AI systems like those operated by tech companies are incredibly opaque. People are uncomfortable—and rightly so—with companies that use AI to decide all sorts of things about their
lives–from how likely they are to commit a crime, to their eligibility for a job, to issues involving immigration, insurance, and housing. Right now, as you read this, for-profit
companies are leasing their automated decision-making services to all manner of companies and employers and most of those affected will never know that a computer made a choice about
them and will never be able to appeal that decision or understand how it was made.
But it can get worse: combining private AI with national security secrecy threatens to make an already secretive system even less accountable and transparent. The
constellation of organizations and agencies that make up the national security apparatus are notoriously secretive. EFF has had to fight in court a number of times in an attempt to
make public even the most basic frameworks of global dragnet surveillance and the rules that govern it. Combining these two will create a Frankenstein’s Monster of secrecy,
unaccountability, and decision-making power.
While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing tremendous amounts of data, understanding what information it retains and how it arrives at conclusions will become central to how the national security state thinks about issues. This means the state will likely argue not only that the AI's training data may need to be classified, but also that companies must, under penalty of law, keep the governing algorithms secret.
As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead
the world in the responsible application of AI to appropriate national security functions.” As the US national security state attempts to leverage powerful commercial AI to give it an
edge, there are a number of questions that remain unanswered about how much that ever-tightening relationship will impact much needed transparency and accountability for private AI
and for-profit automated decision making systems.
Now's The Time to Start (or Renew) a Pledge for EFF Through the CFC
(Tue, 19 Nov 2024)
The Combined Federal Campaign (CFC) pledge period is underway and runs through January 15, 2025! If you're a U.S. federal employee or retiree, be sure to show your
support for EFF by using our CFC ID 10437.
Not sure how to make a pledge? No problem, it's easy! First, head over to GiveCFC.org and click “DONATE.” Then you can search for EFF using our CFC ID 10437 and make a pledge via payroll deduction, credit/debit, or an e-check. If you have a renewing pledge, you can also increase your support there!
The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year, members of the CFC community raised
nearly $34,000 to support EFF’s work advocating for privacy and free expression online. That support has helped us:
Push the Fifth Circuit Court of Appeals to find that geofence warrants are “categorically” unconstitutional.
Launch Digital Rights Bytes, a resource dedicated to teaching people how to take control of the technology they use every day.
Call out unconstitutional age-verification and censorship laws across the U.S.
Continue to develop and maintain our privacy-enhancing tools, like Certbot and Privacy Badger.
Federal employees and retirees greatly impact our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437
when you make a pledge today!
Speaking Freely: Marjorie Heins
(Tue, 19 Nov 2024)
This interview has been edited for length and clarity.
Marjorie Heins is a writer, former civil rights/civil liberties attorney, and past director of the Free Expression Policy Project (FEPP) and the American Civil Liberties Union's Arts Censorship Project. She is the
author of "Priests of Our Democracy: The Supreme Court, Academic Freedom, and the Anti-Communist Purge," which won the Hugh M. Hefner First Amendment Award in Book Publishing in 2013,
and "Not in Front of the Children: Indecency, Censorship, and the Innocence of Youth," which won the American Library Association's Eli Oboler Award for Best Published Work in the
Field of Intellectual Freedom in 2002.
Her most recent book is "Ironies and Complications of Free Speech: News and Commentary From the Free Expression Policy Project." She has written three other books and scores
of popular and scholarly articles on free speech, censorship, constitutional law, copyright, and the arts. She has taught at New York University, the University of California - San
Diego, Boston College Law School, and the American University of Paris. Since 2015, she has been a volunteer tour guide at the Metropolitan Museum of Art in New York City.
Greene: Can you introduce yourself and the work you’ve done on free speech and how you got there?
Heins: I’m Marjorie Heins, I’m a retired lawyer. I spent most of my career at the ACLU. I started in Boston, where we had a very small office, and we sort of did everything—some
sex discrimination cases, a lot of police misconduct cases, occasionally First Amendment. Then, after doing some teaching and a stint at the Massachusetts Attorney General’s office, I
found myself in the national office of the ACLU in New York, starting a project on art censorship. This was in response to the political brouhaha over the National Endowment for the Arts starting around 1989/1990.
Culture wars, attacks on some of the grants made by the NEA, became a big hot button issue. The ACLU was able to raise a little foundation money to hire a lawyer to work on some
of these cases. And one case that was already filed when I got there was National Endowment for the Arts v. Finley. It was basically a challenge by four theater performance artists whose grants had been recommended by the peer panel but then
ultimately vetoed by the director after a lot of political pressure because their work was very much “on the edge.” So I joined the legal team in that case, the
Finley case, and it had a long and complicated history. Then, by the mid-1990s we were faced with the internet. And there were all these scares over
pornography on the internet poisoning the minds of our children. So the ACLU got very involved in challenging censorship legislation that had been passed by Congress, and I worked on
those cases.
I left the ACLU in 1998 to write a book about what I had learned about censorship. I was curious to find out more about the history primarily of obscenity legislation—the
censorship of sexual communications. So it's a scholarly book called “Not in Front of the Children.” Among the things I discovered is that the origins
of censorship of sexual content, sexual communications, come out of this notion that we need to protect children and other “vulnerable beings.” And initially that included women and
uneducated people, but eventually it really boiled down to children—we need censorship basically of everybody in order to protect children. So that's what Not in Front of the Children was all about.
And then I took my foundation contacts—because at the ACLU if you have a project you have to raise money—and started a little project, a little think tank which became
affiliated with the National Coalition Against Censorship called the Free Expression Policy Project. And at that point we weren’t really doing litigation anymore, we
were doing a lot of friend of the court briefs, a lot of policy reports and advocacy articles about some of the values and competing interests in the whole area of free expression.
And one premise of this project, from the start, was that we are not absolutists. So we didn’t accept the notion that because the First Amendment says “Congress shall make no law
abridging the freedom of speech,” then there’s some kind of absolute protection for something called free speech and there can’t be any exceptions. And, of course, there are many
exceptions.
So the basic premise of the Free Expression Policy Project was that some exceptions to the First Amendment, like obscenity laws, are not really justified because they are driven
by different ideas about morality and a notion of moral or emotional harm rather than some tangible harm that you can identify like, for example, in the area of libel and slander or
invasion of privacy or harassment. Yes, there are exceptions. The default, the presumption, is free speech, but there could be many reasons why free speech is curtailed in certain
circumstances.
The Free Expression Policy Project continued for about seven years. It moved to the Brennan Center for Justice
at NYU Law School for a while, and, finally, I ran out of ideas and funding. I kept up the website for a little while longer, then ultimately ended the website. Then I thought,
“okay, there’s a lot of good information on this website and it’s all going to disappear, so I’m going to put it into a book.” Oh, I left out the other book I worked on in the early
2000s – about academic freedom, the history of academic freedom, called “Priests of Our Democracy: The Supreme Court, Academic Freedom, and the Anti-Communist
Purge.” This book goes back in history even before the 1940s and 1950s Red Scare and the effect that it had on teachers and universities. And then this last book is
called “Ironies and Complications of Free Speech: News and Commentary From the Free Expression Policy Project,” which is basically an anthology of the
best writings from the Free Expression Policy Project.
And that’s me. That’s what I did.
Greene: So we have a ton to talk about because a lot of the things you’ve written about are either back in the news and regulatory cycle or never left it. So I want to start with
your book “Not in Front of the Children” first. I have at least one copy and I’ve been referring to it a lot and suggesting it because we’ve just seen a ton of
efforts to try and pass new child protection laws to protect kids from online harms. And so I'm curious: first there was a raft of efforts around TikTok being bad for kids, and now we're seeing a lot of efforts aimed at shielding kids from harmful material online. Do you think this is a throughline from concerns dating back to mid-19th-century England? Is it still the same debate, or is there something different about these online harms?
Both are true I think. It’s the same and it’s different. What’s the same is that using the children as an argument for basically trying to suppress information, ideas, or
expression that somebody disapproves of goes back to the beginning of censorship laws around sexuality. And the subject matters have changed, the targets have changed. I’m not too
aware of new proposals for internet censorship of kids, but I’m certainly aware of what states—of course, Florida being the most prominent example—have done in terms of school books,
school library books, public library books, and education from not only k-12 but also higher education in terms of limiting the subject matters that can be discussed. And the primary
target seems to be anything to do with gay or lesbian sexuality and anything having to do with a frank acknowledgement of American slavery or Jim Crow racism. Because the argument in
Florida, and this is explicit in the law, is because it would make white kids feel bad, so let’s not talk about it. So in that sense the two targets that I see now—we’ve got to
protect the kids against information about gay and lesbian people and information about the true racial history of this country—are a little different from the 19th century and even
much of the 20th century.
Greene: One of the things I see is that the harms motivating the book bans and school restrictions are the same harms that are motivating at least some of the legislators who are
trying to pass these laws. And notably a lot of the laws only address online harmful material without being specific about subject matter. We’re still seeing some that are
specifically about sexual material, but a lot of them, including the Kids Online Safety Act really just focus on online harms more broadly.
I haven’t followed that one, but it sounds like it might have a vagueness problem!
Greene: One of the things I get concerned about with the focus on design is that, like, a state Attorney General is not going to be upset if the design has kids reading a lot of
bible verses or tomes about being respectful to your parents. But they will get upset and prosecute people if the design feature is recommending to kids gender-affirming care or
whatever. I just don’t know if there’s a way of protecting against that in a law.
Well, as we all know, when we’re dealing with commercial speech there’s a lot more leeway in terms of regulation, and especially if ads are directed at kids. So I don’t have a
problem with government legislation in the area of restricting the kinds of advertising that can be directed at kids. But if you get out of the area of commercial speech and to
something that’s kind of medical, could you have constitutional legislation that prohibited websites from directing kids to medically dangerous procedures? You’re sort of getting
close to the borderline. If it’s just information then I think the legislation is probably going to be unconstitutional even if it’s related to kids.
Greene: Let's shift to academic freedom, which is another fraught issue. What do you think of the current debates over both restrictions on faculty and universities restricting student speech?
Academic freedom is under the gun from both sides of the political spectrum. For example, Diversity, Equity, and Inclusion (DEI) initiatives, although they seem
well-intentioned, have led to some pretty troubling outcomes. So that when those college presidents were being interrogated by the members of Congress (in December 2023), they were in
a difficult position, among other reasons, because at least at Harvard and Penn it was pretty clear there were instances of really appalling applications of this idea of Diversity,
Equity, and Inclusion – both to require a certain kind of ideological approach and to censor or punish people who didn’t go along with the party line, so to speak.
The other example I’m thinking of, and I don’t know if Harvard and Penn do this – I know that the University of California system does it or at least it used
to – everybody who applies for a faculty position has to sign a diversity statement, like a loyalty oath, saying that these are the principles they agree with and they will promise to
promote.
And you know you have examples, I mean I may sound very retrograde on this one, but I will not use the pronoun “they” for a singular person. And I know that would mean I
couldn’t get a faculty job! And I’m not sure if my volunteer gig at the Met museum is going to be in trouble because they, very much like universities, have given us instructions,
pages and pages of instructions, on proper terminology – what terminology is favored or disfavored or should never be used, and “they” is in there. You can have circumlocutions so you
can identify a single individual without using he or she if that individual – I mean you can’t even know what the individual’s preference is. So that’s another example of academic
freedom threats from I guess you could call the left or the DEI establishment.
The right in American politics has a lot of material, a lot of ammunition to use when they criticize universities for being too politically correct and too
“woke.” On the other hand, you have the anti-woke law in Florida which is really, as I said before,
directed against education about the horrible racial history of this country. And some of those laws are just – whatever you may think about the ability of state government and state
education departments to dictate curriculum and to dictate what viewpoints are going to be promoted in the curriculum – the Florida anti-woke law and don't-say-gay law really go beyond, I think, any kind of discretion that the courts have
said state and local governments have to determine curriculum.
Greene: Are you surprised at all that we’re seeing that book bans are as big of a thing now as they were twenty years ago?
Well, nothing surprises me. But yes, I would not have predicted that there were going to be the current incarnations of what you can remember from the old days, groups like the
American Family Association, the Christian Coalition, the Eagle Forum, the groups that were “culture warriors” who were making a lot of headlines with their arguments forty years ago
against even just having art that was done by gay people. We’ve come a long way from that, but now we have Moms for Liberty and present-day incarnations of the same groups. The
homophobic agenda is a little more nuanced; it’s a little different from what we were seeing in the days of Jesse
Helms in Congress. But the attacks on drag performances, this whole argument that children are going to be groomed to become drag queens or become gay—that’s a little
bit of a different twist, but it’s basically the same kind of homophobia. So it’s not surprising that it’s being churned up again if this is something that politicians think they can
get behind in order to get elected. Or, let me put it another way, if the Moms for Liberty type groups make enough noise and seem to have enough political potency, then politicians
are going to cater to them.
And so the answer has to be groups on the other side that are making the free expression argument or the intellectual freedom argument or the argument that teachers and
professors and librarians are the ones who should decide what books are appropriate. Those groups have to be as vocal and as powerful in order to persuade politicians that they don’t
have to start passing censorship legislation in order to get votes.
Greene: Going back to the college presidents being grilled on the Hill, you wrote that maybe there was a better answer they could have given in response to the genocide question,
which I think is where they were most sharply criticized. Could you talk about that?
I think in that context, both for political reasons and for reasons of policy and free speech doctrine, the answer had to be that if students on campus are calling for genocide
of Jews or any other ethnic or religious group, that should not be permitted on campus, and that it amounts to racial harassment. Of course, I suppose you could imagine scenarios where two
antisemitic kids in the privacy of their dorm room said this and nobody else heard it—okay, maybe it doesn’t amount to racial harassment. But private colleges are not bound by the
First Amendment. They all have codes of civility. Public colleges are bound by the First Amendment, but not held to the same standards as the public square. So I took the position that in
that circumstance the presidents had to answer, “Yes, that would violate our policies and subject a student to discipline.” But that’s not the same as calling for the intifada or
calling for even the elimination of the state of Israel as having been a mistake 75 years ago. So I got a little pushback on that little blog post that I wrote. And somebody said, “I’m surprised a
former ACLU lawyer is saying that calling for genocide could be punished on a college campus.” But you know, the ACLU has many different political opinions within both the staff and
Board. There were often debates on different kinds of free speech issues and where certain lines are drawn. And certainly on issues of harassment and when hate speech becomes
harassment—under what circumstances it becomes harassment. So, yes, I think that’s what they should have said. A lot of legal scholars, including David Cole of the ACLU, said they gave exactly the right answer, the legalistic answer, that it depends on the
context. In that political situation that was not the right answer.
Greene: It was awkward. They did answer as if they were having an academic discussion and not as if they were talking to members of Congress.
Well they also answered as if they were programmed. I mean Claudine Gay repeated the exact
same words that probably somebody had told her to say at least twice if not more. And that did not look very good. It didn’t look like she was even thinking for herself.
Greene: I do think they were anticipating the followup question of, “Well isn’t saying ‘From the River to the Sea’ a call for genocide and how come you haven’t punished students
for that?” But as you said, that would then lead into a discussion of how we determine what is or is not a call for genocide.
Well they didn’t need a followup question because to Elise Stefanik, “Intifada” or “from the
river to the sea” was equivalent to a call for genocide, period, end of discussion. Let me say one more thing about these college hearings. What these presidents needed to say is that
it’s very scary when politicians start interrogating college faculty or college presidents about curriculum, governance, and certainly faculty hires. One of the things that was going
on there was they didn’t think there were enough conservatives on college faculties, and that was their definition of diversity. You have to push back on that, and say it is a real
threat to academic freedom and all of the values that we talk about that are important at a university education when politicians start getting their hands on this and using funding
as a threat and so forth. They needed to say that.
Greene: Let’s pull back and talk about free speech principles more broadly. After many years of work in this area, why do you think free expression is
important?
What is the value of free expression more globally? [laughs] A lot of people have opined on that.
Greene: Why is it important to you personally?
Well I define it pretty broadly. So it doesn’t just include political debate and discussion and having all points of view represented in the public square, which used to be the
narrower definition of what the First Amendment meant, certainly according to the Supreme Court. But the Court evolved. And so it’s now recognized, as it should be, that free
expression includes art. The movies—it doesn’t even have to be verbal—it can be dance, it can be abstract painting. All of the arts, which feed the soul, are part of free expression.
And that’s very important to me because I think it enriches us. It enriches our intellects, it enriches our spiritual lives, our emotional lives. And I think it goes without saying
that political expression is crucial to having a democracy, however flawed it may be.
Greene: You mentioned earlier that you don’t consider yourself to be a free speech absolutist. Do you consider yourself to be a maximalist or an enthusiast? What do you see as
being sort of legitimate restrictions on any individual’s freedom of expression?
Well, we mentioned this at the beginning. There are a lot of exceptions to the First Amendment that are legitimate and certainly, when I started at the ACLU I thought that
defamation laws and libel and slander laws violated the First Amendment. Well, I’ve changed my opinion, because there’s real harm that gets caused by libel and slander. As we know, the
Supreme Court has put some First Amendment restrictions around those torts, but they’re important to have. Threats are a well-recognized exception to the freedom of speech, and the
kind of harm caused by threats, even if they’re not followed through on, is pretty obvious. Incitement becomes a little trickier because where do you draw the lines? But at some point
an incitement to violent action I think can be restricted for obvious reasons of public safety. And then we have restrictions on false advertising but, of course, if we’re not in the
commercial context, the Supreme Court has told us that lies are protected by the First Amendment. That’s probably wise just in terms of not trying to get the government and the
judicial process involved in deciding what is a lie and what isn’t. But of course that’s done all the time in the context of defamation and commercial speech. Hate speech is
something, as we know, that’s prohibited in many parts of Europe but not here. At least not in the public square as opposed to employment contexts or educational contexts. Some people
would say, “Well, that’s dictated by the First Amendment and they don’t have the First Amendment over there in Europe, so we’re better.” But having worked in this area for a long time
and having read many Supreme Court decisions, it seems to me the First Amendment has been subjected to the same kind of balancing test that they use in Europe when they interpret
their European Convention on Human Rights or their individual constitutions. They just have different policy choices. And the policy choice to prohibit hate speech given the history
of Europe is understandable. Whether it is effective in terms of reducing racism, Islamophobia, antisemitism… is there more of that in Europe than there is here? Hard to know. It’s
probably not that effective. You make martyrs out of people who are prosecuted for hate speech. But on the other hand, some of it is very troubling. In the United States, Holocaust
denial is protected.
Greene: Can you talk a little bit about your experience being a woman advocating for First Amendment rights for sexual expression during a time when there was at least some form of
feminist movement saying that some types of sexualization of women were harmful to women?
That drove a wedge right through the feminist movement for quite a number of years. There’s still some of that around, but I think less. The battle against pornography has been
pretty much a losing battle.
Greene: Are there lessons from that time? You were clearly on one side of it, are there lessons to be learned from that when we talk about sort of speech
harms?
One of the policy reports we did at the Free Expression Policy Project was on media literacy as an alternative to censorship. Media literacy can be expanded to encompass a lot
of different kinds of education. So if you had decent sex education in this country and kids were able to think about the kinds of messages that you see in commercial pornography and
amateur pornography, in R-rated movies, in advertising—I mean the kind of sexist messages and demeaning messages that you see throughout the culture—education is the best way of
trying to combat some of that stuff.
Greene: Okay, our final question that we ask everyone. Who is your free speech hero?
When I started working on “Priests of our Democracy” the most important case, sort of the culmination of the litigation that took place
challenging loyalty programs and loyalty oaths, was a case called Keyishian v. Board of
Regents. This is a case in which Justice Brennan, writing for a very slim majority of five Justices, said academic freedom is “a special concern of the First
Amendment, which does not tolerate laws that cast a pall of orthodoxy over the classroom.” Harry Keyishian was one of the five plaintiffs in this case. He was one of five faculty
members at the University of Buffalo who refused to sign what was called the Feinberg Certificate, which was essentially a loyalty oath. The certificate required all faculty to say
“I’ve never been a member of the Communist Party and if I was, I told the President and the Dean all about it.” He was not a member of the Communist Party, but as Harry said much
later in an interview – because he had gone to college in the 1950s and he saw some of the best professors being summarily fired for refusing to cooperate with some of these
Congressional investigating committees – fast forward to the Feinberg Certificate loyalty oath: he said his refusal to sign was his “revenge on the 1950s.” And so he becomes the
plaintiff in this case that challenges the whole Feinberg Law, this whole elaborate New York State law that basically required loyalty investigations of every teacher in the public
system. So Harry became my hero. I start my book with Harry. The first line in my book is, “Harry Keyishian was a junior at Queens College in the Fall of 1952 when the Senate
Internal Security Subcommittee came to town.” And he’s still around. I think he just had his 90th birthday!
On Alaa Abd El Fattah’s 43rd Birthday, the Fight For His Release Continues
(Mon, 18 Nov 2024)
Today marks prominent British-Egyptian coder, blogger, activist, and political prisoner Alaa Abd El Fattah’s 43rd
birthday—his eleventh behind bars. Alaa should have been released on September 29, but Egyptian authorities have continued his imprisonment in
contravention of the country’s own Criminal Procedure Code. Since September 29, Alaa’s mother, mathematician Leila Soueif, has been on hunger strike, while she and the rest of his family have worked to engage the British government in securing
Alaa’s release.
Last November, an international counsel team acting on behalf of Alaa’s family filed an urgent appeal
to the UN Working Group on Arbitrary Detention. EFF joined 33 other organizations in supporting the submission and urging the UNWGAD promptly to issue its opinion on the matter.
Last week, we signed
another letter urging the UNWGAD once again to issue an opinion.
Despite his ongoing incarceration, Alaa’s writing and his activism have continued to be honored worldwide. In October, he was announced as the joint winner of the PEN Pinter Prize
alongside celebrated writer Arundhati Roy. His 2021 collection of essays, You Have Not Yet Been Defeated, has been re-released as part of
Fitzcarraldo Editions’ First Decade Collection. Alaa is also the 2023 winner of PEN
Canada’s One Humanity
Award and the 2022 winner of EFF’s own EFF Award for Democratic
Reform Advocacy.
EFF once again calls for Alaa Abd El Fattah’s immediate and unconditional release and urges the UN Working Group on Arbitrary Detention to promptly issue its opinion on his
incarceration. We further urge the British government to take action to secure his release.
"Why Is It So Expensive To Repair My Devices?"
(Thu, 14 Nov 2024)
Now, of course, we’ve all dropped a cell phone, picked it up, and realized that we’ve absolutely destroyed its screen. Right? Or is it just me...? Either way, you’ve probably seen how expensive it can be to repair a device, whether it be a cell phone, laptop, or
even a washing machine.
Device repair doesn’t need to be expensive, but companies have made repair a way to siphon more money from your pocket to theirs. It doesn’t need to be this
way, and with our new site—Digital Rights Bytes—we lay out how we got here and what we can do to fix this
issue.
Check out our short one-minute video explaining why device
repair has become so expensive and what you can do to defend your right to repair. If you’re hungry to learn more, we’ve broken up some key takeaways into small
byte-sized pieces you can even share with your family and friends.
Digital Rights Bytes also has answers to other common questions, including whether your phone is actually listening to you, who owns your digital media, and
more. Got any additional questions you’d like us to answer in the future? Let us know on your favorite social
platform using the hashtag #DigitalRightsBytes so we can find it!
EFF Is Ready for What's Next | EFFector 36.14
(Wed, 13 Nov 2024)
Don't be scared of your backlog of digital rights news; instead, check out EFF's EFFector newsletter! It's the one-stop shop for keeping up with the latest in the
fight for online freedoms. This time we cover our expectations and preparations for the next U.S. presidential administration, surveillance towers at the U.S.-Mexico border, and EFF's new report
on the use of AI in Latin
America.
It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the
full issue here, or subscribe to get the next one in your inbox automatically! You can also listen
to the audio version of the newsletter on the Internet Archive or by clicking the button below:
LISTEN ON YOUTUBE
EFFECTOR 36.14 - EFF IS READY FOR WHAT'S NEXT
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human
rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other
stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us
fight for a brighter digital future.
Tell Congress To Stop These Last-Minute Bills That Help Patent Trolls
(Wed, 13 Nov 2024)
Update 11/21/2024: The Senate Judiciary Committee voted 11-10 in favor of PREVAIL, and several senators expressed concerns about the bill. Thanks to EFF supporters who
spoke out! We will continue to oppose this misguided bill.
Update 11/14/2024: The PERA and PREVAIL patent bills were pulled at the last minute today, without getting a committee vote. Senators are right to have
concerns with these deeply flawed bills. We hope to engage with the next Congress on real patent fixes—changes that will create a more fair system for small companies and
everyday users of tech. Thanks to all those who spoke out! If you haven't told Congress your opinion on these terrible patent bills, you can still do so using our action center
linked below. Help us move the next Congress in a different direction.
This week, the Senate Judiciary Committee is set to use
its limited time in the lame-duck session to vote on a bill that would make the patent system even
worse.
The Patent Eligibility Restoration Act (S. 2140), or PERA, would undo
vital limits on computer technology patents that the Supreme Court established in the landmark 2014 Alice v. CLS Bank decision.
Alice barred patent applicants from obtaining patents simply by adding generic computer language to abstract ideas.
Take Action
Tell Congress: No New Bills For Patent Trolls
While Alice hasn’t fully fixed the problems of the patent system, or patent trolling, it has led to the rejection of hundreds of terrible software patents, including patents on
crowdfunding, tracking packages, photo contests, watching online ads, computer bingo, upselling, and many
others.
PERA would not only revive these dangerous technology patents, but also expand patenting of human genes—a type of patent the Supreme Court essentially blocked in 2013.
The Senate Judiciary Committee is also scheduled to vote on the PREVAIL Act (S. 2220), which seeks to severely limit the public’s ability to challenge bad patents at the patent office. These
challenges are among the most effective tools for eliminating patents that never should have been granted in the first place.
Passing these bills would sell out the public interest to a narrow group of patent holders. EFF stands together with a broad coalition of patients’ rights
groups, consumer rights organizations, think tanks, startups, and business organizations to oppose these harmful bills.
This week, we need to show Congress that everyday users and creators won’t support laws that foster more patent abuse. Help us send a clear message to your representatives in
Congress today.
Take Action
Tell Congress to reject PERA and PREVAIL
The U.S. Senate must reject bills like these that would allow the worst patent scams to expand and thrive.
Speaking Freely: Tanka Aryal
(Tue, 12 Nov 2024)
*This interview took place in April 2024 at NetMundial+10 in São Paulo, Brazil and has been edited for length and clarity.
Tanka Aryal is the President of Digital Rights Nepal. He is an attorney practicing at
the Supreme Court of Nepal. He has worked to promote digital rights, the right to information, freedom of expression, civic space, accountability, and internet freedom nationally
for the last 15 years. Mr. Aryal holds two LLM degrees in international human rights law, from Kathmandu School of Law and Central European University in Hungary. Additionally, he
completed degrees at Oxford University in the UK and Tokiwa University in Japan. Mr. Aryal has worked as a consultant and staff member with different national and international
organizations including FHI 360, International Center for Not-for-profit Law (ICNL), UNESCO, World Bank, ARTICLE 19,
United Nations Development Programme (UNDP), ISOC, and the United Nations Department of Economic and Social Affairs (UNDESA/DPADM). Mr. Aryal led a right to information campaign throughout the country for more than 4 years as the Executive
Director of Citizens’ Campaign for Right to
Information.
Greene: Can you introduce yourself? And can you tell me what kind of work your organization does on freedom of speech in particular?
I am Tanka Aryal, I’m from Nepal and I represent Digital Rights Nepal. Looking at my background of work, I have been working on freedom of expression for the last twenty years.
Digital Rights Nepal is a new organization that started during COVID when a number of issues came up particularly around freedom of expression online and the use of different social
media platforms expressing the ideas of every individual representing different classes, castes, and groups of society. The majority of work done by my organization is particularly
advocating for freedom of expression online as well as data privacy and protection. This is the domain we work in mainly, but in the process of talking about and advocating for
freedom of expression we also talk about access to information, online information integrity, misinformation, and disinformation.
Greene: What does free speech mean to you personally?
It’s a very heavy question! I know it’s not an absolute right—it has limitations. But I feel like if I am not doing any harm to other individuals or it’s not a mass security
type of thing, there should not be interference from the government, platforms, or big companies. At the same time, there are a number of direct and indirect undue influences from the
political wings or the Party who is running the government, which I don’t like. No interference in my thoughts and expression—that is fundamental for me with freedom of
expression.
Greene: Do you consider yourself to be passionate about freedom of expression?
Oh yes. What I’ve realized is, if I consider the human life, existence starts once you start expressing yourself and dealing and communicating with others. So this is the very
fundamental freedom for every human being. If this part of rights is taken away then your life, my life, as a human is totally incomplete. That’s why I’m so passionate about this
right. Because this right has created a foundation for other rights as well. For example, if I speak out and demand my right to education or the right to food, if my right to speak
freely is not protected, then those other rights are also at risk.
Greene: Do you have a personal experience that shaped how you feel about freedom of expression?
Yes. I don’t mean this in a legal sense, but my personal understanding is that if you are participating in any forum, unless you express your ideas and thoughts, then you are
hardly counted. This is the issue of existence and making yourself exist in society and in community. What I realized was that when you express your ideas with the people and the
community, then the response is better and sometimes you get to engage further in the process. If I would like to express myself, if there are no barriers, then I feel comfortable. In
a number of cases in my life and journey dealing with the government and media and different political groups, if I see some sort of barriers or external factors that limit me
speaking, then that really hampers me. I realize that that really matters.
Greene: In your opinion, what is the state of freedom of expression in Nepal right now?
It’s really difficult. It’s not one of those absolute types of things. There are some indicators of where we stand. For instance, where we stand on the Corruption Index, where we stand on the Freedom of Expression Index. If I compare the state of freedom of expression in Nepal, it’s definitely
better than the surrounding countries like India, Bangladesh, Pakistan, and China. But, learning from these countries, my government is trying to be more restrictive. Some laws and
policies have been introduced that limit freedom of expression online. For instance, TikTok is banned by the government. We have considerably good conditions, but still there is room
to improve in a way that you can have better protections for expression.
Greene: What was the government’s thinking with banning TikTok?
There are a number of interpretations. Before banning TikTok the government was seen as pro-China. Once the government banned TikTok—India had already banned it—that decision
supported a narrative that the government is leaning to India rather than China. You know, this sort of geopolitical interpretation. A number of other issues were there, too.
Platforms were not taking measures even for some issues that shouldn’t have come through the platforms. So the government took the blanket approach in a way to try to promote social
harmony and decency and morality. Some of the content published on TikTok was not acceptable, in my opinion, as a consumer myself. But the course of correction could have been
different, maybe regulation or other things. But the government took the shortcut way by banning TikTok, eliminating the problem.
Greene: So a combination of geopolitics and that they didn’t like what people were watching on TikTok?
Actually there are a number of narratives told by the different blocks of people, people with different ideas and the different political wings. It was said that the
government—the Maoist leader is the prime minister—considers the very rural people as their vote bank. The government sees them as less literate, brain-washed types of people. “Okay,
this is my vote bank, no one can sort of toss it.” Then once TikTok became popular the TikTok users were the very rural people, women, marginalized people. So they started using
TikTok to ask questions of the government and things like that. It was said that the Maoist party was not happy with that. “Okay, now our vote bank is going out of our hands so we better
block TikTok and keep them in our control.” So that is the narrative that was also discussed.
Greene: It’s similar in the US, we’re dealing with this right now. Similarly, I think it’s a combination of the geopolitics just with a lot of anti-China sentiment in the US as
well as a concern around, “We don’t like what the kids are doing on TikTok and China is going to use it to serve political propaganda and brainwash US users.”
In the case of the US and India, TikTok was banned for national security. But in our case, the government never said, “Okay, TikTok is banned for our national security.” Rather,
they were focusing on content that the government wasn’t happy with.
Greene: Right, and let me credit Nepal there for their candor, though I don’t like the decision. Because I personally don’t think the United States government’s national security
excuse is very convincing either. But what types of speech or categories of content or topics are really targeted by regulators right now for restriction?
To be honest, the elected leaders, maybe the President, the Prime Minister, the powerholders don’t like the questions being posed to them. That is a general thing. Maybe the
Mayor, maybe the Prime Minister, maybe a Minister, maybe a Chief Minister of one province—the powerholders don’t like being questioned. That is one type of speech made by the
people—asking questions, asking for accountability. So that is one set of targets. Similarly, some speech that’s for the protection of the rights of the individual in many cases—like
hate speech against Dalit, and women, and the LGBTQIA community—so any sort of speech or comments, any type of content, related to this domain is an issue. People don’t have the
capacity to listen to even very minor critical things. If anybody says, “Hey, Tanka, you have these things I would like to be changed from your behavior.” People can say these things
to me. As a public position holder I should have that ability to listen and respond accordingly. But politicians say, “I don’t want to listen to any sort of criticism or critical
thoughts about me.” Particularly the political nature of the speech which seeks accountability and raises transparency issues, that is mostly targeted.
Greene: You said earlier that as long as your speech doesn’t harm someone there shouldn’t be interference. Are there certain harms that are caused by speech that you think are more
serious or that really justify regulation or laws restricting them?
It’s a very tricky one. Even if regulation is justified, a blanket ban by one official should still have to go through judicial scrutiny. We tend to not have adequate laws.
There are a number of gray areas. Those gray areas have been manipulated and misused by the government. In many cases, misused by, for example, the police. What I understood is that
our judiciary is sometimes very sensible and very sensitive about freedom of expression. However, in many cases, if the issue is related to the judiciary itself they are very
conservative. Two days back I read in a newspaper that there was a sting operation around one judge engaging [in corruption] with a business. And some of the things came into the
media. And the judiciary was so reactive! It was not blamed on the whole judiciary, but the judiciary asked online media to remove that content. There were a number of discussions.
Like without further investigation or checking the facts, how can the judiciary give that order to remove that content? Okay, one official thought that this is wrong content, and if
the judiciary has the power to take it down, that’s not right and that can be misused any time. I mean, the judiciary is really good if the issues are related to other parties, but if
the issue is related to the judiciary itself, the judiciary is conservative.
Greene: You mentioned gray areas and you mentioned some types of hate speech. Is that a gray area in Nepal?
Yeah, actually, we don’t have that much confidence in law. What we have is the
Electronic Transactions Act. Section 47 says that content online cannot be published if the content harms others, and so on. It’s very abstract. So that law
can be misused if the government really wanted to drag you into some sort of very difficult position.
We have been working toward and have provided input on a new law that’s more comprehensive, that would define things in proper ways that have less of a chance of being misused
by the police. But it could not move ahead. The bill was drafted in the past parliament. It took lots of time, we provided input, and then after five years it could not move ahead.
Then parliament dissolved and the whole thing became null. The government is not that consultative. Unlike how here we are talking [at NetMundial+10] with multistakeholder
participation—the government doesn’t bother. They don’t see an incentive for engaging civil society. Rather, they consider civil society troublemakers: let’s keep them
away and pass the law. That is the idea they are practicing. We don’t have very clear laws, and because we don’t have clear laws some people really violate fundamental principles. Say
someone was attacking my privacy or I was facing defamation issues. The police are very shorthanded; they can’t arrest that person even if they’re doing something really bad. In the
meantime, the police, if they have a good political nexus and they just want to drag somebody, they can misuse it.
Greene: How do you feel about private corporations being gatekeepers of speech?
It’s very difficult. Even during election time the Election Commission issued an Election Order of Conduct; you could see how foolish they are. They were giving the mandate to
the ISPs that, “If there is a violation of this Order of Conduct, you can take it down.” That sort of blanket power given to them can be misused any time. So if you talk about our
case, we don’t have that many giant corporations, of course Meta and all the major companies are there. Particularly the government has given certain mandates to ISPs, and in
many cases even the National Press Council has been making demands of the ISP Association and the
Nepal Telecommunications Authority (NTA), which regulates all ISPs. Without the Press Council having a
very clear mandate, and without the NTA having a clear mandate, they are exercising power to instruct the ISPs, “Hey, take this down. Hey, don’t publish this.” So
that’s the sort of mechanism and the practice out there.
Greene: You said that Digital Rights Nepal was founded during the pandemic. What was the impetus for starting the organization?
We were totally trapped at home, working from home, studying from home, everything from home. I had worked for a nonprofit organization in the past, advocating for freedom of
expression and more, and when we were at home during COVID a number of issues came out about online platforms. Some people were able to exercise their rights because they have access
to the internet, but some people didn’t have access to the internet and were unable to exercise freedom of expression. So we recognized there are a number of issues and there is a big
digital divide. There are a number of regulatory gray areas in this sector. Looking at the number of kids who were compelled to do online school, their data protection and privacy was
another issue. We were engaging with these e-commerce platforms to buy things and there aren’t proper regulations. So we thought: there are a number of issues and nobody is working on
them, so let’s form this initiative. It didn’t come all of a sudden, but our working background was there and that situation really made us realize that we needed to focus our work on
these issues.
Greene: Okay, our final question. Who is your free speech hero?
It depends. In my context, in Nepal, there are a couple of people who don’t hesitate to express their ideas even if they are controversial. There’s also Voltaire’s saying, “I defend your freedom of expression even if I don’t like the content.” He could be one of my free
speech heroes. Because sometimes people are hypocrites. They say, “I try to advocate freedom of expression if it applies to you and the government and others, but if any issues come
to harm me I don’t believe in the same principle.” Then people don’t defend freedom of expression. I have seen a number of people showing their hypocrisy once the time came where the
speech is against them. But for me, like Voltaire says, even if I don’t like your speech I’ll defend it until the end because I believe in the idea of freedom of
expression.
Creators of This Police Location Tracking Tool Aren't Vetting Buyers. Here's How To Protect Yourself
(Sat, 09 Nov 2024)
404 Media, along with Haaretz, Notus, and Krebs On Security, recently reported on a company that
captures smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices’ (and, by proxy, individuals’) locations. The dangers that
this tool presents are especially grave for those traveling to or from out-of-state reproductive health
clinics, places of worship, and the border.
The tool, called Locate X, is run by a company called Babel Street. Locate X is designed for law enforcement, but an investigator working with Atlas Privacy, a data removal service, was able to gain access to Locate X by simply asserting that they planned to work with law
enforcement in the future.
With an incoming administration adversarial to those most at risk from location tracking using tools like Locate X, the time is ripe to bolster our digital defenses. Now more
than ever, attorneys general in states hostile to reproductive choice will be emboldened to use every tool at their disposal to incriminate those exercising their bodily autonomy.
Locate X is a powerful tool they can use to do this. So here are some timely tips to help protect your location privacy.
First, a short disclaimer: these tips provide some level of protection against mobile device-based tracking. This is not an exhaustive list of techniques, devices, or technologies
that can help restore one’s location privacy. Your security plan should reflect how
specifically targeted you are for surveillance. Additional steps, such as researching and
mitigating the on-board devices included with your car, or sweeping for physical GPS
trackers, may be prudent steps which are outside the scope of this post. Likewise, more advanced techniques such as flashing your device with a custom-built
privacy- or security-focused operating system may provide
additional protections which are not covered here. The intent is to give some basic tips for protecting yourself from mobile device location tracking services.
Disable Mobile Advertising Identifiers
Services like Locate X are built atop an online advertising ecosystem that incentivizes collecting troves of information from your device and delivering it to platforms to
micro-target you with ads based on your online behavior. One linchpin of this ecosystem is the unique identifier, most notably the mobile advertising identifier (MAID), which links
information (in this case, location) delivered to an app or website at one point in time with information delivered to a different app or website at the next point in time.
Essentially, MAIDs allow advertising platforms and the data brokers they sell to to “connect the dots” between an otherwise disconnected scatterplot of points on a map, resulting in a
cohesive picture of the movement of a device through space and time.
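To make that “connect the dots” step concrete, here is a minimal sketch in Python of the kind of join a data broker could perform. The data, field names, and structure here are invented for illustration; real brokers do this at a vastly larger scale:

    from collections import defaultdict

    # Each ping is (maid, unix_timestamp, latitude, longitude, source_app).
    # The same MAID appears in data sold by otherwise unrelated apps.
    pings = [
        ("maid-42", 1699920000, 40.7128, -74.0060, "weather_app"),
        ("maid-77", 1699921800, 34.0522, -118.2437, "flashlight_app"),
        ("maid-42", 1699923600, 40.7306, -73.9866, "game_app"),
        ("maid-42", 1699927200, 40.7484, -73.9857, "news_app"),
    ]

    # Group the scattered points by advertising identifier...
    trails = defaultdict(list)
    for maid, ts, lat, lon, app in pings:
        trails[maid].append((ts, lat, lon, app))

    # ...then sort each group by time: each MAID now yields a chronological
    # movement trail, even though no single app saw the whole picture.
    for maid, trail in trails.items():
        print(maid, sorted(trail))

Deleting or resetting the MAID removes the join key that makes this grouping trivial, which is exactly what the steps below accomplish.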
As a result of significant pushback by privacy advocates, both Android and iOS now provide ways to stop advertising identifiers from being delivered to third parties. As
we described in a recent post, you can do this on
Android following these steps:
With the release of Android 12, Google began allowing users to delete
their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy >
Privacy > Ads. Tap “Delete advertising ID,” then tap it again on the next page to confirm. This will prevent any app on your phone from
accessing it in the future.
The Android opt out should be available to most users on Android 12, but may not be available on older versions. If you don’t see an option to “delete” your ad ID, you can
use the older version of Android’s privacy controls to reset it and ask apps not to track you.
And on iOS:
Apple requires apps to ask permission
before they can access your IDFA. When you install a new app, it may ask you for permission to track you.
Select “Ask App Not to Track” to deny it IDFA access.
To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking.
In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your
IDFA.
You can set the “Allow apps to Request to Track” switch to the “off” position (the slider is to the left and the background is gray).
This will prevent apps from asking to track in the future. If you have granted apps permission to track you in the past, this will prompt you to ask those apps to stop tracking as
well. You also have the option to grant or revoke tracking access on a per-app basis.
Apple has its own targeted advertising system, separate from the third-party tracking it enables with IDFA. To disable it, navigate to Settings > Privacy
> Apple Advertising and set the “Personalized Ads” switch to the “off” position to disable Apple’s ad targeting.
Audit Your Apps’ Trackers and Permissions
In general, the more apps you have, the more intractable your digital footprint becomes. A separate app you’ve downloaded for flashlight functionality may also come pre-packaged
with trackers delivering your sensitive details to third parties. That’s why it’s advisable to limit the number of apps you download and instead use your pre-existing apps or
operating system to, say, find the bathroom light switch at night. It isn't just good for your privacy: any new app you download also increases your “attack surface,” or the possible
paths hackers might have to compromise your device.
We get it though. Some apps you just can’t live without. For these, you can at least audit what trackers the app communicates with and what permissions it asks for. Both Android
and iOS have a page in their Settings apps where you can review permissions you've granted apps. Not all of these are only “on” or “off.” Some, like photos, location, and contacts,
offer more nuanced permissions. It’s worth going through each of these to make sure you still want that app to have that permission. If not, revoke or dial back the permission. To get
to these pages:
On Android: Open Settings > Privacy & Security > Privacy Controls > Permission Manager.
On iPhone: Open Settings > Privacy & Security.
If you're inclined to do so, there are tricks for further research. For example, you can look up trackers in Android apps using an excellent service called Exodus Privacy. As of iOS 15, you can check on the device itself by turning on the system-level app privacy report in
Settings > Privacy > App Privacy Report. From that point on, browsing to that menu will allow you to see exactly what permissions an app uses, how often it
uses them, and what domains it communicates with. You can investigate any given domain by just pasting it into a search engine and seeing what’s been reported on it. Pro tip: to
exclude results from that domain itself and only include what other domains say about it, many search engines like Google allow you to use the syntax -site:www.example.com.
Disable Real-Time Tracking with Airplane Mode
To prevent an app from having network connectivity and sending out your location in real-time, you can put your phone into airplane mode. Although it won’t prevent an app from
storing your location and delivering it to a tracker sometime later, most apps (even those filled with trackers) won’t bother with this extra complication. It is important to keep in
mind that this will also prevent you from reaching out to friends and using most apps and services that you depend on. Because of these trade-offs, you likely will not want to keep
Airplane Mode enabled all the time, but it may be useful when you are traveling to a particularly sensitive location.
Some apps are designed to allow you to navigate even in airplane mode. Tapping your profile picture in Google Maps will drop down a menu with Offline maps.
Tapping this will allow you to draw a boundary box and pre-download an entire region, which you can do even without connectivity. As of iOS 18, you can do this on Apple Maps too: tap
your profile picture, then “Offline Maps,” and “Download New Map.”
Other apps, such as Organic Maps, allow you to download large maps in advance. Since GPS itself determines your
location passively (no transmissions need be sent, only received), connectivity is not needed for your device to determine its location and keep it updated on a map stored
locally.
Keep in mind that you don’t need to be in airplane mode the entire time you’re navigating to a sensitive site. One strategy is to navigate to some place
near your sensitive endpoint, then switch airplane mode on, and use offline maps for the last leg of the journey.
Separate Devices for Separate Purposes
Finally, you may want to bring a separate, clean device with you when you’re traveling to a sensitive location. We know this isn’t an option available to everyone. Not everyone
can afford to purchase a separate device just for those times they may have heightened privacy concerns. If possible, though, this can provide some level of protection.
A separate device doesn’t necessarily mean a separate data plan: navigating offline as described in the previous step may bring you to a place where you know Wi-Fi is available. It
also means any persistent identifiers (such as the MAID described above) are different for this device, along with different device characteristics which won’t be tied to your normal
personal smartphone. Going through this phone and keeping its apps, permissions, and browsing to an absolute minimum will avoid an instance where that random sketchy game you have on
your normal device to kill time sends your location to its servers every 10 seconds.
One good (though more onerous) practice that would remove any persistent identifiers like long-lasting cookies or MAIDs is resetting your purpose-specific smartphone to factory
settings after each visit to a sensitive location. Just remember to re-download your offline maps and increase your privacy settings afterwards.
Further Reading
Our own Surveillance Self-Defense site, as well as many other resources, is available to provide more guidance in protecting your digital privacy. Often, general privacy tips
are applicable in protecting your location data from being divulged, as well.
The underlying situation that makes invasive tools like Locate X possible is the online advertising industry, which incentivizes a massive siphoning of user data to micro-target
audiences. Earlier this year, the FTC showed some appetite to pursue enforcement
action against companies brokering the mobile location data of users. We applauded this enforcement, and hope it will continue into the next administration. But regulatory authorities
only have the statutory mandate and ability to punish the worst examples of abuse of consumer data. A piecemeal solution is limited in its ability to protect citizens from the vast
array of data brokers and advertising services profiting off of surveilling us all.
Only a federal privacy law with a strong private right of
action which allows ordinary people to sue companies that broker their sensitive data, and which does not preempt states from enacting even stronger privacy
protections for their own citizens, will have enough teeth to start to rein in the data broker industry. In the meantime, consumers are left to their own devices (pun not intended) in
order to protect their most sensitive data, such as location. It’s up to us to protect ourselves, so let’s make it happen!
Celebrating the Life of Aaron Swartz: Aaron Swartz Day 2024
(Sat, 09 Nov 2024)
Aaron Swartz was a digital rights champion who believed deeply in keeping the internet open. His life was cut
short in 2013, after federal prosecutors charged him under the Computer Fraud and Abuse Act (CFAA) for systematically downloading
academic journal articles from the online database JSTOR. Facing the prospect of a long and unjust sentence, Aaron died by suicide at the age of 26. EFF was proud to call Aaron a
friend and ally.
Today, November 8, would have been his 38th birthday. On November 9, the organizers of Aaron Swartz
Day are celebrating his life with a guest-packed podcast featuring those carrying on the work around
issues close to his heart. Hosts Lisa Rein and Andre Vinicus Leal Sobral will speak to:
Ryan Shapiro, co-founder of the national security transparency non-profit Property of the People
Nathan Dyer of SecureDrop, Newsroom Support Engineer for the Freedom of the Press Foundation
Tracey Jaquith, Founding Coder and TV Architect at the Internet Archive
Tracy Rosenberg, co-founder of the Aaron Swartz Day Police Surveillance Project and Oakland Privacy
Brewster Kahle, founder of the Internet Archive
Ryan Sternlicht, VR developer, educator, researcher, advisor, and maker
Grant Smith Ellis, Chairperson of the Board, MassCann and Legal Intern at the Parabola Center
Michael “Mek” Karpeles, Open Library, Internet Archive
The podcast will start at 2 p.m. PT/10 p.m. UTC. Please read the official page of the Aaron Swartz Day and
International Hackathon for full details.
If you're a programmer or developer engaged in cutting-edge exploration of technology, please check out EFF's Coders' Rights Project.
EFF to Second Circuit: Electronic Device Searches at the Border Require a Warrant
(Sat, 09 Nov 2024)
EFF, along with ACLU and the New York Civil Liberties Union, filed an amicus brief in the U.S. Court of
Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument
EFF has been making in the courts and Congress for nearly a decade.
The case, U.S. v. Kamaldoss, involves the criminal prosecution of a man whose cell phone and
laptop were forensically searched after he landed at JFK airport in New York City. While a manual search involves a border officer tapping or mousing around a device, a forensic
search involves connecting another device to the traveler’s device and using software to extract and analyze
the data to create a detailed report of the device owner’s activities and communications. In part based on evidence obtained during the forensic device searches, Mr. Kamaldoss was
subsequently charged with prescription drug trafficking.
The district court upheld the forensic searches of his devices because the government had reasonable suspicion that the defendant “was engaged in efforts to illegally import scheduled
drugs from abroad, an offense directly tied to at least one of the historic rationales for the border exception—the disruption of efforts to import contraband.”
The number of warrantless device searches at the border is only increasing, as is the significant invasion of privacy they represent. In Fiscal Year 2023, U.S. Customs and Border
Protection (CBP) conducted 41,767
device searches.
The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless
“routine” searches of luggage, vehicles, and other items crossing the border.
The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as
drugs, weapons, and other prohibited items, thereby blocking their entry into the country.
In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California
(2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest
against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant
exception does not apply, and that police need to get a warrant to search an arrestee’s phone.
Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data
points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial
status, health conditions, and family and professional associations.
In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose
of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered
access to travelers’ electronic devices.
First, physical contraband (like drugs) can’t be found in digital data. Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a
warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are
already in the country on servers accessible via the internet.
Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for
general law enforcement (i.e., investigating non-border-related domestic crimes) are too “untethered” from the original purpose of the border search exception, which is to find
prohibited items themselves and not evidence to support a criminal prosecution.
If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be
justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s
rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at
the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held
that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form
of photos or files).
In our brief, we also highlighted three other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Smith (2023), which
we wrote about last year; U.S. v. Sultanov (2024); and U.S. v. Fox (2024). We plan to file briefs in their appeals, as well, in the hope that the Second
Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.
EFF to Court: Reject X’s Effort to Revive a Speech-Chilling Lawsuit Against a Nonprofit
(Fri, 08 Nov 2024)
This post was co-written by EFF legal intern Gowri Nayar.
X’s lawsuit against the nonprofit Center for Countering Digital Hate is intended to stifle criticism and punish the organization for its reports criticizing the platform’s content
moderation practices, and a previous ruling dismissing the lawsuit should be affirmed, EFF and multiple organizations argued in a brief filed this fall.
X sued the Center for Countering Digital Hate (“CCDH”) in federal court in 2023 in response to its reports, which concluded that X’s practices have facilitated an environment of hate
speech and misinformation online. Although X’s suit alleges, among other things, breach of contract and violation of the Computer Fraud and Abuse Act, the case is really about X
trying to hold CCDH liable for the public controversy surrounding its moderation practices. At bottom, X is claiming that CCDH damaged the platform by critically reporting on it.
CCDH sought to throw out the case on the merits and under California’s anti-SLAPP statute. The California law allows for the early dismissal of lawsuits filed in retaliation for
someone exercising their free speech rights; such suits are known as Strategic Lawsuits Against Public Participation, or SLAPPs. In March, the district court ruled in favor of CCDH, dismissed the
case, and found that the lawsuit was a SLAPP.
As the district judge noted, X’s suit “is about punishing the Defendants for their speech.” The court was correct to reject X’s contract and CFAA theories, seeing them for what they
were: grievances with CCDH’s criticisms masquerading as legal claims.
X appealed the ruling to the U.S. Court of Appeals for the Ninth Circuit earlier this year. In September, EFF, along with the ACLU, ACLU of Northern California, and the Knight First
Amendment Institute at Columbia University, filed an amicus brief in support of CCDH.
The amicus brief argues that the Ninth Circuit should not allow X to make use of state contract law and a federal anti-hacking statute to stifle CCDH’s speech. Through this lawsuit, X
wants to punish CCDH for publishing reports that highlighted how X’s policies and practices are allowing misinformation and hate speech to thrive on its platform. We also argue
against the enforcement of X’s anti-scraping provisions because of how vital scraping is to modern journalism and research.
Lastly, we called on the court to reject X’s interpretation of the CFAA because it relied on a legal theory that has already been rejected by courts—including the Ninth Circuit
itself—in earlier cases. Allowing the CFAA to be used to criminalize all instances of unauthorized access would run counter to prior decisions and would render illegal large
categories of activities such as sharing passwords with friends and family.
Ruling in favor of X in this lawsuit would set a very dangerous precedent for free speech rights and allow powerful platforms to exert undue control over information online. We hope
the Ninth Circuit affirms the lower court decision and dismisses this meritless lawsuit.
The 2024 U.S. Election is Over. EFF is Ready for What's Next.
(Wed, 06 Nov 2024)
The dust of the U.S. election is settling, and we want you to know that EFF is ready for whatever’s next. Our mission to ensure that technology serves you—rather than silencing,
tracking, or oppressing you—does not change. Some of what’s to come will be in uncharted territory. But we have been preparing for whatever this future brings for a long time. EFF is
at its best when the stakes are high.
No matter what, EFF will take every opportunity to stand with users. We’ll continue to advance our mission of user privacy, free expression, and innovation, regardless of the
obstacles. We will hit the ground running.
During the previous Trump administration, EFF didn’t just hold the line. We pushed digital rights forward in significant ways, both nationally and locally. We supported
those protesting in the streets, with expanded Surveillance Self-Defense guides and our Security Education Companion. The first offers information on how to protect yourself while you exercise your First Amendment rights, and the second gives tips on how to help your friends and colleagues stay safer.
Along with our allies, we fought government use of face surveillance, passing municipal bans on the dangerous technology. We urged the Supreme
Court to expand protections for your cell phone data, and in Carpenter v. United States, they did so—recognizing that location information collected by cell providers creates a “detailed
chronicle of a person’s physical presence compiled every day, every moment over years.” Now, police must get a warrant before obtaining a significant amount of this data.
EFF is at its best when the stakes are high.
But we also stood our ground when governments and companies tried to take away the hard-fought protections we’d won in previous years. We stopped government attempts to
backdoor private messaging with “ghost” and
“client-side scanning” measures that obscured their intentions to undermine end-to-end encryption. We defended Section 230, the common sense law that protects Americans’ freedom
of expression online by protecting the intermediaries we all rely on. And when the COVID pandemic hit, we carefully analyzed proposed measures and pushed back against those that went beyond what was necessary to keep people safe and healthy, invading our privacy and inhibiting our free speech.
Every time policymakers or private companies tried to undermine your rights online during the last Trump administration (2017-2021), we were there—just as we continued to be
under President Biden. In preparation for the next four years, here’s just some of the groundwork we’ve already laid:
Border Surveillance: For a decade we’ve been revealing how the hundreds of millions of
dollars pumped into surveillance technology along the border impact the privacy of those who live, work, or seek refuge there, and of thousands of others transiting through our
border communities each day. We’ve defended the rights of people whose devices have been searched or seized upon entering the country. We’ve mapped out the network of automated
license plate readers installed at checkpoints and land entry points, and the more than 465 surveillance towers along the U.S.-Mexico border. And we’ve advocated for sanctuary
data policies restricting how ICE can access criminal justice and surveillance data.
Surveillance Self-Defense: Protecting your private communications will only become more critical, so we’ve been expanding both the content
and the translations of our Surveillance Self-Defense guides. We’ve written clear guidance for staying secure that applies to everyone, but is particularly important for
journalists, protesters, activists, LGBTQ+ youths, and other vulnerable populations.
Reproductive Rights: Long before Roe v. Wade was overturned,
EFF was working to minimize the ways that law enforcement can obtain data from tech
companies and data brokers. After the Dobbs decision was handed down, we supported multiple laws in California that shield both reproductive and transgender health data
privacy, even for people outside of California. But there’s more to do, and we’re working closely with those involved in the reproductive justice movement to make more
progress.
Transition Memo: When the next administration takes over, we’ll be sending a lengthy, detailed policy analysis to the incoming administration on everything
from competition to AI to intellectual property to surveillance and privacy. We provided a similarly thoughtful set of recommendations on digital rights issues after the last
presidential election, helping to guide critical policy discussions.
We’ve prepared much more too. The road ahead will not be easy, and some of it is not yet mapped out, but one of the reasons EFF is so effective is that we play the long game. We’ll be
here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we work for the user.
We’re not merely fighting against individual companies or elected officials or even specific administrations. We are fighting
for you. That won’t stop no matter who’s in office.
DONATE TODAY
AI in Criminal Justice Is the Trend Attorneys Need to Know About
(Tue, 05 Nov 2024)
The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been
tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.
The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.
Face recognition, license plate readers, and gunshot detection systems all operate using forms of AI, and all enable broad, privacy-deteriorating surveillance that has led to wrongful arrests and jail time through false positives. Data streams from these tools—combined with public records, geolocation tracking, and other data from mobile phones—are being shared among policing agencies and used to build increasingly detailed law enforcement profiles of people, whether or not they’re under investigation. AI software is then used to draw black-box inferences and connections across these data streams. A growing number of police departments have
been eager to add AI to their arsenals, largely encouraged by extensive marketing by the companies
developing and selling this equipment and software.
“As AI facilitates mass privacy invasion and risks routinizing—or even legitimizing—inequalities and abuses, its influence on law enforcement responsibilities has
important implications for the application of the law, the protection of civil liberties and privacy rights, and the integrity of our criminal justice system,” EFF Investigative
Researcher Beryl Lipton wrote.
The ABA’s 2024 State of Criminal Justice publication is available from the ABA in book or PDF
format.
EFF Lawsuit Discloses Documents Detailing Government’s Social Media Surveillance of Immigrants
(Tue, 05 Nov 2024)
Despite rebranding a federal program that surveils the social media activities of immigrants and foreign visitors to a more benign name, the government agreed to spend more than $100
million to continue monitoring people’s online activities, records disclosed to EFF show.
Thousands of pages of government procurement records and related correspondence show that the Department of Homeland Security and its component Immigration and Customs Enforcement
largely continued an effort, originally called extreme vetting, to try to determine whether immigrants posed any threat by monitoring their social media and internet presence. The
only real change appeared to be rebranding the program to be known as the Visa Lifecycle Vetting Initiative.
The government disclosed the records to EFF after we
filed suit in 2022 to learn what had become of a program proposed by President Donald Trump. The program continued under President Joseph Biden. Regardless of the name used, DHS’s
program raises significant free expression and First Amendment concerns because it chills the speech of those seeking to enter the United States and allows officials to target and
punish them for expressing views they don’t like.
Yet that appears to be a major purpose of the program, the released documents show. For example, the terms of the contracting request specify that the government sought a system that
could:
analyze and apply techniques to exploit publicly available information, such as media, blogs, public hearings, conferences, academic websites, social media websites such as
Twitter, Facebook, and LinkedIn, radio, television, press, geospatial sources, internet sites, and specialized publications with intent to extract pertinent information regarding
individuals.
That document and another one
make explicit that one purpose of the surveillance and analysis is to identify “derogatory information” about visa applicants and other visitors. The vague phrase is broad enough to
potentially capture any online expression that is critical of the U.S. government or its actions.
EFF has called on DHS to abandon its online social
media surveillance program because it threatens to unfairly label individuals as a threat or otherwise discriminate against them on the basis of their speech. This could include
denying people access to the United States for speaking their mind online. It’s also why EFF has supported a legal challenge to a State Department practice requiring
people applying for a visa to register their social media accounts with the government.
The documents released in EFF’s lawsuit also include a telling passage about the
controversial program and the government’s efforts to sanitize it. In an email discussing the lawsuit against the State Department’s social media moniker collection program, an ICE
official describes the government’s need to rebrand the program, “from what ICE originally referred to as the Extreme Vetting Initiative.”
The official wrote:
On or around July 2017 at an industry day event, ICE sought input from the private sector on the use of artificial intelligence to assist in visa applicant vetting. In the months
that followed there was significant pushback from a variety of channels, including Congress. As a result, on or around May 2018, ICE modified its strategy and rebranded the concept
as the Visa Lifecycle Vetting Project.
Other documents detail the specifics of the contract
and bidding process that resulted in DHS awarding $101,155,431.20 to SRA International, Inc., a government contractor that now operates under a different name after merging with another contractor and is owned by General Dynamics.
The documents also detail an unsuccessful effort by a competitor to overturn DHS’s decision to award the contract to SRA,
though much of the content of that dispute is redacted.
All of the documents released to EFF are available on DocumentCloud.
Judge’s Investigation Into Patent Troll Results In Criminal Referrals
(Mon, 04 Nov 2024)
In 2022, three companies with strange names and no clear business purpose beyond patent litigation filed dozens of lawsuits in Delaware federal court, accusing businesses
of all sizes of patent infringement. Some of these complaints claimed patent rights over basic aspects of modern life; one, for example, involved a patent that pertains to the process of clocking in to work through an app.
These companies–named Mellaconic IP, Backertop Licensing, and Nimitz Technologies–seemed to be typical examples of “patent trolls,” companies whose primary business
is suing others over patents or demanding licensing fees rather than providing actual products or services.
However, the cases soon took an unusual turn. The Delaware federal judge overseeing the cases, U.S. District Judge Colm Connolly, sought more information about the patents and
their ownership. One of the alleged owners was a food-truck operator who had been promised “passive income,” but was entitled to only a small portion of any revenue generated from the
lawsuits. Another owner was the spouse of an attorney at IP Edge, the patent-assertion company linked to all three LLCs.
Following an extensive investigation, the judge determined that attorneys associated with these shell companies had violated legal ethics rules. He pointed out that the
attorneys may have misled Hau Bui, the food-truck owner, about his potential liability in the case. Judge Connolly wrote:
[T]he disparity in legal sophistication between Mr. Bui and the IP Edge and Mavexar actors who dealt with him underscore that counsel's failures to comply with the Model
Rules of Professional Conduct while representing Mr. Bui and his LLC in the Mellaconic cases are not merely technical or academic.
Judge Connolly also concluded that IP Edge, the patent-assertion company behind hundreds of patent lawsuits and linked to the three LLCs, was the “de facto owner” of the patents
asserted in his court, but that it attempted to hide its involvement. He wrote, “IP Edge, however, has gone to great lengths to hide the ‘we’ from the world,” with "we" referring to
IP Edge. Connolly further noted, “IP Edge arranged for the patents to be assigned to LLCs it formed under the names of relatively unsophisticated individuals recruited by [IP Edge
office manager] Linh Deitz.”
The judge referred three IP Edge
attorneys to the Supreme Court of
Texas’ Unauthorized Practice of Law Committee for engaging in “unauthorized practices of law in Texas.” Judge Connolly also sent a letter to the
Department of Justice, suggesting an investigation into “individuals associated with IP Edge LLC and its affiliate Mavexar LLC.”
Patent Trolls Tried To Shut Down This Investigation
The attorneys involved in this wild patent trolling scheme challenged Judge Connolly’s authority to proceed with his investigation. However, because transparency in federal
courts is essential and applicable to all parties, including patent assertion entities, EFF and two other patent reform groups filed a brief in support of the
judge’s investigation. The brief argued that “[t]he public has a right—and need—to know who is controlling and benefiting from litigation in publicly-funded courts.”
Companies targeted by the patent trolls, as well as the Chamber of Commerce, filed their own briefs supporting the investigation.
The appeals court sided with us, upholding Judge Connolly’s authority to
proceed, which led to the referral of the involved attorneys to the disciplinary counsel of their respective bar associations.
After this damning ruling, one of the patent troll companies and its alleged owner made a final effort at appealing this outcome. In July of this year, the U.S. Court of Appeals
for the Federal Circuit ruled that investigating Backertop
Licensing LLC and ordering its alleged owner to testify was “an appropriate means to investigate potential misconduct involving Backertop.”
In EFF’s view, these types of investigations into the murky world of patent trolling are not only appropriate but should happen more often. Now that the appeals court has ruled,
let’s take a look at what we learned about the patent trolls in this case.
Patent Troll Entities Linked To French Government
One of the patent trolling entities, Nimitz Technologies LLC, asserted a single patent, U.S. Patent No. 7,848,328, against 11 companies. When the judge required Nimitz’s
supposed owner, a man named Mark Hall, to testify in court, Hall could not describe anything about the patent or explain how Nimitz acquired it. He didn’t even know the name of the
patent (“Broadcast Content Encapsulation”). When asked what technology was covered by the patent, he said, “I haven’t reviewed it enough to know,” and when asked how he paid for the
patent, Hall replied, “no money exchanged hands.”
The exchange between Hall and Judge Connolly went as follows:
Q. So how do you come to own something if you never paid for it with money?
A. I wouldn't be able to explain it very well. That would be a better question for Mavexar.
Q. Well, you're the owner?
A. Correct.
Q. How do you know you're the owner if you didn't pay anything for the patent?
A. Because I have the paperwork that says I'm the owner.
(Nov. 27, 2023 Opinion, pages 8-9.)
The Nimitz patent originated from the Finnish cell phone company Nokia, which later assigned it and several other patents to France Brevets, a French sovereign investment
fund, in 2013. France Brevets, in turn, assigned the patent to a US company called Burley Licensing LLC, an entity linked to IP Edge, in 2021. Hau Bui (the food truck owner)
signed on behalf of Burley, and Didier Patry, then the
CEO of France Brevets, signed on behalf of the French fund.
France Brevets was an investment fund
formed in 2009 with €100 million in seed money from the French government to manage intellectual property. France Brevets was set to receive 35% of any revenue
related to “monetizing and enforcement” of the patent, with Burley agreeing to file at least one patent infringement lawsuit within a year, and collect a “total minimum Gross Revenue
of US $100,000” within 24 months, or the patent rights would be given back to France Brevets.
Burley Licensing LLC, run by IP Edge personnel, then created Nimitz Technologies LLC—a company with no assets except for the single patent. They obtained a mailing address for
it from a Staples in Frisco, Texas, and assigned the patent to the LLC in August 2021, while the obligations to France Brevets remained unchanged until the fund shut down in 2022.
The Bigger Picture
It’s troubling that patent lawsuits are often funded by entities with no genuine interest in innovation, such as private equity firms. However, it’s even more concerning when
foreign government-backed organizations like France Brevets manipulate the US patent system for profit. In this case, a Finnish company sold its patents to a French government fund,
which used US-based IP lawyers to file baseless lawsuits against American companies, including well-known establishments like Reddit and Bloomberg, as well as smaller ones like Tastemade and Skillshare.
Judges should enforce rules requiring transparency about third-party funding in patent lawsuits. When ownership is unclear, it’s appropriate to insist that the real owners show
up and testify—before dragging dozens of companies into court over dubious software patents.
Related documents:
Memorandum and Order referring counsel to disciplinary bodies (Nov.
23, 2023)
Federal Circuit Opinion affirming the order requiring Lori LaPray
to appear “for testimony regarding potential fraud on the court,” as well as the District Court’s order of monetary sanction against Ms. LaPray for subsequently failing to
appear
The Human Toll of ALPR Errors
(Sat, 02 Nov 2024)
This post was written by Gowri Nayar, an EFF legal intern.
Imagine driving to get your nails done with your family and all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car
and detained at gun point. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before
eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.
And the error? The police officers who pulled them over were relying
on information generated by automated license plate readers (ALPRs). These are high-speed,
computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of
vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with
Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
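To make the failure mode concrete, here is a minimal sketch in Python of what matching on plate characters alone looks like; the hot list and plate are invented, and this is not any vendor’s actual code, but it shows why a bare character match needs a human double-check before anyone is pulled over.

    # Hypothetical hot list: plate characters -> stolen-vehicle record.
    HOT_LIST = {
        "ABC1234": {"state": "MT", "vehicle": "motorcycle"},
    }

    def check_plate(read_plate, read_state):
        record = HOT_LIST.get(read_plate)  # keyed on the characters alone
        if record is None:
            return "no match"
        if record["state"] != read_state:
            # The Gilliam stop fits this branch: a Montana motorcycle's
            # plate "matched" a Colorado SUV's. The alert should trigger
            # verification, not a felony stop.
            return "characters match, but state differs: verify first"
        return "match: confirm vehicle description before acting"

    # A single misread character (a '3' for a '7', an 'H' for an 'M')
    # can likewise turn an innocent plate into a "hit."
    print(check_plate("ABC1234", "CO"))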
Likewise, Denise Green had a frightening encounter with San Francisco police officers late one
night in March of 2009. She had just dropped her sister off at a BART train station, when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle.
Multiple officers ordered her to exit her vehicle, at gun point, and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized
they had made an error and let her go.
Turns out that the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before
acting on it.
In both of these dangerous episodes, the motorists were Black. ALPR technology can exacerbate our already discriminatory policing system, among other reasons because too many
police officers react recklessly to information provided by these readers.
Wrongful detentions like these happen all over the country. In Atherton,
California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gun point to lie on his stomach to be
handcuffed, only to later realize that their license plate reader had misread an ‘H’ for an ‘M’. In Espanola, New
Mexico, law enforcement officials detained Jaclynn Gonzales at gunpoint and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had
mistaken a ‘2’ for a ‘7’ on their license plates. One study found that ALPRs misread the state of 1-in-10 plates (not counting
other reading errors).
Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on
Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the
“hot list” of stolen vehicles for officers to recover.
Police over-reliance on ALPR systems is also a problem. Detroit police knew that the
vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around the time. One such car,
observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house and handcuffed her, placed her 2-year-old son in the back of their
patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used for the shooting had a missing fog
light.
Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.
Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the
city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.
While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the
physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment
of suspicion and fear.
Since 2012, EFF has been resisting the safety, privacy, and other threats of ALPR technology through public records requests, litigation, and legislative advocacy. You can learn
more at our Street-Level Surveillance site.
"Is My Phone Listening To Me?"
(Thu, 31 Oct 2024)
The short answer is no, probably not! But, with EFF’s new site, Digital Rights Bytes, we go in-depth on
this question—and many others.
Whether you’re just starting to question some of the effects of technology in your life or you’re the designated tech wizard of your family looking for
resources to share, Digital Rights Bytes is here to help answer some common questions that may be bugging you about the devices you use.
We often hear the question, “Is my phone listening to me?” Generally,
the answer is no, but the reason you may think that your phone is listening to you is actually quite complicated. Data brokers and advertisers have some sneaky tactics at their
disposal to serve you ads that feel creepy in the moment and may make you think that your device is secretly taking notes on everything you say.
Watch the short video—featuring a cute little penguin discovering how advertisers
collect and track their personal data—and share it with your family and friends who have asked similar questions!
Curious to learn more? We also have information about how to mitigate this tracking and what EFF is doing to stop these data brokers from collecting your
information.
Digital Rights Bytes also has answers to other common questions about device repair, ownership of your digital media, and more. Got any additional questions
you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes so we can find it!
EFF Launches Digital Rights Bytes to Answer Tech Questions that Bug Us All
(Thu, 31 Oct 2024)
New Site Dishes Up Byte-Sized, Yummy, Nutritious Videos and Other Information About Your Online Life
SAN FRANCISCO—The Electronic Frontier Foundation today launched “Digital Rights Bytes,” a new website with short videos offering quick, easily digestible answers to the technology questions that trouble us
all.
“It’s increasingly clear there is no way to separate our digital lives from everything else that we do — the internet is now everybody's hometown. But
nobody handed us a map or explained how to navigate safely,” EFF Executive Director Cindy Cohn said. “We hope Digital Rights Bytes will provide easy-to-understand information people
can trust, and an entry point for thinking more broadly about digital privacy, freedom of expression, and other civil liberties in our digital world.”
Initial topics on Digital Rights Bytes include “Is my phone listening to me?”, “Why is device repair so costly?”, “Can the government read my text
messages?” and others. More topics will be added over time.
For each topic, the site provides a brief animated video and a concise, layperson’s explanation of how the technology works. It also provides advice and
resources for what users can do to protect themselves and take action on important issues.
EFF is the leading nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation
through impact litigation, policy analysis, grassroots activism, and technology development. Its mission is to ensure that technology supports freedom, justice and innovation for all
people of the world.
For the Digital Rights Bytes website: https://www.digitalrightsbytes.org/
Contact: Jason Kelley, Activism Director, jason@eff.org
Sorry, Gas Companies - Parody Isn't Infringement (Even If It Creeps You Out)
(Wed, 30 Oct 2024)
Activism comes in many forms. You might hold a
rally, write to Congress, or
fly a blimp
over the NSA. Or you might use a darkly hilarious parody to make your point, like our client Modest Proposals recently
did.
Modest Proposals is an activist collective that uses parody and culture jamming to advance environmental justice and other social causes. As part of
a campaign shining a spotlight on the
environmental damage and human toll caused by the liquefied natural gas (LNG) industry, Modest Proposals invented a company called Repaer. The fake company’s website offers energy companies the opportunity to purchase “life offsets” that balance the
human deaths their activities cause by extending the lives of individuals deemed economically valuable. The website also advertises a “Plasma Pals” program that encourages parents to
donate their child’s plasma to wealthy recipients. Scroll down on the homepage a bit, and you’ll see the logos for three (real) LNG companies—Repaer’s “Featured
Partners.”
Believe it or not, the companies didn’t like this. (Shocking!) Two of them—TotalEnergies and Equinor—sent our client stern emails threatening legal action if
their names and logos weren’t removed from the website. TotalEnergies also sent a demand to the website’s hosting service, Netlify, that got repaer.earth taken offline. That was our
cue to get involved.
We sent letters to both companies, explaining what should be
obvious: the website was a noncommercial work of activism, unlikely to confuse any reasonable viewer. Trademark law is about protecting consumers; it’s not a tool for businesses to
shut down criticism. We also sent a counternotice to Netlify denying TotalEnergies’ allegations and demanding that repaer.earth be restored.
We wish this were the first time we’ve had to send letters like that, but EFF regularly helps activists and critics push back on bogus trademark and copyright claims. This incident is also part of a broader and long-standing pattern of the energy industry weaponizing the law to quash dissent by environmental activists. These are just examples EFF has written about. We’ve been
fighting these tactics for a long time, both by representing individual activist groups and through supporting legislative efforts like a federal anti-SLAPP bill.
Frustratingly, Netlify made us go through the full DMCA
counternotice process—including a 10-business-day waiting period to have the site restored—even though this was never a DMCA claim. (The DMCA is copyright
law, not trademark, and TotalEnergies didn’t even meet the notice requirements that Netlify claims to follow.) Rather than wait around for Netlify to act, Modest Proposals eventually
moved the website to a different hosting service.
Equinor and TotalEnergies, on the other hand, have remained silent. This is a pretty common result when we help push back against bad trademark and
copyright claims: the rights owners slink away once they realize their bullying tactics won’t work, without actually admitting they were wrong. We’re glad these companies seem to have
backed off regardless, but victims of bogus claims deserve more certainty than this.
The Frightening Stakes of this Halloween’s Net Neutrality Hearing
(Wed, 30 Oct 2024)
The future of the open internet is in danger this October 31st, not from ghosts and goblins, but from the broadband companies that control internet access
in most of the United States. These companies would love to use their oligopoly power to charge
users and websites additional fees for “premium” internet access, which they can create by artificially throttling some connections and prioritizing others. Thanks to public pressure
and a coalition of public interest groups, the Federal Communications Commission (FCC) has forbidden such
paid prioritization and throttling, as well as outright blocking of websites. These net neutrality protections ensure that ISPs treat all data that travels over their networks fairly,
without improper discrimination in favor of particular apps, sites or services.
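Mechanically, a “fast lane” is just a queueing policy. The schematic sketch below (in Python, with invented numbers; this is not any ISP’s real configuration) shows how the same physical link can be split into a premium tier and an artificially throttled one, which is exactly the sorting the rules forbid.

    # Two rate classes over the same link: the "premium" tier exists only
    # because the ordinary queue is drained more slowly, not because of
    # any difference in infrastructure or capacity.
    RATE_LIMITS_BPS = {
        "paying_partner": 100_000_000,  # prioritized traffic
        "everyone_else": 5_000_000,     # artificially throttled traffic
    }

    def bytes_allowed(service_class, elapsed_seconds):
        return RATE_LIMITS_BPS[service_class] / 8 * elapsed_seconds

    print(bytes_allowed("everyone_else", 1.0))   # 625,000 bytes per second
    print(bytes_allowed("paying_partner", 1.0))  # 12,500,000 bytes per second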
But the lure of making more money without investing in better service or infrastructure is hard for broadband services like Comcast and AT&T to resist.
So the big telecom companies have challenged the FCC’s rules in court—and their case has now made its way to the Sixth Circuit Court of Appeals.
A similar challenge was soundly rejected by the D.C. Circuit Court of Appeals in 2016. Unfortunately the FCC, led by a new Chair, repealed those hard-won
rules in 2017—despite intense resistance from nonprofits, artists, tech companies large and small, libraries, and millions of regular internet users. A few years later, FCC membership
changed again, and the new FCC restored net neutrality protections. As everyone expected, Team Telecom ran back to court, leading to this appeal.
A few things have changed since 2017, however, and none of them good for Team Internet. For one thing, the case is being heard in the Sixth Circuit, which
is not bound by the D.C. Circuit’s earlier reasoning, and which has already signaled its sympathy for Team Telecom in a preliminary ruling.
And, of course, the makeup of the Supreme Court has changed dramatically. Justice Kavanaugh, in particular, dissented from the D.C. Circuit majority when it
reviewed the 2015 order—a dissent that clearly influenced the Sixth Circuit’s initial ruling in the case. That influence may well be felt when this case inevitably makes its way to
the Supreme Court.
The central legal questions are: 1) what did Congress mean when it directed the FCC to regulate “telecommunications services” differently from “information
services,” and 2) into which category does broadband fall. This matters because the rules that we need to preserve the open internet — such as forbidding discrimination against
certain applications — require the FCC to treat access providers like “common carriers,” treatment that can only be applied to telecommunications services. If the FCC has to define
broadband as an “information service,” it can impose regulations that “promote competition” (good) but it cannot do much to forbid paid prioritization, throttling or blocking
(bad).
The answers to those questions will likely depend on whether the Sixth Circuit thinks regulation of the internet is a “major question,” meaning whether it
is an issue with “vast economic or political significance.” If so, the Supreme Court has said that agencies can only address it if Congress has clearly authorized them to do
so.
The “major questions doctrine” is on the rise thanks to a Supreme Court majority that is deeply skeptical of the so-called administrative state. In the past
few years, the majority has used it to reject multiple agency actions, such as the CDC’s temporary moratorium on evictions in areas hard-hit by
Covid.
Equally importantly, the Supreme Court recently changed the rules on whether and how courts should defer to plausible agency interpretations of the statutes
under which they operate. In the case of Loper Bright Enterprises v.
Raimondo, the Court ended an era of judicial deference to agency determinations. Rather than allowing agencies to act according to the
agencies’ own plausible determinations about the scope and meaning of the authorities granted to them by Congress, courts are now instructed to reach those determinations
independently. Ironically, under the old rule of deference, in 2003 the Ninth Circuit independently
concluded that broadband was a telecommunications service—the most straightforward and correct reading of
the statute and the one that provides a sound legal basis for net neutrality protections. In fact, the court said it had been erroneous for the FCC to say otherwise. But the FCC and
telecoms successfully argued that the courts should defer to the FCC’s contrary reading, and won at the Supreme Court based on the doctrine of judicial deference that
Loper Bright has now overruled.
Putting these legal threads together, Team Telecom is arguing that the FCC cannot classify current broadband offerings as a telecommunications service, even though that’s the best reading of the statute, because that classification would be a “major question”
that only Congress can decide. Team Internet argues that Congress clearly delegated that decision-making power to the FCC, which is one reason the Supreme Court did not treat the
issue as a “major question” the last time it looked at the issue. Team Telecom also argues that, after the Loper Bright
decision, the court need not defer to the FCC’s interpretation of its own authority. Team Internet explains that, this time, the FCC’s interpretation aligns
with the best understanding of the statute and the facts. EFF stands with Team Internet and so
should the court. It will likely issue a decision in the first half of 2025, so the specter of uncertainty will be with us for some time. Even when the panel issues an opinion, the
losing side will be able to request that the full Sixth Circuit rehear the case, and then the Supreme Court would be the next and final resting place of the
matter.
Triumphs, Trials, and Tangles From California's 2024 Legislative Session
(Wed, 30 Oct 2024)
California’s 2024 legislative session has officially adjourned, and it’s time to reflect on the wins and losses that have shaped Californians’ digital rights landscape this
year.
EFF monitored nearly 100 bills in the state this session alone, addressing a broad range of issues related to privacy, free speech, and innovation. These include proposed
standards for Artificial Intelligence (AI) systems used by state agencies, the intersection of AI and copyright, police surveillance practices, and various privacy concerns. While we
have seen some significant victories, there are also alarming developments that raise concerns about the future of privacy protection in the state.
Celebrating Our Victories
This legislative session brought some wins for privacy advocates—most notably the defeat of four dangerous bills: A.B. 3080, A.B. 1814, S.B. 1076, and S.B. 1047. These bills
posed serious threats to consumer privacy and would have undermined the progress we’ve made in previous years.
First, we commend the California Legislature for not advancing A.B.
3080, “The Parent’s Accountability and Child Protection Act” authored by Assemblymember Juan Alanis (Modesto). The bill would have created
powerful incentives for “pornographic internet websites” to use age-verification mechanisms. The bill was not clear on what counts as “sexually explicit content.” Without clear
guidelines, the bill would have further harmed the ability of all youth—particularly LGBTQ+ youth—to access legitimate content online. Different versions of bills requiring age verification
have appeared in more than a dozen states. We understand Asm. Alanis' concerns,
but A.B. 3080 would have required broad, privacy-invasive data collection from internet users of all ages. We are grateful that it did not make it to the finish line.
Second, EFF worked with dozens of organizations to defeat A.B.
1814, a facial recognition bill authored by Assemblymember Phil Ting (San Francisco). The bill attempted to expand the use of facial recognition software by police
to “match” images from surveillance databases to possible suspects. Those images could then be used to issue arrest warrants or search warrants. The bill merely said that these
matches can't be the sole reason for a warrant to be issued—a standard that has already failed to stop false arrests in other states.
Police departments and facial recognition companies alike currently maintain that police cannot justify an arrest using only algorithmic matches—so what would this bill really
change? The bill only gave the appearance of doing something to address face recognition technology's harms, while allowing the practice to continue. California should not give law
enforcement the green light to mine databases, particularly those where people contributed information without knowledge that it would be accessed by law enforcement. You can read
more about this bill here,
and we are glad to see the California legislature reject this dangerous bill.
EFF also worked to oppose and defeat S.B. 1076, by
Senator Scott Wilk (Lancaster). This bill would have weakened the California Delete Act (S.B. 362). Enacted last year, the Delete Act
provides consumers with an easy “one-click” button (required to be in place by January 1, 2026) to request the removal of their personal information held by data brokers registered in California. S.B. 1076 would have opened loopholes for data brokers to duck compliance. This would have hurt consumer rights and undone oversight of an opaque ecosystem of entities that collect and then sell
personal information they’ve amassed on individuals. S.B. 1076 would have likely created significant confusion with the development, implementation, and long-term usability of the
delete mechanism established in the California Delete Act, particularly as the California Privacy Protection Agency works on regulations for it.
Lastly, EFF opposed S.B. 1047, the
“Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” authored by Senator Scott Wiener (San Francisco). This bill
aimed to regulate AI models that might have "catastrophic" effects, such as attacks on critical infrastructure. Ultimately, we believe focusing on speculative, long-term, catastrophic
outcomes from AI (like machines going rogue and taking over the world) pulls attention away from AI-enabled harms that are directly before us. EFF supported parts of the bill, like
the creation of a public cloud-computing cluster (CalCompute). However, we also had concerns from the beginning that the bill set an abstract and confusing set of regulations for
those developing AI systems and was built on a shaky self-certification mechanism. Those concerns remained in the final version of the bill that passed the legislature.
Governor Newsom vetoed S.B. 1047; we encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world
harms.
Of course, this session wasn’t all sunshine and rainbows, and we had some big setbacks. Here are a few:
The Lost Promise of A.B. 3048
Throughout this session, EFF and our partners supported A.B.
3048, common-sense legislation that would have required browsers to let consumers exercise their protections under the California Consumer Privacy Act (CCPA). California
is currently one of approximately a dozen states requiring businesses to honor consumer privacy requests made through opt-out preference signals in their browsers and devices. Yet
large companies have often made it difficult for consumers to exercise those rights on their own. The bill would have properly balanced providing consumers with ways to exercise their
privacy rights without creating burdensome requirements for developers or hindering innovation.
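These signals are simple, concrete technology. Global Privacy Control, one widely deployed opt-out preference signal, rides along with each request as the HTTP header “Sec-GPC: 1”. A business honoring it needs little more than the sketch below (written in Python; the handler itself is hypothetical):

    def request_has_opt_out(headers):
        # Participating browsers add "Sec-GPC: 1" to every request
        # on behalf of the user.
        return headers.get("Sec-GPC") == "1"

    if request_has_opt_out({"Sec-GPC": "1"}):
        # Where the CCPA applies, treat this as a do-not-sell/share
        # request: skip data-broker sharing and third-party ad trackers.
        pass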
Unfortunately, Governor Newsom chose to veto A.B. 3048. His veto
letter cited the lack of support from mobile operators, arguing that because “No major mobile OS incorporates an option for an opt-out signal,” it is “best if design
questions are first addressed by developers, rather than by regulators.” EFF believes technologists should be involved in the regulatory process and hopes to assist in that process.
But Governor Newsom is wrong: we cannot wait for industry players to voluntarily support regulations that protect consumers. Proactive measures are essential to safeguard privacy
rights.
This bill would have moved California in the right direction, making California the first state to require browsers to offer consumers the ability to exercise their
rights.
Wrong Solutions to Real Problems
A big theme we saw this legislative session was proposals that claimed to address real problems but would have been ineffective or would have failed to respect privacy. These included
bills intended to address young people’s safety online and deepfakes in elections.
While we defeated many misguided bills that were introduced to address young people’s access to the internet, S.B. 976, authored by Senator Nancy Skinner (Oakland), received Governor
Newsom’s signature and takes effect on January 1, 2027. This proposal aims to regulate the "addictive" features of social media companies, but instead compromises the privacy of
consumers in the state. The bill is also likely preempted by federal law and raises considerable First Amendment and privacy concerns. S.B. 976 is unlikely to protect children online,
and will instead harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.
It is no secret that deepfakes can be incredibly convincing, and that can have scary consequences, especially during an election year. Two bills that attempted to address this
issue are A.B. 2655 and A.B. 2839. Authored by Assemblymember
Marc Berman (Palo Alto), A.B. 2655 requires online platforms to develop and implement procedures to block and take down, as well as separately label, digitally manipulated content
about candidates and other elections-related subjects that creates a false portrayal of those subjects. We believe A.B. 2655 likely violates the First Amendment and will lead to
over-censorship of online speech. The bill is also preempted by Section 230, a federal law that provides
partial immunity to online intermediaries for causes of action based on the user-generated content published on their platforms.
Similarly, A.B. 2839, authored by Assemblymember Gail
Pellerin (Santa Cruz), not only bans the distribution of materially deceptive or altered election-related content, but also burdens mere distributors (internet websites, newspapers,
etc.) who are unconnected to the creation of the content—regardless of whether they know of the prohibited manipulation. By extending beyond the direct publishers and toward
republishers, A.B. 2839 burdens and holds liable republishers of content in a manner that has been found unconstitutional.
There are ways to address the harms of deepfakes without stifling innovation and free speech. We recognize the complex issues raised by potentially harmful, artificially
generated election content. But A.B. 2655 and A.B. 2839, as written and passed, likely violate the First Amendment and run afoul of federal law. In fact, less than a month after they
were signed, a federal judge put A.B. 2839’s enforcement on
pause (via a preliminary injunction) on First Amendment grounds.
Privacy Risks in State Databases
We also saw a troubling trend in the legislature this year that we will be making a priority as we look to 2025. Several bills emerged this session that, in different ways,
threatened to weaken privacy protections within state databases. Specifically, A.B. 518 and A.B. 2723, which received Governor Newsom’s signature, are a step backward for data
privacy.
A.B. 518 authorizes numerous agencies in California to share, without
restriction or consent, personal information with the state Department of Social Services (DSS), exempting this sharing from all state privacy laws. This includes county-level
agencies, and people whose information is shared would have no way of knowing or opting out. A.B. 518 is incredibly broad, allowing the sharing of health information, immigration
status, education records, employment records, tax records, utility information, children’s information, and even sealed juvenile records—with no requirement that DSS keep this
personal information confidential, and no restrictions on what DSS can do with the information.
On the other hand, A.B. 2723 assigns a governing board to
the new “Cradle to Career (CTC)” longitudinal education database intended to synthesize student information collected from across the state to enable comprehensive research and
analysis. Parents and children provide this information to their schools, but this project means that their information will be used in ways they never expected or consented to. Even
worse, as written, this project would be exempt from the following privacy safeguards of the Information Practices Act
of 1977 (IPA), which, with respect to state agencies, would otherwise guarantee California parents and students:
the right for subjects whose information is kept in the data system to receive notice their data is in the system;
the right to consent or, more meaningfully, to withhold consent;
and the right to request correction of erroneous information.
By signing A.B. 2723, Gov. Newsom stripped California parents and students of the rights to even know that this is happening, or agree to this data processing in the first
place.
Moreover, while both of these bills allowed state agencies to trample on Californians’ IPA rights, those IPA rights do not even apply to the county-level agencies affected by
A.B. 518 or the local public schools and school districts affected by A.B. 2723—pointing to the need for more guardrails around unfettered data sharing on the local level.
A Call for Comprehensive Local Protections
A.B. 2723 and A.B. 518 reveal a crucial missing piece in Californians' privacy rights: that the privacy rights guaranteed to individuals through California's IPA do not protect
them from the ways local agencies collect, share, and process data. The absence of robust privacy protections at the local government level is an ongoing issue that must be
addressed.
Now is the time to push for stronger privacy protections, hold our lawmakers accountable, and ensure that California remains a leader in the fight for digital privacy. As
always, we want to acknowledge how much your support has helped our advocacy in California this year. Your voices are invaluable, and they truly make a difference.
Let’s not settle for half-measures or weak solutions. Our privacy is worth the fight.
No Matter What the Bank Says, It's YOUR Money, YOUR Data, and YOUR Choice
(Wed, 30 Oct 2024)
The Consumer Financial Protection Bureau (CFPB) has just
finalized a rule that makes it easy and safe for you to figure out which bank will give you the best deal and switch to that bank, with just a couple of
clicks.
We love this kind of thing: the coolest thing about a digital world is how easy it is to switch from one product or service to another—in theory.
Digital tools are so flexible, anyone who wants your business can write a program
to import your data into a new service and forward any messages or interactions that show up at the old service.
That's the theory. But in practice, companies have figured out how to use law—IP law, cybersecurity law, contract law, trade secrecy law—to literally criminalize this kind of marvelous digital flexibility,
so that it can end up being even harder to switch away from a digital service than it is to hop around among traditional, analog ones.
Companies love lock-in. The harder it is to quit a product or service, the worse a company can treat you without risking your business.
Economists call the difficulties you face in leaving one service for another the "switching costs" and businesses go to great lengths to raise the switching costs
they can impose on you if you have the temerity to be a disloyal customer.
So long as it's easier to coerce your loyalty than it is to earn it, companies win and their customers lose. That's where the new CFPB rule comes in.
Under this rule, you can authorize a third party - another bank, a comparison shopping site, a broker, or just your bookkeeping software - to request your account data from your
bank. The bank has to give the third party all the data you've authorized. This data can include your transaction history and all the data needed to
set up your payees and recurring transactions somewhere else.
That means that—for example—you can authorize a comparison shopping site to access some of your bank details, like how much you pay in overdraft fees and service charges, how
much you earn in interest, and what your loans and credit cards are costing you. The service can use this data to figure out which bank will cost you the least and pay you the
most.
Then, once you've opened an account with your new best bank, you can direct it to request all your data from your old bank, and with a few
clicks, get fully set up in your new financial home. All your payees transfer over, all your regular payments, all the transaction history you'll rely on at tax time. "Painless" is an
admittedly weird adjective to apply to household finances, but this comes pretty darned close.
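As a rough sketch of that switching flow in code: the rule mandates which data must be shared and under what safeguards, not any particular API, so every name below is invented for illustration, and the bank objects stand in for hypothetical client libraries.

    from dataclasses import dataclass, field

    @dataclass
    class AccountExport:
        transactions: list = field(default_factory=list)  # history for tax time
        payees: list = field(default_factory=list)        # billers and transfers
        recurring: list = field(default_factory=list)     # scheduled payments

    def switch_banks(old_bank, new_bank, authorization):
        # The consumer authorizes the new bank as a third party; the old
        # bank must then hand over the covered data.
        export = old_bank.fetch_authorized_data(authorization)
        for payee in export.payees:
            new_bank.add_payee(payee)        # payees transfer over
        for payment in export.recurring:
            new_bank.schedule(payment)       # regular payments too
        new_bank.import_history(export.transactions)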
Americans lose a lot of money to banking fees and low interest rates. How much? Well, CFPB economists, using a very
conservative methodology, estimate that this rule will make the American public at least $677 million better off, every
year.
Now, that $677 million has to come from somewhere, and it does: it comes from the banks that are currently charging sky-high fees and paying rock-bottom interest. The largest of
these banks are suing the CFPB in a bid to
block the rule from taking effect.
These banks claim that they are doing this to protect us, their depositors, from a torrent of fraud that would be unleashed if we were allowed to give third parties access to
our own financial data. Clearly, this is the only reason a giant bank would want to make it harder for us to change to a competitor (it can't possibly have anything to do with the
$677 million we stand to save by switching).
We've heard arguments like these before. While EFF takes a back seat to no one when it comes to defending user security (we practically invented this), we reject the idea that user security is improved when corporations lock us in
(and leading security experts agree with
us).
This is not to say that a bad data-sharing interoperability rule wouldn't be, you know, bad. A rule that lacked
the proper safeguards could indeed enable a wave of fraud and identity theft the likes of which we've never seen.
Thankfully, this is a good interoperability rule! We liked it when it was first proposed, and it got
even better through the rulemaking process.
First, the CFPB had the wisdom to know that a federal finance agency probably wasn't the best—or only—group of people to design a data-interchange standard. Rather than telling
the banks exactly how they should transmit data when requested by their customers, the CFPB instead said, "These are the data you need to share and these are the characteristics of a
good standards body. So long as you use a standard from a good standards body that shares this data, you're in compliance with the rule." This is an approach we've advocated for
years, and it's the first time we've seen it in the wild.
The CFPB also instructs the banks to fail safe: any time a bank gets a request to share your data that it thinks might be fraudulent, it has the right to block the process until it can get more information and confirm that everything is on the up-and-up.
The rule also regulates the third parties that can get your data, establishing stringent criteria for which kinds of entities can do this. It also limits
how they can use your data (strictly for the purposes you authorize), what they must do with the data once those purposes are fulfilled (delete it forever), and what else they are allowed to do with it (nothing). There's also a mini "click-to-cancel"
rule that guarantees that you can instantly revoke any third party's access to your data, for any reason.
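Restated as data-handling logic, a third party’s obligations look roughly like this (an invented class for illustration; the rule itself is regulatory text, not code):

    class AuthorizedAccess:
        def __init__(self, purpose):
            self.purpose = purpose  # strictly what the consumer authorized
            self.revoked = False
            self.data = None

        def use(self, data, requested_purpose):
            if self.revoked:
                # The mini "click-to-cancel" rule: revocation is instant.
                raise PermissionError("consumer revoked access")
            if requested_purpose != self.purpose:
                raise PermissionError("outside the authorized purpose")
            self.data = data

        def complete(self):
            self.data = None  # "delete it forever" once the purpose is served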
The CFPB has had the authority to make a rule like this since its founding in 2010, with the passage of the Consumer Financial Protection Act (CFPA). Back when the CFPA was
working its way through Congress, the banks howled that they were being forced to give up "their" data to their competitors.
But it's not their data. It's your data. The decision about who you share it with belongs to you, and you
alone.
Court Orders Google (a Monopolist) To Knock It Off With the Monopoly Stuff
(Tue, 29 Oct 2024)
A federal court recently ordered Google to make it easier for Android users to switch to rival app stores, banned Google from using its vast cash reserves to block competitors,
and hit Google with a bundle of thou-shalt-nots and assorted prohibitions.
Each of these measures is well crafted, narrowly tailored, and purpose-built to accomplish something vital: improving competition in mobile app stores.
You love to see it.
Some background: the mobile OS market is a duopoly run by two dominant firms, Google (Android) and Apple (iOS). Both companies distribute software through their app stores
(Google's is called "Google Play," Apple's is the "App Store"), and both companies use a combination of market power and legal intimidation to ensure that their users get
all their apps from the company's store.
This creates a chokepoint: if you make an app and I want to run it, you have to convince Google (or Apple) to put it in their store first. That means that Google and Apple can
demand all kinds of concessions from you, in order to reach me. The most important concession is
money, and lots of it. Both Google and Apple demand 30 percent of every dime generated with
an app - not just the purchase price of the app, but every transaction that takes place within the app after that. The companies have all kinds of onerous rules blocking app makers
from asking their users to buy stuff on their website, instead of in the app, or from offering discounts to users who do so.
For the avoidance of doubt: 30 percent is a lot. The "normal" rate for payment processing is more like 2-5 percent, a commission that's
gone up 40 percent since covid hit, a price-hike that is itself attributable to monopoly power in the
sector. That's bad, but Google and Apple demand ten times that (unless you qualify for their small business discount, in which case,
they only charge five times more than the Visa/Mastercard cartel).
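A quick back-of-envelope check of those figures (taking 3 percent as a representative processing fee, and 15 percent as the small-business rate implied by "five times more"; the numbers are from the text above, the arithmetic is ours):

    price = 9.99  # a hypothetical in-app purchase
    rates = {
        "card processing (~3%)": 0.03,
        "app store, standard (30%)": 0.30,
        "app store, small business (15%)": 0.15,
    }
    for label, rate in rates.items():
        print(f"{label}: platform takes ${price * rate:.2f}, "
              f"developer keeps ${price * (1 - rate):.2f}")
    # Ten times and five times the card rate, respectively:
    print(round(0.30 / 0.03), round(0.15 / 0.03))  # 10 5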
Epic Games - the company behind the wildly successful multiplayer game Fortnite - has been chasing Google and Apple through the courts over this for years, and last
December, they prevailed in their case against
Google.
This week's court ruling is the next step in that victory. Having concluded that Google illegally acquired and maintained a monopoly over apps for Android, the court had to
decide what to do about it.
It's a great judgment: read it for yourself, or peruse the highlights in this excellent summary from The
Verge.
For the next three years, Google must meet the following criteria:
Allow third-party app stores for Android, and let those app stores distribute all the same apps as are available in Google Play (app developers can opt out of this);
Distribute third-party app stores as apps, so users can switch app stores by downloading a new one from Google Play, in just the same way as
they'd install any app;
Allow apps to use any payment processor, not just Google's 30 percent money-printing machine;
Permit app vendors to tell users about other ways to pay for the things they buy in-app;
Permit app vendors to set their own prices.
Google is also prohibited from using its cash to fence out rivals, for example, by:
Offering incentives to app vendors to launch first on Google Play, or to be exclusive to Google Play;
Offering incentives to app vendors to avoid rival app stores;
Offering incentives to hardware makers to pre-install Google Play;
Offering incentives to hardware makers not to install rival app stores.
These provisions tie in with Google's other recent loss, in Google v. DoJ, where the company was
found to have operated a monopoly over
search. That case turned on the fact that Google paid unimaginably vast sums - more than $25 billion per year - to phone makers,
browser makers, carriers, and, of course, Apple, to make Google Search the default. That meant that every search box you were likely to encounter would connect to Google, meaning that
anyone who came up with a better search engine would have no
hope of finding users.
What's so great about these remedies is that they strike at the root of the Google app monopoly. Google locks billions of users into its platform, and that means that software
authors are at its mercy. By making it easy for users to switch from
one app store to another, and by preventing Google from interfering with that free choice, the court is saying to Google, "You can only remain dominant if you're the best - not
because you're holding 3.3 billion Android users hostage."
Interoperability - plugging new features, services and products into existing systems - is digital technology's secret superpower, and it's great to see the courts recognizing how a
well-crafted interoperability order can cut through thorny tech problems.
Google has vowed to appeal. They say they're being
singled out, because Apple won a similar case earlier this year. It's
true, a different court got it wrong with Apple.
But Apple's not off the hook, either: the EU's Digital Markets Act took effect this year, and its provisions broadly mirror the injunction that just landed on Google. Apple
responded to the EU by refusing to substantively comply with the law, teeing up another
big, hairy battle.
In the meantime, we hope that other courts, lawmakers and regulators continue to explore the possible uses of interoperability to make technology work for its users. This order
will have far-reaching implications, and not just for games like Fortnite: the 30 percent app tax is a millstone around the neck of all kinds of institutions, from independent game
devs who are dolphins caught in Google's tuna net to the free press itself.
Cop Companies Want All Your Data and Other Takeaways from This Year’s IACP Conference
(Mon, 28 Oct 2024)
Artificial intelligence dominated the technology talk on panels, among sponsors, and across the trade floor at this year’s annual conference of the International Association of
Chiefs of Police (IACP).
IACP, held Oct. 19 - 22 in Boston, brings together thousands of police employees with the businesses who want to sell them guns, gadgets, and gear. Across the four-day schedule
were presentations on issues like election security and conversations with top brass like Secretary of Homeland Security Alejandro Mayorkas. But the central attraction was clearly the
trade show floor.
Hundreds of vendors of police technology spent their days trying to attract new police customers and sell existing ones on their newest projects. Event sponsors included big
names in consumer services, like Amazon Web Services (AWS) and Verizon, and police technology giants, like Axon. There was a private ZZ Top concert at TD Garden for the 15,000+
attendees. Giveaways — stuffed animals, espresso, beer, challenge coins, and baked goods — appeared alongside Cybertrucks, massage stations, and tables of police supplies: vehicles,
cameras, VR training systems, and screens displaying software for recordkeeping and data crunching.
And vendors were selling more ways than ever for police to surveil the public and collect as much personal data as possible. EFF will continue to follow up on what we’ve
seen in our research and at IACP.
A partial view of the vendor booths at IACP 2024
Doughnuts provided by police tech vendor Peregrine
“All in On AI” Demands Accountability
Police are pushing forward full speed ahead on AI.
EFF’s Atlas of Surveillance tracks use of AI-powered equipment like face recognition, automated license plate readers, drones, predictive
policing, and gunshot detection. We’ve seen a trend toward the
integration of these various data
streams, along with private cameras, AI video analysis, and information bought from data brokers. We’ve been following the adoption of real-time crime centers. Recently, we started
tracking the rise of what we call Third
Party Investigative Platforms, which are AI-powered systems that claim to sort or provide huge swaths of data, personal and public, for investigative
use.
The IACP conference featured companies selling all of these kinds of surveillance. Also, each day contained multiple panels on how AI could be integrated into local police work,
including featured speakers like Axon founder Rick Smith, Chula Vista Police Chief Roxana Kennedy, and Fort Collins Police Chief Jeff Swoboda, whose
agency was among the first to use Axon’s DraftOne, software using genAI
to create police reports. Drone as First Responder (DFR) programs were prominently featured by Skydio, Flock Safety, and Brinc. Clearview AI marketed its face recognition
software. Axon offered a whole set of different tools, centering its presentation around AxonAI and the computer-driven future.
The booth for police drone provider, Brinc
The policing “solution” du jour is AI, but in reality it demands oversight, skepticism, and, in some cases, total elimination. AI in policing carries a dire list of risks,
including extreme privacy violations, bias, false accusations, and the sabotage of our civil liberties. Adoption of such tools at minimum requires community control of whether to acquire them, and if adopted, transparency and clear
guardrails.
The Corporate/Law Enforcement Data Surveillance Venn Diagram Is Basically A Circle
AI cannot exist without data: data to train the algorithms, to analyze even more data, to trawl for trends and generate assumptions. Police have been accruing their own data for
years through cases, investigations, and surveillance. Corporations have also been gathering information from us: our behavior online, our purchases, how long we look at an image,
what we click on.
As one vendor employee said to us, “Yeah, it’s scary.”
The corporate market for harvesting and monetizing our data is wildly unregulated. Data brokers have been busily vacuuming up whatever information they can. A whole industry provides
law enforcement access to as much information about as many people as possible, and packages police data to “provide insights” and visualizations. At IACP, companies like LexisNexis,
Peregrine, DataMinr, and others showed off how their platforms can give police access to evermore data from tens of thousands of sources.
Some Cops Care What the Public Thinks
Cops will move ahead with AI, but they would much rather do it without friction from their constituents. Some law enforcement officials remain shaken up by the global 2020 protests
following the police murder of George Floyd. Officers at IACP regularly referred to the “public” or the “activists” who might oppose their use of drones and other equipment. One
featured presentation, “Managing the Media's 24-Hour News Cycle and Finding a Reporter You Can Trust,” focused on how police can try to set the narrative that the media tells and the
public generally believes. In another talk, Chula Vista showed off professionally-produced videos designed to win public favor.
This underlines something important: Community engagement, questions, and advocacy are well worth the effort. While many police officers think privacy is dead, it isn’t. We should have faith that when we push back and exert enough pressure, we can stop
law enforcement’s full-scale invasion of our private lives.
Cop Tech is Coming To Every Department
The companies that sell police spy tech, and many departments that use it, would like other departments to use it, too, expanding the sources of data feeding into these networks. In
panels like “Revolutionizing Small and Mid-Sized Agency Practices with Artificial Intelligence,” and “Futureproof: Strategies for Implementing New Technology for Public Safety,”
police officials and vendors encouraged agencies of all sizes to use AI in their communities. Representatives from state and federal agencies talked about regional information-sharing
initiatives and ways smaller departments could be connecting and sharing information even as they work out funding for more advanced technology.
A Cybertruck at the booth for Skyfire AI
“Interoperability” and “collaboration” and “data sharing” are all the buzz. AI tools and surveillance equipment are available to police departments of all sizes, and that’s how
companies, state agencies, and the federal government want it. It doesn’t matter if you think your Little Local Police Department doesn’t need or can’t afford this technology. Almost
every company wants them as a customer, so they can start vacuuming their data into the company system and then share that data with everyone else.
We Need Federal Data Privacy Legislation
There isn’t a comprehensive federal data privacy law, and it shows. Police officials and their vendors know that there are no guardrails from Congress preventing use of these
new tools, and they’re typically able to navigate around piecemeal state legislation.
We need real laws against this mass harvesting and marketing of our sensitive personal information — a real line in the sand that stops these data companies from helping police surveil us
lest we cede even more of our rapidly dwindling privacy. We need new laws to protect ourselves from complete strangers trying to buy and search data on our lives, so we can explore
and create and grow without fear of indefinite retention of every character we type, every icon we click.
Having a computer, using the internet, or buying a cell phone shouldn’t mean signing away your life and its activities to any random person or company that wants to make a
dollar off of it.
EU to Apple: “Let Users Choose Their Software”; Apple: “Nah”
(Mon, 28 Oct 2024)
This year, a far-reaching, complex new piece of legislation comes into effect in the EU: the Digital Markets
Act (DMA), which represents some of the most ambitious tech policy in European history. We don’t love everything in the DMA, but some of its provisions are great,
because they center the rights of users of technology, and they do that by taking away some of the control platforms exercise over users, and handing that control back to the
public who rely on those platforms.
Our favorite parts of the DMA are the interoperability provisions. IP laws in the EU (and the US) have all but killed the longstanding and honorable tradition of adversarial interoperability: that’s when you can alter a service, program or device you use,
without permission from the company that made it. Whether that’s getting your car fixed by a third-party mechanic, using third-party ink in your printer, or choosing which apps
run on your phone, you should have the final word. If a company wants you to use its official services, it should make the best services, at the best price – not use the law to
force you to respect its business model.
It seems the EU agrees with us, at least on this issue. The DMA includes several provisions that force the giant tech companies that control so much of our online lives (AKA
“gatekeeper platforms”) to provide official channels for interoperators. This is a great idea, though, frankly, lawmakers should also restore the right of tinkerers and hackers to
reverse-engineer your stuff and let you make it work the way you want.
One of these interop provisions is aimed at app stores for mobile devices. Right now, the only (legal) way to install software on your iPhone is through Apple’s App Store. That’s
fine, so long as you trust Apple and you think they’re doing a great job, but pobody’s nerfect, and even if you love Apple, they won’t always get it right – like when they tell
you you’re not allowed to have an app that records civilian deaths from US drone strikes, or
a game that simulates life in a sweatshop, or a dictionary (because it has swear words!). The final word on which apps you use on your device should be
yours.
Which is why the EU ordered Apple to open up iOS devices to rival app stores, something Apple categorically refuses to do. Apple’s “plan” for complying with the DMA is, shall we say, sorely lacking (this is part of a grand tradition of
American tech giants wiping their butts with EU laws that protect Europeans from predatory activity, like the years Facebook spent ignoring European privacy laws, manufacturing stupid legal theories to defend the indefensible).
Apple’s plan for opening the App Store is effectively impossible for any competitor to use, but this goes double for anyone hoping to offer free and open source software to iOS
users. Without free software – operating systems like GNU/Linux, website tools like WordPress, programming languages like Rust and Python, and so on – the internet would grind to a
halt.
Our dear friends at Free Software Foundation Europe (FSFE) have filed an important brief with the European Commission, formally objecting to Apple’s ridiculous plan on the grounds that it effectively bars iOS users from choosing free software for
their devices.
FSFE’s brief makes a series of legal arguments, rebutting Apple’s self-serving theories about what the DMA really means. FSFE shoots down Apple’s tired argument that copyrights and
patents override any interoperability requirements. U.S. courts have been inconsistent on this issue, but we’re hopeful that the Court of Justice of the E.U. will reject the
“intellectual property trump card.” Even more importantly, FSFE makes moral and technical arguments about the importance of safeguarding the technological
self-determination of users by letting them choose free software, and about why this is as safe – or safer – than giving Apple a veto over its customers’ software choices.
Apple claims that because you might choose bad software, you shouldn’t be able to choose software, period. They say that if competing app stores are allowed to exist, users won’t be
safe or private. We disagree – and so do some of the most respected security experts in the world.
It’s true that Apple can use its power wisely to ensure that you only choose good software. But it’s also used that power to attack its users, like in China, where Apple blocked all working privacy tools from iPhones and then neutered a tool used to organize pro-democracy protests.
It’s not just in China, either. Apple has blanketed the world with billboards
celebrating its commitment to its users’ privacy, and they made good on that promise, blocking third-party surveillance (to the $10 billion chagrin of Facebook). But right in the
middle of all that, Apple also started secretly spying on iOS users to fuel
its own surveillance advertising network, and then lied about it.
Pobody’s nerfect. If you trust Apple with your privacy and security, that’s great. But people who don’t trust Apple to have the final word – people who value software freedom,
or privacy (from Apple), or democracy (in China) – should have the final say.
We’re so pleased to see the EU making tech policy we can get behind – and we’re grateful to our friends at FSFE for holding Apple’s feet to the fire when it flouts that law.
The Real Monsters of Street Level Surveillance
(Fri, 25 Oct 2024)
Safe trick-or-treating this Halloween means being aware of the real monsters of street-level surveillance. You might not always see these
menaces, but they are watching you. The real-world harms of these terrors wreak havoc on our communities. Here, we highlight just a few of the beasts.
To learn more about all of the street-level surveillance creeps in your community, check out our even-spookier resource, sls.eff.org.
If your blood runs too cold, take a break with our favorite digital rights legends—the
Encryptids.
The Face Stealer
Careful where you look. Around any corner may loom the Face Stealer, an arachnid mimic that captures your likeness with just a glance. Is that your mother in the woods? Your
roommate down the alley? The Stealer thrives on your dread and confusion, luring you into its web. Everywhere you go, strangers and loved ones alike recoil, convinced you’re something
monstrous. Survival means adapting to a world where your face is no longer yours—it’s a lure for the horror that claimed it.
The Real Monster
Face recognition technology (FRT) might not jump out at you, but the impacts of this monster
are all too real. EFF wants to banish this monster with a
full ban on government use, and prohibit companies from feeding on
this data without permission. FRT is a tool for mass surveillance, snooping on protesters, and deepening social inequalities.
Three-eyed Beast
Freeze! In your weakest moment, you may encounter the Three-Eyed Beast—and you don’t want to make any sudden movements. As it snarls, its third eye cracks open and sends a
chill through your soul. This magical gaze illuminates your every move, identifying every flaw and mistake. The rest of the world is shrouded in darkness as its piercing squeals of
delight turn you into a spectacle—sometimes calling in foes like the Face
Stealer. The real fear sets in when the eye closes once more, leaving you alone in the shadows as you realize its gaze was the last to ever find you.
The Real Monster
Body-worn cameras are marketed as a fix for police transparency, but instead our communities get
another surveillance tool pointed at us. Officers often decide when to record and what happens to the footage, leading to selective use that shields misconduct rather than exposes it.
Even worse, these cameras can house other surveillance threats like Face Recognition Technology. Without strict safeguards, and community control of whether to adopt them in the first place, these cameras do more
harm than good.
Shrapnel Wraith
If you spot this whirring abomination, it’s likely too late. The Shrapnel Wraith circles, unleashed on our most under-served and over-terrorized communities. This twisted heap
of bolts and gears is puppeted by spiteful spirits into the gestalt form of a vulture. It watches your most private moments, but don’t mistake it for a mere voyeur; it also strikes
with lethal force. Its junkyard shrapnel explodes through the air, only for two more vultures to rise from the wreckage. Its shadow swallows the streets, its buzzing sinking through
your skin. Danger is circling just overhead.
The Real Monster
Drones and robots give law enforcement constant and often unchecked surveillance power. Frequently
equipped with tools like high-definition cameras, heat sensors, and license plate readers, these products can extend surveillance into seemingly private spaces like one’s own backyard. Worse, some can be armed with explosives and other weapons, making
them a potentially lethal threat. Drone and robot use must have strong protections for people’s privacy, and we strongly oppose arming them with any
weapons.
Doorstep Creep
Candy-seekers, watch which doors you ring this Halloween, as the Doorstep Creep lurks at more and more homes. Slinking by the door, this ghoul fosters fear and mistrust in
communities, transforming cozy entries into fortresses of suspicion. Your visit feels judged, unwanted, and shadowed by loathing. As you walk away, slanderous whispers echo
in the home and down the street. You are not welcome here. Doors lock, blinds close, and the Creep’s dark eyes remind you of how alone you are.
The Real Monster
Community Surveillance Apps come in many forms, encouraging the adoption of more home
security devices like doorway cameras, smart doorbells, and more crowd-sourced surveillance apps. People come to these apps out of fear and only find more of the same, with greater
public paranoia, racial gatekeeping, and even vigilante violence. EFF believes the makers of these platforms should position them away from crime and suspicion and toward community
support and mutual aid.
Foggy Gremlin
Be careful where you step for this scavenger. The Foggy Gremlin sticks to you like a leech, and envelops you in a psychedelic mist to draw in large predators. You can run, but
no longer hide, as the fog spreads and grows denser. Anywhere you go, and anywhere you’ve been is now a hunting ground. As exhaustion sets in, a world once open and bright has become
narrow, dark, and sinister.
The Real Monster
Real-time location tracking is a chilling mechanism that enables law enforcement to
monitor individuals through data bought from brokers, often without warrants or oversight. Location data, harvested from mobile apps, can be weaponized to conduct area searches that
expose sensitive information about countless individuals, the overwhelming majority of whom are innocent. We oppose this digital dragnet and advocate for legislation like the
Fourth Amendment is Not For Sale Act to protect individuals
from such tracking.
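A toy sketch shows why such an area search is a dragnet rather than a targeted query; the records and field names here are invented for illustration:

    from math import hypot

    location_records = [
        {"ad_id": "a1", "x": 0.1, "y": 0.2, "t": 1000},  # a passer-by
        {"ad_id": "b2", "x": 0.0, "y": 0.1, "t": 1010},  # lives next door
        {"ad_id": "c3", "x": 9.0, "y": 9.0, "t": 1005},  # somewhere else entirely
    ]

    def area_search(records, cx, cy, t0, radius=0.5, window=60):
        """Every device seen near (cx, cy) around time t0, suspect or not."""
        return [r for r in records
                if hypot(r["x"] - cx, r["y"] - cy) <= radius
                and abs(r["t"] - t0) <= window]

    print([r["ad_id"] for r in area_search(location_records, 0.0, 0.0, 1000)])
    # ['a1', 'b2']: both swept in, neither suspected of anything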
Disability Rights Are Technology Rights
(Thu, 24 Oct 2024)
At EFF, our work always begins from the same place: technological self-determination. That’s the right to decide which technology you use, and how you use it. Technological
self-determination is important for every technology user, and it’s especially important for users with disabilities.
Assistive technologies are a crucial aspect of living a full and fulfilling life, which gives people with disabilities motivation to be some of the most skilled, ardent, and
consequential technology users in the world. There’s a whole world of high-tech assistive tools and devices out there, with disabled technologists and users intimately involved in the
design process.
The accessibility movement’s slogan, “Nothing about us without us,” has its origins in the first stirrings of
European democratic sentiment in the sixteenth (!) century, and it expresses a critical truth: no one can ever know your needs as well as you do. Unless you get a say in how
things work, they’ll never work right.
So it’s great to see people with disabilities involved in the design of assistive tech, but that’s where self-determination should start,
not end. Every person is different, and the needs of people with disabilities are especially idiosyncratic and fine-grained. Everyone deserves and
needs the ability to modify, improve, and reconfigure the assistive technologies they rely on.
Unfortunately, the same tech companies that devote substantial effort to building in assistive features often devote even more effort to
ensuring that their gadgets, code and systems can’t be modified by their users.
Take streaming video. Back in 2017, the W3C finalized “Encrypted Media Extensions” (EME), a standard for adding digital rights management (DRM) to web browsers. The EME spec
includes numerous accessibility features, including facilities for including closed captioning and audio descriptive tracks.
But EME is specifically designed so that anyone who reverse-engineers and modifies it will fall afoul of Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), a 1998 law that
provides for five-year prison sentences and $500,000 fines for anyone who distributes tools that can modify DRM. The W3C considered – and rejected – a binding covenant that would protect technologists
who added more accessibility features to EME.
The upshot of this is that EME’s accessibility features are limited to the suite that a handful of giant technology companies have decided are important enough to develop, and
that suite is hardly comprehensive. You can’t (legally) modify an EME-restricted stream to shift the colors to ones
that aren’t affected by your color-blindness. You certainly can’t run code that buffers the video and looks ahead to see if there are any seizure-triggering strobe
effects, and dampens them if there are.
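For concreteness, here is roughly what such an adaptation could look like, written against plain unencrypted frames. It is a sketch of our own, with crude stand-ins for real daltonization and photosensitivity filtering, and it is exactly the kind of code that DMCA 1201 makes legally radioactive when the stream is EME-locked:

    import numpy as np

    def shift_colors(frame: np.ndarray) -> np.ndarray:
        """Blend the red and green channels for a red-green color-blind
        viewer (a crude stand-in for real daltonization)."""
        out = frame.astype(np.float32)
        out[..., 0], out[..., 1] = (out[..., 0] * 0.6 + out[..., 1] * 0.4,
                                    out[..., 1] * 0.6 + out[..., 0] * 0.4)
        return out.clip(0, 255).astype(np.uint8)

    def dampen_strobes(frames, threshold=60.0):
        """Look ahead through buffered frames and blend any frame whose
        mean luminance jumps sharply, softening strobe effects."""
        safe = [frames[0]]
        for prev, cur in zip(frames, frames[1:]):
            if abs(float(cur.mean()) - float(prev.mean())) > threshold:
                cur = ((cur.astype(np.float32) + prev.astype(np.float32)) / 2
                       ).astype(np.uint8)
            safe.append(cur)
        return safe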
It’s nice that companies like Apple, Google and Netflix put a lot of thought into making EME video accessible, but it’s unforgivable that they arrogated to themselves the sole
right to do so. No one should have that power.
It’s bad enough when DRM infects your video streams, but when it comes for hardware, things get really ugly. Powered wheelchairs – a sector dominated by a cartel of
private-equity backed giants that have gobbled up all their competing firms – have a
serious DRM problem.
Powered wheelchair users who need even basic repairs are corralled by DRM into using the manufacturer’s authorized depots, often enduring long waits during which they are unable
to leave their homes or even their beds. Even small routine adjustments, like changing the wheel torque after adjusting your tire pressure, can require an official service
call.
Colorado passed the country’s first powered wheelchair Right to Repair law in 2022. Comparable legislation is now pending in California, and the
Federal Trade Commission has signaled that it will crack down on companies that use DRM to
block repairs. But the wheels of justice grind slow – and wheelchair users’ own wheels shouldn’t be throttled to match them.
People with disabilities don’t just rely on devices that their bodies go into; gadgets that go into our bodies are increasingly common, and
there, too, we have a DRM problem. DRM is common in implants like continuous glucose monitors and insulin pumps, where it
is used to lock
people with diabetes into a single vendor’s products, as a prelude to gouging them (and their insurers) for parts, service, software updates and medicine.
Even when a manufacturer walks away from its products, DRM creates insurmountable legal risks for third-party technologists who want to continue to support and maintain them.
That’s bad enough when it’s your smart speaker that’s been orphaned, but imagine what it’s like to have an orphaned neural implant that no one can support without risking prison time under DRM
laws.
Imagine what it’s like to have the bionic eye that is literally wired into your head go dark
after the company that made it folds up shop – survived only by the 95-year legal restrictions that DRM law provides for, restrictions that guarantee that no one will provide
you with software that will restore your vision.
Every technology user deserves the final say over how the systems they depend on work. In an ideal world, every assistive technology would be designed with this in mind: free
software, open-source hardware, and designed for easy repair.
But we’re living in the Bizarro world of assistive tech, where it is not only normal for tools for people with disabilities to be designed without any consideration for
the user’s ability to modify the systems they rely on – companies actually dedicate extra engineering effort to creating legal liability for anyone
who dares to adapt their technology to suit their own needs.
Even if you’re able-bodied today, you will likely need assistive technology or will benefit from accessibility adaptations. The curb-cuts that accommodate wheelchairs make life
easier for kids on scooters, parents with strollers, and shoppers and travelers with rolling bags. The subtitles that make TV accessible to Deaf users allow hearing people to follow
along when they can’t hear the speaker (or when the director deliberately chooses to muddle
the dialog). Alt tags in online images make life easier when you’re on a slow data connection.
Fighting for the right of disabled people to adapt their technology is fighting for everyone’s rights.
(EFF extends our thanks to Liz Henry for their help with this article.)
The UK Must Act: Alaa Abd El-Fattah Still Imprisoned 25 Days After Release Date
(Wed, 23 Oct 2024)
It’s been 25 days since September 29, the day that should have seen British-Egyptian blogger, coder, and activist Alaa Abd El Fattah walk free. Egyptian
authorities refused to release him at the end of his sentence, in
contradiction of the country's own Criminal Procedure Code, which requires that time served in pretrial detention count toward a prison sentence. In the days since, Alaa’s family has
been able to secure meetings with high-level British officials, including Foreign Secretary David Lammy, but as of yet, the Egyptian government still has not released Alaa.
In early October, Alaa was named the 2024 PEN Writer of Courage by PEN Pinter Prize
winner Arundhati Roy, who presented the
award in a ceremony where it was received by Egyptian publication Mada Masr editor Lina Attalah on Alaa’s behalf.
Alaa’s mother, Laila Soueif, is now on her third week of hunger strike and says that she
won’t stop until Alaa is free or she’s taken to the hospital. In recent weeks, Alaa’s mother and sisters have met with several members of Parliament in the hopes of placing more
pressure on officials. As the BBC reports, his family are “deeply disappointed with how the
current government, and the previous one, have handled his case” and believe that the UK has more leverage with Egypt that it is not using.
Alaa deserves to finally return to his family, now in the UK, and to be reunited with his son, Khaled, who is now a teenager. We urge EFF supporters in the
UK to write to their MP (external link) to place pressure on the UK’s Labour government to use their power
to push for Alaa’s release.
In Appreciation of David Burnham
(Tue, 22 Oct 2024)
We at EFF have long recognized the threats posed by the unchecked technological prowess of law enforcement and intelligence agencies. Since our founding in 1990, we have been in
the forefront of efforts to impose meaningful legal controls and accountability on the secretive activities of those entities, including the National Security Agency (NSA). While the
U.S. Senate’s Church Committee hearings and report in the mid-1970s documented the past abuses of government surveillance powers, it could not anticipate the dangers those
interception and collection capabilities would bring to a networked environment. As Sen. Frank Church said in 1975 about an unchecked NSA, “No American would have any privacy left,
such is the capability to monitor everything: telephone conversations, telegrams, it doesn't matter. There would be no place to hide.” The communications infrastructure was still in a
mid-20th century analog mode.
One of the first observers to recognize the impact of NSA’s capabilities in the emerging digital landscape was David Burnham, a pioneering investigative journalist and author who
passed away earlier this month at 91 years of age. While the obituary that ran at his old home, The New York Times, rightly emphasized Burnham’s ground-breaking investigations of police corruption
and the shoddy safety standards of the nuclear power industry (depicted, respectively, in the films “Serpico” and “Silkwood”), those in the digital rights world are especially appreciative of his prescience when it came to the issues we care about
deeply.
In 1983, Burnham published “The Rise
of the Computer State,” one of the earliest examinations of the emerging challenges of the digital age. As Walter Cronkite wrote in his foreword to the book, “The
same computer that enables us to explore the outer reaches of space and the mysteries of the atom can also be turned into an instrument of tyranny. We must ensure that the rise of the
computer state does not also mean the demise of our civil liberties.” Here is what Burnham wrote in a piece for The New York Times Magazine based
on the reporting in his book:
With unknown billions of Federal dollars, the [NSA] purchases the most sophisticated communications and computer equipment in the world. But truly to comprehend
the growing reach of this formidable organization, it is necessary to recall once again how the computers that power the NSA are also gradually changing lives of Americans
- the way they bank, obtain benefits from the Government and communicate with family and friends. Every day, in almost every area of culture and commerce, systems and
procedures are being adopted by private companies and organizations...that make it easier for the NSA to dominate American society...
Remember, that was written in 1983. Ten years before the launch of the Mosaic browser and three decades before mobile devices became ubiquitous. But Burnham understood the
trajectory of the emerging technology, for both the government and its citizens.
Recognizing the dangers of unchecked surveillance powers, Burnham was a champion of oversight and transparency, and, consequently, he was a skilled and aggressive user of the
Freedom of Information Act. In 1989, he partnered with Professor Susan Long to establish the Transactional
Records Access Clearinghouse (TRAC) at Syracuse University. TRAC combines sophisticated use of FOIA with data analytics techniques “to develop as comprehensive and detailed a
picture as possible about what federal enforcement and regulatory agencies actually do . . . and to organize all of this information to make it readily accessible to the public.” From
its FOIA requests, TRAC adds more than 3 billion new records to its database annually. Its work is widely acclaimed by the many academics, journalists and lawyers who make use of its
extensive resources. It is a fitting legacy to Burnham’s unwavering belief in the power of information.
As EFF Executive Director Cindy Cohn has said when describing our work, we stand on the shoulders of giants. With his recognition of technology’s challenges to privacy, his
insistence on transparency, and his joy in telling truth to power, David Burnham was one of them.
Full disclosure: David was a longtime colleague, client and friend.
How Many U.S. Persons Does Section 702 Spy On? The ODNI Needs to Come Clean.
(Tue, 22 Oct 2024)
EFF has joined with 23 other organizations including the ACLU, Restore the Fourth, the Brennan Center for Justice, Access Now, and the Freedom of the Press Foundation to
demand that the Office of the Director of National Intelligence (ODNI) furnish the public with an estimate of
exactly how many U.S. persons’ communications have been hoovered up, and are now sitting on a government server for law enforcement to unconstitutionally sift through at their
leisure.
This letter was motivated by the fact that representatives of the National Security Agency (NSA) have promised in the past to provide the public with an estimate of how many
U.S. persons—that is, people on U.S. soil—have had their communications “incidentally” collected through the surveillance authority Section 702 of the FISA Amendments
Act.
As the letter states, “ODNI and NSA cannot expect public trust to be unconditional. If ODNI and NSA continue to renege on pledges to members of Congress, and to withhold
information that lawmakers, civil society, academia, and the press have persistently sought over the course of thirteen years, that public trust will be fatally undermined.”
Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large and small telecommunications service providers which hand over the digital
data and communications they oversee. While Section 702 prohibits the NSA from intentionally targeting Americans with this mass surveillance, these
agencies routinely acquire a huge amount of innocent Americans' communications “incidentally”
because, as it turns out, people in the United States communicate with people overseas all the time. This means that the U.S. government ends up with
a massive pool consisting of the U.S. side of conversations as well as communications from all over the globe. Domestic law enforcement agencies, including the Federal Bureau of
Investigation (FBI), can then conduct backdoor warrantless searches of these “incidentally collected”
communications.
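A toy model, ours and not a description of any actual agency system, makes the mechanics plain: only foreigners abroad are targeted, yet the pool still fills with the U.S. side of their conversations, and a later warrantless query can retrieve them:

    conversations = [
        {"parties": ["target_abroad", "us_person_1"]},
        {"parties": ["target_abroad", "foreign_person"]},
        {"parties": ["us_person_1", "us_person_2"]},
    ]
    targets = {"target_abroad"}  # Section 702: foreigners abroad only

    # "Incidental" collection: pool everything touching a target.
    pool = [c for c in conversations if targets & set(c["parties"])]

    # Backdoor search: query the pool for a U.S. person, no warrant required.
    hits = [c for c in pool if "us_person_1" in c["parties"]]
    print(len(pool), len(hits))  # 2 pooled, 1 about a never-targeted U.S. person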
For over 10 years, EFF has fought hard every time Section 702 expires in the hope that we can get some much-needed reforms into any bills that seek to reauthorize the authority.
Most recently, in spring 2024, Congress
renewed Section 702 for another two years with none of the changes necessary to restore privacy rights.
While we wait for the upcoming opportunity to fight Section 702, joining our allies to sign on to this letter in the fight for transparency will give us a better
understanding of the scope of the problem.
You can read the whole letter here.
EFF to Massachusetts’ Highest Court: Pretrial Electronic Monitoring Should Not Eviscerate Privacy Rights
(Tue, 22 Oct 2024)
When someone is placed on location monitoring for one purpose, it does not justify law enforcement’s access to that information for a completely different
purpose without a proper warrant.
EFF joined the Committee for Public Counsel Services, ACLU, ACLU of Massachusetts, and the Massachusetts Association of Criminal Defense Lawyers, in filing
an amicus brief in the Massachusetts Supreme Judicial Court, in Commonwealth v. Govan, arguing just that.
In this case, the defendant Anthony Govan was subjected to pretrial electronic monitoring as a condition of release prior to trial. In investigating a
completely unrelated crime, the police asked the pretrial electronic monitoring division for the identity and location of “anyone” who was near the location of this latter incident.
Mr. Govan’s data was part of the response, and that information was used against him in this unrelated case.
Our joint amicus brief highlighted the coercive nature of electronic monitoring programs. When the alternative is being locked up, there is no meaningful
consent to the collection of information under electronic monitoring. At the same time, as someone on pretrial release, Mr. Govan had a reasonable expectation of privacy in his
location information. As courts, including the U.S. Supreme Court, have recognized, location and movement information are incredibly sensitive and revealing. Just because someone is on
electronic monitoring, it doesn’t mean they have no expectation of privacy, whether they are going to a political protest, a prayer group, an abortion clinic, a gun show, or their
private home. Pretrial electronic monitoring collects this information around the clock—information that otherwise would not have been available to law enforcement through traditional
tools.
The violation of privacy is especially problematic in this case, because Mr. Govan had not been convicted and is still presumed to be innocent. According to
current law, those on pretrial release are entitled to far stronger Fourth Amendment protections than those
who are on monitored release after a conviction. As argued in the amicus brief, absent a proper warrant, the
information gathered by the electronic monitoring program should only be used to make sure Mr. Govan was complying with his pretrial release conditions.
Lastly, although this case is decided on the absence of a warrant or a warrant exception, we argued that the court should provide guidance for future
warrants. The Fourth Amendment and its state corollaries prohibit “general warrants,” akin to a fishing expedition, and instead require that warrants meet nexus and particularity
requirements. Bulk location data requests like the one in this case cannot meet that standard.
While electronic monitoring is marketed as an alternative to detention, the evidence does not bear this
out. Courts should not allow the information gathered from this expansion of state surveillance to be used beyond its
purpose without a warrant.
U.S. Border Surveillance Towers Have Always Been Broken
(Mon, 21 Oct 2024)
A new bombshell scoop
from NBC News revealed an internal U.S. Border Patrol memo claiming that 30 percent of camera towers that compose the agency's "Remote Video Surveillance System" (RVSS) program
are broken. According to the report, the memo describes "several technical problems" affecting approximately 150 towers.
Except, this isn't a bombshell. What should actually be shocking is that Congressional leaders are acting shocked, like those who recently sent a letter about the towers to Department of Homeland Security (DHS)
Secretary Alejandro Mayorkas. These revelations simply reiterate what people who have been watching border technology have known for decades: Surveillance at the U.S.-Mexico border is
a wasteful endeavor that is ill-equipped to respond to an ill-defined problem.
Yet, after years of bipartisan recognition that these programs were straight-up boondoggles, there seems to be a competition among political leaders to throw the most money at
programs that continue to fail.
Official oversight reports about the failures, repeated breakages, and general ineffectiveness of these camera towers have been public since at least the mid-2000s. So why
haven't border security agencies confronted the problem in the last 25 years? One reason is that these cameras are largely political theater; the technology dazzles publicly, then
fizzles quietly. Meanwhile, communities that should be thriving at the border are treated like a laboratory for tech companies looking to cash in on often exaggerated—if not
fabricated—homeland security threats.
The Acronym Game
EFF is mapping surveillance at the U.S.-Mexico border.
In fact, the history of camera towers at the border is an ugly cycle. First, Border Patrol introduces a surveillance program with a catchy name and big promises. Then a few
years later, oversight bodies, including Congress, conclude it's an abject mess. But rather than abandon the program once and for all, border security officials come up with a new
name, slap on a fresh coat of paint, and continue on. A few years later, history repeats.
In the early 2000s, there was the Integrated Surveillance Intelligence System (ISIS), with the installation of RVSS towers in places like Calexico, California and Nogales,
Arizona, which later became America's Shield Initiative (ASI). After those failures, there was Project 28 (P-28), the first stage of the Secure Border Initiative (SBInet).
When that program was canceled, there were various new programs like the Arizona Border Surveillance Technology Plan, which became the Southwest Border Technology Plan. Border Patrol
introduced the Integrated Fixed Tower (IFT) program and the RVSS Update program, then the Automated Surveillance Tower (AST) program. And now we've got a whole slew of new acronyms,
including the Integrated Surveillance Tower (IST) program and the Consolidated Towers and Surveillance Equipment (CTSE) program.
Feeling overwhelmed by acronyms? Welcome to the shell game of border surveillance. Here's what happens whenever oversight bodies take a closer look.
ISIS and ASI
An RVSS from the early 2000s in Calexico, California.
Let's start with the Integrated Surveillance Intelligence System (ISIS), a program comprised of towers, sensors and databases originally launched in 1997 by the Immigration and
Naturalization Service. A few years later, INS was reorganized into the U.S. Department of Homeland Security (DHS), and ISIS became part of the newly formed Customs & Border
Protection (CBP).
It was only a matter of years before the DHS Inspector General concluded that
ISIS was a flop: "ISIS remote surveillance technology yielded few apprehensions as a percentage of detection, resulted in needless investigations of
legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts."
During Senate hearings, Sen. Judd Gregg (R-NH) complained about a
"total breakdown in the camera structures," and that the U.S.
government "bought cameras that didn't work."
Around 2004, ISIS was folded into the new America's Shield Initiative (ASI), which officials claimed would fix those problems. CBP Commissioner Robert Bonner even promoted ASI as a "critical part of CBP’s strategy to build smarter borders." Yet, less than a
year later, Bonner stepped down, and the Government Accountability Office (GAO) found the ASI had numerous unresolved issues necessitating a total reevaluation. CBP disputed none of the findings and
explained it was dismantling ASI in order to move on to something new that would solve everything:
the Secure Border Initiative (SBI).
Reflecting on the ISIS/ASI programs in 2008, Rep. Mike Rogers (R-MI)
said, "What we found was a camera and sensor system that was plagued by mismanagement, operational problems, and financial waste. At that time, we put the Department on notice that
mistakes of the past should not be repeated in SBInet."
You can guess what happened next.
P-28 and SBInet
The subsequent iteration was called Project 28, which then evolved into the Secure Border Initiative's SBInet, starting in the Arizona desert.
In 2010, the DHS Chief Information Officer summarized its comprehensive
review: "'Project 28,' the initial prototype for the SBInet system, did not perform as planned. Project 28 was not scalable to meet the mission requirements for a
national comment [sic] and control system, and experienced significant technical difficulties."
A DHS graphic illustrating the SBInet concept
Meanwhile, bipartisan consensus had emerged about the failure of the program, due to the technical problems as well as contracting irregularities and cost overruns.
As Rep. Christopher Carney (D-PA) said in his prepared statement
during Congressional hearings:
P–28 and the larger SBInet program are supposed to be a model of how the Federal Government is leveraging technology to secure our borders, but Project 28, in my mind, has
achieved a dubious distinction as a trifecta of bad Government contracting: Poor contract management; poor contractor performance; and a poor final product.
Rep. Rogers' remarks were even more cutting: "You know the history of ISIS and what a disaster that was, and we had hoped to take the lessons from that and do better on this
and, apparently, we haven’t done much better."
Perhaps most damning of all was yet another GAO report that found, "SBInet
defects have been found, with the number of new defects identified generally increasing faster than the number being fixed—a trend that is not indicative of a system that is
maturing."
In January 2011, DHS Secretary Janet Napolitano canceled
the $3-billion program.
IFTs, RVSSs, and ASTs
Following the termination of SBInet, the Christian Science Monitor ran the naive headline, "US cancels 'virtual fence' along Mexican border. What's Plan
B?" Three years later, the newspaper answered its own question with another question, "'Virtual' border fence idea revived. Another 'billion dollar
boondoggle'?"
Boeing was the main contractor blamed for SBInet's failure, but Border Patrol ultimately awarded one of the biggest new contracts to Elbit Systems, which had been one of
Boeing's subcontractors on SBInet. Elbit began installing IFTs (again, that stands for "Integrated Fixed Towers") in many of the exact same places slated for SBInet. In some cases,
the equipment was simply swapped on an existing SBInet tower.
Meanwhile, another contractor, General Dynamics Information Technology, began installing new RVSS towers and upgrading old ones as part of the RVSS-U program. Border Patrol also
started installing hundreds of "Autonomous Surveillance Towers" (ASTs) by yet another vendor, Anduril Industries, embracing the new buzz of artificial intelligence.
An Autonomous Surveillance Tower and an RVSS tower along the Rio Grande.
In 2017, the GAO complained the Border Patrol's poor data quality
made the agency "limited in its ability to determine the mission benefits of its surveillance technologies." In one case, Border Patrol stations in the Rio Grande Valley claimed IFTs
assisted in 500 cases in just six months. The problem with that assertion was that there are no IFTs in Texas or, in fact, anywhere outside Arizona.
A few years later, the DHS Inspector General issued yet another report
indicating not much had improved:
CBP faced additional challenges that reduced the effectiveness of its existing technology. Border Patrol officials stated they had inadequate personnel to fully leverage
surveillance technology or maintain current information technology systems and infrastructure on site. Further, we identified security vulnerabilities on some CBP servers and
workstations not in compliance due to disagreement about the timeline for implementing DHS configuration management requirements.
CBP is not well-equipped to assess its technology effectiveness to respond to these deficiencies. CBP has been aware of this challenge since at least 2017 but lacks a
standard process and accurate data to overcome it.
Overall, these deficiencies have limited CBP’s ability to detect and prevent the illegal entry of noncitizens who may pose threats to national security.
Around that same time, the RAND Corporation published a study funded by DHS that found "strong evidence" the IFT program was having
no impact on apprehension levels at the border, and only "weak" and "inconclusive" evidence that the RVSS towers were having any effect on apprehensions.
And yet, border authorities and their supporters in Congress are continuing to promote unproven, AI-driven technologies as the latest remedy for years of failures, including the
ones voiced in the memo obtained by NBC News. These systems involve cameras controlled by algorithms that automatically identify and track objects or people of interest. But in an age
when algorithmic errors and bias are being identified nearly every day in every sector, including law enforcement, it is unclear how this technology has earned the trust of the
government.
History Keeps Repeating
That brings us to today, with reportedly 150 or more towers out of service. So why does Washington keep
supporting surveillance at the border? Why are they proposing record-level funding for a system that seems irreparable? Why have they abandoned their duty to scrutinize federal
programs?
Well, one reason may be that treating problems at the border as humanitarian crises or pursuing foreign policy or immigration reform measures isn't as politically useful as
promoting a phantom "invasion" that requires a military-style response. Another reason may be that tech companies and defense contractors
wield immense amounts of influence and stand to make millions, if not billions, profiting off border surveillance. The price is paid by taxpayers, but also in the civil
liberties of border communities and the human rights of asylum seekers and migrants.
But perhaps the biggest reason this history keeps repeating itself is that no one is ever really held accountable for wasting potentially billions of dollars on high-tech snake
oil.
EFF to Third Circuit: TikTok Has Section 230 Immunity for Video Recommendations
(Sat, 19 Oct 2024)
UPDATE: On October 23, 2024, the Third Circuit denied TikTok's petition for rehearing en banc.
EFF legal intern Nick Delehanty was the principal author of this post.
EFF filed an amicus brief in the U.S. Court of Appeals for the Third Circuit in support of
TikTok’s request that the full court reconsider the case Anderson v. TikTok after a
three-judge panel ruled that Section 230 immunity doesn’t apply to TikTok’s recommendations of users’ videos. We argued that the panel was incorrect on the law, and this case has
wide-ranging implications for the internet as we know it today. EFF was joined on the brief by the Center for Democracy & Technology (CDT), Foundation for Individual Rights and
Expression (FIRE), Public Knowledge, Reason Foundation, and Wikimedia Foundation.
At issue is the panel’s misapplication of First Amendment precedent. The First Amendment protects the editorial decisions of publishers about whether
and how to display content, such as the videos TikTok displays to users through its recommendation algorithm.
Additionally, because common law holds publishers liable for other people’s content that they publish (for example, letters to the editor that are defamatory in print
newspapers) due to limited First Amendment protection, Congress passed Section 230 to protect online platforms from liability for harmful user-generated content.
Section 230 has been pivotal for the growth and diversity of the internet—without it, internet intermediaries would potentially be
liable for every piece of content posted by users, making them less likely to offer open platforms for third-party speech.
In this case, the Third Circuit panel erroneously held that since TikTok enjoys protection for editorial choices under the First Amendment, TikTok’s recommendations of user videos
amount to TikTok’s first-party speech, making it ineligible for Section 230 immunity. In our brief, we argued that First Amendment protection for editorial choices and Section 230
protection are not mutually exclusive.
We also argued that the panel’s ruling does not align with what every other circuit has found: that Section 230 also immunizes the editorial decisions of internet intermediaries. We
made four main points in support of this argument:
First, the panel ignored the text of Section 230 in that editorial choices are included in the commonly understood definition of “publisher” in the statute.
Second, the panel created a loophole in Section 230 by allowing plaintiffs who were harmed by user-generated content to bypass Section 230 by focusing on an online platform’s
editorial decisions about how that content was displayed.
Third, it’s crucial that Section 230 protects editorial decisions notwithstanding additional First Amendment protection because Section 230 immunity is not only a defense against
liability, it’s also a way to end a lawsuit early. Online platforms might ultimately win lawsuits on First Amendment grounds, but the time and expense of protracted litigation would
make them less interested in hosting user-generated content. Section 230’s immunity from suit (as well as immunity from liability) advances Congress’ goal of encouraging speech at
scale on the internet.
Fourth, TikTok’s recommendations specifically are part of a publisher’s “traditional editorial
functions” because recommendations reflect choices around the display of third-party content and so are protected by Section 230.
We also argued that allowing the panel’s decision to stand would harm not only internet intermediaries, but all internet users. If internet intermediaries were liable for recommending
or otherwise deciding how to display third-party content posted to their platforms, they would end useful content curation and engage in heavy-handed censorship to remove anything
that might be legally problematic from their platforms. These responses to a weakened Section 230 would greatly limit users’ speech on the internet.
The full Third Circuit should recognize the error of the panel’s decision and reverse to preserve free expression online.
A Flourishing Internet Depends on Competition
(Fri, 18 Oct 2024)
Antitrust law has long recognized that monopolies stifle innovation and gouge consumers on price. When it comes to Big Tech, harm to innovation—in the form of “kill zones,” where major corporations buy up new entrants to a market before they can compete with them—has been easy to find. Consumer harms have been harder to quantify, since many of the services the Big Tech companies offer are “free.” This is why we must move beyond price as the major determinant of consumer harm. Once we do, it’s easier to see the broader benefits competition brings to the entire internet ecosystem.
In the decades since the internet entered our lives, it has changed from a wholly new and untested environment to one where a few major players dominate everyone’s experience. Policymakers have been slow to adapt and have equated what’s good for the whole internet with what is good for those companies. Instead of a balanced ecosystem, we have a monoculture. We need to eliminate the buildup of power around the giants and instead cultivate fertile soil for new growth.
Content Moderation
In content moderation, for example, it’s practically rote for experts to say that moderation is impossible at scale. Facebook reports over three billion active users and is available in over 100 languages, yet it is an American company that does its business primarily in English. Communication, in every culture, is heavily dependent on context. Even if Facebook were hiring experts in every language it operates in, which it manifestly is not, the company itself runs on American values. Being able to choose a social media service rooted in your own culture and language matters. People don’t have to choose that service, but they should have the option.
This sometimes happens in smaller fora. For example, the knitting website Ravelry, a central hub for patterns and discussions about yarn, banned all discussion of then-President Donald Trump in 2019 after it turned toxic. A number of disgruntled users banded together to make their disallowed content available elsewhere.
In a competitive landscape, instead of demanding that Facebook, Twitter, or YouTube have the exact content rules you want, you could pick a service with the rules you prefer. If you want everything protected by the First Amendment, you could find it. If you want an environment with clear rules, consistently enforced, you could find that too. That’s especially true because smaller platforms, unlike the current behemoths, could actually enforce their rules.
Product Quality
The same thing applies to product quality and the “enshittification” of platforms. Even if all of Facebook’s users spoke the same language, that’s no guarantee that they share the same values, needs, or wants. Yet Facebook conducts its business largely in English and according to American cultural norms. As it is, Facebook’s feeds are designed to maximize user engagement and time on the service. Some people may like the recommendation algorithm, but others may want the traditional chronological feed. There’s no incentive for Facebook to offer the choice because it isn’t worried about losing users to a competitor that does; it’s concerned with serving as many ads to as many people as possible. In general, Facebook lacks user controls that would let people customize their experience on the site: the ability to reorder the feed chronologically, to eliminate posts from anyone you don’t know, and so on. Those who like the current, ad-focused algorithm are served; everyone else has no way to get a product they would like.
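To make the design choice concrete, here is a minimal TypeScript sketch, not Facebook’s actual ranking code: `Post`, `engagementScore`, and `orderFeed` are hypothetical names, illustrating the two feed orderings at issue.

```typescript
// A minimal sketch, not Facebook's actual ranking code. `Post`,
// `engagementScore`, and `orderFeed` are hypothetical names.

interface Post {
  author: string;
  postedAt: Date;   // when the post was created
  likes: number;    // crude engagement signals
  comments: number;
}

type FeedMode = "chronological" | "engagement";

// A toy engagement score; real systems use far richer signals.
function engagementScore(p: Post): number {
  return p.likes + 2 * p.comments;
}

function orderFeed(posts: Post[], mode: FeedMode): Post[] {
  const copy = [...posts];
  if (mode === "chronological") {
    // Newest first: the "traditional" feed some users prefer.
    copy.sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());
  } else {
    // Highest engagement first: the ad-friendly default.
    copy.sort((a, b) => engagementScore(b) - engagementScore(a));
  }
  return copy;
}

// The same posts, two different feeds.
const posts: Post[] = [
  { author: "alice", postedAt: new Date("2024-10-15"), likes: 3, comments: 0 },
  { author: "bob", postedAt: new Date("2024-10-01"), likes: 50, comments: 20 },
];
console.log(orderFeed(posts, "chronological")[0].author); // "alice" (newer)
console.log(orderFeed(posts, "engagement")[0].author);    // "bob" (higher score)
```

The point of the sketch is how small the technical difference is: both orderings consume the same posts, and exposing the mode as a user setting would be trivial. The reason the choice isn’t offered is incentives, not engineering.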
Another obvious example is how much the experience of googling something has deteriorated. It’s almost a cliché to complain about it now, but when it started, Google was revolutionary in its ability to a) find exactly what you were searching for and b) allow natural-language searching (that is, not requiring you to construct Boolean queries to get the desired result). Google’s secret sauce was, for a long time, the ability to find the right result for a totally unique search query. If you could remember some specific string of words in the thing you were looking for, Google could find it. However, in the endless hunt for “growth,” Google moved away from quality search results and toward quantity. It also clogged the first page of results with ads and sponsored links.
Morals, Privacy, and Security
Many individuals and small businesses would like to avoid using Big Tech services, either because the services are poor or because of ethical and moral objections. But the bigger a company is, the harder it is to avoid. For example, even if someone decides not to buy products from Amazon.com because they disagree with how it treats its workers, they may not be able to avoid patronizing Amazon Web Services (AWS), which funds the commerce side of the business. Netflix, The Guardian, Twitter, and Nordstrom all pay for Amazon’s services. The Mississippi Department of Employment Security moved its data management to Amazon in 2021. Trying to avoid Amazon entirely is functionally impossible. This means there is no way for people to “vote with their feet” and withhold their business from companies they disagree with.
Security and privacy are also at risk without competition. For one thing, it’s easier for a malicious actor or oppressive state to get what they want when everything is in the hands of a single company—a single point of failure. When a single company controls the tools everyone relies on, an outage cripples the globe. This digital monoculture was on display during this year’s CrowdStrike outage, in which one badly-thought-out update crashed networks across the world and across industries. The personal danger of digital monoculture shows itself when Facebook messages are used in a criminal investigation against a mother and daughter discussing abortion, and in “geofence warrants” that demand Google turn over information about every device within a certain distance of a crime. For another thing, when everyone can only share expression in a few places, it becomes easier for regimes to target certain speech and for gatekeepers to maintain control over creativity.
Another example of the relationship between privacy and competition is Google’s so-called “Privacy Sandbox.” Google has messaged it as removing the “third-party cookies” that track you across the internet. However, the change actually just moved that data into the sole control of Google, helping cement its ad monopoly. Instead of eliminating tracking, the Privacy Sandbox does the tracking within the browser directly, allowing Google to charge advertisers and websites for access to the insights gleaned from your browsing history, rather than letting those companies gather the data themselves. It’s not more privacy; it’s just concentrated control of data.
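To illustrate where the tracking moved, here is a minimal sketch of a page querying the Topics API, a Privacy Sandbox mechanism offered in place of third-party cookies. `document.browsingTopics()` is the real Chrome entry point, but the result shape and field names below are assumptions for illustration.

```typescript
// A minimal sketch of a site asking the browser for your interests.
// `document.browsingTopics()` is the Chrome Topics API entry point;
// the `BrowsingTopic` shape below is an assumption for illustration.

interface BrowsingTopic {
  topic: number;             // index into the topic taxonomy (assumed field)
  taxonomyVersion?: string;  // assumed field
}

async function fetchInterests(): Promise<void> {
  // Feature-detect: only Chrome with the Privacy Sandbox enabled has this.
  const doc = document as Document & {
    browsingTopics?: () => Promise<BrowsingTopic[]>;
  };
  if (!doc.browsingTopics) {
    console.log("Topics API unavailable in this browser.");
    return;
  }
  // The browser itself, not a third-party cookie, has already inferred
  // these interests from your recent browsing history.
  const topics = await doc.browsingTopics();
  console.log("Interests inferred inside the browser:", topics);
}

void fetchInterests();
```

Whatever the exact fields, the structural point stands: the interest profile is computed inside Google’s browser from your history, and sites obtain it by asking the browser, so the data pipeline now runs through Google rather than through each site’s own cookies.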
You see the same thing at play with Apple’s App Store in the saga of Beeper Mini, an app that allowed secure communications through iMessage between Apple and non-Apple phones. In doing so, it eliminated the dreaded “green bubbles” indicating that messages were not encrypted (i.e., not between two iPhones). While that design choice was, in theory, meant to flag that your conversation wasn’t secure, in practice it motivated people to buy iPhones just to avoid the stigma. Beeper Mini made messages more secure and removed the need to buy a whole new phone to get rid of the green bubble. So Apple moved to break Beeper Mini, effectively choosing monopoly over security. If Apple had moved to secure non-iPhone messages on its own, that would be one thing. But it didn’t; it just prevented users from securing them on their own.
Obviously, competition isn’t a panacea. But, like privacy, prioritizing it means less emergency firefighting and more fire prevention. Think of it as a controlled burn: clearing the dross that smothers new growth and fuels ever-larger fires.