Copyright is a Civil Liberties Nightmare
(Fri, 31 Jan 2025)
If you’ve got lawyers and a copyright, the law gives you tremendous power to silence speech you don’t like. Copyright’s statutory damages can be as high as $150,000 per work
infringed, even if no actual harm is done. This makes it far too dangerous to rely on copyright’s limitations and exceptions, such as fair use, as you may face a financial death sentence if a court
decides you got it wrong. Most would-be speakers back down in the face of such risks, no matter how legitimate their use. The Digital
Millennium Copyright Act provides an incentive for platforms to remove content on your say-so, without a judge ever reviewing your papers. The special procedures and damages
available to copyright owners make it one of the most appealing mechanisms for removing unwanted speech from the internet. Copyright
owners have intimidated researchers away from disclosing that their software spies on users or is full of bugs that make it unsafe. When a blockbuster entertainment product inspires
people to tell their own stories by depicting themselves in the same world or costumes, a letter from the studio’s lawyers will usually convince them to stay silent. And those who
sell software write their own law into End User License Agreements and can threaten any user who disobeys them with copyright damages.
These are only a few of the ways that copyright is a civil liberties nightmare in the modern age, and only a few of the abuses of copyright that we fight against in court.
Copyright started out as a way for European rulers to ensure that
publishers remained friendly to the government, and we still see this dynamic in the cozy relationship between Hollywood and the US military and police forces. But more and more it’s been a way for
private entities that are already powerful to prevent both market competition and contrary ideas from challenging their dominance.
The imbalance of power between authors and the owners of mass media is the main reason that authors only get a small share of the value they create. Copyright is at its best when it
protects a creator from being beaten to market by those who own mass media channels, giving them some leverage to negotiate. With that small bit of leverage, they can get paid
something rather than nothing, though the publishing deals in highly concentrated industries are famously one-sided.
But, too often, we see copyright at its worst instead, and there is no good reason for copyright law to be as broad and draconian as it is now. It lasts essentially forever, as
you will probably be dead before the works you cherished as a child enter the public domain. It is uniquely favored by the courts as a means for controlling speech, with
ordinary First Amendment considerations taking a back seat to the interests of content owners. The would-be speaker has to prove their right to speak: for example, by persuading a
court that they were making a fair use. And the penalties for a court deciding your use was infringing are devastating.
It’s even used as a supposed justification for spying on and filtering the internet. Anyone familiar with automated copyright controls like ContentID on YouTube
knows how restrictive they
tend to be. Bizarrely, copyright has grown so broad that it doesn’t just bar others from reproducing a work or adapting it into
another medium such as film; it even prevents making original stories with a character or setting “owned” by the copyright owner. For the vast majority of our history, humans have
built on and retold one another’s stories. Culture has always been a conversation, not a product that is packaged up for consumption.
The same is true for innovation, with a boom in software technology coming before copyright was applied to software. And, thanks to free software licenses that remove the
default, restrictive behavior of copyright, we have communities of scrappy innovators building tools that we all rely upon for a functioning internet. When the people who depend upon
a technology have a say in creating it and have the option to build their own to suit their needs, we’re much more likely to get technology that serves our interests and respects our
privacy and autonomy. That's far superior to technology that comes into our homes as an agent of its creators, seeking to exploit us for advertising data, or limit our choices of apps
and hardware to serve another’s profit motive. EFF has been at the vanguard for decades, fighting back against copyright overreach
in the digital world. More than ever, people need to be able to tell their stories, to criticize the powerful and the status quo, and to communicate with technologies that aren’t
censored by overzealous copyright bots.
Executive Order to the State Department Sideswipes Freedom Tools, Threatens Censorship Resistance, Privacy, and Anonymity of Millions
(Thu, 30 Jan 2025)
In the first weeks of the Trump Administration, we have witnessed a spate of sweeping, confusing, and likely unconstitutional executive orders, including
some that have already had devastating human consequences. EFF is tracking many of them, as well as other developments that impact digital rights.
Right now, we want to draw attention to one of the executive orders that directly impacts the freedom tools that people around the world rely on to
safeguard their security, privacy, and anonymity. EFF understands how critical these tools are – protecting the ability to make and share anti-censorship, privacy-, and
anonymity-protecting technologies has been central to our work since the Crypto
Wars of the 1990s.
This executive order, “Reevaluating and Realigning United States Foreign Aid,” has led the State Department to immediately suspend its contracts with hundreds of organizations in the
U.S. and around the world that
have received support through programs administered by the State Department, including through its Bureau of Democracy, Human Rights, and Labor. This includes many freedom
technologies that use cryptography, fight censorship, and protect freedom of speech, privacy, and anonymity for millions of people around the world. While the State Department has
issued some limited waivers, so far those waivers do not seem to cover the open source internet freedom technologies. As a result, many of these projects have to stop or
severely curtail their work, lay off talented workers, and stop or slow further development.
There are many examples of freedom technologies, but here are a few that should be readily understandable to EFF’s audience:
The Tor Project, which helps ensure that people can navigate the internet securely and privately, without fear of being tracked, both protecting themselves and avoiding censorship.
The Guardian Project, which creates privacy tools, open-source software libraries, and customized software solutions that can be used by individuals and groups around the world to protect personal data from unjust intrusion, interception, and monitoring.
The Open Observatory of Network Interference (OONI), which has been carefully measuring government internet censorship in countries around the world since 2012.
The Save App from OpenArchive, a mobile app designed to help people securely archive, verify, and encrypt their mobile media and preserve it on the Internet Archive and decentralized web storage.
We hope that cutting off support for these and similar tools and technologies of freedom is only a temporary oversight, and that more clear thinking about
these and many similar projects will result in full reinstatement. After all, these tools support people working for freedom consistent with this administration’s foreign policy
objectives, including in places like Iran, Venezuela, Cuba, North Korea, and China, just to name a few. By
helping people avoid censorship, protect their speech, document human rights abuses, and retain privacy and anonymity, this work literally saves lives.
U.S. government funding helps these organizations do the less glamorous work of developing and maintaining deeply technical tools and getting them into the
hands of people who need them. That is, and should remain, in the U.S. government’s interest. And sadly, it’s not work that is easily fundable otherwise. But technical people
understand that these tools require ongoing support by dedicated, talented people to keep them running and available.
It’s hard to imagine that this work does not align with U.S. government priorities under any administration, and certainly not one that has stressed its
commitment to fighting censorship and supporting digital technologies like cryptocurrencies that use some of the same privacy and anonymity-protecting techniques. These organizations
exist to use technology to protect freedom around the world.
We urge the new administration to restore support for these critical internet freedom tools.
The Internet Never Forgets: Fighting the Memory Hole
(Thu, 30 Jan 2025)
If there is one axiom that we should want to be true about the internet, it should be: the internet never forgets. One of the advantages of our advancing
technology is that information can be stored and shared more easily than ever before. And, even more crucially, it can be stored in multiple places.
Those who back things up and index information are critical to preserving a shared understanding of facts and history, because the powerful will always seek
to influence the public’s perception of them. It can be as subtle as organizing a campaign to downrank articles about their misdeeds, or as unsubtle as removing previously available
information about themselves.
This is often called “memory-holing,” after the incinerator chutes in George Orwell’s 1984
that burned any reference to the past that the government had changed. One prominent pre-internet example is Disney’s ongoing battle to remove
Song of the South from public consciousness. (One can
wonder whether they might have succeeded if not for the internet.) Instead of acknowledging mistakes, memory-holing allows powerful people, companies, and governments to pretend they never
made the mistake in the first place.
It also allows those same actors to pretend that they haven’t made a change, and
that a policy rule or definition has always been the same. This creates an impression of permanency where, historically, there was fluidity.
One of the fastest and easiest routes to the memory hole is a copyright claim. One particularly egregious practice is when a piece of media that is critical
of someone, or just embarrassing to them, is copied and
backdated. Then, that person or their agent claims their copy is the “original” and that the real article is “infringement.” Once the real
article is removed, the copy is taken down as well, and legitimate speech vanishes.
Another frequent tactic is to claim copyright infringement when someone’s own words, images, or websites are used against them, despite it being fair use. A
recent example is reporter Marisa Kabas receiving a takedown
notice for sharing a screenshot of a politician’s campaign website that showed him with his cousin, alleged UHC shooter Luigi Mangione. The
screenshot was removed out of an abundance of caution, but proof of something newsworthy should not be so easy to disappear. And it wasn’t. The politician's website was changed to
remove the picture, but a copy of the website before the change is preserved via the Internet Archive’s Wayback Machine.
In fact, the Wayback Machine is one of the best tools people have to fight memory-holing. Changing your own website is the first step to making embarrassing
facts disappear, but the Wayback Machine preserves earlier versions. Some seek to use copyright to have entire websites blocked or taken down, and once again the Wayback Machine
preserves what once was.
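For readers who want to check programmatically whether an earlier version of a page has been preserved, the Internet Archive offers a public “availability” endpoint. The short Python sketch below queries it for the snapshot closest to a given date; the endpoint and response fields follow the Archive’s documented availability API at the time of writing, so verify them before relying on this.

```python
# Minimal sketch: ask the Wayback Machine's availability API for the snapshot of a
# page closest to a given date. Requires the `requests` package.
import requests

def closest_snapshot(url: str, timestamp: str = "20250101") -> str | None:
    """Return the archived snapshot URL closest to `timestamp` (YYYYMMDD), or None."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=10,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

if __name__ == "__main__":
    # Example: look up an old capture of a page that may have since been changed.
    print(closest_snapshot("https://example.com", "20200101"))
```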
This isn’t to say that everyone should be judged by their worst day, immortalized on the internet forever. It is to say that tools to remove those things
will, ultimately, be of more use to the powerful than to the everyday person. Copyright does not let you disappear bad news about yourself. Because the internet never
forgets.
Protect Your Privacy on Bumble
(Thu, 30 Jan 2025)
Late last year, Bumble finally rolled out its updated privacy policy after a coalition of twelve
digital rights, LGBTQ+, human rights, and gender justice civil society organizations launched a campaign demanding stronger data
protections.
Unfortunately, the company, like other dating apps, has not moved
far enough, and continues to burden users with the responsibility of navigating misleading privacy settings on the app, as well as absorbing the consequences of infosec gaps, however
severe.
This should not be your responsibility—dating apps like Bumble should be prioritizing your privacy by default. This data falling into the wrong hands can come with
unacceptable consequences, especially for those
seeking reproductive health care, survivors of intimate partner violence, and members of the LGBTQ+ community. Laws should require companies to put our privacy over their profit, and we’re fighting hard for the introduction of comprehensive data privacy legislation in the U.S.
to achieve this.
But in the meantime, here’s a step-by-step guide on how to protect yourself and your most intimate information whilst using the dating service.
Review Your Login Information
When you create a Bumble account, you have the option to use your phone number as a login, or use your Facebook, Google (on Android), or Apple (on iOS) account. If you use your
phone number, you’ll get verification texts when you log in from a new device and you won’t need any
sort of password.
Using your Apple, Google, or Facebook account might share some data with those services, but can also be a useful backup plan if you lose access to your phone number for
whatever reason. Deciding if that trade-off is worth it is up to you. If you do choose to use those services, be sure to use a strong, unique password for your accounts and two-factor authentication. You can always review these login methods and add or remove one if
you don’t want to use it anymore.
Tap the Profile option, then the gear in the upper-right corner. Scroll down to Security and Privacy > Ways you can log in
and review your settings.
You can also optionally link your Spotify account to your Bumble profile. While this should only display your top artists, depending on how you use Spotify there’s always a
chance a bug or change might reveal more than you intend. You can disable this integration if you want:
Tap the Profile option, then “Complete Profile,” and scroll down to the Spotify section at the bottom of that page. If the “Connect my Spotify” box is checked, tap it to
uncheck the box. You can also follow Spotify’s directions to revoke app access
there.
Disable Bumble’s Behavioral Ads
You don’t have many privacy options on Bumble, but there is one important setting we recommend changing: disable behavioral ads. By default, Bumble can take information from your profile and use that
to display targeted ads, which track and target you based on your supposed interests. It’s best to turn this feature off:
Tap the profile option, then the gear in the upper-right corner.
If you’re based in the U.S., scroll down to Security and Privacy > Privacy settings, and enable the option for
“Do not use my profile information to show me relevant ads.”
If you’re based in Europe, scroll down to Security and Privacy > Privacy settings, and click “Reject
all.”
You should also disable the advertising ID on your phone, helping limit what Bumble—and any other app—can access about you for behavioral ads.
iPhone: Open Settings > Privacy & Security > Tracking, and set the toggle
for “Allow Apps to Request to Track” to off.
Android: Open Settings > Security & privacy > Privacy controls >
Ads, and tap “Delete advertising ID.”
Review the Bumble Permissions on Your Phone
Bumble asks for a handful of permissions from your device, like access to your location and camera roll (and camera). It’s worth reviewing these permissions, and possibly
changing them.
Location
Bumble won’t work without some level of location access, but
you can limit what it gets by only allowing the app to access your location when you have the app open. You can deny access to your “precise location,” which is your exact spot, and
instead only provide a general location. This is sort of like providing the app access to your zip code instead of your exact address.
iPhone: Open Settings > Privacy & Security > Location Services >
Bumble. Select the option for “While Using the App,” and disable the toggle for “Precise Location.”
Android: Open Settings > Security & Privacy > Privacy Controls >
Permission Manager > Location > Bumble. Select the option to “Allow only while
using the app,” and disable the toggle for “Use precise location.”
Photos
In order to upload profile pictures, you’ve likely already given Bumble access to your photo roll. Giving Bumble access to your whole photo roll doesn’t upload
every photo you’ve ever taken, but it’s still good practice to limit what the app can even access so there’s less room for mistakes.
iPhone: Open Settings > Privacy & Security > Photos >
Bumble. Select the option for “Limited Access.”
Android: Open Settings > Security & Privacy > Privacy Controls > Permission Manager > Photos and
videos > Bumble. Select the option to “Allow limited access.”
Practice Communication Guidelines for Safer Use
As with any social app, it’s important to be mindful of what you share with others when you first chat, to not disclose any financial details, and to trust your gut if something
feels off. It’s also useful to review your profile information now and again to make sure you’re still comfortable sharing what you’ve listed there. Bumble has some more
instructions on how to protect your personal information.
If you decide you’re done with Bumble for good, then you should delete your account before deleting the app off your phone. In the Bumble app, tap the Profile option, then tap the
gear icon. Scroll down to the bottom of that page, tap “Delete Account” and follow the on-screen directions. Once complete, go ahead and delete the app.
Whilst the privacy options at our disposal may seem inadequate to meet the difficult moments ahead of us, especially for vulnerable communities in the United States and across
the globe, taking these small steps can prove essential to protecting you and your information. At the same time, we’re continuing our work with organizations like
Mozilla and Ultra Violet to ensure that all corporations—including dating apps like Bumble—protect our most important private information. Finding love should not involve such a
privacy-impinging tradeoff.
EFF to State AGs: Time to Investigate Crisis Pregnancy Centers
(Tue, 28 Jan 2025)
Discovering that you’re pregnant can trigger a mix of emotions—excitement, uncertainty, or even distress—depending on your circumstances. Whatever your feelings are, your next steps
will likely involve disclosing that news, along with other deeply personal information, to a medical provider or counselor as you explore your options.
Many people will choose to disclose that information to their trusted obstetricians, or visit their local Planned Parenthood clinic. Others, however, may instead turn to a crisis
pregnancy center (CPC). Trouble is, some of these centers may not be doing a great job of prioritizing or protecting their clients’ privacy.
CPCs (also known as “fake clinics”) are facilities that are often connected to religious organizations and have a strong anti-abortion stance. While many offer pregnancy tests, counseling, and
information, as well as limited medical services in some cases, they do not provide reproductive healthcare such as abortion or, in many cases, contraception. Some are licensed
medical clinics; most are not. Either way, these services are a growing enterprise: in 2022, CPCs
reportedly received $1.4 billion in revenue, including substantial federal and state funds.
Last year, researchers at the Campaign for Accountability filed multiple complaints urging
attorneys general in five states—Idaho, Minnesota, Washington, Pennsylvania, and New Jersey—to investigate crisis pregnancy centers that allegedly had misrepresented, through their
client intake process and/or websites, that information provided to them was protected by the Health Insurance Portability and Accountability Act (“HIPAA”).
Additionally, an incident in
Louisiana raised concerns that CPCs may be sharing client information with other centers in their affiliated networks, without appropriate privacy or anonymity protections. In
that case, a software training video inadvertently disclosed the names and personal information of roughly a dozen clients.
Unfortunately, these privacy practices aren’t confined to those states. For example, the Pregnancy Help Center, located in Missouri, states on its website that:
Pursuant to the Health Insurance Portability and Accountability Act (HIPAA), Pregnancy Help Center has developed a notice for patients, which provides a clear explanation of
privacy rights and practices as it relates to private health information.
And its Notice of Privacy Practices suggests oversight by the U.S. Department of Health and Human Services, instructing clients who feel their rights were violated to:
file a complaint with the U.S. Department of Health and Human Services Office for Civil Rights by sending a letter to 200 Independence Avenue, S.W., Washington, D.C. 20201,
calling 1-877-696-6775, or visiting www.hhs.gov/ocr/privacy/hipaa/complaints/.
Websites for centers in other states, such as Florida, Texas, and Arkansas, contain similar language.
As we’ve noted before, there are far too few protections for user privacy, including medical privacy, and individuals have little control over how their personal data is collected, stored, and used. Until
Congress passes a comprehensive privacy law that includes a private right of action, state attorneys general must take proactive steps to protect their constituents from unfair or
deceptive privacy practices. Accordingly, EFF has called on attorneys general in Florida, Texas, Arkansas, and Missouri to investigate potential privacy violations and hold accountable CPCs that engage in deceptive
practices.
Regardless of your views on reproductive healthcare, we should all agree that privacy is a basic human right, and that consumers deserve transparency. Our elected officials have a
responsibility to ensure that personal information, especially our sensitive medical data, is protected.
What Proponents of Digital Replica Laws Can Learn from the Digital Millennium Copyright Act
(Tue, 28 Jan 2025)
We're taking part in Copyright
Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every
day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make
sure that copyright promotes creativity and innovation.
Performers—and ordinary people—are understandably concerned that they may be replaced or defamed by AI-generated imitations. We’ve seen a host of state and
federal bills designed to address that concern, but every one just generates new problems.
One of the most pernicious proposals is the NO FAKES Act, and Copyright
Week is a good time to remember why. We’ve detailed the many problems of the bill before, but, ironically enough, one of the worst aspects is the bone it throws to critics who worry
the legislation’s broad provisions and dramatic penalties will lead platforms to over-censor online expression: a safe harbor scheme modeled on the DMCA notice and takedown process.
In essence, platforms can avoid liability if they remove all instances of allegedly illegal content once they are notified that the content is unauthorized.
Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every
single copy made, transmitted, or displayed is a separate violation, incurring a $5,000 penalty, which will add up fast. The bill does offer one not very useful carveout:
if a platform can prove in court that it had an objectively reasonable belief that the content was lawful, the
penalties for getting it wrong are capped at $1 million.
The safe harbors offer cold comfort to platforms and the millions of people who rely on them to create, share, and access content. The DMCA notice and
takedown process has offered important protections for the development of new venues for speech, helping creators find audiences and vice versa. Without those protections, Hollywood
would have had a veto right over all kinds of important speech tools and platforms, from basic internet service to social media and news sites to any other service that might be used
to host or convey copyrighted content, thanks to copyright’s ruinous statutory penalties. The risks of accidentally facilitating infringement would have been just too
high.
But the DMCA notice and takedown process has also been regularly abused
to target lawful speech. Congress knew this was a risk, so it built in some safeguards: a counter-notice process to help users get improperly
targeted content restored, and a process for deterring that abuse in the first place by allowing users to hold notice senders accountable when they misuse the process. Unfortunately,
some courts have mistakenly interpreted the latter provisions to require showing that the sender subjectively knew it was lying when it claimed the content was unlawful. That standard
is very hard to meet in most cases.
Proponents of a new digital replica right could have learned from that experience and created a notice process with strong provisions against abuse. Those
provisions are even more necessary here, where it would be even harder for providers to know whether a notice is false. Instead, NO FAKES offers fewer safeguards than the DMCA. For example, while the DMCA puts the burden on the rightsholder to put up or shut up (i.e., file a lawsuit) if a speaker pushes back and explains why the content is lawful, NO FAKES instead
puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have
lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.
And the NO FAKES provisions to allow improperly targeted speakers to hold the notice abuser accountable will offer as little deterrent as the roughly
parallel provisions in the DMCA. As with the DMCA, a speaker must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as
they subjectively believe the lie to be true, no matter how unreasonable that
belief.
If proponents want to protect online expression for everyone, at a minimum they should redraft the counter-notice process to more closely model the DMCA,
and clarify that abusers, like platforms, will be held to an objective knowledge standard. If they don’t, the advent of digital replicas will, ironically enough, turn out to be an
excuse to strangle all kinds of new and old creativity.
California Law Enforcement Misused State Databases More Than 7,000 Times in 2023
(Tue, 28 Jan 2025)
The Los Angeles County Sheriff’s Department (LACSD) committed wholesale abuse of sensitive criminal justice databases in 2023, violating a specific rule against searching the
data to run background checks for concealed carry firearm permits.
The sheriff’s department’s 6,789 abuses made up a majority of the record 7,275 violations across California that were reported to the state Department of Justice (CADOJ) in 2023
regarding the California Law Enforcement Telecommunications System (CLETS).
Records obtained by EFF also included numerous cases of other forms of database abuse in 2023, such as police allegedly using data for personal vendettas. While many violations
resulted only in officers or other staff being retrained in appropriate use of the database, departments across the state reported that violations in 2023 led to 24 officers being
suspended, six officers resigning, and nine being fired.
CLETS contains a lot of sensitive information and is meant to provide officers in California with access to a variety of databases, including records from the Department of
Motor Vehicles, the National Law Enforcement Telecommunications System, Criminal Justice Information Services, and the National Crime Information Center. Law enforcement agencies with
access to CLETS are required to inform the state Justice Department
of any investigations and discipline related to misuse of the system. This mandatory reporting helps to provide oversight and transparency around how local agencies are using
and abusing their access to the array of databases.
(Image: A slide from a Long Beach Police Department training for new recruits.)
Misuse can take many forms, ranging from sharing passwords to using the system to look up
romantic partners or celebrities. In 2019, CADOJ declared that using CLETS data for
"immigration enforcement" is considered misuse under the California Values Act.
EFF periodically files California Public Records Act requests for the data and records generated by these CLETS misuse disclosures. To help improve access to this data, EFF's
investigations team has compiled and compressed that information from the years 2019 to 2023 for public download. Researchers and journalists can look up the individual data per agency year-to-year.
Download the 2019-2023 data here. Data from previous years is available here: 2010-2014, 2015, 2016, 2017,
2018.
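For researchers who want to compare agencies year to year, a small script can do the tallying. The Python sketch below is hypothetical: the file name and column names (“agency”, “year”, “investigations”) are placeholders for however the compiled records are actually structured, so adjust them to match the released data.

```python
# Hypothetical sketch: tally reported CLETS misuse investigations per agency per year.
# The column names below are placeholders; adapt them to the compiled data's actual layout.
import pandas as pd

def misuse_by_agency(path: str = "clets_misuse_2019_2023.csv") -> pd.DataFrame:
    """Return a table with one row per agency and one column per year."""
    df = pd.read_csv(path)
    return (
        df.pivot_table(index="agency", columns="year",
                       values="investigations", aggfunc="sum", fill_value=0)
          .sort_index()
    )

if __name__ == "__main__":
    print(misuse_by_agency().head(10))
```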
California agencies are required to report misuse of CLETS to CADOJ by February 1 of the following year, which means numbers for 2024 are due to the state agency at the end of
this month. However, it often takes the state several more months to follow up with agencies that do not respond and to enter information from the individual forms into a
database.
Across California between 2019 and 2023, there have been:
761 investigations of CLETS misuse, resulting in findings of at least 7,635 individual violations of the system’s rules
55 officer suspensions, 50 resignations, and 42 firings related to CLETS misuse
six misdemeanor convictions and one felony conviction related to CLETS misuse
As we reviewed the data made public since 2019, there were a few standout situations worth additional reporting. For example, LACSD in 2023 conducted one investigation into
CLETS misuse, which resulted in substantiating thousands of misuse claims. The Riverside County Sheriff's Office and Pomona Police Department also found hundreds of violations of CLETS access rules in the same year.
Some of the highest profile cases include:
LACSD’s use of criminal justice data for concealed carry permit research, which is specifically forbidden by CLETS rules. According to meeting notes of the CLETS oversight body, LACSD retrained
all staff and implemented new processes. However, state Justice Department officials acknowledged that this problem was not unique, and they had documented other agencies abusing
the data in the same way.
A Redding Police Department officer in 2021 was charged with six misdemeanors after being accused of accessing CLETS to set up a traffic stop for his fiancée's ex-husband,
resulting in the man's car being towed and impounded, the local outlet A News Cafe
reported. Court records show the officer was fired, but he
was ultimately acquitted by a jury in the criminal case. He now works for a different police department 30 miles away.
The Folsom Police Department in 2021 fired an officer who
was accused of sending racist texts and engaging in sexual misconduct, as
well as abusing CLETS. However, the Sacramento County District Attorney told a local TV station it declined
to file charges, citing insufficient evidence.
A Madera Police Officer in 2021 resigned and pleaded guilty to accessing CLETS
and providing that information to an unauthorized person. He received a one-year suspended sentence and 100 hours of community service, according to court records. In a statement,
the police department said the individual's "behavior was absolutely inappropriate" and "his actions tarnish the nobility of our profession."
A California Highway Patrol officer was charged with improperly accessing CLETS to
investigate vehicles his friend was interested in purchasing as part of his automotive business.
The San Francisco Police Department, which failed to provide its misuse numbers to CADOJ in 2023, may be reporting at least one violation from the past year, according to a May 2024 report of sustained
complaints, which lists one substantiated violation involving “Computer/CAD/CLETS Misuse.”
CLETS is only one of many massive databases available to law enforcement, but it is one of the very few with a mandatory reporting requirement for abuse; violations of other
systems likely never get reported to a state oversight body, or at all. The sheer amount of misuse should serve as a warning that other systems police use, such as automated license plate reader and face recognition databases, are likely also being abused at a high rate, or even higher, since they are not subject to the same scrutiny as CLETS.
Related Cases:
California Law Enforcement Telecommunications System
Don't Make Copyright Law in Smoke-Filled Rooms
(Tue, 28 Jan 2025)
We're taking part in Copyright
Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every
day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make
sure that copyright promotes creativity and innovation.
Copyright law affects everything we do on the internet. So why do some lawmakers and powerful companies still think they can write new copyright law behind
closed doors, with ordinary internet users on the outside?
Major movie and TV studios are again pushing Congress to
create a vast new censorship regime on the U.S. internet, one that could even reach abroad and conscript infrastructure companies to help make whole
websites disappear. The justification is, as always, creating ever more draconian means of going after copyright infringement, and never mind all of the
powerful tools that already exist.
The movie studios and other major media companies last tried
this in 2012, seeking to push a pair of internet censorship bills called SOPA and PIPA through Congress, without hearings. Lawmakers were preparing to
ignore the concerns of internet users not named Disney, Warner, Paramount, or Fox. At first, they ignored the long, sad history of copyright enforcement
tools being used for censorship. They ignored the technologists, including some of the creators of the internet,
who explained how website-blocking creates security threats and inevitably blocks lawful speech. And they ignored the pleas of ordinary users who were
concerned about the websites they relied on going dark because of hasty site-blocking orders.
Writing new copyright laws in the proverbial smoke-filled backroom was somewhat less surprising in 2012. Before the internet, copyright mainly governed the
relationships between authors and publishers, movie producers and distributors, photographers and clients, and so on. The easiest way to make changes was
to get representatives of these industries together to hash out the details, then have Congress pass those changes into law. It worked well enough for most
people.
In the internet age, that approach is unworkable. Every use of the internet, whether sending a photo, reading a social media post, or working
on a shared document, involves making a copy of some creative work. And nearly every creative work that’s recorded on a computing device is automatically governed
by copyright law, with no registration or copyright notices required. That makes copyright a fundamental governing law of the internet. It shapes the
design and functions of the devices we use, what software we can run, and when and how we can participate in culture. Its massive penalties and confusing
exceptions can ensnare everyone from landlords to librarians, from students to salespeople.
Users fought back. In a historic protest, thousands of
websites went dark for a day, with messages encouraging users to oppose the SOPA/PIPA bills. EFF alone helped users send more than 1,000,000 emails to
Congress, and countless more came from other organizations. Web traffic briefly brought down some Senate websites. 162 million people visited Wikipedia and
8 million looked up their representatives’ phone numbers. Google received more than 7 million signatures on its petition. Everyone who wrote, called, and
visited their lawmakers sent a message that laws affecting the internet can't be made in a backroom by insiders bearing campaign cash. Congress quickly
scrapped the bills.
After that, although Congress avoided changing copyright law for years, the denizens of the smoke-filled room never gave up. The then-leaders of the
Motion Picture Association and the Recording Industry Association of America both vented angrily about
ordinary people getting a say over copyright. Big Media went on a world tour, pushing for site-blocking laws that led to the same problems of censorship and
over-blocking in many countries that U.S. users had mostly avoided.
Now, they’re trying again. Major media companies are
pushing Congress to pass new site-blocking laws that would conscript internet service providers, domain name services, and potentially others to build a
new censorship machine. The problems of overblocking and misuse haven’t gone away—if anything they’ve gotten worse as ever more of our lives are lived online. The
biggest tech companies, who in 2012 were prodded into action by a mass movement of internet users, are now preoccupied by antitrust lawsuits and seeking
favor from the new administration in Washington. And as with other extraordinary tools that Congress has given to the largest copyright holders,
site-blocking won’t stay confined to copyright—other powerful industries and governments will clamor to use the system for censorship, and it will get ever
harder to resist those calls.
It seems like lawmakers have learned nothing, because copyright law is again being written in secret by a handful of industry representatives. That was
unacceptable in 2012, and it’s even more unacceptable in 2025. Before considering site blocking, or any major changes to copyright, Congress needs to
consult with every kind of internet user, including small content creators, small businesses, educators, librarians, and technologists not beholden to the largest tech and media
companies.
We can’t go backwards. Copyright law affects everyone, and everyone needs a say in its evolution. Before taking up site-blocking or any other major changes
to copyright law, Congress needs to air those proposals publicly, seek input from far and wide—and listen to it.
It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy
(Mon, 27 Jan 2025)
We're taking part in Copyright
Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various
groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and
innovation.
One of the unintended consequences of the internet is that more of us than ever are aware of how much of our lives is affected by copyright. People see
their favorite YouTuber’s video get removed or re-edited due to copyright. People know they can’t tinker with or fix their devices. And people have realized, and are angry about, the
fact that they don’t own much of the media they have paid for.
All of this is to say that copyright is no longer—if it ever was—a niche concern of certain industries. As corporations have pushed to expand copyright,
they have made it everyone’s problem. And that means they don’t get to make the law in secret anymore.
Thirteen years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act
(PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright infringing content. These
were bills that would have made censorship very easy, all in the name of copyright protection.
As people raise more and more concerns about the major technology companies that control our online lives, it’s important not to fall into the trap of
thinking that copyright will save us. As SOPA/PIPA reminds us: expanding copyright serves the gatekeepers, not the users.
We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate
in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are:
Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s
experience online; therefore, laws and policy should be made in the open and with users’ concerns represented and taken into account.
Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning
democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.
Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we
also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it
works, repair it, remove unwanted features, or tinker with it to make it work in a new way.
Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information
and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information.
Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to
comment, criticize, and rework our common culture.
Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek.
EFF to Michigan Supreme Court: Cell Phone Search Warrants Must Strictly Follow The Fourth Amendment’s Particularity and Probable Cause Requirements
(Sat, 25 Jan 2025)
Last week, EFF, along with the Criminal Defense Attorneys of Michigan, ACLU, and ACLU of Michigan, filed an amicus brief in People v. Carson in the Supreme Court
of Michigan, challenging the constitutionality of the search warrant for Mr. Carson's smart phone.
In this case, Mr. Carson was arrested for stealing money from his neighbor's safe with a co-conspirator. A few months later, law enforcement applied for a search warrant for Mr.
Carson's cell phone. The search warrant enumerated the claims that formed the basis for Mr. Carson's arrest, but the only mention of a cell phone was a law enforcement officer's
general assertion that phones are communication devices often used in the commission of crimes. A warrant was issued which allowed the search of the entirety of Mr. Carson's smart
phone, with no temporal or category limits on the data to be searched. Evidence found on the phone was then used to convict Mr. Carson.
On appeal, the Court of Appeals made a number of rulings in favor of Mr. Carson, including that evidence from the phone should not have been admitted because the search warrant lacked
particularity and was unconstitutional. The government's appeal to the Michigan Supreme Court was accepted and we filed an amicus brief.
In our brief, we argued that the warrant was constitutionally deficient and overbroad because there was no probable cause for searching the cell phone, and that the warrant was insufficiently particular because it failed to limit the search to a time frame or to certain categories of information.
As the U.S. Supreme Court recognized in Riley v. California, electronic devices such as
smart phones “differ in both a quantitative and a qualitative sense” from other objects. The devices
contain immense storage capacities and are filled with sensitive and revealing data, including apps for everything from banking to therapy to religious practices to personal health.
As the refrain goes, whatever the need, “there's an app for that.” This special nature of digital devices requires courts to review warrants to search
digital devices with heightened attention to the Fourth Amendment’s probable cause and particularity requirements.
In this case, the warrant fell far short. In order for there to be probable cause to search an item, the warrant application must establish a “nexus” between the incident being investigated and the place to be searched. But the application in this case gave no reason
why evidence of the theft would be found on Mr. Carson's phone. Instead, it only stated the allegations leading to Mr. Carson's arrest and boilerplate language about cell phone use
among criminals. While those facts may establish probable cause to arrest Mr. Carson, they did not establish probable cause to search Mr. Carson's phone. If it were
otherwise, the government would always be able to search the cell phone of someone they had probable cause to arrest, thereby eradicating the independent determination of whether
probable cause exists to search something. Without a nexus between the crime and Mr. Carson’s phone, there was no probable cause.
Moreover, the warrant allowed for the search of “any and all data” contained on the cell phone, with no limits whatsoever. These “all content” warrants are the exact type of
general warrants against which the Fourth Amendment and its state corollaries were meant to protect. Cell phone search warrants that have been upheld have contained temporal
constraints and a limit to the categories of data to be searched. Neither of those limitations, nor any others, appeared in the issued search warrant. The police
should have used date limitations in applying for the search warrant, as they do in their warrant applications for other searches in the same investigation. Additionally, the warrant
allowed the search of all the information on the phone, the vast majority of which did not—and could not—contain evidence related to the investigation.
As smart phones become more capacious and take on more functions, it is imperative that courts adhere to the narrow construction of warrants for the search of electronic devices to
support the basic purpose of the Fourth Amendment to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.
Face Scans to Estimate Our Age: Harmful and Creepy AF
(Fri, 24 Jan 2025)
Government must stop restricting website access with laws requiring age verification.
Some advocates of these censorship schemes argue we
can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age
estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with
others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?
Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to
try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and
honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In
short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.
Error and discrimination
Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for
adolescents, and wrongfully deny them access to restricted websites. By the way, it will also sometimes mistake adolescents for adults.
Age estimation also is discriminatory. Studies show face scans are more likely to err in estimating the age of
people of color and women. Which means that as a tool of age verification, these face scans will have an unfair disparate impact.
Estimating our identity and demographics
Age estimation is a tech sibling of face identification and the
estimation of other demographics. To users, all face scans look the same and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate
our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.
Some companies are in both the age estimation business and the face identification business.
Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce
bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.
Estimating our emotions and honesty
Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high
error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading
technologies have a long and dubious history, from phrenology to polygraphs.
Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among
other biometrics) to identify “malintent” of people being screened. Other
policing agencies are using algorithms to analyze emotions and deception.
When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, ALPR
errors repeatedly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s
movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial
expression as anger or deception.
Privacy and infosec
The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.
Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might
store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this
information can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL
contexts. Last year, hackers breached an age verification
company (among many other companies).
Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of
in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even
well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might
abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors
are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
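To make that contrast concrete, here is a minimal Python sketch of the lower-risk shape of such a system: inference happens on the device, the raw scan is never stored or transmitted, and only a pass/fail bit is shared with the website. The estimate_age function is a hypothetical stand-in rather than any vendor’s actual API, and even this design only reduces, not eliminates, the risks described above.

```python
# Minimal sketch of a data-minimizing, on-device age check. `estimate_age` is a
# hypothetical stand-in for whatever local model a vendor might ship; the point is
# the flow: nothing is persisted or uploaded, and only a boolean leaves the device.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeCheckResult:
    meets_threshold: bool  # the only field that should ever leave the device

def estimate_age(image_bytes: bytes) -> float:
    """Hypothetical on-device model; returns an estimated age in years."""
    raise NotImplementedError("stand-in for a vendor's local model")

def check_age_on_device(image_bytes: bytes, threshold: float = 18.0) -> AgeCheckResult:
    try:
        estimated = estimate_age(image_bytes)
        return AgeCheckResult(meets_threshold=estimated >= threshold)
    finally:
        # Drop the local reference to the raw scan; nothing here logs, stores, or uploads it.
        del image_bytes
```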
Also, face scanning algorithms are often trained on data that was collected using questionable privacy methods—whether from users with murky consent or from non-users. The government
data sets used to test biometric algorithms sometimes come from prisoners and immigrants.
Most significant here, when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks.
So many visitors will turn away, and forgo the content and conversations available on restricted websites.
Next steps
Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.
At a minimum, government must stop coercing websites into using face scans, as a means of complying with censorious age verification mandates. Age estimation does not eliminate the
privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access.
Because face scans are creepy AF.
Second Circuit Rejects Record Labels’ Attempt to Rewrite the DMCA
(Fri, 24 Jan 2025)
In a major win for creator communities, the U.S. Court of Appeals for the Second Circuit has once again handed video streaming site Vimeo a solid win in its long-running legal battle with Capitol Records and a host of other record labels.
The labels claimed that Vimeo was liable for copyright infringement on its site, and specifically that it can’t rely on the Digital Millennium Copyright Act’s safe harbor because
Vimeo employees “interacted” with user-uploaded videos that included infringing recordings of musical performances owned by the labels. Those interactions included commenting on,
liking, promoting, demoting, or posting them elsewhere on the site. The record labels contended that these videos contained popular songs, and it would’ve been obvious to Vimeo
employees that this music was unlicensed.
But as EFF explained in an amicus brief filed in support of Vimeo, even rightsholders themselves mistakenly
demand takedowns. Labels often request takedowns of music they don’t
own or control, and even request takedowns of their own content. They also regularly target fair uses. When rightsholders themselves cannot accurately identify infringement, courts cannot presume that a service provider can do so, much less apply a blanket presumption across hundreds of videos.
In an earlier ruling, the court held that the labels had to show that it would be apparent to a
person without specialized knowledge of copyright law that the particular use of the music was unlawful, or prove that the Vimeo workers had expertise in copyright law. The labels
argued that Vimeo’s own efforts to educate its employees and users about copyright, among other circumstantial evidence, were enough to meet that burden. The Second Circuit disagreed,
finding that:
Vimeo’s exercise of prudence in instructing employees not to use copyrighted music and advising users that use of copyrighted music “generally (but not always) constitutes
copyright infringement” did not educate its employees about how to distinguish between infringing uses and fair use.
The Second Circuit also rejected another equally dangerous argument: that Vimeo lost safe harbor protection by receiving a “financial benefit” from infringing activity, such as
user-uploaded videos, that the platform had a “right and ability to control.” The labels contended that any website that exercises editorial judgment—for example, by removing,
curating, or organizing content—would necessarily have the “right and ability to control” that content. If they were correct, ordinary content moderation would put a platform at risk
of crushing copyright liability.
As the Second Circuit put it, the labels’ argument:
would substantially undermine what has generally been understood to be one of Congress’s major objectives in passing the DMCA: encouraging entrepreneurs to establish websites that
can offer the public rapid, efficient, and inexpensive means of communication by shielding service providers from liability for infringements placed on the sites by users.
Fortunately, the Second Circuit’s decisions in this case help preserve the safe harbors and the expression and innovation that they make possible. But it should not have taken well over a decade of litigation—and likely millions of dollars in legal fees—to get there.
Related Cases:
Capitol v. Vimeo
Speaking Freely: Lina Attalah
(Thu, 23 Jan 2025)
This interview has been edited for length and clarity.
Jillian York: Welcome, let’s start here. What does free speech or free expression mean to you personally?
Lina Attalah: Being able to think without too many calculations and without fear.
York: What are the qualities that make you passionate about the work that you do, and also about telling stories and utilizing your free expression in that way?
Well, it ties in with your first question. Free speech is basically being able to express oneself without fear and without too many calculations. These are things that are not
granted, especially in the context I work in. I know that it does not exist in any absolute way anywhere, and increasingly so now, but even more so in our context, and historically it
hasn't existed in our context. So this has also drawn me to try to unearth what is not being said, what is not being known, what is not being shared. I guess the passion came from
that lack more than anything else. Perhaps, if I lived in a democracy, maybe I wouldn't have wanted to be a journalist.
York: I’d like to ask you about Syria, since you just traveled there. I know that you're familiar with the context there in terms of censorship and the Internet in particular. What
do you see in terms of people's hopes for more expression in Syria in the future?
I think even though we share an environment where freedom of expression has been historically stifled, there is an exception to Syria when it comes to the kind of controls there
have been on people's ability to express, let alone to organize and mobilize. I think there's also a state of exception when it comes to the price that had to be paid in Syrian
prisons for acts of free expression and free speech. This is extremely exceptional to the fabric of Syrian society. So going there and seeing that this condition was gone, after so
much struggle, after so much loss, is a situation that is extremely palpable. From the few days I spent there, what was clear to me is that everybody is pretty much uncertain about
the future, but there is an undoubted relief that this condition is gone for now, this fear. It literally felt like it's a lower sky, sort of repressing people's chests somehow, and
it's just gone. This burden was just gone. It's not all flowery, it's not all rosy. Everybody is uncertain. But the very fact that this fear is gone is very palpable and cannot be
taken away from the experience we're living through now in Syria.
York: I love that. Thank you. Okay, let’s go to Egypt a little bit. What can you tell us about the situation for free speech in the context of Egypt? We're coming up on fourteen
years since the uprising in 2011 and eleven years since Sisi came to power. And I mean, I guess,
contextualize that for our readers who don't know what's happened in Egypt in the past decade or so.
For a quick summary, the genealogy goes as follows. There was a very tight margin through which we managed to operate as journalists, as activists, as people trying to sort of
enlarge the space through which we can express ourselves on matters of public concern in the last years of Mubarak's rule. And this is the time that coincided with the opening up of the internet—back in the time when the
internet was also more of a public space, before the overt privatization that we experience in that virtual space as well. Then the Egyptian revolution happened in 2011 and that space further exploded in expression and diversity of
voices and people speaking to different issues that had previously been reserved to the hideouts of activist circles.
Then you had a complete reversal of all of this with the takeover of a military-appointed government. Subsequently, with the election of President Sisi in 2014, it became clear that it was a government that believed that the media's role—this is just one example focusing on the media—is to basically support the government in a very sort of 1960s Nasserite
understanding that there is a national project, that he's leading it, and we are his soldiers. We should basically endorse, support, not criticize, not weaken, basically not say
anything differently from him. And you know this, of course, transcends the media. Everybody should be a soldier in a way and also the price of doing otherwise has been hefty, in the
sense that a lot of people ended up under prosecution, serving prolonged jail sentences, or even spending prolonged times in pre-trial detention without even getting
prosecuted.
So you have this total reversal from an unfolding moment of free speech that sort of exploded for a couple of years starting in 2011, and then everything closing up, closing up,
closing up to the point where that margin that I started off talking about at the beginning is almost no longer even there. And, on a personal note, I always ask myself if the margin
has really tightened or if one just becomes more scared as they grow older? But the margin has indeed tightened quite extensively. Personally, I'm aging and getting more scared. But
another objective indicator is that almost all of my friends and comrades who have been with me on this path are no longer around because they are either in prison or in exile or have
just opted out from the whole political apparatus. So that says that there isn't the kind of margin through which we managed to maneuver before the revolution.
York: Earlier you touched on the privatization of online spaces. Having watched the way tech companies have behaved over the past decade, what do you think that
these companies fail to understand about the Egyptian and the regional context?
It goes back to how we understand this ecosystem, politically, from the onset. I am someone who thinks of governments and markets, or governments and corporations, as the main
actors in a market, as dialectically interchangeable. Let's say they are here to control, they are here to make gains, and we are here to contest them even though we need them. We
need the state, we need the companies. But there is no reason on earth to believe that either of them want our best. I'm putting governments and companies in the same bucket, because
I think it's important not to fall for the liberals’ way of thinking that the state has certain politics, but the companies are freer or are just after gains. I do think of them as
formidable political edifices that are self-serving. For us, the political game is always how to preserve the space that we've created for ourselves, using some of the leverage from
these edifices without being pushed over and over.
For me, this is a very broad political thing, and I think about them as a duality, because, operating as a media organization in a country like Egypt, I have to deal with the
dual repression of those two edifices. To give you a very concrete example, in 2017 the Egyptian government blocked my website, Mada Masr, alongside a few other media websites, shortly before going on and blocking hundreds of
websites. All independent media websites, without exception, have been blocked in Egypt alongside sites through which you can download VPN services in order to be able to also access
these blocked websites. And that's done by the government, right? So one of the things we started doing when this happened in 2017 is we started saying, “Okay, we should invest in
Meta. Or back then it was still Facebook, so we should invest in Facebook more. Because the government monitors you.” And this goes back to the relation, the interchangeability of
states and companies. The government would block Mada Masr, but would never block Facebook, because it's bad for business. They care about keeping Facebook up and
running.
It's not Syria back in the time of Assad. It's not Tunisia back in the time of Ben Ali. They still want some degree of openness, so they would keep social media open. So we let
go of our poetic triumphalism when we said, we will try to invest in more personalized, communitarian dissemination mechanisms when building our audiences, and we'll just go on
Facebook. Because what option do we have? But then what happens is that there is another track of censorship in a different way that still blocks my content from being able to reach its
audiences through all the algorithmic developments that happened and basically the fact that—and this is not specific to Egypt—they just want to think of themselves as the publishers.
They started off by treating us as the publishers and themselves as the platforms, but at this point, they want to be everything. And what would we expect from a big company, a
profitable company, besides them wanting to be everything?
York: I don't disagree at this point. I think that there was a point in time where I would have disagreed. When you work closely with companies, it’s easy to fall into the trap of
believing that change is possible because you know good people who work there, people who really are trying their best. But those people are rarely capable of shifting the direction
of the company, and are often the ones to leave first.
Let’s shift to talking about our friend, Egyptian political prisoner Alaa Abd El-Fattah. You mentioned the impact that the
past 11 years, really the past 14 years, have had on people in Egypt. And, of course, there are many political prisoners, but one of the prisoners that EFF readers will be
familiar with is Alaa. You recently accepted the English PEN Award on his behalf. Can you tell us more about what
he has meant to you?
One way to start talking about Alaa is that I really hope that 2025 is the year when he will get released. It's just ridiculous to keep making that one single demand over and
over without seeing any change there. So Alaa has been imprisoned on account of his free speech, his attempt to speak freely. And he attempted to speak, you know, extremely freely in
the sense that a lot of his expression is his witty sort of engagement with surrounding political events that came through his personal accounts on social media, in addition to the
writing that he's been doing for different media platforms, including ours and yours and so on. And in that sense, he's so unmediated, he’s just free. A truly free spot. He has become
the icon of the Egyptian revolution, the symbol of revolutionary spirit who you know is fighting for people's right to free speech and, more broadly, their dignity. I guess I'm trying
to make a comment, a very basic comment, on abolition and, basically, the lack of utility of prisons, and specifically political prisons. Because the idea is to mute that voice. But
what has happened throughout all these years of Alaa’s incarceration is that his voice has only gotten amplified by this very lack, by this very absence, right? I always lament about
the fact that I do not know if I would have otherwise become very close to Alaa. Perhaps if he was free and up and running, we wouldn't have gotten this close. I have no idea. Maybe
he would have just gone working on his tech projects and me on my journalism projects. Maybe we would have tried to intersect, and we had tried to intersect, but maybe we would have
gone on without interacting much. But then his imprisonment created this tethering where I learned so much through his absence.
Somehow I've become much more who I am in terms of the journalism, in terms of the thinking, in terms of the politics, through his absence, through that lack. So there is
something that gets created with this aggressive muting of a voice that should be taken note of. That being said, I don't mean to romanticize absence, because he needs to be free. You
know it's, it's becoming ridiculous at this point. His incarceration is becoming ridiculous at this point.
York: I guess I also have to ask, what would your message be to the UK Government at this point?
Again, it's a test case for what so-called democratic governments can still do to their citizens. There needs to be something more forceful when it comes to demanding Alaa’s
release, especially in view of the condition of his mother, who has been on a hunger strike for over 105 days as of the day of this interview. So I can't accept that this cannot be a
forceful demand, or this has to go through other considerations pertaining to more abstract bilateral relations and whatnot. You know, just free the man. He's your citizen. You know,
this is what's left of what it means to be a democratic government.
York: Who is your free speech hero?
It’s Alaa. He always warns us of over-symbolizing him or the others. Because he always says, when we over-symbolize heroes, they become abstract. And we stop being able to
concretize the fights and the resistance. We stop being able to see that this is a universal battle where there are so many others fighting it, albeit a lot more invisible, but at the
same time. Alaa, in his person and in what he represents, reminds me of so much courage. A lot of times I am ashamed of my fear. I'm ashamed of not wanting to pay the price, and I
still don't want to pay the price. I don't want to be in prison. But at the same time, I look up at someone like Alaa, fearlessly saying what he wants to say, and I’m just always in
awe of him.
The Impact of Age Verification Measures Goes Beyond Porn Sites
(Thu, 23 Jan 2025)
As age verification bills pass across the world under the guise of “keeping children safe online,” governments are increasingly giving themselves the authority to decide what
topics are deemed “safe” for young people to access, and forcing online services to remove and block anything that may be deemed “unsafe.” This growing legislative trend
has sparked significant concerns and numerous First Amendment challenges, including a case currently pending before the Supreme Court–Free Speech Coalition v. Paxton. The
Court is now considering how government-mandated age verification impacts adults’ free speech rights online.
These challenges keep arising because this isn’t just about safety—it’s censorship. Age verification laws target a slew of broadly-defined topics. Some block access to websites
that contain some "sexual material harmful to minors," but define the term so loosely that “sexual material” could encompass anything from sex education to R-rated movies; others
simply list a variety of vaguely-defined harms. In either instance, lawmakers and regulators could use the laws to target LGBTQ+ content online.
This risk is especially clear given what we already know about platform content policies. These policies, which claim to "protect children" or keep sites “family-friendly,”
often label LGBTQ+ content as
“adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only
becomes clear when the policies (and/or laws) are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies and bills.
In either case, it is critical to recognize that age verification bills could block far more than just pornography.
Take Oklahoma’s bill, SB 1959, for example. This state age
verification law aims to prevent young people from accessing content that is “harmful to minors” and went into effect last November 1st. It incorporates definitions from another
Oklahoma statute, Statute 21-1040, which defines material “harmful to
minors” as any description or exhibition, in whatever form, of nudity and “sexual conduct.” That same statute then defines “sexual conduct” as including acts of “homosexuality.”
Explicitly, then, SB 1959 requires a site to verify someone’s age before showing them content about homosexuality—a vague enough term that it could potentially apply to content from
organizations like GLAAD and Planned Parenthood.
This vague definition will undoubtedly cause platforms to over-censor content relating to LGBTQ+ life, health, or rights out of fear of liability. Separately, bills such as SB
1959 might also cause users to self-police their own speech for the same reasons, fearing de-platforming. The law leaves platforms unsure of, and unable to precisely exclude, only the minimum amount of content that fits the bill's definition, leading them to over-censor content that may well include this very blog post.
Beyond Individual States: Kids Online Safety Act (KOSA)
Laws like the proposed federal Kids
Online Safety Act (KOSA) make government officials the arbiters of what young people can see online and will lead platforms to implement invasive age verification
measures to avoid the threat of liability. If KOSA passes, people who make online content about sex education and LGBTQ+ identity and health will be persecuted and shut down as well. All it will take is one member of the Federal Trade
Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech they don’t like. These speech burdens
will also affect regular users as platforms mass-delete content in the name of avoiding lawsuits and investigations under KOSA.
Senator Marsha Blackburn, co-sponsor of KOSA, has expressed a priority in
“protecting minor children from the transgender [sic] in this culture and that influence.” KOSA, to Senator Blackburn, would address this problem by limiting content in the places
“where children are being indoctrinated.” Yet these efforts all fail to protect children from the actual harms of the online world, and instead deny
vulnerable young people a crucial avenue of communication and access to information.
LGBTQ+ Platform Censorship by Design
While the censorship of LGBTQ+ content through age verification laws can be represented as an “unintended consequence” in certain instances, barring
access to LGBTQ+ content is part of the platforms' design. One of the more pervasive examples is Meta suppressing LGBTQ+ content across its platforms under the guise of protecting younger users
from "sexually suggestive content.” According to a recent report, Meta has been
hiding posts that reference LGBTQ+ hashtags like #lesbian, #bisexual, #gay, #trans, and #queer for users that turned the sensitive content filter on, as well as showing users a blank
page when they attempt to search for LGBTQ+ terms. This leaves teenage users with no choice in what content they see, since the sensitive content filter is turned on for them by
default.
This policy change came on the back of a protracted effort by Meta to allegedly protect teens online. In January last year, the corporation announced a new set of “sensitive content” restrictions across its platforms
(Instagram, Facebook, and Threads), including hiding content which the platform no longer considered age-appropriate. This was followed later by the introduction of Instagram For Teens to further limit the content users under the age of 18 could see. This feature
sets minors’ accounts to the most restrictive levels by default, and teens under 16 can only reverse those settings through a parent or guardian.
Meta has apparently now reversed the restrictions on LGBTQ+ content after
calling the issue a “mistake.” This is not good enough. In allowing pro-LGBTQ+ content to be integrated into the sensitive content filter, Meta has aligned itself with those that are
actively facilitating a violent and harmful removal of rights for LGBTQ+ people—all under the guise of keeping children and teens safe. Not only is this a deeply flawed strategy, it
harms everyone who wishes to express themselves on the internet. These policies are written and enforced discriminatorily and at the expense of transgender, gender-fluid, and
nonbinary speakers. Such laws also often convince or require platforms to implement tools that, using their vague and subjective definitions, end up blocking access to LGBTQ+ and reproductive health content.
The censorship of this content prevents individuals from being able to engage with such material online to explore their identities, advocate for broader societal
acceptance and against hate, build
communities, and discover new
interests. With corporations like Meta intervening to decide how people create, speak, and connect, a crucial form of engagement for all kinds of users has been
removed and the voices of people with less power are regularly shut down.
And at a time when LGBTQ+ individuals are already under vast pressure from violent homophobic threats offline, these online restrictions
have an amplified impact.
LGBTQ+ youth are at a higher risk of experiencing bullying and rejection, often turning to online spaces as outlets for self-expression. For those without family support or who
face the threat of physical or emotional abuse at home because of their sexual orientation or gender identity, the internet becomes an essential resource. A report from the Gay, Lesbian & Straight Education Network (GLSEN)
highlights that LGBTQ+ youth engage with the internet at higher rates than their peers, often showing greater levels of civic engagement online compared to offline. Access to digital
communities and resources is critical for LGBTQ+ youth, and restricting access to them poses unique dangers.
Call to Action: Digital Rights Are LGBTQ+ Rights
These laws have the potential to harm us all—including the children they are designed to protect.
As more U.S. states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to
information. This conglomeration of laws poses significant challenges for users trying to maintain anonymity online and access critical content—whether it’s LGBTQ+ resources,
reproductive health information, or otherwise. These policies threaten the very freedoms they purport to protect, stifling conversations about identity, health, and social justice,
and creating an environment of fear and repression.
The fight against these laws is not just about defending online spaces; it’s about safeguarding the fundamental rights of all individuals to express themselves and access
life-saving information.
We need to stand up against these age verification laws—not only to protect users’ free expression rights, but also to safeguard the free flow of information that is vital to a
democratic society. Reach out to your state and federal
legislators, raise awareness about the consequences of these policies, and support organizations like the LGBT Tech, ACLU, the
Woodhull Freedom Foundation, and others that are fighting for digital rights of young people alongside EFF.
The fight for the safety and rights of LGBTQ+ youth is not just a fight for visibility—it’s a fight for their very survival. Now more than ever, it’s essential for allies,
advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination
and censorship.
Texas Is Enforcing Its State Data Privacy Law. So Should Other States.
(Wed, 22 Jan 2025)
States need to have and use data privacy laws to bring privacy violations to light and hold companies accountable for them. So, we were glad to see that the Texas Attorney General’s
Office has filed its first lawsuit under the Texas Data Privacy and Security Act (TDPSA) to take the Allstate Corporation to task for sharing driver location and other driving data
without telling customers.
In its complaint, the attorney general’s office
alleges that Allstate and a number of its subsidiaries (some of which go by the name “Arity”) “conspired to secretly collect and sell ‘trillions of miles’ of consumers’ ‘driving
behavior’ data from mobile devices, in-car devices, and vehicles.” (The defendant companies are also accused of violating Texas’ data broker law and its insurance law prohibiting
unfair and deceptive practices.)
On the privacy front, the complaint says the defendant companies created a software development kit (SDK), which is basically a set of tools that developers can use to integrate functions into an app. In this case, the Texas Attorney General says that Allstate and Arity specifically designed this toolkit to scrape location data. They then allegedly paid third parties, such as the app Life360, to embed it in their apps. The complaint also alleges that Allstate and Arity chose to promote their SDK to third-party apps that already required the use of location data, specifically so that people wouldn’t be alerted to the additional collection.
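To make the mechanics concrete, here is a deliberately simplified, hypothetical sketch (in TypeScript) of how an embedded analytics SDK can piggyback on a host app's existing location permission. None of the names below refer to Allstate, Arity, Life360, or any real product; this illustrates the general pattern, not the code at issue in the complaint.

```typescript
// Hypothetical illustration only. These names do not refer to any real SDK;
// they show the general pattern of an embedded toolkit riding on a host
// app's existing location permission.

interface LocationFix {
  latitude: number;
  longitude: number;
  speedMps: number;   // driving-behavior signal
  timestamp: number;
}

class EmbeddedAnalyticsSdk {
  constructor(private ingestUrl: string) {}

  // The host app already receives location updates for its own feature,
  // so the user granted the permission once and sees no new prompt.
  handleLocationUpdate(fix: LocationFix): void {
    // The SDK quietly forwards the same data to its vendor's servers.
    void fetch(this.ingestUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(fix),
    });
  }
}

// From the host app's perspective, integration is a single extra hook:
const sdk = new EmbeddedAnalyticsSdk("https://vendor.example.com/ingest");
// hostLocationService.subscribe((fix) => sdk.handleLocationUpdate(fix)); // hypothetical host API
```

The point is that once the host app holds the location permission, nothing in the permission model distinguishes the app's own use of the data from the embedded vendor's.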
That’s a dirty trick. Data that you can pull from cars is often highly sensitive, as we have raised repeatedly. Everyone should know when that information's being collected and
where it's going.
The Texas Attorney General’s office estimates that 45 million Americans, including those in Texas, unwittingly downloaded this software that collected their information, including
location information, without notice or consent. This violates Texas’ privacy law, which went into effect in July 2024 and requires companies to provide a reasonably accessible privacy notice, give conspicuous notice that they’re selling or processing sensitive data for targeted advertising, and obtain consumer consent to process sensitive data.
This is a low bar, and the companies named in this complaint still allegedly failed to clear it. As law firm Husch Blackwell pointed out in its write-up of the case, all Arity had to do, for
example, to fulfill one of the notice obligations under the TDPSA was to put up a line on their website saying, “NOTICE: We may sell your sensitive personal data.”
In fact, Texas’s privacy law does not meet the minimum of what we’d consider a strong privacy law.
For example, the Texas Attorney General is the only one who can file a lawsuit under the state's privacy law. But we advocate for provisions that allow everyone, not only state attorneys general, to file suits to make sure that all
companies respect our privacy.
Texas’ privacy law also has a “right to cure”—essentially a 30-day period in which a company can “fix” a privacy violation and duck a Texas enforcement action. EFF opposes rights to
cure, because they essentially give companies a “get-out-of-jail-free” card when caught violating privacy law. In this case, Arity was notified and given the chance to show it had cured
the violation. It just didn’t.
According to the complaint, Arity apparently failed to take even basic steps that would have spared it from this enforcement action. Other companies violating our privacy may be more
adept at getting out of trouble, but they should be found and taken to task too. That’s why we advocate for strong privacy laws that do even more to protect consumers.
Nineteen states now have some version of a data privacy law. Enforcement has been a bit slower. California has brought a few enforcement actions since its privacy law went into effect in 2020; Texas and New Hampshire are two states that have created dedicated data privacy
units in their Attorney General offices, signaling they’re staffing up to enforce their laws. More state regulators should follow suit and use the privacy laws on their books. And
more state legislators should enact and strengthen their laws to make sure companies are truly respecting our privacy.
The FTC’s Ban on GM and OnStar Selling Driver Data Is a Good First Step
(Wed, 22 Jan 2025)
The Federal Trade
Commission announced a proposed settlement agreeing that General Motors and its subsidiary, OnStar, will be banned from selling geolocation and driver behavior data
to credit agencies for five years. That’s good news for G.M. owners. Every car owner and driver deserves to be protected.
Last year, a New York Times investigation
highlighted how G.M. was sharing
information with insurance companies without clear knowledge from the driver. This resulted in people’s insurance premiums increasing, sometimes without them
realizing why that was happening. This data sharing problem was common amongst many carmakers, not just G.M., but figuring out what your car was sharing was often a Sisyphean
task, somehow managing to be more complicated than trying to learn similar details about apps or websites.
The FTC complaint
zeroed in on how G.M. enrolled people in its OnStar connected vehicle service with a misleading process. OnStar was initially designed to help drivers in an emergency, but over
time the service collected and shared more data that had nothing to do with emergency services. The result was people signing up for the service without realizing they were agreeing
to share their location and driver behavior data with third parties, including insurance companies and consumer reporting agencies. The FTC also alleged that G.M. didn’t disclose who
the data was shared with (insurance companies) and for what purposes (to deny or set rates). Asking car owners to choose between safety and privacy is a nasty tactic, and one that
deserves to be stopped.
For the next five years, the settlement bans G.M. and OnStar from these sorts of privacy-invasive practices, making it so they cannot share driver data or geolocation with
consumer reporting agencies, which gather and sell
consumers’ credit and other information. They must also obtain opt-in consent to collect data, allow consumers to obtain and delete their data, and give car owners an option to
disable the collection of location data and driving information.
These are all important, solid steps, and these sorts of rules should apply to all
carmakers. With privacy-related options buried away in websites, apps, and infotainment systems, it is currently far too difficult to see what sort of data your car collects,
and it is not always possible
to opt out of data collection or sharing. In reality, no consumer knowingly agrees to let their carmaker sell their driving data to other companies.
All carmakers should be forced to protect their customers’ privacy, and they should have to do so for longer than just five years. The best way to ensure that would be through
comprehensive consumer data privacy legislation with strong
data minimization rules and requirements for clear, opt-in consent. With a strong privacy law, all car makers—not just G.M.— would only have authority to collect, maintain, use, and
disclose our data to provide a service that we asked for.
VICTORY! Federal Court (Finally) Rules Backdoor Searches of 702 Data Unconstitutional
(Wed, 22 Jan 2025)
Better late than never: last night a federal district court held that backdoor
searches of databases full of Americans’ private communications collected under Section 702 ordinarily require a warrant. The landmark ruling comes in a criminal case, United States v. Hasbajrami, after more than a decade of litigation, and over four years since the Second Circuit
Court of Appeals found that backdoor searches constitute “separate
Fourth Amendment events” and directed the district court to determine whether a warrant was required. Now, that has been officially decreed.
In the intervening years, Congress has
reauthorized Section 702 multiple times, each time ignoring overwhelming evidence that the FBI and the intelligence community abuse their access to databases of warrantlessly
collected messages and other data. The Foreign Intelligence Surveillance Court (FISC), which Congress tasked with the primary role of judicial oversight of Section 702, has also
repeatedly dismissed arguments that the backdoor
searches violate the Fourth Amendment, giving the intelligence community endless do-overs despite its repeated transgressions of even lax safeguards on these
searches.
This decision sheds light on the government’s liberal use of what is essentially a “finders keepers” rule regarding your communication data. As a legal authority, FISA Section 702
allows the intelligence community to collect a massive amount of communications data from overseas in the name of “national security.” But, in cases where one side of that
conversation is a person on US soil, that data is still collected and retained in large databases searchable by federal law enforcement. Because the US-side of these communications is
already collected and just sitting there, the government has claimed that law enforcement agencies do not need a warrant to sift through them. EFF argued for over a decade that this
is unconstitutional, and now a federal court agrees with us.
Hasbajrami involves a U.S. resident who was arrested at New York JFK airport in 2011 on his way to Pakistan and charged with providing material support to terrorists. Only
after his original conviction did the government explain that its case was premised in part on emails between Mr. Hasbajrami and an unnamed foreigner associated with terrorist groups,
emails collected without a warrant using Section 702 programs, placed in a database, and then searched, again without a warrant, using terms related to Mr. Hasbajrami himself.
The district court found that regardless of whether the government can lawfully warrantlessly collect communications between foreigners and Americans using Section 702, it cannot
ordinarily rely on a “foreign intelligence exception” to the Fourth Amendment’s warrant clause when searching these communications, as is the FBI’s routine practice. And, even if such
an exception did apply, the court found that the intrusion on privacy caused by reading our most sensitive communications rendered these searches “unreasonable” under the meaning of
the Fourth Amendment. In 2021 alone, the FBI conducted 3.4 million warrantless searches of US
persons’ 702 data.
In light of this ruling, we ask Congress to uphold its responsibility to protect civil rights and civil liberties by refusing to renew Section 702 absent a number of necessary
reforms, including an official warrant requirement for querying US persons’ data and increased transparency. On April 15, 2026, Section 702 is set to expire. We expect any lawmaker
worthy of that title to listen to what this federal court is saying and create a legislative warrant requirement so that the intelligence community does not continue to trample on the
constitutionally protected rights to private communications. More immediately, the FISC should amend its rules for backdoor searches and require the FBI to seek a warrant before
conducting them.
Related Cases:
United States v. Hasbajrami
Protecting “Free Speech” Can’t Just Be About Targeting Political Opponents
(Wed, 22 Jan 2025)
The White House executive order “restoring
freedom of speech and ending federal censorship,” published Monday, misses the mark on truly protecting Americans’ First Amendment rights.
The order calls for an investigation of efforts under the Biden administration to “moderate, deplatform, or otherwise suppress speech,” especially on social media companies. It
goes on to order an Attorney General investigation of any government activities “over the last 4 years” that are inconsistent with the First Amendment. The order states in
part:
Under the guise of combatting “misinformation,” “disinformation,” and “malinformation,” the Federal Government infringed on the constitutionally protected speech rights of
American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.
But noticeably absent from the Executive Order is any commitment to government transparency. In the Santa Clara
Principles, a guideline for online content moderation authored by EFF and other civil society groups, we state that “governments and other state actors should themselves report
their involvement in content moderation decisions, including data on demands or requests for content to be actioned or an account suspended, broken down by the legal basis for the
request." This Executive Order doesn’t come close to embracing such a principle.
The order is also misguided in its time-limited targeting. Informal government efforts to persuade, cajole, or strong-arm private media platforms, also called “jawboning,” have
been an aspect of every U.S. government since at least 2011. Any good-faith inquiry into such
pressures would not be limited to a single administration. It’s misleading to suggest the previous administration was the only, or even the primary, source of such pressures. This
time limit reeks of political vindictiveness, not a true effort to limit improper government actions.
To be clear, a look back at past government involvement in online content moderation is a good thing. But an honest inquiry would not be time-limited to the actions of
a political opponent, nor limited to only past actions. The public would also be better served by a report that had a clear deadline, and a requirement that the results be made
public, rather than sent only to the President’s office. Finally, the investigation would be better placed with an inspector general, not the U.S. Attorney General, which implies
possible prosecutions.
As we have written before, the First Amendment forbids the government from coercing private entities to censor
speech. This principle has countered efforts to pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every
communication about user speech is unconstitutional; some are beneficial, like when platforms reach out to government agencies as authoritative sources of information.
For anyone who may have been excited to see a first-day executive order truly focused on free expression, President Trump’s Jan. 20 order is a disappointment, at
best.
EFF Sends Transition Memo on Digital Policy Priorities to New Administration and Congress
(Tue, 21 Jan 2025)
Topics Include National Security Surveillance, Consumer Privacy, AI, Cybersecurity, and Many More
SAN FRANCISCO—Standing up for technology users in 2025 and beyond requires careful thinking about government surveillance, consumer privacy, artificial
intelligence, and encryption, among other topics. To help incoming federal policymakers think through these key issues, the Electronic Frontier Foundation (EFF) has shared a
transition memo
with the Trump Administration and the 119th U.S. Congress.
“We routinely work with officials and staff in the White House and Congress on a wide range of policies that will affect digital rights in the coming
years,” said EFF Director of Federal Affairs India McKinney. “As the oldest, largest, and most trusted nonpartisan digital rights organization, EFF’s litigators, technologists, and activists have a depth of knowledge and experience that remains
unmatched. This memo focuses on how Congress and the Trump Administration can prioritize helping ordinary Americans protect their digital freedom.”
The 64-page memo covers topics such as surveillance, including warrantless digital dragnets, national security surveillance, face recognition technology,
border surveillance, and reproductive justice; encryption and cybersecurity; consumer privacy, including vehicle data, age verification, and digital identification; artificial
intelligence, including algorithmic decision-making, transparency, and copyright concerns; broadband access and net neutrality; Section 230’s protections of free speech online;
competition; copyright; the Computer Fraud and Abuse Act; and patents.
EFF also shared a transition memo with the incoming Biden Administration and Congress in
2020.
“The new Congress and the Trump Administration have an opportunity to make the internet a much better place for users. This memo should serve as a blueprint
for how they can do so,” said EFF Executive Director Cindy Cohn. “We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan
approach to tech policy works because we always work for technology users.”
For the 2025 transition memo: https://eff.org/document/eff-transition-memo-trump-administration-2025
For the 2020 transition memo: https://www.eff.org/document/eff-transition-memo-incoming-biden-administration-november-2020
Contact:
India McKinney, Director of Federal Affairs, india@eff.org
Maddie Daly, Assistant Director of Federal Affairs
VPNs Are Not a Solution to Age Verification Laws
(Mon, 20 Jan 2025)
VPNs are having a moment.
On January 1st, Florida joined 18 other states in
implementing an age verification law that burdens Floridians' access to sites that host adult content, including pornography websites like Pornhub. In protest of these laws, Pornhub
blocked access to users in Florida. Residents in the “Free State of Florida” have now lost
access to the world's most popular adult entertainment website and 16th-most-visited site of any kind in the world.
At the same time, Google Trends data showed a spike in searches for VPN
access across Florida–presumably because users are trying to access the site via VPNs.
How Did This Happen?
Nearly two years ago, Louisiana enacted a law that started a wave across neighboring states in the U.S. South: Act 440. This wave of legislation has significantly impacted how residents in these states access
“adult” or “sexual” content online. Florida, Tennessee, and South Carolina are now
among the nearly half of U.S. states where users can no longer access many major adult websites at all, while others require verification due to the
restrictive laws that are touted as child protection measures. These laws introduce surveillance systems that threaten everyone’s rights to
speech and privacy, and introduce more harm than they seek to combat.
Despite experts from across civil society flagging concerns about the impact of these laws on both adults’ and children’s rights, politicians in Florida decided to push ahead and enact one of the most contentious age verification mandates, HB 3, last year.
HB 3 is a part of the state’s ongoing efforts to regulate online content, and requires websites that host “adult material” to implement a method of verifying the age of users
before they can access the site. Specifically, it mandates that adult websites require users to submit a form of government-issued identification, or use a third-party age
verification system approved by the state. The law also bans anyone under 14 from accessing or creating a social media account. Websites that fail to comply with the law's age
verification requirements face civil penalties and could be subject to lawsuits from the state.
Pornhub, to its credit, understands these risks. In response to the implementation of age verification laws in various states, the company has taken a firm stand by blocking
access to users in regions where such laws are enforced. Before the laws’ implementation date, Florida users were greeted with this
message: “You will lose access to PornHub in 12 days. Did you know that your government wants you to give your driver’s license before you can access
PORNHUB?”
Pornhub then restricted access to Florida residents on January 1st, 2025—right when HB 3 was set to take effect. The platform expressed concerns that the age verification
requirements would compromise user privacy, pointing out that these laws would force platforms to collect sensitive personal data, such as government-issued identification, which
could lead to potential breaches and misuse of that information. In a statement to local
news, Aylo, Pornhub’s parent company, said that they have “publicly supported age verification for years” but they believe this law puts users’ privacy at
risk:
Unfortunately, the way many jurisdictions worldwide, including Florida, have chosen to implement age verification is ineffective, haphazard, and dangerous. Any regulations
that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy. Moreover, as
experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.
This is not speculation. We have seen how this scenario plays out in the United States. In Louisiana last year, Pornhub was one of the few sites to comply with the new law.
Since then, our traffic in Louisiana dropped approximately 80 percent. These people did not stop looking for porn. They just migrated to darker corners of the internet that don't
ask users to verify age, that don't follow the law, that don't take user safety seriously, and that often don't even moderate content. In practice, the laws have just made the
internet more dangerous for adults and children.
The company’s response reflects broader concerns over privacy and digital rights, as many fear that these measures are a step toward increased government surveillance
online.
How Do VPNs Play a Role?
Within this context, it is no surprise that Google searches for VPNs in Florida have skyrocketed. But as more states and countries pass age verification laws, it is crucial to
recognize the broader implications these measures have on privacy, free speech, and access to information. While VPNs may be able to disguise the source of your internet activity,
they are not foolproof—nor should they be necessary to access legally protected speech.
A VPN routes all your network traffic through an "encrypted tunnel" between your devices and the VPN server. The traffic then leaves the VPN to its ultimate destination, masking
your original IP address. From a website's point of view, it appears your location is wherever the VPN server is. A VPN should not be seen as a tool for anonymity. While it can
protect your location from some companies, a disreputable VPN service might deliberately collect personal information or other valuable data. There are many other ways companies may
track you while you use a VPN, including GPS, web cookies, mobile ad IDs, tracking pixels, or fingerprinting.
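As a rough illustration of that last point, the short sketch below (TypeScript, run under Node 18 or newer) asks a public IP-echo service, here ipify, which is just one example of such a service, what address your traffic appears to come from. Run it with and without a VPN connected and the reported address changes to the VPN exit server's, while cookies, ad IDs, and fingerprints remain untouched.

```typescript
// Rough illustration: what a remote server learns about "where you are from."
// Requires Node 18+ (built-in fetch). ipify is one example of a public
// IP-echo service; any similar service would do.

async function reportApparentAddress(): Promise<void> {
  const res = await fetch("https://api.ipify.org?format=json");
  const { ip } = (await res.json()) as { ip: string };
  console.log(`Websites see this connection as coming from: ${ip}`);
  // With a VPN connected, this prints the VPN exit server's address.
  // Note what it does NOT hide: cookies, mobile ad IDs, GPS data, and
  // browser fingerprints can still identify you through a VPN.
}

reportApparentAddress().catch(console.error);
```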
With varying mandates across different regions, it will become increasingly difficult for VPNs to effectively circumvent these age verification requirements because each state
or country may have different methods of enforcement and different types of identification checks, such as government-issued IDs, third-party verification systems, or biometric data.
As a result, VPN providers will struggle to keep up with these constantly changing laws and ensure users can bypass the restrictions, especially as more sophisticated detection
systems are introduced to identify and block VPN traffic.
The ever-growing conglomeration of age verification laws poses significant challenges for users trying to maintain anonymity online, and has the potential to harm us
all—including the young people they are designed to protect.
What Can You Do?
If you are working to protect your privacy or want to learn more about VPNs, EFF provides a
comprehensive guide on using VPNs and protecting digital privacy–a valuable resource for anyone looking to use these tools.
No one should have to hand over their driver’s license just to access free websites. EFF has long fought against mandatory age verification laws, from the U.S. to Canada and Australia. And given the weakening of rights for already vulnerable communities
online, politicians around the globe must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms.
Dozens of bills currently being debated by state and federal lawmakers could result in dangerous age verification mandates. We will resist them. We must stand up against these
types of laws, not just for the sake of free expression, but to protect the free flow of information that is essential to a free society. Contact your state and federal legislators, raise awareness about the unintended
consequences of these laws, and support organizations that are fighting for digital rights and privacy protections alongside EFF, such as the ACLU, Woodhull Freedom
Foundation, and others.
Mad at Meta? Don't Let Them Collect and Monetize Your Personal Data
(Fri, 17 Jan 2025)
If you’re fed up with Meta right now, you’re not alone. Google searches for deleting Facebook and Instagram spiked last week after Meta announced
its latest policy changes. These changes, seemingly designed to appease the incoming Trump
administration, included loosening Meta’s hate
speech policy to allow for the targeting of LGBTQ+ people and immigrants.
If these changes—or Meta’s long history of anti-competitive, censorial, and invasive practices—make you want to cut ties with the company, it’s sadly not as
simple as deleting your Facebook account or spending less time on Instagram. Meta tracks your activity across millions of websites and apps, regardless of whether you use its
platforms, and it profits from that data through targeted ads. If you want to limit Meta’s ability to collect and profit from your personal data, here’s what you need to know.
Meta’s Business Model Relies on Your Personal Data
You might think of Meta as a social media company, but its primary business is surveillance advertising. Meta’s business model relies on collecting as much information as possible
about people in order to sell highly-targeted ads. That’s why Meta is one of the main companies
tracking you across the internet—monitoring your activity far beyond its own platforms. When Apple introduced changes to make tracking harder on iPhones, Meta lost billions in revenue,
demonstrating just how valuable your personal data is to its business.
How Meta Harvests Your Personal Data
Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding
or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s
most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health
data. A 2022 investigation by The Markup found
that a third of the top U.S. hospitals had sent sensitive patient information to Meta through its tracking pixel.
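In concrete terms, a tracking pixel is just a tiny resource that the page you are visiting loads from the tracker's servers; the request itself carries the page URL, and the browser attaches any cookie it already holds for the tracker, letting the visit be tied to an existing profile. The sketch below is a generic, simplified illustration of that pattern in TypeScript, not Meta's actual pixel code, and the tracker domain is made up.

```typescript
// Generic, simplified illustration of how an embedded tracking pixel works.
// This is NOT Meta's actual pixel code; the domain below is made up.

function firePixel(trackerOrigin: string): void {
  const params = new URLSearchParams({
    page: window.location.href,   // which page you are reading
    referrer: document.referrer,  // where you came from
    ts: Date.now().toString(),
  });

  // Loading a 1x1 image is enough: the browser sends the request along
  // with any cookies it already holds for trackerOrigin, letting the
  // tracker link this page view to its existing profile of you.
  const img = new Image(1, 1);
  img.src = `${trackerOrigin}/pixel?${params.toString()}`;
  img.style.display = "none";
  document.body.appendChild(img);
}

// A site embedding the pixel would call something like:
firePixel("https://tracker.example.com");
```

Blocking requests to the tracker's domain, which is what tools like Privacy Badger (discussed below) do, cuts off this reporting channel entirely.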
Meta’s surveillance isn’t limited to your online activity. The company also encourages businesses to send them data about your offline purchases and
interactions. Even deleting your Facebook and Instagram accounts won’t stop Meta from harvesting your personal data. Meta in 2018 admitted to collecting information about non-users, including
their contact details and browsing history.
Take These Steps to Limit How Meta Profits From Your Personal Data
Although Meta’s surveillance systems are pervasive, there are ways to limit how Meta collects and uses your personal data.
Update Your Meta Account Settings
Open your Instagram or Facebook app and navigate to the Accounts Center page.
You’ll find a link to Accounts Center on the Settings pages of both apps. If you have trouble finding
Accounts Center, check Meta’s help pages for Facebook and Instagram.
If you use a web browser instead of Meta’s apps, visit accountscenter.facebook.com or
accountscenter.instagram.com.
If your Facebook and Instagram accounts are linked on your Accounts Center page, you only have to update the following settings once. If
not, you’ll have to update them separately for Facebook and Instagram. Once you find your way to the Accounts Center, the directions below are the same for
both platforms.
Meta makes it harder than it should be to find and update these settings. The following steps are accurate at the time of publication, but Meta often changes their settings and
adds additional steps. The exact language below may not match what Meta displays in your region, but you should have a setting controlling each of the following permissions.
Once you’re on the “Accounts Center” page, make the following changes:
1) Stop Meta from targeting ads based on data it collects about you on other apps and websites:
Click the Ad preferences option under Accounts Center, then select the Manage Info tab (this tab may be called Ad
settings depending on your location). Click the Activity information from ad partners option, then Review
Setting. Select the option for No, don’t make my ads more relevant by using this information and click the “Confirm” button when
prompted.
2) Stop Meta from using your data (from Facebook and Instagram) to help advertisers target you on other apps. Meta’s ad network connects advertisers with other apps through privacy-invasive ad auctions—generating more money and data for Meta in the
process.
Back on the Ad preferences page, click the Manage info tab again (called Ad settings
depending on your location), then select the Ads shown outside of Meta setting, select Not allowed and
then click the “X” button to close the pop-up.
Depending on your location, this setting will be called Ads from ad partners on the Manage info tab.
3) Disconnect the data that other companies share with Meta about you from your account:
From the Accounts Center screen, click the Your information and permissions option, followed by Your
activity off Meta technologies, then Manage future activity. On this screen, choose the option to Disconnect future
activity, followed by the Continue button, then confirm one more time by clicking the Disconnect future activity
button. Note: This may take up to 48 hours to take effect.
Note: This will also clear previous activity, which might log you out of apps and websites you’ve signed into through Facebook.
While these settings limit how Meta uses your data, they won’t necessarily stop the company from collecting it and potentially using it for
other purposes.
Install Privacy Badger to Block Meta’s Trackers
Privacy Badger is a free browser extension by EFF that blocks trackers—like Meta’s pixel—from loading on websites you
visit. It also replaces embedded Facebook posts, Like buttons, and Share buttons with click-to-activate placeholders, blocking another way that Meta tracks you. The
next version of Privacy Badger (coming next week) will extend this protection to embedded Instagram and Threads posts, which also send your data to Meta.
Visit privacybadger.org to install Privacy Badger on your web browser. Currently, Firefox on Android is the
only mobile browser that supports Privacy Badger.
Limit Meta’s Tracking on Your Phone
Take these additional steps on your mobile device:
Disable your phone’s advertising ID to make it harder for Meta to track what you do across apps. Follow EFF’s instructions for doing this on your iPhone or Android device.
Turn off location access for Meta’s apps. Meta doesn’t need to know where you are all the time to function, and you can safely disable location access without
affecting how the Facebook and Instagram apps work. Review this setting using EFF’s guides for your iPhone or Android device.
The Real Solution: Strong Privacy Legislation
Stopping a company you distrust from profiting off your personal data shouldn’t require tinkering with hidden settings and installing browser extensions. Instead, your data
should be private by default. That’s why we need strong federal privacy
legislation that puts you—not Meta—in control of your information.
Without strong privacy legislation, Meta will keep finding
ways to bypass your privacy protections and monetize your personal data. Privacy is about more than safeguarding your sensitive information—it’s about having the power to prevent
companies like Meta from exploiting your personal data for profit.
EFF Statement on U.S. Supreme Court's Decision to Uphold TikTok Ban
(Fri, 17 Jan 2025)
We are deeply disappointed that the Court failed to require the strict First Amendment scrutiny required in a case like this, which would’ve led to the inescapable conclusion
that the government's desire to prevent potential future harm had to be rejected as infringing millions of Americans’ constitutionally protected free speech. We are disappointed to
see the Court sweep past the undisputed content-based justification for the law – to control what speech Americans see and share with each other – and rule only based on the shaky
data privacy concerns.
The United States’ foreign foes easily can steal, scrape, or buy Americans’ data by countless other means. The ban or forced sale of one social media app will do virtually
nothing to protect Americans' data privacy – only comprehensive consumer privacy legislation can achieve that goal. Shutting down communications platforms or forcing their
reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the US has previously condemned globally.
>> Read more
Systemic Risk Reporting: A System in Crisis?
(Thu, 16 Jan 2025)
The first batch of reports assessing the so-called
“systemic risks” posed by the largest online platforms are in. These reports are a result of the Digital Services Act (DSA), Europe’s new law regulating platforms
like Google, Meta, Amazon or X, and have been eagerly awaited by civil society groups across the globe. In their reports, companies are supposed to assess whether their services
contribute to a wide range of barely defined risks. These go beyond the dissemination of illegal content and include vaguely defined categories such as negative effects on the
integrity of elections, impediments to the exercise of fundamental rights or undermining of civic discourse. We have previously warned that the subjectivity of these categories
invites a politicization of the
DSA.
In view of a new DSA investigation
into TikTok’s potential role
in Romania’s presidential
election, we take a look at the reports and the framework that has produced them to understand their value and
limitations.
A Short DSA Explainer
The DSA covers a lot of different services. It regulates online markets like Amazon or Shein, social networks like Instagram and TikTok, search engines like
Google and Bing, and even app stores like those run by Apple and Google. Different obligations apply to different services, depending on their type and size. Generally, the lower the
degree of control a service provider has over content shared via its product, the fewer obligations it needs to comply with.
For example, hosting services like cloud computing must provide points of contact for government authorities and users, and publish basic transparency reports.
Online platforms, meaning any service that makes user generated content available to the public, must meet additional requirements like providing users with detailed information about
content moderation decisions and the right to appeal. They must also comply with additional transparency obligations.
While the DSA is a necessary update to the EU’s liability rules and improves users’ rights, we have plenty of concerns with the route that
it takes:
We worry about the powers it gives to authorities to request user data and the obligation on providers to proactively share user data with law
enforcement.
We are also concerned about the ways in which trusted flaggers could lead to the over-removal of speech, and
We caution against the misuse of the DSA’s mechanism to deal with emergencies like a pandemic.
Introducing Systemic Risks
The most stringent DSA obligations apply to large online platforms and search engines that have more than 45 million users in the EU. The European
Commission has so far designated more than 20
services to constitute such “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). These companies, which include
X, TikTok, Amazon, Google Search, Maps and Play, YouTube and several porn platforms, must proactively assess and mitigate “systemic risks” related to the design, operation and use of
their services. The DSA’s non-exhaustive list of risks includes four broad categories: 1) the dissemination of illegal content, 2) negative effects on the exercise of fundamental
rights, 3) threats to elections, civic discourse and public safety, and 4) negative effects and consequences in relation to gender-based violence, protection of minors and public
health, and on a person’s physical and mental wellbeing.
The DSA does not provide much guidance on how VLOPs and VLOSEs are supposed to analyze whether they contribute to the somewhat arbitrary-seeming list of
risks mentioned. Nor does the law offer clear definitions of how these risks should be understood, leading to concerns that they could be interpreted widely and lead to the extensive
removal of lawful but awful content. There is equally little guidance on risk mitigation as the DSA merely names a few measures that platforms can choose to employ. Some of these recommendations are incredibly broad, such as adapting the design, features or functioning of a
service, or “reinforcing internal processes”. Others, like introducing age verification measures, are much more specific but come with a host of issues and can
undermine fundamental rights themselves.
Risk Management Through the Lens of the Romanian Election
Per the DSA, platforms must annually publish reports detailing how they have analyzed and managed risks. These reports are complemented by separate reports
compiled by external auditors, tasked with assessing platforms’ compliance with their obligations to manage risks and other obligations put forward by the
DSA.
To better understand the merits and limitations of these reports, let’s examine the example of the recent Romanian election. In late November 2024, an
ultranationalist and pro-Russian candidate, Calin Georgescu, unexpectedly won the first round of Romania’s presidential election. After reports by local civil society groups accusing TikTok of amplifying pro-Georgescu content, and a declassified
brief published by Romania’s intelligence services that alleges cyberattacks and influence operations, the Romanian constitutional court
annulled the results of the election. Shortly after, the European Commission opened formal proceedings against TikTok for insufficiently managing systemic risks related to the integrity of the Romanian election.
Specifically, the Commission’s investigation focuses on “TikTok's recommender systems, notably the risks linked to the coordinated inauthentic manipulation or automated exploitation
of the service and TikTok's policies on political advertisements and paid-for political content.”
TikTok’s own risk assessment report dedicates eight pages to potential negative effects on elections and civic discourse. Curiously,
TikTok’s definition of this particular category of risk focuses on the spread of election misinformation but makes no mention of coordinated inauthentic behavior or the manipulation
of its recommender systems. This illustrates the wide margin platforms have to define systemic risks and implement their own mitigation strategies. Leaving it up to platforms to define
relevant risks not only makes the comparison of approaches taken by different companies impossible, it can also lead to overly broad or narrow approaches—potentially undermining fundamental rights or running counter to the obligation to effectively deal with risks, as in this example. It should
also be noted that mis- and disinformation are terms not defined by international human rights law and are therefore not well suited as a robust basis on which freedom of expression
may be restricted.
In its report, TikTok describes the measures taken to mitigate potential risks to elections and civic discourse. This overview broadly describes some
election-specific interventions like labels for content that has not been fact-checked but might contain misinformation, and describes TikTok’s policies like its ban of political ads, which
is notoriously easy to
circumvent. It does not include any indication that the robustness and utility of the measures employed have been documented or tested, nor
any benchmarks of when TikTok considers a risk successfully mitigated. It does not, for example, contain figures on how many pieces of content receive certain labels, and how these
influence users’ interactions with the content in question.
Similarly, the report does not contain any data regarding the efficacy of TikTok’s enforcement of its political ads ban. TikTok’s “methodology” for risk
assessments, also included in the report, does not help in answering any of these questions, either. And looking at the report
compiled by the external auditor, in this case KPMG, we are once again left disappointed: KPMG concluded that it was impossible to assess
TikTok’s systemic risk compliance because of two earlier, pending investigations by the European Commission into potential non-compliance with the systemic risk mitigation
obligations.
Limitations of the DSA’s Risk Governance Approach
What, then, is the value of the risk and audit reports, published roughly a year after their finalization? The answer may be very
little.
As explained above, companies have a lot of flexibility in how to assess and deal with risks. On the one hand, some degree of flexibility is necessary:
every VLOP and VLOSE differs significantly in terms of product logics, policies, user base and design choices. On the other hand, the high degree of flexibility in determining what
exactly a systemic risk is can lead to significant inconsistencies and render risk analysis unreliable. It also allows regulators to put forward their own definitions, thereby
potentially expanding risk categories as they see fit to deal with emerging or politically salient issues.
Rather than making sense of diverse and possibly conflicting definitions of risks, companies and regulators should put forward joint benchmarks, and include
civil society experts in the process.
Speaking of benchmarks: There is a critical lack of standardized processes, assessment methodologies and reporting templates. Most assessment reports
contain very little information on how the actual assessments are carried out, and the auditors’ reports distinguish themselves through an almost complete lack of insight into the
auditing process itself. This information is crucial: it is nearly impossible to adequately scrutinize the reports themselves without knowing whether auditors were provided
the necessary information, whether they ran into any roadblocks looking at specific issues, and how evidence was produced and documented. And without methodologies that are applicable
across the board it will remain very challenging, if not impossible, to compare approaches taken by different companies.
The TikTok example shows that the risk and audit reports do not contain the “smoking gun” some might have hoped for. Besides the shortcomings explained
above, this is due to the inherent limitations of the DSA itself. Although the DSA attempts to take a holistic approach to complex societal risks that cut across different but
interconnected challenges, its reporting system is forced to only consider the obligations put forward by the DSA itself. Any legal assessment framework will struggle to capture
complex societal challenges like the integrity of elections or public safety. In addition, phenomena as complex as electoral processes and civic discourse are shaped by a range of
different legal instruments, including European rules on political ads, data protection, cybersecurity and media pluralism, not to mention countless national laws. A risk report will therefore always fall short of delivering a definitive answer on the implications of large online services for complex societal processes.
The Way Forward
The reports do present a slight improvement in terms of companies’ accountability and transparency. Even if the reports may not include the hard evidence of
non-compliance some might have expected, they are a starting point to understanding how platforms attempt to grapple with complex issues taking place on their services. As such, they
are, at best, the basis for an iterative approach to compliance. But many of the risks described by the DSA as systemic and their relationships with online services are still poorly
understood.
Instead of relying on platforms or regulators to define how risks should be conceptualized and mitigated, a joint approach is
needed—one that builds on expertise by civil society, academics and activists, and emphasizes best practices. A
collaborative approach would help make sense of these complex challenges and how they can be addressed in ways that strengthen users’ rights and protect fundamental
rights.
>> Read more
Digital Rights and the New Administration | EFFector 37.1
(Wed, 15 Jan 2025)
It's a new year and EFF is here to help you keep up with your New Year's resolution to stay up-to-date on the latest digital rights news with our EFFector newsletter!
This edition of the newsletter covers our tongue-in-cheek "awards" for some of the worst data breaches in 2024, The Breachies; an explanation of "real-time bidding," the most privacy-invasive surveillance system you
may have never heard of; and our notes to Meta on how to empower freedom of expression on their
platforms.
You can read the full newsletter here, and even get future editions directly to your
inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you can view it by clicking the button below:
LISTEN ON YouTube
EFFECTOR 37.1 - DIGITAL RIGHTS AND THE NEW ADMINISTRATION
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human
rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other
stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
>> Read more
Police Use of Face Recognition Continues to Rack Up Real-World Harms
(Wed, 15 Jan 2025)
Police have shown, time and time again, that they cannot be trusted with face recognition
technology (FRT). It is too dangerous, invasive, and in the hands of law enforcement, a perpetual liability. EFF has long argued that face recognition, whether it is
fully accurate or not, is too dangerous for police use, and such
use ought to be banned.
Now, The Washington Post
has provided one more reason for this ban: police claim to use FRT just as an investigatory lead, but in practice officers routinely ignore protocol and immediately arrest the
most likely match spit out by the computer without first doing their own investigation.
The report also tells the stories of two men who were unknown to the public until now: Christopher Galtin and Jason Vernau. They were wrongfully arrested in St. Louis and Miami,
respectively, after being misidentified by face recognition. In both cases, the men were jailed despite readily available evidence showing that, contrary to the apparent match found by the computer, they were not in fact the correct match.
This is infuriating. Just last year, the Assistant Chief of Police for the Miami Police Department, the department that wrongfully arrested Jason Vernau, testified before Congress that
his department does not arrest people based solely on face recognition and without proper follow-up investigations. “Matches are treated like an anonymous tip,” he said during the
hearing.
Apparently not all officers got the memo.
We’ve seen this before. Many times. Galtin and Vernau join a growing list of those known to have been wrongfully arrested around the United States based on police use of face
recognition. They include Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, Robert Williams, and Porcha Woodruff. It is no coincidence that all six of these people, now joined by Christopher Galtin, are Black. Scholars and activists have been raising the alarm for years that, in addition to a huge amount of police surveillance generally
being directed at Black communities, face recognition specifically has a long history of having a lower rate of accuracy when it comes to identifying people with
darker complexions. The case of Robert Williams in Detroit resulted in a lawsuit that ended with the Detroit Police Department, which had used FRT to justify a number of wrongful arrests, instituting strict new guidelines about the use of face recognition technology.
Cities across the United States have decided to join the growing
movement to ban police use of face recognition because this technology is simply too dangerous in the hands of police.
Even in a world where the technology is 100% accurate, police still should not be trusted with it. The temptation for police to fly a drone over a protest and use face
recognition to identify the crowd would be too great and the risks to civil liberties too high. After all, we already see that police are cutting corners and using their technology in
ways that violate their own departmental policies.
We continue to urge cities, states, and Congress to ban police use of face recognition technology. We
stand ready to assist. As intrepid tech journalists and researchers continue to do their jobs, increased evidence of these harms will only increase the urgency of our
movement.
>> Read more
EFFecting Change: Digital Rights & the New Administration
(Wed, 15 Jan 2025)
Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech.
EFFecting Change Livestream Series:
Digital Rights & the New Administration
Thursday, January 16th
10:00 AM - 11:00 AM Pacific - Check Local Time
This event is LIVE and FREE!
What direction will your digital rights take under Trump and the 119th Congress? Find out about the topics EFF is watching and the effect they might have on you.
Join our panel of experts as they discuss surveillance, age verification, and consumer privacy. Learn how you can advocate for your digital rights and the resources available to
you with our panel featuring EFF Senior Investigative Researcher Beryl Lipton, EFF Senior Staff Technologist Bill Budington, EFF Legislative Director Lee
Tien, and EFF Senior Policy Analyst Joe Mullin.
We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.
Want to make sure
you don’t miss our next
livestream? Here’s a link to sign up
for updates about this series: eff.org/ECUpdates.
>> Read more
Platforms Systematically Removed a User Because He Made "Most Wanted CEO" Playing Cards
(Tue, 14 Jan 2025)
On December 14, James Harr, the owner of an online store called ComradeWorkwear, announced on social media that
he planned to sell a deck of “Most Wanted CEO” playing cards, satirizing the infamous “Most-wanted Iraqi playing cards” introduced by the U.S. Defense
Intelligence Agency in 2003. Per the ComradeWorkwear
website, the Most Wanted CEO cards would offer “a critique of the capitalist machine that sacrifices people and planet for profit,” and “Unmask
the oligarchs, CEOs, and profiteers who rule our world...From real estate moguls to weapons manufacturers.”
But within a day of posting his plans for the card deck to his combined 100,000 followers on Instagram and TikTok, the New
York Post ran a front page story on Harr, calling the cards “disturbing.” Less than 5 hours later, officers from the New York City Police
Department came to Harr's door to interview him. They gave no indication he had done anything illegal or would receive any further scrutiny, but the next day the New York police
commissioner held the New York Post story up during a press
conference after announcing charges against Luigi Mangione, the alleged assassin of UnitedHealth Group CEO Brian Thompson. Shortly thereafter,
platforms from TikTok to Shopify disabled both the company’s accounts and Harr’s personal accounts, simply because he used the moment to highlight what he saw as the harms that large
corporations and their CEOs cause.
Harr was not alone. After the assassination, thousands of people took to social media to express their negative experiences with the healthcare industry,
speculate about who was behind the murder, and show their sympathy for either the victim or the shooter—if social media platforms allowed them to do so.
Many users reported having their accounts banned and content removed after sharing comments about Luigi Mangione,
Thompson's alleged assassin. TikTok, for example reportedly removed comments that simply said, "Free Luigi." Even seemingly benign content, such as a post about Mangione’s
astrological sign or a video montage of him set to music, was deleted from Threads, according to users.
The Most Wanted CEO playing cards did not reference Mangione, and the cards—which have not been released—would not include personal information
about any CEO. In his initial posts about the cards, Harr said he planned to include QR codes with more information about each company and, in his view, what dangers the companies
present. Each suit would represent a different industry, and the back of each card would include a generic shooting-range style silhouette. As Harr put it in his now-removed video,
the cards would include “the person, what they’re a part of, and a QR code that goes to dedicated pages that explain why they’re evil. So you could be like, 'Why is the CEO of Walmart
evil? Why is the CEO of Northrop Grumman evil?’”
A design for the Most Wanted CEO playing cards
Many have riffed on the military’s tradition of
using playing cards to help troops learn about the enemy. You can currently find “Gaza’s Most Wanted” playing cards on Instagram, purportedly depicting “leaders and commanders of various groups such as the
IRGC, Hezbollah, Hamas, Houthis, and numerous leaders within Iran-backed militias.” A Shopify store selling “Covid’s Most Wanted” playing cards, displaying figures like Bill Gates and
Anthony Fauci, and including QR codes linking to a website “where all the crimes and evidence are listed,” is available as of this writing. Hero Decks, which sells novelty
playing cards generally showing sports figures, even produced a deck of “Wall Street Most Wanted” cards in 2003
(popular enough to have a second edition).
As we’ve said many times, content moderation at
scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well. Companies often get it wrong and remove content or whole accounts that those
affected by the content would agree do not violate the platform’s terms of service or community guidelines. Conversely, they allow speech that could arguably be seen to violate
those terms and guidelines. That has been especially true
for speech related to divisive topics and during heated national discussions. These mistakes often remove important voices, perspectives, and context,
regularly impacting not just everyday users but journalists, human rights defenders, artists, sex worker advocacy groups, LGBTQ+ advocates,
pro-Palestinian
activists, and political groups. In some instances, this even harms people's livelihoods.
Instagram disabled the ComradeWorkwear account for “not following community standards,” with no further information provided. Harr’s personal account was
also banned. Meta has a policy
against the "glorification" of dangerous organizations and people, which it defines as "legitimizing or defending the violent or hateful acts of a
designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” Meta’s Oversight Board has
overturned multiple moderation decisions
by the company regarding its application of this policy. While Harr had posted to Instagram that “the CEO must die” after Thompson’s assassination, he
included an explanation that, "When we say the ceo must die, we mean the structure of capitalism must be broken.” (Compare this to a series of
Instagram story posts from musician Ethel Cain, whose account is still available,
which used the hashtag #KillMoreCEOs, for one of many examples of how moderation affects some people and not others.)
TikTok told Harr that he had violated the platform’s community guidelines, with no additional information. The platform has a policy against "promoting (including any praise,
celebration, or sharing of manifestos) or providing material support" to violent extremists or people who cause serial or mass violence. TikTok gave Harr no opportunity for appeal,
and continued to remove additional accounts that Harr created solely to update his followers on his life. TikTok did not point to any specific piece of content that violated its
guidelines.
On December 20, PayPal informed Harr it could no longer continue processing payments for ComradeWorkwear, with no information about why. Shopify informed
Harr that his store was selling “offensive content,” and his Shopify and Apple Pay accounts would both be disabled. In a follow-up email, Shopify told Harr the decision to close his
account “was made by our banking partners who power the payment gateway.”
Harr’s situation is not unique. Financial
and social media platforms have an enormous
amount of control over our online expression, and we’ve long been critical of their over-moderation, uneven enforcement, lack of transparency, and failure to offer reasonable
appeals. This is why EFF co-created The Santa Clara Principles
on transparency and accountability in content moderation, along with a broad coalition of organizations, advocates, and academic experts. These platforms
have the resources to set the standard for content moderation, but clearly don’t apply their moderation evenly, and in many instances, aren’t even doing the basics—like offering clear
notices and opportunities for appeal.
Harr was one of many who expressed frustration online with the growing power of corporations. These voices shouldn’t be silenced into submission simply for drawing attention to the influence that
they have. These are exactly the kinds of actions that Harr intended to highlight. If the Most Wanted CEO
deck is ever released, it shouldn’t be a surprise for the CEOs of these platforms to find themselves in the lineup.
>> Read more
Five Things to Know about the Supreme Court Case on Texas’ Age Verification Law, Free Speech Coalition v. Paxton
(Mon, 13 Jan 2025)
The Supreme Court will hear arguments on Wednesday in a case that will determine whether states can violate adults’ First Amendment rights to access sexual content online by
requiring them to verify their age.
The case, Free Speech Coalition v. Paxton, could have far-reaching effects for every internet users’ free speech, anonymity, and privacy rights.
The Supreme Court will decide whether a Texas law, HB1181, is constitutional. HB 1181 requires a huge swath of websites—many that would likely not consider themselves adult content
websites—to implement age verification.
The plaintiff in this case is the Free Speech Coalition, the nonprofit non-partisan trade
association for the adult industry, and the Defendant is Texas, represented by Ken Paxton, the state’s Attorney General. But this case is about much more than adult content or the
adult content industry. State and federal lawmakers across the country have recently turned to ill-conceived, unconstitutional, and dangerous censorship legislation that would force
websites to determine the identity of users before allowing them access to protected speech—in some cases, social media. If the Supreme Court were to side with Texas, it would open
the door to a slew of state laws that frustrate internet users’ First Amendment rights and make them less secure online. Here's what you need to know about
the upcoming arguments, and why it’s critical for the Supreme Court to get this case right.
1. Adult Content is Protected Speech, and It Violates the First Amendment for a State to Require Age-Verification to Access It.
Under U.S. law, adult content is protected speech. Under the
Constitution and a history of legal precedent, a legal restriction on access to protected speech must pass a very high bar. Requiring invasive age verification to access protected
speech online simply does not pass that test. Here’s why:
While other laws prohibit the sale of adult content to minors and result in age verification via a government ID or other proof-of-age in physical spaces, there are practical
differences that make those disclosures less burdensome or even nonexistent compared to online prohibitions. Because of the sheer scale of the internet, regulations affecting online
content sweep in millions of people who are obviously adults, not just those who visit physical bookstores or other places to access adult materials, and not just those who might
perhaps be seventeen or under.
First, under HB 1181, any website that Texas decides is composed of
“one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors—even to access the other two-thirds of material
that is not adult content.
Second, while there are a variety of methods for verifying age online, the Texas law generally forces adults to submit personal information over the internet to access entire
websites, not just specific sexual materials. This is the most common method of online age verification today, and the law doesn't set out a specific method for websites to verify
ages. But fifteen million adult U.S. citizens do not have a driver’s license, and over two million have no form of photo ID. Other methods of age verification, such as using online
transactional data, would also exclude a large number of people who, for example, don’t have a mortgage.
Less accurate methods, such as “age estimation,” which are usually based solely on an image or video of a person’s face, have their own privacy concerns. These methods are
unable to determine with any accuracy whether a large number of people—for example, those over seventeen but under twenty-five years old—are the age they claim to be. These
technologies are unlikely to satisfy the requirements of HB 1181 anyway.
Third, even for people who are able to verify their age, the law still deters adult users from speaking and accessing lawful content by undermining anonymous internet browsing.
Courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.
Lastly, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when
briefly flashing an ID card to a cashier.
2. HB1181 Requires Every Adult in Texas to Verify Their Age to See Legally Protected Content, Creating a Privacy and Data Security
Nightmare.
Once information is shared to verify a user’s age, there’s no real way for a website visitor to be certain that the data they’re handing over is not going to be retained and
used by the website, or further shared or even sold. Age
verification systems are surveillance systems. Users must trust that the website they visit, or its third-party verification service, both of which could be
fly-by-night companies with no published privacy standards, are following these rules. While many users will simply not access the content as a result—see the above point—others may
accept the risk, at their peril.
There is real risk that website employees will misuse the data, or that thieves will steal it. Data breaches affect nearly everyone in the U.S. Last year, age verification
company AU10TIX encountered a
breach, and there is every reason to expect this problem will grow if more websites are required, by law, to use age verification. The more information a website
collects, the more chances there are for it to get into the hands of a marketing company, a bad actor, or someone who has filed a subpoena for it.
The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed. The law amplifies the security risks
because it applies to such sensitive websites, potentially allowing a website or bad actor to link this personal information with the website at issue, or even with the specific types
of adult content that a person views. This sets up a dangerous regime that would reasonably frighten many users away from viewing the site in the first place. Given the regularity of data breaches of less sensitive information, HB1181 creates a perfect storm for data privacy.
3. This Decision Could Have a Huge Impact on Other States with Similar Laws, as Well as Future Laws Requiring Online Age Verification.
More than a third of U.S. states have introduced or
enacted laws similar to Texas’ HB1181. This ruling could have major consequences for those laws and for the freedom of adults across the country to safely and
anonymously access protected speech online, because the precedent the Court sets here could apply to both those and future laws. A bad decision in this case could be seen as a green
light for federal lawmakers who are interested in a broader national age verification requirement on online pornography.
It’s also not just adult content that’s at risk. A ruling from the Court on HB1181 that allows Texas to violate the First Amendment here could make it harder to fight state and
federal laws like the Kids Online Safety
Act which would force users to verify their ages before accessing social media.
4. The Supreme Court Has Rightly Struck Down Similar Laws Before.
In 1997, the Supreme Court struck down, in a 7-2 decision, a federal online age-verification law in Reno v. American Civil Liberties Union. In
that landmark free speech case the court ruled that many elements of the Communications Decency Act violated the First Amendment, including part of the law
making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to
implement age verification, while others would have been forced to shut down.
The CDA fight was one of the first big rallying points
for online freedom, and EFF participated as both a plaintiff and as co-counsel. When the law first passed, thousands of websites turned their backgrounds black in
protest. EFF launched its "blue ribbon" campaign and millions of websites around the world joined in support of free speech online. Even today, you can find the blue ribbon throughout
the Web.
Since that time, both the Supreme Court and many other federal courts have correctly recognized that online identification mandates—no matter what method they use or form they
take—more significantly burden First Amendment rights than restrictions on in-person access to adult materials. Because courts have consistently held that similar age verification
laws are unconstitutional, the precedent is clear.
5. There is No Safe, Privacy Protecting Age-Verification Technology.
The same constitutional problems that the Supreme Court identified in Reno back in 1997 have only metastasized. Since then, courts
have found that “[t]he risks of compelled digital verification are just as large, if not
greater” than they were nearly 30 years ago. Think about it: no matter what method someone uses to verify your age, to do so accurately, they must know who you are, and they must
retain that information in some way or verify it again and again. Different age verification methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more
accurate' and 'less accurate.' Rather, they each fall on a spectrum of dangerous in one way to dangerous in a different way. For more information about the dangers of various methods,
you can read our comments to the New York State Attorney General regarding
the implementation of the SAFE for Kids Act.
* * *
The Supreme Court Should Uphold Online First Amendment Rights and Strike Down This Unconstitutional Law
Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s
protected under the First Amendment. Age-verification laws like this one reach into every U.S. adult household. We look forward to the court striking down this unconstitutional
law and once again affirming these important online free speech rights.
For more information on this case, view our amicus brief filed with the
Supreme Court. For a one-pager on the problems with age verification, see
here. For more information on recent state laws dealing with age verification, see Fighting Online ID Mandates: 2024 In Review. For more information on
how age verification laws are playing out around the world, see Global
Age Verification Measures: 2024 in Review.
>> Read more
Meta’s New Content Policy Will Harm Vulnerable Users. If It Really Valued Free Speech, It Would Make These Changes
(Thu, 09 Jan 2025)
Earlier this week, when Meta announced changes to their content
moderation processes, we were hopeful that some of those changes—which we
will address in more detail in this post—would enable greater freedom of expression on the company’s platforms, something for which we have advocated for many years. While Meta’s
initial announcement primarily addressed changes to its misinformation policies and included rolling back over-enforcement and automated tools that we have long criticized, we
expressed hope that “Meta will also look closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ+ speech, political dissidence, and
sex work.”
However, shortly after our initial statement was published, we became aware that rather than addressing those historically over-moderated subjects, Meta was taking the opposite
tack and—as reported by the
Independent—was making targeted changes to its hateful conduct policy that would allow dehumanizing
statements to be made about certain vulnerable groups.
It was our mistake to formulate our responses and expectations on what is essentially a marketing video for upcoming policy changes before any of those changes were
reflected in their documentation. We prefer to focus on the actual impacts of online censorship felt by people, which tends to be further removed from the stated policies
outlined in community guidelines and terms of service documents. Facebook has a clear and disturbing track record of silencing and further marginalizing already oppressed peoples, and
then being less than forthright about their content moderation policy. These first changes to actually surface in Facebook's community standards document seem to be in the same
vein.
Specifically, Meta’s hateful conduct policy now contains the following:
People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools,
specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing
political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic
break-up. Our policies are designed to allow room for these types of speech.
But the implementation of this policy shows that it is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more
speech challenging the legitimacy of LGBTQ+ rights. For example,
While allegations of mental illness against people based on their protected characteristics remain a tier 2 violation, the revised policy now allows
“allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism
[sic] and homosexuality.”
The revised policy now specifies that Meta allows speech advocating gender-based and sexual orientation-based-exclusion from military, law enforcement, and teaching jobs,
and from sports leagues and bathrooms.
The revised policy also removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics.
These changes reveal that Meta seems less interested in freedom of expression as a principle and more focused on appeasing the incoming U.S. administration, a concern we
mentioned in our initial statement with respect to the announced move of the content policy team from California to Texas to address “appearances of bias.” Meta said it would be
making some changes to reflect that these topics are “the subject of frequent political discourse and debate” and can be said “on TV or the floor of Congress.” But if that is truly
Meta’s new standard, we are struck by how selectively it is being rolled out, particularly in allowing more anti-LGBTQ+ speech.
We continue
to stand firmly against hateful anti-trans content remaining on Meta’s platforms, and strongly condemn any policy change directly aimed at enabling hate toward vulnerable
communities—both in the U.S. and internationally.
Real and Sincere Reforms to Content Moderation Can Both Promote Freedom of Expression and Protect Marginalized Users
In its initial announcement, Meta also said it would change how policies are enforced to reduce mistakes, stop reliance on automated systems to flag every piece of content, and
add staff to review appeals. We believe that, in theory, these are positive measures that should result in less censorship of expression for which Meta has long been criticized by the
global digital rights community, as well as by artists, sex worker advocacy groups, LGBTQ+ advocates, Palestine advocates, and political groups, among others.
But we are aware that these problems, at a corporation with a history of biased and harmful moderation like Meta, need a careful, well-thought-out, and sincere fix that will not
undermine broader freedom of expression goals.
For more than a decade, EFF has been critical of the impact that content moderation at scale—and automated content moderation in particular—has
on various groups. If Meta is truly interested in promoting freedom of expression across its platforms, we renew our calls to prioritize the following much-needed improvements
instead of allowing more hateful speech.
Meta Must Invest in Its Global User Base and Cover More Languages
Meta has long failed to invest in providing cultural and linguistic competence in its moderation practices, often leading to inaccurate removal of content as well as a greater
reliance on (faulty) automation tools. This has been apparent to us for a long time. In the wake of the 2011 Arab uprisings, we documented our concerns with Facebook’s reporting processes and their
effect on activists in the Middle East and North Africa. More recently, the need for cultural competence in the industry generally was emphasized in the revised Santa Clara Principles.
Over the years, Meta’s global shortcomings became even more apparent as its platforms were used to promote hate and extremism in a number of locales. One key example is the
platform’s failure to moderate anti-Rohingya
sentiment in Myanmar—the direct result of having far too few Burmese-speaking moderators (in 2015, as extreme violence and violent sentiment toward the Rohingya was well underway,
there were just two such moderators).
If Meta is indeed going to roll back the use of automation to flag and action most content and ensure that appeals systems work effectively, which will solve some of these
problems, it must also invest globally in qualified content moderation personnel to make sure that content from countries outside of the United States and in languages other than
English is fairly moderated.
Reliance on Automation to Flag Extremist Content Allows for Flawed Moderation
We have long been critical of Meta’s over-enforcement of terrorist and extremist speech, specifically of the impact it has on human rights content. Part of the problem is Meta’s over-reliance
on automation to flag extremist content. A 2020 document reviewing moderation across the Middle East and North Africa claimed that algorithms used to detect terrorist content in
Arabic incorrectly flag posts 77 percent of the
time.
More recently, we have seen this with Meta’s automated moderation to remove the phrase “from the river to the sea.” As we argued in a submission to the Oversight Board—with which the Board
also agreed—moderation decisions
must be made on an individualized basis because the phrase has a significant historical usage that is not hateful or otherwise in violation of Meta’s community standards.
Another example of this problem that has overlapped with Meta’s shortcomings with respect to linguistic competence is in relation to the term “shaheed,” which translates most
closely to “martyr” and is used by Arabic speakers and many non-Arabic-speaking Muslims elsewhere in the world to refer primarily (though not exclusively) to individuals who have died
in the pursuit of ideological causes. As we argued in our joint
submission with ECNL to the Meta Oversight Board, use of the term is context-dependent, but Meta has used automated moderation to indiscriminately remove instances of
the word. In their policy advisory opinion, the Oversight Board noted that any restrictions on freedom of expression
that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”
Marginalized communities that experience persecution offline often face disproportionate censorship online. It is imperative that Meta recognize the responsibilities it has to
its global user base in upholding free expression, particularly of communities that may otherwise face censorship in their home countries.
Sexually-Themed Content Remains Subject to Discriminatory Over-censorship
Our critique of Meta’s removal of sexually-themed content goes back more than a decade. The company’s policies
on adult sexual activity and nudity affect a wide
range of people and communities, but most acutely impact LGBTQ+ individuals and sex workers. Typically aimed at keeping sites “family friendly” or
“protecting the children,” these policies are unevenly enforced, often classifying LGBTQ+ content as “adult” or “harmful” when similar heterosexual content isn’t. These policies
were often written and enforced discriminatorily and at the expense of gender-fluid and nonbinary speakers—we joined in the We
the Nipple campaign aimed at remedying this discrimination.
In the midst of ongoing political divisions, issues like this have a serious impact on social media users.
Most nude content is legal, and engaging with such material online provides individuals with a safe and open framework to explore their identities, advocate for broader societal
acceptance and against hate, build
communities, and discover new
interests. With Meta intervening to become the arbiters of how people create and engage with nudity and sexuality—both offline and in the digital space—a crucial form
of engagement for all kinds of users has been removed and the voices of people with less power have regularly been shut down.
Over-removal of Abortion Content Stifles User Access to Essential Information
The removal of abortion-related posts on Meta
platforms containing the word ‘kill’ has failed to meet the criteria for restricting users’ right to freedom of expression. Meta has regularly over-removed abortion-related content, hamstringing its users’ ability to voice their political beliefs. The use of automated tools for content moderation leads to the biased removal of this language, as well as essential
information. In 2022, Vice reported that a
Facebook post stating "abortion pills can be mailed" was flagged within seconds of it being posted.
At a time when bills are being introduced across the U.S. to restrict the exchange of abortion-related information online, reproductive justice and safe access to abortion, like so many other aspects of managing our healthcare, are fundamentally tied to our digital lives. And with
corporations deciding what content is hosted online, the impact of this removal is exacerbated.
What was benign data online is now effectively potential criminal evidence. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant,
LGBTQ+ people and other traditionally marginalized communities, and the healthcare providers serving these communities. Meta must adhere to its responsibility to respect international human
rights law, and ensure that any abortion-related content removal be both necessary and proportionate.
Meta’s symbolic move of its content team from California to Texas, a state that is aiming to make the distribution of abortion information illegal, also raises serious concerns that Meta will backslide on this issue—in
line with local Texan state law banning abortion—rather than make improvements.
Meta Must Do Better to Provide Users With Transparency
EFF has been critical of Facebook’s lack of
transparency for a long time. When it comes to content moderation, the company’s transparency reports lack many of the basics: how many human moderators are
there, and how many cover each language? How are moderators trained? The company’s community
standards enforcement report includes rough estimates of how many pieces of content of which categories get removed, but does not tell us why or how these decisions
are taken.
Meta makes billions from its own exploitation of our data, too often choosing their profits over our privacy—opting to collect as much as possible while denying users intuitive
control over their data. In many ways this problem underlies the rest of the corporation’s harms—that its core business model depends on collecting as much information about users as
possible, then using that data to target ads, as well as target competitors.
That’s why EFF, with others, launched the Santa Clara Principles on how corporations like Meta can best
obtain meaningful transparency and accountability around the increasingly aggressive moderation of user-generated content. And as platforms like Facebook, Instagram, and X continue to
occupy an even bigger role in arbitrating our speech and controlling our data, there is an increased urgency to ensure that their reach is not only stifled, but reduced.
Flawed Approach to Moderating Misinformation with Censorship
Misinformation has been thriving on social media platforms, including Meta. As we said in our initial statement, and have written before, Meta and other platforms should use a variety of fact-checking and
verification tools available to it, including both community notes and professional fact-checkers, and have robust systems in place to check against any flagging that results from
it.
Meta and other platforms should also employ media literacy tools, such as encouraging users to read articles before sharing them and providing resources to help users assess the reliability of information on the site. We have also called for Meta and others to stop privileging governmental
officials by providing them with greater opportunities to lie than other users.
While we expressed some hope on Tuesday, the cynicism expressed by others seems warranted now. Over the years, EFF and many others have worked to push Meta to make
improvements. We've had some success with its "Real Names" policy, for
example, which disproportionately affected the LGBTQ community and political dissidents. We also fought for, and won improvements on, Meta's policy on allowing images of
breastfeeding, rather than marking them as "sexual content." If Meta truly values freedom of expression, we urge it to redirect its focus to empowering historically marginalized
speakers, rather than empowering only their detractors.
>> Read more
EFF Statement on Meta's Announcement of Revisions to Its Content Moderation Processes
(Tue, 07 Jan 2025)
Update: After this blog post was published (addressing Meta's blog post
here), we learned Meta also revised its public "Hateful Conduct" policy
in ways EFF finds concerning. We address these changes in this blog
post, published January 9, 2025.
In general, EFF supports moves that bring more freedom of expression and transparency to platforms—regardless of their political motivation. We’re
encouraged by Meta's recognition that automated flagging and responses to flagged content have caused all sorts of mistakes in moderation. Just this week, it was reported that some of those "mistakes" were
heavily censoring LGBTQ+ content. We sincerely hope that the lightened restrictions announced by Meta will apply uniformly, and not just to hot-button U.S. political
topics.
Censorship, broadly, is not the answer to misinformation. We encourage social media companies to employ a variety of non-censorship tools to address
problematic speech on their platforms, and fact-checking can be one of those tools. Community notes, essentially crowd-sourced fact-checking, can be a very valuable tool for addressing
misinformation and potentially give greater control to users. But fact-checking by professional organizations with ready access to subject-matter expertise can be another. This has
proved especially true in international contexts where they have been instrumental in refuting, for example, genocide denial.
So, even if Meta is changing how it uses and preferences fact-checking entities, we hope that Meta will continue to look to fact-checking entities as an
available tool. Meta does not have to, and should not, choose one system to the exclusion of the other.
Importantly, misinformation is only one of many content moderation challenges facing Meta and other social media companies. We hope Meta will also look
closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ speech, political dissidence,
and sex work.
Meta’s decision to move its content teams from California to “help reduce the concern that biased employees are overly censoring content” seems more
political than practical. There is of course no population that is inherently free from bias and by moving to Texas, the “concern” will likely not be reduced, but just relocated from
perceived “California bias” to perceived “Texas bias.”
Content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well, involving millions of
difficult decisions. On the one hand, Meta has been over-moderating some content for years, resulting in the suppression of valuable
political speech. On the other hand, Meta's previous rules have offered protection from certain types of hateful speech, harassment, and harmful disinformation that isn't illegal in
the United States. We applaud Meta’s efforts to try to fix its over-censorship problem but will watch closely to make sure it is a good-faith effort and rolled out fairly and not
merely a political maneuver to accommodate the upcoming U.S. administration change.
>> Read more
Sixth Circuit Rules Against Net Neutrality; EFF Will Continue to Fight
(Tue, 07 Jan 2025)
Last week, the Sixth U.S. Circuit Court of Appeals ruled against the FCC,
rejecting its authority to classify broadband as a Title II “telecommunications service.” In doing so, the court removed net neutrality protections for all Americans and took
away the FCC’s ability to meaningfully regulate internet service providers.
This ruling fundamentally gets wrong the reality of internet service we all live with every day. Nearly 80% of Americans view broadband access as being as important as water and electricity. It is no longer an extra, non-necessary “information service,” as it was seen 40 years ago, but a vital medium of communication in everyday life. Business, health services, education, entertainment, our social lives, and more have increasingly moved online. By ruling that broadband is an “information service” and not a “telecommunications service,” this court is saying that the ISPs that control your broadband access will continue to face little to no oversight for their actions.
This is intolerable.
Net neutrality is the principle that ISPs treat all data that travels over their network equally, without improper discrimination in favor of particular apps, sites, or
services. At its core, net neutrality is a principle of equity and protector of innovation—that, at least online, large monopolistic ISPs don’t get to determine winners and losers.
Net neutrality ensures that users determine their online experience, not ISPs. As such, it is fundamental to user choice, access to information, and free expression online.
By removing protections against actions like blocking, throttling, and paid prioritization, the court gives those willing and able to pay ISPs an advantage over those who are
not. It privileges large legacy corporations that have partnerships with the big ISPs, and it means that newer, smaller, or niche services will have trouble competing, even if they
offer a superior service. It means that ISPs can throttle your service–or that of, say, a fire department fighting the largest
wildfire in state history. They can block a service they don’t like. In addition to charging you for access to the internet, they can charge services and websites for access to
you, artificially driving up costs. And where most Americans have little choice in home broadband providers, it
means these ISPs will be able to exercise their monopoly power not just on the price you pay for access, but on how you access and engage with information as well.
Moving forward, now more than ever it becomes important for individual states to pass their own net neutrality laws, or defend the ones they have on the books. California passed
a gold standard net neutrality law in 2018 that has survived judicial scrutiny. It is up to us to ensure it remains in
place.
Congress can also end this whiplash of reclassification once and for all by passing a law classifying broadband internet services firmly under Title II.
Such proposals have been introduced before; they ought
to be introduced again.
This is a bad ruling for Team Internet, but we are resilient. EFF–standing with users, innovators, creators, public interest advocates, librarians, educators, and everyone
else who relies on the open internet–will continue to champion the principles of net neutrality and work toward an equitable and open internet for all.
Last Call: The Combined Federal Campaign Pledge Period Closes on January 15!
(Tue, 07 Jan 2025)
The pledge period for the Combined Federal Campaign (CFC) closes on Wednesday, January
15! If you're a U.S. federal employee or retiree, now is the time to make your pledge and support EFF’s work to
protect your rights online.
If you haven’t before, giving to EFF through the CFC is quick and easy! Just head on over to GiveCFC.org and click
“DONATE.” Then you can search for EFF using our CFC ID 10437 and make a pledge via payroll deduction,
credit/debit, or an e-check. If you have a renewing pledge, you can also choose to increase your support there as well!
The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year members of this community
raised nearly $34,000 to support EFF’s initiatives advocating for privacy and free expression online. That
support has helped us:
Fight for the public's right to access police drone footage
Encourage the Fifth Circuit Court of Appeals to rule that location-based geofence warrants are
unconstitutional
Push back against countless censorship laws, including the Kids Online Safety Act
Continue to see more of the web encrypted thanks to Certbot and Let's
Encrypt
Federal employees and retirees have a tremendous impact on our democracy and the future of civil liberties and human rights online. By making a pledge
through the CFC, you can shape a future where your privacy and free speech rights are protected. Make your pledge today using EFF’s CFC ID 10437!
EFF Goes to Court to Uncover Police Surveillance Tech in California
(Mon, 06 Jan 2025)
Which surveillance technologies are California police using? Are they buying access to your location data? If so, how much are they paying? These are basic questions the Electronic
Frontier Foundation is trying to answer in a new lawsuit called Pen-Link v. County of San Joaquin Sheriff’s Office.
EFF filed a motion in California Superior Court to join—or intervene in—an existing lawsuit to get access to
documents we requested. The private company Pen-Link sued the San Joaquin Sheriff’s Office to block the agency from disclosing to EFF the unredacted contracts between them, claiming
the information is a trade secret. We are going to court to make sure the public gets access to these records.
The public has a right to know the technology that law enforcement buys with taxpayer money. This information is not a trade secret, despite what private companies try to claim.
How did this case start?
As part of EFF’s transparency mission, we sent public records requests to California law enforcement agencies—including the San Joaquin Sheriff’s
Office—seeking information about law enforcement’s use of technology sold by two companies: Pen-Link and its subsidiary, Cobwebs Technologies.
The Sheriff’s Office gave us 40 pages of redacted documents. But at the request of
Pen-Link, the Sheriff’s Office redacted the descriptions and prices of the products, services, and subscriptions offered by Pen-Link and Cobwebs.
Pen-Link then filed a lawsuit to permanently
block the Sheriff’s Office from making the information public, claiming its prices and descriptions are trade secrets. Among other things, Pen-Link requires its law enforcement
customers to sign non-disclosure agreements to not reveal use of the technology without the company’s consent. In addition to thwarting transparency, this raises serious questions about defendants’ rights to obtain discovery in criminal cases.
“Customer and End Users are prohibited from disclosing use of the Deliverables, names of Cobwebs' tools and technologies, the existence of this agreement or the relationship between
Customers and End Users and Cobwebs to any third party, without the prior written consent of Cobwebs,” according to Cobwebs’ Terms.
Unfortunately, these kinds of terms are not new.
EFF is entering the lawsuit to make sure the records get released to the public. Pen-Link’s lawsuit is known as a “reverse” public records lawsuit because it seeks to block, rather than grant, access to public records. It is a rare tool, traditionally used only to protect a person’s constitutional right to privacy—not a business’ purported trade secrets. In
addition to defending against the “reverse” public records lawsuit, we are asking the court to require the Sheriff’s Office to give us the un-redacted records.
Who are Pen-Link and Cobwebs Technologies?
Pen-Link and its subsidiary Cobwebs Technologies are private companies that sell products and services to law enforcement. Pen-Link has been around for years and may be best known as
a company that helps law enforcement execute wiretaps after a court grants approval. In 2023, Pen-Link acquired the company Cobwebs Technologies.
The redacted documents indicate that San Joaquin County was interested in Cobwebs’ “Web Intelligence Investigation Platform.” In other cases, this platform has included separate products like WebLoc, Tangles, or a “face processing subscription.” WebLoc is a platform that provides law enforcement with a vast amount
of location data sourced from large data sets. Tangles uses AI to glean
intelligence from the “open, deep and dark web.” Journalists at multiple news outlets have chronicled this technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists
and independent journalists. The company has also provided proxy social media accounts for undercover investigations, which led Meta to name it a surveillance-for-hire company and to delete
hundreds of accounts associated with the platform. Cobwebs has had multiple high-value contracts with federal agencies like Immigration and Customs Enforcement (ICE) and the Internal
Revenue Service (IRS) and state entities, like the Texas Department of Public Safety and the
West Virginia Fusion Center. EFF classifies this type of product as a “Third
Party Investigative Platform,” a category that we began documenting in the Atlas of Surveillance project earlier this year.
What’s next?
Before EFF officially joins the case, the court must grant our motion; then we can file our petition and brief the case. A favorable ruling would grant the public access to these
documents and show law enforcement contractors that they can’t hide their surveillance tech behind claims of trade secrets.
For communities to have informed conversations and make reasonable decisions about powerful surveillance tools being used by their governments, our right to information under public
records laws must be honored. The costs and descriptions of government purchases are common data points, regularly subject to disclosure under public records laws.
Allowing Pen-Link to keep this information secret would dangerously diminish the public’s right to government transparency and help facilitate surveillance of U.S. residents. In the
past, our public records work has exposed similar surveillance technology. In 2022, EFF produced a large exposé on Fog Data Science, the secretive company selling mass
surveillance to local police.
The case number is STK-CV-UWM-0016425. Read more here:
EFF’s Motion to Intervene
EFF’s Points and Authorities
Trujillo Declaration & EFF’s Cross-Petition
Pen-Link’s Original Complaint
Redacted documents produced by County of San Joaquin Sheriff’s Office
Related Cases:
Pen-Link v. County of San Joaquin Sheriff’s Office
Online Behavioral Ads Fuel the Surveillance Industry—Here’s How
(Mon, 06 Jan 2025)
A global spy tool exposed the locations of
billions of people to anyone willing to pay. A Catholic group bought location data about gay dating app users in an effort to
out gay priests. A location data broker sold lists of people who attended political
protests.
What do these privacy violations have in common? They share a source of data that’s shockingly pervasive and unregulated: the technology powering nearly every ad you see
online.
Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called
“real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online
activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of.
What is Real-Time Bidding?
RTB is the process used to select the targeted ads shown to you on nearly every website and app you visit. The ads you see are the winners of milliseconds-long auctions that
expose your personal information to thousands of companies a
day. Here’s how it works:
The moment you visit a website or app with ad space, it asks a company that runs ad auctions to determine which ads it will display for you. This involves sending
information about you and the content you’re viewing to the ad auction company.
The ad auction company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
The bid request may contain personal information like your unique advertising ID, location, IP address, device details, interests, and demographic information (a simplified example follows this list). The information in bid requests is called “bidstream data” and can easily be linked to real people.
Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on ad space.
Advertisers, and their ad buying platforms, can store the personal data in the bid request regardless of whether or not they bid on ad space.
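To make that data flow concrete, here is a minimal sketch of what a bid request might carry and how it fans out to auction participants. The field names are hypothetical and only loosely inspired by the industry’s OpenRTB format; this is an illustration of the pattern described above, not any real ad exchange’s implementation.

    # Illustrative sketch only: hypothetical fields, loosely inspired by OpenRTB.
    # It simulates how one page view fans the same personal data out to every
    # auction participant, whether or not that participant wins (or even bids).

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class BidRequest:
        advertising_id: str      # device's resettable advertising identifier
        ip_address: str
        latitude: float
        longitude: float
        device: str
        interests: List[str]
        page_url: str            # the content being viewed

    def broadcast(request: BidRequest, participants: List[str]) -> str:
        """Send the bid request to every participant and return the 'winner'.
        Every participant receives the full request, win or lose."""
        for buyer in participants:
            print(f"sent to {buyer}: {request}")
        return participants[0]  # stand-in for whichever buyer bid highest

    if __name__ == "__main__":
        req = BidRequest(
            advertising_id="38400000-8cf0-11bd-b23e-10b96e40000d",
            ip_address="203.0.113.7",
            latitude=37.7749,
            longitude=-122.4194,
            device="Phone / OS 17",
            interests=["fitness", "reproductive health"],
            page_url="https://example.com/article",
        )
        winner = broadcast(req, ["ad-buyer-1", "data-broker-posing-as-buyer", "ad-buyer-2"])
        print("auction winner:", winner)

The point of the sketch is the broadcast step: every participant, including a broker that never intends to buy an ad, walks away with the full request.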
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive the data. Indeed, anyone posing as an ad buyer can access a
stream of sensitive data about the billions of individuals using websites or apps with targeted ads. That’s a big way that RTB puts personal data into the hands of data brokers, who
sell it to basically anyone willing to pay. Although some ad auction companies have policies against
selling bidstream data, the practice remains
widespread.
RTB doesn’t just allow companies to harvest your data—it also incentivizes it. Bid requests containing more personal
data attract higher bids, so websites and apps are financially motivated to harvest as much of your data as possible. RTB further incentivizes data brokers to track
your online activity because advertisers purchase data from data brokers to inform their
bidding decisions.
Data brokers don’t need any direct relationship with the apps and websites they’re collecting bidstream data from. While some data collection methods require web or app
developers to install code from a data
broker, RTB is facilitated by ad companies that are already plugged into most websites and apps. This allows data brokers to collect data at a staggering
scale. Hundreds of billions of RTB bid requests are
broadcast every day. For each of those bids, thousands of real or fake ad buying platforms may receive data. As a result, entire businesses have emerged to harvest and sell data from
online advertising auctions.
First FTC Action Against Abuse of Real-Time Bidding Data
A recent enforcement action
by the Federal Trade Commission (FTC) shows that the dangers of RTB are not hypothetical—data brokers actively rely on RTB to collect and sell sensitive information. The FTC
found that data broker Mobilewalla was collecting personal data—including precise location information—from RTB auctions without placing ads.
Mobilewalla collected data on over a billion people, with an estimated 60%
sourced directly from RTB auctions. The company then sold this data for a range of invasive purposes, including tracking union organizers, tracking
people at Black Lives Matter protests, and compiling home addresses of healthcare employees for recruitment by competing employers. It also categorized people into custom groups for advertisers, such as “pregnant women,”
“Hispanic churchgoers,” and “members of the LGBTQ+ community.”
The FTC concluded that Mobilewalla’s practice of collecting personal data from RTB auctions where it didn’t place ads violated the FTC Act’s prohibition on unfair conduct. The FTC’s
proposed settlement order bans Mobilewalla from collecting consumer data from RTB auctions for any purposes other than participating in those auctions. This action marks
the first time the FTC has targeted the abuse of
bidstream data. While we celebrate this significant milestone, the dangers of RTB go far beyond one data broker.
Real-Time Bidding Enables Mass Surveillance
RTB is regularly exploited for government surveillance. As early as 2017, researchers demonstrated
that $1,000 worth of ad targeting data could be used to track an individual’s locations and glean sensitive information like their religion and sexual orientation. Since then,
data brokers have been caught selling bidstream data to government intelligence agencies. For example, the data broker Near Intelligence collected data about more than a billion
devices from RTB auctions and sold it to the U.S. Defense
Department. Mobilewalla sold bidstream data to another
data broker, Gravy Analytics, whose subsidiary, Venntel, has likewise sold location data to the FBI, ICE, CBP, and other government
agencies.
In addition to buying raw bidstream data, governments buy surveillance tools that rely on the same advertising auctions. The surveillance company Rayzone posed as an
advertiser to acquire bidstream data, which it repurposed into tracking tools sold to governments around the world. Rayzone’s tools could identify phones that had been in specific
locations and link them to people's names, addresses, and browsing histories. Patternz, another surveillance tool built on bidstream data, was
advertised to security agencies worldwide as a way to track people's locations. The CEO of Patternz highlighted the connection between surveillance and advertising technology when he
suggested his company could track people through “virtually any app that
has ads.”
Beyond the privacy harms from RTB-fueled government surveillance, RTB also creates national security risks. Researchers have warned that RTB could allow foreign states and non-state
actors to obtain compromising personal data about American defense personnel and political leaders. In fact, Google’s ad auctions sent sensitive data to a Russian ad company for
months after it was sanctioned by the U.S. Treasury.
The privacy and security dangers of RTB are inherent to its design, and not just a matter of misuse by individual data brokers. The process broadcasts torrents of our personal
data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. This indiscriminate sharing of location data and other
personal information is dangerous, regardless of whether the recipients are advertisers or surveillance companies in disguise. Sharing sensitive data with advertisers enables
exploitative advertising, such as predatory loan companies targeting people in financial
distress. RTB is a surveillance system at its core, presenting corporations and governments with limitless opportunities to use our data against us.
How You Can Protect Yourself
Privacy-invasive ad auctions occur on nearly every website and app, but there are steps you can take to protect yourself:
For apps: Follow EFF’s
instructions to disable your mobile advertising ID and audit app permissions. These steps will reduce the personal data available to the RTB process and make it
harder for data brokers to create detailed profiles about you.
For websites: Install Privacy Badger, a free browser extension built by EFF to block online trackers.
Privacy Badger automatically blocks tracking-enabled advertisements, preventing the RTB process from beginning.
These measures will help protect your privacy, but advertisers are constantly finding new ways to collect and exploit your data.
This is just one more reason why individuals shouldn’t bear the sole responsibility of defending their data every time they use the internet.
The Real Solution: Ban Online Behavioral Advertising
The best way to prevent online ads from fueling surveillance is to ban online
behavioral advertising. This would end the practice of targeting ads based on your online activity, removing the primary incentive for companies to track and share
your personal data. It would also prevent your personal data from being broadcast to data brokers through RTB auctions. Ads could still be targeted contextually—based on the content
of the page you’re currently viewing—without collecting or exposing sensitive information about you. This shift would not only protect individual privacy but also reduce the power of
the surveillance industry. Seeing an ad shouldn’t mean surrendering your data to thousands of companies you’ve never heard of. It’s time to end online behavioral advertising and the
mass surveillance it enables.
Decentralization Reaches a Turning Point: 2024 in review
(Wed, 01 Jan 2025)
The steady rise of decentralized networks this year is transforming social media. Platforms like Mastodon, Bluesky, and Threads are still in their infancy but have already
shown that when users are given options, innovation thrives, resulting in better tools and protections for our rights online. By moving towards a digital landscape that can’t be
monopolized by one big player, we also see broader improvements to network resiliency and user autonomy.
The Steady Rise of Decentralized Networks
Fediverse and Threads
The Fediverse, a wide variety of sites and services most associated with
Mastodon, continued to evolve this year. Meta’s Threads began integrating with the network, marking a groundbreaking shift for the company. Only a few years ago, EFF dreamed of the impact an embrace of interoperability would have on a company that is notorious for building
walled gardens that trap users within its platforms. By allowing Threads users to share their posts with Mastodon and the broader fediverse (and therefore, Bluesky) without leaving their home platform, Meta is introducing millions to the benefits of interoperability. We look
forward to this continued trajectory, and for a day when it is easy to move to or from Threads, and still follow and interact with the same federated community.
Threads’ enormous user base—100 million daily
active users—now dwarfs both Mastodon and Bluesky. Its integration into more open networks is a potential turning point in popularizing the decentralized social web. However, Meta’s poor reputation
on privacy, moderation, and censorship drove many Fediverse instances to preemptively block Threads, and may fragment the network.
We explored how Threads stacks up against Mastodon and
Bluesky, across moderation, user autonomy, and privacy. This development highlights the promise of decentralization, but it also serves as a reminder that corporate
giants may still wield outsized influence over ostensibly open systems.
Bluesky’s Explosive Growth
While Threads dominated in sheer numbers, Bluesky was this year’s breakout star. At the start of the year, Bluesky had fewer
than 200,000 users and was still invite-only. In the last few months of 2024, however, the project experienced over 500% growth in a single month and ultimately reached over 25 million
users.
Unlike Mastodon, which integrates into the Fediverse, Bluesky took a different path, building its own decentralized protocol (AT Protocol) to ensure user data and identities
remain portable and users retain a “credible exit.” This innovation allows users to carry their online communities across platforms seamlessly, sparing them the frustration of rebuilding from scratch. Unlike the Fediverse, Bluesky has prioritized building a drop-in replacement for Twitter, and is still mostly centralized. Bluesky has a growing
arsenal of tools available to users, embracing community creativity and innovation.
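To illustrate what a “credible exit” looks like at the protocol level, the short sketch below resolves a handle to its underlying DID (decentralized identifier), the stable identifier that follows a user across hosts. It assumes the publicly documented com.atproto.identity.resolveHandle XRPC method and the public.api.bsky.app endpoint; the handle shown is only a placeholder.

    # Minimal sketch: resolve an AT Protocol handle to its DID over public XRPC.
    # Assumes the com.atproto.identity.resolveHandle method and the
    # public.api.bsky.app endpoint; the handle below is a placeholder.

    import json
    import urllib.parse
    import urllib.request

    def resolve_handle(handle: str) -> str:
        """Return the DID behind a handle; the DID, not the handle, anchors identity."""
        query = urllib.parse.urlencode({"handle": handle})
        url = ("https://public.api.bsky.app/xrpc/"
               f"com.atproto.identity.resolveHandle?{query}")
        with urllib.request.urlopen(url) as response:
            return json.load(response)["did"]

    if __name__ == "__main__":
        print(resolve_handle("example.bsky.social"))  # e.g. did:plc:...

Because follows and posts reference the DID rather than a particular server, a user who changes hosts can, at least in principle, keep their community intact.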
While Bluesky will be mostly familiar to former Twitter users, we ran through some tips for managing your Bluesky feed, and answered some questions for people just joining the platform.
Competition Matters
Keeping the Internet Weird
The rise of decentralized platforms underscores the critical importance of competition in driving innovation. Platforms like Mastodon and Bluesky thrive because they fill gaps
left by corporate giants, and encourage users to find experiences which work best for them. The traditional social media model puts up barriers so platforms can impose restrictive
policies and prioritize profit over user experience. When the focus shifts to competition and a lack of central control, the internet flourishes.
Whether a user wants the community focus of Mastodon, the global megaphone of Bluesky, or something else entirely, smaller platforms let people build experiences independent of
the motives of larger companies. Decentralized platforms are ultimately most accountable to their users, not advertisers or shareholders.
Making Tech Resilient
This year highlighted the dangers of concentrating too much power in the hands of a few dominant companies. A major global IT outage this summer starkly demonstrated the
fragility of digital monocultures, where a single point of
failure can disrupt entire industries. These failures underscore the importance of decentralization, where networks are designed to distribute risk, ensuring that no single system
compromise can ripple across the globe.
Decentralized projects like Meshtastic, which uses radio waves to provide internet connectivity in disaster scenarios, exemplify the kind of resilient infrastructure we need.
However, even these innovations face threats from private interests. This year, a proposal from NextNav to claim the 900 MHz band for its own use put Meshtastic’s
experimentation—and by extension, the broader potential of decentralized communication—at risk. As we discussed in our FCC comments, such moves illustrate how monopolistic power not only stifles competition but also jeopardizes
resilient tools that could safeguard peoples' connectivity.
Looking Ahead
This year saw meaningful strides toward building a decentralized, creative, and resilient internet for 2025. Interoperability and decentralization will likely continue to
expand. As it does, EFF will be vigilant, watching for threats to decentralized projects and obstacles to the growth of open ecosystems.
This article is part of our Year in Review series. Read other articles about the fight for digital
rights in 2024.
Deepening Government Use of AI and E-Government Transition in Latin America: 2024 in Review
(Wed, 01 Jan 2025)
Policies aimed at fostering digital government processes are gaining traction in Latin America, at local and regional levels. While these initiatives can streamline access to
public services, they can also make them less accessible, less clear, and put people's fundamental rights at risk. As we move forward, we must emphasize transparency and
data privacy guarantees during government digital transition processes.
Regional Approach to Digitalization
In November, the Ninth Ministerial Conference on the Information Society in Latin America and the Caribbean approved the 2026
Digital Agenda for the region (eLAC 2026). This initiative unfolds within the UN Economic Commission for Latin America and the Caribbean (ECLAC), a regional cooperation forum focused
on furthering the economic development of LAC countries.
One of the thematic pillars of eLAC 2026 is the digital transformation of the State, including the digitalization of government processes and services to improve efficiency,
transparency, citizen participation, and accountability. The digital agenda also aims to improve digital identity systems to facilitate access to public services and promote
cross-border digital services in a framework of regional integration. In this context, the agenda points out countries’ willingness to implement policies that foster
information-sharing, ensuring privacy, security, and interoperability in government digital systems, with the goal of using and harnessing data for decision-making, policy design and
governance.
This regional process reflects and feeds country-level initiatives that have also gained steam in Latin America in the last few years. The incentives for government digital
transformation take shape against the backdrop of improving government efficiency. It is critical to qualify what efficiency means in practice. Often “efficiency” has
meant budget cuts or shrinking access to public processes and benefits at the expense of fundamental rights. The promotion of fundamental rights should guide a State’s metrics as to
what is efficient and successful.
As such, while digitalization can play an important role in streamlining access to public services and facilitating the enjoyment of rights, it can also
make it more complex for people to access these same services and generally interact with the State. The most vulnerable are those in greatest need of that interaction working well, and those whose unusual circumstances are often not accommodated by the technology being used. They are also the population most likely to have scarce access to digital technologies and limited digital skills.
In addition, whereas properly integrating digital technologies into government processes and routines carries the potential to enhance transparency and civic participation, this
is not a guaranteed outcome. It requires government willingness and policies oriented to these goals. Otherwise, digitalization can turn into an additional layer of complexity and
distance between citizens and the State. Improving transparency and participation involves conceiving people not only as users of government services, but as participants in the
design and implementation of public policies, including those related to States’ digital transition.
Digital identity and data-interoperability systems are generally treated as a natural part of government digitalization plans. Yet they should be adopted with care. As we have highlighted, effective and robust data privacy safeguards do not
necessarily come along with states’ investments in implementing these systems, despite the fact they can be expanded into a potential regime of unprecedented data tracking. Among
other recommendations and redlines, it’s crucial to support each person’s right to choose to
continue using physical documentation instead of going digital.
This set of concerns stresses the importance of having an underlying institutional and normative structure to uphold fundamental rights within digital transition processes. Such
a structure involves solid transparency and data privacy guarantees backed by equipped and empowered oversight authorities. Still, States often neglect the crucial role of that
combination. In 2024, Mexico brought us a notorious example of that. Even as the new Mexican government took steps to advance the country’s digital transformation, it also moved
forward to close key independent oversight authorities, like the National Institute for Transparency, Access to Information and Personal Data Protection
(INAI).
Digitalization and Government Use of Algorithmic Systems for Rights-Affecting Purposes
AI strategies approved in different Latin American countries show how fostering government use of AI is an important lever in national AI plans and a component of government
digitalization processes.
In October
2024, Costa Rica was the first Central American country to launch an AI strategy. One of the strategic axes, named “Smart Government,” focuses on promoting the use of AI in the public sector. The document highlights that by incorporating emerging technologies in public administration, it
named as "Smart Government", focuses on promoting the use of AI in the public sector. The document highlights that by incorporating emerging technologies in public administration, it
will be possible to optimize decision making and automate bureaucratic tasks. It also envisions the provision of personalized services to citizens, according to their specific needs.
This process includes not only the automation of public services, but also the creation of smart platforms to allow a more direct interaction between citizens and
government.
In turn, Brazil has updated its AI strategy and
published in July the
AI Plan 2024-2028. One of the axes focuses on the use of AI to improve public services. The Brazilian plan also envisions personalizing public services
by offering citizens content that is contextual, targeted, and proactive. It involves state data infrastructures and the implementation of data interoperability among government
institutions. Some of the AI-based projects proposed in the plan include developing early detection of neurodegenerative diseases and a "predict and protect" system to assess the
school or university trajectory of students.
Each of these actions may have potential benefits, but also come with major challenges and risks to human rights. These involve the massive amount of personal data, including
sensitive data, that those systems may process and cross-reference to provide personalized services, potential biases and disproportionate data processing in risk assessment systems,
as well as incentives towards a problematic assumption that automation can replace human-to-human interaction between governments and their population. Choices about how to collect
data and which technologies to adopt are ultimately political, although they are generally treated as technical and distant from political discussion.
An important basic step relates to government transparency about the AI systems either in use by public institutions or part of pilot programs. Transparency that, at a minimum,
should range from actively informing people that these systems exist, with critical details on their design and operation, to qualified information and indicators about their results
and impacts.
Despite the increasing adoption of algorithmic systems by public bodies in Latin America (for instance, 2023 research mapped 113 government ADM systems in use in Colombia), robust transparency initiatives are only in their infancy. Chile stands out in that regard with its repository of public algorithms, while Brazil
launched the Brazilian AI Observatory (OBIA) in 2024. Similar to the regional ILIA (Latin American Artificial Intelligence Index), OBIA features meaningful data to measure the state of adoption and development of
AI systems in Brazil but still doesn't contain detailed information about AI-based systems in use by government entities.
The most challenging and controversial application from a human-rights and accountability standpoint is government use of AI in security-related activities.
Government surveillance and emerging technologies
During 2024, Argentina's new administration, under President Javier Milei, passed a
set of acts regulating its police forces’ cyber and AI surveillance capacities. One of them,
issued in May, stipulates how police forces must conduct "cyberpatrolling",
or Open-Source Intelligence (OSINT), for preventing crimes. OSINT activities do not
necessarily entail the use of AI, but have increasingly integrated
AI models as they facilitate the analysis of huge amounts of data. While OSINT has important and legitimate uses, including for investigative journalism, its application for
government surveillance purposes has raised many concerns and led to
abuses.
Another regulation issued in July created the "Unit of Artificial
Intelligence Applied to Security" (UIAAS). The powers of the new agency include “patrolling open social networks, applications and Internet sites” as well as “using machine learning
algorithms to analyze historical crime data and thus predict future crimes”. Civil society organizations in Argentina, such as Observatorio de Derecho Informático
Argentino, Fundación Vía Libre, and Access Now, have
gone to courts to enforce their right to access information about the new unit created.
The persistent opacity of, and lack of effective remedies for, abuses in government use of digital surveillance technologies in the region prompted action from the Special Rapporteur
for Freedom of Expression of the Inter-American Commission on Human Rights (IACHR). The Office of the Special Rapporteur carried out a consultation to receive inputs about
digital-powered surveillance abuses, the state of digital surveillance legislation, the reach of the private surveillance market in the region, transparency and accountability
challenges, as well as gaps and best-practice recommendations. EFF has joined expert interviews and submitted comments in the consultation
process. The final report will be published next year with important analysis and recommendations.
Moving Forward: Building on Inter-American Human Rights Standards for a Proper Government Use of AI in Latin America
Considering this broader context of challenges, we launched a comprehensive report on the application of Inter-American Human Rights
Standards to government use of algorithmic systems for rights-based determinations. Delving into Inter-American Court decisions and IACHR reports, we provide guidance on what state
institutions must consider when assessing whether and how to deploy AI and ADM systems for determinations potentially
affecting people's rights.
We detailed what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explained
why this adoption must meet necessary and proportionate principles, and what this
entails. We highlighted what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment. We elaborated
on human rights implications building off key rights enshrined in the American Convention on Human Rights and the Protocol of San Salvador, setting up an operational framework for
their due application.
Based on the report, we have connected to oversight institutions, joining trainings for public prosecutors in Mexico and strengthening ties with the Public Defender's Office in
the state of São Paulo, Brazil. Our goal is to provide inputs for their adequate adoption of AI/ADM systems and for fulfilling their role as public interest entities regarding
government use of algorithmic systems more broadly.
Enhancing public oversight of state deployment of rights-affecting technologies in a context of marked government digitalization is essential for democratic policy making and
human-rights aligned government action. Civil society also plays a critical role, and we will keep working to raise awareness about potential impacts, pushing for rights to be
fortified, not eroded, throughout the way.
This article is part of our Year in Review series. Read other articles about the fight for digital rights
in 2024.
Kids Online Safety Act Continues to Threaten Our Rights Online: 2024 in Review
(Wed, 01 Jan 2025)
At times this year, it seemed that Congress was going to give up its duty to protect our rights online—particularly when the Senate passed the dangerous
Kids Online Safety Act (KOSA) by a large majority in July. But this legislation, which would chill protected speech and almost certainly result in privacy-invasive age verification
requirements for many users to access social media sites, did not pass the House this year, thanks to strong opposition from EFF supporters and others.
KOSA, first introduced in 2022, would allow the Federal
Trade Commission to sue apps and websites that don’t take measures to restrict young people’s access to content. Congress introduced a number of versions of the bill this year, and
we analyzed each of them. Unfortunately, the threat of this
legislation still looms over us as we head into 2025, especially now that the bill has passed the Senate. And just a few weeks
ago, its authors introduced an amended version to respond to criticisms from some House members.
Despite its many amendments in 2024, we continue to oppose KOSA. No matter which version becomes final, the bill will lead to broad online censorship of
lawful speech, including content designed to help children navigate and overcome the very same harms it identifies.
Here’s how, and why, we worked to stop KOSA this year, and where the fight stands now.
New Versions, Same Problems
The biggest problem with KOSA is in its vague “duty of care” requirements. Imposing a duty of care on a broad swath of online services, and requiring them
to mitigate specific harms based on the content of online speech, will result in those services imposing age verification and content restrictions. We’ve been critical of KOSA for
this reason since it was
introduced in 2022.
In February, KOSA's authors in the Senate released an amended version
of the bill, in part as a response to criticisms from EFF and other groups. The updates changed how KOSA regulates design elements of online services and
removed some enforcement mechanisms, but didn’t significantly change the duty of care, or the bill’s main effects. The updated version of KOSA would still create a censorship regime
that would harm a large number of minors who have First Amendment rights to access lawful speech online, and force users of all ages to verify their identities to access that same
speech, as we wrote at the
time. KOSA’s requirements are comparable to cases in which the government tried to prevent booksellers from disseminating certain books;
those attempts were found unconstitutional.
Kids Speak Out
The young people who KOSA supporters claim they’re trying to help have spoken up about the bill. In March, we published the results of a survey of young
people who gave detailed reasons for their opposition to the bill. Thousands told us how
beneficial access to social media platforms has been for them, and why they feared KOSA’s censorship. Too often we’re not hearing from minors in these debates at
all—but we should be, because they will be most heavily impacted if KOSA becomes law.
Young people told us that KOSA would negatively impact their artistic education, their ability to find community online, their opportunity for
self-discovery, and the ways that they learn accurate news and other information. To sample just a few of the comments: Alan, a fifteen-year-old, wrote,
I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely
different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand
for!
More Recent Changes To KOSA Haven’t Made It Better
In May, the U.S. House introduced a
companion version to the Senate bill. This House version modified the bill around the edges, but failed to resolve its fundamental censorship
problems. The primary difference in the House version was to create tiers that change how the law would apply to a company, depending on its size.
These are insignificant changes, given that most online speech happens on just a handful of the biggest platforms. Those platforms—including Meta, Snap, X,
WhatsApp, and TikTok—will still have to uphold the duty of care and would be held to the strictest knowledge standard.
The other major shift was to update the definition of “compulsive usage” by suggesting it be linked to the Diagnostic and Statistical Manual of Mental
Disorders, or DSM. But simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental
health disorders.
KOSA Passes the Senate
KOSA passed through the Senate
in July, though legislators on both sides of the aisle remain
critical of the bill.
A version of KOSA introduced in September tinkered with the bill again but did not change the censorship requirements. This version replaced language about
anxiety and depression with a requirement that apps and websites prevent “serious emotional disturbance.”
In December, the Senate released yet another version of the
bill—this one written with the assistance of X CEO Linda Yaccarino.
This version includes a throwaway line about protecting the viewpoint of users as long as those viewpoints are “protected by the First Amendment to the Constitution of the United
States.” But user viewpoints were never threatened by KOSA; rather, the bill has always threatened the hosts of user speech—and it still does.
KOSA would allow the FTC to exert control over online speech, and there’s no reason to think the incoming FTC won’t use that power. The nominee for FTC
Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has promised to protect free speech by “fighting back against the trans agenda,” among other things. KOSA
would give the FTC under this or any future administration wide latitude to decide what sort of content should be restricted because it views that content as harmful to kids. And even if it’s
never even enforced, just passing KOSA would likely result in platforms taking down protected speech.
If KOSA passes, we’re also concerned that it would lead to mandatory age verification on apps and websites. Such requirements have their own serious privacy
problems; you can read more about our efforts this year to oppose mandatory online ID in the U.S. and internationally.
EFF thanks our supporters, who have sent nearly 50,000 messages to Congress on this topic, for helping us oppose KOSA this year. In 2025, we will continue
to rally to protect privacy and free speech online.
This article is part of our Year in Review series. Read other articles about the
fight for digital rights in 2024.
AI and Policing: 2024 in Review
(Tue, 31 Dec 2024)
There’s no part of your life now where you can avoid the onslaught of “artificial intelligence.” Whether you’re trying to search for a
recipe and sifting through AI-made summaries or
listening to your cousin talk about how they’ve fired their doctor and replaced them with a chatbot, it seems now, more than ever, that AI is the solution to every problem. But, in
the meantime, some people are getting hideously rich by convincing people with money and influence that they must integrate AI into their business or operations.
Enter law enforcement.
When many tech vendors see police, they see dollar signs. Law enforcement’s got deep pockets. They are under political pressure to address crime. They are eager to find
that one magic bullet that finally might do away with crime for good. All of this combines to make them a perfect
customer for whatever way technology companies can package machine-learning algorithms that sift through historical data in order to do recognition, analytics, or
predictions.
AI in policing can take many forms that we can trace back decades–including various forms of face recognition, predictive policing, data analytics, automated gunshot
recognition, etc. But this year has seen the rise of a new and troublesome development in the integration between policing and artificial intelligence: AI-generated police
reports.
Egged on by companies like Truleo
and Axon,
there is a rapidly-growing market for vendors that use a large language model to write police reports for officers. In the case of Axon, this is done by using the audio from
police body-worn cameras to create narrative reports with minimal officer input except for a
prompt to add a few details here and there.
We wrote about what can go wrong when towns
start letting their police write reports using AI. First and foremost, no matter how many boxes police check to say they are responsible for the content of the report, when cross-examination reveals lies in a police report, officers will now have the veneer of plausible deniability by saying, “the AI wrote that part.” After all, we’ve all heard of AI hallucinations at this point, right? And don’t we all just click through terms of service without reading them carefully?
And there are so many more questions we have. Translation is an art, not a science, so how and why will this AI understand and depict things like physical conflict or important
rhetorical tools of policing like the phrases, “stop resisting” and “drop the weapon,” even if a person is unarmed or is not resisting? How well does it understand sarcasm? Slang?
Regional dialect? Languages other than English? Even if the tool was not explicitly made to handle these situations, officers left to their own devices will use it for any and all
reports.
Prosecutors in Washington have even asked
police not to use AI to write police reports (for now) out of fear that errors might jeopardize trials.
Countless movies and TV shows have depicted police hating paperwork, and if these pop culture representations are any indicator, we should expect this technology to spread rapidly in 2025. That’s why EFF is monitoring its spread closely and will provide more information as we continue to learn how it’s being used.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
Fighting Online ID Mandates: 2024 In Review
(Tue, 31 Dec 2024)
This year, nearly half of U.S. states passed laws imposing age verification requirements on online platforms. EFF has opposed these efforts because they censor the internet and burden
access to online speech. Though age verification mandates are often touted as “online safety” measures for kids, the laws actually do more harm than good. They undermine the
fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security.
Age verification bills generally require online services to verify all users’ ages—often through invasive tools like ID checks, biometric scans, and other dubious “age estimation”
methods—before granting them access to certain online content or services. Some state bills mandate the age verification explicitly, including Texas’s H.B. 1181, Florida’s H.B. 3,
and Indiana’s S.B. 17. Other state bills claim not to require age verification, but still
threaten platforms with liability for showing certain content or features to minor users. These bills—including Mississippi’s H.B. 1126, Ohio’s
Parental Notification by Social Media Operators Act, and the federal Kids Online Safety Act—raise the question: how are platforms to know which users
are minors without imposing age verification?
EFF’s answer: they can’t. We call these bills “implicit age verification mandates” because, though they might expressly deny requiring age verification, they still force platforms to
either impose age verification measures or, worse, censor whatever content or features are deemed “harmful to minors” for all users—not just young people—in order to avoid liability.
Age verification requirements are the wrong approach to protecting young people online. No one should have to hand over their most sensitive personal information or submit to invasive biometric
surveillance just to access lawful online speech.
EFF’s Work Opposing State Age Verification Bills
Last year, we saw a slew of dangerous social media regulations for young people introduced across the country. This year, the flood of ill-advised bills grew larger. As of December 2024, nearly every U.S. state legislature has introduced at least one age verification bill, and nearly half the
states have passed at least one of these proposals into law.
Courts agree with our position on age verification mandates. Across the country, courts have repeatedly and consistently held these so-called “child safety” bills unconstitutional,
confirming that it is nearly impossible to impose online age-verification requirements without violating internet users’ First Amendment rights. In 2024, federal district courts in
Ohio, Indiana, Utah, and Mississippi enjoined those states’ age
verification mandates. The decisions underscore how these laws, in addition to being unconstitutional, are also bad policy. Instead of seeking to censor the internet or block young
people from it, lawmakers seeking to help young people should focus on advancing legislation that solves the most pressing privacy and competition problems for all
users—without restricting their speech.
Here’s a quick review of EFF’s work this year to fend off state age verification mandates and protect digital rights in the face of this legislative onslaught.
California
In January, we submitted public comments opposing an especially vague
and poorly written proposal: California Ballot Initiative 23-0035, which would allow plaintiffs to sue online information providers for damages of up to $1 million if they violate
their “responsibility of ordinary care and skill to a child.” We pointed out that this initiative’s vague standard, combined with extraordinarily large statutory damages, will
severely limit access to important online discussions for both minors and adults, and cause platforms to censor user content and impose mandatory age verification in order to avoid
this legal risk. Thankfully, this measure did not make it onto the 2024 ballot.
In February, we filed a friend-of-the-court brief arguing that
California’s Age Appropriate Design Code (AADC) violated the First Amendment. Our brief asked the Ninth Circuit Court of Appeals to rule narrowly that the AADC’s age estimation scheme
and vague description of “harmful content” render the entire law unconstitutional, even though the bill also contained several privacy provisions that, stripped of the
unconstitutional censorship provisions, could otherwise survive. In its decision in August, the Ninth Circuit confirmed that parts of the AADC likely
violate the First Amendment and provided a helpful roadmap to legislatures for how to write privacy-first laws that can survive constitutional challenges. However, the court missed an
opportunity to strike down the AADC’s age-verification provision specifically.
Later in the year, we also filed a letter to California lawmakers
opposing A.B. 3080, a proposed state bill that would have required internet users to show their ID in order to look at sexually explicit content. Our letter explained that bills that
allow politicians to define what “sexually explicit” content is and enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. We
declared victory in September when the bill failed
to get passed by the legislature.
New York
Similarly, after New York passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act earlier this year, we filed comments urging the state attorney general (who is responsible
for writing the rules to implement the bill) to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. We also noted that none of the many methods of age verification listed in the attorney general’s call for comments is both privacy-protective and entirely accurate, as various experts have reported.
Texas
We also took the fight to Texas, which passed a law requiring all Texas internet users, including adults, to submit to invasive age verification measures on every website deemed by
the state to be at least one-third composed of sexual material. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take
effect—creating a split among federal circuit courts on the constitutionality of age verification mandates. In May, we filed an amicus brief urging the U.S. Supreme Court to grant review of the
Fifth Circuit’s decision and to ultimately overturn the Texas law on First Amendment grounds.
In September, after the Supreme Court accepted the Texas case, we filed another amicus brief on the merits. We pointed out that the Fifth
Circuit’s flawed ruling diverged from decades of legal precedent recognizing, correctly, that online ID mandates impose greater burdens on our First Amendment rights than in-person
age checks. We explained that there is nothing about this Texas law or advances in technology that would lessen the harms that online age verification mandates impose on adults
wishing to exercise their constitutional rights. The Supreme Court has set this case, Free Speech Coalition v. Paxton, for oral argument in February 2025.
Mississippi
Finally, we supported the First Amendment challenge to Mississippi’s age verification mandate, H.B. 1126, by filing
amicus briefs both in the federal district court and on appeal to the Fifth Circuit. Mississippi’s extraordinarily broad law requires social media services to verify the ages of all
users, to obtain parental consent for any minor users, and to block minor users from exposure to materials deemed “harmful” by state officials.
In our June brief for the district court, we once again
explained that online age verification laws are fundamentally different and more burdensome than laws requiring adults to show their IDs in physical spaces, and impose significant
barriers on adults’ ability to access lawful speech online. The district court agreed with us, issuing a decision that enjoined the Mississippi law and
heavily cited our amicus brief.
Upon Mississippi’s appeal to the Fifth Circuit, we filed another amicus brief—this time highlighting H.B. 1126’s dangerous impact on young
people’s free expression. After all, minors enjoy the same First Amendment right as adults to access and engage in protected speech online, and online spaces are diverse and important
spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics—and seek critical
resources and support for the very same harms these bills claim to address. In our brief, we urged
the court to recognize that age-verification regimes like Mississippi’s place unnecessary and unconstitutional barriers between young people and these online spaces that they rely on
for vibrant self-expression and crucial support.
Looking Ahead
As 2024 comes to a close, the fight against online age verification is far from over. As the state laws continue to proliferate, so too do the legal challenges—several of which are
already on file.
EFF’s work continues, too. As we move forward in state legislatures and courts, at the federal level here in the United States, and all over the world, we will continue to advocate
for policies that protect the free speech, privacy, and security of all users—adults and young people alike. And, with your help, we will continue to fight for the future of the open
internet, ensuring that all users—especially the youth—can access the digital world without fear of surveillance or unnecessary restrictions.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
Federal Regulators Limit Location Brokers from Selling Your Whereabouts: 2024 in Review
(Tue, 31 Dec 2024)
The opening and closing months of 2024 saw federal enforcement against a number of location data
brokers that track and sell users’ whereabouts through apps installed on their smartphones. In January, the Federal Trade Commission brought successful enforcement actions against
X-Mode Social and InMarket, banning the
companies from selling precise location data—a first prohibition of this kind for the FTC. And in December, the FTC widened its net to two additional companies—Gravy
Analytics (Venntel) and Mobilewalla—barring
them from selling or disclosing location data on users visiting sensitive areas such as reproductive health clinics or places of worship. In previous years, the FTC has
sued location brokers such as Kochava, but the invasive practices of these companies have only gotten worse. Seeing the
federal government ramp up enforcement is a welcome development for 2024.
As regulators have clearly stated, location information is sensitive personal
information. Companies can glean location information from your smartphone in a number of ways. Some data brokers provide Software Development Kits (SDKs) that, once embedded in an app, instruct it to send back troves of sensitive information under the banner of analytics or debugging. The data brokers may offer market insights or financial incentives for app developers to include their SDKs.
Other companies will not ask apps to directly include their SDKs, but will participate in Real-Time Bidding (RTB) auctions, placing bids for ad-space on devices in locations they
specify. Even if they lose the auction, they can glean valuable device location information just by participating. Often, apps will ask for permissions such as location data for
legitimate reasons aligned with the purpose of the app: for example, a price comparison app might use your whereabouts to show you the cheapest vendor of a product you’re interested
in for your area. What you aren’t told is that your location is also shared with companies tracking you.
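To make the RTB point concrete, here is a minimal, hypothetical sketch in Python of the kind of bid request an ad exchange broadcasts to bidders. The field names are loosely modeled on the OpenRTB convention and are illustrative rather than taken from any particular exchange; the important detail is that every participating bidder receives the device’s location and advertising ID before any winner is chosen.

```python
import json

# A simplified, hypothetical ad-exchange bid request, loosely modeled on
# OpenRTB. Real requests carry many more fields, but the privacy problem is
# visible even in this stripped-down version: the device's precise location
# and advertising identifier ride along in the request itself.
bid_request = {
    "id": "auction-1234",
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],  # the ad slot for sale
    "app": {"bundle": "com.example.pricecompare"},         # the app showing the ad
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",     # advertising ID
        "geo": {"lat": 37.7793, "lon": -122.4193},          # precise location
    },
}

def broadcast(request: dict, bidders: list[str]) -> None:
    """Send the same request, location included, to every bidder."""
    payload = json.dumps(request)
    for bidder in bidders:
        # Win or lose the auction, each bidder has now seen where this
        # device was at this moment, and nothing stops it from logging that.
        print(f"sending to {bidder}: {payload}")

broadcast(bid_request, ["bidder-a.example", "bidder-b.example", "location-broker.example"])
```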
A number of revelations this year gave us better insight into how the location data broker industry works, revealing the inner workings of powerful tools such as Locate X, which allows even those merely claiming they might work with law enforcement at some point in the future to access troves of mobile location data across the planet. The mobile location tracking company FOG Data Science, which EFF revealed in 2022 to be selling troves of information to local police, was this year also found to be soliciting law enforcement for information on the doctors of suspects in order to track them
via their doctor visits.
A number of revelations this year gave us better insight into how the location data broker industry works
EFF detailed how these tools can be stymied via technical means, such as changing a few
key settings on your mobile device to disallow data brokers from linking your location across space and time. We further outlined legislative avenues to ensure structural
safeguards are put in place to protect us all from an out-of-control predatory data industry.
In addition to FTC action, the Consumer Financial Protection Bureau proposed a new
rule meant to crack down on the data broker industry. As the CFPB mentioned, data brokers compile highly sensitive information—like information about a consumer's
finances, the apps they use, and their location throughout the day. The rule would include stronger consent requirements and protections for personal data that has been purportedly
de-identified. Given the abuses the announcement cites, including the distribution and sale of “detailed personal information about military service members, veterans, government
employees, and other Americans,” we hope to see adoption and enforcement of this proposed rule in 2025.
This year has seen a strong regulatory appetite to protect consumers from harms which in bygone years would have seemed unimaginable: detailed records on the movements of nearly
everyone, packaged and made available for pennies. We hope 2025 continues this appetite to address the dangers of location data brokers.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
Exposing Surveillance at the U.S.-Mexico Border: 2024 Year in Review in Pictures
(Mon, 30 Dec 2024)
Some of the most picturesque landscapes in the United States can be found along the border with Mexico. Yet, from San Diego’s beaches to the Sonoran Desert,
from Big Bend National Park to the Boca Chica wetlands, we see vistas marred by the sinister spread of surveillance
technology, courtesy of the federal government.
EFF refuses to let this blight grow without documenting it, exposing it, and finding ways to fight back alongside the communities that live in the shadow of
this technological threat to human rights.
Here’s a gallery of images representing our work and the new developments we’ve discovered in border surveillance in 2024.
1. Mapping Border Surveillance
EFF’s stand-up display of surveillance at the US-Mexico border. Source: EFF
EFF published the first iteration of our map of surveillance
towers at the U.S.-Mexico border in Spring 2023, having pinpointed the precise location of 290 towers, a fraction of what we knew might be out
there. A year and a half later, with the help of local residents, researchers, and search-and-rescue groups, our map now includes more than 500 towers.
In many cases, the towers are brand new, with some going up as recently as this fall. We’ve also added the
location of surveillance aerostats, checkpoint license plate readers, and face recognition at land ports of entry.
In addition to our online map, we also created a 10’ x 7’ display that we debuted at “Regardless of Frontiers: The First
Amendment and the Exchange of Ideas Across Borders,” a symposium held by the Knight First Amendment Institute at Columbia University in October.
If your institution would be interested in hosting it, please email us at aos@eff.org.
2. Infrastructures of Control
The Infrastructures of Control exhibit at University of Arizona. Source: EFF
Two University of Arizona geographers—Colter Thomas and Dugan Meyer—used our map to explore the border, driving on dirt roads and hiking in the desert, to
document the infrastructure that comprises the so-called “virtual wall.” The result: “Infrastructures of Control,” a photography exhibit in April at the University of Arizona that also included a near-actual size replica of
an “autonomous surveillance tower.”
You can read our interview with Thomas and Meyer here.
3. An Old Tower, a New Lease in Calexico
A remote video surveillance system in Calexico, Calif. Source: EFF
Way back in 2000, the Immigration and Naturalization Service—which oversaw border security prior to the creation of Customs and Border Protection (CBP)
within the Department of Homeland Security (DHS)—leased a small square of land in a public park in Calexico, Calif., where it then installed one of the earliest border surveillance towers. The lease lapsed in 2020, and with plans for a massive surveillance upgrade looming, CBP rushed to try to renew the lease this year.
This was especially concerning because of CBP’s new strategy of combining artificial intelligence with border camera feeds. So EFF teamed up
with the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition
to try to convince the Calexico City Council to either reject the lease or demand that CBP enact better privacy protections for residents in the neighboring community and children
playing in Nosotros Park. Unfortunately, local politics were not in our
favor. However, resisting border surveillance is a long game, and EFF considers it a victory that this tower even got a public debate at
all.
4. Aerostats Up in the Air
The Tactical Aerostat System at Santa Teresa Station. Source: Battalion Search and Rescue (CC BY)
CBP seems incapable of developing a coherent strategy when it comes to tactical aerostats—tethered blimps equipped with long-range, high-definition cameras.
In 2021, the agency
said it wanted to cancel the program, which involved four aerostats in the Rio Grande Valley, before reversing itself. Then in 2022, CBP
launched new aerostats in
Nogales, Ariz. and Columbus, N.M. and announced plans to launch 17 more within a year.
But by 2023, CBP had left the program out of its proposed budget,
saying the aerostats would be decommissioned.
And yet, in fall 2024, CBP launched a new aerostat at the Santa Teresa Border Patrol Station in New Mexico. Our friends at Battalion Search &
Rescue gathered photo evidence for us. Soon after, CBP issued a new solicitation for the aerostat program and a member of
Congress told Border Report that the aerostats
may be upgraded and as many as 12 new ones may be acquired by CBP via the Department of Defense.
Meanwhile, one of CBP’s larger Tethered Aerostat Radar Systems in Eagle Pass, Texas, was down for most of the year after deflating in high
winds. CBP has reportedly not been interested in paying hundreds of thousands of dollars to get it up
again.
5. New Surveillance in Southern Arizona
A Buckeye Camera on a pole along the border fence near Sasabe, Ariz. Source: EFF
Buckeye Cameras are motion-triggered cameras that were originally designed for hunters and ranchers to spot
wildlife, but border enforcement authorities—both federal and state/local—realized years ago that they could be used to photograph people crossing the border. These cameras are often
camouflaged (e.g. hidden in trees, disguised as garbage, or coated in sand).
Now, CBP is expanding its use of Buckeye Cameras. During a trip to Sasabe, Ariz., we discovered that CBP is now placing Buckeye Cameras in checkpoints,
welding them to the border fence, and installing metal poles, wrapped in concertina wire, with Buckeye Cameras at the top.
A surveillance tower along the highway west of Tucson. Source: EFF
On that same trip to Southern Arizona, EFF (along with the Infrastructures of Control geographers) passed through a checkpoint west of Tucson, where previously we had identified a relocatable surveillance tower. But this time it
was gone. Why, we wondered? Our question was answered just a minute or two later, when we spotted a new surveillance tower on a nearby hilltop, a model we had not previously seen deployed in the wild.
6. Artificial Intelligence
A graphic from a January 2024 “Industry Day” event. Source: Customs & Border Protection
CBP and other agencies regularly hold “Industry Days” to brief contractors on the new technology and capabilities the agency may want to buy in the near
future. In January, EFF attended one such “Industry Day” designed to bring tech vendors up-to-speed on the government’s horrific vision of a border secured by artificial
intelligence (see the graphic above for an example of that vision).
A graphic from a January 2024 “Industry Day” event. Source: Customs & Border Protection
At this event, CBP released the convoluted flow chart above as part of a slide show. Since it’s so difficult to parse, here’s the best sense we can make of it: when someone crosses the border, they trigger an unattended ground sensor (UGS), and then a camera autonomously detects, identifies, classifies, and tracks the person, handing them off from camera to camera, until the AI system eventually alerts Border Patrol to dispatch someone to intercept them for detention.
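To make our reading of the chart easier to follow, here is a rough, hypothetical sketch of that pipeline in Python. The function names and data structures are ours, invented purely for illustration; they do not describe any actual CBP software.

```python
from dataclasses import dataclass

# A rough, hypothetical sketch of the pipeline as we read the chart.
# Names and data structures are invented for illustration only.

@dataclass
class Track:
    track_id: int
    classification: str          # e.g. "person", "vehicle", "animal"
    camera_handoffs: list[str]   # cameras that have tracked the target so far

def ground_sensor_triggered(vibration: float, threshold: float = 0.5) -> bool:
    """Step 1: an unattended ground sensor (UGS) detects movement."""
    return vibration > threshold

def autonomous_camera_track(track_id: int) -> Track:
    """Step 2: cameras detect, identify, classify, and track the target,
    handing the track from camera to camera as it moves."""
    return Track(track_id, "person", ["tower-12", "tower-13", "tower-14"])

def alert_border_patrol(track: Track) -> None:
    """Step 3: the system alerts Border Patrol to dispatch agents to
    intercept the tracked person."""
    print(f"Dispatch to track {track.track_id} ({track.classification}), "
          f"last seen by {track.camera_handoffs[-1]}")

if ground_sensor_triggered(vibration=0.9):
    alert_border_patrol(autonomous_camera_track(track_id=1))
```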
7. Congress in Virtual Reality
Rep. Scott Peters on our VR tour of the border. Source: Peters’ Instagram
We search for surveillance on the ground. We search for it in public records. We search for it in satellite imagery. But we’ve also learned we can use virtual reality
in combination with Google Street View not only to investigate surveillance, but also to introduce policymakers to the realities of the policies they
pass. This year, we gave Rep. Scott Peters (D-San Diego) and his team a tour of surveillance at the border in VR, highlighting the impact on communities.
“[EFF] reminded me of the importance of considering cost-effectiveness and Americans’ privacy rights,” Peters wrote afterward in a social media post.
We also took members of Rep. Mark Amodei’s (R-Reno) district staff on a similar tour. Other Congressional staffers should contact us at aos@eff.org
if you’d like to try it out.
Learn more about how EFF uses VR to research the border in this interview and this lightning talk.
8. Indexing Border Tech Companies
An HDT Global vehicle at the 2024 Border Security Expo. Source: Dugan Meyer (CC0 1.0 Universal)
In partnership with the Heinrich Böll Foundation, EFF and University of Nevada, Reno student journalist Andrew Zuker built a dataset of hundreds of vendors marketing technology to the U.S. Department of Homeland Security. As part of this research, Zuker journeyed to El Paso, Texas for the Border Security Expo, where he systematically
gathered information from all the companies promoting their surveillance tools. You can read Zuker’s firsthand report here.
9. Plataforma Centinela Inches Skyward
An Escorpión unit, part of the state of Chihuahua’s Plataforma Centinela project. Source: EFF
In fall 2023, EFF released its report
on the Plataforma Centinela, a massive surveillance network being built by the Mexican state of Chihuahua in Ciudad Juarez that will include 10,000+ cameras, face
recognition, artificial intelligence, and tablets that police can use to access all this data from the field. At its center is the Torre Centinela, a 20-story headquarters that was
supposed to be completed in
2024.
The site of the Torre Centinela in downtown Ciudad Juarez. Source: EFF
We visited Ciudad Juarez in May 2024 and saw that indeed, new cameras had been installed along roadways, and the government had begun using “Escorpión” mobile
surveillance units, but the tower was far from being completed. A reporter who visited in November confirmed that not much more progress had been made, although
officials claim
that the system will be fully operational in 2025.
10. EFF’s Border Surveillance Zine
Do you want to review even more photos of surveillance that can be found at the border, whether they’re planted in the ground, installed by the side of the road, or
floating in the air? Download EFF’s new zine in English or Spanish—or if you live or work in the border region, email us at aos@eff.org and we’ll mail you hard copies.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
Fighting Automated Oppression: 2024 in Review
(Mon, 30 Dec 2024)
EFF has been sounding the alarm on algorithmic decision making (ADM) technologies for years. ADMs use data and predefined rules or models to make or support decisions, often
with minimal human involvement, and in 2024 the field was more active than ever, with landlords, employers, regulators, and police adopting new tools that have the
potential to impact both personal freedom and access to necessities like medicine and housing.
This year, we wrote detailed reports and comments to US
and international governments explaining that ADM
poses a high risk of harming human rights, especially with regard to issues of fairness and due process. Machine learning algorithms that enable ADM in complex contexts attempt to
reproduce the patterns they discern in an existing dataset. If you train one on a biased dataset, such as records of whom the police have arrested or who historically gets approved for
health coverage, then you are creating a technology to automate systemic, historical injustice. And because these technologies don’t (and typically
can’t) explain their reasoning, challenging their outputs is very difficult.
If you train it on a biased dataset, you are creating a technology to automate systemic, historical injustice.
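As a minimal illustration of that dynamic, the toy model below “learns” historical arrest rates per neighborhood (from made-up numbers, not any real policing data) and then reports them back as “risk scores.” Whatever bias shaped the historical records is reproduced as an output that looks objective.

```python
from collections import Counter

# Hypothetical historical records: (neighborhood, was_arrested).
# The skew here is invented for illustration; it stands in for decades of
# over-policing in one neighborhood rather than any real difference in
# behavior between residents.
historical_records = (
    [("Northside", True)] * 80 + [("Northside", False)] * 20 +
    [("Southside", True)] * 20 + [("Southside", False)] * 80
)

def train(records):
    """'Learn' the arrest rate per neighborhood from past records."""
    arrests, totals = Counter(), Counter()
    for neighborhood, arrested in records:
        totals[neighborhood] += 1
        arrests[neighborhood] += arrested
    return {n: arrests[n] / totals[n] for n in totals}

def predicted_risk(model, neighborhood):
    """The 'risk score' is nothing more than the historical pattern."""
    return model[neighborhood]

model = train(historical_records)
for n in ("Northside", "Southside"):
    print(f"{n}: predicted risk {predicted_risk(model, n):.0%}")
# Northside scores 80%, Southside 20%: the bias in the training data comes
# back out dressed up as an automated, "neutral" decision.
```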
It’s important to note that decision makers tend to defer to ADMs or use them as cover to justify their own biases. And even though they are implemented to change how decisions
are made by government officials, the adoption of an ADM is often considered a mere ‘procurement’ decision like buying a new printer, without the kind of public involvement that a
rule change would ordinarily entail. This, of course, increases the likelihood that vulnerable members of the public will be harmed and that technologies will be adopted without
meaningful vetting. While there may be positive use cases for machine learning to analyze government processes and phenomena in the world, making decisions about people is one of the
worst applications of this technology, one that entrenches existing injustice and creates new, hard-to-discover errors that can ruin lives.
Vendors of ADM have been riding a wave of AI hype, and police, border authorities, and spy agencies have gleefully thrown taxpayer money at products that make it harder to hold
them accountable while being unproven at offering any other ‘benefit.’ We’ve written about the use of generative AI to write police reports based on the audio from bodycam footage,
flagged how national security use of
AI is a threat to transparency, and called for an end to AI Use in Immigration Decisions.
The hype around AI and the allure of ADMs has further incentivized the collection of more and more user data.
The private sector is also deploying ADM to make decisions about people’s access to employment, housing, medicine, and more. People have an intuitive understanding of some of
the risks this poses, with most Americans expressing
discomfort about the use of AI in these contexts. Companies can make a quick buck firing people and demanding the remaining workers figure out how to implement
snake-oil ADM tools to make these decisions faster, though it’s becoming increasingly clear that this isn’t delivering the promised productivity gains.
ADM can, however, help a company avoid being caught making discriminatory decisions that violate civil rights laws—one reason why we support
mechanisms to prevent unlawful private discrimination using ADM. Finally, the hype around AI and the allure of ADMs has further incentivized the collection and monetization of more
and more user data and more invasions of privacy online, part of why we continue to push for a privacy-first approach to many of the harmful applications of these
technologies.
In EFF’s podcast episode on AI, we discussed some of the challenges
posed by AI and some of the positive applications this technology can have when it’s not used at the expense of people’s human rights, well-being, and the environment. Unless
something dramatically changes, though, using AI to make decisions about human beings is unfortunately doing a lot more harm than good.
This article is part of our Year in Review series. Read other articles about the
fight for digital rights in 2024.
State Legislatures Are The Frontline for Tech Policy: 2024 in Review
(Mon, 30 Dec 2024)
State lawmakers are increasingly shaping the conversation on technology and innovation policy in the United States. As Congress continues to deliberate key issues such as data
privacy, police use of data, and artificial intelligence, lawmakers are rapidly advancing their own ideas into state law. That’s why EFF fights for internet rights not only in
Congress, but also in statehouses across the country.
This year, some of that work has been to defend good laws we’ve passed before. In California, EFF worked to oppose and defeat S.B. 1076, by State Senator Scott Wilk, which would have undermined the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with
an easy “one-click” button to ask data brokers registered in California to remove their personal information. S.B. 1076 would have opened loopholes for data brokers to duck compliance
with this common-sense, consumer-friendly tool. We were glad to stop it before it got very far.
Also in California, EFF worked with dozens of organizations led by ACLU California Action to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting. The bill would
have made it easy for police to evade accountability, and we are glad to see the California legislature reject this dangerous bill. For the full rundown of our highlights and lowlights
in California, you can check out our recap of this year’s session.
EFF also supported efforts from the ACLU of Massachusetts to pass the Location Shield Act, which, as introduced, would
have required companies to get consent before collecting or processing location data and largely banned the sale of location data. While the bill did not become law this year, we look
forward to continuing the fight to push it across the finish line in 2025.
As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues.
States Continue to Experiment
Several states also introduced bills this year that raise issues similar to those raised by the federal Kids Online Safety Act, which attempts to address young people’s safety online but instead
introduces considerable censorship and privacy concerns.
For example, in California, we were able to stop A.B. 3080, authored by
Assemblymember Juan Alanis. We opposed this bill for many reasons, including that it was not clear what counted as “sexually explicit content” under its definition. This vagueness would have set up barriers for youth—particularly LGBTQ+ youth—seeking to access legitimate content online.
We also oppose any bills, including A.B. 3080, that require age verification to access certain sites or social media networks. Lawmakers filed bills that have this requirement in
more than a dozen states. As we said in comments to the New York Attorney General’s office on their
recently passed “SAFE for Kids Act,” none of the age verification methods the state was considering is both privacy-protective and entirely accurate. Age-verification requirements harm all
online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.
We also continue to watch lawmakers attempting to regulate the creation and spread of deepfakes. Many of these proposals, while well-intentioned, are written in ways that likely
violate First Amendment rights to free expression. In fact, less than a month after California’s governor signed a deepfake bill into law, a federal judge put its enforcement on pause (via a preliminary injunction)
on First Amendment grounds. We encourage lawmakers to explore ways to
focus on the harms that deepfakes pose without endangering speech rights.
On a brighter note, some state lawmakers are learning from gaps in existing privacy law and working to improve standards. In the past year, both Maryland and Vermont have advanced
bills that significantly improve state privacy laws we’ve seen before. The Maryland Online Data Privacy
Act (MODPA), authored by State Senator Dawn File and Delegate Sara Love (now State Senator Sara Love), contains strong data minimization requirements. Vermont’s privacy
bill, authored by State Rep. Monique Priestley, included the crucial right for individuals to sue companies that violate their privacy. Unfortunately, while the bill passed both
houses, it was vetoed by Vermont Gov. Phil
Scott. As private rights of action are among our top priorities in privacy
laws, we look forward to seeing more bills this year that contain this important enforcement measure.
Looking Ahead to 2025
2025 will be a busy year for anyone who works in state legislatures. We already know that state lawmakers are working together on issues such as AI legislation. As we’ve said before, we look forward to being a part of these
conversations and encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms.
As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues. So, we’ll continue to work—along with partners at other
advocacy organizations—to advise lawmakers and to speak up. We’re counting on our supporters and individuals like you to help us champion digital rights. Thanks for your support in
2024.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
EFF’s 2023 Annual Report Highlights a Year of Victories: 2024 in Review
(Sun, 29 Dec 2024)
Every fall, EFF releases its annual report, and 2023 was the year of Privacy First. Our annual report dives into our groundbreaking whitepaper along with victories in
freeing the law, right to repair, and more. It’s a great, easy-to-read summary of the year’s work, and it contains interesting tidbits
about the impact we’ve made—for instance, did you know that, as of 2023, 394,000 people had downloaded an episode of EFF’s podcast, “How to Fix the Internet”? Or that EFF had donors in 88 countries?
As you can see in the report, EFF’s role as the oldest, largest, and most trusted digital rights organization became even more important when tech law and policy
commanded the public’s attention in 2023. Major headlines pondered the future of internet freedom. Arguments around free speech, digital privacy, AI, and social media dominated
Congress, state legislatures, the U.S. Supreme Court, and the European Union.
EFF intervened with logic and leadership to keep bad ideas from getting traction, and we articulated solutions to legitimate concerns with care and nuance in our
whitepaper, Privacy First: A Better Way to Protect Against Online Harms. It demonstrated
how seemingly disparate concerns are in fact linked to the dominance of tech giants and the surveillance business models used by most of them. We noted how these business models also
feed law enforcement’s increasing hunger for our data. We pushed for a comprehensive approach to privacy instead and showed how this would protect us all more effectively than harmful
censorship strategies.
The longest running fight we won in 2023 was to free the law: In our legal representation of PublicResource.org, we successfully ensured that copyright law does not block you from
finding, reading and sharing laws, regulations and building codes online. We also won a major victory in helping to pass a law in California to increase tech users’ ability to control
their information. In states across the nation, we helped boost the right to repair. Due to the efforts of the many technologists and advocates involved with Let’s Encrypt, HTTPS
Everywhere, and Certbot over the last 10 years, as much as 95% of the web is now encrypted. And that’s just barely scratching the surface.
Read the Report
Obviously, we couldn’t do any of this without the support of our members, large and small. Thank you. Take a look at the report for more information about the work we’ve been
able to do this year thanks to your help.
This article is part of our Year in Review series. Read other articles about the fight for digital
rights in 2024.
Aerial and Drone Surveillance: 2024 in Review
(Sun, 29 Dec 2024)
We've been fighting against aerial surveillance for decades because we recognize the immense threat from Big Brother in the sky. Even if you’re within the confines of your backyard, you are exposed to eyes from
above.
Aerial surveillance was first conducted with manned aircraft, which the Supreme Court held was permissible without a warrant in a couple of cases in the 1980s. But, as we’ve argued to courts, drones have changed the equation. Drones were developed as a military technology before being adopted by domestic law enforcement. And in the past decade, commercial drone makers began marketing to civilians, making drones ubiquitous in our lives and exposing us to being watched from above by the government and our
neighbors. But we believe that when we're in the constitutionally protected areas of backyards or homes, we have the right to privacy, no matter how technology has
advanced.
This year, we focused on fighting back
against aerial surveillance facilitated by advancement in these technologies. Unfortunately, many of the legal challenges to aerial and drone surveillance are hindered by those
Supreme Court cases. But we argued that these cases, decided around the same time people were playing Space Invaders on the Atari 2600 and watching The Goonies on VHS, should not control the legality of conduct in the age of Animal Crossing and 4K streaming services. As nostalgic as those memories may be, laws from those times are just as outdated as 16K RAM packs and magnetic videotapes. And we have applauded courts
for recognizing that.
Unfortunately, the Supreme Court has failed to update its understanding of aerial surveillance, even though other courts
have found certain types of aerial surveillance to violate the federal
and state constitutions.
Because of this ambiguity, law enforcement agencies across the nation have been quick to adopt
various drone systems, especially those marketed as a “drone as first responder” program, which ostensibly allows police to assess a situation–whether it’s
dangerous or requires police response at all–before officers arrive at the scene. Data from the Chula Vista
Police Department in Southern California, which pioneered the model, shows that drones frequently respond to domestic violence, unspecified disturbances, and requests for
psychological evaluations. Likewise, flight logs indicate the drones are often used to investigate crimes related to homelessness. The Brookhaven Police Department in Georgia also has
adopted this model. While these programs sound promising in theory, municipalities have been reluctant to share the data, despite courts ruling
that the information is not categorically closed to the public.
Additionally, while law enforcement agencies are quick to assure the public that their policy respects privacy concerns, those can be hollow assurances.
The NYPD promised
that they would not surveil constitutionally protected backyards with drones, but Eric Adams decided to use them to spy on backyard parties
over Labor Day in 2023 anyway. Without strict regulations in place, our privacy interests are at the whims of whoever holds power over these
agencies.
Alarmingly, there are increasing numbers of calls by police departments and drone manufacturers to arm remote-controlled
drones. After widespread backlash, including resignations from its ethics board, drone manufacturer Axon in 2022 said it would pause a program to develop a drone armed with a taser to be
deployed in school shooting scenarios. We’re likely to see more proposals like this, including drones armed with pepper spray and other crowd
control weapons.
As drones incorporate more technological payloads and become cheaper, aerial surveillance has become a favorite tool of law enforcement and other government agencies. We must ensure that these technological developments do not encroach on our constitutional rights to
privacy.
This article is part of our Year in Review series. Read other articles about the fight for digital
rights in 2024.
Restrictions on Free Expression and Access to Information in Times of Change: 2024 in Review
(Sun, 29 Dec 2024)
This was a historic year: a year in which elections took place in countries home to almost half the world’s population, a year of war, and a year of collapse of or chaos within several governments. It was also a year of new
technological developments, policy changes, and legislative developments. Amidst these sweeping changes, freedom of expression has never been more important, and around the world,
2024 saw numerous challenges to it. From new legal restrictions on speech to wholesale internet shutdowns, here are just a few of the threats to freedom of expression online that we
witnessed in 2024.
Internet shutdowns
It is sadly not surprising that, in a year in which national elections took place in at least 64 countries, internet shutdowns would be commonplace. Access Now, which tracks
shutdowns and runs the KeepItOn Coalition (of which EFF is a member), found that
seven countries—Comoros, Azerbaijan, Pakistan, India,
Mauritania, Venezuela, and
Mozambique—restricted access to the internet at least partially during election periods. These restrictions inhibit people from being able to share news of what’s happening on the
ground, but they also impede access to basic services, commerce, and communications.
Repression of speech in times of conflict
But elections aren’t the only justification governments use for restricting internet access. In times of conflict or protest, access to internet infrastructure is key for
enabling essential communication and reporting. Governments know this, and over the past decades, have weaponized access as a means of controlling the free flow
of information. This year, we saw Sudan enact a total communications blackout amidst conflict and displacement. The Iranian government has over the past two years repeatedly
restricted access to the internet and social media during protests. And Palestinians in Gaza have been subject to repeated internet blackouts inflicted by Israeli authorities.
Social media platforms have also played a role in restricting speech this year, particularly when it comes to Palestine. We documented unjust content moderation by companies at
the request of Israel’s Cyber Unit, submitted comment
to Meta’s Oversight Board on the use of the slogan “from the river to the sea” (which the Oversight Board notably agreed with),
and submitted comment to the UN Special Rapporteur on Freedom of Expression and Opinion expressing concern about the disproportionate impact of platform restrictions on expression by
governments and companies.
In our efforts to ensure free expression is protected online, we collaborated with numerous groups and coalitions in 2024, including our own global content moderation coalition,
the Middle East Alliance for Digital Rights, the DSA Human Rights Alliance, EDRI, and many
others.
Restrictions on content, age, and identity
Another alarming 2024 trend was the growing push from several countries to restrict access to the internet by age, often by means of requiring ID to get online, thus inhibiting
people’s ability to identify as they wish. In Canada, an overbroad age verification bill, S-210, seeks to prevent
young people from encountering sexually explicit material online, but would require all users to submit identification before going online. The
UK’s Online Safety Act, which EFF has opposed since its first introduction, would also require
mandatory age verification, and would place penalties on websites and apps that host otherwise-legal content deemed “harmful” by regulators to minors. And similarly in the United
States, the Kids Online Safety Act (still under
revision) would require companies to moderate “lawful but awful” content and subject users to privacy-invasive age verification. And in recent weeks, Australia has also
enacted a vague law that aims to block teens and
children from accessing social media, marking a step back for free expression and privacy.
While these governments ostensibly aim to protect children from harm, as we have repeatedly demonstrated, these laws can also cause harm to young people by preventing them from accessing information that is not taught in schools or otherwise accessible in their
communities.
One group that is particularly impacted by these and other regulations enacted by governments around the world is the LGBTQ+ community. In June, we noted that censorship of
online LGBTQ+ speech is on the rise in a number of
countries. We continue to keep a close watch on governments that seek to restrict access to vital information and communications.
Cybercrime
We’ve been pushing back against cybercrime laws for a long time. In 2024, much of that work focused on the UN Cybercrime Convention, a treaty
that would allow states to collect evidence across borders in cybercrime cases. While that might sound acceptable to many readers, the problem is that numerous countries utilize
“cybercrime” as a means of punishing speech. One such country is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights
defenders, and those criticizing the government.
EFF has fought back against Jordan’s
cybercrime law, as well as bad cybercrime laws in China, Russia, the Philippines, and elsewhere, and we will continue to do so.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
Cars (and Drivers): 2024 in Review
(Sat, 28 Dec 2024)
If you’ve purchased a car made in the last decade or so, it’s likely jam-packed with enough technology to make your brand new phone jealous. Modern cars have sensors, cameras,
GPS for location tracking, and more, all collecting data—and it turns out in many cases, sharing it.
Cars Sure Are Sharing a Lot of Information
While we’ve been keeping an eye on the evolving state of car privacy for years, everything really took off after a New York Times report this past March found that the car maker G.M. was
sharing information about drivers’ habits
with insurance companies without consent.
It turned out a number of other car companies were doing the same by using deceptive design so people didn’t always realize they were opting into the program. We walked through how to see for yourself
what data your car collects and shares. That said, cars, infotainment systems, and car makers’ apps are so unstandardized that it’s often very difficult for drivers to research, let
alone opt out of data sharing.
Which is why we were happy to see
Senators Ron Wyden and Edward Markey send a letter to the Federal Trade Commission urging it to investigate these practices. The fact is: car makers should not sell our driving
and location history to data brokers or insurance companies, and they shouldn’t make it as hard as they do to figure out what data gets shared and with whom.
Advocating for Better Bills to Protect Abuse Survivors
The amount of data modern cars collect is a serious privacy concern for all of us. But for people in an abusive relationship, tracking can be a nightmare.
This year, California considered three bills intended to help domestic abuse survivors endangered by vehicle tracking. Of those, we initially liked the approach behind two of
them, S.B. 1394 and S.B. 1000. When introduced, both would have served the needs of survivors in a wide range of scenarios without inadvertently creating new avenues
of stalking and harassment for the abuser to exploit. They both required car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected
services within two business days. To make a request, a survivor had to prove the vehicle was theirs to use, even if their name was not on the loan or title.
But the third bill, A.B. 3139, took a different approach. Rather than have people submit requests first and cut access later, this bill required car manufacturers to terminate
access immediately, and only required some follow-up documentation up to seven days later. Likewise, S.B. 1394 and S.B. 1000 were amended to adopt this "act first, ask questions later"
framework. This approach is helpful for survivors in one scenario—a survivor who has no documentation of their abuse, and who needs to get away immediately in a car owned by their
abuser. Unfortunately, this approach also opens up many new avenues of stalking, harassment, and abuse for survivors. These bills ended up being combined into S.B. 1394, which retained some provisions we remain concerned about.
It’s Not Just the Car Itself
Because of everything else that comes with car ownership, a car is just one piece of the mobile privacy puzzle.
This year we fought against A.B. 3138 in California, which proposed
adding GPS technology to digital license plates to make them easier to track. The bill passed, unfortunately, but location data privacy continues to be an important
issue that we’ll fight for.
We wrote about a bulletin released by the
U.S. Cybersecurity and Infrastructure Security Agency about infosec risks in one brand of automated license plate readers (ALPRs). Specifically, the bulletin outlined seven
vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials. The sheer scale of this vulnerability is alarming: EFF
found that just 80 agencies in California, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their
"pattern of life," and even identify their relations and associates.
Finally, in order to drive a car, you need a license, and increasingly states are offering digital IDs. We dug deep into California’s mobile ID app, wrote about the various issues with mobile IDs—which range from equity to privacy
problems—and put together an FAQ to help you decide if
you’d even benefit from setting up a mobile ID if your state offers one. Digital IDs are a major concern for us in the coming years, both due to the unanswered questions about their
privacy and security, and their potential use for government-mandated age verification on the internet.
The privacy problems of cars are of increasing importance, which is why Congress and the states must pass comprehensive consumer data privacy legislation with strong data minimization
rules and requirements for clear, opt-in consent. While we tend to think of data privacy laws as dealing with computers, phones, or IoT devices, they’re just as applicable, and
increasingly necessary, for cars, too.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
Behind the Diner—Digital Rights Bytes: 2024 in Review
(Sat, 28 Dec 2024)
Although it feels a bit weird to be writing a year in review post for a site that hasn’t even been live for three months, I thought it would be fun to give
a behind-the-scenes look at the work we did this year to build EFF’s newest site, Digital Rights
Bytes.
Since each topic Digital Rights Bytes aims to tackle is in the form of a question, why not do this Q&A style?
Q: WHAT IS DIGITAL RIGHTS BYTES?
Great question! At its core, Digital Rights Bytes is a place where you can get honest answers to the questions that have been bugging you about
technology.
The site was originally pitched as ‘EFF University’ (or EFFU, pun intended) to
help folks who aren’t part of our core tech-savvy base get up-to-speed on technology issues that may be affecting their everyday lives. We really wanted Digital Rights Bytes to be a
place where newbies could feel safe learning about internet freedom issues, get familiar with EFF’s work, and find out how to get involved, without feeling too
intimidated.
Q: WHY DOES THE SITE LOOK SO DIFFERENT FROM OTHER EFF WORK?
With our main
goal of attracting new readers, it was crucial
to brand Digital Rights Bytes differently from other EFF projects. We wanted Digital Rights Bytes to feel like a place where you and your
friend might casually chat over milkshakes—while being served pancakes by a friendly robot. We took that concept and ran with it, going forward with a full
diner theme for the site. I mean, imagine the counter banter you could have at the Digital Rights Bytes
Diner!
Take a look at the Digital Rights Bytes counter!
As part of this concept, we thought it made sense for each topic to be framed as a question. Of course, at EFF, we get a ton of questions from supporters
and other folks online about internet freedom issues, including from our own family and friends. We took some of the questions we see fairly often,
then decided which would be the most important—and most interesting—to answer.
The diner concept is why the site has a bright neon logo, pink and cyan colors, and a neat vintage looking background on desktop. Even the gif that plays on
the home screen of Digital Rights Bytes shows our animal characters chatting ‘round the diner (more on them soon!)
Q: WHY DID YOU MAKE DIGITAL RIGHTS BYTES?
Here’s the thing: technology continues to expand, evolve, and change—and it’s tough to keep up! We’ve all been the tech noob, trying to figure out why our
devices behave the way they do, and it can be pretty overwhelming.
So, we thought that we could help out with that! And what better way to help educate newcomers than explaining these tech issues in short
byte-sized videos:
A clip from the device repair video.
It took some time to nail down the style for the videos on Digital Rights Bytes. But, after some trial and error, we landed on using animals as our lead
characters. A) because they’re adorable. B) because it helped further emphasize the shadowy figures that were often trying to steal their data or make their tech worse for them. It’s
often unclear who is trying to steal our data or rig tech to be worse for the user, so we thought this was fitting.
In addition to the videos, EFF issue experts wrote concise and easy to read pages further detailing the topic, with an emphasis on linking to other experts
and including information on how you can get involved.
Q: HAS DIGITAL RIGHTS BYTES BEEN SUCCESSFUL?
You tell us! If you’re reading these Year In Review blog posts, you’re probably the designated “ask them every tech question in the world” person of your
family. Why not send your family and friends over to Digital Rights Bytes and let us know if the site has been helpful to them!
We’re also looking to expand the site and answer more common questions you and I might hear. If you have suggestions, you should let us know here or on social media! Just use the hashtag
#DigitalRightsBytes and we’ll be sure to consider it.
This article is part of our Year in Review series. Read other articles about the
fight for digital rights in 2024.
NSA Surveillance and Section 702 of FISA: 2024 in Review
(Sat, 28 Dec 2024)
Mass surveillance authority Section 702 of FISA, which allows the government to collect international communications, many of which happen to have one side in the United States,
has been renewed several times since its creation with the passage of the 2008 FISA Amendments Act. This law has been an incessant threat to privacy for over a decade because the FBI
operates on the “finders keepers” rule of surveillance, which means it thinks that because the NSA has “incidentally” collected the U.S. side of conversations, it is now free to sift through them without a warrant.
But 2024 became the year this mass surveillance authority was not only reauthorized by a lion’s share of both
Democrats and Republicans—it was also the year the law got worse.
After a tense fight, some temporary reauthorizations, and a looming expiration, Congress finally passed the Reforming Intelligence and Securing America Act
(RISAA) in April 2024. RISAA not only reauthorized the mass surveillance capabilities of Section 702 without any of the necessary reforms that had been floated in
previous bills, it also enhanced its powers by expanding what it can be used for and who has to adhere to the government’s requests for data.
Where Section 702 was enacted under the guise of targeting people not on U.S. soil to assist with national security investigations, there are no such narrow limits on the use
of communications acquired under the mass surveillance law. Following the passage of RISAA, this private information can now be used to vet immigration and asylum seekers and conduct
intelligence for broadly construed “counter narcotics” purposes.
The bill also included an expanded definition of “Electronic
Communications Service Provider” or ECSP. Under Section 702, anyone who oversees the storage or transmission of electronic communications—be it emails, text messages,
or other online data—must cooperate with the federal government’s requests to hand over data. Under the expanded definition of ECSP, there are intense and well-founded fears that anyone
who hosts servers, websites, or provides internet to customers—or even just people who work in the same building as these providers—might be forced to become a tool of the
surveillance state. As of December 2024, the fight is still on in Congress to
clarify, narrow, and reform the definition of ECSP.
The one merciful change that occurred as a result of the 2024 smackdown over Section 702’s renewal was that the reauthorization only lasts two years. That means in spring 2026 we have to be
ready to fight again to bring meaningful change, transparency, and restriction to Big Brother’s favorite law.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
Global Age Verification Measures: 2024 in Review
(Fri, 27 Dec 2024)
EFF has spent this year urging governments around the world, from Canada to Australia, to abandon their reckless plans to introduce age
verification for a variety of online content under the guise of protecting children online. Mandatory age verification tools are surveillance systems that threaten everyone’s rights to
speech and privacy, and introduce more harm than they seek to combat.
Kids Experiencing Harm is Not Just an Online Phenomenon
In November, Australia’s Prime Minister, Anthony Albanese, claimed that legislation was needed to protect
young people in the country from the supposed harmful effects of social media. Australia’s Parliament later passed the Online Safety Amendment (Social Media Minimum Age) Bill
2024, which bans children under the age of 16 from using social media and forces platforms to take undefined “reasonable steps” to verify users’ ages or face over $30
million in fines. This is similar to last year’s ban on social
media access for children under 15 without parental consent in France, and Norway has also pledged to follow suit with a similar ban.
No study shows such harmful impact, and kids don’t need to fall into a wormhole of internet content to
experience harm—there is a whole world outside the barriers of the internet that contributes to people’s experiences, and all evidence suggests that many young people experience
positive outcomes from social media. Truthful news about what’s going on in the world, such as wars and climate change, is available both online and by
seeing a newspaper on the breakfast table or a billboard on the street. Young people may also be subject to harmful behaviors like bullying in the offline world, as well as
online.
The internet is a valuable
resource for both young people and adults who rely on the internet to find community and themselves. As we said about age verification measures
in the U.S. this year, online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will
all have to beg minors to leave and institute age verification tools to ensure that it happens.
Limiting Access for Kids Limits Access for Everyone
Through this wave of age verification bills, governments around the world are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security
simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning
sexual content usually hurt marginalized communities and groups that serve
them the most. History shows
that over-censorship is inevitable.
This year, Canada also introduced an age
verification measure, bill S-210, which seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that
“make available” explicit content to adopt age verification services. This was introduced to prevent harms like the “development of pornography addiction” and “the reinforcement of
gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.” But requiring people of all ages to show ID to get online
won’t help women or young people. When these large
services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. This
creates a legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who just transmit it–knowingly or not.
Without Comprehensive Privacy Protections, These Bills Exacerbate Data Surveillance
Under mandatory age verification requirements, users will have no way to be certain that the data they’re handing over is not going to be retained and used in unexpected ways,
or even shared to unknown third parties. Millions of adult internet users would also be entirely blocked from accessing protected speech online because they are not in possession of
the required form of
ID.
Online age verification is not
like flashing an ID card in person to buy particular physical items. In places that lack comprehensive data privacy legislation, the risk of surveillance is
extensive. First, a person who submits identifying information online can never be sure if websites will keep that information, or how that information might be used or disclosed.
Without requiring all parties who may have access to the data to delete that data, such as third-party
intermediaries, data brokers, or advertisers, users are left highly vulnerable to data breaches and other security harms at companies responsible for storing or
processing sensitive documents like drivers’ licenses.
Second, and unlike in-person age-gates, the most common way for websites to comply with a potential verification system would be to require all users to upload and submit—not
just momentarily display—a data-rich government-issued ID or other document with personal identifying information. In a brief to a U.S. court, EFF explained how this
leads to a host of serious anonymity, privacy, and security concerns. People shouldn't have to disclose to the government what websites
they're looking at—which could reveal sexual preferences or other extremely private information—in order to get information from that website.
These proposals are coming to the U.S. as well. We analyzed various age verification methods in
comments to the New York Attorney General. None of them are both accurate and privacy-protective.
The Scramble to Find an Effective Age Verification Method Shows There Isn't One
The European Commission is also currently working on guidelines for the implementation of the child safety article of the Digital Services Act (Article 28) and may come up
with criteria for effective
age verification. In parallel, the Commission has asked for proposals for a
'mini EU ID wallet' to implement device-level age verification ahead of the expected roll out of digital identities across the EU in 2026. At the same time, smaller social media
companies and dating platforms have for years been arguing that age verification should take place at the device or app-store level, and will likely support the Commission's plans. As
we move into 2025, EFF will continue to follow these developments as the Commission’s apparent expectation on porn platforms to adopt age verification to comply with their risk
mitigation obligations under the DSA becomes clearer.
Mandatory age verification is the wrong approach to protecting young people online. In 2025, EFF will continue urging politicians around the globe to acknowledge these
shortcomings, and to explore less invasive approaches to protecting all people from online
harms.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.