You Can Be a Part of this Grassroots Movement
(Thu, 26 Dec 2024)
You ever hear the saying, "it takes a village"? I never really understood the saying until I started going to conferences, attending protests, and working on EFF's membership
team.
You see, EFF's mission thrives because we are powered by people like you. Just as we fight for your privacy and free expression through grassroots advocacy, we rely on grassroots
support to make all of our work possible. Will you join our movement with a small monthly donation today?
Start A Monthly Donation
Your Support Makes the Difference
Powered by your support, EFF never needs to appease outside interests. With over 75% of our funding coming from individual donations, your contribution ensures that we can keep
fighting for your rights online—no matter what. So, whether you give $5/month or include us in your planned giving, you're ensuring our team can stand up for you.
Donate by December 31 to help unlock bonus grants! Your automatic monthly or annual donation—of any amount—will count towards our Year-End
Challenge and ensure that you're helping us win this challenge in future years, too.
BE A PART OF THE CHANGE
When you become a Sustaining Donor with a monthly or annual gift, you are not only supporting EFF's expert lawyers and activists, but you also get to choose
free EFF member gear each year as a thank-you from us. For just $10/month, you can even choose our newest member t-shirt: Fix Copyright.
Show your support for privacy and free speech with EFF member gear.
We'd love to have you beside us for every legislative victory we share, every spotlight we shine on bad actors, and every tool we develop to keep you safe online.
Start a monthly donation with EFF today and keep the digital freedom movement strong!
Join The Digital Rights Movement
Unlock Bonus Grants Before 2025
________________________
EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible
as allowed by law.
Surveillance Self-Defense: 2024 in Review
(Thu, 26 Dec 2024)
This year, we celebrated the 15th anniversary of our
Surveillance Self-Defense (SSD) guide. How’d we celebrate? We kept at it—continuing to work on, refine, and update one of the longest-running security and privacy
guides on the internet.
Technology changes quickly enough as it is, but so does the language we use to describe that technology. In order for SSD to thrive, it needs careful attention throughout the
year. So, we like to think of SSD as a garden, always in need of a little watering, maybe some trimming, and the occasional mowing down of dead technologies.
Brushing Up on the Basics
A large chunk of SSD exists to explain concepts around digital security in the hopes that you can take that knowledge to make your own decisions about your specific needs. As we
often say, security is a mindset, not a purchase. But in order to foster that mindset, you need some basic knowledge. This year, we set out to refine some of this guidance in the
hopes of making it easier to read and useful for a variety of skill levels. The guides we updated included:
Choosing Your Tools
Communicating with Others
Keeping Your Data Safe
Seven Steps to Digital Security
Why Metadata Matters
What Is Fingerprinting?
How do I Protect Myself Against Malware?
Big Guides, Big (and Small) Changes
If you’re looking for something a bit longer, then some of our more complicated guides are practically novels. This year, we updated a few of these.
We went through our Privacy Breakdown of Mobile Phones and updated it
with more recent examples when applicable, and included additional tips at the end of some sections for actionable steps you can take. Phones continue to be one of the most
privacy-invasive devices we own, and getting a handle on what they’re capable of is the first step to figuring out what risks you may face.
Our Attending a Protest guide is something we revisit every year (sometimes a couple times a
year) to ensure it’s as accurate as possible. This year was no different, and while there were no sweeping changes, we did update the included PDF guide and added screenshots where
applicable.
We also reworked our How to: Understand and Circumvent Network
Censorship slightly to frame it more as instructional guidance, and included new features and tools to get around censorship, like utilizing a proxy in messaging
tools.
New Guides
We saw two additions to the SSD this year. First up was How to: Detect Bluetooth
Trackers, our guide to locating unwanted Bluetooth trackers—like Apple AirTags or Tile—that someone may use to track your location. Both Android and iOS have made
changes to help detect these sorts of trackers, but the wide array of different products on the market means detection doesn’t always work as expected.
We also put together a guide for the iPhone’s Lockdown Mode. While not a
feature that everyone needs to consider, it has proven helpful in some cases, and knowing what those circumstances are is an important step in deciding if it’s a feature you need to
enable.
But How do I?
As the name suggests, our Tool Guides are all about learning how to best protect what
you do on your devices. This might be setting up two-factor authentication, turning on encryption on your laptop, or setting up something like Apple’s Advanced Data Protection. These
guides tend to need a yearly look to ensure they’re up-to-date. For example, Signal saw
the launch of usernames, so we went in and made sure that was added to the guide. Here’s what we updated this year:
How to: Avoid Phishing Attacks
How to: Enable Two-factor Authentication
How to: Encrypt Your Computer
How to: Encrypt Your iPhone
How to: Use Signal
And Then There Were Blogs
Surveillance Self-Defense isn’t just a website; it’s also a general approach to privacy and security. To that end, we often use our blog to tackle more specific questions or
respond to news.
This year, we talked about the risks you might face using your state’s digital driver’s license, and whether or not the promise of
future convenience is worth the risks of today.
We dove into an attack method in VPNs called TunnelVision,
which showed how it was possible for someone on a local network to intercept some VPN traffic. We’ve reiterated our advice here that VPNs—at least from providers who've worked to
mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.
Location data privacy is still a major issue this year, with potential and horrific abuses of this data
popping up in the news constantly. We showed how and why you should disable location sharing in apps that
don’t need access to function.
As mentioned above, our SSD guide on protesting is a perennial, always in need of pruning, but sometimes you need to plant a whole new flower, as was the case when we decided to write
up tips for protesters on campuses around the United States.
Every year, we fight for more privacy and security, but until we get that, stronger controls over our data and a better understanding of how technology works are our best
defense.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
EU Tech Regulation—Good Intentions, Unclear Consequences: 2024 in Review
(Thu, 26 Dec 2024)
For a decade, the EU has served as the regulatory frontrunner for online services and new technology. Over the past two EU mandates (terms), the EU Commission introduced many regulations covering all sectors, but Big Tech has been at the center of its
focus. As the EU seeks to regulate the world’s largest tech companies, the world is taking notice, and debates about the landmark Digital Markets Act (DMA) and Digital Services Act
(DSA) have spread far beyond Europe.
The DSA’s focus is the governance of online content. It requires increased transparency in content moderation while holding platforms accountable for
their role in disseminating illegal content.
For “very large online platforms” (VLOPs), the DSA imposes a complex challenge: addressing “systemic risks,” those arising from their platforms’ underlying design and rules as well as from how these services are used by the public. Measures to address these risks often pull in opposite directions. VLOPs must tackle illegal content and address public security concerns while simultaneously upholding fundamental rights, such as freedom of expression, and while also considering impacts on electoral processes and more nebulous issues like “civic discourse.” Striking this balance is no mean feat, and the role of regulators and civil society in guiding and monitoring
this process remains unclear.
As you can see, the DSA is trying to walk a fine line: addressing safety concerns and the priorities of the market. It imposes uniform rules on platforms that are meant to ensure fairness for individual users, but without constraining the platforms’ operations so much that they can’t innovate and thrive.
The DMA, on the other hand, concerns itself entirely with the macro level – not on the rights of users, but on the obligations of, and restrictions on, the
largest, most dominant platforms.
The DMA concerns itself with a group of “gatekeeper” platforms that control other businesses’ access to digital markets. For these gatekeepers, the DMA
imposes a set of rules that are supposed to ensure “contestability” (that is, making sure that upstarts can contest gatekeepers’ control and maybe overthrow their power) and
“fairness” for digital businesses.
Together, the DSA and DMA promise a safer, fairer, and more open digital ecosystem.
As 2024 comes to a close, important questions remain: How effectively have these laws been enforced? Have they delivered actual benefits to users?
Fairness Regulation: Ambition and High-Stakes Clashes
There’s a lot to like in the DMA’s rules on fairness, privacy and choice...if you’re a technology user. If you’re a tech monopolist, those rules are a nightmare
come true.
Predictably, the DMA was inaugurated with a
no-holds-barred dirty fight between the biggest US tech giants and European enforcers.
Take commercial surveillance giant Meta: the company’s mission is to relentlessly gather, analyze and abuse your personal information, without your consent
or even your knowledge. In 2016, the EU passed its landmark privacy law, called the General Data Protection Regulation. The GDPR was clearly intended to halt Facebook’s romp through
the most sensitive personal information of every European.
In response, Facebook simply pretended the GDPR didn’t say what it clearly said,
and went on merrily collecting Europeans’ information without their consent. Facebook’s defense for this is that they were
contractually obliged to collect this information, because their terms and
conditions represented a promise to users to show them surveillance ads, and if they didn’t gather all that
information, they’d be breaking that promise.
The DMA strengthens the GDPR by clarifying the blindingly obvious point that a privacy law exists to protect your privacy. That means that Meta’s services – Facebook, Instagram, Threads, and its “metaverse” (snicker) – are no longer allowed to plunder your private information. They must get your consent.
In response, Meta announced that it would create a new paid tier for people who don’t want to be spied on, and thus anyone who continues to use the service
without paying for it is “consenting” to be spied on. The DMA explicitly bans these “Pay or OK” arrangements, but then, the GDPR banned Meta’s spying, too. Zuckerberg and his
executives are clearly expecting that they can run the same playbook again.
Apple, too, is daring the EU to make good on its threats. Ordered to open up its iOS devices (iPhones, iPads and other mobile devices) to third-party app
stores, the company cooked up a Kafkaesque maze of junk fees, punitive contractual clauses, and unworkable conditions and declared itself to be in compliance with the DMA.
For all its intransigence, Apple is getting off extremely lightly. In an absurd turn of events, Apple’s iMessage system was exempted from the DMA’s
interoperability requirements (which would have forced Apple to allow other messaging systems to connect to iMessage and vice-versa). The EU Commission decided that Apple’s iMessage –
a dominant platform that the company’s CEO openly boasts about as a source of lock-in – was not a “gatekeeper
platform.”
Platform regulation: A delicate balance
For regulators and the public, the growing power of online platforms has sparked concerns: how can we address harmful content while also protecting platforms from being pushed to over-censor, so that freedom of expression isn’t in the firing
line?
EFF has advocated for fundamental principles like “transparency,” “openness,” and “technological self-determination.” In our European work, we always
emphasize that new legislation should preserve, not undermine, the protections that have served the internet well. Keep what works, fix what is broken.
In the DSA, the EU got it right, with a focus on platforms’ processes rather than on speech control. The DSA has rules for reporting problematic content,
structuring terms of use, and responding to erroneous content removals. That’s the right way to do platform governance!
But that doesn’t mean we’re not worried about the DSA’s new obligations for tackling illegal content and systemic risks, broad goals that could easily lead
to enforcement overreach and censorship.
In 2024, our fears were realized, when the DSA’s ambiguity as to how systemic risks should be mitigated created a new, politicized enforcement problem.
Then-Commissioner Thierry Breton sent a letter to Twitter, saying that under the DSA, the platform had an obligation to remove content related to far-right xenophobic riots in the UK,
and about an upcoming meeting between Donald Trump and Elon Musk. This letter sparked widespread concern that the DSA was a tool to allow bureaucrats to decide which political speech
could and could not take place online. Breton’s letter sidestepped key safeguards in the DSA: the Commissioner ignored the question of “systemic risks” and instead focused on
individual pieces of content, and then blurred the DSA’s critical line between “illegal” and “harmful”; Breton’s letter also ignored the territorial limits of the DSA, demanding
content takedowns that reached outside the EU.
Make no mistake: online election disinformation and misinformation can have serious real-world consequences, both in the U.S. and globally. This is why EFF
supported the EU Commission’s initiative to gather input on measures platforms should take to mitigate risks linked to disinformation and electoral processes. Together with ARTICLE
19, we submitted comments to the EU Commission on future guidelines for platforms. In our response, we
recommended that the guidelines prioritize best practices instead of policing speech. Additionally, we recommended that DSA risk assessment and mitigation compliance evaluations
prioritize ensuring respect for fundamental rights.
The typical way many platforms address organized or harmful disinformation is by removing content that violates community guidelines, a measure trusted by
millions of EU users. But despite concerns raised by EFF and other civil society groups, a new law in the EU, the EU Media Freedom Act, enforces a 24-hour content moderation exemption for media, effectively forcing platforms to host media content. While EFF successfully pushed for crucial changes and stronger protections, we remain concerned about the real-world challenges of enforcement.
This article is part of our Year in Review series. Read other articles about the
fight for digital rights in 2024.
Celebrating Digital Freedom with EFF Supporters: 2024 in Review
(Thu, 26 Dec 2024)
“EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.” It can be a tough job. A lot of our time is spent fighting bad
things that are happening in the world or fixing things that have been broken for a long time.
But this work is important, and we've accomplished great things this year! Thanks to your help, we pushed the USPTO to withdraw harmful patent review proposals, fought for the public's right to access police drone footage, and continued to see more and
more of the web encrypted thanks to Certbot and Let’s Encrypt.
Of course, the biggest reason EFF is able to fight for privacy and free expression online is support from EFF members. Public support is not only the reason we can operate but is also
a great motivator to wake up and advocate for what’s right—especially when we get to hang out with some really cool folks! And with that, I’d like to reminisce.
EFF's Bay Area Festivities
Early in the year we held our annual Spring Members’ Speakeasy. We invited supporters in the Bay Area to join us at Babylon Burning, where all of EFF’s t-shirts, hoodies, and much of our swag are
made. There, folks got to hand-print their own tote bags! It was also a fun opportunity to see t-shirts that even I had never seen before. Side note: EFF has a lot of mechas on members’ t-shirts.
Vintage EFF t-shirts hung across the walls at Babylon Burning.
The EFF team had a great time with EFF supporters at events throughout the year. Of course, my mind was blown seeing the questions EFF gamemasters (including the Cybertiger) came up with for both Tech Trivia and Cyberlaw Trivia. What was even more impressive was seeing how many answers teams got
right at both events. During Cyberlaw Trivia, one team was able to recite 22 digits of pi, winning the tiebreaker question and the coveted first place prize!
Beating the Heat in Las Vegas
EFF staff with the Uber Contributor Award.
Next came one of my favorite summer pastimes: beating the heat in Las Vegas, where we get to see thousands of EFF supporters at the summer security conferences—BSidesLV, Black Hat, and DEF CON. This year, over one thousand people signed up
to support the digital freedom movement in just that one week. The support EFF receives during the summer security conferences always amazes me, and it’s a joy to say hi to everyone
that stops by to see us. We received an award from DEF CON and even speed-ran a legal case, ensuring a security researcher's ability to give their talk at
the conference.
While the lawyers were handling the legal case at DEF CON, a subgroup of us had a blast participating in the EFF Benefit Poker Tournament. Forty-six supporters and
friends played for money, glory, and the future of the web—all while using these new EFF playing cards! In the end,
only one winner could beat the celebrity guests, including Cory Doctorow and Deviant (even
winning the literal shirt off of Deviant's back).
EFFecting Change
This year we also launched a new livestream series: EFFecting Change. With our
initial three events, we covered recent Supreme Court cases and how they affect the internet, keeping yourself safe when seeking reproductive care, and how to protest with privacy in
mind. We’ve seen a lot of support for these events and are excited to continue them next year. Oh, and no worries if you missed one—they’re all recorded here!
Congrats to Our 2024 EFF Award Winners
We wanted to end the year in style, of course, with our annual EFF Awards. This year we gave awards to 404 Media, Carolina Botero, and Connecting Humanity—and you can watch the keynote if you missed it. We’re grateful to honor
and lift up the important work of these award winners.
EFF staff and EFF Award Winners holding their trophies.
And It's All Thanks to You
There was so much more to this year too. We shared campfire tales from digital freedom legends, the Encryptids; poked
fun at bogus copyright law with our latest membership t-shirt; and hosted even more events throughout the country.
As 2025 approaches, it’s important to reflect on all the good work that we’ve done together in the past year. Yes, there’s a lot going on in the world, and times may be challenging,
but with support from people like you, EFF is ready to keep up the fight—no matter what.
Many thanks to all of the EFF members who joined forces with us this year! If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.
This article is part of our Year in Review series. Read other articles about the fight for digital
rights in 2024.
Fighting For Progress On Patents: 2024 in Review
(Wed, 25 Dec 2024)
The rights we have in the offline world–to speak freely, create culture, play games, build new things and do business–must be available to us online, as well. This core belief
drives EFF’s work to fight the misuse of the patent system.
Despite significant progress we’ve made over the last decade, patents, and in particular vague software patents, remain a serious threat to online rights. The median patent
lawsuit isn't filed by what Americans would recognize as an ‘inventor,’ but by an anonymous limited liability company that provides no products or services, and instead uses patents
to threaten others over alleged infringement. In other words, a patent troll. In the tech sector,
more than 85% of patent lawsuits are filed by these
“non-practicing entities.”
That’s why at EFF, we continue to help individuals and organizations fight patent threats related to everyday activities like using CAPTCHAs and picture menus, tracking packages or vehicles, teaching languages, holding online contests, or playing simple games online.
Here’s where the fight stands as we move into 2025.
Defending the Public’s Right To Challenge Bad Patents
In 2012, recognizing the persistent problem of an overburdened patent office issuing countless dubious patents each year, Congress established a system called “inter
partes reviews” (IPRs) to review and challenge patents. While far from perfect, IPRs have led to the cancellation of thousands of patents that should never have been granted in the
first place.
It’s no surprise that big patent owners and patent trolls have long sought to dismantle the IPR system. After unsuccessful attempts to persuade federal courts to dismantle IPRs, they shifted tactics in the
past 18 months, attempting to convince the U.S. Patent and Trademark Office (USPTO) to undermine the IPR system by changing the rules on who can use it.
EFF opposed these proposed changes,
urging our supporters to file public comments. This effort
was a resounding success. After reviewing thousands of comments, including nearly 1,000 inspired by EFF’s call to action, the USPTO withdrew its proposal.
Stopping Congress From Re-Opening The Door To The Worst Patents
The patent system, particularly in the realm of software, is broken. For more than 20 years, the U.S. Patent Office has issued patents on basic cultural or business practices,
often with little more than the addition of computer jargon or trivial technical elements.
The Supreme Court addressed this issue a decade ago with its landmark decision in a case called Alice v. CLS
Bank, ruling that simply adding computer language to these otherwise generic patents isn’t enough to make them valid. However, Alice hasn’t fully protected us from patent trolls. Even with this decision, the cost of challenging a patent can run into hundreds of thousands of dollars, enabling patent
trolls to make “nuisance” demands for amounts of $100,000 or less. But Alice has dampened the severity and frequency of patent troll claims, and allowed many more businesses to
fight back when needed.
So we weren’t surprised when some large patent owners tried again this year to overturn Alice, with the introduction of the Patent Eligibility Restoration Act (PERA), which would bring the
worst patents back into the system. PERA would also have overturned the Supreme Court ruling that prevents the patenting of human genes. EFF opposed PERA at every stage, and late this year, its supporters abandoned their
efforts to pass it through the 118th Congress. We know they will try again next year–we’ll be ready.
Shining Light On Secrecy In Patent Litigation
Litigation in the U.S. is supposed to be transparent, particularly in patent cases involving technologies that impact millions of internet users daily. Unfortunately, this
is not always the case. In Entropic Communications LLC v. Charter Communications, filed in the U.S. District Court for the Eastern District of Texas,
overbroad sealing of documents has obscured the case from public view. EFF intervened in the case to protect the public’s right to
access federal court records, as the claims made by Entropic could have wide-reaching implications for anyone using cable modems to connect to the internet.
Our work to ensure transparency in patent disputes is ongoing. In 2016, EFF intervened in another overly-sealed patent case in the Eastern District of Texas. In 2022, we
did the same in California, securing an important
transparency ruling. That same year, we supported a judge’s investigation into patent owners in
Delaware, which ultimately resulted in referrals for
criminal investigation. The judge’s actions were upheld on appeal this year.
It remains far too easy for patent trolls to extort and exploit individuals and companies simply for creating or using software. In 2025, EFF will continue fighting for a patent
system that’s open, fair, and transparent.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
We Stood Up for Access to the Law and Congress Listened: 2024 in Review
(Wed, 25 Dec 2024)
For a while, ever since they lost in
court, a number of industry giants have pushed a bill that purported to be about increasing access to the law. In fact, it would give them
enormous power over the public's ability to access, share, teach, and comment on the law.
This sounds crazy—no one should be able to own the law. But these industry associations claim there’s a glaring exception to the rule: safety and building
codes. The key distinction, they insist, is how these particular laws are developed. Often, when it comes to creating the best practices for an industry, a group of experts comes
together to draft model standards. Many of those standards are then “incorporated by reference” into law, making them legal mandates just as surely as the U.S. tax
code.
But unlike most U.S. laws, the industry associations that convene the experts claim that they own a copyright in the results, which means they get to control—and charge for—access to them.
The consequences aren’t hard to imagine. If you are a journalist trying to figure out if a bridge that collapsed violated legal safety standards, you have
to get the standards from the industry association, and pay for it. If you are a renter who wants to know whether your apartment complies with the fire code, you face the same barrier.
And so on.
Many organizations are working to remedy the situation, making standards available online for free (or, in some cases, for free but with a “premium”
version that offers additional services on top). Courts around the country have affirmed their right to do so.
Which brings us to the “Protecting and Enhancing Public Access to Codes Act” or “Pro Codes.” The Act requires industry associations to make standards
incorporated by reference into law available for free to the public. But here’s the kicker: in exchange, Congress will affirm that they have a legitimate copyright in those laws.
This is a bad deal for the public. First, access will be read-only and subject to licensing limits. We already know what that looks like: currently
the associations that make their codes available to the public online do so through clunky, disorganized, siloed websites, largely inaccessible to the print-disabled, and subject to
onerous contractual terms (like a requirement to give up your personal information). The public can’t copy, print, or even link to specific portions of the codes. In other words, you
can look at the law (as long as you aren’t print-disabled and you know exactly what to look for), but you can’t share it, compare it, or comment on it. That’s fundamentally against
the public interest, as many have said. It
gives private parties a windfall to do badly what others, like EFF client Public Resource, already do better and for free.
Second, it’s solving a nonexistent problem. The many volunteers who develop these codes neither need nor want a copyright incentive. The industry
associations don’t need it either—they make plenty of profit through trainings, membership fees, and selling standards that haven’t been incorporated into
law.
Third, it’s unconstitutional under the First, Fifth, and Fourteenth Amendments, which guarantee the public’s right to read, share, and discuss the
law.
We’re pleased that members of Congress have recognized the many problems with this law. Many of you wrote to your members to raise concerns and when it was
brought to a vote in committee, members registered those concerns. While it passed out of the House Judiciary Committee, the House of Representatives was asked to vote on the law “on
suspension,” meaning it can avoid debate and become law if two-thirds of the House vote yes on it. In theory, it’s meant to make it easier to pass uncontroversial
laws.
Because you wrote in, because experts sent letters explaining the problems, enough members of Congress recognized that Pro Codes is not uncontroversial. It
is not a small deal to allow industry giants to own parts of the law.
This year, we are glad that so many people lent their time and energy to understanding the wolf in sheep’s clothing that the Pro Codes Act really was. And
we hope that standards development organizations (SDOs) take note that they cannot pull the wool over everyone’s eyes. Not while we’re keeping watch.
This article is part of our Year in Review series. Read other articles about the fight for digital rights
in 2024.
Related Cases:
Freeing the Law with Public.Resource.Org
Police Surveillance in San Francisco: 2024 in Review
(Wed, 25 Dec 2024)
From a historic ban on police using face recognition, to
landmark CCOPS legislation, to the first ban in the United States of
police deploying deadly force via robot, for several years San
Francisco has been leading the way on necessary reforms over how police use technology.
Unfortunately, 2024 was a far cry from those victories.
While EFF continues to fight for common sense police reforms in our own backyard, this year saw a change in city politics to something that was darker and more unaccountable
than we’ve seen in a while.
In the spring of this year, we opposed Proposition E, a ballot measure which allows the San Francisco Police Department (SFPD) to effectively experiment with any piece of
surveillance technology for a full year without any approval or oversight. This gutted the 2019 Surveillance Technology Ordinance, which required city departments like the SFPD to
obtain approval from the city’s elected governing body before acquiring or using specific surveillance technologies. We understood how dangerous Prop E was to
democratic control and transparency, and even went as far as to fly a plane over San Francisco asking voters to reject
the measure. Unfortunately, despite a strong opposition campaign, Prop E passed in the March 5, 2024 election.
Soon thereafter, we were reminded of the importance of passing democratic control and transparency laws at all levels of government, not just local. AB 481 is a California law requiring law enforcement agencies to get
approval from their local elected governing body before purchasing military equipment, including drones. In the haste to purchase drones after Prop E passed, the SFPD knowingly violated this state law in order to begin purchasing more
surveillance equipment. AB 481 has no real enforcement mechanism, which means concerned residents have to wave their arms around and implore the police to follow the
law. But, we complained loudly enough that the California Attorney General’s office issued a bulletin
reminding law enforcement agencies of their obligations under AB 481.
EFF is an organization proudly based in San Francisco. Our fight to make it a place where technology aids, rather than hinders, safety and equity for all people will
continue–even if that means calling attention to the SFPD’s casual law breaking or helping to defend the privacy laws that made this city a shining example of 21st century
governance.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
The Atlas of Surveillance Expands Its Data on Police Surveillance Technology: 2024 in Review
(Tue, 24 Dec 2024)
EFF’s Atlas of Surveillance is one of the most useful resources for those who want to understand the use
of police surveillance by local law enforcement agencies across the United States. This year, as the police surveillance industry has shifted, expanded, and doubled down on its
efforts to win new cop customers, our team has been busily adding new spyware and equipment to this database. We also saw many great uses of the Atlas from journalists, students, and
researchers, as well as a growing number of contributors. The Atlas of Surveillance currently captures more than 11,700 deployments of surveillance tech and remains the most
comprehensive database of its kind. To learn more about each of the technologies, please check out our Street-Level Surveillance
Hub, an updated and expanded version of which was released at the beginning of 2024.
Removing Amazon Ring
We started off with a big change: the removal of our set of Amazon Ring relationships
with local police. In January, Amazon announced that it would no longer facilitate
warrantless requests for doorbell camera footage through the company’s Neighbors app — a move EFF and other organizations had long called for. Though police can still get access to Ring camera footage by getting a warrant – or through other legal means – we decided that tracking Ring relationships in the
Atlas no longer served its purpose, so we removed that set of information. People should keep in mind that law enforcement can still connect to individual Ring cameras directly
through access facilitated by
Fusus and other platforms.
Adding third-party platforms
In 2024, we added an important growing category of police technology: the third-party investigative platform (TPIP). This is a designation we created for the growing group of
software platforms that pull data from other sources and share it with law enforcement, facilitating analysis of police and other data via artificial intelligence and other tools.
Common examples include LexisNexis Accurint, Thomson Reuters Clear, and others.
New Fusus data
404 Media released a report last January on the use of Fusus, an Axon system
that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. Their investigation revealed that more than 200,000 cameras across
the country are part of the Fusus system, and we were able to add dozens of new entries into the Atlas.
New and updated ALPR data
EFF has been investigating the use of automated license plate readers (ALPRs) across California for years, and we’ve filed hundreds of California Public Records Act requests
with departments around the state as part of our Data Driven project. This year, we were able to update all
of our entries in California related to ALPR data.
In addition, we were able to add more than 300 new law enforcement agencies
nationwide using Flock Safety ALPRs, thanks to a data journalism scraping project
from the Raleigh News & Observer.
Redoing drone data
This year, we reviewed and cleaned up a lot of the data we had on the police use of drones (also known as unmanned aerial vehicles, or UAVs). A chunk of our data on drones was
based on research done by the Center for the Study of the Drone at Bard College, which became inactive in 2020,
so we reviewed and updated any entries that depended on that resource.
We also added new drone data from Illinois, Minnesota, and Texas.
We’ve been watching Drone as First Responder programs since their inception in Chula Vista, CA, and this year we saw vendors like Axon, Skydio, and Brinc make a big push for
more police departments to adopt these programs. We updated the Atlas to contain cities where we know such programs have been deployed.
Other cool uses of the Atlas
The Atlas of Surveillance is designed for use by journalists, academics, activists, and policymakers, and this was another year where people made great use of the
data.
The Atlas of Surveillance is regularly featured in news outlets throughout the country, including in the MIT Technology Review reporting on drones, and news from the Auburn Reporter about ALPR use in Washington. It
also became the focus of podcasts and is featured in the book “Resisting Data Colonialism – A Practical
Intervention.”
Educators and students around the world cited the Atlas of Surveillance as an important source in their research. One of our favorite projects was from a senior at
Northwestern University, who used the data to make a cool visualization of the surveillance technologies in use. At a January 2024 conference at the IT University of Copenhagen,
Bjarke Friborg of the project Critical Understanding of Predictive
Policing (CUPP) featured the Atlas of Surveillance in his presentation, “Engaging Civil Society.” The Atlas was also cited in multiple
academic papers, including the Annual Review of Criminology, and
in a forthcoming paper from Professor Andrew Guthrie Ferguson at American University Washington College of Law titled “Video Analytics and Fourth Amendment
Vision.”
Thanks to our volunteers
The Atlas of Surveillance would not be possible without our partners at the University of Nevada, Reno’s Reynolds School of Journalism, where hundreds of students each semester
collect data that we add to the Atlas. This year we also worked with students at California State University Channel Islands and Harvard University.
The Atlas of Surveillance will continue to track the growth of surveillance technologies. We’re looking forward to working with even more people who want to bring transparency
and community oversight to police use of technology. If you’re interested in joining us, get in touch.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
The U.S. Supreme Court Continues its Foray into Free Speech and Tech: 2024 in Review
(Tue, 24 Dec 2024)
As we said last year, the U.S. Supreme Court has taken an
unusually active interest in internet free speech issues over the past couple of years.
All five pending cases at the end of last year, covering three issues, were decided this year, with varying degrees of First Amendment guidance for internet users and online
platforms. We posted some takeaways from these
recent cases.
We additionally filed an amicus brief in a new case before
the Supreme Court challenging the Texas age verification law.
Public Officials Censoring Comments on Government Social Media Pages
Cases: O’Connor-Ratcliff v. Garnier
and Lindke v. Freed – DECIDED
The Supreme Court considered a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government
disagrees with their views. The threshold question in these cases was what test must be used to determine whether a government official’s social media page is largely private and
therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially
other speech restrictions.
The Supreme Court crafted a two-part fact-intensive test to determine if a government official’s speech on social media counts as “state action” under the First Amendment. The test
includes two required elements: 1) the official “possessed actual authority to speak” on the government’s behalf, and 2) the official “purported to exercise that authority when he
spoke on social media.” As we explained, the
court’s opinion isn’t as generous to internet users as we asked for in our amicus brief, but it does provide guidance to individuals seeking to
vindicate their free speech rights against government officials who delete their comments or block them outright.
Following the Supreme Court’s decision, the Lindke case was remanded back to the Sixth Circuit. We filed an amicus brief in the Sixth Circuit to guide the appellate court in
applying the new test. The court then issued an opinion in which it remanded the case back to the district
court to allow the plaintiff to conduct additional factual development in light of the Supreme Court's new state action test. The Sixth Circuit also importantly held in relation to
the first element that “a grant of actual authority to speak on the state’s behalf need not mention social media as the method of speaking,” which we had argued in our amicus
brief.
Government Mandates for Platforms to Carry Certain Online Speech
Cases: NetChoice v. Paxton and Moody v. NetChoice – DECIDED
The Supreme Court considered whether laws in Florida and Texas violated the First Amendment because they allow those states to dictate when social media sites may not apply standard
editorial practices to user posts. As we argued in our amicus brief urging the court to strike down both laws, allowing
social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the
user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs.
In a win for free speech, the Supreme Court held that social media platforms have a First Amendment right to curate the third-party speech they
select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited. However, the court declined to strike down either law—instead it
sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to
specific functions. The court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First
Amendment standards, a position EFF has consistently urged.
Government Coercion in Social Media Content Moderation
Case: Murthy v. Missouri – DECIDED
The Supreme Court considered the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly
or indirectly forcing a publisher to censor another’s speech (often called “jawboning”). But the court had not previously applied this principle to government communications with
social media sites about user posts. In our amicus brief, we urged the court to recognize that there are both
circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible.
Unfortunately, the Supreme Court did not answer the
important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they
publish? Rather, it dismissed the cases on “standing” because none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future
coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. Thus, while the Supreme Court did not tell us more
about coercion, it did remind us that it is very hard to win lawsuits alleging coercion.
However, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media
context, that the Supreme Court also decided this year: NRA v.
Vullo. In that case, the National Rifle Association alleged that the New York state agency that oversees the insurance industry threatened insurance companies with
enforcement actions if they continued to offer coverage to the NRA. The Supreme Court endorsed a multi-factored test that many of the lower courts had adopted to answer the
ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government
action in order to punish or suppress the plaintiff’s speech?” Those factors are: 1) word choice and tone, 2) the existence of regulatory authority (that is, the ability of the
government speaker to actually carry out the threat), 3) whether the speech was perceived as a threat, and 4) whether the speech refers to adverse consequences.
Some Takeaways From These Three Sets of Cases
The O’Connor-Ratcliff and Lindke cases about social media blocking looked at the government’s role as a social media user. The NetChoice cases about
content moderation looked at government’s role as a regulator of social media platforms. And the Murthy case about jawboning looked at the government’s mixed role as a
regulator and user.
Three key takeaways emerged from these three sets of
cases (across five total cases):
First, internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere
with content moderation, but it will not be infringed by the independent decisions of the platforms themselves.
Second, the Supreme Court recognized that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see it, they decide to
amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. The court moved beyond
the idea that content moderation is largely passive and indifferent.
Third, the cases confirm that traditional First Amendment rules apply to social media. Thus, when government controls the comments section of a social media page, it has the same
First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. And
online platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech
of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.
Government-Mandated Age Verification
Case: Free Speech Coalition v. Paxton
– PENDING
Last but not least, we filed an amicus brief urging the
Supreme Court to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age (see our Year in
Review post on age verification). Under HB 1181, passed in 2023, any website that Texas decides is composed of one-third or more of “sexual material harmful to minors” must collect
age-verifying personal information from all visitors. We argued that the law places undue burdens on adults seeking to access lawful online speech. First, the law forces adults to
submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law requires websites to retain this
information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier, for example. Third, while sharing
many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB
1181 anyway. The court’s decision could have major consequences for the freedom of adults to safely and anonymously access protected speech online.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It: 2024 in Review
(Tue, 24 Dec 2024)
People’s ability to speak online, share ideas, and advocate for change is enabled by the countless online services that host everyone’s views.
Despite the central role these online services play in our digital lives, lawmakers and courts spent the last year trying to undermine a key U.S. law, Section 230, that enables
services to host our speech. EFF was there to fight back on behalf of all internet users.
Section 230 (47 U.S.C. § 230) is not an accident. Congress passed the law in 1996 because it recognized that for users’ speech to
flourish online, services that hosted their speech needed to be protected from legal claims based on any particular user’s speech. The law embodies the principle that everyone,
including the services themselves, should be responsible for their own speech, but not the speech of others. This critical but limited legal protection reflects a careful balance by
Congress, which at the time recognized that promoting more user speech outweighed the harm caused by any individual’s unlawful speech.
EFF helps thwart effort to repeal Section 230
Members of Congress introduced a bill in May
this year that would have repealed Section 230 in 18 months, on the theory that the deadline would motivate lawmakers to come up with a different legal framework in the meantime. Yet
the lawmakers behind the effort provided no concrete alternatives to Section 230, nor did they identify any specific parts of the law they believed needed to be changed. Instead, the
lawmakers were motivated by their and the public’s justifiable dissatisfaction with the largest online services.
As we wrote at the time, repealing Section 230 would be a disaster for
internet users and the small, niche online services that make up the diverse forums and communities that host speech about nearly every interest, religious and political persuasion,
and topic. Section 230 protects bloggers, anyone who forwards an email, and anyone who reposts or otherwise recirculates the posts of other users. The law also protects moderators who
remove or curate other users’ posts.
Moreover, repealing Section 230 would not have hurt the biggest online services, given that they have astronomical amounts of money and resources to handle the deluge of legal claims
that would be filed. Instead, repealing Section 230 would have solidified
the dominance of the largest online services. That’s why Facebook has long run a campaign urging Congress to weaken Section 230 – a cynical effort to use the law to cement its
dominance.
Thankfully, the bill did not advance, in part because internet users wrote to members of
Congress objecting to the proposal. We hope lawmakers in 2025 put their energy toward ending Big Tech’s dominance by enacting a meaningful and comprehensive consumer data privacy law, or by passing laws that enable greater interoperability and competition between social media services. Those efforts will go a long way toward ending Big
Tech’s dominance without harming users’ online speech.
EFF stands up for users’ speech in courts
Congress was not the only government branch that sought to undermine Section 230 in the past year. Two different courts issued rulings this year that will jeopardize people’s ability
to read other people’s posts and make use of basic features of online services that benefit all users.
In Anderson v. TikTok, the U.S. Court of Appeals for the Third Circuit issued a deeply confused opinion, ruling that Section 230 does not apply to the automated system TikTok
uses to recommend content to users. The court reasoned that because online services have a First Amendment right to decide how to present their users’ speech, TikTok’s decisions to
recommend certain content reflect its own speech and thus Section 230’s protections do not apply.
We filed a friend-of-the-court brief in support of TikTok’s
request for the full court to rehear the case, arguing that the decision was wrong on both the First Amendment and Section 230. We also pointed out how the ruling would have
far-reaching implications for users’ online speech. The court unfortunately denied TikTok’s rehearing request, and we are waiting to see whether the service will ask the Supreme Court
to review the case.
In Neville v. Snap, Inc., a California trial court refused to apply Section 230 in a lawsuit that claims basic features of the service, such as disappearing messages,
“Stories,” and the ability to befriend mutual acquaintances, amounted to defectively designed products. The trial court’s ruling departs from a long line of other court decisions that
ruled that these claims essentially try to plead around Section 230 by claiming that the features are the problem, rather than the illegal content that users created with a service’s
features.
We filed a friend-of-the-court brief in support of Snap’s
effort to get a California appellate court to overturn the trial court’s decision, arguing that the ruling threatens the ability for all internet users to rely on basic features of a
given service. If a platform faces liability for a feature that some might misuse to cause harm, the platform is unlikely to offer that feature to users, even though the majority of people use the feature for legal and expressive purposes. Unfortunately, the appellate court denied Snap’s petition in December, meaning the case continues before
the trial court.
EFF supports effort to empower users to customize their online experiences
While lawmakers and courts are often focused on Section 230’s protections for online services, relatively little attention has been paid to another provision in the law that protects
those who make tools that allow users to customize their experiences online. Yet Congress included this protection precisely because it wanted to encourage the development of software
that people can use to filter out certain content they’d rather not see or otherwise change how they interact with others online.
That is precisely the goal of a tool being developed by Ethan Zuckerman, a professor at
the University of Massachusetts Amherst, known as Unfollow Everything 2.0. The browser extension would allow Facebook users to automate their ability to unfollow friends, groups, or
pages, thereby limiting the content they see in their News Feed.
Zuckerman filed a lawsuit against Facebook seeking a court ruling that Unfollow Everything 2.0 was immune from legal claims from Facebook under Section 230(c)(2)(B). EFF filed a
friend-of-the-court brief in support, arguing
that Section 230’s user-empowerment tool immunity is unique and incentivizes the development of beneficial tools for users, including traditional content filtering, tailoring content
on social media to a user’s preferences, and blocking unwanted digital trackers to protect a user’s privacy.
The district court unfortunately dismissed the case, but its ruling did not reach the merits of whether Section 230 protected Unfollow Everything 2.0. The court gave
Zuckerman an opportunity to re-file the case, and we will continue to support his efforts to build user-empowering tools.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in
2024.
Defending Encryption in the U.S. and Abroad: 2024 in Review
(Mon, 23 Dec 2024)
EFF supporters get that strong encryption is tied to one of our most basic rights: the right to have a private conversation. In the digital world, privacy is impossible without
strong encryption.
That’s why we’ve always got an eye out for attacks on encryption. This year, we pushed back—successfully—against anti-encryption laws proposed in the U.S., the U.K. and the E.U.
And we had a stark reminder of just how dangerous backdoor access to our communications can be.
U.S. Bills Pushing Mass File-Scanning Fail To Advance
The U.S. Senate’s EARN IT Bill is a wrongheaded proposal that would push companies away from using encryption and towards scanning our messages and photos. There’s no reason to
enact such a proposal, which technical experts agree would turn our phones into bugs in our
pockets.
We were disappointed when EARN IT was voted out of committee last year, even though several senators did make clear they wanted to see additional changes before they would support the bill. Since then, however, the bill has gone nowhere. That’s because so many
people, including more than 100,000 EFF supporters, have voiced their opposition.
People increasingly understand that encryption is vital to our security and privacy. And when politicians demand that tech companies install dangerous scanning software whether
users like it or not, it’s clear to us all that they are attacking encryption, no matter how much obfuscation takes place.
EFF has long encouraged companies to adopt policies that support encryption, privacy and security by default. When companies do the right thing, EFF supporters will side with
them. EFF and other privacy advocates pushed Meta for years to make end-to-end encryption the default option in Messenger. When Meta implemented the change, they were sued by Nevada’s
Attorney General. EFF filed a brief in that case
arguing that Meta should not be forced to make its systems less secure.
UK Backs Off Encryption-Breaking Language
In the U.K., we fought against the wrongheaded Online Safety Act, which included language that would have let the U.K. government strongarm companies away from using encryption.
After pressure from EFF supporters and
others, the U.K. government gave last-minute assurances
that the bill wouldn’t be applied to encrypted messages. The U.K. agency in charge of implementing the Online Safety Act, Ofcom, has now said that the Act will not apply to
end-to-end encrypted messages. That’s an important distinction, and we
have urged Ofcom to make that even more clear in its written guidance.
EU Residents Do Not Want “Chat Control”
Some E.U. politicians have sought to advance a message-scanning bill that was even more extreme than the U.S. anti-encryption bills. We’re glad to say the EU proposal, which has
been dubbed “Chat Control” by its opponents, has also been stalled because of strong opposition.
Even though the European Parliament last year adopted a
compromise proposal that would protect our rights to encrypted communications, a few key member states at the EU Council spent much of 2024 pushing forward the old,
privacy-smashing version of Chat Control. But they haven’t advanced. In a public hearing earlier this month, 10 EU member states, including Germany and Poland, made clear they would
not vote for this proposal.
Courts in the E.U., like the public at large, increasingly recognize that private online communication is a human right, and that the encryption required to facilitate it cannot be taken away. The European Court of Human Rights recognized this in a milestone judgment earlier this year, Podchasov v. Russia, which specifically held that weakening encryption put the human rights of all internet users at risk.
A Powerful Reminder on Backdoors
All three of the above proposals are based on a flawed idea: that it’s possible to give some form of special access to people’s private data that will never be exploited by a bad actor. But that’s never been true – there is no backdoor that works only for the “good guys.”
In October, the U.S. public learned about a major breach of telecom systems stemming from Salt Typhoon, a sophisticated
Chinese-government backed hacking group. This hack infiltrated the same systems that major ISPs like Verizon, AT&T and Lumen Technologies had set up for U.S. law enforcement and
intelligence agencies to get “lawful access” to user data. It’s still unknown how extensive the damage is from this hack, whose targets included people under surveillance by U.S. agencies but went far beyond that.
If there’s any upside to a terrible breach like Salt Typhoon, it’s that it is waking some officials up to the fact that encryption is vital to both individual and national security. Earlier this month, a top U.S. cybersecurity chief said “encryption is your friend,” a welcome break from the messaging we at EFF have seen from officials over the years.
Unfortunately, other agencies, including the FBI, continue to push the idea that strong
encryption can be coupled with easy access by law enforcement.
Whatever happens, EFF will continue to stand up for our right to use encryption to have secure and private online communications.
This article is part of our Year in Review series. Read other articles about the fight for digital rights
in 2024.
>> read more
2024 Year in Review
(Mon, 23 Dec 2024)
It is our end-of-year tradition at EFF to look back at the last 12 months of digital rights. This year, the number and diversity of our reflections attest
that 2024 was a big year.
If there is something uniting all the disparate threads of work EFF has done this year, it is this: that law and policy should be careful, precise,
practical, and technologically neutral. We do not care if a cop is using a glass pressed against your door or the most advanced microphone: they
need a warrant.
For example, much of the public discourse this year was taken up by generative AI. This issue seemed to be a Rorschach test for everyone’s anxieties about technology, be they about privacy, replacement of workers, surveillance, or intellectual property. Ultimately, it matters little what the specific technology is: whenever technology is being used against our rights, EFF will oppose that use. It’s a future-proof way of protecting us. If we have privacy protections, labor protections, and protections against government invasions, then it does not matter what technology takes over the public imagination; we will have recourse against its harms.
But AI was only one of the issues we took on this past year. We’ve worked on ensuring that the EU’s new rules regarding large online platforms respect human
rights. We’ve filed countless briefs in support of free expression online and represented plaintiffs in cases where bad actors have sought to silence them, including citizen journalists who were targeted for posting clips of city council
meetings online.
With your help, we have let the United States Congress know that its citizens are for protecting the free press and against laws that would
cut kids off from vital sources of information. We’ve spoken to legislators, reporters, and the public to make sure everyone is informed about the benefits and dangers of new
technologies, new proposed laws, and legal precedent.
Even all of that does not capture everything we did this year. And we did not—indeed, we cannot—do it without you. Your support keeps the lights on and
ensures we are not speaking just for EFF as an organization but for our thousands of tireless members. Thank you, as always.
We will update this page with new stories about digital rights in 2024 every day between now and the new year.
Defending Encryption in the U.S. and Abroad
EFF in the Press
The U.S. Supreme Court Continues its Foray into Free Speech and Tech
The Atlas of Surveillance Expands Its Data on Police Surveillance Technology
EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It
We Stood Up for Access to the Law and Congress Listened
Police Surveillance in San Francisco
Fighting For Progress On Patents
Celebrating Digital Freedom with EFF Supporters
Surveillance Self-Defense
EU Tech Regulation—Good Intentions, Unclear Consequences
>> read more
EFF Tells Appeals Court To Keep Copyright’s Fair Use Rules Broad And Flexible
(Sat, 21 Dec 2024)
It’s critical that copyright be balanced with limitations that support users’ rights, and perhaps no limitation is more important than fair use. Critics, humorists, artists, and activists all must have rights to re-use
and re-purpose source material, even when it’s copyrighted.
Yesterday, EFF weighed in on another case that could shape the future of our fair use rights. In Sedlik v. Von Drachenberg, a Los Angeles tattoo artist created a tattoo
based on a well-known photograph of Miles Davis taken by photographer Jeffrey Sedlik. A jury found that Von Drachenberg, the tattoo artist, did not infringe the photographer’s
copyright because her version was different from the photo; it didn’t meet the legal threshold of “substantially similar.” After the trial, the judge in the case considered other arguments brought by Sedlik and upheld the jury’s findings.
On appeal, Sedlik has made arguments that, if upheld, could narrow fair use rights for everyone. The appeal brief suggests that only secondary users who make “targeted” use of a
copyrighted work have strong fair use defenses, relying on an incorrect reading of the Supreme Court’s decision in Andy Warhol Foundation v. Goldsmith.
Such a reading would upend decades of Supreme Court precedent that makes it clear that “targeted” fair uses don’t get any special treatment as opposed to “untargeted” uses. As
made clear in Warhol, the copying done by fair users must simply be “reasonably necessary” to achieve a new purpose. The principle of protecting new
artistic expressions and new innovations is what led the Supreme Court to protect video cassette recording as fair use in 1984. It also contributed to the 2021 decision in Oracle v.
Google, which held that Google’s copying of computer programming conventions created for desktop computers, in order to make it easier to design for modern
smartphones, was a type of fair use.
Sedlik argues that if a secondary user could have chosen another work, this means they did not “target” the original work, and thus the user should have a lessened fair use
case. But that has never been the rule. As the Supreme Court explained, Warhol could have created art about a product other than Campbell’s Soup; but his choice to copy the famous
Campbell’s logo was fully justified because it was “well known to the public, designed to be reproduced, and a symbol of an everyday item for mass consumption.”
Fair users always select among various alternatives, for both aesthetic and practical reasons. A film professor might know of several films that expertly demonstrate a
technique, but will inevitably choose just one to show in class. A news program alerting viewers to developing events may have access to many recordings of the event from different
sources, but will choose just one, or a few, based on editorial judgments. Software developers must make decisions about which existing software to analyze or to interoperate with in
order to build on existing technology.
The idea of penalizing these non-“targeted” fair uses would lead to absurd results, and we urge the 9th Circuit to reject this argument.
Finally, Sedlik also argues that the tattoo artist’s social media posts are necessarily “commercial” acts, which would push the tattoo art further away from fair use. Artists’
use of social media to document their processes and work has become ubiquitous, and such an expansive view of commerciality would render the concept meaningless. That’s why multiple
appellate courts have already rejected such a view; the 9th Circuit should do so as well.
In order for innovation and free expression to flourish in the digital age, fair use must remain a flexible rule that allows for diverse purposes and uses.
Further Reading:
EFF Amicus Brief in Sedlik v. Von Drachenberg
>> read more
Ninth Circuit Gets It: Interoperability Isn’t an Automatic First Step to Liability
(Fri, 20 Dec 2024)
A federal appeals court just gave software developers, and users, an early holiday present, holding that software updates aren’t necessarily “derivative,” for purposes of copyright law, just because they are designed to interoperate with the software they update.
This sounds kind of obscure, so let’s cut through the legalese. Lots of developers build software designed to interoperate with preexisting works. This kind
of interoperability is crucial to innovation, particularly in a world where a small number of companies control so many essential tools and platforms. If users want to be able to
repair, improve, and secure their devices, they must be able to rely on third parties to help. Trouble is, Big Tech companies want to be able to control (and charge for) every
possible use of the devices and software they “sell” you – and they won’t hesitate to use the law to enforce that control.
Courts shouldn’t assist, but unfortunately a federal district court did just
that in the latest iteration of Oracle v. Rimini. Rimini provides
support to improve the use and security of Oracle products, so customers don’t have to depend entirely on Oracle itself. Oracle doesn’t want this kind of competition, so it sued
Rimini for copyright infringement, arguing that a software update Rimini developed was a “derivative work” because it was intended to interoperate with Oracle's software, even though
the update didn’t use any of Oracle’s copyrightable code. Derivative works are typically things like a movie based on a novel, or a translation of that novel. Here, the only
“derivative” aspect was that Rimini’s code was designed to interact with Oracle’s code. Unfortunately, the district court initially sided with Oracle, setting a dangerous precedent. If a work is derivative, it may infringe the copyright in the preexisting work
from which it, well, derives. For decades, software developers have relied, correctly, on the settled view that a work is not derivative under copyright law unless it is substantially
similar to a preexisting work in both ideas and expression. Thanks to that rule, software developers can build innovative new tools that interact with preexisting works, including
tools that improve privacy and security, without fear that the companies that hold rights in those preexisting works would have an automatic copyright claim to those
innovations.
Rimini appealed to the Ninth Circuit, on multiple grounds. EFF, along with a diverse group of stakeholders representing consumers, small businesses, software
developers, security researchers, and the independent repair community, filed an amicus brief in support explaining that the district court ruling on interoperability was not just bad policy, but also bad
law.
The Ninth Circuit agreed:
In effect, the district court adopted an “interoperability” test for derivative works—if a product can only interoperate with a preexisting copyrighted
work, then it must be derivative. But neither the text of the Copyright Act nor our precedent supports this interoperability test for derivative works.
The court goes on to give a primer on the legal definition of derivative work, but the key point is this: a work is only derivative if it
“substantially incorporates the other work.”
Copyright already reaches far too broadly, giving rightsholders extraordinary power over how we use everything from music to phones to televisions. This
holiday season, we’re raising a glass to the judges who sensibly reined that power in.
>> read more
Customs & Border Protection Fails Baseline Privacy Requirements for Surveillance Technology
(Fri, 20 Dec 2024)
U.S. Customs and Border Protection (CBP) has failed to address six out of six main privacy protections for three of its border surveillance programs—surveillance towers, aerostats,
and unattended ground sensors—according to a new assessment by the Government Accountability Office (GAO).
In the report, GAO compared the policies for these technologies against six of the key Fair Information Practice Principles that agencies are supposed to use when evaluating systems
and processes that may impact privacy, as dictated by both Office of Management and Budget guidance and the Department of Homeland Security's own rules.
These include:
Data collection. "DHS should collect only PII [Personally Identifiable Information] that is directly relevant and necessary to accomplish the specified
purpose(s)."
Purpose specification. "DHS should specifically articulate the purpose(s) for which the PII is intended to be used."
Information sharing. "Sharing PII outside the department should be for a purpose compatible with the purpose for which the information was collected."
Data security. "DHS should protect PII through appropriate security safeguards against risks such as loss, unauthorized access or use, destruction, modification,
or unintended or inappropriate disclosure."
Data retention. "DHS should only retain PII for as long as is necessary to fulfill the specified purpose(s)."
Accountability. "DHS should be accountable for complying with these principles, including by auditing the actual use of PII to demonstrate compliance with these
principles and all applicable privacy protection requirements."
These baseline privacy elements for the three border surveillance technologies were not addressed in any "technology policies, standard operating procedures, directives, or other
documents that direct a user in how they are to use a Technology," according to GAO's review.
CBP operates hundreds of surveillance towers along both the northern and
southern borders, some of which are capable of capturing video more than seven miles away. The agency has six large aerostats (essentially tethered blimps) that use radar along the
southern border, with others stationed in the Florida Keys and Puerto Rico. The agency also operates a series of smaller aerostats that stream video in the Rio Grande Valley of Texas,
with the newest one installed this fall in southeastern New Mexico. And the report notes deficiencies with CBP's linear ground detection system, a network of seismic sensors and
cameras that are triggered by movement or footsteps.
The GAO report underlines EFF's concerns that the privacy of people who live and work in the borderlands is violated when federal agencies deploy militarized, high-tech programs to
confront unauthorized border crossings. The rights of border communities are too often treated as acceptable collateral damage in pursuit of border security.
CBP defended its practices by saying that it does, to some extent, address the Fair Information Practice Principles in its Privacy Impact
Assessments, documents written for public consumption. GAO rejected this claim, saying that these assessments are not adequate in instructing agency staff on how to protect
privacy when deploying the technologies and using the data that has been collected.
In its recommendations, the GAO calls on the CBP Commissioner to "require each detection, observation, and monitoring technology policy to address the privacy protections in the Fair
Information Practice Principles." But EFF calls on Congress to hold CBP to account and stop approving massive spending on border security technologies that the agency continues to
operate irresponsibly.
>> read more
The Breachies 2024: The Worst, Weirdest, Most Impactful Data Breaches of the Year
(Thu, 19 Dec 2024)
Every year, countless emails hit our inboxes telling us that our personal information was accessed, shared, or stolen in a data breach. In many cases, there is little we can do.
Most of us can assume that at least our phone numbers, emails, addresses, credit card numbers, and social security numbers are all available
somewhere on the internet.
But some of these data breaches are more noteworthy than others, because they include novel information about us, are the result of particularly noteworthy security flaws, or
are just so massive they’re impossible to ignore. For that reason, we are introducing the Breachies, a series of tongue-in-cheek “awards” for some of the most egregious data breaches
of the year.
If these companies practiced a privacy-first approach and focused on
data minimization, only collecting and storing what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But
instead, companies gobble up as much as they can, store it for as long as possible, and inevitably at some point someone decides to poke in and steal that data.
Once all that personal data is stolen, it can be used against the breach victims for identity theft and ransomware attacks, and to send unwanted spam. The risk of these attacks isn’t just a minor annoyance: research shows it can cause psychological injury, including anxiety,
depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to
monitor their credit reports, and to obtain identity theft prevention
services.
This year we’ve got some real stinkers, ranging from private health information to—you guessed it—credit cards and social security numbers.
The Winners
The Just Stop Using Tracking Tech Award: Kaiser Permanente
The Most Impactful Data Breach for ’90s Kids Award: Hot Topic
The Only Stalkers Allowed Award: mSpy
The I Didn’t Even Know You Had My Information Award: Evolve Bank
The We Told You So Award: AU10TIX
The Why We’re Still Stuck on Unique Passwords Award: Roku
The Listen, Security Researchers are Trying to Help Award: City of Columbus
The Have I Been Pwned? Award: Spoutible
The Reporting’s All Over the Place Award: National Public Data
The Biggest Health Breach We’ve Ever Seen Award: Change Health
The There’s No Such Thing As Backdoors for Only “Good Guys” Award: Salt Typhoon
Breach of the Year (of the Decade?): Snowflake
Tips to Protect Yourself
(Dis)Honorable Mentions
The Just Stop Using Tracking Tech Award: Kaiser Permanente
In one of the year's most preventable breaches, the healthcare company Kaiser Permanente exposed 13 million patients’ information via tracking code embedded in
its website and app. This tracking code transmitted potentially sensitive medical information to Google, Microsoft, and X (formerly known as Twitter). The exposed information included
patients’ names, terms they searched in Kaiser’s Health Encyclopedia, and how they navigated within and interacted with Kaiser’s website or app.
The most troubling aspect of this breach is that medical information was exposed not by a sophisticated hack, but through widely used tracking technologies that Kaiser
voluntarily placed on its website. Kaiser has since removed the problematic code, but tracking technologies are rampant across the internet and on other healthcare websites. A
2024 study found tracking technologies sharing information with third
parties on 96% of hospital websites. Websites usually use tracking technologies to serve targeted ads. But these same technologies give advertisers, data brokers, and law enforcement easy access to details about
your online activity.
While individuals can protect themselves from online tracking by using tools like EFF’s Privacy Badger, we need
legislative action to make online privacy the norm for everyone. EFF advocates for a ban
on online behavioral advertising to address the primary incentive for companies to use invasive tracking technology. Otherwise, we’ll continue to see companies
voluntarily sharing your personal data, then apologizing when thieves inevitably exploit a vulnerability in these tracking systems.
Head back to the table of contents.
The Most Impactful Data Breach for ’90s Kids Award: Hot Topic
If you were in middle or high school any time in the ’90s you probably have strong memories of Hot Topic. Baby goths and young punk rockers alike would go to
the mall, get an Orange Julius and greasy slice of Sbarro pizza, then walk over to Hot Topic to pick up edgy t-shirts and overpriced bondage pants (all the while debating who was the
biggest poser and which bands were sellouts, of course). Because of the fundamental position Hot Topic occupies in our generation’s personal mythology, this data breach hits extra
hard.
In November 2024, Have I Been Pwned reported that Hot Topic and its subsidiary Box Lunch suffered a data
breach of nearly 57 million data records. A hacker using the alias “Satanic” claimed responsibility and posted a 730 GB database on a hacker forum with a sale price
of $20,000. The compromised data about approximately 54 million customers reportedly includes: names, email addresses, physical addresses, phone numbers, purchase history, birth
dates, and partial credit card details. Research by Hudson Rock indicates
that the data was compromised using info stealer malware installed on a Hot Topic employee’s work computer. “Satanic” claims that the original infection stems from
the Snowflake data breach (another Breachie winner); though that hasn’t been confirmed because Hot Topic has still not notified customers, nor responded to
our request for comment.
Though data breaches of this scale are common, it still breaks our little goth hearts, and we’d prefer stores did a better job of securing our data. Worse, Hot Topic still
hasn’t publicly acknowledged this breach, despite numerous news reports. Perhaps Hot Topic was the real sellout all
along.
Head back to the table of contents.
The Only Stalkers Allowed Award: mSpy
mSpy, a commercially-available mobile stalkerware
app owned by the Ukraine-based company Brainstack, was subject to a data
breach earlier this year. More than a decade’s worth of information about the app’s customers was stolen, as well as the real names and email addresses of Brainstack
employees.
The defining feature of stalkerware apps is their ability to operate covertly and trick users into believing that they are not
being monitored. But in reality, applications like mSpy allow whoever planted the stalkerware to remotely view the contents of the victim’s device in real time. These tools are often
used to intimidate, harass, and harm victims, including by stalkers and abusive (ex) partners.
Given the highly sensitive data collected by companies like mSpy and the harm to targets when their data gets revealed, this data breach is another example of why stalkerware
must be stopped.
Head back to the table of contents.
The I Didn’t Even Know You Had My Information Award: Evolve Bank
Okay, are we the only ones who hadn’t heard of Evolve Bank? It was reported in May that Evolve Bank experienced a data breach—though it actually happened all the way back in February.
You may be thinking, “why does this breach matter if I’ve never heard of Evolve Bank before?” That’s what we thought too!
But here’s the thing: this attack affected a bunch of companies you have heard of, like Affirm (the buy now, pay later service), Wise (the international money transfer service), and
Mercury Bank (a fintech company). So, a ton of services
use the bank, and you may have used one of those services. It’s been reported that 7.6 million Americans were affected by the
breach, with most of the data stolen being customer information, including social security numbers, account numbers, and dates of birth.
The small bright side? No customer funds were accessed during the breach. Evolve states that after the breach they are doing some basic things like resetting
user passwords and strengthening their security
infrastructure.
Head back to the table of contents.
The We Told You So Award: AU10TIX
AU10TIX is an “identity verification” company used by the likes of TikTok and X to confirm that users are who they claim to be. AU10TIX and companies like it collect and review
sensitive private documents such as driver’s license information before users can register for a site or access some content.
Unfortunately, there is growing political interest in mandating identity or age
verification before allowing people to access social media or adult material. EFF and others oppose these plans
because they threaten both speech and privacy. As we said in 2023, verification mandates would inevitably lead to more data
breaches, potentially exposing government IDs as well as information about the sites that a user visits.
Look no further than the AU10TIX
breach to see what we mean. According to a report by 404 Media in May, AU10TIX left login credentials exposed
online for more than a year, allowing access to very sensitive user data.
404 Media details how a researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their
identity documents.” This included “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as
images of those identity documents.
The AU10TIX breach did not seem to lead to exposure beyond what the researcher showed was possible. But AU10TIX and other companies must do a better job at locking down user
data. More importantly, politicians must not create new privacy dangers by requiring identity and age verification.
If age verification requirements become law, we’ll be handing a lot of our sensitive information over to companies like AU10TIX. This is the first We Told You
So Breachie award, but it likely won’t be the last.
Head back to the table of contents.
The Why We’re Still Stuck on Unique Passwords Award: Roku
In April, Roku announced not yet another new way to display more ads, but a data breach
(its second of the year) where 576,000
accounts were compromised using a “credential stuffing attack.” This is a common, relatively easy sort of automated attack where thieves use previously leaked username and password
combinations (from a past data breach of an unrelated company) to get into accounts on a different service. So if, say, your username and password were in the Comcast data breach in 2015, and you used the same username and password on Roku, the attacker might have been able to get into your account. Thankfully, fewer than 400 Roku accounts saw unauthorized purchases, and no payment information was
accessed.
But the ease of this sort of data breach is why it’s important to use unique passwords
everywhere. A password manager, including one
that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. After the Roku breach, the company turned on two-factor authentication for all accounts. This
way, even if someone did get access to your account password, they’d need that second code from another device; in Roku’s case, either your phone number or email address.
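To make the defense against credential stuffing concrete, here is a minimal sketch (our illustration, not part of the original article) of how a user or service can check whether a password already appears in known breach data, using Have I Been Pwned’s k-anonymity “range” endpoint. Only the first five characters of the password’s SHA-1 hash are sent; the URL and response format are assumed to match the public Pwned Passwords API at the time of writing, which requires no API key.

```python
# Check how many times a password appears in known breach corpora without
# ever sending the password itself (k-anonymity: only a 5-character hash
# prefix leaves your machine). Uses only the Python standard library.
import hashlib
import urllib.request

def times_seen_in_breaches(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "breach-check-example"},  # polite identifier
    )
    with urllib.request.urlopen(req) as resp:
        # Each response line looks like "HASH_SUFFIX:COUNT".
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    # A commonly reused password will return a large count, which is
    # exactly why credential stuffing against reused passwords works.
    print(times_seen_in_breaches("password123"))
```

A password that returns a nonzero count here is one that attackers already have in their credential-stuffing lists, which is the practical argument for a unique password per site.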
Head back to the table of contents.
The Listen, Security Researchers are Trying to Help Award: City of Columbus
In August, the security researcher David Ross Jr. (also known as Connor Goodwolf) discovered that a ransomware attack against the City of Columbus, Ohio, was much more serious
than city officials initially revealed. After the researcher informed the press and provided proof, the city accused him of violating multiple laws and obtained a gag order
against him.
Rather than silencing the researcher, city officials should have celebrated him for helping victims understand the true extent of the breach. EFF and security researchers know the value of this work. And EFF has a team of lawyers who help protect researchers and their work.
Here is how not to deal with a security researcher: In July, Columbus learned it had suffered a ransomware attack. A group called Rhysida took responsibility. The city did not pay
the ransom, and the group posted some of the stolen data online. The mayor announced the stolen data was “encrypted
or corrupted,” so most of it was unusable. Later, the researcher, David Ross, helped inform local news outlets that in fact the breach did include usable personal
information on residents. He also attempted to contact the city. Days later, the city offered free credit monitoring to
all of its residents and confirmed that its original announcement was inaccurate.
Unfortunately, the city also filed a lawsuit, and a judge signed a temporary restraining order preventing the researcher from
accessing, downloading, or disseminating the data. Later, the researcher agreed to a more limited injunction. The city eventually confirmed that the data
of hundreds of thousands of people was stolen in the ransomware attack, including
driver’s licenses, social security numbers, employee information, and the identities of juvenile victims, undercover police officers, and confidential informants.
Head back to the table of contents.
The Have I Been Pwned? Award: Spoutible
The Spoutible breach has layers—layers of “no
way!” that keep revealing more and more amazing little facts the deeper one digs.
It all started with a leaky API. On a per-user basis, it didn’t just return the sort of information you’d expect from a social media platform, but also the user’s email, IP
address, and phone number. No way! Why would you do that?
But hold on, it also includes a bcrypt hash of their password. No way! Why would you do that?!
Ah well, at least they offer two-factor authentication (2FA) to protect against password leakages, except… the API was also returning the secret used to generate the 2FA OTP as
well. No way! So, if someone had enabled 2FA it was immediately rendered useless by virtue of this field being visible to everyone.
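To show concretely why exposing the TOTP secret defeats two-factor authentication, here is a minimal sketch (our illustration, not Spoutible’s code or the article’s) of RFC 6238 code generation using only the Python standard library: anyone holding the base32 secret can compute the same six-digit codes as the account owner’s authenticator app. The secret shown is a made-up example value.

```python
# Minimal RFC 6238 TOTP generation: HMAC-SHA1 over a 30-second time counter,
# then "dynamic truncation" (RFC 4226) down to a 6-digit code. Whoever knows
# the base32 secret can always produce the current valid code.
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_secret: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // step                      # current 30s window
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example (made-up) secret; leaking a value like this via an API renders
# the second factor worthless, since attackers can run this same function.
print(totp("JBSWY3DPEHPK3PXP"))
```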
However, the pièce de résistance comes with the next field in the API: the “em_code.” You know how when you do a password reset you get emailed a secret code that proves you
control the address and can change the password? That was the code! No way!
-EFF thanks guest author Troy Hunt for this contribution to the Breachies.
Head back to the table of contents.
The Reporting’s All Over the Place Award: National Public Data
In January 2024, there was almost no chance you’d have heard of a company called National Public Data. But starting in April, then ramping up in June, stories revealed a breach affecting the background
checking data broker that included names, phone numbers, addresses, and social security numbers of at least 300 million people. By August, the reported number ballooned to 2.9 billion people. In
October, National Public Data filed for bankruptcy, leaving behind
nothing but a breach notification on its website.
But what exactly was stolen? The evolving news coverage has raised more questions than it has answered. Too bad National Public Data has failed to tell the public more about the
data that the company failed to secure.
One analysis found that some of the dataset was inaccurate, with a
number of duplicates; also, while there were 137 million email addresses, they weren’t linked to social security numbers. Another analysis had similar results. As for social security numbers,
there were likely somewhere around 272 million in the
dataset. The data was so jumbled that it had names matched to the wrong email or address, and included a large chunk of people who were deceased. Oh, and that 2.9 billion number? That
was the number of rows of data in
the dataset, not the number of individuals. That 2.9 billion people number appeared to originate from a complaint
filed in Florida.
Phew, time to check in with Count von Count on this one, then.
How many people were truly affected? It’s difficult to say for certain. The only thing we learned for sure is that starting a data broker company appears to be incredibly easy,
as NPD was owned by a retired sheriff’s deputy and a small film
studio and didn’t seem to be a large operation. While this data broker got caught with more leaks than the Titanic, hundreds of others are still out there collecting
and hoarding information, and failing to watch out for the next iceberg.
Head back to the table of contents.
The Biggest Health Breach We’ve Ever Seen Award: Change Health
In February, a ransomware attack on Change Healthcare exposed the private health information of over 100 million people. The company, which
processes 40% of all U.S. health insurance claims, was forced
offline for nearly a month. As a result, healthcare practices nationwide struggled to stay operational and patients experienced limits on access to care. Meanwhile, the stolen data
poses long-term risks for identity theft and insurance fraud for millions of Americans—it includes patients’ personal identifiers, health diagnoses, medications, insurance details,
financial information, and government identity documents.
The misuse of medical records can be harder to detect and correct than regular financial fraud or identity theft. The FTC recommends that people at risk of medical identity theft watch out for suspicious
medical bills or debt collection notices.
The hack highlights the need for stronger cybersecurity in the healthcare industry, which is increasingly targeted by cyberattacks. The Change
Healthcare hackers were able to access a critical system because it lacked two-factor authentication, a basic form
of security.
To make matters worse, Change Healthcare’s recent merger with Optum, which antitrust regulators tried and failed to
block, even further centralized vast amounts of sensitive information. Many healthcare providers blamed corporate consolidation for the scale of disruption.
As the former president of the American Medical Association put
it, “When we have one option, then the hackers have one big target… if they bring that down, they can grind U.S. health care to a halt.” Privacy and competition are
related values, and data breach and monopoly are connected problems.
Head back to the table of contents.
The There’s No Such Thing As Backdoors for Only “Good Guys” Award: Salt Typhoon
When companies build backdoors into their services to provide law enforcement access to user data, these backdoors can be exploited by thieves, foreign governments, and other
adversaries. There are no methods of access that are magically only accessible to “good guys.” No
security breach has demonstrated that more clearly than this year’s attack by Salt Typhoon, a Chinese
government-backed hacking group.
Internet service providers generally have special systems to provide law enforcement and intelligence agencies access to user data. They do that to comply with laws like
CALEA, which require telecom companies to provide a means for “lawful
intercepts”—in other words, wiretaps.
The Salt Typhoon group was able to access the powerful tools that in theory have been reserved for U.S. government agencies. The hackers infiltrated the nation’s biggest telecom
networks, including Verizon, AT&T, and others, and were able to target their surveillance based on U.S. law enforcement wiretap requests. Breaches elsewhere in the system let them
listen in on calls in real time. People under U.S. surveillance were clearly some of the targets, but the hackers also targeted both 2024 presidential campaigns and officials in the
State Department.
While fewer than 150 people have been identified as targets so far, the number of people who were called or texted by those targets runs into the “millions,” according to a Senator who has been briefed on the hack. What’s
more, the Salt Typhoon hackers still have not been rooted out of the networks they infiltrated.
The idea that only authorized government agencies would use such backdoor access tools has always been flawed. With sophisticated state-sponsored hacking groups operating across
the globe, a data breach like Salt Typhoon was only a matter of time.
Head back to the table of contents.
The Snowballing Breach of the Year Award: Snowflake
Thieves compromised the corporate customer accounts for U.S. cloud analytics provider Snowflake. The corporate customers included AT&T, Ticketmaster, Santander, Neiman Marcus, and many others:
165 in total.
This led to a massive breach of billions of data records
for individuals using these companies. A combination of infostealer malware infections on
non-Snowflake machines as well as weak security
used to protect the affected accounts allowed the hackers to gain access and extort the customers. At the time of the hack, April-July of this year, Snowflake was not
requiring two-factor authentication, an
account security measure which could have provided protection against the attacks. A number
of arrests were made after security researchers
uncovered the identities of several of the threat actors.
But what does Snowflake do? According to their
website, Snowflake “is a cloud-based data platform that provides data storage, processing, and analytic solutions.” Essentially, they store and index troves of
customer data for companies to look at. And the larger the amount of data stored, the bigger the target it becomes for malicious actors seeking leverage to extort those companies. The problem is that the data is about all of us. In the case of Snowflake customer AT&T, this includes billions of call and text logs of its customers, putting individuals’ sensitive data at risk of exposure. A privacy-first approach would employ techniques such as data minimization: either not collecting that data in the first place, or shortening the period for which it is retained. Otherwise it just sits there waiting for the next breach.
Head back to the table of contents.
Tips to Protect Yourself
Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no
reason for despair. In fact, it’s a good reason to take action.
There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):
Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you (see the sketch after these tips for how that kind of random generation works). When you have a unique password for every website, a data breach of one site won’t cascade to others.
Use two-factor authentication when a service offers it. Two-factor
authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds
another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
Freeze your credit. Many experts
recommend freezing your credit with the major
credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening up a new line
of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. And if you have kids, you can freeze their credit too; it might sound absurd considering they can’t even open bank accounts, but it is possible.
Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud.
The Federal Trade Commission recommends watching for strange bills,
letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money.
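As mentioned in the first tip above, password managers generate long random passwords for you. Here is a minimal sketch of the basic idea (an illustration we have added, not part of the article), using Python’s secrets module, which draws from a cryptographically secure random source.

```python
# A minimal illustration of how a password manager-style generator creates
# a long, unique, random password for each site.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site; a real password manager also stores them.
print(generate_password())  # prints a different random password every run
```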
Head back to the table of contents.
(Dis)Honorable Mentions
By one report, 2023 saw over 3,000 data breaches. The figure so far
this year is looking slightly smaller, with around 2,200
reported through the end of the third quarter. But 2,200 and counting is little comfort.
We did not investigate every one of these 2,000-plus data breaches, but we looked at a lot of them, including the news coverage and the data
breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachie Award to every company that was breached this year.
Still, here are some (dis)honorable mentions:
ADT, Advance Auto Parts, AT&T, AT&T (again), Avis, Casio, Cencora, Comcast, Dell, El Salvador, Fidelity, FilterBaby, Fortinet, Framework, Golden Corral, Greylock, Halliburton, HealthEquity, Heritage Foundation, HMG Healthcare, Internet
Archive, LA County Department of Mental
Health, MediSecure, Mobile Guardian, MoneyGram, muah.ai, Ohio Lottery, Omni Hotels, Oregon Zoo, Orrick, Herrington & Sutcliffe, Panda Restaurants, Panera, Patelco Credit Union, Patriot Mobile, pcTattletale, Perry Johnson &
Associates, Roll20, Santander, Spytech, Synnovis, TEG, Ticketmaster, Twilio, USPS, Verizon, VF Corp, WebTPA.
What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to
pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen
(and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that
includes a private right of action.
>> read more
Saving the Internet in Europe: Defending Free Expression
(Thu, 19 Dec 2024)
This post is part two in a series of posts about EFF’s work in Europe. Read about how and why we work in Europe here.
EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners
of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and
technology to the European fight for digital rights.
In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe
can affect digital rights across the globe.
EFF’s approach to free speech
The global spread of Internet access and digital services promised a new era of freedom of expression, where everyone could share and access information,
speak out and find an audience without relying on gatekeepers, and make, tinker with, and share creative works.
Everyone should have the right to express themselves and share ideas freely. Various European countries have experienced totalitarian regimes and extensive
censorship in the past century, and as a result, many Europeans still place special emphasis on privacy and freedom of expression. These values are enshrined in the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union – essential legal frameworks for the protection of fundamental rights.
Today, as so much of our speech is facilitated by online platforms, there is an expectation that they, too, respect fundamental rights. Through their terms of service, community guidelines, or house rules, platforms get to unilaterally define what speech is permissible on their services. The enforcement of these rules can be arbitrary, opaque, and selective, resulting in the suppression of contentious ideas and minority voices.
That’s why EFF has been fighting against government threats to free expression and working to hold tech companies accountable for grounding their content moderation practices in robust human rights frameworks. That entails setting out clear rules and standards for internal processes such as notifications and explanations to users when terms of service are enforced or changed. In the European Union, we have worked for decades to ensure that laws governing online platforms respect fundamental rights, advocated against censorship, and spoken up on behalf of human rights defenders.
What’s the Digital Services Act and why do we keep talking about it?
For the past years, we have been especially busy addressing human rights concerns with the drafting and
implementation of the Digital Services Act (DSA), the new law setting out the rules for online services in the European Union. The DSA covers most online services, ranging from online marketplaces like Amazon and search engines like Google to social networks like Meta and app stores. However, not all of its rules apply to all services – instead, the DSA follows a
risk-based approach that puts the most obligations on the largest services that have the highest impact on users. All service providers must ensure that their terms of service respect fundamental rights, that users can get in touch with them easily, and that they report on their content moderation activities. Additional rules apply to online platforms: they must give users detailed information about content moderation decisions and the right to appeal, and they face additional transparency obligations. They also have to provide some basic transparency into the functioning of their recommender systems and are not allowed to target underage users with personalized ads. The most stringent obligations apply to the largest online platforms and search engines, which have more than 45 million users in the EU. These companies, which include X, TikTok, Amazon, Google Search and Play, YouTube, and several porn platforms, must proactively assess and mitigate systemic risks related to the design, functioning, and use of their services. These include risks to the exercise of
fundamental rights, elections, public safety, civic discourse, the protection of minors and public health. This novel approach might have merit but is also cause for concern: Systemic
risks are barely defined and could lead to restrictions of lawful speech, and measures to address these risks, for example age verification, have negative consequences themselves,
like undermining users’ privacy and access to information.
The DSA is an important piece of legislation to advance users’ rights and hold companies accountable, but it also comes with significant risks. We are
concerned about the DSA’s requirement that service providers proactively share user data with law enforcement authorities and the powers it gives government agencies to request such
data. We caution against the misuse of the DSA’s emergency mechanism and the expansion of the DSA’s systemic risks governance approach as a catch-all tool to crack down on undesired
but lawful speech. Similarly, the appointment of trusted flaggers could lead to pressure on platforms to over-remove content, especially as the DSA does not prevent government authorities from becoming trusted flaggers.
EFF has been advocating for lawmakers to take a measured approach that doesn’t undermine the freedom of expression. Even though we have been successful in
avoiding some of the most harmful ideas, concerns remain, especially with regard to the politicization of the enforcement of the DSA and potential over-enforcement. That’s why we
will keep a close eye on the enforcement of the DSA, ready to use all means at our disposal to push back against over-enforcement and to defend user rights.
European laws often implicate users globally. To give non-European users a voice in Brussels, we have been facilitating the DSA Human Rights Alliance. The DSA HR Alliance is
formed around the conviction that the DSA must adopt a human rights-based approach to platform governance and consider its global impact. We will continue building on and expanding
the Alliance to ensure that the enforcement of the DSA doesn’t lead to unintended negative consequences and respects users’ rights everywhere in the world.
The UK’s Platform Regulation Legislation
In parallel to the Digital Services Act, the UK has passed its own platform regulation, the Online Safety Act (OSA). Seeking to make the UK “the safest place in the world to be
online,” the OSA will lead to a more censored, locked-down internet for British users. The Act empowers the UK government to undermine not just
the privacy and security of UK residents, but internet users worldwide.
Online platforms will be expected to remove content that the UK government views as inappropriate for children. If they don’t, they’ll face heavy penalties.
The problem is, in the UK as in the U.S. and elsewhere, people disagree sharply about what type of content is harmful for kids. Putting that decision in the hands of government
regulators will lead to politicized censorship decisions.
The OSA will also lead to harmful age-verification systems. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids
invariably lead to adults losing their rights to private speech, and anonymous speech, which is sometimes necessary.
As Ofcom starts to release its regulations and guidelines, we’re watching how the regulator plans to avoid these human rights pitfalls, and we will continue to push back wherever its efforts fall short of protecting speech and privacy online.
Media freedom and plurality for everyone
Another issue that we have been championing is media freedom. Similar to the DSA, the EU recently overhauled its rules for media services: the
European Media Freedom Act
(EMFA). In this context, we pushed back against rules that would have forced online platforms like YouTube, X, or Instagram to carry any content
by media outlets. Intended to bolster media pluralism, making platforms host content by force has severe consequences: Millions of EU users can no longer trust that online platforms
will address content violating community standards. Besides, there is no easy way to differentiate between legitimate media providers and those known for spreading disinformation, such as government-affiliated Russian sites active in the EU. Taking away platforms' ability to restrict or remove such content could undermine rather than foster public discourse.
The final version of EMFA introduced a number of important safeguards but is still a bad deal for users: We will closely follow its implementation to ensure
that the new rules actually foster media freedom and plurality, inspire trust in the media and limit the use of spyware against journalists.
Exposing censorship and defending those who defend us
Covering regulation is just a small part of what we do. Over the past years, we have again and again revealed how companies’ broad-stroked content
moderation practices censor
users in the name of fighting terrorism, and restrict the voices of LGBTQ folks, sex workers, and underrepresented groups.
Going into 2025, we will continue to shed light on these restrictions of speech and will pay particular attention to the censorship of Palestinian
voices, which has been rampant. We will continue collaborating with our allies in the Digital Intimacy Coalition
to share how restrictive speech policies often disproportionally affect sex workers. We will also continue to closely analyze the impact of the increasing
and changing use of artificial intelligence in content moderation.
Finally, a crucial part of our work in Europe has been speaking out for those who cannot: human rights defenders facing imprisonment and censorship.
Much work remains to be done. We have put forward comprehensive policy
recommendations to European lawmakers and we will continue fighting for an internet where everyone can make their voice heard.
In the next posts in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice and respect
fundamental rights.
>> read more
We're Creating a Better Future for the Internet
(Thu, 19 Dec 2024)
In the early years of the internet, website administrators had to contend with a burdensome and expensive process to deploy SSL certificates. But today, hundreds of thousands of
people have used EFF’s free Certbot tool to spread that sweet HTTPS across the web. Now almost all internet traffic is encrypted, and everyone gets a basic level of security. Small
actions mean big change when we act together. Will you support important work like this and give EFF a Year-End Challenge boost?Give Today
Unlock Bonus Grants Before 2025
Make a donation of ANY SIZE by December 31 and you’ll help us unlock bonus grants! Every supporter gets us closer to a series of seven Year-End Challenge milestones
set by EFF’s board of directors. These grants become larger as the number of online rights supporters grows. Everyone counts! See our
progress.
Digital Rights: Under Construction
Since 1990, EFF has defended your digital privacy and free speech rights in the courts, through activism, and by making open source privacy tools. This team is committed to watching
out for the users no matter what directions technological innovation may take us. And that’s funded entirely by donations.
Show your support for digital rights with free EFF member gear.
With help from people like you, EFF has been able to help unravel legal and ethical questions surrounding the rise of AI; push the USPTO to withdraw harmful patent proposals; fight for the public's right to access police drone footage; and show why banning TikTok and passing laws like the Kids Online Safety Act (KOSA) will not achieve internet
safety.
As technology’s reach continues to expand, so do everyone’s concerns about harmful side effects. That’s where EFF’s ample experience in tech policy, the law, and human rights shines.
You can help us.
Donate to defend digital rights today and you’ll help us unlock bonus grants before the year ends.
Join EFF!
Proudly Member-Supported Since 1990
________________________
EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible
as allowed by law.
There’s No Copyright Exception to First Amendment Protections for Anonymous Speech
(Thu, 19 Dec 2024)
Some people just can’t take a hint. Today’s perfect example is a group of independent movie distributors that have repeatedly tried, and failed, to force
Reddit to give up the IP addresses of several users who posted about downloading movies.
The distributors claim they need this information to support their copyright claims against internet service provider Frontier Communications, because it might be evidence that Frontier wasn’t enforcing its repeat infringer policy and therefore couldn’t claim safe harbor protections under the Digital Millennium Copyright Act. Courts
have repeatedly refused to enforce these subpoenas, recognizing the distributors couldn’t pass the test the First Amendment requires
prior to unmasking anonymous speakers.
Here's the twist: after the magistrate judge in this case applied this standard and quashed the subpoena, the movie distributors sought review from the
district court judge assigned to the case. The second judge also denied discovery as unduly burdensome but, in a hearing on the matter, also said there was no First Amendment issue
because the users were talking about copyright infringement. In their subsequent appeal to the Ninth Circuit, the distributors invite the appellate court to endorse the judge’s
statement.
As we explain in an amicus brief supporting
Reddit, the court should refuse that invitation. Discussions about illegal activity clearly are protected
speech. Indeed, the Supreme Court recently affirmed that even “advocacy of illegal acts” is “within the First Amendment’s core.” In fact, protecting such speech is a central purpose
of the First Amendment because it ensures that people can robustly debate civil and criminal laws and advocate for change.
There is no reason to imagine that this bedrock principle doesn’t apply just because the speech concerns copyright infringement – especially where the speakers aren’t even defendants in the case, but independent third parties. And unmasking Does in copyright cases carries
particular risks given the long history of copyright claims being used as an excuse to take down lawful as well as infringing content online.
We’re glad to see Reddit fighting back against these improper subpoenas, and proud to stand with the company as it stands up for its users.
UK Politicians Join Organizations in Calling for Immediate Release of Alaa Abd El-Fattah
(Thu, 19 Dec 2024)
With the UK’s Prime Minister Keir Starmer and Foreign Secretary David Lammy having failed to secure the release of British-Egyptian blogger, coder, and activist Alaa Abd El-Fattah, UK politicians are calling for tougher measures to secure Alaa’s immediate return to the UK.
During a debate on detained British nationals abroad in early December, chairwoman of the Commons Foreign Affairs Committee Emily Thornberry asked the House of Commons why the UK has continued to organize industry delegations to Cairo while
“the Egyptian government have one of our citizens—Alaa Abd El-Fattah—wrongfully held in prison without consular access.”
In the same debate, Labour MP John McDonnell urged the introduction of a
“moratorium on any new trade agreements with Egypt until Alaa is free,” which was supported
by other politicians. Liberal Democrat MP Calum Miller also highlighted words from
Alaa, who told his mother during a recent prison visit that he had “hope in David Lammy, but I just can’t believe nothing is happening...Now I think either I will die
in here, or if my mother dies I will hold him to account.”
Alaa’s mother, mathematician Laila Soueif, has been on hunger strike for 79 days while she and
the rest of his family have worked to engage the British government in securing Alaa’s release. On December 12, she also started protesting daily outside the Foreign Office and
has since been joined by numerous MPs.
Support for Alaa has come from many directions. On December 6, 12 Nobel laureates wrote to Keir Starmer urging him to secure Alaa’s immediate release: “Not only because Alaa is a British citizen, but to reanimate the commitment to intellectual sanctuary that made Britain a home for bold thinkers and visionaries for centuries.” The pressure on Labour’s senior politicians has continued throughout the month, with more than 100 MPs and peers writing to David Lammy on December 15 demanding that Alaa be freed.
Alaa should have been
released on September 29, after serving his five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have
continued his imprisonment
in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to
recognize Alaa’s British citizenship.
David Lammy met with Alaa’s family in November and promised to take action. But the UK’s Prime Minister failed to raise the case at the G20 Summit in Brazil when he met with
Egypt’s President El-Sisi.
If you’re based in the UK, here are some actions you can take to support the calls for Alaa’s release:
Write to your MP (external link): https://freealaa.net/message-mp
Join Laila Soueif outside the Foreign Office daily between 10-11am
Share Alaa’s plight on social media using the hashtag #freealaa
The UK Prime Minister and Foreign Secretary’s inaction is unacceptable. Every second counts, and time is running out. The government must do everything it can to ensure Alaa’s
immediate and unconditional release.
What You Should Know When Joining Bluesky
(Wed, 18 Dec 2024)
Bluesky promises to rethink social media by focusing on openness and user control. But what does this actually mean for the millions of people joining the site?
November was a good month for alternatives to X. Many users hit their breaking point after two years of controversial changes turned Twitter into X, a restrictive hub filled with misinformation and hate
speech. Musk’s involvement in the U.S. presidential election was the last straw for many who are now looking for greener pastures.
Threads, the largest alternative, grew about 15% with 35 million new users. However, the most explosive
growth came from Bluesky, seeing over 500% growth and a total
user base of over 25 million users at the time of writing.
We’ve dug into the nerdy details of how Mastodon, Threads, and
Bluesky compare, but given this recent momentum it’s important to clear up some questions for new Bluesky users, and what this new approach to the social web really
means for how you connect with people online.
Note that Bluesky is still in an early stage, and many big changes are anticipated from the project. Answers here are accurate as of the time of writing, and will indicate the
company’s future plans where possible.
Is Bluesky Just Another Twitter?
At face value the Bluesky app has a lot of similarities to Twitter prior to becoming X. That’s by design: the Bluesky team has prioritized making a drop-in replacement for 2022 Twitter, so everything from the layout and posting options to the color scheme will feel familiar to anyone who used that site.
While discussed in the context of decentralization, this experience is still very centralized like traditional social media, with a single platform controlled by one company,
Bluesky PBLLC. However, a few aspirations from this company make it stand out:
Prioritizing interoperability and community development: Other platforms frequently get this
wrong, so this dedication to user empowerment and open source tooling is commendable.
“Credible Exit” Decentralization: Bluesky the company wants Bluesky, the network, to be able to function even if the company is eliminated or ‘enshittified.’
The first difference is already evident from the wide variety of tools and apps on the network. From blocking certain content to highlighting communities you’re a part of, there are a lot of settings to make your feed yours – some of which we walked through here. You can also abandon Bluesky’s Twitter-style interface for an app like Firesky, which presents a
stream of all Bluesky content. Other apps on the network can even be geared towards sharing audio, events, or work as a web forum, all using the same underlying AT protocol. This interoperable and experimental ecosystem parallels another based on the
ActivityPub protocol, called “The Fediverse”, which connects Threads to
Mastodon as well as many other decentralized apps which experiment with the functions of traditional social media sites.
That “credible exit” priority is less immediately visible, but explains some of the ways Bluesky looks different. The most visible difference is that usernames are domain names,
with the default for new users being a subdomain of bsky.social. EFF set it up
so that our account name is our website, @eff.org, which will be the case across the Bluesky network,
even if viewed with different apps. Comparable to how Mastodon handles verification, no central authority or government documents are needed; all it takes is proof of control over a website or DNS record.
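For readers curious how that proof of control works under the hood, here is a minimal sketch, written in Python purely for illustration, of one of the two handle-resolution methods the AT Protocol defines: fetching the DID a domain publishes at its /.well-known/atproto-did endpoint. The function name is ours rather than an official client API, and the alternative method is a DNS TXT record at _atproto.<handle>, so any given domain (eff.org included) may verify the other way and this lookup could legitimately fail.

# A minimal sketch, not an official Bluesky client: resolve a handle (a domain name)
# to its DID using the AT Protocol's HTTPS well-known method. The other supported
# method is a DNS TXT record at _atproto.<handle> containing "did=did:plc:...",
# so a domain that verifies via DNS will not answer this request.
import urllib.request

def resolve_handle_via_wellknown(handle: str) -> str:
    url = f"https://{handle}/.well-known/atproto-did"
    with urllib.request.urlopen(url, timeout=10) as response:
        did = response.read().decode("utf-8").strip()
    if not did.startswith("did:"):
        raise ValueError(f"{handle} does not publish a DID at {url}")
    return did

if __name__ == "__main__":
    # eff.org is simply the example handle from this article; it may verify via DNS instead.
    print(resolve_handle_via_wellknown("eff.org"))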
As Bluesky decentralizes, it is likely to diverge more from the Twitter experience as the tricky problems of decentralization creep in.
How Is Bluesky for Privacy?
While Bluesky is not engaged in surveillance-based advertising like many incumbent social media platforms, users should be aware that shared information is more public and accessible than they might expect.
Bluesky, the app, offers some sensible data-minimizing defaults like requiring user consent for third-party embedded media, which can include tracking. The real assurance to
users, however, is that even if the flagship apps were to become less privacy protective, the open tools let others make full-featured alternative apps on the same network.
However, by design, Bluesky content is fully public on the network. Users can change privacy settings to ask apps on the network to require login before showing their account, but honoring that setting is optional for apps. Every post, every like, and every share is visible to the world. Even blocking data is plainly visible. By design, all of this information is also accessible in one place, as Bluesky aims to be the megaphone for a global audience that Twitter once was.
This transparency extends to how Bluesky handles moderation, where users and content
are labeled by a combination of Bluesky moderators, community moderators, and automated labeling. The result is that information about you will, over time, be held by these moderators and used to either promote or hide your content.
Users leaving X out of frustration with the platform’s use of public content for AI training may also find that this approach of funneling all content into one stream is very friendly to scraping for AI training by third parties. Bluesky’s CEO has been clear that the company will not engage in AI licensing deals, but it’s important to understand that this openness to scraping is inherent to any network prioritizing openness. The freedom to use public data for creative expression, innovation, and research extends to those who use it to train AI.
Users you have blocked may also be able to use this public stream to view your posts without interacting with you. If your threat model includes trolls and other bad
actors who might reshare your posts in
other contexts, this is important to consider.
Direct messages are not included in this heap of public information. However, they are not end-to-end encrypted, and they are hosted only on Bluesky servers. As was the case for X, that means any DM is visible to Bluesky PBLLC. DMs may be accessed for moderation or in response to valid police warrants, and may even one day become public through a data breach. Encrypted DMs are planned, but until then we advise moving sensitive conversations to dedicated, fully encrypted messaging apps.
How Do I Find People to Follow?
Tools like Skybridge are being built to make it easier for people to import their
Twitter contacts into Bluesky. Similar to advice we gave for joining Mastodon, keep in mind these tools may need extensive account access, and may need to be re-run as more people
switch networks.
Bluesky has also implemented “starter packs,” which are curated lists of users anyone can create and share with new users. EFF recently put together a few for you to check out:
Electronic Frontier Foundation Staff
Electronic Frontier Alliance members
Digital Rights, News & Advocacy
Is Bluesky In the Fediverse?
“Fediverse” refers to a wide variety of sites and services generally communicating with each other over the ActivityPub
protocol, including Threads, Mastodon, and a number of other projects. Bluesky uses the AT Protocol, which is not currently compatible with ActivityPub, and thus it is not part of “the fediverse.”
However, Bluesky is already being integrated into the vision of an interoperable and decentralized social web. You can follow Bluesky accounts from the fediverse over RSS. A
number of mobile apps will also seamlessly merge Bluesky and fediverse feeds and let you post to both accounts. Even with just one Bluesky or fediverse account, users can also share
posts and DMs to both networks using a project called Bridgy Fed.
In recent weeks this bridging also opened up to the hundreds of millions of Threads users. It just requires an additional step of enabling fediverse sharing, before connecting
to the fediverse Bridgy Fed account. We’re optimistic
that all of these projects will continue to improve integrations even more in the future.
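As a concrete illustration of the RSS option mentioned above, here is a short Python sketch that reads an account’s public posts from the feed the bsky.app web app serves at bsky.app/profile/<handle>/rss. The URL pattern and the fields present in the feed are assumptions based on the service at the time of writing and may change as Bluesky evolves.

# A minimal sketch, assuming the bsky.app web app still serves a public RSS feed at
# https://bsky.app/profile/<handle>/rss (true at the time of writing, not guaranteed).
import urllib.request
import xml.etree.ElementTree as ET

def latest_bluesky_posts(handle: str, limit: int = 5) -> list[str]:
    url = f"https://bsky.app/profile/{handle}/rss"
    with urllib.request.urlopen(url, timeout=10) as response:
        tree = ET.parse(response)
    items = tree.findall("./channel/item")
    # RSS 2.0 items carry a <title> and/or <description>; use whichever is present.
    return [
        (item.findtext("title") or item.findtext("description") or "").strip()
        for item in items[:limit]
    ]

if __name__ == "__main__":
    for post in latest_bluesky_posts("eff.org"):
        print(post)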
Is the Bluesky Network Decentralized?
The current Bluesky network is not decentralized.
It is nearly all made and hosted by one company, Bluesky PBLLC, which is working on creating the “credible exit” from their control as a platform host. If Bluesky the company
and the infrastructure it operates disappeared tonight, however, the entire Bluesky network would effectively vanish along with it.
Of the 25 million users, only 10,000 are hosted by non-Bluesky services, most of them through fediverse connections. Changing to another host is also currently a one-way
exit. All DMs rely on Bluesky-owned servers, as does
the current system for managing user identities, as well as the resource-intensive
“Relay” server aggregating content from across the network. The same company also handles the bulk of moderation and develops the main apps used by most users. Compared to networks
like the fediverse or even email, hosting your own Bluesky node currently requires a considerable
investment.
Once this is no longer the case, a “credible exit” is also not quite the same as “decentralized.” An escape hatch for particularly dire circumstances is good, but it falls short
of the distributed power and decision making of decentralized networks. This distinction will become more pressing as the reliance on Bluesky PBLLC is tested, and the company opens up
to more third parties for each component of the network.
How Does Bluesky Make Money?
The past few decades have shown the same ‘enshittification’ cycle too many
times. A new startup promises something exciting, users join, and then the platform turns on users to maximize profits—often through surveillance and restricting user
autonomy.
Will Bluesky be any different? From the team’s outlined plan we can glean that
Bluesky promises not to use surveillance-based advertising, nor to lock in users. Bluesky CEO Jay Graber also promised not to sell user content for AI training licenses and intends to
always keep the service free to join. Paid services like custom domain hosting or paid
subscriptions seem likely.
So far, though, the company relies on investment funding. It was initially incubated by Twitter co-founder Jack Dorsey – who has since distanced himself from the project – and more recently received $8 million and $15 million rounds of funding.
That later investment round has raised concerns among the existing userbase that Bluesky would pivot to some form of cryptocurrency service, as it was led by Blockchain Capital,
a cryptocurrency focused venture capital company which also had a partner join the Bluesky board. Jay Graber committed to “not hyperfinancialize the social experience” with blockchain
projects, and emphasized that Bluesky does not use blockchain.
As noted above, Bluesky has prioritized maintaining a “credible exit” for users, a commitment to interoperability that should keep the company accountable to the community and
hopefully prevent the kind of “enshittification” that drove people away from X. Holding the company to all of these promises will be key to seeing the Bluesky network and the AT
protocol reach that point of maturity.
How Does Moderation Work?
Our comparison of Mastodon, Threads, and Bluesky
gets into more detail, but as it stands Bluesky’s moderation is similar to Twitter’s before Musk. The Bluesky corporation uses the open moderation tools to label posts and
users, and will remove users from its hosted services for breaking its terms of service. This tooling keeps the Bluesky company’s moderation tied to its “credible exit” goals,
giving it the same leverage any other future operator might have. It also
means Bluesky’s centralized moderation of today can’t scale, and even with a good faith effort it will run into issues.
Bluesky accounts for this by opening its moderation tools to the community. Advanced options are available under settings in the web app, and anyone can label content and users on the site. These labels let users filter, prioritize, or block
content. However, only Bluesky has the power to “deplatform” poorly behaved users by removing them, either by no longer hosting their account, no longer relaying their content to
other users, or both.
Bluesky aspires to censorship resistance, and part of creating a “credible exit” means reducing the company’s ability to remove users entirely. In a future with a variety of
hosts and relays on the Bluesky network, removing a user looks more like removing a website from the internet—not impossible, but very difficult. Instead, users will need to settle for filtering out or blocking speech they object to, and take some comfort that voices they align with will not be removed from the network.
The permeability of Bluesky also means community tooling will need to address network abuses, like last May when a pro-Trump botnet on Nostr bridged to
Bluesky via Mastodon to flood timelines. It’s possible that like in the Fediverse, Bluesky may eventually form a network of trusted account hosts and relays to mitigate these
concerns.
Bluesky is still a work in progress, but its focus on decentralization, user control, and interoperability makes it an exciting space to watch. Whether you’re testing the waters
or planning a full migration, these insights should help you navigate the platform.
Australia Banning Kids from Social Media Does More Harm Than Good
(Wed, 18 Dec 2024)
Age verification systems are surveillance
systems that threaten everyone’s privacy and anonymity. But Australia’s government recently decided to ignore these dangers, passing a vague, sweeping piece of age
verification legislation after giving only a day for
comments. The Online Safety Amendment
(Social Media Minimum Age) Act 2024, which bans children under the age of 16 from using social media, will force platforms to take undefined “reasonable steps” to
verify users’ ages and prevent young people from using them, or face over $30 million in fines.
The country’s Prime Minister, Anthony Albanese, claims that the legislation is needed to protect young people in the country from the supposed harmful effects of social media,
despite no study showing such an impact. This legislation will be a net loss for both young people
and adults who rely on the internet to find community and themselves.
The law does not specify which social media platforms will be banned. Instead, this decision is left to Australia’s communications minister who will work alongside the country’s
internet regulator, the eSafety Commissioner, to enforce the rules. This gives government officials dangerous power to target services they do not like, all at a cost to both minor
and adult internet users.
The legislation also does not specify what type of age verification technology will be necessary to implement the restrictions but prohibits using only government IDs for this
purpose. This is a flawed attempt to protect privacy.
Since platforms will have to provide means of verifying their users' ages other than government ID, they will likely rely on unreliable tools like biometric scanners. The
Australian government awarded the contract
for testing age verification technology to a UK-based company, Age Check Certification Scheme (ACCS), which, according to the company website, “can test all kinds of age
verification systems,” including “biometrics, database lookups, and artificial intelligence-based solutions.”
The ban will not take effect for at least another 12 months while these points are decided upon, but we are already concerned that the systems required to comply with this law
will burden all Australians’ privacy, anonymity, and data
security.
Banning social media and introducing mandatory age verification checks is the wrong approach to protecting young people online, and this bill was hastily pushed through the
Parliament of Australia with little oversight or scrutiny. We urge politicians in other countries—like the U.S. and France—to explore less invasive approaches to protecting
all people from online harms and focus on comprehensive privacy protections,
rather than mandatory age verification.
EFF Statement on U.S. Supreme Court's Decision to Consider TikTok Ban
(Wed, 18 Dec 2024)
The TikTok ban itself and the DC Circuit's approval of it should be of great concern even to those who find TikTok undesirable or scary. Shutting down communications platforms
or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the U.S. has previously
condemned globally.
The U.S. government should not be able to restrict speech—in this case by cutting off a tool used by 170 million Americans to receive information and communicate with the
world—without proving with evidence that the tools are presently seriously harmful. But in this case, Congress has required and the DC Circuit approved TikTok’s forced divestiture
based only upon fears of future potential harm. This greatly lowers well-established standards for restricting freedom of speech in the U.S.
So we are pleased that the Supreme Court will take the case and will urge the justices to apply the appropriately demanding First Amendment scrutiny.
Speaking Freely: Winnie Kabintie
(Wed, 18 Dec 2024)
Winnie Kabintie is a journalist and Communications Specialist based in Nairobi, Kenya. As an award-winning youth media advocate, she is passionate about empowering young
people with Media and Information Literacy skills, enabling them to critically engage with and shape the evolving digital media landscape in meaningful ways.
Greene: To get us started, can you tell us what the term free expression means to you?
I think it's the opportunity to speak in a language that you understand and speak about subjects of concern to you and to anybody who is affected or influenced by the subject of
conversation. To me, it is the ability to communicate openly and share ideas or information without interference, control, or restrictions.
As a journalist, it means having the freedom to report on matters affecting society and my work without censorship or limitations on where that information can be shared. Beyond
individual expression, it is also about empowering communities to voice their concerns and highlight issues that impact their lives. Additionally, access to information is a vital
component of freedom of expression, as it ensures people can make informed decisions and engage meaningfully in societal discourse because knowledge is power.
Greene: You mention the freedom to speak and to receive information in your language. How do you see that currently? Are language differences a big obstacle that you see
currently?
If I just look at my society—I like to contextualize things—we have Swahili, which is a national language, and we have English as the secondary official language. But when it
comes to policies, when it comes to public engagement, we only see this happening in documents that are only written in English. This means when it comes to the public barazas
(community gatherings), interpretation is led by a few individuals, which creates room for disinformation and misinformation. I believe the language barrier is an obstacle to freedom
of speech. We've also seen it from the civil society dynamics, where you're going to engage the community but you don't speak the same language as them, then it becomes very difficult
for you to engage them on the subject at hand. And if you have to use a translator, sometimes what happens is you're probably using a translator for whom their only advantage, or
rather the only advantage they bring to the table, is the fact that they understand different languages. But they're not experts in the topic that you're discussing.
Greene: Why do you think the government only produces materials in English? Do you think part of that is because they want to limit who is able to understand them? Or is it just,
are they lazy or they just disregard the other languages?
In all fairness, I think it comes from the systematic approach on how things run. This has been the way of doing things, and it's easier to do it because translating some words
from, for example, English to Swahili is very hard. And you see, as much as we speak Swahili in Kenya—and it's our national language—the kind of Swahili we speak is also very diluted
or corrupted with English and Sheng, which I like to call “ki-shenglish”. I know there were attempts to translate the new Kenyan Constitution, and they did
translate some bits of the summarized copy, but even then it wasn’t the full Constitution. We don't even know how to say certain words in Swahili from English which makes it difficult
to translate many things. So I think it's just an innocent omission.
Greene: What makes you passionate about freedom of expression?
As a journalist and youth media advocate, my passion for freedom of expression stems from its fundamental role in empowering individuals and communities to share their
stories, voice their concerns, and drive meaningful change. Freedom of expression is not just about the right to speak—it’s about the ability to question, to challenge injustices, and
to contribute to shaping a better society.
For me, freedom of expression is deeply personal as I like to question, interrogate and I am not just content with the status quo. As a journalist, I rely on this freedom to
shed light on critical issues affecting society, to amplify marginalized voices, and to hold power to account. As a youth advocate, I’ve witnessed how freedom of expression enables
young people to challenge stereotypes, demand accountability, and actively participate in shaping their future. We saw this during the recent Gen Z revolution in Kenya when youth took to the streets to reject the proposed
Finance Bill.
Freedom of speech is also about access. It matters to me that people not only have the ability to speak freely, but also have the platforms to articulate their issues. You can
have all the voice you need, but if you do not have the platforms, then it becomes nothing. So it's also recognizing that we need to create the right platforms to advance freedom of
speech. These, in our case, include platforms like radio and social media platforms.
So we need to ensure that we have connectivity to these platforms. For example, in the rural areas of our countries, there are some areas that are not even connected to the
internet. They don't have the infrastructure including electricity. It then becomes difficult for those people to engage in digital media platforms where everybody is now engaging. I
remember recently during the Reject Finance Bill process in Kenya, the political elite
realized that they could leverage social media and meet with and engage the youth. I remember the President was summoned to an X-space and he showed up and there was dialogue with
hundreds of young people. But what this meant was that the youth in rural Kenya who didn’t have access to the internet or X were left out of that national, historic conversation.
That's why I say it's not just as simple as saying you are guaranteed freedom of expression by the Constitution. It's also how governments are ensuring that we have the channels to
advance this right.
Greene: Have you had a personal experience or any personal experiences that shaped how you feel about freedom of expression? Maybe a situation where you felt like it was being
denied to you or someone close to you was in that situation?
At a personal level I believe that I am a product of speaking out and I try to use my voice to make an impact! There is also this one particular incident that stands out during
my early career as a journalist. In 2014 I amplified a story from a video shared on Facebook by writing a news article that was published on The Kenya Forum, which at the time was one
of the two publications that were fully digital in the country covering news and feature articles.
The story, which was a case of gender-based assault, gained traction, drawing attention to the unfortunate incident that had seen a woman stripped naked allegedly for being “dressed indecently.” The public uproar sparked the
famous #MyDressMyChoice protest in Kenya where women took to the streets countrywide to protest against sexual violence.
Greene: Wow. Do you have any other specific stories that you can tell about the time when you spoke up and you felt that it made a difference? Or maybe you spoke up, and there was
some resistance to you speaking up?
I've had many moments where I've spoken up and it's made a difference including the incident I shared in the previous question. But, on the other hand, I also had a moment where
I did not speak out years ago, when a classmate in primary school was accused of theft.
There was this girl once in class, she was caught with books that didn't belong to her and she was accused of stealing them. One of the books she had was my deskmate’s and I was
there when she had borrowed it. So she was defending herself and told the teacher, “Winnie was there when I borrowed the book.” When the teacher asked me if this was true I just said,
“I don't know.” That feedback was her last line of defense and the girl got expelled from school. So I’ve always wondered, if I'd said yes, would the teacher have been more lenient
and realized that she had probably just borrowed the rest of the books as well? I was only eight years old at the time, but because of that, and how bad the outcome made me feel, I
vowed to myself to always stand for the truth even when it’s unpopular with everyone else in the room. I would never look the other way in the face of an injustice or in the face of
an issue that I can help resolve. I will never walk away in silence.
Greene: Have you kept to that since then?
Absolutely.
Greene: Okay, I want to switch tracks a little bit. Do you feel there are situations where it's appropriate for government to limit someone's speech?
Yes, absolutely. In today’s era of disinformation and hate speech, it’s crucial to have legal frameworks that safeguard society. We live in a society where people, especially
politicians, often make inflammatory statements to gain political mileage, and such remarks can lead to serious consequences, including civil unrest.
Kenya’s experience during the 2007-2008 elections is a powerful reminder of
how harmful speech can escalate tensions and pit communities against each other. That period taught us the importance of being mindful of what leaders say, as their words have the
power to unite or divide.
I firmly believe that governments must strike a balance between protecting freedom of speech and preventing harm. While everyone has the right to express themselves, that right
ends where it begins to infringe on the rights and safety of others. It’s about ensuring that freedom of speech is exercised responsibly to maintain peace and harmony in
society.
Greene: So what do we have to be careful about with giving the government the power to regulate speech? You mentioned hate speech can be hard to define. What's the risk of letting
the government define that?
The risk is that the government may overstep its boundaries, as often happens. Another concern is the lack of consistent and standardized enforcement. For instance, someone with
influence or connections within the government might escape accountability for their actions, while an activist doing the same thing could face arrest. This disparity in treatment
highlights the risks of uneven application of the law and potential misuse of power.
Greene: Earlier you mentioned special concern for access to information. You mentioned children and you mentioned women. Both of those are groups of people where, at least in some
places, someone else—not the government, but some other person—might control their access, right? I wonder if you could talk a little bit more about why it's so important to ensure
access to information for those particular groups.
I believe home is the foundational space where access to information and freedom of expression are nurtured. Families play a crucial role in cultivating these values, and it’s
important for parents to be intentional about fostering an environment where open communication and access to information are encouraged. Parents have a responsibility to create
opportunities for discussion within their households and beyond.
Outside the family, communities provide broader platforms for engagement. In Kenya, for example, public forums known as barazas serve as spaces
where community members gather to discuss pressing issues, such as insecurity and public utilities, and to make decisions that impact the neighborhood. Ensuring that your household is
represented in these forums is essential to staying informed and being part of decisions that directly affect you.
It’s equally important to help people understand the power of self-expression and active participation in decision-making spaces. By showing up and speaking out, individuals can
contribute to meaningful change. Additionally, exposure to information and critical discussions is vital in today’s world, where misinformation and disinformation are prevalent.
Families can address these challenges by having conversations at the dinner table, asking questions like, “Have you heard about this? What’s your understanding of misinformation? How
can you avoid being misled online?”
By encouraging open dialogue and critical thinking in everyday interactions, we empower one another to navigate information responsibly and contribute to a more informed and
engaged society.
Greene: Now, a question we ask everyone, who is your free speech hero?
I have two. One is a human rights lawyer and former member of Parliament, Gitobu
Imanyara. He is one of the few people in Kenya who fought by blood and sweat, literally, for the freedom of speech and that of the press in Kenya. He will
always be my hero when we talk about press freedom. We are one of the few countries in Africa that enjoys extreme freedoms around speech and press freedom and it’s thanks to people
like him.
The other is an activist named Boniface Mwangi. He’s a person who never shies away from
speaking up. It doesn’t matter who you are or how dangerous it gets, Boni, as he is popularly known, will always be that person who calls out the government when things are going
wrong. You’re driving on the wrong side of the traffic just because you’re a powerful person in government. He'll be the person who will not move his car and he’ll tell you to get
back in your lane. I like that. I believe when we speak up we make things happen.
Greene: Anything else you want to add?
I believe it’s time we truly recognize and understand the importance of freedom of expression and speech. Too often, these rights are mentioned casually or taken at face value,
without deeper reflection. We need to start interrogating what free speech really means, the tools that enable it, and the ways in which this right can be infringed upon.
As someone passionate about community empowerment, I believe the key lies in educating people about these rights—what it looks like when they are fully exercised and what it
means when they are violated and especially in today’s digital age. Only by raising awareness can we empower individuals to embrace these freedoms and advocate for better policies
that protect and regulate them effectively. This understanding is essential for fostering informed, engaged communities that can demand accountability and meaningful change.
“Can the Government Read My Text Messages?”
(Wed, 18 Dec 2024)
You should be able to message your family and friends without fear that law enforcement is reading everything you send. Privacy is a human right, and that’s
why we break down the ways you can protect your ability to have a private
conversation.
Learn how governments are able to read certain text messages, and how to ensure your messages are end-to-end encrypted on Digital Rights Bytes, our new site dedicated to helping break down tech issues into
byte-sized pieces.
Whether you’re just starting to think about your privacy online, or you’re already a regular user of encrypted messaging apps, Digital Rights Bytes is here
to help answer some of the common questions that may be bothering you about the devices you use. Watch the short video that explains how to keep your communications private
online – and share it with family and friends who may have asked similar questions!
Have you also wondered why it is so expensive to fix your phone, or if you really own the digital media you paid for? We’ve got answers to those and other
questions as well! And, if you’ve got additional questions you’d like us to answer in the future, let us know on your social
platform of choice using the hashtag #DigitalRightsBytes.
10 Resources for Protecting Your Digital Security | EFFector 36.15
(Tue, 17 Dec 2024)
Get a head-start on your New Year's resolution to stay up-to-date on digital rights news by subscribing to EFF's EFFector newsletter!
This edition of the newsletter covers our top ten
digital security resources for those concerned about the incoming administration, a new bill that could put an end to SLAPP lawsuits, and our recent amicus brief arguing that device searches at the border require a warrant
(we've been arguing this for a long time).
You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you
can view it by clicking the button below:
LISTEN ON YouTube
EFFECTOR 36.15 - 10 Resources for Protecting Your Digital Security
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human
rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other
stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us
fight for a brighter digital future.
Still Flawed and Lacking Safeguards, UN Cybercrime Treaty Goes Before the UN General Assembly, then States for Adoption
(Tue, 17 Dec 2024)
Update (12/21/24): A vote on the UN Cybercrime Treaty by the UN General Assembly was postponed to a later date to allow time for a review of its budget implications. The new
date for the vote was not set.
Most UN Member States, including
the U.S., are expected to support adoption of the flawed UN Cybercrime Treaty when it’s scheduled to go before the UN General Assembly this week for a vote, despite warnings that
it poses dangerous risks to human rights.
EFF and its civil society partners–along with cybersecurity and internet companies, press organizations, the International Chamber of Commerce, the
United Nations High Commissioner for Human Rights,
and others–for years have raised red flags that the treaty authorizes open-ended evidence gathering powers for crimes with little nexus to core cybercrimes,
and has minimal safeguards and limitations.
The final draft, unanimously
approved in August by over 100 countries that had participated in negotiations, will permit intrusive surveillance practices in the name of engendering cross-border
cooperation.
The treaty that will go before the UN General Assembly contains many
troubling provisions and omissions that don’t comport with international human rights standards and leave the implementation of human rights safeguards to the discretion of Member
States. Many of these Member States have poor track records on human rights and national laws that don’t protect privacy while criminalizing free speech and gender expression.
Thanks to the work of a coalition of civil society groups that included EFF, the U.S. now seems to recognize this potential danger. In a statement by the U.S. Deputy
Representative to the Economic and Social Council, the U.S. said it “shares the legitimate concerns” of industry and civil society, which warned that some states could leverage their
human rights-challenged national legal frameworks to enable transnational repression.
We expressed grave concerns that the treaty facilitates requests for user data that will enable cross-border spying and the targeting and harassment of those, for example, who expose and work
against government corruption and abuse. Our full analysis of the treaty can be found here.
Nonetheless, the U.S. said it will support the convention when it comes
up for this vote, noting among other things that its terms don’t permit parties to use it to violate or suppress human rights.
While that’s true as far as it goes, and is important to include in principle, some Member States’ laws empowered by the treaty already fail to meet human rights standards. And
the treaty fails to adopt specific safeguards to truly protect human rights.
The safeguards contained in the convention, such as the need for judicial review in the chapter on procedural measures in criminal investigations, are undermined by being
potentially discretionary and contingent on states’ domestic laws. In many countries, these domestic laws don’t require judicial authorization based on reasonable suspicion for surveillance and/or real-time collection of traffic data.
For example, our partner Access Now points out that in Algeria, Lebanon, Palestine,
Tunisia, and Egypt, cybercrime laws require telecommunications service providers to preemptively and systematically collect large amounts of user data without judicial
authorization.
Meanwhile, Jordan’s cybercrime law has been used
against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.
The U.S. says it is committed to combating human rights abuses by governments that misuse national cybercrime statutes and tools to target journalists and activists. Implementing
the treaty, it says, must be paired with robust domestic safeguards and oversight.
It’s hard to imagine that governments will voluntarily revise cybercrime laws as they ratify and implement the treaty; what’s more realistic is that the treaty normalizes such
frameworks.
Advocating for improvements during the two-year-long negotiations was a tough slog. And while the final version is highly problematic, civil society achieved some wins. An
early negotiating document named 34 purported cybercrime offenses to be included, many of which would criminalize forms of speech. Civil society warned of the dangers of including
speech-related offenses; the list was dropped in later drafts.
Civil society advocacy also helped secure specific language in the general provision article on human rights specifying that protection of fundamental rights includes freedom of
expression, opinion, religion, conscience, and peaceful assembly. Left off the list, though, was gender expression.
The U.S., meanwhile, has called on all states “to take necessary steps within their domestic legal systems to ensure the Convention will not be applied in a manner inconsistent
with human rights obligations, including those relating to speech, political dissent, and sexual identity.”
Furthermore, the U.S. government pledges to demand accountability – without saying how it will do so – if states seek to misuse the treaty to suppress human rights. “We will
demand accountability for States who try to abuse this Convention to target private companies’ employees, good-faith cybersecurity researchers, journalists, dissidents, and others.”
Yet the treaty contains no oversight provisions.
The U.S. said it is unlikely to sign or ratify the treaty “unless and until we see implementation of meaningful human rights and other legal protections by the convention’s
signatories.”
We’ll hold the government to its word on this and on its vows to seek accountability. But ultimately, the fate of the U.S. declarations and the treaty’s impact in the U.S. are more than uncertain under a second Trump administration, as ratification would require both the Senate’s consent and the President’s formal ratification.
Trump withdrew from climate, trade, and arms agreements in his first term, so signing the UN Cybercrime Treaty may not be in the cards – a positive outcome, though
probably not motivated by concerns for human rights.
Meanwhile, we urge states to vote against adoption this week and not ratify the treaty at home. The document puts global human rights at risk. In a rush to win
consensus, negotiators gave Member States lots of leeway to avoid human rights safeguards in their “criminal” investigations, and now millions of people around the world
might pay a high price.
Saving the Internet in Europe: How EFF Works in Europe
(Mon, 16 Dec 2024)
This post is part one in a series of posts about EFF’s work in Europe.
EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in
recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the
European fight for digital rights.
In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital
rights across the globe.
Why EFF Works in Europe
European lawmakers have been highly active in proposing laws to regulate online services and emerging technologies. And these laws have the potential to impact the whole world.
As such, we have long recognized the importance of engaging with organizations and lawmakers across Europe. In 2007, EFF became a member of the European Digital Rights Initiative (EDRi), a collective of NGOs, experts, advocates and academics that have for two decades worked to
advance digital rights throughout Europe. From the early days of the movement, we fought back against legislation threatening user privacy in Germany, free expression in the UK, and the right to innovation across the continent.
Over the years, we have continued collaborations with EDRi as well as other coalitions including IFEX, the international
freedom of expression network, Reclaim Your Face, and Protect
Not Surveil. In our EU policy work, we have advocated for fundamental principles like transparency, openness, and information self-determination. We emphasized that
legislative acts should never come at the expense of protections that have served the internet well: Preserve what works. Fix what is broken. And EFF has made a real difference: We have
ensured that recent internet
regulation bills don’t turn social networks into censorship tools and safeguarded users’ right to private conversations. We also helped
guide new fairness rules in digital markets to focus
on what is really important: breaking the chokehold of major platforms over the internet.
Recognizing the internet’s global reach, we have also stressed that lawmakers must consider the global impact of regulation and enforcement, particularly effects on
vulnerable groups and underserved communities. As part of this work, we facilitate a global
alliance of civil society organizations representing diverse communities across the world to ensure that non-European voices are heard in Brussels’ policy
debates.
Our Teams
Today, we have a robust policy team that works to influence policymakers in Europe. Led by International Policy Director Christoph Schmon and supported by Assistant Director of EU Policy Svea Windwehr, both of whom are based in Europe, the team brings unique expertise in European digital policymaking and fundamental rights online. They engage with lawmakers, provide policy expertise, and coordinate EFF’s work in Europe.
But legislative work is only one piece of the puzzle, and as a collaborative organization, EFF pulls expertise from various teams to shape policy, build capacity, and campaign
for a better digital future. Our teams engage with the press and the public through comprehensive analysis of digital rights issues, educational guides, activist workshops, press
briefings, and more. They are active in broad coalitions across the EU and the UK, as well as in East and Southeastern Europe.
Our work does not only span EU digital policy issues. We have been active in the UK advocating for user rights in the context of the Online Safety Act, and also work on issues
facing users in the Balkans or accession countries. For instance, we recently collaborated with Digital Security Lab Ukraine on a workshop on content moderation held in Warsaw, and
participated in the Bosnia and Herzegovina Internet Governance Forum. We are also an active member of the High-Level Group of Experts
for Resilience Building in Eastern Europe, tasked with advising on online regulation in Georgia, Moldova, and Ukraine.
EFF on Stage
In addition to all of the behind-the-scenes work that we do, EFF regularly showcases our work on European stages to share our mission and message. You can find us at conferences
like re:publica, CPDP, Chaos Communication Congress, or Freedom not Fear, and at local events like regional Internet Governance Forums. For instance, last year Director for
International Freedom of Expression Jillian C. York gave a talk with Svea Windwehr at
Berlin’s re:publica about transparency
reporting. More recently, Senior Speech and Privacy Activist Paige Collings
facilitated a session on queer justice in the digital age at a workshop held in Bosnia and Herzegovina.
There is so much more work to be done. In the next posts in this series, you will learn more about what EFF will be doing in Europe in 2025 and beyond, as well as some of our
lessons and successes from past struggles.
Speaking Freely: Prasanth Sugathan
(Fri, 13 Dec 2024)
Interviewer: David Greene
This interview has been edited for length and clarity.*
Prasanth Sugathan is Legal Director at Software Freedom Law Center, India (SFLC.in). Prasanth is a
lawyer with years of practice in the fields of technology law, intellectual property law, administrative law and constitutional law. He is an engineer turned lawyer and has worked
closely with the Free Software community in India. He has appeared in many landmark cases before various Tribunals, High Courts and the Supreme Court of India. He has also deposed
before Parliamentary Committees on issues related to the Information Technology Act and Net Neutrality.
David Greene: Why don’t you go ahead and introduce yourself.
Sugathan: I am Prasanth Sugathan, I am the Legal Director at the Software Freedom Law Center, India. We are a nonprofit organization based out of New Delhi, started in the year
2010. So we’ve been working at this for 14 years now, working mostly in the area of protecting rights of citizens in the digital space in India. We do strategic litigation, policy
work, trainings, and capacity building. Those are the areas that we work in.
Greene: What was your career path? How did you end up at SFLC?
That’s an interesting story. I am an engineer by training. Then I was interested in free software. I had a startup at one point and I did a law degree along with it. I got
interested in free software and got into it full time. Because of this involvement with the free software community, the first time I think I got involved in something related to
policy was when there was discussion around software patents. When the patent office came out with a patent manual and there was this discussion about how it could affect the free
software community and startups. So that was one discussion I followed, I wrote about it, and one thing led to another and I was called to speak at a seminar in New Delhi. That’s
where I met Eben and Mishi from the Software Freedom Law Center. That was before SFLC India was started, but then once Mishi started the organization I joined as a Counsel. It’s been
a long relationship.
Greene: Just in a personal sense, what does freedom of expression mean to you?
Apart from being a fundamental right, as evident in all the human rights agreements we have, and in the Indian Constitution, freedom of expression is the
most basic aspect for a democratic nation. I mean without free speech you can not have a proper exchange of ideas, which is most important for a democracy. For any citizen to speak
what they feel, to communicate their ideas, I think that is most important. As of now the internet is a medium which allows you to do that. So there definitely should be minimum
restrictions from the government and other agencies in relation to the free exchange of ideas on this medium.
Greene: Have you had any personal experiences with censorship that have sort of informed or influenced how you feel about free expression?
When SFLC.IN was started in 2010 our major idea was to support the free software community. But then how we got involved in the debates on free speech and privacy on the
internet was when, in 2011, the IT Rules were introduced by the government as a draft for discussion and finally notified. This was on regulation of intermediaries,
these online platforms. This was secondary legislation based on the Information Technology
Act (IT Act) in India, which is the parent law. So when these discussions happened we got involved in it and then one thing led to another. For example, there was a
provision in the IT Act called Section 66-A which criminalized the sending of offensive messages through a computer or other communication devices. It was, ostensibly, introduced to
protect women. And the irony was that two women were arrested under this law. That was the first arrest that happened, and it was a case of two women being arrested for the comments
that they made about a leader who expired.
This got us working on trying to talk to parliamentarians, trying to talk to other people about how we could maybe change this law. So there were various instances of content
being taken down and people being arrested, and it was always done under Section 66-A of the IT Act. We challenged the IT Rules before the Supreme Court. In a judgment in a 2015 case
called Shreya Singhal v. Union of India, the Supreme Court read down the rules
relating to intermediary liability. As for the rules, the platforms could be asked to take down the content. They didn’t have much of an option. If they don’t do that, they lose their
safe harbour protection. The Court said it can only be actual knowledge and what actual knowledge means is if someone gets a court order asking them to take down the content. Or let’s
say there’s direction from the government. These are the only two cases when content could be taken down.
Greene: You’ve lived in India your whole life. Has there ever been a point in your life when you felt your freedom of expression was restricted?
Currently we are going through such a phase, where you’re careful about what you’re speaking about. There is a lot of concern about what is happening in India currently. This is
something we can see mostly impacting people who are associated with civil society. When they are voicing their opinions there is now a kind of fear about how the government sees it,
whether they will take any action against you for what you say, and how this could affect your organization. Because when you’re affiliated with an organization it’s not just about
yourself. You also need to be careful about how anything that you say could affect the organization and your colleagues. We’ve had many instances of nonprofit organizations and
journalists being targeted. So there is a kind of chilling effect when you really don’t want to say something you would otherwise say strongly. There is always a toning down of what
you want to say.
Greene: Are there any situations where you think it’s appropriate for governments to regulate online speech?
You don’t have an absolute right to free speech under India’s Constitution. As stated under Article 19(2) of the Constitution, the government can impose reasonable
restrictions, for instance, on something that could lead to violence or a riot between communities. So mostly if you look at hate
speech on the net which could lead to a violent situation or riots between communities, that could be a case where maybe the government could intervene. And I would even say those are
cases where platforms should intervene. We have seen a lot of hate speech on the net during India’s current elections as there have been different phases of elections going on for
close to two months. We have seen that happening with not just political leaders but with many supporters of political parties publishing content on various platforms which isn’t
really in the nature of hate speech but which could potentially create situations where you have at least two communities fighting each other. It’s definitely not a desirable
situation. Those are the cases where maybe platforms themselves could regulate or maybe the government needs to regulate. In this case, for example, when it is related to elections,
the Election Commission also has its role, but in many cases we don’t see that
happening.
Greene: Okay, let’s go back to hate speech for a minute because that’s always been a very difficult problem. Is that a difficult problem in India? Is hate speech well-defined? Do
you think the current rules serve society well or are there problems with it?
I wouldn’t say it’s well-defined, but even in the current law there are provisions that address it. Anything which could lead to violence or to animosity
between two communities will fall in the realm of hate speech. It’s not defined as such, but that is where your free speech rights could be restricted, and that definitely could fall
under the definition of hate speech.
Greene: And do you think that definition works well?
I mean the definition is not the problem. It’s essentially a question of how it is implemented. It’s a question of how the government or its agency implements it. It’s a
question of how platforms are taking care of it. These are two issues where there’s more that needs to be done.
Greene: You also talked about misinformation in terms of elections. How do we reconcile freedom of expression concerns with concerns for preventing misinformation?
I would definitely say it’s a gray area. I mean how do you really balance this? But I don’t think it’s a problem which cannot be addressed. Definitely there’s a lot for civil
society to do, a lot for the private sector to do. Especially, for example, when hate speech is reported to the platforms. It should be dealt with quickly, but that is where we’re
seeing the starkest difference in how platforms act on such reporting in the Global North versus how they act in the Global South. Platforms need to up their act when it comes to
handling such situations and handling such content.
Greene: Okay, let’s talk about the platforms then. How do you feel about censorship or restrictions on freedom of expression by the platforms?
Things have changed a lot as to how these platforms work. Now the platforms decide what kind of content gets to your feed and how the algorithms work to promote content which is
more viral. In many cases we have seen how misinformation and hate speech go viral, while content that debunks the misinformation and provides the real facts doesn't travel as far
or come up in your feed as fast. So that definitely is a problem, the way platforms are dealing with it.
In many cases it might be economically beneficial for them to make sure that content which is viral and which puts forth misinformation reaches more eyes.
Greene: Do you think that the platforms that are most commonly used in India—and I know there’s no TikTok in India— serve free speech interests or not?
When the Information Technology Rules were introduced and the
discussions happened, I would say civil society supported the platforms, essentially saying these platforms ensured people can enjoy their free
speech rights and express themselves freely. How the situation changed over a period of time is interesting. Definitely these platforms are still important for us to express these
rights. But when it comes to, let’s say, content being regulated, some platforms do push back when the government asks them to take down the content, but we have not seen that much.
So whether they’re really the messiahs of free speech, I doubt it. Over the years, we have seen that it is most often the case that when the government tells them to do something, it is
in their interest to do what the government says. There has not been much pushback except for maybe Twitter challenging it in the court. There have not been many instances where
these platforms supported users.
Greene: So we’ve talked about hate speech and misinformation, are there other types of content or categories of online speech that are either problematic in India now or at least
that regulators are looking at that you think the government might try to do something with?
One major concern the government is trying to regulate is deepfakes, with even the Prime Minister speaking about it. So suddenly that is something of a priority for
the government to regulate. So that’s definitely a problem, especially when it comes to public figures and particularly women who are in politics who often have their images
manipulated. In India we see that at election time. Even politicians who have been in the field for a long time, their images have been misused and morphed images have been
circulated. So that’s definitely something that the platforms need to act on. For example, you cannot have the luxury of, let’s say, taking 48 hours to decide what to do when
something like that is posted. This is something which platforms have to deal with as early as possible. We do understand there’s a lot of content and a lot of reporting happening,
but reports related to non-consensual sexual imagery, at least, should be prioritized.
Greene: As an engineer, how do you feel about deepfake tech? Should the regulatory concerns be qualitatively different than for other kinds of false information?
When it comes to deepfakes, I would say the problem is that it has become more mainstream. It has become very easy for a person to use these tools that have become more
accessible. Earlier you needed to have specialized knowledge, especially when it came to something like editing videos. Now it’s become much easier. These tools are made easily
available. The major difference now is how easy it is to access these applications. There cannot be a case of fully regulating or fully controlling a technology. It's not essentially
a problem with the technology, because there would be a lot of ethical use cases. Just because something is used for a harmful purpose doesn’t mean that you completely block the
technology. There is definitely a case for regulating AI and regulating deepfakes, but that doesn’t mean you put a complete stop to it.
Greene: How do you feel about TikTok being banned in India?
I think that’s less a question of technology or regulation and more of a geopolitical issue. I don’t think it has anything to do with the technology or even the transfer of data
for that matter. I think it was just a geopolitical issue related to India-China relations. Relations have soured with the border disputes and other things, and I think that
was the trigger for the TikTok ban.
Greene: What is your most significant legal victory from a human rights perspective and why?
The victory we had in the fight against the 2011 Rules, the portions of which related to intermediary liability were struck down by the Supreme Court. That was important
because when it came to platforms and when it came to people expressing their critical views online, all of this could have been taken down very easily. So that was definitely a case
of free speech rights being affected without much recourse. So that was a major victory.
Greene: Okay, now we ask everyone this question. Who is your free speech hero and why?
I can’t think of one person, but I think of, for example, when the country went through a bleak period in the 1970s and the government declared a national state of emergency. During that time we had journalists and politicians who fought for
free speech rights with respect to the news media. At that time even writing something in the publications was difficult. We had many cases of journalists who were fighting this,
people who had gone to jail for writing something, who had gone to jail for opposing the government or publicly criticizing the government. So I don’t think of just one person, but we
have seen journalists and political leaders fighting back during that state of emergency. I would say those are the heroes who could fight the government, who could fight law
enforcement. Then there was the case of Justice H.R. Khanna, a judge who stood up for citizens' rights and gave his dissenting opinion against the majority view, which cost him the
position of Chief Justice. Maybe I would say he’s a hero, a person who was clear about constitutional values and principles.
>> mehr lesen
EFF Speaks Out in Court for Citizen Journalists
(Thu, 12 Dec 2024)
No one gets to abuse copyright to shut down debate. Because of that, we at EFF represent Channel 781, a group of citizen journalists whose YouTube channel was temporarily shut
down following copyright infringement claims made by Waltham Community Access Corporation (WCAC). As part of that case, the federal court in Massachusetts heard oral arguments
in Channel 781 News v. Waltham Community Access Corporation, a pivotal case for copyright law and digital journalism.
WCAC, Waltham’s public access channel, records city council meetings on video. Channel 781, a group of independent journalists, curates clips of those meetings for its YouTube
channel, along with original programming, to spark debate on issues like housing policy and real estate development. WCAC sent a series of DMCA takedown notices that accused Channel
781 of copyright infringement, resulting in YouTube deactivating Channel 781’s channel just days before a critical municipal election.
Represented by EFF and the law firm Brown Rudnick LLP, Channel 781 sued WCAC for misrepresentations in its DMCA takedown notices. We argued that using clips of government
meetings from the government access station to engage in public debate is an obvious fair use under copyright. Also, by excerpting factual recordings and using captions to improve
accessibility, the group aims to educate the public, a purpose distinct from WCAC’s unannotated broadcasts of hours-long meetings. The lawsuit alleges that WCAC’s takedown requests
knowingly misrepresented the legality of Channel 781's use, violating Section 512(f) of the DMCA.
Fighting a Motion to Dismiss
In court this week, EFF pushed back against WCAC’s motion to dismiss the case. We argued to District Judge Patti Saris that Channel 781’s use of video clips of city government
meetings was an obvious fair use, and that by failing to consider fair use before sending takedown notices to YouTube, WCAC violated the law and should be liable for damages.
If Judge Saris denies WCAC’s motion, we will move on to proving our case. We’re confident that the outcome will promote accountability for copyright holders who misuse the
powerful notice-and-takedown mechanism that the DMCA provides, and also protect citizen journalists in their use of digital tools.
EFF will continue to provide updates as the case develops. Stay tuned for the latest news on this critical fight for free expression and the protection of digital rights.
>> mehr lesen
X's Last-Minute Update to the Kids Online Safety Act Still Fails to Protect Kids—or Adults—Online
(Thu, 12 Dec 2024)
Late last week, the Senate released yet another version of the Kids Online Safety Act, written, reportedly, with the assistance of X CEO Linda Yaccarino in a flawed attempt
to address the critical free
speech issues inherent in the bill. This last minute draft remains, at its core, an unconstitutional censorship bill that threatens the online speech and privacy
rights of all internet users.
TELL CONGRESS: VOTE NO ON KOSA
Update Fails to Protect Users from Censorship or Platforms from Liability
The most important update, according to its authors, supposedly minimizes the impact of the bill on free speech. As we’ve said before, KOSA’s “duty of care” section is its
biggest problem, as it would force a broad swath of online services to make policy changes based on the content of online speech. Though the bill’s authors inaccurately claim KOSA
only regulates the design of platforms, not speech, the harms it enumerates—eating disorders, substance use disorders, and suicidal behaviors, for example—are not
caused by the design of a platform.
KOSA is likely to actually increase the risks to children, because it will prevent them from accessing online resources about topics like addiction, eating disorders, and
bullying. It will result in services imposing age verification requirements and content restrictions, and it will stifle minors from finding or accessing their own supportive
communities online. For these reasons, we’ve been critical of KOSA since it was introduced in 2022.
This updated bill adds just one sentence to the “duty of care” requirement: “Nothing in this section shall be construed to allow a government entity to enforce subsection a [the
duty of care] based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United
States.” But the viewpoint of users was never impacted by KOSA’s duty of care in the first place. The duty of care is a duty imposed on platforms, not
users. Platforms must mitigate the harms listed in the bill, not users, and the platform’s ability to share users’ views is what’s at risk—not the
ability of users to express those views. Adding that the bill doesn’t impose liability based on user expression doesn’t change how the bill would be interpreted or
enforced. The FTC could still hold a platform liable for the speech it contains.
Let’s say, for example, that a covered platform like reddit hosts a forum created and maintained by users for discussion of overcoming eating disorders. Even though the speech
contained in that forum is entirely legal, often helpful, and possibly even life-saving, the FTC could still hold reddit liable for violating the duty of care by allowing young people
to view it. The same could be true of a Facebook group about LGBTQ issues, or for a post about drug use that X showed a user through its algorithm. If a platform’s defense were that
this information is protected expression, the FTC could simply say that they aren’t enforcing it based on the expression of any individual viewpoint, but based on the fact that the
platform allowed a design feature—a subreddit, Facebook group, or algorithm—to distribute that expression to minors. It’s a superfluous carveout for user speech and expression that
KOSA never penalized in the first place, but which the platform would still be penalized for distributing.
It’s particularly disappointing that those in charge of X—likely a covered platform under the law—had any role in writing this language, as the authors have failed to grasp the
world of difference between immunizing individual expression, and protecting their own platform from the liability that KOSA would place on it.
Compulsive Usage Doesn’t Narrow KOSA’s Scope
Another of KOSA’s issues has been its vague list of
harms, which have remained broad enough that platforms have no clear guidance on what is likely to cross the line. This update requires that the harms of “depressive disorders and
anxiety disorders” have “objectively verifiable and clinically diagnosable symptoms that are related to compulsive usage.” The latest text’s definition of compulsive usage, however,
is equally vague: “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities, including socializing, sleeping, eating,
learning, reading, concentrating, communicating, or working.” This doesn’t narrow the scope of the bill.
It should be noted that there is no clinical definition of “compulsive usage” of online services. As in past versions of KOSA, this update
cobbles together a definition that sounds just medical, or just legal, enough to appear legitimate—when in fact the definition is
devoid of specific legal meaning, and dangerously vague to boot.
How could the persistent use of social media not significantly impact the way someone socializes or communicates? The bill doesn’t even require
that the impact be a negative one. Comments on an Instagram photo from a potential partner may make it hard to sleep for several nights in a row; a lengthy new YouTube video may
impact someone’s workday. Opening a Snapchat account might significantly impact how a teenager keeps in touch with her friends, but that doesn’t mean her preference for that over text
messages is “compulsive” and therefore necessarily harmful.
Nonetheless, an FTC weaponizing KOSA could still hold platforms liable for showing content to minors that they believe results in depression or
anxiety, so long as they can claim the anxiety or depression disrupted someone’s sleep, or even just changed how someone socializes or communicates. These so-called “harms” could
still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle
football.
Dangerous Censorship Bills Do Not Belong in Must-Pass Legislation
The latest KOSA draft comes as incoming nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has reportedly vowed to protect free speech by “fighting back against the
trans agenda,” among other things. As we’ve said for years (and about every version of the bill), KOSA would give the FTC under this or any future
administration wide berth to decide what sort of content platforms must prevent young people from seeing. Just passing KOSA would likely result in platforms taking down protected
speech and implementing age verification requirements, even if it's never enforced; the FTC could simply express the types of content they believe harms children, and use the mere
threat of enforcement to force platforms to comply.
No representative should consider shoehorning this controversial and unconstitutional bill into a continuing resolution. A law that forces platforms to censor truthful online
content should not be in a last minute funding bill.
TELL CONGRESS: VOTE NO ON KOSA
>> mehr lesen
Brazil’s Internet Intermediary Liability Rules Under Trial: What Are the Risks?
(Wed, 11 Dec 2024)
The Brazilian Supreme Court is on the verge of
deciding whether digital platforms can be held liable for third-party content even without a judicial order requiring removal. A panel of eleven
justices is examining two cases jointly, and one of them directly challenges whether Brazil’s internet intermediary liability regime for user-generated content aligns with the
country’s Federal Constitution or fails to meet constitutional standards. The outcome of these cases can seriously undermine important free expression and privacy safeguards if they
lead to general content monitoring obligations or broadly expand notice-and-takedown mandates.
The court’s examination revolves around Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet”, Law n.
12.965/2014). The provision establishes that an internet application provider can only be held liable for third-party content if it fails to
comply with a judicial order to remove the content. A notice-and-takedown exception to the provision applies in cases of copyright infringement, unauthorized disclosure of private
images containing nudity or sexual activity, and content involving child sexual abuse. The first two exceptions are in Marco Civil, while the third one comes from a prior rule
included in the Brazilian child protection law.
The decision the court takes will set a precedent for lower courts regarding two main topics: whether Marco Civil’s internet intermediary liability
regime is aligned with Brazil's Constitution and whether internet application providers have the obligation to monitor online content they host and remove it when
deemed offensive, without judicial intervention. Moreover, it can have a regional and cross-regional impact as lawmakers and courts look across borders at platform regulation
trends amid global coordination initiatives.
After a public hearing held last year, the Court's sessions about the cases started in late November and, so far, only Justice Dias Toffoli, who is in
charge of Marco Civil’s constitutionality case, has concluded the presentation of his vote. The justice declared Article 19 unconstitutional and established the notice-and-takedown
regime set in Article 21 of Marco Civil, which relates to unauthorized disclosure of private images, as the general rule for intermediary liability. According to his vote, the
determination of liability must consider the activities the internet application provider has actually carried out and the degree of interference of these activities.
However, platforms could be held liable for certain content regardless of notification, leading to a monitoring duty. Examples include
content considered criminal offenses, such as crimes against the democratic state, human trafficking, terrorism, racism, and violence against children and women. It
also includes the publication of notoriously false or severely miscontextualized facts that lead to violence or have the potential to disrupt the electoral process. If there’s
reasonable doubt, the notice-and-takedown rule under Marco Civil’s Article 21 would be the applicable regime.
The court session resumes today, but it’s still uncertain whether all eleven justices will reach a judgment by year’s end.
Some Background About Marco Civil’s Intermediary Liability Regime
The legislative intent back in 2014 to establish Article 19 as the general rule for internet application providers' liability for user-generated content
reflected civil society’s concerns over platform censorship. Faced with the risk of being held liable for user content, internet platforms generally prioritize
their economic interests and security over preserving users’ protected expression, and over-remove content to avoid
legal battles and regulatory scrutiny. The enforcement overreach of copyright rules
online was already a problem when the legislative discussion of Marco Civil took place. Lawmakers chose to rely on courts to balance the different rights at
stake in removing or keeping user content online. The approval of Marco Civil had wide societal support and was considered a win for advancing users’ rights online.
The provision was in line with the recommendations of the Special Rapporteurs for Freedom of Expression of the United Nations and the Inter-American Commission on Human Rights (IACHR). In that regard, the then
IACHR’s Special Rapporteur had clearly remarked that a strict liability regime creates strong incentives for private censorship, and would run against the State’s duty to favor an institutional framework that protects and
guarantees free expression under the American Convention on Human Rights. Notice-and-takedown regimes as the general rule also raised concerns of over-removal and the weaponization of notification mechanisms to censor protected speech.
A lot has happened since 2014. Big Tech platforms have consolidated their dominance, the internet ecosystem is more centralized, and algorithmic mediation
of content distribution online has intensified, increasingly relying on a corporate surveillance structure. Nonetheless, the concerns Marco Civil reflects remain relevant just as the
balance its intermediary liability rule has struck persists as a proper way of tackling these concerns. Regarding current challenges,
changes to the liability regime suggested in Dias Toffoli's vote will likely reinforce rather than reduce corporate surveillance, Big Tech’s predominance,
and digital platforms’ power over online speech.
The Cases Under Trial and The Reach of the Supreme Court’s Decision
The two individual cases under analysis by the Supreme Court are more than a decade old. Both relate to the right to honor. In the first one, the plaintiff, a high school teacher,
sued Google Brasil Internet Ltda to remove an online community created by students to offend her on the now defunct Orkut platform. She asked for the deletion of the community
and compensation for moral damages, as the platform didn't remove the community after an extrajudicial notification. Google deleted the community
following the decision of the lower court, but the judicial dispute about the compensation continued.
In the second case, the plaintiff sued Facebook after the company didn’t remove an
offensive fake account impersonating her. The lawsuit sought to shut down the fake account, obtain the identification of the account’s IP address, and compensation for moral damages.
As Marco Civil had already passed, the judge denied the moral compensation request. Yet, the appeals court found that Facebook could be liable for not removing the fake account after
an extrajudicial notification, finding Marco Civil’s intermediary liability regime unconstitutional vis-à-vis Brazil’s constitutional protection of consumers.
Both cases went all the way through the Supreme Court in two separate extraordinary appeals, now examined jointly. For the Supreme Court to analyze
extraordinary appeals, it must identify and approve a “general repercussion” issue that unfolds from the individual case. As such, the topics under analysis of the Brazilian Supreme
Court in these appeals are not only the individual cases, but also the court’s understanding about the general repercussion issues involved. What the court stipulates in this regard
will orient lower courts’ decisions in similar cases.
The two general repercussion issues under scrutiny are, then, the constitutionality of Marco Civil’s internet intermediary liability regime and whether
internet application providers have the obligation to monitor published content and take it down when considered offensive, without judicial intervention.
There’s a lot at stake for users’ rights online in the outcomes of these cases.
The Many Perils and Pitfalls on the Way
Brazil’s platform regulation
debate has heated up in the last few years. Concerns over the gigantic power of Big Tech platforms, the negative effects of their
attention-driven business model, and revelations of plans and actions from the previous presidential administration to arbitrarily remain in power have inflamed discussions of regulating
Big Tech. As its main vector, draft bill 2630 (PL
2630), didn’t move forward in the Brazilian Congress, the Supreme Court’s pending cases gained traction as the available alternative for
introducing changes.
We’ve written about intermediary liability
trends around the globe, how to move
forward, and the risks that changes in safe harbors regimes end up reshaping intermediaries’ behavior in ways that ultimately harm
freedom of expression and other rights for internet users.
One of these risks is relying on strict liability regimes to moderate user expression online. Holding internet application providers liable for
user-generated content regardless of a notification means requiring them to put in place systems of content monitoring and filtering with automated takedowns of potentially infringing
content.
While platforms like Facebook, Instagram, X (ex-Twitter), TikTok, and YouTube already use AI tools to moderate and curate the sheer volume of content they
receive per minute, the resources they have for doing so are not available for other, smaller internet application providers that host users’ expression. Making automated content
monitoring a general obligation will likely intensify the concentration of the online ecosystem in just a handful of large platforms. Strict liability regimes also inhibit or even
endanger the existence of less-centralized content moderation models, contributing yet again to entrenching Big Tech’s dominance and business model.
But the fact that Big Tech platforms already use AI tools to moderate and restrict content doesn’t mean they do it well. Automated content monitoring
is hard at scale, and platforms constantly fail at purging content that violates their rules without sweeping up protected content. In addition to historical issues with AI-based
detection of copyright infringement that have deeply undermined fair use
rules, automated systems often flag and censor crucial information that should stay online.
Just to give a few examples, during the wave of protests in Chile, internet platforms wrongfully restricted content reporting the police's harsh repression of demonstrations, having deemed it violent content. In Brazil, we saw similar concerns
when Instagram censored images of the 2021 massacre in the Jacarezinho community, which was the
most lethal police operation in Rio de Janeiro’s history. In other geographies, the quest to restrict extremist content has removed videos documenting human rights violations in conflicts in countries like Syria and Ukraine.
These are all examples of content similar to what could fit into Justice Toffoli’s list of speech subject to a strict liability regime. And while this
regime shouldn’t apply in cases of reasonable doubt, platform companies won’t likely risk keeping such content up out of concern that a judge decides later that it wasn’t a reasonable
doubt situation and orders them to pay damages. Digital platforms have, then, a strong incentive to calibrate their AI systems to err on the side of censorship. And depending on
how these systems operate, it means a strong incentive for conducting prior censorship potentially affecting protected expression, which defies Article 13 of the American Convention.
Setting the notice-and-takedown regime as the general rule for an intermediary’s liability also poses risks. While the company has the chance to analyze and
decide whether to keep content online, again the incentive is to err on the side of taking it down to avoid legal costs.
Brazil's own experience in courts shows how tricky the issue can be. InternetLab's
research based on rulings involving free expression online indicated that Brazilian courts of appeals denied content removal requests in more
than 60% of cases. The Brazilian Association of Investigative Journalism (ABRAJI) has also highlighted data showing that at some point in judicial
proceedings, judges agreed with content removal requests in around half of the cases, and some were reversed later on. This is especially concerning in
honor-related cases. The more influential or powerful the person involved, the higher the chances of arbitrary content removal, flipping the
public-interest logic of preserving access to information. We should not forget companies that thrived by offering reputation management services built upon the use
of takedown
mechanisms to disappear critical content online.
It's important to underline that this ruling comes in the absence of digital procedural justice guarantees. While Justice Toffoli’s vote asserts platforms’
duty to provide specific notification channels, preferably electronic, to receive complaints about infringing content, there are no further specifications to avoid the misuse of
notification systems. Article 21 of Marco Civil provides that notices must allow the specific identification of the contested content (generally understood as the URL) and elements to
verify that the complainant is the person offended. Except for that, there is no further guidance on which details and justifications the notice should contain, and whether the
content’s author would have the opportunity, and the proper mechanism, to respond to or appeal the takedown request.
As we said
before, we should not mix platform accountability with reinforcing digital platforms as points of control over people's online expression and
actions. This is a dangerous path considering the power big platforms already have and the increasing intermediation of digital technologies in everything we do. Unfortunately, the
Supreme Court seems to be taking a direction that will emphasize such a role and dominant position, also creating additional hurdles for smaller platforms and decentralized models to
compete with the current digital giants.
>> mehr lesen
Introducing EFF’s New Video Series: Gate Crashing
(Tue, 10 Dec 2024)
The promise of the internet—at least in the early days—was that it would lower the barriers to entry for any number of careers. Traditionally, the spheres
of novel writing, culture criticism, and journalism were populated by well-off straight white men, with anyone not meeting one of those criteria being an outlier. Add in giant
corporations acting as gatekeepers to those spheres and it was a very homogenous culture. The internet has changed that.
There is a lot about the internet that needs fixing, but the one thing we should preserve and nurture is the nontraditional paths to success it creates. In
this series of interviews, called “Gate Crashing,” we look to highlight those people and learn from their examples. In an ideal world, lawmakers will be guided by lived experiences
like these when thinking about new internet legislation or policy.
In our first video, we look at creators who honed their media criticism skills in fandom spaces. Please join Gavia Baker-Whitelaw and Elizabeth Minkel,
co-creators of the Rec Center newsletter, in a wide-ranging
discussion about how they got started, where it has led them, and what they’ve learned about internet culture and policy along the way.
[Embedded YouTube video: https://www.youtube.com/embed/aeplIxvskx8]
>> mehr lesen
Speaking Freely: Tomiwa Ilori
(Tue, 10 Dec 2024)
Interviewer: David Greene
*This interview has been edited for length and clarity.
Tomiwa Ilori is an expert researcher and a policy analyst with a focus on digital technologies and human rights. Currently, he is an advisor for the B-Tech Africa Project at UN Human Rights and a Senior ICFP Fellow at
HURIDOCS. His postgraduate qualifications include masters and doctorate degrees from the Centre for Human Rights, Faculty of Law, University of Pretoria. All views and opinions expressed in this interview are
personal.
Greene: Why don’t you start by introducing yourself?
Tomiwa Ilori: My name is Tomiwa Ilori. I’m a legal consultant with expertise in digital rights and policy. I work with a lot of organizations on digital rights and policy
including information rights, business and human rights, platform governance, surveillance studies, data protection and other aspects.
Greene: Can you tell us more about the B-Tech project?
The B-Tech project is a project by the UN human rights office and the idea behind it is to mainstream the UN Guiding Principles on Business and Human Rights (UNGPs)
into the tech sector. The project looks at, for example, how social media platforms can apply human rights due diligence frameworks or processes to their products and services
more effectively. We also work on topical issues such as Generative AI and its impacts on human rights. For example, how do the UNGPs apply to Generative AI? What guidance can the
UNGPs provide for the regulation of Generative AI and what can actors and policymakers look for when regulating Generative AI and other new and emerging technologies?
Greene: Great. This series is about freedom of expression. So my first question for you is what does freedom of expression mean to you personally?
I think freedom of expression is like oxygen, more or less like the air we breathe. There is nothing about being human that doesn’t involve expression, just like drawing breath.
Even beyond just being a right, it’s an intrinsic part of being human. It’s embedded in us from the start. You have this natural urge to want to express yourself right from being an
infant. So beyond being a human right, it is something you can almost not do without in every facet of life. Just to put it as simply as possible, that’s what it means to
me.
Greene: Is there a single experience or several experiences that shaped your views about freedom of expression?
Yes. For context, I’m Nigerian and I grew up in the southwestern part of the country where most of the Yorùbá people live. As a Yorùbá person and as someone who grew up listening to and speaking the Yorùbá language,
language has a huge influence on me, my philosophy and my ideas. I have a mother who loves to speak in proverbs and mostly in Yorùbá. Most of these proverbs which are usually profound
show that free speech is the cornerstone of being human, being part of a community, and exercising your right to life and existence. Sharing expression and growing up in that kind of
community shaped my worldview about my right to be. Closely attached to my right to be is my right to express myself. More importantly, it also shaped my view about how my right to be
does not necessarily interrupt someone else’s right to be. So, yes, my background and how I grew up really shaped me. Then, I was fortunate that I also grew up and furthered my
studies. My graduate studies including my doctorate focused on freedom of expression. So I got both the legal and traditional background grounded in free speech studies and practices
in unique and diverse ways.
Greene: Can you talk more about whether there is something about Yorùbá language or culture that is uniquely supportive of freedom of
expression?
There’s a proverb that goes, “A kìí pa ohùn mọ agogo lẹ́nu” and what that means in a loose English translation is that you cannot shut the clapperless bell up, it is the bell’s
right to speak, to make a sound. So you have no right to stop a bell from doing what it’s meant to do, it suggests that it is everyone’s right to express themselves. It suffices to
say that according to that proverb, you have no right to stop people from expressing themselves. There’s another proverb that is a bit similar which is,“Ọmọdé gbọ́n, àgbà gbọ́n, lafí
dá ótù Ifẹ̀” which when loosely translated refers to how both the old and the young collaborate to make the most of a society by expressing their wisdom.
Greene: Have you ever had a personal experience with censorship?
Yes and I will talk about two experiences. First, and this might not fit the technical definition of censorship, but there was a time when I lived in Kampala and I had to pay
tax to access the internet which I think is prohibitive for those who are unable to pay it. If people have to make a choice between buying bread to eat and paying a tax to access the
internet, especially when one item is an opportunity cost for the other, it makes sense that someone would choose bread over paying that tax. So you could say it’s a way of censoring
internet users. When you make access prohibitive through taxation, it is also a way of censoring people. Even though I was able to pay the tax, I could not stop thinking about those
who were unable to afford it and for me that is problematic and qualifies as a kind of censorship.
Another one was actually very recent. Even though the internet service provider insisted that they did not shut down or throttle the internet, I remember that during the
recent protests in Nairobi, Kenya in June of 2024, I experienced an internet shutdown for
the first time. According to the internet service
provider, the shutdown was the result of an undersea cable cut. Suddenly my emails just stopped working and my Twitter (now X) feed wouldn’t load. The connection
appeared to work for a few seconds, and then all of a sudden it would stop, then work for some time, then all of a sudden nothing. I felt incapacitated and helpless. That’s the way I
would describe it. I felt like, “Wow, I have written, thought, spoken about this so many times and this is it.” For the first time I understood what it means to actually experience an
internet shutdown and it’s not just the experience, it’s the helplessness that comes with it too.
Greene: Do you think there is ever a time when the government can justify an internet shutdown?
The simple answer is no. In my view, those who carry out internet shutdowns, especially state actors, believe that since freedom of expression and some other associated rights
are not absolute, they have every right to restrict them without measure. I think what many actors that are involved in internet shutdowns use as justification is a mask for their
limited capacity to do the right thing. Actors involved in shutting down the internet say that they usually do not have a choice. For example, they say that hate speech,
misinformation, and online violence are being spread online in such a way that it could spill over into offline violence. Some have even gone as far as saying that they’re shutting
down the internet because they want to curtail examination fraud. When these are the kind of excuses used by actors, it demonstrates the limited understanding of actors on what
international human rights standards prescribe and what can actually be done to address the online harms that are used to justify internet shutdowns.
Let me use an example: international human rights standards provide clear processes for instances where state actors must address online harms or where private actors must
address harms to forestall offline violence. The perception is that these standards do not even give room for addressing harms, which is not the case. The process requires that
whatever action you take must be legal, i.e. be provided clearly in a law, must not be vague, must be unequivocal and show in detail the nature of the right that is limited. Another
requirement says that whatever action to be taken to limit a right must be proportional. If you are trying to fight hate speech online, don’t you think it is disproportionate to shut
down the entire network just to fight one section of people spreading such speech? Another requirement is that its necessity must be justified i.e. to protect clearly defined public
interest or order which must be specific and not the blanket term ‘national security.’ Additionally, international human rights law is clear that these requirements must be cumulative,
i.e. you cannot fulfill the requirement of legality and not fulfill that of proportionality or necessity.
This shows that when trying to regulate online harms, it needs to be very specific. So, for example, state actors can actually claim that a particular content or speech is
causing harm which the state actors must prove according to the requirements above. You can make a request such that just that content alone is restricted. Also these must be put in
context. Using hate speech as an example, there’s the Rabat Plan of Action on hate speech, which was developed by the UN, and it’s very clear on the conditions that must be met before speech can be categorized as hate speech. So are these
conditions met by state actors before, for example, they ask platforms to remove particular hate content? There are steps and processes involved in the regulation of problematic
content, but state actors never go simply for targeted removals that comply with international human rights standards; they usually go for the entire network.
I’d also like to add that I find it problematic and ironic that most state actors who are supposedly champions of digital transformation are also the ones quick to shut down the
internet during political events. There is no digital transformation that does not include a free, accessible and interoperable internet. These are some of the challenges and
problematic issues that I think we need to address in more detail so we can hear each other better, especially when it comes to regulating online speech and fighting internet
shutdowns.
Greene: So shutdowns are then inherently disproportionate and not authorized by law. You talked about the types of speech that might be limited. Can you give us a sense of what
types of online speech you think might be appropriately regulated by governments?
For categories of speech that can be regulated, of course, that includes hate speech. It’s addressed under international law, as provided for under Article 20 of the International Covenant
on Civil and Political Rights (ICCPR), which prohibits propaganda for war, etc. The
International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also provides for this. However,
these applicable provisions are not carte blanche for state actors. The major conditions that must be met before speech qualifies as hate speech
must be fulfilled before it can be regarded as such. This is done in order to address instances where powerful actors define what constitutes hate speech and violate human rights under
the guise of combating it. There are still laws that criminalize disaffection against the state which are used to prosecute dissent.
Greene: In Nigeria or in Kenya or just on the continent in general?
Yes, there are countries that still have lèse-majesté laws in their criminal laws and
penal codes. We’ve had countries like Nigeria that were trying to come up with a version of such laws for the online space, but which have been fought down by mostly civil
society actors.
So hate speech does qualify as speech that could be limited, but with caveats. There are several conditions that must be met before speech qualifies as hate speech. There must
be context around the speech. For example, what kind of power does the person who makes the speech wield? What is the likelihood of that speech leading to violence? What audience has
the speech been made to? These are some of the criteria that must be fulfilled before you say, “okay, this qualifies as hate speech.”
There are also other clearly problematic kinds of content, child sexual abuse material for example, that are prima facie illegal and must be censored or
removed or disallowed. That goes without saying. It’s customary international human rights law especially as it applies to platform governance. Another category of speech could also
be non-consensual sharing of intimate images which could qualify as online gender-based violence. So these are some of the categories that could come under regulation by
states.
I also must sound a note that there are contexts to applying speech laws. It is also the reason why speech laws are one of the most difficult regulations to come up with because
they are usually context-dependent especially when they are to be balanced against international human rights standards. Of course, some of the biggest fears in platform
regulation that touch on freedom of expression is how state actors could weaponize those laws to track or to attack dissent and how businesses platform speech mainly for
profit.
Greene: Is misinformation something the government should have a role in regulating or is that something that needs to be regulated by the companies or by the speakers? If it’s
something we need to worry about, who has a role in regulating it?
State actors have a role. But in my opinion I don’t think it’s regulation. The fact that you have a hammer does not mean that everything must look like a nail. The fact that a
state actor has the power to make laws does not mean that it must always make laws on all social problems. I believe non-legal and multi-stakeholder solutions are required for
combatting online harms. State actors have tried to do what they do best by coming up with laws that regulate misinformation. But where has that led us? The arrest and harassment of
journalists, human rights defenders and activists. So it has really not solved any problems.
When your approach is not solving any problems, I think it’s only right to re-evaluate. That’s the reason I said state actors have a role. In my view, state actors need to step
back in a sense that you don’t necessarily need to leave the scene, but step back and allow for a more holistic dialogue among stakeholders involved in the information ecosystem. You
could achieve a whole lot more through digital literacy and skills than you will with criminalizing misinformation. You can do way more by supporting journalists with fact-checking
skills than you will ever achieve by passing overbroad laws that limit access to information. You can do more by working with stakeholders in the information ecosystem like platforms
to label problematic content than you will ever by shutting down the internet. These are some of the non-legal methods that could be used to combat misinformation and actually get
results. So, state actors have a role, but it is mainly facilitatory in the sense that it should bring stakeholders together to brainstorm on what the contexts are and the kinds of
useful solutions that could be applied effectively.
Greene: What do you feel the role of the companies should be?
Companies also have an important role, one of which is to respect human rights in the course of providing services. What I always say for technology companies is that, if a
certain jurisdiction or context is good enough to make money from, it is good enough to pay attention to and respect human rights there.
One of the perennial issues that platforms face in addressing online harms is aligning their community standards with international human rights standards. But oftentimes what
happens is that corporate-speak is louder than the human rights language in many of these standards.
That said, some of the practical things that platforms could do is to step out of the corporate talk of, “Oh, we’re companies, there’s not much we can do.” There’s a lot they
can do. Companies need to get more involved, step into the arena and work with key actors, including states and civil society, to educate and develop capacity on how their
platforms actually work. For example, what are the processes involved, for example, in taking down a piece of content? What are the processes involved in getting appeals? What are the
processes involved in actually getting redress when a piece of content has been wrongly taken down? What are the ways platforms can accurately—and I say accurately emphatically
because I’m not speaking about using automated tools—label content? Platforms also have responsibilities in being totally invested in the contexts they do business in. What are the
triggers for misinformation in a particular country? Elections, conflict, protests? These are like early warning signs that platforms need to start paying attention to in order to be
able to understand their contexts and be able to address the harms on their platforms better.
Greene: What’s the most pressing free speech issue in the region in which you work?
Well, for me, I think of a few key issues. Number one, which has been going on for the longest time, is the government’s use of laws to stifle free speech. Most of the laws that
are used are cybercrime laws, electronic communication laws, and old press codes and criminal codes. They were never justified and they’re still not justified.
A second issue is the privatization of speech by companies regarding the kind of speech that gets promoted or demoted. What are the guidelines on, for example, political
advertisements? What are the guidelines on targeted advertisement? How are people’s data curated? What is it like in the algorithm black box? Platforms’ roles on who says what,
how, when and where also is a burning free speech issue. And we are moving towards a future where speech is being commodified and privatized. Public media, for example, are now
being relegated to the background. Everyone wants to be on social media and I’m not saying that’s a terrible thing, but it gives us a lot to think about, a lot to chew
on.
Greene: And finally, who is your free speech hero?
His name is Felá Aníkúlápó Kútì. Fela was a political musician and the originator of Afrobeat, not
afrobeats with an “s,” but the original Afrobeat from which that genre came. Fela never started out as a political musician, but his music became highly political and highly popular
among the people for obvious reasons. His music also became timely because, as a political musician in Nigeria who lived during the brutal military era, it resonated with a lot of
people. He was a huge thorn in the flesh of despotic Nigerian and African leaders. So, for me, Fela is my free speech hero. He said quite a lot with his music that many people in his
generation would never dare to say because of the political climate at that time. Taking such risks even in the face of brazen violence and even death was remarkable.
Fela was not just a political musician who understood the power of expression. He was also someone who understood the power of visual expression. He’s unique in his own way and
expresses himself through music, through his lyrics. He’s someone who has inspired a lot of people including musicians, politicians and a lot of new generation activists.
>> mehr lesen
A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029
(Tue, 10 Dec 2024)
The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally. The focus of our work in Europe is to ensure
that EU tech policy is made responsibly and lives up to its potential to protect users everywhere.
As the new mandate of the European institution begins – a period where newly elected policymakers set legislative
priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers
users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and success in the EU, we will continue to
advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the
world.
Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and
surveillance, and AI regulation. Here’s a sneak peek:
The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the
fundamental rights of users in the EU and beyond.
The EU must create conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights centered provisions
of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies.
The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design and protect children online without reverting to harmful age
verification methods that undermine the fundamental rights of all users.
The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and
prioritize the rights-respecting enforcement of the AI Act.
Read on for our full set of
recommendations.
FTC Rightfully Acts Against So-Called “AI Weapon Detection” Company Evolv
(Fri, 06 Dec 2024)
The Federal Trade Commission has entered a settlement with self-styled
“weapon detection” company Evolv, to resolve the FTC’s claim that the company “knowingly
and repeatedly” engaged in “unlawful” acts of misleading claims about their technology. Essentially, Evolv’s technology, which is in schools, subways, and stadiums,
does far less than they’ve been claiming.
The FTC alleged in their complaint that despite the lofty claims made by Evolv,
the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal
detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects
that are not weapons.” A typical contract for Evolv costs tens of thousands of dollars per year—five times the cost of traditional metal
detectors. One district in Kentucky spent $17 million to outfit its schools with the software.
The settlement requires that the many schools which use this technology to keep weapons out of classrooms be notified that they are allowed to cancel their contracts. It also blocks
the company from making any representations about their technology’s:
ability to detect weapons
ability to ignore harmless personal items
ability to detect weapons while ignoring harmless personal items
ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags
The company also is prohibited from making statements regarding:
Weapons detection accuracy, including in comparison to the use of metal detectors
False alarm rates, including comparisons to the use of metal detectors
The speed at which visitors can be screened, as compared to the use of metal detectors
Labor costs, including comparisons to the use of metal detectors
Testing, or the results of any testing
Any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other
automated systems or tools.
If the company can’t say these things anymore…then what do they even have left to sell?
There’s a reason so many people accuse artificial intelligence of being “snake oil.” Time and again, a company takes public data in order to power “AI” surveillance, only for
taxpayers to learn it does
no such thing. “Just walk out” stores actually required people watching you on camera to determine what you purchased. Gunshot
detection software that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots “rarely produces evidence of a gun-related
crime.” There’s a lot of well-justified suspicion about what’s really going on within the black box of corporate secrecy in which artificial intelligence so often
operates.
Even when artificial intelligence used by the government isn’t “snake oil,” it often does more harm than good. AI systems can introduce or exacerbate harmful biases that have
massive negative impacts on people’s lives. AI systems have been implicated in falsely accusing people of welfare fraud, increasing racial bias in jail sentencing as well as policing and crime prediction, and falsely identifying people as suspects based on facial
recognition.
Now, the politicians, schools, police departments, and private venues have been duped again. This time, by Evolv, a company which purports to sell “weapon detection technology”
which they claimed would use AI to scan people entering a stadium, school, or museum and theoretically alert authorities if it recognizes the shape of a weapon on a
person.
Even before the new FTC action, there was indication that this technology was not an effective solution to weapon-based violence. From July to October, New York City rolled out
a trial of Evolv technology in 20 subway stations in an attempt to keep people from bringing weapons onto the transit system. Out of 2,749 scans there were 118 false positives. Twelve knives and no guns were
recovered.
Make no mistake, false positives are dangerous. Falsely telling officers to
expect an armed individual is a recipe for an
unarmed person to be injured or even killed.
Cities, performance venues, schools, and transit systems are understandably eager to do something about violence–but throwing money at the problem by buying unproven technology
is not the answer and actually takes away resources and funding from more proven and systematic approaches. We applaud the FTC for standing up to the lucrative security theater
technology industry.
This Bill Could Put A Stop To Censorship By Lawsuit
(Thu, 05 Dec 2024)
For years now, deep-pocketed individuals and corporations have been turning to civil lawsuits to silence their opponents. These Strategic Lawsuits Against Public Participation, or SLAPPs, aren’t designed to win on the merits, but rather to harass journalists,
activists, and consumers into silence by suing them over their protected speech. While 34 states have laws to protect against
these abuses, there is still no protection at a federal level.
Today, Reps. Jamie Raskin (D-MD) and Kevin Kiley (R-CA) introduced the bipartisan
Free Speech Protection Act. This bill is the best chance
we’ve seen in many years to secure strong federal protection for journalists, activists, and everyday people who have been subject to harassing meritless lawsuits.
take action
Tell Congress: We don't want a weaponized court system
The Free Speech Protection Act is a long overdue tool to protect against the use of SLAPP lawsuits as legal weapons that benefit the wealthy and powerful. This bill will help everyday
Americans of all political stripes who speak out on local and national issues.
Individuals or companies who are publicly criticized (or even simply discussed) will sometimes use SLAPP suits to intimidate their critics. Plaintiffs who file these suits don’t
need to win on the merits, and sometimes they don’t even intend to see the case through. But the stress of the lawsuit and the costly legal defense alone can silence or chill the free
speech of defendants.
State anti-SLAPP laws work. But since state laws are often not applicable in federal court, people and companies can still maneuver to manipulate the court system, filing cases in
federal court or in states with weak or nonexistent anti-SLAPP laws.
SLAPPs All Around
SLAPP lawsuits in federal court are increasingly being used to target activists and online critics. Here are a few recent examples:
Coal Ash Company Sued Environmental Activists
In 2016, activists in Uniontown, Alabama—a poor, predominantly Black town with a median per capita income of around $8,000—were sued for $30 million by a Georgia-based
company that put hazardous coal ash into Uniontown’s residential landfill. The activists were sued over statements on their website and Facebook page, which said things like
the landfill “affected our everyday life,” and, “You can’t walk outside, and you cannot breathe.” The plaintiff settled the case after the ACLU stepped in to defend the activist
group.
Shiva Ayyadurai Sued A Tech Blog That Reported On Him
In 2016, technology blog Techdirt published articles disputing Shiva Ayyadurai’s claim to
have “invented email.” Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit in federal court. Masnick, an EFF Award winner, fought back in court and
his reporting remains online, but the legal fees had a big effect on his business. With a strong federal anti-SLAPP law, more writers and publishers will be able to fight back against
bullying lawsuits without resorting to crowd-funding.
Logging Company Sued Greenpeace
In 2016, environmental non-profit Greenpeace was sued along with several individual activists by Resolute Forest Products. Resolute sued over blog post statements such as Greenpeace’s allegation that Resolute’s
logging was “bad news for the climate.” (After four years of litigation, Resolute was ordered to pay nearly $1 million
in fees to Greenpeace—because a judge found that California’s strong anti-SLAPP law should apply.)
Congressman Sued His Twitter Critics And Media Outlets
In 2019, anonymous Twitter accounts were sued by Rep. Devin
Nunes, then a congressman representing parts of Central California. Nunes used lawsuits to attempt to unmask and punish two Twitter users who used the handles
@DevinNunesMom and @DevinCow to criticize his actions as a politician. Nunes filed these actions in a state court in Henrico County, Virginia. The location had little connection to
the case, but Virginia’s weak anti-SLAPP law has enticed many plaintiffs there.
Over the next few years, Nunes went on to sue many other journalists who published critical articles about him, using state and federal courts to sue CNN, The Washington Post, his hometown paper The Fresno Bee, MSNBC, a group of his own
constituents, and others. Nearly all of these lawsuits were dropped or dismissed by courts. If a federal anti-SLAPP law were in place, more defendants would have a chance of
dismissing such lawsuits early and recouping their legal fees.
Fast Relief From SLAPPs
The Free Speech Protection Act gives defendants in SLAPP suits a powerful tool to defend themselves.
The bill would allow a defendant sued for speaking out on a matter of public concern to file a special motion to dismiss, which the court must generally decide on within 90
days. If the court grants the speaker-defendant’s motion, the claims are dismissed. In many situations, defendants who prevail on an anti-SLAPP motion will be entitled to have the
plaintiff reimburse them for their legal fees.
take action
Tell Congress to pass the Free Speech Protection Act
EFF has been defending the rights of online speakers for more than 30 years. A strong federal anti-SLAPP law will bring us
closer to the vision of an internet that allows anyone to speak out and organize for change, especially when they speak against those with more power and resources. Anti-SLAPP laws
enhance the rights of all. We urge Congress to pass The Free Speech Protection Act.
Let's Answer the Question: "Why is Printer Ink So Expensive?"
(Thu, 05 Dec 2024)
Did you know that most printer ink isn’t even expensive to make? Why
then is it so expensive to refill the ink on your printer?
The answer is actually pretty simple: monopolies, weird laws, and companies exploiting their users for profit. If this sounds mildly infuriating and makes
you want to learn ways to fight back, then head over to our new site, Digital
Rights Bytes! We’ve even created a short video to explain what the heck is going on here.
We’re answering the common tech questions that may be bugging you. Whether you’re hoping to learn something new or want to share resources with your family
and friends, Digital Rights Bytes can be your one-stop-shop to learn more about the technology you use every day.
Digital Rights Bytes also has answers to other common questions about device repair, ownership of your digital media, and more. If you’ve got additional
questions you’d like us to tackle in the future, let us know on your favorite social platform using
the hashtag #DigitalRightsBytes!
Location Tracking Tools Endanger Abortion Access. Lawmakers Must Act Now.
(Wed, 04 Dec 2024)
EFF wrote recently about Locate
X, a deeply troubling location tracking tool that allows users to see the precise whereabouts of individuals based on the locations of their smartphone devices.
Developed and sold by the data surveillance company Babel Street, Locate X collects smartphone location data from a variety of sources and collates that data into an easy-to-use tool
to track devices. The tool features a navigable map with red dots, each representing an individual device. Users can then follow the location of specific devices as they move about
the map.
Locate X–and other similar
services–are able to do this by taking advantage of our largely unregulated location data market.
Unfettered location tracking puts us all at risk. Law enforcement agencies can purchase their way around warrant requirements
and bad actors can pay for services that
make it easier to engage in stalking and harassment. Location tracking tools particularly threaten groups especially vulnerable to targeting, such as immigrants, the LGBTQ+ community, and even U.S. intelligence personnel abroad. Crucially, in a
post-Dobbs United States, location surveillance also poses a serious danger to abortion-seekers across the country.
EFF has warned before about how the
location data market threatens reproductive rights. The recent reports on Locate X illustrate even more starkly how the collection and sale of location data endangers patients in
states with abortion bans and restrictions.
In late October, 404 Media
reported that privacy advocates from Atlas Privacy, a data removal company, were able to get their hands on Locate
X and use it to track an individual device’s location data as it traveled across state lines to visit an abortion clinic. Although the tool was designed for law enforcement, the
advocates gained access by simply asserting that they planned to work with law enforcement in the future. They were then able to use the tool to track an individual device as it
traveled from an apparent residence in Alabama, where there is a complete abortion ban, to a reproductive health clinic in Florida, where abortion is banned after 6 weeks of
pregnancy.
Following this report, we published a
guide to help people shield themselves from tracking tools like Locate X. While we urge everyone to take appropriate technical precautions for their situation, it’s
far past time to address the issue at its source. The onus shouldn’t be on individuals to protect themselves from such invasive surveillance. Tools like Locate X only exist because
U.S. lawmakers have failed to enact legislation that would protect our location data from being bought and sold to the highest bidder.
Thankfully, there’s still time to reshape the system, and there are a number of laws legislators could pass today to help protect us from mass location surveillance. Remember:
when our location information is for sale, so is our safety.
Blame Data Brokers and the Online Advertising Industry
There are a vast array of apps available for your smartphone that request access to your location. Sharing this information, however, may allow your location data to be
harvested and sold to shadowy companies known as data brokers. Apps request access to device location to provide various features, but once access has been granted, apps can mishandle
that information and are free to share and sell your whereabouts to third parties, including data brokers. These companies collect data showing the precise movements of hundreds of
millions of people without their knowledge or meaningful consent. They then make this data available to anyone willing to pay, whether that’s a private company like Babel Street (and
anyone they in turn sell to) or government agencies, such as law enforcement, the military, or
ICE.
This puts everyone at risk. Our location data reveals far more than most people realize, including where we live and work, who we spend time with, where we worship, whether
we’ve attended protests or political gatherings, and when and where we seek medical care—including reproductive healthcare.
Without massive troves of commercially available location data, invasive tools like Locate X would not exist.
For years, EFF has warned about the risk of law enforcement or
bad actors using commercially available location data to track and punish abortion seekers. Multiple data brokers have specifically targeted and sold location information tied to
reproductive healthcare clinics. The data broker SafeGraph, for example, classified Planned Parenthood as a “brand” that could be tracked,
allowing investigators at Motherboard to purchase data for over 600 Planned Parenthood facilities across the U.S.
Meanwhile, the data broker Near sold the location data of abortion-seekers
to anti-abortion groups, enabling them to send targeted anti-abortion ads to people who visited clinics. And location data firm Placer.ai even once offered heat maps showing where visitors to
Planned Parenthood clinics approximately lived. The sale of such data to private actors is disturbing given that several states have introduced and passed
abortion “bounty hunter” laws, which allow private citizens to enforce abortion restrictions by suing abortion-seekers for cash.
Government officials in abortion-restrictive states are also targeting location information (and
other personal data) about people who visit abortion clinics. In Idaho, for example, law enforcement
used cell phone data to charge a mother and son with kidnapping for aiding an abortion-seeker who traveled across state lines to receive care. While police can obtain this data by
gathering evidence and requesting a warrant based on probable cause, the data broker industry allows them to bypass legal requirements and buy this information en masse, regardless of
whether there’s evidence of a crime.
Lawmakers Can Fix This
So far, Congress and many states have failed to enact legislation that would meaningfully rein in the data broker industry and protect our location information. Locate X is
simply the end result of such an unregulated data ecosystem. But it doesn’t have to be this way. There are a number of laws that Congress and state legislators could pass right now
that would help protect us from location tracking tools.
1. Limit What Corporations Can Do With Our Data
A key place to start? Stronger consumer privacy protections. EFF has consistently pushed for legislation that would limit
the ability of companies to harvest and monetize our data. If we enforce strict rules on how location data is collected, shared, and sold, we can stop it from ending up in the hands
of private surveillance companies and law enforcement without our consent.
We urge legislators to consider comprehensive, across-the-board data privacy
laws. Companies should be required to minimize the collection and processing of location data to only what is strictly necessary to offer the service the user
requested (see, for example, the recently-passed Maryland Online Data Privacy Act).
Companies should also be prohibited from processing a person’s data, except with their informed, voluntary, specific, opt-in consent.
We also support reproductive health-specific data privacy laws, like Rep. Sara Jacobs’ proposed “My Body My Data” Act. Laws like this would create important protections for a variety of
reproductive health data, even beyond location data. Abortion-specific data privacy laws can provide some protection against the specific problem posed by Locate X. But to fully
protect against location tracking tools, we must legally limit processing of all location data and not just data at sensitive locations, such as
reproductive healthcare clinics.
While a limited law might provide some help, it would not offer foolproof protection. Imagine this scenario: someone travels from Alabama to New York for abortion care. With a
data privacy law that protects only sensitive, reproductive health locations, Alabama police could still track that person’s device on the journey to New York. Upon reaching the
clinic in New York, their device would disappear into a sensitive location blackout bubble for a couple of hours, then reappear outside of the bubble where police could resume
tracking as the person heads home. In this situation, it would be easy to infer where the person was during those missing two hours, giving Alabama police the lead they need.
The best solution is to minimize all location data, no exceptions.
2. Limit How Law Enforcement Can Get Our Data
Congress and state legislatures should also pass laws limiting law enforcement’s ability to access our location data without proper legal safeguards.
Much of our mobile data, like our location data, is information law enforcement would typically need a court order to access. But thanks to the data broker industry, law
enforcement can skip the courts entirely and simply head to the commercial market. The U.S. government has turned this loophole into a way to gather personal data on individuals without a search
warrant.
Lawmakers must close this loophole—especially if they’re serious about protecting abortion-seekers from hostile law enforcement in abortion-restrictive states. A key way to do
this is for Congress to pass the Fourth Amendment is
Not For Sale Act, which was originally introduced by Senator Ron Wyden in 2021 and took the important and historic step of passing the U.S. House of Representatives earlier this year.
Another crucial step is to ban law enforcement from sending “geofence warrants” to corporate holders of location data. Unlike
traditional warrants, a geofence warrant doesn’t start with a particular suspect or even a device or account; instead police request data on every device in a given geographic area
during a designated time period, regardless of whether the device owner has any connection to the crime under investigation. This could include, of course, an abortion
clinic.
Notably, geofence warrants are very popular with law enforcement. Between 2018 and 2020, Google alone received more than 5,700 demands of this type from states that now have anti-abortion and
anti-LGBTQ legislation on the books.
Several federal and state courts have already found individual geofence warrants to be
unconstitutional and some have even ruled they are “categorically prohibited by the Fourth
Amendment.” But instead of waiting for remaining courts to catch up, lawmakers should take action now, pass legislation banning geofence
warrants, and protect all of us–abortion-seekers included–from this form of dragnet surveillance.
3. Make Your State a Data Sanctuary
In the wake of the Dobbs decision, many states stepped up to serve as health care sanctuaries for people seeking abortion care that they
could not access in their home states. To truly be a safe refuge, these states must also be data sanctuaries. A state that has data about people who
sought abortion care must protect that data, and not disclose it to adversaries who would use it to punish them for seeking that healthcare. California has already passed laws to this effect, and more
states should follow suit.
What You Can Do Right Now
Even before lawmakers act, there are steps you can take to better shield your location data from tools like Locate X. As noted above, we published a Locate X-specific guide several weeks ago.
There are also additional tips on EFF’s Surveillance
Self-Defense site, as well as many other resources available to provide more
guidance in protecting your digital privacy. Many general privacy practices also offer strong protection against location tracking.
But don’t stop there: we urge you to make your voice heard and contact your representatives. While these precautions offer immediate protection, only stronger laws will ensure
comprehensive location privacy in the long run.
Top Ten EFF Digital Security Resources for People Concerned About the Incoming Trump Administration
(Wed, 04 Dec 2024)
In the wake of the 2024 election in the United States, many people are concerned about tightening up their digital privacy and security practices. As always, we recommend that
people start making their security plan by understanding their risks. For most people in the
U.S., the threats that they face and the methods by which they are likely to be surveilled or harassed have not changed, but the consequences of digital privacy or security failures
may become much more serious, especially for vulnerable populations such as journalists, activists, LGBTQ+ people, people seeking or providing abortion-related care, Black or
Indigenous people, and undocumented immigrants.
EFF has decades of experience in providing digital privacy and security resources, particularly for vulnerable people. We’ve written a lot of resources over the years and here
are the top ten that we think are most useful right now:
1. Surveillance Self-Defense
https://ssd.eff.org/
Our Surveillance Self-Defense guides are a great place to start your journey of securing yourself against digital threats. We know that it can be a bit overwhelming, so we
recommend starting with our guide on making a security plan so you can familiarize yourself with
the basics and decide on your specific needs. Or, if you’re planning to head out to a protest soon and want to know the most important ways to protect yourself, check out our guide
to Attending a Protest. Many people in the groups most likely to be targeted in the upcoming
months will need advice tailored to their specific threat models, and for that we recommend the Security Scenarios module as a quick way to find the right information for your particular
situation.
2. Street-Level Surveillance
https://sls.eff.org/
If you are creating your security plan for the first time, it’s helpful to know which technologies might realistically be used to spy on you. If you’re going to be out on the
streets protesting or even just existing in public, it’s important to identify which threats to take seriously. Our Street-Level Surveillance team has spent years studying the
technologies that law enforcement uses and has made this handy website where you can find information about technologies including drones, face recognition, license plate readers,
stingrays, and more.
3. Atlas Of Surveillance
https://atlasofsurveillance.org/
Once you have learned about the different types of surveillance technologies police can acquire from our Street-Level surveillance guides, you might want to know which
technologies your local police has already bought. You can find that in our Atlas of Surveillance, a crowd-sourced map of police surveillance technologies in the United
States.
4. Doxxing: Tips To Protect Yourself Online & How to Minimize Harm
https://www.eff.org/deeplinks/2020/12/doxxing-tips-protect-yourself-online-how-minimize-harm
Surveillance by governments and law enforcement is far from the only kind of threat that people face online. We expect to see an increase in doxxing and harassment of vulnerable
populations by vigilantes, emboldened by the incoming administration’s threatened policies. This guide is our thinking around the precautions you may want to take if you are
likely to be doxxed and how to minimize the harm if you’ve been doxxed already.
5. Using Your Phone in Times of Crisis
https://www.eff.org/deeplinks/2022/03/using-your-phone-times-crisis
Using your phone in general can be a cause for anxiety for many people. We have a short guide on what considerations you should make when you are using your phone in times of
crisis. This guide is specifically written for people in war zones, but may also be useful more generally.
6. Surveillance Self-Defense for Campus Protests
https://www.eff.org/deeplinks/2024/06/surveillance-defense-campus-protests
One prediction we can safely make for 2025 is that campus protests will continue to be important. This blog post is our latest thinking about how to put together your security
plan before you attend a protest on campus.
7. Security Education Companion
https://www.securityeducationcompanion.org/
For those who are already comfortable with Surveillance Self-Defense, you may be getting questions from your family, friends, or community about what to do now. You may even
consider giving a digital security training session to people in your community, and for that you will need guidance and training materials. The Security Education Companion has
everything you need to get started putting together a training plan for your community, from recommended lesson plans and materials to guides on effective teaching.
8. Police Location Tracking
https://www.eff.org/deeplinks/2024/11/creators-police-location-tracking-tool-arent-vetting-buyers-heres-how-protect
One police surveillance technology we are especially concerned about is location tracking services. These are data brokers that get your phone's location, usually through the
same invasive ad networks that are baked into almost every app, and sell that information to law enforcement. This can include historical maps of where a specific device has been, or
a list of all the phones that were at a specific location, such as a protest or abortion clinic. This blog post goes into more detail on the problem and provides a guide on how to
protect yourself and keep your location private.
9. Should You Really Delete Your Period Tracking App?
https://www.eff.org/deeplinks/2022/06/should-you-really-delete-your-period-tracking-app
As soon as the Supreme Court overturned Roe v. Wade, one of the most popular bits of advice going around the internet was to “delete your period
tracking app.” Deleting your period tracking app may feel like an effective countermeasure in a world where seeking abortion care is increasingly risky and criminalized, but it’s not
advice that is grounded in the reality of the ways in which governments and law enforcement currently gather evidence against people who are prosecuted for their pregnancy outcomes.
This blog post provides some more effective ways of protecting your privacy and sensitive information.
10. Why We Can’t Just Tell You Which Messenger App to Use
https://www.eff.org/deeplinks/2018/03/why-we-cant-give-you-recommendation
People are always asking us to give them a recommendation for the best end-to-end encrypted messaging app. Unfortunately, this is asking for a simple answer to an extremely
nuanced question. While the short answer is “probably Signal most of the time,” the long answer goes into why that is not always the case. Since we wrote this in 2018, some companies
have come and gone, but our thinking on this topic hasn’t changed much.
Bonus external guide
https://digitaldefensefund.org/learn
Our friends at the Digital Defense Fund have put together an excellent collection of guides aimed at particularly vulnerable people who are thinking about digital security for
the first time. They have a comprehensive collection of links to other external guides as well.
***
EFF is committed to keeping our privacy and security advice accurate and up-to-date, reflecting the needs of a variety of vulnerable populations. We hope these resources will
help you keep yourself and your community safe in dangerous times.
Speaking Freely: Aji Fama Jobe
(Tue, 03 Dec 2024)
*This interview has been edited for length and clarity.
Aji Fama Jobe is a digital creator, IT consultant, blogger, and tech community leader from The Gambia. She helps run Women TechMakers Banjul, an organization that provides
visibility, mentorship, and resources to women and girls in tech. She also serves as an Information Technology Assistant with the World Bank Group where she focuses on resolving IT
issues and enhancing digital infrastructure. Aji Fama is a dedicated advocate working to leverage technology to enhance the lives and opportunities of women and girls in Gambia and
across Africa.
Greene: Why don’t you start off by introducing yourself?
My name is Aji Fama Jobe. I’m from Gambia and I run an organization called Women TechMakers Banjul that provides resources to women and girls in Gambia, particularly in the
Greater Banjul area. I also work with other organizations that focus on STEM and digital literacy and aim to impact more regions and more people in the world. Gambia is made up of six
different regions and we have host organizations in each region. So we go to train young people, especially women, in those communities on digital literacy. And that’s what I’ve been
doing for the past four or five years.
Greene: So this series focuses on freedom of expression. What does freedom of expression mean to you personally?
For me it means being able to express myself without being judged. Because most of the time—and especially on the internet because of a lot of cyber bullying—I tend to think a
lot before posting something. It’s all about, what will other people think? Will there be backlash? And I just want to speak freely. So for me it means to speak freely without being
judged.
Greene: Do you feel like free speech means different things for women in the Gambia than for men? And how do you see this play out in the work that you do?
In the Gambia we have freedom of expression, the laws are there, but the culture is the opposite of the laws. Society still frowns on women who speak out, not just in the
workspace but even in homes. Sometimes men say a woman shouldn’t speak loud or there’s a certain way women should express themselves. It’s the culture itself that makes women not speak up in
certain situations. In our culture it’s widely accepted that you let the man or the head of the family—who’s normally a man, of course—speak. I feel like freedom of speech is really
important when it comes to the work we do. Because women should be able to speak freely. And when you speak freely it gives you that confidence that you can do something. So it’s a
larger issue. What our organization does on free speech is address the unconscious bias in the tech space that impacts working women. I work as an IT consultant and sometimes when
we’re trying to do something technical people always assume IT specialists are men. So sometimes we just want to speak up and say, “It’s IT woman, not IT guy.”
Greene: We could say that maybe socially we need to figure this out, but now let me ask you this. Do you think the government has a role in regulating online speech?
Those in charge of policy enforcement don’t understand how to navigate these online pieces. It’s not just about putting the policies in place. They need to train people how to
navigate this thing or how to update these policies in specific situations. It’s not just about what the culture says. The policy is the policy and people should follow the rules, not
just as civilians but also as policy enforcers and law enforcement. They need to follow the rules, too.
Greene: What about the big companies that run these platforms? What’s their role in regulating online speech?
With cyber-bullying I feel like the big companies need to play a bigger role in trying to bring down content sometimes. Take Facebook for example. They don’t have many people
that work in Africa and understand Africa with its complexities and its different languages. For instance, in the Gambia we have 2.4 million people but six or seven languages. On the
internet people use local languages to do certain things. So it’s hard to moderate on the platform’s end, but also they need to do more work.
Greene: So six local languages in the Gambia? Do you feel there’s any platform that has the capability to moderate that?
In the Gambia? No. We have some civil society that tries to report content, but it’s just civil society and most of them do it on a voluntary basis, so it’s not that strong. The
only thing you can do is report it to Facebook. But Facebook has bigger countries and bigger issues to deal with, and you end up waiting in a lineup of those issues and then the
damage has already been done.
Greene: Okay, let’s shift gears. Do you consider the current government of the Gambia to be democratic?
I think it is pretty democratic because you can speak freely after 2016 unlike with our last
president. I was born in an era when people were not able to speak up. So I can only compare the last regime and the current one. I think now it’s more democratic because people are able to speak out online. I can remember back
before the elections of 2016 that if you said certain things online you had to
move out of the country. Before 2016 people who were abroad would not come back to Gambia for fear of facing reprisal for content they had posted online. Since 2016 we have seen
people we hadn’t seen for like ten or fifteen years. They were finally able to come back.
Greene: So you lived in the country under a non-democratic regime with the prior administration. Do you have any personal stories you could tell about life before 2016 and feeling
like you were censored? Or having to go outside of the country to write something?
Technically it was a democracy but the fact was you couldn’t speak freely. What you said could get you in trouble—I don’t consider that a democracy.
During the last regime I was in high school. One thing I realized was that there were certain political things teachers wouldn’t discuss because they had to protect themselves.
At some point I realized things changed because before 2016 we didn’t say the president’s name. We would give him nicknames, but the moment the guy left power we felt free to say his
name directly. I experienced censorship from not being able to say his name or talk about him. I realized there was so much going on when the Truth, Reconciliation, and Reparations Commission (TRRC) happened and people
finally had the confidence to go on TV and speak about their stories.
As a young person I learned that what you see is not everything that’s happening. There were a lot of things that were happening but we couldn’t see because the media was
restricted. The media couldn’t publish certain things. When he left and through the TRRC we learned about what happened. A lot of people lost their lives. Some had to flee. Some people
lost their mom or dad or some got raped. I think that opened my world. Even though I’m not politically inclined or in the political space, what happened there impacted me. Because we
had a political moment where the president didn’t accept the elections, and a lot of people fled and went to Senegal. I stayed like three or four months and the whole country was on
lockdown. So that was my experience of what happens when things don’t go as planned when it comes to the electoral process. That was my personal experience.
Greene: Was there news media during that time? Was it all government-controlled or was there any independent news media?
We had some independent news media, but those were from Gambians outside of the country. The media that was inside the country couldn’t publish anything against the government.
If you wanted to know what was really happening, you had to go online. At some point, WhatsApp was blocked so we had to move to Telegram and other social media. I also remember that
at some point, because my dad was in Iraq, I had to download a VPN so I could talk to him and tell him what was happening in the country, because my mom and I were there. That’s why
when people censor the internet I’m really keen on that aspect because I’ve experienced that.
Greene: What made you start doing the work you’re doing now?
First, when I started doing computer science—I have a computer science background—there was no one there to tell me what to do or how to do it. I had to navigate things for
myself or look for people to guide me. I just thought, we don’t have to repeat the same thing for other people. That’s why we started Women TechMakers. We try to guide people and
train them. We want employers to focus on skills instead of gender. So we get to train people, we have a lot of book plans and online resources that we share with people. If you want
to go into a certain field we try to guide you and send you resources. That’s one of the things we do. Just for people to feel confident in their skills. And everyday people say to
me, “Because of this program I was able to get this thing I wanted,” like a job or an event. And that keeps me going. Women get to feel confident in their skills and in the places
they work, too. Companies are always looking for diversity and inclusion. Like, “oh I have two female developers.” At the end of the day you can say you have two developers and
they’re very good developers. And yeah, they’re women. It’s not like they’re hired because they’re women, it’s because they’re skilled. That’s why I do what I do.
Greene: Is there anything else you wanted to say about freedom of speech or about preserving online open spaces?
I work with a lot of technical people who think freedom of speech is not their issue. But what I keep saying to people is that you think it’s not your issue until you experience
it. But freedom of speech and digital rights are everybody’s issues. Because at the end of the day if you don’t have that freedom to speak freely online or if you are not protected
online we are all vulnerable. It should be everybody’s responsibility. It should be a collective thing, not just government making policies. But also people need to be aware of what
they’re posting online. The words you put out there can make or break someone, so it’s everybody’s business. That’s how I see digital rights and freedom of expression. As a collective
responsibility.
Greene: Okay, our last question that we ask everybody. Who is your free speech hero?
My mom’s elder sister. She passed away in 2015, but her name is Mariama Jaw and she was in the political space even during the time when people were not able to
speak. She was my hero because I went to political rallies with her and she would say what people were not willing to say. Not just in political spaces, but in general conversation,
too. She’s somebody who would tell you the truth no matter what would happen, whether her life was in danger or not. I got so much inspiration from her because a lot of women don’t go
into politics or do certain things and they just want to get a husband, but she went against all odds and she was a politician, a mother and sister to a lot of people, to a lot of
women in her community.
Today’s Double Feature: Privacy and Free Speech
(Tue, 03 Dec 2024)
It’s Power Up Your Donation Week! Right now, your contribution to the Electronic Frontier Foundation will go twice as far to protect digital privacy, security, and
free speech rights for everyone. Will you donate today to get a free 2X match? Power Up!
Give to EFF and get a free donation match
Thanks to a fund made by a group of dedicated supporters, your donation online gets an automatic match up to $307,200 through December 10! This means every dollar you
give equals two dollars to fight surveillance,
oppose censorship, defend encryption, promote open
access to information, and much more. EFF makes every cent count.
Lights, Laptops, Action!
Who has time to decode tech policy, understand the law, then figure out how to change things for the users? EFF does. The purpose of every attorney, activist, and technologist at EFF
is to watch your back and make technology better. But you are the superstar who makes it possible with your support.
'Fix Copyright' member shirt inspired by Steamboat Willie entering the public domain.
With the help of people like you, EFF has been able to help unravel legal and ethical questions surrounding the rise of AI; keep policymakers on the road to net neutrality; encourage the Fifth Circuit Court of Appeals to rule that
location-based geofence warrants are
unconstitutional; and explain why banning TikTok
and passing laws like the Kids Online Safety Act (KOSA) will not achieve
internet safety.
The world struggles to get tech right, but EFF’s experts advocate for you every day of the year. Take action by renewing your EFF membership! You can set the stage for civil
liberties and human rights online for everyone. Please give today and let your donation go twice as far for digital rights! Power Up!
Support internet freedom
(and get an Instant match!)
Already an EFF Member?
Strengthen the community when you help us spread the word about Power Up Your Donation Week! Here’s some sample language that you can share:
Donate to EFF this week for an instant match! Double your impact on digital privacy, security, and free speech rights for everyone. https://eff.org/power-up
Bluesky |
Email | Facebook | LinkedIn |
X
(More at eff.org/social)
Each of us has the power to help in the movement for internet freedom. Our future depends on forging a web where we can have private conversations and explore the world online with
confidence, so I thank you for your moral support and hope to have you on EFF's side as a member, too.
________________________
EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible
as allowed by law.
Amazon and Google Must Keep Their Promises on Project Nimbus
(Mon, 02 Dec 2024)
When a company makes a promise, the public should be able to rely on it. Today, nearly every person in the U.S. is a customer of either Amazon or Google—and many of us are
customers of both technology giants. Both of these companies have made public promises that they will ensure their
technologies are not being used to facilitate human rights violations. These promises are not just corporate platitudes; they’re commitments to every customer and to society at
large.
It’s a reasonable thing to ask if these promises are being kept. And it’s especially important since Amazon and Google have been increasingly implicated by reports that their technologies,
specifically their joint cloud computing initiative called Project Nimbus, are being used to facilitate mass surveillance and human rights violations of Palestinians in the Occupied
Territories of the West Bank, East Jerusalem, and Gaza. This was the basis of our public call in August 2024 for the
companies to come clean about their involvement.
But we didn’t just make a public call. We sent letters directly to the Global
Head of Public Policy at Amazon and to Google’s Global Head of
Human Rights in late September. We detailed what these companies have promised and asked them to tell us by November 1, 2024 how they were complying. We hoped that
they could clear up the confusion, or at least explain where we, or the reporting we were relying on, were wrong.
But instead, they failed to respond. This is unfortunate, since it leads us to question how serious they were in their promises. And it should lead you to question that
too.
Project Nimbus: Technology at the Expense of Human Rights
Project Nimbus provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target
civilians under pervasive surveillance in the Occupied Palestinian Territories. This is more than a technical collaboration—it’s a human rights crisis in the making as evidenced by
data-driven targeting programs like Project Lavender and Where’s Daddy, which have
reportedly led to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers, and ordinary families.
The consequences are serious. Vulnerable communities in Gaza and the West Bank suffer violations of their human rights, including their rights to privacy, freedom of movement,
and free association, all of which can be fostered and furthered by pervasive surveillance. These documented violations underscore the ethical responsibility of Amazon and Google,
whose technologies are at the heart of this surveillance scheme.
Amazon and Google’s Promises
Amazon and Google have made public commitments to align with the UN Guiding Principles
on Business and Human Rights and their own
AI ethics frameworks. These
frameworks are supposed to ensure that their technologies do not contribute to harm. But their silence on these pressing concerns speaks volumes, undermining trust in their supposed
dedication to these principles and casting doubt on their sincerity.
Unanswered Letters, Unanswered Accountability
When we sent letters to Amazon and Google, it was with direct, actionable questions about their
involvement in Project Nimbus. We asked for transparency about their contracts, clients, and risk assessments. We called for evidence that due diligence had been conducted and
demanded explanations of the steps taken to prevent their technologies from facilitating abuse.
Our core demands were straightforward and tied directly to the companies’ commitments:
Disclose the scope of their involvement in Project Nimbus.
Provide evidence of risk assessments tied to this project.
Explain how they are addressing credible reports of misuse.
Despite these reasonable and urgent requests, which are tied directly to the companies’ stated legal and ethical commitments, both companies have remained silent, and their
silence isn’t just an insufficient response—it’s an alarming one.
Why Transparency Cannot Wait
Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation. For both of these companies, it’s an obligation they have promised to the rest
of us. For global companies that wield immense power, silence in the face of abuse is inexcusable.
The Fight for Accountability
EFF is making these letters public to highlight the human rights obligations Amazon and Google have undertaken and to raise reasonable questions they should answer in light of
public reports about the misuse of their technologies in the Occupied Palestinian Territories. We aren’t the first ones to raise concerns, but, having raised these
questions publicly,
and now having given the companies a chance to clarify, we are increasingly concerned about their complicity.
Google and Amazon have promised all of us—their customers and noncustomers alike—that they would take steps to ensure that their technologies support a future where technology
empowers rather than oppresses. It’s increasingly clear that those promises are being ignored, if not entirely broken. EFF will continue to push for transparency and
accountability.
One Down, Many to Go with Pre-Installed Malware on Android
(Wed, 27 Nov 2024)
Last year, we investigated a Dragon Touch children’s tablet (KidzPad Y88X 10) and confirmed that it was linked to a string of fully compromised Android TV Boxes that
also had multiple reports
of malware, adware, and a sketchy firmware update channel. Since then, Google has taken the (now former) tablet
distributor off of their list of Play Protect certified
phones and tablets. The burden of catching this type of threat should not be placed on the consumer. Due diligence by manufacturers, distributors, and resellers is
the only way to tackle this issue of pre-installed compromised devices making their way into the hands of unknowing customers. But in order to mitigate this issue, regulation and
transparency need to be a part of the strategy.
As of October, Dragon Touch is not selling any tablets on their website anymore. However, there is lingering inventory still out there in places like Amazon and Newegg. There are storefronts that exist only on reseller sites for better customer reach, but considering Dragon Touch also
wiped their blog of any mention of their tablets, we assume a
little more than a strategy shift happened here.
We wrote a guide to
help parents set up their kid’s Android devices safely, but it’s difficult to choose which device to purchase to begin with. Advising people to simply buy a more expensive iPad or
Amazon Fire Tablet doesn’t change the fact that people are going to purchase low-budget devices. Lower-budget devices could be just as reputable if the ecosystem provided a path to better
accountability.
Who is Responsible?
There are some tools in development for consumer education, like the newly developed, voluntary Cyber Trust Mark from the FCC. This
label would aim to inform consumers of an IoT device’s capabilities and guarantee that minimum security standards were met. But expecting the consumer to carry the burden of checking for
pre-installed malware is absolutely ridiculous. Responsibility should fall to regulators, manufacturers, distributors, and resellers to check for this kind of threat.
More often than not, you can search for low-budget Android devices on retailers like Amazon or Newegg and find storefront pages with little transparency about who runs the store
and whether or not its products come from a reputable distributor. This is true for more than just Android devices, but considering how many products are created for and with the Android
ecosystem, working on this problem could mean better security for thousands of products.
Yes, it is difficult to track hundreds to thousands of distributors and all of their products. It is hard to keep up with rapidly developing threats in the supply chain. You
can’t possibly know of every threat out there.
With all due respect to giant resellers, especially the multi-billion dollar ones: tough luck. This is what you inherit when you want to “sell everything.” You also
inherit the responsibility and risk of each market you encroach on or supplant.
Possible Remedy: Firmware Transparency
Thankfully, there is hope on the horizon and tools exist to monitor compromised firmware.
Last year, Google presented Android Binary
Transparency in response to pre-installed malware. This would help track compromised firmware using two components (a rough sketch of how a verifier might use such a log follows the list below):
An append-only log of firmware information that is immutable, globally observable, consistent, and auditable, with these properties assured cryptographically.
A network of participants that invest in witnesses, log health, and standardization.
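To make the first component concrete, the check a verifier would run looks a lot like a Certificate Transparency inclusion proof: the firmware image’s hash is a leaf in an append-only Merkle log, and anyone holding the log’s published root can confirm the image really appears in it. Below is a minimal Python sketch of that check. It is only an illustration under our own assumptions: the hashing follows the RFC 6962 scheme that Certificate Transparency uses, but the tiny two-entry “log,” the stand-in image bytes, and the function names are hypothetical and do not correspond to any real Android transparency service or API.

import hashlib

# RFC 6962-style Merkle tree hashing: leaves and interior nodes are
# domain-separated so a leaf can never be confused with an interior node.
def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf: bytes, leaf_index: int, tree_size: int,
                     path: list, root: bytes) -> bool:
    """Check that `leaf` sits at `leaf_index` in a log of `tree_size`
    entries whose Merkle root is `root`, using the audit path returned
    by the log (RFC 6962 / RFC 9162 inclusion-proof verification)."""
    if leaf_index >= tree_size:
        return False
    fn, sn = leaf_index, tree_size - 1
    r = leaf
    for p in path:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = node_hash(p, r)
            if fn % 2 == 0:
                # Right-edge node with no sibling: drop trailing zero bits.
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root

# Hypothetical two-entry log: entry 1 is the firmware image we downloaded.
firmware_image = b"bytes of the firmware image we want to check"   # stand-in
other_entry = leaf_hash(b"some other vendor's published image")    # stand-in
our_entry = leaf_hash(firmware_image)
published_root = node_hash(other_entry, our_entry)  # root a witness would see

print(verify_inclusion(our_entry, 1, 2, [other_entry], published_root))  # True
print(verify_inclusion(leaf_hash(b"tampered image"), 1, 2,
                       [other_entry], published_root))                   # False

In a real deployment, the audit path and signed root would come from the log operator, and the network of witnesses would cross-check that the root only ever changes by appending new entries.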
Google is not the first to think of this
concept; it largely applies lessons from the success of Certificate
Transparency. Still, better support for Android images directly from the Android ecosystem would definitely help. It would give the
manufacturers and developers that build on the Android Open Source Project (AOSP) a transparency ecosystem in which they could be just as respected as higher-priced brands.
We love open source here at EFF and would like to continue to see innovation and availability in devices that aren’t necessarily created by bigger, more expensive names. But
there needs to be an accountable ecosystem for these products so that pre-installed malware can be detected more easily and doesn’t land in consumers’ hands. Right now you
can verify your Pixel device if
you have a little technical skill. We would like verification to be done by regulators and/or distributors instead of asking consumers to crack open their command lines to verify devices
themselves.
It would be ideal to see existing programs like Android Play Protect certification run a log like this using open-source log implementations, like Trillian. This way, security
researchers, resellers, and regulating bodies could begin to monitor and query information on different Android Original Equipment Manufacturers (OEMs).
There are tools that exist to verify firmware, but right now this ecosystem is a wishlist of sorts. At EFF, we like to imagine what could be better. While a hosted comprehensive log of
Android OEMs doesn’t currently exist, the tools to create it do. Some early participants for accountability in the Android realm include F-Droid’s Android SDK Transparency Log and the Guardian Project’s
(Tor) Binary Transparency
Log.
Time would be better spent on solving this problem systemically than on researching whether every new electronic evil rectangle or IoT device has malware or not.
A solution complementary to binary transparency is the Software Bill of Materials (SBOM). Think of this as a “list of ingredients” that make up a piece of software. This is another idea
that is not very new, but it has gathered more institutional and government
support. The components listed in an SBOM could highlight issues or vulnerabilities that were reported for certain parts of the software. Without binary transparency, though,
researchers, verifiers, auditors, etc. could still be left attempting to extract firmware from devices
that haven’t listed their images. If manufacturers readily provided these images, SBOMs could be generated more easily and help create a less opaque market of electronics. Low
budget or not.
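As a rough illustration of that “list of ingredients” idea, here is a short Python sketch that assembles a minimal SBOM-like inventory for a firmware image. The structure and field names are our own simplified, hypothetical example (loosely inspired by formats such as CycloneDX and SPDX), not a spec-compliant document, and in practice the component list would come from the manufacturer’s build pipeline rather than being written by hand.

import hashlib
import json

# Hypothetical component list for a firmware build. In practice this data
# would be produced by the manufacturer's build system, not hand-written.
components = [
    {"name": "linux-kernel", "version": "5.10.198", "supplier": "example-oem"},
    {"name": "preinstalled-launcher.apk", "version": "2.3.1", "supplier": "example-oem"},
    {"name": "libexampledrm.so", "version": "1.0.4", "supplier": "third-party-vendor"},
]

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_sbom(firmware_image: bytes, components: list) -> dict:
    """Build a minimal, SBOM-like description of a firmware image:
    an image hash that can be cross-checked against a transparency log,
    plus the list of software components the image contains."""
    return {
        "sbomFormat": "simplified-example",   # illustrative, not a real spec identifier
        "image": {"sha256": sha256_hex(firmware_image)},
        "components": components,
    }

# Stand-in bytes for the firmware image under inspection.
sbom = make_sbom(b"bytes of the firmware image", components)
print(json.dumps(sbom, indent=2))

Pairing an inventory like this with an image hash published in a transparency log is what would let auditors check both what a device claims to ship and whether the shipped image matches that claim.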
We are glad to see some movement from last year’s investigations, just in time for Black Friday. More can be done, and we hope to see not only devices with shady components taken down more swiftly
when reported, but also better support for proactive detection. Regardless of how much someone can spend, everyone deserves a safe, secure device that
doesn’t have malware crammed into it.
Tell the Senate: Don’t Weaponize the Treasury Department Against Nonprofits
(Wed, 27 Nov 2024)
Last week the House of Representatives passed a dangerous bill that would allow the Secretary of the Treasury to strip a U.S. nonprofit of its tax-exempt status. If it passes the Senate and is signed into law, H.R. 9495 would give broad and easily abused new powers to the executive branch. Nonprofits would not have a meaningful opportunity to defend themselves, and they could be targeted without the government disclosing the reasons or evidence for the decision.
This bill is an existential threat to nonprofits of all stripes. Future administrations could weaponize the powers in this bill to target nonprofits on either end of the
political spectrum. Even if they are not targeted, the threat alone could chill the activities of some nonprofit organizations.
The bill’s authors have combined this attack on nonprofits, originally written as H.R. 6408, with other legislation that would prevent the IRS from imposing fines and penalties
on hostages while they are held abroad. These are separate matters. Congress should separate these two bills to allow a meaningful vote on this dangerous expansion of executive power.
No administration should be given this much power to target nonprofits without due process.
Tell Your Senator: Protect Nonprofits
Over 350 civil liberties, religious, reproductive health, immigrant rights, human rights, racial justice, LGBTQ+, environmental, and educational organizations signed a letter opposing the bill as written. Now, we need your help.
Tell the Senate not to pass H.R. 9495, the
so-called “Stop Terror-Financing and Tax Penalties on American Hostages Act.”
EFF Tells the Second Circuit a Second Time That Electronic Device Searches at the Border Require a Warrant
(Tue, 26 Nov 2024)
EFF, along with ACLU and the New York Civil Liberties Union, filed a second amicus brief in the U.S. Court of
Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument
EFF has been making in the courts and Congress for nearly a decade.
The case, U.S. v. Smith, involved a traveler who was stopped at Newark airport after returning
from a trip to Jamaica. He was detained by border officers at the behest of the FBI and his cell phone was forensically searched. He had been under investigation for his involvement
in a conspiracy to control the New York area emergency mitigation services (“EMS”) industry, which included (among other things) insurance fraud and extortion. He was subsequently
prosecuted and sought to have the evidence from his cell phone thrown out of court.
As we wrote about last year, the district court
made history in holding that border searches of cell phones require a warrant and therefore warrantless device searches at the border violate the Fourth Amendment. However, the judge
allowed the evidence to be used in Mr. Smith’s prosecution because, the judge concluded, the officers had a “good faith” belief that they were legally permitted to search his phone
without a warrant.
The number of warrantless device searches at the border, and the significant invasion of privacy they represent, is only increasing. In Fiscal Year 2023, U.S. Customs and Border
Protection (CBP) conducted 41,767
device searches.
The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless
“routine” searches of luggage, vehicles, and other items crossing the border.
The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as
drugs, weapons, and other prohibited items, thereby blocking their entry into the country.
In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California
(2014) should govern the analysis here—and that the district court was correct in applying Riley. In
that case, the Supreme Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests
in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant exception does not apply, and that
police need to get a warrant to search an arrestee’s phone.
Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data
points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial
status, health conditions, and family and professional associations.
In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose
of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered
access to travelers’ electronic devices.
First, physical contraband (like drugs) can’t be found in digital data.
Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given
the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.
As the Smith court stated, “Stopping the cell phone from entering the country would not … mean stopping the data contained on it from entering the country” because any data
that can be found on a cell phone—even digital contraband—“very likely does exist not just on the phone device itself, but also on faraway computer servers potentially located within
the country.”
Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for
general law enforcement (i.e., investigating non-border-related domestic crimes, as was the case of the FBI investigating Mr. Smith’s involvement in the EMS conspiracy) are too
“untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution.
If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be
justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s
rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at
the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held
that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form
of photos or files).
In our brief, we also highlighted two other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Sultanov (2024) and U.S. v. Fox (2024). We plan
to file briefs in their appeals, as well. Earlier this month, we filed a brief in another Second Circuit border search case, U.S. v. Kamaldoss. We hope that the Second Circuit will rise to
the occasion in one of these cases and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.
Looking for the Answer to the Question, "Do I Really Own the Digital Media I Paid For?"
(Tue, 26 Nov 2024)
Sure, buying your favorite video game, movie, or album online is super convenient. I personally love being able to pre-order a game and play it the night of
release, without needing to go to a store.
But something you may not have thought about before making your purchase is the difference between owning a physical copy and a digital copy of that media. Unfortunately, there are quite a few rights you give up by purchasing a digital copy of your favorite game, movie, or album! On our new site, Digital Rights Bytes, we outline the
differences between owning physical and digital media, and why we need to break down that barrier.
Digital Rights Bytes explains this and answers other common questions about technology that may be getting on your nerves, with short videos featuring adorable animals. You can also read up on what EFF is doing to ensure
you actually own the digital media you pay for, and how you can take action, too.
Got other questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag
#DigitalRightsBytes.
Organizing for Digital Rights in the Pacific Northwest
(Fri, 22 Nov 2024)
Recently I traveled to Portland, Oregon to speak at the PDX People’s Digital Safety Fair, meet up
with five groups in the Electronic Frontier Alliance, and attend BSides PDX 2024. Portland’s first ever Digital Safety Fair was a success and five of our six EFA organizations in the
area participated: Personal Telco Project, Encode
Justice Oregon, PDX Privacy, TA3M Portland, and Community Broadband
PDX. I was able to reaffirm our support for these organizations, and table with most of them as they met local people interested in digital rights. We distributed
EFF toolkits as a resource, and we made sure EFA brochures and stickers had a presence on all their tables. A few of these organizations were also present at BSides PDX, and it was
great seeing them being leaders in the local infosec and cybersecurity community.
PDX Privacy’s mission is to bring about transparency and control in the acquisition and use of surveillance systems in the
Portland Metro area, whether personal data is captured by the government or by commercial entities. Transparency is essential to ensure privacy protections, community control,
fairness, and respect for civil rights.
TA3M Portland is an informal meetup designed to connect software creators and activists
who are interested in censorship, surveillance, and open technology.
The Oregon Chapter of Encode Justice, the world’s first and largest youth movement for human-centered
artificial intelligence, works to mobilize policymakers and the public for guardrails to ensure AI fulfills its transformative potential. Its mission is to ensure we encode justice
and safety into the technologies we build.
(l to r) Pictured here with PDX Privacy’s Seth, Boaz, and new president Nate. Pictured with Chris Bushick, legendary Portland privacy advocate of TA3M PDX. Pictured with the leaders of Encode Justice Oregon.
There's growing momentum in the Seattle and Portland areas
Community Broadband PDX’s focus is on expanding the existing dark fiber broadband network in Portland to all residents, creating an
open-source model where the city owns the fiber, and it’s controlled by local nonprofits and cooperatives, not large ISPs.
Personal Telco is dedicated to the idea that users have a central role in how their communications networks are operated.
This is done by building our own networks that we share with our communities, and by helping to educate others in how they can, too.
At the People’s Digital Safety Fair I spoke in the main room on the campaign to
bring high-speed broadband to Portland, which is led by Community Broadband PDX and the Personal Telco Project. I made a direct call to action for those in attendance
to join the campaign. My talk culminated with, “What kind of ACTivist would I be if I didn’t implore you to take an ACTion? Everybody pull out your phones.” Then I guided the room to
the website for Community Broadband PDX and to the ‘Join Us’ page where people in that moment signed up to join the campaign, spread the word with their neighbors, and get organized
by the Community Broadband PDX team. You can reach out to them at cbbpdx.org and personaltelco.net. You can get in touch with all the groups mentioned in this blog with their hyperlinks above, or use our
EFA allies directory to see who’s organizing in your area.
(l to r) BSidesPDX 2024 swag and stickers. A photo of me speaking at the People’s Digital Privacy Fair on broadband access in PDX. Pictured with Jennifer Redman, President of
Community Broadband PDX and former broadband administrator for the city of Portland, OR. A picture of the Personal Telco table with EFF toolkits printed and EFA brochures on hand.
Pictured with Ted, Russell Senior, and Drew of Personal Telco Project. Lastly, it's always great to see a member and active supporter of EFF interacting with one of our EFA
groups.
It’s very exciting to see what members of the EFA are doing in Portland! I also went up to Seattle and met with a few organizations, including one now in talks to join the EFA.
With new EFA friends in Seattle, and existing EFA relationships fortified, I'm excited to help grow our presence and support in the Pacific Northwest, and have new allies with
experience in legislative engagement. It’s great to see groups in the Pacific Northwest engaged and expanding their advocacy efforts, and even greater to stand by them as they
do!
Electronic Frontier Alliance members get support from a community of like-minded grassroots organizers from across the US. If your group defends our digital rights, consider
joining today. https://efa.eff.org
Speaking Freely: Anriette Esterhuysen
(Fri, 22 Nov 2024)
*This interview took place in April 2024 at NetMundial+10 in São Paulo, Brazil. It has been edited for length and clarity.
Anriette Esterhuysen is a human rights defender and computer networking trailblazer from South Africa. She has pioneered the use of Information and Communications Technologies (ICTs) to promote social justice in South Africa and throughout the world, focusing on affordable Internet access. She was the executive director of the Association for Progressive Communications from 2000 to 2017. In November 2019 Anriette was appointed by the Secretary-General of the United Nations to chair the Internet Governance Forum’s Multistakeholder Advisory Group.
Greene: Can you go ahead and introduce yourself for us?
Esterhuysen: My name is Anriette Esterhuysen, I am from South Africa and I’m currently sitting here with David in Sao Paulo, Brazil. My closest association remains with
the Association for Progressive Communications where I was executive director from 2000 to 2017. I continue to work
for APC as a consultant in the capacity of Senior Advisor on Internet Governance and convenor of the annual African School on Internet
Governance (AfriSIG).
Greene: Can you tell us more about the African School on Internet Governance (AfriSIG)?
AfriSIG is fabulous. It differs from internet governance capacity building provided by the technical community in that it aims to build critical thinking. It also does not gloss
over the complex power dynamics that are inherent to multistakeholder internet governance. It tries to give participants a hands-on experience of how different interest groups and
sectors approach internet governance issues.
AfriSIG started as a result of Titi Akinsanmi, a young Nigerian doing postgraduate studies in South Africa, approaching APC and saying, “Look, you’ve got to do something.
There’s a European School of Internet Governance, there’s one in Latin America, and where is there more need for capacity-building than in Africa?” She convinced me and my
colleague Emilar Vushe Gandhi, APC Africa Policy Coordinator at the time, to organize an African internet
governance school in 2013 and since then it has taken place every year. It has evolved over time into a partnership between APC and the African Union Commission and Research ICT
Africa.
It is a residential leadership development and learning event that takes place over 5 days. We bring together people who are already working in internet or communications policy
in some capacity. We create space for conversation between people from government, civil society, parliaments, regulators, the media, business and the technical community on what in
Africa are often referred to as “sensitive topics”. This can be anything from LGBTQ rights to online freedom of expression, corruption, authoritarianism, and accountable governance.
We try to create a safe space for deep diving the reasons for the dividing lines between, for example, government and civil society in Africa. It’s very delicate. I love doing it
because I feel that it transforms people’s thinking and the way they see one another and one another’s roles. At the end of the process, it is common for a government official to say
they now understand better why civil society demands media freedom, and how transparency can be useful in protecting the interests of public servants. And civil society activists have
a better understanding of the constraints that state officials face in their day-to-day work. It can be quite a revelation for individuals from civil society to be confronted with the
fact that in many respects they have greater freedom to act and speak than civil servants do.
Greene: That’s great. Okay now tell me, what does free speech mean to you?
I think of it as freedom of expression. It’s fundamental. I grew up under Apartheid in South Africa and was active in the struggle for democracy. There is something deeply wrong
with being surrounded by injustice, cruelty and brutality and not being allowed to speak about it. Even more so when one's own privilege comes at the expense of the oppressed, as was
the case for white South Africans like myself. For me, freedom of expression is the most profound part of being human. You cannot change anything, deconstruct it, or learn about it at
a human level without the ability to speak freely about what it is that you see, or want to understand. The absence of freedom of expression entrenches misinformation, a lack of
understanding of what is happening around you. It facilitates willful stupidity and selective knowledge. That’s why it’s so smart of repressive regimes to stifle freedom of
expression. By stifling free speech you disempower the victims of injustice from voicing their reality, on the one hand, and, on the other, you entrench the unwillingness of those who
are complicit with the injustice to confront that they’re part of it.
It is impossible to shift a state of repression and injustice without speaking out about it. That is why people who struggle for freedom and justice speak about it, even if
doing so gets them imprisoned, assassinated or executed. Change starts through people, the media, communities, families, social movements, and unions, speaking about what needs to
change.
Greene: Having grown up in Apartheid, is there a single personal experience or a group of personal experiences that really shaped your views on freedom of expression?
I think I was fortunate in the sense that I grew up with a mother who—based on her Christian beliefs—came to see Apartheid as being wrong. She was working as a social worker for
the main state church—the Dutch Reformed Church (DRC) —at the time of the Cottesloe
Consultation convened in Johannesburg by the World Council of Churches (WCC) shortly
after the Sharpeville Massacre. An outcome statement from this consultation, and later
deliberations by the WCC in Geneva, condemned the DRC for its racism. In response the DRC decided to leave the WCC. At a church meeting my mother attended, she listened to the debate and to someone in the church hierarchy who spoke against this decision and challenged the church for its racist stance. His words made sense to her. She spoke to him after the
meeting and soon joined the organization he had started to oppose Apartheid, the Christian
Institute. His name was Beyers Naudé and he became an icon of the
anti-Apartheid struggle and an enemy of the apartheid state. Apparently, my first protest march was in a pushchair at a rally in 1961 to oppose the rightwing National Party
government's decision for South Africa to leave the Commonwealth.
There’s no single moment that shaped my view of freedom of expression. The thing about living in the context of that kind of racial segregation and repression is that you see it
every day. It’s everywhere around you, but as in Nazi Germany, people—white South Africans—chose not to see it, or if they did, to find ways of rationalizing it.
Censorship was both a consequence of and a building block of the Apartheid system. There was no real freedom of expression. But because we had courageous journalists, and a
broad-based political movement—above ground and underground—that opposed the regime, there were spaces where one could speak/listen/learn. The Congress of Democrats, established in the 1950s after the Communist Party was banned, was a
social justice movement in which people of different faiths and political ideologies (Jewish, Christian and Muslim South Africans alongside agnostics and communists) fought for
justice together. Later in the 1980s, when I was a student, this broad front approach was revived through the United Democratic Front. Journalists did amazing things. When censorship was at its
height during the State of Emergency in the 1980s, newspapers would go to print with columns of blacked-out text—their way of telling the world that they were being censored.
[Image: media censorship example]
I used to type up copy filed over the phone or on cassettes by reporters for the Weekly Mail when I was a student. We had to be fast because everything had to be checked by the paper’s lawyers before going to print. Lack of freedom of expression was legislated. The courage of editors and individual journalists to defy this, and, if they could not, to make the censorship obvious, made a huge impact on me.
Greene: Is there a time when you, looking back, would consider that you were personally censored?
I was very much personally censored at school. I went to an Afrikaans secondary school. And I have a memory of when, after coming back from a vacation, my math teacher—who I had no personal relationship with—walked past me in class and asked me how my holiday on Robben
Island was. I thought, why is he asking me that? A few days later I heard from a teacher I was friendly with that there was a special staff meeting about me. They
felt I was very politically outspoken in class and the school hierarchy needed to take action. No actual action was taken... but I felt watched, and through that, censored, even if
not silenced.
I felt that because for me, being white, it was easier to speak out than for black South Africans, it would be wrong not to do so. As a teenager, I had already made that choice.
It was painful from a social point of view because I was very isolated, I didn’t have many friends, I saw the world so differently from my peers. In 1976 when the Soweto riots broke out I remember someone in my class saying, “This is exactly what we’ve been waiting for
because now we can just kill them all.” This is probably also why I feel a deep connection with Israel/Palestine. There are many dimensions to the Apartheid analogy. The one
that stands out for me is how, as was the case in South Africa too, those with power—Jewish Israelis—dehumanize and villainize the oppressed, Palestinians.
Greene: At some point did you decide that you want human rights more broadly and freedom of expression to be a part of your career?
I don’t think it was a conscious decision. I think it was what I was living for. It was the raison d’etre of my life for a long time. After high school, I had secured places at two universities, at one for a science degree and at the other for a degree in journalism. But I ended up going to a different university, making the choice based on the strength of its student movement. The struggle against Apartheid was expressed and conceptualized as a struggle for human rights. The Constitution of democratic South Africa was crafted by human
rights lawyers and in many respects it is a localized interpretation of the Universal Declaration.
Later, in the late 1980s, when I started working on access to information through the use of Information and Communication Technologies (ICTs), it felt like an extension of
the political work I had done as a student and in my early working life. APC, which I joined as a member—not staff—in the 1990s, was made up of people from other parts of the world
who had been fighting their own struggles for freedom—Latin America, Asia, and Central/ Eastern Europe. All with very similar hopes about how the use of these technologies can enable
freedom and solidarity.
Greene: So fast forward to now, currently do you think the platforms promote freedom of expression for people or restrict freedom of expression?
Not a simple question. Still, I think the net effect is more freedom of expression. The extent of online freedom of expression is uneven and it’s distorted by the platforms in
some contexts. Just look at the biased pro-Israel way in which several platforms moderate content. Enabling hate speech in contexts of conflict can definitely have a silencing effect.
By not restricting hate in a consistent manner, they end up restricting freedom of expression. But I think it’s disingenuous to say that overall the internet does not increase
freedom of expression. And social media platforms, despite their problematic business models, do contribute. They could of course do it so much better, fairly and consistently, and
for not doing that they need to be held accountable.
Greene: We can talk about some of the problems and difficulties. Let’s start with hate speech. You said it’s a problem we have to tackle. How do we tackle it?
You’re talking to a very cynical old person here. I think that social media amplifies hate speech. But I don’t think they create the impulse to hate. Social media business
models are extractive and exploitative. But we can’t fix our societies by fixing social media. I think that we have to deal with hate in the offline world. Channeling energy and
resources into trying to grow tolerance and respect for human rights in the online space is not enough. It’s just dealing with the symptoms of intolerance and populism. We need to
work far harder to hold people, particularly those with power, accountable for encouraging hate (and disinformation). Why is it easy to get away with online hate in India? Because
Modi likes hate. It’s convenient for him, it keeps him in political power. Trump is another example of a leader that thrives on hate.
What’s so problematic about social media platforms is the monetization of this. That is absolutely wrong and should be stopped—I can say all kinds of things about it. We need to
have a multi-pronged approach. We need market regulation, perhaps some form of content regulation, and new ways of regulating advertising online. We need access to data on what
happens inside these platforms. Intervention is needed, but I do not believe that content control is the right way to do it. It is the business model that is at the root of the
problem. That’s why I get so frustrated with this huge global effort by governments (and others) to ensure information integrity through content regulation. I would rather they
spend the money on strengthening independent media and journalism.
Greene: We should note we are currently at an information integrity conference today. In terms of hate speech, are there hazards to having hate speech laws?
South Africa has hate speech laws, which I believe are necessary. Racial hate speech continues to be a problem in South Africa. So is xenophobic hate speech. We have an election coming on May 29 [2024], and as I listened to talk radio on election issues, hearing how political parties use xenophobic tropes in their campaigns was terrifying. “South Africa has to be for South Africans.” “Nigerians run organized crime.” “All drugs come from Mozambique,” and so on. Dangerous speech needs to be called out. Norms are important.
But I think that establishing legalized content regulation is risky. In contexts without robust protection for freedom of expression, such regulation can easily be abused by states to
stifle political speech.
Greene: Societal or legal norms?
Both. Legal norms are necessary because social norms can be so inconsistent, volatile. But social norms shape people’s everyday experience and we have to strive to make
them human rights aware. It is important to prevent the abuse of legal norms—and states are, sadly, pretty good at doing just that. In the case of South Africa hate speech regulation
works relatively well because there are strong protections for freedom of expression. There are soft and hard law mechanisms. The South African Human Rights Commission developed a social media charter to counter harmful
speech online as a kind of self-regulatory tool. All of this works—not perfectly of course—because we have a constitution that is grounded in human rights. Where we need to be more
consistent is in holding politicians accountable for speech that incites hate.
Greene: So do we want checks and balances built into the regulatory scheme or are you just wanting it existing within a government scheme that has checks and balances built
in?
I don’t think you need new global rule sets. I think the existing international human rights framework provides what we need and just needs to be strengthened and its
application adapted to emerging tech. One of the reasons why I don’t think we should be obsessive about restricting hate speech online is because it is a canary in a coal mine. In
societies where there’s a communal or religious conflict or racial hate, removing its manifestation online could be a missed opportunity to prevent explosions of violence
offline. That is not to say that there should not be recourse and remedy for victims of hate speech online. Or that those who incite violence should not be held accountable. But
I believe we need to keep the bar high in how we define hate speech—basically as speech that incites violence.
South Africa is an interesting case because we have very progressive laws when it comes to same-sex marriage, same-sex adoption, relationships, insurance, spousal recognition,
medical insurance and so on, but there’s still societal prejudice, particularly in poor communities. That is why we need a strong rights-oriented legal framework.
Greene: So that would be another area where free speech can be restricted and not just from a legal sense but you think from a higher level principles sense.
Right. Perhaps what I am trying to say is that there is speech that incites violence and it should be restricted. And then there is speech that is hateful and discriminatory,
and this should be countered, called out, and challenged, but not censored. When you’re talking about the restriction—or not even the restriction but the recognition and calling
out of—harmful speech it’s important not just to do that online. In South Africa stopping xenophobic speech online or on public media platforms would be relatively simple. But it’s
not going to stop xenophobia in the streets. To do that we need other interventions. Education, public awareness campaigns, community building, and change in the underlying
conditions in which hate thrives which in our case is primarily poverty and unemployment, lack of housing and security.
Greene: This morning someone speaking at this event about misinformation said, “The vast majority of misinformation is online.” And certainly in the US, researchers say that’s not true; most of it is on cable news. But it struck me that someone who is considered an expert should know better. We have information ecosystems, and online does not exist separately.
It’s not separate. Agree. There’s such a strong tendency to look at online spaces as an alternative universe. Even in countries with low internet penetration, there’s a tendency
to focus on the online components of these ecosystems. Another example would be child online protection. Most child abuse takes place in the physical world, and most child abusers are
close family members, friends or teachers of their victims—but there is a global obsession with protecting children online. It is a shortsighted and ‘cheap’ approach and it
won’t work. Not for dealing with misinformation or for protecting children from abuse.
Greene: Okay, our last question we ask all of our guests. Who is your free speech hero?
Desmond Tutu. I have many free speech heroes but Bishop Tutu is a standout because he could be so charming
about speaking his truths. He was fearless in challenging the Apartheid regime. But he would also challenge his fellow Christians. One of his best lines was, “If LGBT people are
not welcome in heaven, I’d rather go to the other place.” And then the person I care about and fear for every day is Egyptian blogger Alaa Abd el-Fattah. I remember walking at night through the streets of Cairo with him in 2012. People kept
coming up to him, talking to him, and being so obviously proud to be able to do so. His activism is fearless. But it is also personal, grounded in love for his city, his country, his
family, and the people who live in it. For Alaa, freedom of speech, and freedom in general, was not an abstract or a political goal. It was about freedom to love, to create art, music,
literature and ideas in a shared way that brings people joy and togetherness.
Greene: Well now I have a follow-up question. You said you think free speech is undervalued these days. In what ways and how do we see that?
We see it manifested in the absence of tolerance, in the increase in people claiming that their freedoms are being violated by the expression of those they disagree with, or who
criticize them. It’s as if we’re trying to establish these controlled environments where we don’t have to listen to things that we think are wrong, or that we disagree with. As you
said earlier, information ecosystems have offline and online components. Getting to the “truth” requires a mix of different views, disagreement, fact-checking, and holding people who
deliberately spread falsehoods accountable for doing so. We need people to have the right to free speech, and to counter-speech. We need research and evidence gathering, investigative
journalism, and, most of all, critical thinking. I’m not saying there shouldn't be restrictions on speech in certain contexts, but do it because the speech is illegal or actively
inciteful. Don’t do it because you think it will achieve so-called information integrity. And especially, don’t do it in ways that undermine the right to freedom of expression.