Deeplinks

The Executive Order Targeting Social Media Gets the FTC, Its Job, and the Law Wrong (Tue, 02 Jun 2020)
This is one of a series of blog posts about President Trump's May 28 Executive Order. Other posts are here, here, and here.
The inaptly named Executive Order on Preventing Online Censorship seeks to insert the federal government into private Internet speech in several ways. In particular, Sections 4 and 5 seek to address possible deceptive practices, but end up being unnecessary at best and legally untenable at worst. These provisions are motivated in part by concerns, which we share, that the dominant platforms do not adequately inform users about their standards for moderating content, and that their own free speech rhetoric often doesn't match their practices. But the EO's provisions either don't help, or introduce new and even more dangerous problems.
Section 4(c) says, "The FTC (Federal Trade Commission) shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce, pursuant to section 45 of title 15, United States Code. Such unfair or deceptive acts or practice may include practices by entities covered by Section 230 that restrict speech in ways that do not align with those entities' public representations about those practices."
Well, sure. Platforms should be honest about their restriction practices, and held accountable when they lie about them. The thing is, the FTC already has the ability to "consider taking action" about deceptive commercial practices.
But the real difficulty comes with the other parts of this section. Section 4(a) sets out the erroneous legal position that large online platforms are "public forums" that are legally barred from exercising viewpoint discrimination and have little ability to limit the categories of content that may be published on their sites. As we discuss in detail in our post dedicated to Section 230, every court that has considered this legal question has rejected it, including recent decisions by the U.S. Courts of Appeals for the Ninth and D.C. Circuits. And for good reason: treating social media companies like "public forums" gives users less ability to respond to misuse, not more.
Instead, those courts have correctly adopted the rule on editorial freedom from the Supreme Court's 1974 decision in Miami Herald Publishing Co. v. Tornillo. In that case, the court rejected strikingly similar arguments—that the newspapers of the day were misusing their editorial authority to favor one side over the other in public debates and that government intervention was necessary to "insure fairness and accuracy and to provide for some accountability." Sound familiar? The Supreme Court didn't go for it: the "treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time."
The current Supreme Court agrees. Just last term, in Manhattan Community Access Corp. v. Halleck, the Supreme Court affirmed that the act of serving as a platform for the speech of others did not eliminate that platform's own First Amendment right to editorial freedom.
But the EO doesn't just get the law wrong—it wants the FTC to punish platforms that don't adhere to the erroneous position that online platforms are "public forums" legally barred from editorial freedom.
Section 4(d) commands the FTC to consider whether the dominant platforms are inherently engaging in unfair practices by not operating as public forums as set forth in Section 4(a). This means that a platform could be completely honest, transparent, and open about its content moderation practices but still face penalties because it did not act like a public forum. So, platforms have a choice—take their guidance from the Supreme Court or from the Trump administration.
Additionally, Section 4(b) refers to the White House's Tech Bias Reporting Tool launched last year to collect reports of political bias. The EO states that 16,000 reports were received and that they will be forwarded to the FTC. We filed a Freedom of Information Act (FOIA) request with the White House's Office of Science and Technology Policy for those complaints last year and were told that that office had no records (https://www.eff.org/document/eff-fioa-request-tech-bias-story-sharing-tool).
Section 5 commands the Attorney General to convene a group to look at existing state laws and propose model state legislation to address unfair and deceptive practices by online platforms. This group will be empowered to collect publicly available information about: how platforms track user interactions with other users; the use of "algorithms to suppress political alignment or viewpoint"; differential policies applied to the Chinese government; reliance on third-party entities with "indicia of bias"; and viewpoint discrimination with respect to user monetization. To the extent that this means that decisions will be made based on actual data rather than anecdote and supposition, that is a good thing. But given this pretty one-sided list, there does seem to be a predetermined political decision the EO wants to reach, and the resulting proposals that come out of this may create yet another set of problems.
All of this exacerbates a growing environment of legal confusion for technology and its users that bodes ill for online expression. Keep in mind that "entities covered by section 230" describes a huge population of online services that facilitate online user communication, from Wikimedia to the Internet Archive to the comments section of local newspapers. However you feel about Big Tech, rest assured that the EO's effects will not be confined to the small group of companies that can afford to navigate these choppy waters.

Trump’s Executive Order Threatens to Leverage Government’s Advertising Dollars to Pressure Online Platforms (Tue, 02 Jun 2020)
This is one of a series of blog posts about President Trump's May 28 Executive Order. Other posts can be found here, here, here, and here.
The inaptly named Executive Order on Preventing Online Censorship (EO) seeks to insert the federal government into private Internet speech in several ways. Section 3 of the EO threatens to leverage the federal government's significant online advertising spending to coerce platforms to conform to the government's desired editorial position. This raises significant First Amendment concerns. The EO provides:
Sec. 3. Protecting Federal Taxpayer Dollars from Financing Online Platforms That Restrict Free Speech.
(a) The head of each executive department and agency (agency) shall review its agency's Federal spending on advertising and marketing paid to online platforms. Such review shall include the amount of money spent, the online platforms that receive Federal dollars, and the statutory authorities available to restrict their receipt of advertising dollars.
(b) Within 30 days of the date of this order, the head of each agency shall report its findings to the Director of the Office of Management and Budget.
(c) The Department of Justice shall review the viewpoint-based speech restrictions imposed by each online platform identified in the report described in subsection (b) of this section and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.
The First Amendment is implicated by this provision because it is, at its essence, the government punishing a speaker for expressing a political viewpoint. The Supreme Court has recognized that "[t]he expression of an editorial opinion . . . lies at the heart of First Amendment protection." The First Amendment thus generally protects speakers against enforced neutrality.
Although the government may have broad leeway to decide where it wants to run its advertisements, here it seems that the government would otherwise place advertisements on these platforms but for the sole fact that it dislikes the political viewpoint reflected by the platform's editorial and curatorial decisions. This is true regardless of whether the platform actually has an editorial viewpoint or if the government simply perceives a viewpoint it finds inappropriate. This decision is especially suspect when the platform's speech is unrelated to the advertisement or the government program or policy being advertised. It might present a different situation if the message in the government's advertisement would be undermined by the platform's editorial decisions, or if, by advertising, the government would be perceived as adopting the platform's viewpoint. But neither of those is contemplated by the EO. The EO thus seems purely retaliatory, and designed solely to coerce the platforms to meet the government's conception of acceptable "neutrality"—a severe penalty for having a political viewpoint.
The goal of federal government advertising is to reach the broadest audience possible: think of the Consumer Product Safety Commission's Quinn the Quarantine Fox ads, or the National Park Service's promotions about its units. This advertising is not a reward for the platform for its perceived neutrality. It's a service to Americans who need vital information.
In other contexts, the Supreme Court has made clear that the government's spending decisions generally cannot be "the product of invidious viewpoint discrimination." The court has applied this rule to strike down a property tax exemption that was available only to those who took loyalty oaths, explaining that "the deterrent effect is the same as if the State were to fine them for this speech." And the court also applied it when a county canceled a contract with a trash hauler who was a fervent critic of the county's government. Even when the court rejected a First Amendment challenge to a requirement that the National Endowment for the Arts consider "general standards of decency and respect for the diverse beliefs and values of the American public" as one of many factors in awarding arts grants, it emphasized that the criterion did not give the government authority to "leverage its power to award subsidies on the basis of subjective criteria into a penalty on disfavored viewpoints," and funding decisions should not be "calculated to drive certain ideas or viewpoints from the marketplace."
By denying ad dollars that it would otherwise spend solely because it disagrees with a platform's editorial views, or dislikes that it has editorial views, the government violates these fundamental principles. And this in turn harms the public, which may need or want information contained in government advertisements.

Internet Users of All Kinds Should Be Concerned by a New Copyright Office Report (Mon, 01 Jun 2020)
Outside of the beltway, people all over the United States are taking to the streets to demand fundamental change. In the halls of Congress and the White House, however, many people seem to think the biggest thing that needs to be restructured is the Internet. Last week, the president issued an order taking on one legal foundation for online expression: Section 230. This week, the Senate is focusing on another: Section 512 of the Digital Millennium Copyright Act (DMCA). The stage for this week's hearing was set by a massive report from the Copyright Office that's been five years in the making. We read it, so you don't have to.
Since the DMCA passed in 1998, the Internet has grown into something vital that we all use. We are the biggest constituency of the Internet—not Big Tech or major media companies—and when we go online we depend on an Internet that depends on Section 512.
Section 512 of the DMCA is one of the most important provisions of U.S. Internet law. Congress designed the DMCA to give rightsholders, service providers, and users relatively precise "rules of the road" for policing online copyright infringement. The center of that scheme is the "notice and takedown" process. In exchange for substantial protection from liability for the actions of their users, service providers must promptly take down any content on their platforms that has been identified as infringing, and take several other prescribed steps. Copyright owners, for their part, are given a fast, extra-judicial procedure for obtaining redress against alleged infringement, paired with explicit statutory guidance regarding the process for doing so, and provisions designed to deter and remedy abuses of that process.
Without Section 512, the risk of crippling liability for the acts of users would have prevented the emergence of most social media outlets and online forums we use today. With the protection of that section, the Internet has become the most revolutionary platform for the creation and dissemination of speech that the world has ever known. Thousands of companies and organizations, big and small, rely on it every day. Interactive platforms like video hosting services and social networking sites that are vital to democratic participation, and also to the ability of ordinary users to forge communities, access information, and discuss issues of public and private concern, rely on Section 512 every day.
But large copyright holders, led by major media and entertainment companies, have complained for years that Section 512 doesn't put enough of a burden on service providers to actively police online infringement. Bowing to their pressure, in December of 2015, Congress asked the Copyright Office to report on how Section 512 is working. Five years later, we have its answer—and overall it's pretty disappointing.
Just Because One Party Is Unhappy Doesn't Mean the Law Is Broken
The Office believes that because rightsholders are dissatisfied with the DMCA, the law's objectives aren't being met. There are at least two problems with this theory. First, major rightsholders are never satisfied with the state of copyright law (or how the Internet works today in general)—they constantly seek broader restrictions, higher penalties, and more control over users of creative work. Their displeasure with Section 512 may in fact be a sign that the balance is working just fine. Second, Congress's goal was to ensure that the Internet would be an engine for innovation and expression, not to ensure perfect infringement policing.
By that measure, Section 512, though far from perfect, is doing reasonably well when we consider the ease with which we can distribute knowledge and culture.
Misreading the Balance, Discounting Abuse
Part of the problem may be that the Office fundamentally misconstrues the bargain that Congress struck when it passed the DMCA. The report repeatedly refers to Section 512 as a balance between rightsholders and service providers. But Section 512 is supposed to benefit a third group: the public. We know this because Congress built in protections for free speech, knowing that the DMCA could be abused. Congress knew that Section 512's quick and easy takedown process could result in lawful material being censored from the Internet, without any court supervision, much less advance notice to the person who posted the material, or any opportunity to contest the removal.
To inhibit abuse, Congress made sure that the DMCA included a series of checks and balances. First, it created a counter-notice process that allows for putting content back online after a two-week waiting period. Second, Congress set out clear rules for asserting infringement under the DMCA. Third, it gave users the ability to hold rightsholders accountable if they send a DMCA notice in bad faith. With these provisions, Section 512 creates a carefully crafted system. When properly deployed, it gives service providers protection from liability, copyright owners tools to police infringement, and users the ability to challenge the improper use of those tools.
The Copyright Office's report speaks of the views of online service providers and rightsholders, while paying only lip service to the millions of Internet users who don't identify with either group. That may be what led the Office to give short shrift to the problem of DMCA abuse, complaining that there wasn't enough empirical evidence. In fact, a great deal of evidence was submitted into the record, including a detailed study by Jennifer Urban, Joe Karaganis, and Brianna Schofield. Coming on the heels of a lengthy Wall Street Journal report describing how people use fake DMCA claims to get Google to take news reports offline, the Office's dismissive treatment of DMCA abuse is profoundly disappointing.
Second-Guessing the Courts
An overall theme of the report is that courts all over the country have been misinterpreting the DMCA ever since its passage in 1998. One of the DMCA's four safe harbors covers "storage at the direction of a user." The report suggests that appellate courts "expanded" the DMCA when they concluded, one court after another, that services such as transcoding, playback, and automatically identifying related videos qualify as part of that storage because they are so closely related to it. The report questions another appellate court ruling that peer-to-peer services qualify for protection.
And the report is even more critical of court rulings regarding when a service provider is on notice of infringement, triggering a duty to police that infringement. The report challenges one appellate ruling that requires awareness of facts and circumstances from which a reasonable person would know a specific infringement had occurred. Echoing an argument frequently raised by rightsholders and rejected by courts, the report contends that general knowledge that infringement is happening on a platform should be enough to mandate more active intervention.
What about the subsection of the DMCA that says plainly that service providers do not have a duty to monitor for infringement? The Office concludes that this provision is merely intended to protect user privacy.
The Office also suggests the Ninth Circuit's decision in Lenz v. Universal Music was mistaken. In that case, the appeals court ruled that entities that send takedown notices must consider whether the use they are targeting is a lawful fair use, because failure to do so would necessarily mean they could not have formed a good faith belief that the material was infringing, as the DMCA requires. The Office worries that, if the Ninth Circuit is correct, rightsholders might be held liable for not doing the work even if the material is actually infringing. This is nonsensical—in real life, no one would sue under Section 512(f) to defend unlawful material, even if the provision had real teeth, because doing so would risk being slapped with massive and unpredictable statutory damages for infringement. And the Office's worry is overblown. It is not too much to ask a person wielding a censorship tool as powerful as Section 512, which lets a person take others' speech offline based on nothing more than an allegation, to take the time to figure out if they are wielding that tool appropriately. Given that one Ninth Circuit judge concluded that the Lenz decision actually "eviscerates § 512(f) and leaves it toothless against frivolous takedown notices," it is hard to take rightsholders' complaints seriously—but the Office did.
In short, the Office has taken it upon itself to second-guess the many judges actually tasked with interpreting the law because it does not like their conclusions. Rather than describe the state of the law today and advise Congress as an information resource, it argues for what the law should be per the viewpoint of a discrete special interest. Advocacy for changing the law belongs to the public and their elected officials. It is not the Copyright Office's job, and it sharply undermines any claim the Report might make to a neutral approach.
Mere Allegations Can Mean Losing Internet Access for Everyone on the Account
In order to take advantage of the safe harbor included in Section 512 of the DMCA, companies have to have a "repeat infringer" policy. It's fairly flexible, since different companies have different uses, but the basic idea is that a company must terminate the account of a user who has repeatedly infringed. Perhaps the most famous iteration of this requirement is YouTube's "Three Strikes" policy: if you get three copyright strikes in 90 days on YouTube, your whole account is deleted, all your videos are removed, and you can't create new channels.
Fear of getting to three strikes has not only made YouTubers very cautious, it has created a landscape where extortion can flourish. One troll, for example, would make bogus copyright claims, and then send messages to users demanding money in exchange for withdrawing the claims. When one user responded with a counter-notification—which is what they are supposed to do to get bogus claims dismissed—the troll allegedly "swatted" the user with the information in the counter-notice. And that's just the landscape for YouTube. The Copyright Office's report suggests that the real problem with repeat infringer policies is that courts aren't requiring service providers to create and enforce stricter ones, kicking more people off the Internet.
The Office does suggest that a different approach might be needed for students and universities, because students need the Internet for "academic work, career searching and networking, and personal purposes, such as watching television and listening to music," and students living in campus housing would have no other choice for Internet access if they were kicked off the school's network. But all of us, not just students, use the Internet for work, career building, education, communication, and personal purposes. And few of us could go to another provider if an allegation of infringement kicked us off the ISP we have. Most Americans have only one or two high-speed broadband providers, with a majority of us stuck with a cable monopoly for high-speed access. The Internet is vital to people's everyday lives. To lose access entirely because of an unproven accusation of copyright infringement would be, as the Copyright Office briefly acknowledges, "excessively punitive."
The Copyright Office to the Rescue?
Having identified a host of problems, the Office concludes by offering to help fix some of them. Its offer to provide educational materials seems appropriate enough, though given the skewed nature of the Report itself, we worry that those materials will be far from neutral.
Far more worrisome, however, is the offer to help manufacture an industry consensus on standard technical measures (STMs) to police copyright infringement. According to Section 512, service providers must accommodate STMs in order to receive the safe harbor protections. To qualify as an STM, a measure must (1) have been developed pursuant to a broad consensus in an "open, fair, voluntary, multi-industry standards process"; (2) be available on reasonable and nondiscriminatory terms; and (3) not impose substantial costs on service providers. Nothing has ever met all three requirements, not least because no "open, fair, voluntary, multi-industry standards process" exists. The Office would apparently like to change that, and has even asked Congress for regulatory authority to help make it happen. Trouble is, any such process is far too likely to result in the adoption of filtering mandates. And filtering has many, many issues, such that the Office itself says filtering mandates should not be adopted, at least not now.
The Good News
Which brings us to the good news. The Copyright Office stopped short of recommending that Congress require all online services to filter for infringing content—a dangerous and drastic step it describes with the bland-sounding term "notice and staydown"—or require a system of website blocking. The Office wisely noted that these proposals could have a truly awful impact on freedom of speech. It also noted that filtering mandates could raise barriers to competition for new online services, and entrench today's tech giants in their outsized control over online speech—an outcome that harms both creators and users. And the Office also recognized the limits of its expertise, noting that filtering and site-blocking mandates would require "an extensive evaluation of . . . the non-copyright implications of these proposals, such as economic, antitrust, [and] speech. . . ."
The Can of Worms Is Open
Looking ahead, the most dangerous thing about the Report may be that some Senators are treating its recommendations for "clarification" as an invitation to rewrite Section 512, inviting the exact legal uncertainty the law was intended to eliminate.
Senators Thom Tillis and Patrick Leahy have asked the Office to provide detailed recommendations for how to rewrite the statute – including asking what it would do if it were starting from scratch. Based on the report, we suspect the answer won’t include strong protections for user rights.

Tech Learning Collective: A Grassroots Technology School Case Study (Mon, 01 Jun 2020)
Grassroots education is important for making sure advanced technical knowledge is accessible to communities who may otherwise be blocked or pushed out of the field. By sharing invaluable knowledge and skills, local groups can address and dissolve these barriers for organizers hoping to step up their cybersecurity. The Electronic Frontier Alliance (EFA) is a network of community-based groups across the U.S. dedicated to advocacy and community education at the intersection of the EFA's five guiding principles: privacy, free expression, access to knowledge, creativity, and security.
Tech Learning Collective, a radical queer- and femme-operated group headquartered in New York City, sets itself apart as an apprenticeship-based technology school that integrates its workshops into a curriculum for radical organizers. Its classes range from fundamental computer literacy to hacking techniques and aim to serve students from historically marginalized groups. We corresponded with the collective over email to discuss the history and strategy of the group's ambitious work, as well as how the group has continued to engage its community amid the COVID-19 health crisis. Here are excerpts from our conversation:
What inspired you all to start the Tech Learning Collective? How has the group changed over time?
In 2016, a group of anarchist and autonomist radicals met in Brooklyn, NY, to seek out methods of mutual self-education around technology. Many of us did not have backgrounds in computer technology. What we did have was a background in justice movement organizing at one point or another, whether at the WTO protests before the turn of the century, supporting whistleblowers such as Chelsea Manning, participating in Occupy Wall Street, or in various other campaigns. This first version of Tech Learning Collective met regularly for about a year as a semi-private mutual-education project. It succeeded in sowing the seeds of what would later become several additional justice-oriented technology groups. None of the members were formally trained or have ever held computer science degrees.
Many of the traditional techniques and environments offering technology education felt alienating to us. So, after a (surprisingly short!) period of mutual self-education, we began offering free workshops and classes on computer technologies specifically for Left-leaning politically engaged individuals and groups. Our goal was to advocate for more effective use of these technologies in our movement organizing. We quickly learned that courses needed to cater to people with skill levels ranging from self-identified "beginners" to very experienced technologists, and that our efforts needed to be self-sustaining. Partly, this was because many of our comrades had sworn off technical self-sufficiency as a legitimate avenue for liberation in a misguided but understandable reaction to the poisonous prevalence of machismo, knowledge grandstanding, and blatant sociopathy they saw exhibited by the overwhelming majority of "techies." It was obvious that our trainers needed to exemplify a totally new culture to show them that cyber power, not just computer literacy, was a capability worth investing their time in for the sake of the movement.
Tech Learning Collective's singular overarching goal is to provide its students with the knowledge and abilities to liberate their communities from corporate and government overseers, especially as it relates to owning and operating their own information and communications infrastructures, which we view as a necessary prerequisite for meaningful revolutionary actions. Using these skills, our students assist in the organization of activist work like abortion access and reproductive rights, anti-surveillance organizing, and other efforts that help build collective power beyond mere voter representation.
Who is your target audience?
Anyone who is serious about gaining the skills, knowledge, and power they need to materially improve the lives of their community, neighbors, and friends and who also shares our pro-social values is welcome at our workshops and events. Importantly, this means that self-described "beginners" are just as welcome at our events as very experienced technologists, and we begin both our materials and our methodology at the actual beginning of computer foundations...
We know what it's like to wade into the world of digital security as a novice because we've all done it at one point or another. We felt confounded or overwhelmed by the vast amount of information suddenly thrown at us. Worse, much of this information purported to be "for beginners", making us feel even worse about our apparent inability to understand it. "Are we just stupid?", we often asked ourselves. You are not stupid. [...] We insist that you can understand this stuff.
The TLC is incredibly active, with an impressive 15 events planned for June. How does your group share this workload and avoid burnout among collective members?
There are three primary techniques we use to do this. These will be familiar to anyone who has ever worked in an office or held a position in management. They are automation, separation of concerns, and partnerships. After all, just because we are anti-capitalist does not mean we ignore the obviously effective tools and techniques we have at our disposal for realizing our goals.
The first pillar, automation, is really what we are all about. It's what almost all of our classes teach in one form or another. In a Tech Learning Collective class, you will often hear the phrase, "If you ever do one thing on a computer twice, you've made a mistake the second time." This is a reminder that computers were built for automation. That's what they're for. So, almost every component of Tech Learning Collective's day-to-day operations is automated. [...] The only time a human needs to be involved is when another human wants to talk to us. Otherwise, the emails you're getting from us were written many months ago and are being generated by scripts and templates. Without that, we would need to at least double, if not triple or quadruple, the number of people who could devote many hours to managing the logistics of making sure events happen. But that's boring, tedious, repetitive work, and that's what computers are for.
Secondly, separation of concerns: this is both a management and a security technique. In InfoSec, we call this the compartmentalization principle. You might be familiar with it as "need to know," and it states that only the people who need to be concerned with a certain thing should have to spend any brainpower on it in the first place, or indeed have any access to it at all.
This means that when one of our teachers wants to host a workshop, they don't need to involve anyone else in the collective. They are autonomous, free to act however they wish within the limits of their role. This makes it possible for our collective members to dip in and out whenever they need to, thus avoiding burnout while increasing quality. If one of us has to step away for a while, the collective can still function smoothly.
Finally, partnerships allow us to do things we could not do on our own. This also helps distribute the overarching workload, like creating practice labs or writing educational materials for new workshops. We work extremely closely with a number of other groups [since] our core collective members straddle several other activist and educational collectives.
At the time of writing, we are in the middle of the COVID-19 health crisis. Many groups are struggling with shelter-in-place, but fortunately TLC seems to have adapted very well. What are some strategies you are employing to continue your work?
This is almost an unfair question, because the nature of what we do at Tech Learning Collective lends itself well to the current crises. The biggest change that the COVID-19 pandemic has forced us to adapt to is the shuttering of our usual venues for in-person workshops. Fortunately, we were already ramping up our online and distance learning options even before the pandemic. So we simply put that into high gear. The easily automatable nature of handling logistics for online events also made it possible to do many more of them, which is one reason you're seeing so much more activity from us these days.
In certain ways, for many in our collective, this "new normal" is actually a rather dated '90s-era cyberpunk dystopia that we've been experiencing for many, many years. In that sense, we're happy that you don't have to enter this reality alone and defenseless. We kinda built Tech Learning Collective for exactly this scenario. We want to help you thrive here.
Finally, what does the future look like for TLC?
We're not sure! When we started TLC, we never thought it would end up becoming an online, international, radical political hacker school. In just the last two months since we've been forced to become a wholly virtual organization, we've held classes with students from Japan, Italy, New Zealand, the UK, Mexico, and beyond, as well as many parts of the United States, of course. Many of them are now repeat participants working their way through our entire curriculum, which is the best compliment we could have asked for. We hope they'll stick around to join our growing alumni community after that. We're also (slowly) expanding our "staff" outside of New York City, which isn't something we thought would happen for many years, if at all.
But right now, we're primarily focused on moving the rest of our in-person curriculum online and creating new online workshops. Many of the workshops unveiled this month or planned for next month are new, like our workshops on writing shell scripts, exploiting Web applications, auditing firewalls and other network perimeter defenses, and an exciting "spellwork" workshop to learn about the "spirits" that live on in the magical place inside every computer called the Command Line. So in the near future, expect to see more workshops like these, as well as more of our self-paced "Foundations" learning modules that you can try out anytime for free right in your Web browser from our Web site. After that? Well, some say another world is possible.
We're hackers. Hacking is about showing people what's possible, especially if they insist it could never happen.
Our thanks to Tech Learning Collective for their continued efforts to bring an empowering technology education to marginalized peoples in New York City and, increasingly, around the world. You can find and support other Electronic Frontier Alliance-affiliated groups near you by visiting eff.org/fight. If you are interested in holding workshops for your community, you can find freely available workshop materials at EFF's Security Education Companion and security guides from our Surveillance Self-Defense project. Of course, you can also connect to similar groups by joining the Electronic Frontier Alliance.

Trump’s Executive Order Seeks To Have FCC Regulate Platforms. Here’s Why It Won’t Happen (Mon, 01 Jun 2020)
This is one of a series of blog posts about President Trump's May 28 Executive Order. Other posts are here, here, here, and here.
The inaptly named Executive Order on Preventing Online Censorship seeks to insert the federal government into private Internet speech in several ways. Through Section 2 of the Executive Order (EO), the president has attempted to demand the start of a new administrative rulemaking. Despite the ham-fisted language, such a process can't come into being, no matter how much someone might wish it. The EO attempts to enlist the Secretary of Commerce and the Attorney General to draft a rulemaking petition to the Federal Communications Commission (FCC) that asks that independent agency to interpret 47 U.S.C. § 230 ("Section 230"), a law that underlies much of the architecture for the modern Internet. Quite simply, this isn't allowed.
Specifically, the petition will ask the FCC to examine:
"(i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider's responsibility for its own editorial decisions;
"(ii) the conditions under which an action restricting access to or availability of material is not "taken in good faith" within the meaning of subparagraph (c)(2)(A) of section 230, particularly whether actions can be "taken in good faith" if they are:
"(A) deceptive, pretextual, or inconsistent with a provider's terms of service; or
"(B) taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard; and
"(iii) any other proposed regulations that the NTIA concludes may be appropriate to advance the policy described in subsection (a) of this section."
There are several significant legal obstacles to this happening. First, the FCC has no regulatory authority over the platforms the president wishes the agency to regulate. The FCC is a telecommunications/spectrum regulator, and only the communications infrastructure industry (companies such as AT&T, Comcast, and Frontier, as well as the airwaves) is subject to the agency's regulatory authority. This is the position of both the current, Trump-appointed FCC Chair and the courts that have considered the question. In fact, this is why the issue of net neutrality is legally premised on whether or not broadband companies are telecommunications carriers. While that question, whether broadband providers are telecommunications carriers under the law, is one where we disagree with current FCC leadership, neither this FCC nor any previous one has taken the position that social media companies are telecommunications carriers. So to implement regulations targeting social media companies, the FCC would have to explain how—under what legal authority—it is allowed to issue such regulations. We don't see it doing so.
But say the FCC ignores this likely fatal flaw and proceeds anyway. The EO triggers a long and slow process that is unlikely to be completed, much less one that results in an enforcement action, this year.
That process will involve a Notice of Proposed Rulemaking (NPRM), with the FCC issuing a statement explaining its rationale for regulating these companies, what authorities it has to regulate them, and the possible regulations the FCC intends to produce. The commission must then solicit public comment in response to its statement. The process also involves public comment periods and agreement by a majority of FCC Commissioners on the regulations they want to issue. Absent a majority, nothing can be issued and the proposed regulations effectively die from inaction. If a majority of FCC Commissioners do agree and move forward, a lawsuit will inevitably follow to test the legal merits of the FCC's decision, both on whether the government followed the proper procedures in issuing the regulation and whether it has the legal authority to issue rules in the first place. Needless to say, the EO has initiated a long and uncertain process, one that certainly will not be completed before the November election, if ever.

California Cops Can No Longer Pass the Cost of Digital Redaction onto Public Records Requesters (Mon, 01 Jun 2020)
At a dark time when the possibility of police accountability seems especially bleak, there is a new glimmer of light courtesy of the California Supreme Court. Under a new ruling, government agencies cannot pass the cost of redacting police body-camera footage and other digital public records onto the members of the public who requested them under the California Public Records Act (CPRA).
The case, National Lawyers Guild vs. Hayward, was brought by civil rights groups against the City of Hayward after they filed requests for police body-camera footage related to protests on UC Berkeley's campus following the deaths of Eric Garner and Michael Brown. Hayward Police agreed to release the footage, but not before assessing nearly $3,000 in fees for redacting and editing the footage, which they claimed NLG needed to pay before they'd release the video.
The California Supreme Court sided with NLG, as well as the long list of transparency advocates and news organizations that filed briefs in the case. The court ruled that: "Just as agencies cannot recover the costs of searching through a filing cabinet for paper records, they cannot recover comparable costs for electronic records. Nor, for similar reasons, does 'extraction' cover the cost of redacting exempt data from otherwise producible electronic records." The court further acknowledged that such charges "could well prove prohibitively expensive for some requesters, barring them from accessing records altogether." This is an unqualified victory for government transparency.
So what does this mean in practical terms for public records requesters? As people march against police violence across the Golden State, many members of the press and non-profits will likely use the CPRA to obtain evidence of police breaking the law or otherwise violating people's civil rights. These videos can prove to be invaluable records of police activity and misconduct, though they can also capture individuals suffering medical emergencies, violence, and other moments of distress. The CPRA attempts to balance these and other interests by allowing public agencies to redact personally identifying details and other information while still requiring that the videos be made public.
So when making a request for body-camera footage, the first thing requesters should know is that sometimes the individuals handling public records requests are not keeping up with legal decisions, particularly a decision issued just last week. To preempt these misinterpretations of the law, requesters could consider including a line in their letters that says something like: "Pursuant to NLG vs. Hayward, S252445 (May 28, 2020), government agencies may not charge requesters for the cost of redacting or editing body-worn camera footage."
More broadly, the decision's reasoning doesn't just apply to body-camera footage, but to all digital records. The court's ruling recognizes that because the CPRA already prohibits agencies from charging requesters for redacting non-digital records, that same prohibition applies to digital records. So, in requests for electronic information, such as emails or datasets, you could include the line: "Pursuant to NLG vs. Hayward, S252445 (May 28, 2020), government agencies may not charge requesters for the cost of redacting digital records."
Additionally, people filing CPRA requests for digital records should know that the law does permit agencies to charge for the costs of duplicating records, though in the case of digital records that cost should be no more than the price of the media the copy is written to; in NLG's case, it was $1 for a USB memory stick. The CPRA also permits agencies, in certain narrow circumstances, to charge for staff time spent programming or extracting data to respond to a public records request. The good news is that the California Supreme Court's decision last week significantly narrowed the circumstances under which an agency can claim these costs and pass them along to requesters.
According to the court, data "extraction" under the CPRA "refers to a particular technical process—a process of retrieving data from government data stores—when this process is" required to produce a record that can be released. The court said the provision would permit charges when, for example, a request for demographic data of state employees requires an agency to pull that data from a larger human resources database. But "extraction" does not cover the time spent searching for responsive records, such as when an official has to search through email correspondence or a physical file cabinet. Requesters should thus be prepared to push back on any agency claims that seek to assess charges for merely searching for responsive records. And requesters should also be on the lookout for exorbitant charges associated with data "extraction" even when the CPRA permits it, as such work in practice can amount to little more than a database query or formula.

Don’t Mix Policing with COVID-19 Contact Tracing (Mon, 01 Jun 2020)
Over the weekend, Minnesota's Public Safety Commissioner analogized COVID-19 contact tracing to police investigation of arrested protesters. This analogy is misleading and dangerous. It also underlines the need for public health officials to practice strict data minimization—including a ban on sharing with police any personal information collected through contact tracing.
On May 30, at a press conference about the ongoing protests in Minneapolis against racism and police brutality, Commissioner John Harrington stated:
As we've begun making arrests, we have begun analyzing the data of who we have arrested, and begun, actually, doing what you would think as almost pretty similar to our COVID. It's contact tracing. Who are they associated with? What platforms are they advocating for?
We strongly disagree. Contact tracing is a public health technique used to protect us from a deadly pandemic. In its traditional manual form (not to be confused with automated contact tracing apps), contact tracing involves interviews of people who have been infected, to ascertain who they have been in contact with, in order to identify other infected people before they infect still more people.
On the other hand, interrogating arrested protesters about their beliefs and associations is a longstanding police practice. So is social media surveillance by police of dissident movements. These practices must be carefully restricted, lest they undermine our First Amendment rights to associate, assemble, and protest, and our Fourth Amendment rights to be free from unreasonable searches and seizures. We have similar concerns about a notorious practice that the NSA calls "contact chaining": automated analysis of communications metadata in order to identify connections between people.
Any blurring of police work with contact tracing can undermine public health. In prior outbreaks, people who trusted public health authorities were more likely to comply with containment efforts. On the other hand, a punitive approach to containment can break that trust. For example, people may avoid testing if they fear the consequences of a test result.
Thus, we must ensure strict data minimization in COVID-19 contact tracing. At a minimum, this means that police must have no access to any personal information collected by public health officials in the course of interviewing COVID-19 patients about their movements, activities, and associations. People are less likely to cooperate with contact tracing if they fear the consequences. For this reason, EFF also opposes police access to the home addresses of COVID-19 patients.
Of course, there is much more to data minimization. Public health officials conducting COVID-19 contact tracing must collect as little personal information as possible for containment purposes. For example, they don't need to know about a patient's movements months earlier, because COVID-19 patients are only infectious for 14 days. Public health officials must delete the personal information they collect as soon as it is no longer helpful to contact tracing. This may be a very short retention period, given the very short infectiousness period. Public health officials must not disclose this information to other entities, especially if those entities are likely to use the information for anything other than contact tracing. For example, they must not disclose this information to police departments, immigration enforcement agencies, or intelligence services.
When corporations assist with contact tracing, they must abide by data minimization rules, too. For example, they must not be allowed to use it for targeted advertising, or to monetize it in any other manner. We need new laws to guarantee such data minimization, not just for contact tracing, but for all COVID-19 responses that gather personal information. Finally, we must not allow police to “COVID wash” controversial police practices. Manual contact tracing is a public health measure that many people view as necessary and proportionate to the ongoing public health crisis. On the other hand, when police investigate the political beliefs and associations of protesters, whether by interrogation or social media snooping, abuse often follows. The misplaced analogy between these two very different practices can unduly blunt justified criticisms of police responses to protesters.

Dangers of Trump’s Executive Order Explained (Mon, 01 Jun 2020)
This is one of a series of blog posts about President Trump's May 28 Executive Order. Links to other posts are below.
The inaptly named Executive Order on Preventing Online Censorship (EO) is a mess on many levels: it's likely unconstitutional on several grounds, built on false premises, and bad policy to boot. We are no fans of the way dominant social media platforms moderate user content. But the EO, and its clear intent to retaliate against Twitter for marking the president's tweets for fact-checking, demonstrates that governmental mandates are the wrong way to address concerns about faulty moderation practices.
The EO contains several key provisions. We will examine them in separate posts linked here:
1. The FCC rule-making provision
2. The misinterpretation of and attack on Section 230
3. Threats to pull government advertising
4. Review of unfair or deceptive practices
Although we will focus on the intended legal consequences of the EO, we must also acknowledge the danger the Executive Order poses even if it is just political theater and never has any legal effect. The mere threat of heavy-handed speech regulation can inhibit speakers who want to avoid getting into a fight with the government, and deny readers information they want to receive. The Supreme Court has recognized that "people do not lightly disregard public officers' thinly veiled threats" and thus even "informal contacts" by government against speakers may violate the First Amendment. The EO's threats to free expression and retaliation for constitutionally protected editorial decisions by a private entity are not even thinly veiled: they should have no place in any serious discussion about concerns over the dominance of a few social media companies and how they moderate user content.
That said, we too are disturbed by the current state of content moderation on the big platforms. So, while we firmly disagree with the EO, we have been highly critical of the platforms' failure to address some of the same issues targeted in the EO's policy statement, specifically: first, that users deserve more transparency about how, when, and how much content is moderated; second, that decisions often appear inconsistent; and, third, that content guidelines are often vague and unhelpful. Starting long before the president got involved, we have said repeatedly that the content moderation system is broken and called for platforms to fix it. We have documented a range of egregious content moderation decisions (see our onlinecensorship.org, Takedown Hall of Shame, and TOSsed Out projects). We have proposed a human rights framing for content moderation called the Santa Clara Principles, urged companies to adopt it, and then monitored whether they did so (see our 2018 and 2019 Who Has Your Back reports).
But we have rejected government mandates as a solution, and this EO demonstrates why it is indeed the wrong approach. In the hands of a retaliatory regime, government mandates on speech will inevitably be used to punish disfavored speakers and platforms, and for other oppressive and repressive purposes. Those decisions will disproportionately impact the marginalized. Regardless of the dismal state of content moderation, it is truly dangerous to put the government in control of online communication channels.
The EO requires the Attorney General to "develop a proposal for Federal legislation that would be useful to promote the policy objectives of this order." This is a dangerous idea generally because it represents another unwarranted government intrusion into private companies' decisions to moderate and curate user content. But it's a particularly bad idea in light of the current Attorney General's very public animus toward tech companies and their efforts to provide Internet users with secure ways to communicate, namely through end-to-end encryption. Attorney General William Barr already has plenty of motivation to break encryption, including through the proposed EARN IT Act; the EO's mandate gives Barr more ammunition to target Internet users' security and privacy in the name of promoting some undefined "neutrality."
Some have proposed that the EO is simply an attempt to bring some due process and transparency to content moderation. However, our analysis of the various parts of the EO illuminates why that's not true.
What about Competition?
For all its bluster, the EO doesn't address one of the biggest underlying threats to online speech and user rights: the concentration of power in a few social media companies. If the president and other social media critics really want to ensure that all voices have a chance to be heard, if they are really concerned that a few large platforms have too much practical power to police speech, the answer is not to create a new centralized speech bureaucracy, or promote the creation of fifty separate ones in the states. A better and actually constitutional option is to reduce the power of the social media giants and increase the power of users by promoting real competition in the social media space. This means eliminating the legal barriers to the development of tools that will let users control their own Internet experience. Instead of enshrining Google, Facebook, Amazon, Apple, Twitter, and Microsoft as the Internet's permanent overlords, and then striving to make them as benign as possible, we can fix the Internet by making Big Tech less central to its future.
The Santa Clara Principles provide a framework for making content moderation at scale more respectful of human rights. Promoting competition provides a way to make the problems caused by content moderation by the big tech companies less important. Neither of these seems likely to be accomplished by the EO. But the chilling effect the EO will likely have on hosts of speech, and, consequently, the public—which relies on the Internet to speak out and be heard—is very real.

Black Lives Matter, Online and in the Streets: Statement from EFF in the Wake of the Police Killings of Breonna Taylor and George Floyd (Mon, 01 Jun 2020)
Black lives matter on the streets. Black lives matter on the Internet.
EFF stands with the communities mourning the victims of police homicide. We stand with the protesters who are plowed down by patrol cars. We stand with the journalists placed in handcuffs or fired upon while reporting these atrocities. And we stand with all those using their cameras, phones, and digital tools to make sure we cannot turn away from the truth.
There is no doubt that we are in deeply troubled times. From lockdown in our homes, many of us are watching with heart-stopping horror as the cellphone footage of extreme police violence washes down our feeds. Others feel compelled to join the protests in person and to bear witness and document it for the rest of us over the digital networks that connect us all. The president is sowing chaos through incendiary, authoritarian orders and pressure toward more violence, sometimes through official channels, more often on Twitter. The build-up of even more sophisticated mass and targeted surveillance tools in the hands of American law enforcement, and the erosion of local control and protections against misuse, have all been normalized over the past two decades. Now the pandemic management technology being pushed by some tech companies and governments over the last few months is primed to be deployed as a massive new surveillance and control apparatus.
With all of it, who will feel the brunt of the harm? Black lives.
Our hearts are breaking for our Black coworkers, neighbors, and friends, who have suffered from this trauma for far too long. While we've seen small rays of hope from a few in government and law enforcement, they are overshadowed by the rest. Overall, we are enraged at the response by the police all over the country to these protests, doubling down on abusive tactics and ramping up tracking of protesters. And we are worried about what's coming: protest movements often bring out the worst in constitutional abuse. We've seen police surveillance tools grow and metastasize, with law enforcement officials specifically targeting the Black-led movement to end racist police violence.
To the protesters and reporters on the front lines, we urge you to stay safe, both physically and digitally. Our Surveillance Self-Defense Guide for protesters was designed for these situations. We'll be updating it as we see new tactics, strategies, and tools. To our racial justice, economic justice, and environmental justice allies, EFF is here to help whenever you need some hands who understand technology and the law. To the reporters from struggling newsrooms: we are available to answer your questions and we have resources to help you. And to everyone, we pledge to redouble our efforts to beat back police surveillance and abuse, and to build and protect the tools that allow you to organize, assemble, and speak securely and without censorship.

Immunity Passports Are a Threat to Our Privacy and Information Security (Fri, 29 May 2020)
With states beginning to ease shelter-in-place restrictions, the conversation on COVID-19 has turned to questions of when and how we can return to work, take kids to school, or plan air travel. Several countries and U.S. states, including the UK, Italy, Chile, Germany, and California, have expressed interest in so-called “immunity passports”—a system of requiring people to present supposed proof of immunity to COVID-19 in order to access public spaces, work sites, airports, schools, or other venues. In many proposed schemes, this proof would be stored in a digital token on a phone. Immunity passports would threaten our privacy and information security, and would be a significant step toward a system of national digital identification that can be used to collect and store our personal information and track our location. Immunity passports are purportedly intended to help combat the spread of COVID-19. But there is little evidence that they would actually accomplish that. On a practical level, there is currently no test for COVID-19 immunity; what we have are antibody tests. But we don’t know whether people with antibodies have immunity. Meanwhile, there has been a flood of flawed tests and fraudulent marketing schemes about antibody tests. Even when validated tests are widely available, they may not be 100 percent accurate. The system should be a non-starter unless it can guarantee due process for those who want to challenge their test results. This has been a problem before; as we saw with the “no-fly” lists created after 9/11, it is very difficult to get off the list, even for those whose inclusion was a mistake. The problem with immunity passports isn’t just medical—it’s ethical. Access to both COVID-19 testing and antibody testing is spotty. Reports abound of people who fear they have been infected desperately trying to get tested, to no avail. Analysis has shown that African Americans are far less likely than white, Hispanic, or Asian patients to be tested before they end up in the emergency room. Mobile testing sites administered by Verily (a subsidiary of Google’s parent Alphabet) require people to have a smartphone and a Google account. Residents in San Francisco’s Tenderloin district, one of the city’s poorest neighborhoods, were turned away from testing sites because they didn’t have cell phones. Requiring smartphone-based immunity verification to access public spaces like offices and schools would exacerbate existing inequities and reinforce a two-tiered system of the privileged, who can move about freely in society, and the vulnerable, who can’t work, shop, or attend school because they don’t have a cell phone or access to testing. We’ve been here before. When yellow fever struck the South in the 1850s, those thought to be “unacclimated” to the disease were unemployable. This burdened Black and lower-income people more than privileged members of society. As we saw then, conditioning access to society on immunity incentivizes “bug-chasing”—that is, people deliberately trying to get sick in order to get the immunity passport. No one should have to expose themselves to a potentially deadly disease with no cure in order to find work.
Risks of Digitized Immunity Passports
The push for immunity passports has largely been premised on the promise of technological solutions to a public health crisis. A proposed bill in California, for example, would use blockchain technology to facilitate an immunity passport system on people’s smartphones. We oppose this bill.
Technological advancements such as blockchain, or other methods of implementation, do not address our objections to this type of system in and of itself. Moreover, digital-format immunity passports could normalize digital-format proof-of-status documents more generally. Advocates of immunity passports visualize a world where we can’t pass through a door to a workplace, school, or restaurant until the gatekeeper scans our credentials. This would habituate gatekeepers to demand such status credentials, and habituate the public to submit to these demands. This digital system could easily be expanded to check not just a person’s immunity status, but any other bit of personal information that a gatekeeper might deem relevant, such as age, pregnancy, HIV status, or criminal history. The system could also be adjusted to document not just a particular person’s status, but also when that person passed through a door that required proof of such status. And all data of all such passages could be accumulated into one database. This would be a troubling step towards digital national identification, which EFF has long opposed because it would create new ways to digitally monitor our movements and activities. Digital-format documentation also brings the risk of having to present it, under duress, to various authorities. Handing over your phone to police, unlocked or not, carries significant risks, especially for people in vulnerable communities—risks that could lead to unintended consequences for the presenter and a potential abuse of power by law enforcement. Moreover, requiring people to store their medical test results in a digital format would expose private medical information to the danger of data breaches. Again, this is hardly new—we have seen exactly these types of breaches in the past when medical information has been digitized and collected. Just last year, for example, an HIV database in Singapore leaked the personal information of more than 14,000 individuals living with HIV. We should learn from our past mistakes, and ensure that technology works to empower people, instead of creating new vulnerabilities.

Watch EFF Cybersecurity Director Eva Galperin's TED Talk About Stalkerware (Fri, 29 May 2020)
Stalkers and abusive partners want access to your device for the same reason governments and advertisers do: because “full access to a person's phone is the next best thing to full access to a person's mind,” as EFF Director of Cybersecurity Eva Galperin explains in her TED talk on “stalkerware” and her efforts to end the abuse this malicious software enables. [Embedded video: https://www.youtube.com/embed/xzWFrHHTrs8] After years of studying how nation-state actors use advanced malware to spy on journalists, activists, lawyers, scientists, and others who practice dissent and advocacy, Galperin shifted her focus to how stalkers, abusive partners, and exes use advanced malware to spy on and manipulate their victims. Of the companies that market stalkerware—sometimes under the guise of child safety or employee-monitoring software—Galperin told the TED audience, “Do these companies know that their tools are being used as tools of abuse? Absolutely.” We call on antivirus companies to recognize stalkerware for what it is: malicious technology with no acceptable use case. With groups like the Coalition Against Stalkerware, Galperin and EFF are leading the fight to educate users and push antivirus companies to “change the norm” around how they treat this technology to prevent abuse and protect victims.

Trump Executive Order Misreads Key Law Promoting Free Expression Online and Violates the First Amendment (Thu, 28 May 2020)
This post based its initial analysis on a draft Executive Order. It has been updated to reflect the final order, available here. President Trump’s Executive Order targeting social media companies is an assault on free expression online and a transparent attempt to retaliate against Twitter for its decision to curate (well, really just to fact-check) his posts and deter everyone else from taking similar steps. The good news is that, assuming the final order looks like the draft we reviewed on Wednesday, it won’t survive judicial scrutiny. To see why, let’s take a deeper look at its incorrect reading of Section 230 (47 U.S.C. § 230) and how the order violates the First Amendment.
The Executive Order’s Error-Filled Reading of Section 230
The main thrust of the order is to attack Section 230, the law that underlies the structure of our modern Internet and allows online services to host diverse forums for users’ speech. These platforms are currently the primary way that the majority of people express themselves online. To ensure that companies remain able to let other people express themselves online, Section 230 grants online intermediaries broad immunity from liability arising from publishing another’s speech. It contains two separate and independent protections. Subsection (c)(1) shields from liability all traditional publication decisions related to content created by others, including editing, and decisions to publish or not publish. It protects online platforms from liability for hosting user-generated content that others claim is unlawful. For example, if Alice has a blog on WordPress, and Bob accuses Clyde of having said something terrible in the blog’s comments, Section 230(c)(1) ensures that neither Alice nor WordPress is liable for Bob’s statements about Clyde. The subsection would also protect Alice and WordPress from claims by Bob if Alice removed his comment. Subsection (c)(2) is an additional and independent protection from legal challenges brought by users when platforms decide to edit or to not publish material they deem to be obscene or otherwise objectionable. Unlike (c)(1), (c)(2) requires that the decision be in “good faith.” In the context of the above example, (c)(2) would protect Alice and WordPress when Alice decides to remove a term within Bob’s comment that she considers to be offensive. Bob cannot successfully sue Alice for that editorial action as long as Alice acted in good faith. The legal protections in subsections (c)(1) and (c)(2) are completely independent of one another. There is no basis in the language of Section 230 to condition (c)(1)’s immunity on platforms obtaining immunity under (c)(2). And courts, including the U.S. Court of Appeals for the Ninth Circuit, have correctly interpreted the provisions as distinct and independent liability shields: Subsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties. Subsection (c)(2), for its part, provides an additional shield from liability, but only for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider ... considers to be obscene ...
or otherwise objectionable.” Even though neither the statute nor the court opinions that interpret it mush these two Section 230 provisions together, the order asks the Federal Communications Commission to start a rulemaking and consider linking the two provisions’ liability shields. The order asks the FCC to consider whether a finding that a platform failed to act in “good faith” under subsection (c)(2) also disqualifies the platform from claiming immunity under subsection (c)(1). In short, the order tasks government agencies with defining “good faith” and eventually deciding whether any platform’s decision to edit, remove, or otherwise moderate user-generated content meets it, upon pain of losing access to all of Section 230’s protections. Should the order result in FCC rules interpreting 230 that way, a platform’s single act of editing user content that the government doesn’t like could result in losing both kinds of protections under 230. This essentially works as a trigger to remove Section 230’s protections entirely from any host of content that someone disagrees with. But the impact of that trigger would be much broader than simply being liable for the moderation activities purportedly done in bad faith: once a platform was deemed not in good faith, it could lose (c)(1) immunity for all user-generated content, not just the triggering content. This could result in platforms being subjected to a torrent of private litigation for thousands of completely unrelated publication decisions.
The Executive Order’s First Amendment Problems
Taking a step back, the order purports to give the Executive Branch and federal agencies powerful leverage to force platforms to publish what the government wants them to publish, on pain of losing Section 230’s protections. But even if Section 230 permitted this, and it doesn’t, the First Amendment bars such intrusions on editorial and curatorial freedom. The Supreme Court has consistently upheld the right of publishers to make these types of editorial decisions. While the order faults social media platforms for not being purely passive conduits of user speech, the Court derived the First Amendment right from that very feature. In its 1974 decision in Miami Herald Co. v. Tornillo, the Court explained: A newspaper is more than a passive receptacle or conduit for news, comment, and advertising. The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials -- whether fair or unfair -- constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. Courts have consistently applied this rule to social media platforms, including the Ninth Circuit’s recent decision in Prager U v. Google and a decision yesterday by the U.S. Court of Appeals for the District of Columbia Circuit in a case brought by Freedom Watch and Laura Loomer against Google. In another case, a court ruled that when online platforms “select and arrange others’ materials, and add the all-important ordering that causes some materials to be displayed first and others last, they are engaging in fully protected First Amendment expression—the presentation of an edited compilation of speech generated by other persons.” And just last term in Manhattan Community Access v.
Halleck, the Supreme Court rejected the argument that hosting the speech of others negated these editorial freedoms. The court wrote, “In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.” It went on to note that “Benjamin Franklin did not have to operate his newspaper as ‘a stagecoach, with seats for everyone,’” and that “The Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property.” The Supreme Court also affirmed that these principles applied “[r]egardless of whether something ‘is a forum more in a metaphysical than in a spatial or geographic sense.’” EFF filed amicus briefs in Prager U and Manhattan Community Access, urging that very result. These cases thus foreclose the President’s ability to intrude on platforms’ editorial decisions and to transform them into public forums akin to parks and sidewalks. But even if the First Amendment were not implicated, the President cannot use an order to rewrite an act of Congress. In passing 230, Congress did not grant the Executive the ability to make rules for how the law should be interpreted or implemented. The order cannot arrogate to the President power that Congress has not given. We should see this order in light of what prompted it: the President’s personal disagreement with Twitter’s decisions to curate his own tweets. Thus, despite the order’s lofty praise for “free and open debate on the Internet,” this order is in no way based on a broader concern for freedom of speech and the press. Indeed, this Administration has shown little regard, and much contempt, for freedom of speech and the press. We’re skeptical that the order will actually advance the ideals of freedom of speech or be justly implemented. There are legitimate concerns about the current state of online expression, including how a handful of powerful platforms have centralized user speech to the detriment of competition in the market for online services and users’ privacy and free expression. But the order announced today doesn’t actually address those legitimate concerns, and it isn’t the vehicle to fix those problems. Instead, it represents a heavy-handed attempt by the President to retaliate against an American company for not doing his bidding. It must be stopped.

EFF to Court: Broadband Privacy Law Passes First Amendment Muster (Thu, 28 May 2020)
When it comes to surveillance of our online lives, Internet service providers (ISPs) are some of the worst offenders. Last year, the state of Maine passed a law targeted at the harms ISPs do to their customers when they use and sell their personal information. Now that law is under attack from a group of ISPs who claim it violates their First Amendment rights. The lawsuit raises a number of issues—including free speech and data privacy—that are crucial to maintaining an open Internet. So EFF filed an amicus brief arguing that Maine’s law does not violate the First Amendment. The brief explains that the law’s requirement that ISPs obtain their customers’ opt-in consent before using or disclosing their personal information is narrowly tailored to the state’s substantial interests in protecting ISP customers’ data privacy, free speech, and information security. The case is called ACA Connects v. Frey. We were joined by three other groups dedicated to both free speech and data privacy on the Internet: the ACLU, the ACLU of Maine, and the Center for Democracy and Technology.
Why EFF Supports Broadband Privacy Laws
ISPs have distinctive powers to surveil our online lives. We can’t get to the Internet without an ISP, and most Americans don’t have a choice among ISPs, so they cannot switch if they are unhappy with their current provider. ISPs can see everything that travels back and forth between our devices and the Internet. Even when we encrypt our web traffic to protect the content, ISPs can see our metadata, such as which web servers we visit. ISPs have a long and troubling history of abusing their distinctive powers to intrude on our online privacy. In response, the FCC adopted broadband privacy rules in 2016, with EFF support. Unfortunately, Congress and the President repealed these FCC rules in 2017, over EFF opposition. So the battle for broadband privacy moved to the states. EFF supports broadband privacy bills around the country, and did so in Maine. Maine enacted its broadband privacy law in 2019. It goes into effect in July 2020. The law requires ISPs to obtain a consumer’s opt-in consent before using or disclosing what the law calls “customer personal information.” This term is defined to include (1) personally identifying information, such as a customer’s billing information and social security number, and (2) information derived from a customer’s use of broadband service, such as browsing history, geolocation, and health information. The law also bars “pay for privacy” schemes; that is, ISPs cannot punish consumers who withhold their consent, by refusing service, charging a penalty, or withholding a discount. In February 2020, a consortium of ISPs filed a lawsuit against the State of Maine. They argue, among other things, that Maine’s broadband privacy law violates the First Amendment rights of ISPs. We disagree.
Why Maine’s Law Passes First Amendment Muster
EFF is second to none in working to protect free speech on the Internet for all the people of the world. We recognize that the Maine law limits the expression of ISPs: the law regulates how ISPs create and disseminate information, which is speech within the meaning of the First Amendment. But not all government limits on expression deserve the highest level of First Amendment protection from courts. Here, given the particular relationship between ISPs and their customers, a reduced level of protection is appropriate.
First, the Maine law regulates commercial speech, which the Supreme Court has described as “expression related solely to the economic interests of the speaker and its audience,” in an opinion called Central Hudson Gas v. New York (1980). Second, the speech regulated by Maine does not concern a “public issue,” in the words of a Supreme Court opinion called Dun & Bradstreet v. Greenmoss Builders (1985). In cases involving speech regulations with these characteristics, courts enforce a slightly relaxed form of First Amendment protection known as intermediate scrutiny. The government must show (1) it has a “substantial interest” that it is seeking to achieve through the law, and (2) the law “directly advances,” and is “narrowly drawn” to, this interest. But the government need not show the challenged law is “the least restrictive means” to achieve the government’s interests. This analysis is not changed by Sorrell v. IMS Health (2011), a Supreme Court decision that struck down a Vermont law that regulated use and disclosure of pharmacy prescription information, but only as to a narrow set of speakers (drug sellers) and a narrow message (marketing of brand-name drugs). The Maine law, on the other hand, does not discriminate on the basis of viewpoint, and it applies uniformly to an entire tech sector.
Maine Has Substantial Interests In Protecting Users’ Privacy, Speech, and Information Security
Here, Maine’s broadband privacy law advances three substantial government interests. First, our privacy over our personal information is a fundamental human right. We should have a say in how others process data about us. New technologies make it increasingly easy for businesses to harvest and monetize vast amounts of our personal information. Second, our freedom of speech often relies on conversational privacy. As the Supreme Court explained in Bartnicki v. Vopper (2001), “the fear of public disclosure of private conversations might well have a chilling effect on private speech.” To be clear, the ISPs are not the only party in this case with First Amendment interests. Rather, ISPs’ customers also have a First Amendment interest that the court must weigh—their interest in keeping their expressive information private, including who they communicate with online and what websites they visit to read. Third, information security is strengthened by our ability to control the flow of our personal data. By regulating how ISPs use and disclose our data, the Maine law reduces the incentive for ISPs to collect and store vast troves of our data. Thus, in the event of a data breach at an ISP, less of our data will be at risk. Adversaries like identity thieves, stalkers, and foreign nations can use breached data for further attacks against us. For example, our Internet use patterns can expose when we are not at home, and our interests and associations that are exposed by browsing can facilitate phishing attacks.
Maine’s Law Is Narrowly Tailored
The requirement of Maine law (i.e., that ISPs obtain opt-in consent before using or disclosing customers’ data) is narrowly tailored to Maine’s substantial interests (i.e., protection of ISP customers’ data privacy, free speech, and information security). As the Supreme Court explained in DOJ v. Reporters Committee (1989), “the individual’s control of information concerning [their] person” lies at the center of our privacy rights. The opt-in consent requirement restores to consumers control over the personal information that they expose to ISPs when they visit the Internet.
The ISPs argue that the Maine law’s requirement of opt-in consent is not narrowly tailored, because there is an alternative regulatory approach that, according to the ISPs, would be less burdensome on their processing of customer data: empowering customers to opt out of this processing. But defaults matter. Studies show that tech users generally do not change default settings. Many customers who strongly prefer that ISPs not use and disclose their personal data are not aware (1) that ISPs are doing so, (2) that they can opt out, and (3) how to navigate the settings to flip the default. Thus, the requirement of opt-in consent is far more protective of data privacy than a consumer option to opt out. Numerous federal appellate and trial courts have upheld consumer data privacy laws like the one at issue here because they are narrowly tailored to substantial government interests. The one outlier appellate court decision is older, subject to a persuasive dissent, and not followed by subsequent decisions. Finally, the Maine law is tailored to an economic sector, broadband providers, that presents particular threats to data privacy, as discussed above. Many other consumer data privacy laws are sector-specific, including those regulating cable, video rentals, health services, financial services, credit reporting, telecommunications carriers, websites, and electronic communication services and remote computing services. The sector-specific approach taken here by Maine does not heighten the First Amendment scrutiny.
Next Steps
Moving forward, EFF will continue to advocate for the enactment of broadband privacy laws at the federal and state levels, and to defend these laws in court against ill-founded First Amendment challenges. This work is part of a larger constellation of EFF advocacy for Internet users. For example, we support net neutrality laws that ban ISPs from discriminating for or against different websites, apps, or services. Likewise, we support fiber-for-all laws that would, among other things, promote competition and give consumers more choice among ISPs. And we support consumer data privacy laws that apply to all manner of entities that harvest and monetize our personal information, including third-party trackers. You can read our amicus brief in ACA Connects v. Frey [PDF], the ISPs’ First Amendment challenge to the Maine broadband privacy law, here.
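To make the difference between the two regimes concrete, here is a minimal sketch, not drawn from the brief or from the Maine statute, of how an ISP's data-use check might be gated under each model. The class and field names are hypothetical illustrations; the point is only that under an opt-in rule the protective state is the default, while under an opt-out rule the burden of flipping the setting falls on the customer.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CustomerRecord:
    """Hypothetical ISP customer record; the field names are illustrative only."""
    name: str
    browsing_history: List[str]
    # Opt-in model (the Maine approach): consent defaults to False, so the ISP
    # may not use or disclose personal information until the customer says yes.
    opted_in_to_data_use: bool = False
    # Opt-out model (the ISPs' preferred alternative): use is allowed by
    # default, and the customer must find the setting and turn it off.
    opted_out_of_data_use: bool = False

def may_use_data_opt_in(customer: CustomerRecord) -> bool:
    # Silence means "no": only affirmative consent permits use or disclosure.
    return customer.opted_in_to_data_use

def may_use_data_opt_out(customer: CustomerRecord) -> bool:
    # Silence means "yes": use is permitted unless the customer objected.
    return not customer.opted_out_of_data_use

if __name__ == "__main__":
    # A customer who never touches the settings -- the common case.
    customer = CustomerRecord(name="example", browsing_history=["news.example"])
    print(may_use_data_opt_in(customer))   # False: data stays private by default
    print(may_use_data_opt_out(customer))  # True: data gets used by default
```

In both models the check is a single boolean, which is exactly why the default matters so much: for the many customers who never change a setting, the default is the rule.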

How Big Tech Monopolies Distort Our Public Discourse (Thu, 28 May 2020)
Long before the pandemic crisis, there was widespread concern over the impact that tech was having on the quality of our discourse, from disinformation campaigns to influence campaigns to polarization. It's true that the way we talk to each other and about the world has changed, both in form (thanks to the migration of discourse to online platforms) and in kind, whether that's the rise of nonverbal elements in our written discourse (emojis, memes, ASCII art and emoticons) or the kinds of online harassment and brigading campaigns that have grown with the Internet. A common explanation for the change in our discourse is that the biggest tech platforms use surveillance, data-collection, and machine learning to manipulate us, either to increase "engagement" (and thus pageviews and thus advertising revenues) or to persuade us of things that aren't true, for example, to convince us to buy something we don't want or support a politician we would otherwise oppose. There's a simple story about that relationship: by gathering a lot of data about us, and by applying self-modifying machine-learning algorithms to that data, Big Tech can target us with messages that slip past our critical faculties, changing our minds not with reason, but with a kind of technological mesmerism. This story originates with Big Tech itself. Marketing claims for programmatic advertising and targeted marketing (including political marketing) promise prospective clients that they can buy audiences for their ideas through Big Tech, which will mix its vast data-repositories with machine learning and overwhelm our cognitive defenses to convert us into customers for products or ideas. We should always be skeptical of marketing claims. These aren't peer-reviewed journal articles; they're commercial puffery. The fact that the claims convince marketers to give billions of dollars to Big Tech is no guarantee that the claims are true. After all, powerful decision-makers in business have a long history of believing things that turned out to be false. It's clear that our discourse is changing. Ideas that were on the fringe for years have gained new centrality. Some of these ideas are ones that we like (gender inclusivity, racial justice, anti-monopolistic sentiment) and some are ideas we dislike (xenophobia, conspiracy theories, and denial of the science of climate change and vaccines). Our world is also dominated by technology, so any change to our world probably involves technology. Untangling the causal relationships between technology and discourse is a thorny problem, but it's an important one. It's possible that Big Tech has invented a high-tech form of mesmerism, but whether you believe in that or not, there are many less controversial, more obvious ways in which Big Tech is influencing (and distorting) our discourse.
Locating Precise Audiences
Obviously, Big Tech is incredibly good at targeting precise audiences; this is the value proposition of the whole ad-tech industry. Do you need to reach overseas students from the Pacific Rim doing graduate studies in physics or chemistry in the Midwest? No problem. Advertisers value this feature, but so does anyone hoping to influence our discourse. Locating people goes beyond "buying an audience" for an ad. Activists who want to reach people who care about their issues can use this feature to mobilize them in support of their causes. Queer people who don't know anyone who is out can find online communities to help them understand and develop their own identities.
People living with chronic diseases can talk about their illnesses with others who share their problems. This precision is good for anyone who's got a view that's outside of the mainstream, including people who have views we don't agree with or causes we oppose. Big Tech can help you find people to cooperate with you on racist or sexist harassment campaigns, or to foment hateful political movements. A discourse requires participants: if you can't find anyone interested in discussing an esoteric subject with you, you can't discuss it. Big Tech has radically altered our discourse by making it easy for people who want to talk about obscure subjects to find discussants, enabling conversations that literally never could have happened otherwise. Sometimes that's good and sometimes it's terrible, but it's absolutely different from any other time.
Secrecy
Some conversations are risky. Talking about your queer sexuality in an intolerant culture can get you ostracized or subject you to harassment and violence. Talking about your affinity for cannabis in a place where it isn't legal to consume can get you fired or even imprisoned. The fact that many online conversations take place in private spaces means that people can say things they would otherwise keep to themselves for fear of retribution. Not all of these things are good. Being caught producing deceptive political ads can get you in trouble with an election regulator and also send supporters to your opponents. Advertising that your business discriminates on the basis of race or gender or sexuality can get you boycotted or sued, but if you can find loopholes that allow you to target certain groups that agree with your discriminatory agenda, you can win their business. Secrecy allows people to say both illegal and socially unacceptable things to people who agree with them, greatly reducing the consequences for such speech. This is why private speech is essential for social progress, and it's why private speech is beneficial to people fomenting hatred and violence. We believe in private speech and have fought for it for 30 years because we believe in its benefits—but we don't deny its costs. Combined with targeting, secrecy allows for a very persuasive form of discourse, not just because you can commit immoral acts with impunity, but also because disfavored minorities can whisper ideas that are too dangerous to speak aloud.
Lying and/or Being Wrong
The concentration of the tech industry has produced a monoculture of answers. For many people, Google is an oracle, and its answers—the top search results—are definitive. There's a good reason for that: Google is almost always right. Type "How long is the Brooklyn Bridge" into the search box and you'll get an answer that accords with both Wikipedia and its underlying source, the 1967 report of the New York City Landmarks Preservation Commission. Sometimes, though, Google is tricked into lying by people who want to push falsehoods onto the rest of us. By systematically exploring Google's search-ranking system (a system bathed in secrecy and subjected to constant analysis by the Search Engine Optimization industry), bad actors can and do change the top results on Google, tricking the system into returning misinformation (and sometimes, it's just a stupid mistake). This can be a very effective means of shifting our discourse.
False answers from a reliable source are naturally taken at face value, especially when the false answer is plausible (adding or removing a few yards from the Brooklyn Bridge's length), or where the questioner doesn't really have any idea what the answer should be (adding tens of thousands of miles per second to the speed of light). Even when Google isn't deliberately tricked into giving wrong answers, it can still give wrong answers. For example, when a quote is widely misattributed and later corrected, Google can take months or even years to stop serving up the misattribution in its top results. Indeed, sometimes Google never gets it right in cases like this, because the people who get the wrong answer from Google repeat it on their own posts, increasing the number of sources where Google finds the wrong answer. This isn't limited to just Google, either. The narrow search verticals that Google doesn't control—dating sites, professional networking sites, some online marketplaces—generally dominate their fields, and are likewise relied upon by searchers who treat them as infallible, even if they would admit, when asked, that it isn't always wise to do so. The upshot is that what we talk about, and how we talk about it, is heavily dependent on what Google tells us when we ask it questions. But this doesn't rely on Google changing our existing beliefs: if you know exactly what the speed of light is, or how long the Brooklyn Bridge is, a bad Google search result won't change your mind. Rather, this is about Google filling a void in our knowledge. There's a secondary, related problem of "distinctive, disjunct idioms." Searching for "climate hoax" yields different results from searching for "climate crisis," and different results still from "climate change." Though all three refer to the same underlying phenomenon, they reflect very different beliefs about it. The term you use to initiate your search will lead you into a different collection of resources. This is a longstanding problem in discourse, but it is exacerbated by the digital world.
"Sort by Controversial"
Ad-supported websites make their money from pageviews. The more pages they serve to you, the more ads they can show you, and the more likely it is that they will show you an ad that you will click on. Ads aren't very effective, even when they're highly targeted, and the more ads you see, the more inured you become to their pitches. It therefore takes a lot of pageviews to generate a sustaining volume of clicks, and the number of pageviews needed to maintain steady revenue tends to go up over time. Increasing the number of pageviews is hard: people have fixed time-budgets. Platforms can increase your "engagement" by giving you suggestions for things that will please you, but this is hard (think of Netflix's recommendation engine). But platforms can also increase engagement by making you angry, anxious, or outraged, and these emotions are much easier to engender with automated processes. Injecting enraging comments, shocking images, or outlandish claims into your online sessions may turn you off in the long term, but in the short term, these are a reliable source of excess clicks. This has an obvious impact on our discourse, magnifying the natural human tendency to want to weigh in on controversies about subjects that matter to you. It promotes angry, unproductive discussions.
It's not mind control—people can choose to ignore these "recommendations" or step away from controversy—but platforms that deploy this tactic often take on a discordant, angry character.
Deliberate Censorship
Content moderation is very hard. Anyone who's ever attempted to create rules for what can and can't be posted quickly discovers that these rules can never be complete—for example, if you class certain conduct as "harassment," then you may discover that conduct that is just a little less severe than you've specified is also experienced as harassment by people on its receiving end. As hard as this is, it gets much harder at scale, particularly when services cross cultural and linguistic lines: as hard as it is to decide whether someone crosses the line when that person is from the same culture as you and is speaking your native language, it's much harder to interpret contributions from people of differing backgrounds, and language barriers add another layer of incredible complexity. The rise of monolithic platforms with hundreds of millions (or even billions) of users means that a substantial portion of our public discourse is conducted under the shadow of moderation policies that are not—and cannot be—complete or well-administered. Even if these policies have extremely low error rates—even if only one in a thousand deleted comments or posts is the victim of overzealous enforcement—systems with billions of users generate hundreds of billions of posts per day, and that adds up to many millions of acts of censorship every day. Of course, not all moderation policies are good, and sometimes, moderation policies are worsened by bad legal regimes. For example, SESTA/FOSTA, a bill notionally aimed at ending human sexual trafficking, was overbroad and vague to begin with, and the moderation policies it has spawned have all but ended certain kinds of discussions of human sexuality in public forums, including some that achieved SESTA/FOSTA's nominal aim of improving safety for sex workers (for example, forums where sex workers kept lists of dangerous potential clients). These subjects were always subject to arbitrary moderation standards, but SESTA/FOSTA made the already difficult job of talking about sexuality virtually impossible. Likewise, the Communications Decency Act's requirement for blacklists of adult materials on federally subsidized Internet connections (such as those in public schools and libraries) has foreclosed on access to a wealth of legitimate materials, including websites that offer information on sexual health and wellbeing, and on dealing with sexual harassment and assault.
Accidental Censorship
In addition to badly considered moderation policies, platforms are also prone to badly executed moderation—enforcement errors, in other words. Famously, Tumblr installed an automatic filter intended to block all "adult content," and this filter blocked innumerable innocuous images, from images of suggestive root vegetables to Tumblr's own examples of images that contained nudity but did not constitute adult content and would thus be ignored by its filters. Errors are made by both human and automated content moderators.
Sometimes, errors are random and weird, but some topics are more likely to give rise to accidental censorship than others: human sexuality, discussions by survivors of abuse and violence (especially sexual violence), and even people whose names or homes sound or look like words that have been banned by filters (Vietnamese people named Phuc were plagued by AOL's chat-filters, as were Britons who lived in Scunthorpe). The systematic nature of this accidental censorship means that whole fields of discourse are hard or even impossible to undertake on digital platforms. These topics are the victims of a kind of machine superstition, a computer gone haywire that has banned them without the approval or intent of its human programmers, whose oversights, frailties and shortsightedness caused them to program in a bad rule, after which they simply disappeared from the scene, leaving the machine behind to repeat their error at scale.
Third-Party Censorship
Since the earliest days of digital networks, world governments have struggled with when and whether online services should be liable for what their users do. Depending on which country an online provider serves, they may be expected to block, or pre-emptively remove, copyright infringement, nudity, sexually explicit material, material that insults the royal family, libel, hate speech, harassment, incitements to terrorism or sectarian violence, plans to commit crimes, blasphemy, heresy, and a host of other difficult-to-define forms of communication. These policies are hard for moderation teams to enforce consistently and correctly, but that job is made much, much harder by deliberate attempts by third parties to harass or silence others by making false claims about them. In the simplest case, would-be censors merely submit false reports to platforms in hopes of slipping past a lazy or tired or confused moderator in order to get someone barred or speech removed. However, as platforms institute ever-finer-grained rules about what is, and is not, grounds for removal or deletion, trolls gain a new weapon: an encyclopedic knowledge of these rules. People who want to use platforms for good-faith discussions are at a disadvantage relative to "rules lawyers" who want to disrupt this discourse. The former have interests and jobs about which they want to communicate. The latter's interest and job is disrupting the discourse. The more complex the rules become, the easier it is for bad-faith actors to find in them a reason to report their opponents, and the harder it is for good-faith actors to avoid crossing one of the ruleset's myriad lines.
Conclusion
The idea that Big Tech can mold discourse through bypassing our critical faculties by spying on and analyzing us is both self-serving (inasmuch as it helps Big Tech sell ads and influence services) and implausible, and should be viewed with extreme skepticism. But you don't have to accept extraordinary claims to find ways in which Big Tech is distorting and degrading our public discourse. The scale of Big Tech makes it opaque and error-prone, even as it makes the job of maintaining a civil and productive space for discussion and debate impossible. Big Tech's monopolies—with their attendant lock-in mechanisms that hold users' data and social relations hostage—remove any accountability that might come from the fear that unhappy users might switch to competitors.
The emphasis on curbing Big Tech's manipulation tactics through regulatory measures has the paradoxical effect of making it more expensive and difficult to enter the market with a Big Tech competitor. A regulation designed to curb Big Tech will come with costs that little tech can't possibly afford, and so becomes a license to dominate the digital world disguised as a regulatory punishment for lax standards. The scale—and dominance—of tech firms does unequivocal, obvious harm to our public discourse. The good news is that we have tools to deal with this: breakups, merger scrutiny, limits on vertical integration. Perhaps after Big Tech has been cut down to size, we'll still find that there's some kind of machine-learning mesmerism that we'll have to address, but if that's the case, our job will be infinitely easier when Big Tech has been stripped of the monopoly rents it uses to defend itself from attempts to alter its conduct through policy and law.

Two Federal COVID-19 Privacy Bills: A Good Start and a Misstep (Thu, 28 May 2020)
COVID-19, and containment efforts that rely on personal data, are shining a spotlight on a longstanding problem: our nation’s lack of sufficient laws to protect data privacy. Two bills before Congress attempt to solve this problem as to COVID-19 data. One is a good start that needs improvements. The other is a misstep that EFF strongly opposes. The Public Health Emergency Privacy Act (PHEPA) was introduced by U.S. Senators Richard Blumenthal and Mark Warner, and U.S. Representatives Anna Eshoo, Jan Schakowsky, and Suzan DelBene. It has some major elements that privacy advocates have called for. It requires opt-in consent and data minimization, and limits data disclosures to government. It has a strong private right of action and does not preempt state laws. And it bars denial of voting rights to people who decline to opt in to tracking programs. But it does not protect such people from discrimination in access to employment, public accommodations, or government benefits. Also, it has overly broad exemptions for manual contact tracing, public health research, public health authorities, and entities regulated by the federal Health Insurance Portability and Accountability Act (HIPAA). The COVID-19 Consumer Data Protection Act (CCDPA) was introduced by U.S. Senators Roger Wicker, John Thune, Jerry Moran, and Marsha Blackburn. It preempts state laws, has no private right of action, and exempts a broad set of surveillance by employers. It is a non-starter.
Responses to COVID-19 Burden Our Data Privacy
The ways companies and governments are using our data to respond to the COVID-19 crisis illustrate our lack of data privacy laws. Governments are partnering with businesses to create websites where we provide our health and other information to obtain screening for COVID-19 testing and treatment. States are conducting manual contact tracing, often by contracting with businesses to build new data management systems. Public health authorities are encouraging us to download proximity tracking apps. Some of these apps also track our location, which EFF opposes. There are many ways to misuse our COVID-related data. Some restaurants are collecting contact information from patrons to notify them later of any infection risk; disturbingly but not surprisingly, a restaurant employee used one patron’s information to send them multiple harassing messages. Companies might divert our COVID data to advertising. Public health agencies might share our COVID data with police or other agencies. All this data might be stolen by identity thieves, stalkers, and foreign nations.
We Need a Comprehensive Privacy Law …
Existing U.S. laws do not sufficiently protect us from misuse of COVID-related data. For example, HIPAA protections of health data apply only to narrowly defined healthcare providers and their business associates. The strongest state data privacy laws only apply to certain kinds of data (like Illinois’ biometric privacy law), data processors (like Vermont’s data broker registration law), or data protections (like California’s rights to access, delete, and opt out of the sale of data). So, we need a strong, comprehensive federal consumer data privacy law. EFF has three top priorities for a federal privacy law: no federal preemption of state data privacy laws; strong enforcement by giving consumers a private right of action against companies that violate the privacy rules; and a ban on discrimination against consumers who exercise their privacy rights.
Such legislation also must require opt-in consent before data processing, and minimization of data processing to what is necessary for a business to give a consumer what they asked for. Thus, there is a lot to like about the Consumer Online Privacy Rights Act introduced last year by U.S. Senators Maria Cantwell, Brian Schatz, Amy Klobuchar, and Edward Markey. While that bill needs strengthening amendments, such a law would do a great deal to protect our COVID-related data.
… Or At Least a COVID-19 Privacy Law
If our nation currently lacks the political will to enact a comprehensive consumer data privacy law, then we at least need a COVID-specific law. For the reasons above, it would need opt-in consent, data minimization, a private right of action, no preemption, and protections to prevent discrimination against people who don’t consent. Non-discrimination has particular urgency here. There is not just a risk that a government or business entity will process a person’s data, or make them use a tracking app, without their consent. There also is a risk of denial of benefits and access to people who refuse to share their data or use an app. For example, if a person declines to download a tracking app, an employer might deny workplace access, a restaurant might deny table service, or a government agency might deny a benefit. But any use of such apps must be truly voluntary. It is also important to restrain the flow of personal data to the government. The outbreak has prompted demands for new institutions and new technologies to gather new kinds of data about us. History shows that governments generally don’t give back emergency powers.
PHEPA is a Good Start …
PHEPA broadly applies to data that is reasonably linkable to a person or device and that concerns COVID-19. It expressly includes health data (such as medical test results) and outbreak tracking data (such as location, proximity, or any data collected by a personal device). The bill extends to government and private entities that electronically process covered data, or that develop websites or mobile apps for COVID-19 purposes. The bill provides important COVID-19 privacy protections. A covered entity:
- Shall not process covered data absent the subject’s opt-in consent (with certain exceptions).
- Shall practice data minimization, by only processing data as “necessary, proportionate, and limited for a good faith public health purpose.”
- Shall not disclose covered data to the government, except to a public health authority, and only with a good faith public health purpose.
- Shall not use covered data for commercial ads.
- Shall let people correct inaccurate data about them.
- Shall publish a privacy policy, and (for larger entities) quarterly reports.
- Shall take reasonable steps to secure covered data.
The bill bars denial of the right to vote on the grounds of a person’s covered data, medical condition, or non-participation in a program that collects covered data. This would give some protection to people who refuse to download a tracking app. It provides a strong private right of action, in addition to enforcement by the Federal Trade Commission (FTC) and state attorneys general. It explicitly provides that state laws are not preempted. In short, there is a lot to like here, including opt-in consent, data minimization, a private right of action, no preemption, no discrimination in voting rights, and more. We appreciate the leadership of Sens. Blumenthal and Warner, and Reps. Schakowsky and DelBene.
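As a rough, hypothetical sketch of what obligations like these amount to in practice, the snippet below models a COVID-19 app that refuses to touch covered data without opt-in consent, without a public health purpose, or for a recipient that is not a public health authority. The names and categories are our own illustrations, not the bill's definitions, which would control in any real implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CovidRecord:
    """Hypothetical covered data about one person; field names are illustrative."""
    person_id: str
    test_result: Optional[str] = None
    proximity_contacts: List[str] = field(default_factory=list)
    opted_in: bool = False  # opt-in consent is off by default

# Illustrative stand-ins for the bill's "good faith public health purpose."
PUBLIC_HEALTH_PURPOSES = {"exposure_notification", "case_reporting"}

def may_process(record: CovidRecord, purpose: str) -> bool:
    """Process covered data only with opt-in consent and only for a
    public health purpose -- never for commercial advertising."""
    return record.opted_in and purpose in PUBLIC_HEALTH_PURPOSES

def may_disclose_to_government(record: CovidRecord, recipient: str) -> bool:
    # Disclosure is limited to public health authorities, not police or other
    # agencies; the recipient labels here are illustrative placeholders.
    return record.opted_in and recipient == "public_health_authority"

if __name__ == "__main__":
    record = CovidRecord(person_id="user-1", test_result="negative")
    print(may_process(record, "exposure_notification"))             # False: no consent yet
    record.opted_in = True
    print(may_process(record, "targeted_advertising"))              # False: not a health purpose
    print(may_disclose_to_government(record, "police_department"))  # False: not a health authority
```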
… And It Has Room For Improvement
We respectfully suggest the following strengthening amendments to PHEPA. First, it should ban discrimination against people who decline to use a COVID-19 tracking app, including by denying them employment, education, public accommodations, or government benefits. Such discrimination—and the resulting pressure to download a tracking app—is an urgent privacy threat. The bill makes a good start: it would ban denial of voting rights to someone who won’t participate in a COVID-19 tracking program. But more protections are needed. Second, the bill has broad exemptions that should be removed or sharply limited:
- It exempts manual contact tracing programs. But these will amass vast troves of personal data. And this data will be held by the private corporations that contract with states to undertake contact tracing.
- It exempts public health research about COVID-19. But people should be able to use COVID resources, such as tracking apps or screening websites, without having to become research subjects.
- It exempts public health authorities. But these government officials should have to follow the bill’s rules on, for example, subject consent, data minimization, confidentiality, and non-disclosure to other units of government.
- It exempts entities covered by HIPAA, including the business associates of healthcare providers. But such entities should be required to follow the bill’s important new privacy rules, unless those rules conflict with HIPAA.
Third, the bill says that if a person revokes consent to data processing, then a covered entity shall stop processing “as soon as practicable, but in no case later than 15 days,” and shall destroy or de-identify data already collected. But if someone revokes their consent, that should be respected immediately. An entity that wants to process covered data must be prepared to stop processing it as soon as someone revokes consent. Also, the covered data should be destroyed, without an option to retain it in de-identified form. There is inherent risk that de-identified data can be re-identified. Fourth, the bill provides that a covered entity shall destroy or de-identify covered data within 60 days of the end of the outbreak, as defined by federal and state government. But management of the COVID-19 outbreak could last for years, while much COVID-related data will be stale within weeks. For example, the COVID-19 incubation period is 14 days, so there is no need for lengthier retention of data collected by proximity tracking apps. Also, stale data must be destroyed and not merely de-identified, as just explained. We urge the authors to take these critical steps to strengthen their bill.
The CCDPA is a Misstep
The CCDPA is a non-starter for EFF. First, it would preempt state laws “related to” the processing of covered data (location, proximity, persistent identifiers, and health information) for a covered purpose (tracking COVID-19, measuring social distancing, and contact tracing). This would cut back existing legal rights of Californians to access, delete, or opt out of the sale of data collected for COVID purposes, and of Illinoisans to be free from unconsented biometric surveillance for COVID purposes. Where COVID-19 data is involved, the CCDPA would also cut back existing state laws that address medical privacy, information security, data breach notification, and unfair trade practices.
Even worse, the CCDPA would end the power of state legislatures, acting as “laboratories of democracy,” to innovate new ways to protect COVID-related privacy. And preemption under the CCDPA would be permanent—even after the outbreak ends. Second, the CCDPA lacks a private right of action, and allows enforcement only by the FTC and state attorneys general. But the task of enforcement is too big for these agencies alone, which have finite budgets and many competing obligations. Also, many agencies suffer regulatory capture, meaning regulated businesses have undue influence over agency enforcement decisions. Third, the CCDPA exempts COVID-related data that employers use to screen entry to workplaces. This is a green light for businesses to fire employees unless they submit to surveillance of their movements, associations, and health—so long as the businesses say they are trying to prevent a workplace outbreak.
Conclusion
Governments and businesses are collecting vast troves of COVID-related data, including our health, locations, associations with others, and much more. This further shows that a comprehensive data privacy law is long overdue. At a minimum, we need a COVID-19 privacy law. PHEPA is a good start. We hope Congress will build on it. Correction: An earlier version of this post inadvertently omitted Rep. Anna Eshoo from the list of sponsors for the Public Health Emergency Privacy Act. This version has been corrected.

A Plan to Pay Artists, Encourage Competition, and Promote Free Expression (Wed, 27 May 2020)
Update/Correction, May 27 2020, 2PM Pacific. An earlier version of this article contained the phrase "the the online music industry is currently generating more revenues than the music industry did at the height of the CD bubble"; this has been corrected to read "the online music industry is currently generating more revenues than the music industry at any time since the CD bubble." As Congress gets ready for yet another hearing on copyright and music, we’d like to suggest that rather than more “fact-finding,” where the facts are inevitably skewed toward the views of the finder, our legislators start focusing on a concrete solution that builds on and learns from decades of copyright policy: blanket licensing. It will need an update to make it work for the Internet age, but as complicated as that will be, it has the profound benefit of adhering to copyright’s real purpose: spurring creativity and innovation. And it's far better than the status quo, where audiences and musicians alike are collateral damage in an endless war between giant tech companies and giant entertainment companies. We all have lots of experience with blanket licensing, though we may not realize it. Nightclubs, restaurants, cafes, and radio stations all have their own soundtracks: the music that helps define the experience of any venue or business. Whether they favor jazz, rock, classical, or heavy metal, venues choose music that reflects what they want to convey to people about the character of the business. And they can make those choices because no music publisher can dictate what they play—Jazz Club B can play the same tracks as Jazz Club A. A publisher can't do a deal with a chain of restaurants or radio stations giving them the sole right to play their top hits. This has been vital to the progress of music. It prevents the dominant music venues from becoming gatekeepers by insisting on exclusive access in exchange for playing publishers' leading tracks. If that happened, competitors without exclusive deals would wither away, or would never launch. But when the Internet came along, and Congress gave record labels a right to collect performance royalties, we lost sight of that principle of universal access. The only statutory licenses for recordings that cover Internet services are narrow, and full of limitations. The result is a toxic dynamic in which a handful of companies dominate online music services. A few online giants—like Spotify—are standalone music companies, but most of the major music channels, like YouTube, iTunes, and Amazon Prime, are divisions of large, monopolistic conglomerates with very deep pockets. Apple, Google, and Amazon have leveraged their dominant positions in search and e-commerce to become even more dominant. If you only sell to high bidders, then eventually all the low bidders will disappear and the high bidders will have all the sellers over a barrel. The online giants desperately need competition to discipline them. That's the usual pattern: successful businesses breed competitors who try to offer something that's better (for customers, or suppliers, or workers, or all three). Getting audience-facing music service competitors into the mix will liberate musicians and music companies from operating at the sufferance and mercy of Big Tech. And we know how to do it: create a system of universal licenses for recorded music that makes playing music over the Internet more like playing music over the radio or in a club.
Let companies pay a per-user license fee that gives them access to the same catalog that Amazon, Apple, and Google claim, without having to cut deals with every label and musician. The Music Modernization Act, passed in 2018, was a step in the right direction. It created a new blanket license for musical compositions, covering downloads and interactive streaming. Let's build on that momentum and create a complementary license for sound recordings.
A Blanket License for the Internet
In broad strokes, here's how a robust Internet license for sound recordings would work. If you want to offer music to the public—if you want to start a streaming site, or let users exchange music, or share videos with music clips in them like TikTok users do—all you need to do is set up an account with a rights clearinghouse, called a "collecting society." You pay the collecting society a monthly license fee that goes up with the number of users you have. If you have one user and Facebook has 2.5 billion users, then your license fee is 1/2,500,000,000 of Facebook's fee (see the sketch at the end of this post). You also allow the collecting society to audit the use of music on your platform. They'll use statistically rigorous sampling methods to assemble an accurate picture of which music is in use on your platform, and how popular each track is. The collecting society will then pay rightsholders for your use of the music. That's it, more or less. It's not complicated, but it will be a challenge. There are a lot of details we have to get right. Let's get into some of them.
Collecting Societies
Collecting societies get a bad rap, and not without reason. Independent labels and musicians have long accused the societies of undercounting their music and handing money that is rightfully theirs to big music corporations and the musicians who've signed up with them. Collecting society executives have been mired in corruption and embezzlement scandals, and other misdeeds that have put the whole sector in bad odor. At the same time, public interest groups have locked horns with collecting societies for years over proposals to make it easier to censor the Internet, and the societies have never stopped trying to expand the scope of who needs a music license—from nightclubs to restaurants to cafes to market stalls to school plays to classrooms. But a better collecting society is possible. Indeed, the problems with societies over the years have demonstrated the pitfalls that a new collecting society must avoid. Some requirements for a new collecting society:
It must be transparent. From the methodology for sampling online music usage, to the raw data it analyzes, to the conclusions it reaches, to the payments it makes, the entire business should be open and subject to public scrutiny.
It must be fair. Statistical analysis is an incredibly powerful tool, but it's also difficult to do well. The statistical method used to sample and extrapolate online music usage must be visible to all.
It must be limited. From executive salaries to the scope of its activities, the collecting society must be limited to act as a utility player in the online music ecosystem, whose sole purpose is fairly apportioning money from online services to music creators.
Fairness
Under the current system, the recorded music industry is concentrated in the hands of three major labels, each of which has a long history of artist-unfriendly business practices that saw successful musicians who made millions for corporations go broke and die in poverty.
The power imbalance between the concentrated industry and the vast number of musicians who'd like to enter the industry favors one-sided, unfair contracts. That’s one reason copyright systems around the world include some form of reversion right through which creators can unilaterally cancel their contracts with their publishers, labels, or studios, and get the rights back. Reversion points to another way to make online music usage fairer for artists. Blanket licenses for online music could and should also establish a minimum fraction of blanket licenses that go directly to artists, irrespective of their contracts with their labels. The current statutory license for “non-interactive” Internet streaming gives 50% of royalties to artists. We think that’s fair. Artists have long railed against online music distributors like Spotify and Pandora, saying that they receive inadequate compensation for the use of their work. The streaming companies counter by opening their books and showing that they've paid billions in license fees. Can both sides be right? Indeed, they can. If almost all of the streaming money is hoarded by the labels who get to arm-twist musicians into one-sided contracts, it's entirely possible for Spotify and Pandora to spend billions to license music while the musicians get next to nothing. The online music industry is currently generating more revenues than the music industry at any time since the CD bubble, and yet, musicians are going hungry. The labels’ market concentration has made the deals on offer to musicians progressively worse, as the probability that musicians can take their music to a rival label dwindles every time the big music companies merge with one another. Statutorily guaranteeing that, at minimum, half of all license payments go directly to artists, irrespective of their label contracts, is a way to ensure that online music listeners and online music makers are on the same side and the more people love a musician's art, the more money the musician makes. Competition Artists and users are the biggest losers in the current ecosystem, thanks to the lack of competition.  If you want to listen to a favorite song, there's an (approximately) one in three chance that you're going to get it from one of the Big Three labels. When it comes to home Internet service, most people in the U.S. have only one or two equally expensive carriers. You'll search with Google, socialize with Facebook, and distribute your videos on YouTube. Blanket licenses pay artists while promoting competition. If you want to start a TikTok, Facebook, YouTube, Apple Music, or Amazon Prime competitor, you’ll be free to make the very best service you can, and you will have access to the exact same catalog that the established services offer. As you add users, your license payments go up as a function of your popularity. If you're an overnight sensation, great, your windfall needs to be divvied up with the creators whose music helped you succeed. If you're a slow burner and take years to ignite, then you pay very little to cover the usage of your small but loyal user base. If you want to start a specialty service to fill a specific niche, you don't have to hire a business-development team and an army of lawyers to do deals with the labels. For artists, this is almost a license to print money. Every time a new service pops up online with a great idea for music, it represents a way for you to get paid. 
If a service interests new fans in your music, or gets existing fans to congregate around it, you get paid right away—their success is based on their ability to excite your listeners, not their ability to convince your label's corporate lawyers to do a deal with them.
Free Expression
Best of all, blanket licenses enable the kind of creativity that we've all come to know and love in the digital era. Rather than putting musicians on the wrong side of the speech debate, insisting that others' creations be censored off the Internet, blanket licensing aligns the interests of musicians with the interests of audiences, and puts them on the side of free expression. Every artist should be on the side of free expression, always. This is how things worked in the pre-Internet world. The blanket licenses that clubs and radio stations rely on—and the mechanical licenses that let anyone record their own cover of an existing song—meant that artists had the right to get paid for the use of their music, but not the right to tell a DJ they didn't like that she couldn't spin their album, nor the right to force another musician to destroy their cover of a song they wrote.
Details: Who, What, How
This plan has some pretty gnarly details that need to be worked out through collaboration with all the important stakeholders, especially creators. But we want to make sure we signpost those so you know what they are and can get to thinking about them:
The license should cover both digital performance and distribution rights in sound recordings, so that all kinds of music services can participate;
The license should cover "synch" rights for making things like YouTube and TikTok videos, but it should not cover movie studios or advertisers that want to include musicians' work in their products—a blanket license should add to musicians' income streams, not destroy them;
The collecting society needs a rigorous statistical sampling and analysis system;
We need a way to divide up money among musicians who collaborate on a song;
We need a way to divide up money among musicians who mash up, sample, or remix someone else's song under this license;
We need a way to verify the claims of musicians who represent themselves as rightsholders over a given recording or composition.
These are hard problems and they'll take real work. But solving these problems is much easier than making things fair for creators and audiences while continuing on our current, monopolistic path, with Big Tech and Big Content fighting one another for the right to profit from the rest of us.
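To make the fee and payout arithmetic described above concrete, here is a minimal sketch in Python. The per-user rate, the sampled play counts, and the track names are hypothetical illustrations rather than figures from this proposal; the 50% direct-to-artist share is the statutory floor suggested above.

```python
# Illustrative sketch of per-user blanket licensing and payout apportionment.
# All concrete numbers (the per-user rate, play counts, track names) are
# hypothetical; the 50% direct-to-artist share is the floor suggested in the post.

PER_USER_MONTHLY_FEE = 0.50   # hypothetical flat rate per user, in dollars
DIRECT_ARTIST_SHARE = 0.50    # minimum share paid directly to artists

def monthly_license_fee(num_users):
    """A service's fee scales linearly with its user count."""
    return num_users * PER_USER_MONTHLY_FEE

def apportion_payouts(total_fees, sampled_plays):
    """Split collected fees across tracks in proportion to sampled plays,
    then split each track's share between the artist and the rightsholder."""
    total_plays = sum(sampled_plays.values())
    payouts = {}
    for track, plays in sampled_plays.items():
        track_share = total_fees * plays / total_plays
        payouts[track] = {
            "artist_direct": round(track_share * DIRECT_ARTIST_SHARE, 2),
            "rightsholder": round(track_share * (1 - DIRECT_ARTIST_SHARE), 2),
        }
    return payouts

# A one-user startup and a 2.5-billion-user giant pay the same per-user rate.
print(monthly_license_fee(1))              # 0.5
print(monthly_license_fee(2_500_000_000))  # 1250000000.0

# The collecting society's sample of plays drives who gets paid.
sample = {"Track A": 700, "Track B": 200, "Track C": 100}
print(apportion_payouts(monthly_license_fee(10_000), sample))
```

The point of the design is visible in the output: a tiny service and a giant pay the same rate per user, and payouts track measured popularity rather than negotiated exclusives.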

The House Is Voting on Section 215, Again. The Bill Still Needs More Reform (Wed, 27 May 2020)
Later this week, the House of Representatives is once again voting on whether or not to extend the authorities in Section 215 of the PATRIOT Act (a surveillance law with a rich history of government overreach and abuse), along with two other PATRIOT Act provisions and, possibly, an amendment. Congress considered several bills to reauthorize and reform Section 215 earlier this year, but the law expired on March 15 without renewal. In the days before that deadline, the House of Representatives passed the USA FREEDOM Reauthorization Act, without committee markup or floor amendments, which would have extended Section 215 for three more years, along with some modest reforms. However, the Senate failed to reach an agreement on the bill, allowing the authorities to expire. As we have written before, if Congress can't agree on real reforms to these problematic laws, they should remain expired. A savings clause in the expired law gives intelligence agencies some limited, ongoing ability to use the authority, and the government has plenty of other surveillance tools at its disposal. Allowing the law to expire rather than rush into extending the authorities was a positive step. But rather than hold hearings to determine what additional reforms are needed, earlier this month, the Senate considered amendments to the bill the House passed in March. The Lee-Leahy amendment, which passed overwhelmingly, would strengthen the provisions regarding the Foreign Intelligence Surveillance Court's (FISC) appointment of amici, outside experts who independently analyze surveillance requests that are particularly sensitive. Another amendment, sponsored by Senators Wyden and Daines, failed 59-37, one vote short of the required 60. This amendment would have clarified that Section 215 cannot be used to obtain an individual's Internet browsing or search history. In our view, it was already clear that Section 215 could never lawfully be used to obtain browsing and search history information. Numerous courts, including the FISC itself, have found that browsing and search history constitute the "content" of communications. And, as the Supreme Court in Riley v. California explained, "Internet search and browsing history, for example, can be found on an Internet-enabled phone and could reveal an individual's private interests or concerns — perhaps a search for certain symptoms of disease, coupled with frequent visits to WebMD." This data implicates an individual's reasonable expectation of privacy, and a warrant is therefore required. In light of this precedent, it is difficult to imagine Internet companies complying with a Section 215 order for browsing history, given that most companies—including Google, Facebook, Comcast, and other giants—require the government to produce a warrant before handing over the contents of communications. And even if the government did seek to use Section 215 in this way, the FISC would be obligated to appoint an amicus to argue against this novel interpretation of the law; any opinion the FISC reached would need to be declassified and released (and no opinion like that exists, to our knowledge). So, while it was already clear to us that Section 215 could never allow the government to collect such information, the Wyden-Daines amendment would have made this crystal clear. After the Wyden-Daines amendment failed, the Senate passed the underlying bill 80-16, and sent it back to the House once more.
Given that the Wyden-Daines amendment only failed by one vote, it's no surprise that lawmakers on the House side are pushing for a vote on a similar amendment. House leaders have made a deal to consider a similar measure, now called the Lofgren-Davidson Amendment, and are working with the sponsors on final text. Both the Lofgren-Davidson amendment and the underlying bill are expected to be on the House floor later this week. If the amendment and the underlying bill pass the House, the Senate would have to approve the new language before the bill gets sent to the President for his signature. Congress should feel no pressure to reauthorize any of the provisions. Even without Section 215, the government still has a wide range of surveillance tools at its disposal, and Congress should take the time to ensure that meaningful reforms—reforms that address the torrent of surveillance abuses that have come to light over the past year—are added to the bill. The Lee-Leahy and Wyden-Daines amendments are commonsense additions. But they do not cure the bill's underlying shortcomings or fix some of the most egregious problems with FISA. Congress should make additional changes to the legislation to ensure that our national security surveillance laws are not abused and adequately protect civil liberties.

Hearing Tuesday: EFF Urges California Lawmakers to Pass Fiber Broadband for All Bill To Ensure Full Internet Access For Everyone During the Pandemic and Beyond (Fri, 22 May 2020)
Over One Million Californians Lack Access to High-Speed Broadband
Sacramento, California—On Tuesday, May 26, the Electronic Frontier Foundation (EFF) will ask California senators to approve a measure that will transition the state's outdated communications network to fiber, bringing affordable high-speed broadband service, essential during the pandemic and beyond, to all residents. EFF Senior Legislative Counsel Ernesto Falcon will testify Tuesday afternoon at a California Senate Energy, Utilities and Communications Committee hearing, where lawmakers will consider SB 1130. Co-sponsored by EFF, the bill will raise minimum standards for telecommunications companies providing broadband service to communities, requiring that any broadband network funded by the state must be high-capacity fiber and open access. The COVID-19 outbreak has made it clear that high-speed access to broadband is indispensable, and the more than one million Californians without it are facing serious ongoing harm and risks. Because of the state's failure to act, far too many people are forced to rely on a 1990s-era DSL line in their community or, worse, have no Internet access at all. This disproportionately affects residents with limited income and those living outside major cities, who have struggled to access distance learning for their children and work from home, even as both have become necessities. Falcon will testify that SB 1130 will fix this dynamic by raising minimum standards for broadband service, ensuring that the state invests in high-capacity networks that are future-proofed, and embracing open-access principles that will provide Californians with more choice in service providers. The hearing will be livestreamed for the public on the California Senate website.
WHO: EFF Senior Legislative Counsel Ernesto Falcon
WHAT: California Senate Energy, Utilities and Communications Committee hearing
WHEN: Tuesday, May 26 (Hearing will begin after 1:30 pm session ends)
Livestream: https://www.senate.ca.gov/
For more on fiber for all:
https://www.eff.org/deeplinks/2019/03/us-desperately-needs-fiber-all-plan
https://www.eff.org/wp/case-fiber-home-today-why-fiber-superior-medium-21st-century-broadband

EFF to Appeals Court: Reverse Legal Gotchas on Ordinary Internet Activities (Fri, 22 May 2020)
In the Internet age, copyright decisions can have enormous consequences for all kinds of activities, because almost everything we do on the Internet involves making copies. And when courts make a mistake, they may create all sorts of unexpected legal risks. As we explained to the Eleventh Circuit yesterday, a recent ruling from the district court in MidlevelU v. ACI Information Group did just that not once, but twice. ACI runs Newstex, a news aggregation service that collects and curates blog posts, press releases, and articles on various topics. Like most online news aggregators, it uses Really Simple Syndication (RSS) feeds to gather titles, abstracts, and sometimes the full text of articles from other websites. RSS is ubiquitous on the Internet—it's the protocol that makes "blogrolls" and other news feeds work—and many people use it without even knowing it. You may have found this very post through an RSS reader like Feedly or a website that subscribes to EFF's RSS feed (a short sketch of what reading a feed involves appears at the end of this post). RSS is also an open protocol, like most of the Internet's most important protocols. It's been around a long time, and its meaning is well understood: it promotes "syndication," or re-publishing, of posts and articles. MidlevelU has an RSS feed. The site, which recently changed its name to ThriveAP, publishes articles of interest to nurse practitioners, physician assistants, and other medical personnel. MidlevelU sued ACI for copyright infringement for subscribing to MidlevelU's RSS feed and republishing excerpts of the articles it found there. Although MidlevelU accused ACI of copying 823 of its articles, it only included 50 of them in the suit, because only those 50 had been registered with the Copyright Office—a requirement before filing suit. The case went to trial in a Florida federal court. The judge instructed the jury that while they could only award damages for the 50 articles that were actually in the suit, they could "consider" all 823 articles when setting the amount of damages. The jury found that ACI had infringed on 27 articles, but awarded statutory damages of $7,500 for each one.
Legal Gotchas for Using RSS
There were two big problems with this decision. First, by putting the full text of its articles into its RSS feed, MidlevelU was inviting others to syndicate and excerpt them. That's what RSS is for. And MidlevelU had no text or warning on its site saying that others couldn't republish its RSS feeds. Publishing articles through a protocol that's meant for syndication, and then complaining when people use it as intended, is unfair—and the law doesn't allow it. It's like putting an essay on a public web page, findable by search engines, and then suing people for copyright infringement when they visit the page and their browser makes a cache copy of the essay. In legal terms, that's wrong because MidlevelU's choice to use RSS created an "implied license" to copy. We all rely on implied licenses for many of the things we do on the Internet. As we wrote in our brief:
"The Internet works because users around the world employ protocols for sending, reading, and organizing content. These protocols prescribe rules, but over time, they also describe the ingrained expectations of Internet users, including businesses. People must be able to rely on representations conveyed through Internet protocols such as RSS that have developed over decades of usage to become part and parcel of the network's infrastructure."
By refusing to recognize that presenting content to the world using a common protocol means giving the world permission to engage with that content in the usual way, the court created a legal "gotcha" that threatens all kinds of ordinary Internet uses.
A Back Door to Massive Damages
The second problem was the court's instructions to the jury about damages. "Statutory damages" in copyright are an oddity in the law. They let juries (or sometimes judges) award as much as $30,000 in damages for each infringed work, without the plaintiff ever having to prove they were harmed by the copying. And if the infringement is found to be "willful," the maximum damages per work go up to $150,000. Awards vary widely from one case to the next, leaving people no way to predict what their legal exposure might be. These damages can easily reach amounts that would bankrupt all but the largest businesses—all without any proof of harm or illicit profit. And statutory damages can be imposed even against people who work hard to avoid copyright infringement, such as by relying on fair use—or on the use of protocols like RSS that are understood to permit particular kinds of copying. Effectively, if statutory damages are a possibility, then any accusation of infringement becomes an intolerable game of financial Russian roulette. Statutory damages discourage people from relying on fair use, and chill lawful, valuable speech and activities. Congress didn't put a lot of limits on statutory damages, but it did make one firm rule: statutory damages aren't available for any work that wasn't registered with the Copyright Office before the infringement began (except that recently published works get a three-month grace period). It's an important rule, because it encourages people to register their works with the Copyright Office, creating a record of ownership and growing the Library of Congress's collection. The registration requirement also narrows the universe of works that are eligible for statutory damages. That's important because copyright is automatic and ubiquitous. It applies to nearly every bit of creative work from the moment it's set down in a tangible form. Without the registration requirement, nearly every photo, every scribble on a napkin, every tweet would carry the possibility of massive, unpredictable copyright damages. The trial court in the MidlevelU case did an end run around the registration requirement by telling the jury it could "consider" over 700 posts that weren't part of the suit when awarding statutory damages. That instruction probably caused the jury to award a higher amount than they would have if they considered only the 27 posts they actually found infringed. In effect, MidlevelU got about $250 in damages for each of 823 posts, even though only 50 were ever registered with the Copyright Office. MidlevelU didn't have to prove that the copying harmed them—and that's significant, because it probably drove more traffic to their website. And ACI never got the chance to defend itself with respect to the 700+ unregistered works, whether by making a case for fair use or raising other defenses. In bypassing the registration requirement, the trial court also encouraged copyright trolling. Statutory damages are the fuel that powers this form of abusive lawsuit. Attorneys who bring threats of litigation against home Internet users they accuse of copying movies illegally—mostly porn—depend on being able to threaten statutory damages.
That’s because they can’t really prove harm, and the risk of up to $150,000 in damages drives many people to settle for $2,000 to $5,000, even if they weren’t the ones who copied the movie. The jury instruction from the MidlevelU case would give trolls a new tool: accuse people of infringing multiple movies, only some of which are registered. Careless approaches to copyright liability and damages threaten all Internet users with uncertainty and possible exploitation. In this case, we’re hoping the court of appeals looks at the bigger picture, and corrects these errors.
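Because the implied-license argument turns on what RSS is understood to mean, it may help to see how little is involved in consuming a feed. This is a minimal sketch using Python's standard library; the feed URL is a placeholder, and a real aggregator like Newstex layers curation on top of this basic step.

```python
# Minimal sketch of reading an RSS 2.0 feed with the standard library.
# The feed URL is a placeholder; any site that publishes RSS works the same way.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.xml"  # placeholder

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 publishes each post as an <item> with a title, link, and description
# (the description often carries an excerpt or the full text).
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="")
    link = item.findtext("link", default="")
    summary = item.findtext("description", default="")
    print(title, link, summary[:80])
```

Everything an aggregator republishes from a feed arrives through exactly this kind of request, made to a URL the publisher chose to expose.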
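The damages arithmetic described above is also easy to verify. The figures below come from the case as described in this post; the per-post average is simply what the jury's award works out to once it is spread across every post the jury was invited to "consider."

```python
# The award the jury actually made, per registered post it found infringed.
per_work_award = 7_500
infringed_registered_posts = 27
total_award = per_work_award * infringed_registered_posts
print(total_award)  # 202500

# Spread across every post the jury was told it could "consider" —
# including the 700+ that were never registered — it is roughly $250 each.
all_posts_considered = 823
print(round(total_award / all_posts_considered))  # 246
```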

EFF to UN Expert on Racial Discrimination: Mass Border Surveillance Hurts Vulnerable Communities (Thu, 21 May 2020)
EFF submitted a letter to the United Nations' Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to testify to the negative impacts of mass surveillance on vulnerable communities at the U.S. border. The Special Rapporteur called for submissions on "Race, Borders, and Digital Technologies" that examine the harmful effects of electronic surveillance on vulnerable communities and free movement at the border. These submissions will inform the Special Rapporteur's 2020 thematic report to the U.N. General Assembly about how digital technologies used for border enforcement and administration reproduce, reinforce, and compound racial discrimination. Ms. E. Tendayi Achiume was appointed the 5th Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance in 2017. In the United Nations, Special Rapporteurs are independent experts appointed by the U.N. Human Rights Council who serve in a personal capacity and report on human rights from a thematic or country-specific perspective. Special Rapporteurs also report back annually to the U.N. General Assembly (which is made up of 193 Member States). With the support of the U.N. Office of the High Commissioner on Human Rights, Special Rapporteurs undertake country visits, intervene directly with States on alleged human rights violations, and conduct thematic studies like this report. In our submission, we explained that EFF has spent the last several years expanding our expertise in mapping, litigation, research, and advocacy against the use of digital surveillance technologies at the U.S. border. The Atlas of Surveillance: Southwestern Border Communities project, published in partnership with the Reynolds School of Journalism at the University of Nevada, Reno, found that dozens of law enforcement agencies along the U.S.-Mexico border use biometric surveillance, automated license plate readers, aerial surveillance, and other technologies that not only track migration across the border, but also constantly surveil the diverse communities that live in the border region. Litigation is one tool EFF has used to fight back against invasive surveillance at the border and the government secrecy that hides it. In our case, Alasaad v. Wolf, we worked with the national ACLU and ACLU of Massachusetts to challenge the government's warrantless, suspicionless searches of electronic devices at the U.S. border. We argued that warrantless border searches of electronic devices constitute grave privacy invasions because of the vast amount of personal information that can be revealed by a search of an individual's electronic devices such as smartphones and laptops. In November 2019, a Massachusetts federal district court held that the government must have reasonable suspicion that an electronic device contains digital contraband in order to conduct a border search. While not the warrant standard that we had argued for, the court's ruling is the most rights-protective decision in the country on searches of electronic devices at the border. Alasaad is currently on appeal in the U.S. Court of Appeals for the First Circuit. In addition, EFF has two ongoing Freedom of Information Act (FOIA) lawsuits regarding the border, one on GPS tracking at the border, and the other on Rapid DNA testing of migrant families at the border.
Our letter also highlights EFF’s successful advocacy with the California Attorney General’s Office to classify immigration enforcement as a form of misuse of the California Law Enforcement Telecommunications System (CLETS). As a result of this change in policy, U.S. Immigration and Customs Enforcement (ICE) was altogether barred from using CLETS. We hope that our submission adds to the United Nations and the larger international community’s understanding of the vast surveillance systems being set up and deployed at the U.S. border, and the disproportionate impact of these technologies on vulnerable communities.  Related Cases:  Alasaad v. McAleenan

International Proposals for Warrantless Location Surveillance To Fight COVID-19 (Thu, 21 May 2020)
Time and again, governments have used crises to expand their power, and often their intrusion into citizens' lives. The COVID-19 pandemic has seen this pattern play out on a huge scale. From deploying drones or ankle monitors to enforce quarantine orders to proposals to use face recognition or thermal imaging cameras for monitoring public spaces, governments around the world have been adopting intrusive measures in their quest to contain the pandemic. EFF has fought for years against the often secretive governmental use of cell phone location data. Governments have repeatedly sought to obtain this data without a court order, dodged oversight of how they used and accessed it, misleadingly downplayed its sensitivity, and forced mobile operators to retain it. In the past, these uses were most often justified with arguments of law enforcement or national security necessity. Now, some of the same location surveillance powers are being demanded—or sometimes simply seized—without making a significant contribution to containing COVID-19. Despite the lack of evidence that location data is effective in stopping the spread of the virus, a number of countries' governments have used the crisis to introduce completely new surveillance powers or extend old ones to new COVID-related purposes. For example, data retention laws compel telecom companies to continuously collect and store metadata of a whole population for a certain period of time. In Europe, the Court of Justice of the European Union declared such mandates illegal under EU law. Like other emergency measures, it may be an uphill battle to roll back new location surveillance once the epidemic subsides. And because governments have not shown its effectiveness, there's no justification for this intrusion on people's fundamental freedoms in the first place.
Individualized Location Tracking
Mobile carriers happen to know their subscribers' phones' locations (usually the same as the locations of the subscribers themselves) from moment to moment because of the way cellular networks work. That knowledge has turned into one of the most extensive data sources for governments—and not infrequently advertisers, stalkers, or spies—interested in tracking people's movements. But while phone location data is sufficient to show whether someone went to church or the movies, it simply is not accurate enough to show whether two people were close enough together to transmit the virus (commonly characterized as a distance of two meters, or about six feet; a rough illustration of this accuracy gap appears at the end of this post). While location surveillance is problematic at any time, the coronavirus crisis has led to a rapid uptick in its use; many measures to facilitate it have been passed by fast-tracked legislative procedures during national states of emergency. Some governments have even bypassed legislators entirely and relied on executive power to roll out expanded location surveillance—making it even less transparent and democratically legitimate than usual. Governments may use the urgency of the crisis to erode limits on the ways people's location histories can be used, demand this data be turned over to authorities in bulk, or require companies to stockpile records of where their customers have been.
COVID-inspired cell phone location surveillance around the globe
Attempts at rapid expansions of government location surveillance authority have come to light in at least seven countries.
In Israel, in a significant win for privacy, the country's High Court of Justice recently revoked the authorization of the police to access location data for contact tracing without a court order. On March 16th, the government had approved emergency regulations, 48 hours after Prime Minister Benjamin Netanyahu announced his government's intention to approve health tracking methods. The regulations enabled both the police and Israel's domestic security agency (usually known as Shabak or Shin Bet, after its Hebrew acronym) to track the whereabouts of persons who might be infected or are suspected to be infected with COVID-19 without a warrant. The emergency regulations have now been suspended, and the Court has ordered that the government address the use of mobile phone tracking through legislation. Despite the win, the fight against warrantless access to location data is far from over: on May 5th, the parliament's Intelligence Subcommittee voted 6-3 to extend the Shin Bet's warrantless access to location data to track infected people, while the government is working towards advancing legislation to enable this form of surveillance more permanently. Right after the approval of the emergency regulations on March 16th, the Association for Civil Rights in Israel filed a petition to Israel's High Court stressing the need to protect democracy during the pandemic:
"Democracy is measured precisely in those situations when the public is afraid, exposed day and night to nightmare scenarios [...]. Precisely in such moments, it is vital to act in a considered and level-headed manner, and not to take draconian and extreme decisions and to accustom the public to the use of undemocratic means [...]."
In South Africa, where a state of disaster has been in place since March 15th, the government amended a law to create a COVID-19 Tracing Database. The database will include personal data of those who are infected or suspected to be infected with COVID-19, including their COVID-19 test results, as well as the details of those who have come or are suspected to have come into contact with them. The Act authorizes the Director-General of Health to order telecom companies to disclose the location of persons who are infected or suspected to be infected, without prior notice, as well as the location of those who were in contact or suspected to have been in contact with them, and to include all of this data in the COVID-19 Tracing Database. The law was met with severe backlash from civil society, and has since been amended twice. In a win for privacy, the last amendment deleted the provisions that obliged telecommunications companies to disclose location data for inclusion in that database. Poland, which has been in a state of emergency since mid-March, has a track record of encroaching on the rule of law, even triggering the EU's legal process for addressing violations of European values. The EU Commission has stated that the Polish judiciary is under "the political control of the ruling majority. In the absence of judicial independence, serious questions are raised about the effective application of EU law." Now with COVID-19, the Polish government has also introduced several COVID acts, providing new surveillance powers for the executive. Article 11 of the COVID-19 act obliges telecom operators to collect and give access to location data of people infected with COVID-19 or those under quarantine upon a simple request, as well as aggregate location data of an operator's clients.
The new legislation states that these measures will remain in place until the pandemic has ended. Slovakia is another Eastern European country that has expanded telecom companies' obligations to retain metadata during the crisis. Slovakia has been in a partial state of emergency since March 15th, during which several amendments to the country's telecommunications act were fast-tracked through parliament. The amendments, which immediately caused outrage, authorized national health authorities to obtain location data from telecommunications operators in the context of a pandemic. As in Poland, the amended law allows both for the retention of anonymized aggregate data and for individual location data. After being challenged before the Slovakian Constitutional Court, these measures were recently suspended due to their vagueness and insufficient safeguards against misuse. Croatia's government attempted to introduce similar, fast-tracked amendments to the country's electronic communications law. The bill would have authorized the exceptional processing of location data to "protect national and public safety," and would have obliged telecommunications operators to share the data with the Ministry of Health. As in other countries, the proposal was met with outrage among civil society, experts, and the opposition, as more than forty civil society organizations signed onto a letter demanding that the government withdraw the bill. The criticism was eventually successful, but the Croatian example underlines the wider pattern of states seizing any opportunity during the crisis to expand their surveillance powers, in the Balkans and beyond. Bulgaria, yet another Eastern European country in a state of emergency, has passed an emergency law, which included amendments to the country's electronic communications act. The law now obliges telecommunications companies to store and (upon request) provide metadata to competent authorities, including the police, to monitor citizens' compliance with quarantine measures. The law does not require requests to be authorized by courts but merely provides for an after-the-fact judicial review process, which the country also uses when retaining data to prevent terrorist attacks. Not limited in time, the measures will remain in force even after the state of emergency has come to an end. Like Poland, Bulgaria has been showing authoritarian tendencies for several years, and this extension of the country's data retention regime, ushered in during the COVID crisis, may help solidify autocracy. The pattern of European countries reaching for location data surveillance also pokes holes in the popular image of the European Union as particularly protective of the right to privacy. Peru, like some European countries, has also issued a state of emergency decree. It compels telephone companies to grant emergency call centers access to cell-site and GPS data of those who have called the national emergency number and are infected with or suspected of having COVID-19. The decree also authorizes the emergency call centers to access the historical location data of the devices from which the call was made, including data from the three days before such a call. A Peruvian digital rights NGO cast doubt on the legal basis of such surveillance measures. It also raised concerns about the pitfalls that restricting the right to privacy under a state of emergency can cause in Peru.
Peru has regularly declared states of emergency in rural conflict areas where activists have been protesting to defend their land, the environment, and their rights. South Korea, a country that has been fighting coronavirus outbreaks since the Middle East Respiratory Syndrome (MERS) epidemic in 2015, has dramatically restricted the right to privacy in the context of the pandemic. The Infectious Disease Control and Prevention Act and its enforcement decree allow health officials to obtain sensitive personal data on the infected and those suspected to be infected, as well as their contacts and those suspected to be in contact. Such data includes names, resident registration numbers, addresses, telephone numbers, prescriptions, medical treatment records, immigration control records, credit card, debit card, and pre-paid card statements, transit card records, and CCTV recordings from third-party companies. Police can seize this personal data without the consent of the data subjects and without any judicial oversight. The Act also allows health officials and administrators of municipalities to collect location data on the infected (or suspected to be infected) and their contacts (or suspected contacts) from telecommunications operators and location data providers (from cell sites and GPS). Ecuador, the country with the third-worst COVID-19 outbreak in Latin America, has also relied on executive powers to expand location surveillance using GPS and cell site data. President Lenin Moreno issued a vaguely worded emergency decree authorizing the government to "use satellites and mobile telephone companies to monitor the location of people in a state of quarantine or mandatory isolation." Latin American NGOs immediately reacted, reminding Ecuador that any surveillance measure should be necessary and proportionate, and hence effective in containing the virus. The NGOs' statement echoes the words of the U.N. Special Rapporteurs, who jointly called upon U.N. Member States to follow international human rights standards: "While we recognize the severity of the current health crisis and acknowledge that the use of emergency powers is allowed by international law in response to significant threats, we urgently remind States that any emergency responses to the coronavirus must be proportionate, necessary and non-discriminatory." The appeal builds upon the UN High Commissioner for Human Rights' call to put human rights at the centre of the coronavirus outbreak response.
Conclusion
Location surveillance comes with a host of risks to citizens' privacy, freedom of expression and data protection rights. EFF has long been fighting against warrantless access to location data and blanket data retention mandates, and has called on governments to be more transparent about their surveillance programs. Especially now, during a major health crisis in which governments have not shown the efficacy of using GPS or cell-site location data about individuals, they should be as transparent as possible about what data they are collecting, and for what purposes. Above all, the necessity and proportionality of any location data surveillance schemes must be demonstrated.
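As a rough illustration of the accuracy gap discussed above, the sketch below compares an assumed cell-sector positioning error with the two-meter contact distance. The 500-meter figure is a hypothetical, illustrative value; real-world accuracy varies widely with tower density and technique.

```python
# Rough sketch of why cell-site location data cannot establish close contact.
# The positioning error below is an assumed, illustrative figure.
import math

TRANSMISSION_DISTANCE_M = 2.0   # commonly cited close-contact threshold
ASSUMED_CELL_ERROR_M = 500.0    # hypothetical cell-sector positioning uncertainty

# If each phone's position is only known to within a 500 m radius, the area of
# uncertainty dwarfs the area within which transmission is plausible.
uncertainty_area = math.pi * ASSUMED_CELL_ERROR_M ** 2
contact_area = math.pi * TRANSMISSION_DISTANCE_M ** 2
print(f"Uncertainty radius is {ASSUMED_CELL_ERROR_M / TRANSMISSION_DISTANCE_M:.0f}x the contact distance")
print(f"Uncertainty area is {uncertainty_area / contact_area:,.0f}x the contact area")
```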

New Low for a Bad Patent: Patent Troll Sues Ventilator Company (Thu, 21 May 2020)
Patent trolls don't care much about innovation. Their lawsuits and threats are attempts at rent-seeking; they're demanding money from people who make, use, or sell technology just for doing what they were already doing—for crossing the proverbial "bridge" that the patent troll has decided to lurk under. You might think that, during the Coronavirus outbreak and concurrent economic downturn, meritless patent threats would ease up a bit. After all, a lot of companies—particularly smaller ones—are having a hard time making ends meet. And about 32% of patent troll lawsuits do target small and medium-sized businesses. But that's not what's happening. In fact, lawsuits by patent trolls are up this year—20% higher than last year, and 30% higher than in 2018. By the count of one company that tracks them, patent trolls have filed 470 lawsuits in the first four months of 2020. Meritless patent assertions take a major toll on the economy. In boom times, that's bad enough; during a recession, it can be even more painful. Patent trolls demand money that struggling companies don't have, and thriving companies will have less to spend on R&D and innovation. Not only are we seeing a rise in overall litigation, but we can see specific cases that are likely to impact companies involved in direct medical response. Last month, we noted the case of Labrador Diagnostics LLC, a patent troll that sued a company that makes and distributes COVID-19 tests, using patents that it acquired from Theranos, the fraudulent blood-testing company. Now, a shell company called Swirlate IP has acquired a patent that describes generic data transmission, and has used it to sue five different companies—including ResMed [PDF], a company that makes ventilators. Other targets include Livongo Health [PDF], Corning Optical Communications [PDF], Badger Meter [PDF], and Continental Automotive [PDF]. What is Swirlate? It's a limited liability company whose address is a "Pack and Mail Shoppe" in a strip mall in Plano, Texas. Unified Patents, which is offering a $3,000 bounty for prior art on one of Swirlate's patents, has linked Swirlate to IP Edge—a big patent assertion company owned by three IP lawyers, which controls a vast swath of shell companies like Swirlate that it uses to hold patents and sue operating companies. IP Edge shell companies have been recipients of multiple EFF Stupid Patent of the Month awards, and IP Edge has been linked to some of the most lawsuit-happy patent trolls of all time.
Patenting Generic Data Transfer
Swirlate IP is using two similar patents to sue ResMed, U.S. Patent Nos. 7,567,622 and 7,154,961. Let's take a close look at the '622 patent. Once you dig through the technical jargon, the '622 patent really just describes data transmission. For instance, the first claim of the '622 patent talks about "modulating data packets"—a generic and conventional procedure that can be performed with any computer capable of connecting to the internet, not to mention analog technology. It's done according to a "modulation scheme," which could be any kind of already-available technical standard—the patent doesn't describe the scheme. Subsequent steps describe using a "diversity branch," using a "receiver," the "retransmi[ssion]" of data packets when the first transmission wasn't successfully decoded, and "demodulating," or decoding, the transmitted data. There's nothing about how it's done. The accused products, like ResMed's BiPAP breathing machine, use standard technology like LTE transmission.
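To see how generic the claimed retransmission step is, here is a bare-bones sketch of the kind of stop-and-wait retransmission loop that long predates this patent. It illustrates the general technique only; it is not the patent's claim language or any accused product's implementation, and the send and acknowledgment functions are hypothetical stand-ins.

```python
# Bare-bones stop-and-wait retransmission: send a packet, wait for an
# acknowledgment, and resend if none arrives. The functions below are
# hypothetical stand-ins for whatever modulation scheme and radio link a
# real system would use.
import random

def modulate_and_send(packet):
    """Stand-in for handing the packet to any standard modulation scheme."""
    print(f"sending {packet!r}")

def ack_received():
    """Stand-in for listening for an acknowledgment (here: simulated loss)."""
    return random.random() > 0.3

def send_reliably(packet, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        modulate_and_send(packet)
        if ack_received():
            print(f"acknowledged after {attempt} attempt(s)")
            return True
    print("gave up")
    return False

send_reliably(b"sensor reading")
```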
Unfortunately, this is a pattern we've seen over and over again with technology patents. It's particularly egregious in this situation, because the lawyers who run IP Edge are set to profit from a lawsuit against a company that is directly responding to the COVID-19 crisis. The patent was originally owned by Panasonic, but was sold in 2015; it ended up in the hands of Swirlate in April. But Panasonic didn't invent ARQ retransmission, or the various other data transmission methods that are often agreed to by international standard-setting bodies. And it certainly didn't invent LTE, an international data transmission protocol that was finalized in 2008. Yet, in the hands of Swirlate, this patent is being used to sue over products that use everyday LTE technology. This isn't the first time Panasonic has sold patents to litigious patent trolls; in 2018, it sold a portfolio of patents to Wi-LAN, a Canadian patent assertion company. What's the solution to all this? We've suggested that the government take immediate action against patent abusers who are exacerbating the health crisis. But that's only part of the solution here; three of Swirlate's targets aren't health care companies. We need to have other protections against patent threats like this. Those include a strong "inter partes review" system to check already-granted patents, and strong fee-shifting laws for when companies insist on pushing forward with low-quality patents in court. Ultimately, we need a patent office that will just say no to generic patents like these. That means a patent office that will enforce Supreme Court rulings like Alice v. CLS Bank that have tightened up the rules on patenting generic ideas.

No to California Bill on Verified Credentials of COVID-19 Test Results (Wed, 20 May 2020)
EFF opposes a California bill, A.B. 2004, that would authorize the issuers of COVID-19 test results to do so with digital verifiable credentials. This bill would take us a step towards national digital identification, create information security risks, exacerbate social inequities in access to smartphones and COVID-19 tests, endorse one solution to an evolving technological problem, and fail to limit who may view credentials of test results. The bill also would not effectively advance its stated goal of addressing the COVID-19 outbreak.
What This Bill Does
The official bill analysis for A.B. 2004 states that the "purpose of the bill" is to "authorize the use of blockchain-based technology to provide verifiable credentials for medical test results, including COVID-19 antibody tests …" The bill's author wrote that such credentials could be used for "returning to work, travel or any other processes wherein verification of a COVID-19 test would be needed." The analysis states these credentials could be used as "'immunity certificates' for antibody tests in order to resume economic activity," and might encourage people to participate in automated contact tracing. The text of A.B. 2004 proposes to do this by saying public entities and other issuers of "COVID-19 test results or other medical test results may use verifiable credentials, as defined by the World Wide Web Consortium (W3C), for the purpose of providing test results to individuals." The bill also requires that such credentials must follow certain W3C specifications, specifically based on the "Verifiable Credentials Data Model" the W3C published in November 2019. This model identifies "distributed ledgers" as one example of "verifiable data registries."
A Worrying Step Towards National Digital Identification
EFF has long opposed mandatory national identification systems. These schemes, as used today in numerous countries, typically assign an identification number to each person. Each individual must then use it for a broad range of identification purposes. Such schemes facilitate government surveillance of all occasions when people use their identification. Large amounts of personal information are linked to the identification number and stored in a centralized database. The requirement to produce identity cards or numbers on demand habituates people into participating in their own surveillance. For these reasons, we oppose the federal "Real ID" law, which creates a vast federal database linking together state-issued identifications. Likewise, we are troubled by digital driver's licenses, because they might be used to aggregate data about all the occasions when people use their driver's license as identification. Obviously, a system of blockchain verified credentials would have important differences from such national identification and digital driver's license schemes, because blockchain is a decentralized public ledger. Still, blockchain verified credentials would make it a normal practice for people to present a digital token as a condition of entering a physical space, and for gatekeepers—such as security guards or law enforcement officers—to demand such digital tokens. Such a system could be expanded to document not just a medical test result, but also every occasion when the subject presented that result to a gatekeeper. It could also be expanded to serve as a verified credential of any other bit of personal information that might be relevant to a gatekeeper, such as age, pregnancy, or HIV status.
And all of the personal information associated with a blockchain verified credential could be linked to other digital record-keeping systems.
Presenting Digital Credentials Creates New Information Security Risks
We have information security concerns surrounding the moment when a person presents their digital verifiable credential to a gatekeeper. If the digital credential is an image in the person's phone, then the person must unlock their phone to show it to the gatekeeper. This creates inherent risk that the gatekeeper will physically seize the phone, and examine or even copy the personal information inside the unlocked phone. This risk is especially high if the gatekeeper is a police officer or other government official. Alternatively, the verified credential might be electronically transmitted from the person's phone to the gatekeeper's device. But such transmission would create a new threat vector for adversaries to surveil or steal both the transmitted credential and other information inside the person's phone.
Smartphone-Based Credentials Don't Account for Broader Social Inequities
We have social equity concerns about a smartphone-based system of digital verified credentials of COVID-19 test results. About one-in-five people in the United States do not have a smartphone, according to a Pew Research Center study in 2019. The smartphone "have-nots" include 47% of people who are 65 or older, 34% of people who did not graduate from high school, 29% of people who earn less than $30,000 per year, and 29% of people living in rural areas. Moreover, there are racial and ethnic inequities in access to COVID-19 testing, among other inequities in access to COVID-19 health care. Thus, if our society deploys smartphone-based verification credentials of COVID-19 test results as the primary system to control access to public spaces like offices and schools, that would aggravate existing inequities in access to both smartphones and COVID-19 testing.
A.B. 2004 Endorses A Single Way to Solve A Technological Problem
Technologies often change faster than laws, and unpredictably so. As a result, a rule that seems sensible today can easily become a security weak point tomorrow. So, it's an EFF principle that legislators should avoid endorsing one technological approach while discouraging others. Unfortunately, A.B. 2004 endorses one approach for developers in California who seek to build digital verified credentials of medical test results. Although the W3C's Verifiable Credentials Data Model is not itself a limit on technological development, A.B. 2004 amounts to one by singling out a particular verifiable-credential scheme as the favored approach. The bill thus disfavors other possible data delivery and storage solutions.
A.B. 2004 Has No Limits on Who May View a Verified Credential
A.B. 2004 authorizes the issuers of medical test results to do so with verifiable credentials. But it does not limit to whom such results may be issued, or upon whose authority. It is not clear how the bill would interact with existing medical privacy laws like HIPAA and California's Confidentiality of Medical Information Act.
Also, according to the W3C Model on which the bill is built: “The persistence of digital information, and the ease with which disparate sources of digital data can be collected and correlated, comprise a privacy concern that the use of verifiable and easily machine-readable credentials threatens to make worse.” Thus, the bill is a blank check to issuers to disseminate a verified credential, without first obtaining consent from the subject of that credential. This Bill Would Not Effectively Advance Its Stated Goals Finally, when government proposes to use a technology, in the name of solving a problem, in a way that burdens our freedoms, we must ask: has the government shown the technology would be effective at solving the problem? If not, the burdens on our freedoms are not justified. Here, the proponents of using digital verified credentials of COVID-19 test results have not shown that this technology would help address the outbreak in a manner recommended by the public health community. First, there is an inherent problem with using verified credentials for the results of any medical test involving COVID-19: while the credentials might establish that a particular person received a particular result from a particular test, the credentials cannot establish the validity of the underlying test. Any negative test result for the presence of the virus can be a false negative, meaning the test subject has the virus but the test erroneously reports they do not. Some COVID-19 tests have a false negative rate of as high as 15%. A verified credential of a negative test result implies “this person does not have COVID-19,” but a negative test result actually means only “this person probably does not have it.” Second, one of the bill’s stated goals is to establish digital verified credentials showing whether a person is immune from COVID-19. But no immunity test currently exists. As the World Health Organization recently concluded: “There is currently no evidence that people who have recovered from COVID-19 and have antibodies are protected from a second infection.” Third, one of the bill’s stated goals is to establish digital verified credentials for purposes of screening people for entry to public places, based on whether or not they present a health threat to others. But while digital verified credentials might be suited to facts that are highly static, such as whether a person is older than 21, they are poorly suited to facts that commonly change over time, such as whether a person is pregnant. Indeed, the abstract of the W3C’s Data Model provides use cases that are highly static: whether a person has obtained a driver’s license, a university degree, or a passport. Here, on the other hand, digital verified credentials of negative virus test results would only show non-infectiousness at an earlier point in time, potentially days or weeks before a person presents their credentials to a gatekeeper. In the meantime, the person might have been infected. Worse, the immutability of the blockchain might allow that person to continue to present gatekeepers with test results showing non-infectiousness—even after a subsequent test shows infectiousness. Fourth, one of the bill’s stated goals is to encourage people to use contact tracing apps. 
But in the ascendant versions of such apps in the United States, such as the Apple-Google Bluetooth-based “exposure notification” system, people only share ephemeral identifiers with each other’s phones and sometimes with a shared server, and never share medical test results with either. Likewise, while a testing authority may give an infected person a credential that allows them to upload their ephemeral identifiers to the shared server, the testing authority does not share that person’s test results with anyone. In short, contact tracing apps in the United States should not and generally will not involve the transfer of medical test results. So, there is no reason that a new system of verified credentials of test results would encourage a person to download a contact tracing app. Blockchain Verified Credentials Will Not Help End This Crisis Blockchain holds promise for solving some problems in a decentralized way, such as building privacy-protective digital currencies. But the use of blockchain, or other digital verified credentials, to prove COVID-19 test results will not help address the current public health crisis. Instead, it will create new problems for data privacy, social equity, and technological innovation. Thus, EFF opposes California A.B. 2004. You can read our opposition letter here, which is co-signed by the ACLU of California.

COVID-19 Patients’ Right to Privacy Against Quarantine Surveillance (Wed, 20 May 2020)
Governments around the world are using surveillance technologies to monitor whether COVID-19 patients are complying with instructions to quarantine at home. These include GPS ankle shackles, phone apps that track location, and phone apps that require patients to periodically take quarantine selfies and send them to government monitors. All of these surveillance technologies burden fundamental rights. And they can harm public health, by discouraging people from getting tested. No patient should be compelled to submit to such surveillance technologies merely because they tested positive for COVID-19, or are otherwise believed to be at elevated risk of infection. The varieties of quarantine surveillance Judges in West Virginia and Kentucky have ordered people to wear GPS ankle shackles (often called electronic monitoring), after they tested positive and then allegedly broke stay-at-home orders. Hawaii considered ordering all people arriving from out-of-state to wear GPS ankle shackles, to ensure compliance with that state’s mandatory 14-day quarantine upon arrival. That plan was reportedly shelved when the state’s attorney general raised concerns. The Florida governor indicated that his state is developing an app to monitor whether out-of-state visitors comply with that state’s 14-day quarantine upon arrival. Also, the infamous face surveillance vendor Clearview AI was reportedly in talks with state agencies to use its technology to track COVID-19 patients. Using face recognition to enforce quarantine might mean putting large swaths of public areas under video surveillance because someone walking by might be identified as a COVID-19 patient. Governments in other countries are using additional technologies. For example, South Korea requires quarantined people to download an app that uses GPS to track their location and alert the government if they leave home. Poland requires quarantined people to download an app, and use it to comply with recurring prompts to take a selfie with a time-and-place stamp, and then send the photo to the government. Israel uses drones to peer through the windows of quarantined people’s homes. In theory, governments might attempt to monitor patient quarantines with other surveillance technologies. These include cell site simulators directed at a patient’s phone, automated license plate readers directed at a patient’s car, GPS devices attached to a patient’s car, or record demands for location data sent to a patient’s communication service provider. The harms of quarantine surveillance Electronic surveillance of COVID-19 patients may undermine public health. In prior outbreaks, people who trusted public health authorities were more likely to comply with containment efforts. A punitive approach to containment can break that trust. For example, people may avoid testing if they fear the consequences of a test result showing they are infectious. Already, fewer than half of the people in the United States trust the government to take care of their health, according to a recent study. Moreover, electronic surveillance to ensure that COVID-19 patients comply with home quarantine would greatly burden their fundamental rights. For example: Compulsion to download a surveillance app undermines the right of individuals to autonomously control their smartphones, including how these devices process their personal information. Government should not be able to turn our most intimate tools into our parole officers. 
Ordinarily, when one person does this to another, we call it stalkerware. Also, the apps that governments force people to install may introduce new security vulnerabilities that make it easier for adversaries to hack their phones. Forcing patients to send selfies to the government is a form of compelled speech. Also, many selfies will reveal sensitive information from inside the home, such as the patient’s grooming when in private, the presence of other people, and personal, expressive effects such as political campaign posters. Captured images of books would intrude on the privacy of our home libraries, which has received special solicitude from the Supreme Court. GPS ankle shackles are uncomfortable, can trigger false alarms, and often must be paid for by the detainee. Face surveillance is so intrusive that the government should not use it at all, including to address COVID-19. Further, electronic surveillance to monitor home quarantine carries the inherent risk of discriminatory application against people of color and other vulnerable groups. There are racial disparities in the use of GPS ankle shackles in the criminal justice system, and racial disparities have already emerged in how police are enforcing social distancing rules. All too often, new surveillance technologies are deployed in a discriminatory manner. Thus, for reasons of both efficacy and human rights, COVID-19 patients and others should not be subjected to any kind of surveillance technology to monitor their home quarantine, based solely on a positive test result or suspicion they are at elevated infection risk. Evidence that a person may be infected simply does not tend to show that they will violate a stay-home order. Most people will comply with public health instructions because they want to keep their families and communities safe. Constitutional limits on quarantine surveillance The U.S. Constitution limits when the government may use surveillance technology to monitor whether a COVID-19 patient is complying with a stay-home quarantine order. Under the Supreme Court’s watershed Carpenter decision in 2018, the government conducts a “search” that triggers Fourth Amendment scrutiny when it uses technology to automatically create a “detailed chronicle of a person’s physical presence.” Lower courts have extended Carpenter beyond the particular surveillance technology at issue in that case (historical cell site location information or CSLI) to additional technologies, including real-time CSLI, and pole cameras aimed at a suspect’s property. Here, weeks of automated location tracking of a quarantined patient should likewise be considered a search. Ordinarily, the government needs a warrant to conduct a search, based on a judge’s finding of probable cause of crime and particularity about who and what will be searched. In this context, a judge must find probable cause that a particular person will break quarantine. Perhaps that could be shown by evidence that a specific patient already broke quarantine in the past. It could not be shown just by a patient’s current infectiousness. Some have suggested that quarantine surveillance falls within the “special needs” doctrine. 
This is an exception from the Fourth Amendment’s ordinary warrant requirement, if the government has a purpose “beyond the normal need for law enforcement” and the burdened privacy interests are “minimal.” EFF has long resisted the use of this doctrine as an excuse to engage in highly intrusive surveillance without a warrant, including NSA mass internet surveillance and DNA extraction from arrested people. The doctrine should not apply here, when police strap a GPS shackle to a patient’s ankle, or force a patient to download a tracking app, and threaten to arrest the patient if this surveillance technology shows they stepped foot outside their home. Even if government could show a special need, the doctrine still requires a balancing of the benefits and costs of requiring a warrant to conduct the type of search at issue. Here, quarantine surveillance is highly intrusive, and the government should readily be able to seek prompt judicial review of any alleged need for quarantine surveillance. The U.S. Constitution also requires procedural due process, meaning notice and an opportunity to be heard when the government deprives a person of their liberty. Here, patients must have a timely and fair opportunity to challenge the use of surveillance technology to monitor their home quarantine. Conclusion Automated location tracking is a grave intrusion on personal liberty, including by means of a GPS ankle shackle or compelled download of a surveillance app. This surveillance is not justified merely because a person has been ordered to quarantine at home after they tested positive for COVID-19 or are deemed to have an elevated infection risk. One need not speculate regarding what kinds of surveillance power governments will demand next. In the name of containing COVID-19, the governments of Russia and China already have required the general population (not just infected patients) to download location-tracking apps onto their phones. Then these governments use these apps to monitor and limit the movements of the general population. History shows that when crises abate, governments hold onto the powers they seize to address the crises.

California Prisons Block AI Researchers from Examining Parole Denials (Wed, 20 May 2020)
EFF Clients Want Access to Public Records on Race and Ethnicity in Parole Hearings San Francisco - A team of researchers who want to develop a machine learning platform to help analyze and detect any patterns of bias in California parole-suitability decisions has been blocked for years by the state’s Department of Corrections and Rehabilitation (CDCR). In a lawsuit filed today by the Electronic Frontier Foundation (EFF), the researchers argue that the state’s public records law requires the release of race and ethnicity data they need to develop their work. “We want to create a machine learning tool that can extract factors from parole hearing transcripts, describe the current decision-making process, and identify which decisions appear inconsistent with that process and might be worthy of reconsideration. We need race data to do that,” said Catalin Voss, a PhD student at Stanford University. “Modeling any criminal justice process without accounting for race is unthinkable. Race is embedded in the American justice system. Empirically, the only way to disentangle race from other factors is with the race data,” said Jenny Hong, a PhD candidate at Stanford. The team—which also includes Nick McKeown, a Professor of Computer Science at Stanford, and Kristen Bell, an Assistant Professor of Law at the University of Oregon—sent their first California Public Records Act (CPRA) request for race and ethnicity data in September 2018. However, officials from CDCR denied that request and several more since, claiming exemptions to the CPRA that a judge in an earlier case has already ruled do not apply to this data. “When I first connected with Catalin about developing this research, I assured him we’d be able to get data about race because it’s public record. I thought it might take two months. It has been two years, and we have no real answers,” said Bell. “The California Department of Corrections and Rehabilitation is making the same arguments with us that have already lost in court,” said EFF Staff Attorney Saira Hussain. “Our clients want to use machine learning to identify patterns of discrimination—something you’d think prison officials might want to learn more about.” Bell has conducted previous research on California parole-suitability decisions and found that race and other illegitimate factors had a significant influence on parole decisions for individuals sentenced to life with the possibility of parole as juveniles. During negotiations between the researchers and CDCR about a potential data release, one official said that the records would be provided only if Bell ceased her involvement in the project. “What’s most upsetting is knowing that our experience is likely not unique,” said Bell. “There has been much debate about evidence-based criminal justice reform in California, but how can we know if we’re moving any closer to justice when the prison system is preventing independent researchers from accessing race data?” “Government officials should not be deciding which researchers get access to information based on whether they think they will like the answers,” said EFF Staff Attorney Cara Gagliano. “Our clients simply want CDCR to follow the law and provide the records they need to do their work.” For the full petition in Voss v. CDCR: https://www.eff.org/document/petition-voss-v-cdcr Contact:  Cara Gagliano Staff Attorney cara@eff.org Saira Hussain Staff Attorney saira@eff.org

Facebook's Oversight Board: Who (and What) Is Missing From the Picture So Far (Wed, 20 May 2020)
We’ve been skeptical of Facebook’s Oversight Board from day one. We’ll follow closely and keep open minds, because we appreciate it is a first attempt at some semblance of much-needed governance and external review. But no amount of “oversight” can fix the underlying problem: Content moderation is extremely difficult to get right, particularly at Facebook scale. And, given the tiny percentage of disputes the Board will address, we doubt that it will make much of a dent in the universe of content moderation failures. Now that the Board’s first members have been announced, we have new concerns. Content moderation errors disproportionately impact vulnerable communities—many of which are located outside of the United States. And yet, twenty-five percent of the initial Board is composed of Americans, with still others located or working in the United States. Many members seem to have general experience with law or institution-building, which may be helpful, but we’re not seeing much specific experience with international human rights frameworks. We’re also concerned about who’s not on the Board. As Kara Swisher remarked in the New York Times, “[T]here are no loudmouths, no cranky people and, most important, no one truly affected by the dangerous side of Facebook.” We see too few individuals from certain regions—such as the Middle East and North Africa and Southeast Asia. We don’t see any LGBTQ advocates, transgender Board members, or disability advocates. And we don’t see much representation of Internet users who create the content at issue. Although the Oversight Board is designed to identify and decide the most globally significant disputes, the Board in its current composition seems more directed at addressing parochial U.S. concerns, such as alleged moderation of conservative speakers, than other issues that are far more globally relevant—such as the frequent removal of documentation of human rights violations or the inconsistent enforcement of the rules.  Part of the difficulty here is that Facebook has tried to replicate the concept of the court but removed a crucial aspect of many court systems: the checks and balances built into the appointment of judges. In well-functioning democracies, judges may be elected directly by voters—and dismissed by them—or appointed by one branch of government but then affirmed by another. While not perfect, these processes create a system of accountability. Finally, Board members need experience with content moderation and its discontents. Content moderation is an incredibly complex topic and one that is difficult to fully understand, particularly in the face of corporate opacity. We are concerned that a Board lacking in content moderation experts will rely on Facebook—a company that has been notoriously opaque about its internal operations—to provide them with knowledge about the practice. The Board should look instead to the many outside people and organizations, including EFF, that have been working in this area, some for decades, for in-depth understanding of the challenges it faces. One good feature of the Board is that it has the ability to call on experts—and it should do so, early and often.

Stopping the Google-Fitbit Merger: Your Stories Needed! (Wed, 20 May 2020)
There's a dirty secret in the incredible growth of Silicon Valley's tech giants: it's a cheat. Historically, US antitrust regulators would be deeply concerned about mergers with major competitors in concentrated markets ("mergers to monopoly") and acquisitions of small companies to neutralize future competitive threats ("catch and kill"). And while often permitted, vertical integration ("platform monopolies" where the company that owns a key service competes with its own customers) would at least merit close review. But regulators have mostly been giving a pass to mergers and acquisitions in the tech space. And that’s had huge consequences. To a casual observer, companies like Google—now a division of parent company Alphabet—seem like energetic idea factories, spinning out new divisions at a bewildering rate. But a closer look reveals that Google's real source of "innovation" is its wallet as much as its brain trust: the company buys other companies more often than most of us buy groceries. Two of Google's signature products—Search and Gmail—are in-house projects, but the vast majority of its other successes came from snapping up other companies. (And it's hardly alone in this regard: Apple, Amazon, Microsoft, and the other titans of Silicon Valley have all grown primarily through gobbling up other companies, rather than by making their own winning products). After years of complacency, U.S. antitrust regulators are finally asking awkward, pointed questions about these mergers. A poster child for what's wrong with merger-driven growth is the Google-Fitbit acquisition, which would see the dominant wearable fitness tracker company disappear into the Googleplex, along with its massive trove of sensitive user data. That's where you come in. We want to ask the Department of Justice to stop this merger, and we want stories from Fitbit owners to help us explain why. For example: Did your employer force (or "strongly encourage") you to wear a Fitbit in order to receive company health benefits? Did you buy a Fitbit because you didn't want to give Google even more of your data? Does the Google-Fitbit merger make you feel like there's no point in opting out of Google data-collection because they'll just buy any company that has a successful alternative? If you've got an on-point personal story about your Fitbit, we want to hear about it! Contact us at mergerstories@eff.org.

Victory! German Mass Surveillance Abroad is Ruled Unconstitutional (Tue, 19 May 2020)
In a landmark decision, the German Constitutional Court has ruled that mass surveillance of telecommunications outside of Germany conducted on foreign nationals is unconstitutional. Thanks to Gesellschaft für Freiheitsrechte (GFF), which served as chief legal counsel in the case, this is a major victory for global civil liberties, especially for those who live and work in Europe. Many will now be protected after lackluster 2016 surveillance reforms continued to authorize surveillance of EU states and institutions for the purpose of “foreign policy and security,” and permitted the BND to collaborate with the NSA. In its press release about the decision, the court explained that the privacy rights of the German constitution also protect foreigners in other countries and that the German intelligence agency, Bundesnachrichtendienst (BND), had no authority to conduct telecommunications surveillance on them: “The Court held that under Art. 1(3) GG German state authority is bound by the fundamental rights of the Basic Law not only within the German territory. At least Art. 10(1) and Art. 5(1) second sentence GG, which afford protection against telecommunications surveillance as rights against state interference, also protect foreigners in other countries. This applies irrespective of whether surveillance is conducted from within Germany or from abroad. As the legislator assumed that fundamental rights were not applicable in this matter, the legal requirements arising from these fundamental rights were not satisfied, neither formally nor substantively.” The court also decided that as currently structured, there was no way for the BND to restrict the type of data collected and who it was being collected from. Unrestricted mass surveillance posed a particular threat to the rights and safety of lawyers, journalists and their sources and clients: “In particular, the surveillance is not restricted to sufficiently specific purposes and thereby structured in a way that allows for oversight and control; various safeguards are lacking as well, for example with respect to the protection of journalists or lawyers. Regarding the transfer of data, the shortcomings include the lack of a limitation to sufficiently weighty legal interests and of sufficient thresholds as requirements for data transfers. Accordingly, the provisions governing cooperation with foreign intelligence services do not contain sufficient restrictions or safeguards. The powers under review also lack an extensive independent oversight regime. Such a regime must be designed as continual legal oversight that allows for comprehensive oversight and control of the surveillance process.” The hearing comes after a coalition of media and activist organizations including the Gesellschaft für Freiheitsrechte filed a constitutional complaint against the BND for its dragnet collection and storage of telecommunications data. One of the leading arguments against massive data collection by the foreign intelligence service is the fear that sensitive communications between sources and journalists may be swept up and made accessible by the government. Surveillance which, purposefully or inadvertently, sweeps up the messages of journalists jeopardizes the integrity and health of a free and functioning press, and could chill the willingness of sources or whistleblowers to expose corruption or wrongdoing in the country. 
In September 2019, based on similar concerns about the surveillance of journalists, South Africa’s High Court issued a watershed ruling that the country’s laws do not authorize bulk surveillance, in part because there were no special protections to ensure that the communications of lawyers and journalists were not also swept up and stored by the government. In EFF’s own landmark case against the NSA’s dragnet surveillance program, Jewel v. NSA, the Reporters Committee for Freedom of the Press recently filed an amicus brief making similar arguments about surveillance in the United States. “When the threat of surveillance reaches these sources,” the brief argues, “there is a real chilling effect on quality reporting and the flow of information to the public.” The NSA is also guilty of carrying out mass surveillance of foreigners abroad in much the same way that the BND has just been told it can no longer do. Victories in Germany and South Africa may seem like a step in the right direction toward pressuring the United States judicial system to make similar decisions, but state secrecy remains a major hurdle. In the United States, our lawsuit against NSA mass surveillance is being held up by the government’s argument that it cannot submit into evidence any of the requisite documents necessary to adjudicate the case. In Germany, the BND Act and its sibling, the G10 Act, as well as their technological underpinnings, are openly discussed, making it easier to challenge their legality. The German government now has until the end of 2021 to amend the BND Act to make it compliant with the court’s ruling. EFF offers its hearty congratulations to the lawyers, activists, journalists, and concerned citizens who worked very hard to bring this case before the court. We hope that this victory is just one of many we are—and will be—celebrating as we continue to fight together to dismantle global mass surveillance. Related Cases: Jewel v. NSA

Balanced Copyright Rules Can Help Save Lives During the COVID 19 Crisis (Tue, 19 May 2020)
Our friends and right to repair leaders at iFixit are can-do people. If they see a need they can fill, they step up to do it – even if that need is massive. And they’ve done precisely that with a new and user-friendly archive of repair information for mission-critical medical equipment, including easy-to-use repair guides that boil down key information. Thanks to this project, biomedical technicians can quickly and easily access the information they need to keep medical equipment up and running, saving time, money, and lives. You might think manufacturers of medical equipment would already provide such a database, but you’d be wrong. Many manufacturers refuse to put their manuals online, or, if they do, the manuals are clunky PDFs that are hard to navigate and use, especially when you are trying to work quickly and carefully. So technicians turned to iFixit for help, and iFixit responded in old-school Internet fashion, sending out a call for documents and people to help organize them. They were overwhelmed by the response, and this new collection is the result. But there’s (at least) one remaining problem. Medical equipment manufacturers claim copyright in these manuals, and they haven’t hesitated to wield it to limit online availability. Which is a shame, because such projects have strong countervailing rights, at least under U.S. law, such as limitations on platform liability and the fair use doctrine. Section 512 of the Digital Millennium Copyright Act provides a safe harbor for online platforms so that they can host content uploaded by their users without needing to navigate the underlying doctrines of potential secondary liability if users infringe copyright. As long as they comply with the requirements of that safe harbor, including removing content on receiving a notice, they are insulated from copyright liability on the basis of their users’ uploads. And users can counter-notice to restore content where, as here, it is a fair use. Fair use depends on four factors, weighed together in light of the purposes of copyright. First, courts look to the purpose of the use. Is it transformative, i.e., is it new and different from that of the original creator? Is it commercial? Here, iFixit has pulled together a collection of manuals in one database, making them more findable, accessible, and useful, at no cost to the user. The database and user guides present non-copyrightable information from the manuals to aid in searching and more quickly comprehending that information, with the original manuals available so that technicians can verify and correct the summary information in a crowdsourced way. Whatever copyrightable elements exist in these manuals, they are irrelevant to the project’s purpose of disseminating and explaining factual repair information in order to save lives. Factor one favors fair use. Second, courts look to the nature of the work. Is it more factual or more expressive? Is it already published? Here, the works in question are highly factual and likely long since published. Factor two favors fair use. Third, courts consider whether the second user copied more than necessary for their purpose. Here, the project must copy entire manuals, or risk leaving out crucial details or context the technician will need to make the repair. This is an essential step in generating crowdsourced summaries and easy-to-use guides, as well. Factor three favors fair use. Fourth, courts consider whether the use will cause harm to a licensing market. 
Equipment manufacturers are in the business of selling equipment, not licensing repair manuals, and while they may occasionally sell them as part of trainings, it strains the imagination to conceive of the manuals as an independent licensing market. Allowing manufacturers a copyright monopoly over repair information risks creating a corollary monopoly on the maintenance of those devices. Far from a legitimate licensing market, that would be a misuse of copyright to inhibit competition in an adjacent market for non-copyrightable goods and services. Factor four favors fair use. Finally, does a fair use finding further or hinder the purposes of copyright? This one is easy. There’s no public interest in locking down these manuals because manufacturers don’t need a copyright incentive to draft or publish them; they do so as a natural corollary to their real business of selling medical equipment. The Medical Device Repair Database, however, does promote the public interest by encouraging the creation of user guides that will help technicians, and the strapped hospitals they work for, do their job more effectively. Contrary to the belief of some rightsholders, copyright law is all about balance, and it protects secondary uses as well as original ones, especially when the usual copyright incentives were neither relevant nor needed to create the original work. That balance is working here – and a good thing too.

Security Expert Tadayoshi Kohno Joins EFF Advisory Board (Tue, 19 May 2020)
EFF is proud to announce a new addition to our crack advisory board: security expert and scholar Tadayoshi Kohno. A professor at the University of Washington’s Paul G. Allen School of Computer Science & Engineering, Kohno is a researcher whose work focuses on identifying and fixing security flaws in emerging technologies, the Internet, and the cloud. Kohno examines and tests software and networks with the goal of developing solutions to security and privacy risks before those risks become a threat. His research focuses on helping protect the security, privacy, and safety of users of current and future generation technologies. Kohno has revealed security flaws in electronic voting machines, implantable cardiac defibrillators and pacemakers, and automobiles. He recently studied flaws in augmented reality (AR) apps, and last year co-developed a tool for developers to build secure multi-user AR platforms. A 2019 report he co-authored about the genealogy site GEDmatch, used to find the Golden State Killer, showed vulnerabilities to multiple security risks that could allow bad actors to create fake genetic profiles and falsely appear as a relative to people in the GEDmatch database. Kohno has spent the last 20 years working to raise awareness about computer security among students, industry leaders, and policy makers. He is the recipient of an Alfred P. Sloan Research Fellowship, a U.S. National Science Foundation CAREER Award, and a Technology Review TR-35 Young Innovator Award. He has presented his research to the U.S. House of Representatives, and had his research profiled in the NOVA ScienceNOW “Can Science Stop Crime?” documentary and the NOVA “CyberWar Threat” documentary. Kohno received his Ph.D. from the University of California at San Diego, where he earned the department’s Doctoral Dissertation Award. We’re thrilled that Kohno has joined EFF’s advisory board.

The House Passed Legislation to Keep People Online Despite COVID-19, And the Senate Should Follow (Tue, 19 May 2020)
The just-passed HEROES Act is a massive relief package designed to alleviate the harm of a massive crisis. In it is the Emergency Broadband Benefit Program, which would make it easier for Americans affected by COVID-19 to stay connected to the Internet. As the Senate takes up this legislation, it should make Internet access a priority, not a bargaining chip. The Details The Emergency Broadband Benefit program mandates that Internet service providers (ISPs) offer broadband service for free to COVID-19-impacted people, with the government paying the ISPs $50 per month to cover the cost ($75 per month on tribal lands). ISPs will also be required to offer the discounted promotional packages they offered as of May 1, 2020, which means any special offers an ISP was providing for high-speed service at an affordable price (including any teaser prices) will be locked in place for the duration of the emergency. By prohibiting early termination fees, the legislation would also effectively abolish the long-term contracts consumers are often forced to sign to obtain those lower rates. In many cases, this arrangement, even in cable monopoly markets, would guarantee that a COVID-19-impacted person could obtain high-speed access for free. The legislation envisions authorizing the program with up to $8.8 billion and would be operational until September 30, 2021. Eligible households would have to meet one of several criteria designed to capture Americans who are going to be most negatively impacted by COVID-19, most notably households that had a “substantial loss of income” since February 29, 2020, a documented layoff, or an application for unemployment. In essence, it is a federal guarantee of broadband access during this extremely challenging time. Broadband Is an Essential Service and Should Remain a Priority Healthcare, job hunting, communication, schooling--the Internet is vital for the U.S. to weather this crisis. This is even more true for people who have lost jobs, students of all ages who depend on their parents’ Internet, and those living in rural, low-income, and tribal lands, which ISPs have failed to invest resources in. Internet access is not just a footnote in any relief package; it is a priority. The inevitable process of negotiating between the House and Senate will force elected officials to make choices on what will and will not be law. If you have a friend or family member who is facing serious economic hardship as a result of COVID-19, you need to contact your Senators and tell them to include free broadband access for the unemployed in whatever final package Congress produces. Take Action Tell Your Senators to Keep Americans Online

House Legislation Guarantees Internet Access for Those Affected by COVID-19 (Wed, 13 May 2020)
The House of Representatives has introduced new COVID-19 emergency response legislation to address the largest public health and economic calamity the United States has faced in generations. Like the crisis it is meant to address, the bill is massive. One provision deserves particular attention: guaranteeing free Internet access if you have been economically harmed by COVID-19 (as originally envisioned in legislation promoted by Congressman Marc Veasey). Internet access is more important than ever, as those who can are working from home, kids are attending school online, and people in general rely on the Internet for information. People are losing jobs at almost unprecedented rates—the U.S. is facing the highest unemployment rate since the Great Depression. No one can afford to lose Internet access on top of all the other economic stress we are facing. Americans will be experiencing social distancing, in some form or another, for a long time. That means many businesses, educational institutions, religious services, and social functions that rely on large gatherings will do so online. Even organizations that open their doors for limited numbers of in-person interactions may end up relying on the Internet to make up the difference or serve at-risk populations. Broadband has always been an essential service. But under the conditions brought on by the global pandemic, it is the only way to fully participate in society. Tens of millions of Americans have lost their incomes through no fault of their own, and are complying with the public health emergency requirements to stay at home and reduce their contact with one another to avoid spreading the infection. This makes broadband access to their homes the only means of communicating safely with their friends, family, and prospective employers. For COVID-19-impacted parents with children, broadband access will be the means of providing remote education. And for seniors and others highly at risk from the virus, broadband access will be the means to stay in touch with their families from their homes—and for their families to regularly check in on them. For everyone, a lot of face-to-face healthcare has been supplemented or replaced by telehealth. At a very basic level, the Emergency Broadband Benefit Program in this bill requires Internet Service Providers (ISPs) to offer free broadband service for COVID-19-impacted people, with the government compensating the ISPs at a set rate. The bill also locks in any of the promotional rates ISPs offered during the crisis for the duration of the crisis—ISPs cannot start charging people more until the crisis is over. And those special deals will come with fewer strings, since the bill effectively ends mandatory long-term contracts attached to promotional low prices by eliminating early termination fees. As mentioned, this bill is massive. As various parts of it are renegotiated, Congress must keep in mind that Internet access is vital and cannot be bargained away. Many Americans have little to no choice among high-speed Internet providers. And monopolies and near-monopolies have little to no incentive to keep prices low or speeds high. This bill will address the price part of the equation, ensuring that Americans can stay online during the crisis. This is not something merely tacked on to COVID-19 relief; rather, it must be a priority for policymakers. 

Governments Shouldn’t Use “Centralized” Proximity Tracking Technology (Wed, 13 May 2020)
Companies and governments across the world are building and deploying a dizzying number of systems and apps to fight COVID-19. Many groups have converged on using Bluetooth-assisted proximity tracking for the purpose of exposure notification. Even so, there are many ways to approach the problem, and dozens of proposals have emerged. One way to categorize them is based on how much trust each proposal places in a central authority. In more “centralized” models, a single entity—like a health organization, a government, or a company—is given special responsibility for handling and distributing user information. This entity has privileged access to information that regular users and their devices do not. In “decentralized” models, on the other hand, the system doesn’t depend on a central authority with special access. A decentralized app may share data with a server, but that data is made available for everyone to see—not just whoever runs the server.  Both centralized and decentralized models can claim to make a slew of privacy guarantees. But centralized models all rest on a dangerous assumption: that a “trusted” authority will have access to vast amounts of sensitive data and choose not to misuse it. As we’ve seen, time and again, that kind of trust doesn’t often survive a collision with reality. Carefully constructed decentralized models are much less likely to harm civil liberties. This post will go into more detail about the distinctions between these two kinds of proposals, and weigh the benefits and pitfalls of each. Centralized Models There are many different proximity tracking proposals that can be considered “centralized,” but generally, it means a single “trusted” authority knows things that regular users don’t. Centralized proximity tracking proposals are favored by many governments and public health authorities. A central server usually stores private information on behalf of users, and makes decisions about who may have been exposed to infection. The central server can usually learn which devices have been in contact with the devices of infected people, and may be able to tie those devices to real-world identities.  For example, a European group called PEPP-PT has released a proposal called NTK. In NTK, a central server generates a private key for each device, but keeps the keys to itself. This private key is used to generate a set of ephemeral IDs for each user. Users get their ephemeral IDs from the server, then exchange them with other users. When someone tests positive for COVID-19, they upload the set of ephemeral IDs from other people they’ve been in contact with (plus a good deal of metadata). The authority links those IDs to the private keys of other people in its database, then decides whether to reach out to those users directly. The system is engineered to prevent users from linking ephemeral IDs to particular people, while allowing the central server to do exactly that. Some proposals, like Inria’s ROBERT, go to a lot of trouble to be pseudonymous—that is, to keep users’ real identities out of the central database. This is laudable, but not sufficient, since pseudonymous IDs can often be tied back to real people with a little bit of effort. Many other centralized proposals, including NTK, don’t bother. Singapore’s TraceTogether and Australia’s COVIDSafe apps even require users to share their phone numbers with the government so that health authorities can call or text them directly. 
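To make the distinction concrete, here is a simplified Python sketch of the data a central server accumulates in a centralized design of the kind described above. It is an illustration of the architecture, not PEPP-PT's, NTK's, or any government's actual protocol; the key derivation and record format are stand-ins.

```python
import hmac, hashlib, secrets

# A simplified, illustrative model of a "centralized" design: the server holds
# each user's key, hands out ephemeral IDs, and can therefore map any reported
# contact back to a registered identity. This is a sketch of the architecture,
# not an actual deployed protocol.
class CentralServer:
    def __init__(self):
        self.user_keys = {}   # phone number -> server-held secret key
        self.id_owner = {}    # ephemeral ID -> phone number

    def register(self, phone_number):
        self.user_keys[phone_number] = secrets.token_bytes(32)

    def issue_ephemeral_ids(self, phone_number, epochs):
        # Derive ephemeral IDs for a user; the server remembers who owns each one.
        key = self.user_keys[phone_number]
        ids = []
        for epoch in epochs:
            eph = hmac.new(key, str(epoch).encode(), hashlib.sha256).hexdigest()[:16]
            self.id_owner[eph] = phone_number
            ids.append(eph)
        return ids

    def report_infection(self, observed_ids):
        # An infected user uploads the ephemeral IDs they observed; only the
        # server can resolve them to people -- which is both the point and
        # the privacy problem.
        return {self.id_owner[e] for e in observed_ids if e in self.id_owner}

server = CentralServer()
server.register("+1-555-0100")                       # Alice registers with her phone number
alice_ids = server.issue_ephemeral_ids("+1-555-0100", epochs=range(3))
exposed = server.report_infection([alice_ids[1]])    # Bob tests positive and uploads what he heard
print(exposed)                                       # the server learns exactly who was near him
```

The point of the sketch is the last line: because the server issued every ephemeral ID and remembers who owns it, an infection report hands it a list of identifiable contacts.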
Centralized solutions may collect more than just contact data, too: some proposals have users upload the time and location of their contacts as well. Decentralized Models In a “decentralized” proximity tracking system, the role of a central authority is minimized. Again, there are a lot of different proposals under the “decentralized” umbrella. In general, decentralized models don’t trust any central actor with information that the rest of the world can’t also see. There are still privacy risks in decentralized systems, but in a well-designed proposal, those risks are greatly reduced. EFF recommends the following characteristics in decentralized proximity tracking efforts: The goal should be exposure notification. That is, an automated alert to the user that they may have been infected by proximity to a person with the virus, accompanied by advice to that user about how to obtain health services. The goal should not be automated delivery to the government or anyone else of information about the health or person-to-person contacts of individual people. A user’s ephemeral IDs should be generated and stored on their own device. The ephemeral IDs can be shared with devices the user comes into contact with, but nobody should have a database mapping sets of IDs to particular people.  When a user learns they are infected, as confirmed by a physician or health authority, it should be the user’s absolute prerogative to decide whether or not to provide any information to the system’s shared server.  When a user reports ill, the system should transmit from the user’s device to the system’s shared server the minimum amount of data necessary for other users to learn their exposure risk. For example, they may share either the set of ephemeral IDs they broadcast, or the set of IDs they came into contact with, but not both. No single entity should know the identities of the people who have been potentially exposed by proximity to an infected person. This means that the shared server should not be able to “push” warnings to at-risk users; rather, users’ apps must “pull” data from the central server without revealing their own status, and use it to determine whether to notify their user of risk. For example, in a system where ill users report their own ephemeral IDs to a shared server, other users’ apps should regularly pull from the shared server a complete set of the ephemeral IDs of ill users, and then compare that set to the ephemeral IDs already stored on the app because of proximity to other users.   Ephemeral IDs should not be linkable to real people or to each other. Anyone who gathers lots of ephemeral IDs should not be able to tell whether they come from the same person. Decentralized models don’t have to be completely decentralized. For example, public data about which ephemeral IDs correspond to devices that have reported ill may be hosted in a central database, as long as that database is accessible to everyone. No blockchains need to be involved. Furthermore, most models require users to get authorization from a physician or health authority before reporting that they have COVID-19. This kind of “centralization” is necessary to prevent trolls from flooding the system with fake positive reports. Apple and Google’s exposure notification API is an example of a (mostly) decentralized system. Keys are generated on individual devices, and nearby phones exchange ephemeral IDs. When a user tests positive, they can upload their private keys—now called “diagnosis keys”—to a publicly accessible database. 
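For contrast, here is an equally simplified sketch of the decentralized flow just described: keys stay on the phone, ephemeral IDs are derived locally, and exposure matching happens on the user's own device against a public list of diagnosis keys. This is illustrative only; the actual Apple-Google Exposure Notification specification uses a different key schedule and different cryptographic primitives.

```python
import hmac, hashlib, secrets

def ephemeral_id(daily_key, interval):
    # Derive a short ephemeral ID from an on-device daily key (illustrative derivation).
    return hmac.new(daily_key, str(interval).encode(), hashlib.sha256).hexdigest()[:16]

class Phone:
    def __init__(self):
        self.daily_keys = [secrets.token_bytes(16) for _ in range(14)]  # stay on the device by default
        self.observed = set()   # ephemeral IDs heard over Bluetooth

    def broadcast(self, day, interval):
        return ephemeral_id(self.daily_keys[day], interval)

    def hear(self, eph_id):
        self.observed.add(eph_id)

    def diagnosis_keys(self):
        # Shared only voluntarily, after a confirmed positive test.
        return list(self.daily_keys)

    def check_exposure(self, published_diagnosis_keys, intervals_per_day=144):
        # Runs locally: nobody but the user learns the result.
        for key in published_diagnosis_keys:
            for interval in range(intervals_per_day):
                if ephemeral_id(key, interval) in self.observed:
                    return True
        return False

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast(day=3, interval=42))      # their phones were near each other
public_registry = alice.diagnosis_keys()           # Alice tests positive and uploads her keys
print(bob.check_exposure(public_registry))         # True, computed entirely on Bob's phone
```

Here the shared registry learns only the keys that infected users volunteer, and only each phone learns whether its own user was exposed.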
It doesn’t matter if the database is hosted by a health authority or on a peer-to-peer network; as long as everyone can access it, the contact tracing system functions effectively. What Are the Trade-Offs? There are benefits and risks associated with both models. However, for the most part, centralized models benefit governments, and the risks fall on users. Centralized models make more data available to whoever sets themselves up as the controlling authority, and they could potentially use that data for far more than contact tracing. The authority has access to detailed logs of everyone that infected people came into contact with, and it can easily use those logs to construct detailed social graphs that reveal how people interact with one another. This is appealing to some health authorities, who would like to use the data gathered by these tools to do epidemiological research or measure the impact of interventions. But personal data collected for one purpose should not be used for another (no matter how righteous) without the specific consent of the data subjects. Some decentralized proposals, like DP-3T, include ways for users to opt-in to sharing certain kinds of data for epidemiological studies. The data shared in that way can be de-identified and aggregated to minimize risk. More important, the data collected by proximity tracking apps isn’t just about COVID—it’s really about human interactions. A database that tracks who interacts with whom could be extremely valuable to law enforcement and intelligence agencies. Governments might use it to track who interacts with dissidents, and employers might use it to track who interacts with union organizers. It would also make an attractive target for plain old hackers. And history has shown that, unfortunately, governments don’t tend to be the best stewards of personal data. Centralization means that the authority can use contact data to reach out to exposed people directly. Proponents argue that notifications from public health authorities will be more effective than exposure notification from apps to users. But that claim is speculative. Indeed, more people may be willing to opt-in to a decentralized proximity tracking system than a centralized one. Moreover, the privacy intrusion of a centralized system is too high. Even in an ideal, decentralized model, there’s some degree of unavoidable risk of infection unmasking: that when someone reports they are sick, everyone they've been in contact with (and anyone with enough Bluetooth beacons) can theoretically learn the fact that they are sick. This is because lists of infected ephemeral IDs are shared publicly. Anyone with a Bluetooth device can record the time and place they saw a particular ephemeral ID, and when that ID is marked as infected, they learn when and where they saw the ID. In some cases this may be enough information to determine who it belonged to.  Some centralized models, like ROBERT, claim to eliminate this risk. In ROBERT’s model, users upload the list of IDs they have encountered to the central authority. If a user has been in contact with an infected person, the authority will tell them, "You have been potentially exposed," but not when or where. This is similar to the way traditional contact tracing works, where health authorities interview infected people and then reach out directly to those they’ve been in contact with. In truth, ROBERT’s model makes it less convenient to learn who’s infected, but not impossible.  Automatic systems are easy to game. 
If a bad actor only turns on Bluetooth when they’re near a particular person, they’ll be able to learn whether their target is infected. If they have multiple devices, they can target multiple people. Actors with more technical resources could more effectively  exploit the system. It’s impossible to solve the problem of infection unmasking completely—and users need to understand that before they choose to share their status with any proximity app. Meanwhile, it’s easy to avoid the privacy risks involved with granting a central authority privileged access to our data. Conclusion EFF remains wary of proximity tracking apps. It is unclear how much they will help; at best, they will supplement tried-and-tested disease-fighting techniques like widespread testing and manual contact tracing. We should not pin our hopes on a techno-solution. And with even the best-designed apps, there is always risk of misuse of personal information about who we've been in contact with as we go about our days. One point is clear: governments and health authorities should not turn to centralized models for automatic exposure notification. Centralized systems are unlikely to be more effective than decentralized alternatives. They will create massive new databases of human behavior that are going to be difficult to secure, and more difficult to destroy once this crisis is over.

Court Upholds Public Right of Access to Court Documents (Tue, 12 May 2020)
A core part of EFF’s mission is transparency and access to information, because we know that in a nation bound by the rule of law, the public must have the ability to know the law and how it is being applied. That’s why the default rule is that the public must have full access to court records—even if those records contain unsavory details. Any departure from that rule must be narrow and well-justified. But litigants and judges aren’t always rigorous in upholding that principle. For example, when Brian Fargo sued Jennifer Tejas for allegedly defamatory Instagram posts, he asked that the court seal portions of his filings that contained those posts, references to other people and private medical information. The court granted Fargo’s request, with little explanation or apparent care. That approach set a dangerous precedent for others. The public has a right to know what courts consider defamatory. So, with help from the First Amendment Clinic at UCLA School of Law, EFF and the First Amendment Coalition moved to unseal the records containing the Instagram posts and references to other people. The judge denied that request. Undeterred, we appealed–and won (PDF download). The appeals court chided the trial court for its failure to adequately justify its sealing order, and its equal failure to make sure the order was narrowly tailored so that as little as possible would be hidden from the public. While it did allow some information to remain sealed–information related to private medical records can be kept from the public, and pseudonyms should be used in some exhibits to protect the privacy of third parties–it ordered the rest released. We are grateful to the First Amendment Clinic for their help in vindicating the public’s right to know. And we hope this case will serve as a reminder to judges and litigants to take that right seriously in the future.

The Santa Clara Principles During COVID-19: More Important Than Ever (Mon, 11 May 2020)
This blog post was co-authored with Spandana Singh of New America's Open Technology Institute, a Santa Clara Principles partner. As COVID-19 has spread around the world and online platforms have scrambled to adjust their operations and workforces to a new reality, company commitments to the Santa Clara Principles on Transparency and Accountability in Content Moderation have fallen by the wayside. The Principles—drafted in 2018 by a group of organizations, advocates, and academic experts who support the right to free expression online—outline minimum levels of transparency and accountability that tech platforms should provide around their moderation of user-generated content. However, the values and standards outlined in the Principles are particularly important during crises such as the ongoing pandemic because they hold companies accountable for their policies and practices. The spread of the virus around the globe has been unprecedented, and companies have had to respond quickly. But that’s an explanation, not an excuse: going forward platforms must do more to safeguard digital rights and user expression. As a result of the pandemic, many technology companies have made changes to the way they moderate content. For example, Reddit—which relies heavily on community rather than commercial moderation—has brought in experts to weigh in on COVID-19-related subreddits. Facebook announced in late April that it was working with local governments in the US to ban events that violated social distancing orders. And Twitter has issued a detailed guide to their policy and process changes during this time. Several companies—including YouTube, Twitter, and Facebook—have turned to automated tools to support their content moderation efforts. According to Facebook and other companies, the increased use of automation is because most content moderators are unable to review content from home, due to safety, privacy, legal, and mental health concerns. In light of this development, some companies have put new measures, or safeguards, into place, recognizing that automation may not always yield accurate results. For example, Twitter has committed to not permanently suspending accounts during this time, while Facebook is providing a way for users who are dissatisfied with moderation decisions to register their disagreement. Nevertheless, companies have warned users that they should expect more moderation errors, given that automated tools are often unable to accurately detect and remove speech, particularly with content that requires contextual analysis. The Santa Clara Principles remain a guiding framework for how companies can provide greater transparency and accountability around their practices, even, and especially, during this opaque and challenging time. In particular: Companies should collect and preserve data related to content moderation during the pandemic. These numbers should be used to inform a COVID-19 specific transparency report, which outlines the scope and scale of company content moderation efforts during the pandemic. In addition, these data points should be used to inform future discussions with policymakers, civil society, and academics on the limits of automated content moderation tools. Further, companies should provide meaningful and adequate notice to users who have had their content or accounts flagged or removed. They should also give users notice of any changes in content or moderation policies during this time. 
Finally, companies should continue to offer robust appeals processes to their users in as timely a fashion as possible. Given internet platforms’ growing reliance on automated content moderation tools during this time, such mechanisms for remedy are more important than ever.

Understandable But Nonetheless Troubling: Facebook's Ban On In-Person Events (Mon, 11 May 2020)
We're closely watching how Facebook enforces its newly-announced policy that limits speech by users who are organizing public protests. This policy deserves special attention since it affects free expression on two levels: the organization of the protest itself, and the speech about it. This new policy adds to Facebook’s overall content moderation burden at a time when the company is already unable to keep up with user reports and appeals. One of the great benefits of Facebook is that it is a powerful and affordable organizing tool—therefore, the potential consequences of this policy are serious. The new policy, first reported by journalists, has been added to Facebook's COVID-19 Policy Updates & Protections Page. Of the policy, Facebook writes: Under our coordinating harm policy, we're removing content that: Advocates for in-person meetings or events against government health guidance. This does not include discussion and debate of public policy or proposed new guidance from policymakers and elected officials. Coordinates in-person events or gatherings that encourage people with COVID-19 to join. Although we understand Facebook's motivations here, this policy is troubling for several reasons. First, as with all content removal decisions, Facebook has still not fully implemented the Santa Clara Principles, and is currently failing to provide sufficient avenues to appeal removal decisions. Second, under the best circumstances, content moderation is extremely difficult to do well. It's full of tricky context, huge grey areas, and impossible line-drawing. Facebook moderates a ton of content—the result of which is hugely problematic, and not for lack of effort. When it comes to complexity, this policy is no different, and indeed, may be even more difficult to implement fairly and consistently. We had initially hoped Facebook would base these decisions on an objective measure like state or local law; Facebook had previously said they were consulting with local governments and "unless government prohibits the event during this time, we allow it to be organized on Facebook." But the written policy Facebook has adopted uses instead the hazier standard of "government health guidance," which seems to incorporate gatherings that are discouraged though not legally prohibited. Granted, even relying on law would involve a thousand difficult decisions. But laws at least aim for some level of precision. Under US law, for example, laws that restrict First Amendment-protected activity like protests must meet an even higher level of precision: complete limits on gatherings, irrespective of content, must be justified by an important governmental interest and leave open ample means of communication; limits on some but not all gatherings, including those that disfavor expressive gatherings like protests, likely have to be proven to be necessary and the least speech restrictive alternative to advancing a critical public interest. But guidelines, being hortatory and unenforceable, do not. But even if such standards were precise, the task Facebook has burdened itself with is daunting. There are thousands of these guidelines in the US alone—and thousands more worldwide—with many variations among them. What is permissible in one town may, because of slight variations in the guidelines, be impermissible in the next town over. And compounding this and every content moderation-related problem now is the increased use of automated decision-making. 
As we’ve previously written, the current uptick in automated content moderation is understandable given the circumstances, but nevertheless troubling—and it must be treated as a temporary measure to be rolled back as soon as it’s safe for moderators to return to work. Realistically, Facebook is probably mostly targeting events that specifically urge the flouting of social distancing rules, calling for prohibited crowding and defiance of public health measures. Perhaps Facebook wrote the policy to be fastidiously viewpoint-neutral, rightly wanting to avoid a policy that banned only protests against shelter-in-place rules. The company seems to try to address this by including an explanation that the policy does not restrict "discussion and debate of public policy or proposed new guidance from policymakers and elected officials." But if this is the case, we should also be concerned by the existence of a broadly written policy when Facebook envisions only limited enforcement. COVID-19 is compounding content moderation impossibilities in numerous ways. This Events Policy is another one, and it is sure to result in mistakes and angry users. That’s why we recommend that companies roll back any policy changes made in light of COVID-19 once the crisis has passed.

The Patent Office Is “Adjusting” to a Supreme Court Ruling by Ignoring It (Thu, 07 May 2020)
In 2014, the Supreme Court decided the landmark Alice v. CLS Bank case. The Court held that generic computers, performing generic computer functions, can’t make something eligible for patent protection. That shouldn’t be controversial, but it took Alice to make this important limitation on patent-eligibility crystal clear. Last year, the Patent Office decided to work around that decision, so that the door to bogus software patents could swing open once again. The office issued new guidance telling its examiners how to avoid applying Alice. In response to that proposal, more than 1,500 of you told the Patent Office to reconsider its guidance to make sure that granted patents are limited to those that are eligible for protection under Alice. Unfortunately, the Patent Office wouldn’t do it. The office and its director, Andrei Iancu, refused to adapt the guidance to match the law, even when so many members of the public demanded it. Now, the Patent Office has issued a report, “Adjusting to Alice,” that summarizes the results of its revised guidance on patent eligibility. What do those results show? That examiners are granting more patents, and rejecting fewer, than they did when they were following Alice. For example, the Patent Office touted a steep decline in the likelihood that a patent application will receive a first (or preliminary) rejection for lack of patent-eligibility. According to the report, the likelihood of a first rejection on these grounds rose by 31% in the first year and a half after Alice, but has fallen by 25% in the year since the Patent Office issued its revised guidance. The Patent Office is practically back to applying the pre-Alice patent-eligibility standards. In simple terms: they’re ignoring Alice. The Patent Office seems proud of this fact, but lax standards are nothing to brag about. Increased eligibility rejections are a good thing because they mean that patent examiners are filtering out patents that shouldn’t be issued. Such rejections also promote improvements in patent quality and clarity, since applicants can amend their application and make it better. The Patent Office should be encouraging examiners to issue rejections like these. They will improve the clarity of granted patents, and the public’s ability to understand their scope. That gets us closer to the patent system’s intended purpose of promoting innovation. People who care about promoting innovation should be concerned that the Patent Office’s report includes only first rejections. Why isn’t the Office measuring the impact of its guidance on final rejections? Because there is none: examiners practically never make final rejections based on Alice. First rejections are important because they are the only mechanism the public can trust examiners to use to prevent ineligible patents from issuing. That makes these results especially troubling: if the Patent Office isn’t issuing first rejections based on ineligibility, it isn’t issuing any. As a result, the public has no reason to think a granted patent should pass muster under Alice. In 2016, the Director of the Patent Office promised to “focus on enhancing patent quality” and to “improve patent quality for the benefit of all.” Unfortunately, the Patent Office’s Adjusting to Alice report completely ignores the impact of its new guidance on the clarity or quality of issued patents. Based on the Patent Office’s report, all that matters is “certainty”—i.e., the certainty that an application will be granted, and that a patent will be issued.
That kind of certainty may be good news for patent applicants and those who make money off granted patents, but it’s bad news for everyone who builds, makes, and uses products in fields, like software, that are entangled in patent thickets. For people who work with technology, the Patent Office’s self-congratulatory report is bad news. It means there will be more abstract software patents, and more patent trolls who exploit them. We hope the Patent Office changes course. Until it does, anyone accused of patent infringement should be aware: there’s no certainty that granted patents have passed even the most basic test for eligibility. And courts should know that in many cases, they will have to step up and do what patent examiners should be doing: apply Alice fully and fairly to patents claiming computer-implemented “inventions.”

Unix and Adversarial Interoperability: The ‘One Weird Antitrust Trick’ That Defined Computing (Thu, 07 May 2020)
The Unix operating system was created at Bell Labs in 1969. Today, it rules the world. Both Android and iOS are flavors of Unix. So is MacOS. So is GNU/Linux in all its flavors, like Ubuntu and Debian. So is Chrome OS. Virtually every "smart" gadget you own is running some flavor of Unix, from the no-name programmable Christmas lights you put up in December to the smart light-bulb and smart-speaker in your living room. Over the years, many companies have marketed versions of Unix: Apple and Microsoft, HP and IBM, Silicon Graphics and Digital. Some of the most popular Unixes came from universities (like BSD, from UC Berkeley) and from hobbyists (the Linux kernel was created by a 22-year-old hobbyist named Linus Torvalds). But there's one company that never marketed Unix: AT&T, the company that paid for Unix's development. They never got into the Unix business. In 1949, Harry Truman's Department of Justice launched an antitrust complaint against AT&T, alleging that the company had engaged in anticompetitive conduct to secure a monopoly for its hardware division, Western Electric. But when the US entered the Korean War, AT&T was able to secure a break by citing its centrality to the US military. With the Pentagon fighting to keep AT&T intact, the Eisenhower administration let AT&T off the hook: in 1956 the US dropped its lawsuit in exchange for a "consent decree," through which AT&T promised to get out of the general electronics business and to share its patents and technical documentation with existing and new competitors. Despite the consent decree, AT&T continued to fund a large and rollicking research and development department, the Bell Telephone Laboratories (BTL) in New Jersey. BTL was home to some of computing history's most storied pioneers, including Ken Thompson and Dennis Ritchie, the principal inventors of Unix, who basically created the project out of intellectual curiosity. Thanks to the consent decree, AT&T couldn't do much with Unix, and so it remained an internal project until Ken Thompson gave a talk on his work at a 1973 Association for Computing Machinery conference. His paper stirred interest from academic and commercial computer science, and AT&T's lawyers decided that the consent decree meant that they couldn't start a new business based on Unix. Instead, they offered the operating system under the consent decree's terms: for a modest sum, anyone could get the Unix source code and adapt it for both commercial and noncommercial use. Soon, a thriving community of Unix hackers—many working for competing firms!—were quietly swapping patches and improvements, and these made their way back to the Unix maintainers at AT&T, who found ways to smuggle them back out in both official and unofficial ways (some AT&T engineers would leave data-tapes in secluded spots, then make anonymous calls to Unix hobbyists, letting them know where the tapes could be found!). This legendary culture of knowledge sharing and collective effort presaged the free software/open source movement, but just as importantly, it is an example of adversarial interoperability, the process of making new products that can connect to, or add value to, incumbent products or services—without the consent of the incumbent system's makers. AT&T was a monopolist with a well-earned reputation for ruthlessness in crushing its competitors. Without the consent decree, AT&T would never have allowed this Unix culture to flourish. 
Even with the consent decree, the company did its best to undermine its own engineers who maintained Unix at Bell Labs. It was only because these engineers were more loyal to technical excellence than they were to their employer's spiteful directives that Unix was able to make so much progress, so quickly. The story of Unix is a case study in the role that adversarial interoperability plays in competition regulation. The DoJ's consent decree didn't merely ban AT&T from certain monopolistic conduct—it set up rules and incentives that encouraged AT&T to share its technical documentation, and, just as importantly, it stripped AT&T of the legal weapons it needed to stop competitors from making products that interoperated with its own. Today's tech giants have a whole new arsenal of anti-competitive weapons: between anti-circumvention laws, patents, and abusive terms of service, Big Tech has powerful legal tools that let them decide exactly who may compete with them, and how (and they're making new ones all the time). AT&T had everything going for it: as a regulated vertical monopoly with massive cash reserves, allies in the Pentagon, and the ultimate "network effect" advantage, the company seemed unassailable. But the mere act of allowing competitors into one of its markets kickstarted a computing revolution that, decades later, is still underway. From your phone to your laptop to your car to your lightbulb, you are still enjoying the fruits of that long-ago 1956 consent decree.

Second Paraguay Who Defends Your Data? Report: ISPs Still Have a Long Way to Go on Public Commitments to Privacy and Transparency (Wed, 06 May 2020)
Keeping track of ISPs’ commitments to their users, Paraguay’s leading digital rights organization TEDIC is today launching the second edition of ¿Quién Defiende Tus Datos? (Who Defends Your Data?), a report in collaboration with EFF. Transparent practices and firm privacy commitments are particularly crucial right now. During times of crisis and emergency, companies must, more than ever, show that users can trust them with sensitive information about their habits and communications. While Paraguayan ISPs have made progress with their privacy policies and have taken part in forums pledging to promote human rights, they still have a long way to go to give users what is needed to fully build this trust. Paraguayan ISPs should make greater efforts to be transparent about their practices and procedures, and to make stronger public commitments to their users, such as taking steps to notify users about government data requests. Overall, Tigo remains the best-ranked company in the report, followed by Claro and Personal. Copaco and Vox received the worst ratings. The second edition brings two new categories: assessing whether companies have publicly available guidelines for law enforcement requests, and whether their privacy policies and terms of service are provided following proper web accessibility standards. This year’s report focuses on telecommunication companies with more than fifteen thousand internet users across the country, which together represent the whole base of mobile broadband customers (except for Copaco, which only provides fixed services). The full study is available in Spanish, and we outline the main findings below.
Main Findings
Each ISP was evaluated in the following seven categories: privacy policies, judicial order, user notification, policies for promoting political commitments, transparency, law enforcement guidelines, and accessibility standards. Regarding privacy policies, this edition looked into companies’ publicly available documents and checked whether they provided clear and easily accessible information about personal data collection, processing, and sharing with third parties, as well as retention time and security practices. While no company scored in the previous report, more than half of them showed improvements in this year’s edition. Tigo stands out with a full star, followed closely by Claro’s privacy policies. Claro did not earn the full star, as it failed to provide sufficient information on how personal data are collected and stored. Personal also received a partial score for publishing policies that properly detail how users’ data are shared with third parties. When it comes to requiring a warrant before handing over users’ communications content to law enforcement authorities, Tigo is the only ISP to clearly and publicly commit to doing so. Claro stated that the company complies with applicable legislation, judicial proceedings, and government requests. TEDIC's report highlights that, in response to the research team, Claro and other companies claimed they do request judicial authorization before handing over communications content. Yet these claims are still not reflected in the companies’ public, verifiable policies.
Regarding government access to traffic data, a 2010 Supreme Court ruling authorized prosecutors to request such data directly, despite the country’s telecommunications law’s assertion that the constitutional safeguard of inviolability of communications covers not only the content itself, but also anything that indicates the existence of a communication, which would include traffic data. The 2010 ruling has been applied to the online context, also running afoul of Inter-American Court of Human Rights case law recognizing that communications metadata should receive the same level of protection granted to content. TEDIC’s report recommends that companies publicly commit to requesting judicial authorization before handing metadata to authorities. Clarifying this discrepancy in favor of users' privacy is still a challenge, and companies should play a greater role in taking it on and fighting for their users in courts or in Congress. Tigo is the only ISP to receive partial stars in the transparency and law enforcement guidelines categories, for documents published by its parent corporation Millicom. Regarding the transparency report, Millicom falls short of providing detailed information for Paraguay. The report aggregates data per region, disclosing statistical figures for interception and metadata requests that merge the requests received in Paraguay, Colombia, and Bolivia. Transparency reports are valuable tools for providing insight into how often governments request data and how companies respond, but not if the figures for each country are not disclosed. However, Millicom does provide relevant insight when it states that Paraguay’s authorities mandate direct access to the company's mobile network, though it doesn't specify the legal ground that compels companies to provide such access. As for law enforcement guidelines, Millicom publishes the global key steps that its subsidiaries must follow when complying with government requests, but it does not make its detailed global and locally tailored procedures available to the public. Getting companies to commit to notifying users about government data requests remains a hard challenge. Just like in the last edition of the report, no company received credit in this category. While international human rights standards reinforce how crucial user notification is to ensure due process and effective remedies, ISPs are usually reluctant to take steps towards putting a proper notification procedure in place. Three out of five companies (Claro, Tigo, and Personal) scored in the web accessibility category, though there is still room for improvement. TEDIC’s work is part of a larger initiative across Latin America and Spain, kicked off in 2015 and inspired by EFF’s Who Has Your Back? project. Earlier this year, both Fundación Karisma in Colombia and ADC in Argentina published new reports. The second edition from Eticas Foundation in Spain comes next, with new installments in Panamá, Peru, and Brazil already in the pipeline.

Cryptoparty Ann Arbor: A Case Study in Grassroots Activism (Wed, 06 May 2020)
Grassroots activism, in its many forms, allows a community to mobilize around a shared set of ideals, and creates an environment in which participants can share information and resources to help advance their common aims. The Electronic Frontier Alliance (EFA) is a grassroots network of community and campus organizations, unified by a commitment to upholding the principles of the EFA: privacy, free expression, access to knowledge, creativity, and security. An active member of the EFA, Cryptoparty Ann Arbor connects with its community by hosting digital security workshops with an emphasis on educating people about privacy issues in the digital age. We spoke with Mike and John, two of the group's core organizers, to discuss how Cryptoparty Ann Arbor came to be, their experiences reaching a wider audience in their community, and the future of the group.
How and when did Cryptoparty Ann Arbor get started?
Mike: I started Cryptoparty Ann Arbor a few years ago because I was concerned about the security and privacy of the technology we use, and problems like dragnet surveillance and surveillance capitalism. But it's not enough to just learn about and use different tools and alternatives; it's a whole social project of changing how people use technology. So, I started holding monthly workshops at the local hackerspace to have a place for people to learn more about these issues, as well as how to use different tools to better defend themselves from surveillance. As more people got involved, things grew into a bit more of a collective, and now we have multiple people willing to help host workshops, and have developed good connections with many of the people who've attended our trainings.
What are the biggest challenges when catering to participants who have a varied level of knowledge around cybersecurity?
John: One of the biggest challenges when catering to participants is making sure we align our recommendations with their capabilities and expectations. We have to be willing to meet people where they are, and think of ways to get them to where they would like to be, not just where we want them as educators so we can check off some kind of list. Sometimes it means encouraging faster, easier solutions rather than the "most secure." Helping people develop their Security Plan (what some refer to as a Threat Model, or Risk Assessment) can be difficult, but makes it much easier to make the best possible recommendations.
What do you find to be the core needs of your community? Do they differ from your initial expectations?
John: The core needs for our community are basic digital security and privacy education and awareness. It aligns well with the intent of the Cryptoparty movement: a decentralized way to pass on knowledge about protecting yourself in the digital space, including but not limited to encrypted communication, preventing being tracked while browsing the web, and general security advice regarding computers and smartphones.
The Ann Arbor public library is the venue for many of your events. How did that relationship begin?
John: The Ann Arbor District Library relationship began when one of our fellow organizers (Dave) reached out to contacts there and suggested we host some events. Initially we started with the usual cryptoparty workshops, such as security trainings. We’ve added other types of events since then; for example, Dave organized a wonderful screening and panel discussion of the movie "The Internet's Own Boy" in honor of Aaron Swartz Day.
How has using a space like a public library improved the accessibility of your events?
John: The Ann Arbor District Library allows us to reach so many people who would otherwise be unaware of our group or events. We have noticed much higher attendance and participation. The library has been extremely helpful and supportive of our events and workshops. The advertising and promotion they do is extremely effective at reaching people in the Ann Arbor area whom we might not usually think to target. Because the Library is such an accessible place with a diverse user base, it helps us reach a cross-section of the population; regular attendees include folks from low-income households, students, disabled people, and the elderly. These are people and places where the Library already has great reach and a respected presence. It can be easy for Cryptoparty to become a techie echo-chamber. We want to reach as many people as possible, especially those who do not have the knowledge or do not know how to access it.
What do you envisage for the future of Cryptoparty Ann Arbor?
John: The future of the group will most likely be continuing workshops and training, both virtual and physical, and expanding to become more involved in policy and advocacy, such as the About Face campaign to ban facial recognition. COVID-19 may have slowed us down on some of this work, but it will NOT stop us!
Our thanks to Cryptoparty Ann Arbor. To find an Electronic Frontier Alliance-affiliated group near you, visit eff.org/fight. If you are already part of a grassroots or community group in your area, please consider joining the Alliance.

Using Drones to Fight COVID-19 is the Slipperiest of All Slopes (Wed, 06 May 2020)
As governments search in vain for a technological silver bullet that will contain COVID-19 and allow people to safely leave their homes, officials are increasingly turning to drones. Some have floated using them to enforce social distancing, break up or monitor places where gatherings of people are occurring, identify infected people with supposed “fever detecting” thermal imaging, or even assist in contact tracing by way of face recognition and mass surveillance. Any current buy-up of drones would be a classic example of how law enforcement and other government agencies often use crises to justify expenditures and blunt the public backlash that comes along with buying surveillance equipment. For years, the LAPD, the NYPD, and other police departments across the country have been fighting the backlash from concerned residents over their acquisitions of surveillance drones. These drones present a particular threat to free speech and political participation. Police departments often deploy them above protests, large public gatherings, and on other occasions where people might practice their First Amendment-protected rights to speech, association, and assembly. The threats to civil liberties created by drones increase exponentially if those drones are, as some current plans propose, equipped with the ability to conduct face surveillance. If police now start to use drones to identify people who are violating quarantine and walking around in public after testing positive for COVID-19, police can easily use the same drones to identify participants in protests or strikes once the crisis is over. Likewise, we oppose the attachment of thermal imaging cameras to government drones, because the government has failed to show that such cameras are sufficiently accurate to remotely determine whether a person has a fever. Yet police could use these cameras to identify the whereabouts of protesters in public places at nighttime. Some have suggested that drones may be a useful way of monitoring the density of gatherings in public places, like jogging areas, or a safer alternative to sending an actual person to determine crowd density. EFF has clear guidelines to evaluate such proposals: Would it work? Is it too invasive? Are there sufficient safeguards? So to start, we’d want to hear from public health experts that drones would be effective for this purpose. Further, we’d want guarantees that such drones are part of a temporary public health approach to social distancing, and not a permanent criminal justice approach to gatherings in public places. For example, there would need to be guidelines that allow only public health officials, rather than law enforcement, access to the drones. The drones should also never be equipped with face recognition or inaccurate “fever detecting” thermal cameras. They should not be used to photograph or otherwise identify individual people. No government agency should use this moment to purchase new drones; doing so all but ensures that agencies will later find excuses to use the drones for other purposes, in order to justify the initial expense. We don’t want more government drones flying over concerts and rallies in the not-so-distant future. Police surveillance technology is disproportionately deployed against people of color, undocumented people, unhoused individuals, and other vulnerable populations.
Having drones become part of the criminal justice apparatus, rather than being controlled by public health officials with no punitive focus, runs the risk of new over-policing of already over-policed neighborhoods, and of increasing the racially biased distribution of fines, summonses, and in-person harassment. If drones must be deployed at all, they need firm guardrails to avoid disproportionately impacting specific communities. As always, local police and public health officials should not acquire new surveillance technologies, or use old surveillance technologies in new ways, without first asking for permission from their city councils or other legislative authorities. Those bodies should hear from the public before deciding. If the civil liberties costs outweigh the public health benefits, new spy tech should be rejected. During the COVID-19 crisis, community control of government surveillance technologies, including drones, is more important than ever. Even in a time of crisis, we must not normalize policing by robot. Videos of Italian mayors using drones with speakers to shout at people defying shelter-in-place orders are supposed to be funny, but we find them alarming. People often turn toward first responders at the worst moments of their lives. We should not be getting people used to, or even amused by, outsourcing more and more of the necessary human side of policing to robots.

California’s Lawmakers Must Enact Privacy Rules to Advance COVID-19 Efforts (Tue, 05 May 2020)
EFF strongly backs calls, including from California Senate Judiciary Chair Hannah-Beth Jackson, for Governor Gavin Newsom to ensure that his response to this crisis respects Californians’ constitutional right to privacy. We urge the California legislature and Governor Newsom to pass measures that would protect our privacy now, in the aftermath of this crisis, and beyond. As a national leader in privacy and a leading voice in setting policy regarding the coronavirus, California must step up and do the right thing as it makes policy to address the effects of COVID-19. Right now, companies and governments are trying many new things to deal with an unprecedented public health crisis. These include public-private partnerships where government services come from mobile apps and website portals built by corporations such as Verily, a subsidiary of Google’s parent company Alphabet. Yet Verily’s launch shows how companies are making vague promises and commitments about how they will protect the information they collect, and how it can be used later. It’s also unclear how governments can use information collected from such programs, and whether our personal information is the currency paid by governments to companies for public health programs to deal with this crisis. Crises often open the door to erroneous judgment, panicked decisions, and programs that—while perhaps well-intentioned—damage privacy and prove difficult to roll back. But privacy does not stand in opposition to public health. In fact, privacy is a necessary piece of the equation to build public trust, which in turn is a necessary ingredient of successful public health programs. Some types of technology, such as face surveillance, should be off the table because of the magnitude of their privacy harms. EFF has also called repeatedly for any proposed government or company programs that track the spread of COVID-19 to place key privacy safeguards at their heart. But to fix the serious trust problems, we need consumer data privacy laws—not just promises. There is a bankruptcy of trust between consumers and the companies that collect personal information, and if companies are going to build public health tools, that mistrust is bad for all of us. A recent poll from The Washington Post and the University of Maryland found that half of Americans capable of downloading an app to track the spread of coronavirus wouldn’t, primarily due to a “distrust of Google, Apple and tech companies generally, with a majority expressing doubts about whether they would protect the privacy of health data.” And who can blame them? Companies that harvest and monetize our personal information have shown time and again that they will not look beyond their own balance sheets to consider the privacy harms to their customers. Now, more than ever, we cannot allow companies to make land grabs for our data—especially from people who are forced by the outbreak to conduct more of their work and personal lives online than ever before. We cannot tolerate short-sighted company proposals or public-private partnerships that trade our information away, especially for unvetted, untested promises that this will advance our public health and our economic health.
Many privacy groups in California support a bill that would solve a big privacy problem not addressed by the California Consumer Privacy Act (CCPA): limiting how companies collect, use, share, and store our personal information to what they actually need to give us what we asked for. This is often called “minimization.” For example, when companies collect our personal data to address a public health crisis, the CCPA currently does not stop them from also collecting personal information that is irrelevant to the crisis, from using or sharing the data for non-crisis purposes, or from keeping the data long after the crisis ends. The only CCPA rights that allow people to control this information are the rights to delete and to opt out of sales. The language in AB 3119 (Wicks) would solve this problem by requiring minimization. That means no collection, use, sharing, or retention of our data for purposes that we haven't agreed to, subject only to narrow exceptions. This bill would also close CCPA loopholes that, according to major tech companies, allow them to continue to share our personal data with third parties—actions that are out of step with the letter and spirit of this landmark law. Yet the California Assembly’s Privacy and Consumer Protection committee has said it will not hear this bill this year. That decision echoes one from last year, when Assemblymember Ed Chau, the chair of that committee and an author of the CCPA, refused to hear a bill that would have made strides toward closing these and other loopholes. This is a serious mistake that deprives Californians of privacy protections they have long needed, and need even more during the COVID-19 crisis. It is simply disingenuous to argue that we can’t protect privacy and public health at the same time. In fact, privacy protection is vital for any such efforts to succeed. Setting clear legal guidelines that businesses must follow protects ordinary people in an extraordinary time, and would be an important step toward reestablishing trust between consumers and the companies that make money off their information. We urge California’s legislators and Governor Newsom to do the right thing for all of us, and rightfully place privacy at the heart of the state’s response efforts as we move forward.

Courts Issue Rulings in Two Cases Challenging Law Enforcement Searches of License Plate Databases (Tue, 05 May 2020)
This week, the Ninth Circuit Court of Appeals issued an opinion in United States v. Yang, a case challenging the search of an automated license plate reader database under the Fourth Amendment. Although the court, citing EFF’s amicus brief, recognized ALPRs capture massive amounts of data on Americans across the country, it decided not to reach the search issue. Instead it held that because Yang was driving a rental car after his rental agreement ended when the search occurred, he didn’t have the right to challenge the search. The Ninth Circuit’s decision follows an April opinion from Massachusetts’s highest court in another ALPR case, Commonwealth v. McCarthy. In McCarthy, the court held that, although ALPRs raise clear privacy issues under the Fourth Amendment and Massachusetts’s Article 14, McCarthy hadn’t introduced sufficient facts to show that the search at issue in his case rose to the level of a constitutional violation. ALPRs are high-speed, computer-controlled camera systems that are attached to vehicles, such as police cars, or can be mounted on street poles, highway overpasses, or mobile trailers. Some models can photograph up to 1,800 license plates every minute, and every week, law enforcement agencies across the country use these cameras to collect data on millions of vehicles. The plate numbers, together with location, date, and time information, are uploaded to central servers, and made instantly available to other agencies. The location data generated by ALPRs is so precise it can place a vehicle in front of a specific home or business, as was the case in Yang, or within a specific lane on a bridge, as in McCarthy. Law enforcement agencies may maintain their own databases of ALPR data or store their data with private companies. In the Yang case, the law enforcement agency didn’t collect its own ALPR data but instead accessed a commercial database that advertises it contains 6.5 billion plates collected both by law enforcement and by private contractors. EFF, along with the ACLU and several other organizations, filed amicus briefs in both Yang and McCarthy last year, as well as in another case decided on different grounds in the California Court of Appeal, People v. Gonzales. In each of these cases, we argued the Supreme Court’s 2018 opinion in Carpenter v. United States—where the Court held law enforcement must get a warrant to access historical cell site location information (CSLI)—should apply to ALPRs. Like CSLI, the aggregation of ALPR data can paint a picture of where a vehicle and its occupants have traveled, including to sensitive and private places like homes, doctors’ offices, and places of worship. ALPR data collection is detailed and indiscriminate; anyone who drives is likely to have their past locations logged in a database available to police. And, like CSLI databases, ALPR databases facilitate retrospective searches of cars whose drivers were not under suspicion when the plates were scanned. The McCarthy court adopted many of our arguments. Although the Massachusetts Supreme Judicial Court ultimately ruled against the defendant, the court indicated it might have ruled differently if he had introduced more facts on how Massachusetts collects, stores, and shares license plate data. The court cited both our amicus brief and the California Supreme Court’s opinion in our 2017 ALPR case and clearly understood the privacy implications of ALPR data collection. 
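To make the aggregation point above concrete, here is a small, hypothetical Python sketch (with entirely invented records and field names) of the kind of retrospective lookup an ALPR database enables: a single query over plate, time, and location records reconstructs a vehicle's past movements, with no prior suspicion modeled anywhere in the process. It illustrates the general idea only, not any vendor's actual schema or API.

```python
from datetime import datetime

# Hypothetical ALPR scan records (all data invented): each hit pairs a plate
# number with the time and place it was photographed.
scans = [
    ("7ABC123", datetime(2020, 4, 1, 8, 2),   "lot outside medical clinic, 1st Ave"),
    ("7ABC123", datetime(2020, 4, 1, 18, 45), "residential block, Oak St"),
    ("7ABC123", datetime(2020, 4, 5, 10, 30), "lot outside place of worship, Elm St"),
    ("4XYZ987", datetime(2020, 4, 1, 9, 15),  "highway overpass, I-80"),
]

def retrospective_search(plate):
    """Return every logged sighting of a plate in time order; nothing in this
    lookup requires prior suspicion, a warrant, or the driver's knowledge."""
    return sorted((when, where) for p, when, where in scans if p == plate)

# A single query sketches out the vehicle's travel pattern.
for when, where in retrospective_search("7ABC123"):
    print(when, where)
```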
The McCarthy court noted that ALPRs placed near sensitive locations “can reveal information about an individual’s life and associations” and “allow the police to reconstruct people’s past movements without knowing in advance who police are looking for, thus granting police access to a category of information otherwise and previously unknowable.” The court also noted that, “[w]ith enough cameras in enough locations, the historic location data from an ALPR system in Massachusetts would invade a reasonable expectation of privacy and would constitute a search for constitutional purposes.” Frustratingly, the Yang court declined to address the application of the Fourth Amendment to ALPR searches, finding instead that Yang lacked standing to challenge the search in the first place. The plate scans at issue were of a rental vehicle that Yang failed to return once his rental period expired. Relying on some minimal evidence that the car rental company may not generally authorize renters to hold onto cars after their rental period is up, the court held that Yang had no expectation of privacy related to the vehicle. But it’s hard to square that conclusion with another recent Supreme Court case, Byrd v. United States. In Byrd, the Court found that a driver who was not authorized to drive a rental car under the terms of the rental agreement could nevertheless challenge a law enforcement search of the car. Despite the obvious relevance of Byrd, the Ninth Circuit in Yang did not even cite the case, let alone try to distinguish its facts. This might be grounds for the defendant to ask the Ninth Circuit to reconsider its ruling. While we’re disappointed that neither court clearly held that ALPR database searches violate the Constitution, we’re heartened that Massachusetts’s highest court left the door open for future cases challenging searches of ALPR databases where the defendant can show evidence that the databases draw on an extensive network of cameras. Related Cases: Automated License Plate Readers – ACLU of Southern California & EFF v. LAPD & LASD

COVID-19 and Online Freedom for #GivingTuesdayNow (Tue, 05 May 2020)
On May 5, EFF is joining forces with other nonprofit groups and individuals everywhere for a global day of support called #GivingTuesdayNow. It's a direct response to the unprecedented community and societal needs created by COVID-19. We have been fortunate to see heroic efforts by people coming together to address the impact of the pandemic, whether as frontline healthcare workers, kind neighbors, or even creative entertainers keeping our isolated spirits up. This unity—made more poignant by our physical distance—is how we will be able to heal our world in every sense. Nonprofit organizations and civil society groups are meant to step up for you where governments can’t or won’t. That role is especially important as the pandemic continues to affect every individual and industry in ways we do not yet fully comprehend. With physical distancing in place, the Internet is a critical lifeline to health information, our loved ones, and our sanity. That’s why EFF hasn’t stopped fighting for even a moment to protect your digital privacy and security, online expression, creative innovation, and access to technology. In this extraordinary time, we’re adapting EFF’s skills to weigh in on issues like contact tracing, surveillance proposals, and defending encryption. Fortunately, the EFF team has unique expertise in defending civil liberties on the Internet! Just days ago, we triumphed in stopping the sale of the world’s .ORG registry to a private equity firm. As a small organization based in San Francisco, we are proud of our global impact, but EFF relies on your support to get the job done—now more than ever. Let’s help out however we can. There are numerous ways to support the people and the world around us. On #GivingTuesdayNow, I'm asking for help to ensure the future of Internet freedom stays bright, especially in these strange days. We have set an ambitious donation goal of $10,000 by the end of Giving Tuesday Now, and you can help by joining EFF or just spreading the word! Here’s some sample language that you can share: "Internet freedom, security, and access are more important today than ever before. Join me in supporting @EFF on #GivingTuesdayNow https://eff.org/GTN"
Livestream—At Home With EFF: Helping Our Communities
Find out more about how EFF is supporting your rights online during the third edition of our At Home With EFF livestream series on Tuesday, May 5 at 2 PM Pacific. You're invited to chat with the EFF team on Twitch, hear the latest in the fight to protect the .ORG registry, and learn how you can support your community through online mutual aid safely. RSVP now at https://eff.org/AtHome. Learn more about how EFF is defending digital freedom during the pandemic. EFF has a central hub where you can find our work related to the coronavirus at eff.org/COVID-19. We have also compiled our critical thoughts on online rights and the pandemic in a new ebook: EFF’s Guide to Digital Rights and the Pandemic. To get the ebook, you can make an optional contribution to support EFF’s work, or you can download it at no cost. We released the ebook under a Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits sharing among users. As we each reflect on how we can help ourselves and the world, please know that the staff at EFF is grateful to have your support. Join EFF and lend a hand on #GivingTuesdayNow and every day.

Catch Up On "At Home with EFF," and Join Us For Giving Tuesday Now! (Tue, 05 May 2020)
Back by popular demand, we're hosting a third At Home with EFF event tomorrow at 2 pm (PT)! In addition to our EFF all-stars, we'll be joined by special guests Şerife Wong, founder of Icarus Salon, and the magical Brad Barton (aka Reality Thief). As this event coincides with Giving Tuesday, our panels will highlight considerations for nonprofits and mutual aid organizers, including an update on last week's victory in protecting the .ORG registry from being sold to a private equity firm. RSVP at https://eff.org/AtHome. At Home with EFF is our virtual event series where we invite you into our homes for discussions on defending our rights online during the COVID-19 pandemic. Our first event addressed many of the immediate concerns we had at the start of this pandemic. During our second event, on April 22nd, we discussed how our reliance on online platforms during a public health crisis brings up many concerns over free speech and privacy. If you feel like you missed out, you're in luck! You can watch the entirety of the first and second events below.
[Embedded video: first At Home with EFF livestream – https://www.youtube.com/embed/Em_FoF0V8PM]
[Embedded video: second At Home with EFF livestream – https://www.youtube.com/embed/4tdhELawgQs]
We also hope you'll join us again tomorrow, Tuesday May 5th at 2pm (PT), for our conversation on how we can better support our communities, both through support of nonprofits and mutual aid organizations. We’ll also take some time to celebrate our recent victory in defending .ORG, and discuss what this means for nonprofits online.

Recognizing World Press Freedom Day During COVID-19 (Mon, 04 May 2020)
In the face of a global pandemic, there is an urgent need for reporting relating to the spread of the coronavirus and how governments are responding. But it is in times of crisis that the civil liberties we value most are put to the test—and that is exactly what is happening now as governments around the world clamp down on journalism and stifle the free flow of critical information. With so little currently known about the novel coronavirus, governments around the world have seized the opportunity to control the narrative around the virus and their responses to it. In countries including Algeria, Azerbaijan, China, Hungary, Indonesia, Iran, Palestine, Russia, South Africa, Thailand, and more, authorities have banned individuals and journalists from sharing false or misleading information about the coronavirus. Criminalizing “false information,” however, gives the party in control of law enforcement the power to define what information is “true” or “correct.” And such laws also give the government the power to censor, detain, arrest, and prosecute those who share information that doesn’t align with the official state narrative. This is already happening. In Cambodia, police have arrested at least 17 people for spreading “false information” about coronavirus—including four members of the opposition political party, all of whom remain in detention, and a teenage girl expressing fears on social media about the rumored spread of the virus at her school. In Turkey, authorities have detained people for making “unfounded” postings on social media criticizing the Turkish government’s response to the pandemic and suggesting that the coronavirus was spreading widely in the country—even though, according to independent reporting, this is exactly the case. Police in Indian-administered Kashmir have detained journalists and threatened them with prosecution. The detained journalists had posted on social media about coronavirus, and about government censorship and militancy in Kashmir. Even Puerto Rico, a United States territory that—like the fifty states—is bound by the free speech protections enshrined in the Constitution, has enacted a plainly unconstitutional law prohibiting, in certain circumstances, the spread of some types of “false information” related to the government’s response to the virus. But as the world battles a novel and little-understood virus threatening lives and livelihoods around the globe, ensuring the free flow of information is more important now than ever. Who knows how the course of the virus could have been different if China had listened to Wuhan doctor Li Wenliang when he sought to sound the alarm about the new coronavirus during its earliest days, instead of silencing him with accusations of spreading false rumors? By embracing China’s approach, governments are choosing to censor, instead of foster, reporting about how the crisis is unfolding. The threat of interrogation, detention, and arrest chills journalists, political activists, and individuals from sharing their experiences, investigating official actions, or challenging the government’s narrative. To be sure, governments play a critical role in battling the global pandemic—and that includes acting as sources of important information. But that does not mean that governments should anoint themselves the sole arbiters of truth and falsity, and strip individuals’ rights to investigate the government’s claims, question the official narrative, and share their research, observations, or experiences.
After all, the very premise of “false news” laws—that there always exists an identifiable, objective “truth”—is often hollow. Particularly in this quickly evolving crisis, even the most well-intentioned parties’ understandings of the virus are changing rapidly. Only two months ago, the U.S. government was stating that face masks were not effective and instructing people not to wear them—but today, the opposite guidance is in effect (and has been in some other countries for some time now). Moreover, the United States Supreme Court has made clear that even intentionally false speech cannot be criminalized where that speech does not cause material, provable harm. Allowing the government to police individuals’ speech for truth or falsity would simply be too great a burden on the right to speak free from government scrutiny. “Our constitutional tradition stands against the idea that we need Oceania's Ministry of Truth.” Although international human rights law (namely, Article 19 of the International Covenant on Civil and Political Rights, or ICCPR) allows for certain restrictions on free expression—provided by law and deemed necessary—for the protection of national security, public health, public order, or morals, a 2017 joint declaration of special mandates unequivocally opposed general prohibitions on fake news: “General prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information,’ are incompatible with international standards for restrictions on freedom of expression . . . and should be abolished.” And in a new report on COVID-19 and freedom of expression, David Kaye, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, acknowledged the harms that mis- and disinformation pose in a pandemic, noted the elusiveness of any singular definition of disinformation, and re-emphasized the importance of countering untruths. But the Special Rapporteur warned against laws aimed at punishing false information, cautioning: “Measures to combat disinformation must never prevent journalists and media actors from carrying out their work or lead to content being unduly blocked on the Internet. . . . Vague prohibitions of disinformation effectively empower government officials with the ability to determine the truthfulness or falsity of content in the public and political domain.” As we observe World Press Freedom Day and celebrate the work of the press to hold governments accountable, we must also protect the ability of journalists, activists, and citizens to speak out without fear that they will be arrested or imprisoned for the information they share.

.ORG Domain Registry Sale to Ethos Capital Rejected in Stunning Victory for Public Interest Internet (Fri, 01 May 2020)
ICANN Withholds Consent, Says Deal Lacked ‘Meaningful Plan to Protect’ .ORG Community
San Francisco—In an important victory for thousands of public interest groups around the world, a proposal to sell the .ORG domain registry to private equity firm Ethos Capital and convert it to a for-profit entity was rejected late yesterday by the Internet Corporation for Assigned Names and Numbers (ICANN). The Electronic Frontier Foundation (EFF), which worked hand in hand with Access Now, NTEN, National Council of Nonprofits, Americans for Financial Reform, and many other organizations to oppose the sale, applauds ICANN’s well-reasoned decision to stop the $1.1 billion transaction from moving forward. In a statement, ICANN said rejecting the deal was the right thing to do because it lacked a meaningful plan to protect the interests of the nonprofits and NGOs that rely on the .ORG registry to exist on the Internet and connect with the people they serve. The sale would have changed Public Interest Registry (PIR), the nonprofit operator of .ORG, into an entity bound to serve the interests of its corporate stakeholders, not the nonprofit world. .ORG is the third-largest Internet domain name registry, with over 10 million domain names held by a diverse group of charities, public interest organizations, and nonprofits, from the Girl Scouts of America and American Bible Society to Farm Aid and Meals On Wheels. “We’re gratified that ICANN listened to the .ORG community, which was united in its opposition to the sale,” said EFF Senior Staff Attorney Mitch Stoltz. “Under the deal, .ORG would be converted to a for-profit entity controlled by domain name industry insiders and their secret investors. Nonprofits are vulnerable to the governments and corporations who they often seek to hold accountable. The public interest community rightly questioned whether an owner motivated by profits would stand up to demands for censorship of charities who rely on .ORG so that people can find and rely on their vital services.” “The sale of .ORG was announced, without .ORG community input, not long after price caps on registration fees for domain names were lifted and PIR acquired new powers to allegedly ‘protect’ the rights of third parties,” said EFF Staff Attorney Cara Gagliano. “It was obvious to many that .ORG registrants could face higher operating costs and degradation of service as Ethos sought to increase fees and seek profitable arrangements with businesses keen to silence nonprofits. This concern grew after it was revealed that the transaction required taking on a $360 million debt obligation.” If PIR wishes to press forward, it still must seek approval from courts in the state of Pennsylvania, where PIR is incorporated. As part of that process, the Pennsylvania state Attorney General may weigh in. EFF urges both to follow ICANN’s lead and reject the transaction. This will pave the way for a transparent process to select a new operator for .ORG that will act in the interests of the nonprofits it serves.
Contact:
Mitch Stoltz, Senior Staff Attorney, mitch@eff.org
Cara Gagliano, Staff Attorney, cara@eff.org

Victory! ICANN Rejects .ORG Sale to Private Equity Firm Ethos Capital (Fri, 01 May 2020)
In a stunning victory for nonprofits and NGOs around the world working in the public interest, ICANN today roundly rejected Ethos Capital’s plan to transform the .ORG domain registry into a heavily indebted for-profit entity. This is an important victory that recognizes the registry’s long legacy as a mission-based, not-for-profit entity protecting the interests of thousands of organizations and the people they serve. We’re glad ICANN listened to the many voices in the nonprofit world urging it not to support the sale of Public Interest Registry, which runs .ORG, to private equity firm Ethos Capital. The proposed buyout was an attempt by domain name industry insiders to profit off of thousands of nonprofits and NGOs around the world. Saying the sale would fundamentally change PIR into an “entity bound to serve the interests of its corporate stakeholders” with “no meaningful plan to protect or serve the .ORG community,” ICANN made clear that it saw the proposal for what it was, regardless of Ethos’ claims that nonprofits would continue to have a say in their future. "ICANN entrusted to PIR the responsibility to serve the public interest in its operation of the .ORG registry," they wrote, "and now ICANN is being asked to transfer that trust to a new entity without a public interest mandate." The sale threatened to bring censorship and increased operating costs to the nonprofit world. As EFF warned, a private equity-owned registry would have a financial incentive to suspend domain names—causing websites to go dark—at the request of powerful corporate interests and governments. In a blog post about its decision, ICANN also pointed out how the deal risked the registry’s financial stability. They noted that the $1.1 billion proposed sale would change PIR “from a viable not-for-profit entity to a for-profit entity with a US$360 million debt obligation.” The debt was not for the benefit of PIR or the .ORG community, but for the financial interests of Ethos and its investors. And Ethos failed to convince ICANN that it would not drain PIR of its financial resources, putting the stability and security of the .ORG registry at risk. ICANN was not convinced by the token “stewardship council” that Ethos proposed in an attempt to add an appearance of accountability. Echoing EFF’s own letter, they noted that “the membership of the Stewardship Council is subject to the approval of PIR's board of directors and, as a result, could become captured by or beholden to the for-profit interests of PIR's owners and therefore are unlikely to be truly independent of Ethos Capital or PIR's board.” Many organizations worked hard to persuade ICANN to reject the sale. We were joined by the National Council of Nonprofits, NTEN, Access Now, The Girl Scouts of America, Consumer Reports, the YMCA, Demand Progress, OpenMedia, Fight for the Future, Wikimedia, Oxfam, Greenpeace, FarmAid, NPR, the American Red Cross, and dozens of other household names. Nonprofit professionals and technologists even gathered in Los Angeles in January to tell ICANN their concerns in person.
The coalition defending the .ORG domain was as diverse as .ORG registrants themselves, encompassing all areas of public interest: aid organizations, corporate watchdogs, museums, clubs, theater companies, religious organizations, and much, much more. Petitions to reject the sale received over 64,000 signatures, and nearly 900 organizations signed on. Joining them in their concerns were Members of Congress, UN Special Rapporteurs, and state charity regulators [pdf]. A late development that affected ICANN’s decision was the letter [pdf] from California’s Attorney General, Xavier Becerra. Citing EFF and other members of the coalition, Becerra’s letter urged ICANN to reject the sale. Although ICANN received many last-minute appeals from some parts of its policymaking community urging the organization to ignore Becerra’s letter, ICANN acknowledged that as it is a California nonprofit, it could not afford to ignore its state regulator. Because PIR is incorporated in Pennsylvania, that state’s courts must approve its conversion into a for-profit company. Pennsylvania’s attorney general is investigating the sale, and may also weigh in. In its rationale, ICANN states that it will allow PIR and Ethos to submit a new application if they are able to get approval from the Pennsylvania authorities with oversight of the deal. But all of the reasons behind ICANN’s rejection of the sale will confront Ethos in Pennsylvania, as well. This decision by ICANN is a hard-fought victory for nonprofit Internet users. But the .ORG registry still needs a faithful steward, because the Internet Society has made clear it no longer wants that responsibility. ICANN should hold an open consultation, as they did in 2002, to select a new operator of the .ORG domain that will give nonprofits a real voice in its governance, and a real guarantee against censorship and financial exploitation.

COVID-19 and Technology: Commonly Used Terms (Thu, 30 Apr 2020)
New technical proposals to track, contain, and fight COVID-19 are coming out nearly every day, and the distinction between public health strategies, technical approaches, and other terms can be confusing. On this page we attempt to define and disambiguate some of the most commonly used terms. Bookmark this glossary—we intend to update it with new terms and definitions regularly. For more information on COVID-19 and protecting your rights, as well as general information on technology, surveillance, and the pandemic, visit our collection of COVID-19-related writing.

Contact tracing: This is the long-standing public health process of identifying who an infected person may have come into contact with while they were contagious. In traditional or manual contact tracing, healthcare workers interview an infected individual to learn about their movements and people with whom they have been in close contact. Healthcare workers then reach out to the infected person’s potential contacts, and may offer them help, or ask them to self-isolate and get a test, treatment, or vaccination if available.

Digital contact tracing: Some companies, governments, and others are experimenting with using smartphone apps to complement public health workers’ contact tracing efforts. Most implementations focus on exposure notification: notifying a user that they have been near another user who’s been diagnosed positive, and getting them in contact with public health authorities. Additionally, these kinds of apps—which tend to use either location tracking or proximity tracking—can only be effective in assisting the fight against COVID-19 if there is also widespread testing and interview-based contact tracing. Even then, they might not help much. Among other concerns, any app-based or smartphone-based solution will systematically miss groups least likely to have a smartphone and most at risk of COVID-19: in the United States, that includes elderly people, low-income households, and rural communities.

Contact tracing using location tracking: Some apps propose to determine which pairs of people have been in contact with each other by collecting location data (including GPS data) for all app users, and looking for individuals who were in the same place at the same time. But location tracking is not well-suited to contact tracing of COVID-19 cases. Data from a mobile phone’s GPS or from cell towers is simply not accurate enough to indicate whether two people came into close physical contact (i.e., within 6 feet). But it is accurate enough to expose sensitive, individually identifiable information about a person’s home, workplace, and routines.

Contact tracing using proximity tracking: Proximity tracking apps use Bluetooth Low Energy (BLE) to determine whether two smartphones are close enough for their users to transmit the virus. BLE measures proximity, not location, and thus is better suited to contact tracing of COVID-19 cases than GPS or cell site location information. When two users of the app come near each other, both apps estimate their proximity using Bluetooth signal strength. If the apps estimate that they are less than approximately six feet apart for a sufficient period of time, the apps exchange identifiers. Each app logs an encounter with the other’s identifier. When a user of the app learns that they are infected with COVID-19, other users can be notified of their own infection risk. Many different kinds of proximity tracking apps have been built and proposed. 
For example, Apple and Google have announced plans for an API to allow developers to build this kind of app. 
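To make the proximity-tracking flow above concrete, here is a minimal sketch in Python. It is emphatically not the Apple and Google API or any real app’s code: the ExposureLog class, the 16-byte identifiers, the distance and duration thresholds, and the radio constants are all illustrative assumptions.

```python
# Illustrative sketch of the proximity-tracking idea described above. This is
# NOT the Apple/Google exposure notification API; the identifier scheme,
# thresholds, and radio constants below are hypothetical examples only.
import secrets
from dataclasses import dataclass, field
from typing import List, Set

TX_POWER_AT_1M_DBM = -59     # assumed calibration constant for the broadcasting phone
PATH_LOSS_EXPONENT = 2.0     # assumed free-space propagation (real environments vary widely)
CLOSE_CONTACT_METERS = 1.8   # roughly six feet
MIN_CONTACT_SECONDS = 15 * 60


def estimate_distance_m(rssi_dbm: float) -> float:
    """Very rough distance estimate from Bluetooth signal strength (RSSI)."""
    return 10 ** ((TX_POWER_AT_1M_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))


@dataclass
class ExposureLog:
    """Stores only random identifiers of phones seen nearby; no location is recorded."""
    my_identifier: bytes = field(default_factory=lambda: secrets.token_bytes(16))
    encounters: List[bytes] = field(default_factory=list)

    def record(self, peer_identifier: bytes, rssi_dbm: float, seconds_in_range: int) -> None:
        # Log an encounter only if the other phone looked close enough, for long enough.
        if (estimate_distance_m(rssi_dbm) <= CLOSE_CONTACT_METERS
                and seconds_in_range >= MIN_CONTACT_SECONDS):
            self.encounters.append(peer_identifier)

    def check_exposure(self, diagnosed_identifiers: Set[bytes]) -> bool:
        # Called after a health authority publishes identifiers reported by infected users.
        return any(peer in diagnosed_identifiers for peer in self.encounters)


# Example: two phones spend 20 minutes about a meter apart.
alice, bob = ExposureLog(), ExposureLog()
alice.record(bob.my_identifier, rssi_dbm=-60, seconds_in_range=20 * 60)
print(alice.check_exposure({bob.my_identifier}))  # True: Alice would be notified
```

The design point the glossary draws out is visible here: the log holds only rotating random identifiers seen nearby, never GPS coordinates, which is what separates proximity tracking from location tracking.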

Frontier’s Bankruptcy Reveals Why Big ISPs Choose to Deny Fiber to So Much of America (Thu, 30 Apr 2020)
Even before it announced that it would seek Chapter 11 bankruptcy, Frontier had a well-deserved reputation for mismanagement and abusive conduct. In an industry that routinely enrages its customers, Frontier was the poster child for underinvestment and neglect, an industry leader in outages and poor quality of service, and the inventor of the industry's most outrageous and absurd billing practices. As Frontier’s bankruptcy has shown, there was no good reason it—like the other big legacy Internet service providers—couldn’t provide blazing-fast fiber on par with services in South Korea and Japan. Frontier's bankruptcy announcement forced the company to explain in great detail its finances, past investment decisions, and ultimately why it has refused to upgrade so many of its DSL connections to fiber to the home. This gives us a window into why ISPs like Frontier—large, dominant, with little-to-no competition—are choosing not to invest in better, faster, and more accessible Internet infrastructure. The reason American Internet lags so far behind South Korea, Japan, and Norway isn’t that fiber isn’t profitable. It just falls under the old adage “you have to spend money to make money,” anathema to American ISPs’ entrenched habit of prioritizing short-term profit over making lasting investments. So long as major national ISPs continue to operate with that short-term mindset, they will never deliver high-speed fiber-to-the-home broadband of their own accord. If they will not do it, then policymakers need to be thinking about incentivizing others to do it.

Why Spend Money Now to Make Money Later When You’re Making Money Now?

Instead of being incentivized to grow a satisfied consumer base by investing in better service and expanding to underserved customers, publicly traded companies' incentives are dominated by quarterly reporting. They are driven to show larger profits every three months, and that short-term profitability woos big-dollar sources of investment and pleases the analysts whose judgments move the financial markets. This short-termism precludes investments that bear fruit further in the future. For years, the telecom sector has invested almost exclusively in programs that pay out in three to five years and neglected anything that pays out over 10 years or more. This is why Verizon terminated its FiOS efforts more than a decade ago. When Verizon first started deploying FiOS and competing with cable companies such as Comcast and Charter, investment analysts criticized the company. They denounced the effort by a phone company to upgrade its old copper network to fiber as a waste of billions of dollars, arguing it would be countered by cable companies that could keep pace with early fiber speeds through a series of cheap, incremental upgrades to their coaxial lines. Verizon would have to invest $18 billion to cover just 14 percent of the country with fiber, while cable companies across the entire country could match the early offerings of FiOS for less than $10 billion. Investors denounced fiber investment as a waste because Verizon would have to spend many billions more on fiber to get the same results the cable giants would get with their existing cable lines. Of course, these dollars-to-dollars estimates missed the real point: fiber has vastly superior maximum speeds, while cable tops out at a tiny fraction of fiber's possible speed. Even though the superiority of fiber is obvious today, the thinking of big ISPs has not changed. 
That blinkered, short-term mindset doesn't just explain America's anemic fiber rollout; it also explains much about Frontier's bankruptcy. Frontier has filed papers explaining how it intends to escape bankruptcy, and they conclusively show that millions of Americans currently stuck in the DSL Internet slow lanes could be upgraded to blazing-fast fiber without a dime in government subsidies. Frontier's own chart shows the company's estimate of the profitability of its current fiber assets. Note that the company itself estimates that by 2031 the revenues from fiber would exceed costs and thus deliver increases in profit. Note also that for the first five years, the company would lose money on fiber. Fiber has high upfront costs (like a house), but it pays off handsomely over time. The inability to capitalize on superior investment opportunities because they take too long to mature is the very definition of dysfunctional short-termism. Bankruptcy has forced Frontier to entertain these previously ignored long-term opportunities in its effort to restructure itself and return to business. In Frontier’s chart, “CAGR Reinvestment” represents projections of increased spending on fiber deployment starting in 2021, with the payoff coming in 2031. Untethered from the public market’s emphasis on constant profit, Frontier has concluded that investing more in fiber for more people would generate more profit in 2031 and beyond. How many fiber connections does Frontier now plan to upgrade in order to capture those long-neglected, long-term profits? Around 3,000,000 households dependent on legacy DSL could be upgraded to fiber to the home and deliver a 20 percent return on that investment by 2031. Frontier estimates that the return on that investment would come in at around one billion dollars; earning that cool billion in profit requires the company to invest about $1.9 billion in the communities it serves. Frontier's historical calculus for deciding when, where, and how to invest excluded anything with less than a 20% return on investment. That's the kind of cherry-picking that bankrupt companies can't afford to engage in, and so now Frontier is eager to earn a 20% return on its infrastructure. The fact that nearly three million homes could have been profitably served with fiber without government subsidy, yet never got it, is a wake-up call. The only reason we are learning about this now is that Frontier is forced to tell us under bankruptcy law. Bankruptcy is also the only reason Frontier is considering doing it.
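The shape of this math is worth making explicit. Below is a back-of-the-envelope sketch in Python of fiber's "lose money first, profit later" economics. Only the roughly $1.9 billion build cost and the roughly 3,000,000 legacy-DSL households come from the filing as described above; the per-subscriber margin and take rate are invented assumptions, so the exact break-even year and totals are illustrative only.

```python
# Back-of-the-envelope illustration of fiber's high-upfront-cost, long-payoff
# economics. The ~$1.9B build cost and ~3M households come from the article's
# summary of Frontier's filing; every per-household figure below is an
# invented assumption for illustration, not Frontier's actual model.
BUILD_COST_TOTAL_USD = 1.9e9
HOUSEHOLDS_PASSED = 3_000_000
MONTHLY_MARGIN_PER_SUBSCRIBER = 15.0   # assumed profit per subscriber per month
TAKE_RATE = 0.45                       # assumed share of passed homes that subscribe

cost_per_home = BUILD_COST_TOTAL_USD / HOUSEHOLDS_PASSED   # roughly $633 per household passed
annual_margin_per_home = MONTHLY_MARGIN_PER_SUBSCRIBER * 12 * TAKE_RATE

cumulative = -cost_per_home
for year in range(2021, 2032):
    cumulative += annual_margin_per_home
    status = "profitable" if cumulative > 0 else "still underwater"
    print(f"{year}: cumulative per-home cash position {cumulative:+8.0f} USD ({status})")

total_2031 = cumulative * HOUSEHOLDS_PASSED
print(f"Total cumulative profit across the footprint by 2031: ${total_2031 / 1e9:.1f}B")
```

Under these toy assumptions the build is cash-negative for most of the decade and only turns profitable near its end, which is precisely the kind of investment a quarterly-earnings mindset screens out and patient capital embraces.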
When You Have a Monopoly, Why Bother Improving?

The revelations from Frontier's bankruptcy filings don't end there. Equally important is how Frontier cultivated, maintained, and abused its monopolies. ISPs like Frontier know exactly where they have monopolies, and therefore know exactly which customers have no choice and are not worth spending money on. Frontier's documents reveal that the company treats its status as the monopoly provider of high-speed Internet access for 1.6 million households as a uniquely identifiable asset. Frontier can precisely demarcate its monopoly territories, and it wants investors to know it, because those territories are where it can get money (to repay its debt and get out of bankruptcy) by charging a captive audience more and delivering less. The fact that Frontier—and its competitors—treat monopolies as a bankable asset would seem a sign that there should be some oversight. Since the FCC gave up its authority to oversee this industry in 2017 under the so-called Restoring Internet Freedom Order, that oversight will have to come from the states. Internet access is an essential service that American households cannot reasonably forgo without inflicting real social and economic harms on themselves, even when the pandemic isn't raging outside their doors. Clearly, ISPs know they can extract excessive profits from those households until an alternative arrives, which undoubtedly plays a role in Frontier’s and other big ISPs’ opposition to local governments building broadband alternatives for their communities. Major ISPs are fond of touting America's supposed “competitive landscape” as a reason to dismantle net neutrality and ban community broadband, but the truth is they are dependent on unfettered monopolies in order to realize the rate of profit their short-term investors demand. None of that is a secret, but the dots were never connected quite so explicitly as when Frontier assured investors, in writing, that it was making a lot of money from more than one million people who have no feasible alternatives, and that this justified "investing" political dollars to block cities from building networks, even where there is no cable Internet deployment. Frontier's bankruptcy documents reveal that these political investments were always viewed as cheaper than the network investments it would otherwise have to make to keep its customers once they were no longer held hostage to its ailing, crumbling, overpriced network.

This Is Standard Industry Practice, Frontier Is Not an Outlier

Giant monopoly ISPs have had decades to bring America's Internet into the 21st century. They have been singularly terrible at delivering decent speeds, reliable service, reasonable customer support, or competitive prices. The only thing these companies have demonstrated competence in is making money for their investors. And Frontier's bankruptcy reveals that even that core competence is vastly overrated. It's long past time we gave up on waiting for Big Telco to do its job. Instead, America should look to the entities with proven track records for getting fiber to our curbs: small, private, competitive ISPs and local governments. These are the homes of the "patient money" that doesn't mind ten-year payoffs for investments in fiber. Fiber is vastly superior to every other means of delivering high-speed Internet to our homes, schools, institutions, and businesses. Nothing else even comes close (not 5G, either). For more, check out EFF’s own technical report on the relative speeds of different broadband technologies, and learn why we want state governments to guarantee universal, affordable, competitive fiber-to-the-home networks. That's why we actively support legislation in California to have the state finance a universal open-access fiber infrastructure built by smaller entities. Policymakers shouldn't assume that the dirty laundry Frontier just aired in its bankruptcy is unique to that one company. Frontier's problem wasn't that it couldn't run a broadband service; it was that it couldn't sustain the short-termism that Verizon adopted when it ditched FiOS and that AT&T adopted when it killed its own fiber buildout the second its legal obligations to deliver fiber expired. Frontier's biggest mistake was buying rural legacy networks from AT&T and Verizon, which allowed those companies to offload their neglected networks onto Frontier’s lap. 
Frontier's bankruptcy is the inevitable consequence of long-term network neglect caused by an emphasis on short-term profits. AT&T and Verizon should be deploying fiber everywhere to compete with cable everywhere. They're not, and they're still profitable on paper, but only because they can paper over their steadily eroding customer numbers with the profits they make through their wireless divisions and content subsidiaries. But when tiny 6,000-person rural cooperatives are deploying fiber to the home while your local town is still stuck with slow DSL from a big telephone company, it is not because the company can’t make money investing in your community; it is because it has chosen not to, and then lobbied to make it illegal for anyone else to do it.