Part One: 20 Dark Dates
Paranoid was created to protect people from a new threat to their data privacy: smart speakers that can listen in on people’s private lives. Let’s take a look at the history of data privacy.
It’s all part of a time-honored trend. For decades now, your digital information has been a hard-sought commodity—to hackers, to corporations, and to governments. Whether by trickery, opaque user agreements, legislation, or outright theft, they’re after your data.
For context, let’s take a fun trip through tech history. Part Two of this blog will be optimistic—focusing on efforts to put the data privacy genie back in the bottle. But to start, let’s trace some of the events that brought us to where we are today: teetering on the brink of the post-privacy era.
1989: Invention of the World Wide Web
Fun fact for those under 30: there was a time, long ago, when the Internet didn’t exist.
When British scientist Tim Berners-Lee invented the World Wide Web in 1989, he envisioned a system for sharing information among universities and institutes. He scarcely could have imagined a day when it would connect the entire world, with most of us carrying Internet-capable devices in our pockets.
So, why would this clearly wondrous invention be considered a dark day for data privacy? By connecting the world’s computers, the WWW created new pathways for infiltration. Your computer and your phone are now gateways to the world, including those who would love to get at the secrets locked within.
1994: The First Cookie
Lou Montulli, a programmer working on the Netscape web browser, thought it would be useful if websites could store tiny packets of information on visitors’ computers. For instance, when a visitor added an item to their shopping cart, that selection could be recorded and then retrieved when it came time to check out.
Unix programmers had been using the term “magic cookies” to describe data stored by programs for later use, and Montulli borrowed it for his invention. His first cookie simply checked whether a visitor had previously been on the Netscape website.
Cookies often make your website visits more enjoyable and efficient. However, many websites employ cookies from third parties, which allow companies to track specific users across multiple sites—and to target them with advertising. With enough data, companies can develop sophisticated user profiles.
Most web browsers now include an option to block third-party cookies, but only a few do so by default.
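The round trip Montulli’s invention enabled can be sketched with Python’s standard http.cookies module. This is a minimal illustration of the mechanism, not Netscape’s actual code; the cart_item name and value are made up:

```python
from http.cookies import SimpleCookie

# Server side: remember a shopping-cart selection by asking the
# browser to store it, via a Set-Cookie response header.
outgoing = SimpleCookie()
outgoing["cart_item"] = "sku-12345"       # hypothetical item identifier
outgoing["cart_item"]["path"] = "/"
outgoing["cart_item"]["max-age"] = 3600   # forget the cart after an hour
print("Set-Cookie:", outgoing["cart_item"].OutputString())

# On the next request, the browser echoes the cookie back in a
# Cookie header, and the server parses it to restore the visitor's state.
incoming = SimpleCookie("cart_item=sku-12345")
print(incoming["cart_item"].value)  # sku-12345
```

A first-party cookie like this one is only sent back to the site that set it; the third-party tracking described above works by embedding content from an advertiser’s domain on many sites, so the same cookie gets echoed back from all of them.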
1995: Spyware Goes Mainstream
In October 1995, the term “spyware” made its first appearance on a Usenet bulletin board. Within a few years, most computer owners were all too familiar with the term. Spyware secretly gathers and transmits information from a computer without its owner’s knowledge. One popular spyware type is the tracking cookie (see above). Spyware can expose surfing habits or personal identification details (including banking information), or can monitor a person’s computer activity down to the individual keystroke. Ironically, much of the early spyware infected computers via fake “anti-spyware programs.”
The prevalence of spyware has declined in recent years, as companies have become more adept at tracking people through “legitimate” channels. Whether it’s an app’s “terms of service” document you click without reading, or whether it’s through your public posts on social media, modern internet use has rendered most spyware redundant. These days, we spy on ourselves.
1997: Privacy is Not a Priority
As the online world began to wake up to emerging privacy threats, the Electronic Privacy Information Center (EPIC) decided to do a bit of research. What are the 100 most popular sites on the Internet, they wondered, and how well do they respect user privacy? The resulting report, “Surfer Beware,” found that only a small fraction of those sites had posted privacy policies, and that none adequately protected visitors’ data.
1999: First GPS Cell Phone
Near the close of the millennium, European tech nerds were thrilled by the launch of the Benefon, the world’s first GPS-enabled cell phone. This new feature could be literally lifesaving: when a user called 911, operators would be able to send help to the caller’s precise location.
However, privacy experts saw more sinister potential. If a 911 operator could track a phone’s location, other government officials could too. People like James Dempsey, an activist at the Center for Democracy and Technology, pushed to restrict government access to private GPS phone records. “Your phone has become an ankle bracelet,” he warned.
2001: Patriot Act Passed
The dust had barely settled after 9/11 when the U.S. government, under President George W. Bush, passed the USA PATRIOT Act (Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism) on October 26.
The legislation was an easy political sell to a nation reeling in shock. The government was able to massively expand surveillance power over its own citizens, on the promise of battling the threat of terrorist violence. As we will find out later in this list, most people didn’t fully comprehend the long-term implications.
2002: “Total Information Awareness”
The Total Information Awareness (TIA) system was the Bush Administration’s ambitious program to gather and correlate data on a mass scale, in order to detect and predict terrorist activities before they became a threat. Some government officials immediately realized the system could be abused, thus posing a threat to the privacy of ordinary citizens. Senator Ron Wyden (D-Ore.) called it “the biggest surveillance system in the history of the United States.” A few months later, the TIA was redubbed “Terrorism Information Awareness,” a name with less Orwellian overtones.
That wasn’t enough to save the program; it was defunded by Congress in the fall of 2003. However, the TIA’s influence and goals live on in the National Security Agency (NSA)—again, we’ll get back to this later in the list. In fact, let’s go there now.
2005: The NSA is Listening
Under the Patriot Act, the NSA had been granted expanded surveillance powers, purportedly targeting only people with known terrorist links. But a bombshell 2005 New York Times article revealed that the government had been eavesdropping on hundreds, perhaps even thousands of people in the U.S.—without warrants. Although the NSA had been known to extensively surveil people overseas, Americans were shocked to learn that the government was listening in on American citizens, often on American soil.
2006: Facebook Launches News Feed
Every day, billions of people share (and often overshare) photos, videos, and personal details on their Facebook News Feed. But many users were aghast when the News Feed was first introduced in September 2006. Suddenly people’s Facebook updates were displayed in a magazine-style stream for the world to see. Although Facebook privacy settings still gave them the power to restrict the visibility of individual posts, the News Feed felt like a massive expansion of personal exposure. Within the first day, a Facebook group called “Students Against Facebook News Feed” attracted over 100,000 members.
2007: Google Street View Introduced
The Internet community was wowed in May 2007 when Google unveiled Street View. Google’s block-by-block, street-level images brought a new dimension to online maps—and also introduced some privacy concerns. Car license plates were often readable, and countless people caught by the roving cameras were recognizable. The vehicles carrying those cameras also collected details of local Wi-Fi networks. Within months, Google introduced a feature to remove unwanted faces by request, and eventually began blurring all faces and license plates.
2009: Facebook Back in the Privacy Doghouse
In December 2009, Facebook rolled out revamped privacy settings that nudged users toward making their posts visible to everyone by default. Kevin Bankston, a privacy activist with the Electronic Frontier Foundation, wrote, “These new ‘privacy’ changes are clearly intended to push Facebook users to publicly share even more information than before. Even worse, the changes will actually reduce the amount of control that users have over some of their personal data.”
2010: Google Pushes the “Buzz”
The 2010 launch of Google Buzz did not go as planned. In its first bold foray into the social networking market, the company decided to leverage its massive existing Gmail user base. At the product’s launch, every Gmail user was given two options: “Sweet! Check out Buzz,” or “Nah, go to my inbox.”
If you clicked “Sweet,” you were enrolled in Buzz—and a sizeable chunk of your information was made public by default, including contact information of the people you emailed most (for some people these included ex-spouses, students, or employers). If you said “Nah,” Google still enrolled you in some of Buzz’s features. And, if you managed to find the “Turn off Buzz” option, even that failed to completely remove you from the network.
The result? Google was forced to settle a scathing legal action from the FTC for its privacy violations, and the company pulled the plug on Buzz in December 2011.
2012: Canvas Fingerprinting
Canvas fingerprinting, also known as browser fingerprinting, allows websites to identify individual visitors by exploiting a feature of HTML5 called the canvas element. Although HTML5 Canvas was designed to help render graphics, it also effectively collects details about a user’s browser, operating system, and hardware. The specific combination of those details can allow websites to identify one specific user, and then recognize that user on subsequent visits. Even if you disable cookies and always log in from a VPN, your canvas fingerprint follows you wherever you travel on the web.
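The core trick is that many small, individually innocuous details combine into a nearly unique identifier. Here is a rough sketch of that combination step in Python; the attribute names and values are invented for illustration, and a real fingerprinter would also hash the pixel output of rendering text to an actual HTML5 canvas in the browser:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine browser/OS details into a stable, compact identifier.

    The same set of attributes always produces the same ID, which is
    what lets a site recognize a returning visitor without cookies.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical details a script might collect from one visitor's browser.
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "canvas_hash": "a3f9c1",  # hash of rendered canvas pixels (made up)
}

print(fingerprint(visitor))  # same inputs always yield the same ID
```

Because the identifier is derived from the machine’s own characteristics rather than stored on it, clearing cookies does nothing to reset it.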
2014: The Birth of Alexa
Amazon introduced the Echo smart speaker in 2014. Within months, millions of people were asking “Alexa” to provide weather forecasts, play music, or read recipes. In other words, millions of private households owned internet-connected devices with sophisticated arrays of microphones. These devices listened constantly for their “wake” word, but were activated all too frequently by other snatches of conversation. Google Assistant soon joined the smart speaker market, along with other competitors such as the Apple HomePod.
These devices can be wonderfully handy, but they also harvest data on behalf of their manufacturers. That data can include unintentional recordings, made when the devices are mistakenly activated. (Paranoid has been designed to stop them.)
2017: Equifax Hack is a Wakeup Call
For most people, data breaches were something of a theoretical curiosity; they happened to other people. That all changed when Chinese hackers infiltrated the servers of Equifax, one of the world’s largest credit reporting agencies. The breach remained undetected for months, during which time the intruders harvested troves of personal data: full names, birth dates, Social Security numbers, credit card information, and more. By the time the breach was discovered, the hackers had compromised the data of about 145.5 million Americans. A $650 million settlement in 2019 helped compensate for the damage, but the Equifax hack forever changed the way people think of data breaches.
2018: Cambridge Analytica Scandal
Facebook came under fire once again in 2018, for the part it unwittingly played in the 2016 election of Donald Trump. Cambridge Analytica (CA), a British consulting firm founded by Trump ally Stephen Bannon, commissioned a Facebook “quiz app” designed for the sole purpose of harvesting data. Roughly 270,000 Facebook users downloaded the app (called “thisisyourdigitallife”) and took the quiz. In the process, CA harvested their data, along with data from their Facebook friends.
CA ended up with a collection of up to 87 million user profiles, and used them to promote pro-Trump ads and content. The rest is history.
2019: Your Smart Speaker Has Ears—Human Ears
The global love affair with smart speakers hit a speed bump in 2019, as multiple news stories broke about the widespread use of human contractors to review and evaluate uploaded recordings. Often these recordings featured intentional commands (although sometimes we ask the internet for things we’d rather keep private). But workers also heard snatches of intimate conversations, business interactions, or questions from people’s children. Some workers even reportedly shared “funny” recordings among themselves in private chat rooms.
2020: Clearview AI Knows Your Face
The mind-bendingly tumultuous year of 2020 got off to an appropriately dystopian start in January, when the New York Times told the terrifying story of Clearview AI. An obscure Australian tech entrepreneur had managed to scrape billions of images from social media sites, and utilized the data to create a facial recognition app with terrifying potential. When you upload a photo of a face, Clearview AI can identify the person almost immediately, by matching the photo with others in the database.
By the time the story came to light, hundreds of law enforcement agencies across the U.S. had already developed relationships with the company. A February data breach later revealed commercial customers of the company including Best Buy and Macy’s. Is this where facial recognition is headed, or can we pull back from the brink? Stay tuned.
2020: COVID-19 Paves the Way For New Levels of Surveillance
2020’s pandemic threw the planet into social and economic turmoil, killed hundreds of thousands, and sickened millions more—some of whom may never fully recover. Contact tracing emerged as a crucial early tool in helping slow the spread. A number of national governments saw contact tracing apps as a potential game-changer, despite the accompanying threats to digital privacy. When you track people’s physical locations AND their health status, you risk exposing intensely personal information. For example, flaws in South Korea’s system meant hackers could uncover the personal identities of infected people, including their addresses.
202_: This Space is Reserved
Where and when will we encounter the next major global threat to data privacy? We have no way of knowing; but, in the meantime, we’re holding open the last spot on our list. We’re betting we won’t have to wait long before filling it.
The History of Data Privacy, Part Two:
10 Positive Developments
“You have zero privacy anyway. Get over it.”
Scott McNealy, CEO of Sun Microsystems, January 25, 1999
It’s now over two decades since Scott McNealy famously declared the death of privacy, and his quote remains controversial to this day. To be fair, it may be very close to the truth in today’s era, when our GPS-enabled smartphones track our every step and many of us keep always-listening smart speakers in our homes (and even our bedrooms).
But, while the erosion of our data privacy may seem inevitable and irreversible, there are people in industry and government who have fought to protect and preserve what privacy we have left. In this “History of Data Privacy, Part Two” we’ll look back at some of the most notable examples of this pushback.
1991: Antivirus Software Hits the Mainstream
In the early 90s, the term “computer virus” was still (thankfully) relatively unknown. In fact, the term had only been coined in 1983. But, as this new threat began to gain momentum, software developers sensed a market opportunity. In 1991 US company Symantec debuted Norton Antivirus and AVG Technologies was founded in the Czech Republic (releasing their own antivirus software the following year).
1994: HTTPS Helps Secure Web Traffic
Before 1994, web users had to simply trust that the sites they visited were authentic, and that nobody was listening in on the data they sent and received. Netscape, the most popular web browser of the era, developed HTTPS—Hypertext Transfer Protocol Secure—as a method of authenticating websites and encrypting any data exchanged with them. HTTPS didn’t exactly take the Internet by storm (more about that later), but its introduction marked an important milestone in digital privacy.
1995: European Data Protection Directive
The European Union has a long history of privacy protection, but the 1995 Data Protection Directive provided a new benchmark. It set out when and how people’s personal data could be processed. Although the directive didn’t have the force of law, it provided the framework for personal data privacy regulations in countries throughout the EU and remained on the books until the adoption of the GDPR in 2018 (see below).
2012: FTC Fines Google $22.5 Million
Given the dizzying market values of today’s tech giants, a $22.5-million fine sounds like pocket change. But in 2012, when Google agreed to pay that sum to settle FTC charges, it marked the largest-ever penalty for an FTC violation. Apple’s Safari was among the first web browsers to block third-party cookies by default. Google assured Safari users that this default setting would automatically protect them from tracking, but then circumvented it and placed advertising tracking cookies anyway. The FTC was not impressed. (Google seems to have survived the fine; its parent company, Alphabet, is currently valued at around $1 trillion.)
2013: Edward Snowden Unmasks the NSA
When a 29-year-old former CIA computer technician released a 12-minute online video interview, he instantly became the most famous whistleblower in history. Edward Snowden had been with “the Agency” since 2006. Over the years he had become increasingly aghast at the level of global surveillance being run by the US National Security Agency (NSA) and other international government agencies, and the data provided to them by telecommunication companies. Snowden claims to have brought his concerns to colleagues and superiors, although US government officials deny it. In any case, Snowden smuggled and released tens of thousands of classified documents, including up to 200,000 from the NSA alone. “The public needs to decide whether these programs and policies are right or wrong,” he said at the time. Snowden remains in exile, granted asylum by Russia, but stands by his decision.
2014: EU Court Upholds “Right to be Forgotten”
Countless millions of people worldwide have embarrassing or even defamatory information about them on the internet—information that can haunt their reputations if anyone so much as Googles them. In 2014, however, Europe’s highest court ruled that people should be able to force search companies to erase links to them unless there are “particular reasons” why they shouldn’t. Although the ruling has had limited effect (for example, the US views it as a violation of the right to free speech), it gave private individuals new control over their online identities.
2014: Google Gives HTTPS Preferred Ranking
For the first two decades of HTTPS, the secure protocol was used only by a small minority of websites. Then, in August 2014, Google announced that it would begin using HTTPS as a “ranking signal.” In other words, Google’s search engine would begin to prefer secured websites over unsecured websites and rank them slightly higher in search results. The goal was to encourage website administrators to switch to HTTPS. It worked; since 2014, Google has seen a gradual but steady rise in the proportion of encrypted web traffic.
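The guarantees that made HTTPS worth rewarding can be seen in the defaults of Python’s standard ssl module, which mirror what a browser enforces for every https:// URL. This is a general sketch of client-side TLS verification, not anything specific to Google’s ranking mechanics:

```python
import ssl

# A default client-side TLS context enforces the two guarantees HTTPS
# adds over plain HTTP: the server must present a certificate signed by
# a trusted authority, and that certificate must match the hostname
# being visited. All application data is then encrypted in transit.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: invalid certs are rejected
print(ctx.check_hostname)                    # True: cert must match the hostname

# Wrapping a TCP socket with this context performs the TLS handshake:
#   with socket.create_connection((host, 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname=host) as tls:
#           ...  # send and receive encrypted data
```

A site failing either check raises an error before any application data is exchanged, which is exactly the authentication step that pre-1994 web users had to take on faith.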
2015: California’s “Online Eraser Law” Takes Effect
Minors in California were granted their own “right to be forgotten” on January 1, 2015, when the Online Eraser Law took effect. From that date, any Californian under the age of 18 could remove any content or information that they had posted on an Internet service.
2018: GDPR Enacted
We may never be able to stuff the privacy genie back into its bottle, but 2018 marked a sea change in the overall power dynamic. The General Data Protection Regulation built upon earlier moves by the EU (see above) to provide the world’s most sweeping protection for people’s online data. With it, people could request to review and control any personal data collected on them by companies online. It also forced companies worldwide to update their privacy policies, and served as a template for other jurisdictions looking to advance personal privacy protection—including…
2020: CCPA Becomes Law
Well, at least 2020 started on a positive note, assuming you’re an advocate for personal data privacy. On January 1, the California Consumer Privacy Act (CCPA) came into force, bringing GDPR-like data protection to North America for the first time. Like the GDPR, the CCPA gives California residents the right to ask businesses to disclose the personal information they hold on them, and to delete it if they so request. California may be only one state but, with a population near 40 million, the laws it creates resonate nationwide. Virtually every major business in North America had to update its privacy policies to meet the demands of the CCPA.