Ah, 2013. We were young, innocent, and overly idealistic back then. We still hoped the smart future would resemble that year’s film Her — an Academy Award winner in which Joaquin Phoenix’s character falls in love with his sultry AI virtual assistant. But instead, we now find ourselves in a creepy dystopian remake of Orwell’s 1984. As a writer for a tech company, some of the topics I research all day keep me up at night. Arguably, even more so than my true crime habit. However, those two interests help me draw a parallel: prior to the advent of forensic DNA analysis, criminals could not foresee the future consequences of the genetic material they left behind at crime scenes. Pursuing cold case convictions is indisputably a positive — but it serves as a metaphor for how little we can possibly grasp about how technology will evolve, and what the long-term consequences of our trail of data breadcrumbs could be.
The problem with our personal data is that it is stored indefinitely, perhaps forever. Legislation is not keeping up with the breakneck speed at which technology is advancing, and corporations are spending millions lobbying politicians to water down privacy laws. Due to the terms of service of individual corporations and the lack of laws governing them, users often have no legal right to see their data or demand it be permanently destroyed. Research has shown that deleting files from the client end, such as messages or voice recordings, does not mean they are actually deleted from the company’s servers. And even if you fully customize your privacy settings or upload things only viewable to you, as a Facebook data breach proved: it might not matter.
That personal data is being used to identify us via facial recognition, to track our location, to analyze our wants and needs, to make inferences about us (even identifying whether or not we’re pregnant based on what kinds of lotion we buy), to target exploitative ads at us, to influence us politically, to listen in on our private conversations, and even to make life-changing determinations about us — including how long we should spend in prison, and whether or not we should be allowed into foreign countries. More worryingly, these invasions of privacy extend not only to us, but also to our loved ones and to our children, as we invite seemingly benign technologies into our homes. Technologies are being developed today that may even, in the not-so-distant future, get to decide who lives and who dies.
Join us on a deep dive into foreboding technologies that are emerging or already in existence, and what they could mean for humanity long-term.
AI (ARTIFICIAL INTELLIGENCE)
“AI is a fundamental existential risk for human civilization.”
Elon Musk
AI is already smarter than us…
“On the artificial intelligence front, I have access to the very most cutting edge AI, and I think people should be really concerned about it,” Elon Musk — CEO of Tesla and SpaceX — revealed during a Q&A with Brian Sandoval, Governor of Nevada, “[They] could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information… The pen is mightier than the sword.”
In 2014, the University of Reading announced that — for the first time ever — a computer had passed the 65-year-old Turing test, which stipulates that to be successful, a computer must be mistaken for a human at least 30% of the time during a series of five-minute-long digital chats. A program called Eugene Goostman scored 33% with the judges at the Royal Society in London, in a victory described as “historic”. Since chatbots are already in use at many companies, this technological advancement has the potential to wipe out the entire customer service industry.
…and we’ve been the ones training it.
Have you ever used CAPTCHA (or reCAPTCHA) to type out a word or identify traffic lights to ‘prove you’re human’? Congratulations, you helped to train AI. Thanks to our uninformed, unpaid labor — which invites comparison to the human ‘batteries’ powering The Matrix — reCAPTCHA (owned by Google) was used to digitize the entire Google Books archive plus millions of articles. Next, Google began using it to identify objects in photos — such as traffic lights, street signs, and house numbers — for improved Google image searches, Google Maps functionality, and, likely, steering the driverless cars of tomorrow.
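To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python of how many noisy human CAPTCHA answers could be collapsed into labeled training data for an image classifier. The image IDs, labels, and thresholds are hypothetical; this is not Google’s actual pipeline.

```python
# Illustrative only: turning crowd-sourced CAPTCHA answers into training labels
# by majority vote. Not Google's actual pipeline.
from collections import Counter

# Hypothetical raw responses: (image_id, label_chosen_by_a_user)
responses = [
    ("img_001", "traffic light"), ("img_001", "traffic light"), ("img_001", "crosswalk"),
    ("img_002", "house number 42"), ("img_002", "house number 42"),
]

def aggregate_labels(responses, min_votes=2, min_agreement=0.6):
    """Collapse many noisy human answers into one label per image."""
    by_image = {}
    for image_id, label in responses:
        by_image.setdefault(image_id, []).append(label)

    training_set = {}
    for image_id, labels in by_image.items():
        top_label, votes = Counter(labels).most_common(1)[0]
        # Keep only labels that enough humans agreed on.
        if votes >= min_votes and votes / len(labels) >= min_agreement:
            training_set[image_id] = top_label
    return training_set

print(aggregate_labels(responses))
# {'img_001': 'traffic light', 'img_002': 'house number 42'}
```

Every solved challenge adds another labeled example to a set like this, which is exactly the kind of data an image classifier needs.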
In an unprecedented move, OpenAI (the non-profit co-founded by Elon Musk) declined to release the full version of its GPT-2 model, fearing its potential for misuse. Jack Clark, OpenAI’s head of policy, says that the organization needs more time to experiment with GPT-2’s capabilities so that it can anticipate malicious uses. “Some potential malicious uses of GPT2 could include generating fake positive reviews for products (or fake negative reviews of competitors’ products); generating spam messages; writing fake news stories that would be indistinguishable from real news stories; and spreading conspiracy theories. Furthermore, because GPT2 learns from the internet, it wouldn’t be hard to program GPT2 to produce hate speech and other offensive messages.”
“We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”
Jack Clark, OpenAI’s Head of Policy
When speaking at SXSW, Musk insisted: “Mark my words — AI is far more dangerous than nukes. Far.” Musk, who self-identifies as someone who is normally wary of government oversight, advocates for greater regulation of AI, and referred to the lack thereof as “insane” due to the potential of grave danger to the public.
Though there are far too many to name, some of the additional risks associated with the rise of AI include massive job losses due to automation, automated financial systems (such as stock trading) widening income disparity, complete political chaos, devastating warfare facilitated by autonomous weapons, and the complete end of objective truth as we know it (due to the existence of fake news and deepfakes).
(Sidenote: as a writer, I honestly believed we would be the last profession rendered obsolete by the AI revolution. I naïvely expected that a genuine human voice, including the subtleties of language, would be the hardest to fake. As it turns out, it is already being done.)
SMART SPEAKERS
We welcome them into our homes, ask them personal questions, and let them collect and store our voice recordings… but how much do we really understand about our smart speakers, or what they are doing with our sensitive information? As we discussed in-depth in this article: “manufacturers withhold the fact that human subcontractors are listening in, hide undisclosed microphones in products that aren’t listed in the tech specs, and record users inadvertently up to 19 times a day — recordings they admit are stored forever. But, when those same recordings are requested to help solve a murder, they refuse: suddenly pretending to care about user privacy.” The key to protecting your long-term privacy is to prevent this data from existing in the first place. Once it is created, it may be stored indefinitely.
Common Sense is a leading nonprofit organization that describes itself as “dedicated to improving the lives of kids and families by providing the trustworthy information, education, and independent voice”. They published a full privacy evaluation of the market’s top six virtual assistant contenders: Apple’s Siri, Google Assistant, Amazon’s Alexa, Facebook Portal, Microsoft’s Cortana, and Samsung’s Bixby. It is worth noting that only Apple’s Siri even “passed” their evaluation.
Apple’s Siri (HomePod) came in first place with a pass and an overall rating of 79%. Next, Google Assistant (Google Home) and Microsoft’s Cortana tied with a score of 75%. Facebook Portal scored 59%, Amazon’s Alexa (Echo) 54%, and in dead last, Samsung’s Bixby received a score of 53%. All but Apple’s Siri received not a “pass” but a “warning”.
Some of the most alarming findings from the Common Sense study:
- Their survey found that 59% of participants reported that their child has interacted with a voice assistant, and 29% reported that their child has used one to help with homework, yet only 32% had ever set up parental controls on those smart speakers.
- Samsung’s terms state that children under 13 should not even be using their services. (“Children under 13 years of age should not attempt to register for its services or send any personal information about themselves to Samsung.”)
- 41% of users have turned off a smart speaker because it was unintentionally activated.
- 31% of parents reported that their child has said something mean, rude, or inappropriate to a voice-activated assistant – and that’s just what they know of.
- Common Sense agrees that virtual assistants may store data that could be used in the future in ways even manufacturers have not yet imagined.
Labels that could follow you or your family members include:
- Mental illness
- Personal medical information
- Sexual orientation
- Financial status
- Disabilities — both physical and mental
- Misbehavior (manufacturers have submitted patents to monitor and reprimand children)
Full Disclosure: Our company — aptly named Paranoid™ — recently invented 3 different voice-activated devices (pictured above) that stop your smart speakers from constantly eavesdropping. You can check them out here.
FACIAL RECOGNITION
Few have heard of a company called Clearview, yet it’s already being used (controversially) by thousands of law enforcement agencies to recognize unidentified subjects. The software uses data the company has scraped from the internet and social media sites — a questionable practice many are highly concerned about, including legislators. According to an article by Gizmodo, whose reporters gained access to the secretive app and its code: “Other bits of code appear to hint at features under development, such as references to a voice search option; an in-app feature that would allow police to take photos of people to run through Clearview’s database; and a ‘private search mode’.” When Gizmodo asked Clearview about the private search mode, they did not get a response. Clearview has already suffered one known data breach of its client list, which it casually dismissed as an unavoidable symptom of living in the 21st century.
A U.S. study from the National Institute of Standards and Technology (NIST) evaluated 189 facial recognition algorithms from 99 different developers, using four collections of photographs containing 18.27 million images of 8.49 million people. The team saw higher rates of false positives (i.e. misidentifications) for Asians and African Americans compared to Caucasians, with false positive rates 10 to 100 times higher depending on the algorithm. Among U.S.-developed algorithms, false positive rates were similarly elevated for people of color, with the highest rate of misidentification in the American Indian demographic; in one-to-many searches, the kind used to pick a suspect out of a database, false positives were highest for African American women. This should be of significant concern if facial recognition is used to identify and convict criminals.
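For context on how such disparity figures are derived, here is a minimal sketch with entirely hypothetical numbers (not NIST’s data): a false positive rate is computed per demographic group, and the headline “10 to 100 times” figures are simply the ratios between those rates.

```python
# Hypothetical numbers, not NIST's data: computing per-group false positive
# rates and the disparity factor between groups.
# An "impostor comparison" matches photos of two *different* people;
# a false positive means the algorithm wrongly declared them the same person.
results = [
    ("Group A", 100_000, 20),    # (group, impostor comparisons, false positives)
    ("Group B", 100_000, 600),
]

rates = {group: fp / comparisons for group, comparisons, fp in results}
for group, rate in rates.items():
    print(f"{group}: false positive rate = {rate:.4%}")

print(f"Disparity factor: {rates['Group B'] / rates['Group A']:.0f}x")  # 30x here
```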
In China, facial recognition technology is already being used to text, fine, and ‘name and shame’ jaywalkers. Repeat infractions affect a jaywalker’s social credit score, impeding their ability to do things such as apply for bank loans. Even more alarmingly, some of the 70,000 public restrooms built or renovated at Chinese tourist sites now employ facial recognition in their toilet paper dispensers: “When you stare at it (the machine) for three seconds, you’ll get toilet paper”. Why would anyone put cameras near toilets, you ask? Why, to limit the amount of toilet paper one person can take, of course! What else?
Facial recognition technology isn’t just being used to identify you. Now, cameras trained on your face are using something referred to as “emotion AI”, which uses your eye movement and facial expressions to analyze your mood. (And, naturally, to use that information to better advertise to you.) Apple purchased Emotient in 2016, Facebook is developing its own emotion-reading technologies, and Amazon is working to make Alexa detect human emotion. According to an article by The Guardian: if you stop to look at the numerous advertisement screens in Piccadilly Circus in London referred to as the “Piccadilly Lights” (similar to those in Times Square), you will be recorded by two cameras that identify faces and approximate their age, sex, and mood. A system called iBorderCtrl, which is being piloted at the borders of Hungary, Greece, and Latvia, uses similar technology to determine whether those being screened at immigration are giving truthful answers. A reporter for The Intercept, the first journalist to test the system, triggered a false positive despite giving all truthful answers.
In a highly alarming 2018 study, Stanford University researchers used deep neural networks (a form of machine learning / AI) to analyze 35,326 facial images and predict sexual orientation. According to the research: “Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person.”
Follow-up research was conducted by a researcher from the University of Pretoria, inspired by Stanford’s aforementioned study, attempting to replicate it and verify their results using a new dataset of 20,910 photographs sourced from dating websites. The capacity to predict sexual orientation was confirmed — the use of deep neural networks achieved accuracies of 68% for males and 77% for females. As the study’s abstract asserts: “The advent of new technology that is able to detect sexual orientation in this way may have serious implications for the privacy and safety of gay men and women.” This research comes with very heavy implications, particularly in the many countries on earth where it is still illegal to be homosexual — in some, it is punishable by death.
APPS
Remember FaceApp? It made its rounds across Facebook, encouraging users to submit a photo of their face to see what they’d look like as the opposite sex, or as an older version of themselves. It was then revealed that the company behind FaceApp was based in Russia, and the FBI publicly warned citizens not to use the app due to fears it may be a “counterintelligence threat” or be used to advance Russia’s facial recognition database.
TikTok (a Chinese-owned app) is being similarly scrutinized. First, it was caught snooping on iPhone users’ clipboards, which can contain troves of private data, such as passwords. Then, hacktivist group Anonymous warned its followers to delete the app immediately, and shared a popular (unverified) Reddit thread in which an engineer claimed to have “reverse engineered” the app and found a whole host of privacy violations and security issues. Some have gone so far as to refer to it as “spyware”. According to an article from Forbes, their reporter reached out to TikTok regarding these claims, and the company had no comment. The app has already been banned in India, and other countries could potentially follow suit.
During an outstanding feature by CBC Marketplace, investigative journalists proved just how easy it is for app-makers to do some very alarming things, including seeing private messages and taking photos of you without your knowledge or consent. But it’s all legal, so long as you agree to the Terms & Conditions… You know, that thing nobody ever reads?
The main takeaway: we cannot trust the security vetting processes of apps. We must be the advocates of our own privacy and security. And, sadly, that involves not downloading a lot of the third-party apps out there.
PREDICTIVE ANALYTICS
Target’s name and corresponding logo turned out to be more apt than we realized when it was revealed that a teenage girl’s shopping habits told the retailer she was pregnant before anyone else knew. A Target employee shared this story with a journalist from The New York Times: a Target store manager in Minneapolis was surprised when an irate father came in to complain about a book of maternity coupons they’d sent to his teenage daughter, addressed to her by name. When the manager phoned the customer a few days later to apologize again, the father confessed that his daughter was indeed pregnant, admitted there were “activities” going on under his roof that he hadn’t been aware of, and apologized.
So… how did Target hit the bullseye? Statistician Andrew Pole — who was thereafter forbidden by Target’s HQ from speaking to reporters — was quite forthcoming about, even proud of, the model he developed for them. He identified around 25 specific purchases (such as lotions and vitamins) that, when analyzed together, helped to determine a “pregnancy prediction score” for each customer, and could even predict her due date within a narrow window.
“We are very conservative about compliance with all privacy laws. But even if you’re following the law, you can do things where people get queasy.”
Andrew Pole, Target Statistician
The customers they track are not just those who signed up for loyalty programs. Target assigns a Guest ID number to anyone who has ever paid with a credit card, filled out a survey, opened an email they sent, or visited their website. From there, they are able to purchase more data about us, including our age, address, marital status (including divorce history), ethnicity, estimated salary, and even what websites we visit. The scariest part: that article was published in 2012! If they had this technology the same year the iPhone 5 came out, just imagine what they have now.
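To illustrate how roughly 25 mundane purchases can add up to one revealing number, here is a toy sketch of a purchase-based prediction score. Target’s actual model is proprietary; the products, weights, and threshold below are invented for illustration only.

```python
# Toy sketch of a purchase-based prediction score. The signal products,
# weights, and flagging threshold are all hypothetical.
PREGNANCY_SIGNALS = {
    "unscented lotion":     0.8,
    "calcium supplement":   0.6,
    "magnesium supplement": 0.5,
    "zinc supplement":      0.5,
    "large tote bag":       0.3,
    "cotton balls (bulk)":  0.4,
    # ...roughly 25 such items in the model Pole described
}

def prediction_score(purchase_history):
    """Sum the weights of signal products found in a shopper's purchase history."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchase_history)

guest_history = ["unscented lotion", "calcium supplement", "zinc supplement", "bread"]
score = prediction_score(guest_history)
print(f"score = {score:.1f}, flagged = {score >= 1.5}")  # score = 1.9, flagged = True
```

Tie a score like this to a Guest ID and a purchase timeline, and a due-date estimate follows from when the signal products start appearing.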
POLITICAL INFLUENCE
A company called Cambridge Analytica went from being unheard-of to the subject of headlines across the globe virtually overnight. A former staffer-turned-whistleblower, Christopher Wylie, testified to a committee of British MPs that his former employer used questionably acquired data — the jury is still out on whether it was technically illegal — to interfere with elections all over the globe. Many believe the company’s efforts helped to skew the outcomes of the 2016 American presidential election and the Brexit campaign, among others. According to Wylie, their techniques included spreading videos of people being murdered, in order to further an anti-Muslim message suggesting that a certain candidate in the Nigerian election would use such tactics to deter dissent. Wylie testified that the video was distributed “with the sole intent of intimidating voters” — also known as voter suppression.
Alexander Nix, former CEO of Cambridge Analytica, even bizarrely bragged about his now-infamous techniques in a speech he called “From Mad Men to Math Men” at OMR Festival 2017 in Germany. More recently, Congresswoman Alexandria Ocasio-Cortez grilled Facebook’s Founder/CEO Mark Zuckerberg about the absence of regulation regarding political ads, particularly the lack of fact-checking facilitating the spread of misinformation (as exposed by Cambridge Analytica). He gave very vague answers, demonstrating Facebook’s lack of interest in removing deliberate misinformation from its platform.
This is a serious, non-partisan issue undermining democracy. Cambridge Analytica simply auctioned off their shadowy services to the highest bidder, influencing elections across the globe. And now they’ve set the precedent, forcing new campaigns to either adopt the same techniques or be crushed by opponents who do.
DEEPFAKES
Imagine an address from the President or Prime Minister of your country: standing tall in front of the flag, informing the nation of a serious issue. Such messages can and have brought entire nations to action. Now, imagine the entire message was fake, yet its inauthenticity was undetectable to the naked eye. Such a video could cause unimaginable harm. This technology already exists and is becoming more sophisticated every year.
The future consequences are alarming and clear: once the technology evolves enough that even experts cannot determine what is real and what is not real, malicious parties will be able to convince the masses that anyone said anything. Alternately, this technology could be used as an argument to undermine the authenticity of real footage, eroding any sense of accountability for past actions and remarks.
In North Korea, the only news agencies are state-sponsored, and they are already well-known to broadcast misleading domestic and world news to brainwash the population. Deepfakes provide totalitarian nations like this with an even better tool to control their citizens: the ability to fake speeches from other world leaders, to convince people they are involved in a world war, or that all other countries on earth are inhospitable to life… the options are endless.
The worst part? Knowing that world governments, social media sites, and private corporations already have stored video and audio recordings of us, which is all that is needed to make deepfakes, even with today’s technology.
LIFE-ALTERING ALGORITHMS
Algorithms have been imperceptibly creeping into every facet of our lives. A term few understood a decade ago, this mysterious force that somehow knows what we want to see on our feeds is now being used to shape individuals’ lives — from deciding which ads we are most vulnerable to, to determining how long someone should spend in prison.
In 2014, Eric Holder — then U.S. Attorney General — gave a speech to the National Association of Criminal Defense Lawyers, urging the U.S. Sentencing Commission to formally investigate the use of data-driven “risk scores” in sentencing. He warned: “I am concerned that they may inadvertently undermine our efforts to ensure individualized and equal justice. By basing sentencing decisions on static factors and immutable characteristics — like the defendant’s education level, socioeconomic background, or neighborhood — they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”
Despite the fact that these systems were already in use, shaping individuals’ lives forever, there was a lack of independent studies of these “criminal risk assessments”. ProPublica, a non-profit organization that produces investigative journalism to spur real-world impact, decided to launch its own study of the machine-generated risk assessments used in sentencing. ProPublica cited research by Sarah Desmarais and Jay Singh, who studied 19 of the methodologies used in the U.S. and found that “in most cases, validity had only been examined in one or two studies” and “frequently, those investigations were completed by the same people who developed the instrument”.
The danger here is two-fold. Firstly, individuals at the mercy of such algorithms are usually not privy to the underlying base code — meaning that they (and their legal counsel) are unable to understand the data and reasoning informing its life-shaping decisions. Secondly, that base code has been programmed by human beings, using arrest data that contains inherent biases. When an expert testifies for or against a defendant, legal counsel is allowed to cross-examine them, scrutinize their qualifications, expertise, and even their character. However, these life-changing algorithms are mysterious and inscrutable. Christopher Slobogin, Director of the Criminal Justice Program at Vanderbilt Law School, told ProPublica: “Risk assessments should be impermissible unless both parties get to see all the data that go into them. It should be an open, full-court adversarial proceeding.”
According to ProPublica’s findings, which included obtaining the risk scores assigned to over 10,000 criminal defendants (a sketch of how such group-wise error rates are computed follows this list):
- “Black defendants were often predicted to be at a higher risk of recidivism than they actually were. Our analysis found that black defendants who did not recidivate over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent).”
- “White defendants were often predicted to be less risky than they were. Our analysis found that white defendants who re-offended within the next two years were mistakenly labeled low risk almost twice as often as black re-offenders (48 percent vs. 28 percent).”
- “The analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 45 percent more likely to be assigned higher risk scores than white defendants. Black defendants were also twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism.”
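Below is a minimal sketch, using toy records rather than ProPublica’s data, of how those group-wise error rates are computed: for each group, the share of defendants who did not reoffend but were labeled high risk, and the share who did reoffend but were labeled low risk.

```python
# Toy records, not ProPublica's dataset: group-wise misclassification rates.
records = [
    # (group, labeled_high_risk, reoffended_within_two_years)
    ("black", True,  False), ("black", False, False), ("black", True,  False),
    ("black", True,  True),  ("white", False, True),  ("white", False, False),
    ("white", True,  True),  ("white", False, True),
]

def error_rates(records, group):
    non_recidivists = [r for r in records if r[0] == group and not r[2]]
    recidivists     = [r for r in records if r[0] == group and r[2]]
    # False positive rate: did not reoffend, yet labeled high risk.
    fpr = sum(labeled for _, labeled, _ in non_recidivists) / len(non_recidivists)
    # False negative rate: reoffended, yet labeled low risk.
    fnr = sum(not labeled for _, labeled, _ in recidivists) / len(recidivists)
    return fpr, fnr

for group in ("black", "white"):
    fpr, fnr = error_rates(records, group)
    print(f"{group}: high risk but did not reoffend = {fpr:.0%}, "
          f"low risk but reoffended = {fnr:.0%}")
```

ProPublica’s point was that the two error types fall unevenly: one group bears more of the false “high risk” labels, while the other benefits from more false “low risk” labels.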
A similar program called PredPol was designed to make policing more efficient by showing a map of potential “crime hotspots” to deploy police officers to. However, this data was based on previous arrests. According to an article from The Guardian, the program “was suggesting majority black neighbourhoods at about twice the rate of white ones”, causing a “feedback loop” that deployed more police to already over-policed areas. Yet, when statisticians used national data to model the city’s likely drug use, it was far more evenly distributed across neighborhoods. According to Kristian Lum, Lead Statistician of the non-profit Human Rights Data Analysis Group: “If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate.”
Data-fed algorithms inject invisible biases into an exponentially growing number of facets of human life. Why? Because we’re the ones who build them. Amazon made headlines when it shut down an AI hiring tool it had built, which had been discriminating against women. The tool was trained on patterns of previous hires, which reflected those same biases. LinkedIn came under similar scrutiny when it was observed that searching for the name of a female contact could prompt the website to ask if you were actually looking for a similar male name. As the Seattle Times reported, searching “Stephanie Williams” produced a prompt asking “Did you mean Stephen Williams?”, despite there being 2,500 users named Stephanie Williams. The pattern repeated with “Andrea Jones” prompting “Andrew Jones”, Danielle prompting Daniel, Alexa prompting Alex, and so on. As Microsoft’s short-lived chatbot “Tay” vividly demonstrated, when self-learning AI is fed data created by individuals, it can become just as biased, offensive, and sadistic as human beings.
CHILDREN’S DATA
Child privacy violations happen constantly, and are shockingly underreported. When child privacy law violations are actually prosecuted, as we discussed in this article, the fines represent a minuscule fraction of the company’s revenue — not nearly enough to deter future infractions.
The Atlantic published an article profiling several “child influencers” with hundreds of thousands of followers. Perhaps the most unexpected and distressing takeaway was the fact that these children were unaware of their staggering online presence:
- “Ryker always knew that his mom liked taking pictures of him, but he was never explicitly aware that people actually saw them, Collette explained. […] Collette Wixom amassed more than 300,000 followers by posting photos of her sons.”
- “Because Vada is unaware of her own Instagram presence, Foos said she just tells Vada that the snapshots are a way of having fun with her mom. Foos always asks her daughter’s permission before snapping a photo and rarely shoots longer than five minutes.”
Sadly, that is not meaningful consent, because it is not informed consent. And even if these children were given the opportunity to provide informed consent, they are too young to understand the gravity of the long-term consequences; just ask any former child star. There is no reason these parents would be unable to tell their school-aged children about their social media presence — in fact, children can understand very complex concepts, and most child psychologists recommend speaking to them honestly (in an age-appropriate way), even about death and trauma.
Alarming Statistics:
Findings from a 2016 study of 2,000 parents conducted by The Parent Zone on behalf of Nominet:
- “On average parents post nearly 1,500 photos by their child’s fifth birthday — a 54% increase since last year [2015].”
- “Over a third of parents admit that over 50% of their Facebook friends are not ‘true friends’ that they would say hello to if they bumped into them in the street.”
- “The study found that 85% of parents last reviewed their social media privacy settings over a year ago and only 10% are very confident in managing them.”
- “45% of parents allow these Facebook ‘friends’ to view their posts, a further 20% allow Friends of Friends, and 8% have their posts completely open to everyone.”
According to research by AVG Technologies, as summarized by CNN:
- 82% of children under the age of two (among 10 Western countries) currently have some kind of digital profile or footprint, with images of them posted online.
- The highest rate is in the U.S., where 92% of children under the age of two have images of them posted online.
- Other national averages include 91% in New Zealand, 84% in Canada and Australia, and 43% in Japan – the only country in the study whose average was less than half.
Many children now begin their digital lives even before they are born: when parents upload their prenatal sonogram scans to the internet. Public birth announcements — which often include full name, location, and date of birth — can increase the risk of identity theft.
“It’s shocking to think that a 30-year-old has an online footprint stretching back 10-15 years at most, while the vast majority of children today will have online presence by the time they are two years old — a presence that will continue to build throughout their whole lives.”
JR Smith, CEO of AVG
DATA BREACHES & HACKING
Tons of mind-blowing hacks are currently possible, with more being introduced every day: everything from hacking lightbulbs to spy on conversations, to completely overriding the controls of newer vehicles, to shutting down utility grids. Recently, many laughed when it was revealed that Boeing 747s still receive critical updates via floppy disk. However, this is probably for the best. Keeping these systems off the internet is ideal, since theoretically anything connected to the internet is hackable. Until 2019, the United States’ arsenal of nuclear weapons still relied on a computer system that used floppy disks.
Think you have never been the victim of a data breach? Think again. Troy Hunt, a Microsoft Regional Director and international speaker on web security, developed Have I Been Pwned? as a free resource for individuals to quickly determine whether they’ve been compromised (“pwned”) in a data breach. Simply enter your email address and it will be compared against data recovered from breaches and leaks. As of this writing, the website boasts over 10 billion “pwned” accounts in its dataset. And those are just the ones they’ve been able to find.
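If you prefer to check programmatically, here is a minimal sketch (Python, standard library only) against the free Pwned Passwords range endpoint, a companion service from the same project. Thanks to its k-anonymity design, only the first five characters of the password’s SHA-1 hash ever leave your machine. Note that the separate breached-account API for email addresses requires an API key, so it is not shown here.

```python
# Check a password against the Pwned Passwords range API (k-anonymity model):
# only the first 5 hex characters of the SHA-1 hash are sent to the service.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<rest of hash>:<times seen in breaches>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

hits = pwned_count("password123")  # example string only; never hard-code real passwords
print(f"Seen in breaches {hits:,} times" if hits else "Not found in known breaches")
```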
Even if you trust the intent of the companies storing troves of your personal data, nothing connected to the internet is ever truly impenetrable to third parties. Mere months after the Cambridge Analytica scandal, Facebook admitted that third-party apps may have had access to private photos (from stories, and even photos people chose not to post) during a 12-day period in 2018, impacting up to 6.8 million users.
Our best advice: never upload anything to the internet that you wouldn’t feel comfortable having as the top search result under your name on search engines forever.
DNA & HEALTH
Some have theorized that data related to medical history and lifestyle choices (such as drinking or smoking) could affect future insurance premiums. While theoretically possible, legislation permitting it is unlikely to pass anytime soon. However, data is already impacting the insurance industry today: several major car insurance providers have launched apps that track your driving habits, including speed, rate of deceleration, distance driven, and cell phone use. But when you grant permissions for data points such as location and storage, that permission is not limited to when the app is in use, and there is ambiguity about what exactly the data is being used for.
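As an illustration of what such an app might do with those data points, here is a hypothetical sketch of a trip-scoring routine. The weights, thresholds, and penalties are invented; real insurers’ scoring models are proprietary and vary by provider.

```python
# Hypothetical telematics scoring sketch. All weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Trip:
    max_speed_kmh: float
    hard_brake_events: int     # decelerations above some g-force threshold
    distance_km: float
    phone_use_minutes: float   # handheld use while the vehicle is moving

def driving_score(trips, speed_limit_kmh=110):
    """Start from 100 and subtract penalties for risky behavior."""
    score = 100.0
    for t in trips:
        if t.max_speed_kmh > speed_limit_kmh:
            score -= 2.0 * (t.max_speed_kmh - speed_limit_kmh) / 10  # speeding penalty
        score -= 1.5 * t.hard_brake_events
        score -= 0.5 * t.phone_use_minutes
    return max(score, 0.0)

trips = [Trip(125, 2, 40.0, 3.0), Trip(100, 0, 12.5, 0.0)]
print(f"driving score = {driving_score(trips):.1f}")  # 92.5 with these invented numbers
```

The privacy question is less about the arithmetic and more about the raw location and sensor stream that has to be collected to feed it.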
Anne Wojcicki, 23andMe’s cofounder and current CEO, was married to Google cofounder Sergey Brin at the time of the company’s inception, and her sister Susan Wojcicki has been CEO of YouTube (also owned by Google) since 2014. Google — arguably the corporation with the single biggest data dossier on human beings to date — invested $3.9 million in 23andMe in 2007. Since then, 23andMe has taken investments from multiple medical companies, the largest being $300 million from GlaxoSmithKline, a pharmaceutical giant that intends to use the genetic information of 23andMe’s 5 million customers to develop new drugs. Dr. Arthur Caplan, Head of the Division of Medical Ethics at the New York University School of Medicine, told Time Magazine: “…any genetic privacy concerns also extend to your blood relatives, who likely did not consent to having their DNA tested.”
Consumers should be very wary of the intentions of any entity that wants access to personal health data, as it is regularly misused and inappropriately accessed. According to Verizon’s annual Data Breach Investigations Report, more than one third (34%) of data breaches in 2019 involved insiders. Luckily, in 2020 that number improved to 30%, though it appears to fluctuate from year to year. In the healthcare industry, a startling majority (59%) of 2019 data breaches involved internal actors — higher than in any other industry that year. There have been countless headlines about medical staff abusing their access to private medical records — most notably, those of celebrities in their care, or of their own acquaintances. By 2020, the healthcare industry had reduced its rate of breaches perpetrated by internal actors to 48%. Though still very significant, this likely represents an attitude shift away from inherent trust and toward more diligent restriction of internal employees’ access.
AUTONOMOUS KILLER ROBOTS
Let’s end on a positive note, shall we? Killer robots. While we already have killer drones, the small but important difference is clear: they are human-operated. But autonomous killer robots are already in development, and may present the single greatest threat to humankind.
In a very odd move for a company whose original motto was “Don’t be evil”, Google was working on a U.S. military project named Project Maven to interpret drone video footage. Four thousand of its employees signed a letter asking the company to back away from the project, fearing the technology would assist in drone strikes, and some even resigned. After being backed into a corner by its employees, Google decided not to renew the contract.
In South Africa, it is presumed that either a mechanical or software error was to blame for an incident in which a robot cannon killed nine soldiers and wounded fourteen others. There was a lot of online speculation – including several major publications writing contradictory stories – about whether or not the U.S. had a similar close call. U.S. Army Program Manager Kevin Fahey, during his keynote speech at the RoboBusiness Conference, referenced SWORDS — an armed unmanned ground vehicle (UGV) deployed in Iraq — and was quoted as saying that “the gun started moving when it was not intended to move” and that “once you’ve done something that’s really bad, it can take 10 or 20 years to try it again.” The vague answers that followed ignited a media storm, and over a decade later we still have very few answers about what exactly happened with SWORDS and why it was “yanked” from the battlefield.
Many experts have warned that the window for the UN to ban autonomous weapons is closing, after which it will be too late to reverse course. An open letter signed by thousands — including Stephen Hawking, Elon Musk, and Steve Wozniak — warns of an encroaching AI arms race and calls for a ban on offensive autonomous weapons beyond meaningful human control. According to the letter, which to date has been signed by 4,502 AI/robotics researchers alone: “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”
As discussed previously, facial recognition is far from infallible, and most frequently misidentifies women and people of color. Therefore, there’s an immense risk of innocent people being killed by autonomous weapons using flawed recognition algorithms.
END NOTES
In today’s world, we are constantly bombarded with too many alarming headlines to read, and most publications cover only one topic at a time, which gets lost in the haze. The goal of this lengthy piece was to do the time-consuming research, gathering as much information as possible about current and emerging technologies, and to summarize it for our diligent readers. When looking at the larger picture, it becomes clear that we cannot possibly fathom all of the future implications of these technologies. We must demand more comprehensive privacy legislation, scrutinize the implementation of life-altering programs, and choose not to support corporations whose practices we do not agree with. At the individual level, we encourage you to re-evaluate your relationship with tech, share your knowledge with others, and take steps now to reclaim your privacy and prevent future consequences.