If there’s one issue that has preoccupied law-makers, politicians, consumers, researchers, and consumer rights bodies in 2018, it is privacy. In dictionary terms, privacy means “the quality or state of being apart from company or observation.” 2018 has brought privacy to the forefront in regard to data sharing, location intelligence, and facial recognition technology. While our minds have reeled at the advancements of technology and what they might mean for our lives in future years, they’ve also lurched at the realization that privacy in most instances is nothing but a dream, and that as a user you are the product, not the customer.
The stark reality is that most social media users do a pretty good job of eroding their own privacy without any effort from nefarious forces behind the scenes: they tag places they regularly visit, take pictures of their children in school uniforms, and share their sleep and running schedules. They make political statements online and indicate attendance at political events. People even mark when they are on holiday (and their homes are presumably vacant) and photograph boarding passes prior to boarding planes.
2019: A Year of Reckoning
But this year, we’ve seen personal data ripe for collection in everything from shopping cart handles to digital pills. We’re in an era of data collection and surveillance — whether we opt in or not, in many instances. Hands up who watched Charlie Brooker’s latest Black Mirror offering, the movie Bandersnatch? In case you haven’t seen it yet, it’s an interactive movie in ‘choose your own adventure’ style, requiring the viewer to choose between two options to continue watching. Not such a big deal, unless you consider that each decision — whether to die or let someone else die; which breakfast cereal you prefer; or whether you decide to attend a medical appointment or go with Colin, for example — is recorded and owned by Netflix. What will they do with this data and who might they share it with?
Our data is collected through Internet browsers, email, loyalty cards, weather apps, online maps, sat nav, and wearables. An investigation by The New York Times revealed that at least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news, weather, and other information. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half of those in use last year. This tracking captures people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.
Data Monetization Is Evolving
We’re evolving toward a ‘Machine Economy’ where Internet-connected devices will trade everything from storage and computation/analytics to electricity and sensor data. Companies that collect data for their own internal purposes, such as predictive maintenance analytics, product development, and UX, are recognizing that this data can have a second life through monetization, where it is sold to third parties such as local municipalities, town planners, device makers, advertisers, and researchers.
One example is smart city provider Streetline, which collects and collates numerous types of data, from in-ground sensors to traffic cameras to Wi-Fi connections, piecing together this information to create a map of parking spaces that signals whether spots are occupied or vacant. Streetline earns revenue from both municipal governments and drivers using the service. From individuals, it can harvest the data of people using the app to find parking, thus improving the overall accuracy of the system’s parking maps. From municipalities, it collects a monthly service fee along with an up-front installation cost of about $200 per parking space for a wireless sensor that tells the city government whether a vehicle is parked there. Municipalities can recoup these costs through more efficient management of city parking.
An Israeli startup, Otonomo, has built the first connected car database that collects, packages, and sells data to insurers, retailers, city planners, and others willing to pay for it. Otonomo’s marketplace packages car-generated data parameters into data bundles. Service providers can subscribe to these bundles and receive aggregated anonymized data, as well as data from specific car owners (pending the OEM’s and car owner’s approval). In addition, they can get reports, analytics, and notifications tailored to their specific needs. In return, Otonomo takes a percentage of sales. More than 2 million cars are already on its platform. Given the sheer number of data points in the vehicles of today — and of the future — it’s clear big data means big profits.
I believe that, as consumers, we have a right to control what data is collected about us, how it is collected, and with whom it is shared. It’s worth considering what we share, with whom, and what it’s worth. We’re starting to see the monetization of personal data. There are even coffee shops where college students give away their names, phone numbers, email addresses, college majors, and other details to a recruitment company in exchange for free coffee.
Your Data Isn’t as Anonymous as You Think
The biggest problem arises when seemingly disparate pools of technically anonymized data can be combined to identify people with a high degree of accuracy. A recent study by MIT researchers on the growing practice of compiling massive, anonymized datasets about people’s movement patterns revealed how this can happen. It was the first-ever analysis of so-called user “matchability” across two large-scale datasets from Singapore, one from a mobile network operator and one from a local transportation system. A statistical model tracked location stamps of users in both datasets and produced a probability that data points in both sets came from the same person. In experiments, the researchers found the model could match around 17 percent of individuals with one week’s worth of data and more than 55 percent of individuals after one month of collected data.
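To make the idea concrete, here is a minimal, hypothetical sketch of cross-dataset matching. It is not the MIT researchers’ model — their approach was a proper statistical one — but it illustrates the core intuition: if two “anonymized” datasets record (location, time) stamps, users whose stamps overlap heavily are probably the same person. All dataset names and IDs below are invented for illustration.

```python
def match_users(dataset_a, dataset_b):
    """Toy cross-dataset matching.

    Each dataset maps an anonymous user ID to a set of (location, hour)
    stamps. For every user in dataset_a, find the user in dataset_b whose
    stamps overlap most, scored by Jaccard similarity of the stamp sets.
    """
    matches = {}
    for user_a, stamps_a in dataset_a.items():
        best_user, best_score = None, 0.0
        for user_b, stamps_b in dataset_b.items():
            overlap = len(stamps_a & stamps_b)
            union = len(stamps_a | stamps_b)
            score = overlap / union if union else 0.0
            if score > best_score:
                best_user, best_score = user_b, score
        matches[user_a] = (best_user, best_score)
    return matches

# Two hypothetical "anonymized" datasets: telco IDs vs. transit-card IDs.
telco = {
    "T1": {("stationA", 8), ("mall", 12), ("home", 22)},
    "T2": {("stationB", 9), ("office", 10)},
}
transit = {
    "C9": {("stationA", 8), ("mall", 12)},
    "C4": {("stationB", 9)},
}

print(match_users(telco, transit))
# Telco user T1 pairs with transit card C9, and T2 with C4 — neither
# dataset contains a name, yet the identities link across them.
```

The real study worked at far larger scale and modeled noise and timing uncertainty, but the takeaway is the same: the more stamps collected per person, the higher the match rate — which is why matchability climbed from roughly 17 percent at one week to over 55 percent at one month.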
“In publishing the results — and, in particular, the consequences of de-anonymizing data — we felt a bit like ‘white hat’ or ‘ethical’ hackers,” co-author Carlo Ratti, a professor of the practice in MIT’s Department of Urban Studies and Planning and director of MIT’s Senseable City Lab, told MIT News. “We felt that it was important to warn people about these new possibilities [of data merging] and [to consider] how we might regulate it.”
We can expect to see judicial and legal proceedings over data breaches in 2019. In December, the attorney general of the District of Columbia announced a lawsuit against Facebook over the Cambridge Analytica scandal. The FTC has also been investigating whether Facebook’s relationship with Cambridge Analytica — and its handling of users’ data — violated a 2011 agreement brokered by the U.S. government that required the tech giant to improve its privacy practices or risk major fines. The penalty, which is assessed based on the total number of violations and users affected, could foreseeably reach into the billions of dollars. However, some question how motivated the FTC is to respond to the outcry, and I wonder how regulators will be able to make tech companies kneel to any real restrictions on the sharing of personal data when those same companies make the technology to do so in the first place. It’s also worth remembering that last year Facebook was fined a mere £500,000 by the UK’s Information Commissioner’s Office in the wake of the Cambridge Analytica scandal, an amount that offers little but window dressing.
It will be interesting to see the impact of GDPR on data sharing practices. In theory, the GDPR only applies to EU citizens’ data, but the global nature of the Internet means that nearly every online service is affected. Some companies have chosen to make their sites and services no longer available to those in Europe, including a number of US regional newspapers, clothing company Modcloth, Instapaper, and Unrollme.
What is clear is that we can expect to see more privacy breaches in 2019, with the onus very much on consumers to be active stewards of their own data privacy — what data is collected about us, how it is collected, and with whom it is shared. I predict that the increasing use of facial recognition technology will raise ethical issues about privacy and surveillance that we’ve largely consigned to China. I also predict we’ll see a growth in data brokerage services for personal data (those in existence, like IOTA’s efforts, are largely focused on business data). Our data is valuable, and we have to decide as consumers whether we value privacy or convenience more, and what we will need to give up to regain our privacy.
This story is published in Noteworthy, where 10,000+ readers come every day to learn about the people & ideas shaping the products we love.