The ongoing Cambridge Analytica story and last week’s congressional hearings with Facebook’s CEO are really about one thing: everybody’s waking up to the meaning of “big data.”
Here’s what we mean, if you’re interested in talking with your students about all this. It’s a big story not just because of “fake news” or because Facebook has more than 2.2 billion users or because Cambridge Analytica may have affected a presidential election. It’s big because it’s personal now. People weren’t going to understand the implications of big data and the technology behind it until Facebook was in the story. Although there are countless retailers, banks, publishers, campaigns, governments and bad actors benefiting from big data, Facebook brought it home to everybody because the data there is so visibly us – our own and our friends’ and loved ones’ everyday likes and lives, in all our own words, photos and videos, posted by us.
This is probably a story we’ll be discussing with our grandchildren as well as our children and students. So here are some talking points for classroom and informal conversations about this pivotal moment:
First, what is “big data”? Well, the dictionary definition is: “extremely large data sets that may be analyzed computationally [as with machine learning] to reveal patterns, trends, and associations, especially relating to human behavior and interactions.” Data is just information, and it comes in all kinds of forms: text, numbers, photos, videos, etc. Not all of it needs to stay private, but what we’re finding out is that it’s hard to know how much people and companies can learn about us when the kind of data that’s fine to make public gets blended with other data that’s stored or private. That unknown concerns us, which is why we’re hearing more and more calls for “transparency.” So you can tell from the definition that “big data” is about a whole lot more than lots of information; it’s about what can be discovered from the data more than the data itself. That can be all kinds of things, good and bad, from banks being able to find patterns of fraud to governments stopping infectious diseases from spreading to companies like Cambridge Analytica using people’s information to create and place ads aimed at getting people to vote a certain way.
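For the technically curious, here’s a toy sketch of that blending problem. Everything in it – the field names, handles and records – is invented for illustration; the point is just that two datasets that each seem harmless on their own can be joined on a shared field to build a richer profile than either reveals alone.

```python
# A toy illustration (all data invented): blending two "harmless"
# datasets can reveal more than either one does on its own.

public_profiles = [
    {"handle": "@jo_runs", "city": "Austin", "likes": ["running", "self-help books"]},
    {"handle": "@pat88", "city": "Boise", "likes": ["fishing", "camping"]},
]

quiz_responses = [  # say, answers collected by a personality-quiz app
    {"handle": "@jo_runs", "worries_about_appearance": True},
    {"handle": "@pat88", "worries_about_appearance": False},
]

# Joining the datasets on the shared handle builds a combined profile
# that neither dataset contains by itself.
merged = {}
for row in public_profiles + quiz_responses:
    merged.setdefault(row["handle"], {}).update(row)

for handle, profile in merged.items():
    print(handle, "->", profile)
```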
So is social media big data? It’s only part of it – the very visible part that regular people like us contribute to. When we post comments, photos and videos, “like” others’ content, click on ads, buy things online, visit other sites, etc., we’re adding all kinds of information (called “psychographic data,” which I’ll explain in a minute) to the databases at social media companies. Sometimes that data ends up elsewhere too – unethically, criminally or just mistakenly – as happened with Cambridge Analytica, which bought some 87 million people’s data from a researcher who Facebook says violated its policies. Facebook doesn’t sell data to other companies, it says; the way it makes money is from advertisers who, based on our detailed data in its ad placement system, place their ads on the pages of the users most likely to respond to them (and maybe buy the thing being advertised). Does that make sense? All that detailed information we share – and the technology I’ll tell you about in a minute – makes it possible for advertising to be more relevant, or more “highly targeted,” than ever before in the history of advertising, which makes it more valuable than ever to advertisers (because it’s more likely to lead to a purchase). Some companies, called data brokers, do sell our data, so that the buyers have even more information to help them get even better at placing ads that make us want to buy stuff.
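Here’s a deliberately simplified sketch of the targeting idea. All the user names, interests and the scoring rule below are invented – real ad systems use vastly richer signals – but it shows the gist: score each user’s recorded interests against an ad’s keywords and show the ad to the highest scorers.

```python
# A toy relevance score (all data invented): the ad "targets" the users
# whose recorded interests overlap most with its keywords.

users = {
    "user_a": {"running", "fitness", "self-help books"},
    "user_b": {"fishing", "camping"},
    "user_c": {"camping", "running"},
}

ad_keywords = {"running", "fitness"}

def relevance(interests, keywords):
    """Fraction of the ad's keywords that appear in a user's interests."""
    return len(interests & keywords) / len(keywords)

# Rank users by relevance and show the ad to the top scorers.
ranked = sorted(users, key=lambda u: relevance(users[u], ad_keywords), reverse=True)
print(ranked)  # ['user_a', 'user_c', 'user_b']
```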
What else makes up big data? Just about every kind of information we share anywhere – by playing online games, filling out online forms, taking online quizzes, setting up accounts in apps, banking online, shopping online, sending emails, taking out car loans, sharing health information, searching for information, and so on. Some of that information sits in separate databases or data centers, and sometimes big chunks of it get combined and sold – or stolen by criminals who hack in to take and resell our information. That’s why we hear about “data breaches” in the news, for example at credit bureaus that store all kinds of valuable information about us.
What technology made that possible? A number of tech developments, of course, starting with the Internet and digital technology moving so much of the world’s information off paper and onto digital devices, then connecting so many of those devices. But what helped “big data” take off from that foundation was a small set of tech developments about 10 years ago: 1) the ability to store almost unlimited data on a huge number of computers, connect them all together and search all that data as if it were on a single computer, 2) machine learning, which started earlier but really took off when fueled with all that data, so that it could detect patterns and “discover” things that couldn’t be “seen” before, and 3) the ability to do all that with all kinds of data – the old demographic kind that advertisers had used for a long time and a new, more random or unstructured kind called “psychographic” data.
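To make development #1 a little more concrete, here’s a toy sketch. The three “shards” below (plain Python lists standing in for three machines) are invented; the point is that data spread across many computers can be searched as if it all lived in one place.

```python
# A toy "distributed" search (all data invented): three shards stand in
# for three machines; querying them together feels like one big table.

shard_1 = [{"user": "a", "likes": "fishing"}, {"user": "b", "likes": "running"}]
shard_2 = [{"user": "c", "likes": "fishing"}, {"user": "d", "likes": "yoga"}]
shard_3 = [{"user": "e", "likes": "running"}, {"user": "f", "likes": "fishing"}]

def search_all(shards, field, value):
    """Search every shard and merge the results, as if it were one dataset."""
    return [row for shard in shards for row in shard if row[field] == value]

print(search_all([shard_1, shard_2, shard_3], "likes", "fishing"))
# -> the rows for users a, c and f, gathered from all three "machines"
```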
So about psychographic data: Up until around the time social media started to take off, also in the middle of the last decade, advertisers, political campaigns and others were mostly targeting us with the demographic data I mentioned above – information like age, gender, single/married, household income, geographic location, memberships, etc. Psychographic data is more random: for example, whether a person collects things, worries about their appearance, feels family’s important, likes fishing, works out, attends worship services, buys self-help books, etc. It’s the kind we post in social media. According to news reports (including this one at the New Yorker), it’s the kind of data a Cambridge University researcher collected through a quiz app he created on Facebook – data he then sold to Cambridge Analytica.
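One way to see the difference is side by side. The two records below are hypothetical – every field and value is made up – but they show the shape of each kind: demographic data fits tidy, predictable categories, while psychographic data is a looser grab-bag of interests, habits and attitudes.

```python
# Invented example records: the demographic one fits standard categories;
# the psychographic one is the looser kind we reveal on social media.

demographic_record = {
    "age": 34,
    "gender": "F",
    "marital_status": "single",
    "household_income": 62000,
    "zip_code": "78704",
}

psychographic_record = {
    "collects_vinyl": True,
    "worries_about_appearance": False,
    "family_oriented": True,
    "hobbies": ["fishing", "working out"],
    "attends_worship": "weekly",
    "recent_reads": ["self-help"],
}

print(sorted(demographic_record))    # tidy, predictable categories
print(sorted(psychographic_record))  # a looser grab-bag of traits
```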
Is that why everybody’s so worried about Cambridge Analytica? Well, there’s more to that part of the story. C.A. is based in the UK, and the British government is investigating what it did with voter data and whether it violated British law. Facebook will conduct its own investigation as well, its CEO Mark Zuckerberg said in the congressional hearings last week. But beyond that, C.A.’s parent company, SCL, has been called a “military contractor” by a US professor named David Carroll, who is suing the Cambridge Analytica subsidiary to force it “to disclose how it came up with the psychographic targeting profile it had on him,” according to Columbia Journalism Review. Carroll says SCL has worked or is working with political campaigns in countries all over the world, using the same technique of blending demographic and psychographic data to see if it can influence election outcomes to benefit its clients.
What do we do about all this? That’s not clear yet. Some companies, such as Kik Messenger up in Toronto, are working on new business models (see TechCrunch and Coinbase), because big data is making the old free-content-paid-for-by-advertising model of the network TV era feel threatening. Some people think there needs to be regulation. But of what? If it’s regulation of social media companies, then, as I wrote in my last post, before that happens we all – and especially policymakers – need to understand that companies like Facebook, Google and Twitter are now social institutions that need to be accountable to more than just shareholders. They’re not just tech companies, media companies, or even some blend of those (Claire Wardle, a scholar I cited here, called them “a hybrid form of communication”). Interestingly, even Mark Zuckerberg told lawmakers last week that, though the details are important, he’s not against regulation. But, given the pace of technological change, any new laws will at least need expiration dates. And, as I hope was pretty clear above, this isn’t just about social media or elections or “fake news,” so regulation can’t focus only on those. This is about “big data,” which is about even more than our data privacy. It’s about how we maintain the safety and integrity of our identities, institutions and other things that matter to us in this ever more connected world.
It’s a puzzle, but we’ve got this. We will figure this out. We do need lots of perspectives and skill sets in the conversation – all the stakeholders, including tech and social media companies, which are doing some waking up of their own. New business models are entering the scene. Old-school adversarial and exclusionary approaches will only slow the process down. So will messages that claim technology users, including children, are just technology’s victims. We need to think critically not only about how technology is affecting us but also about claims that it’s hijacking our brains. As attorney Mike Godwin, who was the Electronic Frontier Foundation’s first staff counsel, put it in a recent discussion on Facebook, “My big question for those who believe Facebook has overcome the free will of 2 billion people: How did all of you escape?”
And if you’re talking with young people about all this, please don’t forget that Facebook, Twitter and other social media are also organizing and mobilizing tools as well as platforms for young activists and many others around the world.