The Naked Future


“A thorough yet thoroughly digestible book on the ubiquity of data gathering and the unraveling of personal privacy.” —Daniel Pink, author of Drive

Thanks to recent advances in technology, prediction models for individual behavior grow more sophisticated by the day. Whether you’ll marry, commit a crime or fall victim to one, or contract a disease are becoming easily accessible facts. The naked future is upon us, and the implications are staggering.

Patrick Tucker draws on fascinating stories from health care to urban planning to online dating. He shows how scientists can predict your behavior based on your friends’ Twitter updates, anticipate the weather a year from now, figure out the time of day you’re most likely to slip back into a bad habit, and guess how well you’ll do on a test before you take it.

Tucker knows that the rise of Big Data is not always a good thing. But he also shows how we’ve gained tremendous benefits that we have yet to fully realize.

“Thought-provoking, eye-opening, and highly entertaining.” —Ray Kurzweil, author of How to Create a Mind

Patrick Tucker is the technology editor of Defense One and the former deputy editor of The Futurist magazine. His writing has also appeared in Slate, Technology Review, The Wilson Quarterly, and The Utne Reader, among other outlets. He lives in Baltimore, Maryland.

INTRODUCTION

IMAGINE waking up tomorrow to discover your new top-of-the-line smartphone, the device you use to coordinate all your calls and appointments, has sent you a text. It reads:

Today is Monday and you are probably going to work. So have a great day at work today!—Sincerely, Phone.

Would you be alarmed? Perhaps at first. But there would be no mystery where the data came from. It’s mostly information that you know you’ve given to your phone.

Now consider how you would feel if you woke up tomorrow and your new phone predicted a much more seemingly random occurrence:

Good morning! Today, as you leave work, you will run into your old girlfriend Vanessa (you dated her eleven years ago), and she is going to tell you that she is getting married. Do try to act surprised!

What conclusion could you draw from this but that someone has been stalking your Facebook profile and knows you have an old girlfriend named Vanessa? And that this someone has probably been stalking her profile as well and spotted her engagement announcement. Now this ghoul has hacked your calendars and your phone!

Unsure what to do, let’s say you ignore it for the time being. But then, as you’re leaving work, the prophecy holds true and you pass Vanessa on the sidewalk. Remembering the text from that morning, you congratulate her on the engagement. Her mouth drops and her eyes widen with alarm.

“How did you know I was engaged?” she asks.

You’re about to say, “My phone sent me a text,” but you stop yourself just in time.

“Didn’t you post something to your Facebook profile?” you ask.

“Not yet,” she answers and walks hurriedly away.

You should have paid attention to your phone and just acted surprised.

This scenario is closer to reality than you might think. In fact, the technology and data already exist to make it happen. We give that data away to retailers, phone companies, the government, social networks, and especially our own phones without realizing it. In the next few years that data will become more useful to more people. This is what I call the naked future.

The capital-F Future was born of the Enlightenment-era notion of progress, the idea that the present—in the form of institutions, products, fashions, tastes, and modes of life—can and must be continually reformed and improved. This is why our interaction with the future as groups and as nations is an expression of both personal and national identity. As a public idea, the future shapes buying, voting, and social behavior. The future is an improved present, safer, more convenient, better managed through the wonders of technology and invention.

But the future—in the form of intention—is also an incredibly private idea. Your future, whether it’s what you’re going to do tonight, next year, or the next time you’ve got a thousand bucks to burn, is invisible to everyone but you. We are jealous guards of the personal, secret future, and with good reason. Imagine if any act you were going to commit was laid bare before the world, how naked you would feel.

In the next two decades, we will be able to predict huge areas of the future with far greater accuracy than ever before in human history, including events long thought to be beyond the realm of human inference. The rate at which we can extrapolate meaningful patterns from the data of the present is quickening as rapidly as the Internet itself is spreading, because the two are inextricably linked. The Internet is turning prediction into an equation. Mathematicians, statisticians, computer scientists, marketers, and hackers are using a global network of sensors, software programs, information collection devices, and apps to reveal in ever-greater detail the effects of our perpetual reform on the world around us. From programs that chart potential flu outbreaks to expensive (yet imperfect) “quant” algorithms that anticipate bursts of stock-market volatility, computer-aided prediction is everywhere.

Big Data Is Dead. Long Live Big Data

Between November 2010 and February 2013, the number of queries related to the term “big data” jumped by a factor of twenty-nine. That means that if big data were a country that grew every time someone searched for it on Google, it would be the size of the United Kingdom in 2010 and the size of Australia just three years later. It’s a hot topic, but it’s also a phrase that means something different depending on who is trying to sell you what. A couple of years ago, the term referred to data sets so large that the owners of those sets couldn’t derive any insight from them. Big data was a euphemism for unstructured and unworkable bits of information locked away in servers, or worse, on paper. This quality of bigness made those little values on spreadsheets effectively valueless. No more. Go to any IT conference today and you’ll find rooms full of vendors so eager to work with your big data they will be unable to refrain from shoving flash drives into your pockets. Large companies and the government now work with big data all the time.

On February 16, 2012, the phrase “big data” made an evolutionary leap with the publication of a piece by Charles Duhigg in the New York Times. The article exposed how the retail chain Target used records of millions of transactions (and information from its baby registry) to draw a correlation between the purchase of various common items, such as unscented baby lotion, and pregnancy. When Target began sending coupons for baby supplies to customers who, it had statistically deduced, were in a family way, one customer’s father had a fit, demanded an explanation, and realized that a soulless company with a lot of records had discovered something extremely intimate about his daughter before she had had a chance to break the news to him. The story was picked up on The Colbert Report and The Daily Show, and was repeated in blogs and news stories around the world. Big data went from a boring business idea to a menacing force for evil. It was a secret, prescient statistical power that enormous institutions used against the rest of us. The Guardian newspaper’s 2013 revelations about the scope and power of the NSA to surveil communications among U.S. citizens only added to this narrative. We feel we have arrived at an age in which our devices communicate about us in a language we cannot hear to parties we cannot see. Big data belongs to them, not us. We are its victims.

This view of big data is not entirely incorrect. As you’ll find in this book, companies, emboldened by new capabilities, are eager to use the enormous data sets they’ve amassed to squeeze more money out of their present and future customers. Governments, too, are using big data to do more with less, which is fine—as long as you approve of everything the government does.

But the view of big data as a dark force available only to large institutions is limited. Big data will shrink, becoming small enough to fit inside a single push notification on a single user’s phone. Most of what we understand about it represents its past, when it was solely a capability that the powerful used to gain leverage over the weak. The future of this resource is incredibly open to consumers, activists, and regular people. But big data is only one piece of a larger trend that’s reshaping life on this planet and exposing the future.

With very little fanfare, we have left the big data era and have entered the telemetric age, derived from the word “telemetry”: “The process or practice of obtaining measurements in one place and relaying them for recording or display to a point at a distance. The transmission of measurements by the apparatus making them.”1 Telemetry is the collection and transfer of data in real time, as though sensed. If you’ve ever been in a hospital and had an EKG, ECG, or any sort of monitoring device attached to you, if you’ve ever been able to see your cardiac activity displayed heartbeat for heartbeat with the knowledge that that data stream was also reaching the nurse down the hall, possibly even your doctor on his smartphone, then you’ve experienced telemetry. The reach and power of telemetry is what separates the less predictable world in which we evolved our humanity from the more predictable one in which that humanity will grow and be tested.

Telemetry is what divides the present from the naked future.

Sensors, cameras, and microphones give computer systems a way to collect information about their—and our—shared environment, and these systems are developing perceptions that far exceed our own. Much of what we do, how we live, how we interact with institutions, organizations, and one another takes place online, is readable telemetrically, and leaves clues about where we’ve been and where we’re going. When you make an appointment and save it to the calendar application on your iPhone, when you leave your house and set a home alarm that dialogues directly with your city’s police department, when you activate your phone’s GPS, when you use your debit-procured Metrocard to access the subway and then use a radio frequency identification (RFID) enabled security tag to enter your office, you’ve created a trail that’s transparent to anyone (or anything) with access to the servers and hard drives on which that data is stored. How big is that trail? Between checking your phone, using GPS, sending e-mail, tweets, and Facebook posts, and especially streaming movies and music, you create 1.8 million megabytes a year. It’s enough to fill nine CD-ROMs every day. The device-ification of modern life in the developed world is the reason why more than 90 percent of all the data that exists was created in just the last three years.2 Most of this is what’s called metadata: bits of information that you create (or your devices make on your behalf) through your digital interactions. Only about 10 percent is ever stored permanently and very little of it affects you directly, but all of it says something about you. And it’s growing exponentially. There will be forty-four times as much digital information in 2020 (35 zettabytes) as there was in 2009 (0.8 zettabytes), according to the research group IDC.3
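As a rough sanity check, the per-day figure quoted above follows from simple arithmetic; the only assumption here is a disc capacity of roughly 550 MB, one of the early CD-ROM standards.

```python
# Checking the quoted figures: 1.8 million megabytes a year comes out
# to about nine ~550 MB CD-ROMs per day.
mb_per_year = 1_800_000
mb_per_day = mb_per_year / 365      # roughly 4,930 MB each day
cds_per_day = mb_per_day / 550      # roughly 9 discs
```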

We think of each of these actions—the making of an appointment, the purchase of that fare through your subway fare card, the swiping of that RFID-enabled security badge—as separate ones of no real consequence to us, as big data. Think of that data instead as sensory data, as pinpricks that can be felt or sounds that can be heard like musical notes. The little actions, transactions, and exchanges of daily life do have a rhythm after all, and correspond to one another in a manner not unlike a melody. If you’re like most people, your life has a certain routine: you leave for work at the same time each day; you shop at the same stores on your lunch hour; you take the same route home. Any tune composed of a repetitious sequence of notes becomes predictable. With sensors, geographic information systems, and geo-location-based apps, more of those notes become audible.

You’ve probably never heard this song. In the big data present, it’s distant corporate, market, and government forces that pick up the sound of our metadata. But this book isn’t about the present. In the naked future the song is audible to everyone. The devices and digital services that we allow into our lives will make noticeable to us how predictable we really are.

The different ways we relate to the future publicly and personally will fundamentally change as a result of the fact that we will be making far more accurate and personal predictions. Huge areas of the future will be exposed. It will truly be a naked future.

The Future App

Throughout this book I refer to various hypothetical programs or apps like the one at the start of this introduction. These could be cloud-based programs we access on our smartphones, augmented-reality headsets, Microsoft brain implants (the blue screen of death would be literal in this sense), or any future platform. Although there are several apps such as Osito and Google Now that already use personal data to deliver customized predictions, most of the future-predicting apps in this book are made up. What they represent is the end point where telemetric data combine with processing to present an end user with a snapshot of the future. Though future apps descend from big data, just as we descended from earlier humans, what they represent is something very different: an individual answer or solution to a unique, personal problem.

Predictability based on an abundance of personal data rises almost in direct proportion to the erosion of that data’s privacy. So how do we protect our privacy in the digital age?

In researching this book, I talked to people at Google, Stanford, MIT, Facebook, and Twitter; I hung out with hackers, entrepreneurs, scientists, cops, spies, and a billionaire or two. I was amazed by the promise of the telemetric age. I’m a future junkie. I get excited listening to smart people with world-changing ideas because if I didn’t, I would be a pretty poor science journalist. But when I shared my experiences with friends, family, and colleagues and listened to their point of view, I realized that my reaction was not typical. Where I saw a thrilling and historic transformation in the world’s oldest idea—the future—other people saw only Target, Facebook, Google, and the government using their data to surveil, track, and trick them. They were firmly planted in the big data present, in which it is us against them. They all had the same question: What can you do to prevent all of this from happening?

The threat of creeping techno-totalitarianism is real. But the realization of our worst fears is not the inevitable result of growing computational capability. Just as the costs of using big data have decreased for institutions, those costs will continue to trend downward as systems improve and as consumer services spring up in a field that is currently dominated by business-to-business players. The balance of power will shift—somewhat—in favor of individuals. Your phone may be from Apple; your carrier may be AT&T; your browser may be Google; but your data is yours first because you created it through your actions. Think of it not as a liability but as an asset you can take ownership of and use. In the naked future, your data will help you live much more healthily, realize more of your own goals in less time, avoid inconvenience and danger, and, as detailed in this book, learn about yourself and your own future in a way that no generation in human history ever thought possible. In fact, your data is your best defense against coercive, Target-like marketing and perhaps even against intrusive government practices. Your data is nothing less than a superpower waiting to be harnessed.

We still have choices to make. I’ll discuss some of the forms those choices will take. But the worst possible move we as a society can make right now is to demand that technological progress reverse itself. This is futile and shortsighted. We may be uncomfortable with the way companies, the NSA, and other groups use and abuse our information, but that doesn’t mean we will be producing less data anytime soon. As I mentioned earlier, according to the research group IDC there will be forty-four times as much digital information in 2020 as there was in 2009.4 You have a clear choice: use your data or someone else will.

This is not a book about a change that is going to happen so much as a change that has already occurred but has yet to be acknowledged or fully felt. This is not a declaration of independence from corporate America, the government, or anything else. It’s the record of our journey to this new place: the naked future.

CHAPTER 1

Namazu the Earth Shaker

THE date is April 12, 2011. I’m on a highway in the Japanese prefecture of Fukushima, home to a now infamous nuclear power plant that’s in the process of melting down. I’ve just left the city of Ishinomaki, where I was covering relief efforts that began following last month’s earthquake and tsunami, and I’m now heading back to Tokyo. In the car with me are two Japanese fishermen who speak no English, an Australian fireman named Simon, a British reporter stringing for a newspaper out of the Middle East, and a Japanese relief coordinator. Our route is taking us well within the eighty-kilometer “evacuation zone” that the U.S. government has advised its citizens to stay the hell out of. None of us have any illusions that it’s safe to be here. For this reason, and because we’re behind schedule, we’re driving extremely fast.

Everyone on this road is driving fast.

Suddenly, a loud, sirenlike noise tears through the car’s interior. Simon pulls his walkie-talkie from his Gore-Tex jacket. A bright red light cries out in distress at rhythmic intervals.

“Pull over,” Simon commands. The driver applies the brakes, not exactly slamming them but not gradually depressing them, either, and steers the car to the side of the road. Like a surreal piece of choreographed theater, every other car on the road also slows and banks.

A moment later, we feel the ground beneath us rise and fall. This is a 6.0 tremor, large enough that—had we been traveling at our previous speed of more than eighty miles per hour—we likely would have crashed. The fishermen, Simon, the car’s other occupants, and I look around at one another. We share a silent acknowledgment that we have just barely avoided a terrible accident.

I’m alive today thanks in part to Japan’s Earthquake Early Warning (EEW) system, a network of more than four thousand seismographic sensor stations.1 These devices detect the low-level initial tremors called primary waves or P-waves that are released by seismic activity. An earthquake’s P-wave telegraphs the size of the secondary wave or S-wave, the tremors that crash cities and bring the fury of the sea to shore. The system takes those signals as input and issues an alert as output; that output is what set off Simon’s phone.

The alert is issued automatically the second that the seismometer detects the signal and transfers it to headquarters.
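The detect-then-alert logic described above can be sketched in a few lines. Everything here is invented for illustration: the amplitude-to-magnitude formula and the alert threshold are hypothetical stand-ins, not the real EEW system’s parameters.

```python
# Hypothetical sketch of P-wave early warning: the faster-moving P-wave
# arrives first, its amplitude is used to estimate the destructive
# S-wave's severity, and an alert is broadcast if a threshold is crossed.

def estimate_magnitude(p_wave_amplitude):
    """Invented linear mapping from P-wave amplitude to magnitude."""
    return 4.0 + 1.5 * p_wave_amplitude

def should_alert(p_wave_amplitude, threshold=5.0):
    """Broadcast the moment the estimated magnitude crosses the threshold."""
    return estimate_magnitude(p_wave_amplitude) >= threshold

assert not should_alert(0.2)   # minor tremor (est. 4.3): stay quiet
assert should_alert(1.5)       # strong P-wave (est. 6.25): sound the alarm
```

The point is the ordering, not the formula: because P-waves outrun S-waves, even a crude estimate issued instantly buys the seconds needed to pull a car over or duck under a desk.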

Because earthquakes are a frequent occurrence in Japan, the alarm now goes off so often it has almost become background noise. In the moments before the 2011 earthquake hit, television broadcasts across the country were briefly interrupted by a crisp, telephonic ringing. A bright blue box appeared on every television screen showing the eastern coast of Japan and a large red X offshore depicting the earthquake’s epicenter. In one of the eerier video clips that emerged from March 11, 2011, members of Japan’s parliament can be seen debating a piece of legislation. Because they’re accustomed to the signals, they’re slow to react to the warning at first. When they realize the size of the earthquake, they look nervously to the swinging chandeliers above them. The picture cuts to a flustered anchorman who warns of a possible tsunami off the coast of the prefecture of Miyagi.2

The Japanese have been applying creativity and resourcefulness to earthquake prediction for centuries. Historically, national myth held that earthquakes were caused by the movements of a giant catfish, or namazu, called the Earth Shaker. Though the idea seems ridiculous today, the Japanese took it very seriously at various points throughout their history. In 1592 the samurai warlord Toyotomi Hideyoshi issued what is perhaps the strangest building-code edict in history to the men constructing his castle in the Fushimi district of Kyoto: “Be sure to implement all catfish countermeasures.”

In the later Edo period, small catfish earned a reputation as earthquake predictors. Strange fish behavior was thought to be an indication that the giant namazu was on the prowl for mischief.

Today, the idea feels fanciful. Several centuries of steady scientific progress have taught us to look for concrete causal relationships in order to understand how one physical entity might influence another. We know that the earth’s tectonic plates are affected neither by subterranean fish, nor the position of the constellation of Cassiopeia, nor the current level of God’s wrath but by physical systems of enormous complexity and limited accessibility. Our understanding of the world through the lens of science suggests that P-waves indicate S-waves, but there exists no physical mechanism by which a catfish could know of an earthquake days in advance. Anecdotal evidence to the contrary proves only that humans have active imaginations, because catfish don’t predict earthquakes.

Turns out, they almost do.

One of the key triggers of large seismic events is the buildup of pressure between rock formations in the earth’s crust. This pressure also releases electrical activity and will do so days before large quake events. Loose “defect electrons” rise up through porous gaps in the earth’s crust; they ionize when they meet the air. Under the right circumstances, this can cause subtle hydrogen peroxide increases in certain fault lines proximate to bodies of water, making such bodies just a bit toxic to very sensitive marine fauna.

British zoologist Rachel Grant observed this phenomenon firsthand when hundreds of toads fled a pond near L’Aquila, Italy, in the days just prior to the enormous 2009 earthquake there. As Grant wrote in her paper, published in the Journal of Zoology, “Our study is one of the first to document animal behavior before, during and after an earthquake. Our findings suggest that toads are able to detect pre-seismic cues such as the release of gases and charged particles, and use these as a form of earthquake early warning system.”3

Catfish, like toads, have extremely sensitive skin. But unlike toads, they can’t abandon a body of water that’s becoming toxic. They can only thrash about or behave strangely, like the Earth Shaker.4

In his book The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t, statistician Nate Silver is rather hard on Grant. He suggests, though not explicitly, that she’s reached an insupportable conclusion, as her paper seems to assert that the observed toad behavior is “evidence that they [toads] had predicted the earthquake.” He describes her work as the sort of thing that “exhausts” real seismologists and notes dismissively, “Some of the stuff in academic journals is hard to distinguish from ancient Japanese folklore.”5

Silver is certainly a talented statistician deserving of the celebrity that’s been awarded him. He’s right to point out that history is littered with failed attempts to predict earthquakes, often by observing strange animal behavior. He’s also right to point out that statistical analysis of previous earthquakes is surely a far more useful signal than is toad behavior, at least for now.

But he’s misstating Grant’s intent. She’s not suggesting that the toad behavior is “evidence that they predicted the earthquake.” Neither the toads of L’Aquila, nor the catfish of Japan, nor even the EEW are actually predicting anything and Rachel Grant knows this perfectly well. These are feats not of prognostication but of detection. Grant and her colleagues acknowledge that testing the hypothesis outside a laboratory setting has thus far been impossible because they still don’t know when and where an earthquake will strike. And neither do the toads. When they’re in a pond with higher hydrogen peroxide levels they become uncomfortable and they leave. They are indifferent to earthquakes, to Nate Silver, and to the future.

It’s humans who predict things.

As we attempt to make use of this abundance of telemetric data, we’re going to make errors. One of the statistical traps Silver and other statisticians warn against is overfitting, or applying a specific solution to a general problem. In the case of earthquakes, this could mean watching toads rather than history because toad behavior lends itself to a very specific type of prediction method.

We are about to enter a golden age of overfitting, if such a thing can be said to exist. The sheer volume of data we now generate as individuals and institutions suggests that more people will be able to create more models with data points and observations that offer the false promise of certainty. We will model more and so we will make more errors, but an increase in modeling activity will not diminish the costs or consequences of those errors. Many small mistakes will feel extremely large, particularly in the context of international stock and commodities markets. Overfitting also speaks to an impulsivity that’s in our nature. We gravitate toward evidence, data, and facts that support a conclusion we’ve already reached or bolster the argument we’re trying to make. Finally, there’s enough data to lend some support to virtually any argument, no matter how crazy. To overfit is human.
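Overfitting is easy to demonstrate with a toy model. In the sketch below (all numbers invented), a cubic polynomial threads exactly through four noisy observations of a simple linear trend; it looks perfect on the data it has seen and fails badly on a point it has not.

```python
# Four noisy observations of the trend y = 2x. A cubic (via Lagrange
# interpolation) fits them perfectly; the plain trend does not, but it
# extrapolates far better.

def lagrange_interpolate(points, x):
    """Evaluate the unique polynomial passing through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

data = [(0, 0.2), (1, 2.3), (2, 3.8), (3, 6.1)]   # y = 2x plus noise

def simple_model(x):
    """The 'general' solution: just the underlying trend."""
    return 2 * x

# The cubic reproduces every training point exactly...
for xi, yi in data:
    assert abs(lagrange_interpolate(data, xi) - yi) < 1e-9

# ...but at x = 5, where the true value is about 10:
overfit_pred = lagrange_interpolate(data, 5)   # 18.7, wildly off
simple_pred = simple_model(5)                  # 10
```

The cubic’s zero training error is exactly the “false promise of certainty” described above: the extra flexibility fits the noise, not the signal.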

But the fact that pressure buildup releases electrical activity days before large seismic events is beyond dispute. It’s exactly the sort of predictor that could reliably indicate an approaching disaster, if only humanity could devise some cost-effective way to place millions of sensitive electron detectors deep beneath the earth’s surface near fault lines. It’s science fiction. But at one point, so was the idea of a sensor spiderweb that could detect P-waves.

We are turning our physical environment into a catfish.

A Global Nervous System Emerges

In 1988 a scientist at Xerox PARC named Mark D. Weiser put forward a novel vision for the future. Computer hardware, he said, would migrate from deskbound PCs to pads, boards, and “smart” systems that were part of the physical environment. The term Weiser gave this new sensing environment was “ubiquitous computing.”

This vision for the future says a lot about the man who came up with it. Weiser was not a typical computer hardware genius. Take a look at his informal writings and the accounts of people who knew him and you will not find a man who loved gadgets and code for their own sake, but someone motivated by a passion for actual experience: a sensualist, a devotee of skydiving and rock rappelling, and the drummer in a punk band called Severe Tire Damage. Through ubiquitous computing he imagined a future in which humans interacted with computers on an unconscious level, through regular activity; a future in which computers served to remove annoyances and answer questions like “Where are the car keys?” “Can I get a parking place?” and “Is that shirt I saw last week at Macy’s still on the rack?”6 while keeping us connected to what we care about. Computers weren’t supposed to get in our way, or be constantly in our hands, or be connected to our ears through shiny white earplugs, or demand that we answer their every chirp and bell. As they became better they were supposed to become more numerous but also disappear into the background.

A decade after his death, it’s the “ubiquitous” portion of Weiser’s ubiquitous computing vision that’s becoming reality for most of us. The total number of devices connected to the Internet first exceeded the size of the global human population in 2008 or so, according to Cisco, and is growing far faster.

Cisco forecasts that there will be 50 billion machine-to-machine devices in existence by 2020, up from 13 billion in 2013. Today, we call ubiquitous computing by another name: the Internet of Things.

For large institutional or corporate consumers of information, the spread of sensors and computer hardware across the physical environment amounts to better inventory tracking and customer targeting, which will help bottom lines. The Internet of Things can be found most immediately in the RFID tags that have made their way onto everything from enormous inventory pallets to the clothing labels that Swiss textile company TexTrace7 sews into American Apparel clothing to track shipments. Most RFID tags that we encounter today are small squares of paper, plastic, or glass containing a microchip and an antenna, at a cost of about twenty cents. The microchip holds information about the product (or thing the RFID is connected to). The antenna allows an RFID reader to access data on the chip via a unique radio signal. Unlike a simple printed bar or quick response (QR) code, the RFID tag doesn’t have to be directly under the reader to work. The reader need only be close by. This allows retailers to monitor the inventory in their store in something like real time. Some futurists have suggested that RFID could one day render the checkout station obsolete. In this future, when you saw a product that you wanted you would simply pluck it from the shelf and—so long as you had a user account or were identifiable to that store—walk out the door. The product’s RFID tag would tell the retailer the product had been purchased and your account would be debited. Sound far-fetched? Millions of Americans today buy access to toll roads through the dashboard-mounted RFID tags that are part of the E-ZPass system. The act of purchasing takes the form of a simple deceleration and a brief exchange of data between the RFID tag’s antenna and the tollbooth’s reader. And RFID is just one of the many smart or sensing tags and microchips that are making their way into our physical environment at rapidly decreasing cost.
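The checkout-free shopping described above can be mimicked with a toy model; the tag IDs, item names, prices, and function names here are all invented for illustration.

```python
# Toy sketch of RFID-based checkout: every tag read near the exit is
# charged to the shopper's account, much as an E-ZPass tollbooth debits
# a decelerating car.

catalog = {
    "tag-001": ("shirt", 20.00),
    "tag-002": ("baby lotion", 6.50),
}

def exit_reader(tags_in_range, account_balance):
    """Debit the account for each tagged item carried out the door."""
    for tag in tags_in_range:
        item, price = catalog[tag]
        account_balance -= price
    return account_balance

balance = exit_reader(["tag-001", "tag-002"], account_balance=100.00)
assert abs(balance - 73.50) < 1e-9   # both items charged, no checkout line
```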

For patients and graying baby boomers, the Internet of Things is ushering in a revolution in real-time medical care. It is alive inside the chest of Carol Kasyjanski, a woman who in 2009 became the second human being to receive a Bluetooth-enabled pacemaker that allows her heart to dialogue directly with her doctor.8 The first was former U.S. vice president Dick Cheney, who received one in 2007, but never activated the device’s broadcasting capability for fear of hackers.

The military uses the Internet of Things to do more with less. In Afghanistan it takes the form of the fifteen hundred “unattended ground sensors” that the U.S. Army is leaving littered across the Afghan countryside as the U.S. mission there winds down. These sensors, designed to pick up human movement, allow the Pentagon to eavesdrop on the countryside and detect how Afghans (or Pakistanis) are moving across it.

The Internet of Things is, quite simply, all of the computerized sensory information that can be gathered and transmitted in real time about what is happening right now. When machines gather it, we call it big data. When we gather it, we call it sensing.

In many ways, this expanding, computer-connected environment is inconspicuous (as Weiser intended). The presence of sensors able to detect ammonia, a common component of explosive material, in the New York City subway is not something I devote thought to when I’m taking the downtown 6 train; I’m just glad it’s there.9

The Internet of Things is not a far-off dream; it’s here. We’ve been accepting the presence of more sensors in our environment for decades now. It’s impossible to argue against the usefulness of Japan’s EEW, or radon detection devices in subterranean structures, or home security systems that sense when a door is being opened and alert the police and homeowner. The average 777 has so many sensors on board that a three-hour flight can generate a terabyte of data. Twenty flights generate the data equivalent of every piece of text in the Library of Congress.

For the owners of the copper wires, the fiber-optic cables, the cell phone towers, and the servers on which the Internet runs, the growth of the Internet of Things means massive future profits. The firm Gartner has predicted that the global market for “contextually aware computing” will exceed $96 billion per year by 2015. It’s no wonder such companies as Cisco, IBM, and Verizon spend millions of dollars on ad, marketing, and grant campaigns to persuade the world that a “smarter” planet is so very good for everyone. And it is, in many ways. But first and foremost, a smarter planet is good for them.

Importantly, the Internet of Things is not solely the product of companies and governments. It’s become a homegrown phenomenon as much as a big telecom money machine, and it’s empowering regular people in some very surprising ways.

The Internet of Things, Three Vignettes

On March 11, 2011, engineer Seigo Ishino was at his office in the city of Kawasaki near Tokyo when the EEW system sounded. Like any rational person caught in a massive tremor, he crawled under his desk until the quake passed. He emerged a few minutes later unscathed but, as a result of the seismic event that had just occurred, his life was now far more complicated than it had been just that morning. News of technical problems at the Fukushima Daiichi nuclear plant spread quickly in those early hours through Japan and then around the globe. Tokyo was close enough to Fukushima (about 160 miles) that the meltdown posed a serious concern, particularly for children and pregnant women, to whom radiation is most harmful. Seigo’s wife was eight months pregnant at the time. He was faced with some hard choices. Was the level of radioactive cesium and iodine spewing out of the plant dangerous enough to compel him to relocate his family farther south? If so, he needed to act quickly to get a train ticket, as ticket prices were rising fast. There was also the question of where they would stay, and how he would earn money, because he would effectively be abandoning his duties at his present job and had no job prospects outside of Tokyo. Was the danger significant enough to warrant an evacuation from Japan? The many thousands of foreigners who were attempting to leave that week were also driving up the cost of airfare, and there were questions about how to obtain an exit visa. Alternatively, was it safe to stay where he was? What about the food and the water supply? He needed more information.

The Kan administration’s press secretary, Yukio Edano, began giving regular press conferences, clad in a bizarre blue jumpsuit, to inform the public that radiation levels were not dangerous and that the situation at the troubled plant was under control. The official messaging took a turn for the ridiculous on March 12 when the Kan administration assured the public that the pressure levels at the reactor had stabilized only to then admit, a few hours later, that a massive buildup of pressure had blown the walls off the reactor building. Yukio Edano again took to the podium to steadfastly affirm that the situation was improving as the reactor in the picture behind him smoked and fumed.

Seigo elected to stay but, like millions of other Japanese, he no longer trusted the official story that was coming from the government and from TEPCO, the corporate entity that operated the plant, both of which he regards as “most untruthful.”

Seigo was a member of an international group of community designers, engineers, hackers, and hobbyists who built sensors and installed them in buildings and other aspects of the built environment to monitor energy use. The community was centered around a platform called Pachube (now Xively), which allowed users with sensor data to share it in real time on the group’s site. Not long after the news of the meltdown spread throughout Japan, thousands of people across the country were tweeting Geiger counter data and hundreds of Pachube users were streaming their data directly to the Pachube site.

Seigo began work on a smartphone app (for Android) that combined Google Maps, real-time information about radiation levels, and publicly available data about wind currents. The resulting Winds of Fukushima app worked as a sort of living map that provided constant information not just on where radiation existed but also where it was going, in the form of bright blue arrows.

Of particular concern to Seigo was the safety of food and drinking water. The Winds of Fukushima app confirmed that radiation was spreading far wider than the government was indicating in news reports. Seigo began to buy his food from the south side of Japan and drink water imported from the United States.

Winds of Fukushima is hardly a technological miracle. It takes a very conventional stream of data (current wind direction), combines it with a second data stream (real-time readings of radiation), and makes this new, combined data available in a format that the public can easily find and use: a Google map. Its most revolutionary aspect is how quickly it emerged in the wake of the disaster. A decade ago, the task of coordinating among hundreds of Geiger counter–armed volunteers, building a platform for all of them to stream data, and finding a vendor willing to sell the software internationally was neither cheap nor easy. Thanks to communities of interconnected amateur techies, open APIs like Google Maps, and direct-to-market software vending platforms like the Android app stores, Seigo was able to build and publish Winds of Fukushima from a small Yokohama apartment in virtually no time at all. The app went live in the Android store about six weeks after the initial quake but it actually took Seigo only a few days to create it (though he admits he barely slept).
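The data fusion the app performs can be sketched in a few lines. This is a hypothetical illustration, not Seigo's actual code: it assumes radiation readings arrive as (latitude, longitude, dose) tuples and projects each one downwind to estimate where a plume is heading.

```python
import math

def project_plume(readings, wind_deg, wind_speed_kmh, hours):
    """Project each radiation reading downwind (illustrative sketch).

    readings: list of (lat, lon, microsieverts_per_hour) tuples
    wind_deg: direction the wind blows TOWARD, degrees clockwise from north
    Returns the same readings shifted by the estimated wind drift.
    """
    km_per_deg_lat = 111.0  # rough length of one degree of latitude
    projected = []
    for lat, lon, dose in readings:
        dist = wind_speed_kmh * hours
        dlat = dist * math.cos(math.radians(wind_deg)) / km_per_deg_lat
        # a degree of longitude shrinks with latitude
        dlon = dist * math.sin(math.radians(wind_deg)) / (
            km_per_deg_lat * math.cos(math.radians(lat)))
        projected.append((lat + dlat, lon + dlon, dose))
    return projected

# Hypothetical station near the plant; wind due south (180 deg) at 20 km/h
stations = [(37.42, 141.03, 2.5)]
print(project_plume(stations, 180.0, 20.0, 3.0))
```

The real app layered such projected points onto a Google map as arrows; the arithmetic above is only the "combine wind with readings" step described in the text.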

Pachube was started by an architect named Usman Haque who wanted to build a sensing feature into his building designs so that, years after construction, he and his fellow designers could log on to Pachube and get a sense of how the buildings were being used. He wanted to let the occupants, too, reconfigure their living environments around their actual use patterns, their living data. Today, the platform, relaunched as Cosm and later Xively, allows developers to build apps and programs and immediately derive insights from massive amounts of data coming from a suddenly awake world.

“Everyone gets insight into the environment around them, data contributors get applications that are directly relevant to their immediate environment, and application developers get access to a marketplace for their software,” Pachube evangelist Ed Borden remarked in a blog post.

A world that senses its occupants and shares that information may be one where people become much smarter about how they live. It’s also a world where information that is accessible only to government suddenly becomes available to hackers and activists. Depending on the content of that information, and the method by which you obtain it, a simple civic act such as trying to fix your local sewer system can look provocative to the local authority whose power you just usurped, as another Pachube user named Leif Percifield discovered in New York.

Seeing the Hot Water Before It Hits the River

The date is April 18, 2012. Leif Percifield, a few of his friends, and I are in canoes in Brooklyn’s famous Gowanus Canal. A shy drizzle rains down on us as we paddle out over rusted bicycles, tin cans, and other bits of metal and plastic that have embedded themselves in the canal bed. The air smells slightly of sewage, which is why we’re here.

We reach our destination, the portion of the canal that meets Bond and Fourth streets. Leif secures a shoe-box-size plastic container with a solar panel atop it to a mooring above the combined sewer overflow pipe, or CSO. Two long wires extend from the device; these are tipped at the end by a small sensor. Leif made the box, which is a prototype, the day before using off-the-shelf components (an Arduino board) and parts he created himself with a printable circuit-board machine at Parsons School of Design. Leif plunges his hands into the water, elbow deep, to affix the sensor tip as close as possible to the pipe. When he’s done, he does a cursory clean of his hands and checks his iPhone.

“It works!” he says. The sunken sensor is now broadcasting the temperature and conductivity of the water. Hotter water, and water with more electricity-conducting minerals, are sure signs of sewage runoff.
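The inference the sensor supports can be sketched as a simple threshold rule. The function and threshold values below are illustrative assumptions, not figures from the Don't Flush Me project:

```python
def sewage_alert(samples, temp_jump=3.0, cond_jump=200.0):
    """Flag indices where temperature (deg C) and conductivity (uS/cm)
    both jump relative to the previous reading, a crude overflow signal.

    samples: list of (temperature, conductivity) readings in time order.
    The jump thresholds are made-up illustrative values.
    """
    alerts = []
    for i in range(1, len(samples)):
        temp_rise = samples[i][0] - samples[i - 1][0]
        cond_rise = samples[i][1] - samples[i - 1][1]
        if temp_rise >= temp_jump and cond_rise >= cond_jump:
            alerts.append(i)
    return alerts

readings = [(12.0, 900.0), (12.1, 910.0), (16.5, 1400.0), (16.4, 1380.0)]
print(sewage_alert(readings))  # index 2 shows a simultaneous jump
```

A real deployment would calibrate those thresholds against known overflow events rather than hard-coding them.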

Just about everyone in New York takes for granted a few key facts about the Gowanus Canal. The most important of these is that it’s beyond fixing. Not only does sewage water run into the canal when it rains but the water is laden with decades’ worth of heavy toxic metals, which have earned it designation as a Superfund site, one of the most poisonous environments in the United States. The U.S. Clean Water Act says the city of New York is supposed to clean this place up, remove the metals, and keep sewage from spilling into its waters. But before that can happen New York City and the U.S. Army Corps of Engineers must conduct a feasibility study, which neither New York City, the EPA, nor the army are in any hurry to complete.

“The numbers that we have say that three hundred million gallons of sewage go into the Gowanus Canal a year,” says Leif. He adds that the numbers are based on computer models and he believes them to be flawed. Members of the community have accused the New York City Department of Environmental Protection of tweaking the data in order to put off the costly work of fixing the storm runoff problem.

Leif’s goal is to get people in the city to participate in rehabilitating the canal. That’s not easy. But if he can map where flows are bigger or smaller, he thinks he can put together a more accurate assessment of what’s going on and essentially predict how dirty the water will be on any given day as a result of environmental factors. This information is of no real use to one person, but a community can adjust its water usage, its showering and flushing, based on the sewage water level. The name of Leif’s blog says it all: Don’t Flush Me.

You might expect the city to appreciate Leif’s efforts to better monitor the sewer system. But his relationship with local New York City authorities quickly became rocky. His previous project literally got him in a lot of hot water: he actually went into the city sewer system to fit it with a network of sensors.

“The air is not pleasant,” he says of the New York underground. “But I was thinking it would be putrid. Instead it was more acrid. And it was incredibly hot, twenty-five degrees hotter underground than aboveground. People use hot water, you know, and hot things come out of your body.”

The sensors he attempted to install were supposed to read the water level; a fast rise is a good indication of a coming overflow. The experiment didn’t pan out. The sensors didn’t stay in place and the Bluetooth signal inside the sewer was too weak. The data was trapped. Had the sensor system functioned, he would have been the second person in history to be able to predict when the sewers were going to overflow. The first was Cynthia Rudin, an MIT researcher who figured it out with a statistical formula.

Leif’s project was simple, commonsense infrastructure stewardship. But when he posted a few pictures of his adventure online he immediately got a call from New York City officials. They ordered him downtown and made it clear that he was “in trouble” for what he had done. He was told to cease his activities. Leif believes this is because he had shown just how easy it is to get into the New York City sewage system.

Leif has grown better at working with the New York City Department of Environmental Protection but his experience reveals how complicated our relationship with authority becomes in this interconnected era. The program Leif put in place all by himself was very similar to ones in place in Maryland and Washington, D.C., to manage sewage runoff in the Chesapeake Bay, but the latter are managed by local authorities with little citizen input so they’re less controversial (and arguably rather ineffective). Everyone can agree, at least publicly, that fixing sewage backup should be a top priority. But when citizens armed with sensor boards suddenly start outflanking government on government’s own turf, tensions can rise.

Most of us grew up in an environment where we comfortably assumed that local government always had more information than we did about what was going on within our city, certainly the best data on the state of infrastructure. We also instinctively trust local government as the provider of information during an emergency, even when it’s an emergency in which we’re directly involved. See a fire? Call 911 and ask for services, wait for someone to come to where you are and tell you what’s happening. This is an inefficient way to collect and distribute information during a time of crisis.

The Internet of Things is ushering in a new era of proactive citizenry. It’s an era where much of the most important information during a fire, a flood, or a citywide disaster doesn’t come from government but from you and your suddenly empowered neighbors, people like Gordon Jones.

Seeing the Fire Before You Are in It

In the summer of 2007 Gordon Jones was living in Charleston, South Carolina. A fire broke out at a nearby furniture store, killing nine firefighters, the largest number to die in the line of duty since the 9/11 terrorist attacks. An enormous memorial followed. Emergency workers from around the country came out to Charleston as the facts of the incident were reported on the news in rounds, like a funerary dirge. The public safety workers succeeded in pulling out several survivors from the blaze before the roof caved in on them.

Jones was working at the time for Global Emergency Resources (GER), a company that markets a software tool for monitoring ambulances and hospitals during emergencies. Watching the local coverage of the memorial service for the firefighters, he realized that the technology he was developing could have saved lives: “I said to myself, What if somebody, one of the people trapped inside the store, had a smartphone to broadcast what the scene looked like? That might have made a difference.”

It sounded like a worthwhile and potentially profitable start-up. Jones founded a company and shortly after announced the launch of the Guardian Watch app. Guardian Watch enables anyone with a cell phone to live-stream video and pictures of an event directly to emergency personnel. This may not sound that significant but think of an alarm system as nothing more than an information distribution network. Some alarm systems are better than others. Guardian Watch enables thousands of people to provide streaming visual data about a situation at an information transfer rate of hundreds of thousands of bytes per second, the average upload speed of a 4G or higher phone. Guardian Watch was the first iPhone app to take advantage of the smartphone’s full capabilities to give emergency workers a visual and auditory sense of what may be ahead of them.

“A picture is worth a thousand words and a video is worth a thousand pictures,” says Jones. This statement, though perhaps a bit corny, encapsulates why Guardian Watch really is a clear improvement over traditional emergency response systems. It delivers information that’s user-specific, varies depending on context, and moves at a speed and scale that make sense for emergencies—namely more and faster.

A decade ago, increasing the scale of information collection and distribution to the point where it would have made a difference to one of those Charleston firefighters would have been a daunting technological challenge. Today, the tools, platform, and infrastructure already exist and have been widely distributed. You’re carrying all of this around in your pocket.

The single biggest driver of the Internet of Things is the smartphone, that always-on, GPS-enabled sensor that more than 64 percent of Americans carry with them. We know that smartphones today make it easier to find restaurants, share experiences as they occur, shop, and study. Mobile technology makes data creation and curation possible anywhere, which means we’re creating and curating much more of that data more of the time.

Guardian Watch already faces competition from other groups looking to leverage the information gathering and broadcasting technology of smartphones. A Silicon Valley–based start-up called CiviGuard takes the idea a step further. The platform integrates streams from Twitter, Facebook, and local emergency channels and presents the user with a “networked window” of an emergency situation playing out in real time. It gives geo-tagged advice that’s specific to an individual user based on a variety of variables, the most important of which is location. What that means is this: depending on the situation, a user may be told to stay where she is while a different user may be told to stay away from that area. Most important, CiviGuard includes a scenario function to allow users to conduct virtual emergency simulations.

Imagine you’re in Manhattan and there’s just been a terrorist attack. Want to know which streets are most likely to become blocked when the news spreads? How your company’s supply chain will be disrupted? Where to find food and water while they’re still available on store shelves? CiviGuard will tell you and will do so based on a rapidly updating understanding of what’s going on around the city. And should CiviGuard not pan out, the Environmental Systems Research Institute (Esri) can also build you a custom geographic information system that does all of the above, and can integrate it with population density, water tables, jurisdiction, and hundreds of other maps.

The same real-time broadcasting capability that will allow me to better navigate my way out of a disaster can be used for other purposes as well. The mapping of human behavior promises enormous benefits, but it also speaks to a future where invisibility and anonymity are no longer the default setting for life.

The Internet of Things is also the intersection camera that snaps a picture of my license when I try to beat the yellow light. It’s the smart electricity meter that California’s Pacific Gas and Electric Company now insists its customers use, allowing the utility to optimize energy delivery but also to better track individual energy use. If you’re a PG&E customer, the Internet of Things is the reason why your energy company can infer when you’re home and when you’re not based on when and how you use certain devices.10

In our rush to overlay data collection devices across the physical environment, we overlooked the fact that the same devices we use to perceive our environment can just as easily be turned on us.

We will be seen. We will be tagged. It’s happening.

Checked In. Your iPhone Knows Where You’re Going

Want to know how many people with smartphones are in terminal 4 of New York’s JFK Airport, standing on line to get tickets to The Daily Show, browsing the shoe store down the street? A company called Navizon sells a device that can track every phone using Wi-Fi within a given area. Just plug this device into a nearby wall outlet to monitor that action in real time. Because we know that more than 60 percent of the U.S. population now owns a smartphone, a couple of months’ worth of data will tell you how many people are likely to be in any area that you’re surveilling on any given day and time of the week. Leave the device plugged in for a few decades and you’ll have a reasonable estimate for how many people will be at a specific place, at a specific time, on a specific day of the year. This is the sort of information that big phone companies like Verizon and AT&T have at their fingertips. When you walk around with your cell phone on, you give these companies data about your location. AT&T and Verizon then strip that data of identifying information and sell it to city planners, commercial interests, and others. Verizon even claims the ability to build a demographic profile of people gathered together in a specific place for a specific thing, such as in a stadium for a rock concert or a sporting event.11
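At its core, this kind of foot-traffic analytics is just de-duplicated counting. A minimal sketch, assuming a hypothetical probe log of (hour, device-identifier) pairs rather than Navizon's actual data format:

```python
from collections import defaultdict

def hourly_unique_phones(probes):
    """Count distinct devices seen per hour from (hour, device_id) log entries.

    The log format here is an assumption for illustration. Real Wi-Fi
    monitors capture many more fields, and modern phones randomize their
    MAC addresses, which degrades this exact approach.
    """
    seen = defaultdict(set)
    for hour, device_id in probes:
        seen[hour].add(device_id)  # a set ignores repeat sightings
    return {hour: len(devices) for hour, devices in seen.items()}

log = [(9, "aa:11"), (9, "aa:11"), (9, "bb:22"), (10, "aa:11")]
print(hourly_unique_phones(log))  # {9: 2, 10: 1}
```

Accumulate a few months of such counts and the weekly occupancy patterns the text describes fall out of simple averaging.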

Navizon puts the same sort of capability in the hands of individuals with small budgets but larger time horizons. Navizon CEO and founder Cyril Houri is marketing the device as a way for entrepreneurs to do location planning. There are some limitations. Because the device measures Wi-Fi from smartphones, it’s also biased toward younger adults (18–25) who are—not surprisingly—more likely to own a smartphone than are people over age sixty-five. High-income earners also show up more often than low-income earners. But the current profile of smartphone users is not the future profile.

Navizon’s analytics system won’t disclose the names of specific people whom the device picks up (unless those people opt in to the Navizon buddy network) but the system can recognize individual phones. It has to, in order to count them. If someone follows roughly the same pattern every day, hitting work, the store (or the bar), then home in the same time window, the difference between tagging the phone and tagging the person effectively disappears. MIT researchers Yves-Alexandre de Montjoye and César A. Hidalgo, with Michel Verleysen and Vincent Blondel of the Université catholique de Louvain, took a big data set of anonymized GPS and cell phone records for 1.5 million people, the sort of stripped-down location data that Verizon and AT&T sell to corporate partners to figure out the types of people who can be found at specific locations at particular times of day. The data consisted of records of particular phones checking in with particular cell antennas. What the researchers found was that for 95 percent of the subjects, just four location data points were enough to link the mobile data to a unique person.12
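The logic of that uniqueness test can be sketched in a few lines. The traces below are toy stand-ins, not the study's data, and `is_unique` is a hypothetical helper that mirrors the idea of checking whether a handful of (antenna, hour) points single out one trace:

```python
def is_unique(trace, all_traces, points):
    """Return True if `points` (a subset of `trace`) match no other trace.

    A trace is a list of (antenna_id, hour) sightings. A person is
    'unique' with respect to `points` when theirs is the only trace
    containing all of those points, which is the sense in which four
    points sufficed for 95 percent of subjects in the study.
    """
    matches = [t for t in all_traces if set(points) <= set(t)]
    return matches == [trace]

traces = [
    [("A", 9), ("B", 12), ("C", 18), ("D", 22)],  # person 1
    [("A", 9), ("E", 12), ("C", 18), ("F", 22)],  # person 2
]
# Two shared points are ambiguous; a third singles out person 1.
print(is_unique(traces[0], traces, [("A", 9), ("C", 18)]))             # False
print(is_unique(traces[0], traces, [("A", 9), ("C", 18), ("B", 12)]))  # True
```

With millions of real traces the same check, repeated over random samples of points, yields the fraction of people who are re-identifiable from k points.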

A growing percentage of smartphone users voluntarily surrender data about themselves wherever they use geo-social apps. Facebook, Twitter, and Google+ all have “check in” features that broadcast your location to people in your network. Other, more creative services will facilitate specific interactions based on what you’re looking to do wherever you happen to be.

A now-defunct app called Sonar could identify the VIPs in the room; Banjo will tell you the names of nearby Twitter, Facebook, and Instagram users; a service called Grindr, launched back in 2009, will pinpoint the location of the nearest gay man who may be interested in a relationship—of either the long- or short-term variety. An app called Mixxer will do the same for everyone else.

To the smartphone-suspicious, these services seem to be more trouble than they’re worth. What’s the value of knowing the Twitter handle of the person at the next table in a restaurant, when, at best, such an app just detracts from the authentic experience of real life? At worst, it’s giving away personal info to strangers.

However, to a growing number of smartphone owners, check-ins and geo-social Web apps like Foursquare are an integral aspect of smartphone ownership. More than 18 percent of smartphone owners use some sort of geo-social service (as of February 2012), a number up 33 percent in one year, with heaviest use concentrated among the young. Importantly, more than 70 percent of smartphone owners use some sort of location-based service on the phone, even if it is just the GPS.13
