Artificial intelligence — software with the ability to observe, analyze, and act on its own — will play a big part in shaping our future. But will it destroy it?
It’s a serious question, at least as far as Elon Musk is concerned. Over the past few years, he’s taken a grim view of the long-term prospects of AI, envisioning that, as computers get smarter than humans, they’ll hold our very existence in their hands, and we may not like what they decide to do with that power.
On the other end of the spectrum you have Mark Zuckerberg, who, as you might expect, is much more pro-AI. In a recent Facebook Live post, Zuck said he was “really optimistic” about AI and that it would deliver innumerable improvements to our lives over the next 5-10 years. He called naysayers like Musk “irresponsible” for speculating about “doomsday scenarios.” Musk fired back with a CEO-caliber burn.
AI in the real world
Zuckerberg’s got the edge here, though: His attitude is pretty common in tech — you can’t throw a stone in Silicon Valley without denting someone’s AI platform. It also helps that the public is starting to see the benefits of AI and machine learning, most directly in digital assistants like Alexa and Siri. It’s tempting to dismiss Musk’s concerns as ravings not even worthy of a B-list Hollywood screenwriter. And, honestly, we should.
A future “superintelligence” is not the primary concern of current AI technology. But Musk is right to voice concerns about the power that AI will have over us. The potential consequences of unchecked AI development are real and much closer to home than many people think.
So forget Skynet for a second. The promise and the peril of current AI tech is better found in efforts like Andy Rubin’s company Essential, which aims to be the “universal translator” for the smart home. Rubin’s plan isn’t just to create a phone and a swarm of useful smart-home products, but to make them — and his platform, Ambient OS — universally compatible.
The vision is a few steps beyond just having apps for everything on your phone, and even beyond current platform(-ish) solutions like Apple Home or Google Weave. In an Essential world, everything in your house is wired to the internet, and Ambient OS learns from all your interactions and input to anticipate and serve up exactly what you want at any given moment, regardless of who built the device or its core software. This recent Wired profile of Rubin nails it:
Everything just works. It’ll know what you want because it watches you and learns that it should start warming up the car when you’re putting your shoes on, because that’s always the last thing you do in the morning. When you say, “Tell Anna it’s time for dinner,” the system should know who Anna is, which room she’s in, and which speaker to use to alert her. The only way for that to work is if absolutely everything is connected.
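The shoes-then-car routine in that quote boils down to sequence prediction: if one action reliably follows another, the system can act ahead of time. Here is a minimal, purely illustrative sketch. The event names and logs are invented for this example, and a real platform would learn from live sensor streams rather than hard-coded lists:

```python
from collections import Counter, defaultdict

# Hypothetical event logs: one list of observed actions per morning.
morning_logs = [
    ["alarm_off", "shower", "coffee", "put_on_shoes", "leave_house"],
    ["alarm_off", "coffee", "shower", "put_on_shoes", "leave_house"],
    ["alarm_off", "shower", "put_on_shoes", "leave_house"],
]

# Count which action most often follows each action.
transitions = defaultdict(Counter)
for day in morning_logs:
    for current, nxt in zip(day, day[1:]):
        transitions[current][nxt] += 1

def predict_next(action):
    """Return the most likely next action, or None if the action is unseen."""
    counts = transitions.get(action)
    return counts.most_common(1)[0][0] if counts else None

# If shoes reliably precede leaving, the house can pre-warm the car.
if predict_next("put_on_shoes") == "leave_house":
    print("warming up the car")
```

Even this crude frequency count captures the core bargain: the prediction only works because every action is observed and recorded.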
It’s a compelling dream, though we’re years away from it happening. But most agree it will happen, and when it does, the level of connectivity and data collection will be orders of magnitude beyond today’s. And that simple but powerful picture has a dark side.
Tracking all the interactions of a person with physical systems, like a house, is a new type of dataset, sometimes called the “physical graph.” Analyzing a person’s or a family’s physical graph — and crunching that data in aggregate — makes the utopian vision of Rubin and Zuckerberg possible.
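To make the "physical graph" concrete, here is a hypothetical sketch of what a few such records might look like and what even trivial aggregation extracts from them. The field names and events are invented for illustration; no real platform's schema is implied:

```python
from collections import Counter
from datetime import datetime

# Hypothetical physical-graph entries: who did what, where, and when.
events = [
    {"person": "anna", "device": "front_door", "action": "unlock", "ts": "2017-07-24T08:02"},
    {"person": "anna", "device": "thermostat", "action": "set_68", "ts": "2017-07-24T18:15"},
    {"person": "anna", "device": "front_door", "action": "unlock", "ts": "2017-07-25T08:05"},
    {"person": "anna", "device": "kitchen_tv", "action": "on",     "ts": "2017-07-25T19:30"},
    {"person": "anna", "device": "front_door", "action": "unlock", "ts": "2017-07-26T07:58"},
]

def departure_hours(events, person):
    """Aggregate one routine signal: at what hour does this person use the front door?"""
    hours = Counter()
    for e in events:
        if e["person"] == person and e["device"] == "front_door":
            hours[datetime.fromisoformat(e["ts"]).hour] += 1
    return hours

print(departure_hours(events, "anna"))
```

Three days of mundane door events already pin down a daily routine, which is exactly the kind of pattern that makes the anticipatory home possible and the same data troubling in anyone else's hands.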
Now what if I told you the device manufacturers and your ISP get to look at all that data whenever they want, and they might share it with others?
Not such a pretty picture now. But, in a world where every device you own connects to the internet, there’s a good chance that’s the future we’re hurtling toward.
The price of AI
Earlier this year, the US declined to implement privacy rules that would have prevented ISPs from harvesting that kind of user data. AT&T recently mused that it might even begin charging customers extra for privacy protections. Google and Facebook’s entire business models are predicated on the bargain of, “give us data, and we’ll give you free services.”
In other words, the trend line for user privacy isn’t great right now. Imagine an AI-driven future where, at the touch of a button, someone or some faceless corporation can activate “God mode” on anyone’s entire life — what they’ve done, where they are now, even what they might do in the near future. This is why privacy policies, standards, and regulations need clarification. Whether the solution is public or private, a baseline standard of what a company can or can’t do with your data is needed — and that goes double in a world driven by AI.
The power of networked computers to monitor and predict human behavior is an incredible achievement, and it’s only going to get better. But besides merely serving us, AI could be used to influence and manipulate, sometimes so subtly we might not even know it’s happening. There’s a dark future here, and Skynet is nowhere in sight.