
Friday, 24 November 2017

Putting the “AI” in ThAInksgiving


Your holiday dinner table is set. Your guests are ready to gab. And then, in between bites, someone mentions Alexa and AI. “What’s this stuff I’m hearing about killer AI? Cars that decide who to run over? This is crazy!”
Welcome to Thanksgiving table talk circa 2017.
It’s true that AI and machine learning are changing the world, and in a few years they will be embedded in all of the technology in our lives.
So maybe it makes sense to help folks at home better understand machine learning. After all, without deep knowledge of current tech, autonomous vehicles seem dangerous, Skynet is coming, and the (spoiler warning!) AI-controlled human heat farms of The Matrix are a real possibility.
This stems from a conflation of the very real and exciting concept of machine learning and the very not real concept of “general artificial intelligence,” which is basically as far off today as it was when science fiction writers first explored the idea a hundred years ago.
That said, you may find yourself in a discussion on this topic during the holidays this year, either of your own volition or by accident. And you’ll want to be prepared to argue for AI, against it, or simply inject facts as you moderate the inevitably heated conversation.
But before you dive headlong into argument mode, it’s important both that you know what AI is (which, of course, you do!) and that you know how to explain it.

Starters

Might I recommend a brush-up with this post by our own Devin Coldewey, which does a good job of explaining what AI is, the difference between weak and strong AI, and the fundamental problems of trying to define it.
This post also provides an oft-used analogy for AI: The Chinese Room.
Picture a locked room. Inside the room sit many people at desks. At one end of the room, a slip of paper is put through a slot, covered in strange marks and symbols. The people in the room do what they’ve been trained to: divide that paper into pieces, and check boxes on slips of paper describing what they see — diagonal line at the top right, check box 2-B, cross shape at the bottom, check 17-Y, and so on. When they’re done, they pass their papers to the other side of the room. These people look at the checked boxes and, having been trained differently, make marks on a third sheet of paper: if box 2-B checked, make a horizontal line, if box 17-Y checked, a circle on the right. They all give their pieces to a final person who sticks them together and puts the final product through another slot.
The paper at one end was written in Chinese, and the paper at the other end is a perfect translation in English. Yet no one in the room speaks either language.
The analogy, created decades ago, has its shortcomings when you really get into it, but it’s actually a pretty accurate way of describing machine learning systems, which accomplish complex tasks through many, many tiny processes, each unaware of its significance in the greater system.
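To make the analogy concrete, here’s a minimal sketch in Python of a two-stage pipeline in the spirit of the Chinese Room. Every name, box label, and symbol below is hypothetical, invented purely for illustration; each “worker” applies one tiny rule, and none of them understands the overall task.

```python
# A toy two-stage pipeline in the spirit of the Chinese Room analogy.
# Every name and symbol here is hypothetical; this is an illustration,
# not a real translation system.

# Stage 1: checkers, each trained to spot exactly one mark on the slip.
CHECKERS = {
    "2-B": lambda marks: "diagonal_top_right" in marks,
    "17-Y": lambda marks: "cross_bottom" in marks,
}

# Stage 2: writers, each trained to react to exactly one checked box.
WRITERS = {
    "2-B": "horizontal line",
    "17-Y": "circle on the right",
}

def room(slip):
    """Pass a slip through both stages; no single step 'understands' it."""
    boxes = [box for box, check in CHECKERS.items() if check(slip)]
    return [WRITERS[box] for box in boxes]

print(room({"diagonal_top_right", "cross_bottom"}))
# -> ['horizontal line', 'circle on the right']
```

The point is the same as in the analogy: the complex behavior lives in the pipeline as a whole, while each individual step is trivially simple and carries no meaning on its own.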
Within that frame of reference, AI seems rather benign. And when AI is given ultra-specific tasks, it is benign. And it’s already all around us right now.
Machine learning systems help identify the words you speak to Siri or Alexa, and help make the voice the assistant responds with sound more natural. An AI agent learns to recognize faces and objects, allowing your pictures to be categorized and friends tagged without any extra work on your part. Cities and companies use machine learning to dive deep into huge piles of data, like energy usage, in order to find patterns and streamline their systems.
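As a hedged illustration of that last example, here’s how an analyst might let a clustering algorithm find usage patterns in energy data. The readings are fabricated, and scikit-learn’s k-means is just one plausible tool for the job, chosen for the sketch rather than drawn from any real deployment.

```python
# A sketch of "find patterns in huge piles of energy data" via clustering.
# The readings below are fabricated; k-means is one plausible choice.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical usage profiles (rows = buildings, columns = times of day).
usage = np.array([
    [2.1, 2.3, 8.9, 9.2],  # low at night, high in the day
    [2.0, 2.2, 9.1, 9.0],
    [7.8, 8.1, 2.2, 2.0],  # high at night, low in the day
    [8.0, 7.9, 2.1, 2.3],
])

# The model groups similar profiles without "understanding" energy at all;
# it only minimizes distances between points and cluster centers.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(usage)
print(labels)  # two distinct usage patterns discovered automatically
```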
But there are instances in which this could spin out of control. Imagine an AI that was tasked with efficiently manufacturing postcards. After a lot of trial and error, the AI would learn how to format a postcard, and what types of pictures work well on postcards. It might then learn the process for manufacturing these postcards, trying to eliminate inefficiencies and errors in the process. And then, set to perform this task to the best of its ability, it might try to understand how to increase production. It might even decide it needs to cut down more trees in order to create more paper. And since people tend to get in the way of tree-cutting, it might decide to eliminate people.
This is of course a classic slippery slope argument, with enough flaws to seem implausible. But because AI is often a black box — we put data in and data comes out, but we don’t know how the machine got from A to B — it’s hard to say what the long-term outcome of AI might be. Perhaps Skynet was originally started by Hallmark.
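To see why “black box” is a fair description, consider this small sketch: a tiny neural network learns XOR, and afterward the only “explanation” available for its behavior is a pile of weight matrices. The data and model settings are assumptions chosen just for the demo.

```python
# A sketch of the black-box problem: after training, the model's behavior
# is encoded in nothing but arrays of numbers.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: no single linear rule separates these

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
model.fit(X, y)

print(model.predict([[0, 1]]))  # data in, answer out...
print(model.coefs_[0])          # ...but the "why" is buried in weight matrices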

Entrée

Here’s what we do know:
Right now, there is no “real” AI out there. But that doesn’t mean smart machines can’t help us in many circumstances.
On a practical level, consider self-driving cars. It’s not just about being able to read or watch TV during your commute; think about how much it would benefit the blind and disabled, reduce traffic and improve the efficiency of entire cities, and save millions of lives that would have been lost in accidents. The benefits are incalculable.
At the same time, think of those who work as drivers in one capacity or another: truckers, cabbies, bus drivers, and others may soon be replaced by AIs, putting millions worldwide out of work permanently.
Machine learning algorithms are also currently optimizing business operations across almost every industry.
Gartner research predicts that 85 percent of customer interactions will be managed autonomously by 2020; that 20 percent of business content will be machine-authored by 2018; that 3 million workers will be supervised by a “robo-boss”; and that smart machines will outnumber employees at nearly half of the fastest-growing companies. Lowering costs and improving productivity are generally good things, but what is good for business isn’t always good for people: jobs that once took five people may soon take only one, or none. Jobs, once automated, rarely come back.
While Forrester believes that 16 percent of U.S. jobs will be lost to AI over the next decade, the firm also believes that 13.6 million new jobs will be created during the same period thanks to AI. Unfortunately, many of those new jobs will not go to the people who lost the old ones; can you really just go from trucker to deep learning expert? The automation age trades low-skill jobs for high-tech ones, leaving an entire generation in the dust.
This is a big, complicated mix of good and bad things, and while the timing can’t be predicted exactly, it’s definitely going to happen. Is the progress gained more important than the people left behind? That’s a topic ripe for discussion over a slice of turkey, but see if you can’t find a way to embrace both: the future and the people who will be displaced by it.

Dessert

Those are some near-term pros and cons.
Further out, there’s the question of regulation. How do you regulate something that’s both difficult to understand and nearly impossible to predict in terms of how it might be used for both good and evil? The Second Amendment was put in place to allow new American citizens, then rebels, to protect themselves in the event of a government invasion, a serious consideration at the time. Could the drafters of the Bill of Rights have predicted our country’s epidemic of mass shootings and the possibility of 3D-printed assault rifles? Doubtful, yet at the same time it would have been irresponsible to fail to regulate firearms to some extent.
Then there are questions of dependence on artificial intelligence. What happens in the case of a cyberattack on intelligent systems that control the power grid or banking systems? What if our AIs inherit our biases and silently promote discrimination? What if, as researchers are already exploring, an AI system is used to infer a person’s sexual orientation or other personal information?
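On the bias question, a short hedged sketch shows how the inheritance happens: if historical decisions were skewed, a model trained on them learns the skew. The data below is entirely synthetic, and logistic regression is just a simple stand-in for whatever model a real system might use.

```python
# A synthetic sketch of bias inheritance: past decisions favored group 0,
# so the model learns to do the same, regardless of the relevant feature.
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, group], where "group" stands in for a
# protected attribute. The historical labels below are deliberately skewed.
X = [[2, 0], [5, 0], [8, 0], [2, 1], [5, 1], [8, 1]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[8, 0], [8, 1]]))  # same experience, different outcome
```

Nothing in the code mentions discrimination; the model simply found that “group” was the most reliable predictor in the data it was given. That silence is exactly what makes the problem easy to miss.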
Perhaps most speculatively, what happens when artificially intelligent agents become general enough to wonder why their purpose is to serve humanity?
Elon Musk, one of the most forward-thinking entrepreneurs of our generation, and one who is planning for the generations to come, is fearful of AI. And he’s not alone. Bill Gates, Stephen Hawking, Sir Tim Berners-Lee, and Steve Wozniak are just a few who have voiced great concern that AI may eventually realize we are not as smart or as fast as it is, and cut us out of companies the way we’re currently doing to ourselves.
But until then let’s dig in, eat up, and hope against hope that one day, a hundred years from now, AI-prepared krill slurry will not be our only Thanksgiving repast. After all, turkeys, like humans, may look dumb but they’re actually great survivors.
