Artificial Intelligence: Fact, not Fear, Should Drive Policy

From left to right: Adam Thierer, Andras Szakal, Alan Easterling, and Ryan Hagemann

On November 18, 2016, the Center for Public Policy Innovation (CPPI) hosted a panel discussion on Capitol Hill on the subject of Artificial Intelligence (AI), featuring a host of subject matter experts: Northrop Grumman futurist Alan Easterling; IBM Federal Chief Technology Officer Andras Szakal; Ryan Hagemann of the Niskanen Center; and Adam Thierer of the Mercatus Center. The panel held an engaging discussion that sought to define Artificial Intelligence while delving into the policy implications of this quickly evolving technology.

Of course, the conversation included mention of Westworld, the new HBO series described by the panelists as a Jurassic Park for Artificial Intelligence. AI, robotics, and automation are at the core of the series, but one of its central themes revolves around the nature of “consciousness.”

“This, I think, is one of the key factors driving confusion about the power of AI in the real world: the distinction (or lack thereof) between ‘intelligence’ and ‘consciousness,’” argued panel moderator Hagemann, noting, “We hear two different scenarios about AI, one about heaven, the other about hell.”

The future will undoubtedly be somewhere in between. Google’s search engine, for example, is a very narrow, task-specific form of AI. “As soon as it works,” the famed computer scientist John McCarthy once quipped, “no one calls it AI anymore.”

“From IBM’s point of view, we think of cognitive computing, like Watson, as augmented intelligence,” said Szakal. “For instance, it is helping doctors diagnose cancer more effectively, and in drug research it has already pared testing time down from months to hours.”

Trust is another issue related to AI. An IBM white paper on AI argued: “To reap the societal benefits of AI systems, we will first need to trust it. The right level of trust will be earned through repeated experience, in the same way we learn to trust that an ATM will register a deposit, or that an automobile will stop when the brake is applied. But trust will also require a system of best practices that can help guide the safe and ethical management of AI systems including alignment with social norms and values; algorithmic responsibility; compliance with existing legislation and policy; assurance of the integrity of the data, algorithms and systems; and protection of privacy and personal information.”[1]

Szakal went on to say that big data has had a major influence. Most people have access to more information in their jobs than they can possibly utilize, and that is where cognitive computing comes in. IBM is working on enabling computers to understand the meaning of language.

Meanwhile, Easterling noted that there was a time when the most cutting-edge technologies were developed in the US, usually in the aerospace and defense sector. Now there is a much larger development ecosystem, with dual-use applications of technologies. According to Easterling, Google has over one thousand researchers working on AI; Northrop has twenty. “There is a large commercial focus on the future of AI, and it will be a technology accessible to everyone, not just the defense sector.”

Adam Thierer noted the “panic cycle” related to AI. “Of all emerging technologies, I can’t think of another that is affected by popular culture mythologies as much as AI. It’s interesting that there aren’t more positive scenarios around AI in science fiction.” Thierer identified transparency, job disruption, safety, and cybersecurity as valid areas for discussion.

Furthermore, continued Thierer, “In recommending that we have algorithmic transparency as a solution, we start to raise concerns about who has access to data. Some systems depend on a certain amount of secrecy about who has access to the code. When we think about regulating AI, we need to think about regulation broadly. It could include new law, existing law, and the courts.”

Thierer continued, “We also can rely on best practices or industry codes of conduct. Industry associations have been very helpful in this. Our government has most recently brought together multi-stakeholders to have discussions around privacy and cybersecurity best practices.”

During the discussion, a question was raised about the impact of AI on the job market. Thierer responded that many of today’s jobs, such as search engine optimization specialist and software systems architect, did not exist in the 1980s. There will certainly be disruptions, such as the loss of jobs for truck drivers, and Szakal noted a need to change the education system, including early STEM education.

Packed room of Senior Congressional Staff in Attendance

Easterling responded by saying, “You can see signs already of the shifting economy. At the age of 21, we are trained to think we know everything we need to succeed, but that is not the case, especially as technology evolves. General Electric has returned to the US from overseas, because now things are automated and they don’t have to rely as much on cheap labor. As AI begins to encroach on cognitive professions, small-town attorneys and travel agents could be at risk.”

The panel ended on an optimistic note, with Easterling saying, “I think of AI in a positive sense; there will be an overwhelming sense of benefit.”

Szakal noted, “AI is all about making you smarter. It is about keeping the human in the loop, not taking the human out of the loop, and consuming data in ways you haven’t thought of before.”

Lastly, said Thierer, “I don’t want to allow fear to drive AI policy. We need to base it on facts and understanding; there is life-saving potential here.”

[1] https://www.research.ibm.com/software/IBMResearch/multimedia/AIEthics_Whitepaper.pdf