What will artificial intelligence look like in 15 years?

As the conversation about artificial intelligence grows louder, public perception of its eventual integration into everyday life has shifted from general fears to more specific questions about implementation.

In this July 17, 2016, file photo, shoppers talk to SoftBank Corp.'s companion robot Pepper, equipped with a "heart" designed not only to recognize human emotions but to react with simulations of anger, joy, and irritation, at a store in Tokyo. Shizuo Kambayashi/AP

September 6, 2016

Whether they are assisting your doctor in surgery, driving your car, analyzing crime patterns, or cleaning and securing your home, artificial intelligence (AI) will play a big role in urban living by 2030. But to maximize the benefits of an AI-wired city tomorrow, experts and the public need to have a frank conversation today, according to the first report from Stanford University's One Hundred Year Study on Artificial Intelligence, which was released last week.

"As a society, we are now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote, not hinder, democratic values such as freedom, equality, and transparency," the panel states in its report, which analyzes the role AI will play in the typical North American city in 2030, focusing on eight domains: transportation, home robots, healthcare, education, entertainment, low-resource communities, public safety and security, and employment and the workplace.

"Policies should be evaluated as to whether they foster democratic values and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few."

The report outlines not only the ways AI could potentially be used throughout everyday life, but also how public opinion surrounding the implementation of AI has changed, and will continue to change.

"In each domain there is a high potential for artificial intelligence technologies to improve the quality of life in the typical north American city by the year 2030, but in each case there are barriers to overcome, and [in] some more than others," Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts, tells The Christian Science Monitor.

With autonomous cars imminently on the horizon – but still getting into accidents – and new Federal Aviation Administration (FAA) rules governing drone use, we have already encountered examples of both the barriers and the solutions: some are technological, like meeting safety standards, but others are social.

"There are definitely people who bring up fears that are spurred on by science fiction literature and movies, but then we also get a lot of interest in what we are doing," Dr. Stone tells the Monitor. "AI tends to be very polarizing: some people tend to be very excited about it, others are very fearful, and sometimes the same people have both of those different attitudes."

Often, doubts about AI's integration into society take on a dystopian tone. However, more mundane topics like job loss and inequality have come to dominate the conversation, Stone says. 

"Throughout history technological advances have affected the workplace," Stone tells the Monitor. "In the perceivable future, most jobs will not be replaced by AI technologies, but will be augmented or changed. The healthcare advances are not going to replace doctors, but they may change the skills that the doctors need or how doctors spend their time."

The question of exacerbating existing inequalities is more difficult. The report recommends starting a discussion now about how the additional wealth created by AI can be shared fairly and equitably – a point often overshadowed by worries about robots displacing human workers. AI methods can help plan equitable food distribution, for example, or spread health and safety information.

"Care must also be taken to prevent AI systems from reproducing discriminatory behavior, such as machine learning that identifies people through illegal racial indicators, or through highly-correlated surrogate factors, such as zip codes," the panel notes. "But if deployed with great care, greater reliance on AI may well result in a reduction in discrimination overall, since AI programs are inherently more easily audited than humans."

But AI's potential to help all communities, not just wealthier ones, also needs to be part of the conversation, both to boost trust in new technologies and to build the relationships that will help them be implemented well down the road. Building that trust has been challenging while AI debates remained theoretical, but that is quickly changing.

Stone says there are different pathways to trust. Some are technological solutions that would prove a machine's reliability, such as a driver's-license-style test that AI systems must pass before being deployed for public use. But Stone believes that something as simple as increased exposure to AI will ultimately be the most successful.

"People need to have first hand experience with them," Stone tells the Monitor. "I trust the various applications on my computer not because someone certified them, but because I have used them hundreds of times and I have seen the same behavior over and over again. Once people start seeing autonomous cars on the road and get experience with them stopping at the right time and starting at the right time, that will add to the trust."