Myths in Artificial Intelligence: The Case for (Not) Being A…

Among all the technologies that shape the future of the computer industry and, inevitably, humanity, Artificial Intelligence (AI) and Machine Learning spark the most conversation.

This applies equally to tech gurus, the media, and a general audience. Why? AI-related technology is already present in our daily lives in the form of relatively simple “smart assistants” like Amazon’s Alexa, content recommendation services like Pandora, or programmatic advertising platforms. A lot of the things we’ve become used to in our daily experiences—both online and offline—wouldn’t be possible if Machine Learning algorithms had stayed in computer labs and academic institutions.

Yet, it seems that growing exposure to practical use cases of AI and related technologies doesn’t clarify the topic. Instead, AI and its implications continue to confuse many people, for a number of reasons.

Why Are People So Confused About Artificial Intelligence?

Recent advancements in the field, like computer vision (which we’re also using here at StopAd), have contributed to increased attention from the media. However, the resulting massive coverage hasn’t been helpful to the average consuming public (most of us), whose understanding of AI and related concepts remains vague.

In some cases, AI hype is used as a cheap trick by startups simply to get PR and publicity; this article describes how digital marketing executives spot cases where a reference to the technology is used merely as an extra selling point.

Sensationalist journalism leans on eye-catching headlines laden with tech terms turned buzzwords like “deep learning,” “AI-powered,” and “neural networks.” Clichés from apocalyptic Hollywood flicks featuring self-aware robots set on wiping out humanity are repeated by journalists, which cultivates misconceptions and generalizations about the field. This kind of sketchy content attracts attention, increases visits, and drives additional ad revenue for media outlets. The side effect, however, is similar to that of fake news: hidden behind clickbait headlines are inaccurate, often exaggerated claims that have little to do with reality.

Meanwhile, the case with tech visionaries isn’t much better.

Elon Musk believes that AI may be a threat to humanity if controlled by “a handful of major companies” and calls for proactive regulation “before it’s too late.” Stephen Hawking has voiced similar fears that AI will replace humans. On the other side, we have Mark Zuckerberg, calling Tesla’s CEO a “naysayer” for repeatedly expressing his negativity towards AI. Naturally, Musk responded: “I’ve talked to Mark about this. His understanding of the subject is limited.”

The fact remains, however, that neither Musk nor Zuckerberg is a trained AI researcher, and their exchange of pleasantries doesn’t provide much meaningful context. You’d expect that opinion leaders whose companies use Machine Learning and other AI-related technology would have a less black-or-white and more practical outlook on the state of the industry, acknowledging both the benefits and the possible issues that come with AI.

Futurists are another group worth mentioning when it comes to discussing AI. They hardly agree either.

What Are Futurists Saying About Artificial Intelligence?

AI is a key component in futurist predictions of what awaits mankind. Vernor Vinge, a computer science professor, sci-fi author, and mathematician, introduced the hypothesis of a “Technological Singularity.” In his 1993 essay “The Coming Technological Singularity: How to Survive in the Post-Human Era,” he argues that the acceleration of technological advancement will result in the creation of intelligence greater than human, with superintelligent computers as one example. Because such an entity could improve itself at an exponentially increasing rate, its existence would inevitably lead to an “intelligence explosion” that decisively surpasses human intelligence. Naturally, this event would render our existing concepts and understanding of the fundamental order of things obsolete.

Another prominent advocate of the Singularity is Google’s Ray Kurzweil, a computer scientist, inventor, and, of course, futurist. In one of his books, The Singularity Is Near, he argues that according to his “Law of Accelerating Returns,” growth in many areas, including technology, is exponential, in contrast to the “linear” outlook humans instinctively hold. According to Kurzweil, the growth rate builds up at a relatively moderate pace at first, only to skyrocket past a certain point. One example of this tendency is the growth rate of computational power, observed by Intel co-founder Gordon Moore and named after him.

Moore’s Law states that the number of transistors on a chip, and with it processing power, doubles roughly every two years, but the trend isn’t infinite: growth is expected to stall in the 2020s. Kurzweil predicts that computational power will keep growing in line with Moore’s Law as chip makers replace silicon CPUs with different technologies, such as nanotubes. Exponential advances in several scientific fields, including AI, will lead to the Singularity, when humans will be heavily augmented by non-biological intelligence and nanobots that maintain health and, eventually, enable immortality. Kurzweil predicts that the Singularity will occur circa 2045.
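To make the exponential-versus-linear contrast concrete, here is a minimal sketch in Python; the starting count and the annual increment are illustrative assumptions, not real transistor figures.

```python
# A minimal sketch of what "doubling every two years" means in practice.
# The starting count and annual increment are hypothetical, not real figures.

def exponential_growth(start, years, doubling_period=2):
    """Moore's Law-style growth: the quantity doubles every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

def linear_growth(start, years, added_per_year):
    """The intuitive 'linear' outlook: a fixed amount is added each year."""
    return start + added_per_year * years

start = 1_000_000  # hypothetical transistor count in year 0
for years in (2, 10, 20, 40):
    print(f"after {years:2d} years: "
          f"exponential = {exponential_growth(start, years):,.0f}, "
          f"linear = {linear_growth(start, years, start):,.0f}")
```

After 40 years the linear estimate has grown roughly 40-fold, while the doubling estimate has grown about a million-fold, which is the gap Kurzweil argues our intuition misses.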

Criticisms of Futurist Predictions

The futurists’ stance on AI and human fate has been criticized by colleagues in the scientific community. Microsoft co-founder Paul Allen warns against the shortcomings of Kurzweil’s “assertions.” Allen points out that predictions built on mere extrapolation ignore some of the crucial challenges in AI research, neuroscience, and other fields, such as software development. Besides the hardware computational power needed for superintelligence to exist by 2045, he stressed, equally capable software is a must. While not completely dismissing the possibility of the Singularity, Allen draws on several examples that question the validity of Kurzweil’s prediction, adding that the timing of fundamental discoveries and breakthroughs doesn’t fit Kurzweil’s model because they occur sporadically and are therefore hard to predict.

“Truly significant conceptual breakthroughs don’t arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to re-evaluate portions of what they thought they had settled. We see this in neuroscience with the discovery of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. These kinds of fundamental shifts don’t support the overall Moore’s Law-style acceleration needed to get to the singularity on Kurzweil’s schedule.”
—Paul Allen

What Are the Consequences of Confusion About AI?

Confusion caused by an overwhelming variety of sources and widely varying levels of expertise among those writing about AI is not without consequences. Misconceptions and misinformed opinions flourish, leaving us misguided and biased.

One reason is that not all views on AI reflect the field’s current state; many opt for a more sensationalist or futuristic stance instead. Many of the scientists currently working in the field rarely share their opinions outside of their circle. The discussion going on among them is often too complex for general audiences, so much depends on whether a journalist decides to reach out to researchers in the know for commentary.

But part of the problem also lies with us, the readers.

The confusion arises not just from the available information; it also stems from the questions we ask, or don’t ask. With complicated topics like AI or nanotechnology, the answers are rarely a simple “yes” or “no.” More importantly, science is not dogmatic, so every new bit of knowledge scientists discover may alter the state of things dramatically. What was deemed true yesterday may be disproved today by a single new detail.

To get a clear picture of what AI is, we need two things: reliable sources of information and some effort on our part to process that information. Let’s look at some of the most persistent myths about AI while highlighting the real state of things at present. To discuss these myths properly, we first have to address some terminology.

AI-related Definitions and Terminology

There is no unified definition of “intelligence” that everybody agrees on; the same goes for defining the AI field. Definitions vary based on background, and among academics, Pei Wang of Temple University notes that “multiple working definitions exist, and it will remain the case in near future.”

For practical purposes, we’ll refer to the definition of AI by John McCarthy:

It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

It’s worth pointing out that AI belongs to computer science, and it is an extremely broad concept that overlaps with many scientific fields from philosophy to physics.

Below are the definitions of some of the terms that will come up as you read on.

Weak AI refers to using the capabilities of AI to perform specific, narrow tasks. Examples include question-answering systems like Watson, self-driving cars, and recommendation systems. Weak AI may far surpass human abilities by using Natural Language Processing, Machine Learning, and so on, but it still needs to be trained on huge, labeled data sets before it can provide relevant answers about a given domain; this approach is called supervised learning.

Strong AI, also known as Artificial General Intelligence (AGI), is able to communicate and behave in a way that cannot be distinguished from that of a human being. It should have all the traits of humans: emotional intelligence, motivation, and sentience. It should also be able to understand which information it lacks and spot lies. As of today, AGI is an extremely distant prospect, if it is possible at all.

Machine Learning is a large subset of AI that studies how computers can learn from data and make predictions using algorithms, without being explicitly programmed. The algorithms improve over time as new data arrives to learn from, producing more accurate output. Machine Learning is crucial for predictive data analytics, which is used across many industries from finance to programmatic advertising.
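As a rough illustration of the supervised learning mentioned above, here is a minimal sketch; the toy data, the feature choice, and the use of scikit-learn are assumptions made purely for the example.

```python
# A minimal supervised-learning sketch: the model learns a mapping from
# labeled examples instead of being explicitly programmed with rules.
from sklearn.linear_model import LogisticRegression

# Toy, made-up training data: [hours listened, tracks skipped] -> liked the playlist (1) or not (0).
X_train = [[10, 1], [8, 0], [9, 2], [1, 9], [2, 7], [0, 8]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # "training": fit the model's parameters to the labeled data

# Predict labels for listeners the model has never seen; more (and better)
# training data generally yields more accurate predictions.
print(model.predict([[7, 1], [1, 6]]))  # expected output along the lines of [1 0]
```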

The Most Prevalent Myths About Artificial Intelligence

“Science must begin with myths, and with the criticism of myths.” — Karl Popper

In case you’ve been exposed to some popular misconceptions about AI, you should know that the field is more concerned with complicated math, algorithms, and learning patterns than with bringing an end to the world. Still, if some myths occupy your thoughts, read on.

Myth #1: AI Will Take Away Our Jobs

Truth: Not really. Jobs will shift.

Greater automation and the increased cost effectiveness of AI implementation are certainly a big plus for companies. In fact, there have already been some layoffs. However, this Luddite outlook on the situation is a bit of an overreaction. The current state of AI doesn’t allow it to fully replace workers even at already automated facilities, like car plants, because assembly-line robots won’t last long without human oversight. Jordan Bitterman, who works with Watson Content at IBM, pointed out that the introduction of the automobile may have hit certain trades (the horse trade and carriage sales, for instance), but it also created the automobile industry and vast suburban infrastructure.

Some lower-end jobs that require narrow skill sets and rely on generally repetitive tasks, such as cashiers, will most likely be automated quite soon, as will entry-level positions like paralegals. But in most cases, AI will serve as an augmenting factor. That is, AI will most likely take on burdensome and distracting tasks that would otherwise fall to human workers. Forbes compiled a list of jobs that aren’t going to be automated by AI anytime soon. A lot of them require human-to-human contact or a level of sentience of which weak AI is incapable.

In my article on online security trends for 2018, I mentioned an increased emphasis on AI in cybersecurity. The truth is that cybersecurity companies are struggling to hire security researchers, since the number of candidates falls far short of the demand for their skills. AI helps companies effectively monitor and even predict potential new threats, which researchers then handle. This approach streamlines security tasks and compensates for a limited labor market. Similar kinds of human-machine cooperation are already practiced in healthcare, providing doctors with insights based on machine learning.
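As a rough sketch of what such monitoring can look like in practice, an unsupervised anomaly detector can flag unusual events for a human researcher to review; the features and library choice below are purely illustrative assumptions, not any particular vendor’s approach.

```python
# Illustrative sketch: flag anomalous network events for a human researcher to review.
# The features (bytes sent, failed logins, requests per minute) are hypothetical.
from sklearn.ensemble import IsolationForest

normal_traffic = [
    [500, 0, 20], [620, 1, 25], [480, 0, 18], [550, 0, 22],
    [600, 1, 24], [510, 0, 19], [580, 0, 23], [530, 1, 21],
]

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_traffic)  # learn what "normal" activity looks like

new_events = [[540, 0, 20], [90000, 45, 900]]  # the second event looks suspicious
print(detector.predict(new_events))  # 1 = looks normal, -1 = flag for review
```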

Stephane Kasriel, CEO of Upwork, mentions a few possible changes that AI will bring. He emphasizes a requirement for persistent self-education, stating that “traditional college degrees no longer lead to long-term employment opportunities—fresh training on new skills is much more impactful.”

Myth #2: AI Will Outsmart Us and Eventually Take Over

Truth: Highly unlikely for the foreseeable future.

We have already discussed Elon Musk’s grim predictions, and while there isn’t 100% certainty that this won’t ever happen, chances are next-to-none for the foreseeable future. This piece by Ken Goldberg, professor of engineering at Berkeley, tackles alarmist theories based on the widely criticized fallacy of Kurzweil’s extrapolation of Moore’s Law.

Furthermore, Artificial General Intelligence capable of making a malicious judgement to destroy or enslave humanity isn’t something we should worry about. While some narrow implementations of weak AI can beat us at Jeopardy or run massive data analysis, it still can’t boast the capabilities of the human brain when it comes to creative thinking, emotional intelligence, motivation, etc. I’m more concerned about IoT smart assistants being attacked by human-made malware that may eavesdrop on my activity.

Myth #3: AI and Robots Are the Same

Truth: No.

They are, in fact, quite different things. Robotics is a completely separate field. Robots are physical devices explicitly programmed to perform a fixed set of tasks, while AI is software technology. Occasionally we see AI-powered robots, which is what causes the confusion, but it’s important to understand that AI is software and robots are hardware.

Each of these myths is based on some unrealistic or exaggerated claim that becomes imprinted in our minds and is reinforced by stereotypes from popular culture. The image of AI has become so distorted that, in the public mind, what was once a field of computer science now seems closer to the supernatural arts.

In part, this is due to the massive scale of change that AI adoption is bringing to many industries. Other reasons include a lack of reliable information about progress in the field and the complicated concepts one has to grasp in order to discern truth from AI hype. For example, many are likely surprised to learn that AI emerged as a subset of computer science back in the 1950s, more than six decades ago, not a few years ago. Meanwhile, a decent understanding of the history of AI can significantly improve critical thinking around the topic. Most laypeople are simply poorly educated on the subject. Incidentally, this article presents a brief overview of the field with its problems, stalemates, and milestones, of which there have been a few over the years.

As AI implementation progresses, it is crucial to avoid biased opinions by developing a habit of self-education in order to stay up to date with the latest developments in research. Doing this, however, means getting good information.

Where Can I Get Accurate Artificial Intelligence News?

There is no doubt that AI has many more applications that will benefit us as end users. But adoption would occur much faster if our perspective were unbiased.


Author: George Paliy
