What Can’t AI Do Yet? Concerns and Limitations of the World-Changing Technology

The Evolution of AI from the 1960s to Today

Back in the 1960s, Artificial Intelligence (AI) was believed to be right around the corner. Significant funds were invested. Systems were built. But, results were disappointing and interest faded for a long period.

Understanding Machine Learning: The Core of Modern AI


Today, in 2021, AI can do much more. It evolves quickly, and can now recognize faces, transcribe speech, detect anomalies, and more. But, there’s a lot it still can’t do. In fact, it still can’t do most things that people would think of as true “intelligence.”

This is because the main advancement in AI over the last decade is using an approach called “machine learning,” which is exactly what it sounds like: a machine learning how to perform tasks based on the examples it has seen in the past.

Machine Learning in Action: Practical Examples

For example, if you want a machine to learn how to change certain words and sentences from present tense to past tense, then you need to feed the machine a number of “correct” examples. This step is called “training” in machine-learning jargon. Generally speaking, the more examples you give the machine, the more accurate it becomes. Then, it can repeat the same task for as many new sentences as needed, even (of course) for sentences it has never seen. The nice thing is that because the machine is “a machine,” it doesn’t get confused. It doesn’t get tired. And it can continue to process example after example—unlike a human being who needs to rest, eat, and sleep.
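
To make that “training” step concrete, here is a minimal sketch in Python using scikit-learn. It simplifies the tense example into a yes/no question the machine can learn from labeled examples (does this verb form its past tense with “-ed”?); the verbs, labels, and model choice are illustrative assumptions, not a real production setup, and a full present-to-past rewriting model would need far more data.

```python
# A minimal sketch of "training": labeled examples in, a learned rule out.
# Toy task: predict whether a verb forms its past tense with "-ed".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Correct" examples the machine learns from (1 = regular "-ed" past tense).
verbs  = ["walk", "jump", "play", "call", "work", "go", "eat", "run", "see", "take"]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Character n-grams let the model pick up spelling patterns in the verbs.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(verbs, labels)

# The learned rule can now be applied to verbs the machine has never seen;
# with only ten training examples the predictions will be rough, and
# accuracy generally improves as more examples are added.
print(model.predict(["talk", "swim"]))
```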

For well-defined tasks, machines are vastly more efficient and effective than humans.

Limitations of Current AI and Machine Learning

Based on the success of this approach, one may think that this is true “intelligence.” It’s not.

When it comes to doing things that are not well-defined, or for which it’s hard to tell whether an answer is “correct” or not—like creative thinking or assessing brand new situations—humans are much better. 

Many people would define intelligence as the ability to understand what to do even in situations where we have no previous examples or experience to draw from. We humans have up-to-date world knowledge—what apples taste like, what pianos sound like, and so on—to help us, which machines don’t yet have.

So, in most cases, AI isn’t about replacing humans with machines. Rather, it is about automating certain tasks (usually the repetitive kind) and unlocking society’s potential to spend more time, energy, and resources furthering other forms of our intelligence, specifically the thinking that occurs when there is no prior knowledge available.

There are clear limitations to what AI and machine learning can currently do.

For example:

  1. Machines cannot learn how to do something without clear, replicable examples.

Writing a good story would be very difficult for a machine to learn.

The reason is that there aren’t good “training sets”: there are many stories out in the world, but not very many data sets that clearly evaluate, measure, and explain why one story is “good” and another is “bad.” Stories are largely subjective, which makes it difficult for a machine to discern the difference on its own.

For example, at my company Gong, our platform analyzes sales conversations and can detect action items that come up in the conversation. We’ve trained the system using hundreds of examples. But, the engine still can’t accurately determine whether a product demonstration is a great one or not. 

With photos, for example, machines have become pretty good at generating images of “fake people,” based on other photos the machine has seen. But, when it comes to generating something “good” or “interesting,” things get much harder. In a branch of AI called Natural Language Understanding (NLU), the latest, much-discussed technology is called GPT-3. It can complete sentences and write whole paragraphs. For example, GPT-3 wrote this entire article, and while that is an amazing feat, it’s still far from anything that would be considered incredibly new or different.
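
As a rough illustration of that kind of sentence completion: GPT-3 itself is only reachable through OpenAI’s API, but the freely available GPT-2 model can be run locally with the Hugging Face transformers library to show the same idea on a smaller scale. The prompt below is just an example.

```python
# Sentence completion with a smaller GPT-2 model standing in for GPT-3.
# Requires the Hugging Face "transformers" package (and a model download).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with statistically likely next words.
result = generator("Back in the 1960s, artificial intelligence was", max_new_tokens=30)
print(result[0]["generated_text"])
```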

So, while the Gong platform can recognize topics discussed on a sales call (because it’s seen numerous such calls), it’s still far from handling a call on its own.

  2. The real advancements in AI haven’t been in “creative thinking,” but in accuracy and efficiency.

The core approach of AI today isn’t radically different from what it was 10 or 20 years ago.

There are two substantial differences. First, thanks to the internet, we now have access to much more data and can train machines with millions, sometimes billions, of examples. Second, along the way we have learned how to optimize the learning process and the algorithms. For example, deep neural networks have allowed us to create AI-based systems that can benefit from huge amounts of data. And, we’ve become very proficient in the end-to-end process of finding examples, labeling them, training, measuring, and optimizing.
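
Here is a rough sketch of that end-to-end loop, using scikit-learn and synthetic data as stand-ins for real labeled examples; the model size and numbers are arbitrary assumptions made for illustration, not a description of any production system.

```python
# Find examples -> label -> train -> measure -> optimize (synthetic data here).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Stand-in for a collection of labeled examples gathered from real data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Hold some data back so we can measure, not just train.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a small neural network; with more (and better-labeled) data,
# the same loop generally yields higher accuracy.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

# Measure, then go back and improve the data, the labels, or the model.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```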

In some cases, the incremental improvement has resulted in a drastic change, practically speaking. The difference between a camera that identifies 50% of license plates and one that identifies 99.9% is the difference between a system that’s not useful and something that replaces human labor. 

Similarly, in the case of Gong, we transcribe sales video calls. Ten years ago, accuracy was 60% or 70%, which meant the transcription was not good enough to read. With today’s technology, humans can read a call’s transcript, which is significantly faster than listening to a call.

But, the improvement is fundamentally in efficiency, and not necessarily in the machine’s true ability to “think” on its own.

  3. Because AI learns from the past, the biggest concern with the technology is bias.

If a machine is given images and trained to identify males and females, that is a fairly objective goal, and with enough training, the machine will achieve a very high level of accuracy.

However, if you give a machine images of employees and try to train it to identify which have been “good employees” at the company, the system is going to use the same approach as above to try to solve the equation. So if there has been historical race- or gender-related bias within the company, the machine is going to keep using that bias as a “rule” to learn from—and will make decisions accordingly.
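
A toy sketch of that effect, with synthetic data rather than real HR data: the historical “good employee” labels below are generated with a built-in penalty against one group, and a model trained on them absorbs that penalty as if it were a genuine rule.

```python
# How biased historical labels become a learned "rule" (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

performance = rng.normal(size=n)      # what we would like decisions to depend on
group = rng.integers(0, 2, size=n)    # attribute that should be irrelevant

# Historical "good employee" labels were biased: group 1 was rated down.
biased_labels = (performance - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

X = np.column_stack([performance, group])
model = LogisticRegression().fit(X, biased_labels)

# The model assigns a strongly negative weight to group membership:
# it has learned the historical bias, not just job performance.
print("weight on performance:", model.coef_[0][0])
print("weight on group:      ", model.coef_[0][1])
```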

This is the fundamental pro and con of machine learning today. It is only as “smart” as the data it is trained on—and that data only reflects what has already happened in the past. AI practitioners are working hard to find ways to mitigate this bias, but it’s far from straightforward, because the problem is fundamental to how machine learning works.

Which means machines are not likely to quickly become the change-makers we would like them to be for humanity.
