Compared to the chatbots that came before it, ChatGPT is simply astounding. It’s dangerous, though, to think that it can do everything. Understanding some of its limits will help us see what it can and can’t do.
ChatGPT has limitations in understanding context, retrieving knowledge, and creativity; in recognizing sarcasm, irony, and emotion; and in the kind of logical reasoning that programming demands. Understanding these limitations is crucial to using ChatGPT effectively.
So do you really want to know about some of ChatGPT’s failings? Let’s take a look at a few of them and how they might affect the work you do with this AI.
Table of Contents
- ChatGPT’s Biggest Challenge – Understanding You In Context
- Limitations in Knowledge Retrieval – It’s What You Don’t Know…
- Limitations of Creativity and Innovation
- The Good News
ChatGPT’s Biggest Challenge – Understanding You In Context
In trying to understand what we write, ChatGPT is performing natural language processing (NLP). There are several issues ChatGPT runs into when interpreting our words.
ChatGPT struggles to fully grasp the context of human language.
Example: “I went to the bank.”
Without a contextual clue, it’s hard to know whether you went to withdraw money or to a riverbank. Usually, ChatGPT can work out what you meant from the rest of the chat.
Humans are still better at figuring out contextual clues.
When ChatGPT typically fails:
- If the important contextual clue is implicit rather than explicit. For instance, “My mother has a house close to the Rhine.” (The Rhine is a river, but the sentence never says so.)
- Or if the contextual clue sits many words away from the ambiguous sentence, ChatGPT might not make the connection.
ChatGPT Has Problems Recognizing Sarcasm
Example: “Oh Super…”
Once again, a human would recognize from context whether the tone of the statement is sarcastic. ChatGPT is more likely to take the statement literally.
- This type of misunderstanding can lead ChatGPT to respond as if a negative result were a positive one, reinforcing the wrong outcome.
ChatGPT Has Trouble Interpreting Human Emotions
Example: “I’m so happy.”
Although ChatGPT is being used for therapy, by its own admission it’s not good at understanding emotion. Even the phrase “I’m so happy” can be misinterpreted. Here’s what ChatGPT itself says about how it can misinterpret even simple statements of emotion:
When ChatGPT doesn’t fully understand the emotions being expressed, it can lead to inconsistent or inaccurate responses in several ways. For example:
If a user expresses happiness, but ChatGPT only recognizes the word “happy” and not the full context of the statement, it may respond with generic, surface-level information about happiness rather than a personalized, relevant response.
If ChatGPT is unable to accurately detect the emotions being expressed, it may misinterpret the sentiment of the statement and provide a response that is inconsistent with the speaker’s intended meaning.
The lack of understanding of emotions can also lead to a lack of empathy in ChatGPT’s responses, making it difficult for the model to build a relationship with the user. This can negatively impact the overall user experience and make it harder for ChatGPT to effectively communicate with users.
These are just a few examples of how the limitations in detecting human emotions and sentiment can lead to inconsistent or inaccurate responses from ChatGPT. Understanding these limitations is important for improving the overall effectiveness of the model and ensuring that it is being used in the most appropriate and effective way possible.
ChatGPT is Not Consistent In How It Answers Questions
If you looked at how a human being answered questions on different days, you would probably find that they’re inconsistent. For example, ask somebody what their favorite food is: one day they might say steak, and another day they might answer milkshakes.
ChatGPT suffers from the same type of inconsistency. Occasionally the inconsistency affects factual questions like “What is the capital of France?”
So what’s going on? ChatGPT answers your questions depending on what you’ve already been chatting about. For instance, if you’ve been chatting about World War I and you ask “What is the capital of France?”, you might get the answer “Bordeaux”, since the French government did in fact relocate to Bordeaux during World War I.
In this case, the answer was not incorrect, but it was probably not what you were expecting. This happens in part because ChatGPT works only through pattern recognition and probability. When you’re talking about World War I, one of the facts that becomes prominent is where the French capital was.
It’s important to keep in mind when you’re using ChatGPT that your prior chat will affect the answers that it gives you.
There are other ways ChatGPT might answer the same question differently. For instance, if you ask ChatGPT “What is the meaning of love?” you might get a different answer each time. This is not inherently a problem with ChatGPT; the answer itself is simply not well defined in the text that ChatGPT has analyzed.
Limitations in Knowledge Retrieval – It’s What You Don’t Know…
ChatGPT was trained on a vast amount of data, but there are still limits. Several specific limitations show up when it comes to retrieving information.
Limits on access to information
There are many subjects where ChatGPT will not necessarily know the answer. This can happen because the information is not available within the data that ChatGPT has analyzed.
For instance, this might happen when you’re asking about a rare or obscure topic. It also frequently happens when you ask for specific URLs or references.
Inconsistency
Sometimes the information that ChatGPT references is inconsistent: different sources give different answers to the same question. ChatGPT at times has difficulty determining which source is more authoritative or reliable.
Limited access to current information
ChatGPT has a knowledge cutoff of 2021; that is when the dataset used to train it was finalized. Any events, products, or other developments that have occurred since 2021 cannot be referenced by ChatGPT.
There is a way around this problem: furnish the information to ChatGPT yourself. You can do that by pasting articles into the chat and asking it to read them, as in the sketch below. This allows ChatGPT to reference more current information about a subject.
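In practice that just means putting the source text in front of your question. Here’s a minimal sketch of the pattern in Python; the article text, question, and variable names are all placeholders, not a real API:

```python
# Sketch of "furnishing" current information to ChatGPT: put the source
# text ahead of your question so it has something recent to reference.
# The article text and question below are placeholders.
article = """(paste the full text of a recent article here)"""
question = "Based on the article above, what are the key developments?"

prompt = f"{article}\n\n{question}"
print(prompt)  # paste the combined prompt into a ChatGPT session
```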
It should also be mentioned that because ChatGPT draws on a large amount of data, some of which is not 100% reliable, you should check everything it writes for you. You will need to be ChatGPT’s fact-checker if you’re using it in any scholarly or professional context.
Limitations of Creativity and Innovation
One of the ways I think about ChatGPT is as a cut-and-paste bot. What I mean is that it takes information from lots of places and pastes it together to answer a question. For lots of questions that’s fantastic, but there are limits.
Limits on original thought
ChatGPT depends on the data that has already been encoded in its neural network. It often generates great responses from that data, but they’re not original.
- The ideas it presents may combine two concepts that have never been put together before, but neither concept itself would be original.
- So we see that ChatGPT is capable of synthesis but not of innovation.
Difficulty with abstract concepts
ChatGPT does not know how to interpret questions like “What is beauty?” Beauty is an abstract concept, and although ChatGPT has analyzed data about the concept, it has no inherent ability to understand it.
So what do I mean by that?
You may have studied electricity in school well enough to understand how a light bulb works. The equations used to describe electricity and magnetism, though, are probably something you’ve never studied. You understand the effects but not the underlying reality the equations represent.
ChatGPT has a similar problem. It can tell you what people say is beautiful, but it doesn’t understand beauty itself.
Limits on generating new ideas
Although ChatGPT is good at putting together information that is already in its neural network, it’s not very innovative. It’s limited to the training data it already has and to combining the concepts found there. It does not generate concepts on its own.
Because ChatGPT can’t create new concepts, it’s left with synthesizing existing ones. That’s a limitation, but its ability to pull concepts from many different sources is still powerful.
For instance, in programming, ChatGPT can be used to create subroutines. Given a specification of what a program is supposed to do, it can generate a small program by putting together code patterns from multiple sources. That allows it to create a program that’s never been written before.
In this example, ChatGPT is not writing code per se; it’s cutting and pasting different pieces of code together and making them fit. Many times this is helpful, but you will always need to check the code to make sure it actually works, as in the hypothetical example below.
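For example, suppose you asked for a subroutine that averages the values in a list above some threshold. The sketch below is a hypothetical example of the kind of code you might get back (not actual ChatGPT output), together with the checks you should run yourself:

```python
# Hypothetical example of the kind of small subroutine ChatGPT can
# assemble from patterns in its training data.
def average_above_threshold(values, threshold):
    """Return the average of the values greater than threshold."""
    kept = [v for v in values if v > threshold]
    if not kept:
        return None  # an edge case that generated code often misses
    return sum(kept) / len(kept)

# Never skip this step: verify the generated code actually works.
assert average_above_threshold([1, 5, 10], 4) == 7.5
assert average_above_threshold([1, 2], 100) is None
```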
Why? Because of the final limitation.
Limitations on logical thinking
ChatGPT operates on the information that has been encoded in its neural network. This allows it to use pattern recognition to predict what a response should be. In other words, ChatGPT uses probability to determine the most likely answer.
That works well when the training data contains patterns that let it predict the answer.
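As a toy illustration of what answering by probability means (this is not ChatGPT’s actual architecture, which is a neural network, just the general idea), imagine picking whichever answer most often followed a given prompt in the training data:

```python
from collections import Counter

# Toy illustration only: the underlying idea is "pick the most
# probable continuation given the patterns seen in training".
training_examples = [
    ("the capital of France is", "Paris"),
    ("the capital of France is", "Paris"),
    ("the capital of France is", "Bordeaux"),  # e.g. from WWI-era texts
]

counts = Counter(answer for prompt, answer in training_examples
                 if prompt == "the capital of France is")
print(counts.most_common(1)[0][0])  # -> "Paris", the most frequent pattern
```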
Logic problems, however, often will not fit existing patterns in the data that ChatGPT was trained on.
For instance, consider this logic problem:
Five people were eating apples. A finished before B, but behind C. D finished before E, but behind B. What was the finishing order?
The right answer is C, A, B, D, E.
Here’s ChatGPT’s answer:
A, C, B, D, E
ChatGPT gets it wrong. Why?
ChatGPT gets it wrong because it doesn’t analyze the problem. Instead, it uses pattern recognition to guess the most probable answer. Since this particular answer is not in its training data, it has no way of working it out.
In other words, if it can’t look the answer up, ChatGPT is out of luck.
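By contrast, a conventional program gets this right by actually checking the constraints instead of matching patterns. Here’s a short Python sketch that brute-forces the puzzle:

```python
from itertools import permutations

# The puzzle's constraints: C before A before B, and B before D before E.
def satisfies_constraints(order):
    pos = {person: i for i, person in enumerate(order)}
    return (pos["C"] < pos["A"] < pos["B"]
            and pos["B"] < pos["D"] < pos["E"])

# Try every possible finishing order and keep the ones that fit.
for order in permutations("ABCDE"):
    if satisfies_constraints(order):
        print(", ".join(order))  # prints C, A, B, D, E
```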
The Good News
ChatGPT is getting better. OpenAI, the creator of ChatGPT, is interested in improving its ability to program, for instance. Other innovations will be added over time, I’m sure. Fundamentally, though, there will always be limitations to this type of AI.