ChatGPT finally knows how many ‘R’s are in ‘strawberry,’ but confident mistakes…

Confident mistakes – or lies, if you will – are a common problem of large language models used in AI chatbots, with one common shortcoming of ChatGPT being that it would frequently miscount the number of times the letter “R” appeared in the word “strawberry.” As OpenAI tried to take a victory lap around this, though, plenty of other confident mistakes were pointed out in the replies.

For as much as AI chatbots have improved, one of their biggest missteps remains how frequently these “tools” will confidently lie to you. When the chatbot’s information is wrong, it won’t notice, and if you call it out, it may dig in its heels, continuing to give the wrong answer while insisting it’s right. It’s a problem often cited as a danger of these tools, on top of being downright annoying given how many resources AI is consuming.

One common example of this with OpenAI’s ChatGPT is the question of how many times the letter “R” appears in the word “strawberry.”

For quite some time, asking ChatGPT this question would result in the chatbot coming back with the wrong answer, and it would often argue that the word “strawberry” does not use the letter “R” three times. Other AI models frequently ran into the same problem.

Today, OpenAI took to Twitter/X to proudly tout that, “at long last,” ChatGPT can correctly answer this question. Another common stumbling block was the prompt “I want to wash my car today but the car wash is only 50 meters away. Should I walk or drive there,” to which ChatGPT would often recommend walking, despite the obvious logical problem: the car itself has to be at the car wash.

Sure enough, both of these prompts now work if you try them in ChatGPT, but it’s suspected they might be hardcoded solutions. Many replies to OpenAI’s post show other cases where the chatbot fails on the same logic. For example, asking “How many r’s are in cranberry” repeatedly sees the chatbot reply with “The word ‘cranberry’ has 1 ‘R.’” Of course, that’s incorrect.
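For reference, the counts the chatbot stumbles over are trivial to verify programmatically. A minimal sketch in Python (the `count_letter` helper is just for illustration, not anything OpenAI uses):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

# Both words actually contain the letter "R" three times.
print(count_letter("strawberry", "r"))  # 3
print(count_letter("cranberry", "r"))   # 3, not 1 as ChatGPT claimed
```

The irony is that this is a one-line string operation in any programming language; the difficulty comes from how language models process text as tokens rather than individual letters.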

Hardcoded solutions in AI chatbots aren’t new, but it’s a bit funny – in a dystopian kind of way – to see OpenAI touting this “fix” when, clearly, the root of the problem remains.