Did AI Trip Over 'Strawberry'?


29 AUG 2024 / FINTECH & AI


Artificial intelligence is all the rage these days, with everyone raving about its incredible potential to change our world. But here’s a head-scratcher: did you know that even the smartest AI models, like GPT-4 and Claude, sometimes can’t count the letters in a simple word like "strawberry"? Sounds like the setup for a joke, right? But it’s true, and screenshots of the mix-up have made the rounds on social media.

Curious about what's going on here? Why are these advanced systems getting tripped up by something so straightforward? Let’s dig a little deeper. 

What’s Under the Hood of These AI Models?

To understand why AI can’t spell “strawberry” correctly, we need to take a peek under the hood. Most large language models (LLMs), like GPT-4, are built on the transformer architecture. Transformers have revolutionized natural language processing, but they don’t work on raw text: a tokenizer first breaks the text down into small chunks known as tokens. Tokens can be whole words, parts of words, or even single characters, depending on the model’s design.

Here’s where things get a bit tricky. Transformers don’t “read” the way humans do. When we see the word "strawberry," we recognize it letter by letter: "s," "t," "r," "a," "w," "b," "e," "r," "r," and "y." The AI, however, sees "strawberry" as one or a few tokens drawn from a vocabulary learned from mountains of data. This method lets it generate coherent and relevant responses, but it also means it can struggle with tasks that require a character-level view of text, like counting specific letters.
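To make this concrete, here’s a minimal sketch using OpenAI’s open-source tiktoken library (our choice for illustration; the hosted models don’t expose their internals, but tiktoken implements the same style of byte-pair encoding they rely on). The point isn’t the exact split, which varies by encoding, but that the model receives a few opaque chunks rather than ten separate letters:

```python
# Illustration: how a GPT-style tokenizer carves up a word.
# Requires the open-source "tiktoken" package (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by recent OpenAI models

word = "strawberry"
token_ids = enc.encode(word)                      # a short list of integer IDs
pieces = [enc.decode([tid]) for tid in token_ids]  # the text each ID stands for

print(token_ids)  # a handful of numbers, not ten letters
print(pieces)     # the sub-word chunks the model actually "sees"
```

Run it and you’ll notice the three "r"s never show up as separate items in that output; anything the model says about letter counts has to be inferred indirectly from patterns it learned in training.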

Fun Fact: The word "strawberry" dates back to Old English, and one popular explanation is that it originally meant something like "strewn berry," after the runners the plant strews across the ground. Imagine the AI trying to untangle that puzzle!

Why Does Tokenization Trip Up AI?

Tokenization is both a superpower and a kryptonite for these models. It’s the process of breaking text down into smaller, digestible pieces that the model can handle efficiently. In English, spaces conveniently mark word boundaries, which makes tokenization relatively easy. But in languages like Chinese or Japanese, where spaces don’t define word boundaries, tokenization becomes a real headache and can lead to misunderstandings.

Even within a single language, tokenization isn’t foolproof. When a model is asked to spell "strawberry" or count its letters, it has to reason about characters it never directly sees: the word arrives as a handful of sub-word chunks rather than ten individual letters, so letter-level questions fall between the cracks. It’s like trying to solve a jigsaw puzzle without knowing what the final picture should look like. You could say AI sometimes ends up “all over the map” when trying to figure this stuff out.
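For contrast, here’s a tiny, purely illustrative snippet showing why the task is trivial once you work with characters directly, which is exactly the view a token-based model lacks:

```python
# Counting letters is easy when you operate character by character.
word = "strawberry"

print(word.count("r"))  # 3 -- an exact, deterministic answer
print({letter: word.count(letter) for letter in sorted(set(word))})  # full letter tally
```

A three-line script gets the count right every time because it sees the string one character at a time; a language model, working over tokens and statistical patterns, has no such guarantee.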

What Does This Mean for AI’s Thinking Abilities?

So, what does it all mean when an AI flubs something as simple as counting letters? It points to bigger challenges in how AI models reason. Sure, these models are champs at processing tons of data and spitting out human-like text, but they hit a wall when asked to perform tasks that require logic, context, or common sense. 

Think of it this way: an AI might write a fantastic essay on quantum physics, but ask it to play tic-tac-toe, and it might not know which way is up. Why? Because tasks like puzzles or games require an understanding of rules, strategy, and sometimes, a bit of intuition. Researchers are now looking into ways to boost AI reasoning by developing specialized models and adding more diverse training data. 

As leading AI researcher Andrew Ng has put it, “Machines are good at pattern matching, but they still don’t know how to think.”

What’s the Deal with OpenAI's Project "Strawberry"?

Seeing these challenges, OpenAI isn’t just twiddling its thumbs. The company is reportedly working on a project codenamed "Strawberry" to tackle these very issues. According to sources, Project Strawberry aims to enhance AI’s reasoning capabilities, enabling it to solve more advanced problems, like complex math, or to navigate the internet and conduct research on its own.

Strawberry represents a big leap forward in AI development, especially in reasoning, where even advanced models like GPT-4 and Claude often stumble. The goal? To build AI that can handle long-horizon tasks (LHT) that require planning and decision-making over extended periods. Think of an AI that could autonomously gather information, synthesize data, and make strategic decisions without human intervention. Pretty neat, right? If Strawberry succeeds, it could be “the next big thing” in AI! 

What Can Professionals Learn from This?

So, why should this matter to you as an accounting, tax, or finance professional? For one, it highlights the current limitations of AI, reminding us that these models aren’t infallible and still require human oversight. While AI can assist with tasks like data analysis or report generation, it may still struggle with tasks requiring logical reasoning or domain-specific expertise. Knowing this can help you better understand where AI can add value and where it might still need a human touch. 

Will AI Ever Learn to Think?

Projects like Strawberry are just the tip of the iceberg in the quest to make AI models more reliable and capable of reasoning like humans. But don’t hold your breath; there’s still a long road ahead. Tokenization, understanding context, and reasoning are complex challenges that even the smartest algorithms are still trying to crack. 

OpenAI’s Project Strawberry offers a glimpse into what the future might hold for AI reasoning, but there are no quick fixes here. Researchers will need to keep innovating and refining these technologies before AI can think like a human—let alone spell “strawberry” correctly. Until then, it’s safe to say AI still has a few “kinks to work out.” 

So, next time you hear someone talk about the magic of AI, just remember: even the smartest AI is still figuring out the basics. And maybe, just maybe, that’s what makes it more fascinating.  Stay tuned for more interesting stories. And don't forget to subscribe to our weekly newsletter for updates.
