Why AI Still Can't Spell

Why is AI so bad at spelling? Put an AI up against some middle schoolers at a spelling bee, and it'll get knocked out faster than you can say "diffusion." The underlying technologies behind image and text generators are different, yet both kinds of models stumble over the same details, spelling included. And, Guzdial says, if we look closely enough, it's not just fingers and spelling that AI gets wrong. Though these models are improving at a remarkable rate, they are still bound to run into issues like these, and that limits what the technology can do.

AIs are on a roll these days – acing the SATs, beating chess grandmasters, and debugging code. But there's one thing they still struggle with: spelling. For all their advancements, these models just can't seem to get words right. Ask an AI to create a menu for a Mexican restaurant, and you might spot some curiously spelled dishes like "taao" and "burto" amid the chaos.

“For all the advancements we’ve seen in AI, it still can’t spell.”

But don't be fooled by an AI's ability to write your papers for you. Ask it for a ten-letter word that contains neither "A" nor "E," and one model answered "balaclava" – a word that is only nine letters long and full of A's. Even companies like Instagram have had mishaps with their AI, generating stickers with questionable words.
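What makes the "balaclava" flub striking is that the constraint is trivial to check mechanically. A minimal sketch (the second test word is just an illustrative valid answer, not one any model produced):

```python
# Check the constraint from the anecdote above: exactly ten letters,
# containing neither "A" nor "E".
def satisfies_constraint(word: str) -> bool:
    return len(word) == 10 and not set(word.upper()) & {"A", "E"}

print(satisfies_constraint("balaclava"))   # False: nine letters, four A's
print(satisfies_constraint("touchdowns"))  # True: ten letters, no A or E
```

A few lines of code verify instantly what the model got wrong, which is exactly the gap between pattern-matching and rule-following that the rest of this piece explores.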

“Image generators tend to perform much better on artifacts like cars and people’s faces, and less so on smaller things like fingers and handwriting.” – Asmelash Teka Hadgu

The underlying technologies behind image and text generators may differ, but both struggle with the same details, including spelling. Image generators use diffusion models, which reconstruct an image from noise, while large language models (LLMs) can seem to think and respond like a person but are actually using complex math to match patterns in text and generate responses.

“Even just last year, all these models were really bad at fingers…they’re getting really good at it locally, but they’re still bad at structuring these whole things together.” – Matthew Guzdial

These models may be able to generate complex text and images, but they still have trouble with the basics. This is because they rely heavily on their training data, and while engineers can train models to recognize certain aspects like hands, they can’t possibly account for every detail in a language as complex as English.

Some models, like Adobe Firefly, have opted to simply not generate text at all, instead producing blank images when a prompt calls for writing. But as prompts become more specific and detailed, these guardrails become less effective.

“You can think about it almost like they’re playing Whac-A-Mole…they keep adding new things to address, but text is a lot harder.” – Matthew Guzdial
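The brittleness of this kind of guardrail can be sketched with a toy prompt filter (hypothetical – not how Firefly or any real product is implemented): it catches the obvious requests for text, but a slightly more specific prompt slips straight past it.

```python
# Toy keyword guardrail: blank the image if the prompt obviously
# asks for rendered text. Keyword list is invented for illustration.
TEXT_KEYWORDS = {"text", "word", "letters", "sign", "menu", "caption"}

def should_blank(prompt: str) -> bool:
    return any(kw in prompt.lower().split() for kw in TEXT_KEYWORDS)

print(should_blank("a menu for a Mexican restaurant"))        # True: caught
print(should_blank("a chalkboard listing today's specials"))  # False: slips past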

These shortcomings have an upside: they can help people identify AI-generated fakes and misinformation. People with expertise in certain areas, like music or language, may spot flaws in AI-generated images and text that the average person would miss. And while these models will keep improving, they will always have issues to some degree.

“These models are making small, local issues all the time – it’s just that we’re particularly well tuned to recognize some of them.” – Matthew Guzdial

So, while AI may continue to amaze us with its abilities, let’s not forget its limitations and keep the hype in check. After all, as Hadgu says, “the kind of hype that this technology is getting is just insane.”

Dylan Williams

Dylan Williams is a multimedia storyteller with a background in video production and graphic design. He has a knack for finding and sharing unique and visually striking stories from around the world.
