Big names in AI like OpenAI (the company behind ChatGPT) and others are starting to hit a roadblock when it comes to creating smarter AI. Their initial tactic was to keep feeding their large language models more and more data, on the assumption that more would make them smarter.

Well, it turns out that approach might be running out of steam. These AI companies are now realizing that shoveling ever more data and computing power into an LLM won't cut it anymore. They are now looking for help and for innovative ways to make AI reason more like humans.

It now seems like ages ago, but ChatGPT launched just a few years back, and with it came a surge in AI tech. Everyone went nuts—from people who thought they'd lose their jobs to others who automated their entire lives with the AI chat tool.

This also brought in a new wave of tech companies that plunged into an AI race to see who could build the smartest model. And their approach to getting better results? You guessed it: just keep adding more data and more computing oomph.

But now, some of the smartest people in AI are saying, "Hold up, it's not that simple."

Take Ilya Sutskever, for example. He's a big deal when it comes to AI—he co-founded OpenAI and recently started a new company called Safe Superintelligence (SSI). He's saying that the results from scaling up the pre-training phase—that's when they feed the AI a ton of data to learn language patterns—have hit a wall.

This is pretty significant because Sutskever was one of the main guys pushing the idea that more data and more computing power would lead to big leaps in AI. He was right for a while—that approach gave us ChatGPT, after all. But now he's singing a different tune.

It's not just talk, either. Behind the scenes, researchers at top AI labs have been struggling to create a language model that can beat GPT-4, which has been around for almost two years now. They're not getting the results they expected.

So, what's the new plan? Well, they're looking at something called "test-time compute." It's a fancy way of saying they want to make AI models smarter when they're actually being used, not just during training. Instead of the AI immediately picking one answer, it might come up with a bunch of possibilities, think about them for a bit, and then choose the best one.
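To make that "generate a bunch of possibilities, then pick the best one" idea concrete, here is a minimal sketch of the best-of-N flavor of test-time compute. Everything here is a hypothetical stand-in: `generate_candidates` fakes a model's sampler and `score` fakes a verifier, whereas a real system would call an actual LLM and a learned reward or verifier model.

```python
import random

def generate_candidates(prompt, n=5, seed=0):
    """Stand-in for an LLM sampler: produce n candidate answers.
    (Hypothetical toy generator; a real system would sample from a model.)"""
    rng = random.Random(seed)
    return [f"{prompt} -> draft #{rng.randint(1, 100)}" for _ in range(n)]

def score(candidate):
    """Stand-in for a verifier/reward model that rates each candidate.
    Here it's just a deterministic toy function of the text."""
    return sum(ord(ch) for ch in candidate) % 17

def best_of_n(prompt, n=5, seed=0):
    """Best-of-N test-time compute: spend extra inference-time work by
    sampling several answers, then return the highest-scoring one."""
    candidates = generate_candidates(prompt, n=n, seed=seed)
    return max(candidates, key=score)

answer = best_of_n("What is 2 + 2?", n=5, seed=42)
```

The key point the sketch illustrates: the model itself is unchanged; the extra smarts come from doing more work at inference time (more samples, plus a selection step), which is exactly why this shifts demand toward inference hardware.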

This new approach could be a game-changer. It might reshape the whole AI race and change what kind of resources these AI companies need. We're talking about everything from energy to the types of computer chips they use.

Some folks in the industry are pretty excited about this shift. Sonya Huang, a partner at Sequoia Capital, thinks we're moving from a world of massive pre-training clusters to something called "inference clouds." These are distributed, cloud-based servers that help AI models think on their feet.

Now, if you're worried about AI taking over the world, you might find this news reassuring. It shows that making AI smarter isn't as straightforward as some people thought. But don't relax too much—companies like OpenAI are still dead set on creating artificial general intelligence (AGI), which means AI that's smarter than humans across the board.


Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice of this sort, HODL FM strongly recommends contacting a qualified industry professional.