New AI models from Meta are making waves in technology circles. The two new models, part of the Facebook parent company’s Llama line of artificial intelligence tools, are both open-source, helping them stand apart from competing offerings from OpenAI and other well-known names.
Meta’s new Llama models come in two sizes: Llama 3 8B, with eight billion parameters, and Llama 3 70B, with seventy billion. Broadly, the more parameters, the more capable the model, but not every AI task needs the largest possible model.
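To make the size gap concrete, here's a minimal sketch estimating how much memory each model's weights alone would occupy, assuming 16-bit (2-byte) weights; actual requirements vary with quantization and runtime overhead, and the function name is just for illustration:

```python
# Rough weight-memory estimate for the two Llama 3 sizes.
# Assumes 2 bytes per parameter (fp16/bf16); real deployments
# differ with quantization and runtime overhead.
def weight_footprint_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Approximate gigabytes needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

llama3_8b = weight_footprint_gb(8_000_000_000)    # ~16 GB
llama3_70b = weight_footprint_gb(70_000_000_000)  # ~140 GB
print(f"Llama 3 8B:  ~{llama3_8b:.0f} GB of weights")
print(f"Llama 3 70B: ~{llama3_70b:.0f} GB of weights")
```

That back-of-the-envelope difference is why the smaller model can run on a single consumer GPU while the larger one typically needs datacenter hardware.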
The company’s new models, trained on clusters of 24,000 GPUs, performed well across the benchmarks Meta pitted them against, besting some rival models already on the market.
For those of us not competing to build and release the largest or most capable AI models, what matters is that these models are still getting better with time. And work. And a lot of compute.
While Meta takes an open-source approach to its AI work, its competitors often prefer a more closed one. OpenAI, despite its name and history, offers access to its models, but not their source code or weights. There’s a healthy debate in the world of AI over which approach is better, for both speed of development and safety. After all, some technologists (and some computing doomers, to be clear) worry that AI tech is developing too fast and could prove dangerous to democracies and more.
For now, Meta is keeping the AI fires alight, challenging its peers and rivals to best its latest. Hit play, and let’s talk about it!