![Gettyimages 1177951659](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/GettyImages-1177951659-768x512.jpg)
You could spend it training a generative AI model.
Take Databricks’ DBRX, a new generative AI model announced today that’s akin to OpenAI’s GPT series and Google’s Gemini.
Customers can privately host DBRX using Databricks’ Model Serving offering, suggested Naveen Rao, Databricks’ VP of generative AI, or they can work with Databricks to deploy DBRX on the hardware of their choosing.
It’s an easy way for customers to get started with the Databricks Mosaic AI generative AI tools.
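For teams that prefer to self-host the openly released weights rather than go through Model Serving, a minimal sketch with Hugging Face `transformers` might look like the following. It assumes the instruct-tuned checkpoint is published as `databricks/dbrx-instruct` and that you have enough GPU memory for a model of this size; it is an illustration, not Databricks’ recommended deployment path.

```python
# Minimal sketch: loading DBRX via Hugging Face transformers.
# Assumes the instruct checkpoint is published as "databricks/dbrx-instruct"
# and that you have enough GPU memory for a ~132B-parameter MoE model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-instruct"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to reduce memory
    device_map="auto",            # shard across available GPUs
    trust_remote_code=True,       # may be needed for custom model code
)

inputs = tokenizer(
    "Summarize our Q3 sales data in one paragraph:", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```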
And plenty of generative AI models come closer to the commonly understood definition of open source than DBRX.
![Gettyimages 1288172035](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/GettyImages-1288172035-768x432.jpg)
Last year, Salesforce, the company best known for its cloud sales support software (and Slack), spearheaded a project called ProGen to design proteins using generative AI.
“Many drugs — enzymes and antibodies, for example — consist of proteins,” said Ali Madani, who led the ProGen research.
Fed into a generative AI model, data about proteins can be used to predict entirely new proteins with novel functions.
Other companies and research groups have demonstrated viable ways in which generative AI can be used to predict proteins.
And DeepMind, Google’s AI research lab, has a system called AlphaFold that predicts complete protein structures, achieving speed and accuracy far surpassing older, less complex algorithmic methods.
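As a rough illustration of the generative side of this idea, a protein language model treats an amino-acid sequence like text: prompt it with a few residues, sample continuations, and screen the candidates downstream. The sketch below uses a placeholder checkpoint name (`example-org/protein-lm` is hypothetical, not an actual ProGen or DeepMind release) to show the general shape of that workflow.

```python
# Illustrative sketch only: sampling candidate protein sequences from a
# protein language model. "example-org/protein-lm" is a placeholder
# checkpoint name, not an actual ProGen or AlphaFold release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/protein-lm"  # hypothetical protein language model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt with the start of an amino-acid sequence (one letter per residue).
prompt = "MKTAYIAKQR"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample several candidate continuations; real pipelines would filter these
# with structure prediction and wet-lab validation before calling anything a drug.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.95,
    max_new_tokens=200,
    num_return_sequences=5,
)
for sequence in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(sequence)
```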
![Gettyimages 1009979808](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/GettyImages-1009979808-768x513.jpg)
AI-coustics to fight noisy audio with generative AI

Noisy recordings of interviews and speeches are the bane of audio engineers’ existence.
According to co-founder and CEO Fabian Seipel, AI-coustics’ technology goes beyond standard noise suppression to work across — and with — any device and speaker.
“We’ve been driven by a personal mission to overcome the pervasive challenge of poor audio quality in digital communications,” Seipel said.
But Seipel says AI-coustics has a unique approach to developing the AI mechanisms that do the actual noise reduction work.
“Speech quality and intelligibility still is an annoying problem in nearly every consumer or pro device, as well as in content production or consumption.”
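For context, the “standard noise suppression” Seipel is contrasting against often amounts to spectral gating. A minimal baseline sketch with the open source `noisereduce` package (not AI-coustics’ learned enhancement model, and the file path is hypothetical) might look like this:

```python
# Baseline spectral-gating noise suppression -- the conventional approach
# that learned speech-enhancement systems like AI-coustics' aim to surpass.
# Assumes a mono WAV recording at "interview_noisy.wav" (hypothetical path).
import noisereduce as nr
import soundfile as sf

audio, sample_rate = sf.read("interview_noisy.wav")
cleaned = nr.reduce_noise(y=audio, sr=sample_rate)
sf.write("interview_cleaned.wav", cleaned, sample_rate)
```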
![Xai Grok Gettyimages 1765893916 1](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/xAI-Grok-GettyImages-1765893916-1-768x512.jpeg)
Elon Musk’s xAI has open-sourced the base code and weights of its Grok AI model, but without any training code.
In a blog post, xAI said that the model wasn’t fine-tuned for any particular application, such as conversation.
Last week, Musk noted on X that xAI intended to open-source the Grok model this week.
Some AI-powered tool makers are already talking about using Grok in their solutions.
> “Yep, thanks to @elonmusk and xAI team for open-sourcing the base model for Grok.”
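For anyone who wants to experiment, fetching the checkpoint is the first hurdle: Grok-1 is a roughly 314-billion-parameter mixture-of-experts model. A minimal sketch, assuming the released weights are mirrored on Hugging Face under `xai-org/grok-1`, might look like this:

```python
# Sketch: downloading the open-sourced Grok-1 weights for local experimentation.
# Assumes the checkpoint is mirrored on Hugging Face as "xai-org/grok-1";
# actually running inference additionally requires xAI's released example code
# and several hundred gigabytes of disk space and GPU memory.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="xai-org/grok-1",
    local_dir="./grok-1",
)
print(f"Checkpoint files downloaded to {local_dir}")
```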
![Haje1 A Colorful Illustration Of Steve Blank In 16x9 Aspect Rat F5646179 F466 4f19 B2c0 54bb4d506aff](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/haje1_a_colorful_illustration_of_Steve_Blank_in_16x9_aspect_rat_f5646179-f466-4f19-b2c0-54bb4d506aff-768x445.jpeg)
Generative AI models like Midjourney’s are trained on an enormous number of examples — e.g., images and their accompanying captions — typically scraped from across the public web.
Some vendors have taken a proactive approach, inking licensing agreements with content creators and establishing “opt-out” schemes for training data sets.
The problem with benchmarks: Many, many AI vendors claim their models meet or beat the competition by some objective metric.
Anthropic launches new models: AI startup Anthropic has launched a new family of models, Claude 3, that it claims rivals OpenAI’s GPT-4.
AI models have been helpful in our understanding and prediction of molecular dynamics, conformation, and other aspects of the nanoscopic world that might otherwise require expensive, complex experiments to test.
![Openai Flower](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/openai-flower-768x432.jpg)
OpenAI’s legal battle with The New York Times over data to train its AI models might still be brewing.
But OpenAI’s forging ahead on deals with other publishers, including some of France’s and Spain’s largest news publishers.
OpenAI on Wednesday announced that it signed contracts with Le Monde and Prisa Media to bring French and Spanish news content to OpenAI’s ChatGPT chatbot.
So, OpenAI’s revealed licensing deals with a handful of content providers at this point.
The Information reported in January that OpenAI was offering publishers between $1 million and $5 million a year to access archives to train its GenAI models.
![Sima Instructions](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/sima-instructions-768x508.jpg)
AI models that play games go back decades, but they generally specialize in one game and always play to win.
From this data — and the annotations provided by data labelers — the model learns to associate what it sees on screen with particular actions, objects, and interactions.
AI agents trained on multiple games performed better on games they hadn’t been exposed to.
But of course many games involve specific and unique mechanics or terms that will stymie the best-prepared AI.
And simple improvised actions or interactions are also being simulated and tracked by AI in some really interesting research into agents.
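At its core, that learning setup is behavior cloning: given a screen frame and a natural-language instruction, predict the keyboard-and-mouse action a human demonstrator took. The toy sketch below shows the general shape of that idea; the model, action vocabulary, and dummy data are all made up for illustration and bear no relation to DeepMind’s actual architecture.

```python
# Toy behavior-cloning setup: map (screen frame, text instruction) -> action.
# Schematic illustration only, not DeepMind's SIMA architecture.
import torch
import torch.nn as nn

NUM_ACTIONS = 32  # hypothetical discrete keyboard/mouse action vocabulary

class ToyGameAgent(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128):
        super().__init__()
        # Tiny CNN encoder for the screen frame.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(embed_dim),
        )
        # Bag-of-words encoder for the instruction ("chop down the tree").
        self.text = nn.EmbeddingBag(vocab_size, embed_dim)
        self.policy = nn.Linear(2 * embed_dim, NUM_ACTIONS)

    def forward(self, frames, instruction_tokens):
        v = self.vision(frames)
        t = self.text(instruction_tokens)
        return self.policy(torch.cat([v, t], dim=-1))

# One supervised step on dummy tensors standing in for labeled gameplay video.
agent = ToyGameAgent()
frames = torch.rand(4, 3, 64, 64)              # 4 screen frames
instructions = torch.randint(0, 1000, (4, 6))  # 4 tokenized instructions
actions = torch.randint(0, NUM_ACTIONS, (4,))  # labeler-annotated actions

logits = agent(frames, instructions)
loss = nn.functional.cross_entropy(logits, actions)
loss.backward()
print(f"behavior-cloning loss: {loss.item():.3f}")
```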
![Found 2022 Featured 1](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/found-2022-featured-1-768x432.jpg)
The future of recycling is here, and of course, it involves robots and artificial intelligence.
Rebecca Hu, the co-founder of the robotics company Glacier, creates robots that help recycling plants separate and, well, recycle material.
Until recently, sorting recyclable materials was a manual job that took workers hours to do.
Today, Glacier uses AI-powered cameras so its robots can better identify recyclable materials.
Hu said training the robots to spot materials was akin to teaching a toddler how to tell two things apart.
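That “teaching a toddler” framing maps onto a very standard computer-vision recipe: take a pretrained image backbone and fine-tune it on labeled examples of each material. The sketch below is generic transfer learning with made-up class names and dummy tensors, not Glacier’s production system.

```python
# Generic transfer-learning sketch for material classification,
# not Glacier's actual system. Dummy tensors stand in for real labeled
# camera frames of cartons, cans, PET bottles, etc.
import torch
import torch.nn as nn
from torchvision import models

MATERIAL_CLASSES = ["cardboard", "aluminum", "PET", "glass", "other"]  # hypothetical

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(MATERIAL_CLASSES))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One training step on dummy data in place of labeled images.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, len(MATERIAL_CLASSES), (8,))

logits = model(images)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```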
![Gettyimages 1335295270](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/GettyImages-1335295270-768x432.jpg)
“So much of the AI conversation has been dominated by … large language models,” Jones said, “but the reality is that no one model can do everything.
Pienso believes that any domain expert, not just an AI engineer, should be able to do just that.”

Pienso guides users through the process of annotating or labeling training data for pre-tuned open source or custom AI models.
“Pienso’s flexible, no-code interface allows teams to train models directly using their own company’s data,” Jones said.
“This alleviates the privacy concerns of using … models, and also is more accurate, capturing the nuances of each individual company.”

Companies pay Pienso a yearly license based on the number of AI models they deploy.
“It’s fostering a future where we’re building smarter AI models for a specific application, by the people who are most familiar with the problems they are trying to solve.”
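Under the hood, the workflow Jones describes — label some in-house examples, then train a model on them — is the same loop a data scientist would write by hand; Pienso’s pitch is to put a no-code interface over it. A minimal hand-rolled equivalent with scikit-learn (not Pienso’s product; the labeled support tickets below are made up for illustration) might look like this:

```python
# Hand-rolled version of the "label your own data, then train on it" loop
# that Pienso wraps in a no-code UI. The labeled tickets are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In-house examples annotated by domain experts, not a generic public dataset.
texts = [
    "Customer can't log in after the latest update",
    "Refund requested for duplicate charge on invoice",
    "App crashes when exporting the quarterly report",
    "Billing address needs to be corrected",
]
labels = ["technical", "billing", "technical", "billing"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Charged twice for the same subscription"]))
```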
![Screenshot 2024-03-11 at 5.04.17 PM, transformed](https://techgroundnews.com/wp-content/uploads/sites/4/2024/03/Screenshot_2024-03-11_at_5.04.17a_¯PM-transformed-768x448.png)
Should artists whose work was used to train generative AI like ChatGPT be compensated for their contributions?
OpenAI is in a delicate legal position when it comes to the ways in which it uses data to train generative AI systems like the art-creating tool DALL-E 3, which is incorporated into ChatGPT.
“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents,” writes the company in a January blog post.
OpenAI has licensing agreements in place with some content providers, like Shutterstock, and allows webmasters to block its web crawler from scraping their site for training data.
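That crawler blocking works through a site’s robots.txt file: OpenAI’s web crawler identifies itself as GPTBot and honors directives addressed to it, so a publisher that doesn’t want its pages used for training can add a rule like the following.

```text
# robots.txt — block OpenAI's crawler from the entire site
User-agent: GPTBot
Disallow: /
```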
In addition, like some of its rivals, OpenAI lets artists “opt out” of and remove their work from the data sets that the company uses to train its image-generating models.