Now rebranded as Mosaic AI, the platform has become integral to Databricks’ AI solutions.
Today, at the company’s Data + AI Summit, it is launching a number of new features for the service.
Databricks is launching five new Mosaic AI tools at its conference: Mosaic AI Agent Framework, Mosaic AI Agent Evaluation, Mosaic AI Tools Catalog, Mosaic AI Model Training, and Mosaic AI Gateway.
“And we’ve also found in our internal AI applications, like the assistant applications for our platform, that this is the way to build them,” he said.
But when you really pushed people, they were using OpenAI.
The U.S. Federal Trade Commission will examine the rise of AI technology across all fronts, said FTC Chair Lina Khan, speaking at TechCrunch’s StrictlyVC event in Washington, D.C., on Tuesday.
In fact, it’s already seeing an uptick in consumer complaint cases in some areas, like voice cloning fraud, Khan said.
Asked what areas of AI the FTC was watching, Khan explained that it was everything.
Of course, policing AI comes with its challenges, despite the number of technologists the FTC has hired to help in this area.
Another area of focus for the FTC is what openness really means in the AI context, Khan explained.
Stability AI, the startup behind the AI-powered art generator Stable Diffusion, has released an open AI model for generating sounds and songs that it claims was trained exclusively on royalty-free recordings.
Called Stable Audio Open, the generative model takes a text description and produces a matching audio recording.
Stability AI says that it’s not optimized for this, and suggests that users looking for those capabilities opt for the company’s premium Stable Audio service.
Stable Audio Open also can’t be used commercially; its terms of service prohibit it.
And it doesn’t perform equally well across musical styles and cultures or with descriptions in languages other than English — biases Stability AI blames on the training data.
As I wrote recently, generative AI models are increasingly being brought to healthcare settings — in some cases prematurely, perhaps.
Hugging Face, the AI startup, proposes a solution in a newly released benchmark test called Open Medical-LLM.
Hugging Face is positioning the benchmark as a “robust assessment” of healthcare-bound generative AI models.
It’s telling that, of the 139 AI-related medical devices the U.S. Food and Drug Administration has approved to date, none use generative AI.
But neither Open Medical-LLM nor any other benchmark, for that matter, is a substitute for carefully thought-out real-world testing.
Meta has released the latest entry in its Llama series of open source generative AI models: Llama 3.
Meta describes the new models — Llama 3 8B, which contains 8 billion parameters, and Llama 3 70B, which contains 70 billion parameters — as a “major leap” compared to the previous-gen Llama models, Llama 2 7B and Llama 2 70B, performance-wise.
In fact, Meta says that, for their respective parameter counts, Llama 3 8B and Llama 3 70B — trained on two custom-built 24,000 GPU clusters — are among the best-performing generative AI models available today.
So what about toxicity and bias, two other common problems with generative AI models (including Llama 2)?
The company’s also releasing a new tool, Code Shield, designed to detect code from generative AI models that might introduce security vulnerabilities.
The Linux Foundation today announced the launch of the Open Platform for Enterprise AI (OPEA), a project to foster the development of open, multi-provider and composable (i.e., modular) generative AI systems.
Now, OPEA’s members are very clearly invested (and self-interested, for that matter) in building tooling for enterprise generative AI.
Domino offers a suite of apps for building and auditing business-forward generative AI.
And VMware — oriented toward the infrastructure side of enterprise AI — last August rolled out new “private AI” compute products.
It may seem like a paradox to have virtualized Kubernetes clusters.
Loft Labs saw in Kubernetes clusters the same resource-utilization problem that VMware saw with servers, and it has built a virtualization tool that makes clusters more efficient by letting them share common underlying applications.
Loft Labs lets users share these common applications with multiple virtual clusters in the same way that VMs share server resources.
“We’re essentially turning many clusters into one cluster, and then have virtual clusters on top of the common applications,” CEO Lukas Gentele told TechCrunch.
What the team learned, he added, was how hard the problem of sharing Kubernetes clusters and isolating tenants within a cluster really is.
Apple has removed iGBA, a Game Boy emulator app for the iPhone, after approving its launch over the weekend.
First launched on Sunday, iGBA was an ad-supported copy of the open-source project GBA4iOS that offered a Game Boy game emulator for iOS.
The new app worked as described, allowing users to download both Game Boy Advance and Game Boy Color ROMs from the web and then open them in the app to play.
The Cupertino-based tech giant has been pushed to make the App Store more open thanks to the EU’s Digital Markets Act (DMA).
Following an update to its App Store rules to comply with the new regulation, Apple had announced it would also allow streaming game stores globally.
On Thursday, the company announced it’s expanding its fediverse integrations to 400 more Flipboard creators and introducing fediverse notifications in the Flipboard app itself.
In total, Flipboard says there are now over 11,000 curated Flipboard magazines available to federated social networking users.
In addition to the newly federated magazines, Flipboard is also bringing a more integrated fediverse experience to its own app.
With the latest version of the app (version 4.3.25), Flipboard users will be able to see their new followers from the fediverse in their Flipboard Following tab, while their Flipboard notifications will now include fediverse reactions and conversations.
The company had already begun curating content for fediverse users across a handful of “news desks” (dedicated fediverse accounts) that directed users to interesting articles and links across topics.
The National Highway Traffic Safety Administration has opened a third investigation into EV startup Fisker’s Ocean SUV, this time centered on problems getting the doors to open.
The agency says the complaints point to an “intermittent failure” of the door latch and handle system.
The Ocean SUV is already being investigated by NHTSA’s Office of Defects Investigation (ODI) over problems with its braking system, and for complaints about the vehicle rolling away on uneven surfaces.
Fisker paused production of the Ocean in March and reported just $121 million in the bank.
But the new safety probe suggests a deeper problem with the SUV’s doors.