Accenture announced today that it would acquire the learning platform Udacity as part of an effort to build a learning platform focused on the growing interest in AI.
While the company didn’t specify how much it paid for Udacity, it also announced a $1 billion investment in building a technology learning platform it’s calling LearnVantage.
While it could also offer more general technology training, the company made clear that it is particularly interested in offering training to get workers up to speed on AI.
As for Udacity, which was founded in 2011, it gave the usual kinds of statements a company makes when it gets acquired by a much larger organization like Accenture.
That is, it believes that it can reach more people and help them acquire skills at part of the larger entity.
Pixel phone updatesThe company is now allowing users to capture and upload 10-bit HDR videos on Instagram.
This feature is available to users on the Pixel 7, Pixel 7 Pro, Pixel 8, Pixel 8 Pro, and Pixel Fold.
Later in the month, the company made the feature available to Pixel 8 and Pixel 8 Pro users.
Google today said the Pixel 7 and Pixel 7 Pro users will be able to use this feature soon.
Pixel Watch updatesGoogle is rolling out two updates to the original Pixel Watch to help users train better, namely Pace training and Heart Zone training.
Like most other code generators, StarCoder 2 can suggest ways to complete unfinished lines of code as well as summarize and retrieve snippets of code when asked in natural language.
Trained with 4x more data than the original StarCoder, StarCoder 2 delivers what Hugging Face, ServiceNow and Nvidia characterize as “significantly” improved performance at lower costs to operate.
Setting all this aside for a moment, is StarCoder 2 really superior to the other code generators out there — free or paid?
As with the original StarCoder, StarCoder 2’s training data is available for developers to fork, reproduce or audit as they please.
Hugging Face, which offers model implementation consulting plans, is providing hosted versions of the StarCoder 2 models on its platform.
Google has apologized (or come very close to apologizing) for another embarrassing AI blunder this week, an image generating model that injected diversity into pictures with a farcical disregard for historical context.
While the underlying issue is perfectly understandable, Google blames the model for “becoming” over-sensitive.
But if you ask for 10, and they’re all white guys walking goldens in suburban parks?
Where Google’s model went wrong was that it failed to have implicit instructions for situations where historical context was important.
These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.
Phrasing requests in a certain way — meanly or nicely — can yield better results with chatbots like ChatGPT than prompting in a more neutral tone.
So what’s the deal with emotive prompts?
Nouha Dziri, a research scientist at the Allen Institute for AI, theorizes that emotive prompts essentially “manipulate” a model’s underlying probability mechanisms.
Why is it so trivial to defeat safeguards with emotive prompts?
Another reason could be a mismatch between a model’s general training data and its “safety” training datasets, Dziri says — i.e.
In its IPO prospectus filed today with the U.S. Securities and Exchange Commission, Reddit repeatedly emphasized how much it thinks it stands to gain — and has gained — from data licensing agreements with the companies training AI models on its over one billion posts and over 16 billion comments.
“In January 2024, we entered into certain data licensing arrangements with an aggregate contract value of $203.0 million and terms ranging from two to three years,” the prospectus reads.
“We expect a minimum of $66.4 million of revenue to be recognized during the year ending December 31, 2024 and the remaining thereafter.”Now, it’s a mystery as to which AI vendors are licensing data from Reddit so far.
Why’s Reddit data valuable?
Reddit previously didn’t gate access to its data for AI training purposes.
Massive training data sets are the gateway to powerful AI models — but often, also those models’ downfall.
Morcos’ company, DatologyAI, builds tooling to automatically curate data sets like those used to train OpenAI’s ChatGPT, Google’s Gemini and other like GenAI models.
“However, not all data are created equal, and some training data are vastly more useful than others.
History has shown automated data curation doesn’t always work as intended, however sophisticated the method — or diverse the data.
The largest vendors today, from AWS to Google to OpenAI, rely on teams of human experts and (sometimes underpaid) annotators to shape and refine their training data sets.
The trouble is, many of these models — if not most — were trained on artwork without artists’ knowledge or permission.
And while some vendors have begun compensating artists or offering ways to “opt out” of model training, many haven’t.
Another, Kin.art, uses image segmentation (i.e., concealing parts of artwork) and tag randomization (swapping an art piece’s image metatags) to interfere with the model training process.
“We prevent your artwork from being inserted in the first place.”Now, Kin.art has a product to sell.
While the tool is free, artists have to upload their artwork to Kin.art’s portfolio platform in order to use it.
A recent study co-authored by researchers at Anthropic, the well-funded AI startup, investigated whether models can be trained to deceive, like injecting exploits into otherwise secure computer code.
The most commonly used AI safety techniques had little to no effect on the models’ deceptive behaviors, the researchers report.
Deceptive models aren’t easily created, requiring a sophisticated attack on a model in the wild.
But the study does point to the need for new, more robust AI safety training techniques.
“Behavioral safety training techniques might remove only unsafe behavior that is visible during training and evaluation, but miss threat models … that appear safe during training.
Pivotal, the Palo Alto, California-based company backed by Larry Page, kicked off online sales Monday night at CES 2024 for Helix, a lightweight electric personal aircraft that doesn’t require a pilot’s license to fly.
Helix marks an evolution for Pivotal, a company previously known as Opener that has been working on lightweight electric vertical and takeoff aircraft for more than a decade.
The Helix, due to its lightweight status weighing about 348 pounds, complies with FAA Part 103 (Ultralight) category in the United States.
The base $190,000 package includes the Helix aircraft with a white-and-carbon fiber exterior finish and a digital flight panel, canopy, HD landing camera, charger, vehicle cart, custom marking and warranty.
The Helix aircraft will be manufactured in Palo Alto.