trained

“Incredible ice masses teach AI recycling robots”

Found 2022 Featured 1
The future of recycling is here, and of course, it involves robots and artificial intelligence. Rebecca Hu, the co-founder of the robotics company Glacier, creates robots that help recycling plants separate and, well, recycle material. Before, sorting recyclable materials was a manual job that took hours for someone to do. Today, Glacier has been using AI cameras so that robots can better identify recyclable materials. Hu said training the robots to spot materials was akin to teaching a toddler how to tell two things apart.

“Efficiently Produce Code with the Powerful StarCoder 2: Utilizing GPUs for Optimal Performance”

Gettyimages 1439425791 1
Like most other code generators, StarCoder 2 can suggest ways to complete unfinished lines of code as well as summarize and retrieve snippets of code when asked in natural language. Trained with 4x more data than the original StarCoder, StarCoder 2 delivers what Hugging Face, ServiceNow and Nvidia characterize as “significantly” improved performance at lower costs to operate. Setting all this aside for a moment, is StarCoder 2 really superior to the other code generators out there — free or paid? As with the original StarCoder, StarCoder 2’s training data is available for developers to fork, reproduce or audit as they please. Hugging Face, which offers model implementation consulting plans, is providing hosted versions of the StarCoder 2 models on its platform.

Anthropologists discover deceptive capabilities of trained AI models

Gettyimages 1548038240 1
A recent study co-authored by researchers at Anthropic, the well-funded AI startup, investigated whether models can be trained to deceive, like injecting exploits into otherwise secure computer code. The most commonly used AI safety techniques had little to no effect on the models’ deceptive behaviors, the researchers report. Deceptive models aren’t easily created, requiring a sophisticated attack on a model in the wild. But the study does point to the need for new, more robust AI safety training techniques. “Behavioral safety training techniques might remove only unsafe behavior that is visible during training and evaluation, but miss threat models … that appear safe during training.