Meta has announced changes to its rules on AI-generated content and manipulated media following criticism from its Oversight Board.
For AI-generated or otherwise manipulated media on Meta platforms like Facebook and Instagram, the playbook appears to be: more labels, fewer takedowns.
“Our ‘Made with AI’ labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content,” said Monika Bickert, Meta’s VP of content policy, noting the company already applies ‘Imagined with AI’ labels to photorealistic images created using its own Meta AI feature.
Meta’s blog post highlights a network of nearly 100 independent fact-checkers which it says it’s engaged with to help identify risks related to manipulated content.
These external entities will continue to review false and misleading AI-generated content, per Meta.
At one point, Levin asked how Pichai keeps a company of 200,000 people innovative against all the startups battling to disrupt its business.
“So I think, how do you as a company move fast?” Pichai said. “We recently said, we went back to a notion we had in early Google of Google Labs.
How do you allow people to prototype more easily internally and get it out to people?”
Later, Levin asked what advances Pichai was most excited about this year.
Levin and Pichai start around 1 hour and 18 minutes in.
Well-known startup accelerator Y Combinator held one of its two yearly demo day events this week, showcasing hundreds of startups that recently went through its program.
Judging from our coverage of the two-day event, TechCrunch found lots to like in the presenting companies.
There was lots more than just AI on display, so for today’s TechCrunch Minute I compiled a few trends and vibes from the shindig for your enjoyment.
Accelerators play an important role in the startup world, giving founders early capital and advice as they get off the ground.
Y Combinator competes with Techstars and other platforms globally.
According to the Form S-1, the new AI governance committee includes managers from Rubrik’s engineering, product, legal and information security teams.
Here’s why having AI governance could become the new normal.
“Aside from its strategic role to devise and oversee an AI governance program, from an operational perspective, AI governance committees are a key tool in addressing and minimizing risks,” he said.
The EU AI Act has teeth, and “the penalties for non-compliance with the AI Act are significant,” British-American law firm Norton Rose Fulbright noted.
Establishing AI governance committees is likely to become at least one way companies try to build that trust.
The main development out of the sixth TTC meeting appears to be a commitment from EU and US AI oversight bodies, the European AI Office and the US AI Safety Institute, to set up what’s couched as “a Dialogue”.
The aim is fostering a deeper collaboration between the AI institutions, with a particular focus on encouraging the sharing of scientific information among respective AI research ecosystems.
“Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction.
We are also making constructive progress in health and agriculture.”
In addition, an overview document on the collaboration around AI for the public good was published Friday.
The joint statement refers to 2024 as “a Pivotal Year for Democratic Resilience”, on account of the number of elections being held around the world.
Dubbed ‘Caracal,’ the new OpenStack release emphasizes features for hosting AI and high-performance computing (HPC) workloads.
Indeed, as the OpenInfra Foundation announced this week, its newest Platinum Member is Okestro, a South Korean cloud provider with a heavy focus on AI.
But Europe, with its strong data sovereignty laws, has also been a growth market and the UK’s Dawn AI supercomputer runs OpenStack, for example.
“All the things are lining up for a big upswing in open-source adoption for infrastructure,” OpenInfra Foundation COO Mark Collier told TechCrunch.
That’s in addition to networking updates to better support HPC workloads and a slew of other updates.
Agility Robotics on Thursday confirmed that it has laid off a “small number” of employees.
The well-funded Oregon-based firm says the job loss is part of a company-wide focus on commercialization efforts.
Ultimately, however, those efforts were placed on the back burner, as the company shifted focus to understaffed warehouses.
Two years ago this month, the company announced a $150 million Series B. Amazon notably participated in the round by way of its Industrial Innovation Fund.
Last month at Modex, Agility showcased updates to Digit’s end effectors designed specifically for automotive manufacturing workflows.
OpenAI is expanding a program, Custom Model, to help enterprise customers develop tailored generative AI models using its technology for specific use cases, domains and applications.
“Dozens” of customers have enrolled in Custom Model since its launch.
As for custom-trained models, they’re custom models built with OpenAI, using OpenAI’s base models and tools.
Fine-tuned and custom models could also lessen the strain on OpenAI’s model serving infrastructure.
Alongside the expanded Custom Model program and custom model building, OpenAI today unveiled new model fine-tuning features for developers working with GPT-3.5, including a new dashboard for comparing model quality and performance, support for integrations with third-party platforms (starting with the AI developer platform Weights & Biases) and enhancements to tooling.
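For context on what fine-tuning a GPT-3.5 model involves in practice, here is a minimal sketch. It assumes the `openai` Python package and an API key; the training examples, file name and system prompt are illustrative, not from OpenAI's documentation. Training data must be a JSONL file of chat-format conversations, which is then uploaded and referenced when creating a fine-tuning job.

```python
import json
import os

# Illustrative chat-format training examples (real fine-tuning needs
# at least 10 examples; content here is made up for the sketch).
examples = [
    {"messages": [
        {"role": "system", "content": "You answer support questions for Acme."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account and choose Reset password."},
    ]},
]

# Fine-tuning data is JSONL: one JSON object per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Uploading the file and launching the job requires a configured API key,
# so that part is guarded here and only sketched:
if os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    uploaded = client.files.create(
        file=open("train.jsonl", "rb"), purpose="fine-tune"
    )
    job = client.fine_tuning.jobs.create(
        training_file=uploaded.id,
        model="gpt-3.5-turbo",
    )
```

Once the job finishes, the resulting model ID can be passed to the chat completions API like any base model.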
Such is the case with Anthropic, whose latest research demonstrates an interesting vulnerability in current LLM technology.
Of course, given progress in open-source AI technology, you can spin up your own LLM locally and just ask it whatever you want, but for more consumer-grade stuff this is an issue worth pondering.
But the closer we get to more generalized AI intelligence, the more it should resemble a thinking entity, and not a computer that we can program, right?
If so, we might find it increasingly hard to nail down edge cases, to the point where that work becomes infeasible.
Anyway, let’s talk about what Anthropic recently shared.
Gómez’s research is grounded in the computational music field, where she contributes to the understanding of the way humans describe music and the methods in which it’s modeled digitally.
What I liked about machine learning at the time was its modelling capabilities and the shift from knowledge-driven to data-driven algorithm design.
There’s also PHENICX, a large European Union (EU)-funded project I coordinated on the use of music and AI to create enriched symphonic music experiences.
What advice would you give to women seeking to enter the AI field?
They should learn about the working principles and limitations of AI algorithms to be able to challenge them and use them in a responsible way.