Now, it appears Meta is using its Quest VR store to demonstrate how it thinks devices with app stores should approach online age verification.
Since it’s easy to lie about your age when only a birthdate is required, Meta says people who entered the wrong birthdate and need to correct it will have to verify their age with an ID or credit card.
Meta previously told developers that, starting in March 2024, it would require them to identify their app’s intended age group (preteens, teens, or adults). It also announced user age group APIs, which officially launched last month.
Meta first added parental supervision tools to its VR headset in 2022.
OpenAI is making its flagship conversational AI accessible to everyone, even people who haven’t bothered making an account.
Instead of having to sign up, you’ll be dropped right into a conversation with ChatGPT, which will use the same model that logged-in users get.
You can chat to your heart’s content, but be aware you’re not getting quite the same set of features that folks with accounts are.
You won’t be able to save or share chats, use custom instructions, or access other features that generally have to be tied to a persistent account.
More importantly, this extra-free version of ChatGPT will have “slightly more restrictive content policies.” What does that mean?
YouTube is now requiring creators to disclose to viewers when realistic content was made with AI, the company announced on Monday.
YouTube says the new policy doesn’t require creators to disclose content that is clearly unrealistic or animated, such as someone riding a unicorn through a fantastical world.
It also isn’t requiring creators to disclose content that used generative AI for production assistance, like generating scripts or automatic captions.
Creators will, however, have to disclose content that alters footage of real events or places, such as making it seem as though a real building caught fire.
Creators will also have to disclose when they have generated realistic scenes of fictional major events, like a tornado moving toward a real town.
India has waded into the global AI debate by issuing an advisory that requires tech firms to get government permission before launching new models.
India’s Ministry of Electronics and IT issued the advisory to firms on Friday.
It seeks compliance with “immediate effect” and asks tech firms to submit an “Action Taken-cum-Status Report” to the ministry within 15 days.
The new advisory, which also asks tech firms to “appropriately” label the “possible and inherent fallibility or unreliability” of the output their AI models generate, marks a reversal from India’s previous hands-off approach to AI regulation.
Less than a year ago, the ministry had declined to regulate AI growth, instead identifying the sector as vital to India’s strategic interests.
Most tech startups are born from a few early engineers building the company’s initial product.
As those first builders work together, they begin to establish a developer culture — sometimes deliberately, sometimes not.
At Web Summit in Lisbon in November, two founders discussed the importance of building a developer culture that’s distinct from a company’s overall culture.
Ludmila Pontremolez, CTO and co-founder of Zippi, a Brazilian fintech startup, spent time as an engineer at Square prior to launching Zippi. “And we really wanted to instill that in the developer culture early on,” she said.
Beeper, the app that brings iMessage to Android users, is implementing a fix that it says will allow users to once again access the service after Apple blocked it.
However, the fix requires you to have access to a Mac computer, or have a friend on Beeper with a Mac.
“This 1:1 mapping of registration data to individual user—in our testing—makes the connection very reliable,” the post reads.
“If you use Beeper Mini, you can use your Mac registration data with it as well, and Beeper Mini will start to work again.”
Beeper says that in its testing, it found that 10-20 iMessage users can safely use the same registration data.
The Chinese Ministry of Transport recently unveiled a set of trial guidelines for autonomous vehicle services like robotaxis, self-driving trucks and robobuses.
The rules also specify the requirements for safety operators at various degrees of automation.
Autonomous cargo trucks should “in principle” be equipped with in-car safety operators.
Robotaxis with advanced automation should have one in-car safety operator.
The companies operating these services should also establish agreements with the vehicle manufacturers and safety operators on each party’s scope of responsibilities.
The verification system that Twitter is introducing, which requires people to pay to have their accounts verified, is causing a lot of chaos. CEO Elon Musk himself has…
People in the B2B market are keying in on tools that help their entire organization. These tools include Figma and Slack, which have become common…