Case in point, OpenAI today announced a partnership with Common Sense Media, the nonprofit organization that reviews and ranks the suitability of various media and tech for kids, to collaborate on AI guidelines and education materials for parents, educators and young adults.
As a part of the partnership, OpenAI will work with Common Sense Media to curate “family-friendly” GPTs — chatbot apps powered by OpenAI’s GenAI models — in the GPT Store, OpenAI’s GPT marketplace, based on Common Sense’s rating and evaluation standards, OpenAI CEO Sam Altman says.
Common Sense’s framework aims to produce a “nutrition label” for AI-powered apps, according to Common Sense co-founder and CEO James Steyer, shedding light on the contexts in which the apps are used and highlighting areas of potential opportunity and harm against a set of “common sense” tenets.
An Impact Research poll commissioned by Common Sense Media late last year found that 58% of students aged 12 to 18 have used ChatGPT compared to 30% of parents of school-aged children.
“Together, Common Sense and OpenAI will work to make sure that AI has a positive impact on all teens and families,” Steyer said in an emailed statement.
Enterprises need to understand how much to budget for AI tools, how to weigh the benefits of AI against hiring new recruits, and how to ensure their training is on point.
A recent study also found that who uses AI tools is a critical business decision, as less experienced developers get far more benefit from AI than experienced ones.
At Waydev, we’ve spent the past year experimenting with the best ways to use generative AI in our own software development processes, developing AI products, and measuring the success of AI tools in software teams.
Here is what we’ve learned about how enterprises need to prepare for a serious AI investment in software development.
Then use an engineering management platform (EMP) or software engineering intelligence platform (SEIP) to track whether your adoption of AI is moving the needle on those variables.
While it has positively impacted productivity and efficiency in the workplace, AI has also presented a number of emerging risks for businesses.
At the same time, however, nearly half (48%) said they enter company data into AI tools not supplied by their business to aid them in their work.
This rapid integration of generative AI tools at work presents ethical, legal, privacy, and practical challenges, creating a need for businesses to implement new and robust policies surrounding generative AI tools.
AI use and governance: Risks and challenges

Developing a set of policies and standards now can save organizations from major headaches down the road.
The previously mentioned Harris Poll found that 64% perceive AI tool usage as safe, indicating that many workers and organizations could be overlooking risks.
Meta announced today that it is rolling out new DM restrictions on both Facebook and Instagram that, by default, prevent anyone a teen doesn’t follow or isn’t connected to from messaging them.
What’s more, Meta is making its parental controls more robust by letting guardians approve or deny changes teens make to their default privacy settings.
Previously, when teens changed these settings, guardians got a notification, but they couldn’t take any action on them.
Meta first rolled out parental supervision tools for Instagram in 2022, giving guardians a sense of their teens’ usage.
Meta didn’t specify what work it is doing to ensure the privacy of teens while executing these features.
Prompt Security was founded by Itamar Golan (CEO) and Lior Drihem (CTO), who both previously worked at Check Point and Orca Security.
The company’s tools automatically detect patterns related to GenAI usage and then layer an enforcement policy on top of that.
Golan stressed that the company is trying to build an entire platform here by covering various aspects of an organization’s GenAI usage.
“We are trying to build a one-stop solution for GenAI security.”
Over time, the company plans to launch more services that help its customers improve their GenAI security posture.
Since launching its self-repair program in August 2022, Samsung has been aggressively adding new devices to the mix.
Another upgrade this week brings the current offering up to 50 products (when you factor in variants like the Plus and Ultra), including smartphones, tablets, TVs, laptops, monitors, soundbars and even a projector.
There are 14 new devices in all, including the Galaxy S23 series, Galaxy Z Fold 5, Galaxy Z Flip 5, Galaxy Tab S9 series, the Galaxy Book 2 series and the aforementioned projector, Freestyle 2.
The news also finds Samsung adding a number of different parts options, including speakers, the SIM tray, side key and volume buttons on Galaxy phones and tablets.
Samsung’s approach is generally more in line with Google’s in terms of access to tools and parts.
Google announced today that Gemini, its family of multimodal large language models, now powers the conversational experience within the Google Ads platform.
With this new update, it will be easier for advertisers to quickly build and scale Search ad campaigns.
The conversational experience is designed to help build Search campaigns through a chat-based tool.
The tool uses your website URL to create Search campaigns by generating relevant ad content, including assets and keywords.
“We observed that it helps them build higher quality Search campaigns with less effort.”

The new tool will join Google’s other AI-powered tools for advertisers.
At last year’s DockerCon, Docker launched its Docker Build remote build service and today it is taking this a step further with the launch of Docker Build Cloud, a fully managed service that, you guessed it, allows development teams to offload their image builds to the cloud.
“Every week, millions of developers run ‘docker build [x],’” Giri Sreenivas, Docker’s chief product officer, told me, referring to the standard command developers use to kick off their Docker builds.
“Docker builds are being run in both places, so let’s go ahead and make sure we can support accelerating builds in either of those locations,” he said.
Developers can then buy Docker Build Cloud plans starting at $5 per seat/month for 200 build minutes, with extra time on top of that costing $0.05/minute.
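For a sense of what “offloading” a build means in practice, here is a minimal sketch of pointing a build at a remote builder using Docker’s buildx tooling; the organization, builder and image names are hypothetical placeholders, not anything Docker has published for a specific customer.

```shell
# Register a cloud builder for the team (org and builder names are placeholders).
docker buildx create --driver cloud acme-corp/default-builder

# Build against the remote builder instead of the local daemon;
# the builder name follows buildx's cloud-<org>-<name> pattern,
# and the image is tagged and pushed straight to the registry.
docker buildx build \
  --builder cloud-acme-corp-default-builder \
  --tag acme-corp/web-app:latest \
  --push .
```

In principle, the build cache then lives with the remote builder rather than on each developer’s laptop or CI runner, which is where the advertised acceleration comes from.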
That could mean, for example, that Docker Build Cloud would work in tandem with Docker Scout, its service for finding vulnerable packages in a container, and then create a more secure build that developers could switch over to.
Staniszewski says that he and Dabkowski, who grew up in Poland, were inspired to create voice cloning tools by poorly dubbed American films.
Paying customers can upload voice samples to craft new styles using ElevenLabs’ voice cloning.
Users of 4chan, the infamous message board known for its conspiratorial content, used ElevenLabs’ tools to share hateful messages mimicking celebrities like actress Emma Watson.
Then there’s the elephant in the room: the existential threat platforms like ElevenLabs pose to the voice acting industry.
Even this didn’t please some voice actors, however — including SAG-AFTRA’s own members.
OpenAI has its first higher education customer: Arizona State University (ASU).
Today, ASU announced that it’s collaborating with OpenAI to bring ChatGPT, OpenAI’s AI-powered chatbot, to the university’s researchers, staff and faculty.
Since then, some have reversed their bans, while others have begun hosting workshops on generative AI tools and their potential for learning.
Launched in August, ChatGPT Enterprise can perform the same tasks as ChatGPT, such as writing emails, debugging computer code and drafting essays.
“Right now, we’re hyper-focused on putting ChatGPT Enterprise into the hands of our knowledge core … to be at the forefront of discovery and implementation.”