OpenAI has been in the news recently for its efforts to develop ever more powerful generative AI models, which many see as a potential threat to society. Italy's data protection authority is worried about how these models use people's data, and has ordered OpenAI to stop processing locals' data. It's a timely reminder that some countries already have laws that apply to cutting-edge AI technology.
Italy's data protection agency (known as the Garante) is concerned that OpenAI, the maker of the popular ChatGPT chatbot, is violating the European Union's General Data Protection Regulation (GDPR). The bot generates humanlike text in response to prompts, drawing on a model trained on vast amounts of scraped information, including personal data, and the regulator suspects this processing breaches GDPR rules on data privacy. If OpenAI is found to be in violation of the GDPR, it could face significant penalties.
According to the Garante, OpenAI has unlawfully processed people's data, and there is no system in place to prevent minors from accessing the technology. As a result, the Garante has issued an emergency order barring ChatGPT from processing Italians' data, effectively halting its operation in the country. This is a serious privacy issue, and it will be interesting to see how OpenAI and other AI developers respond.
OpenAI has been given 20 days to comply with the order, backed up by the threat of some hefty penalties if it fails to do so. Compliance may prove difficult: ChatGPT's output about named individuals is said to be riddled with inaccuracies, and the regulator's own checks found that the information the bot provides does not always correspond to real data.
The order is addressed to OpenAI, the US-based company that develops ChatGPT, as the data controller responsible for the risks posed to EU users. Its most immediate way to limit those risks may simply be to restrict local users' access while it works out how to bring its processing into line.
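To make that likely stopgap concrete, below is a minimal sketch of a region-based access gate, assuming an upstream geo-IP service has already resolved each request to an ISO country code. The function name, the blocked-region set, and the response shape are all hypothetical; OpenAI has not published how it restricts access.

```python
# Hypothetical sketch of region-based access gating. Assumes an upstream
# geo-IP layer has resolved each request to an ISO 3166-1 country code;
# nothing here reflects OpenAI's actual implementation.

BLOCKED_REGIONS = {"IT"}  # e.g. Italy, following a stop-processing order

def gate_request(country_code: str):
    """Return a refusal for regions where processing is suspended,
    or None to fall through to normal request handling."""
    if country_code.upper() in BLOCKED_REGIONS:
        # HTTP 451 is the standard "Unavailable For Legal Reasons" status
        return {"status": 451, "body": "Service unavailable in your region."}
    return None

print(gate_request("IT"))  # {'status': 451, 'body': '...'}
print(gate_request("FR"))  # None
```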
Suite of GDPR issues
The GDPR applies whenever EU users' personal data is processed. And it's clear that OpenAI's language model has been crunching this kind of information, since it can, for example, produce biographies of named individuals in the region on-demand (we know; we've tried it). The latest iteration of the technology was trained on data scraped from the Internet, including forums such as Reddit. So if you've been reasonably online, chances are the bot knows your name.
Given these preliminary findings, it's important that entities like OpenAI be transparent about their data sources and practices, both to reassure people who trust the bot with their personal information and so that those whose data it reproduces can check whether it's accurate. So far, that does not always appear to be the case.
This transparency will likely be essential in mitigating any potential GDPR concerns around misuse of people’s data by ChatGPT and similar bots.
The Garante's intervention also comes on the heels of OpenAI admitting that a bug in ChatGPT's conversation history feature had been leaking users' chats and potentially exposed some subscribers' payment information. Because the service is still in active development, further incidents could expose even more individuals, so verifying and fixing such issues will have to be a top priority for the company going forward.
Breaches of GDPR can result in fines of up to 4 percent of annual global turnover or €20 million (whichever is greater), as well as the suspension of data processing operations. In order to meet the high regulatory expectations around data protection, organizations should take several steps before and after a breach occurs, such as: conducting an information security assessment; maintaining stringent document retention policies; regularly logging and monitoring systems; and establishing an incident response plan.
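To put one of those steps in concrete terms, here is a minimal sketch of breach-response bookkeeping. The 72-hour notification window comes from Article 33 of the GDPR; the class and field names are our own illustration, not any standard compliance library.

```python
# Minimal sketch of breach-response bookkeeping under the GDPR.
# Article 33 requires notifying the supervisory authority without undue
# delay and, where feasible, within 72 hours of becoming aware of a breach.
# Class and field names are illustrative only.

import logging
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("incident")

@dataclass
class BreachIncident:
    description: str
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def notify_by(self) -> datetime:
        """Deadline for notifying the supervisory authority (Art. 33 GDPR)."""
        return self.detected_at + timedelta(hours=72)

incident = BreachIncident("Conversation-history bug exposed other users' chat titles")
log.info("Breach detected %s; notify regulator by %s",
         incident.detected_at.isoformat(), incident.notify_by.isoformat())
```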
To process Europeans' data lawfully, OpenAI needs a valid legal basis under the General Data Protection Regulation (GDPR), the EU regulation, in force since May 2018, that sets out rules for how personal data must be collected, used, and protected. The most plausible candidates are consent or legitimate interests, and there is no guarantee that all of OpenAI's processing clears either bar.
If an entity doesn't have a valid legal basis, such as consent, from the people whose data it processes, it could be in violation of the GDPR. The regulation also dedicates a lot of focus to data minimization, which requires that personal data be collected only when necessary and not repurposed without justification. That principle sits uneasily with training AI models on repurposed data. In the case the Garante cites, the for-profit company behind ChatGPT repurposed data from thousands of people to train its commercial AI models; those people were likely never aware their data was used this way, and may not be happy about it. Given the GDPR's emphasis on transparency and fairness, these are issues large organizations must address if they want to adhere to its rules.
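To make the minimization principle concrete, here is a deliberately simple sketch of redacting direct identifiers from scraped text before it enters a training corpus. A real pipeline would need far more than two regular expressions; the patterns and names below are assumptions for illustration only.

```python
# Illustrative data-minimization pass over scraped text: redact obvious
# direct identifiers (emails, phone numbers) rather than storing them.
# Two regexes only show the principle; real pipelines need much more.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+\w")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(text: str) -> str:
    """Strip direct identifiers that aren't needed for model training."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(minimize("Contact Jane at jane.doe@example.com or +39 02 1234 5678."))
# -> Contact Jane at [EMAIL] or [PHONE].
```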
If OpenAI has processed Europeans' data unlawfully, DPAs across the bloc could order the data to be deleted, although whether that would also force it to retrain models built on unlawfully obtained data is an open question. As legislators grapple with cutting-edge technology and its effects on privacy, this type of scenario highlights how quickly new rules may need to be put in place.
Taken to its logical conclusion, the objection to scraping personal data for training means Italy may have just banned all machine learning by, er, accident.
In its order, the Garante underscores the lack of any legal basis justifying the massive collection and storage of personal data for the purpose of training the algorithms underlying the platform. Unless OpenAI can point to one, suspending that collection may be its only defensible position, even as a temporary solution.
The Garante also flags accuracy concerns: checks it carried out suggest the information ChatGPT provides about people does not always match the facts. That means personal data may be being processed inaccurately, and without consent, which is itself a potential GDPR breach.
There is a child-safety dimension, too. Because ChatGPT does not actively prevent people under the age of 13 from signing up, there is a risk that children's data will be processed by the chatbot and that minors will be served responses inappropriate for their age. Addressing this would likely mean OpenAI building age verification into the signup flow so that underage users are screened out.
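As a rough sketch of where such a check would sit, here's a minimal signup-time age gate based on a self-declared birth date. Self-declaration alone is unlikely to satisfy a regulator demanding robust verification, and every name here is hypothetical; it only shows the shape of the check.

```python
# Hedged sketch of a signup-time age gate based on a self-declared birth
# date. Self-declaration alone would not amount to robust age verification;
# this only illustrates where such a check sits in the signup flow.

from datetime import date

MINIMUM_AGE = 13  # the age threshold at issue in the Garante's order

def is_old_enough(birth_date: date, today: date | None = None) -> bool:
    today = today or date.today()
    years = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return years >= MINIMUM_AGE

def signup(email: str, birth_date: date) -> str:
    if not is_old_enough(birth_date):
        return "Signup refused: users must be at least 13."
    return f"Account created for {email}"

print(signup("adult@example.com", date(1990, 1, 1)))  # Account created ...
print(signup("child@example.com", date(2015, 6, 1)))  # Signup refused ...
```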
The Italian regulator has become very active in recent years in pursuing companies that fail to take precautions with children's data. It recently issued a similar order banning the virtual friendship AI chatbot Replika over child-safety concerns, and it has pursued TikTok over underage usage, forcing the company to purge over half a million accounts it could not confirm did not belong to kids. This raises questions about just how much personal data technology companies collect and how safe these platforms are for the children using them.
This could be a significant issue for OpenAI, as it may be unable to reliably sign up new users in Italy. If it cannot confirm the age of existing users, it may have to delete their accounts and start again with a more robust sign-up process. That could limit OpenAI's growth in Italy considerably and hinder the company's ability to achieve its goals.
OpenAI was founded in December 2015 with around $1 billion in pledged backing and a stated mission of ensuring that artificial general intelligence, meaning AI that can act autonomously across a wide range of tasks, benefits humanity rather than harms it. How it squares that mission with European data protection law is now a very live question.