The Italian Data Protection Authority, the “Garante per la protezione dei dati personali”, has ordered the immediate, temporary suspension of the processing of Italian users’ data by OpenAI’s chatbot, ChatGPT. The move follows a data breach on March 20 that exposed users’ conversations as well as payment information belonging to subscribers of the paid service.

The Garante’s decision was based on the absence of a legal basis to justify the massive collection and storage of personal data used to “train” the algorithms underlying the platform. The regulator also noted the lack of any age verification system, which exposes minors to responses that are unsuitable for their level of development and self-awareness.

The regulator’s ruling stipulates that OpenAI must report within 20 days on the measures taken to comply with the authority’s requirements, or face a fine of up to €20 million or up to 4% of the company’s annual turnover.

ChatGPT is well-known conversational artificial intelligence software capable of simulating and processing human conversations. According to the regulator’s investigation, the information ChatGPT provides does not always correspond to real data, resulting in inaccurate processing of personal data.

The authority also pointed to the lack of information provided to users and to all parties whose data is collected by OpenAI, particularly regarding the legal basis for the massive collection and storage of personal data.

The suspension of the service in Italy is a temporary measure, although its duration has not been specified. Even so, the order could have implications for OpenAI and its future plans. The company, which is based in the United States, has designated a representative in the European Economic Area despite not having a physical presence there.

The suspension of ChatGPT in Italy raises questions about the use of artificial intelligence technologies in general and their impact on personal data privacy. Concerns have been raised about the potential for these technologies to be used for malicious purposes or to facilitate unethical practices, such as the collection and use of personal data without consent.

In recent years, regulators around the world have been scrutinizing tech companies’ use of personal data and imposing fines and penalties for data breaches and privacy violations. The European Union’s General Data Protection Regulation (GDPR) is one such measure, introducing strict rules for the collection, use, and storage of personal data.

The suspension of ChatGPT in Italy could also have implications for other artificial intelligence chatbots and similar services, which are becoming increasingly popular. It is essential that companies developing and deploying these technologies take steps to ensure the privacy and security of personal data, and that regulators have the tools they need to enforce compliance with data protection laws.

As technology advances and artificial intelligence becomes more sophisticated, the use of these technologies will continue to grow. It is crucial that individuals, companies, and regulators work together to strike the right balance between innovation and privacy, to ensure that the benefits of these technologies are realized without compromising personal data protection.
