➡ The Italian Data Protection Authority (Garante) is paying close attention to Artificial Intelligence. ✔ Following the well-known temporary ban on processing imposed on #OpenAI by the Authority on 30 March of last year, and based on the outcome of its fact-finding activity, in January 2024 the Garante notified OpenAI of breaches of privacy law on the basis of the evidence collected. ✔ In March 2024 the Garante also opened an investigation into OpenAI's "Sora" (the new AI service able to create dynamic, realistic and imaginative scenes from short text instructions), asking OpenAI to provide information on the #algorithm that creates short videos from text instructions. ✔ Finally, this week the Garante wrote to the Italian Parliament and Government, highlighting that it possesses the necessary competence and independence to implement the #AIAct, in line with the objective of ensuring a high level of protection of fundamental rights. 📢 "Given its impact on people’s rights, AI should fall within the jurisdiction of Authorities with stringent independence requirements, such as #Privacy Authorities, also due to the close interrelation between artificial intelligence and data protection and to the expertise already acquired with regard to automated decision-making," the Garante wrote in its communication to the Italian Parliament and Government. 🔊 In short, it is clear that on the one hand the Garante is stressing (as well as investigating) the synergy between #AI and #DataProtection, and on the other hand it is pushing for both to be overseen by a single independent Authority. We'll see how it turns out: stay tuned!
Adriano D'Ottavio’s Post
More Relevant Posts
-
Dive into our latest blog post on how AI companies can overcome challenges posed by strict data privacy regulations while continuing to innovate and grow. Discover key strategies for balancing compliance and technology advancement. https://lnkd.in/dwnUNRW2 #ai #artificialintelligence #ml #machinelearning #dataprivacy #aidataprivacy #dataprivacyai
-
As artificial intelligence technologies advance, concerns about privacy and ethical implications intensify. With the AI Act, the European Union will soon be the first to adopt a law that regulates the use and development of Artificial Intelligence. However, businesses still face a complex challenge. Managing the balance between AI and business interests is not just about legal compliance, but also about maintaining customer trust. In the carousel below, we’ve gathered some essential pieces of advice to navigate AI and data privacy 👇 But the topic is quite heated and constantly changing. That’s why we regularly update our guide on the topic: https://lnkd.in/dG86ksg9. Bookmark it and stay up-to-date! #privacy #dataprotection #ai
-
The AI Act (the EU Artificial Intelligence Regulation) has received final approval from the EU Council. The first-of-its-kind law is expected to reshape how businesses operate #AI in Europe, from health care decisions to policing. It bans some “unacceptable” technologies while imposing strict requirements on other high-risk applications. For example, it outlaws social scoring systems powered by AI and any biometric-based tools used to guess a person’s race, political leanings or sexual orientation. It bans the use of AI to interpret the emotions of people in schools and workplaces, as well as some types of automated profiling intended to predict a person’s likelihood of committing future crimes. In the cases where emotion recognition is allowed, users must be warned. Certain categories of AI must follow transparency and #security rules. Werner Vogels, CTO of Amazon, warned the EU against overregulating AI, pointing to the example of its signature data privacy law, the GDPR, which he described as a very “thick” book. Yann LeCun, chief AI scientist at Meta, told CNN that “There are clauses in the EU AI act and various other places that do regulate research and development. I don’t think it’s a good idea”. Press release by EU Council https://lnkd.in/dHqQa3u4. Comments to CNN https://lnkd.in/dR2mBZ-Y
-
Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. 👉Join our AI governance training programs (900+ participants)
🔵 UNPOPULAR OPINION: the GDPR also applies when creating and training AI datasets - and most tech companies ignore it. This must change. Read this: As the CNIL - Commission Nationale de l'Informatique et des Libertés's infographic below shows, regardless of the data source, data protection law must be observed when creating a training dataset. A reminder that Article 6 of the GDPR establishes the possible lawful bases for processing personal data:
- consent
- contract
- legal obligation
- vital interest
- public interest
- legitimate interest
Most AI companies developing large language models today rely on legitimate interest to scrape data from the web and train their models. However, despite seeming like an "easy" alternative, legitimate interest has its own legal requirements, including the three-part test (purpose, necessity, balancing), transparency, data minimization, and storage limitation. Most tech companies developing AI today don't comply with any of these (and I have not yet mentioned data subjects' rights and other data protection principles). With the quick and ubiquitous integration of generative AI and capabilities based on large language models into daily applications, data protection law must be implemented and made effective (or privacy rights and advancements - which took so much effort and time - will be undermined). Privacy matters, ALSO when AI is involved. Join our 4-week Privacy & AI Bootcamp on January 31st and learn more about it. #AI #privacy #dataprotection
-
I help leaders and organisations communicate effectively | Global Communication and PR Strategist | Exec Coach | Board Chair | Internal Comms | Change Manager | Author | Key Note Speaker | AI Strategist | Podcaster
𝐏𝐫𝐨𝐭𝐞𝐜𝐭𝐢𝐧𝐠 𝐩𝐫𝐢𝐯𝐚𝐜𝐲 𝐢𝐧 𝐭𝐡𝐞 𝐚𝐠𝐞 𝐨𝐟 𝐀𝐈 🔒 Today's focus among the ten principles of responsible AI from the Centre for Strategic Communication Excellence is ensuring our AI tools respect individual privacy and adhere to the highest data protection standards. We must secure personal data and maintain the trust of those we communicate with. The third principle is: 𝐏𝐫𝐢𝐯𝐚𝐜𝐲 𝐚𝐧𝐝 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐭𝐞𝐜𝐭𝐢𝐨𝐧 "AI tools must respect individual privacy and adhere to data protection laws and regulations within the organisation's context. Communication professionals handle personal data securely and obtain consent for data collection and usage when necessary. Communication professionals ensure their organisations abide by privacy and data protection laws, regulations, and policies." 🔒 Tip: Ensure all AI-driven communication campaigns are built on a foundation of consent, protecting user data at every step. There are many different laws protecting privacy and data. We need to be aware of and abide by them. 𝐖𝐡𝐚𝐭 𝐚𝐫𝐞 𝐲𝐨𝐮𝐫 𝐭𝐢𝐩𝐬? Link to Responsible AI for the Communication Profession https://lnkd.in/gMGNeShH Image created by OpenAI's DALL·E. #Privacyanddataprotection #ResponsibleAI #StrategicCommunication
-
The latest whitepaper from Anonos: Data without the drama shows how to safeguard data privacy within LLMs without sacrificing performance. Ted Myerson and Gary LaFever tackle the critical balance between AI innovation and data privacy. Gartner predicts that by 2027, noncompliance with data protection laws will impact AI deployments. This makes the Anonos platform and their patented tech even more vital. The whitepaper reveals how to protect sensitive data in LLMs without sacrificing performance, achieving nearly identical results with protected data in fine-tuning and similar performance in Retrieval Augmented Generation (RAG). Read more here: #AI #DataPrivacy #Innovation #LLMs #TechTrends #AIRegulation
How to Mitigate LLM Privacy Risks in Fine-Tuning and RAG | Anonos
anonos.com
-
Hong Kong's Office of the Privacy Commissioner for Personal Data (PCPD) released crucial guidelines for managing data in AI systems. As businesses increasingly adopt AI to enhance efficiency and perform in-depth analyses, they face challenges due to unclear regulations. Stay ahead in the AI space by understanding and implementing these guidelines. https://lnkd.in/gVxe9vFi #AI #DataPrivacy #Compliance #Innovation
Hong Kong: Issues New Guidelines on AI Data Use — Meta Connects
metaseconnects.com
-
The debate over foundation models (more commonly known as GenAI) in the EU’s AI Act has hit a deadlock in the final stage of the legislative process. The divergence of opinions on the responsibilities of each player in the AI value chain, particularly between upstream providers and downstream operators, highlights the complexity of introducing a comprehensive regulation for AI systems. It also mirrors a challenge many organizations face - striking a balance between driving innovation and ensuring proper accountability. While the legislators continue to deliberate, it’s important to consider the implications for AI security. As with the GDPR for data privacy, the direction set by the EU will significantly influence global AI policy, establishing precedents for how we assess risk, assign responsibility, and secure AI. #AIAct #ArtificialIntelligence #EURegulation #AISecurity #CraniumAI https://lnkd.in/eta2w-G2
EU countries mull options on AI law while foundation model stalemate looms large
euractiv.com
-
Professional Privacy Compliance, Artificial Intelligence and Cyber Security Expert with a focus on Europe, Middle East & Africa. Certified Data Protection Auditor. Public speaker. Owner Racingpixels.com
Breaking News: France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated! Until last Friday there was still a 50/50 chance we would not see an EU AI Act in this legislative term. Read more here: https://lnkd.in/e8P8scNk #EU #AIACT #AI #ARTIFICIALINTELLIGENCE #PRIVACY
Germany, France and Italy reach agreement on future AI regulation
tbsnews.net
-
MLex Update: AI Act is published in the Official Journal of the EU. Summary - The EU's Artificial Intelligence Act was published in the Official Journal today, starting the clock on its legal deadlines. The AI Act will enter into force on Aug. 1. The bans on applications deemed to pose an unacceptable risk will start to apply on Feb. 2, 2025. By May 2, the codes of practice for general-purpose AI models will have to be ready, as the rules for model providers such as OpenAI and Google will start applying on Aug. 2, 2025. The rest of the AI law's provisions will begin to apply on Aug. 2, 2026, except the categorization of high-risk AI applications covered under sectoral safety laws. Further data privacy and security information on the EU Artificial Intelligence Act, the rules regulating the use of #AI, and comments on the Act (which remains a "work in progress", with challenges identified) by MLex Senior AI Correspondent Luca Bertuzzi:
More from this author
-
Health data: the risk underlying processing and the data protection impact assessment
Adriano D'Ottavio 3y -
The data protection impact assessment in light of the EDPB Guidelines on data protection by design and by default.
Adriano D'Ottavio 4y -
GDPR, the purpose limitation principle: a balance between privacy and innovation
Adriano D'Ottavio 5y