📢 CAIDP Provides Comments to Australian Government on Online Safety Act and AI 🇦🇺

In comments to the Australian Government, the Center for AI and Digital Policy wrote, "the online risks which accompany the advent of generative AI are extensive, and include threats to personal privacy, intellectual property, and life-altering outcomes based on AI-enabled decision-making." CAIDP thanked the Australian government for the opportunity to provide public comments on proposed changes to the Online Safety Act and made several specific recommendations concerning AI:

1️⃣ Establish red lines for developers, providers, and deployers of AI systems regarding training data, prohibiting practices that contravene the Australian Privacy Principles, including web-scraping of personal data and intellectual property.

2️⃣ Require transparent and contestable data provenance for AI models trained on web-scraped data, so that data subjects can learn when their personal, private data and intellectual property have been used to train AI models, with an opportunity for compensation and removal of that data.

3️⃣ Require rigorous, independent impact assessments prior to deployment to identify and mitigate potential online harms, including biases and rights violations, with ongoing re-assessments across the AI lifecycle.

4️⃣ Require algorithmic transparency for AI systems, so that users know when they are interacting with an AI or algorithmic system and receive clear and valid reasons for outcomes affecting their lives.

5️⃣ Require human oversight and control over AI systems operating online, with an affirmative obligation to terminate a system if human control is no longer possible or if the system fails to uphold human and civil rights, in keeping with the Universal Guidelines for AI, a precursor to the Australia-endorsed UNESCO Recommendation on the Ethics of Artificial Intelligence.
Merve Hickok Marc Rotenberg Caroline Friedman Levy Nayyara Rahman Lyantoniette Chua Center for AI and Digital Policy Europe #australia #onlinesafetyact #aigovernance #webscraping #dataprotection #intellectualproperty #impactassessments
-
On May 21, 2024, the Council endorsed the Artificial Intelligence Act (AI Act), marking a significant milestone as the first set of worldwide rules on AI. The AI Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while boosting innovation and establishing Europe as a leader in the field. The regulation applies to both public and private AI systems within the EU and covers all types of AI providers.

𝗡𝗲𝘅𝘁 𝗦𝘁𝗲𝗽𝘀:
▶ Publication in the EU’s Official Journal.
▶ The Act will enter into force 20 days after publication and be fully applicable 24 months later, with specific timelines for certain provisions.

𝗖𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗥𝗶𝘀𝗸-𝗕𝗮𝘀𝗲𝗱 𝗥𝗲𝗴𝗶𝗺𝗲𝘀 𝗼𝗳 𝗔𝗜 𝗦𝘆𝘀𝘁𝗲𝗺𝘀
AI systems are categorized by risk level:
▶ Minimal Risk: Common AI systems, like spam filters and recommender systems, with no additional obligations.
▶ Transparency Risk: General-purpose AI must meet transparency requirements, including compliance with EU copyright law.
▶ Systemic Risk: Providers must perform model evaluations, mitigate systemic risks, and ensure cybersecurity protection.
▶ High Risk: Requires conformity assessment and post-market monitoring, including public registration and transparency measures.
▶ Unacceptable Risk: Prohibited due to threats to citizens' rights.

𝗘𝘅𝗲𝗺𝗽𝘁𝗶𝗼𝗻𝘀 𝗮𝗻𝗱 𝗦𝘂𝗽𝗽𝗼𝗿𝘁 𝗳𝗼𝗿 𝗜𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻
▶ Fully exempted: AI for research, development, prototyping, and military, defence, or national security purposes.
▶ Providers of free and open-source models are exempt unless they pose systemic risks.
▶ Regulatory sandboxes will be established to support SMEs and start-ups.

𝗣𝗿𝗼𝗰𝗲𝗱𝘂𝗿𝗲𝘀 𝗮𝗻𝗱 𝗙𝗶𝗻𝗲𝘀
Each Member State will designate a national authority to supervise the AI Act. A new European AI Office will oversee general-purpose AI models. Breaches of the AI Act will result in fines based on the severity and type of infringement, with specific thresholds for different categories.
Curious to learn more about these cases and their implications? Read the latest blog post written by Kim Lucassen, Nina Orlić, Kirill Ryabtsev, Stéphanie De Smedt, Emilia Fronczak, Gilles Pitschen, Martijn Schoonewille, Yannick Geryszewski, Marc Ph.M. Wiggers, Ph.D., and Marco de Vries for valuable insights on this topic. Read more: https://lawand.tax/3yMJ4ax #artificialintelligence #innovation #europe #AIproviders #AIact #lawandtax
-
How could data minimization rules in the U.S. affect product improvement and development, and AI?

One of the key issues explored in our recent paper ‘Data Minimization in the United States’ Emerging Privacy Landscape: Comparative Analysis and Exploration of Potential Effects’ is the potential impact of data minimization rules in the U.S. (in particular, the APRA) on the development and subsequent improvement of AI technologies. Our paper highlights:

🔹 Potential Impacts on Product Improvement and Research: Although APRA Section 102(d)(7) permits processing of data to “develop or enhance a product or service of the covered entity or service provider, as well as to conduct research or analytics to improve a product or service,” it imposes certain restrictions that undermine this allowance. Specifically, the APRA mandates that any product development or enhancement (i) must use only de-identified data, and (ii) must use data "previously collected in accordance with the APRA." Organizations routinely gather information on how users interact with their services and collect feedback about which aspects of a product they like or dislike. Our research found that organizations may, for instance, have difficulty meeting the high standard for de-identification set out in the APRA, posing challenges to product improvement and research efforts.

🔹 Potential Impact on AI: Some AI developers have argued that the APRA would affect the development of AI by overly restricting the volume of data available for responsible model training. In some circumstances, developing and testing AI products may involve the use of personal information that may not necessarily be considered “requested” by the individual but is still processed appropriately, with privacy safeguards and within the context of the consumer-business relationship.
In this scenario, the permissible purpose of product or service development and improvement under Section 102(d)(7) would not be helpful because it only allows for such purposes using de-identified data. See the short guide below for more information and download the full paper for our complete analysis of data minimization requirements in U.S. state privacy laws and the proposed American Privacy Rights Act: https://lnkd.in/emnCHK_E #usa #data #regulation #APRA
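De-identification in the APRA sense is a legal standard rather than a single algorithm, but the basic mechanics, and why the standard is hard to meet, can be illustrated. A minimal sketch (the field names and records below are invented for illustration): drop direct identifiers, generalize quasi-identifiers, then check the smallest group size over the remaining fields, a crude k-anonymity test. Real re-identification risk assessment is far more involved than this.

```python
from collections import Counter

def deidentify(records, direct_ids=("name", "email")):
    """Naive de-identification: drop direct identifiers, coarsen quasi-identifiers."""
    out = []
    for r in records:
        r = {k: v for k, v in r.items() if k not in direct_ids}
        r["zip"] = r["zip"][:3] + "**"    # generalize ZIP to a 3-digit prefix
        r["age"] = (r["age"] // 10) * 10  # bucket age into decades
        out.append(r)
    return out

def min_k_anonymity(records, quasi_ids=("zip", "age")):
    """Smallest equivalence-class size over the quasi-identifiers."""
    groups = Counter(tuple(r[k] for k in quasi_ids) for r in records)
    return min(groups.values())

users = [
    {"name": "A", "email": "a@x.com", "zip": "90210", "age": 34, "rating": 5},
    {"name": "B", "email": "b@x.com", "zip": "90213", "age": 37, "rating": 2},
    {"name": "C", "email": "c@x.com", "zip": "90214", "age": 31, "rating": 4},
]
safe = deidentify(users)
print(min_k_anonymity(safe))  # here all records fall into one group of size 3
```

A k of 1 would mean some record is unique on its quasi-identifiers and thus potentially re-identifiable; production datasets with many attributes tend toward small groups, which is one reason a high de-identification bar is difficult to clear in practice.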
-
Solicitor & AI Subject Matter Researcher. I’m an emerging technology researcher focused on regulation and the impact of technologies such as Artificial Intelligence on established rights.
This is insightful and helpful to my current research project.
-
The great Lily Ray, taking a deep dive into the future of AI content and its legal landscape in the EU, points out a game-changing shift that might redefine how we deal with AI-generated content. The vibe? We’re stepping into a new era where not flagging AI-created stuff could land you in hot water, at least if you're playing in the EU's sandbox.

Gone are the days when we'd only side-eye Google’s Search Quality Guidelines or its take on web spam. Now the EU is tossing its hat into the ring with legal moves that could very well set the tone for the global response to AI content creation. Could this be the GDPR moment for AI-generated content? It's a twist that's got everyone on the edge of their seats, watching closely as the EU rolls out its AI Act.

This isn’t your everyday regulation; it’s a full-on strategy to ensure AI tech doesn’t step out of line, categorizing AI systems by risk and clamping down hard on the ones that could mess things up, like tampering with critical infrastructure or meddling in education and employment. But here's the kicker: the AI Act is cool with the low-key AI players, the ones jazzing up video games or filtering out spam. The spotlight, though, is on those high-risk AI applications, demanding they play by a strict set of rules to keep everyone safe and sound.

And let’s not forget the AI big leagues, those general-purpose AI models. The Act is set up to make sure these AI powerhouses play fair, adding a fresh layer of transparency and risk management into the mix. With the AI Act, Europe is gearing up to lead by example, showing the world how to balance cutting-edge AI innovation with a hefty dose of respect for human rights and safety. It’s a heads-up that AI in Europe, and possibly beyond, is about to hit a new level of accountability. Ironic, isn't it?
The days of freely publishing AI-generated content and not telling users how it was generated might be coming to an end...? In the European Union, at least. Now it's not just about what Google says in the Search Quality Guidelines or its web spam policies. Now it's also a legal issue in the EU, which might set a precedent for how other jurisdictions respond to AI. Is labeling AI-generated content the new GDPR? This will be very interesting to watch. And it turns out - this new law is consistent with what Google has been recommending for AI-generated content since early 2022. #SEO #AI #EU #aiact https://lnkd.in/dYnfHY2s h/t Shaun Anderson
-
Transformative Data Strategy Leader | Harnessing the Power of Data and AI for Growth & Innovation | Creating a Shared Vision for Data Excellence
The EU AI Act has officially entered into force!

What you need to know about the AI Act: The EU AI Act, a landmark regulation published in the Official Journal of the EU in July 2024, establishes key dates and outlines obligations for high-risk AI systems and general-purpose AI models to ensure safety and compliance across various sectors.

What does it do? It aims to regulate the development and deployment of AI technologies within the EU and globally, setting a foundation for ethical and responsible AI practices that, if GDPR is a precedent, might soon be adopted by other countries.

What's next? Next steps for the implementation of the EU AI Act include the launch of the AI Pact by the AI Office, encouraging voluntary compliance with key provisions before the Act's application. In addition, the Commission will issue delegated acts and guidance on defining AI systems, criteria for high-risk AI, technical documentation requirements, conformity assessments, and more. Codes of practice are to be developed by May 2025, with a focus on practical implementation of high-risk AI requirements and transparency obligations, emphasizing the importance of regulatory clarity and industry collaboration in defining the future of AI governance.

https://lnkd.in/gYJ6YUrb

#AI #AIAct #EU #AIGovernance #DataGovernance #aiethics #privacy #IAPP
-
Data Protection Specialist @ TechGDPR, Qualified Lawyer at the Ankara Bar Association | EU Tech Laws, GDPR and Privacy, helping to bridge the gap between law and business.
🤔 Could AI-generated synthetic data reduce the privacy risks for EU policymaking and save the day?

📜 A very short paper that approaches the question at an EU policy level. ⤵

In short: regarding the EU's Data Policy, synthetic data could be an alternative to traditional data sources, but the quality of the data must be high. There are also issues with bias and accuracy in finding and using the best datasets. If done to good standards, data-driven policy-making can help with:
📈 Improved forecasting
👩⚖️ Better decision-making
🌍 Policy development
📲 Helping SMEs engage easily with data

The EU also wants to develop universal standards for the
1️⃣ Creation
2️⃣ Management
3️⃣ Interpretation
4️⃣ Sharing
of synthetic data, to ensure the data is interoperable and can be used effectively.

🔜 Time will tell how the process goes, but this could be a good alternative with fewer privacy headaches when using data.

#ai #syntheticdata #dataprivacy #eupolicy #eucommission #datapolicy
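The core idea behind synthetic data, generating records that preserve a dataset's statistics without releasing any real individual's record, can be shown in a few lines. A minimal sketch under strong simplifying assumptions (a single Gaussian model; the "real" numbers are invented): real synthetic-data generators model joint distributions far more carefully, and, as the post notes, still must be audited for bias, accuracy, and leakage.

```python
import random
import statistics

random.seed(0)

# Toy "real" data, e.g. incomes of survey respondents (invented numbers).
real = [31_000, 42_500, 38_200, 55_000, 47_300, 29_800, 61_000, 44_100]

# Fit a simple model to aggregate statistics only...
mu = statistics.mean(real)
sigma = statistics.stdev(real)

# ...then sample fresh records from the model instead of releasing real ones.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic sample tracks the real distribution's mean and spread,
# but no synthetic record corresponds to an actual respondent.
print(round(statistics.mean(synthetic)), round(statistics.stdev(synthetic)))
```

The quality concern in the post maps directly onto this sketch: a model this crude preserves only two moments of the data, so any policy analysis relying on finer structure (correlations, subgroups, tails) would be misled, which is why standards for creation and interpretation matter.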
-
AI, Cybersecurity, Data Privacy & Risk Management | Speaker & Strategist | Student of Geopolitics | Aviation Enthusiast
The European Union’s AI Act has gained strong support from key EU Parliament committees. This groundbreaking legislation aims to set guardrails for AI development and use, categorizing AI systems based on risk. Risk is broken into four categories, based on application:
Unacceptable risk
High risk
Limited risk
Minimal or no risk

An example of unacceptable risk is the cognitive behavioral manipulation of people or specific vulnerable groups (e.g. children). Let's all think about this for a moment. This could potentially extend to targeted marketing that, instead of observing consumer behavior and providing suggestions, intends to drive it. We must ensure that AI fosters human achievement rather than fabricating alternate realities through mass manipulation. What are your thoughts?

#genai #eu #privacy #airisks
EU's AI Act wins fresh backing ahead of April vote
computerworld.com
-
Director Consulting Expert | Certified architect | Technologist | XR/VR-evangelist | Innovation, Data, AI & Information security professional
It's finally here. :) The EU's new "Artificial Intelligence Act". 👨⚖️

A piece of legislation that marks a significant step towards responsible AI usage. It attempts to balance the dynamism of innovation with the need for security and privacy. By implementing rules that protect fundamental rights and promote transparency, it strengthens trust in AI technologies and ensures responsible usage.

As an expert in architecture, AI, information security, and data management, I see several positive aspects:

- Protection of Fundamental Rights: by limiting the use of AI for biometric identification and surveillance, the law addresses crucial privacy concerns.
- Framing High-Risk AI: clearly defining and regulating high-risk AI systems is crucial to prevent potential harm.
- Support for Innovation and SMEs: the law encourages the development and testing of new AI technologies, which is vital for continued growth in the AI sector.

However, there are challenges. The risk of overregulation and implementation hurdles require careful monitoring. It's important for the law to remain technologically neutral and flexible enough to adapt to AI's rapid development.

In summary, the "Artificial Intelligence Act" is a necessary step to ensure that AI technology develops in a way that respects our fundamental values and societal norms in the EU while promoting innovation and growth.

Your thoughts and insights on this are very welcome! Please leave a comment! 😊

#ai #aiAct #eu
Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI | News | Europaparlamentet
europarl.europa.eu
-
🇪🇺 EU AI Act: Implementation Timeline

The EU AI Act was published in the Official Journal of the EU on 12 July 2024. Swipe through to see key dates and requirements for the EU AI Act implementation. Are you prepared?

🚀 1 August 2024: Entry into Force
• Act officially becomes law
• 20 days after publication
• Marks the beginning of the transition period

🛡️ 2 February 2025: Prohibitions Effective
Ban on unacceptable-risk AI practices takes effect, including:
• Subliminal manipulation
• Exploitation of vulnerabilities
• Social scoring by public authorities
• Real-time biometric identification in public spaces (with exceptions)

💼 2 August 2025: GPAI Obligations Begin
Obligations for general-purpose AI (GPAI) model providers start. Requirements include:
• Model documentation
• Risk mitigation measures
• Incident reporting
• Compliance with EU copyright law

📊 2 February 2026: High-Risk AI Guidance
European Commission to issue guidance on:
• Practical implementation of high-risk AI requirements
• List of practical examples of high-risk and not-high-risk use cases
• Clarification on application of the AI system definition

🔐 2 August 2026: Full Application
• Complete enforcement of the AI Act begins
• Exceptions for certain AI systems in EU law areas of freedom, security, and justice
• All providers, users, and importers must comply with applicable requirements

📝 2 August 2027: Commission Report
• European Commission to report on use of delegated powers
• Assessment of the need to amend the definition of AI systems
• Evaluation of the implementation and effectiveness of codes of practice

Is your organization ready for the EU AI Act? Let's connect and discuss how these changes might impact your business. Share your thoughts in the comments!

👓 Want more? Please follow me for regular updates on #dataprivacy and #AIGovernance from China, Hong Kong, Singapore and more.

#EUAIACT #AI #Privacy #DataProtection #PrivacyPros
-
➡ The Italian Data Protection Authority is paying close attention to Artificial Intelligence.

✔ Following the well-known temporary ban on processing imposed on #OpenAI by the Authority on 30 March of last year, and based on the outcome of its fact-finding activity, in January 2024 the Garante notified OpenAI of breaches of privacy law based on the collected evidence.

✔ Also, in March 2024 the Garante Privacy opened an investigation into OpenAI's "Sora" (the new AI service able to create dynamic, realistic and imaginative scenes from short text instructions), asking OpenAI to provide information on the #algorithm that creates short videos from text instructions.

✔ Finally, this week the Garante wrote to the Italian Parliament and Government, highlighting that it possesses the necessary competence and independence to implement the #AIAct, in line with the objective of ensuring a high level of protection of fundamental rights.

📢 "Given its impact on people’s rights, AI should fall within the jurisdiction of Authorities with stringent independence requirements, such as #Privacy Authorities, also due to the close interrelation between artificial intelligence and data protection and to the expertise already acquired with regard to automated decision-making.", the Garante wrote in its communication to the Italian Parliament and Government.

🔊 In the end, it is evident that on the one hand the Garante is stressing (as well as investigating) the synergy between #AI and #DataProtection, and on the other it is pushing for their application by a single independent Authority. We'll see how it turns out: stay tuned!