The rise of industry’s AI self-regulation

  • Posted on June 18, 2024
  • Estimated reading time 6 minutes

The regulation of Artificial Intelligence (AI) continues to be a difficult but widely discussed topic, especially with the formal adoption of the European Union (EU) AI Act and new guidance in the United States (U.S.) following the White House’s Executive Order last October. As governments around the world work to balance AI innovation and oversight, we are also starting to see industry-led consortiums and the self-regulation of AI take shape – a growing trend that business and technology leaders should be watching closely.

We expect this movement toward industry self-regulation around AI to pick up. Two key forces are at work here:

  1. AI use cases and priorities vary substantially by industry. Business and technology leaders are typically far better positioned to understand the current and future impact of AI within their respective industries, and that understanding is needed to create realistic guidelines for good governance and responsible AI. Industry-specific priorities and use cases call for policies, controls, and oversight with a degree of nuance that top-down, broad-based government regulation simply can’t incorporate. Supporting this, a recent Avanade research study that analyzed 3,000 responses from business and IT professionals across industries including banking, energy, government, health, life sciences, manufacturing, nonprofit, retail and utilities found that respondents from energy organizations were the most confident in their leaders’ AI fluency with regard to governance, while government professionals were the least confident of all industries surveyed.

  2. Government oversight can’t keep up with innovation. While some level of government regulation is necessary, we’ve seen time and time again that agencies don’t have the expertise or resources to keep pace with technical innovation. AI capabilities in particular are advancing quickly right now, and the guidelines needed to deploy AI safely and responsibly will (and should) evolve just as quickly as the technology advances. It’s also worth noting that if industry self-regulation is shown to be effective, government officials will feel less inclined to pass more heavy-handed regulations that could stifle future innovation.

“I don’t think that the technology is moving too fast; I think we all have work to do to make sure that, whether you’re in government or a business or a nonprofit, we’re moving forward what I’ll call safety and innovation at the same speed.”
            - Microsoft Vice Chair and President, Brad Smith (World Economic Forum, Davos)

So how is industry self-regulation around AI helping to move forward AI safety and innovation? A couple of key areas that industry-led consortiums and partnerships are working to advance today include:

  1. Sharing AI best practices by publishing guidelines and frameworks for using AI responsibly, including guidance on the security, dependability and oversight of AI algorithms. Industry-led consortiums are also helping to connect people with the expertise and skill sets needed to handle AI in a responsible way.

  2. Co-development of AI capabilities by facilitating collaboration among consortium members, factoring in robust evaluation standards and a deeper understanding of how humans interact with AI. These collaborations also level the playing field for participating organizations: regardless of their individual resources, each consortium member has access to the same benefits.

Let’s take a look at some examples:

In February 2024, the U.S. National Institute of Standards and Technology (NIST) announced the creation of the U.S. Artificial Intelligence Safety Institute Consortium (AISIC), a collaboration between more than 200 U.S. firms across various industries and the U.S. government to promote and support the safe use and deployment of AI. As part of this consortium, members benefit from knowledge and data sharing, access to testing environments and red-teaming for secure-development practices, and access to science-backed information on how humans engage with AI.

In March 2024, 16 U.S. healthcare leaders, Microsoft and other healthcare technology organizations announced the creation of the Trustworthy & Responsible AI Network (TRAIN), a consortium aiming to improve the quality, safety, and trustworthiness of AI in healthcare settings. TRAIN will also leverage the best practices set forth by the Coalition for Health AI (CHAI) and by OCHIN, whose mission is to help advance health equity. Like other industry-led consortiums, every organization that participates in TRAIN has access to the consortium’s benefits.

In April 2024, Cisco, Accenture, Eightfold, Google, IBM, Indeed, Intel, Microsoft and SAP announced the launch of the AI-Enabled Information and Communication Technology (ICT) Workforce Consortium, which will focus on upskilling and reskilling workers in roles likely to be impacted by AI.

In the same Avanade research study mentioned earlier, we found that less than half of employees say they completely trust the results produced by AI, and only 36% of CEOs say they are very confident in their leadership’s understanding of generative AI and its governance needs today. So, although these efforts look promising, it is far too early to tell whether industry self-regulation will be able to effectively balance AI innovation with the safety and guardrails needed to ensure AI doesn’t do more harm than good. It is also worth noting that few industry-specific consortiums have been formed or announced so far, so it remains unclear whether the existing cross-industry consortiums will sufficiently address industry-specific priorities and use cases. Regardless, this trend is important enough that technology and business leaders should be gearing up for it now so they aren’t left behind. Here’s how:

  1. Evaluate how well your strategy, processes, and policies align with the standards developing in your industry. If you can participate in their development, even better.
  2. Focus on the basics of good AI governance and responsible AI – like registration, documentation, risk management, testing, and monitoring – which will likely be part of any industry standards or government regulations in this space (a rough sketch of what a basic registration record might capture follows this list).
  3. Maintain a culture of innovation and employee development. Sponsor experimentation and skills development, expand participation in innovation to a wider set of people and roles, and focus more on employee and candidate skills and training than on degrees and experience.
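
To make point 2 above a little more concrete, here is a minimal, purely hypothetical sketch (written in Python; the field names and sample values are our own illustration, not part of any specific standard, framework, or Avanade offering) of the kind of record an internal AI registration process might keep so that documentation, risk ratings, test evidence, and monitoring plans sit in one auditable place:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRegistration:
    # Hypothetical fields for illustration only; adapt to your own governance policy.
    model_name: str          # registration: which AI system this is
    owner: str               # the team or person accountable for it
    intended_use: str        # documentation: what it is (and isn't) for
    risk_level: str          # risk management: e.g. "low", "medium", "high"
    evaluation_results: List[str] = field(default_factory=list)  # testing evidence
    monitoring_plan: str = ""  # how drift, misuse, and incidents will be watched

# Example entry in a simple registry list:
registry = [
    ModelRegistration(
        model_name="claims-triage-assistant",
        owner="Claims Operations",
        intended_use="Drafting first-pass claim summaries for human review",
        risk_level="medium",
        evaluation_results=["bias review 2024-05", "accuracy benchmark 2024-05"],
        monitoring_plan="Monthly output sampling and incident log review",
    )
]

Whatever form such a record actually takes in your organization, the point is the same: registration, documentation, risk ratings, test results, and monitoring plans should live somewhere they can be reviewed against whichever industry standards or regulations eventually apply.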

Let us know what you think. Have you started on any efforts to self-regulate around AI? Would you like to talk about how we’re seeing organizations in your industry rise to the challenge?

If you’re interested in delivering AI solutions with confidence, learn more about Avanade’s responsible AI capabilities.

Sources:

  1. Could Industry Self-Regulation Help Govern Artificial Intelligence? (forbes.com)
  2. Embrace Self-Regulation to Harness The Full Potential Of AI (forbes.com)
  3. Why self-regulation is best for artificial intelligence (The Hill)
  4. Top AI Companies Join Government Effort to Set Safety Standards (Bloomberg)
  5. HIMSS24: Microsoft, 16 health systems form health AI network (fiercehealthcare.com)
  6. AI Regulation is Coming - What is the Likely Outcome? (csis.org)
  7. Regulate AI? How US, EU and China Are Going About It (Bloomberg)
  8. Trustworthy AI: String Of AI Fails Show Self-Regulation Doesn’t Work (forbes.com)
  9. New Consortium Aims to Ensure Responsible Use of AI in Healthcare (hitconsultant.net)
