Navigating the Proposed US AI Act: Guiding Businesses on Regulatory Compliance and Innovation
Many business leaders watched anxiously as the US Senate convened its full contingent in Washington to learn about AI. Big tech leaders and others were given the opportunity to influence policy thinking. As part of our ongoing commitment to helping executives and professionals navigate AI disruption, we address below the proposed Bipartisan Framework for U.S. AI Act.
This proposed act would significantly impact businesses across all industries. As this legislation aims to regulate AI development, accountability, transparency, and national security, it is crucial for businesses to understand its potential consequences and take proactive measures to adapt. Below we explore the expected impacts on businesses, provide recommendations on how to prepare for the forthcoming changes, and draw lessons from historical examples of successful regulatory compliance.
I. Licensing Regime and Oversight
Under the proposed legislation, companies developing advanced general-purpose AI models (e.g., GPT-4) or using AI in high-risk scenarios would be required to register with an independent oversight body. This licensing regime entails maintaining risk management, testing, data governance, and incident reporting programs.
Impacts on Businesses
Compliance Costs: Businesses would need to allocate resources to establish and maintain risk management, testing, data governance, and incident reporting programs, as required by the licensing regime. These activities may require additional staff, technology infrastructure, and ongoing investments. For example, a FinTech startup specializing in AI-powered fraud detection would need to set up a specialized compliance unit tasked with navigating the intricate AI licensing regulations.
Regulatory Burden: The registration process and ongoing compliance requirements would introduce new administrative burdens for businesses. They would need to navigate the regulatory landscape, stay updated on evolving guidelines, and ensure continuous adherence to the established standards.
Increased Accountability: The legislation would enhance accountability by imposing regulatory oversight on AI development and usage. Businesses would be held responsible for ensuring the safety, ethicality, and transparency of their AI systems and practices. This could involve conducting audits, disclosing information, and demonstrating compliance to regulatory authorities.
Impact on Innovation: Although many worry that regulation will stifle innovation, the current patchwork of rules already makes innovation high risk. Reasonable, clear federal laws and regulations would foster innovation, preempt increasingly confusing and contradictory state-level legislation, and attract investment by reducing the risk of unknowns and giving businesses the confidence to explore new opportunities and develop advanced AI models.
Competitive Landscape: The legislation will affect the competitive landscape by imposing additional requirements on businesses. Compliance with the licensing regime will become a differentiating factor, with companies that can demonstrate strong risk management, data governance, and incident reporting programs gaining a competitive advantage.
Legal and Reputational Risks: Non-compliance with the legislation could result in legal penalties, fines, or other enforcement actions. Moreover, businesses that fail to meet the required standards may face reputational damage, loss of trust from customers and partners, and negative public perception.
How to Prepare: Lessons from Pfizer
In the 1980s and 1990s, industries such as pharmaceuticals and medical devices faced increased FDA regulation of product approval processes. Pfizer committed to quality and safety, developing a comprehensive Corporate Quality Policy. Its culture of quality and integrity includes a clear governance and organizational structure, a risk-based management process, and a rigorous training and qualification program for its employees. Pfizer has also engaged in proactive dialogue with regulators and other partners.
Following Pfizer's example, companies can:
- Develop compliance frameworks that embed documentation from the innovation stage
- Streamline registration processes through transparent, data-driven assurance approaches
- Inform the evolution of guidelines with feedback to balance obligations
II. Legal Accountability and Consumer Protection
The proposed Framework emphasizes legal accountability for companies using AI, enabling enforcement and private rights of action when AI systems breach privacy, violate civil rights, or cause harm.
Impacts on Businesses
Legal Exposure: The proposed Act would increase legal exposure for businesses using AI systems. They would be held legally accountable for any privacy breaches, civil rights violations, or harm caused by their AI systems.
Compliance Requirements: Businesses would need to implement measures to ensure the safety, accuracy, and transparency of their AI systems, as well as to protect privacy and civil rights. This may involve conducting thorough risk assessments, implementing robust data protection measures, and establishing mechanisms for addressing user complaints and concerns.
Reputational Risks: Non-compliance or instances of AI systems breaching privacy, violating civil rights, or causing harm could lead to significant reputational damage.
Increased Oversight and Reporting: The Act may require businesses to enhance their oversight mechanisms and reporting practices. They would need to demonstrate compliance, provide transparency regarding AI capabilities and limitations, and promptly address any issues or concerns raised by users or regulatory authorities.
Consumer Perception and Trust: Customers may expect higher levels of accountability, transparency, and ethical practices from AI companies. Meeting these expectations and building consumer trust would become crucial.
Impact on Innovation and Development: The increased legal accountability and compliance requirements may slow the pace of innovation and development of AI technologies. However, any slowdown may be offset by a greater appetite for innovation investment, as clearer rules reduce the risk of the unknown.
How to Prepare: Lessons from Scholastic
In the late 1990s, new privacy regulations such as COPPA and HIPAA raised compliance obligations. Scholastic, the leading children's publisher, diligently complied with COPPA and HIPAA, securing parental consent for children's data and protecting health information. Through proactive engagement with regulators, Scholastic maintained trust, reduced legal risk, and remained focused on its core mission.
As the proposed AI Act increases obligations, businesses can approach compliance holistically through measures such as:
- Appointing a new governance role focused on AI transparency, accountability, and oversight
- Engaging with policymakers and experts to understand requirements
- Developing robust processes and documentation around AI uses and associated risks
III. National Security and International Competition
The legislation emphasizes protecting national security by restricting the transfer of advanced AI models and related technologies to adversary nations or those involved in gross human rights violations.
Impacts on Businesses
Export Restrictions: Businesses engaged in the development and deployment of advanced AI models would need to comply with export control regulations. They may face restrictions on selling, transferring, or sharing these technologies with countries designated as adversaries.
Compliance and Due Diligence: Businesses would need to establish rigorous compliance processes and due diligence measures to ensure that their AI technologies are not being exported to restricted entities.
Supply Chain Complexity: The legislation's focus on national security will lead to increased scrutiny of supply chains associated with AI technologies. Businesses will need to assess and monitor their supply chains to ensure that components, software, or services sourced from external vendors or partners do not violate export control regulations.
Impact on International Collaborations: The legislation will impact international collaborations and partnerships, particularly those involving countries or organizations that fall under the restricted categories.
Ethical Considerations: The legislation's emphasis on protecting national security will raise ethical considerations for businesses involved in AI development. They would need to assess the potential implications of their technologies being used in contexts that violate human rights or pose risks to national security.
Industry Collaboration and Standards Development: To prepare for the legislation, businesses can engage in industry-wide collaborations and initiatives aimed at shaping the legislation, sharing best practices, and contributing to the development of ethical and responsible AI standards.
How to Prepare: Lessons from Texas Instruments
After the 9/11 attacks, the US government tightened export controls through laws like the Patriot Act. Texas Instruments (TI) showcased exemplary leadership. They swiftly developed a comprehensive export compliance program in line with U.S. regulations. By establishing a dedicated team, implementing rigorous training, and using technology, TI ensured thorough export transaction monitoring. They actively engaged with the U.S. government and partners, advocating for balanced export controls.
Businesses preparing for changes under the proposed US AI Act can similarly:
- Appoint oversight committees to periodically evaluate high-risk supply chains and collaborations
- Establish an industry-wide pre-screening process for new partnerships in restricted areas
- Pursue active policy engagement, voluntary self-regulation, and proactive partnership efforts