Washington – Business Roundtable today released a series of white papers addressing key issues in the regulation of artificial intelligence (AI). The papers cover defining and addressing risk in AI, promoting American leadership in AI, and safely harnessing synthetic content’s potential.

“Artificial intelligence holds tremendous potential for the U.S. economy and the people and businesses that power it,” said Adena Friedman, Chair and Chief Executive Officer of Nasdaq and Business Roundtable Technology Committee Chair. “Policymakers have a unique opportunity to reinforce American leadership in AI. Business leaders are committed to collaborating with policymakers to support a strong AI ecosystem, including designing effective and evidence-based guardrails when necessary, while empowering American innovation.”

“AI holds the potential to bolster economic growth and help to solve critical challenges. Our papers underscore the importance of collaboration among a broad range of stakeholder groups, including federal policymakers, academia, industry and civil society,” said Business Roundtable CEO Joshua Bolten. “Business Roundtable stands ready to collaborate with policymakers on transparent, commonsense guardrails that support a strong and safe AI ecosystem.”

Key Policy Recommendations

Defining and Addressing AI Risks
  • Policymakers should focus on the high-risk outcomes associated with deploying AI models and systems in specific contexts, while avoiding broad classifications of risk for entire sectors, categories of AI or uses of AI. Additionally, policymakers should define “high-risk” through a collaborative, robust stakeholder process.
  • Policymakers should align legislative and regulatory proposals with existing, effective domestic and international policies and industry risk management strategies to promote a harmonized approach and avoid introducing uncertainty and conflicting compliance requirements.
  • Policymakers should identify clear, measurable strategies for evaluating and addressing AI risks that will equip developers and deployers with the necessary information to safely, securely and confidently use AI.
Promoting American Leadership in AI Innovation
  • Policymakers should support strategic public-private partnerships working to strengthen AI innovation infrastructure, including codifying and appropriately funding the National AI Research Resource and the U.S. AI Safety Institute.
  • Policymakers should expand access to technical resources, including efforts to make high-impact government datasets more widely available.
  • Voluntary, harmonized and flexible risk-based standards will ensure that organizations are equipped to evaluate and implement AI tools, systems and services. Standards should be developed through partnership with industry, government and other relevant stakeholders.
Safely Harnessing Synthetic Content’s Potential
  • Policymakers should adopt risk-based guardrails for synthetic content that are adaptable and protect beneficial uses.
  • Policymakers should support initiatives that seek to validate authentic and credible content, ensuring individuals have sufficient information to identify the source and evaluate the trustworthiness of the content they encounter.
  • Regulatory guardrails should integrate multiple technical and people-centric approaches to effectively manage the risks synthetic content poses to people and society.
  • If policymakers create frameworks that impose penalties for harmful AI-generated synthetic content, they should ensure that responsible parties are given an opportunity to remediate.

Read the white papers in full here.