AI Permeates Our Lives: 2023 Expected to Continue Legislative Ramp-Up

Dr. Lisa Palmer · January 16, 2023 · 4 min read

How do we both protect people from unintended harms AND ensure a strong national innovation environment for Artificial Intelligence?

The Current Landscape

Although most people are unaware of it, Artificial Intelligence surrounds us daily, with over 91% of leading companies continually investing in it. Siri on our phones, online shopping that has learned which items we want, social media advertisements based on our online habits, automatic lane assist and braking in our cars, and so much more. Even less obvious, and more dangerous, are the AI systems that decide who is approved for a home loan, who is offered a job, who receives the best healthcare, and who receives bail versus being held in jail. Because this technology is so widespread and invasive in our lives, we assume that it is heavily legislated and regulated. It is not.

As with any technology, most people believe that AI should not be used to discriminate against people, whether purposefully or unintentionally. The Pew Research Center has documented Americans' concerns over the increasing use of AI in daily life. With 72% of Americans neutral, or more concerned than excited, about AI, it is politically and ethically prudent to protect constituents.

National Competitiveness Is at Stake

With Russian President Vladimir Putin stating that the country with the best AI will be "the ruler of the world," and with Chinese leaders aggressively pursuing AI dominance, it is clear that AI leadership is a matter of national competitiveness and security. We must move quickly and thoughtfully to maximize the capabilities of AI while also protecting people against systemically embedded actions that our society does not condone. It is time for specific legislative and regulatory action that both supports innovation and ensures that this critical technology serves humanity well.

Federal Progress So Far

The US Federal Government has made progress through an Executive Order issued by President Trump that was later codified by Congress in the National Defense Authorization Act for Fiscal Year 2021 as the National Artificial Intelligence Initiative Act of 2020 (NAIIA). More recently, in October 2022, additional foundational progress came with the introduction of a national framework, the Blueprint for an AI Bill of Rights. Further, national regulatory agencies have created a patchwork of situationally specific regulations. The Federal Trade Commission, as one example, has established rules and strongly advised businesses regarding expected behavior in specific situations.

State-Level Challenges

For context: Oklahoma, like many other states, faces the same structural challenges as the federal government. No single agency has accountability, and no law focuses on businesses and the way they use AI. Arguably, the problem is worse in Oklahoma, where little governmental attention is given to commercially created technology issues. In 2022, the state's first legislative attempts to address AI were made; three bills were introduced, and all failed. This is unfortunate, as H.B. 3011 attempted to address AI algorithmic harms. Ideally, a unified state-level approach to addressing digital issues and ensuring opportunities is needed.

The Explainability Trap

On the surface, it could seem that enough is being done, but the activity is misleading. Currently, so much debate centers on explaining how these systems work, among other concerns, that progress on protecting people from present-day harm is languishing. While the valuable explainability and transparency debate continues, legislation should be crafted that embeds expectations of fair and equitable outcomes across all human-impacting systems. Let us start by establishing situationally agnostic, legally auditable expectations specifically for equitable outcomes, with penalties attached for failure to comply. The onus will then fall on those who benefit financially from this powerful technology to ensure that they are creating "fair" outcomes, without stifling critical AI innovation.


Dr. Lisa Palmer

CEO & Co-Founder

Lisa wrote the book on AI adoption, literally. Her Wiley-published research, the largest qualitative study of enterprise AI adoption, shapes the frameworks neurocollective uses to help organizations move past AI ambition into measurable outcomes.

Research, AI Leadership