Regulating AI: Ethical Limits of Emerging Technologies: Why Should Executives Care?

Why on Earth would the Business Ethics Alliance take on a topic like regulating artificial intelligence (AI)? After our very successful August Signature Event with Dr. Kathy Meier-Hellstern, director of Google’s Responsible AI Department, we knew we needed to follow up on the ethics-of-AI discussion. Exploring the boundaries between regulation and innovation, and the business ethics involved in both, seemed the natural progression. Laurel Oetken, director of Tech Nebraska; Ruthie Barko, regional executive director for TechNet; and Jack Horgan, shareholder and attorney at Koley Jessen, formed the panel for the morning’s discussion, sponsored by Koley Jessen, Google, and Cox.

Framed against a quadrant matrix contrasting doing what is right for the business (even though it might not be ethical) with doing what is ethically right (even though it might not be best for business), the room of 100 participants had several opportunities to network, ask and answer questions, and hear the latest trends in what AI regulation means for their organizations.

Trends- Oetken opened the discussion by noting that Nebraska traditionally has a risk-averse culture, which explains the hesitancy to “dive in” and adopt AI technologies. Amid a workforce shortage, this hesitancy to engage with AI, coupled with unclear and still-developing regulations, may continue to affect workforce availability. Barko noted that AI regulation will be a significant issue in the 2024 elections. She sees bipartisan alignment and support across the region for regulating AI at the federal level, while state-level regulation may resemble a patchwork of laws focused more on data privacy. Horgan pointed to progress in the European Union (EU), which is finalizing its AI Act, a comprehensive regulation addressing AI alongside existing data-privacy rules. When it comes to AI, Horgan noted, the EU has much more stringent regulations protecting the individual, whereas the United States tends to be more lenient on individual data privacy, favoring organizational business needs instead. All three panelists identified industries most likely to be affected by AI regulation: innovation-driven fields like science and technology, which rely on the ability to “push” the boundaries of knowledge and practice; creative and content-driven industries, where copyright law cannot keep pace with AI; and small businesses, which may lack the capacity to fully utilize AI or understand the implications of its use.

Top of Mind Questions- The panel learned that the AI-regulation issues keeping executives up at night revolve around how to continue innovating in “unknown” areas when regulatory compliance is expected, and when non-compliance is the ethical thing to do, especially when human life and safety are at risk. On the latter, Horgan described the concept of humans being out of the AI loop (highest ethical risk), over the loop (moderate ethical risk), or in the loop (lowest ethical risk, provided the humans themselves are ethical).

Impact, or Why Executives Should Care- This segment framed questions for action going forward regarding ethical obligations when adopting AI technology, since ultimate responsibility for AI implementation falls to senior executives. AI is the first general-purpose technology in decades that will affect almost every aspect of an organization. Horgan compared the advent of AI to discovering something akin to “nuclear” technology (an apt comparison given the recent “Oppenheimer” movie!) and stressed the need for guardrails while these newer technologies are still emerging. Oetken reminded executives that understanding their own mindset is critical when working with AI: How much risk are they willing to take on? What bias and/or discrimination do their organizations risk introducing by using AI? At what point is the human CEO critical and irreplaceable? Barko asked those in the room to consider how they might ethically influence regulation through lobbying and organizational policy.

I ended the discussion for the day by sharing this quote about the role of AI in determining ethical behavior: “AI can be used as a tool to assist in determining ethical behavior, but the determination of what is ethical ultimately relies on human judgment and values. AI systems can be programmed to follow ethical guidelines or principles, and they can be used to analyze complex ethical dilemmas and provide insights. However, AI lacks the ability to make inherently ethical judgments without human input. It can only process information based on the data and instructions it has been given.” The author of this quote? ChatGPT. Now that is insightful AI.

Resources:

European Union AI Regulations

https://www.theverge.com/2023/12/13/23999849/eu-ai-act-artificial-intelligence-regulations-complicated-delays

Biden’s Executive Order Regarding AI Regulations

https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Meta Labeling AI imagery

https://www.reuters.com/technology/meta-start-labeling-ai-generated-images-companies-like-openai-google-2024-02-06/

Economist op-ed Regarding AI and Organizational Culture

https://www.economist.com/by-invitation/2024/02/12/two-experts-predict-ai-will-transform-companies-understanding-of-themselves