Shareholders Demand AI Risk Transparency from Tech and Entertainment Giants
Shareholders at major companies in tech and entertainment are pushing for greater disclosure of the risks associated with AI.
As artificial intelligence (AI) continues to develop and reshape industries, shareholders at major companies such as Alphabet Inc. and Warner Bros. Discovery Inc. are demanding more transparency about the risks of the rapidly evolving technology. This heightened scrutiny reflects a broader trend of investor concern, particularly after last summer's Hollywood strikes, which flagged issues such as AI diminishing writers' credit and duplicating actors' likenesses.
Investors are now pushing companies to reveal the potential pitfalls of AI and to outline their ethical guidelines in managing such technologies. For instance, proposals at Meta Platforms Inc. emphasize the danger of AI-generated misinformation affecting elections worldwide. This surge in investor activism reflects an unease about AI's unchecked progression and its capacity to disrupt not just business operations but also societal norms.
These shareholder proposals, although not always successful, have begun to stir significant discussion. For example, proposals at Apple Inc. and Microsoft Corp. have seen substantial shareholder engagement, though they fell short of majority support. The debates these proposals trigger often spotlight the complex balance companies must strike between leveraging cutting-edge AI technologies and mitigating associated risks.
At Microsoft’s annual meeting, Krist Novoselic, a shareholder and former Nirvana bassist, presented a proposal by Arjuna Capital seeking more clarity on how Microsoft plans to handle AI-driven misinformation. Microsoft, deeply involved in AI through its Copilot tool and its investment in OpenAI, has committed to producing a government report detailing its AI governance, highlighting ongoing corporate efforts to address AI's ethical implications.
At Meta and Alphabet, similar proposals aim to address concerns over misinformation stemming from AI advancements in social media and other digital platforms. These proposals not only underscore the potential for misuse of AI technology in spreading false information but also raise questions about the adequacy of existing governance frameworks to handle such challenges.
The labor movement, too, has played a critical role in escalating these issues. For example, the AFL-CIO withdrew proposals at Walt Disney Co. and Comcast Corp. after securing commitments for greater AI disclosure. It continues, however, to press for more substantial governance changes, including new proposals focused on AI's human rights implications, particularly in employment practices, at Amazon.com Inc.'s and Netflix Inc.'s upcoming annual meetings.
This wave of proposals and the discussions they provoke come at a time when regulatory environments are also shifting. The European Union is advancing its AI Act, aimed at ensuring AI systems adhere to safe and ethical standards, while in the U.S., the White House has issued directives focused on AI security and privacy.
Amid these regulatory changes, companies, particularly in tech and entertainment, are increasingly discussing AI in their annual reports, highlighting both the technology's potential and its risks. As they continue to explore AI's vast possibilities, they are also tasked with the crucial responsibility of aligning their AI strategies with broader ethical and governance frameworks.