A.I. and the Future of Inequality

 

THE FUTURE OF A.I. WITH WENDELL WALLACH

 
 

Will AI exacerbate inequality? Can we stop that from happening?

As the world grapples with how to build guardrails against the future dangers of AI, I thought it timely to talk to an ethicist who has lived and breathed these challenges for decades.

Enter Wendell Wallach, Carnegie-Uehiro Fellow at the Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative (AIEI). He is also Emeritus Chair of Technology and Ethics Studies at Yale University’s Interdisciplinary Center for Bioethics, a scholar with the Lincoln Center for Applied Ethics, a fellow at the Institute for Ethics and Emerging Technologies, and a senior advisor to The Hastings Center. His books include Moral Machines: Teaching Robots Right from Wrong and A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.

MEETING WENDELL WALLACH

Wendell was extraordinarily generous in inviting me to his home near Hartford, CT, where we recorded our discussion over cups of tea in his living room. He was grounded and thoughtful, with a refreshingly broad perspective on the many complex forces shaping our future.

We shared similar views on the potential for AI to do great good as well as harm, and a similar impatience with the ‘existential threat’ rhetoric bandied about by Silicon Valley luminaries who seem happy to divert attention to long-term fears while raking in as much short-term money as they can. I especially valued looking at AI through his inequality lens, which we explored in terms of wealth, power, and social standing, and the way he placed today’s AI challenges within the broader sweep of history.

Most importantly, while worried about bad corporate behaviors and all the ways AI can make things worse, he was also optimistic about the actions available to us to reshape the trajectory for the better.

There were many takeaways from our conversation, big and small, some of which I’ve revisited briefly below.

Enjoy the podcast!

 
 

CHECK OUT THE PODCAST TRANSCRIPT

UNEQUAL DIVIDENDS FROM PRODUCTIVITY

An early takeaway for me was the changed economic context against which our present-day AI issues are playing out. Wendell pointed out that 50 years ago roughly half of the productivity gains in our society went into worker wages and the other half went to the owners of capital, whereas now approaching 70% of all productivity gains go to the owners of capital! As he emphasized, we are creating an increasingly top-heavy society, ever more dependent on an ever more powerful “one percent” whose interests are served by exacerbating inequality. Couple this with the “winner-take-all” mentality that prevails in Silicon Valley, and something as powerful as AI will likely tip the balance even further.

 

ACCOUNTABILITY FOR A BETTER A.I. FUTURE

Given how we started the discussion – with “ethics washing,” with corporations on the whole taking us to darker places, and with even the so-called good guys unable to be relied upon to stay ethical when tempted by huge AI rewards – Wendell was surprisingly optimistic about the opportunities to turn the situation around.

The bedrock of his pathway to a better future is accountability. Properly educate citizens on the dangers so they know to demand accountability for bad AI behaviors from governments and corporations, and reintroduce accountability into law by removing, for example, the Section 230 protections that the big tech players presently hide behind. Section 230 of the 1996 Communications Decency Act is the US federal statute that immunizes online service providers from liability for the third-party content they host; in essence, it allows big tech companies to be treated as distributors rather than publishers. Repealing such protections won’t resolve everything, but reintroducing accountability would go a long way towards eliminating the most egregious players and convincing others to toe the line.

 

WHAT KIND OF WORLD DO WE WANT?

Wendell pointed out how grappling with the very significant challenges of AI governance has forced us to ask much bigger questions of ourselves. What kind of world do we want to live in? How do we want organizations to behave with respect to all emerging technologies? While we are groping forward slowly, almost all the progress we make on AI ethics guidelines and guardrails can be applied more broadly, and this can only be a good thing. For further reading on this subject as it applies to the biotech revolution, see my interview with Jennifer Kuzma on what’s at stake now that CRISPR has made it possible to edit all forms of life. We are indeed living at a key moment in history!

 

YOU CAN MAKE A DIFFERENCE

What can we do as individuals?

As Wendell puts it, making a better future is about inflection points; sometimes a slight nudge now changes the trajectory enough to take us to a wholly different destination.

Start by educating yourself and others on the risks. Share what you learn. Lobby, as a citizen and as a representative of your organization, to repeal Section 230 and introduce stronger accountability mechanisms in law. I’ve listed some of the more prominent AI ethics frameworks below. Cherry-pick the best and most relevant parts and embed them in the rules and routines of your organization or department. And if your organization hasn’t already created a senior role 100% dedicated to AI, lobby for one, ASAP, because no matter how big you imagine AI is going to be, what’s coming will be bigger.

 

USEFUL A.I. ETHICS FRAMEWORKS

Below are some useful AI ethics frameworks. These are current as of October 2023 and will doubtless evolve fast, so don’t forget to check for updates and new additions.

OECD Principles on AI

UNESCO Recommendation on the Ethics of AI

World Economic Forum Principles of Ethical AI

European Union Ethics Guidelines for Trustworthy AI

Beijing AI Principles

White House Blueprint for an AI Bill of Rights

 

MORE TO EXPLORE ON THE FUTURE OF A.I.

For more FutureBites on the future of AI, you can listen to my interview with Hod Lipson and ponder “What Does It Mean to Make a Robot Conscious?”, or read my deconstruction of the existential question “Should We Worry About Skynet?”, along with a short essay I wrote on what I consider the biggest and most pernicious long-term threat from AI, and my broad-brush expectations for how the explosive extension of AI capabilities will play out over the next 10 years. You can also listen to my interview with Alex Marcireau on how bio-inspired chips will revolutionize the future of AI.

Or, for an immersive, fully customized, high-energy experience to get ALL your staff innovating for a better future, book me to deliver a mind-bending futurist keynote at your next event!

 