AI as Tyrant

Tim Mack

Not too long ago I wrote about the growing use of AI as a management tool to oversee online customer relations workers and human–robot relations in areas such as warehousing. Since that time, more information has come to light about moving AI decision making up the organizational ladder into human resources, including hiring and firing, and into other arenas, such as combat. While these trends clearly reduce middle management workload, questions have arisen about the functionality and viability of this growing area of AI responsibility.

Human oversight of these kinds of decisions has also declined, reducing review and appeal options for workers. The same is true for outside suppliers to companies such as Amazon, where vendor relationships can be ended without any review of the charges filed against them, largely because of a conviction at top management levels that “machines make decisions more quickly and accurately than people, reducing costs and giving Amazon a competitive advantage.”

One area of worker conflict is Amazon’s Flex delivery service, where AI algorithms focus on meeting schedules and delivery goals without consideration of real-world challenges such as locked delivery areas, packages unavailable for pickup, or vehicle breakdowns. This approach stems from a top management decision that it was “cheaper to trust the algorithms than pay people to investigate mistaken firings, so long as the drivers could be replaced easily,” combined with minimal real-time, in-the-field problem-solving support.

In the case of the Department of Defense, its autonomous weapons policy has been to allow commanders and operators to exercise appropriate levels of human judgment over the use of force, but this guideline has not been updated since November 2012.

The other side of the question is whether a human can or should make every decision about the use of lethal force, especially when hundreds of decisions are required simultaneously. And there is a strong belief among military planners and policy makers that rapid advances in AI capability will, at some point in the future, allow highly reliable landmark and target identification for effective robotic self-control.

But that reliability has yet to materialize, as built-in bias in algorithms can lead to unexpected outcomes. While it is easy to say that AI will be constantly improved over time, the unanswered questions remain: Who will be doing those improvements, the AIs themselves? And for what purposes, cost savings or quality performance? At Amazon, the possibility of human review of AI personnel decisions is reaching the vanishing point, while the possibility of human review of AI drone decisions is moving in that same direction. Neither of these trends is a comforting one.

Accordingly, MIT professor Max Tegmark argues that AI weapons should be “stigmatized and barred, like biological weapons,” based on a belief that they are likely to fall into terrorist hands, with disastrous consequences worldwide.

References

Will Knight, “Trigger Warning,” Wired (July–August 2021).

Timothy C. Mack, “Management by AI,” Foresight Signals blog (March 27, 2021). http://www.aaiforesight.com/blog/management-ai

Paul Scharre, Army of None: Autonomous Weapons and the Future of War, W. W. Norton (2019). https://wwnorton.com/books/Army-of-None/

Spencer Soper, “Fired by a bot at Amazon: ‘It’s you against the machine,’” Bloomberg (June 28, 2021). https://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-t...

About the Author

Timothy C. Mack is managing principal of AAI Foresight Inc. and former president of the World Future Society (2004–14). He may be reached at tcmack333@gmail.com.

Image by Computerizer from Pixabay