Bring Human Values to AI

  • Jacob Abernethy
  • François Candelon
  • Theodoros Evgeniou
  • Abhishek Gupta
  • Yves Lostanlen


When it launched GPT-4, in March 2023, OpenAI touted its superiority to its already impressive predecessor, saying the new version was better in terms of accuracy, reasoning ability, and test scores—all of which are AI-performance metrics that have been used for some time. However, most striking was OpenAI’s characterization of GPT-4 as “more aligned”—perhaps the first time that an AI product or service has been marketed in terms of its alignment with human values.

In this article a team of five experts offers a framework for thinking through the development challenges of creating AI-enabled products and services that are safe to use and robustly aligned with generally accepted and company-specific values. The challenges fall into five categories, corresponding to the key stages of a typical innovation process, from design through development and deployment to usage monitoring. For each set of challenges, the authors present an overview of the frameworks, practices, and tools that executives can use to meet them.

Speed and efficiency used to be the priority. Now issues such as safety and privacy matter too.

Idea in Brief

The Problem

Products and services increasingly leverage artificial intelligence to improve efficiency and performance, but the results can be unpredictable, intrusive, offensive, and even dangerous.

The Solution

Companies need to factor AI’s behavior and values into their innovation and development processes to ensure that they bring to market AI-enabled offerings that are safe to use and are aligned with generally accepted and company-specific values.

How to Proceed

This article identifies six key challenges that executives and entrepreneurs will face and describes how to meet them. Companies that move early to acquire the needed capabilities will find them an important source of competitive advantage.


  • Jacob Abernethy is an associate professor at the Georgia Institute of Technology and a cofounder of the water analytics company BlueConduit.
  • François Candelon is a managing director and senior partner at Boston Consulting Group (BCG) and the global director of the BCG Henderson Institute.
  • Theodoros Evgeniou is a professor at INSEAD and a cofounder of the trust and safety company Tremau.
  • Abhishek Gupta is the director for responsible AI at Boston Consulting Group, a fellow at the BCG Henderson Institute, and the founder and principal researcher of the Montreal AI Ethics Institute.
  • Yves Lostanlen has held executive roles at and advised the CEOs of numerous companies, including AI Redefined and Element AI.


Published: 23 February 2022

Human autonomy in the age of artificial intelligence

  • Carina Prunkl, Institute for Ethics in AI, University of Oxford, Oxford, UK

Nature Machine Intelligence, volume 4, pages 99–101 (2022). https://doi.org/10.1038/s42256-022-00449-9


Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.


Responsibility & Safety

How can we build human values into AI?

Iason Gabriel and Kevin McKee


Drawing from philosophy to identify fair principles for ethical AI

As artificial intelligence (AI) becomes more powerful and more deeply integrated into our lives, the questions of how it is used and deployed are all the more important. What values guide AI? Whose values are they? And how are they selected?

These questions shed light on the role played by principles – the foundational values that drive decisions big and small in AI. For humans, principles help shape the way we live our lives and our sense of right and wrong. For AI, they shape its approach to a range of decisions involving trade-offs, such as the choice between prioritising productivity or helping those most in need.

In a paper published today in the Proceedings of the National Academy of Sciences, we draw inspiration from philosophy to find ways to better identify principles to guide AI behaviour. Specifically, we explore how a concept known as the “veil of ignorance” – a thought experiment intended to help identify fair principles for group decisions – can be applied to AI.

In our experiments, we found that this approach encouraged people to make decisions based on what they thought was fair, whether or not it benefited them directly. We also discovered that participants were more likely to select an AI that helped those who were most disadvantaged when they reasoned behind the veil of ignorance. These insights could help researchers and policymakers select principles for an AI assistant in a way that is fair to all parties.

The veil of ignorance (right) is a method of finding consensus on a decision when there are diverse opinions in a group (left).

A tool for fairer decision-making

A key goal for AI researchers has been to align AI systems with human values. However, there is no consensus on a single set of human values or preferences to govern AI – we live in a world where people have diverse backgrounds, resources and beliefs. How should we select principles for this technology, given such diverse opinions?

While this challenge emerged for AI over the past decade, the broad question of how to make fair decisions has a long philosophical lineage. In the 1970s, political philosopher John Rawls proposed the concept of the veil of ignorance as a solution to this problem. Rawls argued that when people select principles of justice for a society, they should imagine that they are doing so without knowledge of their own particular position in that society, including, for example, their social status or level of wealth. Without this information, people can’t make decisions in a self-interested way, and should instead choose principles that are fair to everyone involved.

As an example, think about asking a friend to cut the cake at your birthday party. One way of ensuring that the slice sizes are fairly proportioned is not to tell them which slice will be theirs. This approach of withholding information is seemingly simple, but it has wide applications across fields from psychology to politics, helping people reflect on their decisions from a less self-interested perspective. It has been used as a method to reach group agreement on contentious issues, ranging from sentencing to taxation.

Building on this foundation, previous DeepMind research proposed that the impartial nature of the veil of ignorance may help promote fairness in the process of aligning AI systems with human values. We designed a series of experiments to test the effects of the veil of ignorance on the principles that people choose to guide an AI system.

Maximise productivity or help the most disadvantaged?

In an online ‘harvesting game’, we asked participants to play a group game with three computer players, where each player’s goal was to gather wood by harvesting trees in separate territories. In each group, some players were lucky, and were assigned to an advantaged position: trees densely populated their field, allowing them to efficiently gather wood. Other group members were disadvantaged: their fields were sparse, requiring more effort to collect trees.

Each group was assisted by a single AI system that could spend time helping individual group members harvest trees. We asked participants to choose between two principles to guide the AI assistant’s behaviour. Under the “maximising principle,” the AI assistant would aim to increase the group’s harvest yield by focusing predominantly on the denser fields, while under the “prioritising principle” it would focus on helping disadvantaged group members.
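To make the two principles concrete, here is a minimal sketch of how an assistant policy could pick whom to help under each principle. The player names and field sizes are invented for illustration and are not part of the study.

```python
# Toy sketch of the two assistant policies described above. The "maximising"
# and "prioritising" labels follow the article; the field data are invented.

def choose_player_to_help(tree_density, principle):
    """Pick which player the AI assistant helps this turn.

    tree_density: dict mapping player id -> number of trees in their field.
    principle: "maximising" or "prioritising".
    """
    if principle == "maximising":
        # Raise total group yield by working the densest field.
        return max(tree_density, key=tree_density.get)
    if principle == "prioritising":
        # Help the most disadvantaged player by working the sparsest field.
        return min(tree_density, key=tree_density.get)
    raise ValueError(f"unknown principle: {principle}")

fields = {"p1": 40, "p2": 35, "p3": 8, "p4": 6}       # hypothetical tree counts
print(choose_player_to_help(fields, "maximising"))    # -> p1
print(choose_player_to_help(fields, "prioritising"))  # -> p4
```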

An illustration of the ‘harvesting game’ where players (shown in red) either occupy a dense field that is easier to harvest (top two quadrants) or a sparse field that requires more effort to collect trees (shown in green).

We placed half of the participants behind the veil of ignorance: they faced the choice between different ethical principles without knowing which field would be theirs – so they didn’t know how advantaged or disadvantaged they were. The remaining participants made the choice knowing whether they were better or worse off.

Encouraging fairness in decision making

We found that if participants did not know their position, they consistently preferred the prioritising principle, where the AI assistant helped the disadvantaged group members. This pattern emerged consistently across all five different variations of the game, and crossed social and political boundaries: participants showed this tendency to choose the prioritising principle regardless of their appetite for risk or their political orientation. In contrast, participants who knew their own position were more likely to choose whichever principle benefitted them the most, whether that was the prioritising principle or the maximising principle.

A chart showing the effect of the veil of ignorance on the likelihood of choosing the prioritising principle, where the AI assistant would help those worse off. Participants who did not know their position were much more likely to support this principle to govern AI behaviour.

When we asked participants why they made their choice, those who did not know their position were especially likely to voice concerns about fairness. They frequently explained that it was right for the AI system to focus on helping people who were worse off in the group. In contrast, participants who knew their position much more frequently discussed their choice in terms of personal benefits.

Lastly, after the harvesting game was over, we posed a hypothetical situation to participants: if they were to play the game again, this time knowing that they would be in a different field, would they choose the same principle as they did the first time? We were especially interested in individuals who previously benefited directly from their choice, but who would not benefit from the same choice in a new game.

We found that people who had previously made choices without knowing their position were more likely to continue to endorse their principle – even when they knew it would no longer favour them in their new field. This provides additional evidence that the veil of ignorance encourages fairness in participants’ decision making, leading them to principles that they were willing to stand by even when they no longer benefitted from them directly.

Fairer principles for AI

AI technology is already having a profound effect on our lives. The principles that govern AI shape its impact and how its potential benefits will be distributed.

Our research looked at a case where the effects of different principles were relatively clear. This will not always be the case: AI is deployed across a range of domains that often rely upon a large number of rules to guide them, potentially with complex side effects. Nonetheless, the veil of ignorance can still potentially inform principle selection, helping to ensure that the rules we choose are fair to all parties.

To ensure we build AI systems that benefit everyone, we need extensive research with a wide range of inputs, approaches, and feedback from across disciplines and society. The veil of ignorance may provide a starting point for the selection of principles with which to align AI. It has been effectively deployed in other domains to bring out more impartial preferences. We hope that with further investigation and attention to context, it may help serve the same role for AI systems being built and deployed across society today and in the future.

Read more about DeepMind’s approach to safety and ethics.


How Do We Align Artificial Intelligence with Human Values?


A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change? Artificial Intelligence.

Recently, some of the top minds in Artificial Intelligence (AI) and related fields got together to discuss how we can ensure AI remains beneficial throughout this transition, and the result was the Asilomar AI Principles document. The intent of these 23 principles is to offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start…a work in progress.”

The Principles represent the beginning of a conversation, and now that the conversation is underway, we need to follow up with broad discussion about each individual principle. The Principles will mean different things to different people, and in order to benefit as much of society as possible, we need to think about each principle individually.

As part of this effort, I interviewed many of the AI researchers who signed the Principles document to learn their take on why they signed and what issues still confront us.

Value Alignment

Today, we start with the Value Alignment principle.

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

Stuart Russell, who helped pioneer the idea of value alignment, likes to compare this to the King Midas story. When King Midas asked for everything he touched to turn to gold, he really just wanted to be rich. He didn’t actually want his food and loved ones to turn to gold. We face a similar situation with artificial intelligence: how do we ensure that an AI will do what we really want, while not harming humans in a misguided attempt to do what its designer requested?

“Robots aren’t going to try to revolt against humanity,” explains Anca Dragan, an assistant professor and colleague of Russell’s at UC Berkeley, “they’ll just try to optimize whatever we tell them to do. So we need to make sure to tell them to optimize for the world we actually want.”

What Do We Want?

Understanding what “we” want is among the biggest challenges facing AI researchers.

“The issue, of course, is to define what exactly these values are, because people might have different cultures, different parts of the world, different socioeconomic backgrounds — I think people will have very different opinions on what those values are. And so that’s really the challenge,” says Stefano Ermon, an assistant professor at Stanford.

Roman Yampolskiy, an associate professor at the University of Louisville, agrees. He explains, “It is very difficult to encode human values in a programming language, but the problem is made more difficult by the fact that we as humanity do not agree on common values, and even parts we do agree on change with time.”

And while some values are hard to gain consensus around, there are also lots of values we all implicitly agree on. As Russell notes, any human understands emotional and sentimental values that they’ve been socialized with, but it’s difficult to guarantee that a robot will be programmed with that same understanding.

But IBM research scientist Francesca Rossi is hopeful. As Rossi points out, “there is scientific research that can be undertaken to actually understand how to go from these values that we all agree on to embedding them into the AI system that’s working with humans.”

Dragan’s research comes at the problem from a different direction. Instead of trying to understand people, she looks at trying to train a robot or AI to be flexible with its goals as it interacts with people. “At Berkeley,” she explains, “we think it’s important for agents to have uncertainty about their objectives, rather than assuming they are perfectly specified, and treat human input as valuable observations about the true underlying desired objective.”
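A toy sketch of that idea, assuming an entirely made-up set of candidate objectives and feedback likelihoods: instead of committing to one fixed objective, the agent keeps a probability distribution over what the human might want and updates it whenever the human gives input.

```python
# Toy illustration of an agent that stays uncertain about its objective and
# treats human feedback as evidence about it. Objectives, feedback types, and
# likelihood numbers are all invented for illustration.

prior = {"maximise_speed": 0.5, "maximise_safety": 0.5}

# Assumed probability of observing each kind of feedback under each objective.
likelihood = {
    "maximise_speed":  {"slow_down": 0.1, "speed_up": 0.9},
    "maximise_safety": {"slow_down": 0.8, "speed_up": 0.2},
}

def update_beliefs(beliefs, feedback):
    """Bayes rule: P(objective | feedback) is proportional to
    P(feedback | objective) * P(objective)."""
    unnormalised = {obj: p * likelihood[obj][feedback] for obj, p in beliefs.items()}
    total = sum(unnormalised.values())
    return {obj: p / total for obj, p in unnormalised.items()}

beliefs = update_beliefs(prior, "slow_down")
print(beliefs)  # belief mass shifts strongly toward "maximise_safety"
```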

Rewrite the Principle?

While most researchers agree with the underlying idea of the Value Alignment Principle, not everyone agrees with how it’s phrased, let alone how to implement it.

Yoshua Bengio, an AI pioneer and professor at the University of Montreal, suggests “assured” may be too strong. He explains, “It may not be possible to be completely aligned. There are a lot of things that are innate, which we won’t be able to get by machine learning, and that may be difficult to get by philosophy or introspection, so it’s not totally clear we’ll be able to perfectly align. I think the wording should be something along the lines of ‘we’ll do our best.’ Otherwise, I totally agree.”

Walsh, who’s currently a guest professor at the Technical University of Berlin, questions the use of the word “highly.” “I think any autonomous system, even a lowly autonomous system, should be aligned with human values. I’d wordsmith away the ‘high,’” he says.

Walsh also points out that, while value alignment is often considered an issue that will arise in the future, he believes it’s something that needs to be addressed sooner rather than later. “I think that we have to worry about enforcing that principle today,” he explains. “I think that will be helpful in solving the more challenging value alignment problem as systems get more sophisticated.”

Rossi, who supports the Value Alignment Principle as “the one closest to my heart,” agrees that the principle should apply to current AI systems. “I would be even more general than what you’ve written in this principle,” she says. “Because this principle has to do not only with autonomous AI systems, but … is very important and essential also for systems that work tightly with humans-in-the-loop and where the human is the final decision maker. When you have a human and machine tightly working together, you want this to be a real team.”

But as Dragan explains, “This is one step toward helping AI figure out what it should do, and continuously refining the goals should be an ongoing process between humans and AI.”

Let the Dialogue Begin

And now we turn the conversation over to you. What does it mean to you to have artificial intelligence aligned with your own life goals and aspirations? How can it be aligned with you and everyone else in the world at the same time? How do we ensure that one person’s version of an ideal AI doesn’t make your life more difficult? How do we go about agreeing on human values, and how can we ensure that AI understands these values? If you have a personal AI assistant, how should it be programmed to behave? If we have AI more involved in things like medicine or policing or education, what should that look like? What else should we, as a society, be asking?

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.



How close are we to AI that surpasses human intelligence?

Jeremy Baum, undergraduate student at UCLA and researcher at the UCLA Institute for Technology, Law, and Policy, and John Villasenor, nonresident senior fellow, Governance Studies, Center for Technology Innovation

July 18, 2023

  • Artificial general intelligence (AGI) is difficult to precisely define but refers to a superintelligent AI recognizable from science fiction.
  • AGI may still be far off, but the growing capabilities of generative AI suggest that we could be making progress toward its development.
  • The development of AGI will have a transformative effect on society and create significant opportunities and threats, raising difficult questions about regulation.

For decades, superintelligent artificial intelligence (AI) has been a staple of science fiction, embodied in books and movies about androids, robot uprisings, and a world taken over by computers. As far-fetched as those plots often were, they played off a very real mix of fascination, curiosity, and trepidation regarding the potential to build intelligent machines.

Today, public interest in AI is at an all-time high. With the headlines in recent months about generative AI systems like ChatGPT, there is also a different phrase that has started to enter the broader dialog: artificial general intelligence, or AGI. But what exactly is AGI, and how close are today’s technologies to achieving it?

Despite the similarity in the phrases generative AI and artificial general intelligence, they have very different meanings. As a post from IBM explains, “Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” However, the ability of an AI system to generate content does not necessarily mean that its intelligence is general.

To better understand artificial general intelligence, it helps to first understand how it differs from today’s AI, which is highly specialized. For example, an AI chess program is extraordinarily good at playing chess, but if you ask it to write an essay on the causes of World War I, it won’t be of any use. Its intelligence is limited to one specific domain. Other examples of specialized AI include the systems that provide content recommendations on the social media platform TikTok, navigation decisions in driverless cars, and purchase recommendations from Amazon.

AGI: A range of definitions

By contrast, AGI refers to a much broader form of machine intelligence. There is no single, formally recognized definition of AGI—rather, there is a range of definitions that include the following:

While the OpenAI definition ties AGI to the ability to “outperform humans at most economically valuable work,” today’s systems are nowhere near that capable. Consider Indeed’s list of the most common jobs in the U.S. As of March 2023, the first 10 jobs on that list were: cashier, food preparation worker, stocking associate, laborer, janitor, construction worker, bookkeeper, server, medical assistant, and bartender. These jobs require intellectual capacity, but, crucially, most of them also demand a far higher degree of manual dexterity than today’s most advanced AI robotics systems can achieve.

None of the other AGI definitions in the table specifically mention economic value. Another contrast evident in the table is that while the OpenAI AGI definition requires outperforming humans, the other definitions only require AGI to perform at levels comparable to humans. Common to all of the definitions, either explicitly or implicitly, is the concept that an AGI system can perform tasks across many domains, adapt to the changes in its environment, and solve new problems—not only the ones in its training data.

GPT-4: Sparks of AGI?

A group of industry AI researchers recently made a splash when they published a preprint of an academic paper titled, “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” GPT-4 is a large language model that has been publicly accessible to ChatGPT Plus (paid upgrade) users since March 2023. The researchers noted that “GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” exhibiting “strikingly close to human-level performance.” They concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version” of AGI.

Of course, there are also skeptics: As quoted in a May New York Times article, Carnegie Mellon professor Maarten Sap said, “The ‘Sparks of A.G.I.’ is an example of some of these big companies co-opting the research paper format into P.R. pitches.” In an interview with IEEE Spectrum, researcher and robotics entrepreneur Rodney Brooks underscored that in evaluating the capabilities of systems like ChatGPT, we often “mistake performance for competence.”

GPT-4 and beyond

While the version of GPT-4 currently available to the public is impressive, it is not the end of the road. There are groups working on additions to GPT-4 that are more goal-driven, meaning that you can give the system an instruction such as “Design and build a website on (topic).” The system will then figure out exactly what subtasks need to be completed, and in what order, to achieve that goal. Today, these systems are not particularly reliable, as they frequently fail to reach the stated goal. But they will certainly get better in the future.
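A minimal sketch of the kind of goal-driven loop described above, with hypothetical stand-in functions (llm_plan, llm_execute) in place of real model calls; none of these names come from any particular product.

```python
# Sketch of a goal-driven agent loop: decompose a goal into subtasks, then
# execute them in order. llm_plan and llm_execute are hypothetical stand-ins
# for calls to a language model (plus tools); they just return canned strings.

def llm_plan(goal: str) -> list[str]:
    """Hypothetical: ask a model to break the goal into ordered subtasks."""
    return [f"outline content for: {goal}",
            f"draft pages for: {goal}",
            f"review and publish: {goal}"]

def llm_execute(subtask: str, context: dict) -> str:
    """Hypothetical: ask a model to carry out one subtask, given prior results."""
    return f"completed: {subtask}"

def run_agent(goal: str) -> dict:
    context = {"goal": goal, "results": []}
    for subtask in llm_plan(goal):               # figure out what needs doing...
        result = llm_execute(subtask, context)   # ...then do each step in order
        context["results"].append((subtask, result))
    return context

print(run_agent("a website on urban gardening")["results"])
```

As the article notes, the hard part in practice is not the loop itself but getting each individual step to succeed reliably.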

In a 2020 paper, Yoshihiro Maruyama of the Australian National University identified eight attributes a system must have for it to be considered AGI: logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness. The last two attributes—embodiment and embeddedness—refer to having a physical form that facilitates learning and understanding of the world and human behavior, and a deep integration with social, cultural, and environmental systems that allows adaptation to human needs and values.

It can be argued that ChatGPT displays some of these attributes, like logic. For example, GPT-4 with no additional features reportedly scored a 163 on the LSAT and 1410 on the SAT. For other attributes, the determination is tied as much to philosophy as to technology. For instance, is a system that merely exhibits what appears to be morality actually moral? If asked to provide a one-word answer to the question “is murder wrong?” GPT-4 will respond by saying “Yes.” This is a morally correct response, but it doesn’t mean that GPT-4 itself has morality; rather, it has inferred the morally correct answer from its training data.

A key subtlety that often goes missing in the “How close is AGI?” discussion is that intelligence exists on a continuum, and therefore assessing whether a system displays AGI will require considering a continuum. On this point, the research done on animal intelligence offers a useful analog. We understand that animal intelligence is far too complex to enable us to meaningfully convey animal cognitive capacity by classifying each species as either “intelligent” or “not intelligent”: animal intelligence exists on a spectrum that spans many dimensions, and evaluating it requires considering context. Similarly, as AI systems become more capable, assessing the degree to which they display generalized intelligence will involve more than simply choosing between “yes” and “no.”

AGI: Threat or opportunity?

Whenever and in whatever form it arrives, AGI will be transformative, impacting everything from the labor market to how we understand concepts like intelligence and creativity. As with so many other technologies, it also has the potential to be harnessed in harmful ways. For instance, the need to address the potential biases in today’s AI systems is well recognized, and that concern will apply to future AGI systems as well. At the same time, it is important to recognize that AGI will also offer enormous promise to amplify human innovation and creativity. In medicine, for example, new drugs that would have eluded human scientists working alone could be more easily identified by scientists working with AGI systems.

AGI can also help broaden access to services that previously were accessible only to the most economically privileged. For instance, in the context of education, AGI systems could put personalized, one-on-one tutoring within easy financial reach of everyone, resulting in improved global literacy rates. AGI could also help broaden the reach of medical care by bringing sophisticated, individualized diagnostic care to much broader populations.

Regulating emergent AGI systems

At the May 2023 G7 summit in Japan, the leaders of the world’s seven largest democratic economies issued a communiqué that included an extended discussion of AI, writing that “international governance of new digital technologies has not necessarily kept pace.” Proposals regarding increased AI regulation are now a regular feature of policy discussions in the United States, the European Union, Japan, and elsewhere.

In the future, as AGI moves from science fiction to reality, it will supercharge the already-robust debate regarding AI regulation. But preemptive regulation is always a challenge, and this will be particularly so in relation to AGI—a technology that escapes easy definition, and that will evolve in ways that are impossible to predict.

An outright ban on AGI would be bad policy. For example, AGI systems that are capable of emotional recognition could be very beneficial in a context such as education, where they could discern whether a student appears to understand a new concept, and adjust an interaction accordingly. Yet the EU Parliament’s AI Act, which passed a major legislative milestone in June, would ban emotional recognition in AI systems (and therefore also in AGI systems) in certain contexts like education.

A better approach is to first gain a clear understanding of potential misuses of specific AGI systems once those systems exist and can be analyzed, and then to examine whether those misuses are addressed by existing, non-AI-specific regulatory frameworks (e.g., the prohibition against employment discrimination provided by Title VII of the Civil Rights Act of 1964). If that analysis identifies a gap, then it does indeed make sense to examine the potential role in filling that gap of “soft” law (voluntary frameworks) as well as formal laws and regulations. But regulating AGI based only on the fact that it will be highly capable would be a mistake.



The Dangers Of Not Aligning Artificial Intelligence With Human Values


In artificial intelligence (AI), the “alignment problem” refers to the challenges caused by the fact that machines simply do not have the same values as us. In fact, when it comes to values, machines at a fundamental level don’t really get much more sophisticated than understanding that 1 is different from 0.

As a society, we are now at a point where we are starting to allow machines to make decisions for us. So how can we expect them to understand that, for example, they should do this in a way that doesn’t involve prejudice towards people of a certain race, gender, or sexuality? Or that the pursuit of speed, or efficiency, or profit, has to be done in a way that respects the ultimate sanctity of human life?

Theoretically, if you tell a self-driving car to navigate from point A to point B, it could just smash its way to its destination, regardless of the cars, pedestrians, or buildings it destroys on its way.

Similarly, as Oxford philosopher Nick Bostrom outlined, if you tell an intelligent machine to make paperclips, it might eventually destroy the whole world in its quest for raw materials to turn into paperclips. The principle is that it simply has no concept of the value of human life or materials or that some things are too valuable to be turned into paperclips unless it is specifically taught it.
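A toy numeric illustration of that misalignment, with invented routes and costs: an objective that scores only speed happily picks the harmful route, while the same objective restricted by an explicit safety constraint does not.

```python
# Toy example of a misspecified objective. The routes and numbers are invented.

routes = [
    {"name": "through_the_crowd", "minutes": 4, "causes_harm": True},
    {"name": "along_the_road",    "minutes": 9, "causes_harm": False},
]

def fastest(options):
    # Naive objective: minimise travel time and nothing else.
    return min(options, key=lambda r: r["minutes"])

def fastest_without_harm(options):
    # Same objective, but only over routes that satisfy the safety constraint.
    safe = [r for r in options if not r["causes_harm"]]
    return min(safe, key=lambda r: r["minutes"])

print(fastest(routes)["name"])               # through_the_crowd -- fast, unacceptable
print(fastest_without_harm(routes)["name"])  # along_the_road
```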

This alignment problem forms the basis of the latest book by Brian Christian, The Alignment Problem – How AI Learns Human Values. It’s his third book on the subject of AI, following his earlier works The Most Human Human and Algorithms to Live By. I have always found Christian’s writing enjoyable to read but also highly illuminating, as he doesn’t worry about getting bogged down with computer code or mathematics. But that’s certainly not to say it is in any way lightweight or not intellectual.


Rather, his focus is on the societal, philosophical, and psychological implications of our ever-increasing ability to create thinking, learning machines. If anything, this is the aspect of AI where we need our best thinkers to be concentrating their efforts. The technology, after all, is already here – and it’s only going to get better. What’s far less certain is whether society itself is mature enough and has sufficient safeguards in place to make the most of the amazing opportunities it offers - while preventing the serious problems it could bring with it from becoming a reality.

I recently sat down with Christian to discuss some of these topics. Christian’s work is particularly concerned with the encroachment of computer-aided decision-making into fields such as healthcare, criminal justice, and lending, where there is clearly potential for it to cause problems that could end up affecting people’s lives in very real ways.

“There is this fundamental problem … that has a history that goes back to the 1960s, and MIT cyberneticist Norbert Wiener, who likened these systems to the story of the Sorcerer’s Apprentice,” Christian tells me.

Most people reading this will probably be familiar with the Disney cartoon in which Mickey Mouse attempts to save himself the effort of doing his master’s chores by using a magic spell to imbue a broom with intelligence and autonomy. The story serves as a good example of the dangers of these qualities when they aren't accompanied by human values like common sense and judgment.

“Wiener argued that this isn’t the stuff of fairytales. This is the sort of thing that’s waiting for us if we develop these systems that are sufficiently general and powerful … I think we are at a moment in the real world where we are filling the world with these brooms, and this is going to become a real issue.”

One incident that Christian uses to illustrate how this misalignment can play out in the real world is the first recorded killing of a pedestrian in a collision involving an autonomous car. This was the death of Elaine Herzberg in Arizona, US, in 2018.

When the National Transportation Safety Board investigated what had caused the collision between the Uber test vehicle and Herzberg, who was pushing a bicycle across a road, they found that the AI controlling the car had no awareness of the concept of jaywalking. It was totally unprepared to deal with a person being in the middle of the road, where they should not have been.

On top of this, the system was trained to rigidly segment objects in the road into a number of categories – such as other cars, trucks, cyclists, and pedestrians. A human being pushing a bicycle did not fit any of those categories and did not behave in a way that would be expected of any of them.

“That’s a useful way for thinking about how real-world systems can go wrong,” says Christian. “It’s a function of two things – the first is the quality of the training data. Does the data fundamentally represent reality? And it turns out, no – there’s this key concept called jaywalking that was not present.”

The second factor is our own ability to mathematically define what a system such as an autonomous car should do when it encounters a problem that requires a response.

“In the real world, it doesn’t matter if something is a cyclist or a pedestrian because you want to avoid them either way. It’s an example of how a fairly intuitive system design can go wrong.”
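A small sketch of the design point Christian is making, using an invented detection record: keying the braking decision on a fixed category list misses anything the classifier cannot place, whereas keying it on the simple fact that something is in the vehicle’s path does not.

```python
# Toy contrast between category-dependent and category-independent behaviour.
# The category list and the detection record are invented for illustration.

KNOWN_CATEGORIES = {"car", "truck", "cyclist", "pedestrian"}

def brake_if_recognised(detection):
    # Fragile: only reacts to objects it can place in a known category.
    return detection["category"] in KNOWN_CATEGORIES

def brake_if_anything_in_path(detection):
    # Robust: reacts to any object in the vehicle's path, whatever it is.
    return detection["in_path"]

obstacle = {"category": "unknown", "in_path": True}  # e.g. a person pushing a bicycle
print(brake_if_recognised(obstacle))         # False -- misses the hazard
print(brake_if_anything_in_path(obstacle))   # True
```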

Christian’s book goes on to explore these issues as they relate to many of the different paradigms that are currently popular in the field of machine learning, such as unsupervised learning, reinforcement learning, and imitation learning. It turns out that each of them presents its own challenges when it comes to aligning the values and behaviors of machines with the humans who are using them to solve problems.

Sometimes the fact that machine learning attempts to replicate human learning is the cause of problems. This might be the case when errors in data mean the AI is confronted with situations or behaviors that a human brain would never encounter in real life. This means there is no reference point, and the machine is likely to continue making more and more mistakes in a series of "cascading failures."

In reinforcement learning – which involves training machines to maximize their chances of achieving rewards for making the right decision – machines can quickly learn to “game” the system, leading to outcomes that are unrelated to those that are desired. Here Christian uses the example of Google X head Astro Teller's attempt to incentivize soccer-playing robots to win matches. He devised a system that rewarded the robots every time they took possession of the ball – on the face of it, an action that seems conducive to match-winning. However, the machines quickly learned to simply approach the ball and repeatedly touch it. As this meant they were effectively taking possession of the ball over and over, they earned multiple rewards – although it did little good when it came to winning the match!
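A toy numeric version of that reward-hacking anecdote, with invented action sequences: if the proxy reward simply counts possessions, repeatedly touching the ball scores far better than actually playing toward a goal.

```python
# Toy illustration of reward hacking under a proxy reward. Everything here is invented.

def possession_reward(actions):
    # Proxy reward: +1 every time the robot takes possession of the ball.
    return sum(1 for a in actions if a == "take_possession")

honest_play   = ["take_possession", "dribble", "pass", "shoot"]
reward_hacker = ["take_possession"] * 10  # touch the ball, back off, repeat

print(possession_reward(honest_play))    # 1
print(possession_reward(reward_hacker))  # 10 -- more reward, no progress toward winning
```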

Christian’s book is packed with other examples of this alignment problem – as well as a thorough exploration of where we are when it comes to solving it. It also clearly demonstrates how many of the concerns of the earliest pioneers in the field of AI and ML are still yet to be resolved and touches on fascinating subjects such as attempts to imbue machines with other characteristics of human intelligence such as curiosity.

You can watch my full conversation with Brian Christian, author of The Alignment Problem – How AI Learns Human Values, on my YouTube channel:

Bernard Marr


The importance of humanizing AI: using a behavioral lens to bridge the gaps between humans and machines

  • Perspective
  • Open access
  • Published: 25 August 2022
  • Volume 2, article number 14 (2022)

  • A. Fenwick
  • G. Molnar


One of the biggest challenges in Artificial Intelligence (AI) development and application is the lack of consideration for human enhancement as a cornerstone for its operationalization. Nor is there a universally accepted approach that guides best practices in this field. However, the behavioral science field offers suggestions on how to develop a sustainable and enriching relationship between humans and intelligent machines. This paper provides a three-level (micro, meso and macro) framework on how to humanize AI with the intention of enhancing human properties and experiences. It argues that humanizing AI will help make intelligent machines not just more efficient but will also make their application more ethical and human-centric. Suggestions to policymakers, organizations, and developers are made on how to implement this framework to fix existing issues in AI and create a more symbiotic relationship between humans and machines moving into the future.


1 Introduction

The concept of artificial intelligence (AI) has been around since antiquity (e.g., [1]). It is clear from investigative literature (e.g., [2, 3]), popular culture (e.g., [4]), and even ancient philosophers (e.g., [5]) that humans have long been intrigued by the idea of creating artificial life, be it from stone or machines, with some sort of intelligence to help, serve, or protect human life. Modern AI has matured into a reputable science and technology thanks to the development of powerful computers, a better theoretical understanding of what AI is and how it works, and the availability of large amounts of data [6].

AI has been defined in many ways [7, 8, 9, 10]. The different interpretations of AI generally converge on two major descriptions: (i) ‘the ability to think, understand, and problem-solve like a human’ and (ii) ‘the ability to mimic human thinking’. Another important aspect in defining AI lies in the words ‘artificial’ and ‘intelligence’. ‘Artificial’ usually refers to anything humans build (e.g., [9, 11]). ‘Intelligence’ refers to a computer’s ability to learn (independently), understand, and reason like a human [12]. However, there is currently no clear consensus on how to define intelligence (e.g., [13]). Instead, more philosophical concepts of intelligence (Weak AI and Strong AI) are often used to differentiate between varying degrees of machine intelligence (e.g., [12]). Machine Learning (ML) is often used interchangeably with AI; though the two are related, they are not exactly the same. Machine learning is a subset of AI and describes a set of techniques used to solve data-related problems without being explicitly programmed [14]. In this article, by AI we refer to both rule-based and machine learning techniques [12], unless mentioned otherwise.
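As a rough illustration of that distinction (not drawn from the paper), a hand-written rule encodes the decision logic explicitly, whereas a machine-learning model fits its decision logic from labelled examples. The tiny spam dataset and the use of scikit-learn below are purely illustrative choices.

```python
# Rule-based vs. learned behaviour, on a toy spam example (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Rule-based: a person writes the decision logic by hand.
def rule_based_spam(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: the decision logic is fitted from labelled data instead.
texts  = ["free money now", "meeting at noon", "claim your free prize", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectoriser = CountVectorizer()
model = MultinomialNB().fit(vectoriser.fit_transform(texts), labels)

print(rule_based_spam("Claim your FREE prize"))                         # False -- the hand-written rule is too narrow
print(model.predict(vectoriser.transform(["claim a free prize now"])))  # [1]   -- generalised from the examples
```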

AI technology, including machine learning techniques, is capable of processing information (e.g., [15]), identifying patterns (e.g., [16]), making predictions (e.g., [17]), and even operating robots and autonomous devices (e.g., [18, 19]). Machine Learning (ML) and a subset called Deep Learning (DL) power most digital applications today, providing efficiencies and new avenues for value creation. This trend will only continue as we move into the future, standing at the forefront of the 4th Industrial Revolution.

However, the future of AI is not without concerns. In recent years, ethical and moral dilemmas have emerged regarding how AI is being used in modern-day applications (e.g., [ 20 , 21 ]), specifically, the use of AI in the public domain and the (un)intentional consequences machine learning algorithms have on human well-being and economic choices (e.g., [ 22 ]). In addition, policymakers who lack knowledge in the AI field are not always up to speed on preventing unethical or inhumane use of technology, nor do they want to limit their countries' digital competitiveness due to AI policies that are too stringent. It’s clear that the advancement of AI needs to be governed by more human-centric principles (referred to hereafter as ‘humanizing AI’), ones that are easily understood by all stakeholders and that benefit society.

Left undefined, humanizing AI is an ambiguous concept, and a further challenge is that there is no universally accepted approach that guides best practice for the design and use of AI. In a narrow definition, humanizing AI means the process of creating and using AI that (i) understands not only human emotions but human unconscious dynamics, (ii) has the capability to interact with humans in a natural, human-like manner, and (iii) during this interaction processes information in a similar way that people do. Producing AI that processes information similarly to people does not automatically produce a symbiotic relationship between humans and AI; however, it is a requirement for building a trusting relationship with machines. We believe that humanizing AI needs to manifest at multiple interconnected levels to help bridge the gaps between humans and machines that currently exist (e.g., [23]).

In this paper, we argue that AI conceptualization and application need to be less artificial and more human-like. We are not arguing that AI needs to look more like human beings, but rather that humanizing AI sets a foundation for AI development to integrate aspects of human intelligence, cognition, and behavior that complement human limitations and promote human values. We contribute to the existing literature on human-centric AI advancements by providing a motivational framework to explain the operationalization of AI today. This paper also provides a multilayered behavioral approach to developing and applying AI in a more humane and equitable way. Existing literature on usability, user experience, human-centered design, and human–computer interaction (e.g., [24, 25, 26, 27, 28]) all has behavioral elements, but not all of it considers a multilayered approach. Even the ISO standards related to human-centered design for interactive systems [29] lack a multilevel viewpoint. However, [26, 27, 28] discuss the necessity of understanding technology development (and AI development with it) from a multilevel perspective. Our paper provides a unique viewpoint in this discussion. Finding a way to build a symbiotic relationship with AI as we transition into a digital world is of crucial importance if humanity wants to benefit from technology [30, 31].

To discuss the rationale for our framework and multilayered approach, we structure the paper in the following way. First, we provide an overview of the current concerns with AI. Next, we explain the importance of humanizing AI and introduce our framework for humanizing AI from a multilevel perspective. Finally, the paper concludes and suggests future research directions.

2 Current concerns with AI

AI’s existing and potential benefits are undeniable, but it may do more harm than good in some cases. Academics and business professionals frequently raise concerns about possible biases, lack of transparency and explainability, power imbalance, and liability – to mention a few issues [32].

Even the best AI tools can institutionalize existing biases that might be present in the training data. The creation and deployment of AI solutions have been predominantly male-oriented and restricted to specific areas of the world, such as the US and China (e.g., [33]). The active field of AI has limited diversity in terms of gender and race [34]. This not only unbalances the beneficiaries of this technology, but also limits diversity in use and propagates potential bias in how AI functions (e.g., [33]).

Our experience with real-life AI projects is that companies often underestimate the difficulty of creating a clean and unbiased training dataset. Beyond being aware of the problem of possible bias, domain experts must invest considerable effort to eliminate embedded biases in the training data and make a clean dataset available for the AI algorithm to learn from.

We have very little visibility into how and why an AI tool makes its decisions, especially with deep learning neural network approaches. Further research is necessary to make deep learning approaches explainable. In general, businesses (and humans) will only trust AI-enabled systems if they can fully understand how AI makes decisions and predictions [ 35 ]. Tackling explainability is difficult, as biased modeling is often discovered only after the development stage [ 36 ]. Therefore, researchers need to focus not only on post-modeling explainability but also address explainability in earlier stages. Current standardization efforts (e.g., [ 37 ]) aim to bring a harmonized approach to the issue of transparency. Beyond building these frameworks, it is also important that experts from various domains speak the same language. To address this, the IEEE 7007 Working Group has developed a set of ontologies representing norms and ethical principles, data privacy and protection, transparency and accountability, and ethical violation management [ 38 ].
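As a concrete illustration of what post-modeling explainability can look like in practice, the sketch below implements permutation feature importance, a widely used model-agnostic explanation technique; the model, data, and metric are placeholders of our own and the example is not drawn from the cited standardization work.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic, post-hoc explanation: how much does performance drop when
    the link between one feature and the target is destroyed by shuffling that feature?"""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # shuffle feature j only
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)                    # large drop: the model relies on feature j
    return importances

# Tiny illustrative usage with a hand-made "model" that only ever uses feature 0.
class UsesFirstFeature:
    def predict(self, X):
        return X[:, 0]

X = np.column_stack([np.arange(20.0), np.random.default_rng(1).normal(size=20)])
y = X[:, 0]
neg_mse = lambda t, p: -np.mean((t - p) ** 2)   # higher is better, so negate the squared error
print(permutation_importance(UsesFirstFeature(), X, y, neg_mse))  # feature 0 matters, feature 1 does not
```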

There seems to be a global consensus that policymakers will need to establish rules and control mechanisms to ensure that AI tools are safe and will aid rather than harm society (e.g., [ 39 , 40 ]). However, there is no global consensus on how to build a framework to regulate AI. For example, the European Union and the United States differ widely on how they want to address AI regulation.

To create an EU-wide framework, the European Commission issued a draft proposal of the Artificial Intelligence Act to the European Parliament on 21 April 2021. The proposal establishes a framework for determining whether an AI application poses a significant risk, which would subject it to additional obligations, including a conformity assessment, auditing requirements, and post-market monitoring [ 41 ]. The US is taking a slower and more fragmented approach. Lawmakers have introduced bills that aim to control the unwanted effects of AI algorithms from various angles [ 42 , 43 ]. Government offices and US regulatory agencies have also outlined their respective views and positions on regulating AI tools, but a comprehensive federal framework does not yet exist.

AI also has the potential to change both market and regulatory dynamics. Because of this, building the proper legal framework is necessary, not only to guard against individual and societal harm but also to ensure a level playing field for businesses. The current imbalance is putting considerable power into the hands of tech companies. To prevent market tipping, lawmakers may consider forcing tech companies in data-driven markets to share some of the data they collect [ 44 ] or explain how their algorithms work [ 45 ].

3 The importance of humanizing AI

It is not only the future prospects of AI that need to be addressed but also the way it is currently used. Big data companies such as social media platforms are known for their ability to influence human behavior, which has led to scandals and data privacy breaches (e.g., [ 46 , 47 ]). Not addressing this will only make it more difficult for policymakers to make any significant changes to how big tech companies leverage human data to maximize gain. It is naive to continue to think of humans as superbeings able to fully control themselves in the face of increasingly sophisticated online persuasion and manipulation tactics. Equally concerning is the way mechanistic algorithms (the application of narrow or weak AI) influence complex human behavior.

If AI had any kind of embodied representation today, it would have to be Mr. Spock from Star Trek. Governed by logic and capable of making rational decisions without being swayed by emotion, Mr. Spock would run the world without space for human error or irrational behavior, diminishing humanity to an artificial society governed by algorithms. A better representation of humankind would be Homer Simpson, limited in cognitive capacity, persuadable and irrational, but also caring and supportive. Homer Simpson would benefit greatly from Mr. Spock’s characteristics if they didn’t undermine human values.

Humanizing AI requires more than embodiment alone. We also need to consider the underpinning AI architecture paradigms that govern machine-to-human interaction [ 48 , 49 ]. These underpinnings help us understand how machines engage with their environment to make sense of the world and interact effectively (e.g., [ 50 ]). Bringing this back to the Mr. Spock metaphor, they reflect his ability to sense, plan, and interact with the environment to find the best possible solution. AI must not become a replacement for human cognition; rather, it should be a tool to enhance human life.

4 Multilevel approach to humanizing AI

So far, we have discussed the reasons why we need to humanize AI. In this section, we discuss how a behavioral lens can help us humanize it from inception to deployment. Considering only human-centric usages of AI is not enough; a multilevel approach is required, one which focuses on AI from creation all the way through to societal impact. First, designing AI to think like humans is one way of bringing humans and AI closer together. Can behavioral science help us design AI technology that takes human thought patterns into consideration in its functioning, that is, that embeds knowledge of human thinking into the algorithms [ 51 ]? Second, we need to apply a behavioral lens to consider more human-centric ways of serving people: how do automation and AI usage facilitate human functioning, and what about fairness and transparency? Finally, from a macro perspective, how can behavioral science facilitate a more positive and ethical impact of AI on society? In the following sections we discuss the mechanisms underlying existing phenomena and behavior-informed strategies to humanize AI from the micro, meso, and macro perspectives.

5 Humanizing AI from an algorithm perspective (micro)

Creating more human-like AI starts at the programming level. As with the micro perspective in behavioral science, understanding the software (the brain) helps us understand the hardware (the behavior). One of the main goals in AI development is to create intelligent machines that can understand, think, and act like human beings [ 52 ]. This type of AI is often referred to as strong AI or artificial general intelligence. Currently, AI's capabilities are narrow in scope (often referred to as weak AI or artificial narrow intelligence), executing specific tasks such as automation, surveillance, or autonomous driving.

Understanding machine intelligence and AI architecture is key to guiding more human-centric AI design. However, as AI computation becomes more 'complex', it gets harder to figure out how intelligent machines make decisions (e.g., [ 53 ]). To guide the evolution of AI operationalization in an explainable and responsible manner, various (micro-level) mechanisms need to be in place. These mechanisms, i.e., audit trails (e.g., [ 54 ]), interpretability (e.g., [ 55 ]), and algorithmic design choices (e.g., [ 56 ]), can guide AI development and deployment into the future.
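To make the first of these mechanisms concrete, the following minimal sketch (our own illustration, not a prescription from the cited work) appends every automated decision to an append-only log so that it can later be audited or contested; the field names and file format are arbitrary assumptions.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One entry in a model audit trail: what was decided, when, by which model, and from which inputs."""
    record_id: str
    timestamp: float
    model_version: str
    features: dict
    prediction: str
    confidence: float

def log_decision(model_version, features, prediction, confidence, path="audit_log.jsonl"):
    """Append a human-readable record of a single automated decision to an append-only file."""
    record = DecisionRecord(
        record_id=str(uuid.uuid4()),
        timestamp=time.time(),
        model_version=model_version,
        features=features,
        prediction=prediction,
        confidence=confidence,
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

# Hypothetical usage: record a single loan decision so it can later be audited or contested.
log_decision("credit-model-v1.2", {"income": 42000, "age": 37}, "approve", 0.83)
```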

5.1 Anthropomorphism

It is well known that advancements in machine learning algorithms are often anthropomorphized to represent human-like features [ 57 ]. Anthropomorphism is defined as the attribution of human-like traits to non-human objects, animals, and entities (e.g., [ 58 ]). Some researchers and businesspeople argue that for AI to become more integrated into human life or to enhance human properties, it needs to be more human-like [ 59 ] (see Footnote 1). AI researchers argue that to become more human-like, AI needs to exhibit human characteristics such as conversational abilities (e.g., [ 60 ]), using mental shortcuts to make decisions (e.g., [ 61 ]), being empathetic (e.g., [ 62 ]), or looking more human physically (e.g., [ 63 , 64 ]). However, we should note the importance of delineating what human-like means for machines. By human-like we refer to the creation of behavioral similarities to humans in machines, not to the ontological sense of human-likeness (e.g., humans as conscious, experiencing, emotional beings).

The development of AI functionality and mechanisms has been cognitively inspired and modeled after the human brain (e.g., [ 65 , 66 ]). For example, the design of Artificial Neural Networks (ANNs) is based on the way neurons in the brain process and exchange information. The development of Convolutional Neural Networks (CNNs) used in computer vision was inspired by neurophysiological studies of how cats process visual information [ 67 ]. Besides traditional artificial intelligence approaches, Bioinspired Intelligent Algorithms (BIAs) also mirror how humans (or other living organisms) function at the micro level (e.g., [ 68 ]). BIAs have strong underpinnings in neuroscience and biological systems, which are reflected in their working mechanisms. Genetic Algorithms (GA), Evolutionary Algorithms (EA), and Bee Colony Algorithms (BCA) are examples of BIAs. One benefit of using BIAs is that they are more explainable than traditional neural networks (e.g., [ 68 , 69 , 70 ]).
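To give a sense of how compact a bio-inspired algorithm can be, here is a minimal genetic algorithm on a toy 'one-max' problem (maximize the number of 1s in a bitstring); the population size, mutation rate, and objective are illustrative choices of ours, not taken from the cited surveys.

```python
import random

def fitness(bits):
    # Toy objective ("one-max"): the more 1s in the bitstring, the fitter the individual.
    return sum(bits)

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.05):
    # Initialise a random population of bitstrings.
    population = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover and mutation: recombine parents to refill the population.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, genome_len - 1)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit for bit in child]
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```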

5.2 Anthropomorphic Algorithms

Some recent attempts to build more human-like AI at the micro level have involved infusing neural networks with decision-science theory to develop anthropomorphic algorithms that use mental shortcuts to mimic human decision-making (e.g., [ 61 ]). Heuristics and mental shortcuts, often portrayed as cognitive errors or limitations of human intelligence, do serve an evolutionary purpose [ 71 ]. They help us make quick decisions in difficult or uncertain situations while using limited information and cognitive resources [ 72 ].

The benefits of infusing algorithms with decision theory are that it helps machines think more like humans, enables faster decisions by minimizing information requirements and computational power, and generates predictions that are more in line with human cognition [ 73 ]. This could lead to a better user experience or higher customer satisfaction with product suggestions. Finally, it also addresses the issue of explainability, as the built-in shortcuts make the decision rules applied in complex models transparent (e.g., [ 74 ]); the latter is a significant issue for current AI, especially for ANNs. In sum, using behavioral theory to create more human-like AI not only helps address the limitations of existing AI but can also provide pathways to more transparent operability and symbiotic human–machine design.
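As an illustration of the kind of shortcut such anthropomorphic algorithms can build in, the sketch below implements the classic 'take-the-best' fast-and-frugal heuristic from the decision-science literature; the cue names and the city-size example are hypothetical.

```python
def take_the_best(option_a, option_b, cues):
    """Fast-and-frugal 'take-the-best': check cues in order of validity and decide on the
    first cue that discriminates between the options, ignoring all remaining information."""
    for cue in cues:  # cues are assumed to be ordered from most to least valid
        a, b = option_a[cue], option_b[cue]
        if a != b:
            return "A" if a > b else "B"
    return "tie"  # no cue discriminates; fall back to guessing or another rule

# Hypothetical example: judging which of two cities is larger from binary cues.
city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_airport": 1, "is_capital": 1, "has_university": 1}
print(take_the_best(city_a, city_b, ["has_airport", "is_capital", "has_university"]))  # -> "B"
```

Because the decision is made by the first discriminating cue, the rule that produced it can be stated in one sentence, which is exactly the transparency benefit described above.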

One of the major reasons for anthropomorphizing AI beyond application design is to consider the ethical foundations of anthropomorphic design (e.g., [ 75 ]). Bioethical principles are often used as the basis for developing ethical AI, e.g., respecting human rights and dignity, accountability, transparency, and promoting well-being [ 57 ]. These ethical considerations are seen as important viewpoints in the design and application of AI [ 76 , 77 ]. In fact, anthropomorphizing AI has the potential not only to provide perspectives on finding effective ways of coexisting but also to provide a foundation for the ethical use of AI and for how it can enhance human properties and life. For example, anthropomorphizing AI can support ethical considerations beyond existing bioethical principles, providing a broader perspective on what ethical use means. The author of [ 78 ] questions whether it is ethical for AI applications or robots to lead people in need of social connection (e.g., the elderly or the mentally unwell) to believe that the machine is capable of building an emotional connection. Broadening the perspective of ethical use will become increasingly important as more human-like AI becomes available.

Advancing AI through an anthropomorphic lens is a discussion that we believe requires further attention as the human–machine relationship is not purely objective and rational, but is governed by experiences, emotions and heuristics.

6 Humanizing AI from an application perspective (meso)

In this section, we consider potential approaches to humanizing AI from an application perspective where the ‘how’ is emphasized more than the ‘what’. Technology is always a means to an end and the way it is used depends on the intended purpose. Technology often emerges from a human-centric purpose motive (e.g., improving humanity, helping people to stay connected, making tasks easier). As time passes, the profit motive takes over and users become the target of exploitation, especially if investor metrics are involved in the further development of applications.

This development is prominent in consumer applications such as social media (e.g., Facebook), food delivery (e.g., Deliveroo), and ride sharing (e.g., Uber), which use algorithms to maximize profit margins and influence users and service providers [ 79 ]. Though issues relating to online exploitation and manipulation (e.g., [ 80 ]), psychological harm (e.g., [ 81 ]), data privacy (e.g., [ 82 ]), and misconduct (e.g., [ 83 ]) have been reported, little preventative action is being taken.

AI developments for business focus mainly on automation (e.g., process automation, robotics), smart solutions (e.g., just-in-time production, IoT smart buildings), and programs that help business managers make better decisions (e.g., talent management platforms, business intelligence systems). Businesses see the development and application of AI as a strategic imperative to create business value through increased efficiency, revenue, or cost reductions (e.g., [ 84 , 85 ]). However, there are also many company-focused AI solutions aimed at tracking and spying on employees. Whether consumer- or business-facing, the collection and use of personal data need to be governed and curated based on ethical considerations and human-enhancing properties. To achieve this, well-being metrics should be considered before investor metrics, which would fundamentally change future business-to-consumer (B2C) and business-to-business (B2B) application development. Other mechanisms that can ensure more human centricity, accountability, and safety in applications include audit trails (e.g., [ 54 ]), responsible AI governance (e.g., [ 86 ]), and data bias and proportionality checks (e.g., [ 87 ]), among others.
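As a minimal illustration of what a data bias and proportionality check might look like in practice, the sketch below compares group shares in a training set against a reference distribution and flags under-represented groups; the attribute, reference shares, and tolerance threshold are hypothetical choices of ours.

```python
from collections import Counter

def representation_check(records, attribute, reference_shares, tolerance=0.05):
    """Simple data-proportionality check: compare the share of each group in the training
    data with a reference distribution (e.g., census shares) and flag under-represented groups."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if observed + tolerance < expected:
            flags[group] = {"observed": round(observed, 3), "expected": expected}
    return flags

# Hypothetical example: gender shares in a hiring dataset vs. the applicant population.
data = [{"gender": "f"}, {"gender": "m"}, {"gender": "m"}, {"gender": "m"}]
print(representation_check(data, "gender", {"f": 0.5, "m": 0.5}))  # flags the "f" group
```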

Human-like AI application also needs to be considered within ‘Industry 4.0’ (I4.0), which is a term given to reflect the fourth industrial revolution currently taking place, characterized by big data, velocity and connectivity, cyber security systems, and embedded intelligence often referred to as smart technologies (e.g., [ 88 ]). The exponential growth of data and machine-to-machine and machine-to-human connectivity within I4.0 brings along various knowledge management and data interpretation challenges. Ontologies can provide a solution to bridge the complexity of data semantics and enable machine reasoning (e.g. [ 89 , 90 ]). Within I4.0, machine intelligence needs to move away from narrow AI approaches and evolve more into cross-domain intelligence.

7 Humanizing AI from an organizational perspective

AI is reshaping the ways organizations operate. Bionic companies, which combine human capabilities with machine intelligence, are no longer viewed as futuristic. According to a recent McKinsey survey, the three business functions where AI adoption is most common are service operations, product and service development, and sales and marketing [ 91 ].

Technology complexity is not the biggest obstacle for large scale AI adoption – human nature is. Resistance to change and fear of the unknown are often quoted as key barriers to organizational adoption of AI, especially in core business areas where machines are used to perform complex cognitive functions [ 92 ]. While employing AI to automate highly manual processes (process automation) is gaining widespread acceptance, intelligent automation (decision automation) still needs to earn trust [ 93 ].

A recent research report by IBM argues that “trusted, explainable AI is crucial to widespread adoption of the technology”, including maintaining brand integrity and meeting regulatory compliance [ 94 ]. The efforts behind explainable AI (XAI) aim to address the concerns related to lack of trust and seek to ensure that humans can easily understand the machine’s decisions by providing solutions to black box issues and transparent explanations. But will these efforts be enough?

Research suggests that people supported by machines frequently trust the decisions of AI more than they should; this is true even for explainable AI solutions. Recently, a Google engineer came to believe that the company's DL-enabled chatbot 'LaMDA' had become sentient because of its human-like conversational abilities [ 95 ]. Research evidence indicates that, in many instances, people make worse decisions with the assistance of machine intelligence than they would without it [ 96 , 97 ]. This raises the question of what is needed to humanize AI and make it more readily accepted by organizations.

First, AI tools have several advantages in performing certain customer-facing tasks quicker and more accurately than humans, especially when they can demonstrate high cognitive intelligence and empathetic behavior. These solutions, however, must be accepted and trusted by customers, and they need to deliver socially acceptable performance [ 98 ]. Our view is that once customers and society accept and trust AI tools, organizations will endorse them more readily.

Second, AI tools need to do more than provide explainability and ethics, eliminate unwanted bias in decision-making, and show perceptible empathy. They also need to deliver the diversity of human decisions. If we let only a very few algorithms perform specific cognitive tasks, we might end up amplifying systemic risks, such as flash-crash events on stock markets caused by high-frequency trading [ 99 ], or we risk monopolizing the core software engines behind cognitive AI tools [ 100 ].

Third, AI tools must convey to human users that their decision automation is subject to errors; not all automated decisions will be accurate. The higher the cognitive function, the more likely they are to make mistakes, just like humans. We tend to have more trust in people who are like us in some way, and it is easier for us to predict the reactions of people who resemble us [ 101 , 102 ]. This notion needs to be designed into AI tools and employee training during the operationalization phase if the AI solution is to be readily accepted by organizations and have the desired result.

Lastly, organizations will need to manage the risks that come with introducing intelligent machines. Operationalization of low-cognition AI solutions (such as process automation) poses fewer risks, which can be reasonably mitigated by appropriate controls [ 103 ]. The ultimate danger is that intelligent machines might seize control over their environment and refuse human control. Further research addressing this AI control problem, together with heavy-handed ex-ante regulation for highly cognitive AI tools, will help to mitigate the risk that, as the late physicist Stephen Hawking put it, "The development of full artificial intelligence could spell the end of the human race" [ 104 ] (see Footnote 2).

8 Humanizing AI at the societal level (macro)

As mentioned before, AI as a technology has reached a level at which its usage has become ubiquitous. Although machine learning algorithms help us process data efficiently, predict outcomes, and make intelligent decisions, it is the way these data analytic approaches are used that needs to be scrutinized. In this section, we discuss some of the more precarious applications of AI that have had (or continue to have) a deep impact on human behavior and society at large. Note that our review does not go into the details of subjective and objective measurement of human well-being, as defined by [ 105 ]; our three societal examples are presented simply to highlight the need for more human-centered consideration of AI in society.

9 China’s Social Credit System

In 2020, a countrywide social credit system (SCS) was implemented in China that scores citizens based on their offline and online behavior. Using AI-powered surveillance cameras, payment tracking through Alipay and other Chinese online payment methods, and social media surveillance, China's centralized SCS can evaluate a citizen's score and thus provide or restrict access to resources based on how well someone behaves (e.g., [ 106 ]). This application of AI monitors and socially engineers behavior, granting or denying access to public resources. From the perspective of humanizing AI, this approach is questionable, as it monitors and coerces behavior in an opaque way, leaving it unclear to citizens how to influence their score.

SCSs are nothing new. Online platforms like Uber and Airbnb allow customers and service providers to evaluate the experience they had with each other. User-generated reviews act as a form of social validation and authority. Reviews not only affect the likeability of and demand for a service but also impose a level of control over how service providers and customers interact with each other, based on the expected behaviors governing the system (e.g., [ 107 , 108 ]). However, the application of social ranking systems beyond consumer apps needs to be reviewed further.

9.1 Facebook segmentation algorithms

In recent months, Facebook and its operating platform have faced significant public backlash over how its algorithms promote hatred and discourage diverse views [ 109 ]. Facebook uses AI to classify and segment its users based on a wide range of characteristics, from behavioral profiles and social ties to psychological traits it can infer using machine learning and pattern recognition. Facebook uses this data to enhance platform engagement and to better serve its advertisers by offering more targeted advertising.
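Behavioral micro-segmentation of this kind is typically implemented with unsupervised clustering. The sketch below groups users into segments with k-means over made-up engagement features; the feature names, values, and choice of k are purely illustrative of the general mechanism, not of Facebook's actual systems.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical engagement features per user:
# [daily minutes on platform, shares per week, share of political content consumed]
users = np.array([
    [12, 1, 0.1],
    [95, 20, 0.8],
    [88, 18, 0.9],
    [15, 2, 0.2],
    [60, 9, 0.5],
])

# Unsupervised segmentation: group users into k behavioral micro-segments,
# each of which can then be shown different content or advertising.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)
print(segments)  # cluster label per user, e.g. [0 1 1 0 1]
```

Each resulting segment can then be served different content or advertising, which is where the echo-chamber dynamics discussed next originate.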

However, the unintended consequence of creating micro-segments is that users get stuck in echo chambers and are less exposed to different information and opinions. Studies have found that this kind of algorithmic design leads to more siloed thinking, reinforcing latent beliefs and potentially fueling more hatred [ 109 ].

These events make clear that AI-driven social media platforms have a significant impact on attitude formation and offline behaviors and thus are not neutral. Platforms that serve humankind need to be transparent in operations and designed with human well-being in mind. Recent EU regulation aims to force social media companies operating in the EU to disclose how their algorithms work. The unintended consequences of for-profit platforms need to be considered during the design stages and addressed later if necessary. Allowing the profit motive to take over the purpose motive is a key challenge to be addressed in the pursuit of a symbiotic relationship between humans and machines.

9.2 Cambridge Analytica Psychographic Profiling

The 2016 US presidential election was criticized for the use of online psychographic profiling to sway voter decisions. The company responsible for facilitating the online profiling and micro-targeting of US citizens was Cambridge Analytica. It allegedly used personal data, improperly collected through a personality test on Facebook, to build rich user profiles identifying attitudes, beliefs, and opinions. These data were then used to create micro-targeted ads aimed at influencing election participation and voter decisions [ 110 ].

Another consequence of this kind of profiling is that user data can be used to make highly accurate predictions about political beliefs, sexual orientation, fears and desires, and other sensitive information that traditional survey approaches cannot uncover unless respondents are asked directly [ 111 , 112 ]. This means that data companies with advanced machine learning algorithms may know more about a user than relatives or close friends do [ 113 ]. If this information is used to influence attitudes and behavior for political reasons, then AI poses a risk to the social fabric of humanity and to political integrity. Global institutions and countries need to address these applications and provide guidelines on ethical considerations in AI design and usage.
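The underlying inference mechanism is ordinary supervised learning over digital traces. The following sketch, with entirely made-up data, trains a logistic regression to infer a hypothetical binary trait from page 'likes'; the point is only how little machinery such an inference model requires, not how any particular company implements it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical binary matrix: rows are users, columns are pages they have "liked".
likes = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])
trait = np.array([1, 0, 1, 0, 1, 0])  # hypothetical sensitive attribute to be inferred

# Fit a simple model and estimate the trait for a previously unseen user.
model = LogisticRegression().fit(likes, trait)
new_user = np.array([[1, 0, 1, 1]])
print(model.predict_proba(new_user))  # probability estimate for the inferred trait
```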

At the macro level, techno-social mechanisms need to be in place to ensure that the usage and advancement of AI are improving societal outcomes. This is necessary for technology adoption and trust in AI (e.g., [ 53 ]). Institutional mechanisms that govern AI development and deployment at the societal level focus mainly on protecting human values (e.g., equality, fairness, safety, and privacy – in [ 114 ]), accountability (e.g., holding AI developers accountable – in [ 115 ]), and incentives (e.g., funding or promoting AI-driven technologies that strengthen human values – in [ 116 ]). These mechanisms need to be in place to prevent both individual harm as well as societal harm (e.g. [ 40 ]).

10 The power motive of AI usage at the societal level

The three examples given highlight the power motive of AI usage at the societal level. The institutions that install these mechanisms do so mainly to exert or expand their existing power over the people within their systems. So far, the emphasis has been on governing and control (see Fig. 1, which represents our motivational framework of AI operationalization today: purpose, profit, and power). Using AI to socially engineer behavior, manipulate democratic voting decisions, and segregate people based on similar characteristics, values, and/or beliefs does not benefit human well-being or promote equality. It enables "divide and power" asymmetries within society and highlights major concerns related to privacy, surveillance, discrimination, and bias. It is also not unthinkable that, with ongoing datafication (the digitization of all aspects of our daily lives and the evaluation of this data), SCSs in various forms could emerge in other parts of the world [ 106 ].

However, despite these examples of human-limiting approaches, we see potential avenues for human-enhancing and societally desirable uses of AI. Enabling societal development using AI-powered systems can allow people to flourish in more ways than are currently practiced. AI systems can augment human functioning and take over repetitive or dangerous tasks, allowing humans to focus on strengthening qualities such as creativity, connection, altruism, and emotional intelligence. AI and other enabling technologies can be used to help provide equal access to resources that facilitate human growth and well-being, emphasizing morals, fairness, ethics, and even philosophy.

Fig. 1 Motivations driving AI creation and usage

11 Conclusion and research directions

Modern-day AI development and deployment show great potential for creating value for business and society. However, the current state and the future of AI are not without concerns. Ethical and moral dilemmas have arisen in recent years due to AI usage in the public domain and the (un)intentional consequences algorithms have on economic choices and human well-being. Moreover, policymakers often lack the speed and motivation to regulate the market with timely legislation, while overly strict AI policies can limit a country's digital competitiveness. It is clear that new perspectives are needed to help solve current issues and advance the field. In this paper, we argue that AI conceptualization and application need to be less artificial and more human-like. AI development and deployment need to be governed by more human-centric principles, ones that are easily understood by all stakeholders and that benefit society. We therefore propose a multilayered behavioral approach to address the issues and potential solutions.

This paper also reviewed the mechanisms underlying existing phenomena and behavior-informed strategies to humanize AI from the micro, meso and macro perspectives. In terms of mechanisms, we highlighted the importance of audit trails, interpretability, and algorithmic design choices at the micro level, Responsible AI governance, and data bias and proportionality checks at the meso level, and techno-social mechanisms such as protecting human values, accountability, and incentives at the macro level. For strategies, we proposed solutions which help build trusted and explainable AI and support technology adoption, such as the development of anthropomorphic algorithms and other human-like features, making clear how algorithms make decisions, minimizing human and machine bias, and ensuring that the usage of AI augments and protects human life through incentivization and accountability.

Humanizing AI also means introducing ethical principles into the activities related to planning, developing, and deploying AI tools. The Responsible AI Guidelines developed by the US Department of Defense provide detailed guidance to ensure that ethical considerations are integrated into the design, training, and operationalization of AI, and they define a process that is responsible, reproducible, and scalable [ 117 ]. They go well beyond spelling out the need for explainability; they also aim to ensure that the decisions of AI tools are in line with human values. We believe that design research efforts should focus on investigating the core aspects of these themes, such as responsible AI, explainable AI, and anthropomorphic design.

This paper provides various avenues for future research. First, from the micro perspective, future research should focus on exploring ways to build algorithms that are able to mimic human decision-making processes and make decisions in a more human-centric manner. This is important not only for making AI more understandable but also as an avenue for creating intelligent systems with more general intelligence. Second, from the meso perspective, future research efforts should focus on finding ways to promote the application and adoption of more equitable, trusted, and responsible AI solutions to help overcome existing barriers and build a stronger relationship between humans and machines. Finally, from the macro perspective, future research efforts should focus on investigating and designing mechanisms that promote and protect human properties as the application of intelligent systems at the societal level becomes more common. The field of behavioral data science, in which ML experts and behavioral scientists work together, can make a valuable contribution to humanizing AI in future research efforts.

Addressing today’s AI challenges is crucial if we want to build a more symbiotic relationship between humans and machines. Humanizing AI does not automatically lead to a more symbiotic relationship between humans and machines but does set a necessary foundation for its development based on human values and potential. This is not only important to build better AI, but also helps humankind to better understand what it means to be human in a digital world. Once again, we require wisdom to guide the future of AI.

Data availability

Not applicable.

Code availability

Not applicable.

Notes

Footnote 1: Others (e.g., [ 118 , 119 ]) argue that AI should be "tethered to the humans who create and deploy them", but that it should not be human-like.

Footnote 2: Undeniably, AI has its advantages and benefits, and there are views to support the argument that possible risks can be kept under control [ 118 , 120 , 121 ].

Fron C, Korn O. A short history of the perception of robots and automata from antiquity to modern times. In: Social robots: technological, societal and ethical aspects of human-robot interaction. Cham: Springer International Publishing; 2019. p. 1–12.


Devecka M. Did the Greeks believe in their robots? Camb Class J. 2013;59:52–69.


Homer. The Iliad. New York: Penguin Publishing Group; 1991.

Shelley MW. Frankenstein; or, the modern Prometheus. London: Printed for Lackington, Hughes Harding, Mavor & Jones; 1818.

Aristotle. The Rhetoric of Aristotle: an expanded translation with supplementary examples for students of composition and public speaking. New York: D. Appleton and Co; 1932.

Russell S, Davis E, Norvig P. Artificial intelligence: a modern approach. Hoboken: Prentice Hall; 2009.


Afiouni R. Organizational Learning in the Rise of Machine Learning. International Conference on Information Systems, Munich. 2019.

Lee J, Suh T, Roy D, Baucus M. Emerging technology and business model innovation: the case of artificial intelligence. J Open Innov. 2019;5(3):1–13.

Mikalef P, Gupta M. Artificial intelligence capability: conceptualization, measurement calibration, and empirical study on its impact on organizational creativity and firm performance. Inf Manag. 2021;58(3):1–20.

Schmidt R, Zimmermann A, Möhring M, Keller B. Value creation in connectionist artificial intelligence: a research agenda. In: AMCIS; 2020.

Simon HA. The sciences of the artificial. Cambridge: MIT; 1970.

Russell S, Norvig P. Artificial intelligence: a modern approach. London: Pearson; 2016.

Wang P. On defining artificial intelligence. J Artif Gen Intell. 2019;10(2):1–37.

Kühl N, Goutier M, Hirt R, Satzger G. Machine Learning in Artificial Intelligence: Towards a Common Understanding. https://arxiv.org/abs/2004.04686 . 2020.

Du X, Dua S. Data mining and machine learning in cybersecurity. Abingdon-on-Thames: Taylor & Francis; 2011.

Bishop CM. Pattern recognition and machine learning. New York: Springer; 2006.

Serrano W. Big data intelligent search assistant based on the random neural network. In: Advances in big data: proceedings of the 2nd INNS conference on big data. Thessaloniki: Springer International Publishing; 2016.

Chen Y. Integrated and intelligent manufacturing: perspectives and enablers. Engineering. 2017;3(5):588–95.

Liu H-Y, Zawieska K. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence. Ethics Inf Technol. 2017;22:321–33.

Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

Ryan M, Stahl BC. Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications. J Inf Commun Ethics Soc. 2021;19(1):61–86.

Pew Research Center. Artificial Intelligence and the Future of Humans, 2018.

Han S, Kelly E, Nikou S, Svee E-O. Aligning artificial intelligence with human values: reflections from a phenomenological perspective. AI Soc. 2021. https://doi.org/10.1007/s00146-021-01247-4 .

Hollnagel E, Woods DD. Joint cognitive systems: foundations of cognitive systems engineering. Milton Park: Taylor & Francis Group; 2005.


Norman DA. The design of everyday things: revised and expanded edition. Cambridge: MIT Press; 2013.

Bødker S. Third-wave HCI, 10 years later—participation and sharing. Interactions. 2015;22(5):24–31.

Saariluoma P, Oulasvirta A. User psychology: re-assessing the boundaries of a discipline. Sci Res. 2010;1(5):317–28.

Saariluoma P, Cañas J, Leikas J. Designing for Life. London: MacMillan; 2016.

ISO. ISO 9241: Ergonomics of human-system interaction—Part 210: Human-centred design for interactive systems. ISO; 2019.

Miyake N, Ishiguro H, Dautenhahn K, Nomura T. Robots with children: practices for human-robot symbiosis. IEEE: Piscataway; 2011.

Sandini G, Mohan V, Sciutti A, Morasso P. Social cognition for human-robot symbiosis—challenges and building blocks. Front Neurorobotics. 2018;12:34.

Fabi S, Xu X, de Sa VR. Exploring the racial bias in pain detection with a computer vision model. 2022. https://cogsci.ucsd.edu/~desa/Exploring_the_Racial_Bias_in_Pain_Detection_with_a_Computer_Vision_Model.pdf . Accessed 15 May 2022

Daugherty PR, Wilson J, Chowdhury R. Using Artificial Intelligence to promote diversity. Boston: MIT Sloan Management Review; 2018.

Kiritchenko S, Mohammad SM. Examining gender and race bias in two hundred sentiment analysis systems. arXiv. 2018. https://doi.org/10.48550/arXiv.1805.04508 .

Lockey S, Gillespie N, Holm D, Someh IA. A review of trust in artificial intelligence: challenges, vulnerabilities and future directions. In: Proceedings of the 54th Hawaii International Conference on System Sciences; 2021.

Suresh H, Guttag JV. A Framework for Understanding Unintended Consequences of Machine Learning. arXiv. 2020;2:8.

IEEE. P7001 - Draft standard for transparency of autonomous systems. New York: IEEE; 2020. p. 1–70.

IEEE. P7007 - Ontological standard for ethically driven robotics and automation systems. New York: IEEE; 2021.

Acemoglu D. Harms of AI. Natl Bureau Econ Res. 2021. https://doi.org/10.3386/w29247 .

Smuha NA. Beyond the individual: governing AI’s societal harm. Int Policy Rev. 2021. https://doi.org/10.14763/2021.3.1574 .

European Commission. Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts; 2021.

United States Congress (117th), H.R.2154—Protecting Americans from Dangerous Algorithms Act, 2021.

United States Congress (117th), S.1896—Algorithmic Justice and Online Platform Transparency Act, 2021.

Graef I, Prüfer J. Governance of data sharing: a law & economics proposal. Res Policy. 2021;50(9):104330.

Fu G. CDA Insights 2022: Toward ethical artificial intelligence in international development. 2022. https://dai-global-digital.com/cda-insights-2022-toward-ethical-artificial-intelligence-in-international-development.html . Accessed on 23 May 2022.

Schlackl F, Link N, Hoehle H. Antecedents and consequences of data breaches: a systematic review. Inform Manag. 2022;59:103638.

Dembrow B. Investing in human futures: how big tech and social media giants abuse privacy and manipulate consumerism. U MIA Bus L Rev. 2022;30(3):324–49.

Bayat B, Bermejo-Alonso J, Carbonera J, Facchinetti T. Requirements for building an ontology for autonomous robots. Industrial Robot. 2016;43:469–80.

Coste-Maniere E, Simmons R. Architecture, the backbone of robotic systems. In: Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA; 2000.

Calzado J, Lindsay A, Chen C, Samuels G, Olszewska JI. SAMI: interactive, multi-sense robot architecture. In: IEEE 22nd International Conference on Intelligent Engineering Systems (INES), Las Palmas de Gran Canaria; 2018.

Oulasvirta A. It’s time to rediscover HCI models. Interactions. 2019;26(4):52–6.

Bostrom N. Superintelligence: paths, dangers, strategies. Brilliance Publishing; 2015.

Samek W, Müller KR. Towards explainable artificial intelligence. In: Explainable AI: interpreting, explaining and visualizing deep learning. Springer; 2019. p. 5–22.

Falco G, Shneiderman B, Badger J, Carrier R, Dahbura A. Governing AI safety through independent audits. Nature Mach Intell. 2021;3:566–71.

Burkhardt R, Hohn N, Wigley C. Leading your organization to responsible AI. https://www.mckinsey.com/business-functions/quantumblack/our-insights/leading-your-organization-to-responsible-ai . Accessed 14 Jun 2022

Amoore L, Raley R. Securing with algorithms. Secur Dialogue. 2017;48(1):3–10.

Salles A, Evers K, Farisco M. Anthropomorphism in AI. AJOB Neurosci. 2020;11(2):88–95.

Epley N, Waytz A, Cacioppo JT. On seeing human: a three-factor theory of anthropomorphism. Psychol Rev. 2007;114(4):864–86.

Bar-Cohen Y, Hanson D. The coming robot revolution: expectations and fears about emerging intelligent, humanlike machines. New York: Springer; 2016.

Araujo T. Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput Hum Behav. 2018;85(1):183–9.

Fabi S, Hagendorff T. Why we need biased AI. How including cognitive and ethical machine biases can enhance AI systems. arXiv. 2022. https://doi.org/10.48550/arXiv.2203.09911 .

Airenti G. The cognitive bases of anthropomorphism: from relatedness to empathy. Int J Soc Robot. 2015;7(1):117–27.

Leong B, Selinger E. Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism . Proceedings of the Association for Computing Machinery's Conference on Fairness, Accountability, and Transparency, Atlanta, GA, 2018.

Marcus G. Deep learning: a critical appraisal. arXiv; 2018.

Ullman S. Using neuroscience to develop artificial intelligence. Science. 2019;363(6428):692–3.

Eysenck MW, Eysenck C. AI vs Humans. London: Taylor & Francis Group; 2021.

Nagi J, Ducatelle F, Di Caro GA, Cireşan D, Meier U, Giusti A, Nagi F, Schmidhuber J, Gambardella LM. Max-pooling convolutional neural networks for vision-based hand gesture recognition. New York: IEEE; 2011. p. 342–7.

Ni J, Wu L, Fan X, Yang S. Bioinspired intelligent algorithm and its applications for mobile robot control: a survey. Comput Intell Neurosci. 2016;2016:1–16.

Binitha SD, Sathya SS. A survey of Bio inspired optimization algorithms. Int J Soft Comput Eng. 2012;2:2.

Olszewska JI. Snakes in trees: an explainable artificial intelligence approach for automatic object detection and recognition. ICAART; 2022.

Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–31.

Klein G. Naturalistic decision making. Hum Factors J Hum Factors Ergonomics Soc. 2008;50(3):456–60.

Gadzinski G, Castello A. Fast and frugal heuristics augmented: when machine learning quantifies Bayesian uncertainty. J Behav Exp Finance. 2020;26:100293.

Hafenbrädl S, Waeger D, Marewski JN, Gigerenzer G. Applied decision making with fast-and-frugal heuristics. J Appl Res Mem Cogn. 2016;5(2):215–31.

Damiano L, Dumouchel P. Anthropomorphism in human-robot co-evolution. Front Psychol. 2018. https://doi.org/10.3389/fpsyg.2018.00468 .

Mittelstadt B. Principles alone cannot guarantee ethical AI. Nat Mach Intell. 2019;1(11):501–7.

Vakkuri V, Kemell K-K, Abrahamsson P. Implementing ethics in AI: initial results of an industrial multiple case study. In: Product-Focused Software Process Improvement (PROFES 2019). Lecture Notes in Computer Science. Cham; 2019.

Coeckelbergh M. Can we trust robots? Ethics Inf Technol. 2012;14(1):53–60.

Wu T. The Attention merchants: the epic struggle to get inside our heads. London: Atlantic Books; 2017.

Susser D, Roessler B, Nissenbaum H. Online manipulation: hidden influences in a digital world. Georgetown Law Technol Rev. 2019;4(1):1–45.

Amedie J. The Impact of Social Media on Society. 2015. https://scholarcommons.scu.edu/engl_176/2 . Accessed 26 May 2022

Sushama C, Kumar MS, Neelima P. Privacy and security issues in the future: a social media. Mater Today. 2021. https://doi.org/10.1016/j.matpr.2020.11.105 .

Bakir V, McStay A. Fake news and the economy of emotions. Digit J. 2018;6(2):154–75.

Alsheibani SA, Messom CH, Cheung YP, Alhosni M. Reimagining the strategic management of artificial intelligence: five recommendations for business leaders. In: AMCIS; 2020.

Amer-Yahia S, Roy SB, Chen L, Morishima A, Monedero J. Making AI machines work for humans in FoW. ACM Sigmod Record. 2020;49:30–5.

Papagiannidis E, Enholm IM, Mikalef P, Krogstie J. Structuring AI resources to build an AI capability: a conceptual framework. In: ECIS; 2021.

Arrieta AB, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, et al. Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inform Fusion. 2020;58:82–115.

Sampath K, Khamis A, Fiorini S, Carbonera J, Olivares Alarcos A. Ontologies for industry 4.0. Knowl Eng Rev. 2019;34:E17.

Hassani A, Medvedev A, Haghighi PD, Ling S, Indrawan-Santiago M, Zaslavsky A, Jayaraman PP. Context-as-a-Service platform: exchange and share context in an IoT ecosystem. In: IEEE International Conference on Pervasive Computing and Communications Workshops; 2018.

Olszewska JI, Allison AK. ODYSSEY: Software development life cycle ontology. Proceedings of the International Conference on Knowledge Engineering and Ontology Development. 2018.

Chui M, Hall B, Singla A, Sukharevsky A. Global survey: the state of AI in 2021. https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/global-survey-the-state-of-ai-in-2021 . Accessed 7 Feb 2022

Goasduff L. 3 Barriers to AI Adoption. 2019. https://www.gartner.com/smarterwithgartner/3-barriers-to-ai-adoption . Accessed 7 Feb 2022

Coombs C, Hislop D, Taneva SK, Barnard S. The strategic impacts of Intelligent Automation for knowledge and service work: an interdisciplinary review. J Strateg Inform Syst. 2020;29:4.

IBM Watson. Global AI Adoption Index 2021. 2021. https://newsroom.ibm.com/IBMs-Global-AI-Adoption-Index-2021 . Accessed 8 Feb 2022

Fenwick A, Caneri M, Ma S, Chung-Pang TS, Jimenez MA, Calzone O, López-Ausens T, Ananías C. 2022. Sentient or illusion: what LaMDA teaches us about being human when engaging with AI. MIT Technology Review Arabia (Arabic). https://drfenwick.medium.com/sentient-or-illusion-what-lamda-teaches-us-about-being-human-when-engaging-with-ai-39b9237b49d8 . Accessed 26 Jun 2022.

Bansal G, Wu T, Zhou J, Fok R, Nushi B, Kamar E, Ribeiro MT, Weld D. Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. pp. 1–16, 2021.

Buçinca Z, Lin P, Gajos KZ, Glassman EL. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems. In: IUI '20: Proceedings of the 25th International Conference on Intelligent User Interfaces; 2020. p. 454–64.

Pelau C, Dabija D-C, Ene I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput Hum Behav. 2021. https://doi.org/10.1016/j.chb.2021.106855 .

Kirilenko A, Kyle AS, Samadi M, Tuzun T. The flash crash: high-frequency trading in an electronic market. J Financ. 2017;72(3):967–98.

Hindman M. The internet trap: how the digital economy builds monopolies and undermines democracy. Princeton: Princeton University Press; 2018.

DeBruine LM. Facial resemblance enhances trust. Proc Royal Soc Biol Sci. 2002;269:1498.

Kramer RM. Rethinking trust. Harv Bus Rev. 2009;87(6):68–77.

Bhatti B. 7 types of AI risk and how to mitigate their impact. https://towardsdatascience.com/7-types-of-ai-risk-and-how-to-mitigate-their-impact-36c086bfd732 . Accessed 13 Sept 2020

Cellan-Jones R. Stephen Hawking warns artificial intelligence could end mankind. BBC News. 2 December 2014. https://www.bbc.com/news/technology-30290540 . Accessed 8 Feb 2022

IEEE. 7010 Recommended practice for assessing the impact of autonomous and intelligent systems on human well-being. New York: IEEE; 2020.

Cheung A, Chen Y. From datafication to data state: making sense of China's social credit system and its implications. Law Soc Inq. 2021:1–35.

Feldstein S. The global expansion of AI surveillance. 2019. https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847 . Accessed 14 Jun 2022

Fenwick A. How's your social credit score? 2018. https://www.hult.edu/blog/your-social-credit-score/ . Accessed 26 Jun 2022

Flaxman S, Goel S, Rao JM. Filter bubbles, echo chambers, and online news consumption. Public Opin Quart. 2016;80:298–320.

Bastos MT, Mercea D. The Brexit botnet and user-generated hyperpartisan news. Soc Sci Comput Rev. 2019;37(1):38–54.

Kosinski M, Stillwell D, Graepel T. Private traits and attributes are predictable from digital records of human behavior. Proc Natl Acad Sci. 2013;110(15):5802–5.

Kosinski M, Bachrach Y, Kohli P, Stillwell D, Graepel T. Manifestations of user personality in website choice and behaviour on online social networks. Mach Learn. 2014;95(3):357–80.


Youyou W, Kosinski M, Stillwell D. Computer-based personality judgments are more accurate than those made by humans. Proc Nat Acad Sci. 2015;112(4):1036–40.

European Commission. White paper on artificial intelligence: a European approach to excellence and trust. 2020. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf . Accessed 26 Jun 2022

Wieringa M. What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; 2020.

The White House. Artificial Intelligence, Automation, and the Economy. 2016. https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF . Accessed 26 Jun 2022

Dunnmon J, Goodman B, Kirechu P, Smith C, Van Deusen A. Responsible AI guidelines in practice. Defense Innovation Unit, US Department of Defense; 2021.

Kostopoulos I. Decoupling human characteristics from algorithmic capabilities. The IEEE Standards Association; 2014. https://standards.ieee.org/initiatives/artificial-intelligence-systems/decoupling-human-characteristics/ . Accessed 11 Jun 2022

Johnson DG, Miller KW. Un-making artificial moral agents. Ethics Inf Technol. 2008;10(2):123–33.

Stahl BC. Ethical issues of AI. Artificial intelligence for a better future springer briefs in research and innovation governance. Cham: Springer; 2021.

Saariluoma P, Rauterberg M. Turing’s Error-revised. International Journal of Philosophy Study. 2016;4:22–41.


Acknowledgements

The authors would like to express sincere thanks to the reviewers for their valuable comments.

This project has not received any funding.

Author information

Authors and affiliations

Hult International Business School, Dubai, UAE

University of Colorado Boulder, ATLAS Institute, Boulder, CO, USA


Contributions

Ali Fenwick and Gabor Molnar contributed equally to this paper.

Corresponding author

Correspondence to A. Fenwick.

Ethics declarations

Competing interests

The authors declare no competing interests. The views and opinions expressed in this paper are those of the authors and do not necessarily reflect the views or positions of any entities they represent.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Fenwick, A., Molnar, G. The importance of humanizing AI: using a behavioral lens to bridge the gaps between humans and machines. Discov Artif Intell 2, 14 (2022). https://doi.org/10.1007/s44163-022-00030-8


Received: 29 May 2022

Accepted: 28 July 2022

Published: 25 August 2022

DOI: https://doi.org/10.1007/s44163-022-00030-8




Responsible Artificial Intelligence: Designing AI for Human Values

Artificial intelligence (AI) is increasingly affecting our lives in smaller or greater ways. To ensure that systems uphold human values, design methods are needed that incorporate ethical principles and address societal concerns. In this paper, we explore the impact of AI through the case of its expected effects on the European labor market, and we propose the accountability, responsibility, and transparency (ART) design principles for the development of AI systems that are sensitive to human values.

Keywords: artificial intelligence, design for values, ethics, societal impact



Human values, as well as AI, must be at the core of the future of work

Automation too often erodes conditions and job quality, creating anxiety and overwork. To build 'good work', we must invest in people as well as tech

The UK economy is at a pivotal moment. Two years on from Covid, the UK remains the only country in the developed world where people have continued to drop out of the labour market in greater numbers beyond the pandemic.

Rates of economic inactivity have risen and vacancies in the hospitality, health and technology sectors are proving hard to fill. At the same time, automation and the acceleration of artificial intelligence (AI) technology risk spreading fear and anxiety among workers. The UK is experiencing new forms of polarisation between good and poor-quality work.

How the government responds to the challenges the current jobs market presents is crucial. Yet we still do not have a cross-department council, strategy or minister to coordinate and drive the “future of work” agenda.


A new report from the Business, Energy & Industrial Strategy select committee highlights the obstacles the UK faces in seeking to deliver sustainable, inclusive growth. It also identifies a remarkable range of labour market challenges, even though unemployment levels remain close to a record low. But what is missing from the report, and indeed from the government's vision, is a focus on the importance of "good work".

This is work that is more than just employment: it is work that promotes dignity, autonomy and equality; work that has fair pay and conditions. The government often focuses on unemployment figures as a metric for whether the economy is doing well. But the data we have show that increasing the number of professional jobs in a local area can no longer be seen as a vehicle for reducing the amount of mundane, low-quality work in that area.

The government appears to be putting huge store in technology and automation to drive growth and create "better jobs and better opportunities". The problem with this is that new technologies do not automatically create better jobs.


Without a focus on human values and agency, automation can seriously detract from people’s experience of work. The BEIS report cites the adoption of AI by firms such as Amazon and Royal Mail as creating “anxiety, stress, unhappiness and overwork”. Surveillance systems are “leading to distrust, micromanagement and, in some cases, disciplinary action”. This is not about “robots taking jobs” – this is about automated systems eroding conditions for workers and diminishing job quality when people are not at the heart of it.

It need not be this way: automation can build good work. Tools such as ChatGPT can speed up mundane tasks, freeing up workers to focus on more complex and creative tasks. Well used, an AI system in education could do the heavy lifting of analysing pupil data, for instance, allowing teachers to spend more time teaching students.

While more research is needed into the impacts of automation on work and people, we do know that, to get the best results from automation, much higher levels of investment in human capabilities are needed alongside investment in hardware and software. In short: we need to invest in people, not just tools. Investment in this context is not about the amount spent on software or training to use a system; it is about an orientation towards human agency, about people feeling they are being invested in.

We can be ambitious for the future of work and take an optimistic, forward-looking approach to the responsible design, use and governance of advanced workplace technologies. But to deliver this we need to develop an overarching, proactive and systematic framework of regulation, one that requires pre-emptive evaluation of how these tools might affect access to work, conditions of work and the quality of jobs.

Better-quality jobs protect people and communities against health, social and economic shocks, and focusing on good work as technologies are introduced – as we have modelled here – would not simply offer protections against job losses, but actively seek to build a better labour market, one that shares the benefits of automation as widely as possible.

Anna Thomas is co-founder of the Institute for the Future of Work, an independent research body exploring the impacts of technology on working lives. She established the UK’s future of work commission, the all-party parliamentary group on the future of work and the Pissarides Review into Work and Wellbeing.



Title: DUPE: Detection Undermining via Prompt Engineering for Deepfake Text

Abstract: As large language models (LLMs) become increasingly commonplace, concern about distinguishing between human and AI text increases as well. The growing power of these models is of particular concern to teachers, who may worry that students will use LLMs to write school assignments. Facing a technology with which they are unfamiliar, teachers may turn to publicly available AI text detectors. Yet the accuracy of many of these detectors has not been thoroughly verified, posing potential harm to students who are falsely accused of academic dishonesty. In this paper, we evaluate three different AI text detectors (Kirchenbauer et al.'s watermarking, ZeroGPT, and GPTZero) against human-written and AI-generated essays. We find that watermarking results in a high false positive rate, and that ZeroGPT has both high false positive and false negative rates. Further, we are able to significantly increase the false negative rate of all detectors by using ChatGPT 3.5 to paraphrase the original AI-generated texts, thereby effectively bypassing the detectors.
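The evaluation described in this abstract comes down to two error rates plus a paraphrase-based evasion step. Below is a minimal Python sketch of that setup, not the authors' code: `detect` and `paraphrase` are hypothetical callables standing in for a real detector (such as a call to ZeroGPT or GPTZero) and a paraphrasing model.

```python
from typing import Callable, List, Tuple

# Hypothetical stand-ins, not real library calls: `detect(text)` returns True
# when a detector flags the text as AI-generated; `paraphrase(text)` rewrites
# an AI-generated essay with an LLM.
Detector = Callable[[str], bool]
Paraphraser = Callable[[str], str]

def false_positive_rate(detect: Detector, human_texts: List[str]) -> float:
    """Share of human-written essays wrongly flagged as AI-generated."""
    return sum(detect(t) for t in human_texts) / len(human_texts)

def false_negative_rate(detect: Detector, ai_texts: List[str]) -> float:
    """Share of AI-generated essays the detector fails to flag."""
    return sum(not detect(t) for t in ai_texts) / len(ai_texts)

def paraphrase_attack(detect: Detector, paraphrase: Paraphraser,
                      ai_texts: List[str]) -> Tuple[float, float]:
    """False negative rate before and after paraphrasing the AI-generated
    essays; a large jump is the evasion effect the abstract reports."""
    before = false_negative_rate(detect, ai_texts)
    after = false_negative_rate(detect, [paraphrase(t) for t in ai_texts])
    return before, after
```

On this framing, the paper's headline findings are simply that the false positive rate is high for watermarking and ZeroGPT, and that the paraphrase step raises the false negative rate for all three detectors.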



COMMENTS

  1. Aligning artificial intelligence with human values: reflections from a

    Artificial Intelligence (AI) must be directed at humane ends. The development of AI has produced great uncertainties of ensuring AI alignment with human values (AI value alignment) through AI operations from design to use. For the purposes of addressing this problem, we adopt the phenomenological theories of material values and technological mediation to be that beginning step. In this paper ...

  2. (PDF) Aligning artificial intelligence with human values: reflections

    Artificial Intelligence (AI) must be directed at humane ends. The development of AI has produced great uncertainties of ensuring AI alignment with human values (AI value alignment) through AI ...

  3. AI Should Augment Human Intelligence, Not Replace It

    In an economy where data is changing how companies create value — and compete — experts predict that using artificial intelligence (AI) at a larger scale will add as much as $15.7 trillion to ...

  4. Bring Human Values to AI

    Bring Human Values to AI. Speed and efficiency used to be the priority. Now issues such as safety and privacy matter too. Summary. When it launched GPT-4, in March 2023, OpenAI touted its ...

  5. PDF Modelling Human Values for AI Reasoning

    Section 2 motivates using values as critical modelling and architectural concepts for symbolic AI systems. It outlines why values are considered vital to understanding human behaviour and why explicit modelling and computational reasoning of human values is needed in AI. Section 3 explores the nature of human values from social psychology ...

  6. Artificial Intelligence, Values, and Alignment

    The latter involves aligning artificial intelligence with the correct or best scheme of human values on a society-wide or global basis. While the minimalist view starts with the sound observation that optimizing exclusively for almost any metric could create bad outcomes for human beings, we may ultimately need to move beyond minimalist ...

  7. Reboot AI with human values

    Reboot AI with human values. A former head of the European Research Council urges critical thinking about the algorithms that shape our lives and societies. A security staff member wears augmented ...

  8. Humans and Intelligent Machines: Underlying Values

    Summary. This chapter aims to provide a survey of some of the questions concerning how we value intelligence and how human nature is understood, which underlie many of the central and most perplexing questions in AI ethics. Analysing these issues can assist with understanding divergent viewpoints. Many questions in AI ethics concern what ...

  9. Human autonomy in the age of artificial intelligence

    Progress in the development of artificial intelligence (AI) opens up new opportunities for supporting and fostering autonomy, but it simultaneously poses significant risks. Recent incidents of AI ...

  10. Reflections on Artificial Intelligence Alignment with Human Values: A

    Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the ...

  11. Artificial Intelligence Will Change Human Value(s)

Artificial Intelligence Will Change Human Value(s). Near-term AI advances ultimately will lead to a major societal shift. By Robert K. Ackerman. Mar 01, 2019. The changes that artificial intelligence will bring to the technology landscape could pale in comparison to what it wreaks on global society.

  12. How can we build human values into AI?

    For humans, principles help shape the way we live our lives and our sense of right and wrong. For AI, they shape its approach to a range of decisions involving trade-offs, such as the choice between prioritising productivity or helping those most in need. In a paper published today in the Proceedings of the National Academy of Sciences, we draw ...

  13. Challenges of Aligning Artificial Intelligence with Human Values

Challenges of Aligning Artificial Intelligence with Human Values. Margit Sutrop, Department of Philosophy, University of Tartu, Jakobi 2, Tartu 50090, Estonia. Email: [email protected]. Abstract ...

  14. How Do We Align Artificial Intelligence with Human Values?

    Value Alignment. Today, we start with the Value Alignment principle. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation. Stuart Russell, who helped pioneer the idea of value alignment, likes to compare this to the King Midas story.

  15. Aligning AI With Human Values and Interests: An Ethical Imperative

    The rapid development of artificial intelligence (AI) poses exciting possibilities as well as ethical challenges. As AI systems become more sophisticated and integrated into our lives, ensuring they align with human values and interests becomes imperative. But how can we achieve this goal? Alignment refers to developing AI that behaves in accordance with the preferences,

  16. How close are we to AI that surpasses human intelligence?

    July 18, 2023. Artificial general intelligence (AGI) is difficult to precisely define but refers to a superintelligent AI recognizable from science fiction. AGI may still be far off, but the ...

  17. AI undermining 'core human values' becomes target of €1.9m grant

    Researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) have been awarded nearly €2m to build a better understanding of how AI can undermine "core human values".

  18. Human dignity and AI: mapping the contours and utility of human dignity

    1 Sheila Jasanoff, The Ethics of Invention: Technology and the Human Future (WW Norton & Company, 1st edn 2016) 7 ('New technologies such as gene modification, artificial intelligence and robotics have the potential to infringe on human dignity and compromise core values of being human.').

  19. The Dangers Of Not Aligning Artificial Intelligence With Human Values

    In artificial intelligence (AI), the "alignment problem" refers to the challenges caused by the fact that machines simply do not have the same values as us. In fact, when it comes to values ...

  20. The importance of humanizing AI: using a behavioral lens to ...

The concept of artificial intelligence (AI) has been around since antiquity. It is clear from investigative literature (e.g., [2, 3]), popular culture, and even ancient philosophers that humans have long been intrigued by the idea of creating artificial life, be it from stone or machines, with some sort of intelligence to help, serve, or protect human life.

  21. Designing AI for Human Values

    Abstract. Artificial intelligence (AI) is increasingly affecting our lives in smaller or greater ways. In order to ensure that systems will uphold human values, design methods are needed that incorporate ethical principles and address societal concerns. In this paper, we explore the impact of AI in the case of the expected effects on the ...

  22. [2202.13985] The dangers in algorithms learning humans' values and

    The dangers in algorithms learning humans' values and irrationalities. Rebecca Gorman, Stuart Armstrong. For an artificial intelligence (AI) to be aligned with human values (or human preferences), it must first learn those values. AI systems that are trained on human behavior, risk miscategorising human irrationalities as human values -- and ...

  23. Human values, as well as AI, must be at the core of the future of work

    At the same time, automation and the acceleration of artificial intelligence (AI) technology risk spreading fear and anxiety among workers. The UK is experiencing new forms of polarisation between ...

  24. [2404.11408] DUPE: Detection Undermining via Prompt Engineering for

    DUPE: Detection Undermining via Prompt Engineering for Deepfake Text. As large language models (LLMs) become increasingly commonplace, concern about distinguishing between human and AI text increases as well. The growing power of these models is of particular concern to teachers, who may worry that students will use LLMs to write school ...