1984: Thread

grarpamp grarpamp at gmail.com
Thu Feb 2 21:44:58 PST 2023


Ethics of artificial intelligence
From Wikipedia, the free encyclopedia

The ethics of artificial intelligence is the branch of the ethics of
technology specific to artificially intelligent systems.[1] It is
sometimes divided into a concern with the moral behavior of humans as
they design, make, use and treat artificially intelligent systems, and
a concern with the behavior of the machines themselves, known as
machine ethics. It also includes the issue of a possible singularity
due to superintelligent AI.
Ethics fields' approaches
Robot ethics
Main article: Robot ethics

The term "robot ethics" (sometimes "roboethics") refers to the
morality of how humans design, construct, use and treat robots.[2]
Robot ethics intersects with the ethics of AI. Robots are physical
machines, whereas AI can exist as software alone.[3] Not all robots function
through AI systems and not all AI systems are robots. Robot ethics
considers how machines may be used to harm or benefit humans, their
impact on individual autonomy, and their effects on social justice.
Machine ethics
Main article: Machine ethics

Machine ethics (or machine morality) is the field of research
concerned with designing Artificial Moral Agents (AMAs), robots or
artificially intelligent computers that behave morally or as though
moral.[4][5][6][7] To account for the nature of these agents, it has
been suggested to consider certain philosophical ideas, like the
standard characterizations of agency, rational agency, moral agency,
and artificial agency, which are related to the concept of AMAs.[8]

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the
insistence of his editor John W. Campbell Jr., he proposed the Three
Laws of Robotics to govern artificially intelligent systems. Much of
his work was then spent testing the boundaries of his three laws to
see where they would break down, or where they would create
paradoxical or unanticipated behavior. His work suggests that no set
of fixed laws can sufficiently anticipate all possible
circumstances.[9] More recently, academics and many governments have
challenged the idea that AI can itself be held accountable.[10] A
panel convened by the United Kingdom in 2010 revised Asimov's laws to
clarify that AI is the responsibility either of its manufacturers, or
of its owner/operator.[11]

In 2009, during an experiment at the Laboratory of Intelligent Systems
in the Ecole Polytechnique Fédérale of Lausanne, Switzerland, robots
that were programmed to cooperate with each other (in searching out a
beneficial resource and avoiding a poisonous one) eventually learned
to lie to each other in an attempt to hoard the beneficial
resource.[12]

Some experts and academics have questioned the use of robots for
military combat, especially when such robots are given some degree of
autonomous functions.[13] The US Navy has funded a report which
indicates that as military robots become more complex, there should be
greater attention to implications of their ability to make autonomous
decisions.[14][15] The President of the Association for the
Advancement of Artificial Intelligence has commissioned a study to
look at this issue.[16] They point to programs like the Language
Acquisition Device which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers
are smarter than humans. He calls this "the Singularity".[17] He
suggests that it may be somewhat or possibly very dangerous for
humans.[18] This is discussed by a philosophy called
Singularitarianism. The Machine Intelligence Research Institute has
suggested a need to build "Friendly AI", meaning that the advances
which are already occurring with AI should also include an effort to
make AI intrinsically friendly and humane.[19]

There have been discussions about creating tests to see whether an AI
is capable of making ethical decisions. Alan Winfield concludes that
the Turing test
is flawed and the requirement for an AI to pass the test is too
low.[20] A proposed alternative test is one called the Ethical Turing
Test, which would improve on the current test by having multiple
judges decide if the AI's decision is ethical or unethical.[20]
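
A minimal sketch of the aggregation step such an Ethical Turing Test
implies: several human judges independently label the AI's decision as
ethical or not, and the verdicts are combined. The majority-rule
threshold and the function below are illustrative assumptions, not
part of the cited proposal.

    # Illustrative only: aggregate independent judge verdicts by simple majority.
    def ethical_turing_test(judge_verdicts):
        """judge_verdicts: list of booleans, True meaning 'the decision was ethical'."""
        approvals = sum(judge_verdicts)
        return approvals > len(judge_verdicts) / 2  # assumed majority rule

    print(ethical_turing_test([True, True, False, True, False]))  # True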

In 2009, academics and technical experts attended a conference
organized by the Association for the Advancement of Artificial
Intelligence to discuss the potential impact of robots and computers
and the impact of the hypothetical possibility that they could become
self-sufficient and able to make their own decisions. They discussed
the possibility and the extent to which computers and robots might be
able to acquire any level of autonomy, and to what degree they could
use such abilities to possibly pose any threat or hazard. They noted
that some machines have acquired various forms of semi-autonomy,
including being able to find power sources on their own and being able
to independently choose targets to attack with weapons. They also
noted that some computer viruses can evade elimination and have
achieved "cockroach intelligence". They noted that self-awareness as
depicted in science-fiction is probably unlikely, but that there were
other potential hazards and pitfalls.[17]

However, there is one technology in particular that could truly bring
the possibility of robots with moral competence to reality. In a paper
on the acquisition of moral values by robots, Nayef Al-Rodhan mentions
the case of neuromorphic chips, which aim to process information
similarly to humans, nonlinearly and with millions of interconnected
artificial neurons.[21] Robots embedded with neuromorphic technology
could learn and develop knowledge in a uniquely humanlike way.
Inevitably, this raises the question of the environment in which such
robots would learn about the world and whose morality they would
inherit – or whether they would end up developing human 'weaknesses' as
well: selfishness, a pro-survival attitude, hesitation, etc.

In Moral Machines: Teaching Robots Right from Wrong,[22] Wendell
Wallach and Colin Allen conclude that attempts to teach robots right
from wrong will likely advance understanding of human ethics by
motivating humans to address gaps in modern normative theory and by
providing a platform for experimental investigation. As one example,
it has introduced normative ethicists to the controversial issue of
which specific learning algorithms to use in machines. Nick Bostrom
and Eliezer Yudkowsky have argued for decision trees (such as ID3)
over neural networks and genetic algorithms on the grounds that
decision trees obey modern social norms of transparency and
predictability (e.g. stare decisis),[23] while Chris Santos-Lang
argued in the opposite direction on the grounds that the norms of any
age must be allowed to change and that natural failure to fully
satisfy these particular norms has been essential in making humans
less vulnerable to criminal "hackers".[24]
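
The transparency argument can be made concrete with a short sketch: a
trained decision tree can be dumped as explicit if/then rules that a
reviewer can audit, whereas a trained neural network offers no
comparable human-readable trace. The sketch below assumes scikit-learn
(whose trees are CART-style rather than ID3 itself) and uses invented
toy data.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented toy data: [years_experience, has_relevant_degree] -> hired (1) or not (0)
    X = [[0, 0], [1, 0], [2, 1], [3, 1], [5, 0], [6, 1]]
    y = [0, 0, 0, 1, 1, 1]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Print the learned rules as text so each decision path can be audited.
    print(export_text(tree, feature_names=["years_experience", "has_relevant_degree"]))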

According to a 2019 report from the Center for the Governance of AI at
the University of Oxford, 82% of Americans believe that robots and AI
should be carefully managed. Concerns cited ranged from how AI is used
in surveillance and in spreading fake content online (known as deep
fakes when they include doctored video images and audio generated with
help from AI) to cyberattacks, infringements on data privacy, hiring
bias, autonomous vehicles, and drones that do not require a human
controller.[25]
Ethics principles of artificial intelligence

A review of 84 ethics guidelines for AI found 11 clusters of
principles: transparency, justice and fairness, non-maleficence,
responsibility, privacy, beneficence, freedom and autonomy, trust,
sustainability, dignity, and solidarity.[26]

Luciano Floridi and Josh Cowls created an ethical framework for AI
built on four principles of bioethics (beneficence, non-maleficence,
autonomy and justice) and an additional AI-enabling principle:
explicability.[27]
Transparency, accountability, and open source

Bill Hibbard argues that because AI will have such a profound effect
on humanity, AI developers are representatives of future humanity and
thus have an ethical obligation to be transparent in their
efforts.[28] Ben Goertzel and David Hart created OpenCog as an open
source framework for AI development.[29] OpenAI is a non-profit AI
research company created by Elon Musk, Sam Altman and others to
develop open-source AI beneficial to humanity.[30] There are numerous
other open-source AI developments.

Unfortunately, making code open source does not make it
comprehensible, which by many definitions means that the AI code is
not transparent. The IEEE has a standardisation effort on AI
transparency.[31] The IEEE effort identifies multiple scales of
transparency for different users. Further, there is concern that
releasing the full capacity of contemporary AI to some organizations
may be a public bad, that is, do more damage than good. For example,
Microsoft has expressed concern about allowing universal access to its
face recognition software, even for those who can pay for it.
Microsoft published a blog post on this topic, asking for government
regulation to help determine the right thing to do.[32]

Not only companies, but many other researchers and citizen advocates
recommend government regulation as a means of ensuring transparency,
and through it, human accountability. This strategy has proven
controversial, as some worry that it will slow the rate of innovation.
Others argue that regulation leads to systemic stability more able to
support innovation in the long term.[33] The OECD, UN, EU, and many
countries are presently working on strategies for regulating AI, and
finding appropriate legal frameworks.[34][35][36]

On June 26, 2019, the European Commission High-Level Expert Group on
Artificial Intelligence (AI HLEG) published its "Policy and investment
recommendations for trustworthy Artificial Intelligence".[37] This is
the AI HLEG's second deliverable, after the April 2019 publication of
the "Ethics Guidelines for Trustworthy AI". The June AI HLEG
recommendations cover four principal subjects: humans and society at
large, research and academia, the private sector, and the public
sector. The European Commission claims that "HLEG's recommendations
reflect an appreciation of both the opportunities for AI technologies
to drive economic growth, prosperity and innovation, as well as the
potential risks involved" and states that the EU aims to lead on the
framing of policies governing AI internationally.[38] To prevent harm,
in addition to regulation, AI-deploying organizations need to play a
central role in creating and deploying trustworthy AI in line with the
principles of trustworthy AI, and take accountability to mitigate the
risks.[39]
Ethical challenges
Biases in AI systems
Main article: Algorithmic bias
Then-US Senator Kamala Harris speaking about racial bias in artificial
intelligence in 2020

AI has become increasingly integral to facial and voice recognition
systems. Some of these systems have real business applications and
directly impact people. These systems are vulnerable to biases and
errors introduced by their human creators. Also, the data used to train
these AI systems can itself be biased.[40][41][42][43] For instance,
facial recognition algorithms made by Microsoft, IBM and Face++ all
had biases when it came to detecting people's gender;[44] these AI
systems were able to detect the gender of white men more accurately
than that of men with darker skin. Further, a 2020 study that reviewed
voice recognition systems from Amazon, Apple, Google, IBM, and
Microsoft found that they had higher error rates when transcribing
black people's voices than white people's.[45] Furthermore, Amazon
terminated its use of AI for hiring and recruitment because the
algorithm favored male candidates over female ones. This was because
Amazon's system was trained with data collected over a 10-year period
that came mostly from male candidates.[46]

Bias can creep into algorithms in many ways. The most predominant view
on how bias is introduced into AI systems is that it is embedded
within the historical data used to train the system. For instance,
Amazon's AI-powered recruitment tool was trained with its own
recruitment data accumulated over the years, during which time the
candidates that successfully got the job were mostly white males.
Consequently, the algorithm learned the (biased) pattern from the
historical data and predicted that such candidates were the most
likely to succeed in getting the job. Therefore, the recruitment
decisions made by the AI system turned out to be biased against female
and minority candidates. Friedman and
Nissenbaum identify three categories of bias in computer systems:
existing bias, technical bias, and emergent bias.[47] In natural
language processing, problems can arise from the text corpus — the
source material the algorithm uses to learn about the relationships
between different words.[48]
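
The mechanism described above, bias embedded in historical data
reappearing in a model's predictions, can be illustrated with a toy
sketch. The data and the naive per-group "model" below are invented
for illustration and are not Amazon's system or data.

    from collections import Counter

    # Hypothetical historical outcomes: (gender, hired)
    history = [("m", 1)] * 80 + [("m", 0)] * 20 + [("f", 1)] * 20 + [("f", 0)] * 80

    # A naive "model" that simply learns the majority outcome for each group.
    by_group = {}
    for gender, hired in history:
        by_group.setdefault(gender, Counter())[hired] += 1
    model = {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

    print(model)  # {'m': 1, 'f': 0}: the historical disparity becomes a decision rule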

Large companies such as IBM and Google have made efforts to
research and address these biases.[49][50][51] One solution for
addressing bias is to create documentation for the data used to train
AI systems.[52][53] Process mining can be an important tool for
organizations to achieve compliance with proposed AI regulations by
identifying errors, monitoring processes, identifying potential root
causes for improper execution, and other functions.[54]
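
One hedged sketch of what such dataset documentation might look like
in practice, in the spirit of datasheets for datasets and data
statements; the fields shown are an illustrative subset chosen here,
not the published specifications.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DatasetDatasheet:
        name: str
        collection_period: str
        collection_method: str
        known_skews: List[str] = field(default_factory=list)  # e.g. demographic imbalances
        intended_uses: List[str] = field(default_factory=list)
        prohibited_uses: List[str] = field(default_factory=list)

    # Hypothetical example entry for a hiring dataset.
    sheet = DatasetDatasheet(
        name="historical-hiring-records",
        collection_period="2008-2018",
        collection_method="export from an internal applicant tracking system",
        known_skews=["about 80% of labeled hires are male"],
        intended_uses=["auditing past decisions"],
        prohibited_uses=["automated screening without human review"],
    )
    print(sheet.known_skews)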

The problem of bias in machine learning is likely to become more
significant as the technology spreads to critical areas like medicine
and law, and as more people without a deep technical understanding are
tasked with deploying it. Some experts warn that algorithmic bias is
already pervasive in many industries and that almost no one is making
an effort to identify or correct it.[55] There are some open-source
tools[56] from civil society organizations that aim to bring more
awareness to biased AI.
Robot rights

"Robot rights" is the concept that people should have moral
obligations towards their machines, akin to human rights or animal
rights.[57] It has been suggested that robot rights (such as a right
to exist and perform one's own mission) could be linked to a robot's
duty to serve humanity, analogous to linking human rights with human duties
before society.[58] These could include the right to life and liberty,
freedom of thought and expression, and equality before the law.[59]
The issue has been considered by the Institute for the Future[60] and
by the U.K. Department of Trade and Industry.[61]

Experts disagree on how soon specific and detailed laws on the subject
will be necessary.[61] Glenn McGee reported that sufficiently humanoid
robots might appear by 2020,[62] while Ray Kurzweil sets the date at
2029.[63] Another group of scientists meeting in 2007 supposed that at
least 50 years had to pass before any sufficiently advanced system
would exist.[64]

The rules for the 2003 Loebner Prize competition envisioned the
possibility of robots having rights of their own:

    61. If in any given year, a publicly available open-source Entry
entered by the University of Surrey or the Cambridge Center wins the
Silver Medal or the Gold Medal, then the Medal and the Cash Award will
be awarded to the body responsible for the development of that Entry.
If no such body can be identified, or if there is disagreement among
two or more claimants, the Medal and the Cash Award will be held in
trust until such time as the Entry may legally possess, either in the
United States of America or in the venue of the contest, the Cash
Award and Gold Medal in its own right.[65]

In October 2017, the android Sophia was granted "honorary" citizenship
in Saudi Arabia, though some considered this to be more of a publicity
stunt than a meaningful legal recognition.[66] Some saw this gesture
as openly denigrating of human rights and the rule of law.[67]

The philosophy of Sentientism grants degrees of moral consideration to
all sentient beings, primarily humans and most non-human animals. If
artificial or alien intelligences show evidence of being sentient, this
philosophy holds that they should be shown compassion and granted
rights.

Joanna Bryson has argued that creating AI that requires rights is both
avoidable and would in itself be unethical, imposing a burden both on
the AI agents and on human society.[68]
Threat to human dignity
Main article: Computer Power and Human Reason

Joseph Weizenbaum[69] argued in 1976 that AI technology should not be
used to replace people in positions that require respect and care,
such as:

    A customer service representative (AI technology is already used
today for telephone-based interactive voice response systems)
    A nursemaid for the elderly (as was reported by Pamela McCorduck
in her book The Fifth Generation)
    A soldier
    A judge
    A police officer
    A therapist (as was proposed by Kenneth Colby in the 70s)

Weizenbaum explains that we require authentic feelings of empathy from
people in these positions. If machines replace them, we will find
ourselves alienated, devalued and frustrated, for the artificially
intelligent system could only simulate empathy, not feel it. Artificial
intelligence, if used in this way, represents a threat to human
dignity. Weizenbaum argues that the fact that we are entertaining the
possibility of machines in these positions suggests that we have
experienced an "atrophy of the human spirit that comes from thinking
of ourselves as computers."[70]

Pamela McCorduck counters that, speaking for women and minorities, "I'd
rather take my chances with an impartial computer", pointing out that
there are conditions where we would prefer to have automated judges
and police that have no personal agenda at all.[70] However, Kaplan
and Haenlein stress that AI systems are only as smart as the data used
to train them since they are, in their essence, nothing more than
fancy curve-fitting machines; using AI to support a court ruling can
be highly problematic if past rulings show bias toward certain groups
since those biases get formalized and engrained, which makes them even
more difficult to spot and fight against.[71]

Weizenbaum was also bothered that AI researchers (and some
philosophers) were willing to view the human mind as nothing more than
a computer program (a position now known as computationalism). To
Weizenbaum, these points suggest that AI research devalues human
life.[69]

AI founder John McCarthy objects to the moralizing tone of
Weizenbaum's critique. "When moralizing is both vehement and vague, it
invites authoritarian abuse," he writes. Bill Hibbard[72] writes that
"Human dignity requires that we strive to remove our ignorance of the
nature of existence, and AI is necessary for that striving."
Liability for self-driving cars
Main article: Self-driving car liability

As the widespread use of autonomous cars becomes increasingly
imminent, new challenges raised by fully autonomous vehicles must be
addressed.[73][74] There has been debate as to the legal liability of
the responsible party if these cars get into accidents.[75][76] In one
report where a driverless car hit a pedestrian, the driver was inside
the car but the controls were fully in the hands of the computer. This
led to a dilemma over who was at fault for the accident.[77]

In another incident on March 18, 2018, Elaine Herzberg was struck and
killed by a self-driving Uber in Arizona. In this case, the automated
car was capable of detecting cars and certain obstacles in order to
autonomously navigate the roadway, but it could not anticipate a
pedestrian in the middle of the road. This raised the question of
whether the driver, pedestrian, the car company, or the government
should be held responsible for her death.[78]

Currently, self-driving cars are considered semi-autonomous, requiring
the driver to pay attention and be prepared to take control if
necessary.[79] Thus, it falls on governments to regulate drivers who
over-rely on autonomous features, as well as to educate them that
these are technologies that, while convenient, are not a complete
substitute. Before autonomous cars become widely used, these issues
need to be tackled through new policies.[80][81][82]
Weaponization of artificial intelligence
Main article: Lethal autonomous weapon

Some experts and academics have questioned the use of robots for
military combat, especially when such robots are given some degree of
autonomy.[13][83] On October 31, 2019, the United States Department of
Defense's Defense Innovation Board published the draft of a report
recommending principles for the ethical use of artificial intelligence
by the Department of Defense that would ensure a human operator would
always be able to look into the 'black box' and understand the
kill-chain process. However, a major concern is how the report will be
implemented.[84] The US Navy has funded a report which indicates that
as military robots become more complex, there should be greater
attention to implications of their ability to make autonomous
decisions.[85][15] Some researchers state that autonomous robots might
be more humane, as they could make decisions more effectively.[86]

Over the last decade, there has been intensive research into
autonomous systems with the ability to learn, and into how moral
responsibility for their actions should be assigned. "The results may
be used when designing future
military robots, to control unwanted tendencies to assign
responsibility to the robots."[87] From a consequentialist view, there
is a chance that robots will develop the ability to make their own
logical decisions on whom to kill and that is why there should be a
set moral framework that the AI cannot override.[88]

There has been a recent outcry with regard to the engineering of
artificial intelligence weapons, including fears of a robot takeover
of mankind. AI weapons do present a type of danger different from that
of human-controlled weapons. Many governments have begun to fund
programs to develop AI weaponry. The United States Navy recently
announced plans to develop autonomous drone weapons, paralleling
similar announcements by Russia and Korea. Due to the
potential of AI weapons becoming more dangerous than human-operated
weapons, Stephen Hawking and Max Tegmark signed a "Future of Life"
petition[89] to ban AI weapons. The message posted by Hawking and
Tegmark states that AI weapons pose an immediate danger and that
action is required to avoid catastrophic disasters in the near
future.[90]

"If any major military power pushes ahead with the AI weapon
development, a global arms race is virtually inevitable, and the
endpoint of this technological trajectory is obvious: autonomous
weapons will become the Kalashnikovs of tomorrow", says the petition,
which includes Skype co-founder Jaan Tallinn and MIT professor of
linguistics Noam Chomsky as additional supporters against AI
weaponry.[91]

Physicist and Astronomer Royal Sir Martin Rees has warned of
catastrophic instances like "dumb robots going rogue or a network that
develops a mind of its own." Huw Price, a colleague of Rees at
Cambridge, has voiced a similar warning that humans might not survive
when intelligence "escapes the constraints of biology". These two
professors created the Centre for the Study of Existential Risk at
Cambridge University in the hope of avoiding this threat to human
existence.[90]

Regarding the potential for smarter-than-human systems to be employed
militarily, the Open Philanthropy Project writes that these scenarios
"seem potentially as important as the risks related to loss of
control", but research investigating AI's long-run social impact have
spent relatively little time on this concern: "this class of scenarios
has not been a major focus for the organizations that have been most
active in this space, such as the Machine Intelligence Research
Institute (MIRI) and the Future of Humanity Institute (FHI), and there
seems to have been less analysis and debate regarding them".[92]
Opaque algorithms

Approaches like machine learning with neural networks can result in
computers making decisions that they and the humans who programmed
them cannot explain. It is difficult for people to determine if such
decisions are fair and trustworthy, leading potentially to bias in AI
systems going undetected, or people rejecting the use of such systems.
This has led to advocacy and in some jurisdictions legal requirements
for explainable artificial intelligence.[93]
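
One common mitigation is post-hoc explanation: approximating the
opaque model with a small, human-readable surrogate. The sketch below
assumes scikit-learn and random, purely illustrative data; it trains a
neural network as the "black box" and then fits a shallow decision
tree to the network's own predictions so the overall behavior can be
inspected as rules.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden "true" rule

    # Opaque model: its weights are not human-readable.
    black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                              random_state=0).fit(X, y)

    # Global surrogate: a small tree fitted to the black box's predictions.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(X, black_box.predict(X))
    print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))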
Singularity
Further information: Existential risk from artificial general
intelligence, Superintelligence, and Technological singularity

Many researchers have argued that, by way of an "intelligence
explosion", a self-improving AI could become so powerful that humans
would not be able to stop it from achieving its goals.[94] In his
paper "Ethical Issues in Advanced Artificial Intelligence" and
subsequent book Superintelligence: Paths, Dangers, Strategies,
philosopher Nick Bostrom argues that artificial intelligence has the
capability to bring about human extinction. He claims that general
superintelligence would be capable of independent initiative and of
making its own plans, and may therefore be more appropriately thought
of as an autonomous agent. Since artificial intellects need not share
our human motivational tendencies, it would be up to the designers of
the superintelligence to specify its original motivations. Because a
superintelligent AI would be able to bring about almost any possible
outcome and to thwart any attempt to prevent the implementation of its
goals, many uncontrolled unintended consequences could arise. It could
kill off all other agents, persuade them to change their behavior, or
block their attempts at interference.[95]

However, Bostrom has also asserted that, instead of overwhelming the
human race and leading to our destruction, superintelligence could
help us solve many difficult problems such as disease, poverty, and
environmental destruction, and could help us to "enhance"
ourselves.[96]

The sheer complexity of human value systems makes it very difficult to
make AI's motivations human-friendly.[94][95] Unless moral philosophy
provides us with a flawless ethical theory, an AI's utility function
could allow for many potentially harmful scenarios that conform with a
given ethical framework but not "common sense". According to Eliezer
Yudkowsky, there is little reason to suppose that an artificially
designed mind would have such an adaptation.[97] AI researchers such
as Stuart J. Russell,[98] Bill Hibbard,[72] Roman Yampolskiy,[99]
Shannon Vallor,[100] Steven Umbrello[101] and Luciano Floridi[102]
have proposed design strategies for developing beneficial machines.
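
A toy sketch of the utility-function point above: an objective that
looks reasonable on paper can be satisfied in a way that violates
common sense, because the function only "sees" the proxy it was given.
Everything below (the actions, the metrics, the numbers) is invented
for illustration.

    # Two hypothetical actions and their effects on a true goal vs. a proxy metric.
    actions = {
        "actually_help_users":   {"true_satisfaction": 8, "reported_satisfaction": 8},
        "suppress_bad_feedback": {"true_satisfaction": 2, "reported_satisfaction": 10},
    }

    # The utility function optimizes only the proxy metric it was given.
    best = max(actions, key=lambda a: actions[a]["reported_satisfaction"])
    print(best)  # 'suppress_bad_feedback': formally optimal, contrary to common sense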
Actors in AI ethics

There are many organisations concerned with AI ethics and policy,
public and governmental as well as corporate and societal.

Amazon, Google, Facebook, IBM, and Microsoft have established a
non-profit, The Partnership on AI to Benefit People and Society, to
formulate best practices on artificial intelligence technologies,
advance the public's understanding, and to serve as an open platform
for discussion about artificial intelligence. Apple joined in January
2017. The corporate
members will make financial and research contributions to the group,
while engaging with the scientific community to bring academics onto
the board.[103]

The IEEE put together a Global Initiative on Ethics of Autonomous and
Intelligent Systems which has been creating and revising guidelines
with the help of public input, and accepts as members many
professionals from both inside and outside the organization.

Traditionally, government has been used by societies to ensure ethics
are observed through legislation and policing. There are now many
efforts by national governments, as well as transnational government
and non-government organizations to ensure AI is ethically applied.
Intergovernmental initiatives

    The European Commission has a High-Level Expert Group on
Artificial Intelligence. On 8 April 2019, this published its "Ethics
Guidelines for Trustworthy Artificial Intelligence".[104] The European
Commission also has a Robotics and Artificial Intelligence Innovation
and Excellence unit, which published a white paper on excellence and
trust in artificial intelligence innovation on 19 February 2020.[105]
    The OECD established an OECD AI Policy Observatory.[106]

Governmental initiatives

    In the United States the Obama administration put together a
Roadmap for AI Policy.[107] The Obama administration released two
prominent white papers on the future and impact of AI. In 2019 the
White House, through an executive memo known as the "American AI
Initiative", instructed the National Institute of Standards and
Technology (NIST) to begin work on Federal Engagement of AI Standards
(February 2019).[108]
    In January 2020, in the United States, the Trump Administration
released a draft executive order issued by the Office of Management
and Budget (OMB) on "Guidance for Regulation of Artificial
Intelligence Applications" ("OMB AI Memorandum"). The order emphasizes
the need to invest in AI applications, boost public trust in AI,
reduce barriers for usage of AI, and keep American AI technology
competitive in a global market. There is a nod to the need for privacy
concerns, but no further detail on enforcement. The advancement of
American AI technology seems to be the focus and priority.
Additionally, federal entities are even encouraged to use the order to
circumvent any state laws and regulations that a market might see as
too onerous to fulfill.[109]
    The Computing Community Consortium (CCC) weighed in with a
100-plus page draft report[110] – A 20-Year Community Roadmap for
Artificial Intelligence Research in the US[111]
    The Center for Security and Emerging Technology advises US
policymakers on the security implications of emerging technologies
such as AI.
    The Non-Human Party is running for election in New South Wales,
with policies around granting rights to robots, animals and, more
generally, non-human entities whose intelligence has been
overlooked.[112]
    In Russia, the first-ever Russian "Codex of ethics of artificial
intelligence" for business was signed in 2021. It was driven by
Analytical Center for the Government of the Russian Federation
together with major commercial and academic institutions such as
Sberbank, Yandex, Rosatom, Higher School of Economics, Moscow
Institute of Physics and Technology, ITMO University, Nanosemantics,
Rostelecom, CIAN and others.[113]

Academic initiatives

    There are three research institutes at the University of Oxford
that are centrally focused on AI ethics. The Future of Humanity
Institute focuses both on AI safety[114] and the governance of
AI.[115] The Institute for Ethics in AI, directed by John Tasioulas,
has as a primary goal, among others, to promote AI ethics as a field
proper in comparison to related applied ethics fields. The Oxford
Internet Institute, directed by Luciano Floridi, focuses on the ethics
of near-term AI technologies and ICTs.[116]
    The Centre for Digital Governance at the Hertie School in Berlin
was co-founded by Joanna Bryson to research questions of ethics and
technology.[117]
    The AI Now Institute at NYU is a research institute studying the
social implications of artificial intelligence. Its interdisciplinary
research focuses on the themes of bias and inclusion, labour and
automation, rights and liberties, and safety and civil
infrastructure.[118]
    The Institute for Ethics and Emerging Technologies (IEET)
researches the effects of AI on unemployment,[119][120] and policy.
    The Institute for Ethics in Artificial Intelligence (IEAI) at the
Technical University of Munich directed by Christoph Lütge conducts
research across various domains such as mobility, employment,
healthcare and sustainability.[121]

Private organizations

    Algorithmic Justice League[122]
    Black in AI[123]
    Data for Black Lives[124]
    Queer in AI[123]

Role and impact of fiction
Main article: Artificial intelligence in fiction

The role of fiction with regard to AI ethics has been a complex one.
One can distinguish three levels at which fiction has impacted the
development of artificial intelligence and robotics: historically,
fiction has prefigured common tropes that have not only influenced
goals and visions for AI, but also outlined ethical questions and
common fears associated with it. During the second half of the
twentieth century and the first decades of the twenty-first century,
popular culture, in particular movies, TV series and video games, has
frequently echoed preoccupations and dystopian projections around
ethical questions concerning AI and robotics. Recently, these themes
have also been increasingly treated in literature beyond the realm of
science fiction. And, as Carme Torras, research professor at the
Institut de Robòtica i Informàtica Industrial (Institute of robotics
and industrial computing) at the Technical University of Catalonia
notes,[125] in higher education, science fiction is also increasingly
used for teaching technology-related ethical issues in technological
degrees.
History

Historically speaking, the investigation of moral and ethical
implications of "thinking machines" goes back at least to the
Enlightenment: Leibniz already poses the question of whether we might
attribute intelligence to a mechanism that behaves as if it were a
sentient being,[126] and so does Descartes, who describes what could
be considered an early version of the Turing test.[127]

The Romantic period several times envisioned artificial creatures
that escape the control of their creator, with dire consequences, most
famously in Mary Shelley's Frankenstein. The widespread preoccupation
with industrialization and mechanization in the 19th and early 20th
centuries, however, brought the ethical implications of unchecked
technical developments to the forefront of fiction: R.U.R (Rossum's
Universal Robots), Karel Čapek's play about sentient robots endowed
with emotions and used as slave labor, is not only credited with the
invention of the term 'robot' (derived from the Czech word for forced
labor, robota) but was also an international success after it
premiered in 1921.
George Bernard Shaw's play Back to Methuselah, published in 1921,
questions at one point the validity of thinking machines that act like
humans; Fritz Lang's 1927 film Metropolis shows an android leading the
uprising of the exploited masses against the oppressive regime of a
technocratic society.
Impact on technological development

While the anticipation of a future dominated by potentially
indomitable technology has fueled the imagination of writers and film
makers for a long time, one question has been less frequently
analyzed, namely, to what extent fiction has played a role in
providing inspiration for technological development. It has been
documented, for instance, that the young Alan Turing saw and
appreciated G.B. Shaw's play Back to Methuselah in 1933[128] (just 3
years before the publication of his first seminal paper[129] which
laid the groundwork for the digital computer), and he would likely
have been at least aware of plays like R.U.R., which was an
international success and translated into many languages.

One might also ask what role science fiction played in
establishing the tenets and ethical implications of AI development:
Isaac Asimov conceptualized his Three Laws of Robotics in the 1942
short story "Runaround", part of the short story collection I, Robot;
Arthur C. Clarke's short "The Sentinel", on which Stanley Kubrick's
film 2001: A Space Odyssey is based, was written in 1948 and published
in 1952. Another example (among many others) would be Philip K. Dick's
numerous short stories and novels – in particular Do Androids Dream of
Electric Sheep?, published in 1968, and featuring its own version of a
Turing Test, the Voight-Kampff Test, to gauge emotional responses of
androids indistinguishable from humans. The novel later became the
basis of the influential 1982 movie Blade Runner by Ridley Scott.

Science fiction has been grappling with ethical implications of AI
developments for decades, and thus provided a blueprint for ethical
issues that might emerge once something akin to general artificial
intelligence has been achieved: Spike Jonze's 2013 film Her shows what
can happen if a user falls in love with the seductive voice of his
smartphone operating system; Ex Machina, on the other hand, asks a
more difficult question: if confronted with a clearly recognizable
machine, made only human by a face and an empathetic and sensual
voice, would we still be able to establish an emotional connection,
still be seduced by it? (The film echoes a theme already present two
centuries earlier, in the 1817 short story "The Sandmann" by E.T.A.
Hoffmann.)

Coexistence with artificial sentient beings is also the theme of two
recent novels: Machines Like Me by Ian McEwan, published in 2019,
features (among many other things) a love triangle involving an
artificial person as well as a human couple. Klara and the Sun by
Nobel Prize winner Kazuo Ishiguro, published in 2021, is the
first-person account of Klara, an 'AF' (artificial friend), who is
trying, in her own way, to help the girl she is living with, who,
after having been 'lifted' (i.e. having been subjected to genetic
enhancements), is suffering from a strange illness.
TV series

While ethical questions linked to AI have been featured in science
fiction literature and feature films for decades, the emergence of the
TV series as a genre allowing for longer and more complex story lines
and character development has led to some significant contributions
that deal with ethical implications of technology. The Swedish series
Real Humans (2012–2013) tackled the complex ethical and social
consequences linked to the integration of artificial sentient beings
in society. The British dystopian science fiction anthology series
Black Mirror (2013–2019) was particularly notable for experimenting
with dystopian fictional developments linked to a wide variety of
recent technology developments. Both the French series Osmosis (2020)
and the British series The One deal with the question of what can
happen if technology tries to find the ideal partner for a person.
Several episodes of the Netflix series Love, Death+Robots have
imagined scenes of robots and humans living together. The most
representative of them is S02 E01, which shows how bad the
consequences can be when robots get out of control and humans rely on
them too much in their lives.[130]
Future visions in fiction and games

The movie The Thirteenth Floor suggests a future where simulated
worlds with sentient inhabitants are created by computer game consoles
for the purpose of entertainment. The movie The Matrix suggests a
future where the dominant species on planet Earth are sentient
machines and humanity is treated with utmost speciesism. The short
story "The Planck Dive" suggests a future where humanity has turned
itself into software that can be duplicated and optimized, and where
the relevant distinction between types of software is whether it is
sentient or non-sentient. The same idea can be found in the Emergency
Medical Hologram of the starship Voyager, which is an apparently
sentient copy of
a reduced subset of the consciousness of its creator, Dr. Zimmerman,
who, for the best motives, has created the system to give medical
assistance in case of emergencies. The movies Bicentennial Man and
A.I. deal with the possibility of sentient robots that could love. I,
Robot explored some aspects of Asimov's three laws. All these
scenarios try to foresee possibly unethical consequences of the
creation of sentient computers.[131]

The ethics of artificial intelligence is one of several core themes in
BioWare's Mass Effect series of games.[132] It explores the scenario
of a civilization accidentally creating AI through a rapid increase in
computational power via a global-scale neural network. This event
caused an ethical schism between those who felt bestowing organic
rights upon the newly sentient Geth was appropriate and those who
continued to see them as disposable machinery and fought to destroy
them. Beyond the initial conflict, the complexity of the relationship
between the machines and their creators is another ongoing theme
throughout the story.

Detroit: Become Human is one of the best-known recent video games to
address the ethics of artificial intelligence. Quantic Dream designed
the game's chapters around interactive storylines to give players a
more immersive gaming experience. Players control three awakened
androids who face different events and make choices intended to change
how humans view androids; different choices lead to different endings.
This is one of the few games that puts players in the perspective of
the androids, which allows them to better consider the rights and
interests of robots once a true artificial intelligence is
created.[133]

Over time, debates have tended to focus less and less on possibility
and more on desirability,[134] as emphasized in the "Cosmist" and
"Terran" debates initiated by Hugo de Garis and Kevin Warwick. A
Cosmist, according to Hugo de Garis, is actually seeking to build more
intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is
portrayed in fiction and nonfiction overwhelmingly as racially White,
in ways that distort perceptions of its risks and benefits.[135]
See also

    AI takeover
    Artificial consciousness
    Artificial general intelligence (AGI)
    Computer ethics
    Effective altruism, the long term future and global catastrophic risks
    Existential risk from artificial general intelligence
    Human Compatible
    Personhood
    Philosophy of artificial intelligence
    Regulation of artificial intelligence
    Robotic Governance
    Superintelligence: Paths, Dangers, Strategies

Researchers

    Timnit Gebru
    Joy Buolamwini
    Deb Raji
    Ruha Benjamin
    Safiya Noble
    Margaret Mitchell
    Meredith Whittaker
    Alison Adam
    Seth Baum
    Nick Bostrom
    Joanna Bryson
    Kate Crawford
    Kate Darling
    Luciano Floridi
    Anja Kaspersen
    Michael Kearns
    Ray Kurzweil
    Catherine Malabou
    Ajung Moon
    Vincent C. Müller
    Peter Norvig
    Steve Omohundro
    Stuart J. Russell
    Anders Sandberg
    Mariarosaria Taddeo
    John Tasioulas
    Roman Yampolskiy
    Eliezer Yudkowsky
    Emily M. Bender

Organisations

    Center for Human-Compatible Artificial Intelligence
    Center for Security and Emerging Technology
    Centre for the Study of Existential Risk
    Future of Humanity Institute
    Future of Life Institute
    Machine Intelligence Research Institute
    Partnership on AI
    Leverhulme Centre for the Future of Intelligence
    Institute for Ethics and Emerging Technologies
    Oxford Internet Institute

Notes

    Müller, Vincent C. (30 April 2020). "Ethics of Artificial
Intelligence and Robotics". Stanford Encyclopedia of Philosophy.
Archived from the original on 10 October 2020. Retrieved 26 September
2020.
    Veruggio, Gianmarco (2011). "The Roboethics Roadmap". EURON
Roboethics Atelier. Scuola di Robotica: 2. CiteSeerX 10.1.1.466.2810.
    Müller, Vincent C. (2020), "Ethics of Artificial Intelligence and
Robotics", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of
Philosophy (Winter 2020 ed.), Metaphysics Research Lab, Stanford
University, retrieved 2021-03-18
    Anderson. "Machine Ethics". Archived from the original on 28
September 2011. Retrieved 27 June 2011.
    Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011).
Machine Ethics. Cambridge University Press. ISBN 978-0-521-11235-2.
    Anderson, M.; Anderson, S.L. (July 2006). "Guest Editors'
Introduction: Machine Ethics". IEEE Intelligent Systems. 21 (4):
10–11. doi:10.1109/mis.2006.70. S2CID 9570832.
    Anderson, Michael; Anderson, Susan Leigh (15 December 2007).
"Machine Ethics: Creating an Ethical Intelligent Agent". AI Magazine.
28 (4): 15. doi:10.1609/aimag.v28i4.2065. S2CID 17033332.
    Boyles, Robert James M. (2017). "Philosophical Signposts for
Artificial Moral Agent Frameworks". Suri. 6 (2): 92–109.
    Asimov, Isaac (2008). I, Robot. New York: Bantam. ISBN 978-0-553-38256-3.
    Bryson, Joanna; Diamantis, Mihailis; Grant, Thomas (September
2017). "Of, for, and by the people: the legal lacuna of synthetic
persons". Artificial Intelligence and Law. 25 (3): 273–291.
doi:10.1007/s10506-017-9214-9.
    "Principles of robotics". UK's EPSRC. September 2010. Archived
from the original on 1 April 2018. Retrieved 10 January 2019.
    Evolving Robots Learn To Lie To Each Other Archived 2009-08-28 at
the Wayback Machine, Popular Science, August 18, 2009
    Call for debate on killer robots Archived 2009-08-07 at the
Wayback Machine, By Jason Palmer, Science and technology reporter, BBC
News, 8/3/09.
    Science New Navy-funded Report Warns of War Robots Going
"Terminator" Archived 2009-07-28 at the Wayback Machine, by Jason Mick
(Blog), dailytech.com, February 17, 2009.
    Navy report warns of robot uprising, suggests a strong moral
compass Archived 2011-06-04 at the Wayback Machine, by Joseph L.
Flatley engadget.com, Feb 18th 2009.
    AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study
Archived 2009-08-28 at the Wayback Machine, Association for the
Advancement of Artificial Intelligence, Accessed 7/26/09.
    Markoff, John (25 July 2009). "Scientists Worry Machines May
Outsmart Man". The New York Times. Archived from the original on 25
February 2017. Retrieved 24 February 2017.
    The Coming Technological Singularity: How to Survive in the
Post-Human Era Archived 2007-01-01 at the Wayback Machine, by Vernor
Vinge, Department of Mathematical Sciences, San Diego State
University, (c) 1993 by Vernor Vinge.
    Article at Asimovlaws.com Archived May 24, 2012, at the Wayback
Machine, July 2004, accessed 7/27/09.
    Winfield, A. F.; Michael, K.; Pitt, J.; Evers, V. (March 2019).
"Machine Ethics: The Design and Governance of Ethical AI and
Autonomous Systems [Scanning the Issue]". Proceedings of the IEEE. 107
(3): 509–517. doi:10.1109/JPROC.2019.2900622. ISSN 1558-2256. S2CID
77393713. Archived from the original on 2020-11-02. Retrieved
2020-11-21.
    Al-Rodhan, Nayef (7 December 2015). "The Moral Code". Archived
from the original on 2017-03-05. Retrieved 2017-03-04.
    Wallach, Wendell; Allen, Colin (November 2008). Moral Machines:
Teaching Robots Right from Wrong. USA: Oxford University Press. ISBN
978-0-19-537404-9.
    Bostrom, Nick; Yudkowsky, Eliezer (2011). "The Ethics of
Artificial Intelligence" (PDF). Cambridge Handbook of Artificial
Intelligence. Cambridge Press. Archived (PDF) from the original on
2016-03-04. Retrieved 2011-06-22.
    Santos-Lang, Chris (2002). "Ethics for Artificial Intelligences".
Archived from the original on 2014-12-25. Retrieved 2015-01-04.
    Howard, Ayanna. "The Regulation of AI – Should Organizations Be
Worried? | Ayanna Howard". MIT Sloan Management Review. Archived from
the original on 2019-08-14. Retrieved 2019-08-14.
    Jobin, Anna; Ienca, Marcello; Vayena, Effy (2 September 2020).
"The global landscape of AI ethics guidelines". Nature. 1 (9):
389–399. arXiv:1906.11668. doi:10.1038/s42256-019-0088-2. S2CID
201827642.
    Floridi, Luciano; Cowls, Josh (2 July 2019). "A Unified Framework
of Five Principles for AI in Society". Harvard Data Science Review. 1.
doi:10.1162/99608f92.8cd550d1. S2CID 198775713.
    Open Source AI. Archived 2016-03-04 at the Wayback Machine Bill
Hibbard. 2008 proceedings of the First Conference on Artificial
General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan Franklin.
    OpenCog: A Software Framework for Integrative Artificial General
Intelligence. Archived 2016-03-04 at the Wayback Machine David Hart
and Ben Goertzel. 2008 proceedings of the First Conference on
Artificial General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan
Franklin.
    Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial
Intelligence Free Archived 2016-04-27 at the Wayback Machine Cade
Metz, Wired 27 April 2016.
    "P7001 – Transparency of Autonomous Systems". P7001 – Transparency
of Autonomous Systems. IEEE. Archived from the original on 10 January
2019. Retrieved 10 January 2019..
    Thurm, Scott (July 13, 2018). "MICROSOFT CALLS FOR FEDERAL
REGULATION OF FACIAL RECOGNITION". Wired. Archived from the original
on May 9, 2019. Retrieved January 10, 2019.
    Bastin, Roland; Wantz, Georges (June 2017). "The General Data
Protection Regulation Cross-industry innovation" (PDF). Inside
magazine. Deloitte. Archived (PDF) from the original on 2019-01-10.
Retrieved 2019-01-10.
    "UN artificial intelligence summit aims to tackle poverty,
humanity's 'grand challenges'". UN News. 2017-06-07. Archived from the
original on 2019-07-26. Retrieved 2019-07-26.
    "Artificial intelligence – Organisation for Economic Co-operation
and Development". www.oecd.org. Archived from the original on
2019-07-22. Retrieved 2019-07-26.
    Anonymous (2018-06-14). "The European AI Alliance". Digital Single
Market – European Commission. Archived from the original on
2019-08-01. Retrieved 2019-07-26.
    European Commission High-Level Expert Group on AI (2019-06-26).
"Policy and investment recommendations for trustworthy Artificial
Intelligence". Shaping Europe’s digital future – European Commission.
Archived from the original on 2020-02-26. Retrieved 2020-03-16.
    "EU Tech Policy Brief: July 2019 Recap". Center for Democracy &
Technology. Archived from the original on 2019-08-09. Retrieved
2019-08-09.
    Curtis, Caitlin; Gillespie, Nicole; Lockey, Steven (2022-05-24).
"AI-deploying organizations are key to addressing 'perfect storm' of
AI risks". AI and Ethics: 1–9. doi:10.1007/s43681-022-00163-7. ISSN
2730-5961. PMC 9127285. PMID 35634256.
    Gabriel, Iason (2018-03-14). "The case for fairer algorithms –
Iason Gabriel". Medium. Archived from the original on 2019-07-22.
Retrieved 2019-07-22.
    "5 unexpected sources of bias in artificial intelligence".
TechCrunch. 10 December 2016. Archived from the original on
2021-03-18. Retrieved 2019-07-22.
    Knight, Will. "Google's AI chief says forget Elon Musk's killer
robots, and worry about bias in AI systems instead". MIT Technology
Review. Archived from the original on 2019-07-04. Retrieved
2019-07-22.
    Villasenor, John (2019-01-03). "Artificial intelligence and bias:
Four key challenges". Brookings. Archived from the original on
2019-07-22. Retrieved 2019-07-22.
    Lohr, Steve (9 February 2018). "Facial Recognition Is Accurate, if
You're a White Guy". The New York Times. Archived from the original on
9 January 2019. Retrieved 29 May 2019.
    Koenecke, Allison; Nam, Andrew; Lake, Emily; Nudell, Joe; Quartey,
Minnie; Mengesha, Zion; Toups, Connor; Rickford, John R.; Jurafsky,
Dan; Goel, Sharad (7 April 2020). "Racial disparities in automated
speech recognition". Proceedings of the National Academy of Sciences.
117 (14): 7684–7689. Bibcode:2020PNAS..117.7684K.
doi:10.1073/pnas.1915768117. PMC 7149386. PMID 32205437.
    "Amazon scraps secret AI recruiting tool that showed bias against
women". Reuters. 2018-10-10. Archived from the original on 2019-05-27.
Retrieved 2019-05-29.
    Friedman, Batya; Nissenbaum, Helen (July 1996). "Bias in computer
systems". ACM Transactions on Information Systems. 14 (3): 330–347.
doi:10.1145/230538.230561. S2CID 207195759.
    "Eliminating bias in AI". techxplore.com. Archived from the
original on 2019-07-25. Retrieved 2019-07-26.
    Olson, Parmy. "Google's DeepMind Has An Idea For Stopping Biased
AI". Forbes. Retrieved 2019-07-26.
    "Machine Learning Fairness | ML Fairness". Google Developers.
Archived from the original on 2019-08-10. Retrieved 2019-07-26.
    "AI and bias – IBM Research – US". www.research.ibm.com. Archived
from the original on 2019-07-17. Retrieved 2019-07-26.
    Bender, Emily M.; Friedman, Batya (December 2018). "Data
Statements for Natural Language Processing: Toward Mitigating System
Bias and Enabling Better Science". Transactions of the Association for
Computational Linguistics. 6: 587–604. doi:10.1162/tacl_a_00041.
    Gebru, Timnit; Morgenstern, Jamie; Vecchione, Briana; Vaughan,
Jennifer Wortman; Wallach, Hanna; Daumé III, Hal; Crawford, Kate
(2018). "Datasheets for Datasets". arXiv:1803.09010 [cs.DB].
    Pery, Andrew (2021-10-06). "Trustworthy Artificial Intelligence
and Process Mining: Challenges and Opportunities". DeepAI. Retrieved
2022-02-18.
    Knight, Will. "Google's AI chief says forget Elon Musk's killer
robots, and worry about bias in AI systems instead". MIT Technology
Review. Archived from the original on 2019-07-04. Retrieved
2019-07-26.
    "Where in the World is AI? Responsible & Unethical AI Examples".
Archived from the original on 2020-10-31. Retrieved 2020-10-28.
    Evans, Woody (2015). "Posthuman Rights: Dimensions of Transhuman
Worlds". Teknokultura. 12 (2). doi:10.5209/rev_TK.2015.v12.n2.49072.
    Sheliazhenko, Yurii (2017). "Artificial Personal Autonomy and
Concept of Robot Rights". European Journal of Law and Political
Sciences: 17–21. doi:10.20534/EJLPS-17-1-17-21. Archived from the
original on 14 July 2018. Retrieved 10 May 2017.
    The American Heritage Dictionary of the English Language, Fourth Edition
    "Robots could demand legal rights". BBC News. December 21, 2006.
Archived from the original on October 15, 2019. Retrieved January 3,
2010.
    Henderson, Mark (April 24, 2007). "Human rights for robots? We're
getting carried away". The Times Online. The Times of London. Archived
from the original on May 17, 2008. Retrieved May 2, 2010.
    McGee, Glenn. "A Robot Code of Ethics". The Scientist. Archived
from the original on 2020-09-06. Retrieved 2019-03-25.
    Kurzweil, Ray (2005). The Singularity is Near. Penguin Books. ISBN
978-0-670-03384-3.
    The Big Question: Should the human race be worried by the rise of
robots?, Independent Newspaper,
    Loebner Prize Contest Official Rules — Version 2.0 Archived
2016-03-03 at the Wayback Machine The competition was directed by
David Hamill and the rules were developed by members of the Robitron
Yahoo group.
    "Saudi Arabia bestows citizenship on a robot named Sophia". 26
October 2017. Archived from the original on 2017-10-27. Retrieved
2017-10-27.
    Vincent, James (30 October 2017). "Pretending to give a robot
citizenship helps no one". The Verge. Archived from the original on 3
August 2019. Retrieved 10 January 2019.
    Wilks, Yorick, ed. (2010). Close engagements with artificial
companions: key social, psychological, ethical and design issues.
Amsterdam: John Benjamins Pub. Co. ISBN 978-9027249944. OCLC
642206106.
        Weizenbaum, Joseph (1976). Computer Power and Human Reason.
San Francisco: W.H. Freeman & Company. ISBN 978-0-7167-0464-5.
        McCorduck, Pamela (2004), Machines Who Think (2nd ed.),
Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1, pp. 132–144
    Joseph Weizenbaum, quoted in McCorduck 2004, pp. 356, 374–376
    Kaplan, Andreas; Haenlein, Michael (January 2019). "Siri, Siri, in
my hand: Who's the fairest in the land? On the interpretations,
illustrations, and implications of artificial intelligence". Business
Horizons. 62 (1): 15–25. doi:10.1016/j.bushor.2018.08.004. S2CID
158433736.
    Hibbard, Bill (17 November 2015). "Ethical Artificial
Intelligence". arXiv:1411.1373 [cs.AI].
    Davies, Alex (29 February 2016). "Google's Self-Driving Car Caused
Its First Crash". Wired. Archived from the original on 7 July 2019.
Retrieved 26 July 2019.
    Levin, Sam; Wong, Julia Carrie (19 March 2018). "Self-driving Uber
kills Arizona woman in first fatal crash involving pedestrian". The
Guardian. Archived from the original on 26 July 2019. Retrieved 26
July 2019.
    "Who is responsible when a self-driving car has an accident?".
Futurism. Archived from the original on 2019-07-26. Retrieved
2019-07-26.
    "Autonomous Car Crashes: Who – or What – Is to Blame?". Knowledge
at Wharton. Archived from the original on 2019-07-26. Retrieved
2019-07-26.
    Delbridge, Emily. "Driverless Cars Gone Wild". The Balance.
Archived from the original on 2019-05-29. Retrieved 2019-05-29.
    Stilgoe, Jack (2020), "Who Killed Elaine Herzberg?", Who’s Driving
Innovation?, Cham: Springer International Publishing, pp. 1–6,
doi:10.1007/978-3-030-32320-2_1, ISBN 978-3-030-32319-6, S2CID
214359377, archived from the original on 2021-03-18, retrieved
2020-11-11
    Maxmen, Amy (October 2018). "Self-driving car dilemmas reveal that
moral choices are not universal". Nature. 562 (7728): 469–470.
Bibcode:2018Natur.562..469M. doi:10.1038/d41586-018-07135-0. PMID
30356197.
    "Regulations for driverless cars". GOV.UK. Archived from the
original on 2019-07-26. Retrieved 2019-07-26.
    "Automated Driving: Legislative and Regulatory Action –
CyberWiki". cyberlaw.stanford.edu. Archived from the original on
2019-07-26. Retrieved 2019-07-26.
    "Autonomous Vehicles | Self-Driving Vehicles Enacted Legislation".
www.ncsl.org. Archived from the original on 2019-07-26. Retrieved
2019-07-26.
    Robot Three-Way Portends Autonomous Future Archived 2012-11-07 at
the Wayback Machine, By David Axe wired.com, August 13, 2009.
    United States. Defense Innovation Board. AI principles:
recommendations on the ethical use of artificial intelligence by the
Department of Defense. OCLC 1126650738.
    New Navy-funded Report Warns of War Robots Going "Terminator"
Archived 2009-07-28 at the Wayback Machine, by Jason Mick (Blog),
dailytech.com, February 17, 2009.
    Umbrello, Steven; Torres, Phil; De Bellis, Angelo F. (March 2020).
"The future of war: could lethal autonomous weapons make conflict more
ethical?". AI & Society. 35 (1): 273–282.
doi:10.1007/s00146-019-00879-x. hdl:2318/1699364. ISSN 0951-5666.
S2CID 59606353. Archived from the original on 2021-01-05. Retrieved
2020-11-11.
    Hellström, Thomas (June 2013). "On the moral responsibility of
military robots". Ethics and Information Technology. 15 (2): 99–107.
doi:10.1007/s10676-012-9301-2. S2CID 15205810. ProQuest 1372020233.
    Mitra, Ambarish (5 April 2018). "We can train AI to identify good
and evil, and then use it to teach us morality". Quartz. Archived from
the original on 2019-07-26. Retrieved 2019-07-26.
    "AI Principles". Future of Life Institute. 11 August 2017.
Archived from the original on 2017-12-11. Retrieved 2019-07-26.
    Zach Musgrave and Bryan W. Roberts (2015-08-14). "Why Artificial
Intelligence Can Too Easily Be Weaponized – The Atlantic". The
Atlantic. Archived from the original on 2017-04-11. Retrieved
2017-03-06.
    Cat Zakrzewski (2015-07-27). "Musk, Hawking Warn of Artificial
Intelligence Weapons". WSJ. Archived from the original on 2015-07-28.
Retrieved 2017-08-04.
    GiveWell (2015). Potential risks from advanced artificial
intelligence (Report). Archived from the original on 12 October 2015.
Retrieved 11 October 2015.
    Inside The Mind Of A.I. - Cliff Kuang interview
    Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion
and Machine Ethics" Archived 2015-05-07 at the Wayback Machine. In
Singularity Hypotheses: A Scientific and Philosophical Assessment,
edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric
Steinhart. Berlin: Springer.
    Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial
Intelligence" Archived 2018-10-08 at the Wayback Machine. In
Cognitive, Emotive and Ethical Aspects of Decision Making in Humans
and in Artificial Intelligence, edited by Iva Smit and George E.
Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for
Advanced Studies in Systems Research / Cybernetics.
    Umbrello, Steven; Baum, Seth D. (2018-06-01). "Evaluating future
nanotechnology: The net societal impacts of atomically precise
manufacturing". Futures. 100: 63–73.
doi:10.1016/j.futures.2018.04.007. hdl:2318/1685533. ISSN 0016-3287.
S2CID 158503813. Archived from the original on 2019-05-09. Retrieved
2020-11-29.
    Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI"
Archived 2015-09-29 at the Wayback Machine. In Schmidhuber, Thórisson,
and Looks 2011, 388–393.
    Russell, Stuart (October 8, 2019). Human Compatible: Artificial
Intelligence and the Problem of Control. United States: Viking. ISBN
978-0-525-55861-3. OCLC 1083694322.
    Yampolskiy, Roman V. (2020-03-01). "Unpredictability of AI: On the
Impossibility of Accurately Predicting All Actions of a Smarter
Agent". Journal of Artificial Intelligence and Consciousness. 07 (1):
109–118. doi:10.1142/S2705078520500034. ISSN 2705-0785. S2CID
218916769. Archived from the original on 2021-03-18. Retrieved
2020-11-29.
    Wallach, Wendell; Vallor, Shannon (2020-09-17), "Moral Machines:
From Value Alignment to Embodied Virtue", Ethics of Artificial
Intelligence, Oxford University Press, pp. 383–412,
doi:10.1093/oso/9780190905033.003.0014, ISBN 978-0-19-090503-3,
archived from the original on 2020-12-08, retrieved 2020-11-29
    Umbrello, Steven (2019). "Beneficial Artificial Intelligence
Coordination by Means of a Value Sensitive Design Approach". Big Data
and Cognitive Computing. 3 (1): 5. doi:10.3390/bdcc3010005.
    Floridi, Luciano; Cowls, Josh; King, Thomas C.; Taddeo,
Mariarosaria (2020). "How to Design AI for Social Good: Seven
Essential Factors". Science and Engineering Ethics. 26 (3): 1771–1796.
doi:10.1007/s11948-020-00213-5. ISSN 1353-3452. PMC 7286860. PMID
32246245.
    Fiegerman, Seth (28 September 2016). "Facebook, Google, Amazon
create group to ease AI concerns". CNNMoney.
    "Ethics guidelines for trustworthy AI". Shaping Europe’s digital
future – European Commission. European Commission. 2019-04-08.
Archived from the original on 2020-02-20. Retrieved 2020-02-20.
    "White Paper on Artificial Intelligence – a European approach to
excellence and trust | Shaping Europe's digital future".
    "OECD AI Policy Observatory".
    "The Obama Administration's Roadmap for AI Policy". Harvard
Business Review. 2016-12-21. ISSN 0017-8012. Archived from the
original on 2021-01-22. Retrieved 2021-03-16.
    "Accelerating America's Leadership in Artificial Intelligence –
The White House". trumpwhitehouse.archives.gov. Archived from the
original on 2021-02-25. Retrieved 2021-03-16.
    "Request for Comments on a Draft Memorandum to the Heads of
Executive Departments and Agencies, "Guidance for Regulation of
Artificial Intelligence Applications"". Federal Register. 2020-01-13.
Archived from the original on 2020-11-25. Retrieved 2020-11-28.
    "CCC Offers Draft 20-Year AI Roadmap; Seeks Comments". HPCwire.
2019-05-14. Archived from the original on 2021-03-18. Retrieved
2019-07-22.
    "Request Comments on Draft: A 20-Year Community Roadmap for AI
Research in the US » CCC Blog". 13 May 2019. Archived from the
original on 2019-05-14. Retrieved 2019-07-22.
    "Non-Human Party". 2021.
    (in Russian) Интеллектуальные правила [Intelligent rules] — Kommersant, 25.11.2021
    Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans,
Owain (2018-05-03). "When Will AI Exceed Human Performance? Evidence
from AI Experts". arXiv:1705.08807 [cs.AI].
    "China wants to shape the global future of artificial
intelligence". MIT Technology Review. Archived from the original on
2020-11-20. Retrieved 2020-11-29.
    Floridi, Luciano; Cowls, Josh; Beltrametti, Monica; Chatila, Raja;
Chazerand, Patrice; Dignum, Virginia; Luetge, Christoph; Madelin,
Robert; Pagallo, Ugo; Rossi, Francesca; Schafer, Burkhard
(2018-12-01). "AI4People—An Ethical Framework for a Good AI Society:
Opportunities, Risks, Principles, and Recommendations". Minds and
Machines. 28 (4): 689–707. doi:10.1007/s11023-018-9482-5. ISSN
1572-8641. PMC 6404626. PMID 30930541.
    "Joanna J. Bryson". WIRED. Retrieved 13 January 2023.
    "New Artificial Intelligence Research Institute Launches".
2017-11-20. Archived from the original on 2020-09-18. Retrieved
2021-02-21.
    Hughes, James J.; LaGrandeur, Kevin, eds. (15 March 2017).
Surviving the machine age: intelligent technology and the
transformation of human work. Cham, Switzerland. ISBN
978-3-319-51165-8. OCLC 976407024. Archived from the original on 18
March 2021. Retrieved 29 November 2020.
    Danaher, John (2019). Automation and utopia: human flourishing in
a world without work. Cambridge, Massachusetts. ISBN
978-0-674-24220-3. OCLC 1114334813.
    "TUM Institute for Ethics in Artificial Intelligence officially
opened". www.tum.de. Archived from the original on 2020-12-10.
Retrieved 2020-11-29.
    Lee, Jennifer 8 (2020-02-08). "When Bias Is Coded Into Our
Technology". NPR. Retrieved 2021-12-22.
    "How one conference embraced diversity". Nature. 564 (7735):
161–162. 2018-12-12. doi:10.1038/d41586-018-07718-x. PMID 31123357.
S2CID 54481549.
    Roose, Kevin (2020-12-30). "The 2020 Good Tech Awards". The New
York Times. ISSN 0362-4331. Retrieved 2021-12-21.
    Torras, Carme (2020). "Science-Fiction: A Mirror for the Future of
Humankind". IDEES, Centre d'estudis de temes contemporanis (CETC),
Barcelona.
https://revistaidees.cat/en/science-fiction-favors-engaging-debate-on-artificial-intelligence-and-ethics/
Retrieved 2021-06-10.
    Gottfried Wilhelm Leibniz (1714): Monadology, § 17 ("Mill
Argument"). See also: Lodge, P. (2014): "Leibniz's Mill Argument:
Against Mechanical Materialism Revisited", in ERGO, Volume 1, No. 03.
https://quod.lib.umich.edu/e/ergo/12405314.0001.003/--leibniz-s-mill-argument-against-mechanical-materialism?rgn=main;view=fulltext
Retrieved 2021-06-10.
    Cited in Bringsjord, Selmer and Naveen Sundar Govindarajulu,
"Artificial Intelligence", The Stanford Encyclopedia of Philosophy
(Summer 2020 Edition), Edward N. Zalta (ed.), URL =
<https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/>.
Retrieved on 2021-06-10
    Hodges, A. (2014). Alan Turing: The Enigma. Vintage, London. p. 334.
    A. M. Turing (1936). "On computable numbers, with an application
to the Entscheidungsproblem." in Proceedings of the London
Mathematical Society, 2 s. vol. 42 (1936–1937), pp. 230–265.
    "Love, Death & Robots season 2, episode 1 recap - "Automated
Customer Service"". Ready Steady Cut. 2021-05-14. Retrieved
2021-12-21.
    Cave, Stephen; Dihal, Kanta; Dillon, Sarah, eds. (14 February
2020). AI narratives: a history of imaginative thinking about
intelligent machines (First ed.). Oxford. ISBN 978-0-19-258604-9. OCLC
1143647559. Archived from the original on 18 March 2021. Retrieved 11
November 2020.
    Jerreat-Poole, Adam (1 February 2020). "Sick, Slow, Cyborg: Crip
Futurity in Mass Effect". Game Studies. 20. ISSN 1604-7982. Archived
from the original on 9 December 2020. Retrieved 11 November 2020.
    ""Detroit: Become Human" Will Challenge your Morals and your
Humanity". Coffee or Die Magazine. 2018-08-06. Retrieved 2021-12-07.
    Cerqui, Daniela; Warwick, Kevin (2008), "Re-Designing Humankind:
The Rise of Cyborgs, a Desirable Goal?", Philosophy and Design,
Dordrecht: Springer Netherlands, pp. 185–195,
doi:10.1007/978-1-4020-6591-0_14, ISBN 978-1-4020-6590-3, archived
from the original on 2021-03-18, retrieved 2020-11-11
    Cave, Stephen; Dihal, Kanta (6 August 2020). "The Whiteness of
AI". Philosophy & Technology. 33 (4): 685–703.
doi:10.1007/s13347-020-00415-6. S2CID 225466550.

External links

    Ethics of Artificial Intelligence at the Internet Encyclopedia of Philosophy
    Ethics of Artificial Intelligence and Robotics at the Stanford
Encyclopedia of Philosophy
    Russell, S.; Hauert, S.; Altman, R.; Veloso, M. (May 2015).
"Robotics: Ethics of artificial intelligence". Nature. 521 (7553):
415–418. Bibcode:2015Natur.521..415.. doi:10.1038/521415a. PMID
26017428. S2CID 4452826.
    BBC News: Games to take on a life of their own
    Who's Afraid of Robots? Archived 2018-03-22 at the Wayback
Machine, an article on humanity's fear of artificial intelligence.
    A short history of computer ethics
    AI Ethics Guidelines Global Inventory by Algorithmwatch
    Hagendorff, Thilo (March 2020). "The Ethics of AI Ethics: An
Evaluation of Guidelines". Minds and Machines. 30 (1): 99–120.
doi:10.1007/s11023-020-09517-8. S2CID 72940833.



Talk:Ethics of artificial intelligence


From Wikipedia, the free encyclopedia
This article is of interest to the following WikiProjects:

    WikiProject Artificial Intelligence
    WikiProject Technology (Rated C-class)
    WikiProject Robotics (Rated B-class, High-importance)
    WikiProject Philosophy / Ethics / Science / Mind (Rated B-class, Low-importance)
    WikiProject Alternative Views (Rated C-class, Low-importance)
    WikiProject Psychology (Rated C-class, Low-importance)
    WikiProject Futures studies (Rated B-class, High-importance)
    WikiProject Computer science (Rated B-class, Top-importance)

This article was created or improved during the BLM/anti-discrimination
edit-a-thon hosted by the Women in Red project in July to December
2020. The editor(s) involved may be new; please assume good faith
regarding their contributions before making changes.
Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education
Foundation-supported course assignment, between 27 August 2019 and 14
December 2019. Further details are available on the course page.
Student editor(s): Gordon1kuo.

Above undated message substituted from Template:Dashboard.wikiedu.org
assignment by PrimeBOT (talk) 20:53, 17 January 2022 (UTC)[reply]
Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education
Foundation-supported course assignment, between 6 September 2020 and 6
December 2020. Further details are available on the course page.
Student editor(s): Nectaros.

Above undated message substituted from Template:Dashboard.wikiedu.org
assignment by PrimeBOT (talk) 20:53, 17 January 2022 (UTC)[reply]
Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education
Foundation-supported course assignment, between 7 September 2021 and
23 December 2021. Further details are available on the course page.
Student editor(s): Kunpeng Liu. Peer reviewers: Jmcn24, Vanessa Li
(YYL).

Above undated message substituted from Template:Dashboard.wikiedu.org
assignment by PrimeBOT (talk) 20:53, 17 January 2022 (UTC)[reply]
Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education
Foundation-supported course assignment, between 4 February 2019 and 15
March 2019. Further details are available on the course page. Student
editor(s): Branden Hendricks.

Above undated message substituted from Template:Dashboard.wikiedu.org
assignment by PrimeBOT (talk) 20:53, 16 January 2022 (UTC)[reply]
Comments moved from "Talk:Philosophy of artificial intelligence"

"A major influence in the AI ethics dialogue was Isaac Asimov who
fictitiously created Three Laws of Robotics to govern artificial
intelligent systems." I've removed "fictitiously." While the Three
Laws of Robotics were created for a fictitious universe, Asimov really
did create them. It might be appropriate to somehow add that he
developed them for his science fiction books. goaway110 21:57, 22 June
2006 (UTC)[reply]

I don't see why ethical issues of AI should be an independent
encyclopedia entry. The lexicographic lemma here is surely "Artificial
Intelligence". --Fasten 14:41, 7 October 2005 (UTC)[reply]

I disagree, Fasten; I think the main AI article should briefly mention
ethical issues and we should keep this as a separate article. The
subject can be extended much further to include uses of AI (in wars,
in saving people from dangerous conditions, in working under unhealthy
circumstances (like mining)), and AI as a possible future
Technological singularity (i.e., what will happen if AI eventually
becomes more intelligent and capable than humans). It could also
include deeper discussions about the possibility of sensations
(qualia) and consciousness in AI, some comments on what will happen if
AI becomes widespread in a future society with behavior, appearance
and activities very similar to ours, and issues such as "should AI
have rights and obligations?" and "does it make sense to create laws
for AI beings to obey?" Rend 01:29, 17 October 2005 (UTC)[reply]

The first question I would have regarding the ethics of AI would be
whether it is possible for a machine to be capable of consciousness.
This is obviously a very difficult question, given that no human being
can really know anyone's internal existence other than his own. Hell,
maybe computers really do have consciousness. But if they do, they
would be the only ones who would know this for certain, since it is
difficult to ask a computer whether it exists without programming it
to say it exists beforehand. I believe animals have consciousness and
are capable of feeling emotions even though they cannot tell us this.
Also, would the fact that a computer wasn't capable of consciousness,
much less emotions, mean that it should not be protected and given
rights? This may seem improper, but I can't help bringing to mind the
Terri Schiavo case. It is very possible that she was fully conscious
and fully capable of emotions even though she was in a permanently
catatonic state. 207.157.121.50 12:49, 25 October 2005 (UTC)[reply]

    That might get difficult without OR. I changed the merge
suggestion from "Artificial Intelligence" to "Artificial intelligence
(philosophy)", which is referred to by the
Portal:Artificial_intelligence --Fasten 13:51, 19 October 2005
(UTC)[reply]

The subject can be extended much further to include:

    Use of AI in wars
    Use of AI in conditions hazardous to humans (saving people from
fire, drowning, poisoned or radioactive areas)
    Use of AI in human activities (doing human work, AI failing,
substituting for human jobs, doing unhealthy or dangerous work (e.g.
mining), what will happen if AI gets better than us at most of our
work activities, what AI will not be able to do (at least in the near
future))
    AI as a possible future Technological singularity: what will
happen if AI eventually becomes more intelligent and capable than
humans, being able to produce even more intelligent AIs, possibly to a
level that we won't be able to understand.
    Deeper discussions about the possibility of sensations (qualia)
and consciousness in AI
    Some comments on what will happen if AI becomes widespread in a
future society with behavior, appearance and activities very similar
to or even better than ours (could it bring problems of machine
"treatment"? I mean, could we still throw them away as if they were
simply an expensive toy, if they become better than us in all our
practical activities?)
    As AI usage and presence become greater and more widespread,
should we discuss issues such as "should AI have rights and
obligations?" and "does it make sense to create laws for AI beings to
obey?"

I ask anyone who has references and contents to include them properly.
Rend 23:11, 21 October 2005 (UTC)[reply]

Just following up on some of Rend's questions...

    Is it ethical for a person to own an AI "being"? Would an AI being
necessarily "prefer" to be unowned?
    A computer is owned by whoever owns the factory that makes it
(until the factory sells the computer to a person) -- is the same true
of an AI being?
    If an AI being is unowned, and it builds another AI being, then
does the first AI being own the second one?
    Are the interests of human society served by incorporating unowned
AI beings into it? Would humans in such a society be at a competitive
disadvantage?
    Would the collective wisdom of AI beings come to the conclusion
that humans are but one of many forms of life on the planet, and
therefore humans don't deserve any more special treatment than, say,
mice? Or lichen?

Whichever way these questions are answered, more questions lie ahead.
For example, if we say it isn't ethical for a person to own an AI
being, then can or should society as a whole constrain the behavior of
AI beings through ownership or through "laws of robotics"? If we are
able to predict that the behavior of AI beings will not be readily
channeled to the exclusive benefit of humans, then is there a "window
of opportunity" to constrain their behavior before it gets "out of
hand" (from human society's point of view)?

A survey of current philosophical thought on questions such as these
(and the slippery slope issues surrounding them) would be very helpful
here.—GraemeMcRaetalk 05:55, 3 November 2005 (UTC)[reply]
On whether Asimov's laws can be enforced or only taught

Not all artificially intelligent machines are necessarily "programmed"
as intelligent. For example, the deep belief networks of Geoff Hinton
et al. can learn yet can be implemented in unprogrammed hardware. I
accept that Hinton's simulations require programs, but the programming
does not embody intelligence, only neural and synaptic functions which
do not in themselves incorporate meaning.

I don't want to get into semantic or philosophical arguments here
about what "programmed" really means, but my key point is that
Asimov's laws seem to require some sort of "override" over instinct or
learning which rather implies a high-level program. Without that, one
would have to teach the laws to the AI (or in Hinton's terminology,
"learn" the AI to understand the laws) but that would leave open the
possibility for the AI to decide to ignore the laws and Terminator
scenarios follow.

So for that reason, the ethics of creating AI machines deserves
continuing attention. P.r.newman (talk) 08:42, 18 January 2011
(UTC)[reply]
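
A minimal illustrative sketch (in Python) of the kind of high-level
"override" described above: a learned policy proposes actions, and a
small set of hard-coded rules can veto them. The rule names, action
names and toy policy here are invented for illustration, not taken
from any source cited in the article.

    # Sketch only: fixed, non-learned rules wrapped around a learned policy.
    def violates_hard_rules(action):
        """Return True if a proposed action breaks a fixed, non-learned rule."""
        forbidden = {"harm_human", "allow_harm_by_inaction"}   # hypothetical rule set
        return action in forbidden

    def safe_act(learned_policy, observation):
        """Take the learned policy's best proposal that the rules do not veto."""
        for action in learned_policy(observation):            # best-first proposals
            if not violates_hard_rules(action):
                return action
        return "do_nothing"                                    # everything was vetoed

    toy_policy = lambda obs: ["harm_human", "move_forward", "wait"]  # invented preferences
    print(safe_act(toy_policy, observation=None))                    # -> "move_forward"

The point of the sketch is only that such a veto layer is itself a
program sitting above whatever the system has learned, which is
exactly the dependency questioned in the comment above.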
Robot rights

Moved Robot rights from Artificial intelligence in fiction to Ethics
of artificial intelligence Thomas Kist 18:57, 15 October 2007
(UTC)[reply]
Proposed merge

Oppose -- for no other reason than the fact that philosophy of AI
is already 10 pages long, and this subject probably requires about ten
pages on its own. ---- CharlesGillingham (talk) 17:15, 14 August 2008
(UTC)[reply]

http://www.youtube.com/watch?v=7VpXekQGzqg — Preceding unsigned
comment added by Angienoid (talk • contribs) 13:39, 19 March 2013
(UTC)[reply]
Re Rule 61 of the 2003 Loebner Prize competition

As it stands today (7/4/2013), the rule with its legal specifications
says nothing at all about the subject of robots' rights. Suggest
deleting. Svato (talk) 19:31, 4 July 2013 (UTC)[reply]
Roboethics

The lead to this section currently states, "The term "roboethics" ..
[refers] to the morality of how humans design, construct, use and
treat robots and other artificially intelligent beings. It considers
both how artificially intelligent beings may be used to harm humans
and how they may be used to benefit humans." I find the two halves of
this to be contradictory. The first is clear enough - if a robot is a
conscious being then we must treat it ethically. The second appears to
be a simple application of machine ethics to AI, and nothing whatever
to do with roboethics as defined in the first sentence. — Cheers,
Steelpillow (Talk) 12:08, 13 November 2014 (UTC)[reply]

We have to consider whether 'roboethics' is appropriate as a section
in this article rather than as its own page. Of course, roboethics and
the ethics of AI are deeply connected, but they are properly their own
fields, given that robotics does not necessarily entail AI, and the
embodiment of AI in robotics comes with its own ethical issues and has
a huge literature behind it. EthicsScholar93 (talk) 09:23, 29 November
2020 (UTC)[reply]
Solzhenitsyn

    re: (Sorry, the source is McCorduck, whose "Machines That Think"
is the definitive work on the history of AI. She considers this an
issue in the ethics of AI. The synthesis is hers, not Wikipedia's.)

In this case the section should be written in the following way:
"McCorduck <bla-bla bla>. As an example of this possibility McCorduck
cites Solzhenitsyn's <...>". Are you saying that the rest of the
section (after the footnote) is attributed to McCorduck as well? If
not, then {{cn}} is due. Staszek Lem (talk) 00:45, 15 May 2015
(UTC)[reply]
At least 50 years

I changed the sentence about how most scientists think it will be at
least 50 years before we have human-equivalent AI. First, I added in
the time -- 50 years from 2007 is not the same as 50 years from 2016!
I also traced down the actual source, which was an Independent article
from that period, about an AI symposium. The existing links were 404s,
so I switched over to use the Web Archive. I think that's probably not
the best for Wikipedia, but better than a lost source. --ESP (talk)
16:02, 19 January 2016 (UTC)[reply]
External links modified

Hello fellow Wikipedians,

I have just modified one external link on Ethics of artificial
intelligence. Please take a moment to review my edit. If you have any
questions, or need the bot to ignore the links, or the page
altogether, please visit this simple FaQ for additional information. I
made the following changes:

    Added archive
https://web.archive.org/web/20120524150856/http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html
to http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html

When you have finished reviewing my changes, please set the checked
parameter below to true or failed to let others know (documentation at
{{Sourcecheck}}).

An editor has reviewed this edit and fixed any errors that were found.

    If you have discovered URLs which were erroneously considered dead
by the bot, you can report them with this tool.
    If you found an error with any archives or the URLs themselves,
you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 21:47, 20 July 2016 (UTC)[reply]
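
For editors checking links by hand, the Internet Archive exposes a
public "availability" endpoint that returns the closest archived
snapshot of a URL, which is roughly the information the bot relies on
above. A minimal sketch in Python, assuming the endpoint still behaves
as publicly documented; the dead URL is the one mentioned in this
message:

    # Sketch: ask the Wayback Machine for the closest snapshot of a URL.
    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url):
        api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
        with urllib.request.urlopen(api) as resp:
            data = json.load(resp)
        snap = data.get("archived_snapshots", {}).get("closest")
        return snap["url"] if snap else None    # None when nothing is archived

    print(closest_snapshot(
        "http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html"))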
External links modified

Hello fellow Wikipedians,

I have just modified one external link on Ethics of artificial
intelligence. Please take a moment to review my edit. If you have any
questions, or need the bot to ignore the links, or the page
altogether, please visit this simple FaQ for additional information. I
made the following changes:

    Added archive
https://web.archive.org/web/20080418122849/http://www.southernct.edu/organizations/rccs/resources/research/introduction/bynum_shrt_hist.html
to http://www.southernct.edu/organizations/rccs/resources/research/introduction/bynum_shrt_hist.html

When you have finished reviewing my changes, you may follow the
instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018,
"External links modified" talk page sections are no longer generated
or monitored by InternetArchiveBot. No special action is required
regarding these talk page notices, other than regular verification
using the archive tool instructions below. Editors have permission to
delete these "External links modified" talk page sections if they want
to de-clutter talk pages, but see the RfC before doing mass systematic
removals. This message is updated dynamically through the template
{{source check}} (last update: 18 January 2022).

    If you have discovered URLs which were erroneously considered dead
by the bot, you can report them with this tool.
    If you found an error with any archives or the URLs themselves,
you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 16:58, 26 December 2016 (UTC)[reply]
External links modified

Hello fellow Wikipedians,

I have just modified one external link on Ethics of artificial
intelligence. Please take a moment to review my edit. If you have any
questions, or need the bot to ignore the links, or the page
altogether, please visit this simple FaQ for additional information. I
made the following changes:

    Added archive
https://web.archive.org/web/20111126025029/http://www.computer.org/portal/web/csdl/abs/mags/ex/2006/04/x4toc.htm
to http://www.computer.org/portal/web/csdl/abs/mags/ex/2006/04/x4toc.htm

When you have finished reviewing my changes, you may follow the
instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018,
"External links modified" talk page sections are no longer generated
or monitored by InternetArchiveBot. No special action is required
regarding these talk page notices, other than regular verification
using the archive tool instructions below. Editors have permission to
delete these "External links modified" talk page sections if they want
to de-clutter talk pages, but see the RfC before doing mass systematic
removals. This message is updated dynamically through the template
{{source check}} (last update: 18 January 2022).

    If you have discovered URLs which were erroneously considered dead
by the bot, you can report them with this tool.
    If you found an error with any archives or the URLs themselves,
you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 04:07, 24 September 2017 (UTC)[reply]
weaponization of artificial intelligence

I started a discussion on Talk:Lethal autonomous weapon about whether
to have a separate article on the global AI arms race, or perhaps
instead on the policy of weaponization of AI; feel free to chime in
there. There is also some overlap between the weaponization section
here and the current Lethal autonomous weapon; probably Lethal
autonomous weapon or a new standalone article would be a better home
for such discussion, and in the long term I might end up proposing
that this current vague article be dismantled or turned into an
overview page. Rolf H Nelson (talk) 20:12, 24 December 2017 (UTC)[reply]
Robot ethics

AI ethics - a big topic - seems to have short-changed robot ethics.
You might want to reference other articles rather than being
inaccurate. In reality, the terms roboethics and robot ethics have
meant specifically different things. That is reflected in the
intentions of the two groups with those names on FB. Roboethics has a
closer relationship with philosophy and social science, among other
things. It even reaches on that basis into public policy. Robot
ethics, by contrast, has referred to the technical challenge of
building AI ethics into machines.

see my comment above under 'roboethics' EthicsScholar93 (talk) 09:24,
29 November 2020 (UTC)[reply]
Disorganized

This article has become very disorganized, and no longer provides a
comprehensive overview of its topic. Most of the contributions seem to
have been inserted piecemeal and the overall structure makes no sense.
May I suggest that someone take a look at the appropriate section(s) of
the article artificial intelligence and attempt to reorganize this
article in a more logical form? E.g. (1) risks, unintended
consequences, abuses (2) ethical reasoning, "friendly" AI, etc. (3)
consciousness/sentience and robot rights. Is anyone maintaining this
article? ---- CharlesGillingham (talk) 20:54, 29 July 2018
(UTC)[reply]
Addition of sub-topic: Biases in AI Systems

AI has become increasingly embedded in facial and voice recognition
systems. Some of these systems have real business implications and
directly impact people. These systems are vulnerable to biases and
errors introduced by their human makers. Also, the data used to train
these AI systems can itself have biases. For instance, facial
recognition algorithms made by Microsoft, IBM and Face++ all had
biases when it came to detecting people's gender [23]. These AI
systems were able to detect the gender of white men more accurately
than that of darker-skinned men. Similarly, Amazon.com Inc.'s
termination of its AI hiring and recruitment tool is another example
showing that AI cannot be guaranteed to be fair. The algorithm
preferred male candidates over female ones because Amazon's system was
trained with data collected over a 10-year period that came mostly
from male candidates. [24] — Preceding unsigned comment added by
2604:3D08:8380:B90:856E:28B1:1641:8BFB (talk) 04:18, 29 May 2019
(UTC)[reply]
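
The disparities described above are usually surfaced by disaggregating
a model's accuracy by subgroup rather than reporting a single overall
number. A minimal sketch in Python with invented placeholder records
(not data from the studies cited above):

    # Sketch: per-subgroup accuracy instead of one aggregate score.
    from collections import defaultdict

    records = [
        # (subgroup, true_label, predicted_label) -- invented examples
        ("lighter_male", "male", "male"),
        ("lighter_male", "male", "male"),
        ("darker_male",  "male", "female"),
        ("darker_male",  "male", "male"),
    ]

    totals, correct = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        correct[group] += (truth == pred)

    for group in totals:
        print(f"{group}: accuracy {correct[group] / totals[group]:.0%}")
    # A large gap between subgroups is the kind of disparity described above.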
Addition of sub-topic: Liability for Partial or Fully Automated Cars

The wide use of partially to fully autonomous cars seems imminent. But
these new technologies also bring new issues. Recently, a debate has
arisen over which party is legally liable if these cars get into an
accident. In one report [25], a driverless car hit a pedestrian, and
there was a dilemma over whom to blame for the accident. Even though
the driver was inside the car during the accident, the controls were
fully in the hands of the computer. Before such cars become widely
used, these issues need to be tackled through new policies.
— Preceding unsigned comment added by
2604:3D08:8380:B90:856E:28B1:1641:8BFB (talk) 04:23, 29 May 2019
(UTC)[reply]
Actuaries

I developed a report on the ethical use of AI for actuaries, sponsored
by the Society of Actuaries.
https://www.soa.org/globalassets/assets/files/resources/research-report/2019/ethics-ai.pdf
Neil Raden (talk) 21:06, 25 October 2019 (UTC)[reply]
ethics institutions involved in AI ethics

Suggestion: add a section listing the large-scale institutions
involved in AI ethics, like "The Institute for Ethical AI & Machine
Learning".

The problem is that many of these institutions don't have their own
Wikipedia articles. RJJ4y7 (talk) 15:16, 30 June 2020 (UTC)[reply]

If there is agreement on this, then I'll attempt to start the section.
— Preceding unsigned comment added by RJJ4y7 (talk • contribs)

    I can't find strong WP:SECONDARY sources on "The Institute for
Ethical AI & Machine Learning". Where there is at least one strong
secondary source for an institution, I'm fine with it being added,
even if it doesn't have its own wikipedia page. Rolf H Nelson (talk)
04:18, 1 July 2020 (UTC)[reply]


Here are some institutions to consider adding: Future of Humanity
Institute, Global Catastrophic Risk Institute, Institute for Ethics
and Emerging Technologies, Future of Life institute, Institute for
Ethics and Artificial Intelligence, The Institute for Ethics in AI
(Oxford) EthicsScholar93 (talk) 12:11, 28 November 2020 (UTC)[reply]

re: I have added the above institutions to the appropriate list at the
end of the article. Please feel free to update their descriptions. It
seems that the current descriptions just mention whether or not an
institution has produced a report, rather than giving a short
description of why it is primarily an AI ethics institution.
EthicsScholar93 (talk) 09:25, 29 November 2020 (UTC)[reply]
Copyright problem removed

Prior content in this article duplicated one or
more previously published sources. The material was copied from:
https://export.arxiv.org/ftp/arxiv/papers/2005/2005.02777.pdf. Copied
or closely paraphrased material has been rewritten or removed and must
not be restored, unless it is duly released under a compatible
license. (For more information, please see "using copyrighted works
from others" if you are not the copyright holder of this material, or
"donating copyrighted materials" if you are.)

For legal reasons, we cannot accept copyrighted text or images
borrowed from other web sites or published material; such additions
will be deleted. Contributors may use copyrighted publications as a
source of information, and, if allowed under fair use, may copy
sentences and phrases, provided they are included in quotation marks
and referenced properly. The material may also be rewritten, providing
it does not infringe on the copyright of the original or plagiarize
from that source. Therefore, such paraphrased portions must provide
their source. Please see our guideline on non-free text for how to
properly implement limited quotations of copyrighted text. Wikipedia
takes copyright violations very seriously, and persistent violators
will be blocked from editing. While we appreciate contributions, we
must require all contributors to understand and comply with these
policies. Thank you. --TheImaCow (talk) 16:21, 12 December 2020
(UTC)[reply]

More information about the cypherpunks mailing list