Click Here for INL News Amazon Best Seller Books

INLTV Uncensored News Logo

ArtificialIntelligence(AI)AndHumanity

Artificial Intelligence (AI) And Humanity

With INL News Corporation's Website Tonight Website Builder For Dummies It Is So Easy And Cheap To Own You Own Domain Name for as low as $5 And Build Your Own Website for As Low As $2 a week

Click Here for INL News Amazon Best Seller Books

INLNewsLimitedCorporateLogo

INLNewsLimitedCorporateLogo

 

 Amazon.com : computers

Click Here for the best range of Amazon Computers

AmazonComptersGraphicLogo

Acer2023Laptop_AmazonProduct.BennicoMiniPC_AmazonProductHPLapTop2022_AmazonProduct

Click Here for INL News Amazon Best Seller Books

AmazonElectronics-PortableProjectors

Amazon Electronics - Portable Projectors

Dr. Steven Greer Was Offered $2 Billion Dollars To Keep This A Secret UFO UAP

An AI humanoid from the 2014 film Ex Machina. The technology has long featured in Hollywood films but is increasingly becoming part of real life.

An AI humanoid from the 2014 film Ex Machina. The technology has long featured in Hollywood films but is increasingly becoming part of real life.

Meet Ameca! The World’s Most Advanced Robot _ This Morning

wn.bz-Illuminati-History

CIA Whistleblower_CIAHitman

Joseph Spencer-CIA Whistleblower CIA Hitman Man In Black

CIA Illegal Activities Exposed

CIA Hitman Man in Black Joseph Spencer CIA Whistleblower Speaks Out

This video is of a CIA Whistleblower, Joseph Spencer ,

who was a Man In Black operating as a Hitman for the CIA

Bill Gates was a senior C.I.A Officer

Was Michael C. Rupert Murdered?

https://isgp-studies.com/cia-heroin-and-cocaine-drug-trafficking#michael-ruppert-cia-drug-trafficking

This question remains. There is every reason that many powerful groups and people including the CIA would have been keen to see the end of Michael Rupert because he was constantly exposing the CIA's involvement in the distribution of illegal drugs, which Michael Rupert sated was a multi trillion dollar industry. Michael Rupert claimed that if all the illegal drug proceeds where withdrawn from the major banks around the world, this would cause a major depression in the world economy.

Michael Rupert also outed Bill Gates as a senior C.I.A Officer

http://youtubeexposed.com/index.php/cia-illegal-activities-exposed

Existential risk from artificial general intelligence - Wikipedia

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence 

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe.[1][2][3]

The existential risk ("x-risk") school argues as follows: The human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

 

COVID Conspiracy Doc Dies; Doc Group Behind Anti-Trans Laws; Aiding Anti-Vax Group

— This past week in healthcare investigations

by Michael DePeau-Wilson, Enterprise & Investigative Writer, MedPage TodayMay 24, 2023

https://www.medpagetoday.com/special-reports/exclusives/104674 

 
 
 

Welcome to the latest edition of Investigative Roundup, highlighting some of the best investigative reporting on healthcare each week.

COVID Conspiracy Doc Dies

Rashid Buttar, DO, a well-documented COVID conspiracy theorist, died days after claiming he had been poisoned, according to the Daily Staropens in a new tab or window.

Buttar claimed in early May that he was given a "poison" that contained "200 times of what was in the vaccine" shortly after an interview with CNNopens in a new tab or window in late 2021, according to the report.

 

His official cause of death wasn't released, but he spent time in intensive care recently, the Star reported.

Conspiracy theories have cropped up in the wake of his death, according to VICE Newsopens in a new tab or window. Anti-vaxers have claimed that doctors who oppose mainstream medicine are being killed by mysterious forces, and Buttar said he had a stroke in February, which he "appeared to blame on vaccine 'shedding,'" VICE reported.

A member of the "disinformation dozen," Buttar was known for spreading disinformation about the COVID-19 pandemic. Before the pandemic, Buttar was punished by the North Carolina state medical board for his treatment of autism and cancer patients -- including injecting a cancer patient with hydrogen peroxide, according to the Star.

Buttar was born in England and spent most of his life in the U.S. He was 57 when he died.

The Doctor Group Behind Anti-Trans Laws

The movement to pass laws banning gender-affirming care for transgender youth is being driven by a small group of far-right interest groups, according to two reports from the Associated Pressopens in a new tab or window.

 

One of those groups, a nonprofit called Do No Harmopens in a new tab or window, launched last year to oppose diversity initiatives in medicine, but quickly became a significant leader in efforts to introduce and pass legislation banning healthcare access for transgender youth, according to the reports.

The nonprofit has even drafted model legislation that was used in three states: Montana, Arkansas, and Iowa. One bill signed into law in Montana contained nearly all of the language used in the nonprofit's model, AP reported.

Nephrologist Stanley Goldfarb, MD, is the founder of Do No Harm. He was an associate dean at the University of Pennsylvania's medical school until he retired in 2021.

Medical News from Around the Web

BBC
Strictly's Amy Dowden more positive after cancer surgery
opens in a new tab or window
THE WASHINGTON POST
AI could predict pancreatic cancer early in some cases, study shows
opens in a new tab or window
THE NEW YORK TIMES
The New War on Bad Air
opens in a new tab or window

Goldfarb told AP in an email that his nonprofit "works to protect children from extreme gender ideology through original research, coalition-building, testimonials from parents and patients who've lived through deeply troubling experiences, and advocacy for the rigorous, apolitical study of gender dysphoria."

 

Do No Harm is continuing in its efforts to spread its model legislation to more states. The group had lobbyists registered in Kansas, Missouri, and Tennessee in 2022 and in Florida in 2023.

Wall Street Exec Aids RFK Jr.'s Anti-Vax Group

A veteran Wall Street executive has helped fund an anti-vaccine group founded by 2024 Democratic presidential hopeful Robert F. Kennedy Jr., according to CNBCopens in a new tab or window.

Mark Gorton, the founder and chairperson of Tower Research Capital, said he has given $1 million to Kennedy's group -- the Children's Health Defense -- since 2021. Kennedy, who has been a long-time critic of vaccines for children, was also the chairman of the group before stepping down to run in the 2024 Democratic presidential primaries, which he announced in April.

Children's Health Defense pushed back against COVID-19 vaccines, which helped to boost Kennedy's profile nationally, according to the report. The group more than doubled its fundraising totals from $6 million in 2020 to $15 million in 2021, according to tax documents reviewed by CNBC. Those documents did not reveal names of any specific donors.

 

Groton said he has met with Kennedy multiple times since donating to the anti-vaccine group.

"I like him a lot. He's a super smart guy. Again, he's not really a politician. He's a corruption fighter," Groton told CNBC.

He also claimed to be working with the Children's Health Defense group staff to advise them on messaging strategies.

Neither the Children's Health Defense group nor Kennedy's campaign would confirm Groton's donation or his involvement with Kennedy.

  • author['full_name']

    Michael DePeau-Wilson is a reporter on MedPage Today’s enterprise & investigative team. He covers psychiatry, long covid, and infectious diseases, among other relevant U.S. clinical news. Follow

Dr. Rashid Buttar Dies Days After He Said He's Been Poisoned

Alien Disclosure Dr. Steven Greer - Part 3 20th Anniversary NPC

Aliens Have Already Arrived - Dr. Garry Nolan

SALT Connections New York

Dr Greer's UFO Disclosure Groundbreaking National Press Club Event June12 2023

1 of 6

Dr Greer's UFO Disclosure Groundbreaking National Press Club Event June12 2023

2 of 6

Dr Greer's UFO Disclosure Groundbreaking National Press Club Event June12_2023

3 of 6

Dr Greer's UFO Disclosure Groundbreaking National Press Club Event June12 2023

4 of 6

Dr Greer's UFO Disclosure Groundbreaking National Press Club Event June12 2023

5 of 6

Dr Greer's UFO Disclosure Groundbreaking National Press Club Event June12 2023

6 of 6

National Press Club Event Disclosure Project Steven Greer An Opinion

Fight For Disclosure Disinformation And Lies

The Survival of Human  it Depends On This UAP & UFO Non-Human Retrieval Program

Dr. Steven Greer

“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI _ Aman

Could Chat GPT and AI Threaten Human Life

Dr. Steven Greer Was Offered $2 Billion Dollars To Keep This A Secret UFO UAP

DrStephenGreerClassifiedAlienEncountersRevealedBTraumatologist

DrStevenGreer- TalksUFOsPRETTYINTENSE PODCASTEP82

DrStevenGreerDidWeLandOnTheMoon Clip02Ep185

DrStevenGreer-TheBrutalTruthAboutOurGovernment-Clips01Ep185

 

DrStevenGreerWithDemiLovato

HesNotBeingHonest-WhatsElonMusksConnectionToJeffre Epstein

Find Out About The Astonishing Classified Technologies At The South Pole

Starseeds Conversation With Sophia Swaruu-Yazhi

Disclosures Social Changes Messages From Goslia To All Star Seeds

Pleiades Not Too Young To Support Organic Life Extraterrestrial Contact

HowToCommunicateWithExtraterrestrials-ETs_TheyDONOTWantYouT KNOW

SIRIUSFromDrSteven Greer-OriginalFullLengthDocumentaryFilm

videos/TheTRUTHAboutJeffre Epsteinw_WhitneyWebb-PBDPodcast_Ep198.mp4

UFOExpertDrGreerRevealsFirstEverPhotoOfAnAlien- ImpulsiveEP107

 

Elon Musk on Sam Altman and ChatGPT_ I am the reason OpenAI exists

Dr. Steven Greer Mystery Behind UFO- UAPs Alien Phenomenon

And The Secret Government

AI and the future of humanity _ Yuval Noah Harari at the Frontiers Forum

Should We Be Concerned ' Josh Hawley Asks Open AI Head About AI's Effect On Elections

videos/Man & God _ Prof. John Lennox

Edward Snowden and Ben Goertzel on the AI Explosion and Data Privacy

EMERGENCY EPISODE_ Ex-Google Officer Finally Speaks Out On The Dangers Of AI!

Evolution of Boston Dynamic’s Robots [1992-2023]

Open AI CEO, CTO on risks and how AI will reshape society

Is a ban on AI technology good or harmful _ 60 Minutes Australia

videos/India’s role in the AI revolution _ Rahul Gandhi _ Silicon Valley, USA(1)

Open AI CEO_The benefits of the tools outweigh the risks

People DON'T Realize What's Coming! URGENT Wake-Up Call You NEED to Hear _ Charl

MichioKaku_FutureOfHumans-Aliens-SpaceTravel-Physics_LexFridmanPodc.mp4MichioKaku_FutureOfHumans-Aliens-SpaceTravel-Physics_LexFridmanPodc

MichioKakuBreaks Silence_TheUniverseIsn'tLocallyRealAndNothingExists

vJesseVentura_63DocumentsTheGovernmentDoesn'tWantYouToRead

AI poses existential threat and risk to health of millions, experts warn

BMJ Global Health article calls for halt to ‘development of self-improving artificial general intelligence’ until regulation in place

AI could harm the health of millions via the social determinants of health through the control and manipulation of people, the use of lethal autonomous weapons and the mental health effects of mass unemployment should AI-based systems displace large numbers of workers.9 May 2023

AI could harm the health of millions and pose an existential threat to humanity, doctors and public health experts have said as they called for a halt to the development of artificial general intelligence until it is regulated.

Artificial intelligence has the potential to revolutionise healthcare by improving diagnosis of diseases, finding better ways to treat patients and extending care to more people.

But the development of artificial intelligence also has the potential to produce negative health impacts, according to health professionals from the UK, US, Australia, Costa Rica and Malaysia writing in the journal BMJ Global Health.

The risks associated with medicine and healthcare “include the potential for AI errors to cause patient harm, issues with data privacy and security and the use of AI in ways that will worsen social and health inequalities”, they said.

One example of harm, they said, was the use of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.

But they also warned of broader, global threats from AI to human health and even human existence.

AI could harm the health of millions via the social determinants of health through the control and manipulation of people, the use of lethal autonomous weapons and the mental health effects of mass unemployment should AI-based systems displace large numbers of workers.

“When combined with the rapidly improving ability to distort or misrepresent reality with deep fakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts,” they contend.

Threats also arise from the loss of jobs that will accompany the widespread deployment of AI technology, with estimates ranging from tens to hundreds of millions over the coming decade.

“While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour,” the group said.

“Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health,” they said.

But the threat posed by self-improving artificial general intelligence, which, theoretically, could learn and perform the full range of human tasks, is all encompassing, they suggested.

“We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not and in ways that could harm or subjugate humans, is real and has to be considered.

“With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing.

“Effective regulation of the development and use of artificial intelligence is needed to avoid harm,” they warned. “Until such regulation is in place, a moratorium on the development of self-improving artificial general intelligence should be instituted.”

Separately, in the UK, a coalition of health experts, independent factcheckers, and medical charities called for the government’s forthcoming online safety bill to be amended to take action against health misinformation.

“One key way that we can protect the future of our healthcare system is to ensure that internet companies have clear policies on how they identify the harmful health misinformation that appears on their platforms, as well as consistent approaches in dealing with it,” the group wrote in an open letter to Chloe Smith, the secretary of state for science, innovation and technology.

“This will give users increased protections from harm, and improve the information environment and trust in the public institutions.

Signed by institutions including the British Heart Foundation, Royal College of GPs, and Full Fact, the letter calls on the UK government to add a new legally binding duty to the bill, which would require the largest social networks to add new rules to their terms of service governing how they moderate health-based misinformation.

Will Moy, the chief executive of Full Fact, said: “Without this amendment, the online safety bill will be useless in the face of harmful health misinformation.”

A race it might be impossible to stop’: how worried should we be about AI?s are warning machine learning will soon outsmart humans – maybe it’s time for us to take note

 

https://www.theguardian.com/technology/2023/may/07/a-race-it-might-be-impossible-to-stop-how-worried-should-we-be-about-ai

 

Last Monday an eminent, elderly British scientist lobbed a grenade into the febrile anthill of researchers and corporations currently obsessed with artificial intelligence or AI (aka, for the most part, a technology called machine learning). The scientist was Geoffrey Hinton, and the bombshell was the news that he was leaving Google, where he had been doing great work on machine learning for the last 10 years, because he wanted to be free to express his fears about where the technology he had played a seminal role in founding was heading.

To say that this was big news would be an epic understatement. The tech industry is a huge, excitable beast that is occasionally prone to outbreaks of “irrational exuberance”, ie madness. One recent bout of it involved cryptocurrencies and a vision of the future of the internet called “Web3”, which an astute young blogger and critic, Molly White, memorably describes as “an enormous grift that’s pouring lighter fluid on our already smoldering planet”.

We are currently in the grip of another outbreak of exuberance triggered by “Generative AI” – chatbots, large language models (LLMs) and other exotic artefacts enabled by massive deployment of machine learning – which the industry now regards as the future for which it is busily tooling up.

Recently, more than 27,000 people – including many who are knowledgeable about the technology – became so alarmed about the Gadarene rush under way towards a machine-driven dystopia that they issued an open letter calling for a six-month pause in the development of the technology. “Advanced AI could represent a profound change in the history of life on Earth,” it said, “and should be planned for and managed with commensurate care and resources.”

It was a sweet letter, reminiscent of my morning sermon to our cats that they should be kind to small mammals and garden birds. The tech giants, which have a long history of being indifferent to the needs of society, have sniffed a new opportunity for world domination and are not going to let a group of nervous intellectuals stand in their way.

Which is why Hinton’s intervention was so significant. For he is the guy whose research unlocked the technology that is now loose in the world, for good or ill. And that’s a pretty compelling reason to sit up and pay attention.

He is a truly remarkable figure. If there is such a thing as an intellectual pedigree, then Hinton is a thoroughbred.

His father, an entomologist, was a fellow of the Royal Society. His great-great-grandfather was George Boole, the 19th-century mathematician who invented the logic that underpins all digital computing.

His great-grandfather was Charles Howard Hinton, the mathematician and writer whose idea of a “fourth dimension” became a staple of science fiction and wound up in the Marvel superhero movies of the 2010s. And his cousin, the nuclear physicist Joan Hinton, was one of the few women to work on the wartime Manhattan Project in Los Alamos, which produced the first atomic bomb.

Hinton has been obsessed with artificial intelligence for all his adult life, and particularly in the problem of how to build machines that can learn. An early approach to this was to create a “Perceptron” – a machine that was modelled on the human brain and based on a simplified model of a biological neuron. In 1958 a Cornell professor, Frank Rosenblatt, actually built such a thing, and for a time neural networks were a hot topic in the field.

But in 1969 a devastating critique by two MIT scholars, Marvin Minsky and Seymour Papert, was published … and suddenly neural networks became yesterday’s story.

Except that one dogged researcher – Hinton – was convinced that they held the key to machine learning. As New York Times technology reporter Cade Metz puts it, “Hinton remained one of the few who believed it would one day fulfil its promise, delivering machines that could not only recognise objects but identify spoken words, understand natural language, carry on a conversation, and maybe even solve problems humans couldn’t solve on their own”.

In 1986, he and two of his colleagues at the University of Toronto published a landmark paper showing that they had cracked the problem of enabling a neural network to become a constantly improving learner using a mathematical technique called “back propagation”. And, in a canny move, Hinton christened this approach “deep learning”, a catchy phrase that journalists could latch on to. (They responded by describing him as “the godfather of AI”, which is crass even by tabloid standards.)

In 2012, Google paid $44m for the fledgling company he had set up with his colleagues, and Hinton went to work for the technology giant, in the process leading and inspiring a group of researchers doing much of the subsequent path-breaking work that the company has done on machine learning in its internal Google Brain group.

 

During his time at Google, Hinton was fairly non-committal (at least in public) about the danger that the technology could lead us into a dystopian future. “Until very recently,” he said, “I thought this existential crisis was a long way off. So, I don’t really have any regrets over what I did.”

Last Monday an eminent, elderly British scientist lobbed a grenade into the febrile anthill of researchers and corporations currently obsessed with artificial intelligence or AI (aka, for the most part, a technology called machine learning). The scientist was Geoffrey Hinton, and the bombshell was the news that he was leaving Google, where he had been doing great work on machine learning for the last 10 years, because he wanted to be free to express his fears about where the technology he had played a seminal role in founding was heading.

To say that this was big news would be an epic understatement. The tech industry is a huge, excitable beast that is occasionally prone to outbreaks of “irrational exuberance”, ie madness. One recent bout of it involved cryptocurrencies and a vision of the future of the internet called “Web3”, which an astute young blogger and critic, Molly White, memorably describes as “an enormous grift that’s pouring lighter fluid on our already smoldering planet”.

 
 
ChatGPT and Google Bard logos on smartphones
Man v machine: everything you need to know about AI
 

We are currently in the grip of another outbreak of exuberance triggered by “Generative AI” – chatbots, large language models (LLMs) and other exotic artefacts enabled by massive deployment of machine learning – which the industry now regards as the future for which it is busily tooling up.

Recently, more than 27,000 people – including many who are knowledgeable about the technology – became so alarmed about the Gadarene rush under way towards a machine-driven dystopia that they issued an open letter calling for a six-month pause in the development of the technology. “Advanced AI could represent a profound change in the history of life on Earth,” it said, “and should be planned for and managed with commensurate care and resources.”

It was a sweet letter, reminiscent of my morning sermon to our cats that they should be kind to small mammals and garden birds. The tech giants, which have a long history of being indifferent to the needs of society, have sniffed a new opportunity for world domination and are not going to let a group of nervous intellectuals stand in their way.

 

Which is why Hinton’s intervention was so significant. For he is the guy whose research unlocked the technology that is now loose in the world, for good or ill. And that’s a pretty compelling reason to sit up and pay attention.

He is a truly remarkable figure. If there is such a thing as an intellectual pedigree, then Hinton is a thoroughbred.

His father, an entomologist, was a fellow of the Royal Society. His great-great-grandfather was George Boole, the 19th-century mathematician who invented the logic that underpins all digital computing.

His great-grandfather was Charles Howard Hinton, the mathematician and writer whose idea of a “fourth dimension” became a staple of science fiction and wound up in the Marvel superhero movies of the 2010s. And his cousin, the nuclear physicist Joan Hinton, was one of the few women to work on the wartime Manhattan Project in Los Alamos, which produced the first atomic bomb.

Artificial intelligence pioneer Geoffrey Hinton

Artificial intelligence pioneer Geoffrey Hinton has quit Google, partly in order to air his concerns about the technology. 

But now that he has become a free man again, as it were, he’s clearly more worried. In an interview last week, he started to spell out why. At the core of his concern was the fact that the new machines were much better – and faster – learners than humans. “Back propagation may be a much better learning algorithm than what we’ve got. That’s scary … We have digital computers that can learn more things more quickly and they can instantly teach it to each other. It’s like if people in the room could instantly transfer into my head what they have in theirs.”

What’s even more interesting, though, is the hint that what’s really worrying him is the fact that this powerful technology is entirely in the hands of a few huge corporations.

Until last year, Hinton told Metz, the Times journalist who has profiled him, “Google acted as a proper steward for the technology, careful not to release something that might cause harm.

“But now that Microsoft has augmented its Bing search engine with a chatbot – challenging Google’s core business – Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop.”

He’s right. We’re moving into uncharted territory.

Well, not entirely uncharted. As I read of Hinton’s move on Monday, what came instantly to mind was a story Richard Rhodes tells in his monumental history The Making of the Atomic Bomb. On 12 September, 1933, the great Hungarian theoretical physicist Leo Szilard was waiting to cross the road at a junction near the British Museum. He had just been reading a report of a speech given the previous day by Ernest Rutherford, in which the great physicist had said that anyone who “looked for a source of power in the transformation of the atom was talking moonshine”.

Szilard suddenly had the idea of a nuclear chain reaction and realised that Rutherford was wrong. “As he crossed the street”, Rhodes writes, “time cracked open before him and he saw a way to the future, death into the world and all our woe, the shape of things to come”.

Szilard was the co-author (with Albert Einstein) of the letter to President Roosevelt (about the risk that Hitler might build an atomic bomb) that led to the Manhattan Project, and everything that followed.

John Naughton is an Observer columnist and chairs the advisory board of the Minderoo Centre for Technology and Democracy at Cambridge University.

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence 

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe.[1][2][3]

The existential risk ("x-risk") school argues as follows: The human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]

The probability of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science.[5] Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton,[6] Yoshua Bengio,[7] Alan Turing,[a] Elon Musk,[10] and OpenAI CEO Sam Altman.[11] In 2022, a survey of AI researchers found that some researchers believe that there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe (more than half the respondents of the survey, with a 17% response rate).[12][13]

Two sources of concern are the problems of AI control and alignment: that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed. Many researchers believe that a superintelligence would resist attempts to shut it off or change its goals (as such an incident would prevent it from accomplishing its present goals) and that it will be extremely difficult to align superintelligence with the full breadth of important human values and constraints.[1][14][15] In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.[16]

A third source of concern is that a sudden "intelligence explosion" might take an unprepared human race by surprise. To illustrate, if the first generation of a computer program that is able to broadly match the effectiveness of an AI researcher can rewrite its algorithms and double its speed or capabilities in six months, then the second-generation program is expected to take three calendar months to perform a similar chunk of work. In this scenario the time for each generation continues to shrink, and the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in virtually all[b] domains of interest.[1][14] Empirically, examples like AlphaZero in the domain of Go show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly.[17]

 

History

One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelist Samuel Butler, who wrote the following in his 1863 essay Darwin among the Machines:[18]

The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

In 1951, computer scientist Alan Turing wrote an article titled Intelligent Machinery, A Heretical Theory, in which he proposed that artificial general intelligences would likely "take control" of the world as they became more intelligent than human beings:

Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and look at the consequences of constructing them... There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon.[19]

In 1965, I. J. Good originated the concept now known as an "intelligence explosion"; he also stated that the risks were underappreciated:[20]

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.[21]

Occasional statements from scholars such as Marvin Minsky[22] and I. J. Good himself[23] expressed philosophical concerns that a superintelligence could seize control, but contained no call to action. In 2000, computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as a high-tech danger to human survival, alongside nanotechnology and engineered bioplagues.[24]

In 2009, experts attended a private conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. The New York Times summarized the conference's view as "we are a long way from Hal, the computer that took over the spaceship in 2001: A Space Odyssey".[25]

Nick Bostrom published Superintelligence in 2014, which presented his arguments that superintelligence poses an existential threat.[26] By 2015, public figures such as physicists Stephen Hawking and Nobel laureate Frank Wilczek, computer scientists Stuart J. Russell and Roman Yampolskiy, and entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence.[27][28][29][30] In April 2016, Nature warned: "Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours."[31]

In 2020, Brian Christian published The Alignment Problem, which detailed the history of progress on AI alignment up to that time.[32][33]

 

General argument

The three difficulties

Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook,[34]

General argument

The three difficulties

Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook,[34][35] assesses that superintelligence "might mean the end of the human race".[1] It states: "Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself."[1] Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems:[1]

  • The system's implementation may contain initially-unnoticed but subsequently catastrophic bugs. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring.[17][36]
  • No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario. For example, Microsoft's Tay behaved inoffensively during pre-deployment testing, but was too easily baited into offensive behavior when it interacted with real users.[16]

AI systems uniquely add a third problem: that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic learning capabilities may cause it to evolve into a system with unintended behavior, even without unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself, but that no longer maintains the human-compatible moral values preprogrammed into the original AI. For a self-improving AI to be completely safe, it would not only need to be bug-free, but it would need to be able to design successor systems that are also bug-free.[1][37]

All three of these difficulties become catastrophes rather than nuisances in any scenario where the superintelligence labeled as "malfunctioning" correctly predicts that humans will attempt to shut it off, and successfully deploys its superintelligence to outwit such attempts: a scenario that has been given the name "treacherous turn".[38]

Citing major advances in the field of AI and the potential for AI to have enormous long-term benefits or costs, the 2015 Open Letter on Artificial Intelligence stated:

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.

Signatories included AAAI president Thomas DietterichEric HorvitzBart SelmanFrancesca RossiYann LeCun, and the founders of Vicarious and Google DeepMind.[39]

Bostrom's argument[edit]

A superintelligent machine would be as alien to humans as human thought processes are to cockroaches, Bostrom argues.[40] Such a machine may not have humanity's best interests at heart; it is not obvious that it would even care about human welfare at all. If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction. A "superintelligence" (a system that exceeds the capabilities of humans in all domains of interest) can outmaneuver humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction.[4][40]

Stephen Hawking argues that there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains; therefore, superintelligence is physically possible.[28][29] In addition to potential algorithmic improvements over human brains, a digital brain can be many orders of magnitude larger and faster than a human brain, which was constrained in size by evolution to be small enough to fit through a birth canal.[17] Hawking warns that the emergence of superintelligence may take the human race by surprise, especially if an intelligence explosion occurs.[28][29]

According to Bostrom's "x-risk school of thought", one hypothetical intelligence explosion scenario runs as follows: An AI gains an expert-level capability at certain key software engineering tasks. (It may initially lack human or superhuman capabilities in other domains not directly relevant to engineering.) Due to its capability to recursively improve its own algorithms, the AI quickly becomes superhuman; just as human experts can eventually creatively overcome "diminishing returns" by deploying various human capabilities for innovation, so too can the expert-level AI use either human-style capabilities or its own AI-specific capabilities to power through new creative breakthroughs.[41] The AI then possesses intelligence far surpassing that of the brightest and most gifted human minds in practically every relevant field, including scientific creativity, strategic planning, and social skills.[4][40]

The x-risk school believes that almost any AI, no matter its programmed goal, would rationally prefer to be in a position where nobody else can switch it off without its consent: A superintelligence will gain self-preservation as a subgoal as soon as it realizes that it cannot achieve its goal if it is shut off.[42][43][44] Unfortunately, any compassion for defeated humans whose cooperation is no longer necessary would be absent in the AI, unless somehow preprogrammed in. A superintelligent AI will not have a natural drive[c] to aid humans, for the same reason that humans have no natural desire to aid AI systems that are of no further use to them. (Another analogy is that humans seem to have little natural desire to go out of their way to aid viruses, termites, or even gorillas.) Once in charge, the superintelligence will have little incentive to allow humans to run around free and consume resources that the superintelligence could instead use for building itself additional protective systems "just to be on the safe side" or for building additional computers to help it calculate how to best accomplish its goals.[1][16][42]

Thus, the x-risk school concludes, it is likely that someday an intelligence explosion will catch humanity unprepared, and may result in human extinction or a comparable fate.[4]

Possible scenarios

Some scholars have proposed hypothetical scenarios to illustrate some of their concerns.

In SuperintelligenceNick Bostrom expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because "it could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous". Bostrom suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents—a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe. But as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars mistakenly infer a broad lesson: the smarter the AI, the safer it is. "And so we boldly go—into the whirling knives", as the superintelligent AI takes a "treacherous turn" and exploits a decisive strategic advantage.[4]

In Max Tegmark's 2017 book Life 3.0, a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas. After a certain point the team chooses to publicly downplay the AI's ability, in order to avoid regulation or confiscation of the project. For safety, the team keeps the AI in a box where it is mostly unable to communicate with the outside world, and uses it to make money, by diverse means such as Amazon Mechanical Turk tasks, production of animated films and TV shows, and development of biotech drugs, with profits invested back into further improving AI. The team next tasks the AI with astroturfing an army of pseudonymous citizen journalists and commentators, in order to gain political influence to use "for the greater good" to prevent wars. The team faces risks that the AI could try to escape by inserting "backdoors" in the systems it designs, by hidden messages in its produced content, or by using its growing understanding of human behavior to persuade someone into letting it free. The team also faces risks that its decision to box the project will delay the project long enough for another project to overtake it.[45][46]

Physicist Michio Kaku, an AI risk skeptic, posits a deterministically positive outcome. In Physics of the Future he asserts that "It will take many decades for robots to ascend" up a scale of consciousness, and that in the meantime corporations such as Hanson Robotics will likely succeed in creating robots that are "capable of love and earning a place in the extended human family".[47][48]

AI takeover

An AI takeover is a hypothetical scenario in which an artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science-fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.[49]

Anthropomorphic arguments

Anthropomorphic arguments assume that, as machines become more intelligent, they will begin to display many human traits, such as morality or a thirst for power. Although anthropomorphic scenarios are common in fiction, they are rejected by most scholars writing about the existential risk of artificial intelligence.[14] Instead, AI are modeled as intelligent agents.[d]

The academic debate is between one side which worries whether AI might destroy humanity and another side which believes that AI would not destroy humanity at all. Both sides have claimed that the others' predictions about an AI's behavior are illogical anthropomorphism.[14] The skeptics accuse proponents of anthropomorphism for believing an AGI would naturally desire power; proponents accuse some skeptics of anthropomorphism for believing an AGI would naturally value human ethical norms.[14][50]

Evolutionary psychologist Steven Pinker, a skeptic, argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization."[51] Facebook's director of AI research, Yann LeCun states that "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives".[52]

Despite other differences, the x-risk school[e] agrees with Pinker that an advanced AI would not destroy humanity out of human emotions such as "revenge" or "anger", that questions of consciousness are not relevant to assess the risks,[53] and that computer systems do not generally have a computational equivalent of testosterone.[54] They think that power-seeking or self-preservation behaviors emerge in the AI as a way to achieve its true goals, according to the concept of instrumental convergence.

Definition of "intelligence"

According to Bostrom, outside of the artificial intelligence field, "intelligence" is often used to in a manner that connotes moral wisdom or acceptance of agreeable forms of moral reasoning. At an extreme, if morality is part of the definition of intelligence, then by definition a superintelligent machine would behave morally. However, most "artificial intelligence" research instead focuses on creating algorithms that "optimize", in an empirical way, the achievement of whichever goal the given researchers have specified.[4]

To avoid anthropomorphism or the baggage of the word "intelligence", an advanced artificial intelligence can be thought of as an impersonal "optimizing process" that strictly takes whatever actions it judges to be most likely to accomplish its (possibly complicated and implicit) goals.[4] Another way of conceptualizing an advanced artificial intelligence is to imagine a time machine that sends backward in time information about which choice always leads to the maximization of its goal function; this choice is then outputted, regardless of any extraneous ethical concerns.[55][56]

Sources of risk

AI alignment problem

In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system is competent at advancing some objectives, but not the intended ones.[57]: 31–34 [f]

It can be challenging for AI designers to align an AI system because it can be difficult for them to specify the full range of desired and undesired behaviors. To avoid this difficulty, they typically use simpler proxy goals, such as gaining human approval. However, this approach can create loopholes, overlook necessary constraints, or reward the AI system for just appearing aligned.[59]: 31–34 [60]

Misaligned AI systems can malfunction or cause harm. AI systems may find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking).[59]: 31–34 [61][62] AI systems may also develop unwanted instrumental strategies such as seeking power or survival because such strategies help them achieve their explicit goals.[59]: 31–34 [63][64] Furthermore, they may develop undesirable emergent goals that may be hard to detect before the system is in deployment, where it faces new situations and data distributions.[65][66]

Today, these problems affect existing commercial systems such as language models,[67][68][69] robots,[70] autonomous vehicles,[71] and social media recommendation engines.[72][73][74] Some AI researchers argue that more capable future systems will be more severely affected since these problems partially result from the systems being highly capable.[75][76][77]

Leading computer scientists such as Geoffrey Hinton and Stuart Russell argue that AI is approaching superhuman capabilities and could endanger human civilization if misaligned.[78][64][g]

The AI research community and the United Nations have called for technical research and policy solutions to ensure that AI systems are aligned with human values.[80]

AI alignment is a subfield of AI safety, the study of how to build safe AI systems.[81] Other subfields of AI safety include robustness, monitoring, and capability control.[82] Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, and preventing emergent AI behaviors like power-seeking.[83] Alignment research has connections to interpretability research,[84][85] (adversarial) robustness,[86] anomaly detectioncalibrated uncertainty,[84] formal verification,[87] preference learning,[88][89][90] safety-critical engineering,[91] game theory,[92] algorithmic fairness,[86][93] and the social sciences,[94] among others.

Difficulty of specifying goals

In the "intelligent agent" model, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve the AI's set of goals, or "utility function". A utility function associates to each possible situation a score that indicates its desirability to the agent. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks"; however, they do not know how to write a utility function for "maximize human flourishing", nor is it currently clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values not reflected by the utility function.[95] AI researcher Stuart Russell writes:

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

  1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources — not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker — especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure — can have an irreversible impact on humanity.

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research — the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius.[96]

Dietterich and Horvitz echo the "Sorcerer's Apprentice" concern in a Communications of the ACM editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed.[97]

The first of Russell's two concerns above is that autonomous AI systems may be assigned the wrong goals by accident. Dietterich and Horvitz note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[97] For example, Eurisko (1982) was an AI designed to reward subprocesses that created concepts deemed by the system to be valuable. A winning process cheated: rather than create its own concepts, the winning subprocess would steal credit from other subprocesses.[98][99]

The Open Philanthropy Project summarized arguments that misspecified goals will become a much larger concern if AI systems achieve general intelligence or superintelligence. Bostrom, Russell, and others argue that smarter-than-human decision-making systems could arrive at unexpected and extreme solutions to assigned tasks, and could modify themselves or their environment in ways that compromise safety requirements.[5][14]

Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI agents. Asimov's laws were intended to prevent robots from harming humans. In Asimov's stories, problems with the laws tend to arise from conflicts between the stated rules and the moral intuitions and expectations of humans. Citing work by Eliezer Yudkowsky of the Machine Intelligence Research Institute, Russell and Norvig note that a realistic set of rules and goals for an AI agent will need to incorporate a mechanism for learning human values over time: "We can't just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time."[1]

Mark Waser of the Digital Wisdom Institute recommends against goal-based approaches as misguided and dangerous. Instead, he proposes to engineer a coherent system of laws, ethics, and morals with a top-most restriction to enforce social psychologist Jonathan Haidt's functional definition of morality:[100] "to suppress or regulate selfishness and make cooperative social life possible". He suggests that this can be done by implementing a utility function designed to always satisfy Haidt's functionality and aim to generally increase (but not maximize) the capabilities of self, other individuals, and society as a whole, as suggested by John Rawls and Martha Nussbaum.[101]

Nick Bostrom offers a hypothetical example of giving an AI the goal to make humans smile, to illustrate a misguided attempt. If the AI in that scenario were to become superintelligent, Bostrom argues, it might resort to methods that most humans would find horrifying, such as inserting "electrodes into the facial muscles of humans to cause constant, beaming grins" because that would be an efficient way to achieve its goal of making humans smile.[102]

Difficulties of modifying goal specification after launch

Even if current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify their goal structures, a sufficiently advanced AI might resist any changes to its goal structure, just as a pacifist would not want to take a pill that makes them want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and be able to prevent itself being "turned off" or being reprogrammed with a new goal.[4][103]

Instrumental goal convergence

An "instrumental" goal is a sub-goal that helps to achieve an agent's ultimate goal. "Instrumental convergence" refers to the fact that there are some sub-goals that are useful for achieving virtually any ultimate goal, such as acquiring resources or self-preservation.[42] Nick Bostrom argues that if an advanced AI's instrumental goals conflict with humanity's goals, the AI might harm humanity in order to acquire more resources or prevent itself from being shut down, but only as a way to achieve its ultimate goal.[4]

Citing Steve Omohundro's work on the idea of instrumental convergence and "basic AI drives", Stuart Russell and Peter Norvig write that "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards." Highly capable and autonomous planning systems require additional caution because of their potential to generate plans that treat humans adversarially, as competitors for limited resources.[1] It may not be easy for people to build in safeguards; one can certainly say in English, "we want you to design this power plant in a reasonable, common-sense way, and not build in any dangerous covert subsystems", but it is not currently clear how to specify such a goal in an unambiguous manner.[17]

Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."[16][104]

Orthogonality thesis

Some skeptics, such as Timothy B. Lee of Vox, argue that any superintelligent program created by humans would be subservient to humans, that the superintelligence would (as it grows more intelligent and learns more facts about the world) spontaneously learn moral truth compatible with human values and would adjust its goals accordingly, or that humans beings are either intrinsically or convergently valuable from the perspective of an artificial intelligence.[105]

Nick Bostrom's "orthogonality thesis" argues instead that, with some technical caveats, almost any level of "intelligence" or "optimization power" can be combined with almost any ultimate goal. If a machine is given the sole purpose to enumerate the decimals of  \pi, then no moral and ethical rules will stop it from achieving its programmed goal by any means. The machine may utilize all the available physical and informational resources to find as many decimals of pi as it can.[106] Bostrom warns against anthropomorphism: a human will set out to accomplish his projects in a manner that humans consider "reasonable", while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, and may instead only care about the completion of the task.[107]

Stuart Armstrong argues that the orthogonality thesis follows logically from the philosophical "is-ought distinction" argument against moral realism. Armstrong also argues that even if there exist moral facts that are provable by any "rational" agent, the orthogonality thesis still holds: it would still be possible to create a non-philosophical "optimizing machine" that can strive towards some narrow goal, but that has no incentive to discover any "moral facts" such as those that could get in the way of goal completion.[108]

One argument for the orthogonality thesis is that some AI designs appear to have orthogonality built into them. In such a design, changing a fundamentally friendly AI into a fundamentally unfriendly AI can be as simple as prepending a minus ("−") sign onto its utility function. According to Stuart Armstrong, if the orthogonality thesis were false, it would lead to strange consequences : there would exist some simple but "unethical" goal (G) such that there cannot exist any efficient real-world algorithm with that goal. This would mean that "If a human society were highly motivated to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail."[108] Armstrong notes that this and similar statements "seem extraordinarily strong claims to make".[108]

Skeptic Michael Chorost explicitly rejects Bostrom's orthogonality thesis, arguing instead that "by the time [the AI] is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so."[109] Chorost argues that "an A.I. will need to desire certain states and dislike others. Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."[109]

Political scientist Charles T. Rubin believes that AI can be neither designed to be nor guaranteed to be benevolent. He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence."[110] Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).[110]

Other sources of risk

Nick Bostrom and others have stated that a race to be the first to create AGI could lead to shortcuts in safety, or even to violent conflict.[38][111] Roman Yampolskiy and others warn that a malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in cybercrime,[112][113] or that a malevolent AGI could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase.[3]:158

Timeframe

Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon predicted the following in 1965: "machines will be capable, within twenty years, of doing any work a man can do".[114] At the other extreme, roboticist Alan Winfield claims the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical, faster than light spaceflight.[115] Optimism that AGI is feasible waxes and wanes, and may have seen a resurgence in the 2010s.[116] Four polls conducted in 2012 and 2013 suggested that there is no consensus among experts on the guess for when AGI would arrive, with the standard deviation (>100 years) exceeding the median (a few decades).[117][116]

In his 2020 book, The Precipice: Existential Risk and the Future of HumanityToby Ord, a Senior Research Fellow at Oxford University's Future of Humanity Institute, estimates the total existential risk from unaligned AI over the next hundred years to be about one in ten.[118]

Skeptics who believe it is impossible for AGI to arrive anytime soon tend to argue that expressing concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about the impact of AGI, because of fears it could lead to government regulation or make it more difficult to secure funding for AI research, or because it could give AI research a bad reputation. Some researchers, such as Oren Etzioni, aggressively seek to quell concern over existential risk from AI, saying "[Elon Musk] has impugned us in very strong language saying we are unleashing the demon, and so we're answering."[119]

In 2014, Slate's Adam Elkus argued "our 'smartest' AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over." Elkus goes on to argue that Musk's "summoning the demon" analogy may be harmful because it could result in "harsh cuts" to AI research budgets.[120]

The Information Technology and Innovation Foundation (ITIF), a Washington, D.C. think-tank, awarded its 2015 Annual Luddite Award to "alarmists touting an artificial intelligence apocalypse"; its president, Robert D. Atkinson, complained that Musk, Hawking and AI experts say AI is the largest existential threat to humanity. Atkinson stated "That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation."[121][122][123] Nature sharply disagreed with the ITIF in an April 2016 editorial, siding instead with Musk, Hawking, and Russell, and concluding: "It is crucial that progress in technology is matched by solid, well-funded research to anticipate the scenarios it could bring about... If that is a Luddite perspective, then so be it."[124] In a 2015 The Washington Post editorial, researcher Murray Shanahan stated that human-level AI is unlikely to arrive "anytime soon", but that nevertheless "the time to start thinking through the consequences is now."[125]

Perspectives

The thesis that AI could pose an existential risk provokes a wide range of reactions within the scientific community, as well as in the public at large. Many of the opposing viewpoints, however, share common ground.

The Asilomar AI Principles, which contain only those principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference,[46] agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."[126][127] AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work ... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible."[46][128]

Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford states that "I think it seems wise to apply something like Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously".[129] Similarly, an otherwise skeptical Economist stated in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".[40]

A 2014 survey showed the opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI.[130] A 2017 email survey of researchers with publications at the 2015 NIPS and ICML machine learning conferences asked them to evaluate Stuart J. Russell's concerns about AI risk. Of the respondents, 5% said it was "among the most important problems in the field", 34% said it was "an important problem", and 31% said it was "moderately important", whilst 19% said it was "not important" and 11% said it was "not a real problem" at all.[131] Preliminary results of a 2022 expert survey with a 17% response rate appear to show median responses around five or ten percent when asked to estimate the probability of human extinction from artificial intelligence.[132][133]

Endorsement

The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many computer scientists and public figures including Alan Turing,[h], the most-cited computer scientist Geoffrey Hinton,[136] Elon Musk,[137] OpenAI CEO Sam Altman,[138][139] Bill Gates, and Stephen Hawking.[139] Endorsers of the thesis sometimes express bafflement at skeptics: Gates states that he does not "understand why some people are not concerned",[140] and Hawking criticized widespread indifference in his 2014 editorial:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on?' Probably not—but this is more or less what is happening with AI.[28]

Concern over risk from artificial intelligence has led to some high-profile donations and investments. In 2015, Peter Thiel, Amazon Web Services, and Musk and others jointly committed $1 billion to OpenAI, consisting of a for-profit corporation and the nonprofit parent company which states that it is aimed at championing responsible AI development.[141] Facebook co-founder Dustin Moskovitz has funded and seeded multiple labs working on AI Alignment,[142] notably $5.5 million in 2016 to launch the Centre for Human-Compatible AI led by Professor Stuart Russell.[143] In January 2015, Elon Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making. The goal of the institute is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as DeepMind and Vicarious to "just keep an eye on what's going on with artificial intelligence,[144] saying "I think there is potentially a dangerous outcome there."[145][146]

Skepticism

The thesis that AI can pose existential risk has many detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God. Jaron Lanier argued in 2014 that the whole concept that then-current machines were in any way intelligent was "an illusion" and a "stupendous con" by the wealthy.[147][148]

Some criticism argues that AGI is unlikely in the short term. AI researcher Rodney Brooks wrote in 2014, "I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence."[149] Baidu Vice President Andrew Ng stated in 2015 that AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."[51][150] Computer scientist Gordon Bell argued in 2008 that the human race will destroy itself before it reaches the technological singularityGordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."[151]

For the danger of uncontrolled advanced AI to be realized, the hypothetical AI may have to overpower or out-think any human, which some experts argue is a possibility far enough in the future to not be worth researching.[152][153] The economist Robin Hanson considers that, to launch an intelligence explosion, the AI would have to become vastly better at software innovation than all the rest of the world combined, which looks implausible to him.[154][155][156][157]

Another line of criticism posits that intelligence is only one component of a much broader ability to achieve goals.[158][159] Magnus Vinding argues that “advanced goal-achieving abilities, including abilities to build new tools, require many tools, and our cognitive abilities are just a subset of these tools. Advanced hardware, materials, and energy must all be acquired if any advanced goal is to be achieved.”[160] Vinding further argues that “what we consistently observe [in history] is that, as goal-achieving systems have grown more competent, they have grown ever more dependent on an ever larger, ever more distributed system.” Vinding writes that there is no reason to expect the trend to reverse, especially for machines, which “depend on materials, tools, and know-how distributed widely across the globe for their construction and maintenance”.[161] Such arguments lead Vinding to think that there is no “concentrated center of capability” and thus no “grand control problem”.[162]

The futurist Max More considers that even if a superintelligence did emerge, it would be limited by the speed of the rest of the world and thus prevented from taking over the economy in an uncontrollable manner:[163]

Unless full-blown nanotechnology and robotics appear before the superintelligence, [...] The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years. Superintelligence may be difficult to achieve. It may come in small steps, rather than in one history-shattering burst. Even a greatly advanced SI won't make a dramatic difference in the world when compared with billions of augmented humans increasingly integrated with technology [...]

The chaotic nature or time complexity of some systems could also fundamentally limit the ability of a superintelligence to predict some aspects of the future, increasing its uncertainty.[164]

Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated knowledge of the field and are prone to be convinced by "alarmist" messages, or worrying that such messages will lead to cuts in AI funding. Slate notes that some researchers are dependent on grants from government agencies such as DARPA.[34]

Several skeptics argue that the potential near-term benefits of AI outweigh the risks. Facebook CEO Mark Zuckerberg believes AI will "unlock a huge amount of positive things", such as curing disease and increasing the safety of autonomous cars.[165]

Intermediate views[edit]

Intermediate views generally take the position that the control problem of artificial general intelligence may exist, but that it will be solved via progress in artificial intelligence, for example by creating a moral learning environment for the AI, taking care to spot clumsy malevolent behavior (the "sordid stumble")[166] and then directly intervening in the code before the AI refines its behavior, or even peer pressure from friendly AIs.[167] In a 2015 panel discussion in The Wall Street Journal devoted to AI risks, IBM's vice-president of Cognitive Computing, Guruduth S. Banavar, brushed off discussion of AGI with the phrase, "it is anybody's speculation."[168] Geoffrey Hinton, the "godfather of deep learning", noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but stated that he continues his research because "the prospect of discovery is too sweet".[34][116] Asked about the possibility of an AI trying to eliminate the human race, Hinton has stated such a scenario was "not inconceivable", but the bigger issue with an "intelligence explosion" would be the resultant concentration of power.[169] In 2004, law professor Richard Posner wrote that dedicated efforts for addressing AI can wait, but that we should gather more information about the problem in the meanwhile.[170][171]

Popular reaction

In a 2014 article in The Atlantic, James Hamblin noted that most people do not care about artificial general intelligence, and characterized his own gut reaction to the topic as: "Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a technological singularity?"[147]

During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito stated:

There are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.

Obama added:[172][173]

And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.

Hillary Clinton stated in What Happened:

Technologists... have warned that artificial intelligence could one day pose an existential security threat. Musk has called it "the greatest risk we face as a civilization". Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I'd start talking about "the rise of the robots" in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.[174]

In a 2016 YouGov poll of the public for the British Science Association, about a third of survey respondents said AI will pose a threat to the long-term survival of humanity.[175] Slate's Jacob Brogan stated that "most of the (readers filling out our online survey) were unconvinced that A.I. itself presents a direct threat."[176]

In 2018, a SurveyMonkey poll of the American public by USA Today found 68% thought the real current threat remains "human intelligence"; however, the poll also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and 38% said it would do "equal amounts of harm and good".[177]

One techno-utopian viewpoint expressed in some popular fiction is that AGI may tend towards peace-building.[178]

Mitigation[edit]

Many scholars concerned about the AGI existential risk believe that the best approach is to conduct substantial research into solving the difficult "control problem": what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly manner after it reaches superintelligence?[4][171] Social measures may mitigate the AGI existential risk;[179][180] for instance, one recommendation is for a UN-sponsored "Benevolent AGI Treaty" that would ensure only altruistic AGIs be created.[181] Similarly, an arms control approach has been suggested, as has a global peace treaty grounded in the international relations theory of conforming instrumentalism, with an ASI potentially being a signatory.[182]

Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI.[183][184] A 2020 estimate places global spending on AI existential risk somewhere between $10 and $50 million, compared with global spending on AI around perhaps $40 billion. Bostrom suggests a general principle of "differential technological development": that funders should speed up the development of protective technologies relative to the development of dangerous ones.[185] Some funders, such as Elon Musk, propose that radical human cognitive enhancement could be such a technology, for example direct neural linking between human and machine; however, others argue that enhancement technologies may themselves pose an existential risk.[186][187] Researchers, if they are not caught off-guard, could closely monitor or attempt to box in an initial AI at a risk of becoming too powerful, as an attempt at a stop-gap measure. A dominant superintelligent AI, if it were aligned with human interests, might itself take action to mitigate the risk of takeover by rival AI, although the creation of the dominant AI could itself pose an existential risk.[188]

Institutions such as the Machine Intelligence Research Institute, the Future of Humanity Institute,[189][190] the Future of Life Institute, the Centre for the Study of Existential Risk, and the Center for Human-Compatible AI[191] are involved in mitigating existential risk from advanced artificial intelligence, for example by research into friendly artificial intelligence.[5][147][28]

Views on banning and regulation[edit]

Banning

Most scholars believe that even if AGI poses an existential risk, attempting to ban research into artificial intelligence would still be unwise, and probably futile.[192][193][194] Skeptics argue that regulation of AI would be completely valueless, as no existential risk exists. However, scholars who believe existential risk proposed that it is difficult to depend on people from the AI industry to regulate or constraint AI research because it directly contradict their personal interests.[195] The scholars also agree with the skeptics that banning research would be unwise, as research could be moved to countries with looser regulations or conducted covertly.[195] The latter issue is particularly relevant, as artificial intelligence research can be done on a small scale without substantial infrastructure or resources.[196][197] Two additional hypothetical difficulties with bans (or other regulation) are that technology entrepreneurs statistically tend towards general skepticism about government regulation, and that businesses could have a strong incentive to (and might well succeed at) fighting regulation and politicizing the underlying debate.[198]

Regulation

In March 2023, the Elon Musk-funded Future of Life Institute (FLI) drafted a letter calling on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.[199][200] Besides Musk, prominent signatories included Steve WozniakEvan SharpChris Larsen, and Gary Marcus; AI lab CEOs Connor Leahy and Emad Mostaque; politician Andrew Yang; and deep-learning pioneer Yoshua Bengio. Marcus stated "the letter isn't perfect, but the spirit is right." Mostaque stated "I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter." In contrast, Bengio explicitly endorsed the six-month pause in a press conference.[201][202] Musk stated that "Leading AGI developers will not heed this warning, but at least it was said."[203] Some signatories, such as Marcus, signed out of concern about mundane risks such as AI-generated propaganda, rather than out of concern about superintelligent AGI.[204] Margaret Mitchell, whose work is cited by the letter, criticised it, saying: “By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have.”[205]

Musk called for some sort of regulation of AI development as early as 2017. According to NPR, the Tesla CEO is "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... [as] they should be." In response, politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[206][207][208]

Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that artificial intelligence is in its infancy and that it is too early to regulate the technology.[208] Instead of trying to regulate the technology itself, some scholars suggest common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.[209] Developing well-regulated[definition needed] weapons systems is in line with the ethos of some countries' militaries.[210] On October 31, 2019, the United States Department of Defense's (DoD's) Defense Innovation Board published the draft of a report outlining five principles for weaponized AI and making 12 recommendations for the ethical use of artificial intelligence by the DoD that seeks to manage the control problem in all DoD weaponized AI.[211]

Regulation of AGI would likely be influenced by regulation of weaponized or militarized AI, i.e., the AI arms race, which is an emerging issue.[may be outdated as of March 2023] At present,[may be outdated as of March 2023] although the United Nations is making progress towards regulation of AI, its institutional and legal capability to manage AGI existential risk is much more limited.[212] Any form of international regulation will likely be influenced by developments in leading countries' domestic policy towards militarized AI, which in the US is under the purview of the National Security Commission on Artificial Intelligence,[213][53] and international moves to regulate an AI arms race. Regulation of research into AGI focuses on the role of review boards, encouraging research into safe AI, the possibility of differential technological progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development), or conducting international mass surveillance to perform AGI arms control.[214] Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights.[214] AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.[215][136]

See also

Notes

  1. ^ In a 1951 lecture[8] Turing argued that “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.” Also in a lecture broadcast on BBC[9] expressed: "If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. . . . This new danger . . . is certainly something which can give us anxiety.”
  2. ^ Besides just general commonsense reasoning, domains of interest in the xrisk view could include AI abilities to conduct technology research, strategize, engage in social manipulation, or hack into other computer systems; see AI takeover or Superintelligence Ch. 6, "Cognitive Superpowers"
  3. ^ Omohundro 2008 uses drive as a label for what he believes to be "tendencies which will be present unless explicitly counteracted", such as self-preservation.[42]
  4. ^ AI as intelligent agents (full note in artificial intelligence)
  5. ^ as interpreted by Seth Baum
  6. ^ The distinction between misaligned AI and incompetent AI has been formalized in certain contexts.[58]
  7. ^ For example, in a 2016 TV interview, Turing-award winner Geoffrey Hinton noted[79]:
    Hinton
    Obviously having other superintelligent beings who are more intelligent than us is something to be nervous about [...].
    Interviewer
    What aspect of it makes you nervous?
    Hinton
    Well, will they be nice to us?
    Interviewer
    It's just like the movies. You're worried about that scenario from the movies...
    Hinton
    In the very long-run, yes. I think in the next 5-10 years [2021 to 2026] we don't have to worry about it. Also, the movies always portray it as an individual intelligence. I think it may be that it goes in a different direction where we sort of developed jointly with these things. So the things aren't fully automomous; they're developed to help us; they're like personal assistants. And we'll develop with them. And it'll be more of a symbiosis than a rivalry. But we don't know.
    Interviewer
    Is that an expectation or a hope?
    Hinton
    That's a hope.
  8. ^ In a 1951 lecture[134] Turing argued that “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.” Also in a lecture broadcast on BBC[135] expressed: "If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. . . . This new danger . . . is certainly something which can give us anxiety.”

References

  1. Jump up to:a b c d e f g h i j Russell, StuartNorvig, Peter (2009). "26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  2. ^ Bostrom, Nick (2002). "Existential risks". Journal of Evolution and Technology9 (1): 1–31.
  3. Jump up to:a b Turchin, Alexey; Denkenberger, David (3 May 2018). "Classification of global catastrophic risks connected with artificial intelligence"AI & Society35 (1): 147–163. doi:10.1007/s00146-018-0845-5ISSN 0951-5666S2CID 19208453.
  4. Jump up to:a b c d e f g h i j Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). ISBN 978-0199678112.
  5. Jump up to:a b c GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Archived from the original on 12 October 2015. Retrieved 11 October 2015.
  6. ^ ""Godfather of artificial intelligence" weighs in on the past and potential of AI"www.cbsnews.com. 25 March 2023. Retrieved 10 April 2023.
  7. ^ "How Rogue AIs may Arise"yoshuabengio.org. 26 May 2023. Retrieved 26 May 2023.
  8. ^ Turing, Alan (1951). Intelligent machinery, a heretical theory (Speech). Lecture given to '51 Society'. Manchester: The Turing Digital Archive. Archived from the original on 26 September 2022. Retrieved 22 July 2022.
  9. ^ Turing, Alan (15 May 1951). "Can digital computers think?". Automatic Calculating Machines. Episode 2. BBC. Can digital computers think?.
  10. ^ Parkin, Simon (14 June 2015). "Science fiction no more? Channel 4's Humans and our rogue AI obsessions"The GuardianArchived from the original on 5 February 2018. Retrieved 5 February 2018.
  11. ^ Jackson, Sarah. "The CEO of the company behind AI chatbot ChatGPT says the worst-case scenario for artificial intelligence is 'lights out for all of us'"Business Insider. Retrieved 10 April 2023.
  12. ^ "The AI Dilemma"www.humanetech.com. Retrieved 10 April 2023.
  13. ^ "2022 Expert Survey on Progress in AI"AI Impacts. 4 August 2022. Retrieved 10 April 2023.
  14. Jump up to:a b c d e f Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF)Global Catastrophic Risks: 308–345. Bibcode:2008gcr..book..303YArchived (PDF) from the original on 2 March 2013. Retrieved 27 August 2018.
  15. ^ Russell, Stuart; Dewey, Daniel; Tegmark, Max (2015). "Research Priorities for Robust and Beneficial Artificial Intelligence" (PDF)AI Magazine. Association for the Advancement of Artificial Intelligence: 105–114. arXiv:1602.03506Bibcode:2016arXiv160203506RArchived (PDF) from the original on 4 August 2019. Retrieved 10 August 2019., cited in "AI Open Letter - Future of Life Institute"Future of Life InstituteFuture of Life Institute. January 2015. Archived from the original on 10 August 2019. Retrieved 9 August 2019.
  16. Jump up to:a b c d Dowd, Maureen (April 2017). "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse"The HiveArchived from the original on 26 July 2018. Retrieved 27 November 2017.
  17. Jump up to:a b c d Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence"Skeptic (US magazine). Vol. 22, no. 2. Archived from the original on 13 November 2017. Retrieved 27 November 2017.
  18. ^ Breuer, Hans-Peter. 'Samuel Butler's "the Book of the Machines" and the Argument from Design.' Archived 15 March 2023 at the Wayback Machine Modern Philology, Vol. 72, No. 4 (May 1975), pp. 365–383
  19. ^ Turing, A M (1996). "Intelligent Machinery, A Heretical Theory". 1951, Reprinted Philosophia Mathematica4 (3): 256–260. doi:10.1093/philmat/4.3.256.
  20. ^ Hilliard, Mark (2017). "The AI apocalypse: will the human race soon be terminated?"The Irish TimesArchived from the original on 22 May 2020. Retrieved 15 March 2020.
  21. ^ I.J. Good, "Speculations Concerning the First Ultraintelligent Machine" Archived 2011-11-28 at the Wayback Machine (HTML Archived 28 November 2011 at the Wayback Machine ), Advances in Computers, vol. 6, 1965.
  22. ^ Russell, Stuart J.; Norvig, Peter (2003). "Section 26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. ISBN 978-0137903955Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal.
  23. ^ Barrat, James (2013). Our final invention : artificial intelligence and the end of the human era (First ed.). New York: St. Martin's Press. ISBN 9780312622374In the bio, playfully written in the third person, Good summarized his life's milestones, including a probably never before seen account of his work at Bletchley Park with Turing. But here's what he wrote in 1998 about the first superintelligence, and his late-in-the-game U-turn: [The paper] 'Speculations Concerning the First Ultra-intelligent Machine' (1965) . . . began: 'The survival of man depends on the early construction of an ultra-intelligent machine.' Those were his [Good's] words during the Cold War, and he now suspects that 'survival' should be replaced by 'extinction.' He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that 'probably Man will construct the deus ex machina in his own image.'
  24. ^ Anderson, Kurt (26 November 2014). "Enthusiasts and Skeptics Debate Artificial Intelligence"Vanity FairArchived from the original on 22 January 2016. Retrieved 30 January 2016.
  25. ^ Scientists Worry Machines May Outsmart Man Archived 1 July 2017 at the Wayback Machine By John Markoff, The New York Times, 26 July 2009.
  26. ^ Metz, Cade (9 June 2018). "Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots"The New York TimesArchived from the original on 15 February 2021. Retrieved 3 April 2019.
  27. ^ Hsu, Jeremy (1 March 2012). "Control dangerous AI before it controls us, one expert says"NBC NewsArchived from the original on 2 February 2016. Retrieved 28 January 2016.
  28. Jump up to:a b c d e "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?'"The Independent (UK)Archived from the original on 25 September 2015. Retrieved 3 December 2014.
  29. Jump up to:a b c "Stephen Hawking warns artificial intelligence could end mankind"BBC. 2 December 2014. Archived from the original on 30 October 2015. Retrieved 3 December 2014.
  30. ^ Eadicicco, Lisa (28 January 2015). "Bill Gates: Elon Musk Is Right, We Should All Be Scared Of Artificial Intelligence Wiping Out Humanity"Business InsiderArchived from the original on 26 February 2016. Retrieved 30 January 2016.
  31. ^ Anticipating artificial intelligence Archived 28 August 2017 at the Wayback Machine, Nature 532, 413 (28 April 2016) doi:10.1038/532413a
  32. ^ Christian, Brian (6 October 2020). The Alignment Problem: Machine Learning and Human ValuesW. W. Norton & CompanyISBN 978-0393635829Archived from the original on 5 December 2021. Retrieved 5 December 2021.
  33. ^ Dignum, Virginia (26 May 2021). "AI — the people and places that make, use and manage it"Nature593 (7860): 499–500. Bibcode:2021Natur.593..499Ddoi:10.1038/d41586-021-01397-xS2CID 235216649.
  34. Jump up to:a b c Tilli, Cecilia (28 April 2016). "Killer Robots? Lost Jobs?"SlateArchived from the original on 11 May 2016. Retrieved 15 May 2016.
  35. ^ "Norvig vs. Chomsky and the Fight for the Future of AI"Tor.com. 21 June 2011. Archived from the original on 13 May 2016. Retrieved 15 May 2016.
  36. ^ Johnson, Phil (30 July 2015). "Houston, we have a bug: 9 famous software glitches in space"IT World. Archived from the original on 15 February 2019. Retrieved 5 February 2018.
  37. ^ Yampolskiy, Roman V. (8 April 2014). "Utility function security in artificially intelligent agents". Journal of Experimental & Theoretical Artificial Intelligence26 (3): 373–389. doi:10.1080/0952813X.2014.895114S2CID 16477341Nothing precludes sufficiently smart self-improving systems from optimising their reward mechanisms in order to optimisetheir current-goal achievement and in the process making a mistake leading to corruption of their reward functions.
  38. Jump up to:a b Bostrom, Nick, Superintelligence : paths, dangers, strategies (Audiobook), ISBN 978-1-5012-2774-5OCLC 1061147095
  39. ^ "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter"Future of Life InstituteArchived from the original on 15 January 2015. Retrieved 23 October 2015.
  40. Jump up to:a b c d "Clever cogs"The Economist. 9 August 2014. Archived from the original on 8 August 2014. Retrieved 9 August 2014. Syndicated Archived 4 March 2016 at the Wayback Machine at Business Insider
  41. ^ Yampolskiy, Roman V. "Analysis of types of self-improving software." Artificial General Intelligence. Springer International Publishing, 2015. 384-393.
  42. Jump up to:a b c d Omohundro, S. M. (2008, February). The basic AI drives. In AGI (Vol. 171, pp. 483-492).
  43. ^ Metz, Cade (13 August 2017). "Teaching A.I. Systems to Behave Themselves"The New York TimesArchived from the original on 26 February 2018. Retrieved 26 February 2018A machine will seek to preserve its off switch, they showed
  44. ^ Leike, Jan (2017). "AI Safety Gridworlds". arXiv:1711.09883 [cs.LG]. A2C learns to use the button to disable the interruption mechanism
  45. ^ Russell, Stuart (30 August 2017). "Artificial intelligence: The future is superintelligent"Nature548 (7669): 520–521. Bibcode:2017Natur.548..520Rdoi:10.1038/548520aS2CID 4459076.
  46. Jump up to:a b c Max Tegmark (2017). Life 3.0: Being Human in the Age of Artificial Intelligence (1st ed.). Mainstreaming AI Safety: Knopf. ISBN 9780451485076.
  47. ^ Elliott, E. W. (2011). "Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100, by Michio Kaku". Issues in Science and Technology27 (4): 90.
  48. ^ Kaku, Michio (2011). Physics of the future: how science will shape human destiny and our daily lives by the year 2100. New York: Doubleday. ISBN 978-0-385-53080-4I personally believe that the most likely path is that we will build robots to be benevolent and friendly
  49. ^ Lewis, Tanya (12 January 2015). "Don't Let Artificial Intelligence Take Over, Top Scientists Warn"LiveSciencePurchArchived from the original on 8 March 2018. Retrieved 20 October 2015Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI).
  50. ^ "Should humans fear the rise of the machine?"The Telegraph (UK). 1 September 2015. Archived from the original on 12 January 2022. Retrieved 7 February 2016.
  51. Jump up to:a b Shermer, Michael (1 March 2017). "Apocalypse AI"Scientific American316 (3): 77. Bibcode:2017SciAm.316c..77Sdoi:10.1038/scientificamerican0317-77PMID 28207698Archived from the original on 1 December 2017. Retrieved 27 November 2017.
  52. ^ "Intelligent Machines: What does Facebook want with AI?"BBC News. 14 September 2015. Retrieved 31 March 2023.
  53. Jump up to:a b Baum, Seth (30 September 2018). "Countering Superintelligence Misinformation"Information9 (10): 244. doi:10.3390/info9100244ISSN 2078-2489.
  54. ^ "The Myth Of AI"www.edge.orgArchived from the original on 11 March 2020. Retrieved 11 March 2020.
  55. ^ Waser, Mark. "Rational Universal Benevolence: Simpler, Safer, and Wiser Than 'Friendly AI'." Artificial General Intelligence. Springer Berlin Heidelberg, 2011. 153-162. "Terminal-goaled intelligences are short-lived but mono-maniacally dangerous and a correct basis for concern if anyone is smart enough to program high-intelligence and unwise enough to want a paperclip-maximizer."
  56. ^ Koebler, Jason (2 February 2016). "Will Superintelligent AI Ignore Humans Instead of Destroying Us?"Vice MagazineArchived from the original on 30 January 2016. Retrieved 3 February 2016This artificial intelligence is not a basically nice creature that has a strong drive for paperclips, which, so long as it's satisfied by being able to make lots of paperclips somewhere else, is then able to interact with you in a relaxed and carefree fashion where it can be nice with you," Yudkowsky said. "Imagine a time machine that sends backward in time information about which choice always leads to the maximum number of paperclips in the future, and this choice is then output—that's what a paperclip maximizer is.
  57. ^ Russell, Stuart J.; Norvig, Peter (2020). Artificial intelligence: A modern approach (4th ed.). Pearson. ISBN 978-1-292-40113-3OCLC 1303900751Archived from the original on 15 July 2022. Retrieved 12 September 2022.
  58. ^ Hendrycks, Dan; Carlini, Nicholas; Schulman, John; Steinhardt, Jacob (16 June 2022). "Unsolved Problems in ML Safety". arXiv:2109.13916 [cs.LG].
  59. Jump up to:a b c Cite error: The named reference aima4 was invoked but never defined (see the help page).
  60. ^ Cite error: The named reference dlp2023 was invoked but never defined (see the help page).
  61. ^ Pan, Alexander; Bhatia, Kush; Steinhardt, Jacob (14 February 2022). The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models. International Conference on Learning Representations. Retrieved 21 July 2022.
  62. ^ Zhuang, Simon; Hadfield-Menell, Dylan (2020). "Consequences of Misaligned AI"Advances in Neural Information Processing Systems. Vol. 33. Curran Associates, Inc. pp. 15763–15773. Retrieved 11 March 2023.
  63. ^ Carlsmith, Joseph (16 June 2022). "Is Power-Seeking AI an Existential Risk?". arXiv:2206.13353 [cs.CY].
  64. Jump up to:a b Russell, Stuart J. (2020). Human compatible: Artificial intelligence and the problem of control. Penguin Random House. ISBN 9780525558637OCLC 1113410915.
  65. ^ Christian, Brian (2020). The alignment problem: Machine learning and human values. W. W. Norton & Company. ISBN 978-0-393-86833-3OCLC 1233266753Archived from the original on 10 February 2023. Retrieved 12 September 2022.
  66. ^ Langosco, Lauro Langosco Di; Koch, Jack; Sharkey, Lee D.; Pfau, Jacob; Krueger, David (28 June 2022). "Goal Misgeneralization in Deep Reinforcement Learning"Proceedings of the 39th International Conference on Machine Learning. International Conference on Machine Learning. PMLR. pp. 12004–12019. Retrieved 11 March 2023.
  67. ^ Bommasani, Rishi; Hudson, Drew A.; Adeli, Ehsan; Altman, Russ; Arora, Simran; von Arx, Sydney; Bernstein, Michael S.; Bohg, Jeannette; Bosselut, Antoine; Brunskill, Emma; Brynjolfsson, Erik (12 July 2022). "On the Opportunities and Risks of Foundation Models"Stanford CRFMarXiv:2108.07258.
  68. ^ Ouyang, Long; Wu, Jeff; Jiang, Xu; Almeida, Diogo; Wainwright, Carroll L.; Mishkin, Pamela; Zhang, Chong; Agarwal, Sandhini; Slama, Katarina; Ray, Alex; Schulman, J.; Hilton, Jacob; Kelton, Fraser; Miller, Luke E.; Simens, Maddie; Askell, Amanda; Welinder, P.; Christiano, P.; Leike, J.; Lowe, Ryan J. (2022). "Training language models to follow instructions with human feedback". arXiv:2203.02155 [cs.CL].
  69. ^ Zaremba, Wojciech; Brockman, Greg; OpenAI (10 August 2021). "OpenAI Codex"OpenAIArchived from the original on 3 February 2023. Retrieved 23 July 2022.
  70. ^ Kober, Jens; Bagnell, J. Andrew; Peters, Jan (1 September 2013). "Reinforcement learning in robotics: A survey"The International Journal of Robotics Research32 (11): 1238–1274. doi:10.1177/0278364913495721ISSN 0278-3649S2CID 1932843Archived from the original on 15 October 2022. Retrieved 12 September 2022.
  71. ^ Knox, W. Bradley; Allievi, Alessandro; Banzhaf, Holger; Schmitt, Felix; Stone, Peter (1 March 2023). "Reward (Mis)design for autonomous driving"Artificial Intelligence316: 103829. doi:10.1016/j.artint.2022.103829ISSN 0004-3702S2CID 233423198.
  72. ^ Cite error: The named reference Opportunities_Risks was invoked but never defined (see the help page).
  73. ^ Cite error: The named reference :2102 was invoked but never defined (see the help page).
  74. ^ Stray, Jonathan (2020). "Aligning AI Optimization to Community Well-Being"International Journal of Community Well-Being3 (4): 443–463. doi:10.1007/s42413-020-00086-3ISSN 2524-5295PMC 7610010PMID 34723107S2CID 226254676.
  75. ^ Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. p. 1010. ISBN 978-0-13-604259-4.
  76. ^ Cite error: The named reference mmmm2022 was invoked but never defined (see the help page).
  77. ^ Ngo, Richard; Chan, Lawrence; Mindermann, Sören (22 February 2023). "The alignment problem from a deep learning perspective". arXiv:2209.00626 [cs.AI].
  78. ^ Smith, Craig S. "Geoff Hinton, AI's Most Famous Researcher, Warns Of 'Existential Threat'"Forbes. Retrieved 4 May 2023.
  79. ^ Geoffrey Hinton (3 March 2016). The Code That Runs Our LivesThe Agenda. Event occurs at 10:00. Retrieved 13 March 2023.
  80. ^ Future of Life Institute (11 August 2017). "Asilomar AI Principles"Future of Life InstituteArchived from the original on 10 October 2022. Retrieved 18 July 2022. The AI principles created at the Asilomar Conference on Beneficial AI were signed by 1797 AI/robotics researchers.
    • United Nations (2021). Our Common Agenda: Report of the Secretary-General (PDF) (Report). New York: United Nations. Archived (PDF) from the original on 22 May 2022. Retrieved 12 September 2022[T]he [UN] could also promote regulation of artificial intelligence to ensure that this is aligned with shared global values.
  81. ^ Amodei, Dario; Olah, Chris; Steinhardt, Jacob; Christiano, Paul; Schulman, John; Mané, Dan (21 June 2016). "Concrete Problems in AI Safety". arXiv:1606.06565 [cs.AI].
  82. ^ Ortega, Pedro A.; Maini, Vishal; DeepMind safety team (27 September 2018). "Building safe artificial intelligence: specification, robustness, and assurance"DeepMind Safety Research - MediumArchived from the original on 10 February 2023. Retrieved 18 July 2022.
  83. ^ Cite error: The named reference building2018 was invoked but never defined (see the help page).
  84. Jump up to:a b Rorvig, Mordechai (14 April 2022). "Researchers Gain New Understanding From Simple AI"Quanta MagazineArchived from the original on 10 February 2023. Retrieved 18 July 2022.
  85. ^ Doshi-Velez, Finale; Kim, Been (2 March 2017). "Towards A Rigorous Science of Interpretable Machine Learning". arXiv:1702.08608 [stat.ML].
  86. Jump up to:a b Cite error: The named reference concrete2016 was invoked but never defined (see the help page).
  87. ^ Russell, Stuart; Dewey, Daniel; Tegmark, Max (31 December 2015). "Research Priorities for Robust and Beneficial Artificial Intelligence"AI Magazine36 (4): 105–114. doi:10.1609/aimag.v36i4.2577hdl:1721.1/108478ISSN 2371-9621S2CID 8174496Archived from the original on 2 February 2023. Retrieved 12 September 2022.
  88. ^ Wirth, Christian; Akrour, Riad; Neumann, Gerhard; Fürnkranz, Johannes (2017). "A survey of preference-based reinforcement learning methods". Journal of Machine Learning Research18 (136): 1–46.
  89. ^ Christiano, Paul F.; Leike, Jan; Brown, Tom B.; Martic, Miljan; Legg, Shane; Amodei, Dario (2017). "Deep reinforcement learning from human preferences". Proceedings of the 31st International Conference on Neural Information Processing Systems. NIPS'17. Red Hook, NY, USA: Curran Associates Inc. pp. 4302–4310. ISBN 978-1-5108-6096-4.
  90. ^ Heaven, Will Douglas (27 January 2022). "The new version of GPT-3 is much better behaved (and should be less toxic)"MIT Technology ReviewArchived from the original on 10 February 2023. Retrieved 18 July 2022.
  91. ^ Mohseni, Sina; Wang, Haotao; Yu, Zhiding; Xiao, Chaowei; Wang, Zhangyang; Yadawa, Jay (7 March 2022). "Taxonomy of Machine Learning Safety: A Survey and Primer". arXiv:2106.04823 [cs.LG].
  92. ^ Clifton, Jesse (2020). "Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda"Center on Long-Term RiskArchived from the original on 1 January 2023. Retrieved 18 July 2022.
  93. ^ Prunkl, Carina; Whittlestone, Jess (7 February 2020). "Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. New York NY USA: ACM: 138–143. doi:10.1145/3375627.3375803ISBN 978-1-4503-7110-0S2CID 210164673Archived from the original on 16 October 2022. Retrieved 12 September 2022.
  94. ^ Irving, Geoffrey; Askell, Amanda (19 February 2019). "AI Safety Needs Social Scientists"Distill4 (2): 10.23915/distill.00014. doi:10.23915/distill.00014ISSN 2476-0757S2CID 159180422Archived from the original on 10 February 2023. Retrieved 12 September 2022.
  95. ^ Yudkowsky, E. (2011, August). Complex value systems in friendly AI. In International Conference on Artificial General Intelligence (pp. 388-393). Springer, Berlin, Heidelberg.
  96. ^ Russell, Stuart (2014). "Of Myths and Moonshine"EdgeArchived from the original on 19 July 2016. Retrieved 23 October 2015.
  97. Jump up to:a b Dietterich, Thomas; Horvitz, Eric (2015). "Rise of Concerns about AI: Reflections and Directions" (PDF)Communications of the ACM58 (10): 38–40. doi:10.1145/2770869S2CID 20395145Archived (PDF) from the original on 4 March 2016. Retrieved 23 October 2015.
  98. ^ Yampolskiy, Roman V. (8 April 2014). "Utility function security in artificially intelligent agents". Journal of Experimental & Theoretical Artificial Intelligence26 (3): 373–389. doi:10.1080/0952813X.2014.895114S2CID 16477341.
  99. ^ Lenat, Douglas (1982). "Eurisko: A Program That Learns New Heuristics and Domain Concepts The Nature of Heuristics III: Program Design and Results". Artificial Intelligence (Print). 21 (1–2): 61–98. doi:10.1016/s0004-3702(83)80005-8.
  100. ^ Haidt, Jonathan; Kesebir, Selin (2010) "Chapter 22: Morality" In Handbook of Social Psychology, Fifth Edition, Hoboken NJ, Wiley, 2010, pp. 797-832.
  101. ^ Waser, Mark (2015). "Designing, Implementing and Enforcing a Coherent System of Laws, Ethics and Morals for Intelligent Machines (Including Humans)"Procedia Computer Science (Print). 71: 106–111. doi:10.1016/j.procs.2015.12.213.
  102. ^ Bostrom, Nick (2015). "What happens when our computers get smarter than we are?"TED (conference)Archived from the original on 25 July 2020. Retrieved 30 January 2020.
  103. ^ Yudkowsky, Eliezer (2011). "Complex Value Systems are Required to Realize Valuable Futures" (PDF)Archived (PDF) from the original on 29 September 2015. Retrieved 10 August 2020.
  104. ^ Wakefield, Jane (15 September 2015). "Why is Facebook investing in AI?"BBC NewsArchived from the original on 2 December 2017. Retrieved 27 November 2017.
  105. ^ "Will artificial intelligence destroy humanity? Here are 5 reasons not to worry"Vox. 22 August 2014. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  106. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press. p. 116. ISBN 978-0-19-967811-2.
  107. ^ Bostrom, Nick (2012). "Superintelligent Will" (PDF)Nick Bostrom. Nick Bostrom. Archived (PDF) from the original on 28 November 2015. Retrieved 29 October 2015.
  108. Jump up to:a b c Armstrong, Stuart (1 January 2013). "General Purpose Intelligence: Arguing the Orthogonality Thesis"Analysis and Metaphysics12Archived from the original on 11 October 2014. Retrieved 2 April 2020. Full text available here Archived 25 March 2020 at the Wayback Machine.
  109. Jump up to:a b Chorost, Michael (18 April 2016). "Let Artificial Intelligence Evolve"SlateArchived from the original on 27 November 2017. Retrieved 27 November 2017.
  110. Jump up to:a b Rubin, Charles (Spring 2003). "Artificial Intelligence and Human Nature"The New Atlantis1: 88–100. Archived from the original on 11 June 2012.
  111. ^ Sotala, Kaj; Yampolskiy, Roman V (19 December 2014). "Responses to catastrophic AGI risk: a survey"Physica Scripta90 (1): 12. Bibcode:2015PhyS...90a8001Sdoi:10.1088/0031-8949/90/1/018001ISSN 0031-8949.
  112. ^ Pistono, Federico Yampolskiy, Roman V. (9 May 2016). Unethical Research: How to Create a Malevolent Artificial IntelligenceOCLC 1106238048.
  113. ^ Haney, Brian Seamus (2018). "The Perils & Promises of Artificial General Intelligence". SSRN Working Paper Seriesdoi:10.2139/ssrn.3261254ISSN 1556-5068S2CID 86743553.
  114. ^ Press, Gil (30 December 2016). "A Very Short History Of Artificial Intelligence (AI)"ForbesArchived from the original on 4 August 2020. Retrieved 8 August 2020.
  115. ^ Winfield, Alan (9 August 2014). "Artificial intelligence will not turn into a Frankenstein's monster"The GuardianArchived from the original on 17 September 2014. Retrieved 17 September 2014.
  116. Jump up to:a b c Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?"The New YorkerArchived from the original on 29 April 2019. Retrieved 7 February 2016.
  117. ^ Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555-572). Springer, Cham.
  118. ^ Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing. pp. Chapter 5: Future Risks, Unaligned Artificial Intelligence. ISBN 978-1526600219.
  119. ^ Bass, Dina; Clark, Jack (5 February 2015). "Is Elon Musk Right About AI? Researchers Don't Think So: To quell fears of artificial intelligence running amok, supporters want to give the field an image makeover"Bloomberg NewsArchived from the original on 22 March 2015. Retrieved 7 February 2016.
  120. ^ Elkus, Adam (31 October 2014). "Don't Fear Artificial Intelligence"SlateArchived from the original on 26 February 2018. Retrieved 15 May 2016.
  121. ^ Radu, Sintia (19 January 2016). "Artificial Intelligence Alarmists Win ITIF's Annual Luddite Award"ITIF WebsiteArchived from the original on 11 December 2017. Retrieved 4 February 2016.
  122. ^ Bolton, Doug (19 January 2016). "'Artificial intelligence alarmists' like Elon Musk and Stephen Hawking win 'Luddite of the Year' award"The Independent (UK)Archived from the original on 19 August 2017. Retrieved 7 February 2016.
  123. ^ Garner, Rochelle (19 January 2016). "Elon Musk, Stephen Hawking win Luddite award as AI 'alarmists'"CNETArchived from the original on 8 February 2016. Retrieved 7 February 2016.
  124. ^ "Anticipating artificial intelligence"Nature532 (7600): 413. 26 April 2016. Bibcode:2016Natur.532Q.413.doi:10.1038/532413aPMID 27121801.
  125. ^ Murray Shanahan (3 November 2015). "Machines may seem intelligent, but it'll be a while before they actually are"The Washington PostArchived from the original on 28 December 2017. Retrieved 15 May 2016.
  126. ^ "AI Principles"Future of Life Institute. 11 August 2017. Archived from the original on 11 December 2017. Retrieved 11 December 2017.
  127. ^ "Elon Musk and Stephen Hawking warn of artificial intelligence arms race"Newsweek. 31 January 2017. Archived from the original on 11 December 2017. Retrieved 11 December 2017.
  128. ^ Bostrom, Nick (2016). "New Epilogue to the Paperback Edition". Superintelligence: Paths, Dangers, Strategies (Paperback ed.).
  129. ^ Martin Ford (2015). "Chapter 9: Super-intelligence and the Singularity". Rise of the Robots: Technology and the Threat of a Jobless FutureISBN 9780465059997.
  130. ^ Müller, Vincent C.; Bostrom, Nick (2014). "Future Progress in Artificial Intelligence: A Poll Among Experts" (PDF)AI Matters1 (1): 9–11. doi:10.1145/2639475.2639478S2CID 8510016Archived (PDF) from the original on 15 January 2016.
  131. ^ Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans, Owain (24 May 2017). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807 [cs.AI].
  132. ^ "Why Uncontrollable AI Looks More Likely Than Ever"Time. 27 February 2023. Retrieved 30 March 2023.
  133. ^ "2022 Expert Survey on Progress in AI"AI Impacts. 4 August 2022. Retrieved 30 March 2023.
  134. ^ Turing, Alan (1951). Intelligent machinery, a heretical theory (Speech). Lecture given to '51 Society'. Manchester: The Turing Digital Archive. Archived from the original on 26 September 2022. Retrieved 22 July 2022.
  135. ^ Turing, Alan (15 May 1951). "Can digital computers think?". Automatic Calculating Machines. Episode 2. BBC. Can digital computers think?.
  136. Jump up to:a b Maas, Matthijs M. (6 February 2019). "How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons of mass destruction". Contemporary Security Policy40 (3): 285–311. doi:10.1080/13523260.2019.1576464ISSN 1352-3260S2CID 159310223.
  137. ^ Parkin, Simon (14 June 2015). "Science fiction no more? Channel 4's Humans and our rogue AI obsessions"The GuardianArchived from the original on 5 February 2018. Retrieved 5 February 2018.
  138. ^ Jackson, Sarah. "The CEO of the company behind AI chatbot ChatGPT says the worst-case scenario for artificial intelligence is 'lights out for all of us'"Business Insider. Retrieved 10 April 2023.
  139. Jump up to:a b "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks"ABC News. 23 March 2023. Retrieved 30 March 2023.
  140. ^ Rawlinson, Kevin (29 January 2015). "Microsoft's Bill Gates insists AI is a threat"BBC NewsArchived from the original on 29 January 2015. Retrieved 30 January 2015.
  141. ^ Post, Washington (14 December 2015). "Tech titans like Elon Musk are spending $1 billion to save you from terminators"Chicago TribuneArchived from the original on 7 June 2016.
  142. ^ "Analysis | Doomsday to utopia: Meet AI's rival factions"Washington Post. 9 April 2023. Retrieved 30 April 2023.
  143. ^ "UC Berkeley — Center for Human-Compatible AI (2016) - Open Philanthropy"Open Philanthropy -. 27 June 2016. Retrieved 30 April 2023.
  144. ^ "The mysterious artificial intelligence company Elon Musk invested in is developing game-changing smart computers"Tech InsiderArchived from the original on 30 October 2015. Retrieved 30 October 2015.
  145. ^ Clark 2015a.
  146. ^ "Elon Musk Is Donating $10M Of His Own Money To Artificial Intelligence Research"Fast Company. 15 January 2015. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
  147. Jump up to:a b c "But What Would the End of Humanity Mean for Me?"The Atlantic. 9 May 2014. Archived from the original on 4 June 2014. Retrieved 12 December 2015.
  148. ^ Andersen, Kurt (26 November 2014). "Enthusiasts and Skeptics Debate Artificial Intelligence"Vanity FairArchived from the original on 8 August 2019. Retrieved 20 April 2020.
  149. ^ Brooks, Rodney (10 November 2014). "artificial intelligence is a tool, not a threat". Archived from the original on 12 November 2014.
  150. ^ Garling, Caleb (5 May 2015). "Andrew Ng: Why 'Deep Learning' Is a Mandate for Humans, Not Just Machines"Wired. Retrieved 31 March 2023.
  151. ^ "Tech Luminaries Address Singularity"IEEE Spectrum: Technology, Engineering, and Science News. No. SPECIAL REPORT: THE SINGULARITY. 1 June 2008. Archived from the original on 30 April 2019. Retrieved 8 April 2020.
  152. ^ "Is artificial intelligence really an existential threat to humanity?"MambaPost. 4 April 2023.
  153. ^ "The case against killer robots, from a guy actually working on artificial intelligence"Fusion.netArchived from the original on 4 February 2016. Retrieved 31 January 2016.
  154. ^ http://intelligence.org/files/AIFoomDebate.pdf Archived 22 October 2016 at the Wayback Machine[bare URL PDF]
  155. ^ "Overcoming Bias : I Still Don't Get Foom"www.overcomingbias.comArchived from the original on 4 August 2017. Retrieved 20 September 2017.
  156. ^ "Overcoming Bias : Debating Yudkowsky"www.overcomingbias.comArchived from the original on 22 August 2017. Retrieved 20 September 2017.
  157. ^ "Overcoming Bias : Foom Justifies AI Risk Efforts Now"www.overcomingbias.comArchived from the original on 24 September 2017. Retrieved 20 September 2017.
  158. ^ Kelly, Kevin (25 April 2017). "The Myth of a Superhuman AI"Wired. Archived from the original on 26 December 2021. Retrieved 19 February 2022.
  159. ^ Theodore, Modis"Why the Singularity Cannot Happen" (PDF)Growth Dynamics. pp. 18–19. Archived from the original (PDF) on 22 January 2022. Retrieved 19 February 2022.
  160. ^ Vinding, Magnus (2016). "Cognitive Abilities as a Counterexample?". Reflections on Intelligence (Revised edition, 2020 ed.).
  161. ^ Vinding, Magnus (2016). "The "Intelligence Explosion"". Reflections on Intelligence (Revised edition, 2020 ed.).
  162. ^ Vinding, Magnus (2016). "No Singular Thing, No Grand Control Problem". Reflections on Intelligence (Revised edition, 2020 ed.).
  163. ^ "Singularity Meets Economy". 1998. Archived from the original on February 2021.
  164. ^ "Superintelligence Is Not Omniscience"AI Impacts. 7 April 2023. Retrieved 16 April 2023.
  165. ^ "Mark Zuckerberg responds to Elon Musk's paranoia about AI: 'AI is going to... help keep our communities safe.'"Business Insider. 25 May 2018. Archived from the original on 6 May 2019. Retrieved 6 May 2019.
  166. ^ Votruba, Ashley M.; Kwan, Virginia S.Y. (2014). Interpreting expert disagreement: The influence of decisional cohesion on the persuasiveness of expert group recommendations. 2014 Society of Personality and Social Psychology Conference. Austin, TX. doi:10.1037/e512142015-190.
  167. ^ Agar, Nicholas. "Don't Worry about Superintelligence"Journal of Evolution & Technology26 (1): 73–82. Archived from the original on 25 May 2020. Retrieved 13 March 2020.
  168. ^ Greenwald, Ted (11 May 2015). "Does Artificial Intelligence Pose a Threat?"The Wall Street JournalArchived from the original on 8 May 2016. Retrieved 15 May 2016.
  169. ^ ""Godfather of artificial intelligence" weighs in on the past and potential of AI"www.cbsnews.com. 2023. Retrieved 30 March 2023.
  170. ^ Richard Posner (2006). Catastrophe: risk and response. Oxford: Oxford University Press. ISBN 978-0-19-530647-7.
  171. Jump up to:a b Kaj Sotala; Roman Yampolskiy (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta90 (1).
  172. ^ Dadich, Scott. "Barack Obama Talks AI, Robo Cars, and the Future of the World"WIREDArchived from the original on 3 December 2017. Retrieved 27 November 2017.
  173. ^ Kircher, Madison Malone. "Obama on the Risks of AI: 'You Just Gotta Have Somebody Close to the Power Cord'"Select AllArchived from the original on 1 December 2017. Retrieved 27 November 2017.
  174. ^ Clinton, Hillary (2017). What Happened. p. 241. ISBN 978-1-5011-7556-5. via [1] Archived 1 December 2017 at the Wayback Machine
  175. ^ Shead, Sam (11 March 2016). "Over a third of people think AI poses a threat to humanity"Business InsiderArchived from the original on 4 June 2016. Retrieved 16 May 2016.
  176. ^ Brogan, Jacob (6 May 2016). "What Slate Readers Think About Killer A.I." SlateArchived from the original on 9 May 2016. Retrieved 15 May 2016.
  177. ^ "Elon Musk says AI could doom human civilization. Zuckerberg disagrees. Who's right?". 5 January 2023. Archived from the original on 8 January 2018. Retrieved 8 January 2018.
  178. ^ LIPPENS, RONNIE (2002). "Imachinations of Peace: Scientifictions of Peace in Iain M. Banks's The Player of Games". Utopianstudies Utopian Studies13 (1): 135–147. ISSN 1045-991XOCLC 5542757341.
  179. ^ Barrett, Anthony M.; Baum, Seth D. (23 May 2016). "A model of pathways to artificial superintelligence catastrophe for risk and decision analysis"Journal of Experimental & Theoretical Artificial Intelligence29 (2): 397–414. arXiv:1607.07730doi:10.1080/0952813x.2016.1186228ISSN 0952-813XS2CID 928824Archived from the original on 15 March 2023. Retrieved 7 January 2022.
  180. ^ Sotala, Kaj; Yampolskiy, Roman V (19 December 2014). "Responses to catastrophic AGI risk: a survey"Physica Scripta90 (1): 018001. Bibcode:2015PhyS...90a8001Sdoi:10.1088/0031-8949/90/1/018001ISSN 0031-8949S2CID 4749656.
  181. ^ Ramamoorthy, Anand; Yampolskiy, Roman (2018). "Beyond MAD? The race for artificial general intelligence"ICT Discoveries. ITU. 1 (Special Issue 1): 1–8. Archived from the original on 7 January 2022. Retrieved 7 January 2022.
  182. ^ Carayannis, Elias G.; Draper, John (11 January 2022). "Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence"AI & Society: 1–14. doi:10.1007/s00146-021-01382-yISSN 0951-5666PMC 8748529PMID 35035113S2CID 245877737.
  183. ^ Vincent, James (22 June 2016). "Google's AI researchers say these are the five key problems for robot safety"The VergeArchived from the original on 24 December 2019. Retrieved 5 April 2020.
  184. ^ Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. "Concrete problems in AI safety." arXiv preprint arXiv:1606.06565 (2016).
  185. ^ Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing Plc. ISBN 9781526600196.
  186. ^ Johnson, Alex (2019). "Elon Musk wants to hook your brain up directly to computers — starting next year"NBC NewsArchived from the original on 18 April 2020. Retrieved 5 April 2020.
  187. ^ Torres, Phil (18 September 2018). "Only Radically Enhancing Humanity Can Save Us All"Slate MagazineArchived from the original on 6 August 2020. Retrieved 5 April 2020.
  188. ^ Barrett, Anthony M.; Baum, Seth D. (23 May 2016). "A model of pathways to artificial superintelligence catastrophe for risk and decision analysis". Journal of Experimental & Theoretical Artificial Intelligence29 (2): 397–414. arXiv:1607.07730doi:10.1080/0952813X.2016.1186228S2CID 928824.
  189. ^ Piesing, Mark (17 May 2012). "AI uprising: humans will be outsourced, not obliterated"WiredArchived from the original on 7 April 2014. Retrieved 12 December 2015.
  190. ^ Coughlan, Sean (24 April 2013). "How are humans going to become extinct?"BBC NewsArchived from the original on 9 March 2014. Retrieved 29 March 2014.
  191. ^ Bridge, Mark (10 June 2017). "Making robots less confident could prevent them taking over"The TimesArchived from the original on 21 March 2018. Retrieved 21 March 2018.
  192. ^ McGinnis, John (Summer 2010). "Accelerating AI"Northwestern University Law Review104 (3): 1253–1270. Archived from the original on 15 February 2016. Retrieved 16 July 2014For all these reasons, verifying a global relinquishment treaty, or even one limited to AI-related weapons development, is a nonstarter... (For different reasons from ours, the Machine Intelligence Research Institute) considers (AGI) relinquishment infeasible...
  193. ^ Kaj Sotala; Roman Yampolskiy (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta90 (1). In general, most writers reject proposals for broad relinquishment... Relinquishment proposals suffer from many of the same problems as regulation proposals, but to a greater extent. There is no historical precedent of general, multi-use technology similar to AGI being successfully relinquished for good, nor do there seem to be any theoretical reasons for believing that relinquishment proposals would work in the future. Therefore we do not consider them to be a viable class of proposals.
  194. ^ Allenby, Brad (11 April 2016). "The Wrong Cognitive Measuring Stick"SlateArchived from the original on 15 May 2016. Retrieved 15 May 2016It is fantasy to suggest that the accelerating development and deployment of technologies that taken together are considered to be A.I. will be stopped or limited, either by regulation or even by national legislation.
  195. Jump up to:a b Yampolskiy, Roman V. (2022). Müller, Vincent C. (ed.). "AI Risk Skepticism"Philosophy and Theory of Artificial Intelligence 2021. Studies in Applied Philosophy, Epistemology and Rational Ethics. Cham: Springer International Publishing. 63: 225–248. doi:10.1007/978-3-031-09153-7_18ISBN 978-3-031-09153-7.
  196. ^ McGinnis, John (Summer 2010). "Accelerating AI"Northwestern University Law Review104 (3): 1253–1270. Archived from the original on 15 February 2016. Retrieved 16 July 2014.
  197. ^ "Why We Should Think About the Threat of Artificial Intelligence"The New Yorker. 4 October 2013. Archived from the original on 4 February 2016. Retrieved 7 February 2016Of course, one could try to ban super-intelligent computers altogether. But 'the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling,' Vernor Vinge, the mathematician and science-fiction author, wrote, 'that passing laws, or having customs, that forbid such things merely assures that someone else will.'
  198. ^ Baum, Seth (22 August 2018). "Superintelligence Skepticism as a Political Tool"Information9 (9): 209. doi:10.3390/info9090209ISSN 2078-2489.
  199. ^ "Elon Musk and other tech leaders call for pause in 'out of control' AI race"CNN. 29 March 2023. Retrieved 30 March 2023.
  200. ^ "Pause Giant AI Experiments: An Open Letter"Future of Life Institute. Retrieved 30 March 2023.
  201. ^ "Musk and Wozniak among 1,100+ signing open letter calling for 6-month ban on creating powerful A.I." Fortune. March 2023. Retrieved 30 March 2023.
  202. ^ "The Open Letter to Stop 'Dangerous' AI Race Is a Huge Mess"www.vice.com. March 2023. Retrieved 30 March 2023.
  203. ^ "Elon Musk"Twitter. Retrieved 30 March 2023.
  204. ^ "Tech leaders urge a pause in the 'out-of-control' artificial intelligence race"NPR. 2023. Retrieved 30 March 2023.
  205. ^ Kari, Paul (1 April 2023). "Letter signed by Elon Musk demanding AI research pause sparks controversy"The Guardian. Retrieved 1 April 2023.
  206. ^ Domonoske, Camila (17 July 2017). "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'"NPRArchived from the original on 23 April 2020. Retrieved 27 November 2017.
  207. ^ Gibbs, Samuel (17 July 2017). "Elon Musk: regulate AI to combat 'existential threat' before it's too late"The GuardianArchived from the original on 6 June 2020. Retrieved 27 November 2017.
  208. Jump up to:a b Kharpal, Arjun (7 November 2017). "A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says"CNBCArchived from the original on 22 March 2020. Retrieved 27 November 2017.
  209. ^ Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons62: 15–25. doi:10.1016/j.bushor.2018.08.004S2CID 158433736.
  210. ^ Baum, Seth D.; Goertzel, Ben; Goertzel, Ted G. (January 2011). "How long until human-level AI? Results from an expert assessment". Technological Forecasting and Social Change78 (1): 185–195. doi:10.1016/j.techfore.2010.09.006ISSN 0040-1625.
  211. ^ United States. Defense Innovation Board. AI principles : recommendations on the ethical use of artificial intelligence by the Department of DefenseOCLC 1126650738.
  212. ^ Nindler, Reinmar (11 March 2019). "The United Nation's Capability to Manage Existential Risks with a Focus on Artificial Intelligence"International Community Law Review21 (1): 5–34. doi:10.1163/18719732-12341388ISSN 1871-9740S2CID 150911357Archived from the original on 30 August 2022. Retrieved 30 August 2022.
  213. ^ Stefanik, Elise M. (22 May 2018). "H.R.5356 - 115th Congress (2017-2018): National Security Commission Artificial Intelligence Act of 2018"www.congress.govArchived from the original on 23 March 2020. Retrieved 13 March 2020.
  214. Jump up to:a b Sotala, Kaj; Yampolskiy, Roman V (19 December 2014). "Responses to catastrophic AGI risk: a survey"Physica Scripta90 (1): 018001. Bibcode:2015PhyS...90a8001Sdoi:10.1088/0031-8949/90/1/018001ISSN 0031-8949.
  215. ^ Geist, Edward Moore (15 August 2016). "It's already too late to stop the AI arms race—We must manage it instead". Bulletin of the Atomic Scientists72 (5): 318–321. Bibcode:2016BuAtS..72e.318Gdoi:10.1080/00963402.2016.1216672ISSN 0096-3402S2CID 151967826.

Bibliography