What is Artificial Intelligence (HTML version),
John McCarthy, Stanford University.
An important introductory paper for undergraduate students.
This Web page has links to other versions of John McCarthy's paper.
If the above link is not operational, then you can read
(Local Copy of the March 29, 2003 version): "ps" - PostScript and "pdf" - Acrobat Reader formats.
(Local Copy of the November 12, 2007 version):
Part 1: Basic Questions Part 2: Branches of AI Part 3: Applications Part 4: More Questions Part 5: Bibliography
Gary Marcus (a professor of cognitive science at New York University, USA):
Why can't my computer understand me? (August 16, 2013).
This article was published in the New Yorker magazine to celebrate
the talk presented by Professor
Hector Levesque on the occasion of the Research Excellence Award that
he received in August 2013 at the
premier international conference on artificial intelligence.
Winograd Schema Challenge: AI Common Sense Still a Problem, for Now.
IEEE Spectrum interviewed Charles Ortiz on July 28, 2016. The paper about
the Winograd Schema Challenge, written by
Hector Levesque,
Ernest Davis, and Leora Morgenstern,
was published in the proceedings of the 14th International Conference on
Principles of Knowledge Representation and Reasoning, Vienna, Austria, July 20-24, 2014.
The Turing Test:
Computing Machinery and Intelligence by Alan Turing,
published in "Mind", vol. LIX, N 236, pages 433-460, October, 1950.
Because of advances in software-agent technologies,
nowadays this test has commercial applications, discussed in
this New York Times article (NY Times, December 10, 2002) and in
several other articles.
The recent CAPTCHA project has a goal
of developing electronic tests that can tell humans and computers apart.
Patrick Hayes and Kenneth Ford
Turing Test Considered Harmful. This paper was published in the proceedings of the International Joint Conference on AI (IJCAI-1995), Montreal, Canada, August 20-25, 1995.
Kenneth M. Ford, Patrick J. Hayes, Clark Glymour, James Allen
(from the Florida Institute for Human and Machine Cognition, IHMC).
Cognitive Orthoses: Toward Human-Centered AI. Published in AI Magazine, Winter 2015, Vol 36, No 4, pages 5-8.
Building Watson (a computer that defeated human champions in the Jeopardy! competition):
"An overview of the DeepQA project" by
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David
Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric
Nyberg, John Prager, Nico Schlaefer, Chris Welty from IBM Research.
Published in "AI Magazine", vol 31, N3, 2010, pp. 59-79.
Watson's Jeopardy! Challenge Web site at IBM Research.
Videos linked from the IBM Research Web site discuss some of the technical issues.
Natural Language Processing With Prolog in the IBM Watson System, written by
Adam Lally (IBM) and Paul Fodor (Stony Brook University), was published on March 31, 2011.
PDF version of this article.
Think you have solved question answering? Try the
AI2 Reasoning Challenge (ARC)!
The ARC dataset contains 7,787 genuine grade-school level, multiple-choice
science questions, assembled to encourage research in advanced question-answering.
This extensive benchmark was developed by a research team from the
Allen Institute for Artificial Intelligence (AI2).
This research is described in the paper written by
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord
published in arXiv:1803.05457.
The same research group (Peter Clark, Oren Etzioni and others) published a related paper From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project, arXiv:1909.01958, 11 Sep 2019. They wrote: "Even in 2016, the best AI system achieved merely 59.3% on an 8th Grade science exam challenge. This paper reports unprecedented success on the Grade 8 New York Regents Science Exam, where for the first time a system scores more than 90% on the exam's non-diagram, multiple choice (NDMC) questions. In addition, our Aristo system, building upon the success of recent language models, exceeded 83% on the corresponding Grade 12 Science Exam NDMC questions."
Gideon Lewis-Kraus wrote an article
The Great A.I. Awakening. This is an article about
recent improvements in Google Translate and the so-called
"neural networks" technology that helped improve translation.
Published in the New York Times Magazine on December 14, 2016.
In a more recent article, also published in the New York Times,
Gary Marcus claims that
Artificial Intelligence Is Stuck and then proposes
what needs to be done to move it forward (New York Times, July 29, 2017).
In Spring 2020, the OpenAI lab released a neural-network system, GPT-3, with 175 billion parameters.
In response, Gary Marcus and Ernest Davis wrote this:
GPT-3: OpenAI’s language generator has no idea what it’s talking about.
Published in MIT Technology Review on August 22, 2020.
They draw the following conclusion about GPT-3:
"It’s a fluent spouter of bullshit, but even with 175 billion parameters and 450 gigabytes of input data, it’s not a reliable interpreter of the world."
Gary Marcus is founder and CEO of Robust.AI. He is also a professor emeritus at NYU and the author of five books.
Ernest Davis is a professor of computer science at New York University. He has authored four books.
(Here is a local copy for students
to study and laugh. Yes, it is funny. But it is also sad. You decide.)
October 1, 2020.
Is GPT-3 Intelligent? A conversation of John Etchemendy, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, with Oren Etzioni, CEO of Allen Institute for Artificial Intelligence (AI2), company founder, and Professor Emeritus of computer science at the Univ. of Washington (Seattle, USA).
March 10, 2021.
Emily Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell raise doubts whether the direction taken by BERT and its variants GPT2/3 is the right research direction to pursue: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. This paper was published in the proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. In particular, they comment that large language models "are not performing natural language understanding (NLU), and only have success in tasks that can be approached by manipulating linguistic form".
Geoffrey Hinton (Canadian Institute for Advanced Research, University of Toronto and Google)
received his Research Excellence Award at the IJCAI-2005.
This is the highest honor for research in artificial intelligence.
The very gentle after-dinner version of his lecture
"Can computer simulations of the brain allow us to see into the mind?"
is available as PPT slides, but you also need to download six .avi movies
into the same directory as the PowerPoint file, keeping the same names as
they currently have.
In June 2015, Jürgen Schmidhuber published an online
Critique of Paper by "Deep Learning Conspiracy"
where he critically discussed an article "Deep learning" written by
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton
(published in the journal Nature, v521, p436-444, on 28 May 2015).
Moreover, he published his own
deep learning overview that has been widely discussed and verified
by the machine learning community. His overview provides an unbiased
historical review of research that led to success of
deep learning in neural networks.
Some researchers believe deep learning, with its back-propagation, still
has a core role in AI's future. But on September 15, 2017,
Professor Geoff Hinton said that, to push materially ahead,
entirely new methods will probably have to be invented. Hinton quoted
the great German physicist Max Planck, who said that
"science advances one funeral at a time",
and then Hinton added:
"The future depends on some graduate student who is deeply suspicious of everything I have said".
Gary Marcus (New York University) published a related paper:
Deep Learning: A Critical Appraisal,
arXiv:1801.00631 (submitted on 2 Jan 2018).
Dr. Geoffrey Hinton's 2019 public lecture at the 13th Annual Meeting of the
Canadian Association for Neuroscience,
Does the brain do backpropagation?,
was delivered on Tuesday, May 21, 2019, 6:30-8:00pm, in the SickKids auditorium.
John Launchbury, the Director of DARPA's Information Innovation Office (I2O), discusses the
"three waves of AI" and the capabilities required for AI
to reach its full potential. He outlines three waves of AI research
and explains what AI can do, what it can't do, and where it is headed.
Published on Feb 15, 2017.
Defense Advanced Research Projects Agency (DARPA) Announces $2 Billion Campaign to
Develop Next Wave of AI Technologies, September 7, 2018.
DARPA’s multi-year strategy seeks contextual reasoning in AI systems
to create more trusting, collaborative partnerships between humans and machines.
One of the programs is called
Machine Common Sense.
M. Mitchell Waldrop wrote
"The much-ballyhooed artificial intelligence approach boasts impressive feats
but still falls short of human brainpower. Researchers are determined
to figure out what’s missing."
He published an article
What are the limits of deep learning?
in the Proceedings of the National Academy of Sciences of the USA,
on January 22, 2019, volume 116 (4), pages 1074-1077.
Adnan Darwiche, a professor and former chairman of the computer science department at
the University of California, Los Angeles (UCLA), published a paper
Human-Level Intelligence or Animal-Like Abilities?
in the Communications of the ACM, October 2018, Vol. 61, No. 10, pages 56-67.
Professor Darwiche directs the Automated Reasoning Group at UCLA.
His research interests span probabilistic and symbolic reasoning,
and their applications including machine learning.
Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI)
is a nonprofit scientific society devoted to advancing the scientific understanding of
the mechanisms underlying thought and intelligent behavior and their embodiment in machines.
AAAI-2020 Chat with Daniel Kahneman.
After the Turing event of the previous evening (Sunday, February 9, 2020), there was
a chat with Nobel laureate Daniel Kahneman (on Monday, Feb 10, 2020)
to discuss the present and future of AI and human decision making.
A professor from the Universitat Pompeu Fabra (Barcelona, Spain):
General Solvers for General AI. Published on June 17, 2016.
Reasoning with Cause and Effect,
a research excellence lecture by
Judea Pearl, Univ. of California, Los Angeles.
Theoretical Impediments to Machine Learning
With Seven Sparks from the Causal Revolution, a note written by
Judea Pearl, a professor at the University of California, Los Angeles.
Technical Report R-475, September 2017.
Will Knight published an article with the title
If AI's So Smart, Why Can't It Grasp Cause and Effect?
in Wired on March 9, 2020. He notes: "Deep-learning models can spot patterns
that humans can't. But software still can't explain, say,
what caused one object to collide with another".
This article explains an ongoing project of a professor
at the joint MIT-IBM AI Lab. (His older Web page is
available here.)
Michael I. Jordan, a professor at the Univ. of California, Berkeley (USA), published the paper
Artificial Intelligence—The Revolution Hasn’t Happened Yet
in the Harvard Data Science Review (HDSR), 2019, Issue 1.1.
In particular, he writes the following:
"Of course, classical human-imitative AI problems remain of
great interest as well. However, the current focus on doing
AI research via the gathering of data, the deployment of deep learning
infrastructure, and the demonstration of systems that mimic certain
narrowly-defined human skills—with little in the way of emerging
explanatory principles—tends to deflect attention from major open
problems in classical AI. These problems include the need to bring
meaning and reasoning into systems that perform natural language processing,
the need to infer and represent causality, the need to develop
computationally-tractable representations of uncertainty and
the need to develop systems that formulate and pursue long-term goals."
Jeffrey Funk warns:
Expect evolution, not revolution: Despite the hype, artificial intelligence will take years
to significantly boost economic productivity. Published in
IEEE Spectrum, Volume 57, Issue 3, pages 30-35.
For those of you who want to learn more about games, you can read about
the General Game Playing Project and also about
Artificial Intelligence and games.
1975 ACM Turing Award Lecture
"Computer science as empirical inquiry: symbols and search" by
Allen Newell and Herbert A. Simon, Carnegie-Mellon Univ., Pittsburgh, PA.
Published in Communications of the ACM, Volume 19 Issue 3, March 1976, Pages 113-126.
Provided by the
ACM Digital Library.
Knowledge-based model of mind and its contribution to sciences.
An Interview with Ed Feigenbaum, a professor from Stanford University.
Published in "Communications of the ACM", Vol. 53, No. 6, Pages 41-45.
What is a Systematic Method of Scientific Discovery? by
Herbert A. Simon, Carnegie Mellon University. Published in
Systematic Methods of Scientific Discovery:
Papers from the 1995 Spring Symposium, ed. Raul Valdes-Perez, pages 1-2.
Technical Report SS-95-03. Association for the Advancement of Artificial Intelligence,
Menlo Park, California.
Where is AI Heading?
"Eye on the Prize" by
Stanford University. Published in "AI Magazine", vol 16, N2, 1995, pp. 9-17.
How do you teach a computer common sense? Researchers at a company
called Cycorp in Austin, Texas, are trying to find out. Since 1984, they
have been incorporating a huge collection of everyday knowledge into an AI system.
The Cyc project aims to develop a comprehensive common sense knowledge base
and associated reasoning systems. They are now being used to
enable the development of knowledge-intensive applications for industry.
Why people think computers can't, written by
Marvin Minsky, Massachusetts Institute of Technology. Published in
"AI Magazine", vol. 3, N4, Fall 1982, pp. 3-15.
SHRDLU, a program for understanding natural language, written by Terry
Winograd at the M.I.T. Artificial Intelligence Laboratory in 1968-70.
SHRDLU carried on a simple dialog with a user, about a small world of objects
(the BLOCKS world).
Terry Winograd is a professor of computer science at Stanford University;
this Web site collects information about subsequent versions and updates.
Thinking machines: Can there be? Are we?, Terry Winograd, Stanford University.
Programs with Common Sense (1958), John McCarthy, Stanford University.
How Intelligent is Deep Blue?, by Drew
McDermott, Yale University.
[This is the original, long version of an article that appeared in the May 14, 1997 New York Times with a more flamboyant title.]
If the link above fails, download a local copy.
A Gamut of Games. This article reviews the past successes,
current projects, and future research directions for AI using computer games
as a research test bed. Written by
Jonathan Schaeffer, University of Alberta, Canada.
Published in "AI Magazine", volume 22, number 3, pp. 29-46, 2001.
The Scientific Relevance of Robotics. Remarks at the Dedication of
the CMU Robotics Institute.
Published in the AI Magazine, Vol 2, No 1, Spring 1981.
When Robots Meet People: Research Directions In Mobile Robotics
Sebastian Thrun, Stanford University. He was the head of the team that
built Stanley, the robotic car. Stanley was judged to be the "Best Robot
of All Time" by Wired Magazine, and
NOVA shot a great
documentary about Stanley and the race, which is available online.
"Lifelong Learning Algorithms", published in the book
"Learning to Learn", edited by Sebastian Thrun and Lorien Pratt.
Sebastian Thrun and Tom Mitchell,
"Lifelong robot learning",
Robotics and Autonomous Systems, 15:25-46, 1995.
Tom Mitchell et al. on the
Never-Ending Language Learner.
Communications of the ACM, April 2018, Volume 61, Issue 5.
Robots, Re-Evolving Mind, written by
Hans Moravec, Carnegie Mellon University. He also provides a photo of
Shakey, the robot.
The Robot and the Baby, an amusing story written by
John McCarthy (September 4, 1927 - October 24, 2011), the person who invented the term
"Artificial Intelligence". He was a professor at Stanford University.
A local copy in PDF
of this story, written on June 28, 2001.
The next generation of WWW can benefit from the AI-inspired technologies:
"Semantic Web Services"
by McIlraith, S., Son, T.C. and Zeng, H. Published in IEEE Intelligent Systems, Special
Issue on the Semantic Web, 16(2):46--53, March/April, 2001 (Copyright IEEE, 2001).
This paper is available from
Sheila McIlraith's Web page at Stanford University. Additional
information on the Web Services Activity is
provided by the Semantic Web Services Interest Group.
Tim Berners-Lee, the inventor of the WWW, thinks about the evolution of the Web in
the 21st century.
Here is Tim Berners-Lee's
Semantic Web Road-map,
written in September 1998.
Additional information:
The Semantic Web (a new form of Web content that is meaningful to computers).
This paper was published in
Scientific American (May 2001). The paper is written by
Tim Berners-Lee, James Hendler and Ora Lassila.
The Future of the Web: Tim Berners-Lee's Testimony before the United
States House of Representatives Committee (on 2007-03-01).
Sir Tim Berners-Lee has received the 2016 ACM A.M. Turing Award for inventing the WWW. This is the highest honour awarded in computer science by the Association for Computing Machinery (ACM), a professional CS association. The article Weaving the Web explains the history of how the Web was invented.
"A logical framework for depiction and image interpretation",
R. Reiter (Univ. of Toronto), and A. Mackworth (Univ. of British Columbia).
Artificial Intelligence, vol 41, N 2, 1989, pp. 125-155.
Logical vs. Analogical or Symbolic vs. Connectionist or Neat vs. Scruffy, written by
Marvin Minsky, Massachusetts Institute of Technology. Published in
"AI Magazine", vol 12, N 2, 1991, pp. 34-51.
From here to human-level AI, John McCarthy, Stanford University.
An invited talk at the Knowledge Representation conference, 1996.
Oliver Sacks (1933–2015) was a physician and the author of over ten books.
Speak, Memory published in the New York Review of Books, in the February 21, 2013 issue.
The Mental Life of Plants and Worms, Among Others, published in the New York Review of Books, in the April 24, 2014 issue.
In the River of Consciousness, published in the New York Review of Books, in the January 15, 2004 issue.
Christof Koch is the Chief Scientist and President of the Allen Institute for Brain Science.
From 1987 until 2013, he worked as Professor of Cognitive and Behavioral Biology,
at the California Institute of Technology. His lecture
The Quest for Consciousness: A Neurobiological Approach
was delivered on March 22, 2006, UC Berkeley Campus.
Here is more information about his research.
The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed (MIT Press, 2019).
His book "The Quest for Consciousness: a Neurobiological Approach", was published by Roberts and Co., (2004), ISBN 0-9747077-0-8.
Dr. Alex Taylor sets a difficult problem-solving task:
will the crow defeat the puzzle?
Are crows the ultimate problem solvers? -- Inside the Animal Mind: Episode 2.
BBC Two Programme website.
Computer programs as empirical models in cognitive psychology:
Herbert Simon, Psychology Department, Carnegie Mellon University.
Human beings use symbolic processes to solve problems,
reason, speak and write, learn and invent. Over the past 45 years,
cognitive psychology has built and tested empirical models of these processes.
The models take the form of computer programs that
simulate human behavior.
What has AI in Common with Philosophy?,
John McCarthy, Stanford University.
Mathematical Intuition vs. Mathematical Monsters,
Synthese, 2000, pp. 317-332, written by
Solomon Feferman, Stanford University. See also his paper
The Logic of Mathematical Discovery vs. the Logical
Structure of Mathematics, reprinted as
Chapter 3 in the book "In the Light of Logic". Author: Solomon
Feferman. (Oxford University Press, 1998, ISBN 0-19-508030-0,
Logic and Computation in Philosophy series).
"Where Mathematics Comes From", written by George Lakoff and Rafael Nunez,
published by Basic Books.
Book review: "Where Mathematics Comes From", reviewed by
James J. Madden,
Department of Mathematics, Louisiana State University. Professor
Ernest Davis published his review
Mathematics as Metaphor
in the Journal of Experimental and Theoretical AI, vol. 17, no. 3, 2005, pp. 305-315.
Asimov, Isaac: "Robot Visions" and "Robot Dreams",
there are several paperback editions.
Raymond Smullyan (1919–2017) was a mathematician, logician, magician,
creator of extraordinary puzzles, philosopher, and pianist.
One of his best known collections of recreational logic puzzles is
"What is the name of this book?". There are several paperback editions,
e.g., recent editions are published by Dover.
cps721 (Artificial Intelligence).