Lisp and Prolog for Artificial Intelligence? [closed]

Published 2019-03-08 01:04

Question:

Now, since I took a class in A.I. three years ago, I'm clearly proficient enough to ask this question... just kidding, just kidding ;)

But seriously, what is it about these languages that makes them so popular for A.I. research? Even though A.I. research is "old", it seems to have come the longest way in the past 5-10 years. Is it because the languages were somewhat "designed" around the concepts of A.I., or just that we have nothing really better to use right now?

I ask this because I've always found it quite interesting, and I'm just kind of curious. If I'm entirely wrong and researchers use different languages, I would love to know what they use. I mean, I can understand Prolog, especially with sentential/propositional logic and fuzzy logic, but I don't understand "why" we would use Lisp, or what else A.I. researchers would use to do machine learning, etc.

Any articles/books on the subject matter are helpful too :)

Answer 1:

Can't really speak to Prolog, but here's why Lisp:

  • Lisp is a homoiconic language, which means that the code is expressed in the same form (s-expressions) as data structures in the language. i.e. "code is data". This has big advantages if you are writing code that modifies/manipulates other code, e.g. genetic algorithms or symbolic manipulation.

  • Lisp's macro system makes it well suited for defining problem-specific DSLs. Most Lisp developers effectively "extend the language" to do what they need. Again the fact that Lisp is homoiconic helps enormously here.

  • There is some historical connection, in that Lisp became popular at about the same time as a lot of the early AI research. Some interesting facts in this thread.

  • Lisp works pretty well as a functional programming language. This is quite a good domain fit for AI (where you are often just trying to get the machine to learn how to produce the correct output for a given input).

  • Subjective view: Lisp seems to appeal to people with a mathematical mindset, which happens to be exactly what you need for a lot of modern AI... this is possibly because Lisp is pretty closely related to the untyped lambda calculus.
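The "code is data" point in the first bullet can be sketched even outside Lisp. Below is a toy Python illustration (not real Lisp — nested lists stand in for s-expressions, and `evaluate`/`mutate` are invented names for this example) showing a program being treated as an ordinary data structure and transformed, the way a genetic algorithm might mutate candidate programs:

```python
# "Code is data": nested Python lists standing in for s-expressions.
# Illustrative sketch only; in Lisp this representation is the language itself.

def evaluate(expr):
    """Evaluate a tiny s-expression: a number, or [op, arg1, arg2]."""
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr[0], evaluate(expr[1]), evaluate(expr[2])
    return {"+": a + b, "*": a * b}[op]

def mutate(expr):
    """Manipulate code as data: swap + for * everywhere, roughly what a
    genetic algorithm does when mutating a candidate program."""
    if isinstance(expr, (int, float)):
        return expr
    op = "*" if expr[0] == "+" else "+"
    return [op, mutate(expr[1]), mutate(expr[2])]

program = ["+", 1, ["*", 2, 3]]   # the Lisp form (+ 1 (* 2 3))
print(evaluate(program))          # 7
print(evaluate(mutate(program)))  # mutated to (* 1 (+ 2 3)) -> 5
```

In Lisp the quoted form `'(+ 1 (* 2 3))` is already such a data structure, which is why no translation layer like this is needed there.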

I'm doing some AI/machine learning work at the moment, and chose Clojure (a modern Lisp on the JVM) pretty much for the above reasons.



Answer 2:

The question has already been answered for Lisp, so I'll just comment on Prolog.

Prolog was designed for two things: natural language processing and logical reasoning. In the GOFAI paradigm of the early 1970s, when Prolog was invented, this meant:

  1. constructing symbolic grammars for natural language that would be used to construct logical representations of sentences/utterances;
  2. using these representations and logical axioms (not necessarily those of classical logic) to infer new facts;
  3. using similar grammars to translate logical representation back into language.
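The three-step pipeline above can be sketched in miniature. This is only a toy Python illustration (not Prolog, and the `parse`/`lexicon` names and the Socrates facts are invented for the example), showing a fixed sentence pattern mapped to a logical fact, one axiom applied by forward chaining, and the derived fact rendered back as language:

```python
# Toy sketch of the GOFAI pipeline: grammar -> logic -> inference -> language.
# Invented example; real systems used symbolic grammars (e.g. Prolog DCGs).

lexicon = {"socrates": "socrates", "a man": "man", "mortal": "mortal"}

def parse(sentence):
    """Step 1: map the pattern '<subject> is <predicate>' to a logical fact."""
    subj, pred = sentence.replace(" is ", "|").split("|")
    return (lexicon[pred], lexicon[subj])      # e.g. ("man", "socrates")

facts = {parse("socrates is a man")}
rules = [("man", "mortal")]                    # axiom: man(X) -> mortal(X)

# Step 2: forward-chain new facts from the axioms.
for premise, conclusion in rules:
    for pred, arg in list(facts):
        if pred == premise:
            facts.add((conclusion, arg))

# Step 3: translate a derived fact back into language.
inverse = {v: k for k, v in lexicon.items()}
pred, arg = ("mortal", "socrates")
print(f"{inverse[arg]} is {inverse[pred]}")    # socrates is mortal
```

In Prolog, step 1 would be a definite clause grammar and step 2 ordinary resolution; the point of the sketch is only the shape of the pipeline.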

Prolog is very good at this and is used on the ISS for exactly such a task. The approach got discredited, though, because

  1. "all grammars leak": no grammar can catch all the rules and exceptions in a language;
  2. the more detailed the grammar, the higher the complexity (both big O and practical) of parsing;
  3. logical reasoning is both inadequate and unnecessary for many practical tasks;
  4. statistical approaches to NLP, i.e. "word counting", have proven much more robust. With the rise of the Internet, adequate datasets are available to get the statistics NLP developers need. At the same time, memory and disk costs have declined while processing power is still relatively expensive.

Only recently have NLP researchers developed somewhat practical combined symbolic-statistical approaches, sometimes using Prolog. The rest of the world uses Java, C++ or Python, for which you can more easily find libraries, tools and non-PhD programmers. The fact that I/O and arithmetic are unwieldy in Prolog doesn't help its acceptance.

Prolog is now mostly confined to domain-specific applications involving NLP and constraint reasoning, where it does seem to fare quite well. Still, few software companies will advertise with "built on Prolog technology" since the language got a bad name for not living up to the promise of "making AI easy."

(I'd like to add that I'm a great fan of Prolog, but even I only use it for prototyping.)



Answer 3:

Lisp had an advantage when we believed AI was symbol manipulation and things like ontologies. Prolog had an advantage when we believed AI was logic, and unification was the tricky operation. But neither of these provides any advantage for the current contenders for "AI": statistical AI is about sparse arrays. Neural networks of all kinds, including deep learning, are about oceans of nodes connected with links. Model-free methods (many kinds of machine learning, evolutionary methods, etc.) are also very simple. The complexity is emergent, so you don't have to worry about it: write a simple base that can learn what it needs to learn. In any of these cases, any general-purpose language will do. Arguments can even be made that most neural network approaches are so simple that C++ would be overkill.
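The "simple base that learns" claim is easy to make concrete. Here is a minimal perceptron in plain Python (a toy example with invented data, not any particular system's code) that learns logical AND from four samples; nothing in it depends on language features Lisp or Prolog would provide:

```python
# Minimal perceptron in pure Python: the learning core of a "model free"
# method fits in a dozen lines. Toy example for illustration only.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # simple error-driven update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND; any useful behavior is emergent from repeated updates.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

Scaling this up is a matter of array libraries and hardware, not symbolic language features, which is the answer's point.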

Use the language that allows you to most easily hire the best programmers for the task.



Answer 4:

There have been some good and informative responses here, but the point of Lisp and Prolog has either been missed, marginalized, or not emphasized enough.

Lisp and then later Prolog emerged in an era when the main AI research revolved around symbolic processing. A simple example of symbolic processing is how we humans do algebra, calculus, or integrals by hand. We symbolically manipulate the variables and constants to derive equivalent relationships. Lisp and Prolog were designed for this purpose.
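Symbolic differentiation is the classic small example of the kind of manipulation described above. The following is a toy Python sketch (tuples standing in for Lisp s-expressions; the function name `d` and the rules covered are chosen just for illustration) that rewrites an expression tree into its derivative, symbolically rather than numerically:

```python
# Symbolic differentiation over expression trees: the kind of symbol
# manipulation Lisp was designed for, sketched with Python tuples.

def d(expr, var):
    """Differentiate expr with respect to var. An expr is a number,
    a variable name, or a tuple (op, left, right)."""
    if isinstance(expr, (int, float)):
        return 0                       # d/dx of a constant
    if isinstance(expr, str):
        return 1 if expr == var else 0 # d/dx of a variable
    op, u, v = expr
    if op == "+":                      # sum rule
        return ("+", d(u, var), d(v, var))
    if op == "*":                      # product rule
        return ("+", ("*", d(u, var), v), ("*", u, d(v, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x) -> (+ (* 1 x) (* x 1)), i.e. 2x before simplification
print(d(("*", "x", "x"), "x"))
```

The program manipulates the variables and constants as symbols, exactly as a student doing calculus by hand would; no numbers are ever evaluated.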

Symbolic manipulation is not trivially implemented in C++ or Java, for they were not designed with this purpose in mind. However, C++, Java, and similar languages may be the buzzword languages in AI nowadays because several variations of AI research now exist that do not deal with symbolic processing.

One form of AI uses statistical methods as the basis of knowledge, and this requires much leaner languages to reduce computation time. Also, many so-called AI systems are nothing more than specialized systems serving a particular niche purpose. Of course these systems may be best programmed in a non-Lisp/Prolog language, relying less on 'reasoning' or common-sense knowledge acquisition and more on processing data from inputs.

Even Watson (which is programmed in Java, C++, and a little Prolog) is arguably a highly specialized system. It appears Watson was designed to acquire a vast amount of facts, which it then sorts through using sophisticated search algorithms (not sure though, and IBM would probably resent me for saying that). Future AI implementations will likely combine AI paradigms and use different languages for each specialized part. Even Lisp and Prolog may one day make a comeback.



Answer 5:

It may be a good idea to recall the motivations for Prolog: logic for problem solving and for understanding reasoning, human or machine-like. This is an ongoing project, and even though Prolog is one of its finest results, it is not its final one. We keep looking for better languages to represent knowledge. Check the latest book by Bob Kowalski: Computational Logic and Human Thinking: How to Be Artificially Intelligent.



Answer 6:

but I dont understand "Why" we would use Lisp...and even what else A.I. researchers would use to do machine learning etc.

Yann LeCun developed Lush, a.k.a. the Lisp Universal Shell. He also recently became Director of AI Research at Facebook.

Any articles/books on the subject matter is helpful too :)

I guess you already know Artificial Intelligence: A Modern Approach. It is the most widely read introductory AI book at universities.