I'm trying to understand the concept of language levels (regular, context-free, context-sensitive, etc.).
I can look this up easily, but all explanations I find are a load of symbols and talk about sets. I have two questions:
Can you describe in words what a regular language is, and how the languages differ?
Where do people learn to understand this stuff? As I understand it, it's formal mathematics? I had a couple of courses at uni that used it, and barely anyone understood it, as the tutors just assumed we already knew it. Where can I learn it, and why are people "expected" to know it in so many sources? It's like there's a gap in education.
Here's an example:
Any language belonging to this set is a regular language over the alphabet.
How can a language be "over" anything?
In the context of computer science, a word is a concatenation of symbols. The symbols used are called the alphabet. For example, some words formed out of the alphabet `{0,1,2,3,4,5,6,7,8,9}` would be `1`, `2`, `12`, `543`, `1000`, and `002`.

A language is then a subset of all possible words. For example, we might want to define a language that captures all elite MI6 agents. Those all start with double-0, so words in the language would be `007`, `001`, `005`, and `0012`, but not `07` or `15`. For simplicity's sake, we say a language is "over an alphabet" instead of "a subset of the words formed by concatenation of symbols in an alphabet".

In computer science, we now want to classify languages. We call a language regular if whether a word is in the language can be decided by an algorithm/a machine with constant (finite) memory that examines the symbols of the word one after another. The language consisting of just the word `42` is regular: you can decide whether a word is in it without requiring an arbitrary amount of memory; you just check whether the first symbol is `4`, whether the second is `2`, and whether any more symbols follow.

All languages with a finite number of words are regular, because we can (in theory) just build a control-flow tree of constant size (you can visualize it as a bunch of nested `if`-statements that examine one digit after the other). For example, we can test whether a word is in the "prime numbers between 10 and 99" language with a construct like the one below, requiring no memory except the memory needed to track which code line we're currently at:
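Here's a rough sketch of such a construct in Python (one possible way to hard-code the decision; treat it as illustrative):

```python
def is_two_digit_prime(word):
    # Constant-memory check for the finite language {"11", "13", ..., "97"}:
    # look at the first symbol, then the second, then make sure nothing
    # follows. No counters, no data structures that grow with the input.
    if len(word) != 2:                 # anything longer or shorter is rejected
        return False
    first, second = word[0], word[1]
    if first == "1":
        return second in ("1", "3", "7", "9")
    if first == "2":
        return second in ("3", "9")
    if first == "3":
        return second in ("1", "7")
    if first == "4":
        return second in ("1", "3", "7")
    if first == "5":
        return second in ("3", "9")
    if first == "6":
        return second in ("1", "7")
    if first == "7":
        return second in ("1", "3", "9")
    if first == "8":
        return second in ("3", "9")
    if first == "9":
        return second == "7"
    return False
```

However many words you feed it, the function never needs more memory than the fixed code itself.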
Note that all finite languages are regular, but not all regular languages are finite: our double-0 language contains an infinite number of words (`007` and `008`, but also `004242` and `0012345`), yet it can be tested with constant memory. To test whether a word belongs to it, check whether the first symbol is `0` and whether the second symbol is `0`. If that's the case, accept the word; if it is shorter than three symbols or does not start with `00`, it's not an MI6 code name.

Formally, the construct of a finite-state machine or a regular grammar is used to prove that a language is regular. These are similar to the `if`-statements above, but they allow for arbitrarily long words. If there is a finite-state machine, there is also a regular grammar, and vice versa, so it's sufficient to show either. For example, you can give a finite-state machine for our double-0 language, an equivalent regular grammar, and an equivalent regular expression.
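Here's a sketch of what those can look like for the double-0 language as described above (my own reconstruction, in Python, with the grammar in comments):

```python
import re

# A regular expression for the double-0 language over the digits 0-9:
# two zeros followed by at least one more digit ("00" alone is too short).
def matches_regex(word):
    return re.fullmatch(r"00[0-9]+", word) is not None

# An equivalent regular grammar could be written as:
#   S -> 0A      (the first symbol must be 0)
#   A -> 0B      (the second symbol must be 0)
#   B -> dB | d  (then at least one more digit d, for every digit d)

# The same language as an explicit finite-state machine: no matter how long
# the word is, we only ever remember which of a handful of states we're in.
def is_mi6_code_name(word):
    state = "start"
    for symbol in word:
        if symbol not in "0123456789":
            return False
        if state == "start":
            state = "one_zero" if symbol == "0" else "dead"
        elif state == "one_zero":
            state = "two_zeros" if symbol == "0" else "dead"
        elif state in ("two_zeros", "accepting"):
            state = "accepting"
        # the "dead" state traps everything else
    return state == "accepting"
```

The regular expression, the grammar, and the machine all accept exactly the same words, which is what it means for them to be equivalent.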
Some languages are not regular. For example, the language of any number of `1`s followed by the same number of `2`s (often written as 1^n2^n, for an arbitrary n) is not regular: you need more than a constant amount of memory (= a constant number of states) to store the number of `1`s in order to decide whether or not a word is in the language.
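As an illustration (again just a sketch in Python), any checker for this language ends up maintaining a counter whose value can grow without bound as the input gets longer, which is exactly the memory a finite number of states cannot provide:

```python
def is_ones_then_matching_twos(word):
    # Count the leading 1s; this counter can become arbitrarily large,
    # unlike the fixed handful of states in the machines above.
    ones = 0
    i = 0
    while i < len(word) and word[i] == "1":
        ones += 1
        i += 1
    # Then count the trailing 2s.
    twos = 0
    while i < len(word) and word[i] == "2":
        twos += 1
        i += 1
    # Accept only if the whole word was consumed and the counts match
    # (here n = 0, the empty word, is allowed; that detail doesn't change
    # the argument).
    return i == len(word) and ones == twos
```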
This should usually be explained in a theoretical computer science course. Luckily, Wikipedia explains both formal and regular languages quite nicely.
Here are some of the equivalent definitions from Wikipedia. A formal language is regular if, equivalently:

- it can be accepted by a deterministic finite automaton;
- it can be accepted by a nondeterministic finite automaton;
- it can be described by a (formal) regular expression;
- it can be generated by a regular grammar.
The first thing to note is that a regular language is a formal language, with some restrictions. A formal language is essentially a (possibly infinite) collection of strings. For example, the formal language Java is the collection of all possible Java files, which is a subset of the collection of all possible text files.
One of the most important characteristics is that unlike the context-free languages, a regular language does not support arbitrary nesting/recursion, but you do have arbitrary repetition.
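For instance, a small illustration of that distinction using Python's `re` module (my own example):

```python
import re

# Arbitrary repetition is fine for a regular language:
# "any number of 'ab' blocks" is a regular pattern.
print(bool(re.fullmatch(r"(ab)*", "ababab")))   # True
print(bool(re.fullmatch(r"(ab)*", "abcab")))    # False

# Arbitrary nesting is not: no regular expression (in the strict, formal
# sense) matches exactly the balanced-parentheses strings such as "(())()",
# because that would require remembering how many '(' are still open --
# an unbounded amount of information.
```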
A language always has an underlying alphabet, which is the set of allowed symbols. For example, the alphabet of a programming language would usually either be ASCII or Unicode, but in formal language theory it's also fine to talk about languages over other alphabets, for example the binary alphabet where the only allowed characters are `0` and `1`.

In my university, we were taught some formal language theory in the Compilers class, but this is probably different between different schools.
I learnt most of that kind of thing from "Introduction to the Theory of Computation", by Michael Sipser; I found it really useful.
Here are the contents: