“The eleventh commandment was ‘Thou Shalt Compute’ or ‘Thou Shalt Not Compute’ – I forget which.” – Alan Perlis
My goal in writing this article is to give us all a nice “Automata Theory” Baader–Meinhof effect. You may be thinking you’re not exceedingly familiar with automata, a topic I hold very near and dear to my heart, but stick with me here and you’ll surprise yourself. Historically, it was the first time it was trendy to ask fun questions like: how sophisticated a machine could a person make? Can machines think, laugh, cry, and feel emotions? Could a machine experience something the same way I can? Say we built a duck, would it have free will? Do I even have free will? We were fairly confident we could pull it off in 18th-century France, and again in the middle of the 20th century with AI all-stars coming on the scene, such as Alan Turing, often called “The Father of Computer Science and Artificial Intelligence”, which has to be the coolest accolade I’ve ever heard of. These questions go back at least to Hobbes, and they resurface every time Nvidia makes our skin crawl with a fresh round of “deepfakes”. For the moment, artificial intelligence is out of the news cycle, so let’s take the chance to talk about the fun stuff. The pop-science articles take the magic out of it.
So what are automata, exactly? The word comes from “automaton”, and automata were a super fun fad in 18th-century France, when it was trendy to create philosophical toys such as living dolls. Industrialization was picking up steam, and it got people thinking: where is this technology thing heading? In my opinion, technology is more fun when everyone feels empowered to do anything. Perhaps my favorite exemplar of this spirit was a particularly saucy French fellow by the name of Jacques de Vaucanson, who earned an even cooler accolade still, as he was once dubbed the “New Prometheus”. Ya know, the Titan who stole fire from the gods? Yeah, that guy. So how did our hero earn his badge? He had a habit of making skin-crawling, trippy, and philosophical machines, most notably a duck with a digestive tract, fondly named the Canard Digérateur, or “Digesting Duck”.

This is of course an American artist’s misunderstood drawing of how the duck works, but you get the idea. The duck appeared to “eat kernels of grain, metabolize, and defecate them”. No digestion actually took place; the duck was more of a symbol that one day true machine digestion would be possible. Vaucanson was an artist who made philosophical toys like this one, often to exemplify what machines would one day do, and with 3D-printed organs coming online, I’m not sure I disagree at this point.
In the spirit of the 18th-century French, I ask the question again: can we build a duck? That’s what automata theory really gets at, in essence. “Automaton” comes from the Greek for “self-acting, self-willed, or self-moving”. When I was young, I asked my parents if my mom’s van, an automobile, was alive or dead. How could it move on its own? How could I move on my own? The question struck me that day on the way to a travel hockey game; they should have known right then I wasn’t going to make the NHL, if you ask me.
Automata play a major role in the theory of computation, a branch of theoretical computer science and artificial intelligence. If you’re feeling in the weeds, here’s a handy-dandy map courtesy of Wikipedia so we don’t get lost.

There we go, not so bad now, is it? There are only four levels, or classes, of automata, each more powerful than the last: the simplest and least powerful is the combinational logic at the heart of our map, and the most powerful is the great, omnipotent Turing machine, which is what I’ve been calling our duck. Did you know “computer” is actually a 17th-century word for an occupation? “One who computes” was still a common job title in the mid-20th century, when a bright and talented woman had about as many career options as you could count on one hand. Among them: nurse and computer. Seen below is the NACA High-Speed Flight Station “Computer Room” (1949).

So, in a roundabout way, the first computers were smart women working as human calculators. When computing machines started hitting the market around the same time, the first programmers, programming being the only way to operate the machines at the time, were these same bright women. It wasn’t male-dominated the whole time! Of course, the men eventually decided it was pretty cool stuff and pushed them out, as we all know. But for the first years of computer programming history the field was female-dominated, then roughly 50/50 for a long time. Just a side tangent, but male domination of the industry didn’t take hold until formal education got its roots and universities started targeting young men for burnout hours. But don’t take it from me, just ask Uncle Bob! He talks about it in his presentation The Future of Programming. He also starts all of his talks with a short science lesson, which can be pretty fun.
Alan Turing, who for the unacquainted is the inventor of the computer (machines, not people), first described the “human computer” as someone who is “supposed to be following fixed rules; he has no authority to deviate from them in any detail.” Which is a fairly good description, if he’d just swap out the pronouns. He was particularly fascinated with the idea of the a-machine, or automatic machine, which looks like this, except that the tape has to be infinitely long or it doesn’t work right.

This funny-looking tape contraption was the very first description of a computing machine. It was later named the Turing machine by Alonzo Church, inventor of the lambda calculus, which my business, Lambda Group, is proudly named after. The lambda calculus and the Turing machine are equivalent definitions of computability, which makes this one of the most groundbreaking descriptions in computer science: it captured the notion of Turing completeness (a.k.a. computability), the property of a machine theoretically capable of expressing all tasks accomplishable by computers (people, not machines). And more importantly, in the spirit of the “New Prometheus”, ducks.
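To make the a-machine idea concrete, here’s a minimal sketch in Python: a finite rule table driving a read/write head back and forth over an unbounded tape. The machine below, a binary incrementer, and its rule table are my own toy example, not anything lifted from Turing’s paper:

```python
# A minimal Turing machine sketch: finite control, one tape cell read
# and written per step, head moving left or right. The rule table maps
# (state, symbol) -> (symbol to write, head move, next state).

def run(rules, tape, state="start", pos=0):
    tape = dict(enumerate(tape))              # sparse stand-in for an infinite tape
    while state != "halt":
        symbol = tape.get(pos, "_")           # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    cells = [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

# Increment a binary number: walk to the rightmost bit, then carry
# 1s into 0s going left until a 0 (or a blank) absorbs the carry.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run(rules, "1011"))  # -> 1100
```

Six rules, one tape, and you have a machine that computes; swap the rule table and the same loop computes something else entirely. That interchangeability is the whole point.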
What’s fascinating about automata theory is when you can spot different automata classifications in the wild. Once you can spot them, you’ll never unsee them. JavaScript is Turing complete because you can express anything in it, but familiarity with each class lets you reason about the properties of the tools you’re using and allows you to quickly size up solutions to problems.

If I were to ask you to parse JSON with regex, you might wince. Why is it so difficult to parse JSON with regex exactly, and what kind of tool would I need instead? Well, regex is a finite state machine, but JSON requires a pushdown automaton. In other words, you’re trying to solve a layer-2 problem with a layer-1 tool. You need to level up for one simple reason: regular expressions can’t balance parentheses. Regular languages can be parsed by finite state machines because they don’t need any notion of recursion to be expressed; they just go left to right looking for patterns on a single track. But when it comes to checking whether parentheses are balanced, you need more power. General-purpose programming languages can almost all be parsed as context-free grammars; you don’t need a full Turing machine to represent Python’s grammar, even though Python itself is Turing complete.

A more intuitive understanding of automata can help programmers build much more appropriate solutions to problems. My favorite demonstration of this is Clojure’s spec, which at first appears to be some sort of super-powered UFO technology but really just comes down to basic context-free grammars. The tool is difficult to explain because it’s so abstract, but it allows you to define predicates to match against data, such as a list of even numbers, or a list that starts with 1, 2, 3 and ends in zero or more strings. Specs are as expressive as you want them to be: so long as you’re matching against data in a context-free grammar, you can spec it.
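The regex-versus-pushdown distinction can be sketched in a few lines of Python. The `balanced` checker below is my own toy example; its stack is exactly the extra power a pushdown automaton has that a finite state machine lacks:

```python
import re

# A regular expression (finite state machine) happily matches flat,
# left-to-right patterns, e.g. a JSON-ish number token:
number = re.compile(r"-?\d+(\.\d+)?")
assert number.fullmatch("-3.14")

# But Python's standard `re` has no way to check arbitrary nesting depth.
# That takes a stack, i.e. a pushdown automaton:
def balanced(text, pairs={"}": "{", "]": "[", ")": "("}):
    stack = []
    for ch in text:
        if ch in pairs.values():          # opener: push it
            stack.append(ch)
        elif ch in pairs:                 # closer: must match the top opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                      # every opener was closed

balanced('{"a": [1, {"b": 2}]}')   # True
balanced('{"a": [1, 2}')           # False
```

The finite state machine has a fixed, finite memory no matter how long the input gets; the stack grows with the nesting depth, which is precisely why JSON sits one layer up the hierarchy.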
This tool lets you do all sorts of interesting things, from writing parsers to contract programming, as well as specifying your API endpoints and generating Swagger docs, along with very human- and machine-readable error messages for your data. You can validate forms, do property-based testing, and the list goes on. This approach isn’t even unique to Clojure; it’s just that abstract context-free grammar specification libraries aren’t widely distributed, so it’s the only industry-grade solution. I think tooling like this will only get more popular as our familiarity with theoretical computer science increases as a society. Perhaps Turing-complete specification systems will be widely adopted and we can start generating both human- and machine-readable error messages throughout the entirety of our systems. I believe these sorts of concepts can be embedded straight into our programming languages so we can start leveraging declarative specifications in day-to-day programming in the near future.
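To get a feel for how little machinery a spec-style tool actually needs, here is a toy sketch in Python of grammar-based matching over data. The combinator names `pred`, `cat`, and `star` are my own inventions, loosely inspired by Clojure’s spec rather than mirroring its API:

```python
# Each "spec" is a function from a list to a generator of possible
# leftover lists; the data conforms when some parse consumes everything.

def pred(f):
    # Match exactly one element satisfying predicate f.
    def m(xs):
        if xs and f(xs[0]):
            yield xs[1:]
    return m

def cat(*specs):
    # Match the specs in sequence.
    def m(xs):
        if not specs:
            yield xs
        else:
            rest = cat(*specs[1:])
            for leftover in specs[0](xs):
                yield from rest(leftover)
    return m

def star(spec):
    # Match zero or more occurrences of spec.
    def m(xs):
        yield xs
        for leftover in spec(xs):
            if leftover != xs:            # guard against empty-match loops
                yield from m(leftover)
    return m

def valid(spec, xs):
    return any(leftover == [] for leftover in spec(list(xs)))

# "a list of even numbers"
evens = star(pred(lambda x: isinstance(x, int) and x % 2 == 0))

# "a list that starts with 1, 2, 3 and ends in zero or more strings"
lit = lambda v: pred(lambda x: x == v)
s = cat(lit(1), lit(2), lit(3), star(pred(lambda x: isinstance(x, str))))
```

Notice there is nothing exotic here: `pred`, `cat`, and `star` are just grammar productions applied to data instead of characters, which is the “basic context-free grammars” claim above made literal.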