How To Write A Calculator in 70 Python Lines, By Writing a Recursive-Descent Parser

February 24, 2013

Three months ago, I wrote a post detailing the process of writing a calculator using a parsing library. The popular response, however, was that readers are far more curious about seeing a calculator written from scratch, with the batteries included but nothing else. I figured, why not?

Writing a calculator is simple, if you use hacks specific to arithmetic expressions, but the effect of hacks is nearly always the same: the solution isn’t elegant, it isn’t extensible, and it’s hard to understand intuitively. Out of appreciation for a good challenge, and in the hope of writing an instructive post, I decided to write it using a mostly generic recursive-descent parser. In the same spirit as last time, I wanted to do it in as few lines as I reasonably could, so it’s filled with hacks and tricks, but they’re superficial and not specific to the task at hand.

This post is a detailed, step-by-step explanation of my implementation. If you want to jump straight to the code and figure it out by yourself, just scroll to the end of this post. Hopefully when you’re done you’ll have a better understanding of how parsing works internally, and you’ll be inspired to use a proper parsing library to avoid this entire bloody mess.

To understand this post, you should have a strong understanding of Python, and it’s recommended to have some understanding of what parsing is and what it’s for. If you’re not sure, I recommend that you read my previous post, in which I thoroughly explain the grammar that I will be using in this post.

Step 1: Tokenize

The first step of processing the expression is to turn it into a list of individual symbols. This is the easiest part, and not the point of this exercise, so I allowed myself to cheat here quite a lot.

First, I defined the tokens (Numbers are notably absent; they’re the default) and a Token type:

token_map = {'+':'ADD', '-':'ADD', 
             '*':'MUL', '/':'MUL', 
             '(':'LPAR', ')':'RPAR'}

Token = namedtuple('Token', ['name', 'value'])

And here’s the code I used to tokenize an expression `expr`:

split_expr = re.findall(r'[\d.]+|[%s]' % re.escape(''.join(token_map)), expr)
tokens = [Token(token_map.get(x, 'NUM'), x) for x in split_expr]

The first line is a trick that splits the expression into the basic tokens, so

'1.2 / ( 11+3)' --> ['1.2', '/', '(', '11', '+', '3', ')']

The next line names the tokens, so that the parser can recognize them by category:

['1.2', '/', '(', '11', '+', '3', ')']
->
[Token(name='NUM', value='1.2'), Token(name='MUL', value='/'), Token(name='LPAR', value='('), Token(name='NUM', value='11'), Token(name='ADD', value='+'), Token(name='NUM', value='3'), Token(name='RPAR', value=')')]

Any token that is not in the token_map is assumed to be a number. Our tokenizer performs no validation, so it won’t reject non-numbers at this stage, but luckily the evaluator will handle that task later on.
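
To see that in action, using the finished calc() function from the end of the post (the exact error message below is Python 2’s): the tokenizer happily labels '1.2.3' as a NUM, and it’s float() in the evaluator that finally rejects it.

>>> calc('1.2.3 + 4')
Traceback (most recent call last):
  ...
ValueError: invalid literal for float(): 1.2.3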

That’s it. Now that we have a list of tokens, our next step is to parse it into an AST.

Step 2: Define the grammar

The parser I chose to implement is a naive recursive-descent parser, which is a simpler version of LL parsing. It’s the simplest parser to implement, and in fact mine takes only 14 lines. It’s a kind of top-down parser, which means that it starts by matching the highest rule (like: expression), and recursively tries to match its sub-rules until it matches the lowest rules (like: number). To put it another way, while a bottom-up (LR) parser will gradually fold tokens and rules into other rules, until there’s only one rule left, a top-down (LL) parser like ours will gradually expand the rules into less abstract rules, until they completely match the input tokens.

Before we get to the actual parser, let’s talk about the grammar. In my previous post, I used an LR parser, and I defined the calculator grammar like this (caps are tokens):

add: add ADD mul | mul;
mul: mul MUL atom | atom;
atom: NUM | '(' add ')' | neg;
neg: '-' atom;

(If you don’t understand this grammar, you should read my previous post)

This time I’m using an LL parser, instead of LR, and here’s how I defined the grammar:

rule_map = {
    'add' : ['mul ADD add', 'mul'],
    'mul' : ['atom MUL mul', 'atom'],
    'atom': ['NUM', 'LPAR add RPAR', 'neg'],
    'neg' : ['ADD atom'],
}

There is a subtle change here. The recursive definitions of add and mul are reversed. This is a very important detail, and I need to explain it.

The LR version of this grammar uses something called left-recursion. When LL parsers see recursion, they just dive in there in an attempt to match the rule. So when faced with left-recursion, they enter infinite recursion. Even smart LL parsers such as ANTLR suffer from this issue, though they probably emit a friendly error instead of looping infinitely like our toy parser would.

Left-recursion is easily solved by changing it to right-recursion, and that is what I did. But because nothing is easy with parsers, it created another problem: while left-recursion parses 3-2-1 correctly as (3-2)-1, right-recursion parses it incorrectly as 3-(2-1). I don’t know of an easy solution to this problem, so to keep things short and simple for you and me both, I decided to keep the incorrect form and deal with it in post-processing (see step 4).
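
To make the problem concrete, here is roughly what the match() function we’re about to write in step 3 would do if we fed it the left-recursive rule (a hypothetical trace; don’t try this at home):

# Hypothetical: suppose rule_map had 'add': ['add ADD mul', 'mul'].
#
#   match('add', tokens)
#       tries expansion 'add ADD mul'
#       -> match('add', tokens)        # same token list; nothing was consumed
#           tries expansion 'add ADD mul'
#           -> match('add', tokens)    # ...and so on, until Python raises
#                                      # RuntimeError: maximum recursion depth exceeded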

Step 3: Parse into an AST

The algorithm is simple. We’re going to define a recursive function that receives two parameters: The first is the name of the rule that we’re trying to match, and the second is the list of tokens we have left. We’ll start with add (which is the highest rule) and with the entire list of tokens, and have the recursive calls become increasingly more specific. The function returns a tuple: The current match, and a list of the tokens that are left to match. For the purpose of short code, we’ll make it capable of also matching tokens (they’re both strings; one is UPPER-CASE and the other lower-case).

Here’s the code for the parser:

RuleMatch = namedtuple('RuleMatch', ['name', 'matched'])

def match(rule_name, tokens):
    if tokens and rule_name == tokens[0].name:      # Match a token?
        return tokens[0], tokens[1:]
    for expansion in rule_map.get(rule_name, ()):   # Match a rule?
        remaining_tokens = tokens
        matched_subrules = []
        for subrule in expansion.split():
            matched, remaining_tokens = match(subrule, remaining_tokens)
            if not matched:
                break   # no such luck. next expansion!
            matched_subrules.append(matched)
        else:
            return RuleMatch(rule_name, matched_subrules), remaining_tokens
    return None, None   # match not found

Lines 4-5 check if rule_name is actually a token, and if it matches the current token. If it does, it will return the match, and which tokens are still left to consume.

Line 6 iterates over the sub-rules of rule_name, so each can be matched recursively. If rule_name is a token, the get() call will return an empty tuple and the flow will fall through to the final return (line 16).

Lines 9-15 iterate over every element of the current sub-rule, and try to match them sequentially. Each iteration tries to consume as many matching tokens as possible. If one element did not match, we discard the entire sub-rule. However, if all elements matched, we reach the else clause and return our match for rule_name, along with the remaining tokens to match.

Let’s run it and see what we get for 1.2 / ( 11+3).

>>> tokens = [Token(name='NUM', value='1.2'), Token(name='MUL', value='/'), Token(name='LPAR', value='('), Token(name='NUM', value='11'), Token(name='ADD', value='+'), Token(name='NUM', value='3'), Token(name='RPAR', value=')')]

>>> match('add', tokens)

(RuleMatch(name='add', matched=[RuleMatch(name='mul', matched=[RuleMatch(name='atom', matched=[Token(name='NUM', value='1.2')]), Token(name='MUL', value='/'), RuleMatch(name='mul', matched=[RuleMatch(name='atom', matched=[Token(name='LPAR', value='('), RuleMatch(name='add', matched=[RuleMatch(name='mul', matched=[RuleMatch(name='atom', matched=[Token(name='NUM', value='11')])]), Token(name='ADD', value='+'), RuleMatch(name='add', matched=[RuleMatch(name='mul', matched=[RuleMatch(name='atom', matched=[Token(name='NUM', value='3')])])])]), Token(name='RPAR', value=')')])])])]), [])

The result is a tuple, of course, and we can see there are no remaining tokens. The actual match is not easy to read, so let me draw it for you:

    add
        mul
            atom
                NUM '1.2'
            MUL '/'
            mul
                atom
                    LPAR    '('
                    add
                        mul
                            atom
                                NUM '11'
                        ADD '+'
                        add
                            mul
                                atom
                                    NUM '3'
                    RPAR    ')'

This is what the AST looks like, in concept. It’s good practice to trace the parser’s run in your mind, or on a piece of paper. I dare say it’s necessary to do so if you want to grok it. You can use this AST as a reference to make sure you got it right.
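
If you’d rather not trace it by hand, here’s a little helper I’m adding for illustration (it’s not part of the 70 lines) that prints a parse tree in the indented form shown above:

def print_tree(node, indent=0):
    if isinstance(node, RuleMatch):     # a rule: print its name, recurse into its children
        print('    ' * indent + node.name)
        for child in node.matched:
            print_tree(child, indent + 1)
    else:                               # a token: print its name and value
        print('    ' * indent + '%s %r' % (node.name, node.value))

Calling print_tree(match('add', tokens)[0]) reproduces the drawing above.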

So far we’ve written a parser capable of correctly parsing binary operations, unary operations, brackets and precedence.

There’s only one thing it does incorrectly, and we’re going to fix it in the next step.

Step 4: Post Processing

My parser is not perfect in many ways. The important flaw is that it cannot handle left-recursion, which forced me to write the grammar as right-recursive. As a result, parsing 8/4/2 produces the following AST:

    add
        mul
            atom
                NUM 8
            MUL '/'
            mul
                atom
                    NUM 4
                MUL '/'
                mul
                    atom
                        NUM 2

If we try to solve the expression using this AST, we’ll have to calculate 4/2 first, which is wrong. Some LL-parsers choose to fix the associativity in the tree. That takes too many lines ;). Instead, we’re going to flatten it. The algorithm is simple: For each rule in the AST that 1) needs fixing, and 2) is a binary operation (has three sub-rules), and 3) its right-hand operand is the same rule: flatten the latter into the former. By “flatten”, I mean replace a node with its children, in the context of its parent. Since our traversal is DFS post-order, meaning it starts from the leaves of the tree and works its way to the root, the effect accumulates. Here’s the code:

    fix_assoc_rules = 'add', 'mul'

    def _recurse_tree(tree, func):
        return map(func, tree.matched) if tree.name in rule_map else tree[1]

    def flatten_right_associativity(tree):
        new = _recurse_tree(tree, flatten_right_associativity)
        if tree.name in fix_assoc_rules and len(new)==3 and new[2].name==tree.name:
            new[-1:] = new[-1].matched
        return RuleMatch(tree.name, new)

This code will turn any structural sequence of additions or multiplications into a flat list (without mixing the two). Parentheses break the sequence, of course, so they won’t be affected.
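
For example, after flattening, the 8/4/2 tree from above becomes (note the single, flat mul node):

    add
        mul
            atom
                NUM 8
            MUL '/'
            atom
                NUM 4
            MUL '/'
            atom
                NUM 2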

From this point I could re-build the structure as left-associative, using code such as

    def build_left_associativity(tree):
        new_nodes = _recurse_tree(tree, build_left_associativity)
        if tree.name in fix_assoc_rules:
            while len(new_nodes)>3:
                new_nodes[:3] = [RuleMatch(tree.name, new_nodes[:3])]
        return RuleMatch(tree.name, new_nodes)

But I won’t. I’m pressed for lines of code, and changing the evaluation code to handle lists takes far fewer lines than rebuilding the tree.

Step 5: Evaluate

Evaluating the tree is very simple. All that’s required is to traverse the tree in a similar fashion to the post-processing code (namely DFS post-order), and to evaluate each rule in it. At the point of evaluation, because we recurse first, each rule should be made of nothing more than numbers and operations. Here’s the code:

    bin_calc_map = {'*':mul, '/':div, '+':add, '-':sub}
    def calc_binary(x):
        while len(x) > 1:
            x[:3] = [ bin_calc_map[x[1]](x[0], x[2]) ]
        return x[0]

    calc_map = {
        'NUM' : float,
        'atom': lambda x: x[len(x)!=1],
        'neg' : lambda (op,num): (num,-num)[op=='-'],
        'mul' : calc_binary,
        'add' : calc_binary,
    }

    def evaluate(tree):
        solutions = _recurse_tree(tree, evaluate)
        return calc_map.get(tree.name, lambda x:x)(solutions)

I wrote calc_binary to evaluate both addition and multiplication (and their counterparts, subtraction and division). It evaluates lists of either, in a left-associative fashion, thus bringing our little LL-grammar annoyance to a conclusion.
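
For example, the flat mul list that step 4 produces for 8/4/2 is now evaluated left-to-right, as it should be:

>>> calc_binary([8.0, '/', 4.0, '/', 2.0])
1.0

A right-associative evaluation would have computed 8/(4/2) and returned 4.0 instead.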

Step 6: The REPL

The plainest REPL possible:

    if __name__ == '__main__':
        while True:
            print( calc(raw_input('> ')) )

Please don’t make me explain it 🙂
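
For completeness, a session with the finished calculator looks something like this (the output formatting is Python 2’s, which is what the code was written for):

    > 1.2 / (11+3)
    0.0857142857143
    > 8/4/2
    1.0
    > -(1+2)
    -3.0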

Appendix: Tying it all together: A calculator in 70 lines

    '''A Calculator Implemented With A Top-Down, Recursive-Descent Parser'''
    # Author: Erez Shinan, Dec 2012

    import re, collections
    from operator import add,sub,mul,div

    Token = collections.namedtuple('Token', ['name', 'value'])
    RuleMatch = collections.namedtuple('RuleMatch', ['name', 'matched'])

    token_map = {'+':'ADD', '-':'ADD', '*':'MUL', '/':'MUL', '(':'LPAR', ')':'RPAR'}
    rule_map = {
        'add' : ['mul ADD add', 'mul'],
        'mul' : ['atom MUL mul', 'atom'],
        'atom': ['NUM', 'LPAR add RPAR', 'neg'],
        'neg' : ['ADD atom'],
    }
    fix_assoc_rules = 'add', 'mul'

    bin_calc_map = {'*':mul, '/':div, '+':add, '-':sub}
    def calc_binary(x):
        while len(x) > 1:
            x[:3] = [ bin_calc_map[x[1]](x[0], x[2]) ]
        return x[0]

    calc_map = {
        'NUM' : float,
        'atom': lambda x: x[len(x)!=1],
        'neg' : lambda (op,num): (num,-num)[op=='-'],
        'mul' : calc_binary,
        'add' : calc_binary,
    }

    def match(rule_name, tokens):
        if tokens and rule_name == tokens[0].name:      # Match a token?
            return tokens[0], tokens[1:]
        for expansion in rule_map.get(rule_name, ()):   # Match a rule?
            remaining_tokens = tokens
            matched_subrules = []
            for subrule in expansion.split():
                matched, remaining_tokens = match(subrule, remaining_tokens)
                if not matched:
                    break   # no such luck. next expansion!
                matched_subrules.append(matched)
            else:
                return RuleMatch(rule_name, matched_subrules), remaining_tokens
        return None, None   # match not found

    def _recurse_tree(tree, func):
        return map(func, tree.matched) if tree.name in rule_map else tree[1]

    def flatten_right_associativity(tree):
        new = _recurse_tree(tree, flatten_right_associativity)
        if tree.name in fix_assoc_rules and len(new)==3 and new[2].name==tree.name:
            new[-1:] = new[-1].matched
        return RuleMatch(tree.name, new)

    def evaluate(tree):
        solutions = _recurse_tree(tree, evaluate)
        return calc_map.get(tree.name, lambda x:x)(solutions)

    def calc(expr):
        split_expr = re.findall(r'[\d.]+|[%s]' % re.escape(''.join(token_map)), expr)
        tokens = [Token(token_map.get(x, 'NUM'), x) for x in split_expr]
        tree = match('add', tokens)[0]
        tree = flatten_right_associativity( tree )
        return evaluate(tree)

    if __name__ == '__main__':
        while True:
            print( calc(raw_input('> ')) )

Contracts and protocols as a substitute to types and interfaces

December 8, 2011

I am a big fan of assertions. Whenever I reach a point in my code where I say “that pointer can’t possibly be null”, I immediately write – assert( p != NULL ); – and whenever I say “this list can’t possibly be longer than 256” I write assert len(l) <= 256. If you wonder why I keep doing this, it’s because very often I’m wrong. It’s not that I’m a particularly bad programmer, but sometimes I make mistakes, and even when I don’t, sometimes I get very unexpected input, and even when I don’t, sometimes other pieces of code conspire against me. Assertions save me from mythical bug hunts on a regular basis.

So, it’s not a big surprise that I’m a big fan of contracts too. If you don’t know what contracts are, they’re essentially asserts that run at the beginning and end of each function, and check that the parameters and the return values meet certain expectations. In a way, function type declarations, as can be found in C or Java, are a special case of contracts. (Would you like to know more?)
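
To make the idea concrete, here’s a toy sketch in Python (my own illustration, not any particular library): a decorator that runs one assertion on the way in and another on the way out.

def contract(pre, post):
    '''A toy contract: check the arguments before the call, the result after.'''
    def decorator(func):
        def wrapped(*args):
            assert pre(*args), 'precondition violated'
            result = func(*args)
            assert post(result), 'postcondition violated'
            return result
        return wrapped
    return decorator

# The contract: accepts a list of at most 256 items, returns a non-negative count.
@contract(pre=lambda l: len(l) <= 256, post=lambda r: r >= 0)
def count_positives(l):
    return len([x for x in l if x > 0])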

Why not just use duck-typing?

Duck typing is great, but in my experience it becomes a burden as the system grows in size and complexity. Sometimes objects aren’t fully used right away; they are stored as an instance variable, pickled for later use, or sent to another process or another computer. When you finally get the AttributeError, it’s in another execution stack, or in another thread, or in another computer, and debugging it becomes very unpleasant! And what happens when you get the correct object, but it’s in the wrong state? You won’t even get an exception until something somewhere gets corrupted.

In my experience, using an assertion system is the best way to find the subtle bugs and incongruities of big and complex systems.

Why do we need something new?

Types are very confining, even in “typeless” dynamic languages. Take Python: if your API has to verify that it’s getting a file object, the only way is to call isinstance(x, file). That forces the caller to inherit from file, even if he’s writing a mock object (say, an RPC proxy) that makes no disk access. In any statically-typed language that I know, it’s impossible to say that you accept either int or float, and you’re forced to either write the same function twice, or use a template and just define it twice.

Today’s interfaces are ridiculous. In C#, an interface with a method that returns an IList<int> will be very upset if you try to implement it as returning List<int>! And don’t even try to return a List<int> when you’re expected to return List. Note that C# will gladly cast between these types in regular code, but when dealing with interfaces and function signatures it just goes nuts. It gets very annoying when you’re implementing an ITree interface and can’t use your own class as the nodes’ type because the signatures collide, and instead you have to explicitly cast from ITree in every method. But I digress.

Even if today’s implementations were better, types are just not enough. They tell you very little about the input or the output. You want to be able to test its values, lengths, states, and maybe to even interact with it to some degree. What we have just doesn’t cut it.

What should we do instead?

Contracts are already pretty good: they have a lot of flexibility and power, they’re self-documenting, and they can be reasoned about by the compiler/interpreter (“Oh, it only accepts a list[int<256]? Time to use my optimized string functions instead!”). But they only serve as a band-aid to existing type systems. They don’t give you the wholesome experience of abstract classes and methods. But, they can.

To me, contracts are much bigger than just assertions. I see them as stepping-stones to a completely new paradigm, that will replace our current system of interfaces, abstract methods, and needless inheritance, with “Contract Protocols”.

How? These are the steps that we need to take to get there:
  1. Be able to state your assertions about a function, in a declarative manner. Treat these assertions as an entity called a “contract”. We’re in the middle of this step, and some contract implementations (such as the wonderful PyContracts for Python) have already taken the declarative-entity route, which is essential for the next step (see the sketch below, after this list).
  2. Be able to compare contracts. Basically, I want to be able to tell if a contract is contained within another contract, so that if C1⊂C2 and x∊C1 then x∊C2. I suspect it’s easier said than done, but I believe that the following (much easier) steps make it worth doing.
  3. Be able to bundle contracts in a “contract protocol”, and use it to test a class. A protocol is basically just a mapping of {method-name: contract}, and applying it to a class tests that each method exists in the class, and that its contract is a subset of the protocol’s corresponding contract. If these terms are met, it can be said that the class implements the protocol. A class can implement several protocols, obviously.
  4. Be able to compare protocols. Similarly to contracts, we want to check if a protocol is a subset of another protocol. Arguably, it’s the same as step 3.
  5. Contracts can also check if an instance implements a protocol. Making a full circle, we can now use protocols to check for protocols and so on, allowing layers of complexity. We can now write infinitely detailed demands about what a variable should be, but very concisely.

When we finish point 5, we have a complete and very powerful system in our hands. We don’t need to ever discuss types, except for the most basic ones. Inheritance is now only needed to gain functionality, not identity. We can use it for debug-only purposes, but also for run-time decisions in production (For example, in a Strategy pattern).
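
For a taste of what step 1 looks like today, here is roughly how PyContracts expresses such a declarative contract (a sketch based on its documented string syntax; the details may differ between versions):

from contracts import contract

# 'a' must be a positive int, 'b' a non-empty list of some length N,
# and the return value must be a list of that same length N.
@contract(a='int,>0', b='list[N],N>0', returns='list[N]')
def scale(a, b):
    return [a * x for x in b]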

Example

As a last attempt to get my point across, here is vaguely how I imagine the file protocol to look in pseudo-code.

It doesn’t do the idea any justice, but hopefully it’s enough to get you started.

protocol Closeable:
    close()

protocol _File < Closeable:
    [str] name
    [int,>0] tell()
    seek( [int,in (0,1,2)] )

protocol ReadOnlyFile < _File:
    [str,!=''] read( [int,>0 or None]? )
    [Iterable[str]] readlines( )
    [Iterable[str]] __iter__( )

protocol WriteOnlyFile < _File:
    [int,>0] write( [str,!=''] )
    writelines( [Iterable[str]] )
    flush()

protocol RWFile < ReadOnlyFile | WriteOnlyFile:
    pass

>>> print ReadOnlyFile < RWFile
True
>>> print implements( open('bla'), ReadOnlyFile )
True
>>> print implements( open('bla'), Iterable )  # has an __iter__ method
True
>>> print implements( open('bla'), Iterable[int] )
False
>>> print implements( open('bla'), WriteOnlyFile )  # default is 'r'
False
>>> print implements( open('bla'), RWFile )
False
>>> print implements( open('bla', 'w+'), RWFile )
True

Zen, Art and Programming

July 7, 2008

The main advantage of having no readership is that I have very few restrictions on what to write about.
(the main disadvantage is obvious)

So in this post, I would like to tell a short zen story –

Notice the excessive and redundant outlining (very blunt around the eyes), and that nothing is quite right. I would be embarrassed to display this in public, but everyone’s first steps were (or still are) awkward.

Art
On my first drawing lesson (sketching, to be precise) my teacher gave me the task of copying a grayscale painting in charcoal (charcoal is easy to erase and is good for beginners). The painting featured the face of a girl. As most people would, I started by outlining her face and hair, positioning her nose, shaping her eyes, and went on to coloring the paper to match the tones of the painting. I had already spent an hour and a half working on the drawing, yet naturally it was disfigured. The teacher did not seem to care much about this. Instead he remarked that in the painting the eyes are not outlined, nor is the face. He also remarked on the incorrect differences in tones. Then he said “this drawing is stuck”, took a small cloth and smeared the drawing, erasing most details and leaving only the general shape and tone. An hour and a half of work was ruined, and I was in shock. But I kept an open mind and listened to his instructions. Here they are, partly in my own words and commentary, but in the same spirit:
1. Avoid detail. Anything you can’t see when squinting is not important.
2. Your sight is biased. To be correct you have to compare elements to other elements (elements being location, size, color, etc.).
3. Observe. Spending time on understanding the drawing (the relationships of elements) will save you time when drawing.
He had said all this before, but it only made sense at that point. And so I resumed drawing, trying to follow these rules. I was amazed by two things: how quickly I managed to reconstruct the painting, and how convincing it looked without any real detail. And so, I learned several things:
1. Your internal model is harder to construct than the drawing. Spend more time on observing, and you will spend less time on the drawing.
2. Detail distracts you from the more important problems.
3. Keep detail to the end. If you start from it, you will never get it right.

Programming
If you remember the title, you must be asking yourself what all this has to do with programming. Well, as I returned home from the lesson I had some time to contemplate, and I remembered a time when a friend and I had to teach someone how to program. This was at work, so that someone was committed. Among the teachings, we conjured up an exercise that was supposed to bring the “student” into the right spirit. The right spirit, or as we called it, the “programming zen”, was very important to us. The exercise was this: the student was given a programming problem which he had to solve. After making sure that he had completed the program correctly and that the code was of high quality, we ordered him to erase the program (and any copy of it) from his computer.
This seemingly cruel exercise was meant to teach the following lesson: code is not important. What’s important is your knowledge and your understanding of it. Once you’ve solved a problem, you can solve it again without much effort.

Zen
This was what I remembered, and I then realized that I had been taught the very thing I had tried to teach someone else some time ago, and that it applies to both drawing and programming:

Your internal model is harder to construct than the output, and is also more important. Spend more time understanding the problem, and you will spend less time on implementing it.

I believe the other tips and lessons mentioned here also apply to programming, but those are stories for another time.