In a previous article we saw type constructors for ADTs, and how they are not types themselves. These are `Maybe` and `Either` in Haskell again:

```
data Maybe a = Nothing | Just a
data Either a b = Left a | Right b
```

The previous declaration for `Maybe` names a type constructor that transforms a given type `a` into a new type `Maybe a`, such as `Maybe Integer` or `Maybe Char`. We cannot ask the Haskell interpreter for the type of `Maybe` by itself:

```
$ ghci
GHCi, version 8.2.1: http://www.haskell.org/ghc/ :? for help
Prelude> :type Maybe
<interactive>:1:1: error:
• Data constructor not in scope: Maybe
• Perhaps you meant variable ‘maybe’ (imported from Prelude)
```

The error we get back from the interpreter says that a data constructor named `Maybe` could not be found. That is because it is a type constructor instead. But there is a different question we can ask the interpreter: we can ask for its *kind*:

```
Prelude> :kind Maybe
Maybe :: * -> *
```

Type kinds are “types of types”: they partition type constructors that share the same arity into sets, just like types are sets of values. The kind of regular types such as `Integer` and `Char` is just `*`. The kind `*` is the kind of all types that can have values, and there is not really much more to it^{1}. The kind of the type constructor `Maybe`, on the other hand, is `* -> *`^{2}, which means that it takes any type of kind `*` and gives back another type of kind `*`. Why does this matter? It means that we cannot create the types `Maybe Maybe` or `Maybe Either`, because the kinds do not match. Kinds are how the compiler makes sure that applications of type constructors make sense:

```
Prelude> :kind Maybe Maybe
<interactive>:1:7: error:
• Expecting one more argument to ‘Maybe’
Expected a type, but ‘Maybe’ has kind ‘* -> *’
Prelude> :kind Maybe Either
<interactive>:1:7: error:
• Expecting two more arguments to ‘Either’
Expected a type, but ‘Either’ has kind ‘* -> * -> *’
```

Just as it is possible to partially apply a function in Haskell, it is also possible to partially apply a type constructor:

```
Prelude> :kind Either
Either :: * -> * -> *
Prelude> :kind Either Integer
Either Integer :: * -> *
Prelude> :kind Either Integer Char
Either Integer Char :: *
```

This means that `Maybe` and `Either a` have the same kind and can be used in type signatures whenever a type constructor of kind `* -> *` is needed.
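As a sketch of why this matters, consider a function whose signature mentions a type constructor `f` of kind `* -> *` (here via the standard `Functor` class); both `Maybe` and the partially applied `Either Char` can be plugged in for `f` (the names `incrementAll`, `m`, and `e` are ours, invented for this example):

```haskell
-- increment every number inside any Functor-shaped container;
-- fmap's f must have kind * -> *
incrementAll :: (Functor f, Num a) => f a -> f a
incrementAll = fmap (+ 1)

-- Maybe has kind * -> *, so it fits
m :: Maybe Integer
m = incrementAll (Just 41)        -- Just 42

-- Either Char (a partially applied Either) also has kind * -> *
e :: Either Char Integer
e = incrementAll (Right 9)        -- Right 10
```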

Idris is a programming language where types are first-class entities and therefore have types themselves. We can ask the Idris interpreter the same questions we asked the Haskell one before:

```
$ idris
Idris> :type Maybe
Maybe : Type -> Type
Idris> :type Either
Either : Type -> Type -> Type
```

As we can see, Idris says that `Maybe` is just a regular function that takes values of type `Type` and returns other values of type `Type` (the first type is, of course, different from the second one). Therefore type kinds are not needed in Idris: the compiler only needs to check the function arity:

```
Idris> :type Maybe Either
Type mismatch between
Type -> Type -> Type (Type of Either)
and
Type (Expected type)
```

In Idris, all values of type `Type` can be used as types in the traditional sense. This adds complexity to the language, but it allows for powerful type constructions where the type of some values can depend on other values, instead of depending simply on other types as in Haskell. For instance, we can write a function that takes an `Integer` or a `Double` depending on the value of another argument to the function:

```
WantDouble : Bool -> Type
WantDouble False = Integer
WantDouble True = Double
-- the argument `number` will have either `Integer` or `Double` type,
-- depending on the value of `b`
processNumber : (b : Bool) -> WantDouble b -> String
processNumber False number = show (number + 1)
processNumber True number = show (number / 3.78)
```

In statically-typed programming languages all values are logically partitioned into sets which denote their *types*. For instance, numbers such as 1, 2, 99, and 456 are of the `Integer` type. Other numbers such as 3.14, 67.88, and 0.123 are of the `Real` type. When a value belongs to a type, it is also said that the value *inhabits* that type. When types are viewed as theorems, we can also say that the value is a *witness* or a *proof* of the type it inhabits.

Algebraic data types, or ADTs, are a way of declaring new types often found in functional programming languages. The following simple ADT declares a new type for boolean values (the code snippets in this article will be given in Haskell):

`data Bool = False | True`

This declaration introduces a new type, denoted by `Bool`, which has only two values: `True` and `False` (this is how the `Bool` type is actually defined in Haskell). When reading ADT declarations it is important to remember that the name introduced to the left of the equals sign lives in the namespace of types, whereas the names introduced to the right of the equals sign live in the namespace of values. One could similarly create a new type for the days of the week:

```
data Weekdays = Sunday
              | Monday
              | Tuesday
              | Wednesday
              | Thursday
              | Friday
              | Saturday

Both newly introduced types are also called enumerations, because they are sets of simple, fixed values. They are also called *sum types*, because each of their values is one of the alternatives separated by the pipe symbol (`|`), i.e., they are the sum of the alternatives. But it is also possible to define *product types*, such as the following:

`data Point = Point Integer Integer`

They are called product types because they correspond to Cartesian products of other types, such as `Integer` × `Integer` in this case. Also notice that, although we have introduced the new name `Point` on both sides of the equals sign, those names do not refer to the same thing. The `Point` to the left of the equals sign names a new type, whereas the `Point` to the right of the equals sign names a *value constructor* (also known as a *data constructor*). We could as well have defined `Point` this way:

`data Point = MkPoint Integer Integer`

Value constructors are not values themselves, but function-like objects that, given some input values, build *new values* of the newly-declared type. Given the above declaration, this is how we can create new values that inhabit `Point`:

```
origin = MkPoint 0 0
p1 = MkPoint 0 100
p2 = MkPoint 33 33
```

They are “function-like” because, although they are invoked like regular functions, they are implemented by the compiler itself and just act as *containers*, holding the values passed to them for future use. For instance, let’s see which type the Haskell interpreter reports for the value constructor above:

```
$ ghci
Prelude> data Point = MkPoint Integer Integer
Prelude> :t MkPoint
MkPoint :: Integer -> Integer -> Point
```

It is very convenient that, in a language with first-class functions, value constructors are also functions. It allows one to create a list of points from two lists of coordinates with this simple snippet:

```
-- zipWith takes a function and two lists, and applies the function
-- to the corresponding elements of the lists, yielding a new list
-- as a result
points = zipWith MkPoint xCoords yCoords
-- this is the type of applying zipWith to MkPoint as reported
-- by Haskell, brackets denote a list
zipWith MkPoint :: [Integer] -> [Integer] -> [Point]
```

We have seen sum and product types but, more generally, ADTs can be sum types of product types:

```
data ProjectivePoint = PointInInfinity
                     | RealPoint Integer Integer
```

This is actually why they are called *algebraic* in the first place. With regard to the type declared above, all the following values inhabit `ProjectivePoint`:

```
infinity = PointInInfinity
origin = RealPoint 0 0
pp1 = RealPoint 100 (-100)
```

Just like we have value constructors, which transform values into values of other types, we can also have *type constructors*, which transform types into other types. One example is the very popular `Maybe` ADT:

`data Maybe a = Nothing | Just a`

In this declaration `Maybe` by itself does not denote a type, but a type constructor. The variable `a` in this declaration stands for any other type, such as `Integer`, `Char`, or other ADTs such as `Bool` or `Point`. `Maybe Integer`, on the other hand, denotes a type. Values of this type can be created by one of the data constructors `Nothing` or `Just x`, where `x` is some integer value. We can thus write a function that returns the square root of a real number if it is not negative:

```
-- function type
maybeSqrt :: Double -> Maybe Double
-- function implementation
maybeSqrt x = if x < 0 then Nothing
              else Just (sqrt x)
```
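One way to consume the optional result is the Prelude’s `maybe` function, which supplies a default for the `Nothing` case. A small sketch (the helper name `sqrtOrZero` is ours, and `maybeSqrt` is repeated so the snippet stands alone):

```haskell
maybeSqrt :: Double -> Maybe Double
maybeSqrt x = if x < 0 then Nothing
              else Just (sqrt x)

-- maybe :: b -> (a -> b) -> Maybe a -> b, from the Prelude:
-- it returns the default for Nothing, and applies the
-- function to the value held by Just otherwise
sqrtOrZero :: Double -> Double
sqrtOrZero x = maybe 0 id (maybeSqrt x)
```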

Another such type constructor is `Either`, a sum type whose values hold a value of one of two different types, but not both at the same time:

`data Either a b = Left a | Right b`

The declaration `Either Integer Double` denotes a type whose values can hold either integer or real numbers:

```
someInteger :: Either Integer Double
someInteger = Left 42
someDouble :: Either Integer Double
someDouble = Right 3.1415
```

It is also possible to create types that reference themselves. This is how an ADT for a list of values could be declared:

```
data List a = Empty
            | Element a (List a)
```

This declaration means that a value of type `List a` is either an empty list, or a value of type `a` followed by another value of type `List a`. Such ADTs are called *recursive*. Examples of values of this type are:

```
noInts :: List Integer
noInts = Empty
aDouble :: List Double
aDouble = Element 1.0 Empty
aMaybe :: List (Maybe Integer)
aMaybe = Element (Just 9) Empty
aCoupleMaybes :: List (Maybe Integer)
aCoupleMaybes = Element Nothing aMaybe
```

So far we have seen different ways of creating ADTs, but how do we use them? We can look into the values inside an ADT by using *pattern matching*, which means pairing patterns with code to be executed in case the respective pattern matches. For instance, we can write a function that converts booleans to zeros and ones:

```
binary :: Bool -> Integer
binary False = 0
binary True = 1
```

It is not necessary to list all the patterns if we want to treat only some of them differently. The following function tests whether a weekday falls on the weekend:

```
isWeekend :: Weekdays -> Bool
isWeekend Sunday = True
isWeekend Saturday = True
isWeekend _ = False
```

For matching against data constructors which hold values, we name the values which will become local variables in the function scope:

```
hasBiggerThan10 :: Maybe Integer -> Bool
hasBiggerThan10 Nothing = False
hasBiggerThan10 (Just x) = x > 10
squaredDistance :: Point -> Point -> Integer
squaredDistance (MkPoint x1 y1) (MkPoint x2 y2) =
    (x2 - x1) ^ 2 + (y2 - y1) ^ 2
getXCoord :: ProjectivePoint -> Maybe Integer
getXCoord PointInInfinity = Nothing
getXCoord (RealPoint x _) = Just x -- no need to name an unused value
-- pattern matching can unwrap data constructors at arbitrary levels
getSecondIntegerOrZero :: List (Maybe Integer) -> Integer
getSecondIntegerOrZero (Element _ (Element (Just x) _)) = x
getSecondIntegerOrZero _ = 0
```
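Pattern matching also pairs naturally with recursive ADTs. As a final sketch (the function name `listLength` is ours), here is a length function for the `List` type defined earlier, with the declaration repeated so the snippet stands alone:

```haskell
data List a = Empty
            | Element a (List a)

-- recursion follows the two constructors: an empty list has length 0,
-- and a non-empty list is one element longer than its tail
listLength :: List a -> Integer
listLength Empty            = 0
listLength (Element _ rest) = 1 + listLength rest
```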

This was a short introduction to algebraic data types, which should give a good overview of what they can do, but there is much more to be said about them.

Lua is a programming language that, in the best Scheme tradition, takes pride in its extreme simplicity. But it goes even further in one respect: there is just one data structure, the associative array, or map. As Lua was designed from the beginning to be an extension language that could be picked up easily by non-programmers, this makes sense. And Lua’s usage numbers show that this strategy worked: Lua is widely used as a scripting language for computer games, for instance.

In Lua, maps are called tables. Although having tables as the only data structure in the language seems very limiting at first, Lua has several features that make tables work well in many different contexts. This post is about enumerations in Lua.

In languages such as C and Java, enumerations are written like this:

```
enum Colors {
    BLUE = 1,
    GREEN,
    RED,
    VIOLET,
    YELLOW
};
```

The enumeration values are stored as integral types. Integer values can be assigned manually to one or more names; the remaining ones get successive integers. The values can be used as integers, for instance to index arrays.

In Lua it is possible to use a similar syntax, based on one of the several table constructors:

```
Colors = {
    BLUE = 1,
    GREEN = 2,
    RED = 3,
    VIOLET = 4,
    YELLOW = 5,
}

-- the above is equivalent to:
Colors = {
    ["BLUE"] = 1,
    ["GREEN"] = 2,
    ["RED"] = 3,
    ["VIOLET"] = 4,
    ["YELLOW"] = 5,
}

-- reading the integer back
local color = Colors.RED -- the same as Colors["RED"]
```

This already goes some way toward making our enumeration, but it is not very convenient, because we need to assign an integer to every key in the table. A different table constructor avoids that, but then the keys need to be written explicitly as strings:

```
Colors = {
    "BLUE",
    "GREEN",
    "RED",
    "VIOLET",
    "YELLOW",
}

-- which is equivalent to:
Colors = {
    [1] = "BLUE",
    [2] = "GREEN",
    [3] = "RED",
    [4] = "VIOLET",
    [5] = "YELLOW",
}
```

Now the items are numbered automatically, but the keys and values are inverted. In this case we need a helper function to swap them for us:

```
-- assumes the tbl is an array, i.e., all the keys are
-- successive integers - otherwise #tbl will fail
function enum(tbl)
    local length = #tbl
    for i = 1, length do
        local v = tbl[i]
        tbl[v] = i
    end
    return tbl
end
```

When passing a literal table (or a string) to a function in Lua, the parentheses can be omitted, which lets us create a handy DSL:

```
Colors = enum {
    "BLUE",
    "GREEN",
    "RED",
    "VIOLET",
    "YELLOW",
}
-- finally, get our integer from the enum!
local color = Colors.RED
```

As a side note, strings in Lua are immutable and interned by the interpreter, which means that accessing tables by string keys is very fast. In the table implementation the pointer to the interned string will be used instead of a hash function for looking up the associated values. This is not as fast as accessing an enumeration value in C, but in the interpreted world, it’s still very fast.

I have had an interest in Haskell for a long time. When I first encountered the language, resources such as Real World Haskell and Learn You a Haskell didn’t exist yet, so I tried to make do with A “Gentle” Introduction to Haskell, which had me scratching my head *a lot*. At that time I even started writing a tool in Haskell to help my father run his business. But then other things came up, and I was left with that superficial knowledge of Haskell that I always wanted to deepen but never got to.

A couple of months ago I stumbled on some praise online for a new Haskell book, which promised to teach everything from first principles. When I saw that the book included chapters on more recent topics such as `Foldable` and `Traversable`, and on things I never properly got, such as the `Reader` monad and monad transformers, I decided to give it a try. I was not disappointed. One of the many *a-ha* moments I had reading the book was about those famous monad transformers.

I had an ingrained notion, brought from other languages, that Haskell data types were just either enumerations or data containers. For instance:

```
-- enumeration
data Bool = False | True
-- container
data BinTree a = Nil | Node a (BinTree a) (BinTree a)
```

Therefore I was puzzled when reading code examples in the book that seemed to be about pointless wrapping and unwrapping of data with a `MaybeT` monad transformer. Then I realized that this wrapping and unwrapping was being done in order to get access to the `Monad` instance of `MaybeT` and use its instance functions on the wrapped data. In other words, types were being used not to hold other data, as in a collection, but to *change* the behaviour of that same data.

A small example should demonstrate why this is useful. Let’s start a module:

```
module MaybeT where
import Control.Monad.Trans.Maybe
```

In this example, there are two text files with key-value pairs. The first one is a dictionary from Portuguese to English:

```
casa house
carro car
mala suitcase
mesa table
```

and the second one is a price list:

```
house 1000000
chair 2300
suitcase 600
pen 15
```

These data sources could be replaced by a database or a web service and the reasoning would be the same. In order to access those sources, I/O needs to be performed, and therefore the result will be embedded in the `IO` type. Moreover, an optional result type (`Maybe`) should be used in case we search for a non-existent key. The functions that retrieve data will therefore be:

```
mkTable :: (String -> a) -> String -> [(String, a)]
mkTable f = let mkPair (x:y:_) = (x, f y) in
            fmap (mkPair . words) . filter (/= "") . lines
getItem :: String -> (String -> a) -> String -> IO (Maybe a)
getItem file f key = readFile file >>= return . lookup key . mkTable f
getTranslation :: String -> IO (Maybe String)
getTranslation = getItem "dict.txt" id
getPrice :: String -> IO (Maybe Int)
getPrice = getItem "prices.txt" read
```

Given a product named in Portuguese, we want to obtain its price. The types of the results of those functions show that, to access the data we want, we need to unwrap first the `IO` type and then the `Maybe`. An attempt would be:

```
v1 :: String -> IO (Maybe Int)
v1 key = do
    item <- getTranslation key
    case item of
        Nothing -> return Nothing
        Just x  -> getPrice x
```

The `do` notation unwraps the `IO` monad, and the `Maybe` is unwrapped with a `case` statement. Running this in `ghci`:

```
Prelude> :l MaybeT.hs
*MaybeT> v1 "livro"
Nothing
*MaybeT> v1 "casa"
Just 1000000
*MaybeT>
```

In this example it was necessary to manually unwrap the `Maybe` type embedded in `IO` in order to call `getPrice` on the result. Although in this case the effort was not very big, things can escalate quickly with more functions.

Let’s see a possible definition of `MaybeT`:

`newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }`
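Much of the convenience comes from this type’s `Monad` instance, which looks roughly like the following sketch (close to, but not exactly, the definition in the transformers library; in modern GHC, `Functor` and `Applicative` instances are also required and are omitted here):

```haskell
instance Monad m => Monad (MaybeT m) where
    return = MaybeT . return . Just
    x >>= f = MaybeT $ do           -- this do block runs in the outer monad m
        maybeValue <- runMaybeT x   -- unwrap m, obtaining a Maybe a
        case maybeValue of          -- the case analysis happens here, once
            Nothing -> return Nothing
            Just y  -> runMaybeT (f y)
```

The `case` statement that `v1` had to write by hand lives inside `>>=`, so every `do` block over `MaybeT` gets it for free.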

This declaration shows that a single object named `runMaybeT` is being held, with type `m (Maybe a)`. It is the composition of a type `m` we know nothing about, besides the fact that it also has an instance of `Monad`, and the `Maybe` type we know, which is *wrapped* by `m`. So the `MaybeT` transformer is a way of accessing the innermost data, bypassing the outer monad we don’t care about, without needing to manually unwrap it at every step, as the next example shows:

```
v2 :: String -> IO (Maybe Int)
v2 key = runMaybeT $ do
    item <- MaybeT $ getTranslation key
    MaybeT $ getPrice item
```

It is not much shorter than the first version, but the advantages become clear as the number of steps increases. For each new function added to the chain, a `case` statement would have to be added to the first version, while in the second version the code stays short and the flow is obvious. Besides, `newtype` is a restricted version of `data` that doesn’t exist at runtime: all the wrapping and unwrapping of `MaybeT` seen in the code above is elided by the compiler. Other monad transformers such as `EitherT`, `ReaderT`, `StateT`, etc. work similarly: they let you abstract away the outer monad and work on the data you care about.
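For reference, the corresponding wrappers in the transformers library are declared along the same lines (note that the modern library spells `EitherT` as `ExceptT`):

```haskell
-- each transformer fixes the shape of the wrapped computation,
-- leaving the inner monad m abstract
newtype ExceptT e m a = ExceptT { runExceptT :: m (Either e a) }
newtype ReaderT r m a = ReaderT { runReaderT :: r -> m a }
newtype StateT  s m a = StateT  { runStateT  :: s -> m (a, s) }
```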

I am glad I bought this book. This insight alone made it worth it for me.

Lately I have been reading about searching in game trees. A friend of mine sent me the “Escape from Zurg” paper, which talks about problem solving by tree searching with Haskell.

In a large class of problems one is given a start state and some predicate for a desired final state. Moreover, the rules that dictate how a successor state is generated from a previous one are also given. Take a board game, for instance, like tic-tac-toe. There is the initial state (the empty board), the predicate for a desired state (a state in which three pieces of mine are aligned), and rules to create one state from another (the rules of the game). Another example of such a problem is “Escape from Zurg”, which is stated as follows:

Buzz, Woody, Rex, and Hamm have to escape from Zurg. They merely have to cross one last bridge before they are free. However, the bridge is fragile and can hold at most two of them at the same time. Moreover, to cross the bridge a flashlight is needed to avoid traps and broken parts. The problem is that our friends have only one flashlight with one battery that lasts for only 60 minutes (this is not a typo: sixty). The toys need different times to cross the bridge (in either direction):

| TOY   | TIME       |
|-------|------------|
| Buzz  | 5 minutes  |
| Woody | 10 minutes |
| Rex   | 20 minutes |
| Hamm  | 25 minutes |

Since there can be only two toys on the bridge at the same time, they cannot cross the bridge all at once. Since they need the flashlight to cross the bridge, whenever two have crossed the bridge, somebody has to go back and bring the flashlight to those toys on the other side that still have to cross the bridge. The problem now is: in which order can the four toys cross the bridge in time (that is, in 60 minutes) to be saved from Zurg?

This class of problems is usually solved by searching. The initial state and all its successors ultimately build a DAG (directed acyclic graph) that is called the *search space*. Some search spaces are small, as in tic-tac-toe or the “Escape from Zurg” puzzle. Others are large, as in Reversi; even larger, as in Chess; others are infinite. Needless to say, some ways of searching are better than others. This same puzzle, for instance, was solved in Scheme by Ben Simon using the cool `amb` operator, which uses first-class continuations to backtrack and take another path in case the first one fails. It was an elegant solution for this puzzle, but it is not adequate for general searching because it backtracks blindly. There are other ways to search blindly, for instance *depth-first search* and *breadth-first search*.

In a depth-first search, we analyse the successors of a given state until there are no more successors to analyse, and then backtrack to a previous level of the tree. In a breadth-first search, we analyse the whole fringe of the tree (the *horizon nodes*) before expanding it one level further using the successor rules. This is good for infinite search spaces, because we will eventually find a solution if there is one. The implementation in Scheme is straightforward:

```
(define (depth-first states successors goal?)
  (if (null? states)
      #f
      (let ((state (car states)))
        (if (goal? state)
            state
            (depth-first (append (successors state)
                                 (cdr states))
                         successors
                         goal?)))))

(define (breadth-first states successors goal?)
  (if (null? states)
      #f
      (let ((state (car states)))
        (if (goal? state)
            state
            (breadth-first (append (cdr states)
                                   (successors state))
                           successors
                           goal?)))))
```

This is actually all we need for simple searches. Of course, we need to supply the initial state, the procedure that generates successors, and the goal predicate; these are specific to each kind of problem. Let’s see the procedures for the Zurg problem:

```
;; the type of the states found in our puzzle
(define-type state
(previous unprintable:) ;; previous state
near ;; toys in near shore
far ;; toys in far shore
flashlight ;; location of flashlight
time) ;; time consumed so far
(define (start-state)
(make-state #f
'(buzz woody rex hamm)
'()
'near
0))
(define (final-state? state)
(and (null? (state-near state))
(<= (state-time state) 60)))
;; at most two toys can travel across the bridge
;; the flashlight must be used in all crossings
(define (successors state)
(let ((orig (if (eqv? (state-flashlight state)
'near)
(state-near state)
(state-far state)))
(dest (if (eqv? (state-flashlight state)
'near)
(state-far state)
(state-near state))))
(map (lambda (toy-pair)
(let ((toy1 (car toy-pair))
(toy2 (cdr toy-pair)))
(make-next-state state
(remove toy2
(remove toy1 orig))
(ucons toy1
(cons toy2 dest))
(max (time-cost toy1)
(time-cost toy2)))))
(product orig orig))))
```

The previous procedures use some utilities:

```
(define (make-next-state state orig dest time)
  (if (eqv? (state-flashlight state) 'near)
      (make-state state orig dest 'far (+ (state-time state) time))
      (make-state state dest orig 'near (+ (state-time state) time))))

;; the time each toy takes to cross the bridge
(define time-cost
  (let ((costs '((buzz . 5)
                 (woody . 10)
                 (rex . 20)
                 (hamm . 25))))
    (lambda (toy)
      (let ((pair (assv toy costs)))
        (and pair
             (cdr pair))))))

(define (ucons a b)
  (if (memv a b)
      b
      (cons a b)))

(define (product lis1 lis2)
  (let lp1 ((lis1 lis1)
            (res '()))
    (if (null? lis1)
        res
        (let ((i1 (car lis1)))
          (let lp2 ((lis2 lis2)
                    (res res))
            (if (null? lis2)
                (lp1 (cdr lis1) res)
                (let ((i2 (car lis2)))
                  (if (member (cons i2 i1) res)
                      (lp2 (cdr lis2) res)
                      (lp2 (cdr lis2) (cons (cons i1 i2) res))))))))))
```

Our `successors` procedure does not take time into account, so it is possible for a branch of the tree to go on indefinitely. To guarantee that we arrive at a solution, we use a breadth-first search:

```
(define (print-path state)
  (let loop ((state state)
             (path '()))
    (if state
        (loop (state-previous state)
              (cons state path))
        (for-each println path))))

(print-path (breadth-first (list (start-state)) successors final-state?))
```

With the result:

```
#<state #14 near: (buzz woody rex hamm) far: () flashlight: near time: 0>
#<state #15 near: (rex hamm) far: (buzz woody) flashlight: far time: 10>
#<state #16 near: (woody rex hamm) far: (buzz) flashlight: near time: 20>
#<state #17 near: (woody) far: (rex hamm buzz) flashlight: far time: 45>
#<state #18 near: (buzz woody) far: (rex hamm) flashlight: near time: 50>
#<state #19 near: () far: (buzz woody rex hamm) flashlight: far time: 60>
```

If we change our `successors` procedure a little, we can use a depth-first search:

```
(define (successors2 state)
  (let ((orig (if (eqv? (state-flashlight state) 'near)
                  (state-near state)
                  (state-far state)))
        (dest (if (eqv? (state-flashlight state) 'near)
                  (state-far state)
                  (state-near state))))
    (filter (lambda (s)
              (<= (state-time s) 60)) ;; filtering out bad states
            (map (lambda (toy-pair)
                   (let ((toy1 (car toy-pair))
                         (toy2 (cdr toy-pair)))
                     (make-next-state state
                                      (remove toy2 (remove toy1 orig))
                                      (ucons toy1 (cons toy2 dest))
                                      (max (time-cost toy1)
                                           (time-cost toy2)))))
                 (product orig orig)))))

(print-path (depth-first (list (start-state)) successors2 final-state?))
```

The advantage of changing the successors procedure is that depth-first searches are usually faster than breadth-first searches, in this case 483 ms vs. 24630 ms, and use less memory. So it is always a good idea to cut the search space as soon as possible.

Doing some research on compilers, interpreters, and virtual machines, I have gathered a bibliography from several resources. Here it is, in no particular order:

- Essentials of Programming Languages, by Daniel P. Friedman, Mitchell Wand and Christopher T. Haynes
- Programming Language Pragmatics, by Michael L. Scott
- Smalltalk-80: The Language and Its Implementation, by Adele Goldberg and David Robson
- Writing Compilers and Interpreters: An Applied Approach Using C++, by Ronald L. Mak
- Modern Compiler Implementation in ML, by Andrew W. Appel
- Lisp in Small Pieces, by Christian Queinnec
- An Incremental Approach to Compiler Construction, by Abdulaziz Ghuloum
- Let’s Build a Compiler, by Jack Crenshaw
- Structure and Interpretation of Computer Programs, by Harold Abelson and Gerald Jay Sussman, with Julie Sussman
- Advanced Compiler Design and Implementation, by Steven Muchnick
- The 90 Minute Scheme to C compiler, by Marc Feeley
- Compilers: Principles, Techniques, and Tools, by Aho, Lam, Sethi and Ullman (the “Dragon Book”)
- Engineering a Compiler, by Keith Cooper and Linda Torczon
- Mike Pall’s guide to the Lua source code
- Anton Ertl’s papers
- David Gregg’s papers
- Michael Franz’s papers
- Garbage Collection: Algorithms for Automatic Dynamic Memory Management, by Richard Jones and Rafael D. Lins
- Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp, by Peter Norvig
- The Art of the Interpreter or, The Modularity Complex (Parts Zero, One, and Two), by Guy Lewis Steele, Jr. and Gerald Jay Sussman
- Compiler section on readscheme.org