I have the following numerical-expression interpreter (a simplified example):
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable, LambdaCase #-}

-- Fix and cata come from the recursion-schemes package
import Data.Functor.Foldable (Fix (..))
import qualified Data.Functor.Foldable as Fold

data Node x
  = Symbol String -- a parsed word coming from Python; in the real problem it
                  -- carries a syntactic category, feature list, and source text
  | Branches x x
  deriving (Functor, Foldable, Traversable)

type SyntaxTree = Fix Node
-- here's the type of denotations
data Meaning
  = Number Double
  | BinaryOperator (Double -> Double -> Double)
  | UnaryOperator (Double -> Double)
-- this is a catamorphism
simpleInterpret :: SyntaxTree -> Meaning
simpleInterpret = Fold.cata simpleLexicon
simpleLexicon :: Node Meaning -> Meaning
simpleLexicon = \case
  Symbol "0" -> Number 0
  Symbol "1" -> Number 1
  Symbol "/" -> BinaryOperator (/)
  Branches (BinaryOperator bop) (Number s) -> UnaryOperator (bop s)
  Branches (Number s) (BinaryOperator bop) -> UnaryOperator (bop s)
  Branches (UnaryOperator uop) (Number s) -> Number (uop s)
  Branches (Number s) (UnaryOperator uop) -> Number (uop s)
  _ -> error "not handled"
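For concreteness, here's a small smoke test of the above (a tree I constructed by hand; since Meaning has no Show instance, I unwrap the result manually):

-- "1 / 1": the inner node combines the operator with one operand into a
-- UnaryOperator; the outer node applies it to the other operand
example :: SyntaxTree
example =
  Fix (Branches
        (Fix (Branches (Fix (Symbol "/")) (Fix (Symbol "1"))))
        (Fix (Symbol "1")))

-- evaluates to 1.0
result :: Double
result = case simpleInterpret example of
  Number d -> d
  _        -> error "expected a number"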
It would be nice if I didn't have to introduce this ungainly Meaning type and the ensuing rules for combining a Meaning on a left branch with a Meaning on a right branch. In the real problem I'm working on, of which this is a simplified example, Meaning needs many more constructors, and the case analysis gets awkward. Intuitively, I just want my code to know that if the left-branch thing has type a and the right-branch thing has type a -> b, then the result should have type b.
Is there a good technique for doing this, perhaps with fancier type-level machinery? I suppose what I really want is for Meaning to be the type of simply-typed lambda calculus expressions, so perhaps the answer is to embed the simply-typed lambda calculus?
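To make the question concrete, here's roughly the shape I imagine such an embedding taking (an untested sketch; the names Ty, TypedMeaning, eqTy, and apply are all mine, and runtime type-tag comparison via Data.Type.Equality is just one guess at how application across branches would be checked):

{-# LANGUAGE GADTs #-}

import Data.Type.Equality ((:~:) (Refl))

-- object-language types, reflected as a GADT
data Ty t where
  TDouble :: Ty Double
  TArrow  :: Ty a -> Ty b -> Ty (a -> b)

-- a denotation packed with a tag describing its type
data TypedMeaning where
  TypedMeaning :: Ty t -> t -> TypedMeaning

-- decide equality of two type tags, producing a proof on success
eqTy :: Ty s -> Ty t -> Maybe (s :~: t)
eqTy TDouble TDouble = Just Refl
eqTy (TArrow a b) (TArrow c d) = do
  Refl <- eqTy a c
  Refl <- eqTy b d
  Just Refl
eqTy _ _ = Nothing

-- apply a function meaning to an argument meaning when the tags line up;
-- a Branches rule would try this in both orders
apply :: TypedMeaning -> TypedMeaning -> Maybe TypedMeaning
apply (TypedMeaning (TArrow a b) f) (TypedMeaning a' x) =
  case eqTy a a' of
    Just Refl -> Just (TypedMeaning b (f x))
    Nothing   -> Nothing
apply _ _ = Nothing

Even then, the Branches case still has to try both orders and handle failure at runtime, so I'm wondering whether there's a slicker, more standard construction.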