This is yet another CPS introduction post, with a focus on the use cases of CPS. This is where I find some other CPS articles insufficient: for example, when I first read the CPS entry on Wikibooks, I got what CPS is, but it wasn’t quite clear to me in what different ways CPS can profitably be used in practice. Besides, CPS is a fairly obscure, convoluted and counter-intuitive thing, so I reckon another post that explains it is always beneficial.
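To fix ideas before diving in, here is the smallest possible illustration of the style (my own sketch, not drawn from the post): a direct-style function and its CPS counterpart, which takes an extra continuation argument instead of returning.

```haskell
-- Direct style: the function returns its result to the caller.
add :: Int -> Int -> Int
add x y = x + y

-- Continuation-passing style: instead of returning, the function
-- hands its result to a continuation k, which decides what happens next.
addCPS :: Int -> Int -> (Int -> r) -> r
addCPS x y k = k (x + y)
```

Passing `id` as the continuation recovers the direct-style result: `addCPS 1 2 id` evaluates to `3`.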
The codebase I regularly work with uses, in many places, integer types other than Int, such as Word64. A number of functions in the standard library, however, work only with Int, such as length, and so there are a lot of fromIntegrals floating around, which isn’t pretty. I was once tempted to get rid of all these fromIntegrals by using genericLength, but a coworker reviewing the PR pointed out that the documentation of genericLength mentions the following:
I’ve been playing with fixed points of Haskell functors lately and thought it would be useful to write down some of my thoughts, both to solidify my understanding and for future reference. If it happens to be helpful for someone else, then all the better. I guess I’ll start with Haskell’s category.
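For readers who haven’t seen the encoding, the usual way to express a functor’s fixed point in Haskell is the Fix newtype; the sketch below is a standard construction, with names of my own choosing rather than the post’s.

```haskell
-- Fix f ties the recursive knot: a value of type Fix f is one f-layer
-- whose recursive positions contain further Fix f values.
newtype Fix f = Fix { unFix :: f (Fix f) }

-- The base functor of cons-lists of Int: the recursive position is
-- abstracted into the type parameter a.
data ListF a = Nil | Cons Int a

-- The list [1, 2] as a fixed point of ListF:
example :: Fix ListF
example = Fix (Cons 1 (Fix (Cons 2 (Fix Nil))))

-- Ordinary recursion over the tied knot:
len :: Fix ListF -> Int
len (Fix Nil)        = 0
len (Fix (Cons _ r)) = 1 + len r
```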
Beautiful Bridges is one of the eleven problems in ACM ICPC World Finals 2019. The problem set can be found here. In this post, we are going to solve this problem in two different ways, both of which involve specifying the problem as a relational hylomorphism, and extracting, via algebraic reasoning, an efficient functional program that conforms to the specification.
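For reference, the ordinary functional hylomorphism (which the relational version in the post generalizes) can be sketched as follows, with factorial as a toy instance; the base-functor names here are my own, not the problem’s.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A hylomorphism unfolds a seed with a coalgebra and folds the
-- resulting (virtual) structure with an algebra, fused into one pass.
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo alg coalg = alg . fmap (hylo alg coalg) . coalg

-- Base functor for counting down from n:
data CountF a = Done | Step Int a deriving Functor

-- Factorial as a hylomorphism: unfold n, n-1, ..., 1, then fold by (*).
factorial :: Int -> Int
factorial = hylo alg coalg
  where
    coalg 0 = Done
    coalg n = Step n (n - 1)
    alg Done       = 1
    alg (Step n r) = n * r
```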
A bifunctor is a functor whose source is a product category. In Haskell, bifunctors are captured by the Bifunctor type class.
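The relevant class in base is Data.Bifunctor, whose bimap maps over both type arguments at once. A small sketch of its use (my own examples, not the post’s):

```haskell
import Data.Bifunctor (bimap)

-- Pairs are bifunctors: bimap maps over both components.
pairExample :: (Int, String)
pairExample = bimap (+ 1) show (3 :: Int, 4 :: Int)
-- pairExample == (4, "4")

-- Either is a bifunctor: one function per constructor, and only the
-- constructor that is present gets transformed.
eitherExample :: Either String Int
eitherExample = bimap reverse (* 2) (Right 21)
-- eitherExample == Right 42
```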
Functional programmers know all too well that left-associated concatenation of cons-lists can be a source of inefficiency. For example, if we perform a left-associated concatenation of n singleton cons-lists, it takes O(n²) time to force the entire result list, and O(n) just to force the first element. To see the problem, recall that ++ traverses its entire first argument before producing any of its second.
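Concretely, ++ recurses on its first argument, rebuilding the whole left list on top of the right one. A sketch of the standard definition (named append here to avoid clashing with the Prelude):

```haskell
-- Standard cons-list append, as the Prelude defines (++):
-- the whole left list is rebuilt on top of ys.
append :: [a] -> [a] -> [a]
append []       ys = ys
append (x : xs) ys = x : append xs ys

-- Left-associated concatenation of singleton lists:
--   ((([x1] `append` [x2]) `append` [x3]) `append` ...)
-- Each step re-traverses the entire accumulated prefix, so forcing
-- the full result costs O(n^2), and the head alone is re-consed
-- n - 1 times, costing O(n).
leftConcat :: [[a]] -> [a]
leftConcat = foldl append []
```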
This post aims to give a quick-and-dirty explanation of MonadFix. It is quick and dirty because MonadFix is complex and weird stuff (or at least it seems so), and I do not think I am able to present a comprehensive discussion. Instead, this post mainly consists of a simple example that hopefully can give one a rough idea of what MonadFix does and where it might be useful.
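As a tiny foretaste (my own sketch, not the post’s example): mfix from Control.Monad.Fix feeds a monadic computation its own eventual result, which works as long as that result is only consumed lazily.

```haskell
import Control.Monad.Fix (mfix)

-- mfix f computes a monadic fixed point: f receives the very value it
-- will eventually produce. For Maybe, this builds Just of an infinite
-- list, because the recursive reference is forced only lazily.
ones :: Maybe [Int]
ones = mfix (\xs -> Just (1 : xs))

firstThree :: Maybe [Int]
firstThree = fmap (take 3) ones
-- firstThree == Just [1, 1, 1]
```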
The first language I used for purely functional programming was Scala, which I learned while I was at Facebook, and I absolutely loved it. Not long afterwards I started to learn Haskell, and sure enough, Scala quickly became my second favorite. Haskell is simply too beautiful. This motivated me to join Formation in April last year. Being paid to write Haskell on a daily basis feels like a dream, in the sense that my work is now my hobby. As a result, I’ll be writing more Haskell posts here than Scala ones. I still like Scala, but I haven’t touched it in a while and I’m fairly rusty at this point.
Free monoids and free monads are important concepts to understand in functional programming. A free monoid on a type A is just List[A]. List is the single most widely used data structure in functional programming, and everyone knows what a list is. But not everyone knows why lists are called free monoids, or what is so “free” about them. It pays to find out, because understanding what free monoids are about makes the concept of free monads much more comprehensible.
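Concretely (in Haskell rather than the post’s Scala, and as my own illustration): the freeness of lists is witnessed by foldMap, which extends any function into a monoid to a monoid homomorphism from lists, and does so uniquely.

```haskell
import Data.Monoid (Sum (..))

-- The "free" property, concretely: any function a -> m into a monoid m
-- extends uniquely to a monoid homomorphism [a] -> m, via foldMap.
-- Here the generator embedding is Sum, and the extension is summation:
total :: [Int] -> Int
total = getSum . foldMap Sum
-- total [1, 2, 3] == 6
```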
When writing any non-trivial amount of code, we usually factor out the common logic or patterns to improve modularity, facilitate code reuse and avoid repeating ourselves. Recursion schemes are just about that. With recursion schemes you can factor out the recursion patterns in your code, so that you don’t need to write the same types of recursion over and over again. This may sound a little magical to those who aren’t familiar with recursion schemes, but it really isn’t.
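As a minimal illustration of the idea (my example, not the post’s): foldr is already a recursion scheme for lists, and many hand-written recursions are instances of it.

```haskell
-- Two functions with the same recursive shape over lists:
-- replace (:) with a binary operation and [] with a base value.
sumList :: [Int] -> Int
sumList = foldr (+) 0

lengthList :: [a] -> Int
lengthList = foldr (\_ n -> 1 + n) 0
```

The shared pattern is exactly what foldr factors out; recursion schemes generalize this move to any recursive data type.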