In the previous lesson we looked at an interesting bifunctor; in this lesson we come back to the
Data.Bifunctor module and its laws. The
Bifunctor laws are closely related to the
Functor laws, which we have already seen, but there are some interesting new things to look at here.
Previously we looked at
liftA2, its connection to the
pure function, and the monoidal structure of applicatives. We still want to give some close-up attention to a few other applicatives, especially functions and lists, but before we move on to those, this lesson focuses on the other operators in the typeclass, such as (
*>). You almost never need to implement these operators yourself, because the compiler can derive them from your implementations of pure and
liftA2; still, they are useful and often overlooked.
In this lesson we look at lists and spend a little more time with the connection between applicatives and monoids. We do not typically think of types as having more than one valid
Applicative instance. The connection between a type and a typeclass, expressed in an
instance declaration, must be unique: ideally, there should be only one sensible or legal way to implement the functions for that type. And, indeed, that’s how
Functor works, which is why we can easily derive
Functor instances. Since applicatives are functors, we might expect that to carry over, and most of the time it does. However, applicatives are not mere functors; they are monoidal functors.
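Lists are the classic illustration of this. The standard Applicative instance for lists combines every element of one list with every element of the other, while the ZipList newtype from Control.Applicative wraps lists to provide a second, equally lawful instance that combines elements pairwise:

```haskell
import Control.Applicative (ZipList (..), liftA2)

main :: IO ()
main = do
  -- The standard list instance pairs every element of one list with
  -- every element of the other, like a cartesian product:
  print (liftA2 (+) [10, 20] [1, 2])
  -- [11,12,21,22]

  -- ZipList gives a second lawful instance that combines pairwise:
  print (getZipList (liftA2 (+) (ZipList [10, 20]) (ZipList [1, 2])))
  -- [11,22]
```

Both instances satisfy the Applicative laws; the newtype wrapper is what lets the two coexist.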
Decorator syntax refers to some syntactic sugar in Python that lets you simultaneously define a function
f and apply a function
g to the function you just defined.
f is called the “decorated” function, and
g is called the “decorator.” Here we look at several examples using decorators in Python, compare them to similar situations in Haskell, and discuss how lambdas let us achieve the same result without having a special syntax for decorators.
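As a small sketch of the sugar being described, here is a hypothetical decorator named shout applied to a hypothetical function greet (neither name is from the lesson):

```python
def shout(func):
    """A decorator: takes a function and returns a modified version of it."""
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

# The @shout line above is sugar for:
#   greet = shout(greet)
# and a lambda lets us get the same result with no special syntax at all:
greet2 = shout(lambda name: f"hello, {name}")

print(greet("world"))   # HELLO, WORLD
print(greet2("world"))  # HELLO, WORLD
```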
An iterator is a sort of sequence that, unlike a
list, computes each subsequent value only as it is needed rather than producing the entire sequence all at once. For example, from a range we can produce an iterator of numbers. Here we construct an iterator
xs that will produce three values:
xs = iter(range(1, 4)). We can use the
next function to pull values from the iterator one at a time.
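Putting those pieces together, a short sketch:

```python
# Construct an iterator that will produce three values: 1, 2, 3.
xs = iter(range(1, 4))

# next pulls values from the iterator one at a time.
print(next(xs))  # 1
print(next(xs))  # 2
print(next(xs))  # 3

# The iterator is now exhausted; a further next(xs) would raise
# StopIteration, unless we supply a default value instead:
print(next(xs, None))  # None
```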
Here we look at Python’s closely related
enumerate function and its Haskell counterparts.
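For a quick taste of the Python side, enumerate pairs each element of a sequence with its index:

```python
fruits = ["apple", "banana", "cherry"]

# enumerate yields (index, element) pairs, counting from 0 by default.
for i, fruit in enumerate(fruits):
    print(i, fruit)
# 0 apple
# 1 banana
# 2 cherry
```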
A semigroup is a type together with a binary operation. The operation must be “closed”, meaning its output has the same type as its operands, and it must be “associative”:
x <> (y <> z) =
(x <> y) <> z. Many sets have more than one such operation over them. Integers, for example, are semigroups under both a “min” and a “max” operation.
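In Haskell, the Data.Semigroup module in base provides Min and Max newtypes whose (<>) is exactly these two operations, which lets the two instances coexist on the same underlying type:

```haskell
import Data.Semigroup (Max (..), Min (..))

main :: IO ()
main = do
  -- Integers form a semigroup under "min"; (<>) is associative:
  print (Min 3 <> (Min 1 <> Min 2))   -- Min {getMin = 1}
  print ((Min 3 <> Min 1) <> Min 2)   -- Min {getMin = 1}, same result
  -- ...and they also form a semigroup under "max":
  print (Max 3 <> Max 1 <> Max 2)     -- Max {getMax = 3}
```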
Threads are concurrent flows of execution within one process. Haskell implements its own multithreading abstraction rather than using native OS threads directly, because OS threads are slower to create and have a greater memory cost. For applications written entirely in Haskell there is no semantic difference between OS threads and Haskell threads, but there are still some reasons to appreciate the distinction between the two. By default, Haskell programs use only a single OS thread; you must build with the
-threaded flag to enable multiple OS threads.
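A minimal sketch of creating a lightweight Haskell thread with forkIO from Control.Concurrent. Note that forkIO works even without -threaded; in that case all Haskell threads are multiplexed onto the single OS thread:

```haskell
import Control.Concurrent (forkIO, threadDelay)

-- Build with:  ghc -threaded Main.hs  to allow multiple OS threads.
main :: IO ()
main = do
  -- forkIO creates a cheap Haskell thread, not a native OS thread.
  _ <- forkIO (putStrLn "hello from a Haskell thread")
  -- Give the forked thread a moment to run, because the program
  -- exits as soon as the main thread finishes.
  threadDelay 100000  -- 100 ms, expressed in microseconds
```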
Using newtypes like
Password instead of a generic string type can increase type safety and expressiveness, but, as we saw previously, it can also lead to annoyances where you have to convert between two types that are, in one sense, the same. This lesson of the Validation course introduces the
Coercible class as a general way of handling those conversions.
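A small sketch of the idea, using a hypothetical Password newtype over String (the course’s own example may use a different underlying type) and coerce from Data.Coerce:

```haskell
import Data.Coerce (coerce)

-- A hypothetical newtype like the one discussed in the course.
newtype Password = Password String

-- Because Password and String have the same runtime representation,
-- Coercible lets us convert in either direction at zero cost, without
-- wrapping and unwrapping by hand.
toPassword :: String -> Password
toPassword = coerce

fromPassword :: Password -> String
fromPassword = coerce

main :: IO ()
main = putStrLn (fromPassword (toPassword "swordfish"))
```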
This lesson of the Validation course looks into the prisms from the
validation library and how we can use them to generalize our code to be, as they say, more liberal in what we accept. Doing this requires us to look into the
lens library and some of its functions, as well as take a deeper look at what it means for types to be isomorphic to each other.
Using Attoparsec’s incremental parsing ability, we fed the parser input in small chunks. This approach eliminates lazy I/O and affords us more manual control over the I/O, at the expense of being slightly more cumbersome. A streaming library like
pipes provides a good middle ground between the two approaches, giving us back some of the convenience of lazy datatypes – and some additional ability to decompose our code into smaller cohesive pieces – without sacrificing fine-grained control over the operations of the program.