## Partial order

From Wikipedia:

In mathematics, especially order theory, a partially ordered set (or poset) formalizes and generalizes the intuitive concept of an ordering, sequencing, or arrangement of the elements of a set. A poset consists of a set together with a binary relation that indicates that, for certain pairs of elements in the set, one of the elements precedes the other. Such a relation is called a partial order to reflect the fact that not every pair of elements need be related: for some pairs, it may be that neither element precedes the other in the poset. Thus, partial orders generalize the more familiar total orders, in which every pair is related.

Formal definition:

A partial order is a binary relation $\le$ over a set $P$ which is reflexive, antisymmetric and transitive:

• $a \le a$
• if $a \le b$ and $b \le a$ then $a = b$
• if $a \le b, b\le c$, then $a \le c$

The binary relation $\subseteq$ over sets is a partial order. In the category of sets, we can have a commutative diagram as below:

Which can be translated into a Venn diagram:
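The subset order can also be checked concretely. A small sketch using Data.Set (from the standard containers package): reflexivity holds, some pairs are ordered, and some pairs are incomparable.

```haskell
import qualified Data.Set as S

a, b, c :: S.Set Int
a = S.fromList [1]
b = S.fromList [1, 2]
c = S.fromList [1, 3]

main :: IO ()
main = do
  print (a `S.isSubsetOf` a)  -- reflexivity: True
  print (a `S.isSubsetOf` b)  -- {1} precedes {1,2}: True
  -- b and c are incomparable: neither precedes the other
  print (b `S.isSubsetOf` c || c `S.isSubsetOf` b)  -- False
```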

## Kleisli category

A small code example:

```haskell
myNegate :: Bool -> (Bool, String)
myNegate x = (not x, "myNegate")

stringify :: Bool -> (String, String)
stringify bool = (show bool, "stringify")

compose :: (a -> (b, String)) -> (b -> (c, String)) -> a -> (c, String)
compose f g x =
  let (x1, string1) = f x
      (x2, string2) = g x1
  in (x2, string1 ++ "\n" ++ string2)

main :: IO ()
main = do
  print $ compose myNegate stringify True
```

A more general illustration of the above code: here m a corresponds to (a, String) in the code. For a category E, the corresponding Kleisli category K exists if

• there is an arrow $K_f : a \rightarrow b$ in K for every $f: a \rightarrow m b$ in E
• $id_a$ in K corresponds to an arrow $h : a \rightarrow m a$ in E
• composition of arrows in K is defined via E, i.e. $\text{Compose}(f, g) : a \rightarrow m c$
• similarly, the associativity law holds in E.

If the Kleisli category exists for such a category E, then m is called a monad.

## Functor

Definition:

```haskell
class Functor (f :: * -> *) where
  fmap :: (a -> b) -> f a -> f b
  (<$) :: a -> f b -> f a
  {-# MINIMAL fmap #-}
```

Examples:

```haskell
data Const c a = Const c
instance Functor (Const c) where
  fmap f (Const c) = Const c

data Identity a = Identity a
instance Functor Identity where
  fmap f (Identity a) = Identity $ f a
```
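A small usage sketch of these two instances (Show and Eq are derived here so results can be printed and compared): Const discards the mapped function, Identity applies it.

```haskell
data Const c a = Const c deriving (Show, Eq)

instance Functor (Const c) where
  fmap _ (Const c) = Const c  -- the function is ignored; c is untouched

data Identity a = Identity a deriving (Show, Eq)

instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

main :: IO ()
main = do
  print (fmap (+ 1) (Identity (41 :: Int)))             -- Identity 42
  print (fmap (+ 1) (Const "tag" :: Const String Int))  -- Const "tag"
```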

The Const functor illustrated:

### Function type as a functor

Given a function f :: a -> b, we can turn the infix arrow into a prefix operator and get f :: fun a b (in actual Haskell, f :: (->) a b), and fun a is a functor. (You don’t have to use a prefix operator, but perhaps that’s easier to reason with. Suit yourself.)
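A quick check of this: for the function functor in base, fmap is just composition, so mapping over a function post-composes with it.

```haskell
-- sketch: for functions, fmap coincides with (.)
addOne :: Int -> Int
addOne = (+ 1)

double :: Int -> Int
double = (* 2)

composedViaFmap :: Int -> Int
composedViaFmap = fmap double addOne  -- same as double . addOne

main :: IO ()
main = do
  print (composedViaFmap 5)    -- (5 + 1) * 2 = 12
  print ((double . addOne) 5)  -- 12
```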

In the illustration above, I use the f then g notation (more logical to me) instead of the usual $g \circ f$.

## Contravariant functor

A contravariant functor is similar to a (covariant) functor, but with the directions of all arrows flipped. See the illustration below:

In the diagram above F r is the type constructor that receives a type a and returns the function type a -> r.
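A hand-rolled sketch of this idea (Op r is an illustrative name mirroring Data.Functor.Contravariant.Op in base): Op r a wraps a function a -> r, and contramap precomposes, i.e. it flips the arrow.

```haskell
-- a contravariant functor: functions into a fixed result r
newtype Op r a = Op { runOp :: a -> r }

contramap :: (b -> a) -> Op r a -> Op r b
contramap f (Op g) = Op (g . f)

isLong :: Op Bool String
isLong = Op (\s -> length s > 5)

main :: IO ()
main = do
  -- adapt the String predicate to work on Int by precomposing show
  let check = runOp (contramap show isLong)
  print (check (123456 :: Int))  -- "123456" has length 6 -> True
  print (check (42 :: Int))      -- "42" has length 2 -> False
```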

## Bifunctor

Bifunctor illustrated:

From the Haskell documentation:

Formally, the class Bifunctor represents a bifunctor from Hask -> Hask.

Intuitively it is a bifunctor where both the first and second arguments are covariant.

You can define a Bifunctor by either defining bimap or by defining both first and second.

If you supply bimap, you should ensure that:

bimap id id ≡ id


If you supply first and second, ensure:

first id ≡ id
second id ≡ id


If you supply both, you should also ensure:

bimap f g ≡ first f . second g


Examples of bifunctor:

```haskell
instance Bifunctor Either where
  bimap f _ (Left a) = Left (f a)
  bimap _ g (Right b) = Right (g b)

instance Bifunctor Const where
  bimap f _ (Const a) = Const (f a)

instance Bifunctor (,) where
  bimap f g ~(a, b) = (f a, g b)
```

The ~(a, b) part is a lazy (irrefutable) pattern match: the match is assumed to succeed, and a and b are only evaluated when they are actually used later.
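A quick sketch exercising these instances via Data.Bifunctor from base: bimap maps over both sides of a pair, and over whichever side of an Either is present.

```haskell
import Data.Bifunctor (bimap)

main :: IO ()
main = do
  print (bimap (+ 1) show ((1 :: Int), True))              -- (2,"True")
  print (bimap (+ 1) not (Left 1 :: Either Int Bool))      -- Left 2
  print (bimap (+ 1) not (Right True :: Either Int Bool))  -- Right False
```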

### Dissect Maybe as a Bifunctor

```haskell
data Maybe1 a = Nothing1 | Just1 a
-- equivalent to: Either () (Identity a)
-- equivalent to: Either (Const () a) (Identity a)
-- We know that both Const and Identity are functors,
-- so Maybe1 is a bifunctor.
```

All algebraic data types are functorial, and Functor instances can be automatically generated in Haskell:

```haskell
{-# LANGUAGE DeriveFunctor #-}
data Maybe2 a = Nothing2 | Just2 a deriving Functor
```
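A quick sanity check of the derived instance (Show and Eq are added here so results can be printed and compared):

```haskell
{-# LANGUAGE DeriveFunctor #-}
data Maybe2 a = Nothing2 | Just2 a deriving (Show, Eq, Functor)

main :: IO ()
main = do
  print (fmap (+ 1) (Just2 (41 :: Int)))       -- Just2 42
  print (fmap (+ 1) (Nothing2 :: Maybe2 Int))  -- Nothing2
```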

## Profunctor

A profunctor is similar to a bifunctor, but with one of the arrows flipped. For example, the function type constructor (->), as in (->) a b or a -> b, is a profunctor (contravariant in a, covariant in b):

In the diagram above, if f were pointing down, the mapping wouldn’t compose correctly, so (->) is not a bifunctor.
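The profunctor mapping for (->) can be sketched by hand (dimapFn is an illustrative name; the real class method is dimap in the profunctors package): the input side is pre-composed (contravariant), the output side is post-composed (covariant).

```haskell
-- hand-rolled dimap for the function profunctor:
-- contravariant in the input, covariant in the output
dimapFn :: (a' -> a) -> (b -> b') -> (a -> b) -> (a' -> b')
dimapFn pre post f = post . f . pre

main :: IO ()
main = do
  -- adapt (+ 1) :: Int -> Int into a String -> String function
  let g = dimapFn length show (+ (1 :: Int))
  putStrLn (g "hello")  -- length "hello" = 5, 5 + 1 = 6, show 6 = "6"
```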

## Currying

A function that takes two parameters (z, a) and returns a b can be seen as a mapping from the Cartesian product $z \times a$ to $b$, as illustrated below:

The following diagram shows how currying works from the category theory point of view:
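In Haskell, the two sides of this picture are connected by the standard curry and uncurry functions, which form the isomorphism between (z, a) -> b and z -> a -> b:

```haskell
add :: (Int, Int) -> Int
add (x, y) = x + y

-- curry turns a function on pairs into a two-argument function
addCurried :: Int -> Int -> Int
addCurried = curry add

main :: IO ()
main = do
  print (addCurried 2 3)             -- 5
  print (uncurry addCurried (2, 3))  -- round-trips back: 5
```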

## Functional objects

A functional object $B^A$ represents all possible functions $f: A \rightarrow B$. Intuitively (for finite types), the size of a functional object is $|B^A| = \left|B\right|^\left|A\right|$.
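For finite types this counting can be checked directly. A sketch (allBoolFns is an illustrative name) enumerating every function Bool -> Bool by its two return values, giving $2^2 = 4$:

```haskell
-- |Bool^Bool| = 2^2 = 4: a function Bool -> Bool is determined
-- by what it returns on True and on False
allBoolFns :: [Bool -> Bool]
allBoolFns =
  [ \b -> if b then onTrue else onFalse
  | onTrue <- [False, True]
  , onFalse <- [False, True]
  ]

main :: IO ()
main = print (length allBoolFns)  -- 4
```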

## Type algebra

Let’s translate some Haskell function syntax into functional object notation.

```haskell
f :: Either a b -> c -- is somewhat equivalent to
f :: (a -> c, b -> c)

-- currying
f :: c -> (b -> a) -- is equivalent to
f :: c -> b -> a

f :: c -> (a, b) -- is somewhat equivalent to
f :: (c -> a, c -> b)
```
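The first translation above, $c^{a+b} \cong c^a \times c^b$, can be witnessed by actual functions. A sketch (split and merge are illustrative names):

```haskell
-- witnesses for c^(a+b) ≅ c^a × c^b
split :: (Either a b -> c) -> (a -> c, b -> c)
split f = (f . Left, f . Right)

merge :: (a -> c, b -> c) -> (Either a b -> c)
merge (f, g) = either f g

main :: IO ()
main = do
  let h = merge ((+ 1), \b -> if b then 1 else 0) :: Either Int Bool -> Int
  print (h (Left 41))     -- 42
  print (h (Right True))  -- 1
```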

## Curry-Howard-Lambek isomorphism

| Logic | Types | Category theory |
| --- | --- | --- |
| True | () | Terminal object |
| False | Void | Initial object |
| $a \land b$ | (a, b) | $a \times b$ |
| $a \lor b$ | Either a b | $a + b$ |
| $a \implies b$ | f :: a -> b | $b^a$ |
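As a small illustration of the correspondence, a logical tautology such as $a \land (b \lor c) \implies (a \land b) \lor (a \land c)$ is witnessed by a total Haskell function on the corresponding types (distrib is an illustrative name):

```haskell
-- proof term for a ∧ (b ∨ c) ⇒ (a ∧ b) ∨ (a ∧ c)
distrib :: (a, Either b c) -> Either (a, b) (a, c)
distrib (a, Left b)  = Left (a, b)
distrib (a, Right c) = Right (a, c)

main :: IO ()
main = do
  print (distrib ((1, Left 'x') :: (Int, Either Char Bool)))   -- Left (1,'x')
  print (distrib ((1, Right True) :: (Int, Either Char Bool))) -- Right (1,True)
```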

## Natural transformation

The square on the right side is called the naturality square, in which $G_f \circ \alpha_a = \alpha_b \circ F_f$. Here the family of components $(\alpha_a, \alpha_b, \ldots)$ forms the natural transformation $\alpha$.

Let’s zoom into the naturality square:

When $x_1 \rightarrow x_2$, $x_2 \rightarrow x_4$ and $x_1 \rightarrow x_3$ are fixed, $x_3 \rightarrow x_4$ will be automatically determined.

Transformation from a high resolution functor F to a low resolution functor G:

Transformation from a high resolution functor F to a high resolution functor G:

Transformation from a low resolution functor F to a low resolution functor G:

Given a natural transformation as below:

```haskell
alpha :: forall a. F a -> G a
```

We get the following theorem for free (derived from the naturality square):

```haskell
alpha . fmap f = fmap f . alpha
-- type annotations,
-- on the left side:
--   fmap  :: (a -> b) -> F a -> F b
--   f     :: a -> b
--   alpha :: F b -> G b
-- on the right side:
--   alpha :: F a -> G a
--   fmap  :: (a -> b) -> G a -> G b
--   f     :: a -> b
```

Here is an example of natural transformation:

```haskell
safeHead :: [a] -> Maybe a
safeHead [] = Nothing
safeHead (x:_) = Just x
```

To show that the theorem holds:

```haskell
f :: Int -> Int
f x = x + 1

(safeHead . fmap f) []
= safeHead []
= Nothing

(safeHead . fmap f) [1, 2]
= safeHead [2, 3]
= Just 2

(fmap f . safeHead) []
= fmap f Nothing
= Nothing

(fmap f . safeHead) [1, 2]
= fmap f (Just 1)
= Just 2
```

Composability of natural transformations:

Identity of natural transformations:

## 2-category

In the diagram above, $C,D$ are categories, $F,G,H$ are functors, $\alpha, \beta$ are natural transformations. The whole structure is called a 2-category because it’s a category of categories.

In a 2-category, an object is also called a 0-morphism, a functor is a 1-morphism, a natural transformation is a 2-morphism (morphism between morphisms), and so on.

### Naturality and composability in a 2-category

Naturality and composability in a 2-category are more complicated than in a 1-category because of higher-order morphisms:

In the diagram above, we have

If we trace a single object $a$ from category $C$, we get the following diagram:

Notice the last layer forms a naturality square:

Here, $G_\alpha$ and $G'_\alpha$ are functors and $(\beta_{F_a}, \beta_{F'_a})$ is a natural transformation.

If we were to define the fmap function for $G_\alpha$ (pseudocode, treating G as if it were a data constructor), it would be something like:

```haskell
fmap :: (F a -> F' a) -> G (F a) -> G (F' a)
fmap alpha gfa =
  let G x = gfa
  in G (alpha x)
```

## Bicategory

A bicategory differs from a category in that the identity and associativity laws only hold up to isomorphism (hence it is less strict).

## Monad

Let’s start with the Kleisli arrows that were introduced before. In Haskell, the signature of Kleisli composition (the fish operator, which composes two embellished arrows) is:

```haskell
(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
```

Trying to implement it:

```haskell
(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a ->
  let xmb = f a
  in xmb ... -- stuck: xmb :: m b, but g expects a plain b
```

We have xmb :: m b and g :: b -> m c, so there is a type mismatch. This must be dealt with. We introduce a new operator, >>= (called bind):

```haskell
-- will be implemented later
(>>=) :: m b -> (b -> m c) -> m c

(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a ->
  let xmb = f a
  in xmb >>= g
```

Next, try to implement bind, assuming m is already a functor:

```haskell
(>>=) :: m b -> (b -> m c) -> m c
x >>= g =
  let x1 = fmap g x
  ...

(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a ->
  let xmb = f a
  in xmb >>= g
```

Now we have x1 :: m (m c), but we need an m c; this is begging for a new function m (m c) -> m c. In Haskell it’s called join.

```haskell
join :: m (m a) -> m a

(>>=) :: m b -> (b -> m c) -> m c
x >>= g =
  let x1 = fmap g x
  in join x1

(>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \a ->
  let xmb = f a
  in xmb >>= g
```

So everything hinges on join. Time for a formal definition of a monad:

```haskell
class Functor m => Monad m where
  join :: m (m a) -> m a
  return :: a -> m a
```

Once you have fulfilled your duty of defining join and return, >>= and >=> are automatically defined for you.

In reality, Haskell uses a different definition:

```haskell
class Monad m where
  (>>=) :: m a -> (a -> m b) -> m b
  return :: a -> m a
```

And join is defined in terms of >>=.
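This direction can be made concrete: a sketch recovering join from >>= (named joinM here to avoid clashing with Control.Monad’s join):

```haskell
-- join is bind with the identity function
joinM :: Monad m => m (m a) -> m a
joinM x = x >>= id

main :: IO ()
main = do
  print (joinM (Just (Just (3 :: Int))))       -- Just 3
  print (joinM (Just (Nothing :: Maybe Int)))  -- Nothing
  print (joinM [[1, 2], [3 :: Int]])           -- [1,2,3]
```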

### The Maybe monad

```haskell
join :: Maybe (Maybe a) -> Maybe a
join (Just (Just a)) = Just a
join _ = Nothing

return :: a -> Maybe a
return a = Just a
```

If we choose to define >>= instead (which might better explain the brilliance of Maybe):

```haskell
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >>= _ = Nothing
Just a >>= f = f a
```
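A small usage sketch of this brilliance (safeDiv is a hypothetical helper, not from the text): because >>= short-circuits on Nothing, a failing step propagates without any explicit checks.

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (Just 10 >>= safeDiv 100)                 -- Just 10
  print (Just 0 >>= safeDiv 100)                  -- Nothing (division by zero)
  print ((Nothing :: Maybe Int) >>= safeDiv 100)  -- Nothing (short-circuit)
```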

## Small experiment with dependent types in Swift 3

```swift
//: Playground - noun: a place where people can play

import Cocoa

// warmup exercise:
// get the types of variables using Swift reflection
let x = Mirror(reflecting: (1, 2, "e"))
let y = Mirror(reflecting: (3, 4, "hello"))
print(x.subjectType)
// compare types
print(x.subjectType == y.subjectType)

class TypeT<T> { }

// the return value is a Mirror (Swift's reflection of a value);
// its subjectType depends on the value of the first parameter
func value<T>(_ val: Int, blank: T) -> Mirror {
    if val == 0 {
        // if val is zero, return the type of blank
        return Mirror(reflecting: blank)
    }
    // otherwise, wrap the type of blank inside a TypeT and recurse
    return value(val - 1, blank: TypeT<T>())
}

let x1 = value(3, blank: 0) // Mirror for TypeT<TypeT<TypeT<Int>>>
```