thecoder.cafe
lihaoyi 2 days ago

For some reason everyone likes to talk about Monads, but really the other types here are just as interesting. For example, Applicatives are less dynamic than Monads in that you can't `flatMap`/`bind` to decide on the "next" thing to evaluate based on the previous value, but in exchange you get a "static" tree (or graph) of Applicatives that lends itself much better to static analysis, optimization, parallelism, and so on.
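Roughly, in Haskell terms (a minimal sketch, with made-up names just to illustrate the point): with Applicative every effect is visible up front, while with Monad the next effect can depend on a previous result, so the structure only exists once evaluation is underway.

  -- With Applicative the shape of the computation is fixed before running it:
  -- both effects are known statically, so a library could inspect or parallelize them.
  staticPair :: Applicative f => f a -> f b -> f (a, b)
  staticPair fa fb = (,) <$> fa <*> fb

  -- With Monad the second effect is chosen from the first result,
  -- so nothing can be known about the computation's structure in advance.
  dependent :: Monad m => m Bool -> m a -> m a -> m a
  dependent cond ifTrue ifFalse = do
    b <- cond
    if b then ifTrue else ifFalse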

IIRC Haxl (https://github.com/facebook/Haxl) uses Applicatives to optimize and parallelise remote data fetching, which is hard to do with Monads since those are inherently sequential due to the nature of `flatMap`/`bind`. My own Mill build tool (https://mill-build.org/) uses an applicative structure for your build so we can materialize the entire build graph up front and choose how to parallelize it, query it, or otherwise manipulate it, which is again impossible with Monads since the structure of a Monad computation is only assembled "on the fly" as the individual steps are being evaluated. "Parallel Validation" where you want to aggregate all failures, rather than stopping at the first one, is another common use case (e.g. https://hackage.haskell.org/package/validation or https://typelevel.org/cats/datatypes/validated.html)
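For the parallel-validation case, a minimal hand-rolled sketch (not the actual `validation` or `cats` API) looks roughly like this: the Applicative instance keeps evaluating after a failure and accumulates every error, which a Monad instance couldn't do because `bind` has no value to continue with after the first failure.

  data Validation e a = Failure e | Success a

  instance Functor (Validation e) where
    fmap _ (Failure e) = Failure e
    fmap f (Success a) = Success (f a)

  instance Semigroup e => Applicative (Validation e) where
    pure = Success
    Failure e1 <*> Failure e2 = Failure (e1 <> e2)   -- keep going, keep both errors
    Failure e1 <*> _          = Failure e1
    _          <*> Failure e2 = Failure e2
    Success f  <*> Success a  = Success (f a)

  checkName :: String -> Validation [String] String
  checkName n | null n    = Failure ["name is empty"]
              | otherwise = Success n

  checkAge :: Int -> Validation [String] Int
  checkAge a | a < 0     = Failure ["age is negative"]
            | otherwise = Success a

  -- Both checks run: validate "" (-1) is Failure ["name is empty","age is negative"]
  validate :: String -> Int -> Validation [String] (String, Int)
  validate n a = (,) <$> checkName n <*> checkAge a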

Monads seem to have this strange aura around them that attracts certain kinds of personalities, but really they're just one abstraction in a whole toolkit of useful abstractions, and there are many cases where Applicative or some other construct is much better suited.

kqr 2 days ago

> Monads seem to have this strange aura around them that attracts certain kinds of personalities

Historical accident.

There was a time, not very long ago, when we didn't know applicative functors were a useful separate subset of monads. We thought full monads were needed for all the neat things that applicatives are sufficient for.

During this time, lots of ink was spilled over monads. Had we invented applicative functors a little earlier, they would probably have gotten more of the spotlight they deserve.

-----

I also think people underappreciate the humble semigroup/monoid. But this is not a historical accident; it is just that it seems too simple to be useful. But it is useful to be able to write functions generic over concatenation!
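A small sketch of what "generic over concatenation" buys you: one function that works for lists, strings, sums, and anything else with an associative combine and an identity.

  import Data.Monoid (Sum(..))

  -- One fold works for every Monoid: the instance decides what "concatenation" means.
  combineAll :: Monoid m => [m] -> m
  combineAll = foldr (<>) mempty

  demo :: (String, Int)
  demo = ( combineAll ["foo", "bar", "baz"]          -- "foobarbaz"
         , getSum (combineAll (map Sum [1, 2, 3])) ) -- 6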

kccqzy 2 days ago

Indeed, it was not long ago that the language had no relationship at all between the Applicative class and the Monad class. And then one release Applicative was made the superclass of Monad. That's the reason why we have sequence and sequenceA, sequence_ and sequenceA_, liftM and fmap, ap and <*>, liftM2 and liftA2, return and pure, traverse and mapM, etc. All these pairs of functions do the same thing but are duplicated for historical reasons.
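For readers who don't have the library memorized, these are the signatures of a few of those pairs as GHCi reports them today; in each pair the Monad version is the older, less general one.

  fmap     :: Functor f     => (a -> b) -> f a -> f b
  liftM    :: Monad m       => (a -> b) -> m a -> m b

  (<*>)    :: Applicative f => f (a -> b) -> f a -> f b
  ap       :: Monad m       => m (a -> b) -> m a -> m b

  pure     :: Applicative f => a -> f a
  return   :: Monad m       => a -> m a

  traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
  mapM     :: (Traversable t, Monad m)       => (a -> m b) -> t a -> m (t b)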

This historical accident has, IMO, made the language harder to teach.

mncharity 1 day ago

> And then one release Applicative was made the superclass of Monad.

After months of mailing list discussion and committee meetings. This is my go-to example for why I'd like a language with a push-out lattice of theories (path-independent accumulation of types, operators, laws). This should ideally "just work", no committees needed. Coding with math-like tight locality.

> This historical accident has, IMO, made the language harder to teach.

As LLM refactoring continues to improve, perhaps instead of current "here's an alternate prelude which cleans up historical mess for teaching", we might get to "here's a global refactoring of cabal and open docs to ..."?

yodsanklai 1 day ago

> Monads seem to have this strange aura around them that attracts certain kinds of personalities

I don't know if it's a matter of personality or aura. Monads are the first unfamiliar/complicated abstraction you bump into when learning Haskell. You can't do any IO without monads, and they're not straightforward like functors or monoids. This is probably why there are more discussions about monads.

rthnbgrredf 2 days ago

This reminds me of https://www.adit.io/posts/2013-04-17-functors,_applicatives,...

I think over the recent years, there's been a rise in typed languages that support functional programming like TypeScript and Rust. It will be interesting to see if this trend continues in the context of AI assistant programming. My guess is that it will become easier for beginners, and the type systems will help to build more robust programs in cooperation with AI.

yodsanklai 1 day ago

Yes, type checking works very well with AI since typing provides a step of verification. The more precise the type system, the more guarantees you can have. At the extreme, you can entirely specify the program with types. So if the program type checks, it is guaranteed to implement its spec and you can trust the AI. I assume the AI will have a hard time writing the code, though. But I had very good results in Rust, less in Haskell. I think it's also because the Rust compiler gives more meaningful error messages, which helps the AI iterate.

snickell 1 day ago

Even in early 2025, LLMs are already the most powerful type inference algorithm. Why would they need a static type system in 2030?

My guess is it'll be the opposite: I suspect compared to humans, LLMs will make fewer type errors, and more errors that are uncaught by types. Thus I expect type systems will be of lower value to them (compared to humans), leading to a shift toward dynamic languages and the possible extinction of typed languages.

The alternative I could imagine is moving toward Haskell-like languages with MUCH stronger type systems, where higher-level errors ARE type errors.

My one concrete observation in this direction: "press . to see valid options" behavior is a traditional strong point of typed languages. And interestingly, it proved to be one of the first things that early/dumb LLMs were actually pretty good at. I believe that indicates LLMs are relatively good at type inference (compared to other things you can ask them to do), and we should expect that to continue being a strong point for them.

In working with Cline in both TypeScript and JavaScript, I find the LLM making tons of errors it has to go and fix in a future iteration, but virtually none of them are type errors.

I suspect LLMs are relatively good at duck-typed languages because they have a much bigger working memory than humans. As a result, the LLM can hold in working memory not just e.g., the function argument in front of them, but also all the callers of the function, and how they used the variable, and callers of the callers, and thus what "duck-type" it will be.

A system that can do this level of automatic type inference doesn't necessarily benefit from a formal, static, compile time type system.

Tainnor 1 day ago

A well-typed program provides a soundness proof that can be straightforwardly verified by just compiling the program. Even in a humongous codebase in a slow-to-compile language, this is cheaper than e.g. running an LLM on your codebase every time you push a commit (especially if you use incremental compilation). Type systems, even simpler ones, just give a lot of bang for the buck.

nsonha 1 day ago

> Why would they need a static type system in 2030?

Why do many people talk about type systems as if they're only a safety guard?

To me that's never the main role of type systems. I don't know what's the word for it, but types allow me to read the code, on a high level. Sure AI will write them but as long as software engineers exist, we still have to read the code. How do you even read code without types? Comments? Unit tests? Actual implementation?

> LLMs will make fewer type errors, and more errors that are uncaught by types

> extinction of typed languages

Don't you find these contradictory? If LLMs increase the rate of errors uncaught by types, then type systems or the usage of them should catch up; otherwise there is no magical way for software to get better with LLMs.

In the current state of LLM, the type system (or lsp and/or automated tests) is what allows the "agentic" AI coder to have a feedback loop and iterate a few times before it hands off to the programmer, perhaps that gives the delusion that the LLM is doing it completely without type system.

snickell 1 day ago

> How do you even read code without types?

We're not going to settle the preference for dynamic vs static types here. It's probably older than both of us, with many fine programmers on both sides of the fence. I'll leave it at this: well-informed programmers choosing to write in dynamically typed languages DO read code without types, and have happily done so since the late 1950s (lisp).

The funny thing is, I experience the same "how do you even??" feeling reading statically typed code. There's so much... noise on the screen, how can you even follow what's going on with the code? I guess people are just different?

> LLMs will make fewer type errors, and more errors that are uncaught by types

The errors I'm talking about are like "this CSS causes the element to draw part of its content off-screen, when it probably shouldn't". In theory, some sufficiently advanced type system could catch that (and not catch elements off screen that you want off-screen)? But realistically: pretty challenging for a static type system to catch.

The errors I see are NOT errors that throw exceptions at runtime either, in other words, they are beyond the scope of current type systems, either dynamic (runtime) or static (compile time). Remember that dynamic languages ARE usually typed, they are just type checked at runtime not compile time.

> perhaps that gives the delusion that the LLM is doing it completely without type system.

I mentioned coding in JS with cline, so no delusion. It does fine w/o a type system, and it rarely generates runtime errors. I fix those like I do with runtime errors generated when /I/ program with a dynamic language: I see them, I fix them. I find they're a lot rarer, in both LLM-generated code and human-generated code, than proponents of static typing seem to think.

Tainnor 1 day ago

> The funny thing is, I experience the same "how do you even??" feeling reading statically typed code. There's so much... noise on the screen, how can you even follow what's going on with the code? I guess people are just different?

That really depends on the language, though. If you have type inference, you don't really have to write down that many types, e.g. it's in theory possible to write entire Haskell programs without mentioning a single type - though in practice, nobody does that for larger programs.

> Remember that dynamic languages ARE usually typed, they are just type checked at runtime not compile time.

This is a common misconception. Dynamically typed languages are not type checked at all. The only thing they'll do is throw errors at runtime in specific situations, whereas statically typed languages verify that the types are correct for every possible execution (well, there are always some escape hatches, but you have to know what you're doing). In a dynamically typed language it's e.g. perfectly possible to introduce a minor change somewhere that will suddenly cause a function very far removed to break, e.g. if you suddenly return a null value where none was expected - something I've experienced a ton when working on Ruby codebases (tbf, this can also happen in any statically typed language that adjoins "null" to every type, but there's plenty of modern languages like Rust, Swift, Kotlin, Scala etc. that fix this oversight). If you do that in a modern statically typed language, the compiler will yell at you immediately.

> The errors I'm talking about are like "this CSS causes the element to draw part of its content off-screen, when it probably shouldn't". In theory, some sufficiently advanced type system could catch that (and not catch elements off screen that you want off-screen)? But realistically: pretty challenging for a static type system to catch

I'm perfectly willing to believe that type systems are not very helpful for those kinds of tasks. I think type systems are definitely more useful when there's complex business logic involved, e.g. in huge, complicated backends with many different sorts of entities.

jerf 2 days ago

Unfortunately, while you may not have appreciated the tone of the Haskell interaction, they are correct in their assessment from a factual perspective. This explanation propagates a number of misunderstandings of the topics well known to be endemic to beginners.

In particular, I observed the common belief that functors apply to "containers", when in fact they apply to things that are not containers as well, most notably functions themselves, and it also contains the common belief that a monad has "a" value, rather than any number of values. For instance, the "list monad" will confuse someone operating on this description because when the monad "takes the value out of the list", it actually does it once per value in the list. This is the common "monad as burrito" metaphor, basically, which isn't just bad, but is actually wrong.
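A small example of that point: in the list monad, the continuation after `<-` runs once for every element, not once for "the" value.

  pairs :: [(Int, Char)]
  pairs = do
    n <- [1, 2]          -- not "take the value out": the rest runs once per element
    c <- ['a', 'b']
    pure (n, c)          -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]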

I'm not limiting it to these errors either, these are just the ones that leap out at me.

_jackdk_ 1 day ago

I agree. The "container" intuition for Monads leaves you stuck when you try to contemplate IO (or even Promises, these days), because the "bind" operator looks like it does something impossible: extract "the" `a` from the `IO a`, when you have no idea what it is. (Trust me, I spent a long time stuck at this point.) Better to think of Monad as "Applicative + join" (you need Applicative to get `pure`).

If you think of Monads in terms of `fmap` + `join :: Monad m => m (m a) -> m a`, then you don't need to imagine an "extraction" step and your intuition is correct across more instances. Understanding `join` gives you an intuition that works for all the monads I can think of, whereas the container intuition only works for `Maybe` or `Either e` (not even `[]`, even though it _is_ a container). You can define each of `>>=`/`join`/`>=>` in terms of `pure` + any of the other two, and it is an illuminating exercise to do so. (That `class Monad` defines `>>=` as its method is mostly due to technical GHC reasons rather than anything mechanical.)
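A sketch of those equivalences (primed names to avoid clashing with the Prelude): `join` from `>>=`, `>>=` from `fmap` plus `join`, and Kleisli composition from either.

  join' :: Monad m => m (m a) -> m a
  join' mma = mma >>= id              -- join from bind

  bind' :: Monad m => m a -> (a -> m b) -> m b
  bind' ma f = join' (fmap f ma)      -- bind from fmap + join

  kleisli' :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
  kleisli' f g a = f a `bind'` g      -- (>=>) from bind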

jerf 1 day ago

I prefer the "join" approach for beginners too, but >>= has become so pervasive that I feel bad trying to explain it that way. Turning people loose on monad-heavy code with that approach still leaves them needing to convert their understanding into >>= anyhow.

One does wonder about the alternate world where that was the primary way people interacted with it.

_jackdk_ 12 hours ago

I think you don't have to teach people to program with `join`, but just that `m >>= f = join (fmap f m)`. It explains away the "get a value out" question but teaches the most common function from the interface.

adamddev1 2 days ago

Bartosz Milewski argues that we can think of functions etc. as containers as well, if you check out his YouTube lectures on Category Theory for Programmers. Lists and functions "contain" a type.

jerf 2 days ago

A term's utility comes from its ability to separate things into different categories. A definition of "container" that includes everything is therefore useless, because if everything is a container, there is no information in the statement that something is a container.

In Bartosz's case he's probably making the precise point that we can abstract out to that point, and that at a super, super high level of category theory there isn't anything that isn't a container. However, that's a didactic point, not a general truth. We programmers generally do mean something by the word "container", and functors can indeed include things that are therefore not containers.

Moreover, I would say it's not what the author was thinking. The author is not operating on that super high level of category theory.

edflsafoiewq 2 days ago

A list [b] is a container for bs indexed by integers. A function a->b is a container for bs indexed by as.
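The "indexed container" reading also explains why both are Functors: `fmap` rewrites the value at every index, which for a list is `map` and for a function is plain composition. A small sketch:

  mapList :: (b -> c) -> [b] -> [c]
  mapList = fmap     -- the [] instance: map over every position

  mapFn :: (b -> c) -> (a -> b) -> (a -> c)
  mapFn = fmap       -- the ((->) a) instance: compose, i.e. rewrite every "slot"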

cluckindan 2 days ago

[b] is more like a blueprint for a container, and a->b is more like an assembly line of containers.

rebeccaskinner 2 days ago

> A function a->b is a container for bs

Anecdotally, this is one of those things that's trivially true to some people, but really hard for other people to internalize. I think it's why the "container" metaphor can lead people astray: if you haven't internalized the idea of functions as being indexed by their argument, it's a really mind-twisting thing to try to make that leap.

robto 1 day ago

One of the fun things about Clojure that reinforces this "trivially true" perspective is that maps and sets are functions:

    ;; "maps" the keys to the values
    (map {1 "a" 2 "b"} (take 5 (cycle [1 2]))) ;;=> '("a" "b" "a" "b" "a")
    ;; acts as a predicate that tests for membership
    (filter #{"a" "b" "c"} ["a" "b" "c" "d" "e" "f"]) ;;=> '("a" "b" "c")
Once you get used to this idiom you naturally start thinking of other functions (or applicative functors) the same way. The syntax sugar makes for some very concise and expressive code too.

magicalhippo 1 day ago

If I give you a function "f(x) := 3 * x", is it really that useful to talk about it as a container of the natural numbers?

The reverse though is useful, a container looks like a function that takes one or more indices and returns a value or element.

rebeccaskinner 1 day ago

I think that understanding the (moral) equivalence is useful in both directions. In particular, I think helping people understand the "function-as-container" analogy is a useful way for people to understand pure functions - another thing that's conceptually simple but that a lot of people struggle to really wrap their minds around.

magicalhippo 1 day ago

Personally I would say it muddies the water for me, as "container" has strong connotations in other directions.

But then I've never thought the concept of a pure function to be particularly difficult, despite growing up on procedural languages.

It's other bits that I struggle with when it comes to functional programming.

n_plus_1_acc 2 days ago

I can recommend learning some Scala, where HashMap extends PartialFunction

rebeccaskinner 1 day ago

I've never looked at scala, but that's really interesting. Do you find that's useful in practice?

mncharity 1 day ago

> really hard [...] leap

Two stepping stones might be array getters (function that's array-ish), and arrays with an indexed default value function (array that's function-ish)?

rebeccaskinner 1 day ago

I've recently started writing a series of blog posts (https://rebeccaskinner.net/posts/2024-10-18-dictionaries-are...) trying to explain the idea and my approach has been to explain the idea using comprehensions. I haven't had a lot of people review the post yet, and I still have at least one if not two more follow-ups before it's done, so I'm not yet sure how well the idea will land.

magicalhippo 1 day ago

Nice introduction. Still not entirely sold that dictionaries are pure functions, though.

Will you be covering common dictionary operations like adding/removing elements and iterating over the dictionary keys?

I have some ideas on how one might frame it in a pure function setting but they all seem quite contorted in a similar way to your incrementDict, ie you'd never actually do that, so curious if there are better ways. Then maybe you'll sell me on the premise.

rebeccaskinner 1 day ago

I'm really focusing less on the idea that Dict the data type with its associated methods is like a function, and more on the idea that a dictionary in the general sense is a mapping of input values to output values, and you can think of functions that way.

That said, there are some pretty reasonable analogies to be made between common dictionary operations and functions.

For example, adding and removing items can be done with function composition so long as you are okay with partial lookups. Here's a really short example I put together:

  {-# LANGUAGE RankNTypes #-}  -- needed for the constraint in the Dict type synonym
  module Example where
  import Control.Applicative

  type Dict a b = Eq a => a -> Maybe b

  emptyDict :: Dict a b
  emptyDict = const Nothing

  singleton :: a -> b -> Dict a b
  singleton k v target
    | k == target = Just v
    | otherwise = Nothing

  unionDict :: Dict a b -> Dict a b -> Dict a b
  unionDict dict1 dict2 k = dict1 k <|> dict2 k

  insertDict :: a -> b -> Dict a b -> Dict a b
  insertDict k v dict = singleton k v `unionDict` dict

  removeDict :: a -> Dict a b -> Dict a b
  removeDict k dict target
    | k == target = Nothing
    | otherwise = dict k
This particular representation of dictionaries isn't necessarily something you'd really want to do, but the general approach can be quite useful when you start working with something like GADTs and you end up with things like:

  data Smaller a where
    SmallerInt :: Smaller Int
    SmallerBool :: Smaller Bool
  
  data Larger a where
    LargerInt :: Larger Int
    LargerBool :: Larger Bool
    LargerString :: Larger String
  
  someLarger :: Larger x -> x
  someLarger l =
    case l of
      LargerInt -> 5
      LargerBool -> True
      LargerString -> "foo"
  
  embedLarger ::
    (forall x. Larger x -> Smaller x) ->
    (forall smallerI. Smaller smallerI -> r) ->
    Larger largerI -> r
  embedLarger mapping fromSmaller larger = fromSmaller (mapping larger)
(I'm actually co-authoring a talk for zurihac this year on this pattern, so I have quite a bit more to say on it, but probably not ideal to cram all of that into this comment).

magicalhippo 1 day ago

> and more on the idea that a dictionary in the general sense is a mapping of input values to output values, and you can think of functions that way.

So what's the difference between a map and a dictionary then?

> Here's a really short example I put together

Much appreciated. I don't really know Haskell (nor any other functional language), but I'm pretty sure I understood it.

> This particular representation of dictionaries isn't necessarily something you'd really want to do

Yeah that's pretty much what I had in mind, and yes it's possible but it feels forced. For one you're not actually removing an element, you just make it impossible to retrieve. A distinction that might seem moot until you try to use it, depending on the compiler magic available.

> I'm actually co-authoring a talk for zurihac this year on this pattern

Sounds interesting, will check it out when it's published.

rebeccaskinner 1 day ago

> So what's the difference between a map and a dictionary then?

You're asking good questions and catching me being imprecise with my language. Let me try to explain what I'm thinking about more precisely without (hopefully) getting too formal.

When I say "a function is a mapping of values" I'm really trying to convey the idea of mathematical functions in the "value goes in, value comes out" sense. In a pure function, the same input always returns the same output. If you have a finite number of inputs, you could simply replace your function with a lookup table and it would behave the same way.

When I talk about dictionaries, I'm speaking a little loosely, and sometimes I'm talking about particular values (or instances of a Python dict), and other times I'm being more abstract. In any case though, I'm generally trying to get across the idea that you have a similar relationship where for any key (input) you get a particular output.

(aside: Literally right now as I'm typing this comment, I also realize I've been implicitly assuming that I'm talking about an _immutable_ value, and I've been remiss in not mentioning that. I just want to say that I really appreciate this thread because, if nothing else, I'm going to edit my blog post to make that more clear.)

The main point is that dictionaries are made up of discrete keys and have, in Python at least, a finite number of keys. Neither of those constraints necessarily apply to functions, so we end up in an "all dictionaries are functions, but not all functions are dictionaries" situation.

> Yeah that's pretty much what I had in mind, and yes it's possible but it feels forced. For one you're not actually removing an element, you just make it impossible to retrieve. A distinction that might seem moot until you try to use it, depending on the compiler magic available.

This is a great example of the kind of thinking I'm trying to address in the article. You're completely right in a very mechanical "this is what memory is doing in the computer" sort of sense, but from the standpoint of reasoning about the problem space deleting an element and being unable to access the element are the same thing.

Of course in the real world we can't completely handwave away how much memory our program uses, or the fact that a function encoding of a dictionary turns a constant-time lookup into a linear-time lookup. Those are real concerns that you have to deal with for non-trivial applications, even in a pure functional language.

The benefit you get, and I apologize because this is hard to explain- let alone prove, is that you can often end up with a much _better_ solution to problems when you start by handwaving away those details. It opens up the solution space to you. Transformations to your architecture and the way you think about your program can be applied regardless of the specific representation, and it's a really powerful way to think about programming in general.

magicalhippo 1 day ago

Thanks for the detailed responses, highly appreciated.

I taught myself programming as a kid using QBasic, and quickly moved on to Turbo Pascal and assembly, so clearly my programming career was doomed from the start[1].

For one I do like to keep in mind how it will actually be executed. The times I've not done that it has usually come back to bite me. But that perhaps hampers me a bit when reading very abstract work.

That said I like going outside my comfortable box, as I often find useful things even though they might not be directly applicable to what I normally do. Like you say, often changing the point of view can help a lot, something that can often be done in a general way.

Anyway, looking forward to the rest of the article series and the talk.

[1]: https://en.wikiquote.org/wiki/Edsger_W._Dijkstra#How_do_we_t...

mncharity 1 day ago

Interesting. I liked how "Dictionaries are Pure Functions" set up currying as JSON nested dictionaries.

Curiously, I've a backburnered esolang idea of gathering up the rich variety of dict-associated tooling one never gets to have all in one place, and then making everything dict-like. Permitting say xpath sets across function compositions.

noelwelsh 1 day ago

One can start with a partial explanation and expand it to cover all the cases as learning progresses. This is how most learning takes place. I expect your primary school teachers introduced numbers with the natural numbers, instead of, say, transfinite numbers. Students learn Newtonian physics before relativity. It's completely fine to build an understanding of monads as operating on containers, and then expand that understanding as one encounters more cases.

_jackdk_ 1 day ago

An intuition of monads built on "flattening" nested layers of `m` is easier to teach and works for more monads.

T-R 2 days ago

Thinking too concretely about monads as boxes might make the behavior of the ListT monad transformer seem a bit surprising... unless you were already imagining your box as containing Schrodinger's cat.

I can definitely understand the author taking offense to the interaction, but now that a lot more programmers have had some experience with types like Result<T> and Promise<T> in whatever their other favorite typed language with generics is, the box/container metaphors are probably less helpful for those people than just relating the typeclasses to interfaces, and pointing out that algebraic laws are useful for limiting the leakiness of abstractions.

hajile 2 days ago

Functions are just containers of calculations (the whole “code is data”).

I don’t know why lists as values in a container would be confusing. Lots of very popular languages literally have box types which may not be exactly the same, but they show that expecting containers to potentially contain complex data isn’t unusual.

codebje 1 day ago

One source for confusion around lists is that the list monad is often used to model non-determinism, rather than just "many things". If you're thinking about non-determinism, a list is akin to a container of one item when you don't precisely know which item it is, but do know it's one of zero or more candidates.

The most widely recognised example, IMO, would be monadic parser combinators. "A parser for a thing, is a function from a string, to a list of pairs of strings and things."
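A minimal sketch of that type and its Monad instance; the list is exactly where the non-determinism lives: no parses, one parse, or several ambiguous parses.

  -- A parser is a function from a String to a list of (result, remaining input) pairs.
  newtype Parser a = Parser { runParser :: String -> [(a, String)] }

  item :: Parser Char                       -- consume one character, if there is one
  item = Parser (\s -> case s of
                         []     -> []
                         (c:cs) -> [(c, cs)])

  instance Functor Parser where
    fmap f (Parser p) = Parser (\s -> [ (f a, rest) | (a, rest) <- p s ])

  instance Applicative Parser where
    pure a = Parser (\s -> [(a, s)])
    Parser pf <*> Parser pa =
      Parser (\s -> [ (f a, rest') | (f, rest) <- pf s, (a, rest') <- pa rest ])

  instance Monad Parser where
    Parser p >>= f =
      Parser (\s -> [ (b, rest') | (a, rest) <- p s, (b, rest') <- runParser (f a) rest ])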

fn-mote 1 day ago

> I don’t know why lists as values in a container would be confusing.

The GP makes it pretty clear - the misunderstanding is that there is one value in a container. A list has many.

hajile 6 hours ago

That's like saying a dev would be confused that an Object can contain a list. I can't see that tripping up anyone but the most junior of developers.

Tainnor 1 day ago

In general, in abstract mathematics no analogy or "intuitive concept" of something will ever replace the rigorous definition. That doesn't mean that imperfect analogies can't be useful, though. You just have to use them as a starting point instead of stopping there.

I think the container analogy can be useful up to a point. There is (potentially) something of value wrapped in another type (e.g. an integer "wrapped in" IO) and we usually cannot access it directly (because of various reasons: because IO is special, because a list may be empty, etc.), but we can string together some operations that manipulate the contents implicitly.

burlesona 2 days ago

I feel like Haskell is easier to use than it is to explain, and in my experience a lot of these kinds of tutorials/explanations actually make things seem harder and more complicated than just working with the concepts and observing what they do. (This one included.)

globnomulous 2 days ago

I'm not familiar with Haskell and am really, really struggling to follow the article.

In the case of the functor, the author doesn't explain in technical, specific enough terms the difference between "open the box, extract the value out of it, apply the function, and put the result back in a box" and "apply a function to a box directly; no need to perform all the steps ourselves." I have no idea what 'apply a function to a box' even means.

> That’s the essence of functors: an abstraction representing something to which we can apply a function to the value(s) inside

The error in this sentence garbles its meaning beyond recovery. "We can apply a function" governs two prepositional phrases that are semantically and syntactically identical: "to which;" "to the value(s) inside." There's no way to resolve the meaning of one without rendering the other incoherent.

BoiledCabbage 1 day ago

The number one mistake everyone trying to explain a Haskell concept to the general population makes is using Haskell. If someone already knows Haskell there is a good chance they know the concepts. Don't use Haskell as the language, use JS to explain it.

The number two mistake people make is being aware of the number one mistake so they go write yet another Monad tutorial in Javascript (or Java or whatever...). Which is why there are so many damn Monad tutorials, all saying pretty much the same thing.

codebje 1 day ago

I am not yet sure whether it's a third mistake to think that it's particularly relevant to understand monads (and friends) outside of a language with (a) the higher-kinded types necessary to actually use them, and (b) a type system that is inconsistent in the face of effects without monads.

I waver between a belief that developers with curiosity about computer science topics will, over time, be quantitatively better developers, and the notion that these are niche topics with limited relevancy.

After all, it's very clear that the Java standard library design committee understands what monads are and where they're useful, since the library is littered with the things, but there's vast numbers of developers out there making effective use of futures, collections, optionals, and streams, building their own intuitions about what "flatMap" means and what you can get away with, all without reading any monad tutorials.

timeon 1 day ago

> The number two mistake people make is being aware of the number one mistake so they go write yet another Monad tutorial in Javascript (or Java or whatever...). Which is why there are so many damn Monad tutorials, all saying pretty much the same thing.

I was lucky seeing this before hitting submit button. Phew that was close.

BoiledCabbage 1 day ago

Glad I could help you out there.

bobbylarrybobby 1 day ago

The distinction is that "opening a box and extracting the value" makes no sense in general, as it's not a thing that can always be done. If your box is a Maybe, there might not be a value to extract. If it's a list, there might be zero or multiple values. It only ever makes sense to map over the contents of the box, replacing the values with their image under the map.

pests 1 day ago

To try to answer your first question, coming from someone who is also not an expert in Haskell or monads.

"apply a function to a box directly; no need to perform all the steps ourselves."

The box doesn't change, and it also doesn't matter what's inside of it. You are attaching the function to the box, who later knows how to apply it to itself. If you were to open the box, you would need to know how to handle all the possible contents. It's key here that you are only handling a box and nothing else.

alabastervlog 2 days ago

Every “hard” concept I’ve seen in Haskell is immediately clear to me if explained in almost any other language. The hard part is Haskell, not the concept.

Usually I’m left wondering why whatever-it-is even has a name, it’s so simple and obvious and also not that special or useful seeming, it’d never have occurred to me to name it. I guess the people giving them names are coming at them from a very different perspective.

Exception: type classes. Those are nice and do need a name.

codebje 1 day ago

Haskell has a type system that lets these things be directly useful in ways they cannot be in many other languages.

You can't, in Java, declare anything like "class Foo<F<T> extends Functor<T>>", or use a similar generic annotation on a method: you can't have an unapplied type-level function as an argument to another type-level function.

These things get a name in Haskell because they can be directly used as useful abstractions in their own right. And perhaps because Haskell remains very close to PL research.

fellowniusmonk 2 days ago

Why are there so few practical, example and code driven tutorials? I've never run across a succinct "build Twitter with Haskell" in the wild.

jaspervdj 1 day ago

This talk seems like exactly what you are looking for:

Gabriel Gonzalez - “A bare-bones Twitter clone implemented with Haskell + Nix” @ ZuriHac 2020 https://www.youtube.com/live/Q3qjTVcU9cg

rrgok 2 days ago

Yes, I really need a real world Haskell project simple enough to understand all the math concepts. Like, I don't know when to implement the Monad type-class to my domain data types. For example, taking the Twitter example, if I have a Tweet data type:

- should I implement the Monad, Applicative or Functor type class?

- How would that help in the big picture?

- What if I don't do it?

All these funny examples of boxes, burritos or contexts don't help me solve problems.

Take for example Monoid: I understand (partially maybe) that it's useful for folding (or reducing) a list to a single value.

wavemode 1 day ago

> Yes, I really need a real world Haskell project simple enough to understand all the math concepts

There actually is a book with precisely that title, which provides what you're asking for: https://book.realworldhaskell.org/

> Like, I don't know when to implement the Monad type-class to my domain data types

A concrete type (such as your Tweet type) can't be a Monad. Monad is implemented on generic types (think: `MyType a`, where `a` can be filled in with a concrete type to produce e.g. `MyType Int` or `MyType String`).

Most monads are data structures like list `[a]` or structures which provide context to computations like `State s a` or `Reader r a`

yodsanklai 1 day ago

> should I implement the Monad, Applicative or Functor type class?

You rarely have to implement these type classes. But you need to understand how they work since many libraries use them. If you do IO, error handling, concurrency, use containers, option parsing and so on, you'll have to use these type classes.

For your own types, nobody forces you to implement them. If it turns out you can make your type an instance of some type class, you may be able to reuse existing code rather than reimplementing it. And it will make the program more readable too.

T-R 1 day ago

> should I implement the Monad, Applicative or Functor type class?

I struggled with this when I first learned Haskell. The answer is "yes, if you can". If you have a type, and you can think of a sane way to implement `pure`, `fmap`, and `bind` that doesn't break the algebraic laws, then there's really no drawback. Same for any typeclass. It gives users access to utility functions that you might not really have to document (because they follow a standard interface) and you might not even have to maintain (when you can just use `deriving`).

Doing so will let you/users write cleaner code by allowing use of familiar tools like `do` notation, or functions from libraries that say they'll work for any Monad. It saves you from coming up with new names for those functions, and saves users from having to learn them; if I see something's a Monad, I know I can just use `do` notation; if I see something's a Monoid, I know I can get an empty one with `mempty` and use `fold` with it. As long as it's not a really strange Monad, and it doesn't break any laws, it probably just works the way it looks like it does.

If you can define `bind` et. al., but it breaks the laws, it means the abstraction is leaky - things might not work as expected, or they might work subtly differently when someone refactors the code. Probably don't do that.

If you don't implement a typeclass that you could have, it just means you might have written some code where you could've used something out of the box. Same as going through old code and realizing "this giant for-loop could've just been a few function calls if I used underscore/functools or generators".

That said, it's not too common to stumble on a whole new Monad. The Tweet type probably isn't a Monad - what does it mean for a Tweet to be parameterized on another type like `Int`, as in `Tweet<Int>`? What would it mean to `flatMap`(`bind`) a function like `Int -> Tweet<String>` on it? A Tweet is probably just a Tweet. On the other hand, it's a little easier to imagine what a `JSON<Int>` might be, and what applying a function like `Int -> JSON<String>` to it might reasonably do. Or what applying an `Int -> Graph<String>` to a `Graph<Int>` might do.

Most Monads in practice are combinations of well known ones. Usually you'll be writing some procedural code in IO, or working with a parser, and realize "I'm writing a lot of code checking for errors", "I'm tired of explicitly passing this same argument", or "I need some temporary mutable storage", or some other Effect - so you wrap up the Monad you're using with a Monad Transformer like `ExceptT`, `ReaderT`, or `StateT` in a `newtype`, derive a bunch of typeclasses, and then just delete a bunch of messy code.
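A rough sketch of that last step, assuming the mtl library and made-up application types (Config, AppError), where the messy plumbing is derived rather than written:

  {-# LANGUAGE GeneralizedNewtypeDeriving #-}
  import Control.Monad.Reader (ReaderT, MonadReader, asks, runReaderT)
  import Control.Monad.Except (ExceptT, MonadError, throwError, runExceptT)
  import Control.Monad.IO.Class (MonadIO)

  data Config   = Config { apiKey :: String }   -- hypothetical app config
  data AppError = MissingKey deriving Show      -- hypothetical error type

  -- IO, plus a read-only Config, plus typed errors, wrapped up as one newtype.
  newtype App a = App (ReaderT Config (ExceptT AppError IO) a)
    deriving (Functor, Applicative, Monad,
              MonadReader Config, MonadError AppError, MonadIO)

  runApp :: Config -> App a -> IO (Either AppError a)
  runApp cfg (App m) = runExceptT (runReaderT m cfg)

  fetchKey :: App String
  fetchKey = do
    key <- asks apiKey                  -- no explicit argument passing
    if null key then throwError MissingKey else pure key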

ngruhn 1 day ago

Highly recommend Richard Eisenberg's video series on building a Wordle solver https://youtube.com/playlist?list=PLyzwHTVJlRc9Fcinmxe97pHl_...

IshKebab 2 days ago

The problem with Monads etc. is that they're simple concepts with extremely confusing names. Monad should be FlatMappable. Once it has the correct name it barely even needs an explanation at all.

bontaq 1 day ago

I've seen this opinion before but disagree with it. There are maybe five names to learn. They relate to the actual concepts, allowing you to expand your knowledge.

contravariant 1 day ago

One issue with that is that you can write Flatmap in a way that doesn't obey the Monad axioms. And once you write out what it means to be 'correctly' flatmappable you've recreated the Monad axioms.

Though it would help if more people were aware that a 'nice' way to 'unnest' a functor (F F x -> F x) is really all that it takes to have a Monad.

yodsanklai 1 day ago

FlatMappable doesn't capture what a monad is. For instance, you can do async programming using monads. Doesn't relate to FlatMappable.

I think you don't see the need for a new name if you don't grasp the concept. It's like in mathematics, you have tons of algebraic structures, like monoid, groups, fields, rings. They all represent categories of things which share some properties. You don't want to name the category by a one of its representatives, that would defeat the purpose of introducing an abstraction.

IshKebab 1 day ago

I don't know, I think the fact that you can use FlatMappables to do async programming and pure IO etc. doesn't mean you have to capture all of the potential uses in the name.

I mean... you can use timer interrupts to do preemptive multi-threading but we don't feel the need to give them a confusing name.

agumonkey 1 day ago

Even though I see why it could help as introduction, I think flatmap is too narrow to express monadism

e-dant 2 days ago

Part of why monads are not interesting to talk about is that they’re generic enough that most explanations are incomplete, and sufficient explanations are boring and unhelpful.

But the biggest reason is that they’re sort of intuitive, plenty examples exist. And then at some point someone tells you that those things are monads, but it’s in the kind of way that social psychologists make up some fancy word for crap we all know about in our gut.

Nobody gives a shit that a list is a monad, people give a shit that it’s a list. Anyone who’s written lisp or node or any nontrivial C program or anything with coroutines or anything with concurrency can and will tell you that, yeah, duh, control flow can be represented by a data structure. A couple more fancy “monad laws” and you have something that looks like other monads, and lists and if expressions and IO meld together. Ok, how unhelpful.

hibikir 1 day ago

The fact that they are so generic is what makes people misunderstand them: They focus on 1 or 2 examples, without seeing that the same concept works in all kinds of other use cases.

People realize a list can be a monad, and then they imagine option and set are also monads. But then you have to tell them that the same applies to Future, and Either. That you can have a resource monad that closes resources.

This is when the fact that something is a monad starts to matter, because of generic concepts for transformers. Every language that has promises and lists will give you a way to turn a List[Promise[T]] into Promise[List[T]], written ad-hoc, but it doesn't have to be quite so ad-hoc. It's when you are stacking 3 or 4 different properties together that the abstract concepts matter. The lack of the abstraction is what makes some language have trouble doing more than just a little bit of functional programming, as going deeper becomes unmanageable without some help.

chowells 2 days ago

The helpful part is the ability to abstract over arbitrary monads. That's the thing that makes it worth identifying that it's a known and well-studied pattern.

VirusNewbie 1 day ago

But if people understood monads they wouldn't be bending over backwards to shoehorn specific syntactic sugar just for error handling.

mncharity 1 day ago

Hmm. I've not yet seen a topical presentation which embeds a tweaked chatbot. Graphics, video, interactive graphics, each provide additional leverage beyond text. So too might "something to talk over the topic with". Something with a punch-list of insights to be conveyed, and misconceptions to be probed for.

Monad education is rich in flawed models and incomplete appreciation, and also in meta discussion of these. Might this lend itself to an interactive Socratic-style tutor? Could the world use a... new and improved monad tutorial?

randomstate 2 days ago

Coming from a non-Haskell background, it took me a good while to understand that `Just` is a constructor specific to the `Maybe` type. Found this to be a quite nice answer: https://stackoverflow.com/a/18809252
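For reference, the whole definition in the Prelude is tiny; `Nothing` and `Just` are simply the two constructors of `Maybe`:

  data Maybe a = Nothing | Just a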

personperson69 2 days ago

The bit at the end is quite rude of the Haskeller responding, but I also think they're largely right; another monads-explained-through-boxes tutorial is not gonna help anyone. In fact it's really a step in the wrong direction. Using a few different monads is where to start.

Vosporos 2 days ago

Was it rudeness or honesty without malice? The "monad tutorial" instinct is a well-documented fallacy. In my culture we don't whitewash our opinions to make them palatable to someone who's obviously doing something wrong in a known way.

redlohr 2 days ago

On first read, I was prone to agree with the author -- why put down someone seeking your input? Then I read your comment and went through it again. After a re-read I think you have it right. The response was direct, and perhaps quite cutting to the author, who had devoted significant time to the article only to be told they're one of hundreds who have made the same mistake. But the only denigration in the linked blog article seemed to be the grouping with others who had fallen into the same trap.

layer8 1 day ago

> A functor is an abstraction that allows for mapping a function over values inside a context without altering the context of the functor.

I’m not sure this is intelligible to laypeople. ;)

rebeccaskinner 2 days ago

I think it's great that people are excited about Haskell and want to write about it, and it's unfortunate that the author had to deal with a less than tactful response to their work. I hope the author keeps spending time with Haskell and continues to make time to try to write more and help other people!

That said, here is a bit of a long comment on my thoughts about writing about and teaching these things:

It's true that teaching Monads, Applicatives, and Functors can be tricky and there are a lot of articles that end up doing more harm than good- either by teaching things that are outright incorrect, or more often, teaching people a particular way to use them but setting people up for a lot of trouble when they run across uses that diverge significantly from the mental model they've built up.

Functions are a classic example of this. There are useful definitions of Functor, Applicative, and Monad for functions, and depending on the mental model you've built up they can be either fairly easy to understand or very difficult to understand. This ends up being a big problem because Applicative and Monadic functions are so pervasive, but they are incomprehensible if you're stuck in the traditional data structure mental model. IO is a great example of this - it's really just a specialized State, but it can be really hard to understand how it works if you're thinking about data structures. Parsers are another good example.

I generally prefer to start people off with the "monad-as-computation" mental model, roughly "An `m a` is an m-computation that can have side effects and when evaluated returns a value of type a", where Maybe are computations that could fail, Lists are computations that can return multiple times, and IO are computations with all of the normal IO side effects.
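A small example of that reading, using `Maybe` as "a computation that can fail": each `<-` runs a sub-computation, and the whole thing fails if any step does.

  safeDiv :: Int -> Int -> Maybe Int      -- a computation that can fail
  safeDiv _ 0 = Nothing
  safeDiv x y = Just (x `div` y)

  average :: Int -> Int -> Int -> Maybe Int
  average total n m = do
    a <- safeDiv total n                  -- Nothing here short-circuits the rest
    b <- safeDiv total m
    pure ((a + b) `div` 2)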

Starting with IO has the nice benefit that you can also help people come to terms with monadic IO as a means of dealing with lazy evaluation. It's a good gateway both into helping people come to terms with the challenges of lazy IO, and it also helps to provide a concrete motivation for IO in haskell that doesn't result in people going off thinking that Monads are a hammer and every problem in the world is a nail.

From there, I think it's helpful to talk not just about bind but also join. Showing someone how to implement join in terms of bind and vice versa is a nice thing to do early because it helps to differentiate Monad from Applicative and it demystifies the "a monad is a monoid in the category of endofunctors" thing a bit (not that I'd proactively bring that up when teaching someone how to use them).

I like to characterize the high level difference as something like "Monads are computations that can _call out to_ other computations and integrate their results", "Applicatives can run computations in parallel and combine the resulting structures / side effects", and "Functors allow you to lift pure functions into a computation". At each step, highlighting both how you are getting less powerful (because you can implement functors in terms of applicatives, and applicatives in terms of monads, but not the other way around), and how having less power can help you reason better about your programs (pros/cons of applicative vs. monadic parsers are a good example here).

Finally, I think it's important early on to make sure your reader understands higher kinded types. A lot of people are used to languages with generics, but many of those languages aren't expressive enough to let you express something like Functor, and people often lack practice in thinking about something like `Maybe` separately from `Maybe Int` or `Maybe a`.

In the end, I think these things really aren't that complicated, but they are built on a different view of programming than the one a lot of readers have the first time they encounter them, and the best approach isn't to translate the concepts into something people are already familiar with. Instead, I think you need to help the reader adapt their mental model. It's a harder path, but one that I think pays off more in the long run.

behnamoh 2 days ago

monads are the MCP of functional programming—no one really knows what they are but everyone writes an article about them using analogies that break when you actually use them in practice.

chowells 2 days ago

Nah. Lots of people know what monads are. And critically, they don't write monad explainers.

This is because if you understand the fundamentals well enough to understand an explanation, monads are so trivially straightforward that the definition is 100% of the explanation you need. Learn about how Haskell denotes types. Learn about higher-order functions, higher-kinded types, parametric polymorphism, and bounded polymorphism. Once you are comfortable with what all of those do in Haskell, Monad is a way to bound polymorphism with a couple extra expectations about how things behave. It takes about 5 minutes to explain and show a bunch of examples.

But before you're comfortable with those parts, it's like trying to explain exponentiation to someone who doesn't understand addition. People who understand exponentiation don't do that. They don't try to use analogies. They say "you need to learn about addition first, then multiplication. You can learn about exponentiation after that."

BoiledCabbage 1 day ago

> Learn about higher-order functions, higher-kinded types, parametric polymorphism, and bounded polymorphism.

Except that's not the case because most people know all of those concepts from their main language, and don't know what a monad is.

Higher order functions? Yup. A function can take a function as an argument and correctly assign the argument type (unless C where you can finagle it but it's not first class).

Higher-kinded types? Yup. Pretty much half of "generics". Taking C#, that's essentially the idea that List<T> is a "type constructor" that allows you to construct a type. If you specify Integer as the T you can say something like List<Integer> and you get a type which is a list of integers.

Parametric polymorphism? Yup. When defining a function - for example using List<T> - you can define the implementation of the function using the generic parameter "T" and not have to specify whether you are defining the implementation on a List of Integers or a List of Strings, and the single implementation will work for all of them.

Bounded polymorphism? Yup. Again using C#, you can specify a restriction on the "T" type parameter. Instead of saying "T" can be any type at all, you can add a "where" clause that says "T" must implement the ISerializable interface, or it must be a subclass of the Foo class.

So most people will read this list. And say "huh, I guess I do already know all of those concepts but by different names." But that doesn't mean they understand Monads conceptually, when to use them nor why. Even if those things are required to read the Monad definition, there is more there.

A rough analogy, but it's like saying people know the visitor or facade design pattern just by reading their type signature. Oh, and as if instead of having intuitive names, the design patterns had useless names like "foblax" and "grobalum".

chowells 1 day ago

I assure you, most people cannot tell you the difference between bounded polymorphism, parametric polymorphism, and whatever their language thinks polymorphism means. (The latter is not the same as either of those.) Most people cannot handle the idea of talking about an unapplied type constructor, because their language of choice cannot do that. Maybe the number of people who can think in higher order functions has reached the majority by now. Some good ideas eventually do spread.

But most of all, people do not understand how Haskell's type system works. It is incredibly precise and concise documentation. When do you use a function? When you have its inputs and need its output. Sometimes things are defined in terms of concrete types and need further explanation. But when talking about incredibly generic interfaces like these, that's 90% or more of the necessary documentation.

Learn how that works, and you'll see why there isn't much to say about Monad.

BoiledCabbage 1 day ago

> I assure you, most people cannot tell you the difference between bounded polymorphism, parametric polymorphism,

And yet most people know one of Java, C# and TypeScript and know how to use generics with constraints. Meaning they know those concepts. You're arguing my point for me. Knowing those concepts clearly isn't sufficient.

There are three things being discussed that you are conflating: knowing a concept, knowing its terminology, and knowing how all of them combine when used in a new concept. Knowing the underlying concepts does not imply the first. And similarly not knowing the terminology does not imply someone does not know the concepts as you seem to think it does.

chowells 1 day ago

> And similarly not knowing the terminology does not imply someone does not know the concepts

Which is why I keep focusing on understanding the Haskell type system. Those are necessary concepts, which you will learn the names of along the way.

aklein 2 days ago

What is MCP?

marcus0x62 2 days ago

Model Context Protocol. It is a way to give an LLM access to an API. There's a lot of hype about it right now, and, thus, a great many half-baked articles floating around. https://www.anthropic.com/news/model-context-protocol

hu3 1 day ago

This is how ChatGPT o1 would explain Functors, Applicatives and Monads to a PHP developer. Looks more digestible to me, supposing it is correct.

https://chatgpt.com/share/67e9b3b0-52a8-8001-87d1-d6d222a27e...

The prompt to save you a click: "I'm an experienced PHP developer, explain Monads to me using PHP exmaples." (yes I made a typo in exmaples but it worked fine anyway).