For some reason everyone likes to talk about Monads, but really the other types here are just as interesting. For example, Applicatives are less dynamic than Monads in that you can't `flatMap`/`bind` to decide on the "next" thing to evaluate based on the previous value, but in exchange you get a "static" tree (or graph) of Applicatives that lends itself much better to static analysis, optimization, parallelism, and so on.
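A rough Haskell sketch of the difference in shape (the `f`/`m` effect types here stand in for whatever fetch/build/validation effect you have):

    -- With Applicative, both arguments exist before anything runs, so an
    -- interpreter can inspect the whole expression and, say, run the two
    -- effects in parallel:
    applicativePipeline :: Applicative f => f Int -> f Int -> f Int
    applicativePipeline fa fb = (+) <$> fa <*> fb

    -- With Monad, the second step is a function of the first result, so it
    -- literally does not exist until the first effect has produced a value;
    -- the computation can only be discovered step by step:
    monadicPipeline :: Monad m => m Int -> (Int -> m Int) -> m Int
    monadicPipeline ma next = ma >>= \x -> next x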
IIRC Haxl (https://github.com/facebook/Haxl) uses Applicatives to optimize and parallelize remote data fetching, which is hard to do with Monads since those are inherently sequential due to the nature of `flatMap`/`bind`. My own Mill build tool (https://mill-build.org/) uses an applicative structure for your build, so we can materialize the entire build graph up front and choose how to parallelize it, query it, or otherwise manipulate it. That is again impossible with Monads, since the structure of a monadic computation is only assembled "on the fly" as the individual steps are evaluated. "Parallel validation", where you want to aggregate all failures rather than stopping at the first one, is another common use case (e.g. https://hackage.haskell.org/package/validation or https://typelevel.org/cats/datatypes/validated.html).
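Here's a minimal, hand-rolled sketch of that validation pattern (the real libraries linked above are more polished, but the Applicative instance is the essential part):

    data Validation e a = Failure e | Success a
      deriving Show

    instance Functor (Validation e) where
      fmap _ (Failure e) = Failure e
      fmap f (Success a) = Success (f a)

    -- Error accumulation lives in the Applicative instance: when both sides
    -- fail, the errors are combined instead of stopping at the first one.
    instance Semigroup e => Applicative (Validation e) where
      pure = Success
      Failure e1 <*> Failure e2 = Failure (e1 <> e2)
      Failure e  <*> _          = Failure e
      _          <*> Failure e  = Failure e
      Success f  <*> Success a  = Success (f a)

    checkName :: String -> Validation [String] String
    checkName s | not (null s) = Success s
                | otherwise    = Failure ["name must not be empty"]

    checkAge :: Int -> Validation [String] Int
    checkAge n | n >= 0    = Success n
               | otherwise = Failure ["age must be non-negative"]

    -- (,) <$> checkName "" <*> checkAge (-1)
    --   ==> Failure ["name must not be empty","age must be non-negative"]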
Monads seem to have this strange aura around them that attracts certain kinds of personalities, but really they're just one abstraction in a whole toolkit of useful abstractions, and there are many cases where Applicative or some other construct is a much better fit.
> Monads seem to have this strange aura around them that attracts certain kinds of personalities
Historical accident.
There was a time, not very long ago, when we didn't know applicative functors were a useful abstraction in their own right, weaker than monads. We thought full monads were needed for all the neat things that applicatives are sufficient for.
During this time, lots of ink was spilled over monads. Had we invented applicative functors a little earlier, they would probably have gotten more of the spotlight they deserve.
-----
I also think people underappreciate the humble semigroup/monoid. That is not a historical accident, though; it just seems too simple to be useful. But it really is useful to be able to write functions generic over concatenation!
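For example (a tiny sketch; `combineAll` is just `mconcat` spelled out):

    import Data.Monoid (Sum (..))

    -- One function, generic over "how to combine": it works for lists,
    -- strings, numeric sums, and anything else with a Monoid instance.
    combineAll :: Monoid m => [m] -> m
    combineAll = foldr (<>) mempty

    -- combineAll [[1,2],[3]]                    ==> [1,2,3]
    -- combineAll ["foo", "bar"]                 ==> "foobar"
    -- getSum (combineAll [Sum 1, Sum 2, Sum 3]) ==> 6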
Indeed, it was not long ago that there was no relationship at all in the language between the Applicative class and the Monad class. And then one release Applicative was made the superclass of Monad. That's the reason why we have `sequence` and `sequenceA`, `sequence_` and `sequenceA_`, `liftM` and `fmap`, `ap` and `<*>`, `liftM2` and `liftA2`, `return` and `pure`, `traverse` and `mapM`, etc. All these pairs of functions do the same thing but are duplicated for historical reasons.
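A quick illustration of a few of those pairs, using Maybe as the stand-in monad:

    import Control.Applicative (liftA2)
    import Control.Monad (ap, liftM, liftM2)

    -- Each pair evaluates to the same thing; the left-hand names predate
    -- Applicative becoming a superclass of Monad.
    pairs :: [(Maybe Int, Maybe Int)]
    pairs =
      [ (return 3,                     pure 3)
      , (liftM (+ 1) (Just 2),         fmap (+ 1) (Just 2))
      , (liftM2 (+) (Just 1) (Just 2), liftA2 (+) (Just 1) (Just 2))
      , (Just (+ 1) `ap` Just 2,       Just (+ 1) <*> Just 2)
      ]
    -- mapM/traverse and sequence/sequenceA are related the same way.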
This historical accident has, IMO, made the language harder to teach.
> And then one release Applicative was made the superclass of Monad.
After months of mailing-list discussion and committee meetings. This is my go-to example of why I'd like a language with a push-out lattice of theories (path-independent accumulation of types, operators, and laws). This should ideally "just work", no committees needed: coding with math-like tight locality.
> This historical accident has, IMO, made the language harder to teach.
As LLM refactoring continues to improve, perhaps instead of the current "here's an alternate prelude which cleans up the historical mess for teaching", we might get to "here's a global refactoring of cabal and open docs to ..."?
> Monads seem to have this strange aura around them that attracts certain kinds of personalities
I don't know if it's a matter of personality or aura. Monads are the first unfamiliar/complicated abstraction you bump into when learning Haskell: you can't do any IO without monads, and they're not straightforward like functors or monoids. This is probably why there are more discussions about monads.