If you are using the formal definition of generalization in a machine learning context, then you have completely misrepresented Chollet's claims. He doesn't say much about generalization in the sense of unseen, in-distribution data; any AI algorithm worth a damn can do that to some degree. His argument is about transfer learning, which is a stronger form of generalization: to out-of-distribution data and unfamiliar tasks. A network trained on Go cannot generalize to translation, and vice versa.
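For what it's worth, the distinction is easy to show concretely. Here is a minimal sketch (toy synthetic data with scikit-learn, nothing to do with Chollet's actual benchmarks) of the two senses of "generalization" in play: a model can do fine on unseen in-distribution data while falling apart once the distribution shifts.

```python
# Toy illustration of in-distribution vs. out-of-distribution generalization.
# Hypothetical synthetic data; the specific numbers are just for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes; `shift` moves the whole distribution.
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

X_train, y_train = make_data(500)        # training distribution
X_iid, y_iid = make_data(500)            # unseen data, same distribution
X_ood, y_ood = make_data(500, shift=3.0) # unseen data, shifted distribution

clf = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", clf.score(X_iid, y_iid))      # stays high
print("out-of-distribution accuracy:", clf.score(X_ood, y_ood))  # degrades badly
```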
Maybe you should stick to a single definition of "generalization" and make that definition clear before you tell people they need to go read ML basics.
I was replying to a claim that LLMs "can't generalize" at all, and I showed that they do within their domain. No, I haven't completely misrepresented his claims; Chollet is just setting a high bar for generalization.
It is a very basic form of generalization, and one that most people understand as fundamental to general intelligence.
You're proving my point. If full human-level general intelligence is "basic", then you have set the bar for generalization ridiculously high.