I don't think this is correct. Drawing useful logical inferences from something like a Cyc knowledge base is far more compute-limited than doing ML on a comparable amount of data: unconstrained proof search scales exponentially (or worse!) with the depth and branching of the inference, while learning scales roughly linearly with the data. This is the real-world, practical reason the Cyc folks eventually found no value at all in their most general inference engine, and ended up relying exclusively on their custom-authored, more constrained inference generators instead.
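To make the scaling contrast concrete, here's a toy sketch (not Cyc's actual engine, and the branching factor of 10 is just an assumed number): exhaustive backward chaining over Horn-style rules expands a proof tree that grows exponentially with depth, whereas an epoch of example-by-example learning does one step per example.

```python
# Toy illustration of why general-purpose inference blows up combinatorially
# while data-driven learning scales linearly. Numbers are hypothetical.

def proof_search_nodes(branching: int, depth: int) -> int:
    """Nodes expanded by exhaustive backward chaining when every subgoal
    can be reduced by `branching` different rules, down to `depth` levels."""
    if depth == 0:
        return 1
    # Each applicable rule spawns another full subgoal tree.
    return 1 + branching * proof_search_nodes(branching, depth - 1)

def linear_pass_steps(num_examples: int) -> int:
    """Work for one pass of example-by-example learning: one step per example."""
    return num_examples

if __name__ == "__main__":
    for depth in (5, 10, 20):
        print(f"inference, branching=10, depth={depth:2d}: "
              f"{proof_search_nodes(10, depth):,} nodes")
    for n in (10**5, 10**6, 10**7):
        print(f"learning, {n:,} examples: {linear_pass_steps(n):,} steps")
```

With a branching factor of 10, a 20-step proof already needs on the order of 10^20 node expansions, while ten million training examples cost ten million steps. That gap is the practical reason to fall back on constrained, special-purpose inference.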
Again, I'm not saying Cyc's approach is correct. I'm saying that the underlying hope that made Lenat plow through the AI winter is the same one that made ML researchers plow through it. It's just that the ML researchers reached the end of the tunnel first (for some senses of first).