Cross-site requests have been built into the design of the WWW since the beginning. The whole idea of hyperlinking from one place to another, and amalgamating media from multiple sites into a single page, is the essence of the World Wide Web that Tim Berners-Lee conceived at CERN, drawing on earlier hypertext systems like HyperCard stacks and on services like Gopher and WAIS.
Of course, cookies, scripting, and low-trust networks only came later.
The WWW was conceived more around a "desktop publishing" metaphor: pages could be formatted, and multimedia presentations could be assembled and served to the public. Only later was the browser harnessed as a cross-platform application-delivery front end.
Also, many sites carefully guard against "linking out," or at least refuse to let the user escape their walled gardens without a warning or disclaimer. As much as they may rely on third-party analytics and ad servers, most webmasters want users to remain on their site, interacting with it, rather than following an external link that would end the engagement or session.
I'm aware of that, but there's obviously a huge difference between the user clicking a link and navigating to a page on another domain, and a blob of JS on the site making that request on the user's behalf.
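To make that difference concrete, here's a minimal sketch of the second case, a script-initiated cross-origin fetch; the endpoint URL and function name are made up for illustration:

```typescript
// Sketch: a page's script requesting another domain on the user's behalf.
// The endpoint is hypothetical; this runs on page load, with no click
// and no navigation by the user.
async function fetchFromThirdParty(): Promise<void> {
  const res = await fetch("https://api.third-party.example/profile", {
    credentials: "include", // attach the user's cookies for that domain
  });
  // Unless the other domain has opted in via CORS response headers,
  // the browser refuses to let this script read the response body.
  const data: unknown = await res.json();
  console.log(data);
}

fetchFromThirdParty().catch(console.error);
```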
CORS != hyperlinks. CORS is about random websites in your browser accessing other domains without your say-so. Websites doing things behind your back does feel antithetical to Tim Berners-Lee's ideals...
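And the opt-in lives on the server side: a cross-origin read only succeeds if the responding domain explicitly allows it. A rough sketch of that opt-in, using Node's built-in HTTP server and a hypothetical allowed origin:

```typescript
import { createServer } from "node:http";

// Rough sketch of the server-side CORS opt-in. Without the
// Access-Control-* headers below, the browser's same-origin policy
// denies scripts from other origins access to this response.
// "trusted-app.example" is a hypothetical origin being granted access.
createServer((req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "https://trusted-app.example");
  res.setHeader("Access-Control-Allow-Credentials", "true"); // allow cookies
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);
```

Which is the point: hyperlinks never needed this handshake; only script-initiated cross-origin reads do.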