It’s probably not nefarious and just generally not worth the headache of an EU release and going through the checks/requirements for an experiment or early beta.
The EU's many rules stifle speed and change the math for releasing anything, especially at a big company with a variety of requirements built up over time to reduce litigation risk, or the chance of ending up on the wrong side of one of the overreaching government officials there.
> going through the checks/requirements
The process should be the reverse of that. Don't collect data unless you have been through the process of checking that it has a legal basis.
Typically the internal processes to verify what is and isn't collected are mandated per launch region, not by whether the initial version of the software collects data at all.
At work all our products do not eat children by default and it would require legal review to add any such feature. Therefore we don't need a procedure to make sure that any child-eating components get disabled in regions that have some concerns about child-eating.
Except too bad: you're launching a washing machine that needs to be checked for child-eatingness on the tiny island of Zogdog, so you just decide not to sell it there.
> At work all our products do not eat children by default and it would require legal review to add any such feature.
I bet the people at Peloton thought that too until they made that treadmill[1]. I know you meant your critique to be absurd, but it turns out creating a child-eating machine by accident is entirely possible. I also bet Peloton product development now includes a process to review child-eatingness despite that not being their primary market; just the usual twice-burnt reflex.
Accidentally logging PII can easily happen to a single engineer. I managed to do it on a product that was privacy-focused, and the error slipped through review. The odds of such inadvertent errors rise linearly with the number of products and engineers, and the fines are probably superlinear in the size of the offending organization. If your 3-person consultancy chomps on a GDPR baby or two, no one will ever know about it, but if Google does it, it's going to be news headlines and millions in fines.
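To make the failure mode concrete, here is a minimal sketch (hypothetical field names, assuming Python's standard `logging` module) of how PII slips into logs when a whole payload is logged, plus one common mitigation: a logging filter that redacts email-shaped strings before the record is emitted.

```python
import logging
import re

# Hypothetical scenario: an engineer logs the whole request payload,
# not noticing that it contains an email address (PII).
payload = {"user_id": 42, "email": "alice@example.com", "action": "login"}
logging.warning("request failed: %s", payload)  # email leaks into the logs

# One mitigation: a filter that scrubs email-shaped substrings from
# every record before any handler writes it out.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class RedactPII(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # getMessage() applies % formatting, so args are folded in first.
        record.msg = EMAIL_RE.sub("[REDACTED]", record.getMessage())
        record.args = ()  # message is already fully formatted
        return True

logging.getLogger().addFilter(RedactPII())
logging.warning("request failed: %s", payload)  # email now redacted
```

A regex filter like this is a backstop, not a substitute for structured logging with an explicit allow-list of fields; it only catches PII whose shape you anticipated.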
Logging some PII by accident is also an issue and can also lead to compromises, but I think intentionally collecting it in bulk is the primary concern here. With Google I assume bulk data collection is the default stance, which is why they need to carefully trim it down to just what the lawyers say is justifiable, rather than the other way around. That's the problem I'm gesturing at.
This mindset assumes that you know what question you want to ask before you have the data, as opposed to having the data and then being able to generate hypotheses from it.
Legal basis is ever-shifting by region. As n+1 requirements pop up, it's only natural to release things like GenChess in the place that imposes the least friction, especially when it is not a revenue-generating event.
Or don't launch in the EU until you have to, which is what most do now. So, your choice.