morkalork 1 day ago

I think it was here a few years ago that I read a comment saying that sick children will be the Trojan horse for normalizing gene editing of humans, because who could say no to sick children, right? Well, guess it's here now, so how long until the eugenics wars start?

jjcob 1 day ago

It's not a slippery slope. Fixing defects is rather straightforward, since it's usually a single gene that needs to be edited.

If you want to make your baby smarter, taller, or more handsome, it's not so easy, because these traits involve thousands of genes.

For this reason I do not think that curing diseases will lead to designer babies.

sfink 1 day ago

If you can affect germline cells, then I don't see how it's not a slippery slope. (I'm not arguing against doing it, just that it is a slope and the slope is slippery.) No designer babies necessary.

I'll steelman "fixing defects" by sticking to serious hereditary diseases (and yes, only those that correspond to one or a few known genes). As more and more conditions become treatable, the population with access to resources will have lower healthcare costs by being less susceptible to problems. (Which is a good thing, note!) Insurance companies will have more and more proxies for differentiating that don't involve direct genetic information. Societally, "those people" [the poor and therefore untreated] cost more to support medically and are an increasing burden on the system. Eugenics gains a scientific basis. Do you want your daughter marrying someone genetically substandard, if you don't have the resources to correct any issues that might show up? Probably not, you're more likely to want to build a wall between you and them. Then throw over anyone who falls behind the bleeding edge of corrections.

It'll be the latest form of redlining, but this time "red" refers literally to blood.

mckn1ght 1 day ago

I'm a fan of saying there's always a slippery slope, it's just a matter of the parameters.

But, I think that it's misguided to apply the human problem of othering to a given technology. Regardless of technology X, humans are gonna human. So, if X helps some people, we should consider it on that basis. Because without X, we will still have an endless stream of other reasons to draw red lines, as you allude to. Except in addition we'll also still have the problem that X could've helped solve.

If gene editing to cure diseases leads to a future where people want to shunt off the poor that are now the outsized burden of the healthcare system, the answer from where I sit is to find ways to make the gene therapies available to them, not to cart them off to concentration camps while they await extermination. This will require all the trappings of human coordination we've always had.

Preventing X from ever coming to fruition doesn't at all prevent all possible futures where concentration death camps are a possibility. To me they are orthogonal concerns.

Even if you can convince one culture/society not to do it, how do you stop others? Force? Now you have a different manifestation of the same problem to solve. Society needs to learn how to "yes, and..." more when it comes to this stuff. Otherwise, it's just war all the way down.

sfink 5 hours ago

I mostly agree. Well:

> This will require all the trappings of human coordination we've always had.

It is also true that we've never had that coordination as quickly as it was needed, nor done as well as it needed to be. We will blunder into things that are easy to predict in advance if we are willing to look and accept what we see, but we won't.

I absolutely agree that this advance is a great thing and should be pursued further. But I also think that simply categorizing it as good or bad is a way to willfully ignore the unintended consequences. We should at least try to do better.

> Society needs to learn how to "yes, and..." more when it comes to this stuff.

Absolutely. I just think that requires nuance, wide open eyes, and acceptance of uncomfortable truths. Part of the nuance is not boiling it down to a yes/no question of "should this proceed?" (For example, how about: "How can we utilize these new capabilities to maximize benefit and minimize harm, when the only real lever we seem to have to work with is the profit motive? Everything else is getting undermined in its service.")

beeflet 22 hours ago

>For this reason I do not think that curing diseases will lead to designer babies.

Well, you're wrong. Where is the line drawn for what constitutes a disease? Retardation? Autism? Eventually every child below, say, 130 IQ will be considered disabled and unable to find work.

Apply this to every other trait: cardiovascular health, strength, height, vision, etc. All forms of weakness can be considered a disease. The end product of eugenics is that mankind will be made into a docile and fragile monoculture.

>If you want to make your baby smarter, taller, or more handsome, it's not so easy because these traits involve thousands of genes.

And? It's obvious that the technology will eventually be capable of this, just not all at once. It starts with single-gene mutations, then it will be tens of genes, then hundreds and thousands.

That is the slippery slope: there is absolutely nothing about your reasoning that prevents one step from leading to another.

tptacek 20 hours ago

He wasn't saying that curing diseases wouldn't lead to designer babies because he objects to the idea (though he might). He's saying that the factors that lead to a "130 IQ" score are, to the extent that they're causatively genetic at all, highly polygenic. Molecular genetics results aren't putting us on a track to predict polygenic behavioral traits (I guess except smoking?), let alone control them.

It's helpful to evaluate claims on this thread in the context of the story. It's possible (though still a very open question) that complex behavioral traits will generally become predictable or maybe even controllable in the future. But those would require breakthroughs (including basic science discoveries breaking in the direction baby-designers want them to) more significant than the announcement on this story.

stereolambda 12 hours ago

Honestly, to me inequality has always been the main reasonable angle of attack on gene editing. But if vaccines are an analogy, many countries were eventually able to mass-vaccinate against dangerous diseases. So this could be merely a question of cost, after some period of elite-only availability.

There's no inherent metaphysical worth in being at any particular level of strength, height, etc., so we can spread whatever is most convenient. I think the arguments against (that I see being made) ultimately devolve into magical thinking and an a priori "thing bad." (I am glad to be shown otherwise.) In fact we are already messing with human fertility in possibly unsustainable ways, so maybe more tools are needed as part of the way out.

Of course there is political execution, corruption, etc., but I don't see it as any different from other technological challenges that civilization has dealt with. I.e., we need better politics, but the tech is not at fault. Gene editing consists of isolated interventions, so in that respect it's more manageable than, for example, mass surveillance, which is hidden and continuous.

One more esoteric argument is that we cannot socially agree on what traits are desirable. The ‘The Twenty-first Voyage of Ijon Tichy’ scenario. So opposite to "monoculture" in a way. But I don't see people expanding on that.

GenshoTikamura 1 day ago

You're certainly unfamiliar with the term "incrementalism" and its workings

tptacek 19 hours ago

No, you're assuming that polygenic trait control "scales" like a sort or even a search algorithm, when there's some molecular genetic evidence that it may instead scale like a cipher key size.

WorldPeas 8 hours ago

Not to argue semantics here, but eugenics is about selective breeding; this would allow for the opposite, even more breeding, especially in populations with hereditary ailments that would have refrained from the act in prior years. I do agree, however, that it is imperative that greater common-sense regulations surrounding "aesthetic" or "performance" "modifications" (such callous words for young life) be enacted.

danielodievich 22 hours ago

For a nice thoughtful imagination of said future, Nancy Kress's Beggars in Spain https://en.wikipedia.org/wiki/Beggars_in_Spain suggests one possible outcome. It has both gene editing to make better humans and unproductive masses. A short yet powerful read.

protocolture 23 hours ago

Well, a sick child has been healed, so that necessarily means we will have a war about it?

dekhn 1 day ago

It's unclear that the outcome of this will be eugenics wars.

Answering the real question: it's unlikely these techniques will see widespread "recreational" usage any time soon, as they come with a wide range of risks. Further, the scientific community has learned a lot from previous eugenics programs; anything that happens in the future will happen with both social and political regulation.

It's ultimately hard to predict: many science fiction writers have speculated about this for some time, and social opinion can change quickly when people see new developments.

morkalork 1 day ago

That's part of why the Trojan horse works so well: what is an unacceptable risk for someone healthy can easily be acceptable for someone with an otherwise untreatable condition. Then, through the experience and knowledge gained, it becomes less risky for everyone.

NoMoreNicksLeft 1 day ago

The problem won't be that there will be those who want to have babies with edited genomes, and those who oppose that.

It will be that people just don't have children at all.

beeflet 22 hours ago

I think one reason why people won't have children is the gene editing and the IVF that is coming. Nothing is left to chance, or to God, anymore. Having children is now a clinical affair. It's spiritually void.

ysofunny 1 day ago

I think I'm fighting in those wars right now; you can also call them "Darwin wars," I suppose... but bear in mind I'm crazy and online

ninetyninenine 1 day ago

[flagged]

mckn1ght 1 day ago

I think part of the problem is that “really good” and “really bad” are not universally accepted norms for any given ethical question. What you’re seeing is your own value system assumptions being checked.

It’s perfectly reasonable to say that while a technology has the propensity to be used for evil, it also has positive applications and that the real benefit now outweighs the potential downside in a hypothetical future.

Otherwise you will go down a rabbit hole at the bottom of which lies a future where we all just kinda dig in the dirt with our hands until we die because every technological innovation can be used in a variety of ways.

Like, it’s silly to me that I can’t bring a 1.5” blade keychain utility knife on a flight, and then they hand me a metal butter knife in first class. I could do way more damage with that. But they allow the butter knife because the utility has been shown to far outweigh the potential downside that hasn’t manifested.

> I will slaughter a baby if I know for a fact that baby will grow up to be the next Hitler

This is one of those things that is easy to say precisely due to the impossibility of it ever actually becoming a real decision you have to make.

ninetyninenine 1 day ago

>This is one of those things that is easy to say precisely due to the impossibility of it ever actually becoming a real decision you have to make.

It's true. But things like this should be easy to say, right? We may not be able to act logically, but we should be able to think logically, communicate logically, and show that we are aware of what is logical.

My post got flagged, meaning a lot of people can't separate the two things. So, for example, I may not be able to kill the baby in reality, but I can at least see how irrational I am.

The person who flagged me likely not only can't kill the baby. He has to construct an artificial reality to justify why he can't kill the baby and why his decision must be rational.

mckn1ght 1 day ago

I think things like that should be easy to say, if by "easy" you mean free of censorship. Sure. But talk is cheap. And there are gradations of reality.

It would maybe be easier for a 15-25 y.o. to kill a baby they don't know and whose parents/family they don't know, and maybe even easier if they don't speak their language or look like them. Of course, the baby wouldn't be the only one you'd have to kill, most likely.

I submit that it would be very very different if you found out that your 4 year old child was going to go on to be the next Hitler. For a "normal" person, I think they would go to the ends of the earth to try to shape them into the kind of person that wouldn't do it. I think very few people would coldly calculate "welp, guess I gotta execute this adorable little child I've grown so attached to" as it looks up at them saying "I love you so much forever, mommy/daddy" with their little doe eyes.

(ETA: it also brings up side questions about nature vs nurture and free will)

And then consider the lifelong repercussions of the emotional fallout. You can use all the logic in the world to justify the action, but your human brain will still torment you over it. And likely, most of the other human brains that learn about it would torment you as well.

---

So, while I think you can say things like that, i.e. the ability and allowance, I think you should question whether you should. Saying those kinds of things doesn't add much to the discussion, because I believe it's really just an uninformed platitude that only someone with a lack of life experience would believe.

For me this all highlights the fact that meaty ethical questions don't have a simple reductive answer. Which ties back into the original problem: OP outright states that this is simply and clearly the wrong path to go down.

(PS the downvoting/flagging could be due to breaking the guidelines around discussing downvotes and flags, and not actually due to the topical content of the posts, and/or assuming bad faith on the part of other users as such: https://news.ycombinator.com/newsguidelines.html)

ninetyninenine 15 hours ago

>So, while I think you can say things like that, ie the ability and allowance, I think you should question whether you should.

You should, because many choices in life are not strictly black and white. Saving a baby's life versus introducing gene editing to humanity. If there were a baby we knew would grow up to slaughter millions, it's absolutely worth talking about. In the age of AI and gene editing, where technology is influencing what it even means to be human, it is wise to stop and pause for a minute to ask the right questions rather than charge forward with change that can't be taken back, all because we wanted to save a baby.

cooper_ganglia 1 day ago

The anti-eugenics guy just said he would "absolutely" murder a baby...?

ninetyninenine 1 day ago

I would if I could foresee the future. But with eugenics you can't foresee the future. Artificially selecting for genetic traits doesn't guarantee a good future. There's no gene for recreating Hitler, either.

psychoslave 1 day ago

[flagged]

ninetyninenine 1 day ago

A baby Hitler guarantees a future with a grown-up Hitler. Killing the baby eliminates that future.

There could be other babies that could also grow up to be future Hitlers. So let's say four such babies exist. By killing one, I eliminated 1/4 of the futures with a grown-up Hitler.

This whole thread is getting flagged. Likely by an irrational parent who can't even compute natural selection, babies, and Hitler all in a single paragraph.

NoMoreNicksLeft 1 day ago

[flagged]