> How are you accounting for this? Trying every possible program length?
Part of the mutation function involves probabilistically growing and shrinking the program size (i.e., inserting and removing random instructions).
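A minimal sketch of what that mutation operator might look like, assuming programs are plain lists of opcodes (INSTRUCTION_SET and the probability constants are illustrative, not from any particular system):

    import random

    INSTRUCTION_SET = list(range(10))         # 10 hypothetical opcodes
    GROW_P, SHRINK_P, POINT_P = 0.05, 0.05, 0.10

    def mutate(program):
        """Return a mutated copy: point mutations plus probabilistic grow/shrink."""
        out = []
        for op in program:
            if random.random() < SHRINK_P:
                continue                                    # drop instruction: program shrinks
            if random.random() < POINT_P:
                op = random.choice(INSTRUCTION_SET)         # point mutation
            out.append(op)
            if random.random() < GROW_P:
                out.append(random.choice(INSTRUCTION_SET))  # insert instruction: program grows
        return out

Program length drifts up or down depending on the balance of GROW_P and SHRINK_P, so length itself is under selection pressure rather than being enumerated.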
> And you are considering the simpler case where the search space is discrete, unlike the continuous spaces in most machine learning problems.
All "continuous spaces" that embody modern machine learning techniques are ultimately discrete.
No, they are not. Model outputs can be discretized, but the model parameters (excluding hyperparameters) are typically continuous. That's why we can use gradient descent.
Where are the model parameters stored and how are they represented?
On disk or in memory, as multidimensional arrays ("tensors" in ML speak).
Do we agree that these memories consist of a finite # of bits?
Yes, of course.
Consider a toy model with just 1000 double-precision (64-bit) parameters, i.e., 64,000 bits of state. If you're going to randomly flip bits over this 2^64,000 search space while evaluating a nontrivial fitness function, genetic-style, you'll be waiting a long time.
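To make the arithmetic concrete (assuming, very generously, a billion fitness evaluations per second; the figures are back-of-the-envelope, not from any benchmark):

    import math

    bits = 1000 * 64                          # 1000 doubles = 64,000 bits of state
    log10_states = bits * math.log10(2)
    print(f"search space: ~10^{log10_states:.0f} states")     # ~10^19266

    seconds_per_year = 3.156e7
    log10_years = log10_states - 9 - math.log10(seconds_per_year)
    print(f"exhaustive search: ~10^{log10_years:.0f} years")  # ~10^19249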
I agree that if you approach it naively you will accomplish nothing.
With some optimization, you can evolve programs in search spaces of 10^10000 states (e.g., programs 10,000 instructions long drawn from 10 unique instructions) and beyond.
Visiting every possible combination is not the goal here.
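Here's a minimal sketch of the fitness-guided loop I mean, reusing INSTRUCTION_SET and mutate from the sketch above; fitness is a stand-in for whatever scoring function you're evolving against, not a real API. The point is that selection concentrates the search, so nothing remotely close to the full space is ever visited:

    import random

    def evolve(fitness, pop_size=100, generations=1000, seed_len=50):
        """Selection + mutation over a population; never enumerates the space."""
        pop = [[random.choice(INSTRUCTION_SET) for _ in range(seed_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            elite = ranked[:pop_size // 10]       # keep the top 10%
            pop = elite + [mutate(random.choice(elite))
                           for _ in range(pop_size - len(elite))]
        return max(pop, key=fitness)

With 100 individuals over 1000 generations, that's 10^5 candidates examined out of 10^10000 possible, which is exactly why the "you'll be waiting a long time" objection to exhaustive search doesn't apply.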