If you can simulate a behavior, then you have a working understanding of how to produce that behavior.
Not necessarily. You can simulate a high-level process independently of the low-level processes that make it up.
For example, you can simulate traffic without simulating the inner workings of every car's engine, or even understanding how the engine works.
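To make that concrete, here's a toy sketch in Python (all names, numbers, and the following rule are made up for illustration): cars are nothing but a position and a preferred speed on a ring road, with no engine model at all, yet traffic-level behavior like a fast car getting stuck behind a slow one still falls out.

```python
# Toy traffic simulation: each car is just a position and a preferred speed
# on a circular road. No engine, fuel, or transmission is modeled, yet
# traffic-level behavior (queuing behind slower cars) still emerges.

ROAD_LENGTH = 100.0

def step(cars, dt=1.0, min_gap=5.0):
    """Advance every car one tick; a car slows to match the car ahead
    whenever the gap in front of it is smaller than min_gap."""
    cars = sorted(cars, key=lambda c: c["pos"])
    out = []
    for i, car in enumerate(cars):
        ahead = cars[(i + 1) % len(cars)]
        gap = (ahead["pos"] - car["pos"]) % ROAD_LENGTH
        # Effective speed this tick; the preferred speed is kept for later.
        v = min(car["speed"], ahead["speed"]) if gap < min_gap else car["speed"]
        out.append({"pos": (car["pos"] + v * dt) % ROAD_LENGTH,
                    "speed": car["speed"]})
    return out

# A fast car three units behind a slow one: it gets stuck at the slow
# car's speed, even though nothing "under the hood" is simulated.
cars = [{"pos": 0.0, "speed": 2.0}, {"pos": 3.0, "speed": 1.0}]
cars = step(cars)
print([c["pos"] for c in cars])  # [1.0, 4.0] - the gap stays at 3
```

The point of the sketch: the simulation operates entirely at the level of "cars move and keep their distance," which is exactly the abstraction the traffic question cares about.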
I might nitpick the "understanding" part. Lots of ML-type statistical models produce good results, but we can't very well explain how they work.
Or maybe by "working understanding" you mean "we have a black box that does the thing we wanted."
You can simulate behavior with a lookup table, but that's not the same as understanding it.
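The lookup-table point can be shown in a few lines of Python (names and the hidden rule are made up for illustration): the "simulator" reproduces the observed behavior exactly, while encoding nothing about the rule that generated it.

```python
def underlying_process(n):
    # The "real" rule - hidden from the simulator in this thought experiment.
    return n * n

# Build a lookup table purely by observing input/output pairs.
table = {n: underlying_process(n) for n in range(10)}

def simulate(n):
    # Reproduces the behavior perfectly on every observed input...
    return table[n]

print(simulate(4))    # 16, identical to the real process
print(100 in table)   # False - outside the observed data there is
                      # nothing to fall back on, because no rule was learned
```

The giveaway is generalization: the table matches the process on every input it has seen and fails on everything else, which is arguably the practical difference between mimicking a behavior and understanding it.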