From near the bottom:
> One of the core new elements of the drone’s AI is the use of a deep neural network that doesn’t send control commands to a traditional human controller, but directly to the motors.
I saw that too - I'm assuming it means they're indeed using the DNN for stabilization. This has been done several times over the years, but generally with results that only rival PID control rather than surpass it, so that's quite interesting. What's odd is that the physical architecture of the drone doesn't really make sense for this, so there must be some tweaks beyond the "spec" model. Hopefully some papers come soon instead of press releases.
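For reference, this is roughly the kind of PID baseline such a network would be compared against: a per-axis rate PID feeding a motor mixer. This is only a minimal sketch - the gains, loop rate, and mixer matrix below are made-up placeholders, not values from any real flight stack.

```python
import numpy as np

class RatePID:
    """One PID loop on a single body-rate axis (roll, pitch, or yaw)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 1.0 / 500.0  # illustrative 500 Hz rate loop
pids = [RatePID(0.15, 0.05, 0.003, dt) for _ in range(3)]  # roll, pitch, yaw

def control_step(rate_setpoint, gyro):
    # Torque command per axis from the three PID loops.
    torque = np.array([p.step(sp, w)
                       for p, sp, w in zip(pids, rate_setpoint, gyro)])
    thrust = 0.5  # collective thrust, would normally come from an outer loop
    # Mixer: map (thrust, roll, pitch, yaw torques) to 4 motor commands.
    # Sign convention is arbitrary here, just for illustration.
    mixer = np.array([
        [1, -1, -1, -1],
        [1,  1,  1, -1],
        [1, -1,  1,  1],
        [1,  1, -1,  1],
    ], dtype=float)
    motors = mixer @ np.concatenate(([thrust], torque))
    return np.clip(motors, 0.0, 1.0)

# Example: level the drone from a small measured roll/pitch rate.
print(control_step(np.zeros(3), np.array([0.02, -0.01, 0.0])))
```

The point of the article's claim is that the NN collapses this whole loop (and the hand-tuned gains) into one learned mapping from state to motor commands.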
They reference ESA's research on "Guidance and Control Nets", and ESA's page for their "Advanced Concepts Team" [0] in turn references ETH Zürich's research on RL for drone control - specifically [1] this paper from 2023: "Champion-level drone racing using deep reinforcement learning" [2]. They use a 2x128 MLP for the control policy (rough sketch below the links).
[0] https://www.esa.int/gsp/ACT/
[1] https://www.esa.int/gsp/ACT/projects/rl_vs_imitation_learnin...
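A rough sketch of what a policy of that shape (two hidden layers of 128 units) looks like, assuming it maps a state estimate straight to per-motor commands as the quoted article suggests. The observation layout, activations, and output scaling are my guesses, not taken from the ESA or ETH work (the racing paper's policy may well use a different action parameterization).

```python
import torch
import torch.nn as nn

OBS_DIM = 13   # guess: position (3) + velocity (3) + attitude quaternion (4) + body rates (3)
ACT_DIM = 4    # one normalized command per motor

# Two hidden layers of 128 units, matching the "2x128 MLP" description.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 128),
    nn.Tanh(),
    nn.Linear(128, 128),
    nn.Tanh(),
    nn.Linear(128, ACT_DIM),
    nn.Tanh(),  # squash to [-1, 1], rescaled to motor range below
)

def act(obs: torch.Tensor) -> torch.Tensor:
    """Map a state vector to four motor commands in [0, 1]."""
    with torch.no_grad():
        raw = policy(obs)
    return 0.5 * (raw + 1.0)

# One control step on a dummy state estimate.
print(act(torch.zeros(OBS_DIM)))
```

At that size the network is tiny - on the order of twenty thousand parameters - which is part of why it's plausible to run it inside a hard real-time control loop.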
This is crazy - its dexterity and range of motion could potentially exceed anything achievable with hand-modeled control systems.
I assume they shave off milliseconds by doing so, with a gyroscope (or similar sensor) feeding back the drone's position/angle. Does that mean it bypasses the 'limited' onboard computer and instead uses a much better/faster computer?
Reports downthread suggest that the NN is running directly on the drone, on a Jetson, which would give much better latency and video quality.
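As a sanity check on the latency argument: a 2x128 MLP is essentially free to evaluate, so running it onboard costs almost nothing per control step. Here's a quick, purely illustrative numpy timing of such a forward pass - exact numbers depend on the hardware, but per-step compute comes out in the microseconds, comfortably inside a 500 Hz to 1 kHz loop budget.

```python
import time
import numpy as np

# Random weights of the assumed 13 -> 128 -> 128 -> 4 shape; only timing matters here.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((13, 128)), np.zeros(128)
W2, b2 = rng.standard_normal((128, 128)), np.zeros(128)
W3, b3 = rng.standard_normal((128, 4)), np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    return np.tanh(h @ W3 + b3)

x = rng.standard_normal(13)
forward(x)  # warm up

n = 10_000
t0 = time.perf_counter()
for _ in range(n):
    forward(x)
per_step = (time.perf_counter() - t0) / n
print(f"avg forward pass: {per_step * 1e6:.1f} microseconds")
```

So the latency win from going onboard is about cutting out the radio round trip, not about the network itself being expensive.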