Accelerating Aerial Autonomy with Digital Engineering

Mar 11, 2024

Autonomy has become a major capability focus for programs developing Small Unmanned Aerial Systems (sUAS). Their ability to be force multipliers for human operators has the potential to drive significant value across many industries, from inspecting gas pipelines to drone delivery networks. On the battlefield, autonomous systems are changing how modern conflict is fought and delivering a competitive advantage to those who can field them.

To achieve this, AI-based software must process sensor data to perceive its surroundings, localize itself, and optimize its trajectory in a continuous feedback loop. However, developing these capabilities is challenging because the system's performance varies with the environment it is deployed in. For example, software trained on data from a desert test site may fail to identify maritime vessels or to plan routes over urban environments.

Digital engineering is a broad umbrella of tools and capabilities, part of which enables developers to build and retrain software rapidly without relying on live testing in a limited set of environments. Software can be tested across the full range of operational domains, building confidence in its performance before deployment.

In this blog post, we describe four engineering workflows that are accelerating modern sUAS software development.

Sensor Optimization

Sense, detect, and avoid is a core problem for sUAS autonomy, particularly for operation in urban or confined environments. One of the first challenges in building autonomy for any vehicle is identifying a suitable sensor suite for the required capabilities. Historically, this has been a slow, hardware-intensive process. Candidate sensors must be procured and mounted on prototype vehicles, which can take several months to years. Additionally, measuring the information (e.g. pixel count for an object) captured by a sensor suite is imprecise. Often, developers will attempt to estimate it manually or prototype a sensor fusion algorithm to validate whether sufficient information is present in the sensor data.

Digital engineering can reduce the time to identify a suitable sensor suite from years to days. This is because a simulated environment allows developers to rapidly model a new sensor and gather precise, realistic metrics about its efficacy in different scenarios. In simulation, a collection of camera (EO or IR), radar, lidar, or radio frequency sensors can be mounted on a virtual mesh of the airframe. Each sensor can be translated and rotated with six degrees of freedom (6-DOF) using a 3D editor.
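As a rough illustration, a candidate suite might be captured in code along these lines. This is only a sketch: the Pose and SensorSpec types and their fields are hypothetical stand-ins, not the configuration schema of any specific tool.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    """6-DOF placement of a sensor relative to the airframe origin."""
    x_m: float
    y_m: float
    z_m: float
    roll_deg: float
    pitch_deg: float
    yaw_deg: float


@dataclass
class SensorSpec:
    name: str
    modality: str              # e.g. "EO", "IR", "radar", "lidar", "RF"
    horizontal_fov_deg: float
    vertical_fov_deg: float
    pose: Pose


# A candidate suite: a forward-looking EO camera and a downward-facing lidar.
suite = [
    SensorSpec("nose_eo", "EO", 90.0, 60.0, Pose(0.4, 0.0, 0.05, 0.0, -10.0, 0.0)),
    SensorSpec("belly_lidar", "lidar", 360.0, 30.0, Pose(0.0, 0.0, -0.10, 0.0, -90.0, 0.0)),
]
```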

Figure 1: Placing sensors on an sUAS mesh

Immediately, information about field-of-view (FoV) coverage and FoV overlap can be calculated and visually observed for early feedback. Individual sensors can be previewed to see where there are coverage gaps or overlap.
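As a toy example of the underlying check, the snippet below treats each sensor as a boresight direction with a conical FoV and counts how many sensors cover a given direction. Real tools evaluate coverage against the full airframe mesh and sensor frustums; this only sketches the idea.

```python
import math


def in_fov(boresight_az, boresight_el, half_angle_deg, target_az, target_el):
    """True if the target direction lies within the sensor's conical FoV (angles in degrees)."""
    a1, e1, a2, e2 = map(math.radians, (boresight_az, boresight_el, target_az, target_el))
    # Angular separation between boresight and target via the spherical law of cosines.
    cos_sep = math.sin(e1) * math.sin(e2) + math.cos(e1) * math.cos(e2) * math.cos(a1 - a2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep)))) <= half_angle_deg


# Hypothetical sensors: (boresight azimuth, boresight elevation, FoV half-angle).
sensors = {"nose_eo": (0.0, -10.0, 45.0), "belly_lidar": (0.0, -90.0, 90.0)}
target_direction = (10.0, -30.0)  # azimuth, elevation of a point of interest

covering = [name for name, (az, el, half) in sensors.items()
            if in_fov(az, el, half, *target_direction)]
print(f"{len(covering)} sensor(s) cover the target direction: {covering}")
```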

Figure 2: Edit and visualize sensor field-of-view on a digital airframe
Figure 3: Run simulations that evaluate sensory feedback, such as the pixel count for an object

Once the suite is configured, the virtual UAS can be flown in a set of simulated scenarios. Each scenario is varied on operationally relevant parameters, such as the objects being perceived, altitude and velocity, weather conditions, and more. By running these simulations in parallel on a cloud-based or on-premises cluster, thousands of variations can provide feedback in minutes. If the sensor suite needs adjusting, a developer can make the change and trigger a new batch of simulation runs.
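A minimal sketch of such a sweep is shown below; run_simulation is a hypothetical stand-in for whatever submits one scenario to the cluster and returns metrics such as pixels on target.

```python
import itertools
from concurrent.futures import ProcessPoolExecutor


def run_simulation(scenario):
    """Hypothetical stand-in for submitting one scenario to a simulation cluster."""
    # In practice this would launch a simulator job and collect sensor metrics.
    return {"scenario": scenario, "pixels_on_target": 42}


altitudes_m = [50, 100, 200]
speeds_mps = [5, 15, 30]
weather_conditions = ["clear", "fog", "rain"]

scenarios = [
    {"altitude_m": alt, "speed_mps": spd, "weather": wx}
    for alt, spd, wx in itertools.product(altitudes_m, speeds_mps, weather_conditions)
]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, scenarios))
    # Flag scenarios where the target falls below a usable pixel count.
    too_small = [r for r in results if r["pixels_on_target"] < 20]
    print(f"{len(scenarios)} scenarios run, {len(too_small)} below the pixel threshold")
```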

As capability requirements change over time, this workflow can be repeated to optimize for new constraints.

Synthetic Datasets

Once the sensor suite is selected, developing a mature perception system requires a large, diverse library of sensor data for that suite. Flying aircraft for data collection in the real world is prohibitively expensive, limited to FAA-approved airspaces (e.g. away from airports), or entirely infeasible if the operational areas are contested or denied airspace.

While real data collection remains essential, synthetic data generation can quickly fill gaps where data is limited. This is especially true for edge cases that are dangerous or hard to replicate, such as an emergency maneuver or flight through a contested battlefield space. Simulation can also easily reproduce situations that are prohibitively complex to set up in the real world, such as a dense operating environment or a drone swarm.

Synthetic data delivers two other important advantages:

  • Pixel-perfect labels. Real-world data is not only painstakingly collected, but also painstakingly labeled. Synthetic data can come annotated with the user’s choice of labels - from bounding boxes to semantic segmentation and optical flow. Additionally, those labels are accurate down to the pixel level and can be customized to the model’s requirements. For example, labels of ground vehicles can be easily adjusted to include or exclude outer features like mirrors or equipment racks.
  • Variations at scale. To be robust to a category of scenarios, AI models must be trained on many variations of that scenario. For example, if you are building an air-to-ground object classification model, you would look to acquire sensor data on objects from different distances, look angles, camera zoom levels, lighting, weather, and so on. Capturing that data in the real world requires exponentially more resources, but in a simulated environment the marginal cost amortizes toward zero: the cost of running 1 simulation vs. 1,000 simulations can be nearly the same.

Figure 4: Air-to-ground tank variations 

To create synthetic data, users can import relevant 3D content into a simulated environment themselves or work with technical artists to do so. Synthetic Datasets comes with hundreds of assets in its built-in library, each featuring physically accurate textures. Variations can be created by parameterizing the dataset specification: specification fields become variables drawn from a distribution or a set of specific values. That dataset specification can then be executed via batch simulations to quickly produce a candidate dataset.
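A minimal sketch of what such a parameterized specification could look like is below; the field names and sampling scheme are illustrative rather than the schema of a specific product.

```python
import random

# Fields are either a list of categorical choices or a (low, high) range sampled uniformly.
spec = {
    "target_class": ["tank", "truck", "boat"],
    "slant_range_m": (200.0, 2000.0),
    "look_angle_deg": (10.0, 80.0),
    "time_of_day_h": (5.0, 21.0),
    "weather": ["clear", "overcast", "rain", "fog"],
}


def sample_variations(spec, n, seed=0):
    """Draw n concrete variations from the parameterized specification."""
    rng = random.Random(seed)
    for _ in range(n):
        yield {key: (rng.choice(val) if isinstance(val, list) else rng.uniform(*val))
               for key, val in spec.items()}


# Each sampled variation would become one batch simulation job that renders labeled frames.
for variation in sample_variations(spec, n=3):
    print(variation)
```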

Sensor Fusion Software Development and Integration 

Autonomous systems often won’t rely on a single sensing modality to navigate. A multi-modal, multi-sensor suite has many advantages for reliable and accurate perception and localization, as well as downstream improvements for path planning.

Developing algorithms to fuse sensor data can be accelerated with physically accurate simulation. Engineers can quickly model the multi-modal sensor package and integrate prototype fusion algorithms for testing, avoiding physical testing on each iteration. Adjustments to the sensors or algorithms can then be made and recompiled in minutes. By running those simulations at scale, it’s also possible to identify boundary conditions, such as where perception is heavily reliant on a sensor that may not provide accurate data under certain conditions (e.g. a dependency on cameras in cloudy environments, or on a GPS receiver when jammed).
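As a deliberately small example of the kind of fusion logic that might be prototyped against simulated outputs, the snippet below combines two noisy range estimates (say, one camera-derived and one radar-derived) by inverse-variance weighting. Real fusion stacks (Kalman filters, factor graphs) are far richer; the point is only that simulated ground truth makes the result directly scoreable.

```python
def fuse(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance weighted fusion of two scalar estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)


# Simulated ground truth lets us score the fused estimate directly.
true_range_m = 120.0
camera_range_m, camera_var = 118.5, 4.0   # degraded in low light or fog
radar_range_m, radar_var = 121.2, 1.0     # weather-robust but angularly coarse

fused_range_m, fused_var = fuse(camera_range_m, camera_var, radar_range_m, radar_var)
print(f"fused: {fused_range_m:.1f} m (error {abs(fused_range_m - true_range_m):.2f} m)")
```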

Synthetic data for the multi-modal suite can also be generated to supplement real-world training data. This is an extension of the single-sensor perception workflow that can accelerate fusion algorithm development.

Path Planning Development

In autonomous systems, the output of the perception system is an input to the path planning algorithm. Put simply, deciding where to go is a function of where you are and what’s around you. In practice, path planning is typically developed after the perception software has reached a degree of maturity, and it is primarily tested with live flights. Digital engineering offers two workflows to accelerate the path planning development lifecycle.

Mock perception

By mocking the output of the perception system in simulation, we can develop and test path planners in isolation. Pixel-accurate ground-truth perception output can be passed to the planner module to establish its performance under ideal inputs. That output can also be degraded with dropout and noise to identify the point at which poor perception causes the path planner to fail. This is a critical threshold to uncover, but if we depended on live tests, finding it would require pushing a real aircraft to the point of crashing.
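A sketch of that degradation step might look like the following, assuming a simple dictionary format for detections; the planner call is left as a hypothetical placeholder.

```python
import random


def degrade(detections, dropout_prob, noise_std_m, rng):
    """Randomly drop detections and perturb positions to mimic imperfect perception."""
    degraded = []
    for det in detections:
        if rng.random() < dropout_prob:
            continue  # simulate a missed detection
        degraded.append({
            "cls": det["cls"],
            "x_m": det["x_m"] + rng.gauss(0.0, noise_std_m),
            "y_m": det["y_m"] + rng.gauss(0.0, noise_std_m),
        })
    return degraded


rng = random.Random(7)
ground_truth = [{"cls": "debris", "x_m": 210.0, "y_m": -3.5},
                {"cls": "vehicle", "x_m": 180.0, "y_m": 12.0}]

# Sweep degradation levels to find where the planner starts to fail.
for dropout in (0.0, 0.2, 0.5):
    observed = degrade(ground_truth, dropout_prob=dropout, noise_std_m=1.5, rng=rng)
    # plan = planner.update(observed)  # hypothetical planner interface under test
    print(f"dropout={dropout:.1f}: planner sees {len(observed)} of {len(ground_truth)} objects")
```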

This kind of simulation is referred to as “object-level simulation” because the objects in the environment are represented as simpler bounding-box representations rather than full 3D visual models.

Figure 5: Aerial object-level simulation view

Flight log simulation

During live test flights, the sensor data captured mid-flight can be used to extract test cases for future development. For example, if an autonomous aircraft detected debris on a runway but subsequently failed to execute a go-around, the flight logs could be brought back to the lab and replayed against new versions of the planning software until it was verified to execute the go-around successfully. Noise and dropout can once again be applied to assess performance thresholds.
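A sketch of that replay loop is below, assuming a hypothetical log format of one JSON perception frame per line and a placeholder planner under test.

```python
import json


class CandidatePlanner:
    """Placeholder for the planning software build under test."""
    def update(self, frame):
        sees_debris = any(obj["cls"] == "debris" for obj in frame["objects"])
        return "go_around" if sees_debris else "continue"


def replay(log_path, planner):
    """Feed each recorded perception frame to the planner and collect its decisions."""
    decisions = []
    with open(log_path) as f:
        for line in f:
            frame = json.loads(line)  # one recorded perception frame per line
            decisions.append(planner.update(frame))
    return decisions


# Hypothetical usage against a recorded edge case:
# decisions = replay("logs/runway_debris.jsonl", CandidatePlanner())
# assert "go_around" in decisions, "regression: planner no longer executes the go-around"
```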

This technique maximizes the data collected from each flight and avoids having to schedule additional, expensive live tests in order to repeat previous edge cases, accelerating the development lifecycle.

It’s also important that future software updates don’t introduce regressions for an edge case that was previously fixed. To prevent this, test cases extracted from flight logs can be added to a growing test suite in a continuous integration and deployment (CI/CD) pipeline. Each software candidate is tested against the suite and any failures are triaged, enabling developers to catch known failure modes before new software is deployed onto the aerial platform. This workflow is particularly valuable for safety certification cases that emphasize a data-driven approach or a simulation-based approach.
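Folded into CI, each extracted log can become one regression test. The sketch below assumes a pytest-based pipeline and reuses the hypothetical replay and CandidatePlanner helpers from the previous sketch.

```python
import glob

import pytest

from replay_tools import CandidatePlanner, replay  # hypothetical module holding the previous sketch


@pytest.mark.parametrize("log_path", glob.glob("regression_logs/*.jsonl"))
def test_planner_handles_recorded_edge_case(log_path):
    decisions = replay(log_path, CandidatePlanner())
    # Expected behavior per log could live in sidecar metadata; here we assume a
    # simple convention of encoding it in the log filename.
    if "go_around" in log_path:
        assert "go_around" in decisions, f"regression on recorded edge case: {log_path}"
```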

Conclusion

Digital engineering makes it possible to continuously improve and sustain autonomy software in pursuit of mission success. Vehicle software must be developed, tested, and deployed faster than ever for that vehicle to remain relevant. This requires investing in infrastructure that accelerates the development and test lifecycle, from assembling data for perception development to capturing and triaging failure cases observed in live operations.

Applied Intuition Defense’s Approach

At Applied Intuition Defense, we build best-in-class digital engineering tools for leading autonomy programs - from the world’s largest automotive manufacturers (OEMs) and eVTOL startups to the Department of Defense. Contact us to learn more about how we can accelerate your program.