VentureBeat presents: AI Unleashed - An exclusive executive event for enterprise data leaders. Network and learn with industry peers. Learn More

Simulation has emerged as a critical technology for helping businesses shorten time-to-market and lower design costs. Engineers and researchers use simulation for a variety of applications, including:

  • Using a virtual model (also known as a digital twin) to simulate and test their complex systems early and often in the design process.
  • Maintaining a digital thread with traceability through requirements, system architecture, component design, code and tests.
  • Extending their systems to perform predictive maintenance (PdM) and fault analysis.

Many organizations are improving their simulation capabilities by incorporating artificial intelligence (AI) into their model-based design. Historically, the two fields have been separate, but they create significant value for engineers and researchers when used together effectively. Their strengths and weaknesses are complementary, helping businesses solve three primary challenges.

Challenge 1: Better training data for more accurate AI models with simulation

Simulation models can synthesize good, clean and cataloged data that would be difficult or expensive to collect in the real world. Most AI models run with fixed parameter values, yet they are constantly exposed to new data that may not be represented in the training set. If this goes unnoticed, the models will generate inaccurate insights or fail outright, leaving engineers to spend hours determining why the model is not working.
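As a minimal sketch of this idea, a simple physics model can stand in for the kind of simulator described here. The damped oscillator below is hypothetical; each run yields a clean, labeled training example for an operating condition that might be costly to reproduce physically:

```python
import random

def simulate_oscillator(damping, stiffness, n_steps=200, dt=0.05,
                        noise=0.01, seed=0):
    """Simulate a damped oscillator and return noisy position readings.

    Each run yields a labeled training example: the trace is the input,
    the known (damping, stiffness) pair is the label.
    """
    rng = random.Random(seed)
    x, v = 1.0, 0.0                       # initial position and velocity
    trace = []
    for _ in range(n_steps):
        a = -stiffness * x - damping * v  # acceleration from the model
        v += a * dt
        x += v * dt
        trace.append(x + rng.gauss(0.0, noise))  # add simulated sensor noise
    return trace

# Sweep operating conditions that would be expensive to measure in hardware.
dataset = [(simulate_oscillator(d, k), (d, k))
           for d in (0.1, 0.5, 1.0)
           for k in (1.0, 2.0)]
```

Every sample in `dataset` is automatically cataloged with the exact conditions that produced it, which is the property that makes simulated data attractive for training.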

Simulation can help engineers overcome these challenges. Rather than tweaking the AI model’s architecture and parameters, engineers can often achieve larger accuracy gains by spending that time improving the training data.


With a model’s performance so dependent on the quality of its training data, engineers can improve outcomes through an iterative process: simulate data, update the AI model, observe which conditions it predicts poorly, and collect more simulated data for those conditions.
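That iterative loop can be sketched in a few lines. Everything here is a toy stand-in: the "simulator" draws noisy samples around a per-condition mean, and the "model" is just a per-condition average, but the simulate → train → evaluate → resimulate structure is the point:

```python
import random

# Hypothetical operating conditions and their true (unknown to the model) means.
MEANS = {"low_temp": -2.0, "nominal": 0.0, "high_temp": 2.0}

def simulate(condition, n=50, seed=0):
    """Stand-in simulator: noisy samples around a per-condition mean."""
    rng = random.Random(seed * 10 + list(MEANS).index(condition))
    return [(MEANS[condition] + rng.gauss(0.0, 0.3), condition)
            for _ in range(n)]

def train(data):
    """Toy 'model': the per-condition mean of its training samples."""
    sums, counts = {}, {}
    for x, c in data:
        sums[c] = sums.get(c, 0.0) + x
        counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def per_condition_error(model, data):
    """Mean absolute error per condition (unseen conditions score badly)."""
    errs = {}
    for x, c in data:
        errs.setdefault(c, []).append(abs(model.get(c, 0.0) - x))
    return {c: sum(v) / len(v) for c, v in errs.items()}

train_data = simulate("nominal")        # start with one operating condition
for round_no in range(1, 4):
    model = train(train_data)
    holdout = [s for c in MEANS for s in simulate(c, seed=99)]
    errors = per_condition_error(model, holdout)
    worst = max(errors, key=errors.get)             # worst-predicted condition
    train_data += simulate(worst, seed=round_no)    # simulate more data there
```

After a few rounds, the training set covers every condition the holdout evaluation exposed as a weakness, without touching the model architecture at all.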

Challenge 2: AI for new in-product features

Simulation has become a vital part of the design process for engineers using embedded systems for applications such as control systems and signal processing. In many cases, these engineers are developing virtual sensors: devices that calculate a value that isn’t directly measured from the available sensors. Traditional estimation methods are limited in their ability to capture the nonlinear behavior present in many real-world systems, so engineers are turning to AI-based approaches flexible enough to model those complexities. They use data (either measured or simulated) to train an AI model that predicts the unobserved state from the observed states, then integrate that model with the system.
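A minimal sketch of the idea, under invented assumptions: suppose a rotor temperature has no physical sensor, but current and speed are measured, and a nonlinear stand-in "plant" generates training data. A simple k-nearest-neighbor regressor then acts as the virtual sensor (the device names and physics below are illustrative, not a real system):

```python
import math
import random

def rotor_temp(current, speed):
    """Stand-in plant model: a nonlinear map used only to generate data."""
    return 25.0 + 4.0 * current**2 + 0.002 * speed * current

rng = random.Random(0)
train_X, train_y = [], []
for _ in range(2000):
    i = rng.uniform(0.0, 10.0)       # measured current (A)
    s = rng.uniform(0.0, 3000.0)     # measured speed (rpm)
    train_X.append((i, s / 300.0))   # scale speed into current's range
    train_y.append(rotor_temp(i, s) + rng.gauss(0.0, 0.5))  # sensor noise

def virtual_sensor(current, speed, k=5):
    """Predict the unobserved state (temperature) from observed states
    by averaging the k nearest training examples."""
    x = (current, speed / 300.0)
    nearest = sorted((math.dist(x, xi), yi)
                     for xi, yi in zip(train_X, train_y))[:k]
    return sum(y for _, y in nearest) / k
```

For example, `virtual_sensor(3.0, 1500.0)` should land close to the plant's true value of 70.0, despite the nonlinear relationship that a fixed linear formula would miss.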

In this case, the AI model is part of the controls algorithm that ends up on the physical hardware and usually needs to be programmed in a lower-level language, like C/C++. These requirements can impose restrictions on the types of machine learning models appropriate for such applications, so technical professionals may need to try multiple models and compare trade-offs in accuracy and on-device performance.
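Such a trade-off study can be sketched as follows. Two toy candidates, a least-squares line (cheap enough for any microcontroller) and a nearest-neighbor model (more flexible, slower per prediction), are compared on the same held-out data for accuracy and per-call latency; the data and models here are illustrative stand-ins:

```python
import random
import time

rng = random.Random(1)
data = [(x, 2.0 * x + 0.5 * x * x + rng.gauss(0, 0.1))
        for x in (rng.uniform(0, 5) for _ in range(1000))]
train, test = data[:800], data[800:]

def fit_linear(samples):
    """Least-squares line y = a*x + b: trivial to port to C/C++."""
    n = len(samples)
    sx = sum(x for x, _ in samples); sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def fit_knn(samples, k=7):
    """Nearest-neighbor average: more accurate here, costlier on-device."""
    def predict(x):
        nearest = sorted(samples, key=lambda p: abs(p[0] - x))[:k]
        return sum(y for _, y in nearest) / k
    return predict

results = {}
for name, model in (("linear", fit_linear(train)), ("knn", fit_knn(train))):
    start = time.perf_counter()
    mae = sum(abs(model(x) - y) for x, y in test) / len(test)
    latency_us = (time.perf_counter() - start) / len(test) * 1e6
    results[name] = (mae, latency_us)   # accuracy vs. inference cost
```

On this quadratic ground truth the nearest-neighbor model wins on error while the linear model wins on latency, which is exactly the kind of trade-off that decides what ships on the hardware.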

At the forefront of research in this area, reinforcement learning takes this approach further. Rather than learning just the estimator, reinforcement learning learns the entire control strategy. The technique has proved effective in challenging applications such as robotics and autonomous systems, but building this type of model requires an accurate model of the environment, which is never a guarantee, as well as massive computational power to run a large number of simulations.
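At a much smaller scale, the idea can be sketched with tabular Q-learning on a toy one-dimensional task (the environment, reward, and hyperparameters are illustrative stand-ins): many cheap simulated rollouts let the agent learn the whole control strategy, not just an estimator.

```python
import random

N_STATES, TARGET = 10, 9
ACTIONS = (-1, +1)                      # move left / move right

def step(state, action):
    """Simulated environment: deterministic motion, reward at the target."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == TARGET else 0.0), nxt == TARGET

rng = random.Random(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                    # many cheap simulated rollouts
    state = 0
    for _ in range(100):
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)            # explore
        else:
            best = max(q[(state, a)] for a in ACTIONS)
            action = rng.choice(                    # exploit, random tie-break
                [a for a in ACTIONS if q[(state, a)] == best])
        nxt, reward, done = step(state, action)
        td_target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (td_target - q[(state, action)])
        state = nxt
        if done:
            break

# Greedy policy: the direction the learned controller picks in each state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
```

Even this trivial task needs hundreds of simulated episodes to converge, which hints at why realistic environments demand so much simulation compute.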

Challenge 3: Balancing ‘right’ vs. ‘right now’

Businesses have always struggled with time-to-market. Organizations that push a buggy or defective solution to customers risk irreparable harm to their brand, particularly startups. But moving too slowly carries its own risk: “also-rans” in an established market have difficulty gaining traction. Simulations were an important design innovation when they were first introduced, and their steady improvement and ability to create realistic scenarios can tempt perfectionist engineers to keep refining. Too often, organizations try to build “perfect” simulation models that take so long to develop that the market may have moved on.

To find the proper balance between speed and quality, technical professionals must acknowledge that there will always be environmental nuances that cannot be simulated. AI models should never be trusted blindly, even when they serve as approximations for complex, high-fidelity systems.

The future of AI for simulation

AI and simulation technologies have each built and maintained momentum for nearly a decade. Now, engineers are beginning to see significant value at their intersection, given the complementary nature of their strengths and weaknesses.

As models continue to serve increasingly complex applications, AI and simulation will become even more essential tools in the engineer’s toolbox. With the ability to develop, test and validate models in an accurate and affordable way, these methodologies will only continue to grow in use.

Seth DeLand is data analytics product marketing manager at MathWorks.

