Integration

One major integration goal is the coupling of Exasim and ΣMIT, which will catalyze shared software development across the project. Within this task, we will explore the integration of CS/Exascale, UQ, predictive science, and ML technologies. For ML, we will focus on deploying ENNs for chemically accurate MD simulation and for constitutive models in FEM simulations. To bridge the spatiotemporal scales of the simulations, we will develop ROMs for both fluids and structures that can be deployed in the coupled model.
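The ROM construction above can be sketched with proper orthogonal decomposition (POD), a standard projection-based reduction technique. This is a minimal illustration on synthetic snapshot data, not the project's actual solver pipeline; all names and the 99.9% energy threshold are illustrative.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is a full-order state
# (e.g., a discretized fluid field) at one time step or parameter value.
# A low-rank product stands in for real simulation output.
rng = np.random.default_rng(0)
n_dof, n_snap = 200, 30
snapshots = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snap))

# POD: left singular vectors give the reduced basis; squared singular
# values measure how much "energy" each mode retains.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)  # smallest r keeping 99.9%
basis = U[:, :r]

# Project a full-order state onto the reduced space and reconstruct.
x_full = snapshots[:, 0]
x_reduced = basis.T @ x_full   # r coefficients instead of n_dof values
x_approx = basis @ x_reduced
rel_err = np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full)
```

In the coupled setting, separate bases would be built for the fluid and structural states, with the reduced coefficients exchanged at the coupling interface.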

We aim to use forward UQ (via sensitivity analysis and dimension reduction) to inform the selection of reduced inputs for ML models, as part of a broader effort to develop UQ for ENNs. Later-stage UQ will incorporate data from validation experiments and allow us to assess our models. Our OED effort is also intrinsically integrative, as it “closes the loop” between the experimental effort and the simulation tools.
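As a concrete (and deliberately simplified) sketch of using sensitivity analysis to select reduced inputs for an ML model: rank inputs by the average magnitude of the output gradient over sampled inputs, and keep only the dominant ones. The model, sample counts, and finite-difference step below are all illustrative stand-ins for the real coupled simulation and its AD-computed gradients.

```python
import numpy as np

# Hypothetical scalar quantity of interest with many inputs, only a
# few of which matter; a stand-in for a coupled-simulation output.
def model(x):
    return 3.0 * x[0] + 0.5 * x[3] ** 2 + 1e-4 * np.sum(x[5:])

rng = np.random.default_rng(1)
dim, n_samples, h = 10, 50, 1e-6
grads = np.zeros((n_samples, dim))
for i, x in enumerate(rng.uniform(-1.0, 1.0, size=(n_samples, dim))):
    fx = model(x)
    for j in range(dim):  # forward finite differences (AD in practice)
        xp = x.copy()
        xp[j] += h
        grads[i, j] = (model(xp) - fx) / h

# Root-mean-square sensitivity per input over the sample set; the
# top-ranked inputs become the reduced inputs fed to the ML surrogate.
score = np.sqrt(np.mean(grads**2, axis=0))
reduced_inputs = sorted(np.argsort(score)[::-1][:2].tolist())
```

Gradient-based dimension reduction of this flavor (e.g., active subspaces) generalizes the per-coordinate ranking to dominant directions in input space.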

Our exascale technologies will be integrated directly over time with the predictive science, ML, and UQ efforts. Initially, we will focus on performance as we directly integrate DSLs, Julia, and OpenCilk/Kitsune into the coupled simulation. We will use our suite of Julia libraries, DSLs, and compiler technologies to prototype and develop performance optimizations for the existing simulation software—especially to address exascale challenges such as scheduling and data distribution across coupled components.
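To make the data-distribution concern concrete, here is a minimal sketch of evenly partitioning a coupled component's work across ranks — the kind of decision the compiler tooling would help automate and tune, shown here as plain code with illustrative names.

```python
# Partition n_items of work across n_ranks so block sizes differ by at
# most one; a stand-in for the scheduling/data-distribution choices
# made when coupling components at scale.

def block_partition(n_items, n_ranks):
    """Return per-rank (start, stop) index ranges covering [0, n_items)."""
    base, extra = divmod(n_items, n_ranks)
    bounds, start = [], 0
    for rank in range(n_ranks):
        # The first `extra` ranks absorb one leftover item each.
        stop = start + base + (1 if rank < extra else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

parts = block_partition(10, 3)  # [(0, 4), (4, 7), (7, 10)]
```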

By using compilers and DSLs, we seek to avoid rewriting simulation components while improving the performance and portability of existing codes. Julia will also be used to develop and evaluate new simulation component prototypes, especially for the coupling aspects. All three of these technologies—DSLs, Julia, and OpenCilk/Kitsune—will gradually be integrated with Enzyme (starting with DSLs and ending with OpenCilk/Kitsune) to support fast gradients for our UQ efforts.

To enable gradient-based approaches for UQ, we plan to integrate Enzyme, the FEM and ENN DSLs, and Julia to generate efficient gradient computations within the simulation software. By leveraging a shared compiler toolkit, we aim to combine automatic and custom gradient-generation approaches that incorporate domain knowledge to enhance performance. This will allow us to compute gradients across coupled models more effectively.
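The idea of combining automatic and custom gradient generation can be illustrated with a toy reverse-mode AD: addition and multiplication differentiate automatically, while a hand-derived rule can be supplied for a primitive whose internals the tool should not trace. This is a conceptual sketch only, standing in for Enzyme-style AD combined with DSL-generated derivative rules; the class and function names are invented for illustration.

```python
import math

class Var:
    """Toy reverse-mode AD node recording local partial derivatives."""

    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the chain rule backward through the recorded graph.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

def custom_primitive(x, fn, dfn):
    # "Custom" path: a hand-derived gradient rule (domain knowledge)
    # is attached instead of tracing through fn's internals.
    return Var(fn(x.value), [(x, dfn(x.value))])

x = Var(0.5)
y = x * x + custom_primitive(x, math.sin, math.cos)  # f(x) = x^2 + sin x
y.backward()
# dy/dx = 2x + cos(x), accumulated from both the automatic and custom rules
```

In the project setting, the automatic path corresponds to Enzyme differentiating LLVM-level code, while the custom path corresponds to DSL-generated or hand-written derivatives registered for performance-critical kernels.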

We will integrate Enzyme’s automatic differentiation with gradient computations written or generated using DSLs, Julia, and other ad hoc means, optimizing performance with both compiler-level tools and domain-specific insight. Ultimately, we aim to bring all three technologies together through MLIR/LLVM.

We will build on our CESMIX work integrating OpenCilk/Kitsune and Julia via an OpenCilk-based GPUCompiler.jl backend called Chi. Additionally, we will design our DSLs to target a common representation, ideally built from MLIR dialects or a novel MLIR dialect. This shared representation will enable co-optimization between DSL-generated code and OpenCilk/Kitsune, ensuring smooth integration.
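The benefit of a common representation can be sketched in miniature: once a frontend lowers to a shared list-of-ops IR, a single optimization pass serves every frontend that targets it. The toy IR below is an illustrative stand-in for a shared MLIR dialect, not MLIR itself; opcodes and SSA-style names are invented.

```python
# Toy shared IR: ops are (dest, opcode, operands). Any DSL frontend
# that emits these tuples can reuse the same passes.

def lower_arith(expr):
    """Illustrative frontend: lowers nested tuples like ('add', 2, 3)."""
    ops, counter = [], [0]
    def emit(e):
        if not isinstance(e, tuple):
            return e  # literal operand
        dest = f"%{counter[0]}"
        counter[0] += 1
        ops.append((dest, e[0], [emit(a) for a in e[1:]]))
        return dest
    emit(expr)
    return ops

def fold_constants(ops):
    """Shared pass: evaluate ops whose operands are all literals."""
    env, remaining = {}, []
    for dest, opcode, args in ops:
        vals = [env.get(a, a) for a in args]
        if all(isinstance(v, (int, float)) for v in vals):
            env[dest] = vals[0] + vals[1] if opcode == "add" else vals[0] * vals[1]
        else:
            remaining.append((dest, opcode, vals))
    return remaining, env

ops = lower_arith(("mul", ("add", 2, 3), 4))
remaining, env = fold_constants(ops)
# The whole expression folds away: remaining is empty, env["%0"] == 20.
```

With OpenCilk/Kitsune targeting the same representation, parallelism and DSL semantics become visible to the same passes, which is the co-optimization opportunity described above.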

Finally, we emphasize that much of this integration will occur in a modular fashion. Our goal is for the overall codebase to run independently of many of these tools if needed, while respecting existing resource management choices. Accordingly, our CS tools will adapt to those choices—especially with respect to MPI—necessitating careful design of interfaces.
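One common pattern for this kind of modularity is an optional backend with a graceful fallback: the code runs without the tool and never claims resources the host application manages. The sketch below uses an invented module name (`accelerated_kernels`) purely for illustration.

```python
# Sketch of modular tool integration: use an optional accelerated
# backend when present, otherwise fall back to a plain implementation.
# The code never initializes shared resources (e.g., MPI) itself; it
# defers to whatever the host application has already set up.

def load_dot_backend():
    try:
        import accelerated_kernels  # hypothetical optional dependency
        return accelerated_kernels.dot
    except ImportError:
        # Pure-Python fallback keeps the coupled code runnable when
        # the optional tooling is absent.
        return lambda a, b: sum(x * y for x, y in zip(a, b))

dot = load_dot_backend()
result = dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # 32.0
```

Keeping the fallback path first-class also simplifies testing: the same driver exercises both configurations, so the interface between the tools and the simulation stays honest.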