Turbulence is ubiquitous in environmental and engineering fluid flows, and is encountered routinely in everyday life. A better understanding of these turbulent processes could provide valuable insights across a variety of research areas — improving the prediction of cloud formation by atmospheric transport and the spread of wildfires by turbulent energy exchange, understanding sedimentation of deposits in rivers, and improving the efficiency of combustion in aircraft engines to reduce emissions, to name a few. However, despite its importance, our current understanding and our ability to reliably predict such flows remain limited. This is mainly attributed to the highly chaotic nature of these flows and the enormous range of spatial and temporal scales they occupy, from energetic, large-scale motions on the order of several meters at the high end, where energy is injected into the flow, all the way down to micrometers (μm) at the low end, where the turbulence is dissipated into heat by viscous friction.
A powerful tool for understanding these turbulent flows is the direct numerical simulation (DNS), which provides a detailed representation of the unsteady three-dimensional flow field without approximations or simplifications. More specifically, this approach uses a discrete grid with small enough grid spacing to capture the underlying continuous equations that govern the dynamics of the system (in this case, the variable-density Navier-Stokes equations, which govern all fluid-flow dynamics). When the grid spacing is sufficiently small, the discrete grid points suffice to represent the true (continuous) equations without loss of accuracy. While this is attractive, such simulations require tremendous computational resources in order to capture the correct fluid-flow behaviors across such a wide range of spatial scales.
The actual span in spatial resolution over which direct numerical calculations must be applied depends on the task and is determined by the Reynolds number, which compares inertial to viscous forces. Typically, the Reynolds number ranges from 10² up to 10⁷ (even larger for atmospheric or interstellar problems). In 3D, the grid size required for adequate resolution scales roughly with the Reynolds number to the power of 4.5! Because of this strong scaling dependency, simulating such flows is generally limited to flow regimes with moderate Reynolds numbers, and typically requires access to high-performance computing systems with millions of CPU/GPU cores.
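As a rough illustration of what this scaling means in practice, the sketch below (our own arithmetic, not from the paper) computes the grid cost relative to a reference Reynolds number, assuming the stated Re^4.5 scaling:

```python
def relative_grid_cost(re, re_ref=1e2):
    """Grid-size cost relative to a reference Reynolds number,
    assuming the Re**4.5 scaling quoted above."""
    return (re / re_ref) ** 4.5

# Each 10x increase in Reynolds number multiplies the grid cost
# by 10**4.5, roughly a factor of 31,600.
for re in [1e2, 1e3, 1e4]:
    print(f"Re = {re:.0e}: relative cost {relative_grid_cost(re):,.0f}")
```

This is why doubling the Reynolds number is far more expensive than doubling the domain size.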
In “A TensorFlow simulation framework for scientific computing of fluid flows on tensor processing units”, we introduce a new simulation framework that enables the computation of fluid flows with TPUs. By leveraging the latest advances in TensorFlow software and TPU hardware architecture, this tool allows detailed large-scale simulations of turbulent flows at unprecedented scale, pushing the boundaries of scientific discovery and turbulence analysis. We demonstrate that the framework scales efficiently, accommodating larger problem sizes or, alternatively, improved run times — which is remarkable, since most large-scale distributed computation frameworks exhibit reduced efficiency with scaling. The software is available as an open-source project on GitHub.
Large-scale scientific computation with accelerators
The software program solves variable-density Navier-Stokes equations on TPU architectures utilizing the TensorFlow framework. The single-instruction, multiple-data (SIMD) method is adopted for parallelization of the TPU solver implementation. The finite distinction operators on a colocated structured mesh are solid as filters of the convolution operate of TensorFlow, leveraging TPU’s matrix multiply unit (MXU). The framework takes benefit of the low-latency high-bandwidth inter-chips interconnect (ICI) between the TPU accelerators. In addition, by leveraging the single-precision floating-point computations and extremely optimized executable by way of the accelerated linear algebra (XLA) compiler, it’s doable to carry out large-scale simulations with wonderful scaling on TPU {hardware} architectures.
This research effort demonstrates that graph-based TensorFlow, together with new kinds of ML-specific hardware, can be used as a programming paradigm to solve partial differential equations representing multiphysics flows. The latter is achieved by augmenting the Navier-Stokes equations with physical models that account for chemical reactions, heat transfer, and density changes, enabling, for example, simulations of cloud formation and wildfires.
It’s worth noting that this framework is the first open-source computational fluid dynamics (CFD) framework for high-performance, large-scale simulations to fully leverage the cloud accelerators that have become commonplace (and commoditized) with the advancement of machine learning (ML) in recent years. While our work focuses on TPU accelerators, the code can easily be adjusted to other accelerators, such as GPU clusters.
This framework demonstrates a way to greatly reduce the cost and turnaround time associated with running large-scale scientific CFD simulations, enabling faster iteration in fields such as climate and weather research. Since the framework is implemented in TensorFlow, an ML language, it also allows ready integration with ML methods and the exploration of ML approaches to CFD problems. With the general accessibility of TPU and GPU hardware, this approach lowers the barrier for researchers to contribute to our understanding of large-scale turbulent systems.
Framework validation and homogeneous isotropic turbulence
Beyond demonstrating performance and scaling capabilities, it is also critical to validate the correctness of the framework, to ensure that it produces sound results when applied to CFD problems. For this purpose, researchers typically use idealized benchmark problems during CFD solver development, many of which we adopted in our work (more details in the paper).
One such benchmark for turbulence analysis is homogeneous isotropic turbulence (HIT), a canonical and well-studied flow in which the statistical properties, such as kinetic energy, are invariant under translations and rotations of the coordinate axes. By pushing the resolution to the limits of the current state of the art, we were able to perform direct numerical simulations with more than eight billion degrees of freedom — equivalent to a three-dimensional mesh with 2,048 grid points along each of the three directions. We used 512 TPU-v4 cores, distributing the computation of the grid points along the x, y, and z axes over a [2,2,128] core layout, respectively, optimized for performance on TPU. The wall clock time per timestep was around 425 milliseconds, and the flow was simulated for a total of 400,000 timesteps. 50 TB of data, including the velocity and density fields, was stored across 400 saved timesteps (every 1,000th step). To our knowledge, this is one of the largest turbulent flow simulations of its kind conducted to date.
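The reported mesh and core counts can be sanity-checked with simple arithmetic (our own, not code from the framework):

```python
# A 2,048^3 mesh distributed over a [2, 2, 128] core layout (512 cores).
nx = ny = nz = 2048
cores = (2, 2, 128)

degrees_of_freedom = nx * ny * nz  # total grid points: ~8.6 billion
local_shape = (nx // cores[0], ny // cores[1], nz // cores[2])

print(degrees_of_freedom)  # 8589934592
print(local_shape)         # (1024, 1024, 16) grid points per core
```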
Due to the complex, chaotic nature of the turbulent flow field, which spans several orders of magnitude in scale, simulating the system at high resolution is essential. Because we employ a fine-resolution grid with eight billion points, we are able to accurately resolve the field.
Contours of the x-component of velocity along the z midplane. The high resolution of the simulation is critical to accurately represent the turbulent field.
The turbulent kinetic energy and dissipation rate are two statistical quantities commonly used to analyze a turbulent flow. The temporal decay of these properties in a turbulent field without additional energy injection is due to viscous dissipation, and the decay asymptotes follow the expected analytical power law. This agrees with the theoretical asymptotes and observations reported in the literature, and thus validates our framework.
Solid line: Temporal evolution of turbulent kinetic energy (k). Dashed line: Analytical power laws for decaying homogeneous isotropic turbulence (n=1.3) (τl: eddy turnover time).
Solid line: Temporal evolution of dissipation rate (ε). Dashed line: Analytical power laws for decaying homogeneous isotropic turbulence (n=1.3).
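The decay asymptotes quoted in the captions above can be sketched numerically. Here n = 1.3 is the exponent from the figures; the prefactors are illustrative, and the relation ε = -dk/dt is the standard energy balance for decaying turbulence:

```python
import numpy as np

# Decaying HIT asymptotes: k(t) ~ t**(-n) with n = 1.3 (the exponent
# quoted in the figures). Since the dissipation rate is the rate of
# loss of kinetic energy, eps(t) = -dk/dt ~ t**(-(n+1)).
n = 1.3
t = np.linspace(1.0, 10.0, 100)   # time in eddy-turnover units
k = t ** (-n)                     # turbulent kinetic energy (illustrative prefactor)
eps = n * t ** (-(n + 1.0))       # dissipation rate, eps = -dk/dt

# The two asymptotes are mutually consistent: differentiating k
# numerically recovers eps.
eps_numeric = -np.gradient(k, t)
```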
The energy spectrum of a turbulent flow represents the energy content across wavenumbers, where the wavenumber k is proportional to the inverse wavelength λ (i.e., k ∝ 1/λ). Generally, the spectrum can be qualitatively divided into three ranges: the source range, the inertial range, and the viscous dissipative range (from left to right on the wavenumber axis, below). The lowest wavenumbers in the source range correspond to the largest turbulent eddies, which contain the most energy. These large eddies transfer energy to turbulence at the intermediate wavenumbers (the inertial range), which is statistically isotropic (i.e., essentially uniform in all directions). The smallest eddies, corresponding to the largest wavenumbers, are dissipated into thermal energy by the viscosity of the fluid. By virtue of the fine grid having 2,048 points in each of the three spatial directions, we are able to resolve the flow field down to the length scale at which viscous dissipation takes place. This direct numerical simulation approach is the most accurate, as it does not require any closure model to approximate the energy cascade below the grid size.
Spectrum of turbulent kinetic energy at different time instances. The spectrum is normalized by the instantaneous integral length (l) and the turbulent kinetic energy (k).
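As a rough sketch of how such a spectrum is computed from a periodic field (our own illustration, not the framework's post-processing code), shown in 1D for brevity:

```python
import numpy as np

def energy_spectrum_1d(u):
    """Energy content per wavenumber of a 1D periodic signal.
    In 3D, the spectrum is typically obtained by binning |FFT(u)|^2
    over spherical shells in wavenumber space; this 1D version shows
    the basic idea."""
    n = len(u)
    u_hat = np.fft.rfft(u) / n
    return np.abs(u_hat) ** 2  # E[k] for k = 0 .. n//2

# A field with all of its energy at wavenumber 4 produces a single
# spectral peak at k = 4.
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
spectrum = energy_spectrum_1d(np.sin(4.0 * x))
print(int(np.argmax(spectrum)))  # 4
```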
A new era for turbulent flow research
More recently, we extended this framework to predict wildfires and atmospheric flows, which is relevant for climate-risk assessment. Beyond enabling high-fidelity simulations of complex turbulent flows, the framework also provides capabilities for scientific machine learning (SciML) — for example, downsampling from a fine to a coarse grid (model reduction), or building models that run at lower resolution while still capturing the correct dynamic behaviors. It could also open avenues for further scientific discovery, such as building ML-based models to better parameterize the microphysics of turbulent flows, including physical relationships between temperature, pressure, vapor fraction, etc., and could improve upon various control tasks, e.g., reducing the energy consumption of buildings or finding more efficient propeller shapes. While attractive, a main bottleneck in SciML has been the availability of training data. To address this, we have been working with groups at Stanford and Kaggle to make the data from our high-resolution HIT simulation available through a community-hosted web platform, BLASTNet, providing broad access to high-fidelity data for the research community via a network-of-datasets approach. We hope that the availability of these emerging high-fidelity simulation tools, in conjunction with community-driven datasets, will lead to significant advances in many areas of fluid mechanics.
Acknowledgements
We would like to thank Qing Wang, Yi-Fan Chen, and John Anderson for consulting and advice, and Tyler Russell and Carla Bromberg for program management.