December 5

“Colorful fluid dynamics” and overconfidence in global climate models

From Climate Etc.

by David Young

This post lays out in fairly complete detail some basic facts about Computational Fluid Dynamics (CFD) modeling. This technology is the core of all general circulation models of the atmosphere and oceans, and hence global climate models (GCMs).  I discuss some common misconceptions about these models, which lead to overconfidence in these simulations. This situation is related to the replication crisis in science generally, whereby much of the literature is affected by selection and positive results bias.

A full-length version of this article, including voluminous references, can be found at [ lawsofphysics1 ]. See also this publication [ onera ].

Numerical simulation over the last 60 years has come to play a larger and larger role in engineering design and scientific investigations. The level of detail and physical modeling varies greatly, as do the accuracy requirements. For aerodynamic simulations, accurate drag increments between configurations have high value. In climate simulations, a widely used target variable is temperature anomaly. Both drag increments and temperature anomalies are particularly difficult to compute accurately. The reason is simple: both output quantities are several orders of magnitude smaller than the overall absolute levels of momentum for drag or energy for temperature anomalies. This means that without tremendous effort, the output quantity is smaller than the numerical truncation error. Great care can sometimes provide accurate results, but careful numerical control over all aspects of complex simulations is required.
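
To make the size mismatch concrete, here is a back-of-the-envelope sketch; the numbers are purely illustrative and not taken from any particular simulation.

```python
# Illustrative arithmetic only: when the target output is a small difference
# between large quantities, ordinary truncation error can swamp it.

absolute_level            = 1.0      # normalized absolute level (total momentum or energy)
target_signal             = 1.0e-3   # drag increment / temperature anomaly, ~3 orders smaller
relative_truncation_error = 1.0e-2   # plausible relative error of an under-resolved scheme

numerical_error = absolute_level * relative_truncation_error
print(f"signal = {target_signal:.0e}, numerical error = {numerical_error:.0e}")
# The numerical error (1e-2) exceeds the signal (1e-3) by an order of magnitude,
# so the small output is meaningful only if errors cancel between the cases being
# differenced or the numerics are controlled far more tightly than usual.
```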

Contrast this with fields of science where only general understanding is sought; there, qualitatively interesting results are easier to produce. This is known in the parlance of the field as “Colorful Fluid Dynamics.” While the term is somewhat pejorative, these simulations do have their place. It cannot be stressed too strongly, however, that even the broad “patterns” can be quite wrong. Only after extensive validation can such simulations be trusted qualitatively, and even then only for the class of problems used in the validation. Such a validation process for one aeronautical CFD code consumed perhaps 50-100 man-years of effort in a setting where high-quality data was generally available. What is all too common among non-specialists is to conflate the two usage regimes (colorful versus validated) or to assume that realistic-looking results imply quantitatively meaningful results.

The first point is that some fields of numerical simulation are very well founded on rigorous mathematical theory. Two that come to mind are electromagnetic scattering and linear structural dynamics. Electromagnetic scattering is governed by Maxwell’s equations, which are linear. The theory is well understood, and very good numerical simulations are available; generally, it is possible to develop accurate methods that provide high-quality quantitative results. Structural modeling in the linear elasticity range is likewise governed by well-posed elliptic partial differential equations.

The Earth system with its atmosphere and oceans is much more complex than most engineering systems, and thus the models are far more complex. However, the heart of any General Circulation Model (GCM) is a “dynamic core” that embodies the Navier-Stokes equations. Primarily, the added complexity is manifested in the many highly complex subgrid models. However, at some fundamental level a GCM is computational fluid dynamics. In fact, GCMs were among the first efforts to solve the Navier-Stokes equations, and many of the initial problems, such as the removal of sound waves, were solved by the pioneers of the field. There is a positive feature of this history in that the methods and codes tend to be well optimized within the universe of methods and computers currently used. The downside is that there can be a very high cost to building a new code or inserting a new method into an existing code. In any such effort, even real improvements will at first appear to be inferior to the existing technology. This is a huge impediment to progress and to the penetration of more modern methods into the codes.

The best technical argument I have heard in defense of GCMs is that Rossby waves are vastly easier to model than aeronautical flows, where the pressure gradients and forcing can be much higher. There is some truth in this argument. The large-scale vortex evolution in the atmosphere on shorter time scales is relatively unaffected by turbulence and viscous effects, even though at finer scales the problem is ill-posed. However, there are many other at least equally important components of the Earth system. An important one is tropical convection, a classical ill-posed problem because of the large-scale turbulent interfaces and shear layers. Free-air turbulence, while usually neglected in aeronautical calculations, is in many cases very large in the atmosphere; nonetheless it is typically neglected outside the boundary layer in GCMs. And of course there are clouds, convection, and precipitation, which have a very significant effect on the overall energy balance. One must also bear in mind that aeronautical vehicles are designed to be stable and to minimize the effects of ill-posedness, in that pathological nonlinear behaviors are avoided. In this sense aeronautical flows may actually be easier to model than the atmosphere. In any case, aeronautical simulations are greatly simplified by a number of assumptions, for example that the onset flow is steady and essentially free of atmospheric turbulence. Aeronautical flows can often be assumed to be essentially isentropic outside the boundary layer.

As will be argued below, the CFD literature is affected by positive results and selection bias. In the last 20 years, there has been increasing consciousness of and documentation of the strong influence that biased work can have on the scientific literature. It is perhaps best documented in the medical literature where the scientific communities are very large and diverse. These biases must be acknowledged by the community before they can be addressed. Of course, there are strong structural problems in modern science that make this a difficult thing to achieve.

Fluid dynamics is a much more difficult problem than electromagnetic scattering or linear structures. First, many of the problems are ill-posed or nearly so. As is perhaps to be expected with nonlinear systems, there are also often multiple solutions. Even in steady RANS (Reynolds Averaged Navier-Stokes) simulations there can be sensitivity to initial conditions, numerical details, or gridding. The AIAA Drag Prediction Workshop series has shown high levels of variability in CFD simulations even for attached, mildly transonic and subsonic flows. These problems are far more common than reported in the literature.

Another problem associated with nonlinearity in the equations is turbulence, basically defined as small-scale fluctuations with random statistical properties. There is still some debate about whether turbulence is completely represented by accurate solutions to the Navier-Stokes equations, though most experts believe that it is. But the most critical difficulty is the fact that in most real-life applications the Reynolds number is high or very high. The Reynolds number represents roughly the ratio of inertial forces to viscous forces. One might think that if the viscous forcing were 4 to 7 orders of magnitude smaller than the inertial forcing (as it is, for example, in many aircraft and atmospheric simulations), it could be neglected. Nothing could be further from the truth. The inclusion of these viscous forces often results in an O(1) change even in total forces. Certainly, the effect on smaller quantities like drag is large and critical to successful simulations in most situations. Thus, most CFD simulations are inherently numerically difficult, and simplifications and approximations are required. There is a vast literature on these subjects going back to the introduction of the digital computer; John von Neumann made some of the first forays into understanding the behaviour of discrete approximations.
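
For readers who want the standard definition (the post does not spell it out), the Reynolds number is

```latex
\mathrm{Re} \;=\; \frac{\rho\, U L}{\mu} \;=\; \frac{U L}{\nu},
```

where $U$ and $L$ are characteristic velocity and length scales, $\rho$ the density, $\mu$ the dynamic viscosity, and $\nu$ the kinematic viscosity. With representative (illustrative) values for a transport-aircraft wing, $U \approx 250\ \mathrm{m/s}$, $L \approx 5\ \mathrm{m}$, $\nu \approx 1.5\times10^{-5}\ \mathrm{m^2/s}$, this gives $\mathrm{Re} \approx 8\times10^{7}$; large-scale atmospheric flows reach values higher still.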

The discrete problem sizes required for modeling fluid flows by resolving all the relevant scales grow as Reynolds number to the power 9/4 in the general case, assuming second-order numerical discretizations. Computational effort grows at least linearly with the discrete problem size multiplied by the number of time steps. Time steps must also decrease as the spatial grid is refined, both because of the stability requirements of the Courant-Friedrichs-Lewy condition and to control time discretization errors. The number of time steps grows as Reynolds number to the power 3/4, so overall computational effort grows as Reynolds number to the power 3. Thus, for almost all problems of practical interest, it is computationally impossible (and will be for the foreseeable future) to resolve all the important scales of the flow, and one must resort to subgrid models of the fluctuations not resolved by the grid. For many idealized engineering problems, turbulence is the primary effect that must be so modeled. In GCMs there are many more, such as clouds. References are given in the full paper for some other views that may not fully agree with the one presented here, in order to give people a feel for the range of opinion in the field.
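
To make the arithmetic concrete, a tiny sketch of that scaling follows; the Reynolds numbers are representative orders of magnitude only, not values quoted in the post.

```python
# Sketch of the operation-count scaling quoted above (illustrative only):
# fully resolved grid points ~ Re^(9/4), time steps ~ Re^(3/4), so work ~ Re^3.

def resolved_work(reynolds: float) -> float:
    grid_points = reynolds ** (9 / 4)
    time_steps  = reynolds ** (3 / 4)
    return grid_points * time_steps          # ~ Re^3

for re in (1e4, 1e6, 1e8):                    # rough orders: model test, wing, atmosphere
    print(f"Re = {re:.0e}: relative work ~ {resolved_work(re):.1e}")
# Each factor of 100 in Reynolds number multiplies the cost by ~1e6, which is why
# resolving all scales is out of reach and subgrid models are unavoidable.
```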

For modeling the atmosphere, the difficulties are immense. The Reynolds numbers are high and the turbulence levels are large but highly variable. Many of the supposedly small effects must be neglected based on scientific judgment. There are also large energy flows, evaporation, precipitation, and clouds, all of which are ignored in virtually all aerodynamic simulations. Ocean models require different methods because the ocean is essentially incompressible; this in some sense simplifies the underlying Navier-Stokes equations but adds mathematical difficulties.

2.1       The Role of Numerical Errors in CFD

Generally, the results of many steady-state aeronautical CFD simulations are reproducible and reliable for thin boundary- and shear-layer-dominated flows, provided there is little flow separation and the flow is subsonic. There are now a few codes capable of demonstrating grid convergence for the simpler geometries or lower Reynolds numbers. However, these simulations rest on many simplifying assumptions, and uncertainty is much larger for separated or transonic flows.

The contrast with climate models speaks for itself. Typical grid spacings in climate models often exceed 100 km, and their vertical grid resolution is almost certainly inadequate. Further, many of the models use spectral methods that are not fully stable, so various forms of filtering are used to remove undesirable oscillations. Finally, the many subgrid models are solved sequentially, adding another source of numerical error and making tuning problematic.

2.2       The Role of Turbulence and Chaos in Fluid Mechanics

In this section I describe some well-verified science from fluid mechanics that governs all Navier-Stokes simulations and that must inform any non-trivial discussion of weather or climate models. One of the problems in climate science is a lack of fundamental understanding of these basic conclusions of fluid mechanics or (as may perhaps be the case for some) a reluctance to discuss their consequences.

Turbulence models have advanced tremendously in the last 50 years, yet so far as I can tell climate models do not use the latest of them. Further, for large-scale vortical 3D flows, turbulence models remain quite inadequate. Nonetheless, proper modeling of turbulence by solving auxiliary differential equations is critical to achieving reasonable accuracy.

Just to give one fundamental problem that is a showstopper at the moment: how to control numerical error in any time-accurate eddy-resolving simulation. Classical methods fail. How can one tune such a model? One can tune it for a given grid and initial condition, but that tuning might fail on a finer grid or with different initial conditions. This problem is only now beginning to be explored and is of critical importance for predicting climate or any other chaotic flow.

When truncation errors are significant (as they are in most practical fluid dynamics simulations, particularly climate simulations), there is a constant danger of “overtuning” subgrid models, discretization parameters, or the hundreds of other parameters. The problem is that tuning a simulation too accurately to a few particular cases really amounts to getting large errors to cancel for those cases; skill will then actually be worse for cases outside the tuning set. In climate models the truncation errors are particularly large, and computational costs are too high to permit a systematic study of the size of the various errors. Thus tuning is problematic.
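
A toy example of this error-cancellation effect is sketched below. The construction is entirely mine and purely illustrative: a free parameter is tuned until it cancels a truncation-like error at one condition, and the apparent accuracy does not transfer to a new condition.

```python
import math

# Toy illustration (my construction, not from the post) of "overtuning":
# a free parameter is adjusted until two unrelated errors cancel on the tuning case;
# the apparent skill then does not carry over to cases outside the tuning set.

def truth(x):
    return math.sin(x)

def model(x, c, h=0.5):
    # surrogate with a truncation-like error term (~ h^2 * x^2) plus a tunable
    # "subgrid" coefficient c acting through a different functional form (~ x)
    return math.sin(x) - (h**2 / 6.0) * x**2 + c * x

x_tune = 1.0
c_tuned = (0.5**2 / 6.0) * x_tune   # chosen so the two error terms cancel exactly at x_tune
print("error at the tuning case:", abs(model(x_tune, c_tuned) - truth(x_tune)))  # ~ 0
print("error at a new case     :", abs(model(3.0,   c_tuned) - truth(3.0)))      # much larger
```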

2.3       Time Accurate Calculations – A Panacea?

All turbulent flows are time dependent and there is no true steady state. However, using Reynolds averaging, one can separate the flow field into a steady component and a hopefully small component consisting of the unsteady fluctuations. The unsteady component can then be modeled in various ways. The larger the truly unsteady component is, the more challenging the modeling problem becomes.
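
In the standard notation (which the post does not write out), Reynolds averaging splits each field into a mean and a fluctuation:

```latex
u_i(\mathbf{x},t) \;=\; \bar{u}_i(\mathbf{x}) + u_i'(\mathbf{x},t), \qquad \overline{u_i'} = 0 .
```

Averaging the momentum equations then leaves an extra term, the Reynolds stress $-\rho\,\overline{u_i' u_j'}$, which depends only on the fluctuations and must be supplied by a turbulence model; the larger the genuinely unsteady part, the more the answer rests on that model.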

One might be tempted to always treat the problem as time dependent. This has several challenges, however. At least in principle (though not always in practice), one can apply conventional numerical consistency checks in the steady-state case: one can check grid convergence, calculate parameter sensitivities cheaply using linearizations, and use the residual as a measure of reliability. All of these conveniences are either inapplicable to time-accurate simulations or much more difficult to assess. It should also be said that for the Navier-Stokes equations there is no rigorous proof that the infinite-grid limit exists or is unique; in fact, there is strong evidence for multiple solutions, some corresponding to states seen in testing and others not.

Time accurate simulations are also challenging because the numerical errors are in some sense cumulative, i.e., an error at a given time step will be propagated to all subsequent time steps. Generally, some kind of stability of the underlying continuous problem is required to achieve convergence. Likewise a stable numerical scheme is helpful.

For any chaotic time accurate simulation, classical methods of numerical error control fail. Because the initial value problem is ill-posed, the adjoint diverges. This is a truly daunting problem. We know numerical errors are cumulative and can grow nonlinearly, but our usual methods are completely inapplicable.

For chaotic systems, the main argument I have heard for time-accurate simulations being meaningful is “at least there is an attractor.” The thinking is that if the attractor is sufficiently attractive, then errors in the solution will die off, or at least remain bounded, and not materially affect the time-average solution or even the “climate” of the solution. The solution at any given time may be wildly inaccurate in detail, as Lorenz discovered, but the climate will (according to this argument) be correct. At least this is an argument that can be developed, eventually quantified, and proven or disproven. Paul Williams has a nice example of the large effect of the time step on the climate of the Lorenz system. Evidence is emerging of a similar effect due to spatial grid resolution for time-accurate Large Eddy Simulations, along with a disturbing lack of grid convergence. Further, the attractor may be only slightly attractive, and there will be bifurcation points and saddle points as well. And the attractor can be of very high dimension, meaning that tracing out all its parts could be a computationally monumental if not impossible task; so far, the bounds on attractor dimension are very large.

My suggestion would be to develop and fund a large, long-term research effort in this area with the best minds in the field of nonlinear theory, since theoretical understanding may not yet be adequate to address the problem computationally. There is some interesting work by Wang at MIT on shadowing that may eventually be computationally feasible and could address some of the stability issues for the long-term climate of the attractor. For the special case of periodic or nearly periodic flows, another approach that is more computationally tractable is windowing. This problem of time-accurate simulation of chaotic systems seems to me to be a very important unsolved question in fundamental science and mathematics, and one with tremendous potential impact across many fields.
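
A minimal numerical sketch in the spirit of that Lorenz-system example is given below; the scheme, time steps, and run length are my own illustrative choices, not Williams’ actual setup.

```python
import numpy as np

# Integrate the Lorenz equations with two different time steps and compare a
# long-time ("climate") statistic. Deliberately crude forward Euler is used so
# that the numerical error is visible; everything here is illustrative only.

def lorenz_mean_z(dt, t_end=200.0, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 1.0
    zs = []
    for n in range(int(t_end / dt)):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if n * dt > 50.0:                 # discard the initial transient
            zs.append(z)
    return np.mean(zs)

for dt in (0.005, 0.001):
    print(f"dt = {dt}: long-time mean of z ~ {lorenz_mean_z(dt):.2f}")
# Any drift of the time-averaged statistic with dt means the numerics are altering
# the "climate" of the attractor, not merely the individual trajectory.
```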

While climate modelers Palmer and Stevens’ 2019 short perspective note (see the full paper for the reference) is an excellent contribution by two unusually honest scientists, there is in my opinion reason for skepticism about their proposal to turn climate models into eddy-resolving simulations. Their assessment of climate models is in my view mostly correct and agrees with the thrust of this post, but there are a host of theoretical issues to be resolved before casting our lot with largely unexplored simulation methods that face serious theoretical challenges. Dramatic increases in resolution are obviously sorely needed in climate models, and dramatic improvements may be possible in subgrid models once resolution is improved. Just as an example, modern PDE-based models may make a significant difference. I don’t think anyone knows the outcome of these various steps toward improvement.

The “laws of physics” are usually thought of as conservation laws, the most important being conservation of mass, momentum, and energy. The conservation laws with appropriate source terms for fluids are the Navier-Stokes equations. These equations correctly represent the local conservation laws and offer the possibility of numerical simulations. This is expanded on in the full paper.

3.1       Initial Value Problem or Boundary Value Problem?

One often hears that “the climate of the attractor is a boundary value problem” and therefore it is predictable. This is nothing but an assertion with little to back it up. And of course, even assuming that the attractor is regular enough to be predictable, there is the separate question of whether it is computable with finite computing time. It is similar to the folk doctrine that turbulence models convert an ill-posed time dependent problem into a well posed steady state one. This doctrine has been proven to be wrong – as the prevalence of multiple solutions discussed above shows. However, those who are engaged in selling CFD have found it attractive despite its unscientific and effectively unverifiable nature.

A simple analogy for the climate system might be a wing, as Nick Stokes has suggested. As pointed out above, the drag of a well-designed wing is in some ways a good analogy for the temperature anomaly of the climate system. The climate may respond linearly to changes in forcings over a narrow range, but that tells us little. To be useful, one must know both the rate of the response and its value (the value of temperature matters, for example, for ice sheet response). These are strongly dependent on details of the dynamics of the climate system through nonlinear feedbacks.

Many use this analogy to try to transfer the credibility (not fully deserved) of CFD simulations of simple systems to climate models or other complex separated-flow simulations. This transfer is not justified. In any case, even simple aeronautical simulations can have very high uncertainty when used to simulate challenging flows.

3.2       Turbulence and Subgrid Models

Subgrid turbulence models have advanced tremendously over the last 50 years. The subgrid models must modify the Navier-Stokes equations if they are to have the needed effect. Turbulence models typically modify the true fluid viscosity by dramatically increasing it in certain parts of the flow, e.g., a boundary layer. The problem here is that these changes are not really based on the “laws of physics”, and certainly not on the conservation laws. The models are typically based on assumed relationships that are suggested by limited sets of test data or by simply fitting available test data. They tend to be very highly nonlinear and typically make an O(1) difference in the total forces. As one might guess, this is an area where controversy is rife. Most would characterize it as a very challenging problem, in fact one that will probably never be completely solved, so further research and controversy are a good thing.
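
The most common device of this kind is the eddy-viscosity (Boussinesq) closure, which in its textbook form (not specific to any particular code discussed here) replaces the Reynolds stress with an augmented viscosity:

```latex
-\overline{u_i' u_j'} \;=\; \nu_t \left( \frac{\partial \bar{u}_i}{\partial x_j}
 + \frac{\partial \bar{u}_j}{\partial x_i} \right) - \frac{2}{3}\, k\, \delta_{ij},
\qquad \nu_{\mathrm{eff}} \;=\; \nu + \nu_t ,
```

where the eddy viscosity $\nu_t$ comes from auxiliary transport equations (one- or two-equation models), $k$ is the turbulent kinetic energy, and $\nu_t$ can exceed the molecular viscosity $\nu$ by orders of magnitude inside a boundary layer. None of this follows from the conservation laws; $\nu_t$ is a modeling construct fitted to data.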

Negative results about subgrid models have begun to appear. One recent paper shows that cloud microphysics models have parameters that are not well constrained by data; using plausible values, ECS (equilibrium climate sensitivity) can be “engineered” over a significant range. Another interesting result shows that model results can depend strongly on the order in which the numerous subgrid models are solved in a given cell. In fact, the subgrid models should be solved simultaneously so that any tuning is more independent of the numerical details of the methods used; this is a fundamental principle of using such models and is the only way to ensure that tuning is meaningful. Indeed, many metrics for skill are poorly replicated by current-generation climate models: regional precipitation changes, cloud fraction as a function of latitude, Total Lower Troposphere temperature changes compared to radiosonde and satellite-derived values, tropical convection aggregation, and Sea Surface Temperature changes, just to name a few. This lack of skill for SST changes seems to be a reason why GCM-derived ECS is inconsistent with observationally constrained energy balance methods.

Given the large grid spacings used in climate models, this is not surprising. Truncation errors are almost certainly larger than the changes in energy flows that are being modeled.  In this situation, skill is to be expected only on those metrics involved in tuning (either conscious or subconscious) or metrics closely associated with them. In layman’s terms, those metrics used in tuning come into alignment with the data only because of cancellation of errors.

One can make a plausible argument for why models do a reasonable job of replicating the global average surface temperature anomaly. The models are mostly tuned to match top of atmosphere radiation balance. If their ocean heat uptake is also consistent with reality (and it seems to be pretty close) and if the models conserve energy, one would expect the average temperature to be roughly right even if it is not explicitly used for tuning. However, this apparent skill does not mean that other outputs will also be skillful.
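
A back-of-the-envelope version of that argument is sketched below; the parameter values are round, assumed numbers chosen for illustration, not outputs of any particular model.

```python
# Zero-dimensional energy-balance sketch of the argument above (illustrative values only):
# if the top-of-atmosphere imbalance and ocean heat uptake are about right and energy is
# conserved, the global-mean temperature response is strongly constrained.

forcing        = 3.7   # W/m^2, forcing scale on the order of a CO2 doubling
ocean_uptake   = 0.7   # W/m^2, assumed ocean heat uptake
feedback_param = 1.2   # W/m^2 per K, assumed net feedback parameter

# Global energy balance: forcing = feedback_param * dT + ocean_uptake
dT = (forcing - ocean_uptake) / feedback_param
print(f"implied global-mean warming ~ {dT:.1f} K")
# Any model that conserves energy and roughly matches the two flux terms will land
# near this global-mean number, whatever its skill on regional or other metrics.
```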

This problem of inadequate tuning and unconscious bias plagues all application areas of CFD. A typical situation involves a decades-long campaign of attempts to apply a customer’s favorite code to an application problem (or small class of problems). Over the course of this campaign many, many combinations of gridding and other parameters are “tried” until an acceptable result is achieved. The more challenging task of establishing the limitations of this acceptable “accuracy” for different types of flows is often neglected for lack of resources. Thus, the cancellation of large numerical errors is never quantified and remains hidden, waiting to emerge when a more challenging problem is attempted.

3.3       Overconfidence and Bias

As time passes, the seriousness of the bias issue in science continues to be better documented and understood. One recent example quotes a researcher as saying, “Loose scientific methods are leading to a massive false positive bias in the literature.” Another study states:

“Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science.”

In less scholarly settings, these results are typically met with various forms of rationalization. Often we are told that “the fundamentals are secure” or “my field is different” or “this affects only the medical fields.” To those in the field, however, it is obvious that strong positive bias affects the Computational Fluid Dynamics literature for the reasons described above and that practitioners are often overconfident.

This overconfidence in the codes and methods suits the perceived self-interest of those applying the codes (and for a while suited the interests of the code developers and researchers), as it provides funding to continue development and application of the models to ever more challenging problems. Recently, this confluence of interests has been altered by an unforeseen consequence: laymen who determine funding have come to believe that CFD is a solved problem and hence have dramatically reduced the funding stream for fundamental development of new methods and for new theoretical research. This conclusion is an easy one for outsiders to reach given the CFD literature, where positive results predominate even though we know the models are simply wrong, both locally and globally, for large classes of flows, for example strongly separated flows. Unfortunately, this problem of bias is not limited to CFD itself; I believe it is common in many other fields that use CFD modeling as well.

Another rationalization used to justify confidence in models is the appeal to the “laws of physics” discussed above. These appeals, however, omit a very important source of uncertainty and provide a patina of certainty covering a far more complex reality.

Another corollary of the doctrine of the “laws of physics” is the idea that “more physics” must be better. Thus, simple models that ignore some feedbacks or terms in the equations are often maligned. This doctrine also suits the interest of some in the community, i.e., those working on more complex and costly simulations. It is also a favored tactic of Colorful Fluid Dynamics to portray the ultimately accurate simulation as just around the corner, if only we include all the “physics” and use a sufficiently massive parallel computer. This view is far from obvious when critically examined, yet it is widely held among both people who run and use CFD results and those who fund CFD.

3.4       Further Research

So what is the future of such simulations and GCMs? As attempts are made to use them in areas where public health and safety are at stake, estimating uncertainty will become increasingly important. The items that in my opinion most deserve attention are discussed in some detail in the full paper, posted here on Climate Etc. I would argue that the most important elements needing attention, both in CFD and in climate and weather modeling, are new theoretical work and insights and the development of more accurate data. The latter work is not glamorous and the former can entail career risks. These are hard problems, and in many cases a particular line of enquiry will not yield anything really new.

The dangers to be combatted include:

- It is critical to realize that the literature is biased and that replication failures are often not published.
- We really need to escape from the well-posed elliptic boundary value problem mental model held by so many with only a passing familiarity with the issues. A variant of this mental model one encounters in the climate world is the doctrine of “converting an initial value problem to a boundary value problem.” This just confuses the issue, which is really about the attractor and its properties. The methods developed for well-posed elliptic problems have been pursued about as far as they will take us, and this mental model can result in dramatic overconfidence in CFD models.
- A corollary of the “boundary value problem” misnomer is the “If I run the model right, the answer will be right” mental model. This is patently false and even dangerous; however, it gratifies egos and aids in marketing.

I have tried to lay out in summary form some of the issues with high Reynolds number fluid simulations and to highlight the problem of overconfidence as well as some avenues to try to fundamentally advance our understanding. Laymen need to be aware of the typical tactics of the dark arts of “Colorful Fluid Dynamics” and “science communication.” It is critical to realize that much of the literature is affected by selection and positive results bias. This is something that most will admit privately, but is almost never publicly discussed.

How does this bias come about? An all too common scenario is for a researcher to have developed a new code or a new feature of an old code, or to be trying to apply an existing code or method to a particular test case of interest to a customer. The first step is to find some data that is publicly available or to obtain customer-supplied data; many of the older and well-documented experiments involve flows that are not tremendously challenging. One then runs the code or model (adjusting grid strategies, discretization and solver methodologies, and turbulence model parameters or methods) until the results match the data reasonably well. Then the work often stops (in many cases because of lack of funding or lack of incentives to draw more scientifically balanced conclusions) and is published. The often large number of runs with different parameters that produced less convincing results is explained away as due to “bad gridding,” “inadequate parameter tuning,” “my inexperience in running the code,” etc. The supply of witches to be burned is seemingly endless. These rationalizations are usually quite honest and sincerely believed, but biased. They rest on a cultural bias that if the model is “run right” then the results will be right, if not quantitatively, then at least qualitatively. As we saw above, those who develop the models know this to be incorrect, as do those responsible for using the simulations where public safety is at stake. As a last resort one can always point to deficiencies in the data or, for the more brazen, simply claim the data are wrong since they disagree with the simulation. The far more interesting and valuable questions about robustness and uncertainty, or even structural instability in the results, are often neglected. One logical conclusion to be drawn from the Palmer and Stevens perspective calling for eddy-resolving climate models is that the world of GCMs is little better. However, their paper is a hopeful sign of a desire to improve and is to be strongly commended.

This may seem a cynical view, but it is unfortunately based on practices in the pressure filled research environment that are all too common. There is tremendous pressure to produce “good” results to keep the funding stream alive, as those in the field well know. Just as reported in medically related fields, replication efforts for CFD have often been unsuccessful, but almost always go unpublished because of the lack of incentives to do so. It is sad to have to add that in some cases, senior people in the field can suppress negative results. Some way needs to be found to provide incentives for honest and objective replication efforts and publishing those findings regardless of the opinions of the authors of the method. Priorities somehow need to be realigned toward more scientifically valuable information about robustness and stability of results and addressing uncertainty.

However, I see some promising signs of progress in science. In medicine, recent work shows that reforms can have dramatic effects in improving the quality of the literature. There is a growing recognition of the replication crisis generally and of the need to take action to prevent science’s reputation with the public from being irreparably damaged. As simulations move into arenas affecting public safety and health, there will hopefully be increasing scrutiny, healthy skepticism, and more honesty. Palmer and Stevens’ recent paper is an important (and, in the politically charged climate field, difficult) step forward on a long and difficult road to improved science.

In my opinion those who retard progress in CFD are often involved in “science communication” and “Colorful Fluid Dynamics.” They sometimes view their job as justifying political outcomes by whitewashing high levels of uncertainty and bias or making the story good click bait by exaggerating. Worse still, many act as apologists for “science” or senior researchers and tend to minimize any problems. Nothing could be more effective in producing the exact opposite of the desired outcome, viz., a cynical and disillusioned public already tired of the seemingly endless scary stories about dire consequences often based on nothing more than the pseudo-science of “science communication” of politically motivated narratives. This effect has already played out in medicine where the public and many physicians are already quite skeptical of health advice based on retrospective studies, biased reporting, or slick advertising claiming vague but huge benefits for products or procedures. Unfortunately, bad medical science continues to affect the health of millions and wastes untold billions of dollars. The mechanisms for quantifying the state of the science on any topic, and particularly estimating the often high uncertainties, are very weak. As always in human affairs, complete honesty and directness is the best long term strategy. Particularly for science, which tends to hold itself up as having high authority, the danger is in my view worth addressing urgently. This response is demanded not just by concerns about public perceptions, but also by ethical considerations and simple honesty as well as a regard for the lives and well-being of the consumers of our work who deserve the best information available.

Biosketch: David Young received a PhD in mathematics in 1979 from the University of Colorado-Boulder. After completing graduate school, Dr. Young joined the Boeing Company and has worked on a wide variety of projects involving computational physics, computer programming, and numerical analysis. His work has been focused on the application areas of aerodynamics, aeroelastics, computational fluid dynamics, airframe design, flutter, acoustics, and electromagnetics. To address these applications, he has done original theoretical work in high performance computing, linear potential flow and boundary integral equations, nonlinear potential flow, discretizations for the Navier-Stokes equations, partial differential equations and the finite element method, preconditioning methods for large linear systems, Krylov subspace methods for very large nonlinear systems, design and optimization methods, and iterative methods for highly nonlinear systems.

Moderation note: This is a technical thread, and comments will be ruthlessly moderated for relevance and civility.



