“If the butterfly flaps its wings…” so goes the chaos theory of dynamical systems, where sensitive dependence on the initial conditions determines the disorder of the system as a whole. It is this minuscule relevance that wiggles a little and creates the storm of discontinuity. And it is this little wiggle that flies in the face of Reductionism.
So one might then consider that, living in an extremely complex world governed by multivariate conditions, transposing every condition onto a theoretical model would require zillions of terabytes to compute, which we do not yet have; that creates a problem for such enterprises. Herein lies the premise of this undertaking, one that only the brilliant mind of Edward Lorenz could have named the “Butterfly Effect.”
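To make the butterfly's wiggle concrete, here is a minimal sketch in Python that integrates Lorenz's 1963 convection equations from two starting points differing by one part in a million. The parameters are Lorenz's classic choices; the simple fixed-step integrator and the particular initial values are our own illustrative assumptions, not anything from his paper.

```python
# A minimal sketch of sensitive dependence on initial conditions,
# using Edward Lorenz's 1963 convection equations with his classic
# parameters. The fixed-step RK4 integrator is illustrative only.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

dt = 0.01
a = (1.0, 1.0, 1.0)          # reference trajectory
b = (1.0, 1.0, 1.000001)     # the "flap of the wings": one part in a million
for step in range(1, 3001):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 500 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * dt:5.1f}   separation = {gap:.6f}")
# The separation grows roughly exponentially until the two trajectories
# are effectively unrelated, despite identical equations and rules.
```

Run it and the tiny initial gap swells by orders of magnitude within a few dozen model seconds; that is the storm of discontinuity in miniature.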
Each piece of the whole exerts influence on the whole in ways that we cannot completely comprehend. Our understanding continues to evolve, and the Eureka moments, encompassing as they may be, are also expositions of the limitations of a dynamical system that we do not fully comprehend. Nothing is in isolation, and nothing changes without some effect from something else. For instance, the human population explosion over the past century is associated with consequences such as pressures for survival, with the corresponding extinction of some species as well as the artificial preservation of others for our own needs (horses and dogs, for instance). So to say we live in a dynamic system is an understatement of our own existence. It is a dynamic system constantly in flux, a plastic constantly being remolded.
Progress for humanity is littered with the refuse of failure. It seems to mimic life, where an estimated 1.5 billion species have gone extinct based on fossil data. Our failures may not be as dramatic as that, but they are indicative of our limits.
The fundamental question is whether it is possible to predict the response of some process after various numerical inputs have been used to determine the outcome with a high degree of reliability.
Our failure is based on the limits of understanding and data, on the precision of the initial and subsequent conditions that are the prerequisite in a dynamic system, and on the perceived outcome. Therein lies the tail of this tale. The fly in the ointment is chaos theory. It is now and always will be. The predicates of chaos theory bring the accidental “strange attractor” to muddy up the entire linearly-structured, predictably-sequenced, data-limited human endeavor of this science of prediction. Reductionism is itself confined to the cage of unpredictability by the little “wiggle” of the unknown or unaccounted-for variable.
Reductionism and Mathematical Modeling:
To understand nature, humans have used the art of Reduction: reducing something to its most basic elements so that we can see how that something ticks. This philosophy has given birth to multiple industries that have used nature as a template: Aerospace, Medical, Engineering, and Nuclear, to name a few. Having mastered the art of seeing things flayed open, all pieces visible, has also inculcated in us the art of modeling, which helps us toward newer designs, predictable functions, better performance, and even the future. Reductionism is the model employed in mathematical modeling concepts. It works for design implementation with a modest number of rules and components for assembly. Where the failure occurs is in the complexity of the arrangements, since all the possible iterations of the system have not been tested or vouched for.
A simple example would be a high-profile car with all the deployable safety crash equipment but no anti-rollover bar, until a rollover accident. Something not contemplated is a variable not predicted by the model, and therefore not accounted for. Such variables lend themselves to less than 100% safety.
Knowing the innards of an animal by opening its torso kills the animal, and thus ends the experiment, but does not elicit what makes it tick. Similarly, dissecting the brain will not yield the mind. There are elements not found in the library of knowledge, and without them no experiment can predict the real outcome.
Mathematical Modeling is a full-blown tool used in all sciences and disciplines. It is used to extrapolate information from the maximum known. But there are always "variables" that do not fit the model's predictions; these variables are ascribed to probability derivations. Here it is important to make some distinctions. What is the difference between a model and a simulation? Nino Boccara, in his book Modeling Complex Systems, quotes John Maynard Smith in stating that a model is a "simplified mathematical representation of a system..." He goes on to say that in a model, "...only the few relevant features that are thought to play an essential role in the interpretation of the observed phenomena should be retained."
A model should be distinguished from a simulation. In a simulation you want the greatest number of features used, not just the most relevant ones: the more features included, the more realistically the simulation will portray the system. But as Boccara adds, "The better a simulation is for its own purposes, by the inclusion of all relevant details, the more difficult it is to generalize its conclusions..." Boccara thus concludes, "Whereas a good simulation should include as much detail as possible, a good model should include as little as possible."
The holy grail of mathematical modeling, of course, is to be able to take one factor of a complex chaotic system and predict future outcomes. Since the discovery of chaos theory, scientists have been far from any such feat. A 2005 presentation entitled Process, Pattern, Prediction: Complexity in Driven Dynamical Systems is very realistic as to the expectations of computer mathematical models. We will quote from the notes for the video:
Edward N. Lorenz discovered that chaos and unpredictability are hallmarks of even simple driven systems. Predicting the future evolution of a variety of driven nonlinear systems is further complicated by the fact that their dynamical processes are 1) often not amenable to direct observation; and 2) strongly multi-scale, so that length and time scales range from very much smaller and shorter than human perception, to very much larger and longer. An example of such systems is the atmosphere, in which, from a practical standpoint, it is impossible to measure the temperatures, pressures, and humidity at all locations at all times. Other important systems include neural networks and earthquake fault systems, both of which are examples of driven threshold systems. In systems such as these, we can only observe the space-time patterns of extreme events.

The icon for these recent attempts to make predictive models for certain types of chaotic behavior is found in the work by geologists to predict when and where earthquakes will happen. You might have wondered what a threshold system is. J.B. Rundle et al., in a 2002 paper entitled Self Organization In Leaky Threshold Systems: The Influence of Near-Mean Field Dynamics And Its Implications For Earthquakes, Neurobiology And Forecasting, state,
Threshold systems are known to be some of the most important nonlinear self-organizing systems in nature, including networks of earthquake faults, neural networks, superconductors and semiconductors, and the World Wide Web, as well as political, social, and ecological systems. All of these systems have dynamics that are strongly correlated in space and time, and all typically display a multiplicity of spatial and temporal scales.
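As a toy illustration of a leaky threshold system (not Rundle's model, which is far richer), consider a single unit that slowly accumulates a noisy driving input, leaks a little each step, and discharges when it crosses a threshold. Every number below is an invented, illustrative parameter; the point is only the shape of the dynamics: a hidden slow build-up punctuated by sudden observable events.

```python
# A toy leaky threshold unit: slow noisy driving, leakage, and sudden
# discharge at a threshold. All parameters here are illustrative.
import random

random.seed(1)
v = 0.0            # accumulated "stress" (or membrane potential)
threshold = 1.0    # rupture/firing threshold
leak = 0.01        # fraction of v lost per step
events = []

for t in range(2000):
    v = (1 - leak) * v + random.uniform(0.005, 0.03)  # noisy slow drive
    if v >= threshold:       # threshold crossed: an "extreme event"
        events.append(t)
        v = 0.0              # discharge (earthquake, spike, ...)

print("event times:", events[:10], "...")
gaps = [b - a for a, b in zip(events, events[1:])]
print("inter-event intervals:", gaps[:10], "...")
# Only the event times are observable; the slow build-up of v is hidden,
# which is exactly why prediction in such systems is so hard.
```

The inter-event intervals cluster around a typical recurrence time but jitter irregularly, a crude echo of why earthquake forecasting deals in probabilities of windows rather than dates.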
If you cannot see the embedded video, here is the link: http://bit.ly/fctvyb.
Reductionism is based on what is called a linear system. The elements of a linear system are based on known knowledge, where a deterministic set of rules and facts applies. Within this system the pieces that fit are defined and prescribed for in the piece-by-piece assemblage of the whole. These linear systems lend themselves to being modeled mathematically; an analysis of such systems shows limited sets of rules and parts. If the parts and the rules become complex, then Bayesian rules of probability take over, and calculations of probability determine a less than 100% successful outcome. Unfortunately these probability calculations are based on certain confidences that make real determinism impossible. In other words, the absolute outcomes are always in question.
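The contrast can be made concrete with two one-line maps, both purely illustrative: a linear map, where a small initial error stays tame (here it actually shrinks), and the nonlinear logistic map, where the same error explodes.

```python
# Linear map: x -> 0.9*x + 1. Nonlinear logistic map: x -> 4*x*(1-x).
# Track the gap between two starting points that differ by 1e-9.

lin_a, lin_b = 0.4, 0.4 + 1e-9
log_a, log_b = 0.4, 0.4 + 1e-9

for n in range(1, 41):
    lin_a, lin_b = 0.9 * lin_a + 1, 0.9 * lin_b + 1
    log_a, log_b = 4 * log_a * (1 - log_a), 4 * log_b * (1 - log_b)
    if n % 10 == 0:
        print(f"n={n:2d}  linear gap={abs(lin_a - lin_b):.2e}  "
              f"logistic gap={abs(log_a - log_b):.2e}")
# The linear gap decays geometrically (factor 0.9 per step); the
# logistic gap roughly doubles per step until it saturates near O(1).
```

The linear system rewards the reductionist piece-by-piece analysis; the nonlinear one defeats it with exactly the deterministic rules Reductionism trusts.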
Ah! There is the rub. Mathematical Modeling is a philosophical construct that has with time become the process “du jour” of scientific thought. It has been used successfully many times, and many times it has fallen short. As much as the former is true, it is the latter that gives us pause. From our failures comes the light of knowledge. This, then, is the fruit derived from the dashed hopes of botched outcomes, the fruit that leads to advancing the language of mathematical perfection: something we seek but may never realize.
Here is a video that covers the history of complexity systems and chaos theory. If you cannot see the embedded video, here is the link: http://bit.ly/eedyuM.
How A Mathematical Model Is Formed
The first order here is the initial thought, also called “the concept.” The concept is based on several inputs that include materials, mechanics, interacting parts, procedures, etc. This minimalist list is left to the “judgment and experience of the mathematical analysts.” Any increase in the number of variables, such as interacting and changing external dynamic factors, requires a complex series of undertakings involving probabilities and mathematics.
So mathematical models vary by the desired study, the variables involved, the materials used, and the intensity of the computational analysis. For instance, a model based on the aeronautical dynamics of a composite material will require information on the deformation patterns, tensile strength, and elasticity of the material under study. All such comparisons would be made against aluminum (the gold standard), which has served as the measuring rod for decades. Additional information regarding its tolerance to fatigue (recall the Aloha Airlines disaster) and its resistance to fire (the use of Nomex fibers in composites, as this relates to the strength or weakness of the material) will be brought to bear on the new concept to be exploited.

Remembering that a mathematical model is an order of reference within the hierarchy of a larger model must give the analyst pause and concern at every level. For instance, an analyst performing a validation experiment on a rigid machine exposed to large degrees of vibration has to have harmonic analysis in mind as one of the parameters.

Aloha Airlines disaster: Flight 243
After the initial conceptualization process has been completed, the concept has to be validated by the rigor of predicting component/subcomponent and/or system failures. This is done via simulated experiments. At this point all subcomponents, components, subassemblies, and the total assembly of the product under consideration are forced through known “stressors.” A successful prediction based on inducing failures under different external variables gives validation to the concept. The errors that creep in could relate to the initially conceptualized model, the numerical approximation (recall Edward Lorenz’s error; see the video below), the simulated experiment, or the statistical variables employed, all subject to the minor “wiggle.” Most of the modeling is based on predictable, non-logical axioms; they are axioms nevertheless, and due to their necessarily simplistic assumptions they suffer the wrath of nature’s whims.
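Lorenz’s famous “error” was exactly such a numerical-approximation wiggle: restarting his weather model from a printout value rounded to three decimals sent it down a completely different path. His 1961 model had a dozen variables; the sketch below uses the simple logistic map from the earlier sketch as a stand-in to show the same effect of truncating an initial condition.

```python
# Restarting a chaotic iteration from a rounded value, in the spirit of
# Lorenz's 1961 discovery. The logistic map stands in for his
# twelve-variable weather model; the starting value is illustrative.

x_full = 0.506127             # the "machine" value
x_rounded = round(x_full, 3)  # 0.506 -- the "printout" value

for n in range(1, 51):
    x_full = 4 * x_full * (1 - x_full)
    x_rounded = 4 * x_rounded * (1 - x_rounded)
    if n % 10 == 0:
        print(f"n={n:2d}  full={x_full:.6f}  rounded={x_rounded:.6f}  "
              f"gap={abs(x_full - x_rounded):.4f}")
# Within a few dozen iterations the two runs bear no resemblance:
# a 1e-4 truncation has been amplified into an O(1) disagreement.
```

A validation experiment that trusts such a truncated restart would be validating the wrong trajectory without knowing it.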
An example here could be the rotor blade of a helicopter: subjected to supersonic speeds, it has to undergo deformation and linear and rotational stresses, and it places significant centrifugal forces at the rotor-disc level. Fatigue damage in a homogenous material is especially dangerous because it is unpredictable, giving no prior notification of imminent failure; it occurs with sudden devastation and shows no exterior plastic deformation. Advanced composite materials, by contrast, exhibit gradual damage accumulation before failure. Typically, matrix cracking and delamination occur early in the life, while fiber fracture and fiber-matrix debonds initiate later and accumulate rapidly toward the end, leading to the final failure. Thus composite rotor blades are better predictors of their own failure than homogenous ones. This was borne out by actual experimentation; the mathematical modeling did not reveal the anomalous behavior of the different materials used in the simulated outcome. And a calm wind environment would produce different dynamics compared with conditions of strong wind shear, where performance, function, and integrity may be challenged.
Similarly, a mathematical model estimating population by using the “Birth Rate” and “Death Rate” does not really tell us the real story. Changing the parameters to “Per Capita Birth Rate” and “Per Capita Death Rate” ties the rates to the population being tested and thus gives a better record. Using the right parameter in the methodology makes a difference in the computed outcome and the desired result. This makes the use of mathematical models even more difficult, since the user of such models must be sure to pick the critical parameters. It is at least at this point that the weakness of a mathematical model lies.
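The difference between the two parameterizations is easy to see in code: absolute birth and death rates give growth that ignores the size of the population (dN/dt = B − D), while per-capita rates couple growth to the population itself (dN/dt = (b − d)N). All the rates below are invented for illustration, not demographic data.

```python
# Absolute rates: dN/dt = B - D (a fixed head count per year).
# Per-capita rates: dN/dt = (b - d) * N (growth scales with N).
# All numbers are illustrative, not demographic data.

B, D = 1500.0, 1000.0      # absolute births/deaths per year
b, d = 0.025, 0.015        # per-capita births/deaths per person-year

n_abs = n_cap = 100_000.0  # same starting population
dt = 1.0                   # one-year Euler steps

for year in range(0, 101):
    if year % 25 == 0:
        print(f"year {year:3d}  absolute-rate N = {n_abs:10.0f}  "
              f"per-capita N = {n_cap:10.0f}")
    n_abs += (B - D) * dt
    n_cap += (b - d) * n_cap * dt
# The absolute-rate model adds the same 500 people whether the population
# is a hundred thousand or a hundred million; the per-capita model
# compounds, because its rates are tied to the population being tested.
```

After a century the two models disagree substantially, from nothing more than the choice of parameterization.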
History teaches us that the Malthusian model of population expansion versus resources was woefully inadequate, since there were no inputs for catastrophes. It was a steady-state model, which, as we know, is not how humans and nature behave; neither did it incorporate externalities like the influenza pandemic of 1918, which infected roughly a third of the world’s population and killed tens of millions. His modeling was simplistic: steady-state growth of the population outstripping the food supply, proclaiming an incipient crisis.
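Malthus’s argument reduces to one geometric series racing one arithmetic series, with no term for catastrophe; a few lines make both the model and its steady-state blind spot visible. The growth figures are illustrative, not Malthus’s own numbers.

```python
# Malthus in miniature: population grows geometrically, food supply
# arithmetically, and nothing in the model can interrupt either trend.
# Rates are illustrative.

population = 1.0   # arbitrary units of demand
food = 2.0         # current supply, same units
growth = 1.03      # 3% geometric population growth per period
increment = 0.05   # fixed arithmetic gain in food per period

period = 0
while population <= food:
    population *= growth      # geometric (compounding) growth
    food += increment         # arithmetic (linear) growth
    period += 1

print(f"crisis at period {period}: population {population:.2f} "
      f"vs food {food:.2f}")
# The crossing is inevitable in this model because no input exists for
# plagues, wars, or innovations that bend either curve, which is
# precisely the criticism above.
```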
So the question will naturally arise: what is the purpose of Mathematical Models in the face of complexity theory? We will answer this question in the second part of this series. Stay tuned.
Co-Written by PlusUltraTech and JediMedicine