Saturday, November 15, 2008
Non-Equilibrium Information Theory (DARPA group)
Of possible interest -- a DARPA group is attempting to use non-equilibrium information theory to study mobile ad hoc wireless networks (MANETs). Lots of information-theory pubs; I'm not yet sure they're really onto what constitutes "non-equilibrium." Worth investigating.
Non-Equilibrium Theory: Basic References
Gathering up two of the classic sources: Prigogine's Thermodynamics of Irreversible Processes, and Kubicek and Marek's Computational Methods in Bifurcation Theory and Dissipative Structures.
So here's an interesting little do-at-home experiment: study the meltdown of Lehman Brothers, which kicked off the whole stock market sell-off in September/October of 2008. Using Google (a horrible tool for this, but freely available), run queries on "Lehman Brothers" plus a key phrase such as "financial crisis" for each of a set of days over the summer. If you want a short-cut, just query by month: "June, 2008," then "July, 2008," then "August, 2008," and so on.
Month: total query returns (using Google)
April, 2008: 241,000 hits
May, 2008: 1,120,000 hits
June, 2008: 324,000 hits
July, 2008: 408,000 hits
August, 2008: 377,000 hits
September, 2008: 769,000 hits (Lehman files for bankruptcy)
October, 2008: 844,000 hits
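If you want to keep a running record, a few lines of Python are enough to tabulate the counts and print the month-over-month changes. This is just a minimal sketch -- the numbers hard-coded below are simply the ones from my table, and you would substitute whatever your own runs return.

```python
# Minimal sketch: tabulate the hand-recorded monthly hit counts from the
# table above and print the month-over-month change. These are my own
# (noisy) recorded values; substitute your own.
hits = {
    "April 2008":     241_000,
    "May 2008":     1_120_000,
    "June 2008":      324_000,
    "July 2008":      408_000,
    "August 2008":    377_000,
    "September 2008": 769_000,   # Lehman files for bankruptcy
    "October 2008":   844_000,
}

months = list(hits)
for prev, curr in zip(months, months[1:]):
    change = hits[curr] - hits[prev]
    pct = 100.0 * change / hits[prev]
    print(f"{curr:>15}: {hits[curr]:>9,} hits ({pct:+6.1f}% vs. {prev})")
```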
Now, Google's page-rank algorithm is the quirkiest thing in the world -- the result counts change ALL THE TIME. So when you run this little mini-experiment, you'll get different results from mine. (I could run it ten minutes from now and get different results.)
The point is that despite the HUGE amount of noise in the return sets (this is a crude means of date-stamping -- any data-collection lab could do much better), we can see that the Lehman Brothers criticality did not "happen overnight." There was MUCH discussion about Lehman Bros. for months before the decision to file for bankruptcy.
Even given that such corporate crises take a long time to brew, and then to come to a simmer, something pushed the whole situation over the edge. (And not just the Lehman Bros. filing.) This situation was unstable for a long time before it broke.
If we were to pick a physical systems analogy to describe what happened, we could say that the entire financial structure was in a "stationary non-equilibrium state," using a term coined by Prigogine.
(Note that one of Sutter's key bones of contention with Beinhocker's Origins (see Sutter's review on Amazon's Origins page) is that Beinhocker pays homage to the Complex Adaptive Systems (CAS) work at the Santa Fe Institute (SFI), while carefully ignoring the equally -- and perhaps much more -- important work by Prigogine and others of the European school.)
(My personal take is that Prigogine's work deals much more thoughtfully with the core issues of developing a robust model for non-equilibrium systems, while the CAS work, by and large, is much more "phenomenological" -- lots of cool algorithms, lots of neat little demos, but not much that can really provide a robust and rigorous model. This is still somewhat TBD, and over the course of this blog, I'll undoubtedly have much more to say.)
So the point to ponder, quoting Prigogine w/r/t "stationary non-equilibrium states" (p. 75), is this:
"No confusion should arise between such states and equilbirum states which are characterized by zero entropy production. Another example of a stationary state is afforded by a system which receives a component M from the outside environment and transforms it through a certain number of intermediate compounds into a final product F which is returned to the external environment. A stationary state arises when the concentrations of the intermediate components no longer vary with time."
So ... with Lehman Bros., with the entire economic superstructure that existed prior to the October meltdown -- can we identify components M and F, along with intermediate compounds? If we can, we might be able to make the analogy --
--- this will take some pondering.
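Before doing so, it may help to see Prigogine's stationary state in its simplest chemical form. Here is a minimal sketch -- my own toy model, not an economic one: a component M is fed in at a fixed rate, converted through an intermediate X, and released as product F. The feed rate, rate constants, and Euler time step are arbitrary choices.

```python
# Minimal sketch of Prigogine's open-system example: M (fed from outside)
# -> X (intermediate) -> F (returned to the environment). The intermediate
# concentration settles to a constant value: a stationary (but
# non-equilibrium) state maintained by the through-flow.
feed_rate = 1.0   # rate at which M enters from the environment (arbitrary)
k1 = 0.5          # rate constant, M -> X (arbitrary)
k2 = 0.25         # rate constant, X -> F (arbitrary)

m, x = 0.0, 0.0   # concentrations of M and X inside the system
dt = 0.01

for step in range(200_000):
    dm = feed_rate - k1 * m   # inflow minus conversion to X
    dx = k1 * m - k2 * x      # production of X minus conversion to F
    m += dm * dt
    x += dx * dt

# At the stationary state dm = dx = 0, so m -> feed_rate/k1 and
# x -> feed_rate/k2, while material keeps flowing through the system.
print(f"m = {m:.3f} (expect {feed_rate/k1:.3f}), x = {x:.3f} (expect {feed_rate/k2:.3f})")
```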
And if anyone wants to post a comment, go ahead.
In the meantime -- reading Prigogine and jumping ahead a couple of pages (pg. 85), w/r/t Eqn. (6.44):
The entropy flow then becomes:
$$ \frac{d_e S}{dt} \;=\; \sum_{\gamma} s_{\gamma}\,\frac{d_e n_{\gamma}}{dt} $$
(am going to learn how to cut and paste equation images into this ...)
"The inequality (6.38) for the stationary state leads to the conclusion that the entropy of the matter entering the system has to be smaller than the entropy of the matter given off by the system to the external world . From the thermodynamic point of view the open system 'degrades' the matter it receives and it is this degradation which maintains the stationary state."
So -- were Lehman Bros. and the others "giving off more entropy" than the material (e.g., investment funds) they received originally contained? In short, did they produce a greater spread -- not so much "randomness" (too simplistic a term) as a broader distribution over possible states?
Another point to ponder ...
Quick Note: Helmholtz vs. Gibbs Free Energy
Using this blog as an online set of research notes (covering the parts I don't mind sharing) -- suppose that we try using an equilibrium-based approach of some sort to model what we all know is a very non-equilibrium world. Which formulation, Helmholtz or Gibbs, works best for us?
Helmholtz free energy applies at constant temperature and volume. It is denoted A, where the defining equation is A = U - TS, where U is the internal energy, T is temperature, and S is entropy.
Gibbs free energy applies at constant pressure and temperature: G = H - TS, where H = U + PV is the enthalpy.
The systems that we consider would be at constant volume (e.g., the constant volume of a nation). We do foresee adding content to the system: adding people, adding tons of grain produced per year, adding money produced by a national treasury. These additions increase the pressure, not the volume. For example, during inflation, when more money is actually being produced (or with the recent government bailout, where money is being "produced" out of thin air), we add to the "density" of money in the system: same volume, more units (money), thus a greater density of dollars -- and also greater "pressure." So we will be using the Helmholtz formulation.
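For the record, the textbook differentials make the choice explicit. This is standard closed-system thermodynamics, not anything specific to the economic analogy:

```latex
% Natural variables of the two free energies (closed system, PV work only):
A = U - TS, \qquad dA = -S\,dT - P\,dV \quad\Rightarrow\quad A = A(T,V)
G = U + PV - TS, \qquad dG = -S\,dT + V\,dP \quad\Rightarrow\quad G = G(T,P)
% At constant T and V, spontaneous change runs downhill in A (dA <= 0),
% so a constant-volume system is naturally described by the Helmholtz
% formulation; at constant T and P, the Gibbs formulation takes over.
```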
Thursday, November 13, 2008
Equilibrium and Utility: Two Different Realms
Continuing with Beinhocker's Origins of Wealth, it is important to distinguish carefully between some of the ideas that Beinhocker is expounding. While overall he does a good job of bringing in many related thoughts and ideas, there is a slight tendency towards "mushing."
On that note, I'd like to suggest that we distinguish carefully between ideas involving utility (Origins, hardcover, pp. 34 & 37) and equilibrium. On pg. 34, Beinhocker begins a discussion of how utility is an underlying concept that allows us to model system dynamics in which two or more parties are each separately trying to optimize their own position. On pg. 37, he introduces the notion of a util, which has since been subsumed into other measures and approaches.
We need to go back to what we learned in our first semester of chemistry: unit analysis.
The fundamental units of a free energy equation (Helmholtz or Gibbs) are units of energy. The internal energy (or enthalpy) is an energy term, and while entropy is dimensionless in the statistical-mechanical convention, temperature then carries units of energy, so the TS term is an energy as well.
We define the equilibrium point as the point at which the derivative of the free energy is zero. Thus, equilibrium is when we are at the lowest accessible point in the free energy of the system. And whenever we make an analogy from equilibrium to some other process or system, we first need to ensure that we are modeling something that can be described as at least an analogue of energy.
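Stated compactly (standard thermodynamics, written out here only to anchor the unit analysis; x stands for whatever internal degree of freedom the system can adjust):

```latex
% Unit analysis: every term in the free energy is an energy.
[A] = [U] = [TS] = \text{energy}
% Equilibrium at constant T and V is a minimum of A with respect to the
% internal degree of freedom x (extent of reaction, magnetization, etc.):
\left(\frac{\partial A}{\partial x}\right)_{T,V} = 0,
\qquad
\left(\frac{\partial^2 A}{\partial x^2}\right)_{T,V} > 0 .
```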
In contrast, utility is something that active, dynamic agents seek to optimize -- and they seek not to minimize it, but to maximize it.
Equilibrium systems are intrinsically those with a statistically large enough number of units that the idea of distributing the units among all available "energy states" (which is what entropy measures) makes sense. And in the simplest possible terms, there have to be at least two different energy states that these units could occupy; otherwise the idea of a "distribution" doesn't make sense.
Conversely, one can compute utility functions for systems with very small numbers of units. In such systems, the 2nd law of thermodynamics does not necessarily apply; we are trusting that the individual units have the wherewithal to optimize their respective utilities.
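To make the contrast concrete, here is a minimal sketch -- my own illustration, not anything from Beinhocker. A large population of two-state units settles into the Boltzmann distribution that minimizes the Helmholtz free energy, while a single utility-maximizing agent simply picks whichever option has the highest utility (the energies, temperature, and utilities below are arbitrary).

```python
import math

# --- Equilibrium picture: many two-state units, free-energy minimization ---
# Two energy levels (arbitrary), temperature T, with k_B = 1.
e0, e1 = 0.0, 1.0
T = 0.5

def helmholtz(p1: float) -> float:
    """Free energy per unit, A = U - T*S, when a fraction p1 sits in the upper state."""
    p0 = 1.0 - p1
    u = p0 * e0 + p1 * e1                                   # average energy
    s = -sum(p * math.log(p) for p in (p0, p1) if p > 0.0)  # mixing entropy
    return u - T * s

# Crude scan for the occupation fraction that minimizes A.
p1_star = min((i / 10_000 for i in range(1, 10_000)), key=helmholtz)
p1_boltzmann = math.exp(-e1 / T) / (math.exp(-e0 / T) + math.exp(-e1 / T))
print(f"minimizing p1 = {p1_star:.4f}, Boltzmann p1 = {p1_boltzmann:.4f}")

# --- Utility picture: a single agent, utility maximization ---
# No distribution over states at all; the agent just takes the argmax.
utilities = {"option A": 3.0, "option B": 5.0, "option C": 4.2}
best = max(utilities, key=utilities.get)
print(f"utility-maximizing agent picks {best} (utility {utilities[best]})")
```

The first half only makes sense because there are many units to distribute across the two states; the second half needs no statistics at all, which is exactly the distinction drawn above.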
Utility functions are of great advantage in control and optimization theory. They are an excellent, in fact a necessary, component of both predictive analysis and control systems.
One of the best ways to understand refinements to the role of utility is to examine it in the context of neurocontrol; see, e.g., Paul Werbos's chapter "A Menu of Designs for Reinforcement Learning Over Time," in Neural Networks for Control, edited by Miller, Sutton, and Werbos (1990).
But we need to make a clear distinction between systems which can be characterized by equilibrium processes, and those characterized by utility optimization.
I suggest that the reader slow down, read carefully, and use his or her own powers of discernment to cut through this Gordian knot of multiple intertwining concepts. They are all useful, and all powerful, but need to be carefully disambiguated.
Wednesday, November 12, 2008
"Origins of Wealth" - A (Multi-Part) Critical Review
Over the last few months, questions of not only wealth and finances, but the underpinnings of our entire financial structure, have become paramount in many of our minds.
We -- that usually means you and me -- and right now means the world collectively -- have largely misunderstood the world's financial structure over recent years. (Those who HAVE accurately understood are not only more secure, but substantially richer by now.)
Most of us are current on "what went wrong."
Most of us now understand how the financial state of affairs grew so out-of-bounds that "what went wrong" was the only foreseeable outcome. (Easy to see in hindsight.)
What many of us are seeking now -- beyond the next tactical or even strategic step -- is a better basis for understanding the world's financial system for the future. This means, we are re-examining what we are using as a "theoretical base" for national and even global economic modeling.
This is not impractical or far-fetched. In fact, Kurt Lewin is credited with saying “There is nothing so practical as a good theory”.
As a physical chemist, I start from a place of knowing little about economics, economic systems, or world models. However, I know a great deal about modeling large-scale, complex physical systems -- this was the subject of my graduate work and dissertation, and it underlies each of the patents that I've been awarded (all four of them; see the endnotes at the bottom of this posting).
So in the spirit of discourse, let's start with one presumptive theory, examine it, pull it apart, see where it has strengths and deficiencies, and move on. And we can keep on doing this until we arrive at something that works.
(As an aside -- I'm not only a scientist, but also an entrepreneur -- so the "best of the best" of my thoughts will be kept for the clients funding my next company. But this blog post lets me share with you the way we would if we were having a discussion at a cocktail party, or after a seminar. It gives me a means of organizing my thoughts supporting the next round of inventions, and you something to read, discuss, and consider. And feel free to email me at alianna1 at gmail dot com or post your comments to this blog.)
Eric D. Beinhocker purports to propose a good theory in his 2006 book, The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics. (If you jump to the Amazon site, you'll be able not only to "search inside" but, more importantly, to read the reviews -- which actually are very useful. In particular, Craig Howe does a good job of summarizing the major premises, and -- more importantly -- A.J. Sutter correctly identifies the book's major weaknesses.)
But let's start with a quick, one-paragraph overview. Beinhocker's main premise is that the equilibrium theory of "traditional economics" simply cannot serve as a useful or predictive model for macro-economic events at any level.
Specifically, Beinhocker states (pp. 42-43, hardcover version):
"By the end of the twentieth century, Traditional Economics was thoroughly dominated by the Neoclassical paradigm with its foundational notions of rational, optimizing consumers and producers making choices in a world of finite resources, and (with the exception of investments in technology) those choices being bounded by decreasing returns. This combination of self-interest and constraints then drive the economy to the Pareto optimal point of equilibrium ... the Neoclassical general equilibrium theory of Arrow and Debreau ostensibly aswered the great question of wealth allocation."
Beinhocker sums up his premise (p. 43) as:
"Nontheless, despite the unquestionably significant impact of Traditional Economics, the unease expressed at the beginning of the chapter remains valid. The economist Werner Hildenbrand once compared general equilibrium theory to a gothic cathedral, of which Walras and his contemporaries were the architects, and the great economists of the twentieth century were the master builders. Unfortunately, as we will see in the next chapter, the cathedral was built on very shaky ground."
Not bad.
Where Beinhocker's book (and premise) breaks down is that he attempts to explain why general equilibrium theory doesn't apply well to economic modeling without using equations!
I credit an old friend and colleague of mine, Artie Briggs, with consistently referring to mathematics as a "compact notational language." When Sutter characterizes Beinhocker's book as being (at least in part) "sloppy and superfluous," he is referring partly to the content. (I agree largely with Sutter, and recommend his review as even more worthwhile a read than Origins -- and it is much shorter!) But part of the problem is trying to explain a complex physical system using words, instead of equations. (The same comment will hold when Beinhocker attempts to describe Complex Adaptive Systems, or CAS.)
In order to make much sense out of an "equilibrium-based approach" to modeling, we need to first understand equilibrium theory. A good, solid year of graduate-level statistical thermodynamics is a pretty good start for this. But then -- and this truly is an essential step -- a person really needs to go beyond the equations.
We start by writing down the basic Helmholtz free energy equation, A = E - TS, where E is the internal energy.
We need to internalize concepts such as "enthalpy" and "entropy," and even "temperature." (This is not trivial!) And then, we need to internalize what happens as we do the free energy minimization (which is what gives us equilibrium).
The simplest model we can use here is the Ising model for a system of bistate particles (spins). When the model is kept simple, with no nearest-neighbor interactions, we do get a system in which we can find equilibrium points. (The challenge, of course, is figuring out which elements of the economic world correspond to these "bistate" particles.)
When we introduce nearest-neighbor interactions, we get our first useful model of a phase transition, where a system can go from one free energy equilibrium point to another.
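To make this concrete, here is a minimal sketch of the mean-field (nearest-neighbor) Ising free energy, with the coupling, field, and temperatures chosen arbitrarily by me. Above the critical temperature there is a single free-energy minimum; below it there are two, and a system sitting in the shallower one is precisely the kind of "metastable" state discussed next.

```python
import math

# Mean-field Ising free energy per spin, f(m) = -(J*z/2)*m^2 - h*m - T*s(m),
# with k_B = 1. m is the average "state" of the bistate units.
J_z = 1.0   # coupling strength times coordination number (arbitrary)
h = 0.05    # small external bias field (arbitrary)

def free_energy(m: float, T: float) -> float:
    p_up, p_down = (1 + m) / 2, (1 - m) / 2
    entropy = -sum(p * math.log(p) for p in (p_up, p_down) if p > 0)
    return -0.5 * J_z * m * m - h * m - T * entropy

def local_minima(T: float, n: int = 2000):
    """Return the values of m where f(m) has a local minimum (grid search)."""
    ms = [-1 + 2 * i / n for i in range(1, n)]
    fs = [free_energy(m, T) for m in ms]
    return [round(ms[i], 3) for i in range(1, n - 2)
            if fs[i] < fs[i - 1] and fs[i] < fs[i + 1]]

print("above T_c:", local_minima(T=1.5))   # one minimum: a single equilibrium
print("below T_c:", local_minima(T=0.5))   # two minima: one stable, one metastable
```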
Should we be able to apply spin glass theory (or any similar interpretation) to the economic system at large, we'd have access to a mathematical formalism that DOES describe what just happened: A transition from a highly non-equilibrium but "metastable" state to a true "free energy minimum" (or real equilibrium state).
In short, we have been -- over the past several years -- in a highly "metastable" state. We have reached the limits at which that metastability could persist. (Sometimes fluctuations are all that it takes; sometimes pushing the metastability too far induces the transition.)
We have had a "cascade" in which we have gone from the previous highly metastable state of expanded housing credit to one that is more realistic.
The next questions we should ask ourselves are:
1) Is this new state of economic affairs really the "stable" free energy minimum state, or have we just reached a "temporary" minimum -- and are further cascades in order? (For those watching the credit crisis, this is a realistic concern), and
2) If we can apply this kind of free-energy model -- this time backing off the assumption that we are at, or will be at, a free energy equilibrium -- what exactly are we modeling? How are we modeling it? In essence, what meaning are we ascribing to the variables in the system?
In short, we are trying to solve a word problem.
As most of us who have taken (or taught) algebra will recall, the tough part is not so much working out the equation. It is figuring out what the variables are, and how they relate to each other. It is figuring out the meaning of what we are modeling. What is important, what is not?
Enough for today. When I pick up again, two of the topics I'll address will be the remainder of Beinhocker's book (Complex Adaptive Systems, or CAS), and -- what might be more important -- how we can start selecting our independent and dependent variables, and making a useful model of the economic system.
Endnotes:
Patents Awarded:
1. System and Method for Evidence Accumulation and Hypothesis Generation, A.J. Maren et al., USPTO 7,421,419, granted 2008; filed April 12, 2006, full patent claims text for Evidence Accumulation and Hypothesis Generation
2. System and Method for Predictive Analysis and Predictive Analysis Markup Language, A.J. Maren, USPTO 7,389,282, granted 2008; filed Nov. 02, 2005; full claims text for Predictive Analysis and Predictive Analysis Markup Language patent.
3. Knowledge Discovery Method with Utility Functions and Feedback Loops, A.J. Maren & S. Campbell, USPTO 7,333,997, Feb. 19, 2008
4. Sensor Fusion Apparatus and Method, A.J. Maren et al., USPTO 5,850,625, Dec. 15, 1998; filed 03/13/1997. Assignee: Accurate Automation Corporation.
Abstract: "The invented apparatus fuses two or more sensor signals to generate a fused signal with an improved confidence of target existence and position. The invented apparatus includes gain, control and fusion units, and can also include an integration unit. The integration unit receives signals generated by two or more sensors, and generates integrated signals based on the sensor signals. The integration unit performs temporal and weighted spatial integration of the sensor signals, to generate respective sets of integrated signals supplied to the gain control and fusion units. The gain control unit uses a preprogrammed function to map the integrated signals to an output signal that is scaled to generate a gain signal supplied to the fusion unit. The fusion unit uses a preprogrammed function to map its received integrated signals and the gain signal, to a fused signal that is the output of the invented apparatus. The weighted spatial integration increases the fused signal's sensitivity to near detections and suppresses response to detections relatively distant in space and time, from a detection of interest. The gain control and fusion functions likewise suppress the fused signal's response to low-level signals, but enhances response to high-level signals. In addition, the gain signal is generated from signals integrated over broad limits so that, if a detection occurred near in space or time to a detection of interest, the gain signal will cause the fused signal to be more sensitive to the level of the detection of interest."
Patents Pending:
5. Knowledge Discovery System, A.J. Maren et al., USPTO Application (patent pending) 20050278362.
Abstract: "A knowledge discovery apparatus and method that extracts both specifically desired as well as pertinent and relevant information to query from a corpus of multiple elements that can be structured, unstructured, and/or semi-structured, along with imagery, video, speech, and other forms of data representation, to generate a set of outputs with a confidence metric applied to the match of the output against the query. The invented apparatus includes a multi-level architecture, along with one or more feedback loop(s) from any level n to any lower level n−1 so that a user can control the output of this knowledge discovery method via providing inputs to the utility function."
6. System for Hypothesis Generation, A.J. Maren, USPTO Application Number 20070156720, filed 08/31/2006; published on 07/05/2007.
Abstract: "A system for performing hypothesis generation is provided. An extraction processor extracts an entity from a data set. An association processor associates the extracted entity with a set of reference entities to obtain a potential association wherein the potential association between the extracted entity and the set of reference entities is described using a vector-based belief-value-set. A threshold processor determines whether a set of belief values of the vector-based belief-value-set exceed a predetermined threshold. If the belief values exceed a predetermined threshold the threshold processor adopts the association."
Related Work: (collection will increase over time)
Principal Investigator or Co-PI on seven Phase 1 SBIR / STTR contracts (DoD/NSF); PI on four Phase II SBIR/STTR contracts.
-- 107. Intelligent Agents Using Situation Assessment (Report Abstract), A.J. Maren & R.M. Akita, Phase 1 SBIR for the National Science Foundation.
Very brief CV: Alianna Maren, Chief Scientist & Founder, Viziant Corporation