Some mathematical facts are true for no reason. They are accidental, lacking any cause or deeper meaning. This appears to be an attribute of any dynamical system we try to model mathematically, and likewise of the facts of the chaotic environment we call nature.

In our quest for understanding, we are becoming more and more aware of how hard it is to find certainty in a world overladen with information. It isn’t that there is too much information; it is the simple fact that we humans will never reach a point of understanding it all.

Seeking patterns, forming algorithms and measuring their effects is a futile endeavor. Gödel’s incompleteness theorems, Shannon’s mathematical theory of communication and Chaitin’s work tracing incompleteness into randomness all show that, observing the universe as it is, it is impossible to arrive at an algorithm, a compression method or an encoding scheme that predicts the next move or event.

Yet we humans are drawn to patterns willy-nilly. We invent elaborate tools and mine past data with a toothpick in order to predict and sniff out tendencies in the financial markets and social sciences, all so that we can make our next buck. A case in point:

Mirghaemi spent two years using Bayesian techniques to study how European bond markets responded to 3,077 separate releases of economic data between 2007 and 2008. She studied 1.6 million bond trades and figured out which pieces of news moved the markets more, and which ones analysts and traders were more likely to forecast poorly. “It made my eyesight like a double,” she said. But Mirghaemi’s research should now, in theory, allow traders, and trading algorithms, to position themselves better on an hour-by-hour basis. “It definitely makes money,” she said.

How much information is really in 3,077 separate releases of economic data and 1.6 million bond trades? These data, we could argue, are points in time, a sequence of events that could not be predicted – hence her research. Can this sequence of events be truly random? Mathematically, it is impossible to prove that a number is random. So physicists and mathematicians alike have relied on proving the opposite: that a number or sequence of events N is interesting, i.e. non-random, by finding an algorithm for N.

Even though most positive integers are uninteresting (i.e. random), you can almost never prove it; interestingness can be proved only in a small number of cases. So most positive integers are uninteresting, or algorithmically incompressible, but for any individual one you can almost never be sure, even though it is overwhelmingly likely.
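The idea of incompressibility can be sketched in code. True algorithmic (Kolmogorov) complexity is uncomputable, so this sketch uses Python's zlib, a general-purpose compressor, as a crude stand-in: a structured sequence compresses far below its original size, while a pseudo-random one barely compresses at all. The sequences here are invented for illustration.

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower means more structure."""
    return len(zlib.compress(data)) / len(data)

# A highly redundant sequence: an obvious algorithm ("repeat 'ab'") describes it.
structured = b"ab" * 5000

# A pseudo-random sequence: no short description, so it resists compression.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))

print(compression_ratio(structured))  # far below 1: compressible, hence "interesting"
print(compression_ratio(noisy))       # near (or above) 1: effectively incompressible
```

Failing to compress with one particular compressor proves nothing, of course; it only fails to exhibit an algorithm, which is precisely the asymmetry described above.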

I can imagine one could try to prove that a data point is random by brute force: implement every known algorithm on a computer and test each one against these data sets. But what you would get is algorithms testing other algorithms. A paradox. A recursive, self-looping knot.

But we are drawn to these 1.6 million bond trades because they do not seem to exhibit total randomness. If they were redundant, meaning all trades had the same attributes, we would again claim that no information exists among the numbers. As Shannon showed, information is surprise. It is when the next occurrence in a sequence of events comes as a surprise that we find value and information.
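Shannon's notion of information as surprise can be made concrete with a small sketch: the empirical entropy of a sequence is the average surprise per event, where an event occurring with probability p carries log2(1/p) bits. A fully redundant sequence carries zero bits; a varied one carries more. The toy sequences below are invented for illustration.

```python
import math
from collections import Counter

def shannon_entropy(events) -> float:
    """Empirical Shannon entropy in bits: the average surprise per event,
    where an event with probability p = c/n carries log2(1/p) bits."""
    counts = Counter(events)
    n = len(events)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

redundant = ["up"] * 100                    # every event identical: no surprise
mixed = ["up", "down", "flat", "up"] * 25   # some variety: some surprise

print(shannon_entropy(redundant))  # 0.0 bits
print(shannon_entropy(mixed))      # 1.5 bits
```

The redundant sequence is perfectly predictable and therefore informationless; the mixed one surprises just often enough to carry information, which is exactly what makes a data set like the bond trades seductive.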

The 1.6 million trades, this data set, sits at neither extreme: it is neither random nor redundant. It therefore does carry information, which is exactly why we are meticulously trying to invent an algorithm that can replicate its sequence. A futile undertaking.

For the problem is that we live in a nonlinear, complex dynamical system, and it is full of irrational agents – us. Finding a rational algorithm to predict an irrational dynamical system is like asking God to rig the dice when there is no God.