Friday, February 25, 2011
The Greatest Unsolved Problems
In 2000, the Clay Mathematics Institute published a list of seven unsolved problems in mathematics called the Millennium Prize Problems. There is a prize of US$1,000,000 for solving each problem. One of these problems (the Poincaré conjecture) has already been solved. In 1900, David Hilbert published a list of 23 unsolved problems in mathematics. Looking at the status of these problems on Wikipedia, only five of them remain unresolved.
Although there are many unsolved problems in science and mathematics, we have been making steady progress in resolving them. Fermat's Last Theorem was conjectured in 1637 and solved in 1995. The four color theorem was stated in 1852 and proven in 1976. Throughout history we seem to be systematically accumulating scientific knowledge.
I have compiled a list of what I consider to be the five most important unsolved problems. Ranking the importance of problems is of course subjective. Biologists would probably be biased towards biology problems and physicists towards physics problems. I have tried to avoid being too biased towards computer science problems. Here is the list, ranked from most important to least:
1) Theory of Everything:
The Theory of Everything (TOE) is the most important unsolved problem in physics. As the name suggests, this theory would fully explain all known physical phenomena in the universe. The complexity of the universe does not necessarily mean that a TOE needs to be complex. As chaos theory and cellular automata show, a set of extremely simple rules can lead to incredible complexity. Albert Einstein spent the last few decades of his life searching for a TOE (which he referred to as a unified field theory). He failed.
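To make the point about simple rules concrete, here is a minimal sketch (my own toy example, not taken from any particular physics proposal) of Wolfram's Rule 30 cellular automaton in Python: each cell's next state depends only on itself and its two neighbours, yet the pattern that grows from a single live cell looks remarkably irregular.

```python
# Toy sketch: Wolfram's Rule 30 elementary cellular automaton.
# Each cell's next state is determined by a 3-bit neighbourhood (left, self, right),
# yet the pattern grown from a single live cell looks highly irregular.
RULE = 30
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start from a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else " " for c in cells))
    cells = [
        (RULE >> (4 * cells[(i - 1) % WIDTH] + 2 * cells[i] + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```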
Einstein's general theory of relativity explains the universe at large scales. So far, all experimental evidence confirms general relativity. Quantum mechanics explains the universe at the scale of subatomic particles. Once again, all experimental evidence confirms quantum mechanics. Unfortunately, these two theories cannot both be right: in extreme conditions like black holes and the Big Bang, they seem to contradict each other. String theory is a promising candidate for a TOE because it offers a way to reconcile general relativity with quantum mechanics. However, string theory has so far been unable to produce testable experimental predictions.
2) Intelligence:
There are several definitions of intelligence. When comparing the intelligence of animals, most people agree that humans are more intelligent than dogs and dogs are more intelligent than goldfish. How can we quantify intelligence? Several tests have been created that attempt to measure and compare human intelligence (such as the IQ test). Some of these tests are based directly on pattern matching and prediction skills.
Based on my research in the field of data compression, I have my own definition of intelligence that is easily quantifiable. Intelligence can be measured by the cross-entropy rate of a predictive compression algorithm on multidimensional sequence data. This basically means that an algorithm that is better at predicting temporal patterns in data is more intelligent. The human brain is extremely good at recognising and predicting patterns in the massively parallel sensory data it processes.
Most data compression algorithms work on one-dimensional sequence data. For example, text is a one-dimensional sequence of characters. Studies have attempted to measure the cross-entropy rate of humans predicting the next character in natural language text; the estimates range from 0.6 to 1.3 bits per character. The best text compression algorithms (such as PAQ8) are starting to approach the upper end of this range. Unfortunately, compressing one-dimensional data is easy compared to high-dimensional data. In my opinion, any compression algorithm that could approach the predictive capability of the human brain on billions of parallel inputs would be truly intelligent. The most promising framework I have seen for explaining the algorithm behind human intelligence is hierarchical temporal memory.
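As a concrete (and deliberately crude) illustration of what "cross-entropy rate" means here, the sketch below scores a simple adaptive order-2 character model on a text string, in bits per character. The model and its smoothing are assumptions for illustration only; real compressors like PAQ8 mix many far more sophisticated models.

```python
# Crude sketch: cross-entropy rate (bits per character) of an adaptive order-2
# character model. This illustrates the measurement, not the PAQ8 algorithm.
import math
from collections import defaultdict

def cross_entropy_rate(text, order=2):
    """Average number of bits spent predicting each next character."""
    counts = defaultdict(lambda: defaultdict(int))  # context -> next-char counts
    alphabet = sorted(set(text))                    # alphabet assumed known in advance
    total_bits = 0.0
    for i, ch in enumerate(text):
        context = text[max(0, i - order):i]
        seen = counts[context]
        # Laplace-smoothed probability assigned to the character that actually occurs.
        p = (seen[ch] + 1) / (sum(seen.values()) + len(alphabet))
        total_bits += -math.log2(p)
        seen[ch] += 1                               # update the model after predicting
    return total_bits / len(text)

sample = "the quick brown fox jumps over the lazy dog " * 200
print(f"{cross_entropy_rate(sample):.2f} bits per character")
```

On real English text this kind of low-order model typically lands well above the human range quoted above; closing that gap is exactly what the strongest compressors are fighting for.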
If humans are so intelligent, what is the point of making intelligent machines? Well, if we could design an algorithm that becomes more intelligent just by giving it more computational resources, we could make it arbitrarily intelligent. If it could surpass human intelligence, there could be a singularity in which each AI designs a new AI that is even more intelligent than itself (ad infinitum). Achieving this would allow AI to solve the other four open problems on this list (assuming they are solvable!).
3) Dark Energy and Dark Matter:
Observations indicate that the universe is expanding at an accelerating rate. This is surprising because the mass in the universe should cause gravity to slow the expansion. Dark energy is currently the most widely accepted explanation for this acceleration. Not much is known about dark energy beyond the fact that, judging by its effect on the expansion, it makes up about 74% of the energy content of the universe.
So if 74% of the universe is made up of dark energy, you might assume the rest is the visible universe. Wrong. Ordinary, visible matter accounts for only about 4%. The other 22% is known as dark matter. Once again, not very much is known about dark matter. Advances in our understanding of dark energy and dark matter would provide insight into the nature of our universe and its eventual fate.
4) One-way Functions:
Any computer scientist reading this post might be surprised that the P = NP problem is not on my list. As Scott Aaronson points out, solving the P = NP problem could have a huge impact:
If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps," no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...

The problem is that most people already assume that P ≠ NP. Although a proof of P ≠ NP would have some theoretical value, it would mostly confirm what is already believed, so it would not have a large practical impact. That is why the problem got bumped off my list.
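To see the gap Aaronson is describing, here is a toy sketch (my own example, using the subset-sum problem) of the asymmetry at the heart of P vs NP: verifying a claimed solution is quick, but the only general method we know for finding one is a brute-force search over exponentially many subsets.

```python
# Toy sketch of the P vs NP asymmetry using subset sum.
# Verifying a claimed solution is polynomial-time; the brute-force search below
# examines up to 2^n subsets. The numbers are arbitrary illustrative values.
from itertools import combinations

def verify(numbers, target, subset):
    """Fast check: is `subset` drawn from `numbers` and does it sum to `target`?"""
    return set(subset) <= set(numbers) and sum(subset) == target

def solve(numbers, target):
    """Brute force: try every subset until one sums to `target` (exponential)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

numbers = [14, 7, 3, 22, 9, 41, 5]
witness = solve(numbers, 30)                  # slow in general
print(witness, verify(numbers, 30, witness))  # verification is cheap
```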
Whether one-way functions exist is another famous open problem in computer science. Resolving it, in either direction, would have a bigger impact than a proof that P ≠ NP. In fact, proving that one-way functions exist would in itself prove that P ≠ NP. And if one-way functions do not exist, then secure public key cryptography is impossible. Either outcome would be a useful result.
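To give a feel for what a one-way function would look like, here is a sketch of a classic candidate, modular exponentiation: computing it forward is cheap, while inverting it is the discrete logarithm problem, which is widely believed (but not proven) to be hard. The parameters below are toy values chosen purely for illustration and are far too small for real cryptography.

```python
# Sketch of a *candidate* one-way function: modular exponentiation.
# Forward direction: polynomial in the bit length of x (square-and-multiply).
# Inverse direction: the discrete logarithm problem, conjectured to be hard.
P = 2_147_483_647   # a Mersenne prime (2^31 - 1); far too small to be secure
G = 7               # base chosen for illustration only

def forward(x):
    """Easy: compute G^x mod P."""
    return pow(G, x, P)

def invert_by_search(y):
    """The generic attack shown here is brute force: exponential in the bit length of P."""
    for x in range(P):
        if pow(G, x, P) == y:
            return x
    return None

y = forward(123_456_789)     # fast
# invert_by_search(y)        # prohibitively slow even at this toy size
```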
5) Abiogenesis:
Abiogenesis is the study of the origin of life on Earth. Having little chemistry/biology background, I can't give a very good comparison of the different abiogenesis theories. However, I can appreciate the importance of understanding how life arose. Not only would it enhance our knowledge of life on Earth, it would also assist in the search for extraterrestrial life.