ever since ants started turning out near-optimal solutions to the traveling salesman problem - beating humans to it by a margin of a few million years (and counting) - it's been a little awkward in the sciences.
man's slightly clunky love affair with computation was based on a messy and very human misrecognition of its partner. things that came hard to us - long algorithmic computation, advanced logical manipulation, keeping track of dentist appointments - computers were able to do reliably, gracefully and fast. so (not-entirely-grounded projections being the main business of the brain), of course we thought 'surely the next step, teaching computers to do the simple stuff like vision, language and object manipulation, will come pretty naturally. it's so damn easy!'
well. uhm. as it turns out. erm. as we (maybe all conscious creatures!?) are wont to do, we projected the hell out of our premises onto computers then proceeded to flail aimlessly for decades, consistently failing to figure out a way to set up a logical system that was complex enough to handle those seemingly simple, actually insanely incomprehensible, tasks.
but all that is about to change - and, sweetest of victories, it doesn't even require us to stop with all the failing! suppose you have a problem so bizarre, uncertain, or even unposeable - say, self-driving or computer vision - that you are pretty sure the only way to solve it would be by some miraculous stroke of computability genius. how do you solve it?
you don't! you set it up so it solves itself while you sit back with a caipirinha and a massive research grant, at most telling it from time to time where to go. this is entirely true. the new, hot, exciting and funky field of machine learning does exactly that.
in different ways, neural networks and evolutionary algorithms take a bunch of premises, mix them all up, define some measure of fitness - say, by showing the network pictures of cats and nudging its weights a little less wrong after every guess, which is what they call back-propagation, or by making a thousand tiny mutated copies of each racing algorithm and selecting the fastest, which is artificial evolution - and let the simulation run on, continually improving itself and drifting further and further from anything any human programmer could conceivably come up with. understanding the underlying nature and structure of the phenomenon so as to abstract it into a coherent whole, then modelling it under the guidance and control of abstract reasoning? naaah! let reality do the job for you.
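for the curious, the back-propagation half of that recipe fits in a few lines. a minimal sketch, assuming nothing but the standard library - the features (whisker_score, bark_score) and the data are entirely made up, and a single neuron stands in for a whole network:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# toy labelled data: (whisker_score, bark_score) -> 1 means cat
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.1, 0.9), 0), ((0.2, 0.8), 0)]

w1, w2, b = random.uniform(-1, 1), random.uniform(-1, 1), 0.0
lr = 1.0  # learning rate: how hard each nudge is

for _ in range(2000):
    for (x1, x2), y in data:
        guess = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (guess - y) * guess * (1 - guess)  # chain rule: the "back" part
        w1 -= lr * grad * x1   # nudge each weight downhill on the error
        w2 -= lr * grad * x2
        b -= lr * grad

# a very whiskery, non-barky input should now read as a cat
print(sigmoid(w1 * 0.95 + w2 * 0.05 + b) > 0.5)
```

no human ever tells it what a cat is; it just gets slightly less wrong a few thousand times in a row.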
this, incidentally, is exactly how nature herself prefers to code: throw a bunch of self-replicating, interaction-heavy, staggeringly complex, slightly mutable beings encoded in long strings of dna out into the world, and let the least lame die slower than the rest! shockingly, some 80% of the code you finally come up with using machine learning is garbled incomprehensible junk! how very much like dna.
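the least-lame-survives recipe is just as short. a minimal sketch in the spirit of dawkins' weasel program - the target string, population size and mutation rate are all arbitrary choices:

```python
import random

random.seed(42)

TARGET = "methinks it is like a weasel"   # any string works
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # how many characters already match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # each character has a small chance of being scrambled
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# start from random junk; each generation, make a thousand mutated
# copies of the reigning champion and keep the least lame
best = "".join(random.choice(ALPHABET) for _ in TARGET)
for gen in range(1000):
    copies = [best] + [mutate(best) for _ in range(1000)]
    best = max(copies, key=fitness)
    if best == TARGET:
        break

print(best)
```

nobody ever writes the target-finding logic; mutation plus selection stumbles onto it anyway, usually in well under a hundred generations.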
is this self-programming? not quite, but actually maybe, yup, perhaps in the same way that our learning is us self-programming without quite knowing what goes on in the brain.
so, amazing failures that we are as coders, we were still wise enough to take ourselves out of the equation in the name of progress. and even started making a very big fuss about it, throwing culture all sorts of curve balls about how big a step this is towards developing Artificial Intelligence and, who knows, maybe one day a less pathetic form of Mind.
what does all this tell us about the world? hastily grafted onto complex adaptive systems theory, the whole machine learning shebang opens up entire new vistas of exploration of the real. it sheds a bit of light on exactly what kind of lazy, far-looking, hormone-ridden creatures we really are.
first, it poses a bunch of very pertinent questions as to how it is that other complex structures come to be, and whether or not we can simulate them to try to get a better grasp on them. for instance, assuming one could just keep pumping more and more extropy into daisyworld or sugarscape, could you come up with humanlike institutions like money without ever coding for anything like subjectivity or meaning?
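daisyworld, for the record, is a real toy model (watson & lovelock, 1983): white daisies reflect sunlight and cool the planet, black ones absorb it and warm it, each grows best near 22.5 C, and a habitable temperature emerges without anyone coding for it. a stripped-down sketch - the constants follow the textbook version, but this is an illustration, not a faithful reproduction:

```python
SIGMA = 5.67e-8      # stefan-boltzmann constant
S = 917.0            # daisyworld's solar flux, W/m^2
ALB = {"white": 0.75, "black": 0.25, "bare": 0.50}
DEATH = 0.3          # daisy death rate
Q = 20.0             # local-temperature offset factor, degrees C

def beta(t_c):
    # parabolic growth window: peaks at 22.5 C, zero at 5 C and 40 C
    return max(1 - 0.003265 * (22.5 - t_c) ** 2, 0.0)

def run(luminosity, steps=20000, dt=0.01):
    w = b = 0.01  # seed populations of white and black daisies
    for _ in range(steps):
        bare = 1.0 - w - b
        A = ALB["white"] * w + ALB["black"] * b + ALB["bare"] * bare
        # planetary temperature from the energy balance, in celsius
        t_e = (S * luminosity * (1 - A) / SIGMA) ** 0.25 - 273.0
        t_w = Q * (A - ALB["white"]) + t_e   # white patches run cooler
        t_b = Q * (A - ALB["black"]) + t_e   # black patches run warmer
        w += dt * w * (bare * beta(t_w) - DEATH)
        b += dt * b * (bare * beta(t_b) - DEATH)
    return t_e, w, b

temp, w, b = run(1.2)
print(round(temp, 1), round(w, 2), round(b, 2))
```

crank the sun up to 1.2 times its nominal brightness and a bare rock would sit around 41 C; the daisies spread, raise the albedo and drag the planet back into the comfortable band - regulation nobody coded for.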
(the complex wet dream of course is that, given enough complexity, meaning itself just decides to pop up of its own accord like legendary brazilian patron saints. that's even supposed to be, according to pseudo-real-science, how meaning first emerges from the brain itself, which is what inspired the entire neural network boom)
second - and i hate to go all lacano-pseudokantian on you - now that machines are doing eerie things, i feel you gotta either split your reals or go back to the scientific revolution. half a real: meaning and everything ever interpreted. the other half: everything else - nuclear power, evolution, gravity, paper, butts.
maybe the most valuable lesson from complex systems machine learning is that it teaches us to see reality in ways that more clearly reveal its, and our, limitations. one of the problems is that it's really hard to rival the universe's parallel processing power - every atom a transistor, the ultimate infinite cellular automaton (citation needed). a possibly bigger problem, as always, is qualia: at what point do you have enough interconnected networks of computation that something like top-down self-referential self-representation just emerges? how much embodiment is needed for what we call consciousness? not that any engineer worthy of the name halts at the menace of philosophy. philosophy is us trying to ponder the whole imponderable, a neural network trying to dive right into the messy that made it come into being, where even humans now fear to tread.
jobless and ridiculously far away from the person i love the most (context is everything), the other night i dreamt i had a portable evolutionary algorithm - coded in python, because i'm far from any kind of pro! - that i carried around everywhere with me like a sidekick, and every time i answered a question or talked to someone it measured the context and the situation to the best of its ability, spawned billions of code-hypotheses on who i was, then back-propagated the shit out of whatever i had answered till eventually it came up with - you guessed it, edging a bit towards nightmare now, what else - it came up with ANOTHER ME!
it's soothing in a newtonian way to think that the dream was only a linear combination of kevin kelly, charlie brooker and andrew ng, but, given the complex inscrutable 80%-junk-like capabilities of the brain, i suspect the universe knows better.
but there's a kind of kind poetry to this, our first move in unleashing our own self-obsolescence. keeping in mind that we all of us are first and foremost a digestive tube regulated by emotion, planted smack in the middle of a lot of messy nonchaos and then expected to do our best at learning, coming up with better learning mechanisms than ourselves - if you look at it from a kind of alien transcendental pure consciousness viewpoint - doesn't seem all that far-fetched. maybe we can just go on with this, removing ourselves from the programming, and step aside for some proper-er consciousnesses to carry the torch. at this rate we'll have a talking, walking (unfeeling) robot brain before we've even figured out what meaning or experience are. goes to show how much to trust the untamed question-asking brain. in any case, machine learning is a game changer for all the games there are. i can't even remember what reality was supposed to be like before.