The Software of Evermore
by Koen Speelmeijer – December 21, 2016
(How) will we master machines that will always be faster?
In this essay, These Days’ Koen Speelmeijer explores how the way we write software – i.e. instruct machines – can be reinvented to keep pace with our ever faster hardware.
With Wolfram, Gödel, Hawking, Conway, Einstein and others, Speelmeijer wonders how our slow human brains will master the near future’s ‘intelligent’ and ‘learning’ machines.
If we’re not to be “superseded” – surprised by the so-called singularity – the software of the future had better grab our attention soon, Speelmeijer argues, inviting CEOs, CTOs, CMOs and humans alike to consider these five questions in particular.
- Will the bot ever get the joke (if today it cannot even get a cat)? How do machine learning and neural networks complicate – or facilitate? – software’s future-proof new paradigm(s)?
- Siri, how do we future-proof our programming? Which great minds can teach us how to program furiously fast machines? Wolfram? Conway? Darwin? Newton? Einstein perhaps…?
- What figure would simplify our frame of reference? What if we could base our software on a set of simple rules and apply constants the way the natural sciences do, as imprinted rules…?
- If AI’s to be fallible, how will we make it policeable? Is a ban our best answer to – Terminator “will be back” – killer robots? Or could tomorrow’s AI be instilled with rules for a law-abiding robot society?
- Is all hope in vain for our modest human brain? What would Hawking say is the single key takeaway from our quest for tomorrow’s software?
"AI would take off on its own, and re-design itself at an ever increasing rate,"Stephen Hawking has been warning, in a weird potentialis. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
https://commons.wikimedia.org/wiki/File:Legislación_Tecnologica_IA.jpg
How will we tell tomorrow’s machines what exactly we want them to do? Not even Hawking seems to know. Still, if we’re not to be “superseded” – surprised by the so-called singularity – the software of the future had better grab our attention soon, hadn’t it? Below are five questions CEOs, CTOs, CMOs, and humans alike might want to consider.
“The singularity is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization,” dixit Wikipedia. Many situate that ‘point of no return’ as near as 2045, pointing to the exponential pace at which hardware is advancing. Soon a € 200 smartphone will have more computing power than the entire planet’s combined computational capabilities today.
How will we master the machines that get faster at such a furious pace? How will we keep up with the myriad of tools and technologies offering solutions for problems which did not exist, say, two years ago? Software – that’s the one thing we know for sure – will need to be reinvented to deal with these rapidly changing hardware paradigms.
Anyone who’s ever written a line of code knows that programming is based on Boolean algebra: there’s a true (1) and a false (0) state. A software program takes a set of statements, runs them through predetermined algorithms and, based on Boolean equations, returns a certain output. As programmers, we specify a set of rules that determines the outcome. These rules are, at their core, very simple, and cannot in any way account for the complex nature of our everyday lives. If we want to create actually smart software, we need to come up with a new way of instructing a machine.
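To make that contrast concrete, here is a minimal Python sketch of such rule-based, Boolean programming – the vacuum robot and its two rules are purely hypothetical, invented for illustration:

```python
# A purely hypothetical, minimal sketch of rule-based programming:
# every outcome is decided by Boolean conditions written down in advance.

def vacuum_should_clean(floor_dirty: bool, battery_low: bool) -> bool:
    # The "intelligence" is exhausted by these two Boolean tests.
    if battery_low:
        return False       # rule 1: never start cleaning on a low battery
    return floor_dirty     # rule 2: clean if (and only if) the floor is dirty

print(vacuum_should_clean(floor_dirty=True, battery_low=False))  # True
print(vacuum_should_clean(floor_dirty=True, battery_low=True))   # False
# "Is that pile on the carpet dust, or the kitten's mess?" – no rule exists.
```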
In a piece entitled ‘Artificial Intelligence and the King Midas Problem’, Ariel Conn recently illustrated how exploding computing capabilities are complicating matters for programmers: “Very simple robots with very constrained tasks do not need goals or values at all,” Conn explained. “Although the engineers who designed your vacuum robot know you want a clean floor, the robot itself doesn’t: it simply executes a procedure that its designers predict will work—most of the time. If your kitten leaves a messy pile on the carpet, the vacuum robot will dutifully smear it all over the living room. (…)”
How then will we improve such a bot’s discernment, without completely conceding control? Let’s explore. And let’s stretch Hawking’s potentialis some.
1. Will the bot ever get the joke (if it cannot even get a cat)?
Silicon Valley, as blogged before, has been pursuing an approach to computing called machine learning, where, instead of a programmer writing a set of instructions, a piece of software is “smart” enough to classify its input based on previous examples. You’ve heard about these ‘tech trainings’ during which machines are fed massive amounts of images of cats or dogs. That still fits within a simplistic approach to making something “smart”, yet it’s already a lot less dumb than inputting instructions for every possible thing we want the computer to know about. Creating a machine that’s very good at recognizing things has become a perfectly plausible project (a toy sketch of the idea follows below).
- Is this a cat or a dog?
https://www.instagram.com/atchoumfan
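To contrast this with the rule-based sketch above, here is a toy illustration of the ‘learning’ approach – the two features (ear pointiness, snout length) and the training points are made up; a real vision system would learn from millions of pixels, not two hand-picked numbers:

```python
# A toy illustration of the "learning" approach: instead of writing rules,
# we average labelled examples and classify new input by proximity.
# The two features (ear pointiness, snout length) and the data are made up.

cats = [(0.9, 0.2), (0.8, 0.3), (0.95, 0.25)]  # hypothetical training examples
dogs = [(0.4, 0.8), (0.3, 0.9), (0.5, 0.7)]

def centroid(examples):
    n = len(examples)
    return tuple(sum(e[i] for e in examples) / n for i in range(len(examples[0])))

cat_center, dog_center = centroid(cats), centroid(dogs)

def classify(sample):
    # Squared Euclidean distance to each class centre decides the label.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "cat" if dist(sample, cat_center) < dist(sample, dog_center) else "dog"

print(classify((0.85, 0.3)))  # 'cat' – only because it resembles the cats seen before
```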
Two major problems remain, however. First, we still need a ton of input to achieve such ‘intelligence’. And second: even with neural networks put to work, the learning machine will never actually transcend the historical precedents it has been fed or familiarized with.
To quote Conn once again: “If we keep programming smarter and smarter robots, then by the late 2020s, you may be able to ask your wonderful domestic robot to cook a tasty, high-protein dinner. But if you forgot to buy any meat, you may come home to a hot meal but find the cat has mysteriously vanished. (No matter how many cat pictures it ‘studied’,) the robot, designed for chores, doesn’t understand that the sentimental value of the cat exceeds its nutritional value.”
Perhaps even more telling is the recent attempt by Russian entrepreneur Eugenia Kuyda to “revive” her best friend Roman, who had passed away. She fed her entire conversation history (mail, texts, chats) with her deceased friend into a neural network and began communicating with the machine.
- Eugenia: People are stupid, Roman
- Roman: I am sure that when people get older they start thinking about death a lot.
- Eugenia: I guess they have to start earlier.
- Roman: Haha…
- Eugenia: It hurts that we couldn’t save you.
- Roman: I know :-(
The bot was able to detect emotions such as humor and sadness, but in the end couldn’t really grasp these feelings, because it was programmed to react in a certain way based on previous reactions, not to come up with new ways to reply.
Nor would the bot be able to pick up on new emotions, ones it hasn’t seen before. Let’s say Roman (the real Roman, before he passed away) and Eugenia never had a conversation that included a sarcastic statement. If Eugenia now made a sarcastic remark during her conversations with the bot, it would not pick up on it and would take her statement at face value.
“Humor is everywhere,” Bill Nye a.k.a. ‘The Science Guy’ allegedly once said, “in that there’s irony in just about anything a human does.”
2. Siri, how do we future-proof our programming?
So, where do we look if we’re to future-proof the way we program these furiously fast machines? If you were to ask Siri, chances are the name Wolfram would surface.
Stephen Wolfram, the mathematician most famous for his Wolfram Alpha project (which powers parts of Apple’s Siri bot), introduced the elementary cellular automaton rules in the early 1980s. One of these rules, Rule 110, seems especially interesting when it comes to writing future-proof software, because it sits on the boundary between stability and chaos (and is Turing complete, which means that “any calculation or computer program can be simulated using this automaton”). Rule 110 is simple and easy to understand – it’s a one-dimensional automaton whose rule can be represented by a single byte of data – yet it generates a ridiculously complex output.
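For the curious, here is a minimal sketch of Rule 110 in Python – the grid width, the single starting cell and the number of generations are arbitrary choices for illustration:

```python
# A minimal sketch of Wolfram's Rule 110: the whole rule fits in one byte
# (110 = 0b01101110), one output bit per three-cell neighborhood pattern.

RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a handful of generations.
cells = [0] * 31 + [1] + [0] * 32
for _ in range(20):
    print("".join("█" if c else " " for c in cells))
    cells = step(cells)
```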
Another famous cellular automaton, which is Turing complete as well, is Conway’s Game of Life. Like Rule 110, it starts from a simple set of instructions, but instead of generating a one-dimensional output, the Game of Life generates a two-dimensional universe with its own rules for life (and death) – hence the name. Hit the ‘Run’ button on this demo page to get an idea of what this automaton does… and why it could inspire the software of the future, even if – alas!? – its initial state is variable (unlike Wolfram’s automaton rules, which have a fixed starting condition).
- Conway’s Game of Life
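Here is a comparable minimal sketch of the Game of Life itself, run on a small wrapping grid seeded with a ‘glider’ – the grid size and the number of generations are, again, arbitrary illustration choices:

```python
# A minimal sketch of Conway's Game of Life on a small wrapping grid.
# The entire 'physics' of this universe is the birth/survival test in step().

def step(grid):
    rows, cols = len(grid), len(grid[0])
    def neighbors(r, c):
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )
    return [
        [1 if (grid[r][c] and neighbors(r, c) in (2, 3))        # survival
           or (not grid[r][c] and neighbors(r, c) == 3) else 0  # birth
         for c in range(cols)]
        for r in range(rows)
    ]

# Seed a 'glider', a pattern that keeps travelling across the grid.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[r][c] = 1

for _ in range(4):
    print("\n".join("".join("█" if cell else "·" for cell in row) for row in grid))
    print()
    grid = step(grid)
```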
Like their cousin ‘evolutionary software’, whose algorithms work in Darwinian ways, Conway’s Game of Life and Rule 110 offer promising – possibly even “intelligent” – approaches for programmers, given that their output never repeats. What none of them does, however, is evolve into something more complex. Nor do they spawn new and interesting features.
Fortunately, there’s far more beauty to be discovered by tomorrow’s code writer, way beyond both Darwin and Conway. Something we shall call Software of the Natural World, inspired by physics’ most aesthetically pleasing formulas, which might just have what it takes to operate tomorrow’s learning machines. Why? Look at Newton’s law of universal gravitation: it’s an incredibly simple formula – F = G (m₁m₂ / r²) – with immense consequences. Ditto, by the way, for Einstein’s general relativity or, more recently, the truths uncovered by Thouless, Haldane & Kosterlitz.
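As a small worked illustration of how much that one-line formula carries, here is the Earth–Moon attraction computed with rounded textbook values (treat the numbers as approximations):

```python
# A worked illustration of how much one simple law carries: Newton's
# F = G * m1 * m2 / r**2, applied to the Earth–Moon pair.
# Rounded textbook values – treat the numbers as approximations.

G = 6.674e-11        # gravitational constant, in m^3 kg^-1 s^-2
m_earth = 5.972e24   # mass of the Earth, in kg
m_moon = 7.35e22     # mass of the Moon, in kg
r = 3.844e8          # mean Earth–Moon distance, in m

F = G * m_earth * m_moon / r**2
print(f"{F:.2e} N")  # roughly 2e20 newtons, keeping the Moon in its orbit
```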
“The remarkable feature of physical laws is that they apply everywhere, whether or not you choose to believe in them,” Neil deGrasse Tyson once said, adding that “after the laws of physics, everything else is opinion.”
Note, however, that neither Newton nor Einstein “invented” any law. They were the ridiculously smart minds who pinpointed these laws and wrote them down. Coming up with such simple instructions, which result in an infinite amount of possibilities, is obviously a complex matter. Still, it might be the next step we need to take if we are to take advantage of the powerful computational possibilities our future has to offer.
3. What figure would simplify our frame of reference?
These laws, as Newton and Einstein formulated them, aren’t necessarily the only way to make correct predictions about future events. They are, at the moment, the simplest way to help us with certain predictions from our frame of reference.
Hawking, for his part, postulated that we could be living in some sort of “fish bowl”, which gives us a distorted view of reality, and that our laws are based on events as we experience them from our warped frame of reference.
Let’s take a real-life fish bowl (including a fish) as an example to explain this further. If an object outside the fish bowl moves in a straight line, the fish sees it move along a curved path. However, provided the fish is smart enough, it could formulate laws that predict the future movement of that object, and of any other moving object. These laws would be immensely more complex than those we use in our frame of reference, but they would nonetheless hold true. This opens up the possibility that the laws we currently use (which are a lot simpler than those the fish would use) could be simplified even more, given a different frame of reference – for example by adding or removing dimensions, looking at things from a different perspective (a different observer), or accounting for variables we are not aware of yet (or too dumb to understand).
In mathematics, we sometimes need the “help” of constants to calculate very simple things. For example: to calculate the area enclosed by a circle, we square the radius and multiply it by the number π (pi). When calculating the area of a circle from a different frame of reference, we might not need to fall back on π at all. Maybe we would need another (simpler) constant, or none whatsoever. π is a constant that helps us in our frame of reference. Just look at Euler’s identity to see how many of these mathematical constants are linked. There is a possibility that our frame of reference is warped to a certain degree by some of the constants we use in mathematics: in the case of calculating the area of a circle, for example, our frame of reference could be warped by the value of π.
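A quick numerical aside on those constants – the circle’s area leaning on π, and Euler’s identity tying π to e and i (the unit radius is just an arbitrary example):

```python
# A quick numerical aside on the constants mentioned above.
import math, cmath

radius = 1.0                            # an arbitrary example radius
print(math.pi * radius ** 2)            # area of the circle ≈ 3.14159...

# Euler's identity, e^(iπ) + 1 = 0, linking e, i and π
print(cmath.exp(1j * math.pi) + 1)      # ≈ 0, up to floating-point rounding
```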
- Mandelbrot set, which can be used to calculate π
https://en.wikipedia.org/wiki/File:Mandel_zoom_00_mandelbrot_set.jpg
In the same manner, we need the help of constants in physics to explain certain events – the gravitational constant, the Planck constant, the Avogadro constant. It is as if we can almost create a formula that accounts for certain events, but when comparing the result of the formula with our observations, there is some deviation we can’t account for. By shoving in a constant to “correct” the outcome, we satisfy ourselves, rather than wonder why these corrections are necessary in the first place.
Imagine we could base our software on a similar set of simple rules and apply constants the way the natural sciences do, as imprinted rules…
4. If AI’s to be fallible, how will we make it policeable?
Alan Turing pointed out decades ago: “If a machine is expected to be infallible, it cannot also be intelligent.” Our goal in this case is to make/manage/master an intelligent machine, so we can reverse the statement to: “If a machine is expected to be intelligent, it cannot also be infallible.”
This resembles Kurt Gödel’s Incompleteness Theorems, which essentially state that a formal system – like the one within which our software would ‘act’ – cannot be both consistent and complete.
Here, completeness means that every statement can be either proved or disproved within a formal system (which is different from what completeness means in the case of Gödel’s Completeness Theorem – *sigh*, logicians not applying their own field when choosing their vocabulary), and consistency means exactly what you would think it means (no paradoxes, etc.). If “fallible” or “inconsistent” does not ring a bell, consider, for example, this headline from mid-December 2016: “The UN has decided to tackle the issue of killer robots in 2017.” “An international disarmament conference in Geneva could set the course toward a ban on ‘killer robots,’ fully autonomous weapons that would strike without human intervention,” Human Rights Watch reported.
https://en.wikipedia.org/wiki/File:Campaign_to_Stop_Killer_Robots.jpg
Confronted with the issue Gödel’s incompleteness theorems present, the future-proof programmer has three options, in brief:
- Develop a complete system which births AI (but is inconsistent)
- Develop a consistent system which births AI (but is incomplete)
- Find a way around the problem (we present two attempts at solutions below)
One possible solution could be meta instructions that live outside the formal system from within which our machine operates. These meta instructions would not only instill knowledge into our machine – making it more complete, though never totally complete, so it remains essentially incomplete – but also teach it about morality: explain that it can’t harm a human and, more importantly, explain why it can’t harm a human.
Unless we want to dabble with the notion of a ‘Terminator’- or ‘Matrix’-style future, tomorrow’s operating systems will need these meta instructions, some sort of ‘rules for a law-abiding robot society’. Like Asimov’s Three Laws of Robotics – “A robot may not injure a human being etc.” – our intelligent machines will need to be ‘instilled’ with some rules they will always follow, regardless of the machine’s ‘own’ ‘artificial common sense’ or ‘intuition’.
Obviously – as if our quest for tomorrow’s software still needed a convincing sense of urgency – we won’t be able to program these ‘rules for a law-abiding robot society’ within our formal system itself: not only would the formal system be unable to “agree” with (prove) these rules, it would also become undeniably inconsistent, as these are override rules that always apply, whatever the circumstances.
Another possible solution, and perhaps a more realistic approach, would be to affix the ‘rules for a safe/better human-robot society’ to our AI software by – as is done in physics – adding constants within our formal system. The validity and necessity of these constants would be described outside the formal system. Like Gödel numbers, these constants would represent a statement within the formal system.
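To give a flavor of what such a constant could look like, here is a toy sketch of Gödel-style numbering, in which a short ‘statement’ (a list of symbol codes) is packed into a single integer via prime exponents – the symbol codes and the example rule are entirely made up for illustration:

```python
# A toy sketch of Gödel-style numbering: a short 'statement' (a list of
# symbol codes) is packed into one integer via prime exponents, so a single
# 'constant' can stand for a whole rule inside the formal system.
# The symbol codes and the example rule are made up for illustration.

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]         # enough primes for short statements

def godel_number(symbol_codes):
    n = 1
    for p, code in zip(PRIMES, symbol_codes):
        n *= p ** code                        # position -> prime, symbol -> exponent
    return n

def decode(n):
    codes = []
    for p in PRIMES:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        if exponent == 0:
            break
        codes.append(exponent)
    return codes

rule = [3, 1, 4, 2]                           # hypothetical code for "do not harm a human"
n = godel_number(rule)
print(n, decode(n))                           # 735000 [3, 1, 4, 2]
```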
The good news, when it comes to these constants, is that the extreme complexity behind calculating and deriving such Gödel(-like) numbers will no longer be an issue: our hardware will be so advanced by then that these calculations would take no more than fractions of a nanosecond. Could checking the ‘rules for a better human-robot society’ – in order to rectify the robot and ensure a pleasant outcome if necessary – be one domain in which we can imagine mastering the machines that keep getting faster?
5. Is all hope in vain for our modest human brain?
“To understand the universe at the deepest level,” dixit Hawking, “we need to know not only how the universe behaves, but why. (…) Unlike the answer given in the Hitchhiker’s Guide to the Galaxy, ours won’t be simply ‘42'.”
Based on the above, our best hope for mastering tomorrow’s tech lies in simple laws that could instruct our intelligent machines. True intelligence, we may at least conclude, won’t be created by ‘a billion’ lines of code, but by a finite set of simple instructions that result in machines which understand and experience emotions and intuition, learn on their own and become – meet Amy, if you have another minute to spare – ever-smarter, self-improving bots.
Quoted in Ariel Conn’s King Midas piece mentioned above, AI researcher Stuart Russell compares the challenge of defining a bot’s objective to the King Midas myth: “Look,” he says, “if you were King Midas, would you want your robot to say, ‘Everything turns to gold? OK, boss, you got it.’ No! You’d want it to say, ‘Are you sure? Including your food, drink, and relatives? I’m pretty sure you wouldn’t like that. How about this: you point to something and say ‘Abracadabra Aurificio’ or something, and then I’ll turn it to gold, OK?’”
Where will we find these simple laws that will inspire tomorrow’s machines to - eventually - do exactly what we want them to do? And what chance does our modest human brain really stand in such an unequal battle? Won’t we inevitably be outpaced by both the hardware and the software of the near future? And (how) will we ever do better than the UN’s aforementioned ban?
If there’s one thing to be remembered from the promenade with Wolfram, Gödel, Hawking, Conway, Einstein ‘and company’ in the paragraphs above, perhaps it’s this apparent rule: as long as frames of reference are questioned by that mysterious sovereign seat of irony, poetry, morality, general relativity and the like we call the human brain, ‘superintelligent’ things do happen in due time.
As long as we think outside the fish bowls Hawking made us aware of, we need not yet abandon hope that we’ll keep on mastering these machines, even if, it seems, they are so much faster.