The year was 2081 - matt's LiveJournal
The singularity has not arrived (nor will it). [Feb. 19th, 2011|02:39 pm]
matt

I have not watched the computer named Watson beat its human rivals at Jeopardy, but I have been following the fringes of what has been reported on it, because that's a lot more interesting to me. A good number of the people around me have expressed quite a bit of interest in it, though, and because my social network is populated with a lot of creative writerly types and scientists and transhumanists of all stripes, this is perhaps not terribly surprising.

Occasionally I take some flak at work for making statements like “this can never happen” or, even more so, “this will never happen again”, because for most people, experientially, the fact that something happened makes it more likely that it will happen again. I apologize if opening with “the singularity can never happen” pisses you off; that's not my intent. My intent is a genuine desire to induce a question in your head: “why am I making this argument?”

I make the counterargument, though, not because I am against progress and technology (far from it) nor because I fear our machine overlords (I fear them even less than other overlords, and that's slim, too). I don't make the argument from a position of scarcity or constrained resource (although the scarcity/abundance paradigm will become more of a theme in upcoming blog posts). I'm going to make the argument from the approach that you can attempt to answer the wrong question and mistake a right answer for success.

I am being unduly influenced by the format of a recent blog post that I read, completely off topic, on the subject of marriage. You can read the article here: http://www.huffingtonpost.com/tracy-mcmillan/why-youre-not-married_b_822088.html. Go ahead- it's worth reading, not because of the content, but for the same reason that many people get tripped up (usually once) by the “St Ives” story/joke. You know: seven wives, seven sacks, seven cats. It's not a terribly funny joke to play, because it's only funny once. And “funny once” is a recurring theme in my latest reading nearer to the subject of this blog post, “The Moon is a Harsh Mistress”, by Heinlein. If you aren't reading that marriage post carefully, you might miss the part where the writer says, “I've been married three times” (read: to jerks). The whole point, I think, is that she's not providing you with a prescription, but trying to induce you to question what you think is sound- why you want what you want, and how “how you go about it” influences the outcome.

So in the same format as that blog post on why you're not married, I will explain why the singularity is not near. Not exhaustively, mind you, smarter people than me are working on this, and others have perfectly good arguments: Steven Pinker, for example, here: http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity. But I think I might have a different- not novel, but... distinctive, perhaps minority outlook on the subject, and as always, would appreciate feedback.

1. The Ship of Theseus, or why immortality is impossible. (Go ahead, google it if you're unfamiliar with the argument, or any other term that I use. I'm not trying to make you feel dumb. I've had to google a lot of this myself just to keep up with the rest of you! The fact that you can google it, and what you learn as a result, is part of my point.) I've had a few discussions lately in which the counterparty breathlessly ignored the second law of thermodynamics with unbounded whiggishness, in arguments both for and against their point. But the gist of my point here is that the strength of evolution is not that systems are constantly evolving to a higher, more ordered, more “evolved” state, but that there is a natural strength in diversity, because the rules of the game are constantly shifting. Sometimes you have to choose between having Theseus's ship and having a ship at all- or at least having the benefits that a ship provides.

The “great conservative mistake” is to assume that what is now (or was in the past) is “better”, and that life would be great if we could only have more of that, forever. Forever is a really long time, and goes up against the second law pretty hard. This is why the second mistake that gets made is one of the Ship of Theseus: once you start replacing worn-out or missing parts with new ones, you run the calculable risk that the new object is not the same as the original one.

You see this classically in agnatic succession: think it through rationally, and eventually, by chance, you risk a run of nothing but daughters, and your line falls. A 50/50 chance at every birth does not conclusively rule out always tossing heads, only statistically so, and statistics don't give a shit about you. So if you set out to preserve agnatic succession infinitely because you see it as the “right” system, there's always a possibility of failure- and, I would argue, more likely sooner than later. And so it goes for all systems. (There's that bold use of “all” or “never” again that gets me in trouble with my peers.)
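The coin-toss argument can be made concrete. Here's a minimal sketch in Python under an assumed toy model (each man has exactly two children, each independently a son with probability 1/2; the model and the function name are illustrative, not anything from the post):

```python
# Toy Galton-Watson model of agnatic succession: each man has exactly
# two children, each independently a son with probability 1/2, so the
# number of sons is Binomial(2, 0.5).
#
# q_n = probability the male line has died out within n generations.
# The branching-process recursion is q_{n+1} = E[q_n ** sons], which
# for Binomial(2, 0.5) works out to ((1 + q_n) / 2) ** 2.
def extinction_prob(generations: int) -> float:
    q = 0.0  # at generation zero the line exists, so it isn't extinct
    for _ in range(generations):
        q = ((1.0 + q) / 2.0) ** 2
    return q

# Extinction is never certain at any finite horizon, but it creeps
# toward certainty all the same.
print(extinction_prob(1))     # 0.25
print(extinction_prob(10))    # ~0.74
print(extinction_prob(1000))  # > 0.99
```

Which is exactly the “statistics don't care” point: any single generation can beat the odds, but the probability of the whole run surviving forever goes to zero.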

I work in a highly regulated industry, and currently I'm at the tail end of a multi-year project to replace one of our more complicated compliance systems. I sometimes think I get paid on the principle that every time someone says, “Gee, {old system name} used to do this just fine,” I get a nickel, and at the end of the week, I'm fairly compensated for my time. There's perhaps a desire for people in my position to be infinitely smart- to “just know” that when I replace a particular function, there are a multitude of unspoken ways in which they use that function, and to know (or anticipate) all of them. The argument is that if I were “more perfect”, the act of replacement would not be disruptive. But disruption is sometimes, arguably even often, the whole point, which brings me to

2. Smarter is not better. I really like the sort of human interest stories being spawned by Watson on Jeopardy: the teamwork, the guy who breaks down crying trying to keep the hardware up, etc. There's probably no denying that the people on Watson's team are really bright, but I think they have made a huge mistake that is often made in computational circles, indeed by most bright people, and that is treating being smart as an end in itself and not as a means.

I think this is an unspoken bias of modern society, one which we tend to get testy about. You know, “Brawndo is what plants crave”. We like smart people, but we hate them too. I think most people would like to be smarter, just not so much that they get picked on. Just like, in my experience, most people would like to be richer, but not so much that they can't refer to themselves as middle class and/or tout their humble origins. (I see a lot of fabulously wealthy people calling themselves middle class, and it bothers me- a good fraction of people walking around America today are unspeakably, filthy rich, and don't seem to notice it. I am guilty of this at times myself, though, so I tread cautiously.)

I don't really think most people care whether Watson wins or loses. It is, granted, only a matter of time: for any task that must have one correct answer, at some point the ability of another individual of variable brainpower and time to weed out the wrong answers will exceed that of any given individual of fixed brainpower and time. It's a joke that's really only funny once- look at how there hasn't been a grandmaster vs. computer chess rematch. This leads me to

3. It's not the answers to the questions, but the questions without answers. Also worded: it's not the what, but the how. I'm not referring to theology here, mind you. When I was in graduate school, I was very strongly influenced by Dr Barry Carpenter, who taught graduate physical organic chemistry from an entirely novel standpoint (to me, anyway). In most science classes there's always a right answer, and your success is typically governed by how many arcane facts you can stuff in your head. His approach was that once you get to the top of your field (and most of the people sitting around me were), there cease to be right answers, and if you keep questing after the absolute right answer you've missed the point. It's almost like a big cosmic joke- the monster at the end of the book is you, Grover.

His teaching approach was not to find right answers, but that the act of asking questions is what generates technology and advancement. It almost seems ironic, in that light, that Watson must form its answers *as* questions.

We addressed in our lectures very basic science questions that you would think have a clear and unambiguous answer: for example, is cyclobutadiene a square molecule, or a rectangular one? The problem is, how you answer the question determines the answer. For example, if you perform spectroscopy of a single molecule encased and frozen in a buckyball, to measure the bond length, the act of measurement might provide just enough energy to cause the bonds to flip, such that you measure two oscillating rectangles on average as a square. Or the act of freezing it in that buckyball might provide an environment that distorts or squishes the square such that it only appears to be a rectangle, because there's not enough room in the sphere for the unsquished molecule. Which brings me to

4. You can observe facts just because that's what you want to observe, not because they are truly facts. Observer bias is powerful, and all around us. Each of us has our pet stories- vaccines and autism, for example. Few people, though, turn that harsh mirror and lighting on themselves. It's compelling, and heartbreaking: “My child was doing ok, and then the vaccine, and then something bad happened, and I demand to know why.” But the very way you go about answering the question influences the outcome, as demonstrated.

In the case of Watson, I worry that the creators have lost their way, making the classic whiggish, transhumanist mistake of “better is better” without questioning why. I'm not saying the goal was to win Jeopardy; I'm saying that they wanted to demonstrate that their computer can understand innuendo- “natural language”- as well as any human, and that they believe the scientific test that will demonstrate this is the game of Jeopardy. But they're just borrowing this pre-engineered test for their own purposes, and perhaps not recognizing that the biases built into such a test might be misleading them in ways they cannot visualize- it's the act of discovering those biases that can lead to the real breakthroughs.

We like going to experts for right answers, but there aren't always answers, and sometimes the questions are the wrong ones. I worry about why they want a computer to recognize slang- it's an input mode that doesn't have a purpose yet. We like computers because they give exact answers- there's no ambiguity in math. So why, exactly, do you want them to be more ambiguous?

I don't think the smart guys at IBM, nor necessarily any of the talented and smart people that I find in all the “advanced” disciplines, have any particular advantage at this process. That's because

5. Sometimes dumb and ugly is happy. I wouldn't be true to the tone of the article I'm patterning this on if I didn't throw an offensive grenade, but I'm not trying to be heartless or insensitive on this one. I'm really bothered by the tone of programs like “Ingenious Minds”, but to the point: being super “successful”, rich, having all the answers... in short, always having “more” is not a guaranteed successful, satisfying strategy. “The one who dies with the most toys, wins” is, I think, a really dangerous, harmful philosophy, and it's even more ill-conceived when you add “...and then don't die”. I don't think we should be lusting after other people's talents in the mode of “if only I could be super good at math like Mr Autistic, just without all the seizures and hallucinations part”.

The point is that games like Jeopardy are set up to highlight individual achievement, even though Watson itself is far from an individual achievement. Nor, I suspect, is any individual who wins- the prodding parent, the attentive teacher, the mindful cafeteria worker- there's a lot that goes into the care and feeding of individuals, and to mistake an individual contribution for the entire success is to miss the point. There's a missing logical leap: that all we have to do is get computers to be able to do x, y, and z, and self-replicate, and then... profit? World domination? Paper clips? Underpants? The point is not the ends, the things, but the team, and when you forget that sense of balance, you've already failed. Because there is no ubermensch- not even a cyborg-enhanced one. The flaw in the ubermensch's domination, just like the classic villain's flaw, is that the very act of posing the problem, of revealing that villainous snicker, will generate new ways of thinking about the problem, and new attitudes, which will overcome it. Such that you should never be

6. Fighting the last war. I'm surrounded by a lot of really bright people on my programming team, and I see them fall into this trap all the time. For example, we get stung by an uninitialized variable, and then in every code review after that people try to attribute all the failures to uninitialized variables, even when the cause is something new, like a race condition. And then for every problem after that, we're looking for a potential race condition... on, and on.
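The two bug classes really are different in kind, which is why pattern-matching on the last one misleads. A minimal, hypothetical sketch in Python (the unlucky thread interleaving is written out by hand so the failure is deterministic rather than timing-dependent; nothing here is from an actual codebase):

```python
# A "lost update" race condition: two workers each read a shared
# counter, then each writes back (what it read) + 1. If both reads
# happen before either write, one increment is silently lost.
# Note there is no uninitialized variable anywhere -- it's a
# completely different bug class.
shared = 0

read_a = shared      # worker A reads 0
read_b = shared      # worker B also reads 0, before A writes back
shared = read_a + 1  # A writes 1
shared = read_b + 1  # B also writes 1 -- A's increment is lost

print(shared)  # 1, not the 2 that two increments "should" produce
```

In real concurrent code the interleaving is nondeterministic, which is exactly why a reviewer primed to hunt for uninitialized variables can stare straight past it.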

Now, the counterargument might be: so if I handle all the uninitialized variables, and all the race conditions, etc., at some point I'm more perfect, right? But that's not my point at all. Making a machine simulacrum of life- what's the point again? Netting together rat neurons to make a supercomputer- what's the point again? I'm struck by the meme going around about Bill O'Reilly's recent anti-atheistic tirade, where he says, “the tide goes in and out, never a miscommunication”. But I think both sides have missed the point. If the moon's gravitational oscillation is disrupted, it doesn't mean anything to either the believer or nonbeliever- it's just a situation. Whether you do or don't know it's the moon's influence, there's no miscommunication. All can participate in solutions to situations, and the skill doesn't always rest on being the one- sometimes it's recognizing who is the one, getting them on your team, and helping them be them. Sometimes it's about being the support system. Sometimes it's about being just offensive, loud and annoying enough (looking at you, Ermey) to motivate another to go over the top. And sometimes it's about taking a second to pray, even if there's no god to help you, because a moment of quiet, or a pause that lets the rock smash in front of you instead of racing underneath it, is what is really needed. You know, dumb luck. Don't rule it out in your quest for happiness.

People are always worried about someone coming in, taking over and messing it all up, because really- isn't that the last war that we're fighting? Against dictators, communists, imperialists, whatever? Isn't the ubercybermensch (or aliens, or Xenu, or whatever the neurosis) just exactly that last War?

There's no “point” or “meaning” to life, only what you bring to it. As such, I don't think machines will ever be better at it than us. Indeed, there may be a time when we recognize that peaceful coexistence with machines is the right solution- and we probably don't know if that day has already arrived! How do you know that machines haven't already decided that our addiction to new and better toys isn't perfectly conducive to their own self-preservation and improvement, and are just happy for the ride?

Comments:
From: mlerules
2011-06-13 05:30 pm (UTC)

TidBits that struck me lots:

I worry about why they want a computer to recognize slang- it's an input mode that doesn't have a purpose yet.

For some reason the latter bit strikes me as potentially arguably quite "pure," although the implications of the former (yr worry) seems to be quite the opposite (pure evil?).


How do you know that machines haven't already decided that our addiction to new and better toys isn't perfectly conducive to their own self-preservation and improvement, and are just happy for the ride?

Hee! So, inquiring minds (well, this one anyway) wonders whether you've read Michael Pollan's The Botany of Desire?