Failure by leaps and bounds

I was listening to another great seminar (I don’t remember which), and there was a side comment on evolution. Basically, nature can really only take incremental steps, building on and changing only what it already has. Making a huge evolutionary leap is hard. For example, wings evolved from feathered arms; nature didn’t just leap to birds.

And the same, I think, applies to products. One reason big, complex products fail is that the leap is too big, the number of changes too large, the scale of the complexity affecting everything. For example, space travel and nuclear reactors are complex endeavors that we leaped into, but they are plagued with difficulties, and, as with space, no mere mortal can do them. I think this is also part of the reason biotech is still struggling: it’s a jump into a complex product, even if that product is a single drug.

Furthermore, as I’ve been trying to stretch my mind 5,000 years into the future, I think such issues affect our predictions of the future. If you subscribe to the Vinge-Kurzweil view of the coming Singularity, then you believe in the linearity of technology. But just because the components have become bigger, better, and faster doesn’t mean the complexity scales in a manageable way. I think when we arrive at the time of the Singularity, the complexity will keep us from making the fantastic leaps that Vinge and Kurzweil believe in.*

*Oh, and then there’s the software to run it all. Heh.

1 Comment

  1. There’s a basic inconsistency here, which is that the Singularity cannot arise from linearity and incrementalism. It presumes that Strong AI is a “natural” outgrowth of Moore’s Law. There’s little if any actual evidence that this is so. Weak AI is clearly an outgrowth of Moore’s Law, but there’s no empirical reason to believe that Weak and Strong AI are the same species.
    http://rafer.wirelessink.com/?p=15
