Friday, April 25, 2008

An effort to seem intelligent



A mixture of nanotech, architecture, and nature, with a little ethical focus on the side.

I was recently talking to a friend of mine (an architecture student, big into the archy scene) who mentioned a couple of projects he was working on, which I'm not going to give away here, but it got me thinking about the possible overlap between certain disciplines. That, and the horrendous amount of sci-fi I've read.

Let's say that some time in the near future the problem of true von Neumann systems is cracked, giving the human race the power to manufacture at the molecular level. This would basically give us the capability to build practically anything, producing numerous paradigm shifts in several fields: from a shift to the rod-logic style of computing envisaged in Neal Stephenson's "The Diamond Age", to the biononics (nanodevices operating and interfacing at a deep biological level) shown in Peter F. Hamilton's "The Dreaming Void", to the more mundane efforts of reprogrammable clothing, and even way up to environmental scrubbers. There would also be major shifts in materials science, as materials manipulated at the molecular level display a whole host of new physical properties; at that scale, for example, the surface tension of a water droplet all of a sudden becomes a major factor.

The question then arises - if we suddenly arrive at a point where we have the ability to do, almost literally, anything, what is to stop people from doing it?

Now this isn't entirely a bad thing, as we would see an explosion in human creativity, vast new areas opening up for expansion, medicine advancing exponentially, and so on, as well as the ability to create almost any structure and have it be smart, self-maintaining, and reconfigurable at the drop of a hat. But there is also the flip side of this - what if some despot gets his grubby hands on it and decides to release a nanobot keyed to mangle DNA with certain characteristics, say blue eyes...

While the ethical issue has always been a bit of a laugh to most people, it will become more and more relevant as time goes by. As a species, large-scale ethics is not something we're particularly good at; the analogy I generally prefer is that if the human race were a single person, it would be a teenager battling through the latter stages of adolescence.

Now to tie it all in: at the beginning I mentioned nanotech, architecture, nature and ethics. Let's say, for example, that someone were to design a house, a free-standing, configurable house, that incorporated both nature and nanotech in its design. Let's say it was a two-storey edifice with living roots instead of foundations, which provided heat by geothermal tapping, water directly from the root system, and oxygen from plant life living in and around it; a wildly variegated ecosystem of, say, savannah flora and fauna (small, obviously) in one area, heat-gathering/shedding plants growing from the roof, a whole host of different ecologies co-existing symbiotically in the building. Such a building could even incorporate food production into its design. The nanotech within the structure would be responsible for managing, monitoring and maintaining such a diverse ecology, as well as for providing all the elements a technological civilisation has come to expect: communications, computing power, entertainment. It could even be intelligent, self-aware.

Such a house could very well be viewed as "smart", self-repairing, eco-friendly, etc... It could even be considered, in a very real sense, to be alive.

Yay, the pundits cry!! Utopia has been discovered!! But what happens if this smart, self-renewing, self-maintaining, self-aware edifice decides that it doesn't want anyone living in it? Do we declare this a travesty and force our view on it, or possibly worse, eradicate all traces of intelligence and self-awareness from it, thereby committing genocide? Numerous people, much cleverer than me, have debated the possibility of an AI conflict, and the various benign or apocalyptic outcomes thereof, so I'll keep my nose out of it, thank you.

My question instead is at what point do we take stock, sit down and make a rational and mature decision as a society that this is what we want to do? When do we take responsibility for what we are trying to do, and lay ethical ground rules for the creation, development and treatment of these new entities?

The sooner the better I would think.