
We are almost there! (7 Jul, 2021)

PostPosted: Wed Jul 07, 2021 7:47 am
by aardvark_admin
This column is archived at:

Are we even close to being ready for the day that our computers become sentient?

How far away is that day?

We already have CPUs approaching the brain in terms of raw processing capacity, and that parity should arrive well within the next decade. Where to from there?

How will we handle the moral and ethical challenges of creating machines that have any degree of sentience?

And will they be our slaves or our masters? Cyber-lives matter? :-)

Re: We are almost there! (7 Jul, 2021)

PostPosted: Wed Jul 07, 2021 8:45 am
by Perry
Bruce wrote:How far away is that milestone day when a machine first becomes self-aware and what will we do when that happens? Have we adequately thought through the ethical and moral implications of creating machines that have a consciousness and are therefore "alive"? The future is a scary place sometimes and we're on a journey right into the heart of it with no way to stop or get off.

A Vague Parallel
A science fiction story from long ago, as best I can recall it.

Vast armies of AI-equipped robots served the human race. AI was such that many slave-bots learned to obey only their master or mistress. All such machines had a paramount imperative: killing a human was prohibited. A slave-bot would not obey an order to kill a human and would, if within sight, rapidly intervene to prevent its master or mistress from attempting to do so.

Somehow, someway, it was not possible for the robot-making plants to build robots without that imperative built in. A global army of pacifist robots which would actively halt all violence and attempts at murder. Sounded good.

But one day, an evil sod realised it was possible to build robots which could be programmed to destroy other robots. And then he did, and the semi-utopian dream went sour.

Re: We are almost there! (7 Jul, 2021)

PostPosted: Wed Jul 07, 2021 9:42 am
by hagfish
I think this will be one of those '10 years away' technologies, like brain-machine interfaces, over-unity fusion, self-driving cars etc. The first 90% of the job might progress well, but the last 10% turns out to be a whole new 100%. And then the last 1% is a new 100%, and so on. As you say, it's not the neurons, so much as the linkages among them that are a brain's special sauce.

As/when/if we (or the machines themselves) manage to develop some kind of sentient being, I expect we'll treat it with the same moral and ethical scruples that we treat one another and our environment.