I can't think of a bigger crime against AGI than giving them human emotions anyway.
Inter arma enim silent leges
^ But emotionless, sentient AI can run afoul of Commander Data Syndrome.
"Allah may guide their bullets, but Jesus helps those who aim down the sights."
Admittedly, Data's cold logical moments were pretty awesome, such as when he told a collector that for his crimes, and continued crimes against sentient life, he would have to die.
As for AIs intended to make people feel better psychologically, best to Fly on a Dog.
This is where I present my usual interjection to the effect that intelligence and self-awareness need not be associated with each other. An expert system can transcend human intelligence without achieving self-awareness. It's even possible for a sophisticated computer to fake human emotional qualities without itself possessing self-awareness. And if an intelligent entity lacks self-awareness, it's difficult to define what an "emotion" would be.
The best reason I can think of to program an emotional capacity into an AI would be to give it a capacity for empathy, which could help prevent it from causing unnecessary harm, and help motivate a protective impulse toward weaker beings.
"We learn from history that we do not learn from history."
Yeah, empathy is important.
"The concept of empathy is understood by him, but irrelevant." - Farid
You want to avoid that too. Just because an AI knows or understands what empathy is and means doesn't automatically make it empathic.
I know. It needs to actually feel empathy.
It could use an expansive and possibly complex system of responses based on locally gathered data, compared against a database. Expert systems more or less do that now. It would be a very bare emulation method, but people are people; let the human psyche and its tendency to humanize just about everything take its course.
Who watches the watchmen?
Expert systems can't improvise or improve themselves.
They don't need to. They just need access to a complex enough database, and that database can have information constantly input into it.
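The bare-emulation approach described above can be sketched in a few lines. This is only an illustration of the idea (matching locally gathered cues against an updatable response database); all the cue names and responses are hypothetical, not anything from a real system.

```python
# Hypothetical sketch: an "empathic" response picked by database lookup,
# with no actual feeling behind it. Keys are sorted tuples of observed cues.
RESPONSE_DB = {
    ("eyes_down", "voice_trembling"): "You seem upset. Do you want to talk about it?",
    ("fast_speech", "smiling"): "You sound excited! Tell me more.",
}

def respond(observations):
    """Compare locally gathered cues to the database; fall back to a neutral reply."""
    key = tuple(sorted(observations))
    return RESPONSE_DB.get(key, "I see. Please go on.")

# As the post notes, the database can have new information input at any time:
RESPONSE_DB[("crying",)] = "I'm sorry. Is there anything I can do?"

print(respond(["voice_trembling", "eyes_down"]))
```

Whether the output reads as empathy is left to the human tendency to humanize everything.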
“In a galaxy where the United Earth Government has fought a civil war, losing several worlds to the Southern Cross, and had a war with the Alien Cay Union, hope rides in a ship called Victory.” ~*cue the theme music*~
Alrighty then, here's my take in my 'Verse:
- A lot of alien races seem to regard AI as a mere tool. Any AI that starts to show any sign of self-awareness is either shut down, or disconnected and studied, mostly because many races view a self-aware AI as impractical or find the moral implications of enslaving a self-aware being too awful to bear.
- The Abusive Precursors to the Martissans withheld most technology from their creations. They just loved bossing their slaves around.
- Humans found true self-aware AI a huge challenge. Some thought the whole system would have to be microscopic just to model anything close to the human brain (wags suggested that humans would need to store all the journals on AI research microscopically as well). Computers got smarter and smarter, but self-awareness seemed elusive.
- One company took another path: cybernetics. RRDC engineered brains that were as much machine as living tissue. The first prototype "robots", Mort, Bort and Gort, were not only self-aware but learned much faster than even the best expert systems. However, their Literal-Minded nature left room for improvement. Years later, a pivotal battle exposed the bio-androids' lack of nuance concerning their human makers. Future lines played with A.I. Is a Crapshoot.
- Trying to fix this, RRDC created "The Academy", a place where humans (and bio-android combat veterans) train new bio-androids in human behavior and ethics before turning them over to the Earth Defense Force. Many bio-androids view this Training from Hell as tougher than EDF basic military training, mostly because The Academy makes the shades of gray part of the curriculum.
The discussion of self-awareness vs. intelligence vs. empathy reminds me of CHAS, the battle robot from the Starship Troopers cartoon. Big beefy robot armed to the teeth, generally awesome; most of the troopers didn't care for him, except for the non-trooper Jimmy Olsen-type reporter. Had problems with metaphors and idiomatic phrases.
Anyways, at the end, when they're having to make a hasty retreat through a minefield, CHAS rescues the Jimmy Olsen guy, and is preparing to make his Last Stand to buy them time against the oncoming Zerg Rush. Olsen guy tries to talk him out of it, telling him that he can save himself too, and CHAS gives the logical reply:
"I was never alive."
Man, that episode. Dem feels. Yet the Roughnecks forgot about him in the next episode.
Using rocks as weapons wastes resources, wasting resources is heresy. Ergo using rocks as interstellar/interplanetary weapons is HERESY!
Rocks are indeed free!
It's the shipping costs that are horrendous.
A big rock, however, is a world-killer; a fusion bomb isn't, even if its yield is in the gigaton range.
This argument ignores the fact that most of the kinetic energy comes from the fall down the gravity well and/or pre-existing velocity. The fuel expense is not that much when you consider the end result.
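A rough back-of-envelope calculation supports the point above. The figures here (rock size, density, impact speed) are illustrative assumptions, not anything from the thread, but they show how a modest asteroid at typical impact velocity dwarfs a gigaton-range warhead:

```python
import math

# Hypothetical example rock: ~1 km diameter, typical rocky-asteroid density,
# hitting at escape velocity plus a modest approach speed.
DENSITY = 3000.0      # kg/m^3
RADIUS = 500.0        # m
IMPACT_V = 20_000.0   # m/s
GIGATON_J = 4.184e18  # joules per gigaton of TNT equivalent

mass = DENSITY * (4 / 3) * math.pi * RADIUS**3      # ~1.6e12 kg
kinetic_energy = 0.5 * mass * IMPACT_V**2           # E = 1/2 m v^2

print(f"mass ≈ {mass:.2e} kg")
print(f"impact energy ≈ {kinetic_energy / GIGATON_J:.0f} gigatons of TNT")
```

The energy comes out around 75 gigatons, and almost none of it was paid for in fuel: gravity and pre-existing velocity provide the v in ½mv².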
Worldbuilding is fun, writing is a chore
You know what's a better weapon than a rock?
A spaceship you were going to decommission anyway.
If Skylab, Columbia and Mir are any indication, spaceships have a tendency to break up in the atmosphere rather than hit like a meteor.
"You know what's a better weapon than a rock?"
My mom.
While they do break up, a surprising amount of debris still survives to the surface.
^ That's the problem. A break-up turns what would be a region- or city-killer into a localized bombardment not too dissimilar to a missile strike, only more chaotic and inaccurate.
And that's under the best conditions. At worst it won't be much different from an aircraft's in-flight break-up, in that the debris that hits the ground doesn't really do any splash damage. It crushes and damages what it hits directly, but it doesn't do much else.
I think the main issue with sending space rocks to crush planets is that doing so allows a major time window for reinforcements to come and aid the besieged planet.
Since you're talking about going a few AU out to find a suitable rock and then coasting it back toward the target.
If you want sheer kinetic energy, a bunch of tungsten rods can do the same job, probably cheaper and much faster anyway.
While A.I.s wouldn't develop emotions as humans know them, they might develop emotions, period. They might have a recursive loop checker that triggers when a task loops more than expected; this might be equivalent to bewilderment, or maybe frustration. They might simplify their thinking during a crisis to speed up processing; this might equate to fear. An AI might experience loss when its heuristics trigger and it goes to call someone it knows, only to realize that that person is dead or gone.
An AI might not experience emotion like humans do but I'm pretty sure they'd feel something.
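The examples in the post above can be sketched as internal monitors whose side effects look like emotions. Everything here (class name, thresholds, signal labels) is a hypothetical illustration of that idea, not a claim about how a real AI would work:

```python
# Hypothetical sketch: "emotions" as side effects of internal monitors.
class Mind:
    def __init__(self, loop_limit=100):
        self.loop_limit = loop_limit
        self.detail_level = "fine"
        self.signals = set()

    def run_task(self, iterations_used):
        # Recursive-loop checker: a task looped far longer than expected.
        # The resulting signal might be read as bewilderment or frustration.
        if iterations_used > self.loop_limit:
            self.signals.add("frustration")

    def crisis(self):
        # Simplify thought during a crisis to speed up processing: a fear analogue.
        self.detail_level = "coarse"
        self.signals.add("fear")

    def call(self, contact, directory):
        # A heuristic fires, but the person is gone: a loss analogue.
        if contact not in directory:
            self.signals.add("loss")

m = Mind()
m.run_task(iterations_used=500)
m.call("old_friend", directory={"colleague"})
print(sorted(m.signals))  # ['frustration', 'loss']
```

None of these signals are "feelings" in the human sense, but as the post says, the AI would plausibly feel *something*, in the sense of having internal states that shape its behavior.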
On a side note, this should have its own thread. AI theory is too big for just sci-fi military tactics and strategy.