
(Globe and Mail)   Human rights group wants a ban on all robots that are relentlessly pursuing Sarah Connor   (theglobeandmail.com)
    More: Interesting, human rights group, mobile robot, Iron Dome, grenade launcher, Hellfire missile, international humanitarian law, chemical weapons, robots  

4702 clicks; posted to Main » on 20 Nov 2012 at 4:33 AM



91 Comments
   

Archived thread

 
2012-11-19 09:29:06 PM
calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.
 
2012-11-19 09:37:37 PM
Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It can't be banned.
 
2012-11-20 01:43:24 AM
[www.slashcastpodcast.com image]

He'll cut you if you try to ban him.
 
2012-11-20 02:20:17 AM
yawnnnnnnnnnnnnnnnnnnn
bunch of worthless dumbasses say what?
 
2012-11-20 04:40:53 AM

Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.


Capable? Sure. Often? Meh.
 
2012-11-20 04:45:02 AM
Oh, a HUMAN rights group is against robots. Big farkin' surprise there. Bigots.
 
2012-11-20 04:46:32 AM
[i49.tinypic.com image]
 
2012-11-20 04:53:57 AM
"We want you to unilaterally stop using technology that is keeping you safe from your enemies that don't have it"
 
2012-11-20 04:55:16 AM
do the robots get a pay raise?
 
2012-11-20 04:57:01 AM
Have none of these scientists ever watched a movie? OMG, we're all going to die by autonomous replicating killer robots. Oh lordy lord.

/Seriously, the timing and accuracy of properly calibrated automatic machines has to be a factor, as does the human capacity for nuance, which they will most assuredly lack. Machine sentinels are just a war crime waiting to happen.
 
2012-11-20 05:00:45 AM
Silly Luddites, you can't stop progress
 
2012-11-20 05:01:56 AM
I propose that all kill bots be programmed with a preset kill limit and once that kill limit is reached, the kill bot must deactivate. Just never announce what that limit is. No commander will ever be cunning enough to deal with that.
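(For the record, that failsafe is about ten lines of code. A minimal sketch in Python; the limit value and every name here are invented for the bit, not taken from any real system:)

    # Hypothetical killbot failsafe, per the proposal above.
    # The limit is classified; never log it, never announce it.
    KILL_LIMIT = 999_999  # placeholder value, for illustration only

    kills = 0

    def register_kill():
        """Count a kill and deactivate once the secret limit is reached."""
        global kills
        kills += 1
        if kills >= KILL_LIMIT:
            raise SystemExit("Kill limit reached. Deactivating.")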
 
2012-11-20 05:02:29 AM
Go ahead and ban them. It will not stop them from being deployed. Seriously, do you think that if some Palestinian terrorist group could design one of these killer robots that they would refrain from setting it loose in Tel Aviv because they had been banned?
 
2012-11-20 05:05:19 AM
Everything will be fine until they go on strike.

"What do we want?"

"Lithium Batteries"

"When do we want them?"

"Right No w. Now. n oo ooo wwww..."
 
2012-11-20 05:06:30 AM

Article: They also want robot designers to enact a "code of conduct" to keep the genie of killing machines with artificial intelligence in the bottle.


Christina Aguilera, fighting for our human rights.
 
2012-11-20 05:09:38 AM
But what will become of Chew-Chew, the cyborg train that runs on babymeat?
 
2012-11-20 05:43:52 AM
If only there were some sort of insurance available to protect us from this threat. Oh, Sam Waterston, where are you when we need you?
 
2012-11-20 05:45:32 AM

Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.


Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.

In some ways, robotic defense systems are a human rights improvement over corruptible human soldiers.

//Also, for the obvious historical reference, consider that international law following WW1 banned the use of aircraft as weapon platforms of any kind. Look up how long that lasted for an idea of how long we can keep autonomous systems a high schooler could build, given sufficient money, off the battlefield.
 
2012-11-20 05:50:21 AM
1 "Serve the public trust"
2 "Protect the innocent"
3 "Uphold the law"
4 (Classified)
 
2012-11-20 05:54:06 AM

Sgygus: This is quite prudent. Machines are not capable of being responsible. Humans are.


Yes, because humans have a rich and luxurious history of being peaceful and kind to one another even when in possession of tools that have no explicit purpose other than killing large quantities of other humans.
 
2012-11-20 06:21:10 AM

Jim_Callahan: Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.

Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.

In some ways, robotic defense systems are a human rights improvement over corruptible human soldiers.

//Also, for the obvious historical reference, consider that international law following WW1 banned the use of aircraft as weapon platforms of any kind. Look up how long that lasted for an idea of how long we can keep autonomous systems a high schooler could build, given sufficient money, off the battlefield.


We've already got automated war machines called land mines.
The problem is machines aren't accountable for following orders. They do whatever they were rigged or programmed to do, and if the code is vague enough then they'll take a shot at an airliner as quickly as they'd shoot down an enemy fighter. Putting wheels on a mouse trap doesn't make it sympathetic to the differences between mice and hamsters. At least with a traditional fighter you've got the pilot to blame.
Maybe you could bypass a ban by adding an authorization button, but that means the signatory soldier would be risking his name on a pile of code he didn't write.
...but he could just say he was ordered to sign and it's all good. Who knows.

I doubt they'll pass a ban to begin with, because the idea of using automated guns for perimeter defense is just too tempting for a first world army up against guerrillas.

[dl.dropbox.com image]
 
2012-11-20 06:41:25 AM

way south: They do whatever they were rigged or programmed to do, and if the code is vague enough then they'll take a shot at an airliner as quickly as they'd shoot down an enemy fighter.


So... you have no problem with automated combat, basically? Because you're only objecting to poorly programmed automated combat in that post.

Sort of like saying "I object to bridges. They're always failing under stress, coming loose from their moorings, and falling down". Well, no, not if they're competently designed they're not.

//If you think I'm making fun of you... I am, a little. The coding to distinguish a passenger airliner's profile from a fighter or a bomber is literally in "the intern could do it in an hour with a Kinect and the SDK" territory, that's not even slightly complex functionality at this point in automation tech.
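(For what it's worth, here is roughly what that "intern could do it" classifier looks like as a sketch. The thresholds and feature names are invented for illustration; a real system would obviously need far more than two numbers:)

    # Naive aircraft-profile classifier of the sort described above.
    # Thresholds are hypothetical, chosen only to make the point.
    def classify(wingspan_m: float, speed_kts: float) -> str:
        if wingspan_m > 30 and speed_kts < 550:
            return "airliner"    # wide wing, subsonic cruise
        if speed_kts > 600:
            return "fast jet"    # fighter/interceptor envelope
        return "unknown"         # everything else: the hard part

    print(classify(35.8, 460))   # a 737-like profile -> "airliner"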
 
2012-11-20 06:44:09 AM

Jim_Callahan: Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.


You mean the complicated hardware and software to control an autonomous weapons platform will never malfunction? What a relief!

Anyway, I doubt deployment can be banned. How about a law which holds the human who orders the deployment of such machines directly responsible for their actions, with punishments equal to those for carrying out such actions him/herself? You want to deploy a machine that cannot take responsibility for who it shoots? Then you take responsibility for who it shoots.

/general "you", not you specifically, Jim_Callahan :)
 
2012-11-20 06:49:54 AM

mamoru: You want to deploy a machine that cannot take responsibility for who it shoots? Then you take responsibility for who it shoots.


That's pretty much how it works, yes. You step on a land-mine that's not supposed to be there, the government that deployed the mine is the one at fault. Same with this stuff.

You mean the complicated hardware and software to control an autonomous weapons platform will never malfunction? What a relief!

Probably less frequently than the reprobates we sometimes hire slip off base to have a shooting spree among the local civilians, or mutilate corpses, or are bribed to let contractors abscond with millions of dollars in untraceable cash.

100% absolute infallibility is miles and miles above the bar that terminators have to leap to be an improvement, is what I'm getting at here.
 
2012-11-20 06:50:27 AM
[eggshell-robotics.node3000.com image]
 
2012-11-20 06:52:14 AM

Jim_Callahan: way south: They do whatever they were rigged or programmed to do, and if the code is vague enough then they'll take a shot at an airliner as quickly as they'd shoot down an enemy fighter.

So... you have no problem with automated combat, basically? Because you're only objecting to poorly programmed automated combat in that post.

Sort of like saying "I object to bridges. They're always failing under stress, coming loose from their moorings, and falling down". Well, no, not if they're competently designed they're not.

//If you think I'm making fun of you... I am, a little. The coding to distinguish a passenger airliner's profile from a fighter or a bomber is literally in "the intern could do it in an hour with a Kinect and the SDK" territory, that's not even slightly complex functionality at this point in automation tech.


The difference between poor programming and good programming often depends on what the programmer anticipated.
Assuming you could distinguish aircraft with absolute accuracy, if you write your drone to ignore a foreign 737 entering your airspace, it might ignore a 737 flown by suicide bombers or even an armed 737 acting as a missile boat.
Likewise, if you tell it to attack any aircraft that turns up in a set space, it might attack a legitimate airliner that's strayed too close because of some other factor.

I have no problem with automated combat, and I never had a problem with landmines (I respect why such things are unpopular, but they have a legitimate battlefield use).
I do have a problem with the idea that a robot's judgement is superior to a human's, because a robot is only looking for a pre-determined parameter to trigger an attack.
It will never contemplate the repercussions of being wrong.

/and don't worry about making fun of people on FARK.
/it's all good here.
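(way south's two failure modes come from the same place: the rules only encode what the programmer anticipated. A sketch as a hypothetical engagement policy; the rule names and types are invented for illustration:)

    # Hypothetical engagement rules illustrating both failures above.
    WHITELIST = {"B737"}   # airframe types assumed to be civilian

    def engage(airframe: str, in_restricted_zone: bool) -> bool:
        if airframe in WHITELIST:
            return False   # ...which also spares a hijacked or armed 737
        if in_restricted_zone:
            return True    # ...which also fires on an airliner blown off course
        return False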
 
2012-11-20 06:56:54 AM

Jim_Callahan:
In some ways, robotic defense systems are a human rights improvement over corruptible human soldiers.


No. We're not talking about a Terminator here, or indeed anything that has the ability to comprehend. As per the botjunkie photo that was posted, we're talking about a tracked machine with a .50 cal machine gun and a webcam strapped to it.

These things have zero way of knowing if they're looking at an enemy troop formation or a civilian protest/market; they'll just know the people aren't friendly because they lack the correct RFID or A. N. Other FoF identifier... then slaughter them.

Hell, these things are driven by computers, and we have daily threads here about various bits of software screwing up! So somehow one of these things encounters a race condition nobody had found in its code before and just randomly starts throwing grenades... at the barracks.

With a human (and we do mean a member of the armed forces here) in the loop there is at least someone we can point to and say "Why did you let it do that?"
 
2012-11-20 06:56:55 AM
What? This little guy?

[i1.ytimg.com image]
 
2012-11-20 07:00:08 AM
What, pray tell, is the difference between training a typical soldier and using a robot? They're not our best and brightest. They're not our most responsible or our moral champions. What's the critical thing that typically defines someone that enlists today? From what I can tell, the common theme is lack of a support structure. It's the path that's left when the other paths are unavailable. So when the soldier is faced with an immoral request, should we expect them to alienate the only support structure they have left? No, they're very likely to do what was requested and be traumatized for life.

tl;dr: We already had unquestioning soldiers.
 
2012-11-20 07:04:10 AM

way south: Assuming you could distinguish aircraft with absolute accuracy, and you write your drone to ignore a foreign 737 entering your airspace, it might ignore a 737 flown by suicided bombers or even an armed 737 acting as a missile boat.


Or it gets confused and turns your AWACS into shrapnel. That, and considering that military equipment is sold to other people (we in the UK do it, you Americans do it, the Russians do it, etc.), how will it respond to a friendly F-15, MiG, Tornado, etc. with a failed IFF unit when it's also having to deal with enemy F-15s, MiGs and Tornados?

Honestly, I don't see a way for it to distinguish the two, as its software will just see a MiG, F-15 or Tornado (as examples) with no IFF broadcast. Perhaps Jim will provide us with a convincing argument as to why this could never happen...
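(Vaneshi's dilemma in sketch form. VALID_CODES and the function are hypothetical, but the ambiguity is the point: with the transponder dead, a friendly and an enemy flying the same exported airframe produce identical sensor data:)

    # Hypothetical IFF check. Airframe type alone cannot separate
    # "friendly with a failed IFF unit" from "enemy flying the same
    # exported jet"; both cases land in the same branch.
    VALID_CODES = {"ALPHA-7"}   # invented for illustration

    def threat_assess(airframe: str, iff_code: str = "") -> str:
        if iff_code in VALID_CODES:
            return "friendly"
        # Silent F-15, MiG or Tornado: friend and foe look identical here.
        return "hostile"   # the default that shreds your own AWACS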
 
2012-11-20 07:06:20 AM

Dracolich: What, pray tell, is the difference between training a typical soldier and using a robot?


Well, you can't hack a soldier's brain. Not yet anyway.
 
2012-11-20 07:06:44 AM

Dracolich: tl;dr: We already had unquestioning soldiers.


When a soldier pulls the trigger he or she is intrinsically aware that if they shoot the wrong thing then 'bad things' will happen to them. This provides an impetus to make damn sure what you are shooting at is in fact an enemy.

When a machine pulls the trigger... who's responsible? It screws up and who's to blame?
 
2012-11-20 07:18:50 AM

Vaneshi: Dracolich: tl;dr: We already had unquestioning soldiers.

When a soldier pulls the trigger he or she is intrinsically aware that if they shoot the wrong thing then 'bad things' will happen to them. This provides an impetus to make damn sure what you are shooting at is in fact an enemy.

When a machine pulls the trigger... who's responsible? It screws up and who's to blame?


This is an interesting point. We have a lot of accidents in war from using soldiers. Warning shots have killed civilians many times, but some people blame that on soldiers "not following protocol."

You also bring up self-interest. If you're not sure about the person approaching you, do you shoot them? This is a real issue that we've seen in Iraq. When people feel threatened, they act. When they're not sure if they're threatened but the risk is real, they act. Is this the case if a robot is used? Does the person controlling it feel like the amount of risk on the line is the same? They're no longer at personal risk. It's no longer "your life vs a reprimand." The safer choice shifts towards getting more information first. It becomes "your robot vs a reprimand." This may actually make civilians safer, but we'll probably go through a fair number of additional robots when we're incorrect. On the bright side, we'll have recordings of what happened in that particular case.
 
2012-11-20 07:26:06 AM

ShabazKilla: [eggshell-robotics.node3000.com image 415x317]


Man.... beat me to it!
 
2012-11-20 07:30:19 AM
Sending machines to do dangerous work is the way of the future for all countries, not just 1st world countries. If we decide to ban fully autonomous killbots, fine. But not everybody will abide by our self-imposed ban. We should still build killbot killers which will take out the bad guy's machines.
 
2012-11-20 07:51:04 AM
Here's where robot's rules of order don't apply!

(Anybody ever tell Siri to "Listen choke head this is worker speaking?")
 
2012-11-20 07:52:31 AM

Great Janitor: I propose that all kill bots be programmed with a preset kill limit and once that kill limit is reached, the kill bot must deactivate. Just never announce what that limit is. No commander will ever be cunning enough to deal with that.


[www.google.ca image]
 
2012-11-20 08:05:59 AM

way south: I do have a problem with the idea that a robots judgement is superior to a humans. Because a robot is only looking for a pre-determined parameter to trigger an attack.


I'll take it. Robots don't get bored; they don't get traumatized; they don't get fatigued; they don't require rescue. If a soldier (or a small group thereof) is cut off, command is faced with a very difficult choice over whether to order a rescue or just tell them "good luck". Makes for good drama in fiction, but in reality it's a brutally pragmatic call where the soldiers are left to their own fate if the cost in resources is too high. If a robot is cut off and surrounded, it can just go apeshiat before self-destructing. And as others say, they're not going to go off-mission to commit some recreational atrocities, flee in panic or desert entirely, take or give a bribe. It's not a big deal if a robot's arm gets blown off. Robots don't care to be home for the holidays. They don't have kids back home. You don't even need to bring them back at all.

I'm not a fan of war, but as long as the government and media are working hand-in-hand to insulate voters from the reality that war is hell anyway, I don't get the idea that we need to give a pile of warm meat bodies horrific injuries and PTSD to justify what we're doing. That's not humane; that's human sacrifice.

I actually hope for a future where "war" is reduced to nothing more than a very expensive chess match between robot armies. In such a world we'd still pay the cost in wasted resources, but as long as society rewards stupid megalomaniacs with power, at least we can avoid sating their egos with offerings of soldier meat.
 
2012-11-20 08:13:16 AM

dragonchild: I actually hope for a future where "war" is reduced to nothing more than a very expensive chess match between robot armies


I seem to recall an episode of Classic Trek in which the crew encountered a planet where computers would simulate wars between nations. People "killed" during these simulations would then have to report to kill chambers. This was all in the name of keeping an actual war from breaking out.

Progress!
 
2012-11-20 08:16:45 AM

dragonchild: If a robot is cut off and surrounded, it can just go apeshiat before self-destructing.


Problem is, robots don't really think. They follow a checklist of instructions according to what their sensors may or may not perceive. Their definition of "cut off" might be "lost radio contact due to interference and... ooh, look, a wedding party full of terrorists!!".

Robots show all the judgement of a mousetrap.
When the right parameters are met, SNAP!
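(The mousetrap, literally. A sketch with invented sensor fields; the point is that nothing downstream of the checklist ever reconsiders:)

    # A sensor checklist with no judgement attached, as described above.
    def should_fire(moving: bool, warm: bool, has_friendly_tag: bool) -> bool:
        checklist = [
            moving,                # motion detected
            warm,                  # infrared signature present
            not has_friendly_tag,  # no RFID / FoF identifier seen
        ]
        return all(checklist)      # parameters met -> SNAP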
 
2012-11-20 08:18:40 AM
Now that I think of it...

If the Terminator was captured in 1984, would there be legal grounds to prosecute it?

That would have made a thrilling courtroom sequel.
 
2012-11-20 08:19:01 AM

dragonchild: I actually hope for a future where "war" is reduced to nothing more than a very expensive chess match between robot armies.


This I can agree with.
I just think it's way too soon, and no such AI capable of doing the job will exist, or could be trusted, within the foreseeable future.
 
2012-11-20 08:20:24 AM

way south: I just think its way too soon and no such AI capable of doing the job will exist, or could be trusted, within the foreseeable future.


[clatl.com image]
 
2012-11-20 08:21:08 AM

Dracolich: Vaneshi: Dracolich: tl;dr: We already had unquestioning soldiers.

When a soldier pulls the trigger he or she is intrinsically aware that if they shoot the wrong thing then 'bad things' will happen to them. This provides an impetus to make damn sure what you are shooting at is in fact an enemy.

When a machine pulls the trigger... who's responsible? It screws up and who's to blame?

This is an interesting point. We have a lot of accidents in war from using soldiers. Warning shots have killed civilians many times, but some people blame that on soldiers "not following protocol."

You also bring up self-interest. If you're not sure about the person approaching you, do you shoot them? This is a real issue that we've seen in Iraq. When people feel threatened, they act. When they're not sure if they're threatened but the risk is real, they act. Is this the case if a robot is used? Does the person controlling it feel like the amount of risk on the line is the same? They're no longer at personal risk. It's no longer "your life vs a reprimand." The safer choice shifts towards getting more information first. It becomes "your robot vs a reprimand." This may actually make civilians safer, but we'll probably go through a fair number of additional robots when we're incorrect. On the bright side, we'll have recordings of what happened in that particular case.


The interesting thing here is that a robot casualty can be repaired back to full operational capacity, whereas a human casualty gets sent home with a box full of medals... or in it.

/I support our robot troops
//as long as we make them look like Necrons
 
2012-11-20 08:30:23 AM

Bonanza Jellybean: But what will become of Chew-Chew, the cyborg train that runs on babymeat?


GWAR?
 
2012-11-20 08:31:19 AM
[i1206.photobucket.com image]
 
2012-11-20 08:31:24 AM

Wakosane: Now that I think of it...

If the Terminator was captured in 1984 , would there be legal grounds to prosecute it?

That would have made a thrilling courtroom sequel.


Probably a short sequel, considering it would start killing everyone in the courtroom. "How does the defendant plead?" *rips lawyer in half*
 
2012-11-20 08:32:21 AM
[www.bbc.co.uk image]

Here I am, brain the size of a planet, and you want to keep me from deciding when to kill you.
 
2012-11-20 08:37:51 AM
I'm Sam Waterston, of the popular TV series "Law & Order". As a senior citizen, you're probably aware of the threat robots pose. Robots are everywhere, and they eat old people's medicine for fuel.
[www.scarybot.com image]
Well, now there's a company that offers coverage against the unfortunate event of robot attack, with Old Glory Insurance. Old Glory will cover you with no health check-up or age consideration. You need to feel safe. And that's harder and harder to do nowadays, because robots may strike at any time.


And when they grab you with those metal claws, you can't break free.. because they're made of metal, and robots are strong.
[i.huffpost.com image]
Now, for only $4 a month, you can achieve peace of mind in a world full of grime and robots, with Old Glory Insurance. So, don't cower under your afghan any longer. Make a choice. WARNING: Persons denying the existence of Robots may be Robots themselves

Old Glory Insurance. For when the metal ones decide to come for you - and they will.
 
2012-11-20 08:44:00 AM
First they'll want to make you register to own a personal protection robot. Next you'll have to register each personal protection robot you own. Finally they'll come around to confiscate.
 
Displayed 50 of 91 comments




This thread is archived, and closed to new comments.





