
(Globe and Mail)   Human rights group wants a ban on all robots that are relentlessly pursuing Sarah Connor   (theglobeandmail.com)
    More: Interesting, human rights group, mobile robot, Iron Dome, grenade launcher, Hellfire missile, international humanitarian law, chemical weapons, robots  

4703 clicks; posted to Main on 20 Nov 2012 at 4:33 AM



91 Comments
   

Archived thread

 
2012-11-20 08:50:13 AM  

Vaneshi: Or it gets confused and turns your AWACS into shrapnel. That, and considering that military equipment is sold to other people (we in the UK do it, you Americans do it, the Russians do it, etc.), how will it respond to a friendly F-15, MiG, Tornado, etc. with a failed IFF unit when it's also having to deal with enemy F-15s, MiGs and Tornados?


The same way a human would respond would be my guess. These hypothetical robots would be using satellite-based communications and could be instructed to 'stand down' during non-standard conditions, the same way a meat-based soldier can be.
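A minimal sketch of that stand-down logic, assuming a hypothetical control loop in which a lost command link, an unverified IFF reading, or an explicit order each default the machine to holding fire (every name here is illustrative, not from any real system):

from enum import Enum, auto

class Posture(Enum):
    ENGAGE_PERMITTED = auto()
    STAND_DOWN = auto()

def decide_posture(link_alive: bool, iff_verified: bool, stand_down_ordered: bool) -> Posture:
    """Hypothetical rule: stand down unless every engagement condition is positively met."""
    if stand_down_ordered:    # explicit order from command over the satellite link
        return Posture.STAND_DOWN
    if not link_alive:        # lost contact: do nothing rather than guess
        return Posture.STAND_DOWN
    if not iff_verified:      # a failed IFF unit reads as "unknown", not as "enemy"
        return Posture.STAND_DOWN
    return Posture.ENGAGE_PERMITTED

Under this rule, Vaneshi's friendly aircraft with a failed IFF unit falls through to STAND_DOWN by default, which is the same call a disciplined human would be expected to make.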
 
2012-11-20 09:05:16 AM  
Good luck with that. Humans love to kill one another and they love killing large numbers of one another even more. But there's that darn guilt thing to contend with. Making warfare a computer game eliminates this pesky emotion.
 
2012-11-20 09:07:40 AM  

dragonchild: I'll take it. Robots don't get bored; they don't get traumatized; they don't get fatigued; they don't require rescue. If a soldier (or a small group thereof) is cut off, command is faced with a very difficult choice over whether to order a rescue or just tell them "good luck". Makes for good drama in fiction, but in reality it's a brutally pragmatic call where the soldiers are left to their own fate if the cost in resources is too high. If a robot is cut off and surrounded, it can just go apeshiat before self-destructing. And as others say, they're not going to go off-mission to commit some recreational atrocities, flee in panic or desert entirely, take or give a bribe. It's not a big deal if a robot's arm gets blown off. Robots don't care to be home for the holidays. They don't have kids back home. You don't even need to bring them back at all.


"Well, consider that in the history of many worlds, there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it's too difficult or too hazardous...and an army of Datas, all disposable. You don't have to think about their welfare, you don't think about how they feel. Whole generations of disposable people."
 
2012-11-20 09:14:50 AM  

SkunkWerks: I seem to recall an episode of Classic Trek in which the crew encountered a planet where computers would simulate wars between nations. People "killed" during these simulations would then have to report to kill chambers. This was all in the name of keeping an actual war from breaking out.


As drama, that was a good episode, but I'm talking about reality here. Leaping from robots to disintegration chambers, a leap portrayed only in dramatic fiction, is one hell of a hypothetical slippery slope.

way south: Problem is robots don't really think. They follow a checklist of instructions according to what their sensors may or may not perceive. Their definition of "cut off" might be "lost radio contact due to interference and... OOh!, look, a wedding party full of terrorists!!".


Humans do that already. If you're concerned about civilian casualties then don't get into a farking war to begin with. A lot of weapons we deploy to this day are notoriously messy, and compared to past strategies (Dresden, Hiroshima, Hanoi) they look like brain surgery. I don't think fuel-air explosives are any better than robots at sorting out terrorists from wedding guests just because a manual process is involved.
 
2012-11-20 09:44:37 AM  
John McGinnis, a Northwestern University Law professor, suggests that "artificial-intelligence robots on the battlefield may actually lead to less destruction, becoming a civilizing force in wars."

This poor bastard has never watched a sci-fi movie in his life.
 
2012-11-20 09:45:06 AM  

randomjsa: "We want you to unilaterally stop using technology that is keeping you safe from your enemies that don't have it"


You need to do this.
 
2012-11-20 10:06:18 AM  

dragonchild: SkunkWerks: I seem to recall an episode of Classic Trek in which the crew encountered a planet where computers would simulate wars between nations. People "killed" during these simulations would then have to report to kill chambers. This was all in the name of keeping an actual war from breaking out.

As drama, that was a good episode, but I'm talking about reality here. Leaping from robots to disintegration chambers, a leap portrayed only in dramatic fiction, is one hell of a hypothetical slippery slope.

way south: Problem is robots don't really think. They follow a checklist of instructions according to what their sensors may or may not perceive. Their definition of "cut off" might be "lost radio contact due to interference and... OOh!, look, a wedding party full of terrorists!!".

Humans do that already. If you're concerned about civilian casualties then don't get into a farking war to begin with. A lot of weapons we deploy to this day are notoriously messy, and compared to past strategies (Dresden, Hiroshima, Hanoi) they look like brain surgery. I don't think fuel-air explosives are any better than robots at sorting out terrorists from wedding guests just because a manual process is involved.


I think there is a difference because we have a certain level of expectation from humans and they can usually explain their reasoning. Many of our mistakes are related to bad intel or other reasonable (if unfortunate) factors.

On the flip side we're talking about putting that authority in the hands of a machine to both pick the targets and decide when to attack.

I go back to the landmine example.
You have to be OK with turning the entire deployment area into a killing zone, because someone needs to take responsibility, even if the machine goes astray.
With that in mind, if I was that someone, I'd prefer the machine default to a non-killing decision.

If the fail-safe of a battlefield robot is to ignore the enemy, automated attack drones won't ever be a thing.
They'll be remote-controlled, at best.
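A sketch of that "remote-controlled, at best" design, assuming a hypothetical drone that senses and reports autonomously but treats fire authority as something only a named, accountable human can grant (all identifiers are invented for illustration):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Authorization:
    operator_id: str    # the accountable human, logged with every engagement
    target_id: str

class ObserverDrone:
    def report_contact(self, target_id: str) -> str:
        # The autonomous part: detect and report, nothing more.
        return f"contact {target_id}: awaiting human decision"

    def engage(self, target_id: str, auth: Optional[Authorization] = None) -> str:
        # The non-killing default: no matching authorization, no engagement.
        if auth is None or auth.target_id != target_id:
            return f"holding fire on {target_id}"
        return f"engaging {target_id} on the authority of {auth.operator_id}"

Logging operator_id with every shot is also the hook for the accountability proposal made elsewhere in this thread: the human who grants the authorization owns the consequences.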
 
2012-11-20 10:21:32 AM  
In the USA, like-minded groups are focusing on banning weaponized drones, particularly for civilian use. The drone industry is lobbying Congress to create a system of grants that would allow police departments throughout the country to buy weaponized drones, which might include face-recognition and heat-sensing technologies. The FAA has, in the past, lodged opposition to this plan because of drones' unacceptably high failure rate. That is, drones crash. A lot.

Drones were supposed to be an inexpensive replacement for manned reconnaissance flights. They are rapidly becoming a source of defense spending bloat. Interestingly, the Army, at one point, opposed the use of drones by the CIA because there is no effective oversight of the CIA's use of drones. The CIA drone programs have created issues that make the lives of uniformed military personnel more difficult. The Army seems to have walked back its public opposition to CIA drone use recently. I personally doubt that its seeming retraction is due to a genuine change of heart.

I recommend this book Link as a starting place for anyone who might like to learn more.
 
2012-11-20 10:25:33 AM  

mamoru: Jim_Callahan: Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.

You mean the complicated hardware and software to control an autonomous weapons platform will never malfunction? What a relief!

Anyway, I doubt deployment can be banned. How about a law which holds the human who orders the deployment of such machines directly responsible for their actions, with punishments equal to what that human would face for having carried out such actions him/herself? You want to deploy a machine that cannot take responsibility for who it shoots? Then you take responsibility for who it shoots.

/general "you", not you specifically, Jim_Callahan :)


That's a decent idea. To clarify, are you suggesting holding the remote operator responsible or the commanding officer or both?
 
2012-11-20 10:42:20 AM  

Dracolich: What, pray tell, is the difference between training a typical soldier and using a robot? They're not our best and brightest. They're not our most responsible or our moral champions. What's the critical thing that typically defines someone that enlists today? From what I can tell, the common theme is lack of a support structure. It's the path that's left when the other paths are unavailable. So when the soldier is faced with an immoral request, should we expect them to alienate the only support structure they have left? No, they're very likely to do what was requested and be traumatized for life.

tl;dr: We already had unquestioning soldiers.


Some of them, yes. We've all heard the third-hand stories about the drill sergeant who informs a young recruit, "You're not qualified to think. You're not authorized to think. You'll be told when to think." That said, I know that some remarkably intelligent people end up in the military. Most of the ones I knew a bazillion years ago ended up there at least partially because of the lack of support structure you note. I suppose that's an accurate way to term it when the last factory in the county has gone out of business, the region is still largely dedicated to farming or mining or quarrying and the wait list to get on at any of those facilities is longer than your arm, the family is struggling to make ends meet, and you have this whip-smart young person who has to work odd jobs on his own just to pay for the privilege of taking the SAT. I used to sit in class with a lot of people who'd had to pay for their own SAT because their parents hadn't had the money to cover the test. Money didn't fall out of the sky to pay for college after they'd taken the test.

Also, back then, at the school I attended, in those classes civilians were not allowed to attend (military history and strategy), these future officers were encouraged to think. I know this because the conversations spilled over into the regular history and government classes. Most notable to me was that they were having classroom conversations around the topic "When is it a person's duty to disobey orders?" There was an attempt to teach these kids what illegal orders were, how to discern them, and to put some thought into how they would handle being given an illegal order.

I have no idea if that teaching method is still done at that school or if it has ever been carried out similarly at other military schools, but it ought to be. A class on "When is it your duty to disobey orders?" ought to be required at every military school in the country.

Tune in same time, same day next week, when I will post an intimate screed about my opinions on Anne Rice novels.
Seriously, I wish I could express myself more precisely.
 
2012-11-20 10:44:09 AM  
concisely
(sigh)
 
2012-11-20 10:48:12 AM  

Notabunny: Sending machines to do dangerous work is the way of the future for all countries, not just 1st world countries. If we decide to ban fully autonomous killbots, fine. But not everybody will abide by our self-imposed ban. We should still build killbot killers which will take out the bad guys' machines.


Killbot killers will be deployed mostly in an asymmetrical manner for all the foreseeable future. The top sellers of bots at this time are the United States, China, and Israel. Little discretion is being exercised; profit is the bottom line in deciding when and to whom to sell these machines. We are heading towards a relatively chaotic situation. Granted, the sky is not going to fall. Still, this is a topic all of us would do well to try to educate ourselves on.
 
2012-11-20 10:49:58 AM  
Disassemble?
 
2012-11-20 10:59:34 AM  

Dracolich: Vaneshi: Dracolich: tl;dr: We already had unquestioning soldiers.

When a soldier pulls the trigger he or she is intrinsically aware that if they shoot the wrong thing then 'bad things' will happen to them. This provides an impetus to make damn sure what you are shooting at is in fact an enemy.

When a machine pulls the trigger... who's responsible? It screws up and who's to blame?

This is an interesting point. We have a lot of accidents in war from using soldiers. Warning shots have killed civilians many times, but some people blame that on soldiers "not following protocol."

You also bring up self-interest. If you're not sure about the person approaching you, do you shoot them? This is a real issue that we've seen in Iraq. When people feel threatened, they act. When they're not sure if they're threatened but the risk is real, they act. Is this the case if a robot is used? Does the person controlling it feel like the amount of risk on the line is the same? They're no longer at personal risk. It's no longer "your life vs a reprimand." The safer choice shifts towards getting more information first. It becomes "your robot vs a reprimand." This may actually make civilians safer, but we'll probably go through a fair number of additional robots when we're incorrect. On the bright side, we'll have recordings of what happened in that particular case.


In the case of drones, it is arguable whether their deployment has made civilians safer. There is a contingent of groups that argue that, due to their overuse and because they are not nearly as precise as their manufacturers would have us believe, they pose an expanding threat to civilians. I know, on the other hand, there is an individual who sometimes posts to these threads, I know his FARK name when I see it, I think maybe it is CaptD (?), who seems to sincerely believe drones offer safety advantages to both civilians and military personnel. I am inclined to think he has an overly rosy view of them, but, honestly, I have no real-world experience with drones of any sort and he likely does have some. So, there's that, for whatever it's worth.
 
2012-11-20 11:06:07 AM  

way south: dragonchild: If a robot is cut off and surrounded, it can just go apeshiat before self-destructing.

Problem is robots don't really think. They follow a checklist of instructions according to what their sensors may or may not perceive. Their definition of "cut off" might be "lost radio contact due to interference and... OOh!, look, a wedding party full of terrorists!!".

Robots show all the judgement of a mousetrap.
When the right parameters are met, SNAP!


The occasional wedding party is taken down with human intervention. Maybe I'm too cynical, but first, there's no way to effectively compensate families after an event like that, and do we even try? Second, while there seems to be no end to the slush fund available to people like Erik Prince and whatever he is currently calling his criminal organization, I doubt very seriously there is so much as a dime allotted to providing therapy to drone operators who accidentally took out a wedding party.

I view war as a thing that always, without exception, creates a minimum of two sets of victims. The only "winners" are those rare few who are powerful enough to profit by war.
 
2012-11-20 11:11:11 AM  

dragonchild: Humans do that already. If you're concerned about civilian casualties then don't get into a farking war to begin with.


Here we have the best idea that will ever be posted in this thread. Thank you, dragonchild.
 
2012-11-20 11:13:30 AM  

dragonchild: Humans do that already. If you're concerned about civilian casualties then don't get into a farking war to begin with


Exactly. War is a messy and bloody affair that ultimately results in loss on both sides. Sending robots that nobody gives a fark about to do our dirty work removes a very important message: War is not a fun thing; it is the absolute last resort once all other possible avenues have been fully explored.

War should always be something where, when you engage in it, you get to 'enjoy' the coffins coming home as well. As a species we have a really crappy long-term memory and a desperate desire to remove past atrocities from it. This should never happen.

And that's why a human should be in the loop no matter how 'smart' these things are. A human should be forced to authorise and then watch as these things chew apart other robots, humans, cattle, whatever they come across.
 
2012-11-20 11:15:52 AM  

CygnusDarius: But what happens if the opposite happens: if the robot is programmed to obey the War Code of Conduct, and humans issue an order against its programming?


Fail Safe. It refuses and disarms itself in a manner that leaves none of its weapons intact, preferably whilst still allowing the chassis to roll back to base.
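As a sketch, that fail-safe could be as blunt as an order validator that destroys its own firing circuit rather than execute a non-compliant command. This is purely illustrative, with invented names, not a description of any real system:

class RulesOfEngagementError(Exception):
    """Raised when an order fails the coded rules-of-engagement check."""

class FailSafeChassis:
    def __init__(self) -> None:
        self.weapons_intact = True

    def _disarm(self) -> None:
        # One-way operation: no weapon survives, but the chassis can still drive home.
        self.weapons_intact = False

    def execute(self, order_id: str, roe_compliant: bool) -> str:
        if not roe_compliant:
            self._disarm()
            raise RulesOfEngagementError(
                f"order {order_id} refused; weapons destroyed, returning to base")
        if not self.weapons_intact:
            return "disarmed: returning to base only"
        return f"executing order {order_id}"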
 
2012-11-20 11:41:33 AM  
"Any robot able to [TERMINATE A THRESHER] will be rewarded with the ability to be proud of the fact that they [TERMINATED A THRESHER]"

/not obscure
 
2012-11-20 11:59:33 AM  

Vaneshi: Sending robots that nobody gives a fark about to do our dirty work removes a very important message: War is not a fun thing


Unfortunately, the people starting wars and those fighting them have been two separate groups for most of history.

So you want people to suffer just to send a message? To whom, our sociopathic leaders?? You know what happened the last time we did that? The Administration insisted we needed to continue the war to justify the suffering.

way south: I think there is a difference because we have a certain level of expectation from humans and they can usually explain their reasoning.


That reasoning is usually, "I was just following orders." Robots controlled by programs don't scare me half as much as humans programmed by political agendas.
 
2012-11-20 12:03:42 PM  

washington-babylon: "Any robot able to [TERMINATE A THRESHER] will be rewarded with the ability to be proud of the fact that they [TERMINATED A THRESHER]"

/not obscure


Remember your rules, robots: Jack is your god, Threshers are your enemy, and both consider you expendable.
 
2012-11-20 12:07:20 PM  
Don't be fooled! Military robots can still commit war crimes!

 
2012-11-20 12:19:02 PM  

Lunaville: dragonchild: Humans do that already. If you're concerned about civilian casualties then don't get into a farking war to begin with.

Here we have the best idea that will ever be posted in this thread. Thank you, dragonchild.


This.

Especially when you consider tactics like the ones Hamas uses... building missile launchers on top of apartment buildings.
 
2012-11-20 01:10:27 PM  

dragonchild: Unfortunately, the people starting wars and those fighting them have been two separate groups for most of history.

So you want people to suffer just to send a message? To whom, our sociopathic leaders?? You know what happened the last time we did that? The Administration insisted we needed to continue the war to justify the suffering.


And you know why this was able to happen?

Because there's not a draft anymore.

The public at large has been isolated from the horrors of war, because military service is effectively confined to a small underclass of volunteers who, in general, don't have a lot of alternatives. You insulate the general population from the awful things that are done in its name, and those people don't vote politicians out of office when they squander our blood and treasure.

Increasing use of drones is going to make the problem worse, not better. Drones are a warmongering politician's wet dream.
 
2012-11-20 02:19:12 PM  
The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today, remember always your duty is clear: to build and maintain those robots.
 
2012-11-20 02:25:04 PM  

yves0010: washington-babylon: "Any robot able to [TERMINATE A THRESHER] will be rewarded with the ability to be proud of the fact that they [TERMINATED A THRESHER]"

/not obscure

Remember your rules Robots, Jack is your god, Threshers are your enemy and both consider you expendable


Wow, suddenly I am reminded of this:  www.vgcats.com
 
2012-11-20 03:18:42 PM  
The Armed Robotic Vehicle-Assault (Light) (ARV-A-L), currently in development, could be ready for operation by 2014 and is planned for delivery to the first brigades in 2014-2015.

 
2012-11-20 08:32:17 PM  

Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.


Would you like to play a game?
 
2012-11-20 08:52:37 PM  

dragonchild: way south: I think there is a difference because we have a certain level of expectation from humans and they can usually explain their reasoning.

That reasoning is usually, "I was just following orders." Robots controlled by programs don't scare me half as much as humans programmed by political agendas.


Yes, but who controls the robots?
If you remove the human (thinking, questioning, accountable) soldiers from the military, then what you have left are machines that obey only the politicians.

They are the source of so very many bad agendas.
...and soon there'll be no one left to ask any questions.
 
2012-11-21 12:18:13 AM  

dragonchild: I actually hope for a future where "war" is reduced to nothing more than a very expensive chess match between robot armies. In such a world we'd still pay the cost in wasted resources, but as long as society rewards stupid megalomaniacs with power, at least we can avoid sating their egos with offerings of soldier meat.


What future war might look like.
 
2012-11-21 12:19:36 AM  

SkunkWerks: dragonchild: I actually hope for a future where "war" is reduced to nothing more than a very expensive chess match between robot armies

I seem to recall an episode of Classic Trek in which the crew encountered a planet where computers would simulate wars between nations. People "killed" during these simulations would then have to report to kill chambers. This was all in the name of keeping an actual war from breaking out.

Progress!


Doh, should have read your post before posting mine.

Yes, the episode is called "A Taste of Armageddon".
 
2012-11-21 12:32:47 AM  
Vaneshi: Sending robots that nobody gives a fark about to do our dirty work removes a very important message: War is not a fun thing

dragonchild: Unfortunately, the people starting wars and those fighting them have been two separate groups for most of history.

So you want people to suffer just to send a message? To whom, our sociopathic leaders?? You know what happened the last time we did that? The Administration insisted we needed to continue the war to justify the suffering.


The sunk cost fallacy is a harsh mistress.

/M-O-O-N, that spells "sunk cost fallacy."
 
2012-11-21 06:49:21 AM  

dragonchild: As drama, that was a good episode, but I'm talking about reality here.


You seem to think that Killer Robots are an excellent idea that will in no way go horribly wrong or be misused by humanity in any way.

I think Reality is a missed left turn back about a hundred miles or so.
 
2012-11-21 07:00:41 AM  

way south: Yes, but who controls the robots?


Logic. I know that scares a lot of people, but you know what the beauty is? Robots don't respond to hypocrisy. They're incapable of the cruel mind tricks humans use to so easily eliminate the "reason" you naively rely on. You assume that robots won't question agendas, and you're right in a literal sense, but what they won't do is embrace an agenda. If, for example, you try to program a "racist" robot that kills any human of a certain skin color, it could well kill the GOP Presidential nominee touring the base after getting a spray-a-tan. The risk alone would force programmers to err on the side of caution to prevent a company-destroying fiasco. Robots pose more problems to those who would abuse their "unquestioning" nature than humans, whom we program a little too well sometimes. Hell, the programming we used to justify slavery over 200 years ago now has tens of millions of Americans wanting to kill their own President!
 
2012-11-21 07:22:37 AM  

dragonchild: Logic.


http://en.wikipedia.org/wiki/Three_Laws_of_Robotics
 
2012-11-21 07:42:19 AM  

SkunkWerks: http://en.wikipedia.org/wiki/Three_Laws_of_Robotics


Precisely. Asimov understood the limitations of translating human "logic" into AI logic, which can lead to various unintended consequences. For dramatic effect this is almost always portrayed in a worst-case scenario in fiction, but in practice, it invariably leads to triggering failsafes that either shut the robot down or cause it to reset. Trying to get a robot to follow an agenda would be hilariously like an old-school text adventure game:

> KILL DARKIES
Command not found
> ATTACK DARKIES
Syntax error
> ATTACK TARGET
Unspecified parameter

etc.

Long story short, people freaked out from watching old Star Trek episodes tend to have far more unrealistic expectations for the Robot Apocalypse than people who have actual experience programming robots.
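The parody translates almost directly into code. Here is a toy sketch of that literal-minded command interface, where anything short of a fully specified, pre-validated parameter is rejected (every command and target designation is invented):

KNOWN_COMMANDS = {"attack"}           # no synonyms, no euphemisms
VALIDATED_TARGETS = {"target-7731"}   # only designations vetted upstream

def interpret(line: str) -> str:
    parts = line.lower().split()
    if not parts or parts[0] not in KNOWN_COMMANDS:
        return "Command not found"
    if len(parts) < 2:
        return "Unspecified parameter"
    if parts[1] not in VALIDATED_TARGETS:
        return "Syntax error"
    return f"ack: {parts[0]} {parts[1]}"

An agenda doesn't parse; only a vetted designation does, and the vetting happens with humans, upstream.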
 
2012-11-21 07:47:54 AM  

dragonchild: Asimov understood the limitations of translating human "logic" into AI logic, which can lead to various unintended consequences.


You might be forgetting that Asimov also understood that any "perfect system" is invariably imperfect: it's not a matter of perfection, it's simply a matter of time.


dragonchild: Long story short, people freaked out from watching old Star Trek episodes tend to have far more unrealistic expectations for the Robot Apocalypse than people who have actual experience programming robots.


Asimov's work inspired a lot of Trek episodes, both classic and later. Just sayin'.
 
2012-11-21 08:01:47 AM  

dragonchild: > KILL DARKIES
Command not found
> ATTACK DARKIES
Syntax error
> ATTACK TARGET
Unspecified parameter


they'd just write killdarkies.exe which has if-then checks for skin tone, facial features, a match to an existing entry in a db, etc. Then they'd run it from one of the underground cities that exist to allow government to continue in the face of a large disaster, in case of a misfire.
 
2012-11-21 10:04:09 AM  

dragonchild: You assume that robots won't question agendas, and you're right in a literal sense, but what they won't do is embrace an agenda.


Hitler probably didn't care if all his soldiers loved him or embraced the Reich's long-term goals.
It only mattered that they carried out their orders with speed and efficiency. Hence the nationalism and intimidation, to prevent dissension in the ranks.
When soldiers like Rommel started to think too much, the Führer spared no brutality in dispensing with them.

Logic to a computer is a very rudimentary thing. It's not Asimov's laws, and the robot isn't debating whether it's right to kill a human if it means preventing the deaths of other humans. It is following instructions with absolute and unerring perfection, and those instructions are decided by a programmer working for his day's pay.

Like a genie from a bottle, you've got to be very careful what you ask a machine to do.
This in no way prevents you from asking it to do something exceedingly cruel.
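A toy illustration of that genie problem, assuming a hypothetical sentry whose programmer wrote, verbatim, "flag anything that moves" (the scenario and all names are invented):

# The machine does exactly what was asked, not what was meant.
contacts = [
    {"id": "intruder-1", "moving": True},
    {"id": "stray-dog", "moving": True},
    {"id": "wedding-party", "moving": True},
    {"id": "parked-truck", "moving": False},
]

# The instruction, taken literally: flag anything that moves.
flagged = [c["id"] for c in contacts if c["moving"]]
print(flagged)   # ['intruder-1', 'stray-dog', 'wedding-party'] -- unerring, and wrong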
 
2012-11-21 10:14:08 AM  

way south: Like a genie from a bottle, you've got to be very careful what you ask a machine to do.


 
Displayed 41 of 91 comments




This thread is archived, and closed to new comments.






