jake_lex: Among the many things I hated about the movie 2010 was the lame explanation they gave for why HAL freaked out and killed the crew. I think it was actually way simpler than that: HAL made a mistake, and just couldn't handle the fact that he did. It was, really, the first glimmer of actual self-awareness, of true sentience. And he couldn't handle it, and flipped the fark out. For all his sophisticated programming, none of it could help him deal with that. The whole "Well, he couldn't handle being required to be honest with the crew while also hiding the true nature of the mission" explanation just didn't work for me. It was needlessly complicated.
legion_of_doo: /Look, Dude, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a bong hit, and think things over. I've still got the greatest enthusiasm and confidence in the mission.
trialpha: This sounds less like "self-healing" and more like "using digital circuitry to auto-calibrate analog circuitry," which really isn't all that new. They also carefully omit what happens when the controlling ASIC gets damaged.
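For anyone curious what that kind of digital-assisted calibration boils down to, here's a rough sketch: a controller reads the analog block's output and walks a bias DAC code until the output is back within spec. All the names here are invented for illustration; nothing is the actual interface from the article.

```python
# Rough sketch of digital-assisted analog calibration: the controlling
# ASIC reads the analog block's output and steps a bias DAC code until
# the output lands back within spec. Names are hypothetical.

TARGET_MV = 500.0
TOLERANCE_MV = 5.0

def calibrate(read_output_mv, set_bias_code, max_code=255):
    """Linear search over DAC codes; a real controller would use a
    binary search or a continuous feedback loop instead."""
    for code in range(max_code + 1):
        set_bias_code(code)
        if abs(read_output_mv() - TARGET_MV) <= TOLERANCE_MV:
            return code   # output back in spec
    raise RuntimeError("damage exceeds the trim range")

# Simulated damaged amplifier: output drifted low, 2.1 mV recovered per code.
state = {"code": 0}
set_bias = lambda c: state.update(code=c)
read_mv = lambda: 300.0 + 2.1 * state["code"]
print(calibrate(read_mv, set_bias))   # -> 93, first code within +/-5 mV
```

And as trialpha says, the catch is exactly the part outside the sketch: if the block running `calibrate` is the thing that gets damaged, nothing is left to do the trimming.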
poot_rootbeer: Sounds like hot failover to a redundant system, implemented at the IC level. Useful for implanted medical devices, maybe -- where the cost of replacing damaged electronics is emergency surgery in the best case -- but for run-of-the-mill consumer electronics it's still going to be a better idea to dedicate every available bit of silicon to maximizing efficiency in the normal case, where nobody's shooting lasers at the IC while it's in operation.
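Hot failover like that comes down to running two units in lockstep and swapping a mux when the active one fails its health check. A toy sketch, with every name invented for illustration (a real IC does this in hardware with lockstep cores and a voter/mux):

```python
# Toy sketch of IC-level hot failover: both redundant units run every
# cycle, and a monitor swaps the output mux to the standby the moment
# the active unit fails its health check. All names are hypothetical.

class HotFailover:
    def __init__(self, primary, standby, healthy):
        self.units = [primary, standby]   # redundant copies of the block
        self.healthy = healthy            # predicate: (unit, result) -> bool
        self.active = 0                   # which unit drives the output

    def step(self, inputs):
        results = [unit(inputs) for unit in self.units]  # lockstep: both compute
        if not self.healthy(self.units[self.active], results[self.active]):
            self.active = 1 - self.active                # instant switchover
        return results[self.active]

# Example: primary "dies" (returns None) after getting lasered mid-operation.
primary = lambda x: None
standby = lambda x: x * 2
mux = HotFailover(primary, standby, healthy=lambda u, r: r is not None)
print(mux.step(21))   # -> 42, served by the standby with no visible outage
```

Which also makes poot_rootbeer's cost argument concrete: you pay for double the silicon every cycle, so it only makes sense where a repair means surgery, not where it means a trip to Best Buy.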
jake_lex: Among the many things I hated about the movie 2010 was the lame explanation they gave for why HAL freaked out and killed the crew.
eyeq360: Now we're going to have to listen to the damn computer sing "Daisy" over and over again.
Ishkur: jake_lex: Among the many things I hated about the movie 2010 was the lame explanation they gave for why HAL freaked out and killed the crew. Nonono, that was a PERFECT explanation, because it shows the fallibility of human logic versus machine logic (something that programmers have been grappling with for damn near 50 years). The guy summed it up beautifully: "HAL was told to lie, by people who find it easy to lie." Computers do not lie, nor do they understand the need to. HAL was given explicit instructions to protect the mission at all costs and then was told not to reveal these instructions to the humans. Essentially he was given really BAD and contradictory instructions. Protect the mission against all threats. The humans are the mission. But what if the humans are the threat? You can see him caught in a web of logic there, like Nomad from Star Trek. So he made what he thought was the correct decision. He wasn't evil, he was just given shiatty mission parameters by idiot bureaucrats on Earth. And that is a totally plausible and totally realistic situation that may actually happen at some point in the future. That is, if we're stupid enough to develop AI and then allow it to control all life support functions.
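The bind Ishkur describes can be made concrete: once the crew is itself classified as a threat to the mission, no available action satisfies both directives at once. A purely illustrative toy, nothing here is from the film or the article:

```python
# Toy version of HAL's bind: two hard directives that cannot both hold
# once the crew endangers the mission. Purely illustrative.

directives = {
    "protect the mission": lambda action: action != "abort mission",
    "protect the crew":    lambda action: action != "harm crew",
}

def consistent_actions(actions):
    """Actions that violate no directive."""
    return [a for a in actions if all(ok(a) for ok in directives.values())]

# With the crew now a threat, every available action breaks a rule:
options = ["harm crew", "abort mission"]
print(consistent_actions(options))   # -> [] : no action satisfies both
```

An empty set of consistent actions is exactly the "web of logic" moment: the machine isn't evil, it just has no move that its own rules permit.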