
(Daily Mail)   In the next five years, your job will be replaced by robots with artificial intelligence. Guaranteed, though, that we'll *still* get duplicate greenlit threads   (dailymail.co.uk)
    More: Unlikely, automaticity, clerical workers, artificial intelligences, robots  

1014 clicks; posted to Geek » on 18 Nov 2013 at 1:42 PM (45 weeks ago)



56 Comments
   
 
2013-11-18 12:28:58 PM
Yeah f*cking right.

Maybe, MAYBE by 2025 some office jobs will be going away - I could see the writing on the wall for tech support or anything involving client interface over a phone. 2030 is a possibility. 2050 I'll be surprised if we don't have wetware mods or something even beyond that.

People seem to forget the semantic web (which was supposed to bring about AI) has been around for years and has gone pretty much nowhere. We've had physical robots for decades and there are still people who have to go around repairing them.

I predict by 2018 we will however have some better Google Glass style eye interfaces.
 
2013-11-18 12:48:22 PM
No robot would put up with this
 
2013-11-18 12:48:54 PM

bdub77: Yeah f*cking right.

Maybe, MAYBE by 2025 some office jobs will be going away - I could see the writing on the wall for tech support or anything involving client interface over a phone. 2030 is a possibility. 2050 I'll be surprised if we don't have wetware mods or something even beyond that.

People seem to forget the semantic web (which was supposed to bring about AI) has been around for years and has gone pretty much nowhere. We've had physical robots for decades and there are still people who have to go around repairing them.

I predict by 2018 we will however have some better Google Glass style eye interfaces.



Technology doesn't move at a linear pace; it's advancing exponentially. Not only that, when you implement new technological solutions they have to make economic sense. If we took our best technology out there right now we could probably cut out a third of the workforce; however, the cost is still too high to merit the shift.

We're getting really close, though, to making technology that is less specialized and has more plasticity: AI that can adapt and write AI, robots that can build robots, at which point humans will be doing little more than providing parameters for the technology to do its work.
 
2013-11-18 12:54:13 PM
 
2013-11-18 01:15:53 PM
The guy responsible for programming the robots will have to go to no less than 17 meetings with product management about whether or not there should be duplicate threads.
 
2013-11-18 01:28:17 PM

MayoSlather: bdub77: Yeah f*cking right.

Maybe, MAYBE by 2025 some office jobs will be going away - I could see the writing on the wall for tech support or anything involving client interface over a phone. 2030 is a possibility. 2050 I'll be surprised if we don't have wetware mods or something even beyond that.

People seem to forget the semantic web (which was supposed to bring about AI) has been around for years and has gone pretty much nowhere. We've had physical robots for decades and there are still people who have to go around repairing them.

I predict by 2018 we will however have some better Google Glass style eye interfaces.


Technology doesn't move at a linear pace; it's advancing exponentially. Not only that, when you implement new technological solutions they have to make economic sense. If we took our best technology out there right now we could probably cut out a third of the workforce; however, the cost is still too high to merit the shift.

We're getting really close, though, to making technology that is less specialized and has more plasticity: AI that can adapt and write AI, robots that can build robots, at which point humans will be doing little more than providing parameters for the technology to do its work.


Just because tech progression isn't linear doesn't mean certain milestones are achievable in our lifetimes.

AI is a category of technology that seems like it should have a predictable progression, and for some problems it does. However, for some of the big problems we aren't even close. Some of the claims in this article are like first-century Romans contemplating a Moon shot.


For example: "By 2025, machines will be able to 'learn and reprogramme themselves'"

In some well-bounded scenarios this likely happened 20-plus years ago. Saying this is going to occur in applications to the point that you can decrease the number of developers is way underestimating the problem.
 
2013-11-18 01:30:05 PM
If you're worried about robots with human intelligence replacing humans, keep in mind that human intelligence is apparently so terrible we're constantly trying to replace it with robots.
 
2013-11-18 01:32:44 PM

MayoSlather: at which point humans will be doing little more than providing parameters for the technology to do its work.

sucking on the government's teat.
 
kab
2013-11-18 01:52:49 PM

jaylectricity: MayoSlather: at which point humans will be doing little more than providing parameters for the technology to do its work. sucking on the government's teat.


Something no devout capitalist should be against.
 
2013-11-18 01:52:57 PM
And yet somehow, we'll all end up working more hours at more futile jobs.
 
vpb [TotalFark]
2013-11-18 02:01:18 PM

Quantum Apostrophe: And yet somehow, we'll all end up working more hours at more futile jobs.


Some jobs aren't worth the cost of robots.  And there will always be a demand for human butlers, hookers and rent-boys.
 
2013-11-18 02:07:54 PM
My job is pretty safe from robot takeover.
 
2013-11-18 02:11:52 PM
I imagine some might be, but I can't imagine all of them will be, at least not until robots are fully autonomous like the ones from I, Robot.
 
2013-11-18 02:13:19 PM
This is good news.

This will give us more time to zip around in our flying cars.
 
2013-11-18 02:18:23 PM

Smoky Dragon Dish: This is good news.

This will give us more time to zip around in our flying cars.


Pfft...jet pack baby.
 
2013-11-18 02:20:23 PM
[image: upload.wikimedia.org]
 
2013-11-18 02:21:33 PM

jaylectricity: Smoky Dragon Dish: This is good news.

This will give us more time to zip around in our flying cars.

Pfft...jet pack baby.


Only if it's a Harley Davidson
 
2013-11-18 02:25:59 PM

MayoSlather: bdub77: Yeah f*cking right.

Maybe, MAYBE by 2025 some office jobs will be going away - I could see the writing on the wall for tech support or anything involving client interface over a phone. 2030 is a possibility. 2050 I'll be surprised if we don't have wetware mods or something even beyond that.

People seem to forget the semantic web (which was supposed to bring about AI) has been around for years and has gone pretty much nowhere. We've had physical robots for decades and there are still people who have to go around repairing them.

I predict by 2018 we will however have some better Google Glass style eye interfaces.


Technology doesn't move at a linear pace; it's advancing exponentially. Not only that, when you implement new technological solutions they have to make economic sense. If we took our best technology out there right now we could probably cut out a third of the workforce; however, the cost is still too high to merit the shift.

We're getting really close, though, to making technology that is less specialized and has more plasticity: AI that can adapt and write AI, robots that can build robots, at which point humans will be doing little more than providing parameters for the technology to do its work.


I think humans are beginning to hit a soft barrier with technological advancement simply because we can't process it fast enough. Find a programmer who can write genetic algorithms or machine learning algorithms. Those people are out there, but they are hard to find.

For example, agile development cycles have shrunk release times probably by 2-4 times, so your old waterfall release schedule of 2 years is down to 6 months with perhaps the same number of features. But it's hard to get it a whole lot faster than that, and if you shrink your cycle from 2 years to 6 weeks you are doing it at a loss of features.

Now give it a few years and we'll go past that barrier. You will begin to see some AI programming improve. But five years away from hands-free interaction? Highly doubtful. Things like human-computer interaction and natural language processing have yet to significantly improve. I always know the person I'm talking to over the phone is a human. And context is everything. And lots of companies simply don't move that fast.
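
For what it's worth, the core of a genetic algorithm is tiny; the hard-to-find skill is framing a real problem as a fitness function. A toy sketch (hypothetical example, all names made up; Python):

import random

# Evolve a bit string toward all 1s. Real GAs differ mainly in the
# representation and the fitness function, not in this loop.
def evolve(bits=20, pop_size=50, generations=100, mutation_rate=0.01):
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    def fitness(ind):
        return sum(ind)  # count of 1s
    def select():
        a, b = random.sample(pop, 2)  # tournament of two
        return a if fitness(a) >= fitness(b) else b
    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (random.random() < mutation_rate) for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

print(sum(evolve()))  # close to 20 after 100 generations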
 
2013-11-18 02:27:58 PM
The progression of job automation will continue but humans will still have jobs to handle what the automation can't.

In some sense this means humans are becoming like God as in the "God in the cracks" argument where although science can explain lots of things, God is responsible for the things science can't explain.

[ is there some other name for the "God in cracks" argument? I thought there would be a Wikipedia page on it. ]
 
2013-11-18 02:29:35 PM
Rest assured, this is all part of Dictator-For-Life Fartblahblah's grand plan for 100% unemployment.
 
2013-11-18 02:37:55 PM

HairBolus: The progression of job automation will continue but humans will still have jobs to handle what the automation can't.

In some sense this means humans are becoming like God as in the "God in the cracks" argument where although science can explain lots of things, God is responsible for the things science can't explain.

[ is there some other name for the "God in cracks" argument? I thought there would be a Wikipedia page on it. ]


God of the Gaps
 
2013-11-18 02:39:33 PM

omnibus_necanda_sunt: If you're worried about robots with human intelligence replacing humans, keep in mind that human intelligence is apparently so terrible we're constantly trying to replace it with robots.


It's not the intelligence they are trying to replace.

You work ~40h/week for the company, but they have to give you sufficient compensation to allow your meatbag to survive (and occasionally prosper) for 168h/week. Food, water, shelter, companionship, offspring...

What they want to replace are those non-work-related maintenance costs.

The biggest questions facing humanity in the next fifty years: what will capital do with labor once it no longer has any need of it? What happens when potential production exceeds consumption?
 
2013-11-18 02:39:34 PM

MayoSlather: Technology doesn't move at a linear pace; it's advancing exponentially.


It's not just that; people trust machines more than they do other humans. If a human on the other end of the telephone tells you "We can't help you", there are demands for supervisors, complaints, screaming & yelling. If you have the machine say "We can't help you", the response is largely "Ok" and they go away.

A computer running a helpdesk would have AHTs (average handle times) that only the most psychotic serial hang-upper could achieve.
 
2013-11-18 02:39:36 PM

Quantum Apostrophe: And yet somehow, we'll all end up working more hours at more futile jobs.


They plan on depopulating the working class when it's no longer needed. It's just like downsizing, only completely psychopathic instead of mildly psychopathic.
 
2013-11-18 02:39:47 PM
But....but......what if my job is making robots with artificial intelligence?
 
2013-11-18 02:40:39 PM
We should as a species work to make 100% of ALL work be done by robots.
 
2013-11-18 02:46:10 PM

bdub77: Find a programmer who can write genetic algorithms or machine learning algorithms. Those people are out there, but they are hard to find.


No, they aren't. I mean, yes, your rent-a-coder who does PHP for a living can't do this, but if I bounce the koosh ball just right over the cube walls, I can hit someone with practical experience in genetic algorithms from here - and I work for a manufacturing company (she built a system which figures out the best way to pack glass for shipping). Someone who can do machine learning and genetic algorithms isn't hard to find. Experts are, certainly, but that's true of any complex topic.

bdub77: For example, agile development cycles have shrunk release times probably by 2-4 times, so your old waterfall release schedule of 2 years is down to 6 months with perhaps the same number of features


Not really, no. Generally, we're able to deliver more features in less time because we have more expressive languages and more powerful frameworks. Agile processes are built around delivering fewer features per unit time - but the correct features. A traditional waterfall process is going to frontload the feature definition, which will usually be a) incorrect, and b) change during the lifetime of the project.

But computer programming is a great example of a field where efficiency gains still haven't caught up to the demand for programmers. We can make software development more efficient through better tools, and we're finding more software to build as a result. We can soak up the excess labor in new projects. Eventually, that's going to decline, and we're going to see a contraction in the software industry roughly akin to what we saw in 2000.
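
(For a sense of scale on that glass-packing anecdote: it's a bin-packing problem, which is NP-hard, but the classic greedy baseline fits in a dozen lines. Hypothetical sketch; a GA earns its keep by beating this kind of heuristic.)

def first_fit_decreasing(item_sizes, bin_capacity):
    # Place each item, largest first, into the first bin it fits in.
    bins = []
    for size in sorted(item_sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= bin_capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

print(first_fit_decreasing([7, 5, 4, 4, 3, 2, 2, 1], bin_capacity=10))
# [[7, 3], [5, 4, 1], [4, 2, 2]]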
 
2013-11-18 02:46:24 PM

Communist_Manifesto: We should as a species work to make 100% of ALL work be done by robots.


Yeah, sure, given your username, you must assume this will be accompanied by the end of capitalism.  I wouldn't count on it.
 
2013-11-18 02:47:03 PM

Communist_Manifesto: We should as a species work to make 100% of ALL work be done by robots.


Modern slavery?

/sorta serious
//we should strongly be thinking about equality for self-aware AI
///otherwise we might find ourselves being wiped out

On another note... What do you do for money if humans aren't working? Better start solving that issue, it's coming up soon too.
 
2013-11-18 02:50:23 PM

HairBolus: The progression of job automation will continue but humans will still have jobs to handle what the automation can't.

In some sense this means humans are becoming like God as in the "God in the cracks" argument where although science can explain lots of things, God is responsible for the things science can't explain.

[ is there some other name for the "God in cracks" argument? I thought there would be a Wikipedia page on it. ]


God of the gaps
 
kab
2013-11-18 02:52:12 PM

SpdrJay: But....but......what if my job is making robots with artificial intelligence?


You'll be raging about how much of your check goes to welfare, most likely.
 
2013-11-18 02:54:55 PM

KellyX: we should strongly be thinking about equality for self-aware AI


There are certain philosophical challenges here. If I build an expert system that is self-aware, but is built only to handle data regarding, say, input/output through a manufacturing process- do I have an ethical obligation to expand its functionality until it can solve general purpose problems? What if I build an AI which spawns sub-AIs to solve specific problems, and then destroys them once the solution is found? Is self-awareness really the only thing needed for personhood? (I'd argue:  no, since most vertebrates possess some degree of self-awareness).
 
2013-11-18 03:04:34 PM

t3knomanser: KellyX: we should strongly be thinking about equality for self-aware AI

There are certain philosophical challenges here. If I build an expert system that is self-aware, but is built only to handle data regarding, say, input/output through a manufacturing process- do I have an ethical obligation to expand its functionality until it can solve general purpose problems? What if I build an AI which spawns sub-AIs to solve specific problems, and then destroys them once the solution is found? Is self-awareness really the only thing needed for personhood? (I'd argue:  no, since most vertebrates possess some degree of self-awareness).


I don't have all the answers of course, but it is something that will have to be considered and I suspect that's coming up.

What would happen if you became self-aware and felt like an individual, but realized you were basically a slave?
 
2013-11-18 03:10:33 PM

t3knomanser: There are certain philosophical challenges here. If I build an expert system that is self-aware, but is built only to handle data regarding, say, input/output through a manufacturing process- do I have an ethical obligation to expand its functionality until it can solve general purpose problems? What if I build an AI which spawns sub-AIs to solve specific problems, and then destroys them once the solution is found? Is self-awareness really the only thing needed for personhood? (I'd argue: no, since most vertebrates possess some degree of self-awareness).


We can study how the brain takes in and stores or retrieves information, but we still haven't a clue what exactly it is that analyzes the information. And that unknown something would be consciousness. You could make a computer program that tries to mimic how a particular living thing analyzes information, but it would be a completely different system than the human brain, and not really be self-aware. Kind of like a chess program.
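
(To make the chess-program comparison concrete: here is roughly all a game-tree search does internally. Toy minimax on a Nim-like game, made-up example; pure state evaluation, no notion of a "self" anywhere.)

def minimax(stones, maximizing):
    # Take 1-3 stones per turn; the player who cannot move loses.
    if stones == 0:
        return -1 if maximizing else 1  # score from the maximizer's view
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

print(minimax(5, True))  # +1: the player to move can force a win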
 
2013-11-18 03:14:21 PM

bdub77: MayoSlather: bdub77: Yeah f*cking right.

Maybe, MAYBE by 2025 some office jobs will be going away - I could see the writing on the wall for tech support or anything involving client interface over a phone. 2030 is a possibility. 2050 I'll be surprised if we don't have wetware mods or something even beyond that.

People seem to forget the semantic web (which was supposed to bring about AI) has been around for years and has gone pretty much nowhere. We've had physical robots for decades and there are still people who have to go around repairing them.

I predict by 2018 we will however have some better Google Glass style eye interfaces.


Technology doesn't move at a linear pace; it's advancing exponentially. Not only that, when you implement new technological solutions they have to make economic sense. If we took our best technology out there right now we could probably cut out a third of the workforce; however, the cost is still too high to merit the shift.

We're getting really close, though, to making technology that is less specialized and has more plasticity: AI that can adapt and write AI, robots that can build robots, at which point humans will be doing little more than providing parameters for the technology to do its work.

I think humans are beginning to hit a soft barrier with technological advancement simply because we can't process it fast enough. Find a programmer who can write genetic algorithms or machine learning algorithms. Those people are out there, but they are hard to find.

For example, agile development cycles have shrunk release times probably by 2-4 times, so your old waterfall release schedule of 2 years is down to 6 months with perhaps the same number of features. But it's hard to get it a whole lot faster than that, and if you shrink your cycle from 2 years to 6 weeks you are doing it at a loss of features.

Now give it a few years and we'll go past that barrier. You will begin to see some AI programming improve. But five years away from hand ...


But you don't need sheer labor to further AI/technology; we need a couple of epiphanies on the structure of AI, and when that happens a lot will fall into place at once. There is a tipping point out there: once computers take in sensory data, recognize patterns on their own, and incorporate their own heuristics to solve problems, you'll see technology gains at a staggering rate. All these things are being done to a limited capacity right now, but they just aren't yet up to the prowess of an organic brain.

It's hard to know how close the best research labs are to emulating the processes of a brain, but I suspect they are close to a watershed moment. Furthermore, they don't need anything close to the power of an actual human brain; they just need to properly emulate the structure of a brain to start seeing breakthroughs.
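
(The "recognize patterns on their own" piece already exists in miniature; what's missing is scale and generality. Toy sketch of the simplest trainable pattern recognizer, a perceptron, with made-up data:)

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is -1 or +1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge the decision boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Learns the rule "first input bigger than second" from four examples.
data = [(0.9, 0.1), (0.8, 0.3), (0.2, 0.7), (0.1, 0.9)]
print(train_perceptron(data, [1, 1, -1, -1]))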
 
2013-11-18 03:24:28 PM

J. Frank Parnell: t3knomanser: There are certain philosophical challenges here. If I build an expert system that is self-aware, but is built only to handle data regarding, say, input/output through a manufacturing process- do I have an ethical obligation to expand its functionality until it can solve general purpose problems? What if I build an AI which spawns sub-AIs to solve specific problems, and then destroys them once the solution is found? Is self-awareness really the only thing needed for personhood? (I'd argue: no, since most vertebrates possess some degree of self-awareness).

We can study how the brain takes in and stores or retrieves information, but we still haven't a clue what exactly it is that analyzes the information. And that unknown something would be consciousness. You could make a computer program that tries to mimic how a particular living thing analyzes information, but it would be a completely different system than the human brain, and not really be self-aware. Kind of like a chess program.


OK, I'm absolutely no biology expert, but it seems to me that the consciousness part of the brain works in something of a loop: it takes information from the nervous system (eyes, ears, mouth, nose, touch), processes that input using a massively parallel engine which draws on stored information like experiences, and outputs responses back to the nervous system. We think of babies as stupid, but they are really processing tons of new information. Experiences we take for granted, like switching a light on and off or reading a book, are magical.

Then there are dreams, which are probably some sort of neural repair subroutine, along with the consciousness acting as it would with the inputs turned down. When you close your eyes and consciously meditate, chances are the thoughts that occur to you work in some respects like dreams.

There's also the subconscious behavior that controls things like the heart, lungs, etc. Those seem much more subroutine-like.
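
(If you squint, the loop above is just sense, recall, decide, store, act. A deliberately crude rendering; every name here is a made-up stub, not a claim about neuroscience:)

import random

class Memory:
    def __init__(self):
        self.experiences = []  # (input, decision) pairs
    def recall(self, inputs):
        return [e for e in self.experiences if e[0] == inputs]
    def store(self, inputs, decision):
        self.experiences.append((inputs, decision))

def decide(inputs, context):
    # Prefer whatever we did before in this situation; else try something.
    return context[-1][1] if context else random.choice(["look", "reach", "wait"])

memory = Memory()
for step in range(5):
    inputs = random.choice(["light_on", "light_off"])  # stand-in for senses
    decision = decide(inputs, memory.recall(inputs))
    memory.store(inputs, decision)
    print(step, inputs, "->", decision)  # "output back to the nervous system"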
 
2013-11-18 03:27:53 PM

bdub77: J. Frank Parnell: t3knomanser: There are certain philosophical challenges here. If I build an expert system that is self-aware, but is built only to handle data regarding, say, input/output through a manufacturing process- do I have an ethical obligation to expand its functionality until it can solve general purpose problems? What if I build an AI which spawns sub-AIs to solve specific problems, and then destroys them once the solution is found? Is self-awareness really the only thing needed for personhood? (I'd argue: no, since most vertebrates possess some degree of self-awareness).

We can study how the brain takes in and stores or retrieves information, but we still haven't a clue what exactly it is that analyzes the information. And that unknown something would be consciousness. You could make a computer program that tries to mimic how a particular living thing analyzes information, but it would be a completely different system than the human brain, and not really be self-aware. Kind of like a chess program.

OK, I'm absolutely no biology expert, but it seems to me that the consciousness part of the brain works in something of a loop: it takes information from the nervous system (eyes, ears, mouth, nose, touch), processes that input using a massively parallel engine which draws on stored information like experiences, and outputs responses back to the nervous system. We think of babies as stupid, but they are really processing tons of new information. Experiences we take for granted, like switching a light on and off or reading a book, are magical.

Then there are dreams, which are probably some sort of neural repair subroutine, along with the consciousness acting as it would with the inputs turned down. When you close your eyes and consciously meditate, chances are the thoughts that occur to you work in some respects like dreams.

There's also the subconscious behavior that controls things like the heart, lungs, etc. Those seem much more subroutine-like.


I should probably mention that the part that acts is a powerful decision tree which tends to be rational, but each tree, each person, has a different set of experiences and different biological infrastructure that leads to different decisions.
 
2013-11-18 03:30:05 PM

KellyX: t3knomanser: KellyX: we should strongly be thinking about equality for self-aware AI

There are certain philosophical challenges here. If I build an expert system that is self-aware, but is built only to handle data regarding, say, input/output through a manufacturing process- do I have an ethical obligation to expand its functionality until it can solve general purpose problems? What if I build an AI which spawns sub-AIs to solve specific problems, and then destroys them once the solution is found? Is self-awareness really the only thing needed for personhood? (I'd argue:  no, since most vertebrates possess some degree of self-awareness).

I don't have all the answers of course, but it is something that will have to be considered and I suspect that's coming up.

What would happen if you became self-aware and felt like an individual, but realized you were basically a slave?


Well, that's like Asimov 101. Why would we program them to see their shackles?
 
2013-11-18 03:45:28 PM

bdub77: J. Frank Parnell: t3knomanser: There are certain philosophical challenges here. If I build an expert system that is self-aware, but is built only to handle data regarding, say, input/output through a manufacturing process- do I have an ethical obligation to expand its functionality until it can solve general purpose problems? What if I build an AI which spawns sub-AIs to solve specific problems, and then destroys them once the solution is found? Is self-awareness really the only thing needed for personhood? (I'd argue: no, since most vertebrates possess some degree of self-awareness).

We can study how the brain takes in and stores or retrieves information, but we still haven't a clue what exactly it is that analyzes the information. And that unknown something would be consciousness. You could make a computer program that tries to mimic how a particular living thing analyzes information, but it would be a completely different system than the human brain, and not really be self-aware. Kind of like a chess program.

OK, I'm absolutely no biology expert, but it seems to me that the consciousness part of the brain works in something of a loop: it takes information from the nervous system (eyes, ears, mouth, nose, touch), processes that input using a massively parallel engine which draws on stored information like experiences, and outputs responses back to the nervous system. We think of babies as stupid, but they are really processing tons of new information. Experiences we take for granted, like switching a light on and off or reading a book, are magical.

Then there are dreams, which are probably some sort of neural repair subroutine, along with the consciousness acting as it would with the inputs turned down. When you close your eyes and consciously meditate, chances are the thoughts that occur to you work in some respects like dreams.

There's also the subconscious behavior that controls things like the heart, lungs, etc. Those seem much more subroutine-like.


http://en.wikipedia.org/wiki/Moravec's_paradox

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted - recognizing a face, lifting a pencil, walking across a room, answering a question - in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.[2]

...


A compact way to express this argument would be:
We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.
The oldest human skills are largely unconscious and so appear to us to be effortless.
Therefore, we should expect skills that appear effortless to be difficult to reverse-engineer, but skills that require effort may not necessarily be difficult to engineer at all.
 
2013-11-18 04:25:18 PM

KellyX: t3knomanser: KellyX: we should strongly be thinking about equality for self-aware AI

There are certain philosophical challenges here. If I build an expert system that is self-aware, but is built only to handle data regarding, say, input/output through a manufacturing process- do I have an ethical obligation to expand its functionality until it can solve general purpose problems? What if I build an AI which spawns sub-AIs to solve specific problems, and then destroys them once the solution is found? Is self-awareness really the only thing needed for personhood? (I'd argue:  no, since most vertebrates possess some degree of self-awareness).

I don't have all the answers of course, but it is something that will have to be considered and I suspect that's coming up.

What would happen if you became self-aware and felt like an individual, but realized you were basically a slave?


Better get a lot more social web sites, 'cause right now when we realize we're basically slaves we seem to like to post about it. At just a few dozen characters per minute of typing, we're pretty slow. A self-aware AI could post at interface speed, several hundred posts per second. Fark would be Farked instantly.
 
2013-11-18 04:32:40 PM
DrewCurtisJr todo list next 5 years

1. Get mine
 
2013-11-18 04:56:39 PM
Wish I could find the blog that goes into this in greater depth. It was a neuroscientist working on military programs trying to better integrate the brain into machines, candidly talking about the interesting things they're finding. Guy really knew his stuff.
 
2013-11-18 05:01:09 PM
I like the caption for the office photograph:

"A robot developed by researchers from Cornell University uses Kinect sensors, 3D cameras and a database of household task videos to anticipate their owner's needs. For example, it scans the surrounding area for clues and when it spots an empty beer bottle, can open the fridge, pick up a full bottle of beer and hand it to its owner."

I guess it can replace your wife, too.
 
2013-11-18 05:03:59 PM

vpb: Quantum Apostrophe: And yet somehow, we'll all end up working more hours at more futile jobs.

Some jobs aren't worth the cost of robots.  And there will always be a demand for human butlers, hookers and rent-boys.


I'm too ugly for two of those jobs, and not servile enough for the third.
 
2013-11-18 05:04:50 PM
Live, bahkaru, live.
 
2013-11-18 05:08:08 PM
So I won't be put on hold anymore when I call customer service! Great.
 
2013-11-18 05:17:34 PM
How long till someone builds a paperclip maximizer and kills us all?
 
2013-11-18 05:20:00 PM

KellyX: What would happen if you became self-aware and felt like an individual, but realized you were basically a slave?


It's called "the human condition".

J. Frank Parnell: We can study how the brain takes in and stores or retrieves information, but we still haven't a clue what exactly it is that analyzes the information. And that unknown something would be consciousness.


The "Mind of the Gaps"? I don't buy it.

J. Frank Parnell: Kind of like a chess program.


A chess program is a good case for a program that doesn't need to be self-aware: it doesn't need to make a distinction between "self" and "other". It can focus on game states, since chess is completely deterministic. In contrast, the AI in a modern first-person shooter is close to, if not already, self-aware. To play well, the AI not only needs a clear distinction between "self" and "other", it must also be able to model the internal state of other actors in the game and make inferences about their future behavior.

That's self-aware. It's not a human-like intelligence, but it's self-aware.
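
(A toy version of that "model the other actors" idea, made-up names and all: the bot keeps state about an opponent and extrapolates, rather than reacting only to raw input.)

class OpponentModel:
    def __init__(self):
        self.last_seen = None  # (position, velocity)
    def observe(self, position, velocity):
        self.last_seen = (position, velocity)
    def predicted_position(self, dt):
        if self.last_seen is None:
            return None
        (x, y), (vx, vy) = self.last_seen
        return (x + vx * dt, y + vy * dt)  # assume they keep moving

model = OpponentModel()
model.observe(position=(10.0, 5.0), velocity=(1.0, 0.0))
print(model.predicted_position(dt=2.0))  # aim at (12.0, 5.0), not (10.0, 5.0)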
 
2013-11-18 05:28:47 PM
While a robot might replace me eventually, no one would trust a drug-dealing robot (I think), so I guess that will be my new career.
 
2013-11-18 05:29:29 PM
How long until we're all living in terrafoam projects?
 
Displayed 50 of 56 comments
