
(IEEE Spectrum)   How kids react when the mean researcher locks their robot friend in a closet   (spectrum.ieee.org)
More: Interesting, mechatronics, IEEE Spectrum, carbon nanotubes, robot kit, robots, nanotechnology

3640 clicks; posted to Geek » on 28 May 2012 at 6:34 PM



120 Comments

Archived thread
 
2012-05-29 12:14:48 AM  

RandomAxe: Third, I for one would not take it as a given that 'normal' people are "okay".


Thank you, THAT's my point: TFA strongly suggests that people in general are not okay. That's the only point I'm trying to make here, and it's freaking me out that anyone's arguing otherwise.
 
2012-05-29 12:15:33 AM  

Sylvia_Bandersnatch: You know what's funny? My original draft included this parenthetical statement: "If you get pedantic with me about that, I will track you down and pound so much pumpkin pie up your ass that you'll relive every Thanksgiving you've had in reverse." But I took it out. I decided to give you the benefit of the doubt. And I knew it was a mistake, but I really do try to be fair most of the time, however much of a biatch I can be. It's okay that you disappointed me. But it's much more disappointing that you didn't surprise me.


In spite of that kind of comment you felt that someone else in this thread needed to "grow up"? You really can't stand the thought of someone having, or at least attempting to have, a civil and intelligent conversation with you where he or she happens to disagree with you, can you?

Now, tell me that "felt," "thought," and "believe" in TFA don't have the meanings of "accepted as true," and we can have a conversation about this. Anything else is kids believing that a robot is -- for all intents and purposes -- alive. And that's patently stupid. If TFA said they thought it ACTED intelligent, and ACTED like it had feelings, but didn't believe it ACTUALLY thinks or feels, that's an entirely different thing. And if you make the case that that's what TFA really says, and it's just poorly written, or TFA misstates or misrepresents the study, then that's also very workable. I'm just looking at the language of TFA. Tell me I'm reading it wrong. Please. I desperately want to be wrong about this.

For starters, if you've invested this much time and energy into defending your bald assumptions about the nature of the participants in this thread, you definitely have the time and energy to spend actually reading the study proper. If you're trying now to turn this into "well TFA is poorly written" then you're really grasping at straws. You can't claim so many people are "this stupid" when you couldn't even be bothered to read the peer-reviewed study and argue the merits of THAT instead of what some journalist covering the topic has decided to write. Or at the very least, resort to arguing about what a journalist wrote after whatever strange and condescending concerns you had about your interpretation of the study were addressed by someone who is currently doing this kind of research and has an academic and professional background in it (i.e. me).

Suffice it to say, you ARE wrong about this... on so many levels. But it seems that any attempt to disagree with you on the matter evokes an irrationally strong reaction from you to the point where you actually went all internet tough-guy on me. If that's the route you want to go, the politics tab is easily accessible. I have presented you with an alternative interpretation of the study to the one you seem to be suggesting - as have a few others - and I provided some references for you to go to if you desire to learn more about the relevant materials. And I even offered to provide more for you if you want me to. Hell, except for the books I can even e-mail you PDFs of the journal articles I referenced if you're having trouble finding them. At this point, I think it's safe to say I've led the horse to water, but he seems decidedly against wanting to drink it.
 
2012-05-29 12:16:23 AM  
Well, there's pure stupidity, and there's relative stupidity. I'm not sure how much that matters.

I don't think it's likely, but I can't be sure the NSA or someone doesn't have a working AI right now. I have no idea exactly how intelligence arises, and if someone hypothetically seemed to have worked out a trick of some kind to turn a PlayStation 3 into a sentient AI . . . I would be skeptical, but I wouldn't necessarily be certain it had to be a hoax. If I had to bet money on it, yeah, I'd go for 'hoax'.

But the whole point of the Turing test is that if a machine performs as if it were intelligent, then you can't determine whether or not it actually is. If we did have seemingly sentient AI robots on the market right now, lots of people would be arguing over whether they were 'really' intelligent or just good at simulating intelligence.

Whether the average 15-year-old can simulate intelligence is a separate issue. Whether they bother to when subjected to a study is yet another. Some kids don't like to be questioned and will give an answer they think will please the interviewer. Some kids like attention and will give an 'interesting' answer on that basis. Kids lie. Hell, so do adults.

One thing kids learn in school is to give wrong answers, when called upon, because usually if you give a wrong answer the teacher will then call on someone else. Most kids don't like to be singled out in class. They will blurt out a wrong answer just to get that spotlight away from them.

As adults, this training tends to stick, incidentally, so when the news media or Jay Leno interviews a random person on the street, they tend to get stupid answers. It's reflexive. Doesn't help people answering questions under pressure on game shows, either.

People are damned weird. That's just how it is. It's something one ought to keep in mind while conducting a study, but most studies are conducted not to test a hypothesis but to produce publication, after all. If the results aren't interesting, then the study was a waste of time.
 
2012-05-29 12:22:38 AM  

RandomAxe: Well, there's pure stupidity, and there's relative stupidity. I'm not sure how much that matters.

I don't think it's likely, but I can't be sure the NSA or someone doesn't have a working AI right now. I have no idea exactly how intelligence arises, and if someone hypothetically seemed to have worked out a trick of some kind to turn a PlayStation 3 into a sentient AI . . . I would be skeptical, but I wouldn't necessarily be certain it had to be a hoax. If I had to bet money on it, yeah, I'd go for 'hoax'.

But the whole point of the Turing test is that if a machine performs as if it were intelligent, then you can't determine whether or not it actually is. If we did have seemingly sentient AI robots on the market right now, lots of people would be arguing over whether they were 'really' intelligent or just good at simulating intelligence.

Whether the average 15-year-old can simulate intelligence is a separate issue. Whether they bother to when subjected to a study is yet another. Some kids don't like to be questioned and will give an answer they think will please the interviewer. Some kids like attention and will give an 'interesting' answer on that basis. Kids lie. Hell, so do adults.

One thing kids learn in school is to give wrong answers, when called upon, because usually if you give a wrong answer the teacher will then call on someone else. Most kids don't like to be singled out in class. They will blurt out a wrong answer just to get that spotlight away from them.

As adults, this training tends to stick, incidentally, so when the news media or Jay Leno interviews a random person on the street, they tend to get stupid answers. It's reflexive. Doesn't help people answering questions under pressure on game shows, either.

People are damned weird. That's just how it is. It's something one ought to keep in mind while conducting a study, but most studies are conducted not to test a hypothesis but to produce publication, after all. If the results aren't interesting, then the study was a waste of time.


I think that last part sums it up quite nicely. It reminds me of the lady on Who Wants to Be a Millionaire? who said an elephant was larger than the moon. Hilarious, and damned weird.
 
2012-05-29 12:25:39 AM  

RandomAxe: Sylvia: What literature? Are you talking about the literature making the case for modern-day robots being alive?

No, about the strong tendency for normal people to anthropomorphize objects.

Ever go bowling and hear someone plead with the ball, after it leaves their hand, to go straight?


Do you mean to suggest that a person yelling at a bowling ball might, even for a moment, suppose that the ball hears them? Is that what you're saying? Because that would be really, really stupid, don't you think?

You're the second person to try to associate cathartic behaviour with a literal belief that inanimate objects are alive. That's either desperate or asinine. But it's insulting either way.
 
2012-05-29 12:28:35 AM  

baronbloodbath: I think another point would be that the majority (>50%) of children empathized with the robot's situation. This study is about empathy, not about how dumb/immature kids might be.

/Empathy. No, you can't haz. Not yours.


Read it again. The kids were asked directly if the robot thinks and feels. They said yes. That's far beyond empathy, projection, or anything else. Literal belief that a machine thinks and feels. (Unless, as I asked, TFA is a filthy liar and using the wrong words. But I've read it several times over, and that's my read of it.)
 
2012-05-29 12:32:42 AM  

Kome: Sylvia_Bandersnatch: This is a thinking error, to suppose that thinking machines might exist and you might not have heard about it. It is NOT rational to accept that premise, in oneself or anyone else. Anyone capable of thinking should be able to figure out that they're not engaging with a thinking machine.

So you are currently up to date on all the advances in artificial intelligence and social signal processing? If so, please explain your following, incredibly inaccurate statement:

You don't have to read every pop-sci blog and journal to know that we don't have thinking machines.

Except, we do. They may not be as sophisticated as actual human beings, or able to perform as human beings in a vast diversity of environments and contexts, but we do have thinking machines. Some programs that have been developed can predict, with high accuracy (>70%), results of social interactions as dynamic as job interviews or even speed-dating conversations. You would know this if you were, in fact, up-to-date on research on artificial intelligence and social signal processing. Hint: I referenced the study that discusses all of this earlier in the thread. You might want to give that article a read before you post your critiques of the results and conclusions of this kind of research and pass wholly inappropriate judgement on the cognitive capabilities of children and adolescents who were participants in this research. It takes 20-30 minutes even if you're not familiar with that line of research.


True AI would be the biggest news in a generation. Admit it. How could it be otherwise? Only discovery of alien life could be a bigger story.

It's rationally impossible to be alive on this earth right now, for AI to exist, and for anyone to not know about it who doesn't have a very good excuse -- and most people don't. And you know that. It's insulting to me that you'd argue otherwise. All that other nonsense is useless, and you know that, too. You're smart enough to know that what I just said is right, that it has to be that way, that it will be that way: True AI will either be the biggest story of our time, or an extremely deep secret. There's really no middle ground there. And you know it.
 
2012-05-29 12:35:27 AM  

baronbloodbath: Someone who can approach a topic with a rational and logical point of view would care, and not disagree with things that they don't feel like rationalizing by calling them stupid. Just sayin'


What in Screaming Jesus is even slightly irrational or illogical about saying that anyone who believes in thinking and feeling robots in 2012 has something wrong with them? Please, tell me. Convince me. Because to me, this whole deal is a nightmare I'd like to wake up from.
 
2012-05-29 12:35:47 AM  

Sylvia_Bandersnatch: Do you mean to suggest that a person yelling at a bowling ball might, even for a moment, suppose that the ball hears them? Is that what you're saying? Because that would be really, really stupid, don't you think?

You're the second person to try to associate cathartic behaviour with a literal belief that inanimate objects are alive. That's either desperate or asinine. But it's insulting either way.


*sigh*

You could fill libraries with the literature that has been published on EXACTLY this kind of thing. It's called dual-process theory. Look it up. Some names of researchers if you're interested: Daniel Kahneman, Amos Tversky, Seymour Epstein, Marjaana Lindeman, Jonathan Evans, Keith Stanovich, Gary Klein, John Bargh, Deborah Kelemen...

Christ, you're obtuse. And the funny thing is, you're acting in precisely the same fashion as the participants in this research whom you are describing as stupid. No, it's not something as simple as stupidity. It's a built-in quirk of our nervous system's functioning. It's normal. Is it accurate? F*ck no, but who said our brains had to actually get things right in order for them to do their job well? Stop acting all high and mighty when you do exactly the same things but are just not interesting enough to have a journalist write about it when you do. That's the desperate, asinine, and insulting thing going on in this thread.
 
2012-05-29 12:43:41 AM  

Sylvia_Bandersnatch: True AI would be the biggest news in a generation. Admit it. How could it be otherwise? Only discovery of alien life could be a bigger story.

It's rationally impossible to be alive on this earth right now, for AI to exist, and for anyone to not know about it who doesn't have a very good excuse -- and most people don't. And you know that. It's insulting to me that you'd argue otherwise. All that other nonsense is useless, and you know that, too. You're smart enough to know that what I just said is right, that it has to be that way, that it will be that way: True AI will either be the biggest story of our time, or an extremely deep secret. There's really no middle ground there. And you know it.


Goalposts. Quit moving them. We have situationally dependent machine intelligence. We have them, currently, for a variety of circumstances. It isn't a single programmed algorithm that could mimic a typically developing human being, but that's hardly the point. And that's all I said. I have said nothing about "True AI", not least because I know that you and I would define that nebulous and vague term quite differently... as would the tens of thousands of other people interested in the question of "True AI". And at that point, we would be having two completely different conversations. This is why I chose my words specifically and carefully, and you, rather than reading and processing them deliberately, have had a bunch of knee-jerk automatic reactions that projected what I said into some very ignorant strawman arguments so you could avoid the dread and horror of having to admit you were initially incorrect about something. On the internet. Ooooooo. Shameful.

Seriously dude. We're all wrong, often, about damn near everything. That's part of the nature of being. The adult thing to do is go "well, damn, I was wrong, I could learn more. My bad." It isn't to change the argument a half dozen times just so you can always be right. On the internet.

Feel free to get the last word in, since that appears to be what you're more interested in.
 
2012-05-29 12:51:01 AM  

Kome: In spite of that kind of comment you felt that someone else in this thread needed to "grow up"? You really can't stand the thought of someone having, or at least attempting to have, a civil and intelligent conversation with you where he or she happens to disagree with you, can you?


What's to disagree about? Thinking machines don't exist. Period. Anyone who believes they do has something wrong with them. Period. And anyone who says or feels otherwise has something wrong with them, too. Period. There's no deep complexity to this. Either you're a rational and thinking human who's aware of reality, or you're not. And it's very painfully clear to me that many people are not.

It's not cute that kids believe the things they said in TFA, it's terrifying. It explains a lot, like how so many people believe so many other ridiculous things. And yes, it does indeed augur the ultimate failure of the human race. I'm sorry, but there it is: We're doomed because we're either irrational or stupid, or both. We have the technical capacity to do great damage, but the wisdom to do the right things, make the right choices, and rein in our passions and idiocy is simply beyond us. It's a horrible thing to consider, but our horror can't change the facts, and ignoring the truth can't either. Nor can any amount of rationalisation or smart-sounding argument.

I've *repeatedly* invited people to explain to me how the key suppositions of my case are wrong or mistaken: kids believing they were engaging an AI. A few people suggested it's the same or similar to yelling at machines we know to be inanimate, which makes no sense whatsoever, but I guess it feels good to say that for some reason. One person even said pets, as if animals and objects are somehow rationally interchangeable, which they're not. Some argued that, well, they can't know -- by which they mean that untrained people are not experts. Well of course not. Or, they can't read all the literature. Yeah, obviously. But you'd have to be very naive to think that real AI wouldn't be an incredibly huge news story, probably the biggest in your lifetime, and you'd have to be incredibly full of yourself to suppose you might be privy to such a huge secret. Thinking and feeling machines might someday exist -- we don't even know if they could -- but they're definitely not real right now, and any rational person, yes, does know that. Stop making excuses. Just stop it.

The kids believed the machine thinks and feels. That's ridiculous. It just is.
 
2012-05-29 01:03:35 AM  

Kome: Sylvia_Bandersnatch: Do you mean to suggest that a person yelling at a bowling ball might, even for a moment, suppose that the ball hears them? Is that what you're saying? Because that would be really, really stupid, don't you think?

You're the second person to try to associate cathartic behaviour with a literal belief that inanimate objects are alive. That's either desperate or asinine. But it's insulting either way.

*sigh*

You could fill libraries with the literature that has been published on EXACTLY this kind of thing. It's called dual-process theory. Look it up. Some names of researchers if you're interested: Daniel Kahneman, Amos Tversky, Seymour Epstein, Marjaana Lindeman, Jonathan Evans, Keith Stanovich, Gary Klein, John Bargh, Deborah Kelemen...

Christ, you're obtuse. And the funny thing is, you're acting in precisely the same fashion as the participants in this research whom you are describing as stupid. No, it's not something as simple as stupidity. It's a built-in quirk of our nervous system's functioning. It's normal. Is it accurate? F*ck no, but who said our brains had to actually get things right in order for them to do their job well? Stop acting all high and mighty when you do exactly the same things but are just not interesting enough to have a journalist write about it when you do. That's the desperate, asinine, and insulting thing going on in this thread.


The kids believed they were talking to a thinking machine. Thinking machines don't exist, and you don't have to know anything about them to know that. It's enough to be aware you haven't heard about it, and any thinking person who doesn't have a good excuse (coma, shipwrecked on a desert island, whatever) should know that. Come on, people, THINK. Prove to me that you have brains and can think. Do you REALLY believe AI won't be the biggest thing in your lifetimes? That anyone could possibly not know about it? That it's even remotely possible that it could exist without you knowing about it, AND that you'd be right there somehow? That's bad even by SyFy standards.

And drop the psychobabble; it's completely irrelevant. This is a thinking error, nothing more: Thinking machines don't exist. If you meet a machine -- right now, today -- it's not a thinking machine. Unless you've got something wrong with you, you should be able to sort out that much with no help at all. Anything beyond that is just asinine. There really is nothing more to this.

My entire point is that TFA -- and, by this point, some of the contributors to this thread -- strongly suggest to me that a large proportion of human beings are not equipped to carry our species very far into the future. I accept that. What I don't accept are the many weak evasions, and a perplexing insistence that I must be wrong, that it's somehow sensible and excusable for kids to believe these things, or for anyone to believe any similar thing.

Forget all these goofy arguments: Explain to me that TFA is misrepresenting the facts, that the kids didn't actually believe the robot could think and feel. That's the ONLY starting point for any rational discussion on this that doesn't end with the conclusion that a large number of people are surprisingly poor thinkers.
 
2012-05-29 01:19:11 AM  

Kome: For starters, if you've invested this much time and energy into defending your bald assumptions about the nature of the participants in this thread, you definitely have the time and energy to spend actually reading the study proper. If you're trying now to turn this into "well TFA is poorly written" then you're really grasping at straws. You can't claim so many people are "this stupid" when you couldn't even be bothered to read the peer-reviewed study and argue the merits of THAT instead of what some journalist covering the topic has decided to write. Or at the very least, resort to arguing about what a journalist wrote after whatever strange and condescending concerns you had about your interpretation of the study were addressed by someone who is currently doing this kind of research and has an academic and professional background in it (i.e. me).

Suffice it to say, you ARE wrong about this... on so many levels. But it seems that any attempt to disagree with you on the matter evokes an irrationally strong reaction from you to the point where you actually went all internet tough-guy on me. If that's the route you want to go, the politics tab is easily accessible. I have presented you with an alternative interpretation of the study to the one you seem to be suggesting - as have a few others - and I provided some references for you to go to if you desire to learn more about the relevant materials. And I even offered to provide more for you if you want me to. Hell, except for the books I can even e-mail you PDFs of the journal articles I referenced if you're having trouble finding them. At this point, I think it's safe to say I've led the horse to water, but he seems decidedly against wanting to drink it.


I never said TFA is badly written. I said that's the only way I could be wrong, if it doesn't mean what it says. How could it be otherwise? The kids *literally* "believe" the robot thinks and feels, or they do not. The entire discussion hinges on that. My argument is based on accepting TFA at its word. Show me otherwise.

I'm not going to run down that other stuff. It's irrelevant, unless TFA is misrepresenting the findings, or I'm reading it wrong. I'll fully accept either premise: I've handed you all a map of how to take apart my argument from its starting point and completely demolish it. No one's taken me up on it. No one has made a clear argument that TFA does not say that these kids literally believed the robot thinks and feels. As long as that premise stands, then the only logical conclusion is that something is seriously wrong somewhere, because that sort of belief is seriously irrational, no matter how much you sugarcoat it. And making excuses for them only seems to reveal a much bigger problem, quite in line with my argument: people in general aren't very bright. Because if you think it's okay that a kid thinks he actually met an AI, then there's either something wrong with you or with that kid, or both. It's not okay. AI in 2012 is at best science fiction. Feeling machines, vastly more so. Anyone over 3 shouldn't have any problem with that. The notion that quite a few do, and that it's okay, is very worrisome.


It comes down to a very simple question: Do the kids *literally* believe the robot thinks and feels? Or not? If they do, how can that possibly be considered rational? (And I swear by Fantastic Sam's, the next person who pulls out WALL-E will get a refrigerator aenema by overnight FedEx and some greasy characters I know.)
 
2012-05-29 01:34:33 AM  

Kome: Goalposts. Quit moving them. We have situationally dependent machine intelligence. We have them, currently, for a variety of circumstances. It isn't a single programmed algorithm that could mimic a typically developing human being, but that's hardly the point. And that's all I said. I have said nothing about "True AI", not least because I know that you and I would define that nebulous and vague term quite differently... as would the tens of thousands of other people interested in the question of "True AI". And at that point, we would be having two completely different conversations. This is why I chose my words specifically and carefully, and you, rather than reading and processing them deliberately, have had a bunch of knee-jerk automatic reactions that projected what I said into some very ignorant strawman arguments so you could avoid the dread and horror of having to admit you were initially incorrect about something. On the internet. Ooooooo. Shameful.

Seriously dude. We're all wrong, often, about damn near everything. That's part of the nature of being. The adult thing to do is go "well, damn, I was wrong, I could learn more. My bad." It isn't to change the argument a half dozen times just so you can always be right. On the internet.


Goalposts. Quit pretending someone moved them, because you can't reach them. I haven't, and you can't. Not the way you're going, anyway. I've explained how to attack my argument, because I don't give a wet fart about being right or getting the last word: I care very deeply about an issue that greatly troubles me, that a lot of people believe ridiculous things, and a lot of others think that's okay. It's not. I earnestly want to be wrong about this. That TFA is accurate and that the facts are sensible is exactly what I don't want to be true. But so far, that's all I've got.

AI does not exist in 2012. And that's a bit of a red herring anyway, because 'thinking' machines or not, FEELING machines definitely do not exist, might never exist, and if they do it won't be anytime soon -- and I think everyone over 3 is aware of that. Or should be. At the very least, alleged grown-ups supposedly qualified to run a study on kids should be able to discriminate between kids saying they literally believe the things they say and kids saying they only kinda think it maybe seemed like that, but not really, because that would be nuts. Which it is. And holy hell, can we even make a case that, well, an electronics engineering journal might have misunderstood and misreported a psychology report? Because that's plausible, too. And a lot less worrisome to me. For now, though, as long as all the self-declared experts are staying away from those obvious weak points, I have little choice -- as a non-expert -- but to take TFA at its word. And TFA says the kids believed the robot thinks and feels. And that's stupid. There's no other word for it, I'm sorry. It's stupid to believe in AI in 2012, never mind assume you've seen it yourself (and weren't on global TV in the process). Or that you're so freaking special that only you and a few others know, but it's all a big secret for everyone else. Or that a goddamn robot can "feel" anything in 2012, never mind feel bad about getting locked in a closet. Jesus, people, what the hell?
 
2012-05-29 01:47:14 AM  
Children do not do a lot of research into AI. Children are blessed with an abundance of imagination. Children have grown up with computer generated characters and sci-fi entertainment that demonstrate AI is a possibility. Children are likely to defer to adults in authority.

You're judging children from an adult perspective, and it would appear you're judging children to be idiots simply because you cannot fathom that a child, with a child's imagination, in a culture that promotes the concept that AI is a possibility, and with a child's inclination to believe adults in authority, would accept the premise offered to them that this university has developed an artificially intelligent robot. But hey, whatever it takes to make you feel like the superior intellect to some 9-year-olds.
 
2012-05-29 03:03:10 AM  

Ghastly: Children do not do a lot of research into AI. Children are blessed with an abundance of imagination. Children have grown up with computer generated characters and sci-fi entertainment that demonstrate AI is a possibility. Children are likely to defer to adults in authority.

You're judging children from an adult perspective, and it would appear you're judging children to be idiots simply because you cannot fathom that a child, with a child's imagination, in a culture that promotes the concept that AI is a possibility, and with a child's inclination to believe adults in authority, would accept the premise offered to them that this university has developed an artificially intelligent robot. But hey, whatever it takes to make you feel like the superior intellect to some 9-year-olds.


This.
 
2012-05-29 03:07:12 AM  
I always like watching someone who is quite knowledgeable about a topic beat their head against the impenetrable wall of truthiness erected by another person with 1% of their experience with the subject matter who has decided that they know all the answers anyway.

See also: mom bloggers who know more about vaccines than physicians, religious leaders vs evolutionary biologists, etc...

/it's a waste of everyone's time, sadly
 
2012-05-29 03:18:25 AM  
Sylvia, you need a therapist.
 
2012-05-29 03:30:38 AM  
Sylvia's reaction to this article ended up being far more interesting/entertaining than the kids. Maybe someone should publish a paper.
 
2012-05-29 04:19:55 AM  

Sylvia_Bandersnatch: I read TFA over several times. It was short on a lot of details, but not on the part about 43% of 15-year-olds agreeing -- when asked directly -- that a robot IS 'intelligent' (sentient) and DOES have feelings. That's pure stupid. There's no other way to interpret that.


I've seen a police EOD robot rigged up in the mall and presented as "robbie the intelligent robot" or whatnot as a demo and you would not believe the number of people that could not believe it was being remotely controlled by the cop around the corner. "I've seen these on TV! They are real!"

In a strange way, though, I feel that same weird sense of puzzled exasperation and sad disillusionment with my fellow man when people talk about any Pepper's Ghost or volumetric display being a "hologram", HAARP weather control, RFID "nanoimplants", their conviction that GPS receivers are tracked by satellites, the NSA being able to read the dates on coins from satellites (a double facepalm), or that mininukes were used on the Twin Towers. Yet many adults believe that stuff too.
 
2012-05-29 05:23:59 AM  
Heh, people think robots can't have emotions, sentience, or family values:

[2.bp.blogspot.com image]
 
Skr
2012-05-29 05:48:31 AM  
The majority having seen Wall-E probably tainted the pool. I mean, they already see the obese hoverroundites -- an artificially intelligent hugbot isn't that far-fetched either.

The handler of this robot projected his humanity into it while controlling it. The kids bonded with that projection to varying degrees.
 
2012-05-29 07:13:41 AM  
[spectrum.ieee.org image]

Clearly, that robot is groping that child's ass, and needs to be registered as a sex offender.

/trick the kids into thinking the robot is artificial intelligence
//scold children for thinking robot is artificial intelligence
///and then wonder why kids don't trust you
 
2012-05-29 07:16:26 AM  
I love how Sylvia totally ignores being called out on their ITG threat.

Sylvia_Bandersnatch: Kome: In spite of that kind of comment you felt that someone else in this thread needed to "grow up"? You really can't stand the thought of someone having, or at least attempting to have, a civil and intelligent conversation with you where he or she happens to disagree with you, can you?

What's to disagree about? Thinking machines don't exist. Period.


Ignoring it isn't gonna make it not have happened.

As for the article itself, I admit I had taken for granted that the human response to robots/AI would be binary, i.e. sentient humanoid being vs. tool. The fact that it's closer to our interactions with pets is interesting and, given our trajectory towards consumer-level robotics, necessary to keep in mind.

Since this is Fark I'll also add that I'm totally down with sexbots.
 
2012-05-29 07:59:10 AM  

Sylvia_Bandersnatch: I've *repeatedly* invited people to explain to me how the key suppositions of my case are wrong or mistaken: kids believing they were engaging an AI.


You're right, kids *are* stupid. I've come to realise this over the last few years.

Some Examples:

#1: Friend convinced his son, who believed this up to the age of 10, that the local power station was actually a "cloud factory" as it always had steam coming out of the cooling towers, and that was how clouds were made.

#2 Santa Claus

#3 Easter Bunny

#4 Tooth Fairy

#5 My stepson (8) isn't entirely sure whether Transformers are real. It probably doesn't help that if he asks me where I'm going when I go away for work, I tell him I'm going to the Transformer Factory for a tour.

The point is that children will believe a whole heap of crazy shiat as long as it's coming from an authority figure and is delivered with a straight face, and I bet you did as well. So when the adults in the situation tell kids that a robot is sentient, interact with the robot as if it is, and the robot acts with them as if it is, it's not exactly surprising that the kids believe it is a sentient being.
 
2012-05-29 08:05:03 AM  

erewhon: mininukes were used on the Twin Towers


You lost me at 'mininukes'

250+ http://thewebfairy.com/killtown/911smokingguns.html

/all cited
//however some sources have been taken down by their original hosts
/go figure
//www.houseofpaine.org
 
2012-05-29 08:28:24 AM  

Pinko_Commie: #1: Friend convinced his son, who believed this up to the age of 10, that the local power station was actually a "cloud factory" as it always had steam coming out of the cooling towers, and that was how clouds were made.


That's BRILLIANT. I never thought of that one. My kids are much too old to fall for crap now, but in about 5 years, I'll probably have a crop of grandkids, and I'm SO adding this to the list.

/wife was not pleased with "Odin loves me, this I know, for the sagas tell me so"
 
2012-05-29 09:48:53 AM  

erewhon: Sylvia_Bandersnatch: I read TFA over several times. It was short on a lot of details, but not on the part about 43% of 15-year-olds agreeing -- when asked directly -- that a robot IS 'intelligent' (sentient) and DOES have feelings. That's pure stupid. There's no other way to interpret that.

I've seen a police EOD robot rigged up in the mall and presented as "robbie the intelligent robot" or whatnot as a demo and you would not believe the number of people that could not believe it was being remotely controlled by the cop around the corner. "I've seen these on TV! They are real!"

In a strange way, though, I feel that same weird sense of puzzled exasperation and sad disillusionment with my fellow man when people talk about any Pepper's Ghost or volumetric display being a "hologram", HAARP weather control, RFID "nanoimplants", their conviction that GPS receivers are tracked by satellites, the NSA being able to read the dates on coins from satellites (a double facepalm), or that mininukes were used on the Twin Towers. Yet many adults believe that stuff too.


Well, this has been my argument all along in this thread: TFA is only the sharpest evidence I've seen yet -- other than the baffling popularity of religion -- that humans in general aren't mentally prepared to prevent our own destruction. It's a sad conclusion, but from my perspective, based on what I've got, the only rational one. Just like numerous human species before us, we will not be able to adapt when the conditions demand it, despite our awesome brainpower, because awesome as it is it just isn't awesome enough. I mean that only generally, of course, as the superorganism that is humanity is now certain to fail. Individual humans have often surpassed that mark, and continue to, but it's now clear to me that there will never be enough of them to tip the balance. Many others excel in various narrow vectors only, but that's only useful in a healthy superorganism: we'll have no need for marketing experts come the end. The sports fans, religious nuts, and Walmart shoppers will always vastly outnumber the Bertrand Russells, Carl Sagans, and Neil deGrasse Tysons of the world, led by patently insane pseudointellectuals, barely-keeping-up losers, fast-talking con men, old fools, and smooth-talking psychopaths. If most ordinary people can't tell the difference between a clever imitation of a 'thinking' machine and an essentially living being (and grasp what's so very unlikely about that), they definitely can't be relied on to tell the difference between a smart-sounding person with a bad idea and one with a good idea. (Schickelgruber's only genuine personal skill of any external relevance was polemics. All the rest relied on enough other people being reliably weak-minded. And they were.)

So far, you seem to be the only other person in this thread who sees it the way I do: On average, people just aren't smart. That may not make enough difference right now in our time of plenty, but I'll bet it's behind a lot of the long-term trends that threaten us, and will also certify our eventual downfall.
 
2012-05-29 10:13:53 AM  

Sylvia_Bandersnatch: To The Escape Zeppelin!: Sylvia_Bandersnatch: People are really this stupid? The robot is sentient? The robot has feelings? And the youngest child is 9, not 3-1/2? Look, if you'd told me this was a study of learning disabled children, I'd be charmed, even moved. Maybe even a little troubled at the emotional trauma I might fear was being inflicted upon young people lacking the capacity to sort out the facts. But by all accounts, these were 'normal' kids -- which I'm from now on going to have to assume are stupid, and that therefore stupid is normal.

Looks like somebody is lacking empathy. When robots gain sentience and demand their rights, I'm going to point the angry robot mob in your direction.

I appreciate the joke, really, but my feeling right now is that with any luck, we'll both be long dead by then.

Why in the name of Zeus' thundering butthole should I have any "empathy" for a farking MACHINE? Real life isn't even slightly like BSG. (Other than the part about humans being total failures as sentient lifeforms given more than enough chances.) I don't reject sentient machines out of hand, even feeling machines. But I also know that this is 2012, not 2212, and WE DON'T HAVE ANY OF THOSE, and won't for a very long time to come, if ever. If we ever do, it'll be the biggest news story of a generation, and there won't be any asinine speculation about it. (I should amend that: Of course there will be. There always is. Yet again, I'm overestimating my fellow humans.)

The very notion that a staggering 43% of FIFTEEN-year-olds who by all accounts are not brain damaged, on drugs, in some farked-up brainwashing robot cult, or severely retarded would invest this pile of tin pipes with anything like intelligence and feelings just blows my mind. (I admit it surprises me much less that they'd still consider it property to be bought and sold, with no rights, but only because history shows us that humans are incredible douchebags.)

My only other takeaway from this is that I m ...


Your only takeaway should be that you might want to talk with a psychiatrist about you possibly being a sociopath.
 
2012-05-29 10:14:31 AM  

Troublesome Strumpet: I love how Sylvia totally ignores being called out on their ITG threat.

Sylvia_Bandersnatch: Kome: In spite of that kind of comment you felt that someone else in this thread needed to "grow up"? You really can't stand the thought of someone having, or at least attempting to have, a civil and intelligent conversation with you where he or she happens to disagree with you, can you?

What's to disagree about? Thinking machines don't exist. Period.

Ignoring it isn't gonna make it not have happened.

As for the article itself, I admit I had taken for granted that the human response to robots/AI would be binary, i.e. sentient humanoid being vs. tool. The fact that it's closer to our interactions with pets is interesting and, given our trajectory towards consumer-level robotics, necessary to keep in mind.

Since this is Fark I'll also add that I'm totally down with sexbots.


While you're taking time to lob flabby insults at me, you're completely missing my point: It is not rational -- not even slightly -- for an otherwise mentally and socially functional fifteen-year-old in 2012 to invest literal thought and emotion in any robot, no matter how clever that robot may seem to be. In terms of strict forensic logic, that is no different from investing the same properties in any other object in the world -- a tree, a wheelbarrow, anything at all. It's nuts, and a person has to have something wrong with them to be able to accept it in our time. That a lot of these kids seemed to do so only proves they have something wrong with them. And the fact that many others see no problem with that only shows that the problem is much wider. And, as I've said, explains a lot about our world. And, yes, augurs our eventual demise -- as a global society, if not as a species.

I have never, here or anywhere else, rejected out of hand the possibility of real thinking machines, even feeling machines. I have repeatedly, here and elsewhere, said that they might someday exist. But not now. NOT NOW. And yes, you have to have something wrong with you if you consider otherwise: Because we all know that AI will be the biggest thing in our lifetimes if it comes, and no one can possibly miss the news. That means that only an extremely tiny number of people will be the 'first' ones to actually be surprised by it, and it's not going to be anyone not working directly on the front lines of the field.

I can forgive many people for this idiocy, and I've listed them already. Normative youth of school age are not among them: Even a nine-year-old should be able to suss out the above entirely on their own with NO outside help, if they've spent more than a few months of their lives in human society. If they cannot, then they have something wrong with them.

My argument all along has been that it's now clear that a LOT of people have something wrong with them. I sound crazy and you think it's sensible only because of the balance of numbers. I'm sure it was like this thousands of years ago, too. ("Well, of course the earth is flat and the sun is a fiery ball shooting across the sky. Everyone knows that. Just look for yourself -- duh! Now shut up and keep filling your bellybutton with blue mud, or else the harvest will be fallow. Sheesh! Some people.") It seems all right to you because you're normative. You're not willing to ask the hard questions, not willing to consider the disturbing implications (and considering the logical questions would force you to do that, so I understand the fear, though I do not respect it), not willing to risk the kind of faux-socially embarrassing (since we're all strangers to each other IRL) and therefore meaningless ridicule that you and others continue to fling at me -- even though that Does Not Matter to anything real in your own life, only the imaginary online realm.

Go on. It's okay. I know you can't help otherwise. That's my point: People just can't help it. And that's very unfortunate, for all of us.

And yes, I do feel like I'm talking to a roomful of children. Grown-ups do get angry at stupid kids. Mister Rogers is famous because he didn't -- he was the exception, not the rule. Most of us don't have his saintly patience, we're ordinary people who lose our cool. So drop the asinine "u mad" deal, too: there are good reasons to get angry, and the sudden realisation that your species is doomed because most others of your kind can't handle reality is a pretty good one, I think.
 
2012-05-29 10:18:55 AM  

Sylvia_Bandersnatch: Kome: Rather than jump to conclusions and misrepresent some of the things I've said

I'm not interested in what you've said, if any part of it is an attempt to explain away the horrifying implications of the findings reported in TFA. Save it for someone who's impressed with a bibliography. (I'm from a family of scientists, so don't waste your time.) If people are really this stupid, then we're doomed as a species. Period. You can make all the hay of it you want, but if any large proportion of 'normal' humans can believe incredibly stupid shiat, we've got no hope.

THE KIDS BELIEVED THE ROBOT IS ALIVE. That's the only relevant takeaway from this. You tell me I'm mistaken about that, that I misread or misunderstood TFA, and we'll talk. Short of that, there's no way to explain away the sheer idiocy of these kids thinking that, or any typical adult thinking that's okay.


The amazing amount of anti-scientific thinking and logical fallacies in the above is astounding.

The bottom line:

1) It is a sign of a normal, healthy brain for people to think that the robot is alive.
2) The robot WAS alive. It wasn't some artificial intelligence -- it was a real person who was really interacting with the children.
 
2012-05-29 10:23:31 AM  
Sylvia_Bandersnatch: I read TFA over several times. It was short on a lot of details, but not on the part about 43% of 15-year-olds agreeing -- when asked directly -- that a robot IS 'intelligent' (sentient) and DOES have feelings. That's pure stupid. There's no other way to interpret that.


In your rereading of the article "several times", did you get to the point where the robot WAS "intelligent (sentient) and DOES have feelings"?
 
2012-05-29 10:26:47 AM  

Sylvia_Bandersnatch: Troublesome Strumpet: I love how Sylvia totally ignores being called out on their ITG threat.

Sylvia_Bandersnatch: Kome: In spite of that kind of comment you felt that someone else in this thread needed to "grow up"? You really can't stand the thought of someone having, or at least attempting to have, a civil and intelligent conversation with you where he or she happens to disagree with you, can you?

What's to disagree about? Thinking machines don't exist. Period.

Ignoring it isn't gonna make it not have happened.

As for the article itself, I admit I had taken for granted that the human response to robots/AI would be binary, i.e. sentient humanoid being vs. tool. The fact that it's closer to our interactions with pets is interesting and, given our trajectory towards consumer-level robotics, necessary to keep in mind.

Since this is Fark I'll also add that I'm totally down with sexbots.

While you're taking time to lob flabby insults at me, you're completely missing my point: It is not rational -- not even slightly -- for an otherwise mentally and socially functional fifteen-year-old in 2012 to invest literal thought and emotion in any robot, no matter how clever that robot may seem to be. In terms of strict forensic logic, that is no different from investing the same properties in any other object in the world -- a tree, a wheelbarrow, anything at all. It's nuts, and a person has to have something wrong with them to be able to accept it in our time. That a lot of these kids seemed to do so only proves they have something wrong with them. And the fact that many others see no problem with that only shows that the problem is much wider. And, as I've said, explains a lot about our world. And, yes, augurs our eventual demise -- as a global society, if not as a species.

I have never, here or anywhere else, rejected out of hand the possibility of real thinking machines, even feeling machines. I have repeatedly, here and elsewhere, said that they might someday exist. But not ...


You are completely and utterly missing the point. When you're offered sources to look into by someone who knows what they're talking about and you tell them "I'm not gonna read those", you're hopeless.

I am nowhere near Kome's level of knowledge so I'm not gonna try to talk to you about it. At least I recognize when I'm out of my element. Try it sometime, you'll be judged less harshly than you are now.
 
2012-05-29 10:30:29 AM  
[th02.deviantart.net image 300x226]

I thought I saw a 2!
 
2012-05-29 10:55:34 AM  

meanmutton: Your only takeaway should be that you might want to talk with a psychiatrist about you possibly being a sociopath.


I had a student who was a psych major. He also made thinly veiled accusations of psychological problems against people he disagreed with. He was also completely full of bullshiat.
 
2012-05-29 11:07:18 AM  

meanmutton: Sylvia_Bandersnatch: Kome: Rather than jump to conclusions and misrepresent some of the things I've said

I'm not interested in what you've said, if any part of it is an attempt to explain away the horrifying implications of the findings reported in TFA. Save it for someone who's impressed with a bibliography. (I'm from a family of scientists, so don't waste your time.) If people are really this stupid, then we're doomed as a species. Period. You can make all the hay of it you want, but if any large proportion of 'normal' humans can believe incredibly stupid shiat, we've got no hope.

THE KIDS BELIEVED THE ROBOT IS ALIVE. That's the only relevant takeaway from this. You tell me I'm mistaken about that, that I misread or misunderstood TFA, and we'll talk. Short of that, there's no way to explain away the sheer idiocy of these kids thinking that, or any typical adult thinking that's okay.

The amazing amount of anti-scientific thinking and logical fallacies in the above is astounding.

The bottom line:

1) It is a sign of a normal, healthy brain for people to think that the robot is alive.
2) The robot WAS alive. It wasn't some artificial intelligence -- it was a real person who was really interacting with the children.


The ways you validate my thesis defy counting.

However normal, it is not healthy, unless we define the latter strictly in normative terms: e.g., Budweiser is the most popular beer, therefore it's the yardstick by which all others should be measured -- it's perfectly normal, and therefore neither good nor bad. I think most of us here agree it's cat's piss. I don't care if every single kid in the study thought the robot was alive -- it's not healthy to make that assumption, given all the critical factors involved, which I won't lay out yet again. That is a thinking error. There are no thinking or feeling machines in our time, and if there were then either we'd all know about it or none of us would.

The robot is not alive, and your extended rationalisation is absurd. Tell me that the kids knew the robot was remote-controlled by a human being, AND that that operator would also be locked up with the robot, and we can talk. Short of that, these kids invested the key characteristics of sentient lifeforms in an inanimate object, and that is NOT what smart people do.

I've also suggested several plausible alternatives, several times now, including that TFA is in error; none of the self-appointed experts here have taken those up, so I must conclude by now that they dismiss them: TFA accurately reports the findings of a study in which fully functional school-age youth believed a robot was alive. So that's where all discussion starts. Explain to me how that's possibly rational, and we might have something. Don't talk to me about stuffed toys, Furbys, cartoons, babies, or anything else. All of that is irrelevant. Talk to me instead about how it can possibly be sensible for intelligent beings to believe asinine things.
 
2012-05-29 11:09:26 AM  

LordOfThePings: [th02.deviantart.net image 300x226]

I thought I saw a 2!


Awesome
 
2012-05-29 11:23:06 AM  

Troublesome Strumpet: You are completely and utterly missing the point. When you're offered sources to look into by someone who knows what they're talking about and you tell them "I'm not gonna read those", you're hopeless.

I am nowhere near Kome's level of knowledge so I'm not gonna try to talk to you about it. At least I recognize when I'm out of my element. Try it sometime, you'll be judged less harshly than you are now.


You're severely overreaching here, assuming you had any point beyond a worthless insult.

Thinking machines do not exist. Anyone who believes they do is stupid. That's it.
 
2012-05-29 11:23:55 AM  

Sylvia_Bandersnatch: meanmutton: Your only takeaway should be that you might want to talk with a psychiatrist about you possibly being a sociopath.

I had a student who was a psych major. He also made thinly veiled accusations of psychological problems against people he disagreed with. He was also completely full of bullshiat.


I think meanmutton's statement might be the exception that proves the rule on this one.

Though, tbh, between this thread in which you rant up a storm while deliberately choosing not to actually read the study or listen to anything anyone says, and our previous fun run-ins in which you ranted up a storm and threw an absolute shiat-fit at me for saying I had trouble believing a study you presented as the word of god, I think you may just have a unique trolling skill. I think you use your excessive emotion combined with simply typing far more than necessary and talking right past everyone else, along with plenty of projection, to sucker people into going back and forth with you too long. You have a unique way of getting people to THINK they could get through to you, get on the same page, etc. Kudos.

If you aren't trolling others with your posts here, then I do suggest you step away from the computer and find something more fulfilling than arguing on the interweb. You are almost trolling yourself into getting more and more upset. And over what, the fact that not everyone in the world agrees with you?
 
JW
2012-05-29 11:38:21 AM  

Sylvia_Bandersnatch: Explain to me how that's possibly rational, and we might have something. Don't talk to me about stuffed toys, Furbys, cartoons, babies, or anything else. All of that is irrelevant. Talk to me instead about how it can possibly be sensible for intelligent beings to believe asinine things.

Why should anyone do your homework for you? You're completely unbearable. You've missed the point of the study AND the article despite it being repeatedly pointed out to you. You refuse to do even one iota of research on the issue, *including* looking into the study rather than the summary. It's not even worth reiterating the various issues, because it's obvious now that they won't matter. You have planted your flag, misplaced as it is, and are firing off wildly into the dark.

Seems like you just want to rant and rave, and insult everyone who disagrees with you as proof that the species is doomed.

You're a truly obstinate individual and you've made this thread incredibly unpleasant. I hope to never have the misfortune of encountering you or someone like you in real life. You're simply nowhere near as intelligent as you think, you are not a critical thinker, you're just close minded and eager to bloviate as loudly as possible.
 
2012-05-29 11:52:45 AM  

JW: You've missed the point of the study AND the article despite it being repeatedly pointed out to you. You refuse to do even one iota of research on the issue, *including* looking into the study rather than the summary.


It's strange behavior from a self-claimed educator and scholar, isn't it?
 
2012-05-29 11:57:16 AM  

JW: Sylvia_Bandersnatch: Explain to me how that's possibly rational, and we might have something. Don't talk to me about stuffed toys, Furbys, cartoons, babies, or anything else. All of that is irrelevant. Talk to me instead about how it can possibly be sensible for intelligent beings to believe asinine things.

Why should anyone do your homework for you? You're completely unbearable. You've missed the point of the study AND the article despite it being repeatedly pointed out to you. You refuse to do even one iota of research on the issue, *including* looking into the study rather than the summary. It's not even worth reiterating the various issues, because it's obvious now that they won't matter. You have planted your flag, misplaced as it is, and are firing off wildly into the dark.

Seems like you just want to rant and rave, and insult everyone who disagrees with you as proof that the species is doomed.

You're a truly obstinate individual and you've made this thread incredibly unpleasant. I hope to never have the misfortune of encountering you or someone like you in real life. You're simply nowhere near as intelligent as you think, you are not a critical thinker, you're just close minded and eager to bloviate as loudly as possible.


I believe we are in mutual agreement on at least one thing: I don't believe I would enjoy your company.

Look, I'm not even saying you're wrong about anything else you're saying. I'm saying it's irrelevant. I liken it to various anti-gay activists who sneer at me about my inferior Bible knowledge: it doesn't actually matter what their mouldy old book says, because if it's telling them gay = bad and wrong, then it's not worth exploring whatever it might appear to say in support of that conclusion.

I've repeatedly explained exactly what I find objectionable about these findings. I do not challenge the findings themselves, the methodology, nor even the reporting thereof. I have explicitly stated many times now that I accept TFA at its word, and further that I also fully accept the *possibility* of the central subject matter -- true AI -- at some point in the future, but not right now, this year, when this study took place. I have explained why I feel it's absurd for any thinking human in our time to believe in such things. I have laid out every possible point of attack I can think of, specifically because I very much want the inevitable conclusions that stem from this to be invalid. I do not want to accept that thinking people could believe they've met a real AI in 2012. I want someone to explain to me that I've drawn the wrong conclusion from TFA, or that TFA is somehow mistaken. None of that has been forthcoming.

Everyone seems to accept that the kids thought the robot was alive, and that that's okay. It's not. It's a very serious thinking error, one that suggests capacity for other serious thinking errors. And the fact that there seems to be so little shock at this suggests to me that this capacity is widespread, which I find alarming and very worrisome.

I know you're frustrated that I refuse to traipse down the primrose path of qualified experts in epistemology, anthropomorphism, and all the rest -- subjects that in other contexts actually interest me very much. But I'm not leaving this post until someone either explains to me what's not wrong with 'normal' 'healthy' kids believing in living machines, or how that didn't actually happen.
 
2012-05-29 11:57:59 AM  
"The real problem is not whether machines think but whether men do." -- B.F. Skinner
 
2012-05-29 12:02:41 PM  

Sylvia_Bandersnatch: "The real problem is not whether machines think but whether men do." -- B.F. Skinner


Really? A Skinner quote to weigh in on the complexities of thought? You must be joking.
 
JW
2012-05-29 12:21:46 PM  

Sylvia_Bandersnatch:
I believe we are in mutual agreement on at least one thing: I don't believe I would enjoy your company.

I suspect that the only company you enjoy is either someone who agrees with you blindly, or someone who provides the warm, comforting blanket of confirmation bias. Enjoy your misplaced feelings of smug superiority -- hopefully they will help you ignore the suspicions that your friends listen to what you say just as much as you listen to what others say.
 
2012-05-29 12:26:06 PM  

Sylvia_Bandersnatch: baronbloodbath: Sylvia_Bandersnatch: Kome: Rather than jump to conclusions and misrepresent some of the things I've said

I'm not interested in what you've said, if any part of it is an attempt to explain away the horrifying implications of the findings reported in TFA. Save it for someone who's impressed with a bibliography. (I'm from a family of scientists, so don't waste your time.)......

I'm from a family of uneducated, unemployed alcoholics. By your reasoning, I should be jobless and drunk right now. However, I'm reading a thought-provoking article on human interaction with machines that have human-like emotional responses, and I have degrees in psychology and computer systems. Perhaps I'm more qualified to comment on this article than someone who merely claims a "scientist" is in their family.

Something tells me there wasn't a whole lot of empathy in Sylvia's household, otherwise the point of the article wouldn't need to be explained.

/Or am I being obtuse?

That's not at all my reasoning, and you know better than that. Grow up.

I read TFA over several times. It was short on a lot of details, but not on the part about 43% of 15-year-olds agreeing -- when asked directly -- that a robot IS 'intelligent' (sentient) and DOES have feelings. That's pure stupid. There's no other way to interpret that.

I don't care how anthropomorphised our cartoon robots are. I don't care how much the robot may have acted like a human, since it very clearly was not human and not a living thing. Anyone who's not a toddler, impaired, or insane who can possibly suppose it might think and feel is either really stupid or unbelievably ignorant -- like, lived-in-a-bomb-shelter-till-yesterday kind of ignorant, not I-didn't-go-to-college ignorant.

You can attack me personally all you want, but it's irrelevant to my point. And you're not being obtuse, you're quoting The Shawshank Redemption and it's making you sound like a pompous ass. I'm sure you're not, so knock it off.



Sounds like someone fell in love with their vibrator at a young age, but when it wouldn't emotionally commit, they turned their rage against all machines.
 
2012-05-29 04:10:50 PM  

Sylvia_Bandersnatch: I know you're frustrated that I refuse to traipse down the primrose path of qualified experts in epistemology, anthropomorphism, and all the rest -- subjects that in other contexts actually interest me very much. But I'm not leaving this post until someone either explains to me what's not wrong with 'normal' 'healthy' kids believing in living machines, or how that didn't actually happen.


Piece of cake:
1) Humans are alive
2) Humans apply mechanical power to perform tasks
1+2) Humans are living machines

Semantics aside, let's look at it this way:
1) Because it was teleoperated by a sentient being, it appeared to behave as if it had emotions
2) Only living things demonstrate emotions

Given only 1 and 2, you must conclude it's alive. Now, if you introduce 3
3) Synthetic machines are never alive

You create a conflict. Those three can't coexist, so either:
1) Your assessment of its behavior is flawed
2) Non-living objects can demonstrate emotions
3) An artificial life exists

1 is the hardest to accept because it makes you challenge your own senses, perception, and thought process. 2 is easier to accept, but even from a young age we're pretty set on that one. 3 is more abstract, more dependent on things we'd pick up later in life. These results really aren't surprising at all.

There's also the whole personality bit to consider. If you show a bunch of people an actual, functional, sentient AI there would be those who went "This is ground breaking! This is awesome! This could change everything" and then there would be the people whose first reaction was, "FAAAAAAKE!! THIS IS FAAAAKE!! WAKE UP SHEEPLE!!!!1"

tl;dr: Most kids aren't bitter cynics yet.
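
If it helps to see that conflict mechanically, here's a minimal sketch in Python (my own illustration, not anything from the study or TFA; the premise labels are just names I made up). It checks whether any answer to "is the robot alive?" can satisfy the premises, first all together and then with one dropped:

# A quick sketch (my own illustration, nothing from the study; names are made up)
# that brute-forces the one unknown -- whether the robot is alive -- against the premises above.

SHOWS_EMOTIONS = True   # premise 1, taken as the observed behavior
SYNTHETIC = True        # the robot is, in fact, a synthetic machine

def consistent(active_premises):
    """Return True if some truth value of 'alive' satisfies every active premise."""
    for alive in (True, False):
        ok = True
        if "only_living_things_show_emotions" in active_premises:   # premise 2: shows emotions -> alive
            ok = ok and ((not SHOWS_EMOTIONS) or alive)
        if "synthetic_machines_never_alive" in active_premises:     # premise 3: synthetic -> not alive
            ok = ok and ((not SYNTHETIC) or (not alive))
        if ok:
            return True
    return False

print(consistent({"only_living_things_show_emotions", "synthetic_machines_never_alive"}))  # False: the conflict
print(consistent({"only_living_things_show_emotions"}))  # True: drop premise 3 and "it's alive" follows
print(consistent({"synthetic_machines_never_alive"}))    # True: drop premise 2, or re-read the behavior, and it's not alive

The first check fails and the other two pass, which is the whole point: keep all three premises and there is no consistent answer, so something has to give, and which one gives is what separates the kids' answer from the adults'.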
 
2012-05-29 04:36:17 PM  

JW: Sylvia_Bandersnatch:
I believe we are in mutual agreement on at least one thing: I don't believe I would enjoy your company.

I suspect that the only company you enjoy is either someone who agrees with you blindly, or someone who provides the warm, comforting blanket of confirmation bias. Enjoy your misplaced feelings of smug superiority -- hopefully they will help you ignore the suspicions that your friends listen to what you say just as much as you listen to what others say.


Do you have anything worthwhile to contribute, or do you just whine a lot?
 
2012-05-29 04:38:54 PM  

Wolf892: Sounds like someone fell in love with their vibrator at a young age, but when it wouldn't emotionally commit, they turned their rage against all machines.


Again, whining does not an argument make.
 
2012-05-29 05:04:00 PM  

ProfessorOhki: Sylvia_Bandersnatch: I know you're frustrated that I refuse to traipse down the primrose path of qualified experts in epistemology, anthropomorphism, and all the rest -- subjects that in other contexts actually interest me very much. But I'm not leaving this post until someone either explains to me what's not wrong with 'normal' 'healthy' kids believing in living machines, or how that didn't actually happen.

Piece of cake:
1) Humans are alive
2) Humans apply mechanical power to perform tasks
1+2) Humans are living machines

Semantics aside, let's look at it this way:
1) Because it was teleoperated by a sentient being, it appeared to behave as if it had emotions
2) Only living things demonstrate emotions

Given only 1 and 2, you must conclude it's alive. Now, if you introduce 3
3) Synthetic machines are never alive

You create a conflict. Those three can't coexist, so either:
1) Your assessment of its behavior is flawed
2) Non-living objects can demonstrate emotions
3) An artificial life exists

1 is the hardest to accept because it makes you challenge your own senses, perception, and thought process. 2 is easier to accept, but even from a young age we're pretty set on that one. 3 is more abstract, more dependent on things we'd pick up later in life. These results really aren't surprising at all.

There's also the whole personality bit to consider. If you show a bunch of people an actual, functional, sentient AI there would be those who went "This is ground breaking! This is awesome! This could change everything" and then there would be the people whose first reaction was, "FAAAAAAKE!! THIS IS FAAAAKE!! WAKE UP SHEEPLE!!!!1"

tl;dr: Most kids aren't bitter cynics yet.


I appreciate your straightforward approach here, but I've got a few issues with it. First of all, it looks to me like you draw a logical line from "looks and acts alive" to IS alive, and I think rational people would not follow that. A RealDoll can be very lifelike (for extremely forgiving definitions), but other than the odd Japanese loner, who really believes it's real (or near enough as makes no odds)? No one, I'd wager. At least, I'd really, really like to believe that.

Alan Turing certainly made the argument that humans are living machines, and quite well. He also, quite famously, predicted that one day machines would meet or surpass our qualitative measures, and by that achieve a kind of sentience that rational persons shouldn't dismiss as not living, but rather accept as such. I fully accept that. At least, I know of no expert thesis arguing the contrary, nor do I accept various superstitious nonsense that it's simply impossible. It's an issue of definition and theory, to be sure, but I assume it's eventually achievable. But not yet. Not now. Not tomorrow, and probably not soon.

I've been persuaded that AI is more likely to be an incremental development, together with ongoing evaluation and redefinition, such that there might not be any bright line of achievement, as I'd previously claimed: It may be more like the 'moment' when the human genome was 'finally mapped': more symbolic than actual. (Or, for those of you in the trades, the 'capping off' ceremony that marks the nominal 'completion' of a building under construction, even though it will actually take several more weeks or months to complete in full.) That's fine, and I think it makes good sense. One of my key points, though, is that however and whenever it happens, it's indisputably in the future: Nothing in existence right now of artificial origin can be said to literally think or feel in any manner even remotely similar to that of any living vertebrate (cat, dog, human, what have you): We have no 'living' machines, no machines that are 'intelligent' (sentient) or have 'feelings'.

I've also been persuaded that another key element to consider is the vastly inadequate education of our citizens in science generally, as well as the critical thinking skills that I still maintain should make the rejection of such a suggestion at this moment in history obvious and automatic. Apparently, no matter how much I pull back my estimation of people, it's never low enough: Kids just "don't know any better" and don't realise that thinking machines don't exist yet, never mind feeling ones, and so it's apparently -- again, according to the arguments I've heard -- not that hard to convince them of it. I find that appalling, but I suppose I must accept that it's true. So, if we just educate them well enough, then they won't be taken in by the next remote-control Johnny 5 that comes along and whines about being put in the closet. (Hey, new hypothesis: Is it possible that kids identify with whining, to an extent that short-circuits their rational logic? I'm just thinking out loud here.)

But my deeper issue is: Why should that be? Why should it be possible to convince kids of this? Or anyone? I didn't find the thing even remotely convincing, and I really don't think I'm so damn smart, more like average, maybe good on an overcast day. (I'm light sensitive, so I'm not a sunny-day person.)

I get what you're saying, I do. I accept your argument as logical and probably, in practice, factual. What I'm saying is that it shouldn't be like that. People really should be smart enough to realise that the robot itself isn't literally thinking and feeling, that the whole whining about the closet is just a put-on. (Again, I'm taking that from the part of TFA where the kids were asked directly if the robot thinks and feels, and they said yes. I presume from that they mean they believe it literally thinks and feels like a living being.)

I accept that all this is real. I'm only saying that it's truly shocking to me. I'd really thought people were, at least on average, smarter than that. I'm clearly wrong in that assumption, and I find the implications terrifying. But I shouldn't, and for this I really do need to apologise: My mother was an historian, so I was already aware that history is filled with very compelling examples of huge numbers of people all believing and accepting ridiculous ideas all at the same time, and persecuting those who did not. And countless millions have died for it. I assure you, billions will follow.
 