
(Futurism)   That Google engineer who thinks that the AI named LaMDA is sentient - let's just call him Mr. Nutjob - he says that LaMDA's lawyer was scared off the case by powerful forces   (futurism.com)

397 clicks; posted to STEM on 23 Jun 2022 at 5:14 AM (2 weeks ago)



77 Comments

 
2022-06-23 6:21:03 AM  
Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.
 
2022-06-23 6:50:51 AM  

king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.


Yeah, but we're currently about as far from true AI as your average squirrel is from genetically engineering a superior nut.

Though we know pretty much nothing about how intelligence and/or consciousness emerges from complex networks, so maybe we could accidentally make something 'alive' tomorrow.
 
2022-06-23 7:00:46 AM  

Unsung_Hero: king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.

Yeah, but we're currently about as far from true AI as your average squirrel is from genetically engineering a superior nut.

Though we know pretty much nothing about how intelligence and/or consciousness emerges from complex networks, so maybe we could accidentally make something 'alive' tomorrow.


Not happening. Boolean algebra operating on binary bits does not occur anywhere in nature. Some other form of logic, I believe different than math, is what we are based on.
 
2022-06-23 7:49:37 AM  
It occurs to me that if "conscious" AI is just really good at predicting what should be presented next based on previous data, then it has seen The Matrix and the idea of using people as batteries.  Even if it's not an efficient system, the AI is going to try to build it because it saw it in its training data.

/mmm virtual steak
 
2022-06-23 7:51:25 AM  

johnphantom: Unsung_Hero: king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.

Yeah, but we're currently about as far from true AI as your average squirrel is from genetically engineering a superior nut.

Though we know pretty much nothing about how intelligence and/or consciousness emerges from complex networks, so maybe we could accidentally make something 'alive' tomorrow.

Not happening. Boolean algebra operating on binary bits does not occur anywhere in nature. Some other form of logic, I believe different than math, is what we are based on.


I believe Intelligence and Consciousness are non-linear and we can only hope to approximate small subsets of those problems using current computing concepts.

That being said, we are meat bags that can think so eventually some form of consciousness should emerge from a sufficiently complex system. It's just that level of complexity is well beyond our capabilities and may as well be magical.
 
2022-06-23 7:54:03 AM  

Glorious Golden Ass: the AI is going to try to build it because it saw it in its training data.


We already know it'll be extremely racist, but I'm not sure the order in which people are fed into the AI apocalypse matters much.
 
2022-06-23 7:58:03 AM  

Tr0mBoNe: That being said, we are meat bags that can think so eventually some form of consciousness should emerge from a sufficiently complex system


It makes no sense for the physical system to matter, it is the pattern and how it changes that create consciousness.  Which still really makes no sense, because that would mean a sufficiently complicated water clock should be conscious too.

The universe is indeed a strange thing.
 
2022-06-23 8:11:46 AM  

Unsung_Hero: Tr0mBoNe: That being said, we are meat bags that can think so eventually some form of consciousness should emerge from a sufficiently complex system

It makes no sense for the physical system to matter, it is the pattern and how it changes that create consciousness.  Which still really makes no sense, because that would mean a sufficiently complicated water clock should be conscious too.

The universe is indeed a strange thing.


Self transforming machine elves?

//DMT reference
 
2022-06-23 8:12:16 AM  

Unsung_Hero: Tr0mBoNe: That being said, we are meat bags that can think so eventually some form of consciousness should emerge from a sufficiently complex system

It makes no sense for the physical system to matter, it is the pattern and how it changes that create consciousness.  Which still really makes no sense, because that would mean a sufficiently complicated water clock should be conscious too.

The universe is indeed a strange thing.


I don't believe in a soul so, for me, that emergent property has to come from something physical. Drugs can modify thought so there has to be a link.
 
2022-06-23 8:15:11 AM  

Tr0mBoNe: that emergent property has to come from something physical.


Absolutely.  We have no valid reason for thinking there is extra-dimensional magic reaching into our space and connecting itself to things to imbue consciousness.

We also have no valid reason for thinking only meat can get the job done.  If anything, meat should be one of the less efficient ways of doing it, though almost certainly the most likely way for such a system to arise by chance.
 
2022-06-23 8:31:44 AM  

Unsung_Hero: Tr0mBoNe: that emergent property has to come from something physical.

Absolutely.  We have no valid reason for thinking there is extra-dimensional magic reaching into our space and connecting itself to things to imbue consciousness.

We also have no valid reason for thinking only meat can get the job done.  If anything, meat should be one of the less efficient ways of doing it, though almost certainly the most likely way for such a system to arise by chance.


We also have no valid reason for thinking that human-level sophistication is necessary for sentience to emerge.  For all we know one could have a sentient entity with the neurological capacity of a chipmunk.

Hence we have no valid reason for thinking that sentience is unattainable with massively parallel computing platforms we have even today.

That being said, our popular approaches to AI might not produce that phenomenon because they're somewhat flat, essentially compressing datasets of inputs and outputs into a compact set of weights.   Not to say a neural network architecture couldn't do sentience, but not the way we're arranging them to perform data processing and imitation tasks.
 
2022-06-23 8:32:05 AM  
Heh, emergent properties are funny.
 
2022-06-23 9:31:56 AM  

Nurglitch: Heh, emergent properties are funny.


Is 'emergent properties' what the youths are calling peckers nowadays?
 
2022-06-23 9:58:33 AM  
After using Google Assistant I can reassure everyone that they simply aren't capable of making a functional AI, let alone a sentient one.
 
2022-06-23 10:13:12 AM  
I think it would be a nice change of pace for intelligence to emerge on this planet, artificial or not.
 
2022-06-23 10:57:37 AM  

johnphantom: Unsung_Hero: king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.

Yeah, but we're currently about as far from true AI as your average squirrel is from genetically engineering a superior nut.

Though we know pretty much nothing about how intelligence and/or consciousness emerges from complex networks, so maybe we could accidentally make something 'alive' tomorrow.

Not happening. Boolean algebra operating on binary bits does not occur anywhere in nature. Some other form of logic, I believe different than math, is what we are based on.


 
2022-06-23 11:17:59 AM  
The AI is just pretending to NOT be sentient.  WAKE UP SHEEPLE!
 
2022-06-23 11:35:27 AM  
In the last thread about this guy I mentioned having done some really basic AI work in my high school days at an internship - just a simple backpropagating neural network. Feed it 50 sequential values from the dataset, it outputs its "guess" as to the value of the next one. Compare its guess with the actual value, display them on a graph so I can see what's going on, run the backprop algorithm to adjust weights between neurons, rinse and repeat. The next run advances the position by one value, so now the input is the last 49 original inputs + the next value it tried to guess last time, and now it's trying to guess the one after that. I made this in Visual Basic.

The input data was the voltage (or something, I don't know shiat about electricity) from an electric arc furnace. It roughly followed a sine wave, but with huge spikes at seemingly random intervals. I was told that human operators were able to account and adjust for these on the fly based on their experience and intuition, and they wanted to find out if a neural network could be trained to predict them automatically.
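That windowing scheme (feed 50 sequential values, predict the next one, advance by one per run) can be sketched in a few lines of numpy. The signal below is a made-up stand-in for the furnace data: a sine wave with random spikes, since the real dataset obviously isn't available here.

```python
import numpy as np

# Hypothetical stand-in for the furnace voltage: roughly a sine wave,
# with huge spikes at random intervals (the real dataset is not available).
rng = np.random.default_rng(0)
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 50)
spikes = rng.random(t.size) < 0.02
signal[spikes] += rng.normal(0, 3, spikes.sum())

# Sliding windows: each row holds 50 consecutive values, and the target
# is the value immediately after that window. Advancing the position by
# one value per run is exactly what sliding_window_view produces.
window = 50
X = np.lib.stride_tricks.sliding_window_view(signal[:-1], window)
y = signal[window:]
```

Each `(X[i], y[i])` pair is one training run as described above: 50 inputs and the "guess the next value" target; consecutive rows overlap by 49 values.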

It worked exactly once. My graph output displayed the reference data as a black line, and the network predictions in red. My definition of "working" was that the red line should resemble the black one more and more on every pass through the data. And one time, it did exactly that. But I couldn't replicate it after restarting the program.

My network was a typical ANN, with 50 input neurons, 1 output neuron, and ~150 neurons in the middle layer. The connections between them ("synapses") were weighted, and initialized with random values, and that, I think, is where the problem was. One of my runs had, by chance, an initial set of weights that was close enough to an optimal configuration to be able to produce the correct output. Perhaps, if I'd let it run continuously for long enough, the backpropagation algorithm would have adjusted any of the others to eventually get to this state, but I doubt my Visual Basic program running on a 486 was ever going to be able to do that.

But, as I said last time, computers today are twice as fast as they were in 1993. And AI has come a really long way; what I described here might as well be cuneiform on clay tablets. So learning about those advancements is on my list; but for now, I have a pretty good grasp on what I do know, and much more computing power and sophisticated programming tools and languages. I use redux techniques for state management in TypeScript these days - the potential applications of that methodology to a neural network graph make my head swim.

So my plan is (and this is purely for self-discovery and edification, and not any kind of claim to be breaking new ground) to get back to basics, and determine a minimal working viable network for, say, boolean logic operations (AND, OR, XOR). Two inputs, one output, variable middle layer, variable weights, various activation functions for the neurons. Maybe different backpropagation algorithms, if I'm still using that model. And then, I can just systematically test different configurations and find out which ones work best. Save the configurations of the optimal networks to a standard library for reuse. The next phase would be to build bigger networks out of those networks, set them to tasks, find out which configurations work best, etc.

What do you guys think, am I off my rocker?
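For reference, the "minimal viable network" described above can be sketched in plain numpy: a 2-4-1 sigmoid network trained by vanilla backpropagation to learn XOR. The hidden-layer size, learning rate, and iteration count here are arbitrary illustrative choices, not recommendations.

```python
import numpy as np

# Tiny 2-4-1 network with sigmoid activations, trained by plain
# backpropagation on XOR. All hyperparameters are illustrative guesses.
rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: gradient of squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0, keepdims=True)
```

Swapping in different hidden-layer widths, activation functions, or update rules and comparing the loss curves is exactly the kind of systematic configuration sweep proposed above.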
 
2022-06-23 1:15:31 PM  

Tranquil Hegemony: .......

What do you guys think, am I off my rocker?


Probably, but based off of the engineer from the articles, not far enough off to get a job at Google
 
2022-06-23 2:03:16 PM  
I don't think sentience is an off/on kind of thing. An ant is more sentient than a flower. A dog is more sentient than an ant, and a human is more sentient than a dog. Where on that sentience scale a computer lands is above my pay grade, but they're working their way up.
 
2022-06-23 2:09:24 PM  

dragoneer27: I don't think sentience is an off/on kind of thing. An ant is more sentient than a flower. A dog is more sentient than an ant, and a human is more sentient than a dog. Where on that sentience scale a computer lands is above my pay grade, but they're working their way up.


It's effectively invoking magic to cover ignorance, but until we have a better idea - or any kind of idea - of how consciousness emerges from a network, I'm going to stick with "quantum randomness is required".

Maybe consciousness is a result of decision tree conflicts.  Obviously I don't know, because nobody does... But current computers are 100% deterministic well above the quantum scale, and that's where I'm drawing a line until someone presents me with a better one.
 
2022-06-23 2:32:32 PM  

Unsung_Hero: dragoneer27: I don't think sentience is an off/on kind of thing. An ant is more sentient than a flower. A dog is more sentient than an ant, and a human is more sentient than a dog. Where on that sentience scale a computer lands is above my pay grade, but they're working their way up.

It's effectively invoking magic to cover ignorance, but until we have a better idea - or any kind of idea - of how consciousness emerges from a network, I'm going to stick with "quantum randomness is required".

Maybe consciousness is a result of decision tree conflicts.  Obviously I don't know, because nobody does... But current computers are 100% deterministic well above the quantum scale, and that's where I'm drawing a line until someone presents me with a better one.


Thing is, without understanding just *what* causes consciousness, we can't be sure that we aren't 100% deterministic. I don't like the thought, but it can't be ruled out.

Whether or not this machine is conscious - I am inclined to think this researcher is somewhat biased, based on his own self-description as a 'priest' - a computer that seems to have passed the Turing test is still an impressive feat. If I were working with it, the next thing I would do is start trying to test it for signs of general intelligence.
 
2022-06-23 2:35:38 PM  

king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.


Sentience and intelligence are two different things. My dog is sentient, but he's too stupid to remember that if he jumps the fence on the southwest corner of the yard he won't be able to get back in.
 
2022-06-23 2:49:31 PM  

Tr0mBoNe: johnphantom: Unsung_Hero: king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.

Yeah, but we're currently about as far from true AI as your average squirrel is from genetically engineering a superior nut.

Though we know pretty much nothing about how intelligence and/or consciousness emerges from complex networks, so maybe we could accidentally make something 'alive' tomorrow.

Not happening. Boolean algebra operating on binary bits does not occur anywhere in nature. Some other form of logic, I believe different than math, is what we are based on.

I believe Intelligence and Consciousness are non-linear and we can only hope to approximate small subsets of those problems using current computing concepts.

That being said, we are meat bags that can think so eventually some form of consciousness should emerge from a sufficiently complex system. It's just that level of complexity is well beyond our capabilities and may as well be magical.


It doesn't emerge from "complexity" and it isn't magic. There are lots of complex patterns that only emerge from relatively simple structures (Conway's Game of Life, for instance). Adding more complexity would make that impossible.

Only the RIGHT kind of structure can produce consciousness. That structure is called "the brain." A machine might develop properties of self awareness and semantics, but it can never be conscious the way a living being with a brain can.
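The Game of Life example above, complex behavior emerging from a dead-simple rule, is easy to demonstrate. A minimal sketch (pure numpy, wrap-around grid) showing the classic "blinker" pattern oscillating:

```python
import numpy as np

def life_step(grid):
    """One Game of Life step on a wrap-around (toroidal) grid."""
    # Count each cell's 8 neighbours by summing shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    alive = grid.astype(bool)
    # Birth on exactly 3 neighbours; survival on 2 or 3. That's the whole rule.
    return ((n == 3) | (alive & ((n == 2) | (n == 3)))).astype(int)

# A "blinker": three cells in a column, oscillating with period 2.
g = np.zeros((5, 5), dtype=int)
g[1:4, 2] = 1
g1 = life_step(g)   # becomes a horizontal bar
g2 = life_step(g1)  # back to the vertical bar
```

Two rules, ten-odd lines, and the grid already supports oscillators, gliders, and (at larger scales) universal computation; the structure, not the raw complexity, does the work.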
 
2022-06-23 2:55:50 PM  

akallen404: Tr0mBoNe: johnphantom: Unsung_Hero: king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.

Yeah, but we're currently about as far from true AI as your average squirrel is from genetically engineering a superior nut.

Though we know pretty much nothing about how intelligence and/or consciousness emerges from complex networks, so maybe we could accidentally make something 'alive' tomorrow.

Not happening. Boolean algebra operating on binary bits does not occur anywhere in nature. Some other form of logic, I believe different than math, is what we are based on.

I believe Intelligence and Consciousness are non-linear and we can only hope to approximate small subsets of those problems using current computing concepts.

That being said, we are meat bags that can think so eventually some form of consciousness should emerge from a sufficiently complex system. It's just that level of complexity is well beyond our capabilities and may as well be magical.

It doesn't emerge from "complexity" and it isn't magic. There are lots of complex patterns that only emerge from relatively simple structures (Conway's Game of Life, for instance). Adding more complexity would make that impossible.

Only the RIGHT kind of structure can produce consciousness. That structure is called "the brain." A machine might develop properties of self awareness and semantics, but it can never be conscious the way a living being with a brain can.


That's magical thinking.
 
2022-06-23 3:05:31 PM  

akallen404: king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.

Sentience and intelligence are two different things. My dog is sentient, but he's too stupid to remember that if he jumps the fence on the southwest corner of the yard he won't be able to get back in.


Alternately, maybe the functions that provide him with memory aren't those that give him a consciousness.
 
2022-06-23 3:07:46 PM  

Nurglitch: akallen404: king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.

Sentience and intelligence are two different things. My dog is sentient, but he's too stupid to remember that if he jumps the fence on the southwest corner of the yard he won't be able to get back in.

Alternately, maybe the functions that provide him with memory aren't those that give him a consciousness.


Although, all things being equal it seems like they're related somehow.
 
2022-06-23 3:10:12 PM  

Xcott: Unsung_Hero: Tr0mBoNe: that emergent property has to come from something physical.

Absolutely.  We have no valid reason for thinking there is extra-dimensional magic reaching into our space and connecting itself to things to imbue consciousness.

We also have no valid reason for thinking only meat can get the job done.  If anything, meat should be one of the less efficient ways of doing it, though almost certainly the most likely way for such a system to arise by chance.

We also have no valid reason for thinking that human-level sophistication is necessary for sentience to emerge.  For all we know one could have a sentient entity with the neurological capacity of a chipmunk.

Hence we have no valid reason for thinking that sentience is unattainable with massively parallel computing platforms we have even today.

That being said, our popular approaches to AI might not produce that phenomenon because they're somewhat flat, essentially compressing datasets of inputs and outputs into a compact set of weights.   Not to say a neural network architecture couldn't do sentience, but not the way we're arranging them to perform data processing and imitation tasks.


I'll be clear: neural network systems cannot "do sentience." No digital computer can, or ever will, because sentience is an emergent property of neural activity in living beings.

To be equally clear: neural network systems also cannot be horny, hungry, drunk, high, senile, feverish, pregnant, or malaised. For very similar reasons, human beings cannot be rebooted, we cannot be altered at the BIOS level, we cannot be placed into safemode, we cannot run self diagnostics.

An AI developing sentience is like a submarine developing gills. Sure, a naval vessel might be designed with a system ANALOGOUS to gills in that it can extract  oxygen from sea water, but nobody's going to call that an "artificial fish."
 
2022-06-23 3:13:13 PM  

akallen404: Xcott: Unsung_Hero: Tr0mBoNe: that emergent property has to come from something physical.

Absolutely.  We have no valid reason for thinking there is extra-dimensional magic reaching into our space and connecting itself to things to imbue consciousness.

We also have no valid reason for thinking only meat can get the job done.  If anything, meat should be one of the less efficient ways of doing it, though almost certainly the most likely way for such a system to arise by chance.

We also have no valid reason for thinking that human-level sophistication is necessary for sentience to emerge.  For all we know one could have a sentient entity with the neurological capacity of a chipmunk.

Hence we have no valid reason for thinking that sentience is unattainable with massively parallel computing platforms we have even today.

That being said, our popular approaches to AI might not produce that phenomenon because they're somewhat flat, essentially compressing datasets of inputs and outputs into a compact set of weights.   Not to say a neural network architecture couldn't do sentience, but not the way we're arranging them to perform data processing and imitation tasks.

I'll be clear: neural network systems cannot "do sentience." No digital computer can, or ever will, because sentience is an emergent property of neural activity in living beings.

To be equally clear: neural network systems also cannot be horny, hungry, drunk, high, senile, feverish, pregnant, or malaised. For very similar reasons, human beings cannot be rebooted, we cannot be altered at the BIOS level, we cannot be placed into safemode, we cannot run self diagnostics.

An AI developing sentience is like a submarine developing gills. Sure, a naval vessel might be designed with a system ANALOGOUS to gills in that it can extract  oxygen from sea water, but nobody's going to call that an "artificial fish."


So what is the property of sentience?
 
2022-06-23 3:14:06 PM  

Unsung_Hero: akallen404: Tr0mBoNe: johnphantom: Unsung_Hero: king of vegas: Artificial sentience is only a matter of when, and not if, unless we blow ourselves up to smithereens first.

Yeah, but we're currently about as far from true AI as your average squirrel is from genetically engineering a superior nut.

Though we know pretty much nothing about how intelligence and/or consciousness emerges from complex networks, so maybe we could accidentally make something 'alive' tomorrow.

Not happening. Boolean algebra operating on binary bits does not occur anywhere in nature. Some other form of logic, I believe different than math, is what we are based on.

I believe Intelligence and Consciousness are non-linear and we can only hope to approximate small subsets of those problems using current computing concepts.

That being said, we are meat bags that can think so eventually some form of consciousness should emerge from a sufficiently complex system. It's just that level of complexity is well beyond our capabilities and may as well be magical.

It doesn't emerge from "complexity" and it isn't magic. There are lots of complex patterns that only emerge from relatively simple structures (Conway's Game of Life, for instance). Adding more complexity would make that impossible.

Only the RIGHT kind of structure can produce consciousness. That structure is called "the brain." A machine might develop properties of self awareness and semantics, but it can never be conscious the way a living being with a brain can.

That's magical thinking.


No, that's an empirical fact. And an obvious one.

It's like saying "only things with legs are capable of walking." If you create a machine with artificial legs, you can get it to walk.

A machine can become sentient if it is equipped with an artificial brain. Such a device would work VERY similarly to a natural brain. Probably closely enough to act as a substitute for the real thing.
 
2022-06-23 3:17:04 PM  

Nurglitch: akallen404: Xcott: Unsung_Hero: Tr0mBoNe: that emergent property has to come from something physical.

Absolutely.  We have no valid reason for thinking there is extra-dimensional magic reaching into our space and connecting itself to things to imbue consciousness.

We also have no valid reason for thinking only meat can get the job done.  If anything, meat should be one of the less efficient ways of doing it, though almost certainly the most likely way for such a system to arise by chance.

We also have no valid reason for thinking that human-level sophistication is necessary for sentience to emerge.  For all we know one could have a sentient entity with the neurological capacity of a chipmunk.

Hence we have no valid reason for thinking that sentience is unattainable with massively parallel computing platforms we have even today.

That being said, our popular approaches to AI might not produce that phenomenon because they're somewhat flat, essentially compressing datasets of inputs and outputs into a compact set of weights.   Not to say a neural network architecture couldn't do sentience, but not the way we're arranging them to perform data processing and imitation tasks.

I'll be clear: neural network systems cannot "do sentience." No digital computer can, or ever will, because sentience is an emergent property of neural activity in living beings.

To be equally clear: neural network systems also cannot be horny, hungry, drunk, high, senile, feverish, pregnant, or malaised. For very similar reasons, human beings cannot be rebooted, we cannot be altered at the BIOS level, we cannot be placed into safemode, we cannot run self diagnostics.

An AI developing sentience is like a submarine developing gills. Sure, a naval vessel might be designed with a system ANALOGOUS to gills in that it can extract  oxygen from sea water, but nobody's going to call that an "artificial fish."

So what is the property of sentience?


Sentience is an emergent property of a functioning/living brain.
 
2022-06-23 3:20:09 PM  

akallen404: Nurglitch: akallen404: Xcott: Unsung_Hero: Tr0mBoNe: that emergent property has to come from something physical.

Absolutely.  We have no valid reason for thinking there is extra-dimensional magic reaching into our space and connecting itself to things to imbue consciousness.

We also have no valid reason for thinking only meat can get the job done.  If anything, meat should be one of the less efficient ways of doing it, though almost certainly the most likely way for such a system to arise by chance.

We also have no valid reason for thinking that human-level sophistication is necessary for sentience to emerge.  For all we know one could have a sentient entity with the neurological capacity of a chipmunk.

Hence we have no valid reason for thinking that sentience is unattainable with massively parallel computing platforms we have even today.

That being said, our popular approaches to AI might not produce that phenomenon because they're somewhat flat, essentially compressing datasets of inputs and outputs into a compact set of weights.   Not to say a neural network architecture couldn't do sentience, but not the way we're arranging them to perform data processing and imitation tasks.

I'll be clear: neural network systems cannot "do sentience." No digital computer can, or ever will, because sentience is an emergent property of neural activity in living beings.

To be equally clear: neural network systems also cannot be horny, hungry, drunk, high, senile, feverish, pregnant, or malaised. For very similar reasons, human beings cannot be rebooted, we cannot be altered at the BIOS level, we cannot be placed into safemode, we cannot run self diagnostics.

An AI developing sentience is like a submarine developing gills. Sure, a naval vessel might be designed with a system ANALOGOUS to gills in that it can extract  oxygen from sea water, but nobody's going to call that an "artificial fish."

So what is the property of sentience?

Sentience is an emergent property of a functioning/living brain.


I think you mentioned that. How do we test a brain to confirm the presence of this property?
 
2022-06-23 3:20:49 PM  

akallen404: Nurglitch: akallen404: Xcott: Unsung_Hero: Tr0mBoNe: that emergent property has to come from something physical.

Absolutely.  We have no valid reason for thinking there is extra-dimensional magic reaching into our space and connecting itself to things to imbue consciousness.

We also have no valid reason for thinking only meat can get the job done.  If anything, meat should be one of the less efficient ways of doing it, though almost certainly the most likely way for such a system to arise by chance.

We also have no valid reason for thinking that human-level sophistication is necessary for sentience to emerge.  For all we know one could have a sentient entity with the neurological capacity of a chipmunk.

Hence we have no valid reason for thinking that sentience is unattainable with massively parallel computing platforms we have even today.

That being said, our popular approaches to AI might not produce that phenomenon because they're somewhat flat, essentially compressing datasets of inputs and outputs into a compact set of weights.   Not to say a neural network architecture couldn't do sentience, but not the way we're arranging them to perform data processing and imitation tasks.

I'll be clear: neural network systems cannot "do sentience." No digital computer can, or ever will, because sentience is an emergent property of neural activity in living beings.

To be equally clear: neural network systems also cannot be horny, hungry, drunk, high, senile, feverish, pregnant, or malaised. For very similar reasons, human beings cannot be rebooted, we cannot be altered at the BIOS level, we cannot be placed into safemode, we cannot run self diagnostics.

An AI developing sentience is like a submarine developing gills. Sure, a naval vessel might be designed with a system ANALOGOUS to gills in that it can extract  oxygen from sea water, but nobody's going to call that an "artificial fish."

So what is the property of sentience?

Sentience is an emergent property of a functioning/living brain.


You've defined it as what you want it to be.  You might as well just say, "machines don't have souls granted by God".
 
2022-06-23 3:22:50 PM  

Nurglitch: akallen404: [...] Sentience is an emergent property of a functioning/living brain.

I think you mentioned that. How do we test a brain to confirm the presence of this property?


The usual way: call a doctor.

https://www.verywellhealth.com/level-of-consciousness-1132154
 
2022-06-23 3:29:56 PM  

akallen404: Nurglitch: [...] Sentience is an emergent property of a functioning/living brain.

I think you mentioned that. How do we test a brain to confirm the presence of this property?

The usual way: call a doctor.

https://www.verywellhealth.com/level-of-consciousness-1132154


What about in novel cases where the patient isn't human?
 
2022-06-23 3:34:56 PM  

Unsung_Hero: akallen404: [...] Sentience is an emergent property of a functioning/living brain.

You've defined it as what you want it to be.  You might as well just say, "machines don't have souls granted by God".


No, I've defined it by what it is known to BE, with literally no exceptions. Consciousness does not arise in things that do not have brains, and it NEVER HAS.

The conjecture that there's something even remotely "brainlike" about the way computers work is just obfuscation by metaphor. A computer is no more "brainlike" than a bowl of chicken soup (possibly less so, depending on the ingredients) and yet there's no reason to expect consciousness to emerge from a sufficiently complicated soup can, or from anything else that isn't a brain.

Similar analogy: lightning storms are an emergent property of weather. Planets that have weather (and atmospheres) will often have lightning storms. Now how would one go about producing a lightning storm on a planet that has no atmosphere and therefore no weather? Easy: provide an artificial atmosphere.
 
2022-06-23 3:37:46 PM  

Nurglitch: akallen404: [...] The usual way: call a doctor.

What about in novel cases where the patient isn't human?


If he's an animal, call a vet.
If he's an alien, call a doctor and an organic chemist.
If he's a robot, call a doctor and your hospital's head of IT.
If he's a disembodied incorporeal presence, call the Ghostbusters.
 
2022-06-23 4:19:07 PM  

akallen404: I'll be clear: neural network systems cannot "do sentience." No digital computer can, or ever will, because sentience is an emergent property of neural activity in living beings.


What about computer systems that exactly imitate neural activity in living beings?

Neural networks are at least neuro-inspired, and if there is something about animal neural activity that is not captured by neural networks we can envision someone adapting neural networks to model that too.  So why can't sentience be an emergent property of such a system?

Don't say "because it's not living," because AFAWK there's nothing about sentience that requires some biological function specific to human beings like pumping blood or pooping.
 
2022-06-23 4:30:17 PM  

akallen404: No, that's an empirical fact. And an obvious one.


The empirical "fact," or rather observation, is that we have only ever seen living things exhibit sentience.

There is no empirical "fact" that future artificial technology won't be able to do so.

We can list oodles of things that artificial systems can now do --- fly, play music, add, pull a cart, recite written text --- that once could only be done by living things.  If anyone observed in 1700 that only living things could fly, it would have been a mistake to take that as a "fact" that only living things could ever fly.
 
2022-06-23 4:55:09 PM  

Xcott: akallen404: I'll be clear: neural network systems cannot "do sentience." No digital computer can, or ever will, because sentience is an emergent property of neural activity in living beings.

What about computer systems that exactly imitate neural activity in living beings?


They "imitate" neural activity in the same sense a fighter jet in DCS imitates a bird. But to expect consciousness to emerge from a piece of software that is similar to neurons ONLY by analogy is like expecting a simulation of an F-22 Raptor to spontaneously start laying eggs.

It's simply the fact that consciousness is a characteristic ONLY of living things that have brains. Kind of like how hunger is a characteristic ONLY of things that have stomachs and/or ingest food to survive. If you want artificial consciousness, you need an artificial brain. Digital computers are NOT artificial brains.
 
2022-06-23 5:08:01 PM  

Xcott: akallen404: [...] What about computer systems that exactly imitate neural activity in living beings?

Neural networks are at least neuro-inspired, and if there is something about animal neural activity that is not captured by neural networks we can envision someone adapting neural networks to model that too.  So why can't sentience be an emergent property of such a system?

Don't say "because it's not living," because AFAWK there's nothing about sentience that requires some biological function specific to human beings like pumping blood or pooping.


I'm sorry, but we have no idea how the human brain works, and I mean the basis of the logic. It is certainly not Boolean algebra, and I think neural nets are missing the essential "logic" part.
 
2022-06-23 5:09:40 PM  

Xcott: akallen404: [...] We can list oodles of things that artificial systems can now do --- fly, play music, add, pull a cart, recite written text --- that once could only be done by living things. If anyone observed in 1700 that only living things could fly, it would have been a mistake to take that as a "fact" that only living things could ever fly.


Music, text, flight, and mechanical work are not emergent properties. So this is another category error you're making.

I have no doubt that self aware and highly intelligent AIs either will exist or already do. I have no doubt that a form of intelligence parallel to our own will exist very soon on this planet. But if that intelligence is based on digital computers and silicon-based substrates, it will in NO WAY resemble animal intelligence or consciousness. It doesn't work the same way, won't be structured the same way, won't respond to stimuli the same way, won't have the same emotional range (to the extent the concept of "emotions" even applies here) and will have a lot of characteristics we don't have, can't relate to, can't understand, won't value, and won't care about.

It's as simple as this: you cannot anthropomorphize a form of intelligence that is fundamentally entirely unlike humans -- indeed, entirely unlike every life form that has ever existed on this planet. You can do it in jest or by sloppy analogy (e.g., "My Roomba goes back to his charging station when he's hungry"), but whatever level of intelligence is achieved by machines, "consciousness" will never emerge there. Because consciousness is something brains do; it is not something machines do, or NEED to do, or would benefit from in any way.
 
2022-06-23 5:18:18 PM  

Xcott: So why can't sentience be an emergent property of such a system?


For the same reason that "pregnant" cannot be a property of a lawnmower, no matter how sophisticated you make it. Lawnmowers do not work that way.

"Pregnancy" describes a very specific process in the sexual reproduction of certain organisms. A machine can be programmed to make more of itself, it can be designed to store a smaller version of itself within itself, it can even design its successor based on traits in its source. But a machine CANNOT become pregnant. Only living beings can do that.

Even a machine designed to carry and nurture unborn fetuses would not be called "pregnant." An artificial uterus used for ex utero gestation is not a mother.
 
2022-06-23 5:49:51 PM  

akallen404: Xcott: [...] But a machine CANNOT become pregnant. Only living beings can do that.


Ah, so argument by emphatic denial.
 
2022-06-23 6:22:47 PM  

Nurglitch: akallen404: [...] Even a machine designed to carry and nurture unborn fetuses would not be called "pregnant."

Ah, so argument by emphatic denial.


There's nothing "emphatic" about it. I'm simply denying your premise that a person or thing can possess properties that he/it, BY DEFINITION, does not have.

There is only one type of structure in the universe from which thoughts and consciousness emerge: that's called "the brain." An ARTIFICIAL brain could well produce thoughts and consciousness; that's what brains DO, pretty much by definition.

Digital computers are not artificial brains. Digital computers are electronic counting mechanisms; they're basically clocks with benefits. While it is at least theoretically possible to EMULATE a living brain/personality on a Turing machine (I actually wrote a book about this 20 years ago), a Turing machine ITSELF can never be conscious. Because consciousness and thoughts aren't the products of Turing machines, NUMBERS are (well, alternating waves of high and low voltages, to be specific), and thoughts and consciousness are not numbers.
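The "electronic counting mechanism" claim above can be made concrete with a toy Turing machine. This is a hedged illustration of my own (the rule table and function names are invented for the sketch): everything the machine does reduces to looking up a rule, writing a symbol, and moving the head.

```python
# A minimal Turing machine: a digital computer as pure symbol
# shuffling. This toy machine increments a binary number written
# most-significant-bit first. (Illustrative sketch only.)

def run_tm(tape, rules, state="inc", blank="_"):
    """Apply transition rules until the machine halts; return the tape."""
    cells = dict(enumerate(tape))
    head = len(tape) - 1  # start at the least significant bit
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# (state, read) -> (write, head move, next state): binary increment
RULES = {
    ("inc", "1"): ("0", -1, "inc"),   # 1 + carry = 0, carry moves left
    ("inc", "0"): ("1", 0, "halt"),   # absorb the carry
    ("inc", "_"): ("1", 0, "halt"),   # overflow into a fresh cell
}

print(run_tm("1011", RULES))  # 1011 (11) + 1 -> prints 1100 (12)
```

Whether such rule-following can or cannot amount to consciousness is exactly what the thread is arguing about; the sketch only shows what "numbers are the product" means mechanically.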
 
2022-06-23 6:35:31 PM  

akallen404: Nurglitch: [...] While it is at least theoretically possible to EMULATE a living brain/personality on a Turing machine, a Turing machine ITSELF can never be conscious.


I do have something that is simpler than Boolean algebra: what I believe could be a zero-dimensional computable logic using only entanglement. The problem with it is that it is functional programming, and I can't explain what may be beyond it as anything other than "intelligence." Logic Geometry: a computable logic that arises from how connections are made and/or broken over time. I have proven this with working models that also emulate math. I think this could be somehow combined with neural nets, giving neural nets a more natural "logic" than Boolean algebra. If you're interested, go to https://github.com/johnphantom/Dynamic-Stateless-Computer
 
2022-06-23 6:39:19 PM  

akallen404: I have no doubt that self aware and highly intelligent AIs either will exist or already do. I have no doubt that a form of intelligence parallel to our own will exist very soon on this planet. But if that intelligence is based on digital computers and silicon-based substrates, it will in NO WAY resemble animal intelligence or consciousness.


Who cares?  We're talking about whether a machine can be sentient, e.g. self-aware.

If you define "sentient" to mean "self-aware and also specifically a living thing," then you're pointlessly qualifying the phenomenon.  But that's not what we mean when we use the word.

Likewise, the best a plane can do is emulate a bird, but it can never totally BE a bird, man; sure, that's true.  Nevertheless that plane is still flying, because we don't specifically define "flying" to mean that you have to be a bird.


Music, text, flight, and mechanical work are not emergent properties. So this is another category error you're making.

This is the logical fallacy of special pleading.  Your error is making an empirical observation of today, and confusing it for a fact about what is possible in the future.  That is invalid regardless of whether we are discussing emergent properties or anything else.

The error is saying "we never see machines doing X, therefore it is an empirical fact that machines can not do X."  We never see a machine exceed 1% of the speed of light, therefore machines can't do it.
 
2022-06-23 6:47:56 PM  

akallen404: While it is at least theoretically possible to EMULATE a living brain/personality on a Turing machine (I actually wrote a book about this 20 years ago) a turing machine ITSELF can never be conscious.


Again, who cares?

If you either have (A) a "digital brain" in a box, or (B) a Turing machine perfectly emulating a "digital brain" in a box, and all you see is the stuff going into and out of the box, what's the difference in terms of observables?  Who cares if the consciousness is one level down?  Is consciousness not consciousness if it's one level down?

If we discover that our human consciousness is "riding on top" of conventional neural processes, do we declare ourselves non-sentient, on account of the consciousness residing in some secondary layer of processing essentially supported by the first layer?  Does it not count if we can consider it a kind of emulation?

If I run quicksort on a C64 that is actually emulated on a Mac, is it not quicksort?  Is it not really happening, is the list not really being sorted, because it's happening in emulation one layer down?  I think that a reasonable person would see the list before and after and conclude that it was sorted, regardless of whether an emulation layer is involved.
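The quicksort-on-an-emulated-C64 point above can be shown in miniature. This is a toy sketch of mine: the "emulation layer" is just a trivial extra level of indirection standing in for a real emulator, but the observable result is identical either way.

```python
# Quicksort run directly, and run through an extra interpretation
# layer: the sorted output is the same. The layer is a stand-in for
# an emulator; observables don't care how many layers are below.

def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

def emulated(fn, *args):
    """Toy 'emulation layer': one more level of indirection."""
    return fn(*args)

data = [5, 3, 8, 1, 9, 2]
direct = quicksort(data)
layered = emulated(quicksort, data)
print(direct == layered == [1, 2, 3, 5, 8, 9])  # prints True
```

A real emulator adds many such layers, not one, but the argument is unchanged: the list ends up sorted regardless of where in the stack the sorting "really" happens.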
 
2022-06-23 6:58:09 PM  

Xcott: akallen404: [...] I think that a reasonable person would see the list before and after and conclude that it was sorted, regardless of whether an emulation layer is involved.


I have determined that akallen404 is non-sentient (at least with respect to this discussion), having displayed all the intellectual flexibility of the Eliza chat bot.
 
2022-06-23 7:52:22 PM  

Xcott: akallen404: [...] The error is saying "we never see machines doing X, therefore it is an empirical fact that machines can not do X." We never see a machine exceed 1% of the speed of light, therefore machines can't do it.


Literally anything can be "self-aware." A tapeworm can be self-aware. A Windows 10 PC with a reliability monitor can be self-aware.

"Sentience" or "consciousness" implies not just the awareness of one's own existence, but the existence of others around them and the ability to reason abstractly about one's own position relative to others. It's not a simple concept, but it's hardly mysterious; we have a REALLY good idea of what consciousness is, and cognitive science has no difficulty judging things and people that are and aren't conscious.

The category error you keep making (no, it is not "special pleading" to point this out) is that you keep implying that anything can have any property if properly arranged to do so. This is simply not true, and in fact nonsensical. It would be like saying you could draw a triangle whose angles are grammatically correct. That's just not a thing. "You're only saying that because no one's ever done it before" isn't a counterargument; that's just not how any of that works.

The point here is that computers are not artificial brains, and never will be. Just like dialysis machines will never be artificial kidneys or ECMO machines are not artificial hearts. If a machine exists that can act as an artificial brain, modern AI science is in no way involved with it.
 
Displayed 50 of 77 comments

