
(MarketWatch)   How AI is keeping prices low in accordance to Skynet's prophesy   (marketwatch.com)
    More: Followup, Inflation, Monetary policy, Sal Guatieri, Robot, High inflation, use of new automation, Federal Reserve System, Robotics  

321 clicks; posted to Business on 13 Mar 2019 at 11:05 AM (10 weeks ago)



5 Comments
 
 
2019-03-13 11:40:00 AM  
'Prophesy' is a verb, subs.  The word you want is 'prophecy'.

/i know, i know, fighting a losing battle
 
2019-03-13 03:40:07 PM  
The article conflates AI with automation.

Automation certainly has been a force to be reckoned with. AI is still only slightly further along than fusion as far as appearing "Any Day Now": twenty years away at the most, and it has been for forty years.

However, unlike fusion, researchers in AI still can't answer basic questions about the science. With fusion, we know how it works; the problem is replicating the process outside of a thermonuclear explosion or the interior of a star. If you ask ten Ph.D.s in the field what the nature of intelligence is, you will get somewhere between zero and twenty answers.

A real expert in the field will answer "We still don't know." A... less than real... expert (who is on the lecture/book circuit) will rattle off a series of half-baked (and mutually exclusive) definitions.

To this day, demonstrations of AI are on par with Sasquatch sightings as far as reproducibility.

At least the fusion people have a working list of problems they are trying to solve, and an understanding of where modern theory, materials, and technology are failing them.

And if anyone is telling you AI is somehow simple or solved, just ask them to get to the point of what they are selling. It's probably snake oil.
 
2019-03-14 10:10:30 AM  

Evil Twin Skippy: The article conflates AI with automation. [rest of the comment, quoted in full above, snipped]


Max Tegmark has suggested (somewhat poetically and unintelligibly) that "consciousness" or self-awareness is "how data feels when it looks at itself."

When you think about yourself thinking... what exactly are you thinking about? And who/what is "you"? What about the following (in very rough language)  as a working hypothesis?

Your physical brain (the hardware) stores the sum (and possibly the synergy) of all the data it has previously received. This dataset is not static; various parts of it are constantly being updated by new data. If religiously based concepts (such as a "divine soul") are excluded, this dataset (and the conceptual framework built up from it) is really all that the non-physical mind (the software) has to work with. The mind can therefore only be the continual processing and churning of that data... by a just-an-instant-previous "build" of that same dataset. So perhaps "you" is simply a microsecond- or picosecond-prior version of that dynamic dataset. And that is why you can never "see" or comprehend the "you" you seek: "you" have already moved on to the next iteration.
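
A toy sketch of that hypothesis in Python (every name here is purely illustrative, not a real model of cognition): each "build" of the dataset is produced from the previous one, so any act of introspection can only ever examine the prior build, never the current one.

# Toy model of the "mind as an iterating dataset" idea above.
# All names are hypothetical illustrations.
def update(state, new_input):
    """Produce the next 'build' of the dataset from the previous one."""
    return state + [new_input]

state = []  # the accumulated dataset (the "you")
for t, stimulus in enumerate(["light", "sound", "memory"]):
    previous_build = state           # all introspection can ever look at
    state = update(state, stimulus)  # "you" have already moved on
    print(f"t={t}: examining {previous_build}, but the current build is {state}")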

If the above is a real representation of what actually is going on in the human mind, then true AI will not be achieved until we can replicate that process. The "hardware" concept of a "neural gel" that dynamically grows new connections based on processing needs will be a critical part of this. I think software that somehow allows infinite looping of data without crashing the system is another part.
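
And a toy sketch of the "grows new connections based on processing needs" part (again entirely hypothetical, just to make the growth idea concrete): connections between co-active nodes are created or strengthened on demand, Hebbian-style.

# Toy "neural gel": a graph that grows links between nodes that are
# active together. Purely illustrative; not real hardware.
from collections import defaultdict
from itertools import combinations

connections = defaultdict(int)

def process(active_nodes):
    # Create or strengthen a link between every pair of co-active nodes.
    for a, b in combinations(sorted(active_nodes), 2):
        connections[(a, b)] += 1

process({"sight", "sound"})
process({"sight", "sound", "memory"})
print(dict(connections))
# {('sight', 'sound'): 2, ('memory', 'sight'): 1, ('memory', 'sound'): 1}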

Thoughts?
 
2019-03-14 10:20:13 AM  

Harlee: Max Tegmark has suggested (somewhat poetically and unintelligibly) that "consciousness" or self-awareness is "how data feels when it looks at itself." [rest of the comment, quoted in full above, snipped]


That you are simultaneously overthinking and underthinking the problem.

To start with: no, we are not the sum of all of our inputs. Why? Because one of the most important elements of natural intelligence seems to be selective attention. What makes intelligent things intelligent is that they seemingly know what to pay attention to and what to ignore. Beings that reflect a deficiency in that ability (or maladaptive practices resulting from poor training) are considered handicapped.
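
As a rough illustration of that point (a toy in Python, with hand-picked numbers; not a claim about how brains actually weight stimuli): score each input for relevance, normalize the scores, and most of the attention budget goes to one signal rather than all inputs summing in equally.

import math

def softmax(scores):
    """Turn raw relevance scores into attention weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical stimuli with hand-assigned relevance scores.
stimuli = {"predator rustling": 3.0, "wind noise": 0.5, "own heartbeat": 0.1}
weights = softmax(list(stimuli.values()))

for (name, _), w in zip(stimuli.items(), weights):
    print(f"{name}: {w:.2f}")  # ~0.88 / 0.07 / 0.05: one signal dominates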

So for all that complexity you described, add a layer on top.

Basically AI is going to come about by accident. And it's going to be as flaky as the people programming it, but because it has the ability to train itself, it will be found useful.

Neural networks in their present form are not that invention, because they require constant reinforcement from people (or build automation) to train. They "learn" how to produce an already-decided-upon goal. Intelligence, real intelligence, is what picks those goals.
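
A minimal sketch of that point (illustration only, not any particular framework): the goal is a fixed target chosen in advance by a person, and training only ever nudges the model toward it; nothing in the loop picks the goal.

# Toy gradient descent toward a human-chosen target.
target = 42.0         # decided upon in advance by a person
weight = 0.0          # a one-parameter "network" whose output is its weight
learning_rate = 0.1

for step in range(100):
    prediction = weight
    error = prediction - target
    weight -= learning_rate * error  # reinforcement toward the fixed goal

print(f"learned output: {weight:.2f}")  # ~42.00, the goal it was handed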

Or, at least that's my opinion.
 
2019-03-14 11:19:31 AM  

Evil Twin Skippy: That you are simultaneously overthinking and underthinking the problem. [rest of the reply, quoted in full above, snipped]


Those are all excellent points (especially the one about selective attention), and I am going to steal them and incorporate them into the concept, because I really think that Tegmark is onto something with his core idea. Perhaps the power of selective attention is what allows us to escape those infinite loops of recursive data-seeing.
 
This thread is closed to new comments.