
(TechRadar)   GitHub is your copilot? Good luck: according to security research, nearly 40% of Copilot's code suggestions are erroneous and introduce vulnerabilities   (techradar.com)
    More: Obvious, Artificial intelligence, Computer security, Information security, Computer, Security, DNA, Software testing, Academic researchers  

344 clicks; posted to STEM » on 30 Aug 2021 at 8:54 AM (13 weeks ago)



20 Comments
 
2021-08-30 6:06:36 AM  
Isn't that what happens when you just copy and paste code that someone else has written?
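For anyone wondering what "erroneous and introduce vulnerabilities" means in practice, here is a minimal, hypothetical Python sketch (not from the study, and not actual Copilot output) of the classic copy-paste mistake this kind of research keeps finding: SQL built by string interpolation, next to the parameterized form that avoids the injection.

    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable pattern: attacker-controlled input is spliced straight into the query.
        query = f"SELECT id, name FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Safer pattern: let the driver bind the parameter.
        return conn.execute("SELECT id, name FROM users WHERE name = ?", (username,)).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    print(find_user_safe(conn, "alice"))  # [(1, 'alice')]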
 
2021-08-30 6:35:07 AM  
Most of it is stolen anyway, so what did you expect?  I'll gladly drop a wrench into that monstrosity.
 
2021-08-30 7:29:06 AM  
It's called Stack Overflow for a reason.
 
2021-08-30 8:56:18 AM  
AI:

[image]
 
2021-08-30 9:33:29 AM  
This is what happens when you set a machine learning system on a problem and have the wrong inputs. This thing was forced to come up with code without knowing what the code is for, since the machine learning system has no concept of meaning or knowing.

Code itself is an output artifact, created after we have processed the requirements and generated the intent and then transferred that into executable code. It is brittle and changes over time and may or may not convey enough meaning for you to understand what it does or why, even with our massive brains reverse engineering that intent.

We could have a machine learning system generating code if we bothered to write down our intent and the stages of the process that happened in our heads and on whiteboards before we got to the coding part. This would also let us stop writing so much code (which would be a benefit on its own regardless of machine learning use.)

So instead of a machine learning system churning through gobs of brittle and ephemeral code, we need a library of projects with the intent and requirements and the transformation between those encoded, and feed that into the ML model. It would be a sort of shortcut to get you something close to a human's ability to translate problems into executable solutions, but skipping the part where we train a human to understand problems. The machine may not need to understand, but it at least needs some encoding of how we understand.
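As a rough illustration of what "writing down the intent" could look like in machine-readable form, here is a small Python sketch. The record types and field names are invented for this comment; they do not describe any existing tool or standard.

    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        id: str
        statement: str         # what the system must do, in plain language
        rationale: str         # why the requirement exists
        acceptance: list[str]  # criteria a reviewer (or a machine) can check

    @dataclass
    class DesignDecision:
        requirement_ids: list[str]  # which requirements this decision serves
        decision: str               # what was chosen
        alternatives: list[str]     # what was considered and rejected

    @dataclass
    class ProjectRecord:
        requirements: list[Requirement] = field(default_factory=list)
        decisions: list[DesignDecision] = field(default_factory=list)
        artifacts: dict[str, str] = field(default_factory=dict)  # requirement id -> code path

    record = ProjectRecord(requirements=[Requirement(
        id="REQ-1",
        statement="Reject passwords shorter than 12 characters",
        rationale="Baseline resistance to online guessing",
        acceptance=["an 11-character password is rejected"],
    )])
    print(record.requirements[0].id)  # REQ-1

A corpus of records like this, paired with the code that eventually satisfied them, is the sort of training material the comment above is asking for.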
 
2021-08-30 9:37:19 AM  
Training a coding (or any) AI with unfiltered human output is as mindless as handing your young human child's education over to an Internet connected tablet with a hearty "Good luck, little Johnny!"
 
2021-08-30 9:43:49 AM  

jacksonic: This is what happens when you set a machine learning system on a problem and have the wrong inputs. This thing was forced to come up with code without knowing what the code is for, since the machine learning system has no concept of meaning or knowing.


This kind of AI is a pattern processor.  A bunch of patterns go in, yielding mimic patterns out.  The "quality" of the new patterns might vary with that of the input patterns, but there is still no Intelligence behind it... just Artifice.

[YouTube video: Patterns (KlckKwA85Qw)]
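To make the "patterns in, mimic patterns out" point concrete, here is a deliberately crude Python toy: a character-level bigram model that reproduces the statistics of its training text without any notion of what the text means. It illustrates the idea only; it is not a claim about how Copilot is actually built.

    import random
    from collections import defaultdict

    def train(text):
        follows = defaultdict(list)
        for a, b in zip(text, text[1:]):
            follows[a].append(b)   # remember which characters follow which
        return follows

    def mimic(follows, seed, length=40):
        out = [seed]
        for _ in range(length):
            choices = follows.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))  # pick a plausible-looking next character
        return "".join(out)

    corpus = "for i in range(10): print(i)\n" * 5
    print(mimic(train(corpus), "f"))  # something code-shaped, with zero understanding behind it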
 
2021-08-30 9:49:25 AM  
[Fark user image]
 
2021-08-30 10:44:00 AM  

SansNeural: jacksonic: This is what happens when you set a machine learning system on a problem and have the wrong inputs. This thing was forced to come up with code without knowing what the code is for, since the machine learning system has no concept of meaning or knowing.

This kind of AI is a pattern processor.  A bunch of patterns go in, yielding mimic patterns out.  The "quality" of the new patterns might vary with that of the input patterns, but there is still no Intelligence behind it... just Artifice.

[YouTube video: Patterns]


But we can get machine learning to help us if we move up a few layers of abstraction and start feeding it design decisions rather than code artifacts. That would also help automate Stack Overflow, come to think of it...
 
2021-08-30 12:30:15 PM  

jacksonic: This is what happens when you set a machine learning system on a problem ...


And... scene.

Computers are not a substitute for expertise. Every decently working Expert System was the work of, get this, subject matter EXPERTS. Human beings. Who understood the problem they were working on. Assisted by other human beings who were gifted in translating that real world knowledge into something that computers could execute. (And, more importantly, have the computer throw its hands up when your inputs stray outside the bounds of what the Subject Matter Experts determined their answers worked inside of.)

Fly-By-Wire systems? Work great. Self-Driving cars? Accident waiting to happen.

And you look at both and find that fly-by-wire systems are operating on transonic aircraft. Some fighter planes cannot be flown safely without it. And all of the fly-by-wire systems in common use are running on hardware that was commercially available in the 1970s. (Or modern replacements that were custom designed to mimic the 1970s era computers.)

Self-Driving cars on the other hand are black holes of processing power. They can't pack enough GPUs and neural nets onto the things. And they are operating perfectly mundane automobiles. Something so seemingly simple that you don't need a complex process for a human to get a license to operate. (As compared to, say, a pilot's license.)

What gives? Basically aircraft only operate inside of a special envelope where all of the laws of physics have to be controlled. Those physical laws, while complex, can be expressed as mathematical functions. Every sensor, every control surface, every engine is designed to provide the same response to the same inputs. Pilots train for years to know the safe margins in which to operate their craft. Pilots train to be predictable to one another, and to air traffic control.

Cars on the other hand are driven by the whimsy of their operator. The closest a driver can get to the physical envelope of their vehicle's performance generally involves breaking a whole lotta laws, being criminally negligent on maintenance, or driving in some hellacious weather conditions. There are also millions more cars on the road than planes in the sky.

There are no "rules of the road." There are completely contradictory "schools of thought." And all of those schools have a giant grudge match every rush hour. And even if you could get every driver to agree on one set of predictable behaviors, the realities of road construction and roadside emergencies throw in plenty of unpredictability. And let's not forget about weather!

AI cannot "solve" safe vehicle driving because there isn't a subject matter expert in the world (or a team of them) that could write out all of the rules that currently govern safe driving. It's an improvised mess, and has been since the first car took to the first road/dirt path.
 
2021-08-30 12:39:54 PM  

Evil Twin Skippy: jacksonic: This is what happens when you set a machine learning system on a problem ...

And... scene.

Computers are not a substitute for expertise. Every decently working Expert System was the work of, get this, subject matter EXPERTS. Human beings. Who understood the problem they were working on. Assisted by other human beings who were gifted in translating that real world knowledge into something that computers could execute. (And, more importantly, have the computer throw its hands up when your inputs stray outside the bounds of what the Subject Matter Experts determined their answers worked inside of.)

Fly-By-Wire systems? Work great. Self-Driving cars? Accident waiting to happen.

And you look at both and find that fly-by-wire systems are operating on transonic aircraft. Some fighter planes cannot be flown safely without it. And all of the fly-by-wire systems in common use are running on hardware that was commercially available in the 1970s. (Or modern replacements that were custom designed to mimic the 1970s era computers.)

Self-Driving cars on the other hand are black holes of processing power. They can't pack enough GPUs and neural nets onto the things. And they are operating perfectly mundane automobiles. Something so seemingly simple that you don't need a complex process for a human to get a license to operate. (As compared to, say, a pilot's license.)

What gives? Basically aircraft only operate inside of a special envelope where all of the laws of physics have to be controlled. Those physical laws, while complex, can be expressed as mathematical functions. Every sensor, every control surface, every engine is designed to provide the same response to the same inputs. Pilots train for years to know the safe margins in which to operate their craft. Pilots train to be predictable to one another, and to air traffic control.

Cars on the other hand are driven by the whimsy of their operator. The closest a driver can get to the physical envelope of their vehicle's performance generally involves breaking a whole lotta laws, being criminally negligent on maintenance, or driving in some hellacious weather conditions. There are also millions more cars on the road than planes in the sky.

There are no "rules of the road." There are completely contradictory "schools of thought." And all of those schools have a giant grudge match every rush hour. And even if you could get every driver to agree on one set of predictable behaviors, the realities of road construction and roadside emergencies throw in plenty of unpredictability. And let's not forget about weather!

AI cannot "solve" safe vehicle driving because there isn't a subject matter expert in the world (or a team of them) that could write out all of the rules that currently govern safe driving. It's an improvised mess, and has been since the first car took to the first road/dirt path.


Machine learning aside, it's our job as software engineers to model the world. It's a travesty that we don't write down more of that modeling in a machine readable format.

Code is a bunch of instructions to a very specific type of hardware, and we put a lot of thought into other layers that we don't bother to write down except in human readable documentation. Machines will never have that benefit of experience if they cannot read it and have a method to incorporate it into their models of the world.

And I don't mean to imply that machine learning is a requirement here. I would be happy with automata traversing domain specific knowledge graphs, which would require human construction of the knowledge graphs and human construction of the automata.
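A minimal sketch of "automata traversing domain specific knowledge graphs": a hand-built graph of facts and a trivial walker that answers a dependency question by following edges. The domain content and relation names are invented for illustration.

    from collections import deque

    # (subject, relation) -> objects; a hand-written, domain-specific graph
    GRAPH = {
        ("login_form", "requires"): ["password_check"],
        ("password_check", "requires"): ["hashing"],
        ("hashing", "implemented_by"): ["bcrypt"],
    }

    def depends_on(start, target, relations=("requires", "implemented_by")):
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node == target:
                return True
            for relation in relations:
                for nxt in GRAPH.get((node, relation), []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
        return False

    print(depends_on("login_form", "bcrypt"))  # True: the walker found the chain of edges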
 
2021-08-30 1:43:10 PM  

jacksonic: Evil Twin Skippy: jacksonic: This is what happens when you set a machine learning system on a problem ...

And... scene.

Computers are not a substitute for expertise. Every decently working Expert System was the work of, get this, subject matter EXPERTS. Human beings. Who understood the problem they were working on. Assisted by other human beings who were gifted in translating that real world knowledge into something that computers could execute. (And, more importantly, have the computer throw its hands up when your inputs stray outside the bounds of what the Subject Matter Experts determined their answers worked inside of.)

Fly-By-Wire systems? Work great. Self-Driving cars? Accident waiting to happen.

And you look at both and find that fly-by-wire systems are operating on transonic aircraft. Some fighter planes cannot be flown safely without it. And all of the fly-by-wire systems in common use are running on hardware that was commercially available in the 1970s. (Or modern replacements that were custom designed to mimic the 1970s era computers.)

Self-Driving cars on the other hand are black holes of processing power. They can't pack enough GPUs and neural nets onto the things. And they are operating perfectly mundane automobiles. Something so seemingly simple that you don't need a complex process for a human to get a license to operate. (As compared to, say, a pilot's license.)

What gives? Basically aircraft only operate inside of a special envelope where all of the laws of physics have to be controlled. Those physical laws, while complex, can be expressed as mathematical functions. Every sensor, every control surface, every engine is designed to provide the same response to the same inputs. Pilots train for years to know the safe margins in which to operate their craft. Pilots train to be predictable to one another, and to air traffic control.

Cars on the other hand are driven by the whimsy of their operator. The closest a driver can get to the physical envelope of their vehicle's performance generally involves breaking a whole lotta laws, being criminally negligent on maintenance, or driving in some hellacious weather conditions. There are also millions more cars on the road than planes in the sky.

There are no "rules of the road." There are completely contradictory "schools of thought." And all of those schools have a giant grudge match every rush hour. And even if you could get every driver to agree on one set of predictable behaviors, the realities of road construction and roadside emergencies throw in plenty of unpredictability. And let's not forget about weather!

AI cannot "solve" safe vehicle driving because there isn't a subject matter expert in the world (or a team of them) that could write out all of the rules that currently govern safe driving. It's an improvised mess, and has been since the first car took to the first road/dirt path.

Machine learning aside, it's our job as software engineers to model the world. It's a travesty that we don't write down more of that modeling in a machine readable format.

Code is a bunch of instructions to a very specific type of hardware, and we put a lot of thought into other layers that we don't bother to write down except in human readable documentation. Machines will never have that benefit of experience if they cannot read it and have a method to incorporate it into their models of the world.

And I don't mean to imply that machine learning is a requirement here. I would be happy with automata traversing domain specific knowledge graphs, which would require human construction of the knowledge graphs and human construction of the automata.


I would argue that writing what we do as software engineers in a machine runnable format is a waste of time (at best) and a setup for a future disaster (at worst.)

Hear me out.

As soon as management(tm) gets it in its mind that any sort of expensive expert can be replaced by automation they utilize that automation without thinking anything through.

Look at what spreadsheets have done to accounting. Look at what PowerPoint has done for business communications. Look at what the word processor has done for literature. If you have ever had to extract some useful data out of a project that a rank amateur has "developed" in DBASE or MS Access (or god help you) Excel (with a shiatload of macros), you know exactly how dangerous the illusion of competency can be.

Now let us say some well meaning idiot writes some sort of tool that takes what a software engineer does, and makes it so a kid out of high school can run their shiat through and "fix the mistakes."

The code that passes the code writer's equivalent of a spell check is indistinguishable, to a non-software engineer, from quality code written by a software engineer. Just like the spreadsheet that is generated by a psychotic mogul is indistinguishable from a spreadsheet generated by a graduate-level researcher. To the general public, if the numbers add up in the end, there is no difference!

Because your average member of the general public is oblivious to how numbers are like political prisoners. With enough torture you can make them say anything.

There is no algorithm, I would argue, that should be applied, across the board, on every software engineering project. If the project is in a regulated industry, software is just another element to be tested and verified. In an unregulated environment, the software is probably the *only* thing being tested. If the software is being tested.

And there are plenty of ways to fool yourself into thinking "oh, I have test coverage" but not really be testing a goddamn thing. And to know what you are doing requires either experience farking up first, or being able to work under someone who does know what they are doing.
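Here is a small sketch of that failure mode, in Python's unittest for concreteness: both tests execute every line of the function (100% line coverage), but only the second one would ever catch the bug.

    import unittest

    def apply_discount(price, percent):
        # Bug: should be price * (1 - percent / 100)
        return price * (percent / 100)

    class DiscountTests(unittest.TestCase):
        def test_runs_without_crashing(self):
            apply_discount(100, 20)  # full line coverage, zero verification

        def test_actually_checks_the_result(self):
            self.assertEqual(apply_discount(100, 20), 80)  # fails, exposing the bug

    if __name__ == "__main__":
        unittest.main()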

The act of writing software is, in and of itself, something software cannot do. The creation of ideas is not a process that an existing idea can explain.
 
2021-08-30 2:31:52 PM  
The status quo of today is to just hire thousands of enterprise software developers who are perhaps competent and perhaps not depending on what you're paying them. Making it easier to assemble a working system while writing less code would cost jobs in that particular area, but then free up those people to work on requirements engineering instead.

It would also free up the domain experts to just model the domains that they are expert in and not worry about providing a Java API, a C++ API, and a web SDK. In open-source fashion, you could pull in some finance modeling, an authentication system, and something that defines a chat app, and then not have to write that code. If you don't know about computer science concepts and can't write code at all, then if there are any pieces missing, you're stuck. For the majority of us who write code today, we would be absolved of writing the boilerplate that somebody else has already written and could just focus on the customized parts or the pieces that haven't been done yet.

We already pull in existing database products to reuse the domain knowledge of the experts who wrote them, but if we want to customize how the indexing works, we're out of luck. We pull in frameworks to handle things and make it easier for us, but when we're gluing that framework to a particular UI toolkit? It's all manual. We can't reuse our design work and we can't reuse the glue work that we have to do over and over again.

And on the topic of varying skill levels: having a system built out of components that embody best practices and testing and verification will naturally allow people to assemble these more freely and then explore and validate the output. If you don't know what you're doing, you end up with something simpler and more generic. Maybe there are some holes that exist and you can't fill them in, but at least the system let you know that the holes exist. If you felt inclined, you could research more about the affected areas and gain some domain expertise.

We already do this kind of modeling in our heads, but what we're missing is the machine system that lets you traverse this knowledge and assemble a working executable, a browsable system definition, and validation of the outputs against the requirements you originally had.
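As a rough sketch of that assemble-and-validate loop, in Python: a declarative system definition, a registry of prebuilt components, and a check that reports the holes instead of silently shipping them. The component names and registry format are invented for this comment.

    # registry of components somebody else already built
    REGISTRY = {
        "auth": {"provides": ["login", "logout"]},
        "chat": {"provides": ["send_message"], "needs": ["login"]},
    }

    # what we want the system to be, declared rather than coded
    SYSTEM = {
        "components": ["auth", "chat", "finance_model"],   # finance_model doesn't exist yet
        "required_capabilities": ["login", "send_message", "post_ledger_entry"],
    }

    def validate(system, registry):
        missing_components = [c for c in system["components"] if c not in registry]
        provided = {cap for c in system["components"]
                    for cap in registry.get(c, {}).get("provides", [])}
        missing_capabilities = [cap for cap in system["required_capabilities"]
                                if cap not in provided]
        return missing_components, missing_capabilities

    print(validate(SYSTEM, REGISTRY))
    # (['finance_model'], ['post_ledger_entry']) -- the "holes" the system can point you to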
 
2021-08-30 3:40:23 PM  

Marcus Aurelius: Most of it is stolen anyway, so what did you expect?  I'll gladly drop a wrench into that monstrosity.


Yea

Code written by a world community is evil socialism and stolen whereas code written by paid company coders is peaches and cream

U arent even trying.

Lolz
 
2021-08-30 3:42:07 PM  

SansNeural: Training a coding (or any) AI with unfiltered human output is as mindless as handing your young human child's education over to an Internet connected tablet with a hearty "Good luck, little Johnny!"


AI has limits that AI fanboys wont admit
 
2021-08-30 3:42:45 PM  

Stephen_Falken: [Fark user image image 240x267]


AI hasnt aged a bit
 
2021-08-30 3:44:34 PM  

Evil Twin Skippy: jacksonic: Evil Twin Skippy: jacksonic: This is what happens when you set a machine learning system on a problem ...

And... scene.

Computers are not a substitute for expertise. Every decently working Expert System was the work of, get this, subject matter EXPERTS. Human beings. Who understood the problem they were working on. Assisted by other human beings who were gifted in translating that real world knowledge into something that computers could execute. (And, more importantly, have the computer throw its hands up when your inputs stray outside the bounds of what the Subject Matter Experts determined their answers worked inside of.)

Fly-By-Wire systems? Work great. Self-Driving cars? Accident waiting to happen.

And you look at both and find that fly-by-wire systems are operating on transonic aircraft. Some fighter planes cannot be flown safely without it. And all of the fly-by-wire systems in common use are running on hardware that was commercially available in the 1970s. (Or modern replacements that were custom designed to mimic the 1970s era computers.)

Self-Driving cars on the other hand are black holes of processing power. They can't pack enough GPUs and neural nets onto the things. And they are operating perfectly mundane automobiles. Something so seemingly simple that you don't need a complex process for a human to get a license to operate. (As compared to, say, a pilot's license.)

What gives? Basically aircraft only operate inside of a special envelope where all of the laws of physics have to be controlled. Those physical laws, while complex, can be expressed as mathematical functions. Every sensor, every control surface, every engine is designed to provide the same response to the same inputs. Pilots train for years to know the safe margins in which to operate their craft. Pilots train to be predictable to one another, and to air traffic control.

Cars on the other hand are driven by the whimsy of their operator. The closest a driver can get to the physical envelope of their vehicle's performance generally involves breaking a whole lotta laws, being criminally negligent on maintenance, or driving in some hellacious weather conditions. There are also millions more cars on the road than planes in the sky.

There are no "rules of the road." There are completely contradictory "schools of thought." And all of those schools have a giant grudge match every rush hour. And even if you could get every driver to agree on one set of predictable behaviors, the realities of road construction and roadside emergencies throw in plenty of unpredictability. And let's not forget about weather!

AI cannot "solve" safe vehicle driving because there isn't a subject matter expert in the world (or a team of them) that could write out all of the rules that currently govern safe driving. It's an improvised mess, and has been since the first car took to the first road/dirt path.

Machine learning aside, it's our job as software engineers to model the world. It's a travesty that we don't write down more of that modeling in a machine readable format.

Code is a bunch of instructions to a very specific type of hardware, and we put a lot of thought into other layers that we don't bother to write down except in human readable documentation. Machines will never have that benefit of experience if they cannot read it and have a method to incorporate it into their models of the world.

And I don't mean to imply that machine learning is a requirement here. I would be happy with automata traversing domain specific knowledge graphs, which would require human construction of the knowledge graphs and human construction of the automata.

I would argue that writing what we do as software engineers in a machine runnable format is a waste of time (at best) and a setup for a future disaster (at worst.)

Hear me out.

As soon as management(tm) gets it in its mind that any sort of expensive expert can be replaced by automation they utilize that automation without thinking anything through.

Look at what spreadsheets have done to accounting. Look at what PowerPoint has done for business communications. Look at what the word processor has done for literature. If you have ever had to extract some useful data out of a project that a rank amateur has "developed" in DBASE or MS Access (or god help you) Excel (with a shiatload of macros), you know exactly how dangerous the illusion of competency can be.

Now let us say some well meaning idiot writes some sort of tool that takes what a software engineer does, and makes it so a kid out of high school can run their shiat through and "fix the mistakes."

The code that passes the code writer's equivalent of a spell check is indistinguishable, to a non-software engineer, from quality code written by a software engineer. Just like the spreadsheet that is generated by a psychotic mogul is indistinguishable from a spreadsheet generated by a graduate-level researcher. To the general public, if the numbers add up in the end, there is no difference!

Because your average member of the general public is oblivious to how numbers are like political prisoners. With enough torture you can make them say anything.

There is no algorithm, I would argue, that should be applied, across the board, on every software engineering project. If the project is in a regulated industry, software is just another element to be tested and verified. In an unregulated environment, the software is probably the *only* thing being tested. If the software is being tested.

And there are plenty of ways to fool yourself into thinking "oh, I have test coverage" but not really be testing a goddamn thing. And to know what you are doing requires either experience farking up first, or being able to work under someone who does know what they are doing.

The act of writing software is, in and of itself, something software cannot do. The creation of ideas is not a process that an existing idea can explain.


AI is overrated.  A smarter dumb machine
 
2021-08-30 5:45:51 PM  
I think the real takeaway here is that 60% of the code will pass. Remember people, Ds get degrees!
 
2021-08-30 8:09:34 PM  

Linux_Yes: Stephen_Falken: [Fark user image image 240x267]

AI hasnt aged a bit


You certainly haven't grown up any.
 
2021-08-31 3:15:02 AM  

Tannax: I think the real takeaway here is that 60% of the code will pass. Remember people, Ds get degrees!


"Perfect is the enemy of good!"
 
Displayed 20 of 20 comments


This thread is closed to new comments.
