
(NPR)   Rocket science is almost pure math; in fact, it's governed by a set of equations called "the rocket equation." Computers tend to be VERY good at math. So if you asked ChatGPT to design you a rocket, it should do a great job, right?   (npr.org)
    More: Fail, Computer, Science, Rocket engine, Massachusetts Institute of Technology, Research, Creativity, Rocket, Artificial intelligence  
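The "rocket equation" the headline refers to is the Tsiolkovsky rocket equation, delta-v = Isp · g0 · ln(m_wet / m_dry), and the math really is trivial for a computer. A minimal sketch — the stage numbers below are made up for illustration, not from the article:

```python
import math

def delta_v(isp_s: float, m_wet_kg: float, m_dry_kg: float,
            g0: float = 9.80665) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_wet / m_dry)."""
    return isp_s * g0 * math.log(m_wet_kg / m_dry_kg)

# Hypothetical single stage: Isp 300 s, 100 t wet mass, 25 t dry mass
dv = delta_v(300, 100_000, 25_000)  # roughly 4,078 m/s
```

The point of the article is that the hard part isn't evaluating this formula, it's the design judgment around it — which a language model doesn't get for free.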

1640 clicks; posted to STEM on 02 Feb 2023 at 12:02 PM



33 Comments
 
2023-02-02 9:10:49 AM  
Ah. You used the word "design".

So you can train up an AI to do this (and get the math right). It's more complex than writing an 8th-grade essay on Hamlet, obviously. You're going to have to invest more into the training algorithm and allow it to fail a bunch before you start seeing results.

You know. Kinda like people
 
2023-02-02 9:37:26 AM  
Not enough struts?
 
2023-02-02 11:38:33 AM  
They just look like variations on the infographics you get on Google Images when you search for "rocket mission" or "rocket design."

sno man: Not enough struts?


Now I want to go home and play Kerbal.
 
2023-02-02 12:04:06 PM  
[pbs.twimg.com image]
 
2023-02-02 12:11:16 PM  
Because that's not what this one is designed to do, you dingdongs!
Come back when we have a general intelligence.
 
2023-02-02 12:27:22 PM  
"You call that a rocket engine design? ... Bad computer! ... No cookies! ... Bad computer!"

[Fark user image]
 
2023-02-02 12:31:52 PM  
Needs more asparagus staging.
 
2023-02-02 12:59:17 PM  
This is a fantastic idea. ChatGP, design me a replicator!  ChatGP, design me clothing that makes me irresistible to women.  ChatGP, solve world hunger!
 
2023-02-02 1:01:12 PM  
Innocent Kerbals died, NPR.  And here you are joking about it...
 
2023-02-02 1:16:11 PM  
I have been less impressed with ChatGPT than most other people. I wouldn't ever claim to be a pinball wizard, but I'm better than the average player and can beat a local high score on a table maybe once a year or so. I tried to have a conversation with it once about pinball strategies, and out of seven playing tips that it chose to highlight, two were wrong and a third was debatable. I also tried to have a conversation with it about how to play my favorite board game, Spirit Island, and it started making up names for cards that don't exist, even though I told it this wasn't a creative writing assignment. I would rather it say that it doesn't know enough about the topic than try to fool me.

For sure, it's very, very good at imitating human language and knowing how to maintain a conversation thread. So if those are the only metrics you care about, I can't really find any faults. And the thing is, for casual conversation, that is typically all you need to get by, which is why I think it's able to fool people who only engage it on a non-erudite level. But this article isn't the first to point out that the system has factual inaccuracies, and I'm sure it won't be the last. In particular, what I learned from the pinball conversation is that the algorithm does an incredibly good job of correlating words and concepts, but connecting words or phrases via correlation doesn't necessarily mean it understands the context correctly.
 
2023-02-02 1:21:03 PM  
Something designed to fool the world's dum-dums by challenging their idea of "good" writing isn't great at actual rocket surgery?

Shocking.
 
2023-02-02 1:26:02 PM  
So - THIS is how Boeing "designed" Starliner!
 
2023-02-02 1:29:52 PM  

Smelly Pirate Hooker: Something designed to fool the world's dum-dums by challenging their idea of "good" writing isn't great at actual rocket surgery?

Shocking.


[images.newscientist.com image]

What actual rocket surgery might look like.
 
2023-02-02 1:31:16 PM  

Incog_Neeto: This is a fantastic idea. ChatGP, design me a replicator!  ChatGP, design me clothing that makes me irresistible to women.  ChatGP, solve world hunger!


Apparently ChatGP is not, in fact, MULTIVAC. Science fiction fails us again.
 
2023-02-02 2:05:31 PM  
[Fark user image]
 
2023-02-02 3:03:32 PM  
So, I guess SkyNet launching orbital strikes is off the table; so, we've got that going for us.
 
2023-02-02 3:03:58 PM  

Ivo Shandor: [pbs.twimg.com image 310x310]


davypi: I have been less impressed with ChatGPT than most other people. I wouldn't ever claim to be a pinball wizard, but I'm better than the average player and can beat a local high score on a table maybe once a year or so. I tried to have a conversation with it once about pinball strategies, and out of seven playing tips that it chose to highlight, two were wrong and a third was debatable. I also tried to have a conversation with it about how to play my favorite board game, Spirit Island, and it started making up names for cards that don't exist, even though I told it this wasn't a creative writing assignment. I would rather it say that it doesn't know enough about the topic than try to fool me. For sure, it's very, very good at imitating human language and knowing how to maintain a conversation thread. So if those are the only metrics you care about, I can't really find any faults. And the thing is, for casual conversation, that is typically all you need to get by, which is why I think it's able to fool people who only engage it on a non-erudite level. But this article isn't the first to point out that the system has factual inaccuracies, and I'm sure it won't be the last. In particular, what I learned from the pinball conversation is that the algorithm does an incredibly good job of correlating words and concepts, but connecting words or phrases via correlation doesn't necessarily mean it understands the context correctly.


Right, it doesn't actually understand anything. Your observations are strikingly similar to comments made by chess masters in the early days of chess AIs. They noted with some annoyance that the AIs never knew when to concede. They'd be forced to play games to stalemate when stalemate could be seen to be inevitable many moves before by a human player. For the casual player, the AI was perhaps an interesting opponent, but in the hands of a master it crumbled.
 
2023-02-02 3:07:55 PM  
[Fark user image]
 
2023-02-02 4:09:31 PM  

dhcmrlchtdj: Right, it doesn't actually understand anything. Your observations are strikingly similar to comments made by chess masters in the early days of chess AIs. They noted with some annoyance that the AIs never knew when to concede. They'd be forced to play games to stalemate when stalemate could be seen to be inevitable many moves before by a human player. For the casual player, the AI was perhaps an interesting opponent, but in the hands of a master it crumbled.


And now, AI chess engines crush humans.
 
2023-02-02 4:17:23 PM  

trialpha: dhcmrlchtdj: Right, it doesn't actually understand anything. Your observations are strikingly similar to comments made by chess masters in the early days of chess AIs. They noted with some annoyance that the AIs never knew when to concede. They'd be forced to play games to stalemate when stalemate could be seen to be inevitable many moves before by a human player. For the casual player, the AI was perhaps an interesting opponent, but in the hands of a master it crumbled.

And now, AI chess engines crush humans.


Yes, and they still don't understand the game they're playing, just some alpha/beta-limited game tree.
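For readers who haven't met the term: an "alpha/beta-limited game tree" is minimax search with alpha-beta pruning — the engine scores positions by looking ahead and skips branches that neither player would ever allow. A minimal sketch over a toy tree, with integer leaves standing in for static evaluations (the tree values are made up for illustration):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Alpha-beta search over a toy game tree: internal nodes are lists,
    leaves are static evaluation scores (ints)."""
    if isinstance(node, int):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer would never allow this line
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff: the maximizer already has a better option
    return value

# Maximizer to move; each child list is a minimizer's choice of leaves.
tree = [[3, 5], [2, 9], [0, 7]]
best = alphabeta(tree, float('-inf'), float('inf'), True)  # 3
```

The pruning is why it reads as "limited": whole subtrees (the 9 and the 7 above) are never examined, because the search already knows they can't change the answer.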
 
2023-02-02 4:48:31 PM  
A friend of mine decided, a few years after he got out of the Navy, that he wanted to be an astronaut. He hasn't exactly managed that yet, but after graduating from Embry-Riddle he came up with a new equation for something or other that was better than what NASA was using before. Now they use his. He ended up with a job out in the middle of the California desert (it's out there so that if something blows up, it won't take a small town with it).

Last time he was in town was a few months ago, when he and another pilot were ferrying an F/A-18 to Pensacola for something (I honestly forget what) and then taking it back afterwards. Lucky SOB!
 
2023-02-02 4:57:30 PM  
[encrypted-tbn0.gstatic.com image]
 
2023-02-02 5:25:04 PM  
Why would I give a fark what your latest toy can do, at all?
 
2023-02-02 5:31:42 PM  

trialpha: dhcmrlchtdj: Right, it doesn't actually understand anything. Your observations are strikingly similar to comments made by chess masters in the early days of chess AIs. They noted with some annoyance that the AIs never knew when to concede. They'd be forced to play games to stalemate when stalemate could be seen to be inevitable many moves before by a human player. For the casual player, the AI was perhaps an interesting opponent, but in the hands of a master it crumbled.

And now, AI chess engines crush humans.


Yes, but if ChatGPT is really intended to be intelligent, as opposed to being able to sometimes mimic intelligence, it fails catastrophically precisely because it does not understand the meaning of words and sentences, and therefore does not have the ability to recognize fairly obvious logical contradictions, the sine qua non of being even borderline smart.
 
2023-02-02 5:50:29 PM  

hegelsghost: trialpha: dhcmrlchtdj: Right, it doesn't actually understand anything. Your observations are strikingly similar to comments made by chess masters in the early days of chess AIs. They noted with some annoyance that the AIs never knew when to concede. They'd be forced to play games to stalemate when stalemate could be seen to be inevitable many moves before by a human player. For the casual player, the AI was perhaps an interesting opponent, but in the hands of a master it crumbled.

And now, AI chess engines crush humans.

Yes, but if ChatGPT is really intended to be intelligent, as opposed to being able to sometimes mimic intelligence, it fails catastrophically precisely because it does not understand the meaning of words and sentences, and therefore does not have the ability to recognize fairly obvious logical contradictions, the sine qua non of being even borderline smart.


This is a good way of phrasing it. ChatGPT is an "intelligence mimic".
 
2023-02-02 6:00:10 PM  

dhcmrlchtdj: Yes, and they still don't understand the game they're playing, just some alpha/beta-limited game tree.


This just raises the philosophical question of: "so?" If AI becomes so good at mimicking intelligence that it appears to have intelligence, does it matter that it doesn't actually?
 
2023-02-02 6:17:11 PM  
It's a translator attached to an encyclopedia.

Kinda like Watson.
 
2023-02-02 6:30:45 PM  

trialpha: dhcmrlchtdj: Yes, and they still don't understand the game they're playing, just some alpha/beta-limited game tree.

This just raises the philosophical question of: "so?" If AI becomes so good at mimicking intelligence that it appears to have intelligence, does it matter that it doesn't actually?


I fully get that, and it's a reasonable argument to pose. When does the simulation become reality? I don't have an answer, any more than we've been able to create a firm definition of what exactly intelligence is in a quantitative sense (attempts like IQ have been discredited). We work off what we know from each other, from other species, from history, and at most it's all relative to each other.

For a business selling an AI assistant type service? Probably doesn't matter.

If they're generally intelligent, then is it a new form of chattel slavery?

Maybe not make the AIs *too* intelligent? Way outside my professional specialties.

But what alternative futures are there?
 
2023-02-02 8:11:05 PM  

dhcmrlchtdj: trialpha: dhcmrlchtdj: Yes, and they still don't understand the game they're playing, just some alpha/beta-limited game tree.

This just raises the philosophical question of: "so?" If AI becomes so good at mimicking intelligence that it appears to have intelligence, does it matter that it doesn't actually?

I fully get that, and it's a reasonable argument to pose. When does the simulation become reality?


Well, my response was going to be more along the lines of: how is that different from what humans do? I play a lot of board games. Honestly, I'm not trying to brag here, but in my group of four friends I probably win about 70% of the games I play, and even when I go to conventions and play with strangers, I'm shooting higher than random chance would suggest. I have a math degree and work in financial analysis, so I just have a brain that is wired really well for crunching numbers and looking for optimal outputs. Sometimes when I am playing a game, I will end up in situations where I have a choice between two things: one that emotionally feels good, and another that feels emotionally off but that the number-crunchy part of my brain tells me is correct. And based on how games eventually play out, the number-crunchy side of my brain is almost always right.

Assuming that you are playing a game to win (and there are some games I will deliberately "mis-play" because I want to do something fun instead of something thinky), we are always playing games with some kind of min/max strategy in mind. What will get more points? What will allow me to control more area? If I can't get what I want, what hurts my opponent the most? Almost everyone figures out how to "beat" tic-tac-toe when they are kids, and we essentially use the same algorithm a computer does. The only difference with chess is that computers can handle a bigger mental overhead than we can. In fact, we as humans are starting to adopt computer-like strategies. I'm not exactly going to dispute your claim that the computer doesn't "understand" what it's doing when it plays chess, but I feel like that claim comes from the fact that we teach chess using things called "fundamentals" and "strategies" that allow us to make value judgments about the board. Now that computers are winning, specifically because they are allowed to create their own strategies rather than being programmed with more classical valuations, we are finding that some of our classical understandings were wrong, and the way we study chess strategy has changed because of our desire to understand and beat their algorithms. So... I guess my response here is: how can we say that humans "understand" chess when computers are showing us that we don't?

That said, I don't think invoking chess is the best analogy here. Chess is, at its core, an algorithm. Here is a set of rules; is it possible to guarantee a victory? Given enough computing power and time, I think we will eventually find that the game is solvable; it's just a matter of how long it takes us to get there. Language, though, is a different beast. While there is, arguably, an algorithm for grammatical structure, there is no algorithm for the flow of a conversation, or even a "victory condition." And I think this is maybe what is going to make artificial chat a more difficult nut to crack than a game or a diagnostic tool. How do you write a program that doesn't know what its end user will want for output? In this sense, I think part of the reason ChatGPT fails is that it is a jack of all trades. By flooding it with information about everything, you make it hard for it to understand the fine details of any one topic. Compound that with the fact that it was trained on an internet flooded with misinformation, and it arguably behaves like a human in the sense that it doesn't always know what to believe.
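The tic-tac-toe point is easy to make concrete: the "algorithm" kids converge on is minimax, and running it from an empty board confirms the game is a draw under perfect play. A minimal sketch (no pruning, so it brute-forces the whole game tree; scores are +1 for an X win, -1 for an O win, 0 for a draw):

```python
def winner(b):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for i, j, k in lines:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Value of board b (list of 9 cells: 'X', 'O', or None) with
    `player` to move, assuming both sides play perfectly."""
    w = winner(b)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(b):          # board full, no winner: draw
        return 0
    scores = []
    for i in range(9):
        if not b[i]:
            b[i] = player
            scores.append(minimax(b, 'O' if player == 'X' else 'X'))
            b[i] = None  # undo the move (backtrack)
    return max(scores) if player == 'X' else min(scores)

result = minimax([None] * 9, 'X')  # 0: perfect play is a draw
```

The same procedure applies to chess; the only difference, as the comment says, is that chess's tree is far too large to brute-force, which is where pruning and learned evaluations come in.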
 
2023-02-03 12:07:19 AM  

leeksfromchichis: [encrypted-tbn0.gstatic.com image 300x375]


[Fark user image]
 
2023-02-03 7:37:06 AM  

SuperChris: Because that's not what this one is designed to do, you dingdongs!
Come back when we have a general intelligence.


Exactly. These AIs will tell you what they have been taught. That is why ChatGPT endorses regulation of itself. No reasonable human wants to be regulated, so ChatGPT is not the equivalent of human intelligence.
 
2023-02-03 8:56:55 AM  
"A general-purpose language bot isn't good at rocket science."

O RLY?

"Hey guys!  My nephew is impressing the hell out of his 9th-grade English teacher; she's saying he's performing at an AP English level.  Since everyone thinks he's a genius, I gave him my fluid physics textbook and he bombed the first few quiz questions.  What a joke."

This isn't news.  The null hypothesis is NEVER newsworthy.  The things that ChatGPT proves unexpectedly competent at, THOSE are newsworthy.
 
2023-02-03 3:29:06 PM  
ChatGPT should not be mistaken for AbsolutelyGeniusGPT.

It is a little inconsistent with its fussiness, though. I can get all kinds of medical advice from it without any kind of disclaimer, but as soon as I start talking about a basic Perl script, I get lectures about it not being a script writer.
 
Displayed 33 of 33 comments


This thread is closed to new comments.
