This "blog post" appears to just be copy-pasted content from the NASA article [1]. I give credit for the source being cited, but it's still plagiarism.
It's a bit of an edge case. It makes a good point and it uses text from a credible source. AFAIK everything NASA publishes is royalty free and can just be copied.
One additional sentence like this between the image and the content and it would probably be fine:
"The explanation from OpenAI has some major flaws, here is how this NASA source explains it:"
[1] https://www.grc.nasa.gov/www/k-12/VirtualAero/BottleRocket/a...
karel-3d · 4h ago
ah that's why there is no "java applet"
low_tech_punk · 4h ago
To be fair, the blog has a "Source: NASA" link near the beginning.
nomel · 4h ago
> Plagiarism: the practice of taking someone else's work or ideas and passing them off as one's own.
"passing them off as one's own" is the key part. To prevent this, you make it very clear which parts are your own ideas and which parts are not. If you compare the source to this post, you'll see it's a mix, without delineation.
low_tech_punk · 4h ago
Thanks for calling this out. Yes, agree. I should revise my understanding of plagiarism.
Vvector · 4h ago
"The theory (from GPT-5) is one of the most widely circulated, incorrect explanations."
Naturally. This is how LLMs work: they regurgitate the data fed into them.
morninglight · 50m ago
A demonstration of the Bernoulli effect by the Flying Bernoulli Brothers: https://www.youtube.com/watch?v=1GAp2dlIC8I
Reminds me of when Bard quoted something NASA put out that was also incorrect.
psunavy03 · 4h ago
> This theory also does not explain how airplanes can fly upside-down (the longer path would then be on the bottom!) which happens often at air shows and in air-to-air combat.
While true, the person writing this article does not seem to understand the difference between flying inverted and flying with a negative angle of attack. These can happen at the same time, but not necessarily. If you're performing a loop or a barrel roll, you will be inverted, but the aircraft will be performing largely as it would be when you are straight and level, because you are still under positive g with a positive AOA on the aircraft. The lift vector will just be pointed someplace other than "up."
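To make that concrete, here's a toy sketch (assuming a thin-airfoil lift slope and made-up airspeed/wing numbers, not any particular aircraft): lift magnitude depends on AOA and airspeed, while the roll angle only rotates where that lift points in the earth frame.

    import math

    RHO = 1.225  # sea-level air density, kg/m^3

    def lift_coefficient(aoa_deg):
        # Thin-airfoil approximation: C_L ~ 2*pi*alpha (alpha in radians)
        return 2 * math.pi * math.radians(aoa_deg)

    def lift_vector(aoa_deg, roll_deg, v=100.0, wing_area=16.0):
        # Classic lift equation L = 0.5 * rho * v^2 * S * C_L,
        # then rotated by the roll angle into earth-frame components.
        lift = 0.5 * RHO * v**2 * wing_area * lift_coefficient(aoa_deg)
        phi = math.radians(roll_deg)
        return (lift * math.sin(phi), lift * math.cos(phi))

    print(lift_vector(5, 0))    # upright, +5 deg AOA: lift points straight up
    print(lift_vector(5, 180))  # inverted, same +5 deg AOA: same magnitude,
                                # but the vector now points toward the ground
                                # (e.g. over the top of a loop, still +g)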
jacquesm · 4h ago
This just seems like an attempt to increase the author's visibility by referencing GPT-5.
andix · 4h ago
To me the whole demo [edit: today's openai live stream] didn't feel revolutionary at all.
Especially the code generation part. It feels to me like Claude Web has already been able to produce those illustration artifacts equally well for months.
Also, the example in Cursor just felt like a regular Claude Code session with a different UI.
The only part I'm excited about is that there is no longer a distinction between reasoning and non-reasoning models. I tend to default to reasoning models, because too often I feel like I need to switch to a reasoning model mid-conversation anyway. And reasoning models degraded the user experience drastically, because it often takes them quite some time to start responding.
aeternum · 4h ago
This is the problem with LLMs: they return common knowledge as fact.
Interesting that with all the Ph.D. expert fine-tuning GPT-5 supposedly received, it still doesn't favor the more correct Newtonian explanation of airplane lift.
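For anyone curious, a back-of-the-envelope version of that Newtonian picture (all numbers below are illustrative guesses, not a real aircraft): the wing continuously deflects a large tube of air slightly downward, and lift is the reaction to that momentum change, F = d(mv)/dt.

    # Toy Newtonian estimate of lift as the reaction to downwash.
    rho = 1.225                       # air density, kg/m^3
    v = 70.0                          # airspeed, m/s
    span = 10.0                       # wingspan, m
    # Rough "tube" of air influenced by the wing, diameter ~ one span:
    area = 3.14159 * (span / 2) ** 2  # m^2
    mass_flow = rho * v * area        # kg of air deflected per second
    downwash = 1.5                    # downward velocity imparted, m/s (a guess)
    lift = mass_flow * downwash       # N
    print(f"lift ~ {lift / 1000:.1f} kN, supports ~ {lift / 9.81:.0f} kg")

With those guesses you get on the order of 10 kN, enough to hold up roughly a tonne, which is at least the right ballpark for a light aircraft.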
tekno45 · 4h ago
we can be wrong so much faster.
MartinodF · 4h ago
This is a pet peeve of mine and I'm glad to see it called out. That said, I haven't seen a comprehensive discussion of "here are the different factors that we think contribute to creating lift" for the general public. Is anyone aware of a good source?
iamtheworstdev · 4h ago
In the LLM's defense: most airline pilots think this is how things work as well.
mrbungie · 4h ago
In humans' and pilots' defense: most airline pilots do not claim to have PhD-level intelligence (whatever that means), as OpenAI/sama frequently hyped about GPT-5 in the preceding months.
dotancohen · 4h ago
Indeed, this claim is at the very top of the announcement:
> GPT‑5 is smarter across the board, providing more useful responses across math, science, finance, law, and more. It's like having a team of experts on call for whatever you want to know.
iamtheworstdev · 3h ago
no, but they should be expected to know better during their commercial license oral exam. (speaking as a pilot)
jplusequalt · 4h ago
Placing the blame on the LLM skirts the real issue, which is that these companies are trying to upend society by constructing a new reliance on these LLMs. Without the hype around the AI space, fewer people would accept these tools as all-knowing machines tantamount to gods.
psunavy03 · 4h ago
You do not need to be an aerodynamicist or aerospace engineer to be a pilot. Not every pilot is a test pilot.
blibble · 4h ago
indeed, my physics education only goes up to age 18, and this was my first thought watching the presentation
it then went away and generated a load of confidently incorrect total bullshit
"phd level" my backside
dguest · 4h ago
I liked the "avid Wikipedia reader on ketamine" characterization more.