Nano Banana image examples

189 points by SweetSoftPillow on 9/11/2025, 8:35:11 PM (github.com)

Comments (96)

mitthrowaway2 · 3h ago
I've come to realize that I liked believing that there was something special about the human mental ability to use our mind's eye and visual imagination to picture something, such as how we would look with a different hairstyle. It's uncomfortable seeing that skill reproduced by machinery at the same level as my own imagination, or even better. It makes me feel like my ability to use my imagination is no more remarkable than my ability to hold a coat off the ground like a coat hook would.
al_borland · 23m ago
As someone who can’t visualize things like this in my head, and can only think about them intellectually, your own imagination is still special. When I heard people can do that, it sounded like a super power.

AI is like Batman, useless without his money and utility belt. Your own abilities are more like Superman, part of who you are and always with you, ready for use.

lemonberry · 1h ago
But you can find joy in things you envision, or laugh, or be horrified. The mental ability is surely impressive, but having a reason to do it and feeling something at the result is special.

"To see a world in a grain of sand And a heaven in a wild flower..."

We - humans - have reasons to be. We get to look at a sunset and think about the scattering of light and different frequencies and how it causes the different colors. But we can also just enjoy the beauty of it.

For me, every moment is magical when I take the time to let it be so. Heck, for there to even be a me responding to a you and all of the things that had to happen for Hacker News to be here. It's pretty incredible. To me anyway.

FuckButtons · 3h ago
I have aphantasia, I’m glad we’re all on a level playing field now.
yoz-y · 2h ago
I always thought I had a vivid imagination. But then aphantasia was mentioned on Hello Internet once, I looked it up, saw comments like these, and honestly…

I’ve no idea how to even check. According to various tests I believe I have aphantasia. But mostly I haven’t got even the slightest idea how not having it is supposed to work. I guess this is one of those mysteries where a missing sense cannot be described in any manner.

jmcphers · 2h ago
A simple test for aphantasia that I gave my kids when they asked about it is to picture an apple with three blue dots on it. Once you have it, describe where the dots are on the apple.

Without aphantasia, it should be easy to "see" where the dots are since your mind has placed them on the apple somewhere already. Maybe they're in a line, or arranged in a triangle, across the middle or at the top.

brotchie · 1h ago
When reading "picture an apple with three blue dots on it", I have an abstract concept of an apple and three dots. There's really no geometry there without follow-on questions or some priming in the question.

In my conscious experience I pretty much imagine {apple, dot, dot, dot}. I don't "see" blue, the dots are tagged with dot.color == blue.

When you ask about the arrangement of the dots, I'll THEN think about it, and then say "arranged in a triangle." But that's because you've probed with your question. Before you probed, there's no concept in my mind of any geometric arrangement.

If I hadn't been prompted to think about (or naturally thought about) the color of the apple, and you asked me "what color is the apple?", only then would I say "green" or "red."

If you asked me to describe my office (for example) my brain can't really imagine it "holistically." I can think of the desk and then enumerate its properties: white legs, wooden top, rug on the ground. But, essentially, I'm running a geometric iterator over the scene, starting from some anchor object, jumping to nearby objects, and then enumerating their properties.

I have glimpses of what it's like to "see" in my mind's eye. At night, in bed, just before sleep, if I concentrate really hard, I can sometimes see fleeting images. I liken it to looking at one of those eye puzzles where you have to relax your eyes to "see it." I almost have to focus on "seeing" without looking into the blackness of my closed eyes.

rimprobablyly · 47m ago
Exactly my experience too. These fleeting images are rare, but bloody hell it feels like cheating at life if most people can summon up visualisations like that at will.
derektank · 1m ago
I can't recall it ever being useful outside of physics and geometry questions tbh
dom96 · 1h ago
So my mind briefly jumps to an apple and I guess I am very briefly seeing that the dots happen to be on top of the apple, but that image is fleeting.

I have had some people claim to me that they can literally see what they are imagining as if it is in front of them for prolonged periods of time, in a similar way to how it would show up via AR goggles.

I guess this is a spectrum and it's tough to delineate the abilities. But I just looked it up and what I am describing is hyperphantasia.

gcanyon · 1h ago
For me the triggering event was reading about aphantasia, and then thinking about how I have never, ever, seen a movie about a book I've read and said, "that [actor|place|thing] looks nothing like I imagined it." Then I tried the apple thing to confirm. I have some sense of looking at things, but not much.
Sohcahtoa82 · 2h ago
After reading your first sentence, I immediately saw an apple with three dots in a triangle pointing downwards on the side. Interestingly, the 3 dots in my image were flat, as if merely superimposed on an image of an apple, rather than actually being on an apple.

How do people with aphantasia answer the question?

foofoo12 · 2h ago
I found out recently that I have aphantasia, based on everything I've read. When you tell me to visualize, I imagine. I don't see it. An apple, I can imagine that. I can describe it, but only in incredibly sparse detail. But when you ask for details I have to fill them in.

I hadn't really placed those three dots in a specific place on the apple. But when you ask where they are, I'll decide to put them in a line on the apple. If you ask what color they are, I'll have to decide.

brotchie · 1h ago
+1, spot on description of aphantasia.
jvanderbot · 2h ago
They may not answer, but what they'll realize is that the "placing" comes consciously after the "thinking of," which does not happen with others.

That is, they have to ascribe a placement rather than describe one in the image their mind conjured up.

wrs · 2h ago
There's no apple, much less any dots. Of course, I'm happy to draw you an apple on a piece of paper, and draw some dots on that, then tell you where those are.
aaronblohowiak · 2h ago
oh just close your eyes and imagine an apple for a few moments, then open your eyes, look at the wikipedia article about aphantasia and pick the one that best fits the level of detail you imagined.
foofoo12 · 2h ago
Ask people to visualize a thing. Pick something like a house, dog, tree, etc. Then ask about details. Where is the dog?

I have aphantasia and my dog isn't anywhere. It's just a dog, you didn't ask me to visualize anything else.

When you ask about details, like color, tail length, eyes then I have to make them up on the spot. I can do that very quickly but I don't "see" the good boy.

Revisional_Sin · 3h ago
Aphantasia gang!
m3kw9 · 3h ago
To be fair, the model's ability came from us generating the training data.
quantummagic · 2h ago
To be fair, we're the beneficiaries of nature generating the data we trained on ourselves. Our ability came from being exposed to training in school, and in the world, and from examples from all of human history. I.e., if you locked a child in a dark room for their entire life, and gave them no education or social interaction, they wouldn't have a very impressive imagination or artistic ability either.

We're reliant on training data too.

lawlessone · 1h ago
Gonna try to use this one instead of paying the next time I visit a restaurant.
echelon · 1h ago
Vision has evolved frequently and quickly in the animal kingdom.

Conscious intelligence has not.

As another argument, we've had mathematical descriptions of optics, drawing algorithms, fixed-function pipelines, ray tracing, and so much other rich math for drawing and animating.

Smart, thinking machines? We haven't the faintest idea.

Progress on Generative Images >> LLMs

Animats · 1h ago
> Vision has evolved frequently and quickly in the animal kingdom. Conscious intelligence has not.

Three times, something like intelligence has evolved - in mammals, octopuses, and corvids. Completely different neural architectures in those unrelated species.

nick__m · 42s ago
Why carve out the corvids from the other birds? Some parrot and parakeet species are playing in the same league as the corvids.
echelon · 47m ago
I won't judge our distant relatives, the cephalopods and chicken theropods, but we big apes are pretty dumb.

Even with what we've got, it took us hundreds of thousands of years to invent indoor plumbing.

Vision, I still submit, is much simpler than "intelligence". It's evolved independently almost a hundred times.

It's also hypothesized that it takes as few as a hundred thousand years to evolve advanced eye optics:

https://royalsocietypublishing.org/doi/10.1098/rspb.1994.004...

Even plants can sense the visual and physical world. Three dimensional spatial relationships and paths and rays through them are not hard.

stuckkeys · 59m ago
that was deep.
micromacrofoot · 2h ago
it can only do this because it's been trained on millions of human works
echelon · 1h ago
This argument that hints at appropriation isn't going to be very useful or true, going forward.

There are now dozens of copyright safe image and video models: Adobe, MoonValley, etc.

We technically never need human works again. We can generate everything synthetically (Unreal Engine, cameras on a turntable, etc.)

The physics of optics is just incredibly easy to evolve.

lawlessone · 1h ago
>We technically never need human works again.

Not sure about that. Humans are doing almost all the work now still.

echelon · 1h ago
I'm sorry, but in the context of image gen, this is also deeply biased.

Nano banana saves literally millions of manual human pixel pushing hours.

It's easy to hate on LLMs and AI hype, but image models are changing the world and impacting every visual industry.

kylebenzle · 2h ago
Looks like your mental model of the world was COMPLETELY wrong. Thanks for sharing but I don't understand the need for people to constantly tell others how stupid they are (or used to be). We get it, your dumb.
lawlessone · 1h ago
>We get it, your dumb.

You're

vunderba · 3h ago
Nano-Banana can produce some astonishing results. I maintain a comparison website for state-of-the-art image models with a very high focus on adherence across a wide variety of text-to-image prompts.

I recently finished putting together an Editing Comparison Showdown counterpart where the focus is still adherence but testing the ability to make localized edits of existing images using pure text prompts. It's currently comparing 6 multimodal models including Nano-Banana, Kontext Max, Qwen 20b, etc.

https://genai-showdown.specr.net/image-editing

Gemini Flash 2.5 leads with a score of 7 out of 12, but Kontext comes in at 5 out of 12 which is especially surprising considering you can run the Dev model of it locally.

echelon · 1h ago
Add gpt-image-1. It's not strictly an editing model since it changes the global pixels, but I've found it to be more instructive than Nano Banana for extremely complicated prompts and image references.
vunderba · 1h ago
It's actually already in there - the full list of edit models is Nano-Banana, Kontext Dev, Kontext Max, Qwen Edit 20b, gpt-image-1, and Omnigen2.

I agree with your assessment - even though it does tend to make changes at a global level, you can at least attempt to minimize its alterations through careful prompting.

ffitch · 1h ago
great benchmark!
xnx · 3h ago
Amazing model. The only limit is your imagination, and it's only $0.04/image.

Since the page doesn't mention it, this is the Google Gemini Image Generation model: https://ai.google.dev/gemini-api/docs/image-generation

Good collection of examples. Really weird to choose an inappropriate-for-work one as the second example.

warkdarrior · 2h ago
More specifically, Nano Banana is tuned for image editing: https://gemini.google/overview/image-generation
vunderba · 2h ago
Yep, Google actually recommends using Imagen 4 / Imagen 4 Ultra for straight image generation. In spite of that, Flash 2.5 still scored shockingly high on my text-to-image comparisons, though image fidelity is obviously not as good as the dedicated text-to-image models.

Came within striking distance of OpenAI gpt-image-1 at only one point less.

smrtinsert · 2h ago
Is it a single model or is it a pipeline of models?
SweetSoftPillow · 2h ago
Single model, Gemini 2.5 Flash with native image output capability.
minimaxir · 3h ago
[misread]
vunderba · 3h ago
They're referring to Case 1 Illustration to Figure, the anime figurine dressed in a maid outfit in the HN post.
pdpi · 3h ago
I assume OP means the actual post.

The second example under "Case 1: Illustration to Figure" is a panty shot.

plomme · 2h ago
This is the first time I really don't understand how people are getting good results. On https://aistudio.google.com with Nano Banana selected (gemini-2.5-flash-image-preview) I get - garbage - results. I'll upload a character reference photo and a scene and ask Gemini to place the character in the scene. What it then does is to simply cut and paste the character into the scene, even if they are completely different in style, colours, etc.

I get far better results using ChatGPT for example. Of course, the character seldom looks anything like the reference, but it looks better than what I could do in paint in two minutes.

Am I using the wrong model, somehow??

A_D_E_P_T · 57m ago
No, I've noticed the same.

When Nano Banana works well, it really works -- but 90% of the time the results will be weird or of poor quality, with what looks like cut-and-paste or paint-over, and it also refuses a lot of reasonable requests on "safety" grounds. (In my experience, almost anything with real people.)

I'm mostly annoyed, rather than impressed, with it.

SweetSoftPillow · 2h ago
Play around with your prompt, try asking Gemini 2.5 Pro to improve your prompt before sending it to Gemini 2.5 Flash, retry, and learn what works and what doesn't.
epolanski · 2h ago
+1

I understand the results are non-deterministic but I get absolute garbage too.

Uploaded pics of my (32-year-old) wife and asked it to give her a fringe/bangs to see how she would look. It either refused "because of safety" or, when it complied, the results were horrible; it was a different person.

After many days and tries we got it to make one, but there was no way to tweak the fringe; the model kept returning the same pic every time (with plenty of "content blocked" in between).

SweetSoftPillow · 2h ago
Are you in the gemini.google.com interface? If so, try Google AI Studio instead; there you can disable safety filters.
epolanski · 1h ago
I use ai studio, no way to disable the filters.
minimaxir · 3h ago
I recently released a Python package for easily generating images with Nano Banana: https://github.com/minimaxir/gemimg

Through that testing, there is one prompt engineering trend that was consistent but controversial: both a) LLM-style prompt engineering with Markdown-formatted lists and b) old-school AI image quality syntactic sugar such as "award-winning" and "DSLR camera" are extremely effective with Gemini 2.5 Flash Image, due to its text encoder and larger training dataset, which can now more accurately discriminate which specific image traits are present in an award-winning image and which aren't. I've tried generations both with and without those tricks and they definitely have an impact. Google's developer documentation encourages the latter.

However, taking advantage of the 32k context window (compared to 512 for most other models) can make things interesting. It’s possible to render HTML as an image (https://github.com/minimaxir/gemimg/blob/main/docs/notebooks...) and providing highly nuanced JSON can allow for consistent generations. (https://github.com/minimaxir/gemimg/blob/main/docs/notebooks...)
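
For anyone who wants to try this outside of gemimg, here is a minimal sketch using the plain google-genai Python SDK rather than the package itself (the model name is the one mentioned elsewhere in this thread; the prompt is just a made-up illustration of the Markdown-list plus quality-keyword trick):

    # pip install google-genai pillow
    from io import BytesIO
    from PIL import Image
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    # Markdown-list prompt plus old-school quality keywords.
    prompt = """Generate an image of a kitchen still life.

    - subject: a ceramic bowl of lemons on a wooden table
    - lighting: soft window light from the left
    - style: award-winning food photography, DSLR camera, shallow depth of field
    """

    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",
        contents=prompt,
    )

    # The generated image comes back as inline bytes alongside any text parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save("lemons.png")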

neilv · 31m ago
Unfortunately NSFW in parts. It might be insensitive to circulate the top URL in most US tech workplaces. For those venues, maybe you want to pick out isolated examples instead.

(Example: Half of Case 1 is an anime/manga maid-uniform woman lifting up front of skirt, and leaning back, to expose the crotch of underwear. That's the most questionable one I noticed.)

darkamaul · 3h ago
This is amazing. Not that long ago, even getting a model to reliably output the same character multiple times was a real challenge. Now we’re seeing this level of composition and consistency. The pace of progress in generative models is wild.

Huge thanks to the author (and the many contributors) as well for gathering so many examples; it’s incredibly useful to see them to better understand the possibilities of the tool.

istjohn · 2h ago
Personally, I'm underwhelmed by this model. I feel like these examples are cherry-picked. Here are some fails I've had:

- Given a face shot in direct sunlight with severe shadows, it would not remove the shadows

- Given an old black and white photo, it would not render the image in vibrant color as if taken with a modern DSLR camera. It will colorize the photo, but only with washed out, tinted colors

- When trying to reproduce the 3x3 grid of hair styles, it repeatedly created a 2x3 grid. Finally, it made a 3x3 grid, but one of the nine models was black instead of caucasian.

- It is unable to integrate real images into fabricated imagery. For example, when given an image of a tutu and asked to create an image of a dolphin flying over clouds wearing the tutu, the result looks like a crude photoshop snip and copy/paste job.

autoexec · 1h ago
Even these examples aren't perfect.

The "Photos of Yourself in Different Eras" one said "Don't change the character's face" but the face was totally changed. "Case 21: OOTD Outfit" used the wrong camera. "Virtual Makeup Try-On" messed up the make up. "Lighting Control" messed up the lighting, the joker minifig is literally just SH0133 (https://www.bricklink.com/catalogItemInv.asp?M=sh0133), "Design a Chess Set" says you don't need an input image, but the prompt said to base it off of a picture that wasn't included and the output is pretty questionable (WTF is with those pawns!), etc.

I mean, it's still pretty neat, and could be useful for people without access to photoshop or to get someone started on a project to finish up by hand.

foofoo12 · 2h ago
> I feel like these examples are cherry-picked

I don't know of a demo, image, film, project or whatever where the showoff pieces are not cherry picked.

downboots · 49m ago
Computer graphics playing in my head and I like it! I don't support Technicolor parfaits and those snobby little petit fours that sit there uneaten, and my position on that is common knowledge to everyone in Oceania.
mustaphah · 2h ago
In a side-by-side comparison with GPT-4o [1], they are pretty much on par.

[1] https://github.com/JimmyLv/awesome-nano-banana

Animats · 4h ago
I have two friends who are excellent professional graphic artists and I hesitate to send them this.
SweetSoftPillow · 3h ago
They'd better learn it today rather than tomorrow, even though it might be painful for some who don't like to learn new tools and explore new horizons.
AstroBen · 3h ago
I don't know if "learning this tool" is gunna help..
mitthrowaway2 · 3h ago
Maybe they're better off switching careers? At some point, your customers aren't going to pay you very much to do something that they've become able to do themselves.

There used to be a job people would do, where they'd go around in the morning and wake people up so they could get to work on time. They were called a "knocker-up". When the alarm clock was invented, these people didn't lose their jobs to other knockers-up with alarm clocks; they lost their jobs to alarm clocks.

non_aligned · 3h ago
A lot of technological progress is about moving in the other direction: taking things you can do yourself and having others do it instead.

You can paint your own walls or fix your own plumbing, but people pay others instead. You can cook your food, but you order take-out. It's not hard to sew your own clothes, but...

So no, I don't think it's as simple as that. A lot of people will not want the mental burden of learning a new tool and will have no problem paying someone else to do it. The main thing is that the price structure will change. You won't be able to charge $1,000 for a project that takes you a couple of days. Instead, you will need to charge $20 for stuff you can crank out in 20 minutes with gen AI.

GMoromisato · 3h ago
I agree with this. And it's not just about saving time/effort--an artist with an AI tool will always create better images than an amateur, just as an artist with a camera will always produce a better picture than me.

That said, I'm pretty sure the market for professional photographers shrank after the digital camera revolution.

namibj · 1h ago
After looking at Cases 4, 9, 23, 33, and 61, I think it might be suited to take in several wide-angle pictures or photospheres or such from inside a residence, and output a corresponding floor plan schematic.

If anyone has examples, guides, or anything to save me from pouring unnecessary funds into those API credits just to figure out how to feed it for this kind of task, I'd really appreciate you sharing.

vunderba · 1h ago
I can't provide a definitive answer for this - but I will say that Google's SDK docs state that a single edit request is limited to a maximum of THREE images, so depending on how many you have you might have to sort of use the "Kontext Kludge", aka stitching multiple input images together into a single JPEG.

https://cloud.google.com/vertex-ai/generative-ai/docs/models...
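
Rough sketch of that stitching step, in case it helps (plain Pillow; the helper name and the side-by-side layout are just my own choices, not anything from Google's docs):

    # pip install pillow
    from PIL import Image

    def stitch_horizontally(paths, out_path="stitched.jpg"):
        """Paste several input photos side by side into one JPEG for a single edit request."""
        images = [Image.open(p).convert("RGB") for p in paths]
        # Resize everything to a common height so the strip lines up cleanly.
        height = min(im.height for im in images)
        resized = [im.resize((round(im.width * height / im.height), height)) for im in images]
        strip = Image.new("RGB", (sum(im.width for im in resized), height))
        x = 0
        for im in resized:
            strip.paste(im, (x, 0))
            x += im.width
        strip.save(out_path, quality=92)
        return out_path

    # e.g. stitch_horizontally(["livingroom.jpg", "kitchen.jpg", "hallway.jpg"])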

n8cpdx · 1h ago
Has AI generation of chest hair finally been solved? I think this is the first time I’ve seen a remotely realistic looking result.
eig · 3h ago
While I think most of the examples are incredible...

...the technical graphics (especially the text) are generally wrong. Case 16 is an annotated heart and the anatomy is nonsensical. Case 28 with the tallest buildings has decent images, but has the wrong names, locations, and years.

vunderba · 2h ago
Yeah I think some of them are really more proof of concept than anything.

Case 8 Substitute for ControlNet

The two characters in the final image are VERY obviously not in the instructed set of poses.

SweetSoftPillow · 3h ago
Yes, it's a Gemini Flash model, meaning it's fast and relatively small and cheap, optimized for performance rather than quality. I would not expect mind-blowing capabilities in fine details from this class of models, but still, even in this regard the model is sometimes just surprisingly good.
flysonic10 · 2h ago
I added some of these examples into my Nanna Banana image generator: https://nannabanana.ai
frfl · 3h ago
While these are incredibly good, it's sad to think about the unfathomable amount of abuse, spam, disinformation, manipulation and who knows what other negatives these advancements are gonna cause. It was one thing when you could spot an AI image, but now and moving forward it's going to be increasingly futile to even try.

Almost all "human" interaction online will be subject to doubt soon enough.

Hard to be cheerful when technology will be a net negative overall even if it benefits some.

signatoremo · 2h ago
By your logic email is clearly a net negative, given how much junk it generates - spam, phishing, hate mails, etc. Most of my emails at this point are spams.
frfl · 2h ago
If we're talking objectively, yeah by definition if it's a net negative, it's a net negative. But we can both agree in absolute terms the negatives of email are manageable.

Hopefully you understand the sentiment of my original message, without getting into the semantics. AI advancements, like email when it arrived, are gonna turbocharge the negatives. The difference is in the magnitude of the problem. We're dealing with a whole different scale we have never seen before.

Re: Most of my emails at this point are spams. - 99% of my emails are not spam. Yet AI spam is everywhere else I look online.

DrewADesign · 1h ago
Their argument is a false equivalence. You can’t just say “if you’re saying X is negative, you must believe that Y is negative because some of the negatives could be conceptually similar.” A good-faith cost-benefit analysis would put the costs and risks of an extremely accurate, cheap, on-demand commercial image generation service and those of an entirely open asynchronous worldwide text communication protocol in different universes.
foobarbecue · 1h ago
Man, I hate this. It all looks so good, and it's all so incorrect. Take the heart diagram, for example. Lots of words that sort of sound cardiac but aren't ("ventricar," "mittic"), and some labels that ARE cardiac, but are in the wrong place. The scenes generated from topo maps look convincing, but they don't actually follow the topography correctly. I'm not looking forward to when search and rescue people start using this and plan routes that go off cliffs. Most people I know are too gullible to understand that this is a bullshit generator. This stuff is lethal and I'm very worried it will accelerate the rate at which the populace is getting stupider.
destel · 3h ago
Some examples are mind-blowing. It would be interesting to see whether it can generate web/app designs.
AstroBen · 3h ago
I just tried it for an app I'm working on.. very bad results
stoobs · 2h ago
I'm pretty sure these are cherry-picked out of many generation attempts. I tried a few basic things and it flat out refused to do many of them, like turning a cartoon illustration into a real-world photographic portrait; it kept wanting to create a Pixar-style image. Then when I used an AI-generated portrait as an example, it refused with an error saying it wouldn't modify real-world people...

I then tried to generate some multi-angle product shots from a single photo of an object, and it just refused to do the whole left, right, front, back thing, and kept doing things like a left, a front, another left, and a weird half-back/half-side view combination.

Very frustrating.

SweetSoftPillow · 2h ago
Are you in the gemini.google.com interface? If so, try Google AI Studio instead; there you can disable safety filters.
stoobs · 2h ago
I'm in AI Studio, and weirdly I get no safety settings.

I had them before when I was trying this and yes, I had them turned off.

vunderba · 1h ago
Yeah I don't see them anymore either.

I use the API directly but unless I'm having a "Berenstein Bears moment" I could have sworn those safety settings existed under the Advanced Options in AI Studio a few weeks ago.

AstroBen · 3h ago
The #1 most frustrating part of image models to me has always been their inability to keep the relevant details. Ask to change a hairstyle and you'd get a subtly different person

..guess that's solved now.. overnight. Mindblowing

m3kw9 · 3h ago
The ability to pretty accurately keep the same image from an input is a clear sign of its improved abilities.
moralestapia · 3h ago
Wow, just amazing.

Is this model open? Open weights at least? Can you use it commercially?

SweetSoftPillow · 3h ago
This is Google's Gemini 2.5 Flash model with native image output capability. It's fast, relatively cheap, SOTA quality, and available via API. I think getting this kind of quality in open source models will need some time, probably first from Chinese models and then from Black Forest Labs or Google's open source (Gemma) team.
vunderba · 3h ago
Outside of Google DeepMind open-sourcing the code and weights of AlphaFold, I don't think they've released any of their GenAI stuff (Imagen, Gemini, Flash 2.5, etc).

The best multimodal models that you can run locally right now are probably Qwen-Edit 20b, and Kontext.Dev.

https://qwenlm.github.io/blog/qwen-image-edit

https://bfl.ai/blog/flux-1-kontext-dev

SweetSoftPillow · 3h ago
Google also open-sources Gemma LLMs and embedding models, which were quite good at the time of release (SOTA or near-SOTA in the open source field).
vunderba · 3h ago
Oh very nice I wasn't aware of that [1] [2]. Adding the links as well.

[1] https://deepmind.google/models/gemma

[2] https://huggingface.co/google/gemma-7b

minimaxir · 3h ago
Flux Kontext has similar quality, is open-weight, and the outputs can be used commercially; however, prompt adherence is good but not as good.
ChrisArchitect · 2h ago
sigh

so many little details are off when the instructions are clear and/or the details are there. Brad Pitt jeans? The results are not in the same style and are missing clear details which should be expected to just translate over.

Another one where the prompt ended with output in a 16:9 ratio. The image isn't in that ratio.

The results are visually something but then still need so much review. Can't trust the model. Can't trust people lazily using it. Someone mentioned something about 'net negative'.

bflesch · 58m ago
You need to wait until someone makes the exact picture you want, annotates it in their Android photo library, and Google uses it to train their AI models. But then they will be able to provide you the perfect result for your query, totally done with AI! ;)
istjohn · 2h ago
Yes, almost all of the examples are off in one way or another. The viewpoints don't actually match the arrow directions, for example. And if you actually use the model, you will see that even these examples must be cherry-picked.
bflesch · 55m ago
The way you formulated your message just made me realize that we got somehow duped into accepting the term "model" (as in "scientific model") as a valid word for this AI stuff. A scientific model has a theoretical foundation and specific configuration parameters.

The way current AI is set up, you can't even reliably adjust the position of the sun.