I tried to reproduce the fork/spaghetti example and the fashion bubble example, and neither looks anything like what they present. The outputs are very consistent, too. I am copying/pasting the images out of the advertisement page so they may be lower resolution than the original inputs, but otherwise I'm using the same prompts and getting a wildly different result.
It does look like I'm using the new model, though. I'm getting image editing results that are well beyond what the old stuff was capable of.
fariszr · 1h ago
This is the gpt 4 moment for image editing models.
Nano banana aka gemini 2.5 flash is insanely good.
It made a 171 elo point jump in lmarena!
Just search nano banana on Twitter to see the crazy results. An example. https://x.com/D_studioproject/status/1958019251178267111
It seems like every combination of "nano banana" is registered as a domain with their own unique UI for image generation... are these all middle actors playing credit arbitrage using a popular model name?
bonoboTP · 49m ago
I'd assume they are just fake, take your money and use a different model under the hood. Because they already existed before the public release. I doubt that their backend rolled the dice on LMArena until nano-banana popped up. And that was the only way to use it until today.
ceroxylon · 36m ago
Agreed, I didn't mean to imply that they were even attempting to run the actual nano banana, even through LMarena.
There is a whole spectrum of potential sketchiness to explore with these, since I see a few "sign in with Google" buttons that remind me of phishing landing pages.
koakuma-chan · 58m ago
Why is it called nano banana?
ZephyrBlu · 27m ago
I'm pretty sure it's because an image of a banana under a microscope generated by the model went super viral
Jensson · 54m ago
Engineers often have silly project names internally, then some marketing team rewrites the name for public release.
dcre · 46m ago
Alarming hands on the third one: it can't decide which way they're facing. But Gemini didn't introduce that, it's there in the base image.
echelon · 45m ago
> This is the gpt 4 moment for image editing models.
No it's not.
We've had rich editing capabilities since gpt-image-1, this is just faster and looks better than the (endearingly? called) "piss filter".
Flux Kontext, SeedEdit, and Qwen Edit are all also image editing models that are robustly capable. Qwen Edit especially.
Flux Kontext and Qwen are also possible to fine tune and run locally.
We've left the days of Dall-E, Stable Diffusion, and Midjourney of "prompt-only" text to image generation.
It's also looking like tools like ComfyUI are less and less necessary as those capabilities are moving into the model itself.
raincole · 41m ago
In other words, this is the gpt 4 moment for image editing models.
Gpt4 isn't "fundamentally different" from gpt3.5. It's just better. That's the exact point the parent commenter was trying to make.
I've tested it on Google AI Studio since it's available to me (which is just a few hours so take it with a grain of salt). The prompt comprehension is uncannily good.
My test is going to https://unsplash.com/s/photos/random and pick two random images, send them both and "integrate the subject from the second image into the first image" as the prompt. I think Gemini 2.5 is doing far better than ChatGPT (admittedly ChatGPT was the trailblazer on this path). FluxKontext seems unable to do that at all. Not sure if I were using it wrong, but it always only considers one image at a time for me.
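For reference, here's a minimal sketch of that two-image test against the API, assuming the google-genai Python SDK and the gemini-2.5-flash-image-preview model id mentioned elsewhere in this thread (file names are placeholders):

    from google import genai
    from PIL import Image

    client = genai.Client()  # assumes GOOGLE_API_KEY is set in the environment

    # Two random images: the scene, plus the subject to integrate into it.
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",
        contents=[
            Image.open("scene.jpg"),
            Image.open("subject.jpg"),
            "Integrate the subject from the second image into the first image.",
        ],
    )

    # The response can mix text and image parts; save any image bytes.
    for part in response.candidates[0].content.parts:
        if part.inline_data:
            with open("integrated.png", "wb") as f:
                f.write(part.inline_data.data)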
notsylver · 1h ago
I digitised our family photos, but a lot of them have damage (shifted colours, spills, fingerprints on film, spots) that is difficult to correct across so many images. I've been waiting for image gen to catch up enough to be able to repair them all in bulk without changing details, especially faces. This looks very good at restoring images without altering details or adding them where they are missing, so it might finally be time.
Almondsetat · 1h ago
All of the defects you have listed can be automatically fixed by using a film scanner with ICE and software like VueScan that automatically performs the scan and the restoration. Feeding hundreds (thousands?) of photos to an experimental proprietary cloud AI that will give you back subpar compressed pictures with who knows how many strange artifacts seems unnecessary.
notsylver · 42m ago
I scanned everything into 48-bit RAW and treat those as the originals, including the IR scan for ICE and a lower quality scan of the metadata. The problem is sharing them - important images I manually repair and export as JPEG, which is time consuming (15-30 minutes per image, there are about 14000 total), so if it's "generic family gathering picture #8228" I would rather let AI repair it, assuming it doesn't butcher faces and other important details. Until then I made a script that exports the raws with basic cropping and colour correction, but it can't fix the colours, which is the biggest issue.
zwog · 1h ago
Do you happen to know some software to repair/improve video files? I'm in the process of digitizing a couple of Video 2000 and VHS cassettes of childhood memories for my mom, who is starting to suffer from dementia. I have a pretty streamlined setup for digitizing the videos but I'd like to improve the quality a bit.
nycdatasci · 9m ago
I've used products from topazlabs.com for the same problem and have generally been happy with them.
notsylver · 37m ago
I didn't do any videos, just pictures, but considering how little I found for pictures, I doubt you'll find much.
actionfromafar · 17m ago
VHSdecode if you want a rabbit hole.
Barbing · 1h ago
Hope it works well for you!
In my eyes, one specific example they show (“Prompt: Restore photo”) deeply AI-ifies the woman’s face. Sure it’ll improve over time of course.
notsylver · 21m ago
I tried a dozen or so images. For some it definitely failed (altering details, leaving damage behind, needing a second attempt to get a better result) but on others it did great. With a human in the loop approving the AI version or marking it for manual correction I think it would save a lot of time.
This is the first image I tried:
https://i.imgur.com/MXgthty.jpeg (before)
https://i.imgur.com/Y5lGcnx.png (after)
Sure, I could manually correct that quite easily and would do a better job, but that image is not important to us, it would just be nicer to have it than not.
I'll probably wait for the next version of this model before committing to doing it, but it's exciting that we're almost there.
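A rough sketch of such a human-in-the-loop batch, assuming the google-genai Python SDK (the prompt and paths are made up), with the approval pass kept outside the script:

    from pathlib import Path
    from google import genai
    from PIL import Image

    client = genai.Client()
    PROMPT = ("Restore this photo: fix colour shifts, spots, and damage "
              "without altering faces or inventing missing details.")

    src_dir, out_dir = Path("scans"), Path("restored")
    out_dir.mkdir(exist_ok=True)

    for src in sorted(src_dir.glob("*.jpg")):
        out = out_dir / (src.stem + ".png")
        if out.exists():
            continue  # skip already-processed scans so the batch is resumable
        response = client.models.generate_content(
            model="gemini-2.5-flash-image-preview",
            contents=[Image.open(src), PROMPT],
        )
        for part in response.candidates[0].content.parts:
            if part.inline_data:
                out.write_bytes(part.inline_data.data)

    # The review pass stays manual: compare scans/ and restored/ side by side,
    # approve the AI version or flag the file for hand correction.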
indigodaddy · 48m ago
Another question/concern for me: if I restore an old picture of my Gramma, will my Gramma (or a Gramma that looks strikingly similar) ever pop up on other people's "give me a random Gramma" prompts?
> I've been waiting for image gen to catch up enough to be able to repair them all in bulk without changing details, especially faces.
I've been waiting for that, too. But I'm also not interested in feeding my entire extended family's visual history into Google for it to monetize. It's wrong for me to violate their privacy that way, and also creepy to me.
Am I correct to worry that any pictures I send into this system will be used for "training?" Is my concern overblown, or should I keep waiting for AI on local hardware to get better?
adidoit · 1h ago
Very impressive.
I have to say while I'm deeply impressed by these text to image models, there's a part of me that's also wary of their impact. Just look at the comments beneath the average Facebook post.
postalcoder · 1h ago
I have been testing Google's SynthID for images and while it isn't perfect, it is very good, insofar as I felt some relief from that same creeping dread over what these images will do to perceived reality.
It survives a lot of transformation like compression, cropping, and resizing. It even survives over alterations like color filtering and overpainting.
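That kind of robustness is easy to probe with Pillow; here is a sketch of the transformation battery (there's no public local SynthID detector that I know of, so the variants just get saved for upload to Google's verification tool):

    from io import BytesIO
    from PIL import Image, ImageEnhance

    img = Image.open("generated.png").convert("RGB")
    w, h = img.size

    probes = {}
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=40)  # heavy recompression
    buf.seek(0)
    probes["jpeg-q40"] = Image.open(buf)
    probes["crop-center"] = img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))
    probes["resize-half"] = img.resize((w // 2, h // 2))
    probes["color-shift"] = ImageEnhance.Color(img).enhance(1.8)  # strong color filter

    for name, variant in probes.items():
        variant.convert("RGB").save(f"probe_{name}.png")  # check each for the watermark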
sigmar · 1h ago
facebook isn't going to implement detection though. Many (if not most) of the viral pictures are AI-generated. and facebook is incentivized to let their users get fooled to generate endless scrolling
paul7986 · 1h ago
Along with those being fooled, there are many comments saying this is fake, AI trash, etc. That portion of the commenters is teaching the ignorant, and soon no one will believe what they see on the Internet as real.
bonsai_bar · 31m ago
> soon no one will believe what they see on the Internet as real.
Now is that so bad?
knicholes · 1h ago
I got scammed for $15k of BTC last weekend during the (failed) SpaceX launch. I believed the deepfake of Elon and transferred it over. The tech is very convincing, and the attacks are increasingly sophisticated.
yifanl · 1h ago
This presumes that you're okay with giving the real Elon your wallet but not a fake Elon, but why?
Jensson · 38m ago
Because it isn't worth real Elon's time to run these scams.
kamranjon · 1h ago
Would you consider writing a blog post about this experience? I'm incredibly interested in learning more details about how this unfolded.
paul7986 · 1h ago
Well just go on this guy's lawn and you will find your answer lol
Imustaskforhelp · 1h ago
Please pardon me since I don't know if this is satirical or not. I wish you could clarify.
Because if this is real, then the world is cooked.
If not, the only reason I believe it's a joke is that you are on hackernews, so I think either you are joking or the tech has gotten so convincing that even people on hackernews (which I hold to a fair standard) are getting scammed.
I have a lot of questions if true, and I am sorry for your loss if this isn't satire, but I'd love it if you could tell me whether it's a satirical joke or not.
runarberg · 1m ago
[delayed]
bauruine · 1h ago
I guess it was something like [0] The Nigerian prince is now a deep fake Elon but the concept is the same. You need to send some money to get way more back.
[0]: https://www.ncsc.admin.ch/ncsc/en/home/aktuell/im-fokus/2023...
hm, but isn't it wild thinking that elon is talking to you and asking you for 15k, like bro has the money of his lifetime, why would he ask you?
It doesn't make that much sense idk
atrus · 31m ago
I remember watching the SpaceX channel on youtube, which isn't a legit source. AI Elon basically says "I want to help make bitcoin more popular, let me show you how easy it is to transfer money around with btc. Send me $X and I'll send you back $2X!" It's very in line with a typical Elon message (I'll give you 1 million to vote R), and it's on a channel called SpaceX. It's pretty believable.
Granted I played Runescape and EvE as a kid, so any double-isk scams are immediate red flags.
Jensson · 52m ago
Even Elon could lose his credit card or something, the story they spin is always something like that "I am rich but in a pickle, please send some money here and then I'll send you back 10x as much tomorrow when I get back to my account", but of course they never send it back.
Edit: But of course Elon would call someone he knows rather than a stranger, rich people know a lot of people so of course they would never contact you about this.
https://en.wikipedia.org/wiki/Advance-fee_scam
As always, it is the replies that make it worth it. GopherGeyser strikes again!
michelb · 1h ago
These SpaceX scams are rampant on youtube and highly, highly lucrative. It’s crazy and you have to be very vigilant, as whatever is promised lines up with Elon’s MO.
rangerelf · 48m ago
Why would anyone give them any money AT ALL?
It's not like they're poor or struggling.
Am I missing something?
nickthegreek · 45m ago
it requires zero vigilance if you don't play the game.
fxtentacle · 1h ago
Plot twist: It wasn't a deepfake.
You sent your wallet to the real Elon and he used it as he saw fit. ;)
pjerem · 44m ago
That’s what they said : they have been scammed !
AbraKdabra · 55m ago
I don't mean to be rude, but this sounds like natural selection doing its work.
umbra07 · 18m ago
That's the sort of statement that remains extremely rude even if you try and prefix it with "I don't mean to be rude".
lionkor · 1h ago
Not to victim-shame or anything, but that sounds more like more than one safety mechanism failed, the convincing tech only being a rather small part of it?
hansonkd · 1h ago
I think the biggest failure is on the part of the companies hosting these streams.
It's been a while, but I remember seeing streams of Elon offering to "double your bitcoin"; the reasoning was that he wanted to increase adoption and load test the network. Just send some bitcoin to some address and he will send it back doubled!
But the thing was, it was on youtube, hosted on an imposter Tesla page. The stream had been going on for hours and had over ten thousand people watching live. If you searched "Elon Musk Bitcoin" on Google during the stream, Google actually pushed that video as the first result.
Say what you want about the victims of the scam, but I think it should be pretty easy for youtube or other streaming companies to have a simple rule to simply filter all live streams with Elon Musk + (Crypto|BTC|etc) in the title and be able to filter all youtube pages with "Tesla" "SpaceX" etc in the title.
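The title rule itself would only be a few lines; here is a sketch (the keyword lists are illustrative, and matches should go to human review rather than automatic takedown):

    import re

    # Flag live streams that pair an impersonated brand/person with crypto bait.
    IMPERSONATED = re.compile(r"\b(elon\s*musk|tesla|spacex)\b", re.IGNORECASE)
    CRYPTO_BAIT = re.compile(r"\b(bitcoin|btc|crypto|giveaway|double)\b", re.IGNORECASE)

    def is_suspect(title: str) -> bool:
        return bool(IMPERSONATED.search(title) and CRYPTO_BAIT.search(title))

    assert is_suspect("Elon Musk: BTC giveaway LIVE")
    assert not is_suspect("SpaceX Starship launch replay")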
lionkor · 1h ago
I feel like somehow that would lessen it, but not really help much? There are obviously people with too much money in BTC who are trying to take any gamble to increase its value. It sounds like a deeper societal issue.
jfoster · 8m ago
You are right that they might never be able to get it to 0, but shouldn't they lessen it if a simple measure like the one described can prevent a bunch of people from getting fooled by the scam?
amatajohn · 46m ago
the modern turing test:
am i getting scammed by a billionaire or an AI billionaire?
UltraSane · 45m ago
On the balance of probabilities it being a scam is vastly more likely than Elon actually wanting to contact you. Why would Elon need $15k in bitcoin?
It seems like money naturally flows from the gullible to the Machiavellian.
pennaMan · 1h ago
hey, I got a bridge to sell you, was $20k but we can lower it to $15k if you pay in BTC
testplzignore · 1h ago
You're paying too much for your bridges man. Who's your bridge guy?
dkiebd · 39m ago
That wasn’t a bridge.
MitPitt · 1h ago
Facebook comments are obviously botted too
nikanj · 43m ago
The comments are probably AI-generated too, because a site that seems to have lots of other people on it is more appealing than an empty wasteland
matsemann · 59m ago
Half the time I ask Gemini to generate some image it claims it doesn't have the capability. And in general I've felt it's so hard to actually use the features Google announce? Like, a third of them is in one product, some in another which I can't use, and no idea what or where I should pay to get access. So confusing.
Al-Khwarizmi · 50m ago
Yeah, in fact the website says "Try it in Gemini" and I'm not sure if I'm already trying it or not - if I choose Gemini 2.5 Flash in the regular Gemini UI, I'm using this?
oliwary · 25m ago
Also very confused at this... It told me "I'm unable to create images of specific individuals in different settings." I wish it would at least say somewhere which model we are using at the moment.
throwup238 · 40m ago
It’s going to be a messy rollout as usual. The web app (gemini.google.com) shows “Images with Imagen” for me under tools for 2.5 flash but I just tried a few image edits and remixes in the iOS app and it looks like it’s been updated to this model.
sega_sai · 44m ago
I think not, because at least in AI Studio there is a dedicated gemini-2.5-flash-image-preview model. So I am assuming it is not available in the standard Gemini chat window.
mkl · 2h ago
That lamp example is pretty impressive (though it's hard to know how cherry-picked it is). The lamp is plugged in, it's lighting the things in the scene, it's casting shadows.
lifthrasiir · 2h ago
FYI, this is the famed nano-banana model, which has now been renamed to gemini-2.5-flash-image-preview in LMArena.
For people like me that don’t know what nano-banana is.
mock-possum · 1h ago
Wow I hate the ‘voice’ in that article - big if true though.
daemonologist · 1h ago
I suspect the "voice" is a language model with a bad system prompt. (Possibly the author's own words run through an LLM, to be charitable.)
3036e4 · 27m ago
It's medium.com. YouTube comments quality text packaged as clickbait articles for some revenue share. It was always slop, even without LLMs. Do they even bother with paying human authors now or is the entire site just generated? That would probably be cheaper and improve quality.
postscapes1 · 1h ago
This is what i came here to find out. Thanks.
johnfn · 21m ago
I naively went onto Gemini in order to try to use the new model and had what I could only describe as the worst conversation I've had with an AI since GPT 3.5[1]. Is this really the model that's on top of the leaderboard right now? This feels about 500 ELO points worse than my typical conversation with GPT 5.
Edit: OK, OK, I actually got it to work, and yes, I admit the results are incredible[2]. I honestly have no idea what happened with Pro 2.5 the first time.
[1]: https://g.co/gemini/share/5767894ee3bc [2]: https://g.co/gemini/share/a48c00eb6089
sometimes these bots just go awry. i wish you could checkpoint spots in a conversation so you could replay from that point, maybe with a push in the latent space or a new seed.
GaggiX · 17m ago
"Google AI Studio" and select the model
byteknight · 19m ago
Are you doing roleplay?
johnfn · 16m ago
What?
abdusco · 1h ago
I love that it's substantially faster than ChatGPT's image generation. That takes ages, so slow that the app tells you not to wait and sends you a notification when the generation finishes.
andrewinardeer · 1h ago
"Generate an image of OpenAI investors after using Gemini 2.5 Flash Image"
radarsat1 · 1h ago
I've had a task in mind for a while now that I've wanted to do with this latest crop of very capable instruction-following image editors.
Without going into detail, basically the task boils down to, "generate exactly image 1, but replace object A with the object depicted in image 2."
Where image 2 is some front-facing generic version, ideally I want the model to place this object perfectly in the scene, replacing the existing object, that I have identified ideally exactly by being able to specify its position, but otherwise by just being able to describe very well what to do.
For models that can't accept multiple images, I've tried a variation where I put a blue box around the object that I want to replace, and paste the object that I want it to put there at the bottom of the image on its own.
I've tried some older models, and ChatGPT, also qwen-image last week, and just now, this one. They all fail at it. To be fair, this model got pretty damn close: it replaced the wrong object in the scene, but it was close to the right position, and the object was perfectly oriented and lit. But it was wrong. (Using the bounding box method, it should have been able to identify exactly what I wanted to do. Instead it removed the bounding box and replaced a different object in a different but close-by position.)
Are there any models that have been specifically trained to be able to infill or replace specific locations in an image with reference to an example image? Or is this just like a really esoteric task?
So far all the in-filling models I've found are only based on text inputs.
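For the record, the bounding-box prep described above is straightforward with Pillow; a sketch (coordinates and file names are placeholders):

    from PIL import Image, ImageDraw

    scene = Image.open("scene.png").convert("RGB")
    reference = Image.open("reference.png").convert("RGB")

    # Mark the object to be replaced with a blue box.
    draw = ImageDraw.Draw(scene)
    draw.rectangle((120, 80, 260, 210), outline=(0, 0, 255), width=4)

    # Paste the front-facing reference object below the scene on one canvas,
    # so a single-image editor receives both in one input.
    canvas = Image.new("RGB", (scene.width, scene.height + reference.height), "white")
    canvas.paste(scene, (0, 0))
    canvas.paste(reference, ((scene.width - reference.width) // 2, scene.height))
    canvas.save("composite.png")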
rushingcreek · 1h ago
Yes! There is a model called ACE++ from Alibaba that is specifically trained to replace masked areas with a reference image. We use it in https://phind.design. It does seem like a very esoteric and uncommon task though.
ceroxylon · 52m ago
I don't think it is that esoteric, that sounds like deepfake 101. If you don't mind answering, does Phind do anything to prevent / mitigate this?
kemyd · 49m ago
I don't get the hype. Tested it with the same prompts I used with Midjourney, and the results are worse than in Midjourney a year ago. What am I missing?
bonoboTP · 48m ago
The hype is about image editing, not pure text-to-image. Upload an input image, say what you want changed, get the output. That's the idea. Much better preservation of characters and objects.
appenz · 32m ago
I tested it against Flux Pro Kontext (also image editing) and while it's a very different style and approach I overall like Flux better. More focus on image consistency, adjusts the lighting correctly, fixes contradictions in the image.
SirMaster · 24m ago
Can it edit the photo at the original resolution?
Most of my photos these days are 48MP and I don't want to lose a ton of resolution just to edit them.
kemyd · 47m ago
Thanks for clarifying this. That makes a lot more sense.
cdrini · 47m ago
Hmm, I think the hype is mainly for image editing, not generating. Although note I haven't used it! How are you testing it?
kemyd · 44m ago
I tested it with two prompts:
// In this one, Gemini doesn't understand what "cinematic" is
"A cinematic underwater shot of a turtle gracefully swimming in crystal-clear water [...]"
// In this one, the reflection in the water in the background has different buildings
"A modern city where raindrops fall upward into the clouds instead of down, pedestrians calmly walking [...]"
Midjourney created both perfectly.
echelon · 37m ago
As others have said, this is an image editing model.
Editing models do not excel at aesthetics, but they can take your Midjourney image, adjust the composition, and make it perfect.
These types of models are the Adobe killer.
kemyd · 17m ago
Noted that! The editing capabilities are impressive. I was excited for image gen because of the API (Midjourney doesn't have it yet).
t_mahmood · 16m ago
After the rugpull of Android, are we really going to trust Google with anything?
jeffbee · 14m ago
What does the first phrase even mean?
jfoster · 6m ago
I think it's a reference to this & similar things:
https://9to5google.com/2025/08/25/android-apps-developer-ver...
If this can do character consistency, that's huge. Just make it do the same for video...
ACCount37 · 1h ago
It's probably built on reused "secret sauce" from the video generation models.
beyonddream · 1h ago
“Internal server error
Sorry, there seems to be an error. Please try again soon.”
Never thought I would ever see this on a Google-owned website!
lionkor · 1h ago
A cheap quip would be "it's vibe-coded", but that might actually very well be the case at this point!
reaperducer · 14m ago
> Never thought I would ever see this on a Google-owned website!
Really? Google used to be famous not only for its errors, but for its creative error pages. I used to have a google.com bookmark that would send an animated 418.
bsenftner · 59m ago
All these image models are time vampires and need to be looked at with very suspicious eyes. Try to make a room - that's easy, now try to make multiple views of the same room - next to impossible. If one is intending to use these image models for anything that requires consistency of imagery, forget it.
dpoloncsak · 2h ago
I've been looking for a whitepaper or something.
So far I've found this...which is not a whitepaper but seems relevant
https://developers.googleblog.com/en/introducing-gemini-2-5-...
It seems like this is 'nano-banana' all along
Yes, they mention that the model is aka nano-banana in the blogpost
chadcmulligan · 35m ago
This is technically impressive though I really wish they'd choose other professions to automate than graphic design.
reaperducer · 18m ago
AI is supposed to set us all free. Yet, so far all the tech companies have done is eliminate the jobs of the lowest-paid people (artists, writers, photographers, designers) and transfer that money to billionaires. Yay.
anthonypasq · 14m ago
[Plows] are supposed to set us all free. Yet, so far all the tech companies have done is eliminate the jobs of the lowest-paid people ([field hands]) and transfer that money to landowners. Yay.
modeless · 1h ago
This model is very impressive. Yesterday (as nano-banana) I gave it a photo of an indoor scene with a picture hanging on a wall, and asked it to replace the picture on the wall with a copy of the whole photo. It worked perfectly the first time.
It didn't succeed in doing the same recursively, but it's still clearly a huge advance in image models.
yuchana · 26m ago
The progress is insanely good, but imagine the competition between engineers, especially with so many people taking up courses in AI and CS.
qoez · 2h ago
Anyone know how it handles '1920s nazi officer'? They stopped doing humans for a while but now I see they're back so I wonder how they're handling the criticism they got from that
napo · 2h ago
it said: "I can create images about lots of things but not that. Can I try a different one for you?"
napo · 2h ago
when giving more context it replied:
"""
Unfortunately, I can't generate images of people. My purpose is to be helpful and harmless, and creating realistic images of humans can be misused in ways that are harmful. This is a safety policy that helps prevent the generation of deepfakes, non-consensual imagery, and other problematic content.
If you'd like to try a different image prompt, I can help you create images of a wide range of other subjects, such as animals, landscapes, objects, or abstract concepts.
"""
bastawhiz · 2h ago
What a weird rejection. You have to scroll pretty far in the article to see an example output that doesn't have a realistic depiction of a person.
geysersam · 1h ago
It's unfortunate they can't just explain the real reason they don't want to generate the image:
"Unfortunately I'm not able to generate images that might cause bad PR for Alphabet(tm) or subsidiaries. Is there anything else I can generate for you?"
tanaros · 1h ago
The rejection message doesn’t seem to be accurate. I tried “happy person” as a prompt in AI Studio and it generated a happy human without any complaints.
It’s possible that they relaxed the safety filtering to allow humans but forgot to update the error message.
Der_Einzige · 1h ago
The moment the weights are on huggingface someone will orthogonalize/abliterate the model and make it uncensored.
rvnx · 1h ago
BigBanana would be a good name for that future OnlyFans model
martythemaniak · 2h ago
What is a "1920s nazi officer" what do they look like?
I was able to upload my kids' back-to-school photos and ask nano-banana to turn them into a goth, an '80s workout girl, and a tracksuit mafioso. The results were incredibly believable, and I was able to prank my mom with them!
https://postimg.cc/xX9K3kLP
runarberg · 9m ago
Still fails the "full glass of wine" test, and still shows many of the artifacts typical of AI generated images, like nonsensical text, misplacement of objects, etc.
To be honest I am kind of glad. As AI generated images proliferate, I am hoping it will be easier for humans to call them out as AI.
idiotsecant · 10m ago
This is going to be so helpful for all the poorly photoshopped Chinese junk eBay listings.
elorant · 1h ago
I have a certain use case for such image generators: feed them an entire news article I fetch from the BBC and ask for an image to accompany the article. Thus far only Midjourney managed to understand context. And now this, which is even more impressive. We live in interesting times.
The response was a summary of the article that was pretty good, along with an image that dagnabbit, read the assignment.
stuckinhell · 1h ago
Is this the "nano banana" thing the art ai world was going crazy about recently ?
SweetSoftPillow · 1h ago
Yes it is
kumarm · 1h ago
Seems to be failing at API Calls right now with "You exceeded your current quota, please check your plan and billing details. For more information on this error,"
Hope they get API issues resolved soon.
mindprince · 2h ago
What is the difference between Gemini Flash Image models and the Imagen models?
og_kalu · 1h ago
Imagen is a diffusion text to image model. You write some text that describes your image, you get an image out and that's it.
Flash Image is an image (and text) predicting large language model. In a similar fashion to how trained LLMs can manipulate/morph text, this can do that for images as well. Things like style transfer, character consistency etc.
You can communicate with it in a way you can't for imagen, and it has a better overall world understanding.
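The difference shows up in the API shape too; a sketch assuming the google-genai Python SDK, where the image model can be driven as a multi-turn chat that keeps editing the same picture (model id as elsewhere in the thread; prompts are made up):

    from google import genai
    from PIL import Image

    client = genai.Client()
    chat = client.chats.create(model="gemini-2.5-flash-image-preview")

    # Turn 1: start from an uploaded photo.
    chat.send_message([Image.open("cat.jpg"), "Put a tiny wizard hat on the cat."])

    # Turn 2: refine conversationally -- the chat history carries the image
    # state, so there is no re-upload. Imagen has no equivalent of this.
    response = chat.send_message("Keep everything else the same, but make the hat red.")

    for part in response.candidates[0].content.parts:
        if part.inline_data:
            with open("cat_red_hat.png", "wb") as f:
                f.write(part.inline_data.data)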
raincole · 57m ago
Imagen: Stable Diffusion, but by Google
Gemini Flash Image: ChatGPT image, but by Google
simianwords · 1h ago
I like it but it is very restricted. I can't modify people's faces etc.
sandreas · 1h ago
I wonder if this could be used for preprocessing documents before doing OCR...
patates · 2h ago
It seems that they still block access from Europe, or from Germany at least.
elorant · 2h ago
I can access it from Greece through AI Studio just fine.
beklein · 1h ago
It works fine in OpenRouter
https://openrouter.ai/google/gemini-2.5-flash-image-preview
punkpeye · 2h ago
Use one of the router services
Narciss · 1h ago
Use it on fal.ai
kumarm · 1h ago
Since API currently is not working (seems rate limits not set for Image Generation yet) I tried on fal.
Definitely inferior to results I see on AI Studio and image generation time is 6s on AI Studio vs 30 seconds on Fal.AI
echelon · 34m ago
> Definitely inferior to results
Quality or latency?
kridsdale1 · 2h ago
Get less contradictory regulations, then.
kneegerm · 1h ago
They vote [well they don't] for it, then they complain, then they downvote and seethe. The European experience.
rvnx · 1h ago
In the EU they forbid us newspapers from non-approved countries, impose cookie banners everywhere, and now block porn. Soon they will forbid some AI models which haven't passed EU censorship ("safety") validation. Because we all know that governments (or even Google with Android) are better at knowing what is safest for you.
https://digital-strategy.ec.europa.eu/en/news/eu-rules-gener...
Inconceivable that anyone would dare to criticize the regime. Have you already filed a report, comrade?
rvnx · 2m ago
Yes, I filed the report criticizing China and Russia. The bonus social credit points should hit my account by Monday.
therealmarv · 1h ago
What is the max input and output resolution of images?
This is why I'm sticking mostly to Adobe Photoshop's AI editing because there are no restrictions in that regard.
abdusco · 1h ago
Around 1 megapixel, AFAICT.
uejfiweun · 56m ago
This is pretty remarkable, I'm having a lot of fun playing around with this. Kudos to Google.
keepamovin · 1h ago
Those examples are gorgeous and amazing. This is really cool.
Narciss · 1h ago
Nano banana is here!
mclau157 · 1h ago
I could see this destroying a lot of jobs like photography, editing, marketing, etc.
bityard · 30m ago
These jobs won't go away. Power tools didn't destroy carpentry. Computers didn't destroy math. But workers who don't embrace these new tools will probably get left behind by those who do.
asdev · 1h ago
Looks like AI image generation is converging to a local maximum as well
GaggiX · 1h ago
An image seems to be 256 tokens looking at the AI Studio tab, so you could generate 3906.25 images per 1M tokens, which seems like a lot if I'm not wrong somewhere.
Edit: the blog post is now loading and reports "1290 output tokens per image" (about 775 images per 1M tokens), even though AI Studio said something different.
lyu07282 · 1h ago
still fails at analog clocks, if anyone else was also wondering