Posted by gaws 10/28/2025

Generative AI Image Editing Showdown (genai-showdown.specr.net)
342 points | 77 comments
minimaxir 10/28/2025|
Everyone is sleeping on Gemini 2.5 Flash Image / Nano Banana. As shown in the OP, it's substantially more powerful than most other models while staying at the same price per image, and due to its text encoder it can handle significantly larger and more nuanced prompts to get exactly what you want. I open-sourced a Python package for generating from it with examples (https://github.com/minimaxir/gemimg) and am currently working on a blog post with even more representative examples. Google also allows free generations with aspect ratio control in AI Studio: https://aistudio.google.com/prompts/new_chat
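For reference, a minimal sketch of calling the model directly through the google-genai Python SDK (this isn't gemimg's API; the model id and response handling here are assumptions, so check the current docs):

  from google import genai

  client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

  response = client.models.generate_content(
      model="gemini-2.5-flash-image",  # assumed model id
      contents="A photorealistic corgi wearing a tiny chef's hat, studio lighting",
  )

  # Image data comes back as inline_data parts alongside any text parts.
  for part in response.candidates[0].content.parts:
      if part.inline_data is not None:
          with open("out.png", "wb") as f:
              f.write(part.inline_data.data)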

That said, I am surprised Seedream 4.0 beat it in these tests.

daemonologist 10/28/2025||
I don't think people are really sleeping on it - nano-banana more or less went viral when it first came out. I'd argue that aside from the capabilities built into ChatGPT (with the Ghibli craze and whatnot), it's the best-known image editing model.
minimaxir 10/29/2025||
It's a weird situation where the Gemini mobile app hit #2 on the app stores because of free Nano Banana, but no one ever talks about it, and most disclosed image generations I've seen are still from ChatGPT.
ec109685 10/29/2025||
Google Photos should just include the feature. It's kinda buried in Gemini.

Google is so weirdly non-integrated.

piquadrat 10/29/2025|||
They announced that Nano Banana will be integrated into Google Photos a couple of weeks ago.

https://blog.google/technology/ai/nano-banana-google-product...

troupo 10/29/2025|||
> It’s kinda buried in Gemini.

> Google is so weirdly non-integrated.

Where by "non-integrated" you mean they shove "try Gemini", "have you tried Gemini", "Gemini is here", "use Gemini" into every single product they have?

ec109685 10/29/2025||
It is terrible in all those services.
vunderba 10/29/2025|||
> That said, I am surprised Seedream 4.0 beat it in these tests.

OP here. While Seedream did have the edge in adherence, it also tends to introduce slight (but noticeable) color gradation changes. It's not a huge deal for me, but it might be for other people depending on their goals, in which case NanoBanana would be the better choice.

cosama 10/28/2025|||
I was trying to use Gemini 2.5 Flash Image / Nano Banana to tidy up a picture of my messy kitchen. It failed horribly on my first attempt; I was quite surprised how much trouble it had with this simple task (similar to cleaning up the street in the post). On my second attempt I had it first analyze the image to point out all the items cluttering the space, and then in a second prompt had it remove all those items. That worked much better, showing how important prompt engineering is.
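In code, the two-step flow looks roughly like this - a sketch using the google-genai SDK, where the model ids and exact prompt wording are assumptions rather than what I actually ran:

  from google import genai
  from PIL import Image

  client = genai.Client(api_key="YOUR_API_KEY")
  kitchen = Image.open("messy_kitchen.jpg")

  # Step 1: have the model enumerate the clutter before touching any pixels.
  analysis = client.models.generate_content(
      model="gemini-2.5-flash",  # text-only analysis pass
      contents=[kitchen, "List every item cluttering the counters and floor."],
  )

  # Step 2: feed that list back in as an explicit, bounded edit instruction.
  edit = client.models.generate_content(
      model="gemini-2.5-flash-image",  # assumed image-editing model id
      contents=[kitchen, "Remove the following items and change nothing else: "
                + analysis.text],
  )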
veunes 10/31/2025|||
That actually proves how important the “number of attempts” metric is. It’s not just a “make everything pretty” button - it’s more like a powerful but slightly dumb intern who needs clear, step-by-step instructions. Your two-step approach really captures the essence of prompt engineering
vunderba 10/29/2025|||
Yeah, that's part of the reason I list the number of attempts as part of the stats for each model + respective prompt. It's a loose metric of how "steerable" a given model is, or put another way, how much I had to fight with it before getting it to follow the prompt directives.
herval 10/28/2025|||
Gemini is great when it gets it right, but in my experience, it sometimes gives you completely unexpected results and won't get it right no matter what. You can see that in some of the examples (e.g. the Girl with a Pearl Earring one). I'm constantly surprised by how good Flux is, but the tragedy is that most people (me included) will just default to whatever they normally use (ChatGPT and Gemini, in my case), so it doesn't really matter that it's better
tigershark 10/29/2025|||
Flux Kontext quality is noticeably worse than Nano Banana, Qwen Image 2509, and Seedream 4 most of the time. For pure image generation, Hunyuan Image is scarily good.
dimitri-vs 10/28/2025|||
Agreed, to the point where I built my own UI where I can simultaneously generate three images and see a before/after. Most often only one of three is what I actually wanted.
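At its core it's something like this (a rough sketch; generate_edit() is a hypothetical wrapper around whichever API you prefer):

  from concurrent.futures import ThreadPoolExecutor

  def generate_edit(image_path: str, prompt: str) -> bytes:
      """Call your image-editing API of choice and return PNG bytes."""
      raise NotImplementedError  # placeholder: plug in your own client

  prompt = "Replace the overcast sky with a warm sunset; change nothing else"
  with ThreadPoolExecutor(max_workers=3) as pool:
      futures = [pool.submit(generate_edit, "input.jpg", prompt) for _ in range(3)]
      candidates = [f.result() for f in futures]

  # Write all three out so they can be compared side by side with the original.
  for i, png in enumerate(candidates):
      with open(f"candidate_{i}.png", "wb") as out:
          out.write(png)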
epiccoleman 10/29/2025|||
half the time when I try to use nano banana, AI Studio fails, telling me it can't generate for some unspecified reason.

these aren't cases where I'm trying to do something that skirts the edge of copyright, either (like "Ghiblifying" images, for example).

that said, when it does work, it is super impressive.

minimaxir 10/29/2025|||
Let's just say I've tested around this.

Copyright: Zero guardrails on anything related to third-party IP, which lets you do some funny things. (I'm including a picture/prompt of Super Mario, Mickey Mouse, and Bugs Bunny partying at a nightclub in the blog post)

Moderation: It has far fewer guardrails than any other Google AI product I've tried, and it is possible to prompt engineer some images that would definitely be considered NSFW by most people — more NSFW than actual NSFW image generators (a post-generation filter will catch most nudity, however). I have not had any rejections for more innocuous queries that could be misinterpreted as being NSFW.

vunderba 10/29/2025|||
It might be the safety moderation system. It's rather aggressive and when it does kick in (at least in the API), it often returns an empty response giving basically zero indication as to the root cause.
minimaxir 10/29/2025||
The empty response issue is annoying since there is already a PROHIBITED_CONTENT flag, but it is not used in this case.
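I've ended up wrapping every call in something like this (a sketch; the exact response fields and finish_reason behavior are assumptions about the current API):

  def extract_image(response) -> bytes | None:
      """Return the first image part, or None if the call came back empty."""
      if not response.candidates:
          return None
      candidate = response.candidates[0]
      parts = candidate.content.parts if candidate.content else []
      for part in parts or []:
          if getattr(part, "inline_data", None) is not None:
              return part.inline_data.data
      # No image part at all: surface whatever signal the API did expose.
      print("Empty generation; finish_reason =", getattr(candidate, "finish_reason", None))
      return None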
BoorishBears 10/29/2025|||
No one is sleeping on nano-banana/Gemini Flash; it's highly over-tuned for editing vs. novel generation and maxes out at a pretty low resolution.

Seedream 4.0 is somewhat slept on for being 4K at the same cost as nano-banana. It's not as great at perfect 1:1 edits, but its aesthetics are much better and it's significantly more reliable in production for me.

Models with LLM backbones/omni-modal models are not rare anymore, even Qwen Image Edit is out there for open-weights.

veunes 10/31/2025|||
Gemini likely has a more powerful text encoder, which is why it's better at parsing complex, nuanced prompts. Seedream, on the other hand, might have a more advanced diffusion U-Net architecture that's better at preserving textures and handling local edits. One model understands better, the other draws better
tigershark 10/29/2025|||
Seedream 4 is better than nano banana on average, so that test result seems accurate to me
franze 10/29/2025|||
honest question: where is / how do you do aspect ratio control for nano banana in AI Studio?
minimaxir 10/29/2025||
It's on the right sidebar if Nano Banana is selected.
cpursley 10/28/2025||
Meh, most Google AI products look great on paper but fail in real-world scenarios. And that ranges from their Claude Code clone to their buggy storybook thing, which I really wanted to like.
lxe 10/28/2025||
This is vastly more useful than benchmark charts.

I've been using Nano Banana quite a lot, and I know that it absolutely struggles at exterior architecture and landscaping. Getting it to add or remove things like curbs, walkways, gutters, etc., or asking it to match colors, is almost futile.

estetlinus 10/28/2025|
I am trying Qwen Image Edit for turning day photos into night, mostly architecture etc. Most models are struggling, and Nano Banana misses edges and stuff, making the pictures align poorly.
roenxi 10/28/2025||
It is fun being one of the elderly who set their standards back in distant 2022. All these demos look incredible compared to SD1, 2 & 3. We've entered a very different era where the models seem to actually understand both the prompt and the image instead of throwing paint at the wall in a statistically interesting manner.

I think this was fairly predictable, but as engineering improvements keep happening and the prompt adherence rate tightens up, we're enjoying a wild era of unleashed creativity.

zamadatix 10/29/2025||
I still feel that varying the prompt text, the number of tries, and the strictness, combined with only showing the most-liked result, dilutes most of the value in these tests. It would be better if there were one prompt that 8/10 human editors understood and implemented correctly, and then every model got 5 generation attempts with that exact prompt on different seeds or something (roughly the protocol sketched below). If it were about "who can create the best image with a given model" then I'd see it more, but most of it seems aimed at preventing that sort of thing, and it ends up in an awkward middle zone.

E.g. Gemini 2.5 Flash is given extreme leeway with how much it edits the image and changes the style in "Girl with Pearl Earring" only to have OpenAI gpt-image-1 do a (comparatively) much better job yet still be declared failed after 8 attempts, while having been given fewer attempts than Seedream 4 (passed) and less than half the attempts of OmniGen2 (which still looks way farther off in comparison).
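Concretely, the protocol I have in mind would look something like this (a sketch; the model names and the generate() helper are hypothetical placeholders):

  MODELS = ["nano-banana", "seedream-4", "gpt-image-1", "flux-kontext"]
  PROMPT = "Replace the pearl earring with a small gold hoop; change nothing else."
  SEEDS = [1, 2, 3, 4, 5]  # five attempts per model, identical prompt each time

  def generate(model: str, prompt: str, seed: int) -> bytes:
      """Dispatch to the relevant provider's API; implementation omitted here."""
      raise NotImplementedError

  results = {m: [generate(m, PROMPT, s) for s in SEEDS] for m in MODELS}
  # Every output then goes to the same graders, blind to which model produced it.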

cttet 10/29/2025||
A "worst image" instead of best image competition may be easy to implement and quite indicative of which one has less frustration experience.
vunderba 10/29/2025||
OP here. That's kind of the idea of listing the number of attempts alongside failures/successes. It's a loose metric for how "compliant" a model is - e.g. how much work you have to put in to get a nominally successful result.
zamadatix 10/29/2025||
The OpenAI gpt-image-1 example above was supposed to be noted as being for the "You Only Move Twice" test.
hackthemack 10/28/2025||
I do not use AI image generation much lately. There seemed to be a burst of activity a year and a half ago with self-hosted models and localhost web GUIs, but now it seems to be moving more and more to online hosted models.

Still, to my eye, AI-generated images feel a bit off when working with real-world photographs.

George's hair, for example, looks over the top, or brushed on.

The tree added to the sleeping person on the ground photo... the tree looks plastic or too homogenized.

minimaxir 10/28/2025||
> But now it seems like it is moving more and more to online hosted models.

It's mostly because image model size and required compute for both training and inference have grown faster than self-hosted compute capability for hobbyists. Sure, you can run Flux Kontext locally, but if you have to use a heavily quantized model and wait forever for the generation to actually run, the economics are harder to justify. That's not counting the "you can generate images from ChatGPT for free" factor.

> George's hair, for example, looks over the top, or brushed on.

IMO, the judge was being too generous with the passes for that test. The only one that really passes is Gemini 2.5 Flash Image:

Flux Kontext: In addition to the hair looking too slick, it does not match the VHS-esque color grading of the image.

Qwen-Image-Edit: The hair is too slick and the sharpness/saturation of the face unnecessarily increases.

Seedream 4: Color grading of the entire image changes, which is the case with most of the Seedream 4 edits shown in this post, and why I don't like it.

janalsncm 10/29/2025||
For 99% of my use cases I’ll just use ChatGPT or Gemini due to convenience. But if you want something with a specific style, Flux LoRAs are much better, in which case I’ll boot up the old 4090.

The economics 1000% do not justify me owning a GPU to do this. I just happen to own one.

veunes 10/31/2025||
I think fine-tuning could fix that problem

If you take a base model and train it on a hundred Seinfeld frames, it would pick up the specific style - the color grading, grain, lighting - and it would add the hair way more naturally

jimmyl02 10/28/2025||
I think reve (https://reve.com) should be in the running and would be very curious to see the results!
achow 10/29/2025||
Thank you for the pointer. I was struggling with Nanobanana to edit an image it had created earlier, but Reve gave me the edit result exactly the way I wanted on the first pass.

My use case: an image of a cartoon character, holding an object and looking at it. I wanted to edit it so that the character no longer has the object in her hand and is now looking towards the camera.

Result with Nanobanana: on the first pass it only removed the object the character was holding; there was no change in her eyeline, and she was still looking down at her now-empty hand. A second prompt explicitly asked it to change the eyeline to look at the camera. Unsuccessful. A third attempt asked for the character to look towards the ceiling. Success, but an unusable edit, since I wanted the character to look at the camera.

Result with Reve: on the first attempt it gave me 4 options, and all 4 were usable. It not only removed the object and changed the character's eyeline to look at the camera, it also made posture changes so that the empty hands were appropriately positioned. And since the character is now in a different situation (sans the object that was holding her attention), Reve posed her in different ways that were very appropriate - ways I didn't think of prompting for earlier (maybe because my focus was on the immediate need: object removal and the change in eyeline).

After a little more digging I found this writeup, which will make me sign up for their product.

https://blog.reve.com/posts/reve-editing-model/

vunderba 10/29/2025|||
OP here. Thanks for the recommendation. I'll check it out and try to get them added!
ImHereToVote 10/29/2025||
Thanks for the tip.
shridharathi 10/29/2025||
Here's a post I wrote on the Replicate blog putting these image editing models head-to-head. Generally, I found Qwen Image Edit to be the cheapest and fastest model that was also quite capable of most image editing tasks.

If I were to make an image editing app, this would be the model I'd choose.

https://replicate.com/blog/compare-image-editing-models

silisili 10/29/2025||
Neat comparison. The only qualm I have is giving a pass on that last giraffe... it's not visibly any shorter, just bent awkwardly.

Even so, Gemini would lose by 1, but I found that I would often choose it as the winner (especially, say, The Wave surfer). Would love to see an x/10 instead of pass/fail.

vunderba 10/29/2025|
Yeah that's a fair critique. Your description made me laugh. Can't wait to go to a zoo exhibit featuring "AWKWARDLY BENT GIRAFFE".
joomla199 10/28/2025||
Good effort, somewhat marred by poor prompting. Passing in “the tower in the image is leaning to the right,” for example, is a big mistake. That context is already in the image, and passing that as a prompt will only make the model apt to lean the tower in the result.
vunderba 10/29/2025||
I should have been clearer. Those are NOT the direct prompts; they are the starter prompts. In fact, that's why the attempt numbers change: we adapt the exact prompts depending on the model.
joomla199 10/29/2025||
I understood that much, at least from the description you added on the Kontext result. I agree that you should provide more information here, though, especially around "we adapt the exact prompts depending on the model", since your strategy here could also reflect model strengths and weaknesses.
vunderba 10/29/2025||
Good point! Perhaps I should add in the "final model-specific prompt", or place them in an errata section.
joomla199 10/30/2025|||
By the way, this is what I got from Kontext after just a couple of tries: https://i.imgur.com/J4LwkVI.png

Prompt: "Keeping the glass and the hand behind the glass the same, please change only the three brown candies in the glass into green, yellow, red, and orange candies. Make no other changes. Change the reflection to remove the brown candy too." Seed was 1070229954903864, but your setup is probably too different for that to help.

It seems like Gemini 2.5 Flash was the only model that successfully removed the reflections...it should get some points for that!

keyle 10/28/2025|
This was fun.

Some might critique the prompts and say this or that would have done better, but they were the kind of prompt your dad would type in not knowing how to push the right buttons.

vunderba 10/29/2025|
OP here. You're the second person to say this. I cut my teeth on SD 1.5 - so I'm rather intimately familiar (for better or worse) with the level of prompt craft necessary depending on the model.

I feel like the FAQ section isn't displayed prominently enough:

How are the prompts written?

  In addition to giving models several attempts to generate an image, we also write several variations of the prompt to ensure that models don't get stuck on certain keywords or phrases depending on their training data. For example, while hippity hop is a relatively common name for the ball riding toy, it is also known as a space hopper. We try to use both terms in the prompts to ensure that models are not biased towards one or the other.

  Prompts for Hunyuan were attempted in both Chinese and English with and without Image Optimization.


Additionally, when you see a prompt like "Turn on the lights", the idea is to go beyond direct prompting commands; we're probing the capabilities of a truly multimodal LLM. It's a prompt that would spectacularly fail in more traditional models (such as SDXL).