My 2 cents, mainly from AI/ComfyUI.

LLL wrote: ↑ Wed Aug 13, 2025 9:04 am
I wanted to ask the various experts for some advice, using a modified pic by JD as an example.
I pasted a cutout of a young Nastassja Kinski over the upper body of Aloy using Photopea (free) and then smoothed the edges using Fooocus (free).
Of course, this was done quickly and the result has multiple issues; with more time I could have improved it somewhat. But one issue stands out above all the rest: the different skin color.
In the past I would have fixed that using Photoshop's Match Color function. However, I no longer have Photoshop available. Photopea also has a match color function, but it works awfully.
So, is there a way to use Fooocus to make Aloy's body assume the color of Nastassja's? Or at least meet at a common middle ground? An online search suggests it's possible, but the explanations of how to actually proceed are obscure and short on detail...
download/file.php?id=81362&mode=view
Technically, color editing isn't an AI/Stable Diffusion thing. ComfyUI has nodes that do color matching and the like, but those are essentially just Photoshop algorithms wrapped in ComfyUI nodes, doing Photoshop stuff like changing RGB values. AI image generation in the Latent Diffusion style means we use a sampler with a trained AI model to generate images either from scratch (Txt2Img) or from an already existing image (Img2Img).
If we want to color correct JD's image with Stable Diffusion, this is Img2Img: we (1) add noise = random pixels to JD's image and (2) pass the (re)noised image to a Sampler. The sampler will then denoise the image, resulting in a new image based on JD's image. The more noise is added (with a maximum of 100%), the less the new image will resemble the original.
If we want a new image that still resembles JD's original, we have to be conservative with the amount of noise we add. Something like 30%, maybe 50% at most. Here's the thing: let's say we inject 30% noise. This means 70% of the pixels of JD's image remain unchanged, so the color difference between Nastassja Kinski's torso and Aloy's body is still there. Stable Diffusion won't actively try to equalize the colors; all SD does is continue from the image you give it and resolve the noise. Working with SD, I've found that colors are pretty persistent, for exactly this reason. For example, if I start from an image of a girl with red hair and run Img2Img with a prompt that says "blonde", I end up with a girl with reddish blonde hair: it continues from the red-haired image and denoises towards "blonde", ending up somewhere between red and blonde.
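This isn't literally how latent diffusion injects noise (that happens in latent space, on a schedule), but a toy pixel-level sketch in numpy shows why moderate noise leaves a color split intact. The flat two-tone "image" here is a made-up stand-in for the pasted torso next to the original body:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "image": a flat patch with two differently colored halves,
# like the pasted torso next to the original body.
image = np.zeros((64, 64, 3), dtype=np.float32)
image[:, :32] = [0.8, 0.6, 0.5]   # lighter skin tone
image[:, 32:] = [0.6, 0.4, 0.3]   # darker skin tone

def add_noise(img, strength):
    """Blend the image with uniform random noise; strength=1.0 is pure noise."""
    noise = rng.random(img.shape).astype(np.float32)
    return (1.0 - strength) * img + strength * noise

noisy_30 = add_noise(image, 0.30)
noisy_90 = add_noise(image, 0.90)

def half_diff(img):
    """Mean brightness difference between the two halves."""
    return abs(float(img[:, :32].mean()) - float(img[:, 32:].mean()))

# At 30% noise the two halves still differ clearly;
# at 90% the original color split is almost drowned out.
print(half_diff(image), half_diff(noisy_30), half_diff(noisy_90))
```

At 30% the seam between the two tones survives almost untouched, which is why a conservative Img2Img pass won't fix the skin-color mismatch on its own.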
What I would do is color correct Nastassja Kinski's image before copy/pasting it into JD's image. This can be done in Photoshop, GIMP, or any photo editing software. Then copy/paste the torso into JD's image.
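If you want to script that correction instead of eyeballing it, the classic trick is to match per-channel statistics (Reinhard-style color transfer; doing it in a perceptual space like LAB usually looks better than raw RGB). A minimal RGB-only numpy sketch, with made-up toy patches standing in for the two skin tones:

```python
import numpy as np

def match_color_stats(source, reference):
    """Shift/scale each channel of `source` so its mean and standard
    deviation match `reference`. Simple per-channel color transfer;
    Photoshop's Match Color works along broadly similar lines."""
    src = source.astype(np.float32)
    ref = reference.astype(np.float32)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        scale = r_std / s_std if s_std > 1e-6 else 1.0
        out[..., c] = (src[..., c] - s_mean) * scale + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy patches: a pale tone matched against a warmer tone.
rng = np.random.default_rng(1)
pale = rng.normal([220, 200, 190], 8.0, size=(32, 32, 3))
warm = rng.normal([190, 150, 120], 12.0, size=(32, 32, 3))
matched = match_color_stats(pale, warm)
print([round(float(matched[..., c].mean()), 1) for c in range(3)])
```

In practice you'd run the pasted torso as `source` and a clean patch of the surrounding skin as `reference`, then do the paste.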
Or do what JD suggested: greyscale the image and recolor it in ComfyUI, using the DDColor_Colorize node. You may still see the color difference in greyscale though, so the recoloring model may pick different colors in the new coloring as well. But this is probably something you can fix by editing the greyscale image to make the greyscale levels match for the whole body, before running it through the recoloring node.
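That greyscale-level matching step can also be automated with histogram matching (scikit-image's `match_histograms` does this out of the box). A numpy-only sketch, using toy patches as stand-ins for the pasted torso and the surrounding skin:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` grey levels so their distribution matches
    `reference`'s, via the two cumulative histograms. Useful for
    equalising a pasted region's grey levels against the rest of
    the body before running a recoloring model."""
    s_values, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source level, find the reference level at the same CDF position.
    mapped = np.interp(s_cdf, r_cdf, r_values)
    return mapped[s_idx].reshape(source.shape)

# Toy patches: a brighter grey region matched against a darker one.
rng = np.random.default_rng(2)
torso = rng.integers(150, 220, size=(16, 16))   # brighter pasted region
body = rng.integers(90, 160, size=(16, 16))     # darker surrounding skin
matched = match_histogram(torso, body)
print(round(float(torso.mean()), 1), round(float(body.mean()), 1),
      round(float(matched.mean()), 1))
```

After a pass like this, the greyscale image going into the recoloring node no longer carries the brightness seam, so the colorizer has one less reason to treat the torso differently.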
tl;dr: Stable Diffusion/ComfyUI isn't the best solution if all you need is color correction.