[AI Technique] Existing Image -> Flux -> HighRes
This procedure explains how to generate high-res Flux images starting from an already existing image, e.g. a 3D render or a movie still. Flux will take the composition and the color scheme from your image as additional input. The color scheme transfer is why, for lineart, you need to feed the image through a (Canny) ControlNet instead; otherwise the resulting image will be heavily influenced by the predominantly white color of the lineart drawing.
The hardest part is installing ComfyUI. First, download the workflow file (linked below) to your local machine, then drag and drop it onto an open ComfyUI window. You may be prompted to install missing custom nodes (hit Yes). If some nodes are still unrecognized, open the ComfyUI Manager menu and press Install Missing Custom Nodes.
I use an RTX 4060 Ti with 16GB of VRAM (the memory on the graphics card) and 32GB of system memory. Not bad, not too fancy either. Flux requires a lot of VRAM. This workflow will work on lower-spec machines, up to a point: when VRAM is insufficient, ComfyUI will try to load part of the model into system memory. This works, but image generation will be slower. If a PC has neither enough VRAM nor enough system memory, ComfyUI will throw an insufficient memory error.
I used a slightly edited version of the movie still Jarpi posted in his thread as a base, staying with the Deathstalker theme.
Workflow file: https://filebin.net/kp764j8usmvq0y7c
Result (input, output):
Last edited by xs70 on Sun Apr 27, 2025 7:52 pm, edited 3 times in total.
Re: [AI Technique] Existing Image -> Flux -> HighRes
You need to download the following model files (you may have to create an account before you can download them) and put them in the correct folders before you can generate images:
flux1DevFp8VersionsScaled_fp8E4m3fn.safetensors : https://civitai.com/models/1032613?mode ... Id=1520991
Location: ComfyUI/models/unet
Note: My workflow refers to a different model file; I can't link to the one I use because the original download link is dead for some reason. The file I'm referring to here should be equivalent (11GB version of the original Flux.1 Dev model).
t5xxl_fp8_e4m3fn.safetensors : https://huggingface.co/comfyanonymous/f ... afetensors
Location: ComfyUI/models/clip
clip_l.safetensors : https://huggingface.co/comfyanonymous/f ... afetensors
Location: ComfyUI/models/clip
fluxVaeSft_aeSft.sft : https://huggingface.co/Albert-zp/flux-v ... _aeSft.sft
Location: ComfyUI/models/VAE
4x_NMKD-Siax_200k.pth : https://civitai.com/models/147641/nmkd-siax-cx
Location: ComfyUI/models/ESRGAN or ComfyUI/models/upscale_models
Note: Rename the file to 4x_NMKD-Siax_200k.pth
Re: [AI Technique] Existing Image -> Flux -> HighRes
The workflow is separated into three groups:
1. Model Loader (Flux)
This is where you can change the Flux model, add LoRAs (optional), type your prompt, and set Denoise and Steps (explained below). In this workflow I'm using (a downsized version of) the original Flux.1 Dev model, because I find that unless you need NSFW, the original Flux base model still gives the best details. But you can use any other Flux checkpoint (a model derived from a base model). Original Flux won't do nipples or male and female genitalia, but other than that the results can still be spicy, and in my experience the original still looks best.
2a. Img2Img Generator
This is the image generator. Here you load your base image (drag the file onto the Load Image node). The RandomNoise node contains the seed (an integer number) that initializes the noise generator (a long story but unimportant at this point). Same seed with identical settings = same image. Different seed = different generation.
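The seed behavior described above is just ordinary pseudorandom-number determinism; a minimal Python sketch (using the standard library's `random`, not ComfyUI's actual noise generator) illustrates why identical seed plus identical settings reproduces the same image:

```python
import random

def noise_sample(seed, n=4):
    """Hypothetical stand-in for the RandomNoise node: a seeded generator
    always produces the exact same sequence of numbers."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

print(noise_sample(42) == noise_sample(42))  # True: same seed, same noise
print(noise_sample(42) == noise_sample(43))  # False: different seed, different noise
```

Since every other step of the pipeline is deterministic, identical starting noise yields an identical generation.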
3a. Save Basic Image
This saves your generated image in the ComfyUI output folder.
3b. HighRes "Fix"
This step performs what's called the HighRes Fix: it takes a basic image, upscales it, and re-generates it at the higher resolution, which adds detail. The drawback is that it takes a lot more time. Settings you can change here are Denoise (higher means more variation) and Steps. I usually get nice results with 0.2 Denoise and 10 Steps.
2b. Load Basic Image
A trick you can use is to generate a batch of basic images first without having to highres all of them. To do this, deactivate HighRes by right-clicking on group 3b and clicking Bypass Group Nodes. Generate a bunch of basic images with HighRes deactivated. In the output folder, delete the basic images you don't like, keeping only the files you want to highres. Now deactivate the generator, group 2a. Reactivate HighRes by right-clicking on group 3b and clicking Set Group Nodes to Always. Connect the Image output of group 2b: click and drag the blue dot labeled Image to the image input of group 3b. Now you can use group 2b to load the basic images you want to highres (drag from your folder onto the Load Image node). Hit Queue to highres your selected image.
Re: [AI Technique] Existing Image -> Flux -> HighRes
This technique relies on the most important property of latent diffusion models (of which Stable Diffusion is a subset): the model doesn't remember anything between subsequent generation steps. This means you can take an existing image, add noise to it, and pass it to the model, and the model will treat that image as if it were a new AI image in the process of being generated. For example, if you take a movie still and add 70% noise to it, the model treats the result as an AI-generated movie still with 70% left to go. The more noise you add, the more detail you erase from the base image, and the more detail the model will fill back in.
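The noising step can be sketched in a few lines of NumPy. This is a conceptual illustration of the mixing (a simple linear blend of image and noise, operating on a tiny stand-in array), not Flux's actual internals:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 4))   # stand-in for an encoded base image
noise = rng.standard_normal((4, 4))  # fresh random noise

def add_noise(x0, eps, t):
    """Blend image and noise: x_t = (1 - t) * x0 + t * eps.
    At denoise level t, only (1 - t) of the original signal survives;
    the sampler then fills the erased fraction back in."""
    return (1.0 - t) * x0 + t * eps

x_70 = add_noise(base, noise, 0.7)   # 70% denoise: 30% of the base remains
```

At t = 0 the base image passes through untouched; at t = 1 nothing of the base survives and the generation is effectively from scratch.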
I use these guidelines for adding noise (the parameter is called Denoise because the sampler will remove, i.e. denoise, the same amount of noise that you added to the image):
10% noise: the image is almost finished, only finishing touches will be altered
30% noise: the image is 70% complete, the final look will be generated by the model
50% noise: the image is halfway complete, the contours in the image can be changed but the main form of the image is already set
70% noise: 30% into generating the image, characters and other important items will more or less stay where they are but anything about them can be altered
80% noise: the general composition and color of the image is complete, but anything else still has to be decided. Based on your prompt you can go for a completely different style and/or image content.
90% noise: color and basic shapes will be used to form an image that still has 90% to go.
100% noise: basically a new (random) image, although for some reason the color scheme seems to persist.
You pick the number of Steps relative to the amount of Denoise you select. E.g. 0.1 Denoise = 10% noise, meaning the image is already 90% complete, so about 5 steps should suffice to finish it. For 0.3 Denoise I tend to pick 10 Steps; for 0.5 Denoise, 12 Steps; for 0.7 Denoise, 20 Steps. You can play around with this.
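The rule of thumb above can be written as a small lookup helper. The table values come straight from the guideline numbers quoted in this post; the fallback for very high denoise is my own assumption, not a ComfyUI setting:

```python
def steps_for_denoise(denoise):
    """Rule-of-thumb step counts for a given Denoise value,
    following the guideline figures quoted above."""
    table = [(0.1, 5), (0.3, 10), (0.5, 12), (0.7, 20)]
    for threshold, steps in table:
        if denoise <= threshold:
            return steps
    return 25  # assumption: near-full denoise behaves like txt2img

print(steps_for_denoise(0.3))  # 10
print(steps_for_denoise(0.7))  # 20
```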
To illustrate, the image in the first post was generated with 0.7 Denoise. As you can see, the image mimics the base image. The image below is generated with 0.8 Denoise. More freedom means the model can make the characters look in a different direction, but the general composition is still maintained.
Re: [AI Technique] Existing Image -> Flux -> HighRes
The size of your starting image matters a lot. Stable Diffusion models are trained on images of a certain size, and your base image is essentially an input that Flux will try to replicate; if its dimensions are alien to Flux, the results won't be great. The standard size is 1024x1024 (square), with landscape and portrait dimensions trading width for height. There's quite a bit of tolerance around these values, but you will start getting poor results below roughly 540px and above roughly 2000px per side. I tend to pick screen proportions, e.g. 768x1344 for my vertical screen. The highres step upscales to twice that size, so the end result is plenty big enough.
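A small helper can turn an aspect ratio into Flux-friendly dimensions: aim for roughly one megapixel of area (the 1024x1024 training size mentioned above) and round each side to a multiple of 64. The function name and the multiple-of-64 convention are my own choices for illustration, not something this workflow requires:

```python
import math

def flux_dimensions(aspect_w, aspect_h, megapixels=1.0, multiple=64):
    """Suggest generation dimensions: ~`megapixels` total area,
    matching the requested aspect ratio, each side rounded to
    a multiple of `multiple`."""
    target_pixels = megapixels * 1024 * 1024
    ratio = aspect_w / aspect_h
    height = math.sqrt(target_pixels / ratio)
    width = height * ratio
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

print(flux_dimensions(1, 1))   # (1024, 1024)
print(flux_dimensions(9, 16))  # (768, 1344), the vertical-screen size above
```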
Re: [AI Technique] Existing Image -> Flux -> HighRes
ComfyUI generates PNG files that contain, alongside the generated image, the workflow used to generate it. This is great: you can drag and drop a PNG generated by ComfyUI onto a ComfyUI window, and ComfyUI will automatically load the workflow that produced it. If you don't want to share your workflow, convert your PNG files to JPEG instead, as JPEG images contain only image data. I use MS Paint for this since it's simple, fast, and free: open the PNG in MS Paint, Save As -> JPEG picture. Done.
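The mechanism behind this is that PNG has text chunks for metadata and JPEG has no equivalent, so re-saving drops the embedded workflow. A sketch using the Pillow library (the file names and the tiny placeholder image are hypothetical; ComfyUI's real chunk contains the full workflow JSON):

```python
import os
import tempfile
from PIL import Image
from PIL.PngImagePlugin import PngInfo

tmp = tempfile.mkdtemp()
png_path = os.path.join(tmp, "comfy_output.png")
jpg_path = os.path.join(tmp, "shareable.jpg")

# Embed a stand-in "workflow" text chunk, the way ComfyUI stores its graph.
meta = PngInfo()
meta.add_text("workflow", '{"nodes": []}')
Image.new("RGB", (8, 8)).save(png_path, pnginfo=meta)

with Image.open(png_path) as png:
    has_workflow = "workflow" in png.text  # True: workflow travels with the PNG
    png.convert("RGB").save(jpg_path, "JPEG")

with Image.open(jpg_path) as jpg:
    jpg_meta = getattr(jpg, "text", {})    # JPEG keeps no PNG text chunks
```

Any converter works the same way; MS Paint just happens to be the simplest one at hand.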
Re: [AI Technique] Existing Image -> Flux -> HighRes
Note that original Flux tends to mess up nipples; see the artifacts in this image. It would need some minor retouching before it's ready. Still nice, though.
Re: [AI Technique] Existing Image -> Flux -> HighRes
Au contraire, mon frere! On a board dedicated to abused pixels, those things you call artifacts are actually great. They can be construed as burn marks, thong applications, hot irons, you name it. I wouldn't fix them for the world.

In this paradigm, the lovely lady was just released from the dungeon, with sore nipples and hurting tits ... a lovely thought!
Re: [AI Technique] Existing Image -> Flux -> HighRes
doe.1971 wrote: ↑Sun Apr 27, 2025 11:45 am Au contraire, mon frere! On a board dedicated to abused pixels, those things you call artifacts are actually great. They can be construed as burn marks, thong applications, hot irons, you name it. I wouldn't fix them for the world.
In this paradigm, the lovely lady was just released from the dungeon, with sore nipples and hurting tits ... a lovely thought!

Re: [AI Technique] Existing Image -> Flux -> HighRes
Switching to the NSFW Flux model I used in the Lineart -> Flux thread allows for the generation of topless warrior chicks. This image was made with the same workflow as above, but with the NSFW model and a prompt describing a topless warrior woman. At 80% denoise, the model picks up the bra from the base image and reinterprets it, but has trouble removing the piece of clothing altogether.