Light plays a crucial role in photography, and it has a significant impact on the overall quality and atmosphere of an image. You can use light to enhance a subject, create depth and dimension, convey emotion, and highlight important details.
In this post, I'll show you how to control the lighting of images generated with Stable Diffusion.
Software
We will use the AUTOMATIC1111 Stable Diffusion GUI to create the image.
Use of light keywords
The easiest way to control the light is to add light keywords to the prompt.
I will use the following base positive and negative prompts to illustrate the effect.
Positive prompt:
masterpiece,best quality,masterpiece,best quality,official art,extremely detailed CG unity 8k wallpaper,a beautiful woman,
Negative prompt:
lowres,monochrome,grayscales,skin spots,acnes,skin blemishes,age spot,6 more fingers on one hand,deformity,bad legs,error legs,bad feet,malformed limbs,extra limbs,
Model: majicmixRealistic_v7
Width: 512
Height: 768
CFG Scale: 7
Here are the images generated with the base prompts. They look okay, but the lighting is unremarkable.
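If you prefer scripting to clicking through the GUI, the same settings can be sent to the web UI's built-in API. This is a minimal sketch, assuming the UI was started with the --api flag and is reachable at the default local address; the step count is an arbitrary choice since it isn't specified above. Light keywords from the following sections are simply appended to the prompt string.

```python
import base64
import requests

API_URL = "http://127.0.0.1:7860"  # assumes the web UI was launched with --api

payload = {
    "prompt": ("masterpiece,best quality,masterpiece,best quality,official art,"
               "extremely detailed CG unity 8k wallpaper,a beautiful woman,"),
    "negative_prompt": ("lowres,monochrome,grayscales,skin spots,acnes,skin blemishes,"
                        "age spot,6 more fingers on one hand,deformity,bad legs,"
                        "error legs,bad feet,malformed limbs,extra limbs,"),
    "width": 512,
    "height": 768,
    "cfg_scale": 7,
    "steps": 20,  # sampling steps; use whatever you normally use
    # Switch to the checkpoint used here; the name must match your installed file.
    "override_settings": {"sd_model_checkpoint": "majicmixRealistic_v7"},
}

resp = requests.post(f"{API_URL}/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded PNGs in the "images" list.
for i, img_b64 in enumerate(resp.json()["images"]):
    with open(f"base_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```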
Volumetric lighting adds noticeable beams of light to the image. It is used in photography to add a sense of volume.
Add the keyword volumetric lighting to the prompt:
Rim lighting adds a bright outline around the subject. It may darken the subject itself, so you can combine it with other light keywords to keep the subject illuminated.
Add the keyword rim lighting to the prompt:
Sunlight adds sunlight to the image. It tends to produce natural backgrounds.
Add the keyword sunlight to the prompt.
Backlight places the light source behind the subject. Adding this keyword can produce some stylish effects.
Add backlight to the prompt.
It is well known that Stable Diffusion does not produce dark images without guidance.
There are many ways to solve this problem, including using dedicated models and LoRAs, but a simpler way is to add some dim-light keywords.
Add dimly lit to the prompt.
Crepuscular rays adds rays of light breaking through clouds. It can create a stunning visual effect.
With a portrait aspect ratio, this prompt usually produces a full-body image, and adding crepuscular rays amplifies the effect.
Tips:
- If you are not seeing results, increase the weight of your keywords, e.g. (volumetric lighting:1.3).
- These light keywords don't always work. Generate a few images at a time for testing; see the sketch after these tips for a scripted way to do this.
- You can find more light keywords in a prompt generator.
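As a way to act on both tips at once, here is a minimal sketch that appends one light keyword to the base prompt at a few weights and generates a small batch for each; the keyword, weights, and batch size are only examples, and the API assumptions are the same as in the earlier sketch.

```python
import base64
import requests

API_URL = "http://127.0.0.1:7860"  # assumes the web UI was launched with --api

BASE_PROMPT = ("masterpiece,best quality,masterpiece,best quality,official art,"
               "extremely detailed CG unity 8k wallpaper,a beautiful woman,")

# (keyword:weight) raises or lowers the keyword's influence on the image.
for weight in (1.0, 1.2, 1.4):
    payload = {
        "prompt": f"{BASE_PROMPT}(volumetric lighting:{weight}),",
        "width": 512,
        "height": 768,
        "cfg_scale": 7,
        "batch_size": 4,  # a few images per setting, since the keywords don't always take
    }
    images = requests.post(f"{API_URL}/sdapi/v1/txt2img", json=payload).json()["images"]
    for i, img_b64 in enumerate(images):
        with open(f"volumetric_{weight}_{i}.png", "wb") as f:
            f.write(base64.b64decode(img_b64))
```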
Controlling the light in a specific area
The light keywords in the prompt apply to the entire image. Here I'll show you how to control the light in a specific area.
For this, you need to install an extension called Regional Prompter.
The download address is below:/hako-mikan/
Once installed, you can find the Regional Prompter section at the bottom of your workspace.
In this example, we will apply different lighting to the upper and lower portions of the image.
On the txt2img page, expand the Regional Prompter section.
Set it up as shown above.
Essentially, this splits the image into two parts at a ratio of 2:3 so that the prompts can be set separately for each part.
Regional Prompter is a very powerful tool that can produce stunning results. I'll cover it in more detail in a later post.
Here it only serves as a usage example.
Let's change the input prompt:
Positive prompt:
masterpiece,best quality,masterpiece,best quality,official art,extremely detailed CG unity 8k wallpaper,a beautiful woman,
BREAK
( hard light:1.2),(volumetric:1.2),well-lit,
BREAK
(dimly lit:1.4),
The negative prompt remains unchanged.
This gives us an image that is bright on top and dim at the bottom.
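For reference, this is how the three blocks of that prompt fit together. The sketch below only assembles the BREAK-separated string; the split itself (a vertical 2:3 split with the first block used as the common prompt) is configured in the Regional Prompter panel, and those panel options are my reading of the setup described above.

```python
# Blocks are separated by BREAK: first the common prompt, then one block
# per region in the order of the split (top region first, bottom second).
common = ("masterpiece,best quality,masterpiece,best quality,official art,"
          "extremely detailed CG unity 8k wallpaper,a beautiful woman,")
top = "( hard light:1.2),(volumetric:1.2),well-lit,"
bottom = "(dimly lit:1.4),"

prompt = "\nBREAK\n".join([common, top, bottom])
print(prompt)  # paste into the txt2img prompt box with Regional Prompter enabled
```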
Now try to swap the light assignments.
masterpiece,best quality,masterpiece,best quality,official art,extremely detailed CG unity 8k wallpaper,a beautiful woman,
BREAK
(dimly lit:1.4),
BREAK
( hard light:1.2),(volumetric:1.2),well-lit,
The lighting is swapped accordingly.
Tips:
- If you don't see results, adjust the weight of your keywords.
- Regional prompts don't always work 100% of the time. Generate a few more images to see the results.
Controlling Light with ControlNet
In addition to light keywords and Regional Prompter, we can also use ControlNet for more precise control over the lighting of the image.
ControlNet is a separate extension, so you need to install it first.
Txt2img setup
After installing ControlNet, go to the txt2img page and generate an image as usual.
Click Send to img2img.
This copies the prompt, negative prompt, image size, and seed value to the img2img page.
Img2img settings
On the img2img page, navigate to the ControlNet section.
Upload the image you just saved to ControlNet Unit 0.
You can simply copy my configuration options.
Here we select the Depth control type: set the preprocessor to depth_zoe and the model to control_xxxx_depth.
Scroll up to the img2img canvas and delete the image.
Then use the drawing tool to draw a black and white template image.
White represents light.
As shown below:
Upload this image to the img2img canvas.
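If you'd rather create the template programmatically than draw it by hand, here is a minimal Pillow sketch; the horizontal white band is just an example shape, and anything white in the mask is treated as the lit area.

```python
from PIL import Image, ImageDraw

# Black canvas at the generation size; white marks where the light falls.
width, height = 512, 768
template = Image.new("RGB", (width, height), "black")
draw = ImageDraw.Draw(template)

# A horizontal white band roughly across the middle acts as a horizontal light source.
draw.rectangle([0, height // 3, width, height // 2], fill="white")

template.save("light_template.png")  # upload this to the img2img canvas
```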
Set Resize mode to Just resize.
Set Denoising strength to 0.9.
Click Generate.
You should get an image with a horizontal light source.
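The whole img2img-plus-ControlNet step can also be scripted. This is a rough sketch against the web UI API, assuming --api is enabled and the ControlNet extension accepts units through alwayson_scripts (its field names can vary between extension versions). The file names light_template.png and generated_portrait.png are placeholders for the template above and the saved txt2img result, and the depth model name should match whichever depth ControlNet model you have installed.

```python
import base64
import requests

API_URL = "http://127.0.0.1:7860"  # assumes the web UI was launched with --api

def b64_file(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": ("masterpiece,best quality,masterpiece,best quality,official art,"
               "extremely detailed CG unity 8k wallpaper,a beautiful woman,"),
    "init_images": [b64_file("light_template.png")],  # the black-and-white light template
    "resize_mode": 0,             # 0 = Just resize
    "denoising_strength": 0.9,
    "width": 512,
    "height": 768,
    "cfg_scale": 7,
    # ControlNet is an extension, so it is passed through alwayson_scripts.
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": b64_file("generated_portrait.png"),  # the saved txt2img result
                "module": "depth_zoe",
                "model": "control_v11f1p_sd15_depth",  # any installed depth ControlNet model
                "weight": 1.0,
            }]
        }
    },
}

resp = requests.post(f"{API_URL}/sdapi/v1/img2img", json=payload)
resp.raise_for_status()

with open("relit.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```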
If you don't want to create your own light template, you can search the web for black-and-white light source images:
For example, with the first light source image, we get the following result:
Notes
It is not necessary to use a depth control model. Other models, such as canny and lineart models, can also work. You can experiment with preprocessors to see which one works for you.
If you see unnatural colors, reduce the ControlNet weight.
Adjust the denoising strength and observe the effect.