
ComfyUI plugin: ComfyUI LayerStyle nodes (Part 4)


Preface:

Learning ComfyUI is a long campaign, and ComfyUI LayerStyle is a powerful set of nodes designed for image work that brings Photoshop-style functionality into ComfyUI. The pack migrates almost all of Photoshop's commonly used functions to ComfyUI: it imitates Adobe Photoshop layer styles, provides color-adjustment functions (brightness, saturation, contrast, etc.), mask auxiliary tools, layer-compositing tools, workflow helper nodes, image-effect filters, and more. The goal is to centralize the workspace so that the basic functions of Photoshop can be carried out directly inside ComfyUI.

Contents

I. Installation

II. LayerMask: PersonMaskUltra node

III. LayerMask: PersonMaskUltraV2 node

IV. LayerMask: MaskGrow / MaskEdgeShrink nodes

V. LayerMask: PixelSpread node

VI. LayerMask: MaskByDifferent node

VII. LayerMask: MaskEdgeUltraDetail node

VIII. LayerMask: MaskEdgeUltraDetailV2 node

 

I. Installation

Method 1: Installation via ComfyUI Manager (recommended)

Open the Manager interface

[Images 1 and 2]

Method 2: Installation with git clone command

In the ComfyUI/custom_nodes directory, type cmd in the address bar and press Enter to open a terminal at that location.

[Image 3]

Enter the following line of code in the terminal to start the download

git clone https://github.com/chflame163/ComfyUI_LayerStyle.git

[Image 4]

 

 

II. LayerMask: PersonMaskUltra node

This node detects people in an image and generates the corresponding masks. Using advanced image-processing algorithms, it automatically recognizes the people in an image and generates a precise mask for each detected person, so the masks can be used later for region-specific processing or enhancement.

[Image 5]

Input:

images → input image

Parameters:

face → face recognition switch

hair → hair recognition switch

body → body recognition switch

clothes → clothes recognition switch

accessories → accessories (e.g. backpacks) recognition switch

background → background recognition switch

confidence → recognition threshold **lower values will output more mask range**

detail_range → edge detail range

black_point → edge black sampling threshold

white_point → edge white sampling threshold (black_point and white_point are illustrated in the sketch at the end of this section)

process_detail → setting this to False will skip edge processing to save runtime

Output:

images → Output images

mask → Output mask

Example: body recognition only picks up the parts of the skin that are exposed outside the clothing

[Images 6 and 7]

Caveats

- Detection model selection: choose the appropriate person-detection model for your specific needs to obtain the best detection results.

- Detection accuracy configuration: set the detection accuracy according to your needs so that the results are fine enough and meet expectations. Higher accuracy may require more computing resources.

- Input image quality: the quality of the input image affects person detection and mask generation; make sure the image is clear and the contrast between the person and the background is obvious.

- Processing performance: advanced person detection and mask generation may require significant computing resources; make sure system performance is sufficient for the processing requirements.

- Result checking: after detection and mask generation are complete, check the generated person masks to ensure that each mask region accurately corresponds to the detected person and that nothing is missing or misidentified.

By using the LayerMask: PersonMaskUltra node, efficient and highly accurate person detection and mask generation can be achieved in image-processing workflows.
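
The black_point and white_point parameters behave like a levels adjustment applied to the soft edge of the mask: alpha values at or below black_point are clamped to 0, values at or above white_point are clamped to 1, and everything in between is stretched linearly. Below is a minimal NumPy sketch of that remapping; the function name is invented for illustration and this is not the node's actual code.

import numpy as np

def remap_mask_edge(mask, black_point=0.01, white_point=0.99):
    # Illustrative sketch only -- not ComfyUI_LayerStyle's actual implementation.
    # mask: soft alpha mask as a float array in [0, 1].
    # Values <= black_point become 0, values >= white_point become 1, linear in between.
    out = (mask.astype(np.float32) - black_point) / max(white_point - black_point, 1e-6)
    return np.clip(out, 0.0, 1.0)

Lowering black_point keeps more of the faint edge pixels in the mask, while lowering white_point pushes more of the edge toward fully opaque.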

 

III. LayerMask: PersonMaskUltraV2 node

This node is an upgraded version of the previous one. It uses an improved person-detection model, which allows it to detect people in an image more accurately and to generate precise masks for subsequent processing.

[Image 8]

Input:

images → input image

Parameters:

face → face recognition switch

hair → hair recognition switch

body → body recognition switch

clothes → clothes recognition switch

accessories → accessories (e.g. backpacks) recognition switch

background → background recognition switch

confidence → recognition threshold **lower values will output more mask range**

detail_method → edge processing method **VITMatte, VITMatte(local), PyMatting and GuidedFilter are provided. After the first use of VITMatte has downloaded the model, VITMatte(local) can be used from then on**

detail_erode → Extent of inward erosion of mask edges **The larger the value, the greater the extent of inward repair**.

detail_dilate → extent of outward expansion of the mask edge **the larger the value, the larger the extent of outward repair** (see the sketch at the end of this section)

black_point → edge black sampling threshold

white_point → edge white sampling threshold

process_detail → setting this to False will skip edge processing to save runtime

Output:

images → Output images

mask → Output mask

Example:

[Image 9]
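
Conceptually, detail_erode and detail_dilate carve out a narrow "unknown" band around the mask edge, and the chosen detail_method (VITMatte, PyMatting or GuidedFilter) then resolves that band into a soft alpha. The following rough OpenCV sketch shows how such a band (a trimap) could be built; it only illustrates the idea and is not the node's implementation.

import cv2
import numpy as np

def edge_band_trimap(mask, detail_erode=6, detail_dilate=6):
    # Illustrative sketch only -- not ComfyUI_LayerStyle's actual implementation.
    # mask: HxW uint8 (255 = subject). Returns 0 = background, 255 = foreground, 128 = unknown band.
    mask = (mask > 127).astype(np.uint8) * 255
    sure_fg = cv2.erode(mask, np.ones((detail_erode, detail_erode), np.uint8))          # shrink inward
    sure_bg = 255 - cv2.dilate(mask, np.ones((detail_dilate, detail_dilate), np.uint8)) # grow outward
    trimap = np.full(mask.shape, 128, np.uint8)
    trimap[sure_fg > 0] = 255
    trimap[sure_bg > 0] = 0
    return trimap

Larger detail_erode/detail_dilate values widen the band, which gives the matting method more room to recover fine edge detail such as hair, at the cost of longer processing time.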

 

IV. LayerMask: MaskGrow / MaskEdgeShrink nodes

These two nodes optimize and refine a mask by expanding or contracting its edges, so that better results can be obtained in subsequent image-processing tasks.

[Image 10]

Input:

mask → input mask

Parameters:

invert_mask → whether to invert the mask or not

grow → expansion (positive values are outward expansion, negative values are inward contraction)

blur → degree of blurring

shrink_level → shrink smoothing level

soft → smooth amplitude

edge_shrink → edge shrinkage

edge_reserve → preserve edge detail magnitude (100 is fully preserved, 0 is not preserved at all)

Output:

mask → Output mask

Example:

[Images 11, 12 and 13]

Caveats

Input mask quality: the quality of the input mask affects the expansion; make sure the mask edges are clear.

Expansion parameter configuration: set the number of pixels to expand according to your specific needs so that the expansion effect meets expectations.

Shrink parameter configuration: set the number of pixels to shrink according to your specific needs so that the shrink effect meets expectations.

By using the LayerMask: MaskGrow/MaskEdgeShrink node, you can flexibly adjust the edges of the mask and optimize the masking effect during the image processing workflow to improve the precision and quality of image processing.
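
To make the grow and blur parameters more concrete, here is a rough OpenCV sketch of what "expand the mask by a few pixels, then soften the edge" amounts to. The helper name and kernel sizing are invented for illustration; this is not the node's actual implementation.

import cv2
import numpy as np

def grow_mask(mask, grow=4, blur=8):
    # Illustrative sketch only -- not ComfyUI_LayerStyle's actual implementation.
    # mask: HxW uint8 (255 = subject). Positive grow dilates, negative grow erodes, blur feathers the edge.
    mask = (mask > 127).astype(np.uint8) * 255
    if grow != 0:
        kernel = np.ones((abs(grow) * 2 + 1, abs(grow) * 2 + 1), np.uint8)
        mask = cv2.dilate(mask, kernel) if grow > 0 else cv2.erode(mask, kernel)
    if blur > 0:
        mask = cv2.GaussianBlur(mask, (blur * 2 + 1, blur * 2 + 1), 0)
    return mask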

 

V. LayerMask: PixelSpread node

This node focuses on expanding or contracting the edge pixels of the image mask. By adjusting the pixel distribution of the mask, the area covered by the mask can be increased or decreased to optimize the image processing results.

[Image 14]

Input:

image → input image

mask → input mask

Parameters:

invert_mask → whether to invert the mask or not

mask_grow → how much the mask expands

Output:

image → output image

Example:

[Image 15]

Caveats

- Adjustment parameter configuration: set the number of pixels to expand or shrink according to your specific needs so that the adjustment meets expectations.

- Input mask quality: the quality of the input mask affects the adjustment; make sure the mask edges are clear.

- Operation type selection: choose the expand or shrink operation as needed to achieve the desired mask adjustment.

- Processing performance: edge-pixel adjustment may require some computing resources; make sure system performance is sufficient for the processing requirements.

- Result checking: after the adjustment is complete, check the generated mask data to make sure the mask area is as expected and nothing was misadjusted or left incomplete.

By using the LayerMask: PixelSpread node, you can achieve efficient mask edge adjustment in your image processing workflow to optimize the accuracy and effectiveness of image processing.
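
If "spreading" is read as copying, for each pixel just outside the subject, the color of its nearest subject pixel (a common way to prevent background colors from bleeding into a composite when the mask is later grown or feathered), a sketch of that general technique could look like the following. This is only an illustration under that interpretation, with an invented helper name, and is not necessarily the node's algorithm.

import cv2
import numpy as np

def spread_pixels(image, mask):
    # Illustrative sketch only -- not ComfyUI_LayerStyle's actual implementation.
    # image: HxWx3 uint8, mask: HxW uint8 (255 = subject).
    fg = mask > 127
    seeds = np.where(fg, 0, 255).astype(np.uint8)   # subject pixels act as zero-valued "seeds"
    _, labels = cv2.distanceTransformWithLabels(
        seeds, cv2.DIST_L2, 5, labelType=cv2.DIST_LABEL_PIXEL)
    coords = np.zeros((labels.max() + 1, 2), np.int64)
    coords[labels[seeds == 0]] = np.argwhere(seeds == 0)   # label -> coordinate of its seed pixel
    ys, xs = coords[labels, 0], coords[labels, 1]
    spread = image[ys, xs]                                 # nearest subject color everywhere
    return np.where(fg[..., None], image, spread)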

 

VI. LayerMask: MaskByDifferent node

This node generates a difference mask by comparing two images. It recognizes the changes or differences between the images and produces a difference mask for subsequent processing.

[Image 16]

Input:

image_1 → input the first image

image_2 → input the second image

Parameters:

gain → Calculate Gain **Turn up this value and weak differences will be presented more dramatically**

fix_gap → fix internal gaps in the mask **higher values will fix larger gaps**

fix_threshold → threshold used when fixing the mask

main_subject_detect → Setting this item to True will enable subject detection, ignoring differences outside the subject.

Output:

mask → Output mask

Example:

[Images 17 and 18]

Caveats

Comparison parameter configuration: set the comparison threshold and sensitivity according to your specific needs so that the recognition meets expectations. A lower threshold may cause false detections, and a higher threshold may cause missed detections.

Input image quality: the quality of the input images affects the comparison; make sure the images are clear and the changed parts are visible.

Processing performance: image comparison and mask generation may require some computing resources; make sure system performance is sufficient for the processing requirements.

Result checking: after the comparison and mask generation are complete, check the generated difference mask to ensure that each mask region accurately corresponds to the changes between the two images and that there are no false detections or omissions.

By using the LayerMask: MaskByDifferent node, efficient change detection and difference mask generation can be implemented in the image processing workflow.
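
As a mental model, gain, fix_threshold and fix_gap map onto three familiar steps: amplify the per-pixel difference, threshold it into a mask, and close small holes. The simplified sketch below illustrates those steps; the helper name and parameter scaling are invented for illustration and are not the node's exact math.

import cv2
import numpy as np

def difference_mask(image_1, image_2, gain=1.5, fix_threshold=0.3, fix_gap=4):
    # Illustrative sketch only -- not ComfyUI_LayerStyle's actual implementation.
    # image_1, image_2: HxWx3 uint8 images of the same size.
    diff = cv2.absdiff(image_1, image_2).astype(np.float32) / 255.0
    diff = np.clip(diff.mean(axis=2) * gain, 0.0, 1.0)        # average over channels, amplify with gain
    mask = (diff >= fix_threshold).astype(np.uint8) * 255     # binarize
    if fix_gap > 0:                                           # close small internal gaps
        kernel = np.ones((fix_gap * 2 + 1, fix_gap * 2 + 1), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask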

 

VII. LayerMask: MaskEdgeUltraDetail node

This node focuses on ultra-fine processing of mask edges. Using advanced edge-processing algorithms, it meticulously optimizes and enhances the mask edges, producing smoother and more precise mask boundaries.

[Image 19]

Input:

image → input image

mask → input mask

Parameters:

method → provides both PyMatting and OpenCV-GuidedFilter methods to process the edges **PyMatting is slower to process, but for video, it is recommended to use this method to get smoother mask sequences**.

mask_grow → how much the mask expands **Positive values expand outward, negative values shrink inward. For rougher masks, negative values are usually used to shrink the edges for better results **

fix_gap → fix the gap in the mask **If there is a more obvious gap in the mask, turn up this value appropriately**.

fix_threshold → threshold for fixing masks

detail_range → edge detail range

black_point → edge black sampling threshold

white_point → edge white sampling threshold

Output:

image → output image

mask → Output mask

Example:

[Image 20]

Caveats

- Refinement parameter configuration: set the degree of refinement and the smoothing strength according to your specific needs so that the result meets expectations.

- Input mask quality: the quality of the input mask affects the refinement; make sure the mask edges are sharp and free of severe noise or artifacts.

- Processing performance: edge refinement may require significant computing resources; make sure system performance is sufficient for the processing requirements.

- Result checking: after refinement is complete, check the generated mask data to ensure that the edges are detailed and smooth and that nothing was mishandled or lost.

By using the LayerMask: MaskEdgeUltraDetail node, you can achieve efficient mask edge refinement in your image processing workflow, optimizing the precision and effectiveness of image processing.
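
For the OpenCV-GuidedFilter method in particular, the underlying idea is that the color image guides the smoothing of the mask, so the refined alpha edge snaps to real image contours rather than the rough mask outline. The minimal sketch below uses the guided filter from opencv-contrib-python (assumed to be installed); it illustrates the technique, with an invented helper name, and is not the node's code.

import cv2
import numpy as np

def refine_edge_guided(image, mask, radius=8, eps=1e-4):
    # Illustrative sketch only -- not ComfyUI_LayerStyle's actual implementation.
    # image: HxWx3 uint8 guide, mask: HxW uint8 rough mask; returns a refined float alpha in [0, 1].
    guide = image.astype(np.float32) / 255.0
    alpha = mask.astype(np.float32) / 255.0
    refined = cv2.ximgproc.guidedFilter(guide, alpha, radius, eps)
    return np.clip(refined, 0.0, 1.0)

A smaller radius keeps the refinement close to the original edge; a larger radius lets the alpha follow broader image structures.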

 

VIII. LayerMask: MaskEdgeUltraDetailV2 node

This node is an upgraded version of the previous one. Through more advanced, high-precision edge-processing algorithms, it further optimizes and refines the mask edges, making them smoother and more precise for higher-quality results in subsequent image processing.

[Image 21]

Input:

image → input image

mask → input mask

Parameters:

method → edge processing method **VITMatte and VITMatte(local) methods are added. After the first use of VITMatte has downloaded the model, VITMatte(local) can be used from then on**

mask_grow → how much the mask expands **Positive values expand outward, negative values shrink inward. For rougher masks, negative values are usually used to shrink the edges for better results **

fix_gap → fix the gap in the mask **If there is a more obvious gap in the mask, turn up this value appropriately**.

fix_threshold → threshold for fixing masks

edge_erode → extent of inward erosion of the mask edge **the larger the value, the larger the extent of the inward repair **

edge_dilate → extent of outward expansion of the mask edge **the larger the value, the larger the extent of the outward repair **

black_point → edge black sampling threshold

white_point → edge white sampling threshold

Output:

image → output image

mask → Output mask

Example:

[Image 22]

**To go beyond oneself is to strive for excellence. Perseverance is the key to success.**