
ComfyUI plugin: ComfyUI layer style node (3)


Preface:

Learning ComfyUI is a long battle, and ComfyUI Layer Style is a powerful set of nodes designed for image design work that brings Photoshop-style functionality into ComfyUI. The suite migrates most of Photoshop's functions to ComfyUI: imitation Adobe Photoshop layer styles, color adjustment functions (brightness, saturation, contrast, etc.), mask auxiliary tools, layer compositing tools, workflow-related auxiliary nodes, image-effect filters, and more. The aim is a centralized work platform that lets us perform Photoshop's basic functions directly in ComfyUI.

Contents

I. Installation

II. LayerMask: MaskBoxDetect node

III. LayerMask: SegmentAnythingUltra node

IV. LayerMask: SegmentAnythingUltraV2 node

V. LayerMask: RemBgUltra node

VI. LayerMask: RemBgUltraV2 node

VII. LayerMask: BiRefNetUltra node

VIII. LayerMask: Shadow & Highlight Mask node

 

I. Installation

Method 1: Installation via ComfyUI Manager (recommended)

Open the Manager interface, then find ComfyUI_LayerStyle in the Custom Nodes Manager and install it.


Method 2: Installation with git clone command

Type cmd in the address bar of the ComfyUI/custom_nodes directory and press Enter to open a terminal at that location.


Enter the following command in the terminal to start the download:

git clone https://github.com/chflame163/ComfyUI_LayerStyle.git


II. LayerMask: MaskBoxDetect node

This node is designed to automatically detect the target region of a mask and generate a bounding box for it, for use in subsequent processing.


Input:

mask → input mask

Parameters:

detect → detection method **min_bounding_rect is the minimum bounding rectangle of the mask shape, max_inscribed_rect is the maximum inscribed rectangle of the mask shape, and mask_area is the effective area of the mask pixels**.

x_adjust → correct the horizontal offset after detection

y_adjust → correct the vertical offset after detection

scale_adjust → correct the scale offset after detection

Output:

box_preview → preview image of the detection result. Red shows the detected result, and green shows the output after corrections are applied.

x_percent → horizontal position output as a percentage

y_percent → vertical position output as a percentage

width → width output

height → height output

x → x-coordinate output of the top-left corner

y → y-coordinate output of the top-left corner
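
To make the detection concrete, here is a minimal sketch of the min_bounding_rect idea (assuming NumPy; the function name and the exact correction and percentage conventions are illustrative, not the node's actual implementation): find the smallest rectangle enclosing all non-zero mask pixels, then apply the scale and offset corrections.

```python
import numpy as np

def detect_mask_box(mask, x_adjust=0, y_adjust=0, scale_adjust=1.0):
    """Minimum bounding rectangle of the non-zero mask area,
    followed by the offset/scale corrections (illustrative only)."""
    ys, xs = np.nonzero(mask > 0)              # coordinates of masked pixels
    x, y = int(xs.min()), int(ys.min())        # top-left corner
    w, h = int(xs.max()) - x + 1, int(ys.max()) - y + 1
    cx, cy = x + w / 2, y + h / 2              # scale around the box center
    w, h = int(w * scale_adjust), int(h * scale_adjust)
    x, y = int(cx - w / 2) + x_adjust, int(cy - h / 2) + y_adjust
    # Percentage outputs relative to image size (here taken at the box center).
    x_percent = 100.0 * (x + w / 2) / mask.shape[1]
    y_percent = 100.0 * (y + h / 2) / mask.shape[0]
    return x, y, w, h, x_percent, y_percent

mask = np.zeros((64, 64)); mask[10:30, 20:50] = 1.0
print(detect_mask_box(mask, x_adjust=2, scale_adjust=1.1))
```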

Example:


Caveats

- Detection method selection: Choose the detection method that fits your specific needs to obtain the best detection results.

- Detection threshold configuration: Set the detection threshold according to your needs to ensure the results are accurate and meet expectations. Higher thresholds may lead to missed detections, while lower thresholds may lead to false detections.

- Input image quality: The quality of the input image affects the effectiveness of detection; make sure the image is clear and contains a well-defined target object.

- Processing performance: Object detection may require considerable computational resources; make sure system performance is sufficient to support the processing requirements.

- Result checking: After detection is complete, check the generated mask box data to ensure that each mask box accurately corresponds to the detected object, with no omissions or false detections.

By using the LayerMask: MaskBoxDetect node, efficient object detection and mask box generation can be realized in the image processing workflow, enhancing the automation and accuracy of image processing.

III. LayerMask: SegmentAnythingUltra node

This node is designed to accurately segment the objects in an image by means of advanced image segmentation algorithms and generate the corresponding masks for use in subsequent processing.


Input:

image → input image

Parameters:

sam_model → select SAM model

ground_dino_model → select Grounding DINO model

threshold → threshold of the SAM model

detail_range → edge detail range

black_point → edge black sampling threshold

white_point → edge white sampling threshold

process_detail → setting this to False will skip edge processing to save runtime

prompt → SAM's prompt input

Output:

image → output image

mask → mask of the segmented target

Note: This node requires the models used by ComfyUI Segment Anything; install them following that plugin's instructions. If ComfyUI Segment Anything is already installed correctly, this step can be skipped.

Download the five required bert-base-uncased files, including tokenizer_config.json, to the ComfyUI/models/bert-base-uncased folder.

Download the GroundingDINO_SwinT_OGC config file, GroundingDINO_SwinT_OGC model, GroundingDINO_SwinB config file, and GroundingDINO_SwinB model to the ComfyUI/models/grounding-dino folder.

Download the sam_vit_h, sam_vit_l, sam_vit_b, sam_hq_vit_h, sam_hq_vit_l, sam_hq_vit_b, and mobile_sam model files to the ComfyUI/models/sams folder.
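
To give black_point and white_point a concrete meaning, here is a minimal sketch of a levels-style edge remap (assuming NumPy and a float mask in the 0-1 range; the function name is illustrative and this is not the node's actual code):

```python
import numpy as np

def remap_mask_levels(mask, black_point=0.15, white_point=0.99):
    """Levels-style remap of a soft mask edge.

    Values at or below black_point become 0 (fully transparent),
    values at or above white_point become 1 (fully opaque), and the
    band in between is stretched linearly, tightening the edge.
    """
    stretched = (mask - black_point) / max(white_point - black_point, 1e-6)
    return np.clip(stretched, 0.0, 1.0)

# A blurry edge gradient gets pushed toward hard 0/1 values.
edge = np.linspace(0.0, 1.0, 11)
print(remap_mask_levels(edge, black_point=0.3, white_point=0.7))
```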

Example:


Caveats

- Segmentation model selection: Select the segmentation model suited to your specific needs to get the best segmentation results.

- Level of detail configuration: Set the level of detail of the segmentation according to your needs to ensure the results are fine and meet expectations. Higher levels of detail may require more computing resources.

- Input image quality: The quality of the input image affects the segmentation results; make sure the image is clear and contains well-defined target objects.

- Processing performance: Advanced segmentation may require considerable computational resources; make sure system performance is sufficient to support the processing requirements.

- Result checking: After segmentation is complete, check the generated masks to make sure each mask region corresponds exactly to the segmented object, with no omissions or missegmentations.

By using the LayerMask: SegmentAnythingUltra node, you can realize efficient advanced object segmentation in the image processing workflow, improve the degree of automation and accuracy of image processing, and meet the needs of various complex image processing.

IV. LayerMask: SegmentAnythingUltraV2 node

This node is designed to accurately segment objects in an image through more efficient and accurate image segmentation techniques and generate the corresponding masks for use in subsequent processing.


Input:

image → input image

Parameters:

sam_model → select SAM model

ground_dino_model → select Grounding DINO model

threshold → threshold of the SAM model

detail_method → edge processing method

detail_erode → Extent of inward erosion of mask edges **The larger the value, the greater the extent of inward repair**.

detail_dilate → Extent of outward expansion of the mask edge **The larger the value, the larger the extent of the outward repair**.

black_point → edge black sampling threshold

white_point → edge white sampling threshold

process_detail → setting this to False will skip edge processing to save runtime

prompt → SAM's prompt input

Output:

image → output image

mask → mask of the segmented target
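
To build intuition for detail_erode and detail_dilate, here is a minimal sketch using OpenCV morphology (assuming opencv-python; the function name is illustrative and the node's real edge pipeline is more involved): erosion shrinks the mask inward, dilation grows it back outward, and the band between the two is where the edge gets re-estimated.

```python
import cv2
import numpy as np

def adjust_mask_edge(mask, detail_erode=6, detail_dilate=6):
    """Shrink the mask inward, then grow it outward (illustrative).

    A larger detail_erode eats further into the subject, removing
    edge halos; a larger detail_dilate expands the mask back out.
    """
    mask_u8 = (np.clip(mask, 0, 1) * 255).astype(np.uint8)
    if detail_erode > 0:
        mask_u8 = cv2.erode(mask_u8, np.ones((detail_erode,) * 2, np.uint8))
    if detail_dilate > 0:
        mask_u8 = cv2.dilate(mask_u8, np.ones((detail_dilate,) * 2, np.uint8))
    return mask_u8.astype(np.float32) / 255.0

mask = np.zeros((64, 64), np.float32); mask[16:48, 16:48] = 1.0
print(adjust_mask_edge(mask).sum())
```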

Example:


Caveats

- Segmentation model selection: Select the segmentation model suited to your specific needs to get the best segmentation results.

- Level of detail configuration: Set the level of detail of the segmentation according to your needs to ensure the results are fine and meet expectations. Higher levels of detail may require more computing resources.

- Input image quality: The quality of the input image affects the segmentation results; make sure the image is clear and contains well-defined target objects.

- Processing performance: Advanced segmentation may require considerable computational resources; make sure system performance is sufficient to support the processing requirements.

- Result checking: After segmentation is complete, check the generated masks to make sure each mask region corresponds exactly to the segmented object, with no omissions or missegmentations.

By using the LayerMask: SegmentAnythingUltraV2 node, you can realize efficient advanced object segmentation in the image processing workflow, improve the degree of automation and accuracy of image processing, and satisfy a variety of complex image processing needs.

V. LayerMask: RemBgUltra node

This node uses efficient image processing algorithms to automatically remove the background from an image and keep only the foreground objects, making image processing more flexible and professional.


Input:

image → input image

Parameters:

detail_range → edge detail range

black_point → edge black sampling threshold

white_point → edge white sampling threshold

process_detail → setting this to False will skip edge processing to save runtime

Output:

image → output image

mask → mask of the segmented target
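
A typical way to use the two outputs together is to place the mask in the image's alpha channel, producing a transparent-background cutout. A minimal sketch (assuming NumPy arrays with float values in 0-1; the function name is illustrative):

```python
import numpy as np

def apply_mask_as_alpha(image, mask):
    """Combine an RGB image (H, W, 3) and a mask (H, W) into RGBA.

    Background pixels (mask == 0) become fully transparent, so the
    result can be saved as a PNG cutout or composited onto a new
    background later in the workflow.
    """
    return np.dstack([image, mask])

image = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64)); mask[16:48, 16:48] = 1.0
print(apply_mask_as_alpha(image, mask).shape)  # (64, 64, 4)
```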

Example: To use this node you need to download the BRIA Background Removal v1.4 model file to the ComfyUI/models/rmbg/RMBG-1.4 folder.


Caveats

- Processing model selection: Select the appropriate background removal model according to the specific needs to get the best processing results.

- Removal Intensity Configuration: Set the intensity of background removal according to specific needs to ensure that the edges of foreground objects are processed naturally and do not affect the overall quality.

- Input Image Quality: The quality of the input image affects the effectiveness of background removal; make sure that the image is clear and that the foreground objects contrast with the background.

- Processing performance: Advanced background removal may require considerable computational resources; make sure system performance is sufficient to support the processing requirements.

- Result checking: After background removal is complete, check the resulting image to make sure the foreground objects are intact and the background is removed cleanly, with no residue or mistakenly removed parts.

By using the LayerMask: RemBgUltra node, you can realize efficient background removal in the image processing workflow, improve the automation and accuracy of image processing, and meet various complex image processing needs.

VI. LayerMask: RemBgUltraV2 node

This node handles advanced background removal tasks and is an upgraded version of the LayerMask: RemBgUltra node. With improved image processing algorithms it removes the background more accurately and efficiently, leaving only the foreground objects.


Input:

image → input image

Parameters:

detail_method → edge processing method

detail_erode → Extent of inward erosion of mask edges **The larger the value, the greater the extent of inward repair**.

detail_dilate → Extent of outward expansion of the mask edge **The larger the value, the larger the extent of the outward repair**.

black_point → edge black sampling threshold

white_point → edge white sampling threshold

process_detail → setting this to False will skip edge processing to save runtime

Output:

image → output image

mask → mask of the segmented target

Example:


VII. LayerMask: BiRefNetUltra node

This node is an advanced image processing node that performs high-precision image segmentation and background removal via the BiRefNet (Bilateral Reference Network) model.


Input:

image → input image

Parameters:

detail_method → edge processing method **VITMatte, VITMatte(local), PyMatting, and GuidedFilter are provided. After the first use of VITMatte the model will have been downloaded, and VITMatte(local) can be used thereafter**.

detail_erode → Extent of inward erosion of mask edges **The larger the value, the greater the extent of inward repair**.

detail_dilate → Extent of outward expansion of the mask edge **The larger the value, the larger the extent of the outward repair**.

black_point → edge black sampling threshold

white_point → edge white sampling threshold

process_detail → setting this to False will skip edge processing to save runtime

Output:

image → output image

mask → mask of the segmented target
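
Of the detail_method options, GuidedFilter is the easiest to reproduce outside the node. A minimal sketch (assuming opencv-contrib-python, which provides cv2.ximgproc; the function name and parameter values are illustrative) that refines a coarse mask against the original image:

```python
import cv2
import numpy as np

def guided_filter_refine(image, mask, radius=8, eps=1e-4):
    """Refine a coarse mask using the image as the guide.

    The guided filter transfers edges from the guide image into the
    mask, so mask borders snap to real object contours instead of
    staying blocky. Requires opencv-contrib-python for cv2.ximgproc.
    """
    guide = np.clip(image, 0, 1).astype(np.float32)
    src = np.clip(mask, 0, 1).astype(np.float32)
    refined = cv2.ximgproc.guidedFilter(guide, src, radius, eps)
    return np.clip(refined, 0.0, 1.0)

image = np.random.rand(64, 64, 3).astype(np.float32)
mask = np.zeros((64, 64), np.float32); mask[16:48, 16:48] = 1.0
print(guided_filter_refine(image, mask).shape)
```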

Example:


Caveats

- Reference Image Selection: Select a reference image similar to the target image to help improve the accuracy of segmentation and background removal.

- Processing model selection: Select the appropriate BiRefNet model according to your specific needs to get the best processing results.

- Level of detail configuration: Set the level of detail of the segmentation according to your needs to ensure the results are fine and meet expectations. Higher levels of detail may require more computing resources.

- Input image quality: The quality of the input and reference images affects the segmentation results; make sure the images are clear and the contrast between foreground objects and background is obvious.

By using the LayerMask: BiRefNetUltra node, efficient and highly accurate image segmentation and background removal can be achieved in image processing workflows.

VIII. LayerMask: Shadow & Highlight Mask node

This node generates corresponding masks by recognizing the bright and dark parts of an image, so that these masks can be used in subsequent processing for region-specific adjustments or enhancement.


Input:

image → input image

mask → input mask

Parameters:

shadow_level_offset → offset of the shadow cutoff level **larger values pull more of the brighter areas into the shadow mask**

shadow_range → transition range of the shadow

highlight_level_offset → offset of the highlight cutoff level **smaller values pull more of the darker areas into the highlight mask**

highlight_range → transition range of the highlight

Output:

shadow_mask → shadow (dark area) mask output

highlight_mask → highlight (bright area) mask output
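
To illustrate how such masks can be derived (a rough sketch assuming NumPy; the function name, cut points, ranges, and transfer curve are illustrative, not the node's actual constants): compute per-pixel luminance, then build a soft mask around the dark end and another around the bright end, where the level offsets move the cut point and the ranges set the width of the soft transition.

```python
import numpy as np

def shadow_highlight_masks(image, shadow_cut=0.3, shadow_range=0.15,
                           highlight_cut=0.7, highlight_range=0.15):
    """Soft masks for the dark and bright parts of an RGB image (0-1 float).

    shadow_cut / highlight_cut play the role of the level offsets
    (moving them grows or shrinks the selected area); the *_range
    values control the width of the soft transition band.
    """
    lum = image @ np.array([0.299, 0.587, 0.114])   # Rec. 601 luminance
    # 1 in deep shadow, fading to 0 across [shadow_cut, shadow_cut + shadow_range].
    shadow_mask = np.clip((shadow_cut + shadow_range - lum) / shadow_range, 0, 1)
    # 0 below the cut, rising to 1 across [highlight_cut - highlight_range, highlight_cut].
    highlight_mask = np.clip(
        (lum - (highlight_cut - highlight_range)) / highlight_range, 0, 1)
    return shadow_mask, highlight_mask

image = np.random.rand(32, 32, 3)
s, h = shadow_highlight_masks(image)
print(s.mean(), h.mean())
```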

Example:


Caveats

- Threshold configuration: Set the thresholds for shadows and highlights according to your specific needs to ensure the recognition results are accurate and as expected. Lower thresholds may result in overly large shadow areas, and higher thresholds may result in overly small highlight areas.

- Input image quality: The quality of the input image affects how well shadows and highlights are recognized; make sure the image is clear and its brightness is evenly distributed.

- Processing performance: Shadow and highlight recognition may require some computational resources; make sure system performance is sufficient to support the processing requirements.

- Result checking: After recognition and mask generation are complete, check the generated shadow and highlight masks to ensure each mask region corresponds accurately to the recognized shadow or highlight region, with no missing or misrecognized parts.

By using the LayerMask: Shadow & Highlight Mask node, it is possible to achieve efficient shadow and highlight region identification in the image processing workflow, generating accurate masks for subsequent processing.

**To go beyond oneself is to strive for excellence. Perseverance is the key to success.**