
ComfyUI plugin: ComfyUI Impact node (II)

Published: 2024-07-28 18:07:12

Preface:
Learning ComfyUI is a long campaign, and ComfyUI Impact is a huge node library with many powerful built-in nodes, such as detectors, detail enhancers, the Preview Bridge, wildcards, hooks, the Image Sender and Image Receiver, and so on. By combining these nodes we can accomplish a great deal of work: automatic face detection and repair, regional enhancement, local repainting, crowd control, hairstyling, changing a model's clothes, and more. ComfyUI Impact is a big river that no one can bypass on the road to ComfyUI mastery, so this post will walk you through these nodes and how to use them. I wish you all good luck in your studies, and may you soon become a ComfyUI master!

catalogs
I. Installation
II. ToBasicPipe / FromBasicPipe / FromBasicPipe_V2 node
III. ToDetailerPipe / ToDetailerPipeSDXL / FromDetailerPipe / FromDetailerPipe_V2 / FromDetailer(SDXL/pipe) Node
IV. Edit BasicPipe / Edit DetailerPipe / Edit DetailerPipe(SDXL) node
V. BasicPipe->DetailerPipe / BasicPipe->DetailerPipe(SDXL) / DetailerPipe->BasicPipe Node
VI. Image Sender / Image Receiver node
VII. FaceDetailer / FaceDetailer(Pipe) node
VIII. Sample workflow

I. Installation
Method 1: Installation via ComfyUI Manager (recommended)
Open the Manager interface

Method 2: Installation with git clone command
Type cmd in the address bar of the ComfyUI/custom_nodes folder and press Enter to open a terminal in that directory.

Enter the following command in the terminal to start the download:
git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack

Advance tip: Pipe nodes are a very handy node type that Impact provides. When the model, clip, vae, prompts, and other elements are needed in many places, the canvas fills up with wires and becomes very hard to read. Pipe nodes solve this problem: a To node bundles the information at the start, a single wire carries it to wherever it is needed, and a From node unpacks it again. Only one wire is needed for the whole journey, which greatly improves workflow efficiency. This article focuses on the Pipe nodes, to help you master this node family and work more efficiently.
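To make the bundling idea concrete, here is a minimal plain-Python sketch of what a pipe conceptually does. The names `BasicPipe`, `to_basic_pipe`, and `from_basic_pipe` are illustrative analogies, not the plugin's actual internals:

```python
from typing import Any, NamedTuple

# Illustrative stand-in for a "basic pipe": five values bundled together
# so that only one wire has to run between nodes. The class and function
# names are analogies, not the plugin's actual internals.
class BasicPipe(NamedTuple):
    model: Any
    clip: Any
    vae: Any
    positive: Any
    negative: Any

def to_basic_pipe(model, clip, vae, positive, negative) -> BasicPipe:
    """Like the ToBasicPipe node: pack five inputs into one pipe."""
    return BasicPipe(model, clip, vae, positive, negative)

def from_basic_pipe(pipe: BasicPipe):
    """Like the FromBasicPipe node: unpack the pipe into its parts."""
    return pipe.model, pipe.clip, pipe.vae, pipe.positive, pipe.negative

pipe = to_basic_pipe("sdxl_base", "clip-g", "sdxl_vae", "a cat", "blurry")
model, clip, vae, positive, negative = from_basic_pipe(pipe)
print(positive)  # → a cat  (the original values come back out unchanged)
```

In this analogy, FromBasicPipe_V2 would be a function that returns both the unpacked values and the pipe itself, so the pipe can continue onward like an intermediate station.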
II. ToBasicPipe / FromBasicPipe / FromBasicPipe_V2 node
The ToBasicPipe node bundles the most basic information for building a workflow into a single pipe (Pipe). The FromBasicPipe node is its counterpart: it unpacks the information bundled by ToBasicPipe back into its original form. The FromBasicPipe_V2 node takes the flexibility one step further. A common analogy: ToBasicPipe and FromBasicPipe are like the start and end stations of a high-speed railway, with the bundled information travelling from one to the other, while FromBasicPipe_V2 is an intermediate station that lets the bundled information branch off in different directions, forming a network.

Input:
basic_pipe → a pipe containing the basic model, clip, vae, positive, and negative information
model → the model to bundle into the pipe; can be a checkpoint, LoRA, ControlNet model, etc.
clip → the clip model to bundle into the pipe
vae → the vae model to bundle into the pipe
positive → the positive prompt to bundle into the pipe
negative → the negative prompt to bundle into the pipe
Output:
basic_pipe → a pipe containing the basic model, clip, vae, positive, and negative information
model → the model unpacked from the pipe; can be a checkpoint, LoRA, ControlNet model, etc.
clip → the clip model unpacked from the pipe
vae → the vae model unpacked from the pipe
positive → the positive prompt unpacked from the pipe
negative → the negative prompt unpacked from the pipe
Example:

Usage Scenarios:
- Data standardization: In complex workflows, ensure that data are formatted in a uniform way so that subsequent nodes can be processed correctly.
- Cross-stage data transfer: Transferring data between multiple processing stages ensures that the results of each stage can be seamlessly integrated.
- Data reformatting: reformatting standardized data into the format required for a particular processing node in the data processing flow.
- Follow-up processing: In the follow-up processing phase, the extracted data are used for further analysis and processing.
- Complex Data Processing: Handles a wider range of data types and formats in a complex data processing flow, providing greater flexibility and control.
- Advanced Data Conversion: Provides more advanced data conversion functions to meet more complex application requirements.
By using ToBasicPipe, FromBasicPipe and FromBasicPipe_V2 nodes, you can standardize, transfer and reformat data in complex workflows, ensure smooth data flow between processing stages, and improve workflow flexibility and efficiency.

III. ToDetailerPipe / ToDetailerPipeSDXL / FromDetailerPipe / FromDetailerPipe_V2 / FromDetailer(SDXL/pipe) Node
These nodes handle data transfer and format conversion in image segmentation and refinement workflows, helping to manage and move data through complex image-processing pipelines. Simply put, they serve the same wire-tidying purpose as the Basic pipe nodes, but bundle more information.

Input:
model → the model to bundle into the pipe; can be a checkpoint, LoRA, ControlNet model, etc.
clip → the clip model to bundle into the pipe
vae → the vae model to bundle into the pipe
positive → the positive prompt to bundle into the pipe
negative → the negative prompt to bundle into the pipe
bbox_detector → the BBOX detection model to bundle into the pipe
sam_model_opt → the SAM model to bundle into the pipe
segm_detector_opt → the SEGM detection model to bundle into the pipe
detailer_hook → customized detailing behaviour to bundle into the pipe
Textbox → wildcard specification; ignored if left empty
Select to add LoRA → select a LoRA model to load
Select to add Wildcard → select a wildcard to add
refiner_model → the refiner model to bundle into the pipe
refiner_clip → the refiner model's clip model to bundle into the pipe
refiner_positive → the refiner model's positive prompt to bundle into the pipe
refiner_negative → the refiner model's negative prompt to bundle into the pipe
detailer_pipe → a pipe bundling the information from all the parameters above
Output:
model → the base model unpacked from the pipe; can be a checkpoint, LoRA, ControlNet model, etc.
clip → the clip model unpacked from the pipe
vae → the vae model unpacked from the pipe
positive → the base model's positive prompt unpacked from the pipe
negative → the base model's negative prompt unpacked from the pipe
bbox_detector → the BBOX detection model unpacked from the pipe
sam_model_opt → the SAM model unpacked from the pipe
segm_detector_opt → the SEGM detection model unpacked from the pipe
detailer_hook → customized detailing behaviour unpacked from the pipe
refiner_model → the refiner model unpacked from the pipe
refiner_clip → the refiner model's clip model unpacked from the pipe
refiner_positive → the refiner model's positive prompt unpacked from the pipe
refiner_negative → the refiner model's negative prompt unpacked from the pipe
detailer_pipe → a pipe bundling all of the parameter information
Note: refiner_model, refiner_clip, refiner_positive, and refiner_negative are the parameters belonging to the refiner model. One advantage of the ToDetailerPipeSDXL node is that it can combine the base model with the refiner model. This is shown in the following figure:

A pipe bundled by a Detailer-type node can be split apart to feed Basic-type nodes. As shown in the following figure, the model, clip, positive, and negative information of the base model and the refiner model in the FromDetailer(SDXL/pipe) node are wired into two separate ToBasicPipe nodes, so each model's information can be used individually and later merged again with the BasicPipe->DetailerPipe(SDXL) node.
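Under the tuple analogy from earlier, splitting a detailer pipe into two basic pipes is just regrouping fields. The sketch below is purely illustrative; the dictionary layout is an assumption for the analogy, not the plugin's real data structure:

```python
# Purely illustrative: a detailer pipe sketched as a dict carrying both
# base-model and refiner-model fields (an assumption for the analogy,
# not the plugin's real data structure).
detailer_pipe = {
    "model": "sdxl_base", "clip": "clip_base", "vae": "vae",
    "positive": "pos", "negative": "neg",
    "refiner_model": "sdxl_refiner", "refiner_clip": "clip_refiner",
    "refiner_positive": "pos_r", "refiner_negative": "neg_r",
}

# Like wiring FromDetailer(SDXL/pipe) into two ToBasicPipe nodes;
# the vae is shared by both pipes in this sketch.
base_pipe = tuple(detailer_pipe[k]
                  for k in ("model", "clip", "vae", "positive", "negative"))
refiner_pipe = tuple(detailer_pipe[k]
                     for k in ("refiner_model", "refiner_clip", "vae",
                               "refiner_positive", "refiner_negative"))

print(base_pipe[0], refiner_pipe[0])  # → sdxl_base sdxl_refiner
```

Merging the two back together would correspond to the BasicPipe->DetailerPipe(SDXL) node: recombining both tuples into one bundle.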

Usage Scenarios:
- Data standardization: In complex workflows, ensure that data are formatted in a uniform way so that subsequent nodes can be processed correctly.
- Cross-stage data transfer: Transferring data between multiple processing stages ensures that the results of each stage can be seamlessly integrated.
- High-resolution image processing: Suitable for the refinement of high-resolution images, ensuring that the data format is suitable for subsequent processing steps.
- Data reformatting: reformatting standardized data into the format required for a particular processing node in the data processing flow.
- Follow-up processing: In the follow-up processing phase, the extracted data are used for further analysis and processing.
- Complex Data Processing: Handles a wider range of data types and formats in a complex data processing flow, providing greater flexibility and control.
- Advanced Data Conversion: Provides more advanced data conversion functions to meet more complex application requirements.
- Data extraction after refinement: Extraction of high-resolution data from the results of refinement processing for further processing or analysis.
By using these nodes, data can be standardized, transferred and reformatted in complex image processing workflows, ensuring smooth data flow between processing stages and improving workflow flexibility and efficiency.

IV. Edit BasicPipe / Edit DetailerPipe / Edit DetailerPipe(SDXL) Node
These nodes are used for editing and adjusting data in the processing pipeline, allowing data to be modified during data transfer as necessary to meet specific processing needs.

All the parameters of the Pipe-type nodes have already been covered above, so they will not be repeated here; scroll back up if any parameter is unclear. The diagram below helps illustrate their use:

Caveats:
- Data consistency: Ensure that the input data format is consistent with expectations so that nodes can process and transfer data correctly.
- Node Configuration: Adjust the configuration parameters of the node according to the specific needs in order to obtain the best data processing results.
- Edit with care: During the editing process, take care to save a copy of the original data to prevent data loss or errors caused by misuse.

V. BasicPipe->DetailerPipe / BasicPipe->DetailerPipe(SDXL) / DetailerPipe->BasicPipe Node
These nodes are used to convert between different data pipeline formats for seamless data transfer and processing in complex image processing workflows.

All the parameters of the Pipe-type nodes have already been covered above, so they will not be repeated here just to pad the word count; scroll back up if any parameter is unclear. Diagrams of each node's use are attached below to aid understanding:


Caveats:
- Data consistency: Ensure that the input data format is consistent with expectations so that nodes can process and transfer data correctly.
- Node Configuration: Adjust the configuration parameters of the node according to the specific needs in order to obtain the best data processing results.
- Conversion accuracy: During the conversion process, ensure that the integrity and accuracy of the data are not compromised so that subsequent processing steps can use the converted data correctly.
By using these conversion nodes, seamless conversion and transfer of data formats can be realized in complex image processing workflows, ensuring smooth data flow between processing stages and improving workflow flexibility and efficiency.

VI. Image Sender / Image Receiver node
An image sender and an image receiver which, used as a pair, can transmit an image to any location in the workflow without a connecting wire.

Input:
image → the image to be transferred
Parameters:
filename_prefix → sets the prefix of the image name; during transfer the nodes give the transferred image a name
link_id → sets the id of the sender or receiver; one sender can feed multiple receivers at the same time, as long as their ids match
image → image name
save_to_workflow → choose whether to save the image into the workflow
image_data → when saving an image into the workflow, the image is converted to text and stored here (this text is saved together with the workflow)
trigger_always → controls whether the receiver is always triggered
Note: An image loaded with LoadImage normally cannot be used on another computer, because the image exists only locally; save_to_workflow solves this problem. It converts the image into text and saves it inside the workflow, so when someone downloads the workflow the image comes along with it and can be used. However, converting images to text is very inefficient, so use it with caution for complex, high-resolution images! The process also increases the size of the workflow significantly, so this feature is best reserved for simple images such as MASK images.
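The image-to-text conversion described above is typically a base64 encoding of the image bytes. Assuming that is the mechanism, the sketch below shows the round trip and why an embedded image inflates the workflow file by roughly a third (the node dict here is a simplified illustration, not the exact workflow JSON schema):

```python
import base64

# Stand-in for a small PNG mask; real images are far larger.
image_bytes = bytes(range(256)) * 4          # 1024 bytes of fake image data

# Embedding binary image data in a JSON workflow requires a text
# encoding; base64 grows the payload by roughly one third.
encoded = base64.b64encode(image_bytes).decode("ascii")
workflow = {"nodes": [{"type": "ImageReceiver", "image_data": encoded}]}

print(len(image_bytes), len(encoded))        # → 1024 1368

# The image survives the round trip when the workflow is reloaded.
roundtrip = base64.b64decode(workflow["nodes"][0]["image_data"])
assert roundtrip == image_bytes
```

The ~33% overhead, multiplied across megapixel images, is exactly why the note above recommends reserving save_to_workflow for simple images.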
Output:
IMAGE → Received image
MASK → Received image MASK data output port
Example:

Caveats:
- Data consistency: Ensure that the image data sent and received are in the same format so that the nodes can process and transfer the data correctly.
- Node Configuration: Configure the send target and receive source of the node according to the specific needs to realize the correct data transmission.
- Network connection: If remote processing is involved, ensure that the network connection is stable so that data can be transferred smoothly.

VII. FaceDetailer / FaceDetailer(Pipe) node
Optimized specifically for facial detail, with a built-in independent sampler that re-diffuses the face. The Pipe version of FaceDetailer can combine a base model and a refiner model for even more detailed repair.

Input:
image → the original image to be redrawn
model → the base model to load
clip → the clip model to load
vae → the vae model to load
positive → the positive prompt
negative → the negative prompt
bbox_detector → the BBOX detection model to load
sam_model_opt → the SAM model to load
segm_detector_opt → the SEGM detection model to load
detailer_hook → extension interface for more fine-grained tuning of the model
detailer_pipe → a Detailer pipe, i.e. a Basic pipe with detection information added
Parameters:
guide_size → reference size; target images smaller than this are scaled up to match it, while larger ones are skipped because they do not need detail processing
guide_size_for → sets what guide_size is measured against; when set to bbox it uses the bbox detected by the detector as the reference, and when set to crop_region it uses the crop region derived from the detected bbox
Note: when bbox is selected, the image scaled up according to crop_factor may end up several times larger than guide_size.
max_size → maximum size; limits the longest edge of the target image to at most max_size, a safety measure that keeps the region from becoming too large, especially when the bbox has a slender shape
seed → seed for the built-in KSampler
control_after_generate → controls how the seed changes: fixed keeps the seed constant, increment increases it by 1 each time, decrement decreases it by 1 each time, and randomize picks a random seed
steps → the number of denoising steps (which can also be read as the number of steps used to generate the image)
cfg → prompt guidance coefficient, i.e. how strongly the prompt influences the result; excessive values can hurt the output
sampler_name → selects the sampler
scheduler → selects the scheduler
denoise → denoising strength; the larger the value, the greater the impact and change it produces in the picture
feather → feather size
noise_mask → controls whether a noise mask is used during repair; although lower denoise values sometimes produce more natural results without the noise mask, it is usually recommended to leave this parameter enabled
force_inpaint → prevents steps from being skipped because of guide_size; useful when the goal is repair rather than refinement, since SEGS smaller than guide_size are not shrunk to match it but are instead repaired at their original size
bbox_threshold → detection threshold of the BBOX model
bbox_dilation → dilation parameter for the BBOX model's bounding box, used to expand the box
bbox_crop_factor → how many times the detected mask area the BBOX model includes as surrounding context in the detail repair; if this value is too small, the repair may fail because there is not enough surrounding context
sam_detection_hint → specifies which type of detection result the SAM model uses as a hint when generating the segmentation mask
sam_dilation → dilation parameter for the SAM model's bounding box, used to expand the box
sam_threshold → detection threshold of the SAM model
sam_bbox_expansion → how much the SAM model expands the boundary when generating contours, to better enclose the target object
sam_mask_hint_threshold → used together with sam_mask_hint_use_negative to set a threshold for detection_hint; mask values at or above the threshold in the mask region are read as positive hints
sam_mask_hint_use_negative → controls whether the SAM model uses negative hints to aid segmentation; when set to True, very small points among the masked points are read as negative hints, and some regions with a mask value of 0 are read as negative hints
drop_size → a size threshold for filtering out smaller targets; removes noise and small irrelevant targets, making detection more reliable and accurate
Textbox → enter wildcards; ignored if empty
refiner_ratio → when using SDXL, sets the fraction of the total process handled by the refiner model
cycle → number of sampling iterations; used together with a Detailer hook, this option can inject intermittent noise or gradually lower the denoise strength, building the basic structure first and then refining it
inpaint_model → enable this option when using an inpainting model, to ensure correct inpainting at denoise values below 1.0
noise_mask_feather → controls whether a feathering operation is applied to the mask used in the repair process
Note: noise_mask_feather does not guarantee a more natural image and may create artifacts at the edges; set it as needed!
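The interplay of guide_size, crop_factor, and max_size can be summarized as a small sizing rule. The function below is a simplified reconstruction of the behaviour described above, not the plugin's actual code:

```python
def plan_detail_pass(bbox_long, crop_factor=3.0, guide_size=512, max_size=1024):
    """Simplified sizing rule reconstructed from the description above.

    Returns (process, scale): whether a detail pass runs, and the
    upscale factor applied to the detected region.
    """
    if bbox_long >= guide_size:
        return False, 1.0                    # skipped: already detailed enough
    scale = guide_size / bbox_long           # upscale bbox to reach guide_size
    crop_long = bbox_long * crop_factor * scale  # the crop region grows too
    if crop_long > max_size:                 # safety clamp on the crop's edge
        scale *= max_size / crop_long
    return True, scale

print(plan_detail_pass(600))  # → (False, 1.0): larger than guide_size, skipped
print(plan_detail_pass(128))  # small face: upscaled, but clamped by max_size
```

In this model, force_inpaint would override the skip branch so that large regions are still repaired, at their original size, instead of being passed over.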
Output:
image → final redrawn image
cropped_refined → cropped and further processed image
cropped_enhanced_alpha → cropped and refined alpha channel
mask → mask information of the redrawn image
detailer_pipe → a Detailer pipe, i.e. a Basic pipe with detection information added
cnet_images → mask position map
Note: the cnet_images parameter was covered in ComfyUI Impact node (I), so please digest it carefully!
Main functions and usage:
- Input face image: the node accepts image data containing a face, usually obtained from the preceding detection or segmentation node.
- Refinement Processing: Refinement processing of the input face image, such as enhancing details, removing noise, adjusting lighting and so on.
- Pipeline Integration: Supports seamless integration with other nodes, suitable for complex image processing pipelines.
- Output optimized image: Output high quality face image after refinement process.
Caveats:
- Input data quality: Ensure that the input image data contains clear face information for optimal refinement.
- Node Configuration: Adjust the refinement processing parameters according to specific needs to achieve the best image quality enhancement results.
- Processing performance: Refinement processing may require high computational resources, ensuring that system performance is sufficient to support processing requirements.

VIII. Sample workflow
With the above nodes, you can build a simple "face repair" workflow.

This workflow covers all the nodes learned in this article, and studying it will deepen your understanding of them. The main idea: three FaceDetailers are used, and the input each one needs is produced by adding, modifying, and splitting information in the pipe; the sampler inside each FaceDetailer then repairs the facial details based on the face mask found by the detection model and on the prompts. The four images directly below the figure are, respectively: the original image, the FaceDetailer result using only the base model, the FaceDetailer result combining the base and refiner models, and the FaceDetailer result combining the base and refiner models with SEGS introduced, as shown below:



To strive for excellence is to surpass oneself. Perseverance is the key to success.