ADetailer

ADetailer is an extension for the stable diffusion webui that does automatic masking and inpainting. It is similar to the Detection Detailer.

Install

You can install it directly from the Extensions tab.


Or (following the installation steps from Mikubill/sd-webui-controlnet):

  1. Open "Extensions" tab.
  2. Open "Install from URL" tab in the tab.
  3. Enter https://github.com/Bing-su/adetailer.git to "URL for extension's git repository".
  4. Press "Install" button.
  5. Wait 5 seconds, and you will see the message "Installed into stable-diffusion-webui\extensions\adetailer. Use Installed tab to restart".
  6. Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (The next time you can also use this method to update extensions.)
  7. Completely restart A1111 webui including your terminal. (If you do not know what is a "terminal", you can reboot your computer: turn your computer off and turn it on again.)

Options

Model, Prompts
  • ADetailer model: Determines what to detect. (None = disable)
  • ADetailer model classes: Comma-separated class names to detect. Only available when using YOLO World models. If blank, the default values are used (default = COCO 80 classes).
  • ADetailer prompt, negative prompt: Prompts and negative prompts to apply. If left blank, the same prompts as the input are used.
  • Skip img2img: Skips img2img. In practice, this works by changing the step count of img2img to 1. (img2img only)

Detection
  • Detection model confidence threshold: Only objects with a detection model confidence above this threshold are used for inpainting.
  • Mask min/max ratio: Only use masks whose area, as a fraction of the entire image area, falls between these ratios.
  • Mask only the top k largest: Only use the k objects with the largest bbox area. (0 to disable)

If you want to exclude objects in the background, try setting the min ratio to around 0.01.
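
For illustration, the three detection filters above roughly amount to the following. This is a minimal sketch; the detections, numbers, and variable names are made up for the example and are not ADetailer's internal code.

```python
# Illustrative sketch of the three detection filters; not ADetailer's internals.
detections = [
    # (confidence, mask area in pixels)
    (0.92, 52_000),
    (0.45, 1_200),
    (0.20, 30_000),
]
image_area = 512 * 512

confidence_threshold = 0.3
mask_min_ratio, mask_max_ratio = 0.01, 1.0  # min ratio ~0.01 drops small background objects
top_k = 2                                   # 0 disables the "top k largest" filter

kept = [
    (conf, area)
    for conf, area in detections
    if conf > confidence_threshold
    and mask_min_ratio <= area / image_area <= mask_max_ratio
]
kept.sort(key=lambda d: d[1], reverse=True)  # largest area first
if top_k:
    kept = kept[:top_k]

print(kept)  # only the detections that pass all three filters are inpainted
```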

Mask Preprocessing
  • Mask x, y offset: Moves the mask horizontally and vertically.
  • Mask erosion (-) / dilation (+): Enlarges or shrinks the detected mask. (opencv example)
  • Mask merge mode:
      • None: Inpaint each mask separately.
      • Merge: Merge all masks, then inpaint.
      • Merge and Invert: Merge all masks and invert the result, then inpaint.

Applied in this order: x, y offset → erosion/dilation → merge/invert.
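
As a rough illustration of that order, here is a minimal OpenCV sketch on a single binary mask. The file name, offsets, and kernel size are arbitrary placeholders, not ADetailer's actual values.

```python
import cv2
import numpy as np

# A single-channel binary mask (0 or 255), the same size as the generated image.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# 1. x, y offset: translate the mask horizontally and vertically.
dx, dy = 10, -5
shift = np.float32([[1, 0, dx], [0, 1, dy]])
mask = cv2.warpAffine(mask, shift, (mask.shape[1], mask.shape[0]))

# 2. erosion (-) / dilation (+): shrink or enlarge the mask.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(mask, kernel, iterations=4)   # a positive setting grows the mask
# mask = cv2.erode(mask, kernel, iterations=4)  # a negative setting shrinks it

# 3. merge / invert: combine multiple masks and optionally invert the result.
# merged = cv2.bitwise_or(mask, other_mask)
# inverted = cv2.bitwise_not(merged)
```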

Inpainting

Each option corresponds to an option on the inpaint tab, so please refer to the inpaint tab for details on how to use each one.

ControlNet Inpainting

You can use ControlNet inpainting if you have the ControlNet extension and ControlNet models installed.

The inpaint, scribble, lineart, openpose, tile, and depth ControlNet models are supported. Once you choose a model, the preprocessor is set automatically. This works separately from the model set by the ControlNet extension.

If you select Passthrough, the controlnet settings you set outside of ADetailer will be used.

Advanced Options

API request example: wiki/REST-API

[SEP], [SKIP], [PROMPT] tokens: wiki/Advanced
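
The wiki page above is the authoritative reference. As a minimal sketch, enabling ADetailer from the txt2img API looks roughly like this, assuming a local webui started with --api; the arg keys such as ad_model and ad_prompt follow the wiki and may differ between versions.

```python
import requests

payload = {
    "prompt": "a photo of a person",
    "steps": 20,
    "alwayson_scripts": {
        "ADetailer": {
            "args": [
                True,  # enable ADetailer for this request
                {
                    "ad_model": "face_yolov8n.pt",
                    "ad_prompt": "detailed face",
                },
            ]
        }
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
images = resp.json()["images"]  # base64-encoded output images
```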

Media

Model

Model                 | Target                | mAP 50                     | mAP 50-95
face_yolov8n.pt       | 2D / realistic face   | 0.660                      | 0.366
face_yolov8s.pt       | 2D / realistic face   | 0.713                      | 0.404
hand_yolov8n.pt       | 2D / realistic hand   | 0.767                      | 0.505
person_yolov8n-seg.pt | 2D / realistic person | 0.782 (bbox), 0.761 (mask) | 0.555 (bbox), 0.460 (mask)
person_yolov8s-seg.pt | 2D / realistic person | 0.824 (bbox), 0.809 (mask) | 0.605 (bbox), 0.508 (mask)
mediapipe_face_full   | realistic face        | -                          | -
mediapipe_face_short  | realistic face        | -                          | -
mediapipe_face_mesh   | realistic face        | -                          | -

The YOLO models can be found on huggingface (Bingsu/adetailer).

For a detailed description of the YOLOv8 models, see: https://docs.ultralytics.com/models/yolov8/#overview

YOLO World model: https://docs.ultralytics.com/models/yolo-world/
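
To try one of these YOLO detection models outside the webui, here is a minimal sketch. It assumes the ultralytics and huggingface_hub packages are installed; the image file name is a placeholder.

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Download one of the detection models listed above from the Bingsu/adetailer repo.
weights = hf_hub_download("Bingsu/adetailer", "face_yolov8n.pt")

model = YOLO(weights)
results = model("photo.jpg")  # run detection on an image

for box in results[0].boxes:
    print(box.xyxy, float(box.conf))  # bounding box coordinates and confidence
```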

Additional Model

Put your ultralytics yolo model in models/adetailer. The model name should end with .pt.

It must be a bbox detection or segmentation model, and all of its labels are used for detection.
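
A quick way to check a custom model before dropping it into models/adetailer (a sketch; the file name is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("my_custom_model.pt")  # placeholder file name
print(model.task)   # should be "detect" (bbox) or "segment"
print(model.names)  # all class labels; every label is used for detection
```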

How it works

ADetailer works in three simple steps.

  1. Create an image.
  2. Detect objects with a detection model and create a mask image.
  3. Inpaint using the image from 1 and the mask from 2.
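
For illustration, steps 2 and 3 roughly correspond to the following sketch. The model and file names are placeholders, and the inpainting call itself is left as a comment because it happens inside the webui.

```python
import numpy as np
from PIL import Image
from ultralytics import YOLO

image = Image.open("generated.png")  # 1. an image produced by txt2img / img2img

model = YOLO("face_yolov8n.pt")      # 2. detect objects ...
result = model(image)[0]

mask = np.zeros((image.height, image.width), dtype=np.uint8)
for x1, y1, x2, y2 in result.boxes.xyxy.int().tolist():
    mask[y1:y2, x1:x2] = 255         #    ... and build a mask image from the boxes

Image.fromarray(mask).save("mask.png")
# 3. the image from step 1 and the mask from step 2 are then sent to inpainting
```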

Development

ADetailer is developed and tested with the Stable Diffusion 1.5 model, and only against the latest version of the AUTOMATIC1111/stable-diffusion-webui repository.

License

ADetailer is a derivative work that uses two AGPL-licensed works (stable-diffusion-webui, ultralytics) and is therefore distributed under the AGPL license.

