github.com/wang-chaoyang/RefLDMSeg

Explore In-Context Segmentation via Latent Diffusion Models

arXiv | Project Website

AAAI 2025

Requirements

  • Install torch==2.1.0.
  • Install the remaining pip packages via pip install -r requirements.txt, and install alpha_clip.
  • Our model is built on Stable Diffusion; download it and put it into datasets/pretrain. Put the alpha_clip checkpoints into datasets/pretrain/alpha-clip. A setup sketch follows this list.
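
For reference, a minimal setup sketch, assuming a fresh Python environment (the alpha_clip install line and the checkpoint sources are placeholders; follow the official Stable Diffusion and alpha_clip releases for the actual downloads):

# pinned torch version and the remaining pip requirements
pip install torch==2.1.0
pip install -r requirements.txt

# install alpha_clip following its own repository instructions (placeholder)
# pip install -e path/to/AlphaCLIP

# create the expected checkpoint directories
mkdir -p datasets/pretrain/alpha-clip
# put the Stable Diffusion weights under datasets/pretrain
# and the alpha_clip checkpoints under datasets/pretrain/alpha-clip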

Data Preparation

Please download the following datasets: COCO 2014, DAVIS16, VSPW, and PASCAL (PASCAL VOC 2012 plus SBD). Then download the meta files. Put everything under datasets and arrange it as shown below.

datasets
├── pascal
│   ├── JPEGImages
│   ├── SegmentationClassAug
│   └── metas
├── davis16
│   ├── JPEGImages
│   ├── Annotations
│   └── metas
├── vspw
│   ├── images
│   ├── masks
│   └── metas
└── coco20i
    ├── annotations
    │   ├── train2014
    │   └── val2014
    ├── metas
    ├── train2014
    └── val2014
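
A minimal sketch for creating this skeleton before copying the downloaded data in (directory names taken from the tree above; downloading the datasets themselves is not shown):

mkdir -p datasets/pascal/{JPEGImages,SegmentationClassAug,metas}
mkdir -p datasets/davis16/{JPEGImages,Annotations,metas}
mkdir -p datasets/vspw/{images,masks,metas}
mkdir -p datasets/coco20i/{annotations/train2014,annotations/val2014,metas,train2014,val2014}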

Train

The scripts under scripts/ are launched with accelerate. The save path is set by the --output_dir argument defined in the args.

# ldis1
accelerate launch --multi_gpu --num_processes [GPUS] scripts/modelf.py --config configs/cfg.py
# ldisn
accelerate launch --multi_gpu --num_processes [GPUS] scripts/modeln.py --config configs/cfg.py --mask_alpha 0.4
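
As a concrete example, a single-node ldis1 run on 4 GPUs with an explicit save path might look like the following (the GPU count and work_dirs/ldis1 are placeholders):

accelerate launch --multi_gpu --num_processes 4 scripts/modelf.py --config configs/cfg.py --output_dir work_dirs/ldis1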

Inference

# ldis1
accelerate launch --multi_gpu --num_processes [GPUS] scripts/modelf.py --config configs/cfg.py --only_val 1 --val_dataset pascal --output_dir [the path of ckpt]
# ldisn
accelerate launch --multi_gpu --num_processes [GPUS] scripts/modeln.py --config configs/cfg.py --only_val 1 --val_dataset pascal --output_dir [the path of ckpt] --mask_alpha 0.4
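
Likewise, a concrete ldisn evaluation on PASCAL might look like the following, where work_dirs/ldisn stands in for the checkpoint directory produced during training:

accelerate launch --multi_gpu --num_processes 4 scripts/modeln.py --config configs/cfg.py --only_val 1 --val_dataset pascal --output_dir work_dirs/ldisn --mask_alpha 0.4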

The pretrained models can be found here.

Citation

If you find our work useful, please consider citing our paper:

@article{wang2024explore,
  title={Explore In-Context Segmentation via Latent Diffusion Models},
  author={Wang, Chaoyang and Li, Xiangtai and Ding, Henghui and Qi, Lu and Zhang, Jiangning and Tong, Yunhai and Loy, Chen Change and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2403.09616},
  year={2024}
}

License

MIT license
