🎯 HPSv3: Towards Wide-Spectrum Human Preference Score (ICCV 2025)

Project Website · arXiv · ICCV 2025 · Model · Dataset

Yuhang Ma¹,³*  Yunhao Shui¹,⁴*  Xiaoshi Wu²  Keqiang Sun¹,²†  Hongsheng Li²,⁴,⁵†

¹Mizzen AI   ²CUHK MMLab   ³King's College London   ⁴Shanghai Jiao Tong University
⁵Shanghai AI Laboratory   ⁶CPII, InnoHK

*Equal Contribution   †Equal Advising

📖 Introduction

This is the official implementation for the paper HPSv3: Towards Wide-Spectrum Human Preference Score. First, we introduce HPSv3, a VLM-based preference model trained on HPDv3, a wide-spectrum preference dataset of 1.08M text-image pairs and 1.17M annotated pairwise comparisons, covering both state-of-the-art and earlier generative models, as well as high- and low-quality real-world images. Second, we propose CoHP (Chain-of-Human-Preference), a novel reasoning approach for iterative image refinement that efficiently improves image quality without requiring additional training data.

Teaser

✨ Updates

  • [2025-8-05] 🎉 We release HPSv3: inference code, training code, CoHP code, and model weights.


🚀 Quick Start

HPSv3 is a state-of-the-art human preference score model for evaluating image quality and prompt alignment. It builds upon the Qwen2-VL architecture to provide accurate assessments of generated images.

💻 Installation


# Install locally for development or training.
git clone https://github.com/MizzenAI/HPSv3.git
cd HPSv3

conda env create -f environment.yaml
conda activate hpsv3
# Recommend: Install flash-attn
pip install flash-attn==2.7.4.post1

pip install -e .

🛠️ Basic Usage

Simple Inference Example

from hpsv3 import HPSv3RewardInferencer

# Initialize the model
inferencer = HPSv3RewardInferencer(device='cuda')

# Evaluate images
image_paths = ["assets/example1.png", "assets/example2.png"]
prompts = [
  "cute chibi anime cartoon fox, smiling wagging tail with a small cartoon heart above sticker",
  "cute chibi anime cartoon fox, smiling wagging tail with a small cartoon heart above sticker"
]

# Get preference scores
rewards = inferencer.reward(image_paths, prompts)
scores = [reward[0].item() for reward in rewards]  # Each reward is (mu, sigma); mu is the preference score
print(f"Image scores: {scores}")

🌐 Gradio Demo

Launch an interactive web interface to test HPSv3:

python gradio_demo/demo.py

The demo will be available at http://localhost:7860.

Gradio Demo

🏋️ Training

📁 Dataset

Human Preference Dataset v3

Human Preference Dataset v3 (HPD v3) comprises 1.08M text-image pairs and 1.17M annotated pairwise comparisons. To model the wide spectrum of human preferences, we include the newest state-of-the-art generative models and high-quality real photographs, while retaining older models and lower-quality real images.

Detailed information of HPDv3
| Image Source | Type | Num Image | Prompt Source | Split |
|---|---|---|---|---|
| High Quality Image (HQI) | Real Image | 57759 | VLM Caption | Train & Test |
| MidJourney | - | 331955 | User | Train |
| CogView4 | DiT | 400 | HQI+HPDv2+JourneyDB | Test |
| FLUX.1 dev | DiT | 48927 | HQI+HPDv2+JourneyDB | Train & Test |
| Infinity | Autoregressive | 27061 | HQI+HPDv2+JourneyDB | Train & Test |
| Kolors | DiT | 49705 | HQI+HPDv2+JourneyDB | Train & Test |
| HunyuanDiT | DiT | 46133 | HQI+HPDv2+JourneyDB | Train & Test |
| Stable Diffusion 3 Medium | DiT | 49266 | HQI+HPDv2+JourneyDB | Train & Test |
| Stable Diffusion XL | Diffusion | 49025 | HQI+HPDv2+JourneyDB | Train & Test |
| Pixart Sigma | Diffusion | 400 | HQI+HPDv2+JourneyDB | Test |
| Stable Diffusion 2 | Diffusion | 19124 | HQI+JourneyDB | Train & Test |
| CogView2 | Autoregressive | 3823 | HQI+JourneyDB | Train & Test |
| FuseDream | Diffusion | 468 | HQI+JourneyDB | Train & Test |
| VQ-Diffusion | Diffusion | 18837 | HQI+JourneyDB | Train & Test |
| Glide | Diffusion | 19989 | HQI+JourneyDB | Train & Test |
| Stable Diffusion 1.4 | Diffusion | 18596 | HQI+JourneyDB | Train & Test |
| Stable Diffusion 1.1 | Diffusion | 19043 | HQI+JourneyDB | Train & Test |
| Curated HPDv2 | - | 327763 | - | Train |

Download HPDv3

HPDv3 is coming soon! Stay tuned!

Pairwise Training Data Format

Important Note: For simplicity, the image at path1 is always the preferred one.

[
  {
    "prompt": "A beautiful landscape painting",
    "path1": "path/to/better/image.jpg",
    "path2": "path/to/worse/image.jpg",
    "confidence": 0.95
  },
  ...
]
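As a minimal loading sketch (assuming the JSON layout above; load_pairs is a hypothetical helper, and its default threshold mirrors the confidence_threshold training setting):

import json

def load_pairs(path, confidence_threshold=0.95):
    # Hypothetical helper: by convention, path1 is always the preferred image.
    with open(path) as f:
        pairs = json.load(f)
    # Drop low-confidence annotations, mirroring the confidence_threshold
    # setting in the training config.
    return [p for p in pairs if p.get("confidence", 1.0) >= confidence_threshold]

pairs = load_pairs("example_train.json")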

🚀 Training Command

# Install locally, as in the Installation section above
git clone https://github.com/MizzenAI/HPSv3.git
cd HPSv3

conda env create -f environment.yaml
conda activate hpsv3
# Recommend: Install flash-attn
pip install flash-attn==2.7.4.post1

pip install -e .

# Train with 7B model
deepspeed hpsv3/train.py --config hpsv3/config/HPSv3_7B.yaml
Important Config Arguments
| Configuration Section | Parameter | Value | Description |
|---|---|---|---|
| Model Configuration | rm_head_type | "ranknet" | Type of reward model head architecture |
| Model Configuration | lora_enable | False | Enable LoRA (Low-Rank Adaptation) for efficient fine-tuning. If False, the language tower is fully trainable |
| Model Configuration | vision_lora | False | Apply LoRA specifically to vision components. If False, the vision tower is fully trainable |
| Model Configuration | model_name_or_path | "Qwen/Qwen2-VL-7B-Instruct" | Path to the base model checkpoint |
| Data Configuration | confidence_threshold | 0.95 | Minimum confidence score for training data |
| Data Configuration | train_json_list | [example_train.json] | List of training data files |
| Data Configuration | test_json_list | [validation_sets] | List of validation datasets with names |
| Data Configuration | output_dim | 2 | Output dimension of the reward head, for $\mu$ and $\sigma$ |
| Data Configuration | loss_type | "uncertainty" | Loss function type for training |
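For intuition about output_dim=2 and loss_type "uncertainty": the head predicts a mean $\mu$ and an uncertainty $\sigma$ per image. Below is a minimal sketch of an uncertainty-aware pairwise ranking loss built on such outputs (illustrative only, not the repository's exact implementation; see hpsv3/train.py for the real loss):

import torch

def uncertainty_ranking_loss(mu_win, sigma_win, mu_lose, sigma_lose):
    # Illustrative sketch: model the pairwise score difference as a Gaussian
    # and maximize the probability that the preferred image scores higher.
    diff_mu = mu_win - mu_lose
    diff_sigma = torch.sqrt(sigma_win ** 2 + sigma_lose ** 2).clamp(min=1e-6)
    # P(score_win > score_lose) under the Gaussian difference distribution
    prob_win = 0.5 * (1.0 + torch.erf(diff_mu / (diff_sigma * 2 ** 0.5)))
    return -torch.log(prob_win.clamp(min=1e-6)).mean()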

📊 Benchmark

To evaluate HPSv3's preference accuracy or the human preference score of an image generation model, follow the detailed instructions in the Evaluation Instructions.

Preference Accuracy of HPSv3
| Model | ImageReward | PickScore | HPDv2 | HPDv3 |
|---|---|---|---|---|
| CLIP ViT-H/14 | 57.1 | 60.8 | 65.1 | 48.6 |
| Aesthetic Score Predictor | 57.4 | 56.8 | 76.8 | 59.9 |
| ImageReward | 65.1 | 61.1 | 74.0 | 58.6 |
| PickScore | 61.6 | 70.5 | 79.8 | 65.6 |
| HPS | 61.2 | 66.7 | 77.6 | 63.8 |
| HPSv2 | 65.7 | 63.8 | 83.3 | 65.3 |
| MPS | 67.5 | 63.1 | 83.5 | 64.3 |
| HPSv3 | 66.8 | 72.8 | 85.4 | 76.9 |
Image Generation Benchmark of HPSv3
| Model | Overall | Characters | Arts | Design | Architecture | Animals | Natural Scenery | Transportation | Products | Others | Plants | Food | Science |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Kolors | 10.55 | 11.79 | 10.47 | 9.87 | 10.82 | 10.60 | 9.89 | 10.68 | 10.93 | 10.50 | 10.63 | 11.06 | 9.51 |
| Flux-dev | 10.43 | 11.70 | 10.32 | 9.39 | 10.93 | 10.38 | 10.01 | 10.84 | 11.24 | 10.21 | 10.38 | 11.24 | 9.16 |
| Playground v2.5 | 10.27 | 11.07 | 9.84 | 9.64 | 10.45 | 10.38 | 9.94 | 10.51 | 10.62 | 10.15 | 10.62 | 10.84 | 9.39 |
| Infinity | 10.26 | 11.17 | 9.95 | 9.43 | 10.36 | 9.27 | 10.11 | 10.36 | 10.59 | 10.08 | 10.30 | 10.59 | 9.62 |
| CogView4 | 9.61 | 10.72 | 9.86 | 9.33 | 9.88 | 9.16 | 9.45 | 9.69 | 9.86 | 9.45 | 9.49 | 10.16 | 8.97 |
| PixArt-Σ | 9.37 | 10.08 | 9.07 | 8.41 | 9.83 | 8.86 | 8.87 | 9.44 | 9.57 | 9.52 | 9.73 | 10.35 | 8.58 |
| Gemini 2.0 Flash | 9.21 | 9.98 | 8.44 | 7.64 | 10.11 | 9.42 | 9.01 | 9.74 | 9.64 | 9.55 | 10.16 | 7.61 | 9.23 |
| SDXL | 8.20 | 8.67 | 7.63 | 7.53 | 8.57 | 8.18 | 7.76 | 8.65 | 8.85 | 8.32 | 8.43 | 8.78 | 7.29 |
| HunyuanDiT | 8.19 | 7.96 | 8.11 | 8.28 | 8.71 | 7.24 | 7.86 | 8.33 | 8.55 | 8.28 | 8.31 | 8.48 | 8.20 |
| Stable Diffusion 3 Medium | 5.31 | 6.70 | 5.98 | 5.15 | 5.25 | 4.09 | 5.24 | 4.25 | 5.71 | 5.84 | 6.01 | 5.71 | 4.58 |
| SD2 | -0.24 | -0.34 | -0.56 | -1.35 | -0.24 | -0.54 | -0.32 | 1.00 | 1.11 | -0.01 | -0.38 | -0.38 | -0.84 |

🎯 CoHP (Chain-of-Human-Preference)

CoHP is our novel reasoning approach for iterative image refinement that efficiently improves image quality without requiring additional training data. It works by generating images with multiple diffusion models, selecting the best one with a reward model, and then iteratively refining it through image-to-image generation.

cohp

🚀 Usage

Basic Command

python hpsv3/cohp/run_cohp.py \
    --prompt "A beautiful sunset over mountains" \
    --index "sample_001" \
    --device "cuda:0" \
    --reward_model "hpsv3"

Parameters

  • --prompt: Text prompt for image generation (required)
  • --index: Unique identifier for saving results (required)
  • --device: GPU device to use (default: 'cuda:1')
  • --reward_model: Reward model for scoring images
    • hpsv3: HPSv3 model (default, recommended)
    • hpsv2: HPSv2 model
    • imagereward: ImageReward model
    • pickscore: PickScore model

Supported Generation Models

CoHP uses multiple state-of-the-art diffusion models for the initial generation: FLUX.1 dev, Kolors, Stable Diffusion 3 Medium, and Playground v2.5.

How CoHP Works

  1. Multi-Model Generation: Generates images using all supported models
  2. Reward Scoring: Evaluates each image using the specified reward model
  3. Best Model Selection: Chooses the model that produced the highest-scoring image
  4. Iterative Refinement: Performs 5 rounds of image-to-image generation to improve quality
  5. Adaptive Strength: Uses strength=0.8 for rounds 1-2, then 0.5 for rounds 3-5 (see the sketch after this list)
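Putting these steps together, a simplified sketch of the loop (illustrative only; generate, img2img, and score are hypothetical stand-ins for the actual calls in hpsv3/cohp/run_cohp.py):

def cohp(prompt, models, reward_model, rounds=5):
    # Steps 1-3: generate with every candidate model, keep the best scorer.
    candidates = {name: m.generate(prompt) for name, m in models.items()}
    best = max(candidates, key=lambda n: reward_model.score(candidates[n], prompt))
    image, model = candidates[best], models[best]
    # Steps 4-5: iterative image-to-image refinement with adaptive strength
    # (0.8 for rounds 1-2, 0.5 for rounds 3-5).
    for r in range(1, rounds + 1):
        strength = 0.8 if r <= 2 else 0.5
        image = model.img2img(prompt, image, strength=strength)
    return image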

🦾 Results as Reward Model

We use DanceGRPO as the reinforcement learning method; here are some results. All experiments use the same settings, with Stable Diffusion 1.4 as the backbone.

More results of HPSv3 as a reward model (Stable Diffusion 1.4)

cohp

📚 Citation

If you find HPSv3 useful in your research, please cite our work:

@inproceedings{hpsv3,
  title={HPSv3: Towards Wide-Spectrum Human Preference Score},
  author={Ma, Yuhang and Wu, Xiaoshi and Sun, Keqiang and Li, Hongsheng},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}

🙏 Acknowledgements

We would like to thank the VideoAlign codebase for providing valuable references.

💬 Support

For questions and support:

Keywords

machine learning
