
unicorn-eval 1.2.1: Evaluation and adaptation method for the UNICORN Challenge

🧪 UNICORN Evaluation Toolkit

Welcome to the official evaluation repository for the UNICORN Challenge, a benchmark for foundation models in pathology, radiology, and medical language processing. This repository provides:

  • The official UNICORN Challenge evaluation code
  • A growing library of adaptors used to turn frozen features into predictions for vision tasks


🚀 Challenge Overview

The UNICORN Challenge evaluates how well foundation models generalize across multiple modalities with minimal task-specific supervision:

  • 🧠 Language and Vision-Language tasks: the algorithm directly outputs predictions; no adaptor is required
  • 👁️ Vision tasks: the algorithm outputs frozen features, which are passed through adaptors (lightweight models) to generate predictions

We provide a few built-in adaptors, but you're highly encouraged to propose your own!
We maintain the full list of adaptors available on the Supported Adaptors page.
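
To make the vision flow concrete, here is a minimal sketch of how frozen features and a lightweight adaptor fit together. The feature dimensionality, the random stand-in data, and the choice of a logistic-regression probe are illustrative assumptions, not the official evaluation pipeline or adaptor API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for frozen features produced by your foundation model (the "algorithm").
shot_features = rng.normal(size=(16, 512))   # few-shot cases with known labels
shot_labels = np.tile([0, 1], 8)             # binary labels for the 16 few-shot cases
case_features = rng.normal(size=(100, 512))  # evaluation cases to predict

# A lightweight adaptor (here, a linear probe) turns frozen features into predictions.
adaptor = LogisticRegression(max_iter=1000).fit(shot_features, shot_labels)
predictions = adaptor.predict(case_features)
print(predictions.shape)  # (100,)
```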

📦 Adaptors vs. Algorithms: What's the Difference?

In vision tasks, submissions consist of:

  • A feature extractor (your algorithm)
  • An adaptor (used to turn features into predictions)

You can experiment with different adaptors on top of the same algorithm without using up your submission slots.
Want to try a different adaptor? Email us using the provided template (see below) — we’ll run it for you on existing features.

In language and vision-language tasks, the algorithm outputs predictions directly, so no adaptor is needed.

| Modality | What You Submit | Are Adaptors Used? | Submission Limit Applies To |
|----------|-----------------|--------------------|-----------------------------|
| 👁️ Vision | Algorithm (feature extractor) + Adaptor | ✅ Yes | Algorithm only |
| 🧠 Language | Algorithm (predictive) | ❌ No | Algorithm |
| 🧠 Vision-Language | Algorithm (predictive) | ❌ No | Algorithm |

🧩 Contributing a Custom Adaptor

Have a better idea for how to turn features into predictions?

You’re welcome to contribute a custom adaptor! Here's how:

  • Add your adaptor to src/unicorn_eval/adaptors/.
  • Inherit from one of the base adaptor classes in base.py (a minimal sketch follows this list).
  • Open a pull request with:
    • Your adaptor code
    • A short README.md that covers:
      • A clear description of your method
      • A list of tasks, or task types your method is designed for
    • A unique name (we will include your team name in the adaptor name to ensure you receive credit). When naming your method, be as specific as possible, e.g., indicate the number of layers or other key settings, so that related methods with different configurations can be named distinctly.
    • Any additional dependencies in a requirements.txt (details on adding new requirements below)
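
As a rough illustration of what a contributed adaptor might look like, here is a hypothetical skeleton. The class name, constructor argument, method names, and the k-NN choice are assumptions made for this sketch; the actual base classes and required signatures live in src/unicorn_eval/adaptors/base.py, so follow that interface in a real contribution.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical skeleton; a real contribution should inherit from one of the
# base adaptor classes in src/unicorn_eval/adaptors/base.py and match its interface.
class TeamnameKNN5Adaptor:
    """k-NN probe (k=5) that turns frozen features into class predictions."""

    def __init__(self, k: int = 5):
        self.k = k
        self._clf = KNeighborsClassifier(n_neighbors=k)

    def fit(self, shot_features: np.ndarray, shot_labels: np.ndarray) -> None:
        # Fit only on the provided few-shot examples; pre-trained weights are not allowed.
        self._clf.fit(shot_features, shot_labels)

    def predict(self, case_features: np.ndarray) -> np.ndarray:
        return self._clf.predict(case_features)
```

Note that the value of k is baked into the adaptor name, in line with the naming guidance above, so a variant with a different k could be contributed as a separately named adaptor.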

✅ Once accepted, your adaptor becomes selectable at submission time — and your team gets full recognition when it’s used!

💡 Keep in mind: we prioritize originality. If your adaptor is too similar to an existing one, it may not be accepted — so submit early and make it your own!

Implementation requirements for contributing a new adaptor

  • Your adaptor method must be implemented as a standalone function, following the baseline template in base.py
  • It must complete within the allowed time limit of 1h
  • Submissions will be evaluated for correctness, efficiency, and compliance with the challenge policies
  • 🚨 Important: Pre-trained adaptors are not allowed! Be original: you may use the provided few-shot examples, for instance, to fit or train your adaptor, but do not rely on pre-trained solutions

Dependencies

  • Each method must be able to run in the provided isolated environment
  • Additional dependencies can be requested, but:
    • Approval of new dependencies is not guaranteed; dependencies will be evaluated for compatibility with other packages
    • Organizers reserve the right to modify the list of dependencies over time, though we aim to maintain compatibility with existing adaptors
    • When specifying dependencies, use the least restrictive version constraint (e.g., package>=1.0.0) to ensure flexibility; a minimal example follows this list
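
For instance, a requirements.txt for the hypothetical k-NN adaptor sketched above might look like the following; the packages and minimum versions are illustrative, not a pre-approved list.

```text
numpy>=1.24
scikit-learn>=1.3
```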

💬 Teams are encouraged to share ideas and discuss approaches on the Grand Challenge forum. Support and Q&A will also be available through the forum.

📤 Requesting New Adaptor Runs

You can request us to apply additional adaptors to your existing vision submission without impacting your submission limit.

📧 Submission Instructions

  • Go to your submission URL: https://unicorn.grand-challenge.org/evaluation/<leaderboard-specific-number>/submissions/<your-submission-id>/
    (Use only this format — not other links)

  • For each submission that you want to rerun with a new adaptor, specify:

    • The full submission link
      Example: https://unicorn.grand-challenge.org/evaluation/30/submissions/bc9b9fe2-1f8d-4b9e-af7b-0edb87b127a4/
    • The new adaptor(s) you want to apply (chosen from the Supported Adaptors).
      ⚠️ Responsible use: You’re welcome to submit additional adaptor run requests over time. However, to ensure fair access for all participants, we ask that each request remains targeted and minimal (e.g., max 2 adaptors per leaderboard per request). Bulk or unfocused requests may be deprioritized.
  • Email your request to support@unicorn-challenge.com using the following template:

```text
Submission: https://unicorn.grand-challenge.org/evaluation/<leaderboard-specific-number>/submissions/your-submission-id/
Adaptors:
- teamname_adaptorX_v1
- teamname_adaptorY_v2

[Repeat for other submissions if needed]
```

