Introduction

Meta Omnium is a multi-task few-shot learning benchmark that evaluates generalization across diverse computer vision task types. It includes tasks such as recognition, keypoint localization, and semantic segmentation, enabling few-shot generalization to be tested far more broadly than was previously possible.

Meta Omnium has a clear hyper-parameter optimization (HPO) and model selection protocol to facilitate fair comparison across current and future few-shot learning algorithms. The benchmark already includes multi-task extensions of the most popular few-shot learning approaches and analyzes their ability to generalize across tasks and to transfer knowledge between them.

We invite researchers to use our benchmark and study how to improve the ability of machine learning models to perform general-purpose few-shot learning.

Why Meta Omnium?

Test the generality of your meta-learner: Can you really learn to learn, and not just learn to categorise or learn to regress?

A benchmark that really requires adaptation: pre-trained feature strength is not the dominant factor.

Supports study of task heterogeneity and out-of-domain generalisation for meta-learners.

Study the intersection of multi-task learning & meta-learning.

Offers a clear hyper-parameter tuning protocol to promote good science.

Lightweight for easy development (the lightest setting completes a full run in only 2 hours on a single 1080 Ti)!

Dataset

Meta Omnium is lightweight, yet it includes images from many different domains and task types.

  • 160,000+ Images
  • 21 Domains
  • 3+1 Seen and Unseen Tasks
  • 3.1GB Storage

Meta Omnium includes over 160,000 images from 21 public datasets, representing 3 seen tasks (recognition, keypoint localization, and semantic segmentation) and 1 unseen task (regression).
We have preprocessed the datasets and split them into meta-training; in-domain (ID) and out-of-domain (OOD) meta-validation; and in-domain, out-of-domain, and out-of-task (OOT) meta-testing sets.
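As a concrete illustration of how these splits fit together, the sketch below groups the classification datasets from the tables that follow into the benchmark's meta-splits. The dictionary layout and variable name are our own illustration, not the repository's configuration format.

```python
# Hypothetical grouping of the classification datasets into Meta Omnium's
# meta-splits (dataset names follow the tables below; the layout is ours).
CLASSIFICATION_SPLITS = {
    "meta_train": ["BCT-Trn", "BRD-Trn", "CRS-Trn"],
    "meta_val": {
        "in_domain": ["BCT-Val", "BRD-Val", "CRS-Val"],
        "out_of_domain": ["FLW", "MD-MIX", "PLK"],
    },
    "meta_test": {
        "in_domain": ["BCT-Test", "BRD-Test", "CRS-Test"],
        "out_of_domain": ["PLT-VIL", "RESISC", "SPT", "TEX"],
        # The regression datasets (ShapeNet1D/2D, Distractor, Pascal1D)
        # form the separate out-of-task (OOT) meta-test split.
    },
}
```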

Detailed dataset information can be found in the tables:

Classification Datasets

| Dataset Name | Domain | # Classes | # Images | Role | Original Dataset Link |
| --- | --- | --- | --- | --- | --- |
| BCT-Trn | Microscopy | 23 | 920 | Meta-train | Link |
| BRD-Trn | Bird | 220 | 8800 | Meta-train | Link |
| CRS-Trn | Car | 137 | 5480 | Meta-train | Link |
| BCT-Val | Microscopy | 5 | 200 | ID Meta-val | Link |
| BRD-Val | Bird | 47 | 1880 | ID Meta-val | Link |
| CRS-Val | Car | 29 | 1160 | ID Meta-val | Link |
| FLW | Flowers | 102 | 4080 | OOD Meta-val | Link |
| MD-MIX | OCR | 706 | 28240 | OOD Meta-val | Link |
| PLK | Plankton | 86 | 3440 | OOD Meta-val | Link |
| BCT-Test | Microscopy | 5 | 200 | ID Meta-test | Link |
| BRD-Test | Bird | 48 | 1920 | ID Meta-test | Link |
| CRS-Test | Car | 30 | 1200 | ID Meta-test | Link |
| PLT-VIL | Plant Disease | 38 | 1520 | OOD Meta-test | Link |
| RESISC | Remote Sensing | 45 | 1800 | OOD Meta-test | Link |
| SPT | Sports | 73 | 2920 | OOD Meta-test | Link |
| TEX | Textures | 64 | 2560 | OOD Meta-test | Link |
Segmentation Datasets

| Dataset Name | Domain | # Classes | # Images | Role | Original Dataset Link |
| --- | --- | --- | --- | --- | --- |
| FSS1000-Trn | Natural Image | 520 | 5200 | Meta-train | Link |
| FSS1000-Val | Natural Image | 240 | 2400 | ID Meta-val | Link |
| FSS1000-Test | Natural Image | 240 | 2400 | ID Meta-test | Link |
| Pascal 5i | Natural Image | 6 | 7247 | OOD Meta-test | Link |
| Vizwiz | Natural Image | 22 | 862 | OOD Meta-val | Link |
| PH2 (Skin) | Natural Image | 3 | 200 | OOD Meta-test | Link |
Keypoint Datasets

| Dataset Name | Domain | # Keypoints | # Images | Role | Original Dataset Link |
| --- | --- | --- | --- | --- | --- |
| Animal Pose-Trn | Animal | 40 | 3237 | Meta-train | Link |
| Animal Pose-Val | Animal | 40 | 2038 | ID Meta-val | Link |
| Animal Pose-Test | Animal | 20 | 842 | ID Meta-test | Link |
| Synthetic Animal Pose | Synthetic Animal | 22 | 20000 | OOD Meta-val | Link |
| MPII | Human | 16 | 28882 | OOD Meta-test | Link |
Regression Datasets

| Dataset Name | Domain | # Concepts | # Images | Role | Original Dataset Link |
| --- | --- | --- | --- | --- | --- |
| ShapeNet1D-Test | Synthetic Image | 60 | 3000 | OOT Meta-test | Link |
| ShapeNet2D-Test | Synthetic Image | 300 | 9000 | OOT Meta-test | Link |
| Distractor-Test | Synthetic Image | 200 | 7200 | OOT Meta-test | Link |
| Pascal1D-Test | Synthetic Image | 15 | 1500 | OOT Meta-test | Link |

*All datasets are released under their original licences, and our code is released under the MIT licence.

Code

Our code makes it simple to use the benchmark, run experiments, and add new approaches.

We provide all the details and instructions needed to run experiments with Meta Omnium, including a link to download the preprocessed data. We include multi-task implementations of popular few-shot learning algorithms (e.g. MAML, ProtoNets, and deep differentiable ridge regression) as well as simple baselines (e.g. fine-tuning or training from scratch). Additionally, we include code for hyper-parameter optimization to simplify hyper-parameter selection for new approaches.
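For flavour, here is a minimal sketch of the prototypical-networks computation for a single few-shot classification episode, written in generic PyTorch. The function name and the assumption of a feature `encoder` plus pre-sampled support/query tensors are ours for illustration; this is not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def protonet_episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    """Prototypical-network loss for one n-way few-shot classification episode.

    support_x: [n_support, C, H, W], support_y: [n_support] with labels in [0, n_way)
    query_x:   [n_query,   C, H, W], query_y:   [n_query]
    """
    z_support = encoder(support_x)   # [n_support, d] support embeddings
    z_query = encoder(query_x)       # [n_query, d] query embeddings

    # Class prototypes: mean embedding of each class's support examples.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_way)]
    )                                # [n_way, d]

    # Negative squared Euclidean distance to each prototype serves as the logit.
    logits = -torch.cdist(z_query, prototypes) ** 2   # [n_query, n_way]
    return F.cross_entropy(logits, query_y)
```

A full experiment would wrap a step like this in an outer loop over episodes sampled from the meta-training datasets above, with hyper-parameters chosen on the meta-validation splits according to the benchmark's HPO protocol.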

Citation

If you are using Meta Omnium, please cite our paper:

@inproceedings{metaomnium2023,
	title={Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn},
	author={Bohdal, Ondrej and Tian, Yinbing and Zong, Yongshuo and Chavhan, Ruchika and Li, Da and Gouk, Henry and Guo, Li and Hospedales, Timothy},
	booktitle={CVPR},
	year = {2023}
}