CIDEr

E899060

CIDEr is an automatic evaluation metric designed to assess the quality of image captions by measuring their consensus with human-written descriptions.


Observed surface forms (1)

Surface form Occurrences
CIDEr-D 0

Statements (45)

Predicate Object
instanceOf automatic evaluation metric
image captioning evaluation metric
aggregationMethod averaging over reference captions
basedOn consensus among human reference captions
commonlyReportedIn image captioning research papers
commonlyUsedWith neural image captioning models
comparesTo human-written image descriptions
correlatesWith human caption quality assessments
designedFor evaluating image caption quality
image captioning
designedTo reduce the effect of outlier n-grams
domain image description
evaluatedOn MS COCO dataset
evaluates machine-generated image captions
focusesOn content similarity between candidate and reference captions
fullName Consensus-based Image Description Evaluation
hasAuthor C. Lawrence Zitnick
Devi Parikh
Ramakrishna Vedantam
hasVariant CIDEr-D (CIDEr with damping)
higherIsBetter true
introducedAt CVPR 2015
introducedFor automatic image caption evaluation
introducedInField computer vision
natural language processing
introducedYear 2015
languageAgnostic partially
metricRange 0 to 1
optimizedFor correlation with human judgments
paperTitle CIDEr: Consensus-based Image Description Evaluation
publicationType research paper
relatedMetric BLEU
METEOR
ROUGE-L
SPICE
requires multiple human reference captions per image
status standard benchmark metric for image captioning
taskType reference-based evaluation
usedIn MS COCO Captioning Challenge
uses TF-IDF (term frequency-inverse document frequency) weighting of n-grams
cosine similarity over TF-IDF vectors
n-gram matching
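The mechanics named in the statements above (TF-IDF weighting of n-grams, cosine similarity over TF-IDF vectors, averaging over reference captions) can be sketched in a few lines. This is a minimal, illustrative toy, not the reference implementation: it assumes whitespace tokenization and omits the count clipping and Gaussian length penalty used in CIDEr-D.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    # All contiguous n-grams of a token list, with counts.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cider(candidate, references, corpus_refs, n_max=4):
    """Toy CIDEr-style score: TF-IDF-weighted n-gram cosine similarity,
    averaged over the image's references and over n-gram orders 1..n_max.

    corpus_refs: list of reference-caption lists, one list per image,
    used only to compute document frequencies (IDF)."""
    num_images = len(corpus_refs)
    score = 0.0
    for n in range(1, n_max + 1):
        # Document frequency: number of images whose references contain the n-gram.
        df = Counter()
        for refs in corpus_refs:
            seen = set()
            for ref in refs:
                seen |= set(ngrams(ref.split(), n))
            df.update(seen)

        def tfidf(counts):
            # Normalized term frequency times log inverse document frequency.
            total = sum(counts.values()) or 1
            return {g: (c / total) * math.log(num_images / max(df[g], 1))
                    for g, c in counts.items()}

        cand_vec = tfidf(ngrams(candidate.split(), n))
        sims = []
        for ref in references:
            ref_vec = tfidf(ngrams(ref.split(), n))
            dot = sum(cand_vec.get(g, 0.0) * w for g, w in ref_vec.items())
            norm = (math.sqrt(sum(v * v for v in cand_vec.values()))
                    * math.sqrt(sum(v * v for v in ref_vec.values())))
            sims.append(dot / norm if norm else 0.0)
        score += sum(sims) / len(sims)  # average over references
    return score / n_max               # average over n-gram orders
```

The IDF term is what gives CIDEr its consensus flavor: n-grams that appear in the references of many images (e.g. "a") are down-weighted, so matching them contributes little to the score.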

Referenced by (1)

Full triples — surface form annotated when it differs from this entity's canonical label.