On developing benchmark evaluations

The Multimedia COMMONS 2016 workshop (October 16, 2016), which will run as part of the ACM Multimedia conference in Amsterdam, will provide a forum for the community of current and potential users of the Multimedia Commons. The Multimedia Commons is a multi-institution initiative, launched last year, to compute features, generate annotations, and develop analysis tools, focusing principally on the Yahoo Flickr Creative Commons 100 Million dataset (YFCC100M), which contains around 99.2 million images and nearly 800,000 videos from Flickr. The workshop aims to share novel research using the YFCC100M dataset, emphasizing approaches that were not possible with smaller or more restricted multimedia collections; to ask new questions about the scalability, generalizability, and reproducibility of algorithms and methods; to re-examine how we use data challenges and benchmarking tasks to catalyze research advances; and to discuss priorities, methods, and plans for continuously expanding annotation efforts.

At the MMCommons workshop I will discuss the development of benchmark evaluations in the context of a series of tasks on audiovisual search, with an emphasis on its ‘multimodal’ aspects. The series started in 2006 with the workshop on ‘Searching Spontaneous Conversational Speech’ and led to tasks in CLEF and MediaEval (“Search and Hyperlinking”), and more recently also in TRECVid (“Video Hyperlinking”). The value and importance of benchmark evaluations are widely acknowledged, and benchmarks play a key role in many research projects. Establishing a sound evaluation framework takes time, a well-balanced team of domain specialists, preferably with links to the user community and industry, and strong involvement of the research community itself. Such a framework includes (annotated) data sets, well-defined tasks that reflect needs in the ‘real world’, a proper evaluation methodology, ground truth, a strategy for repeated assessments, and, last but not least, funding. Although the benefits of an evaluation framework are typically reviewed from the perspective of ‘research output’ (for example, a scientific publication demonstrating an advance of a certain methodology), it is important to be aware of the value of the process of creating a benchmark itself: it significantly increases our understanding of the problem we want to address and, as a consequence, the impact of the evaluation outcomes.
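To make the ground-truth and evaluation-methodology components mentioned above a little more concrete, the sketch below shows the kind of scoring step a benchmark organizer might run over submitted results. It is a minimal, hypothetical example: the run and ground-truth file formats, the file names, and the choice of precision at 10 as the metric are assumptions for illustration only, not the actual protocol of MediaEval, TRECVid, or CLEF.

```python
# Minimal sketch of scoring a benchmark run against ground truth.
# All file formats and names are illustrative assumptions, not the
# protocol of MediaEval, TRECVid, or CLEF.

from collections import defaultdict


def load_ground_truth(path):
    """Read 'query_id<TAB>relevant_item_id' lines into a dict of sets."""
    relevant = defaultdict(set)
    with open(path) as f:
        for line in f:
            query_id, item_id = line.strip().split("\t")
            relevant[query_id].add(item_id)
    return relevant


def load_run(path):
    """Read 'query_id<TAB>rank<TAB>item_id' lines into ranked lists per query."""
    ranked = defaultdict(list)
    with open(path) as f:
        for line in f:
            query_id, rank, item_id = line.strip().split("\t")
            ranked[query_id].append((int(rank), item_id))
    return {q: [item for _, item in sorted(items)] for q, items in ranked.items()}


def precision_at_k(ranked, relevant, k=10):
    """Mean precision@k over all queries that have ground truth."""
    scores = []
    for query_id, rel_items in relevant.items():
        top_k = ranked.get(query_id, [])[:k]
        hits = sum(1 for item in top_k if item in rel_items)
        scores.append(hits / k)
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    gt = load_ground_truth("ground_truth.tsv")    # hypothetical file
    run = load_run("participant_run.tsv")         # hypothetical file
    print(f"P@10 = {precision_at_k(run, gt, k=10):.3f}")
```

In practice a framework adds much more on top of this step (pooled relevance assessments, significance testing, per-query analysis), but even this simple loop shows why well-defined run and ground-truth formats matter for repeatable assessments.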

My talk will focus on the process rather than on the results of these evaluations themselves. It will also address cross-benchmark connections and new benchmark paradigms, in particular the integration of benchmarking into industrial ‘living labs’ and the Evaluation-as-a-Service (EaaS) initiatives that are becoming popular in some domains.

Audiovisual Linking: Scenarios, Approaches and Evaluation

The concept of (hyper)linking, well known in the text domain, has been inspiring researchers and practitioners in the audiovisual domain for many years. On 30 August 2016, I will talk about audiovisual linking at the 1st International Conference on Multimodal Media Data Analytics (MMDA2016) in The Hague, The Netherlands.

Various application scenarios can benefit from audiovisual linking. In recent years, we have been looking at recommendation and storytelling scenarios for video-to-video linking in the context of the Video Hyperlinking task in the MediaEval and TRECVid benchmark evaluation series, at text-to-video linking to support fast access to broadcast archives in a news production context, and at audio-to-image linking in the context of visual radio. The latter relates to a long history of research projects on ‘linking the spoken word’ to related information sources, in scenarios that aim, for example, to assist participants in meetings or to suggest slides during presentations.
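To give the video-to-video linking scenario a more concrete shape, here is a minimal sketch of one common family of approaches: ranking candidate target segments by the similarity of their precomputed feature vectors to an anchor segment. The segment identifiers, the toy feature store, and the use of plain cosine similarity are assumptions for illustration; real hyperlinking systems typically combine richer, multimodal cues such as speech transcripts, visual concepts, and metadata.

```python
# Minimal sketch of video-to-video linking by feature similarity.
# Segment IDs, the feature store, and plain cosine similarity are
# illustrative assumptions, not the method of any specific system.

import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def rank_link_targets(anchor_id, features, top_k=5):
    """Rank candidate segments by similarity to the anchor segment.

    features: dict mapping segment_id -> precomputed feature vector
    (e.g., averaged visual descriptors or an embedding of the transcript).
    """
    anchor_vec = features[anchor_id]
    candidates = [
        (seg_id, cosine_similarity(anchor_vec, vec))
        for seg_id, vec in features.items()
        if seg_id != anchor_id
    ]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return candidates[:top_k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy feature store standing in for real, precomputed segment features.
    toy_features = {f"video{i}_seg{j}": rng.normal(size=128)
                    for i in range(3) for j in range(4)}
    for seg_id, score in rank_link_targets("video0_seg0", toy_features):
        print(f"{seg_id}\t{score:.3f}")
```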

I will present some of the approaches we have experimented with over the past years and zoom in on the process of setting up the video hyperlinking benchmark evaluations we have been running in MediaEval and TRECVid.