Special session on video hyperlinking: what to link and how to do that?

Video Hyperlinking

Video hyperlinking is of growing interest in the multimedia retrieval community. The goal of video hyperlinking is to apply the concept of linking, familiar from the text domain, to video: enabling the user to browse from one video to another. The assumption is that video hyperlinking helps users explore large video repositories more effectively. Links are created on the basis of an automatically derived topical relationship between video segments. The question, however, is how we identify which video segments in these repositories are good candidates for linking. And once we have such candidates, how do we make sure that the links to video targets are really interesting for a user? Five research groups presented their views on this today, at a special session at the International Conference on Multimedia Retrieval (ICMR2017) in Bucharest.
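To make the idea of an "automatically derived topical relationship" concrete, here is a minimal sketch (not the method of any of the groups at the session, and using made-up transcript snippets): candidate target segments are ranked against an anchor segment by cosine similarity of TF-IDF vectors built from the segments' transcripts.

```python
# Illustrative sketch with hypothetical data: rank candidate target
# segments for an anchor segment by TF-IDF cosine similarity of their
# (e.g., ASR-derived) transcripts.
import math
from collections import Counter

def tfidf_vectors(segments):
    """Build a TF-IDF vector (term -> weight) for each transcript segment."""
    docs = [seg.lower().split() for seg in segments]
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    n = len(docs)
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_targets(anchor_idx, segments):
    """Return indices of candidate target segments, most similar first."""
    vecs = tfidf_vectors(segments)
    scores = [(cosine(vecs[anchor_idx], v), i)
              for i, v in enumerate(vecs) if i != anchor_idx]
    return [i for _, i in sorted(scores, reverse=True)]

segments = [
    "interview about renewable energy and wind turbines",
    "documentary segment on wind turbines and energy storage",
    "cooking show segment about pasta",
]
print(rank_targets(0, segments))  # → [1, 2]
```

A real system would of course segment the videos first and combine transcript similarity with visual and other cues, which is exactly where the research questions above come in.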

Hubs and false links

Chong-Wah Ngo from City University of Hong Kong…


CLARIN/CLARIAH Collaboration on Automatic Transcription Chain for Digital Humanities

In the CLARIAH project, we are developing the Media Suite, an application that supports scholarly research using audiovisual media collections. In 2017 we will also be integrating tools that support Oral History research into the Media Suite. From 10 to 12 May 2017, scholars and technology experts discussed the development of an automatic transcription chain for spoken word collections in the context of CLARIN, the European counterpart of CLARIAH, at a CLARIN-PLUS workshop in Arezzo. We observed that CLARIAH and CLARIN take different but complementary approaches to the development of such a transcription chain, which encourages further collaboration.


On developing benchmark evaluations

The Multimedia COMMONS 2016 workshop (October 16, 2016), which will run as part of the ACM Multimedia conference in Amsterdam, will provide a forum for the community of current and potential users of the Multimedia Commons. This is a multi-institution collaboration launched last year to compute features, generate annotations, and develop analysis tools, focusing principally on the Yahoo Flickr Creative Commons 100 Million dataset (YFCC100M), which contains around 99.2 million images and nearly 800,000 videos from Flickr. The workshop aims to share novel research using the YFCC100M dataset, emphasizing approaches that were not possible with smaller or more restricted multimedia collections; to ask new questions about the scalability, generalizability, and reproducibility of algorithms and methods; to re-examine how we use data challenges and benchmarking tasks to catalyze research advances; and to discuss priorities, methods, and plans for continuously expanding annotation efforts.

At the MMCommons workshop I will discuss the development of benchmark evaluations in the context of a series of tasks focusing on audiovisual search and emphasizing its ‘multimodal’ aspects, starting in 2006 with the workshop on ‘Searching Spontaneous Conversational Speech’, which led to tasks in CLEF and MediaEval (“Search and Hyperlinking”), and recently also TRECVid (“Video Hyperlinking”). The value and importance of benchmark evaluations is widely acknowledged, and benchmarks play a key role in many research projects. Establishing a sound evaluation framework takes time, a well-balanced team of domain specialists, preferably with links to the user community and industry, strong involvement of the research community itself, and, last but not least, funding. Such a framework includes (annotated) data sets, well-defined tasks that reflect needs in the ‘real world’, a proper evaluation methodology, and ground truth, including a strategy for repeated assessments. Although the benefits of an evaluation framework are typically reviewed from the perspective of ‘research output’, e.g., a scientific publication demonstrating an advance in a certain methodology, it is important to be aware of the value of the process of creating a benchmark itself: it significantly increases our understanding of the problem we want to address and, as a consequence, also the impact of the evaluation outcomes.

My talk will focus on the process rather than on the results of these evaluations themselves, and will address cross-benchmark connections and new benchmark paradigms, specifically the integration of benchmarking into industrial ‘living labs’ and Evaluation-as-a-Service (EaaS) initiatives that are becoming popular in some domains.

Video Hyperlinking Explained in 7 minutes (in Dutch)

On the 2nd of February I was invited to give a short introduction to video hyperlinking at iMMovator‘s Cross Media Café. Here are the slides, and there is also a video:

Topic models and diversity in video hyperlinking


The use of hierarchical topic models to find anchor-target pairs could potentially improve diversity in video hyperlinking, and the evaluation of video hyperlinking should focus more on assessing serendipity in the links. These are two important findings from the work of Anca-Roxanna Simon, who successfully defended her PhD thesis, “Semantic Structuring of Video Collections from Speech: Segmentation and Hyperlinking”, on Wednesday the 2nd of December at the University of Rennes, France.


Video Hyperlinking @ TRECVid-2015


After running a video hyperlinking benchmark evaluation at MediaEval for a number of years, we are excited that an evaluation of video hyperlinking is now also running at TRECVid. On the 17th of November 2015, we discussed the results of the evaluation and the plans for next year at the TRECVid workshop in Gaithersburg, US.

Benchmarking the concept of video hyperlinking started as early as 2009 with the Linking Task in VideoCLEF, which involved linking video to Wikipedia material on the same subject in a different language. In 2012, we started a ‘brave new task’ in MediaEval, where we explored approaches to benchmarking the concept of linking videos to other videos using internet video from blip.tv. In 2013-2014, ‘Search and Hyperlinking’ ran as a regular MediaEval task, this time with a collection of about 2,500 hours of broadcast video from the BBC instead of internet video.

Thanks to MediaEval we could improve our understanding of the concept of…
