Challenges in Enabling Mixed Media Scholarly Research with Multimedia Data in a Sustainable Infrastructure

Below is the presentation I gave on 29-06-2018 at the Digital Humanities 2018 Conference in Mexico City on the development of the Media Suite, an online research environment that facilitates scholarly research using large multimedia collections maintained at archives, libraries and knowledge institutions. The Media Suite unlocks the data at the collection, item and segment level, provides tools aligned with the scholarly primitives (discovery, annotation, comparison, linking), and offers a ‘workspace’ for storing personal mixed media collections and annotations and for doing advanced analysis using Jupyter Notebooks and NLP tools.

See the notes for the narrative that goes with the slides. The screencasts that were originally in the slides are not included. I will post these later.

The Media Suite is being developed in the Dutch CLARIAH Research Infrastructure project by an interdisciplinary, international team of developers, scholars and information technology specialists, and is maintained at one of the CLARIAH Centers, The Netherlands Institute for Sound and Vision.


AV in the spotlight at DH2018

Next year we will have an easy ride to the Digital Humanities Conference, as it will be organised in The Netherlands. But this year we’re off to Mexico City for a “Humanidades Digitales” experience. My first DH conference was in 2011 at Stanford, where I presented a poster on “Distributed Access to Oral History Collections: Fitting Access Technology to the Needs of Collection Owners and Researchers” (pdf), based on our experiences with the Verteld Verleden project. I remember being a bit disappointed that, at the time, there was not much interest at the conference in Oral History (or in my poster), nor in audiovisual content as a significant source for scholarly research. Thanks in part to the workshops organized at DH in recent years by the AV in Digital Humanities Special Interest Group, the topic ‘audiovisual’ is now emerging in Digital Humanities. I am very pleased to be able to present our work on the CLARIAH Media Suite at this year’s conference (on Thursday) and to show the huge progress the CLARIAH project has made in unlocking multimedia content (radio, television, film, oral history, newspapers, contracts, posters and photos) from Dutch archival institutions for scholarly research.


Using open content for a music video

To create some nice music videos for the songs I created for the new Grafton Music album (preview), I fiddled around with the Open Images repository to create a remix that could work as a music video. My first try was for a song called ‘Lighthouse’. And yes, after querying for ‘lighthouse’ I indeed stumbled upon some lighthouse-related footage. However, creating something exciting out of the beautiful but not exactly overwhelming amount of lighthouse footage was quite a challenge. So let’s say it’s about the music and the video is ‘just for entertainment’. On the other hand, the video somehow has a bit of the ’round-and-round-thingie’ you expect with a lighthouse, especially in the last part of the song.

My second try was for ‘Belle Rebelle’, a song by the famous French composer Gounod, with sixteenth-century lyrics by Jean-Antoine de Baïf, put in a modern arrangement. As I wanted to follow the storyline of the song a bit, I tried several keywords such as ‘love’ and ‘beautiful’, but that didn’t quite do it. The keyword ‘fashion’ was the lucky shot that brought me some really cool retro footage, which I thankfully ‘remixed’ for the music video. See below some outstanding example stills, or watch the video (and share if you like, thank you). Or try out openbeelden.nl yourself and dive into some great fashion content.

Special session on video hyperlinking: what to link and how to do that?

Video Hyperlinking

Video hyperlinking is attracting growing interest in the multimedia retrieval community. The goal of video hyperlinking is to apply the concept of linking that we are used to in the text domain to video: enabling the user to browse from one video to another. The assumption is that video hyperlinking can help users explore large video repositories more effectively. Links are created based on an automatically derived, topical relationship between video segments. The question, however, is how to identify which video segments in these repositories are good candidates for linking. And once we have such candidates, how do we make sure that the links to video targets are really interesting for a user? Five research groups presented their views on this today, at a special session at the International Conference on Multimedia Retrieval (ICMR2017) in Bucharest.
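To make the idea of an “automatically derived, topical relationship” concrete, here is a minimal, illustrative sketch (not taken from any of the systems discussed at the session): segment transcripts are turned into bags of words, and a link is proposed between two segments when the cosine similarity of their word counts exceeds a threshold. All names, transcripts and the threshold value are hypothetical.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def propose_links(segments, threshold=0.3):
    """Propose hyperlink candidates (i, j, score) between video
    segments whose transcript similarity exceeds the threshold."""
    bows = [Counter(s.lower().split()) for s in segments]
    links = []
    for i in range(len(bows)):
        for j in range(i + 1, len(bows)):
            sim = cosine_similarity(bows[i], bows[j])
            if sim >= threshold:
                links.append((i, j, round(sim, 2)))
    return links

# Toy transcripts for three hypothetical video segments
segments = [
    "the lighthouse keeper climbs the tower at night",
    "at night the lighthouse beam sweeps the coast",
    "a fashion show in paris in the sixties",
]
print(propose_links(segments))
```

Real systems use far richer representations (multimodal embeddings, speech transcripts, visual concepts), but the link-candidate question is the same: which segment pairs score high enough, and are those links actually interesting to a user?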

Hubs and false links

Chong-Wah Ngo from City University of Hong Kong…


CLARIN/CLARIAH Collaboration on Automatic Transcription Chain for Digital Humanities

In the CLARIAH project, we are developing the Media Suite, an application that supports scholarly research using audiovisual media collections. In 2017 we will also be integrating tools that support Oral History research into the Media Suite. From 10 to 12 May 2017, scholars and technology experts discussed the development of an automatic transcription chain for spoken word collections at a CLARIN-PLUS workshop in Arezzo, in the context of CLARIN, the European counterpart of CLARIAH. We observed that CLARIAH and CLARIN take different but complementary approaches to developing such a transcription chain, which encourages further collaboration.


Second CfP “Identifying and Linking Interesting Content in Large Audiovisual Repositories”

2nd CALL FOR ICMR2017 SPECIAL SESSION PAPERS

Identifying and Linking Interesting Content in Large Audiovisual Repositories

As technologies for component feature identification and standard ad hoc search mature, an emerging key challenge for multimedia information retrieval is to develop mechanisms for richer content analysis and representation, and novel modes of exploration: for example, enabling users to create their own personal narratives by seamlessly exploring (multiple) large audiovisual repositories at the segment level, either by following established trails or by creating new ones on the fly. A key research question in developing these new technologies and systems is how to automatically identify video content that viewers perceive to be interesting, taking multiple modalities (visual, audio, text) into account.

The ICMR2017 Special Session “Identifying and Linking Interesting Content in Large Audiovisual Repositories” is calling for papers (6 pages) presenting significant and innovative research on mechanisms that help identify significant elements within AV (or, more generally, multimedia) repositories, and on the creation of links between interesting video segments and other video segments or multimedia content.

Papers should extend the state of the art by addressing new problems or proposing insightful solutions. We encourage submissions covering relevant perspectives in this area including:

  • Multi/mixed-media hyperlinking (audio-to-image, text-to-video)
  • Linking across audiovisual repositories (e.g., from professional to public)
  • Alignment of social media posts to video (segments)
  • Video-to-video search
  • Retrieval models that incorporate multimodal, segment-based retrieval and linking
  • Segment-level recommendation in videos
  • Video segmentation and summarization
  • Multimodal search (explicit combination of multimodal features)
  • Query generation from video
  • Video-to-text description
  • Content-driven, social-driven interestingness prediction
  • Object interestingness modeling and prediction
  • (User) evaluation of interestingness, hyperlinking or archive exploration systems
  • Use cases related to video hyperlinking or interestingness prediction in video
  • Interfaces for linked-video based storytelling.

For submission details see: http://icmr2017.ro/call-for-special-sessions-s2.php