Events 2017

Inverse Rendering

Shida Beigpour
Joint Lecture Series
01 Feb 2017, 12:15 pm - 2:15 pm
Saarbrücken building E1 5, room 002
Photographs as well as video frames are two-dimensional projections of three-dimensional real-world scenes. Photorealistic computer-generated imagery (CGI) is usually produced by "rendering" a sophisticated three-dimensional model of a realistic scene onto a virtual camera sensor. This process is not invertible due to the considerable loss of information. Still, both physically captured photographs and synthesized images (CGI) contain a considerable amount of information about the characteristics of the scene itself. Even naïve human viewers are able to infer important information about the scene, e.g. shape, material, illumination, distance, and actions, solely from a single image. This is much more difficult for computer systems. Inverse rendering is one of the hot topics in computer vision, in which the goal is to automatically estimate and model the three-dimensional scene and its illumination given only one or a sparse set of images of the scene. This is by nature a highly under-constrained problem, which makes it very complex to solve. Yet advancements in computation and imaging technologies (e.g. depth sensors and light-field cameras) open new horizons in this field. Inverse rendering makes many interesting applications possible, including creating novel views of the scene/objects, re-lighting, and detecting and altering object materials in photographs, as well as augmented reality.

This talk provides a brief overview of some state-of-the-art inverse rendering techniques and datasets, with a strong focus on addressing the complexities of real-world scenarios.
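
As a rough illustration of why the problem is under-constrained, consider a minimal Lambertian forward model (a sketch for intuition only; the names and numbers below are illustrative, not from the talk): different combinations of surface reflectance and illumination can explain exactly the same observed pixel.

    # Sketch: Lambertian forward model; inverting a single observation is ambiguous.
    import numpy as np

    def render_pixel(albedo, normal, light_dir, light_intensity):
        """Forward model: pixel brightness under Lambertian shading."""
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        return albedo * light_intensity * max(0.0, float(n @ l))

    # A bright surface under dim light and a dark surface under strong light
    # produce the same pixel value -- the image alone cannot tell them apart.
    p1 = render_pixel(0.9, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]), 0.4)
    p2 = render_pixel(0.3, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]), 1.2)
    print(p1, p2)  # both 0.36

Estimating shape, material, and illumination jointly means many more unknowns than observations per pixel, which is why priors and additional sensors (depth, light fields) help so much.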

Time for Text Mining and Information Retrieval

Jannik Strötgen
Joint Lecture Series
11 Jan 2017, 12:15 pm - 2:15 pm
Saarbrücken building E1 5, room 002
Temporal information is an important aspect of any information space and has characteristics that make it well suited for exploitation in diverse text mining and information retrieval scenarios. For quite a long time, only the metadata of text documents (e.g. publication dates) was considered, but with recent improvements in natural language processing (NLP), temporal expressions occurring in the content of documents (e.g. "January 2016", "next week") can be extracted and interpreted efficiently and with high quality. This is particularly valuable as, in many types of documents, temporal expressions not only occur frequently but also play a crucial role, e.g. to anchor events in time.
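
To make temporal tagging concrete, here is a minimal rule-based sketch (assumed for illustration; it is not the speaker's system, and mature taggers additionally resolve relative expressions such as "next week" against a reference date):

    # Sketch: extract explicit expressions like "January 2016" and normalize
    # them to an ISO-like value (YYYY-MM). Illustrative only.
    import re

    MONTHS = {m: i + 1 for i, m in enumerate(
        ["January", "February", "March", "April", "May", "June", "July",
         "August", "September", "October", "November", "December"])}

    def tag_temporal(text):
        """Return (expression, normalized value) pairs for 'Month YYYY' mentions."""
        pattern = re.compile(r"\b(" + "|".join(MONTHS) + r")\s+(\d{4})\b")
        return [(m.group(0), "%s-%02d" % (m.group(2), MONTHS[m.group(1)]))
                for m in pattern.finditer(text)]

    print(tag_temporal("The law passed in January 2016, not in March 2017."))
    # [('January 2016', '2016-01'), ('March 2017', '2017-03')]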

In this talk, after explaining the basics and challenges of the NLP task of "temporal tagging", I present our approach to enriching documents with temporal information. Then, I showcase several application scenarios in which we exploit automatically extracted temporal annotations for search and exploration tasks. These range from temporal information retrieval for news articles, via Wikipedia-based event extraction, to the exploration of fictitious happenings in literary texts in the context of digital humanities research.