The main goal of multi-modal SpRL is to explore the extraction of spatial information from two information sources: images and text. This is important for various applications such as semantic search, question answering, geographical information systems, and robotics, where machines must understand navigational instructions or instructions for grasping and manipulating objects. It is also essential for specific tasks such as text-to-scene conversion (and vice versa), scene understanding, and general information retrieval over the large amounts of multimodal data available from various resources. Moreover, there is increasing interest in extracting spatial information from medical images that are accompanied by natural language descriptions. In this workshop we accept papers related to this research topic; moreover, we provide a pilot task and data for interested participants to build systems for such extraction.
Tuesday, 9/12/2017, 9:00-11:00 am.
| Time | Presentation | Speakers/Authors |
|---|---|---|
| 09:00-09:20 | CLEF 2017: Multimodal Spatial Role Labeling (mSpRL) Task Overview | Parisa Kordjamshidi, Taher Rahgooy, Marie-Francine Moens, James Pustejovsky, Umar Manzoor, Kirk Roberts |
| 09:20-09:50 | LIP6@CLEF2017: Multi-Modal Spatial Role Labeling using Word Embeddings | Eloi Zablocki, Patrick Bordes, Laure Soulier, Benjamin Piwowarski, Patrick Gallinari |
| 09:50-10:20 | Declarative Learning based Programming for Spatial Language Understanding | Parisa Kordjamshidi |
| 10:20-11:00 | Group discussion | |
As an example, consider the following textual description of an image:

About 20 kids in traditional clothing and hats waiting on stairs. A house and a green wall with gate in the background. A sign saying that plants can't be picked up on the right.

The first sentence, annotated with spatial roles, reads:

About 20 [kids]TRAJECTOR in traditional clothing and hats waiting [on]SPATIAL_INDICATOR [stairs]LANDMARK.
For example, in the sentences above, the location of the kids, i.e., the trajector, is described with respect to the stairs, i.e., the landmark, using the preposition on, i.e., the spatial indicator. These are examples of the spatial roles that we aim to extract from the sentence. The three roles are then connected by a spatial relation (link):

spatial_relation(kids, on, stairs)

Recognizing spatial relations is very challenging because a sentence can contain several spatial roles and the model must identify the right connections between them. For example, (waiting, on, stairs) is a wrong relation here because the trajector in this sentence is "kids", not "waiting". At this level, too, information from the image can be very helpful. The extracted relation can additionally be mapped onto a formal spatial meaning, for example:

above(kids, stairs)
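As a minimal, hypothetical sketch (not part of the official task resources), the roles and the relation of this example could be represented programmatically by parsing the bracket-style annotation shown above; the class and function names below are illustrative assumptions:

```python
import re
from dataclasses import dataclass

# Matches annotated spans such as "[kids]TRAJECTOR" in the example sentence.
ROLE_PATTERN = re.compile(
    r"\[(?P<text>[^\]]+)\](?P<role>TRAJECTOR|SPATIAL_INDICATOR|LANDMARK)"
)

@dataclass
class SpatialRelation:
    """A spatial relation linking the three roles of this example."""
    trajector: str
    spatial_indicator: str
    landmark: str

def parse_roles(annotated_sentence: str) -> dict:
    """Collect the annotated spans, keyed by their spatial role label."""
    return {
        m.group("role"): m.group("text")
        for m in ROLE_PATTERN.finditer(annotated_sentence)
    }

sentence = (
    "About 20 [kids]TRAJECTOR in traditional clothing and hats "
    "waiting [on]SPATIAL_INDICATOR [stairs]LANDMARK."
)

roles = parse_roles(sentence)
relation = SpatialRelation(
    trajector=roles["TRAJECTOR"],
    spatial_indicator=roles["SPATIAL_INDICATOR"],
    landmark=roles["LANDMARK"],
)
print(relation)
# SpatialRelation(trajector='kids', spatial_indicator='on', landmark='stairs')
```

In a real system the roles would of course be predicted from the raw sentence (and the image) rather than read off gold annotations; the sketch only illustrates the target output structure, i.e., spatial_relation(kids, on, stairs).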
Parisa Kordjamshidi, Tulane University, pkordjam@tulane.edu
Taher Rahgooy, Tulane University, taher.rahgooy@gmail.com
Marie-Francine Moens, KULeuven, sien.moens@cs.kuleuven.be
James Pustejovsky, Brandeis University, jamesp@cs.brandeis.edu
Kirk Roberts, UT Health Science Center at Houston, kirk.roberts@uth.tmc.edu