    • BIRDS - bridging the gap between information science, information retrieval and data science

      Frommholz, Ingo; Liu, Haiming; Melucci, Massimo; University of Bedfordshire; University of Padova (Association for Computing Machinery, Inc, 2020-07-30)
      The BIRDS workshop aimed to foster the cross-fertilisation of Information Science (IS), Information Retrieval (IR) and Data Science (DS). Recognising the commonalities and differences between these communities, the full-day workshop brought together experts and researchers in IS, IR and DS to discuss how they can learn from each other to provide more user-driven data and information exploration and retrieval solutions. To this end, the papers aimed to convey ideas on how to utilise, for instance, IS concepts and theories in DS and IR, or DS approaches to support users in data and information exploration.
    • Semantic Hilbert space for interactive image retrieval

      Jaiswal, Amit Kumar; Liu, Haiming; Frommholz, Ingo; University of Bedfordshire (Association for Computing Machinery, Inc, 2021-07-11)
      The paper introduces a model for interactive image retrieval utilising the geometrical framework of information retrieval (IR). We tackle the problem of image retrieval based on an expressive user information need in the form of a textual-visual query, where a user is attempting to find an image similar to the picture in their mind while querying. The user information need is expressed using guided visual feedback based on Information Foraging, which embeds the user's perception within the model via a semantic Hilbert space (SHS). This framework is based on the mathematical formalism of quantum probabilities and aims to understand the relationship between the user's textual and image input, where the image in the input is considered a form of visual feedback. We propose SHS, a quantum-inspired approach in which the textual-visual query is regarded as analogous to a physical system, allowing us to model different system states and their dynamic changes based on observations (such as queries and relevance judgements). This enables the model to learn a multimodal input representation and the relationships between textual-image queries for retrieving images. Our experiments, conducted on the MIT States and Fashion200k datasets, demonstrate the effectiveness of finding particular images automatically when the user inputs are semantically expressive.
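      The core quantum-inspired idea described in the abstract, representing a textual-visual query as a state vector in a Hilbert space and scoring candidate images by a Born-rule probability, can be illustrated with a minimal sketch. This is not the authors' implementation; the toy feature vectors, the superposition weight `alpha`, and the function names are all illustrative assumptions.

```python
import numpy as np

def normalize(v):
    """Scale v to unit norm so it can serve as a state vector in a Hilbert space."""
    return v / np.linalg.norm(v)

def born_probability(query_state, image_state):
    """Born rule: the probability of 'observing' the image given the query state
    is the squared magnitude of their inner product (np.vdot conjugates the
    first argument, so complex-valued states are also handled)."""
    return abs(np.vdot(query_state, image_state)) ** 2

# Hypothetical toy features standing in for learned text and image embeddings.
text_feat = normalize(np.array([0.9, 0.1, 0.4]))
image_feat = normalize(np.array([0.8, 0.2, 0.5]))

# A textual-visual query modelled as a superposition of the two modalities;
# the weight on the textual component is an assumption, not from the paper.
alpha = 0.7
query = normalize(alpha * text_feat + (1 - alpha) * image_feat)

# Rank a candidate image by its Born probability under the query state.
candidate = normalize(np.array([0.85, 0.15, 0.45]))
p = born_probability(query, candidate)
```

      In a full system the state would be updated as new observations (queries, relevance judgements, visual feedback) arrive, which this sketch omits.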