1st Workshop on Advanced AI Techniques for Data Management and Analytics (AIDMA)
September 4, 2023, Barcelona, Spain (co-located with ADBIS 2023)
Artificial Intelligence (AI) methods are now well established and have been fully integrated by the data management and analytics (DMA) community as an innovative way to address some of its challenges. Examples include, without being exhaustive, the personalization of database queries, the management of big data streams of time series, the recommendation of dashboards in business intelligence, the ML-based indexing of large amounts of data, possibly in distributed systems, the guided exploration of novel data, and the explanation of data provenance in queries.
The AIDMA workshop fully embraces this trend in data management and analytics. It aims to gather researchers from AI, data management, and analytics to address the new challenges of the domain. These challenges are now of prime importance for several reasons.
- Data are increasingly complex and, most of the time, are not simple tabular data.
- The remarkable achievements of AI and DMA, past and present, have spurred a growing demand from high-stakes domains, such as renewable energy systems, for comprehensive data management and analysis solutions.
- Medicine and healthcare management systems now heavily rely on the recent progress of machine and deep learning to support complex decision tasks.
These new approaches must treat the user as a first-class citizen in the data management and analysis process. This concerns the user's ability to influence the process and to understand the rationale behind a data analysis, an issue addressed by the field of explainability. Finally, interacting with users requires efficiently managing the uncertainty attached to data as well as to any artificial intelligence process. Uncertainty needs to be managed at various levels of the data management process: data collection, data querying, machine learning, and data analytics. For instance, the presence of uncertainty can be a source of semantic errors during query evaluation. Moreover, traditional machine learning and deep learning models do not account for uncertainty in data and predictions, even though they are prone to noise. Quantifying uncertainty is therefore a critical challenge for most machine learning techniques.
Program (subject to changes)
Monday, September 4, 2023

14:00 - 14:05  Opening
14:05 - 14:40  Keynote by Michele Linardi
14:40 - 15:05  Data exploration based on local attribution explanation: a medical use case, by Elodie Escriva, Emmanuel Doumard, Jean-Baptiste Excoffier, Julien Aligon, Paul Monsarrat, and Chantal Soulé-Dupuy
15:05 - 15:30  Explainability based on Feature Importance for Better Comprehension of Machine Learning in Healthcare, by Pronaya Prosun Das and Lena Wiese
15:30 - 16:00  Coffee break
16:00 - 16:25  An Empirical Study on the Robustness of Active Learning for Biomedical Image Classification under Model Transfer Scenarios, by Tamás Janusko, Julius Gonsior, and Maik Thiele
16:25 - 16:50  Evaluating the Robustness of ML Models To Out-of-Distribution Data Through Similarity Analysis, by Joakim Lindén, Håkan Forsberg, Masoud Daneshtalab, and Ingemar Söderquist
16:50 - 17:15  Holistic Analytics of Sensor Data From Renewable Energy Sources: A Vision Paper, by Kejser Jensen and Christian Thomsen
Keynote
Explainable Models for Time Series: Recent Advancements and New Perspectives, by Michele Linardi (Cergy Paris University, France)
Abstract: Multivariate time series (MTS) analysis tasks, such as classification, forecasting, and anomaly detection, are omnipresent problems in many scientific domains. In this context, several state-of-the-art solutions rely on deep learning (DL) architectures such as CNNs (Convolutional Neural Networks), LSTMs (Long Short-Term Memory networks), and attention-based architectures like the Transformer. Despite their effectiveness and widespread use, DL models remain uninterpretable black boxes: the user feeds in an input and obtains an output without understanding the motivations behind that decision. In countless real-world domains, from legislation and law enforcement to healthcare and precision agriculture, diagnosing which aspects of a model's input drive its output is essential to ensure that decisions are driven by appropriate insights in the context of use. In this sense, explainable machine learning (xAI) techniques aim to provide a solid descriptive approach to DL models, and they are on the cusp of becoming a compulsory requirement in all use cases. In this talk, we introduce and present the main state-of-the-art xAI methods adopted in MTS DL models. Among several technical and fundamental aspects, we will show how xAI solutions become effective when they can leverage the causal relationships (between target and predictors) occurring over MTS variables. We will also present how xAI solutions can be effective in DL domain adaptation, an omnipresent problem in various scientific fields, including healthcare.
Speaker's short bio: Michele Linardi (Ph.D. in Computer Science) is an assistant professor (maître de conférences) at Cergy Paris University (CYU, ETIS laboratory). His research interests span the areas of time series analytics and databases, with a particular interest in machine learning for temporal data.
Venue
See the ADBIS 2023 conference website: https://www.essi.upc.edu/dtim/ADBIS2023/
Topics of interest
The AIDMA workshop is a consolidation event for the following workshop series: AID4RES 2023, EXEC-MAN 2023, MODUS 2023, and SMA2 2023.
Workshop chairs
Allel Hadjali (Engineering School ENSMA, Poitiers, France)
Anton Dignös (Free University of Bozen-Bolzano, Italy)
Danae Pla Karidi (Athena Research Center, Greece)
Fabio Persia (University of L'Aquila, Italy)
George Papastefanatos (Athena Research Center, Greece)
Giancarlo Sperlì (University of Naples "Federico II", Italy)
Giorgos Giannopoulos (Athena Research Center, Greece)
Haomiao Wang (RESTORE, France)
Julien Aligon (University Toulouse Capitole, France)
Manolis Terrovitis (Athena Research Center, Greece)
Nicolas Labroche (University of Tours, France)
Paul Monsarrat (RESTORE, France)
Richard Chbeir (University Pau & Pays Adour, Anglet, France)
Robin Cugny (SolutionData Group, France)
Sana Sellami (Aix Marseille University, Marseille, France)
Seshu Tirupathi (IBM Research Europe)
Torben Bach Pedersen (Aalborg University, Denmark)
Vincenzo Moscato (University of Naples "Federico II", Italy)
Proceedings
Workshop papers are published by Springer in the Communications in Computer and Information Science (CCIS) book series. The Springer CCIS volume with workshop papers will be published only as a DIGITAL volume, available for downloading from the Springer portal.
Authors of the best workshop papers will be invited to submit an extended version of their paper to the ComSIS Journal (2-year IF: 1.170), which is indexed by the Science Citation Index (SCI, Thomson Reuters), SCOPUS (Elsevier), and Summon (Serials Solutions).
Diversity and Inclusion
We kindly ask authors to adopt inclusive language in their papers and presentations (https://dbdni.github.io/pages/inclusivewriting.html and https://dbdni.github.io/pages/inclusivetalks.html), and all participants to adhere to the code of conduct (https://dbdni.github.io/pages/codeofconduct.html).