Invited Speakers

The following speakers have graciously agreed to give keynotes at AIST-2020.

Marcello Pelillo

Marcello Pelillo is a Full Professor of Computer Science at Ca’ Foscari University of Venice, Italy, where he leads the Computer Vision and Pattern Recognition group. He has directed the European Centre for Living Technology (ECLT) and has held visiting research/teaching positions at various institutions such as Yale University, McGill University, the University of Vienna, York University (UK), National ICT Australia (NICTA), Wuhan University, Huazhong University of Science and Technology, and South China University of Technology. He has been General Chair for ICCV 2017, Program Chair for ICPR 2020, and has served in various roles in the organization of the main conferences of his research areas. He is the Specialty Chief Editor of Frontiers in Computer Vision and serves, or has served, on the Editorial Boards of several journals, including IEEE Transactions on Pattern Analysis and Machine Intelligence, IET Computer Vision, Pattern Recognition, and Brain Informatics. He also serves on the Advisory Board of the International Journal of Machine Learning and Cybernetics. Prof. Pelillo has been elected a Fellow of the IEEE and a Fellow of the IAPR, and is an IEEE SMC Distinguished Lecturer. His Erdős number is 2.

Graph-theoretic Methods in Computer Vision: Recent Advances

Abstract: Graphs and graph-based representations have long been an important tool in computer vision and pattern recognition, especially because of their representational power and flexibility. There is now a renewed interest toward explicitly formulating computer vision problems as graph problems. This is particularly advantageous because it allows vision problems to be cast in a pure, abstract setting with solid theoretical underpinnings and also permits access to the full arsenal of graph algorithms developed in computer science and operations research. In this talk I’ll describe some recent developments in graph-theoretic methods which allow us to address within a unified and principled framework a number of classical computer vision problems. These include interactive image segmentation, image geo-localization, image retrieval, multi-camera tracking, and person re-identification. The concepts discussed here have intriguing connections with optimization theory, game theory and dynamical systems theory, and can be applied to weighted graphs, digraphs and hypergraphs alike.

Miguel Couceiro

Miguel Couceiro is a Professor of Computer Science at the University of Lorraine in Nancy, and head of the ORPAILLEUR team at LORIA (UMR 7503). His current research focuses on knowledge discovery and multicriteria decision making, recently with a particular emphasis on fair and explainable models. He has (co-)authored more than 180 papers and book chapters. He is an elected member (2018-2020) of the IEEE CS Technical Committee on MVL, and a PC member of several conferences. He is the local coordinator of the European Erasmus Mundus master's program LCT (Language and Communication Technologies) and is responsible for the 2nd-year master's program in NLP at the University of Lorraine.

Making models fairer through explanations

Abstract: Algorithmic decisions are now made on a daily basis, often based on Machine Learning (ML) processes that may be complex and biased. This raises several concerns, given the critical impact that biased decisions may have on individuals and on society as a whole. Not only do unfair outcomes affect human rights, they also undermine public trust in ML and AI.

In this talk we will address fairness issues of ML models based on decision outcomes, and we will show how the simple idea of “feature dropout” followed by an “ensemble approach” can improve model fairness. To illustrate we will revisit the case of “LimeOut” that was proposed to tackle “process fairness”, which measures a model’s reliance on sensitive or discriminatory features. Given a classifier, a dataset and a set of sensitive features, LimeOut first assesses whether the classifier is fair by checking its reliance on sensitive features using “Lime explanations”. If deemed unfair, LimeOut then applies feature dropout to obtain a pool of classifiers. These are then combined into an ensemble classifier that was empirically shown to be less dependent on sensitive features without compromising the classifier’s accuracy.
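The dropout-and-ensemble idea can be sketched in a few lines. The following is a minimal illustration, not the LimeOut implementation: it uses synthetic data, assumes the indices of the sensitive features are already known (in LimeOut they are flagged via Lime explanations), and simply averages the predicted probabilities of classifiers trained with different sensitive features dropped.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: columns 0 and 1 play the role of sensitive features,
# columns 2 and 3 carry the actual signal.
X = rng.normal(size=(200, 4))
y = (X[:, 2] + X[:, 3] > 0).astype(int)
sensitive = [0, 1]

def drop(X, cols):
    """Remove the given feature columns."""
    keep = [j for j in range(X.shape[1]) if j not in cols]
    return X[:, keep]

# Pool of classifiers: one with all sensitive features dropped,
# plus one per individually dropped sensitive feature.
pool = [(cols, LogisticRegression().fit(drop(X, cols), y))
        for cols in [sensitive] + [[j] for j in sensitive]]

def ensemble_proba(X):
    # Average the class probabilities over the pool members.
    return np.mean([clf.predict_proba(drop(X, cols)) for cols, clf in pool],
                   axis=0)

accuracy = (ensemble_proba(X).argmax(axis=1) == y).mean()
```

The intent is that averaging over members that each lack some sensitive feature reduces the ensemble's reliance on any single one of them, while accuracy is preserved because the informative features remain available.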

We will present experiments on multiple datasets and several state-of-the-art classifiers, which show that LimeOut's classifiers improve (or at least maintain) process fairness as well as other fairness metrics, such as individual and group fairness, equal opportunity, and demographic parity, among others.

Leonard Kwuida

Leonard Kwuida graduated from the Université de Yaoundé I in Mathematics and holds a Ph.D. in Algebra and Logic from TU Dresden. His research interests include Formal Concept Analysis and data analysis, algebraic operators for knowledge discovery in databases, explainable AI, ordered structures, and weak types of negation. Dr. Kwuida teaches at the Bern University of Applied Sciences, School of Business.

On interpretability and similarity in concept based Machine Learning

Abstract: Machine Learning (ML) provides important techniques for classification and prediction. Most of these are black-box models for users and do not provide decision makers with an explanation. For the sake of transparency and greater validity of decisions, the need to develop explainable/interpretable ML methods is gaining more and more importance. Certain questions need to be addressed:

  • How does a ML procedure derive the class for a particular entity?
  • Why does a particular clustering emerge from a particular unsupervised ML procedure?
  • What can we do if the number of attributes is very large?
  • What are the possible reasons for mistakes in concrete cases and models?

For binary attributes, Formal Concept Analysis (FCA) offers techniques in terms of intents of formal concepts, and thus provides plausible reasons for model predictions. However, from the interpretable machine learning viewpoint, we still need to provide decision makers with the importance of individual attributes to the classification of a particular object, which may facilitate explanations by experts in domains with high-cost errors, such as medicine or finance.
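As a toy illustration of how intents can act as explanations, consider the two derivation operators of FCA on a small, made-up formal context (the context and object names below are purely illustrative, not from the talk):

```python
# Formal context: objects mapped to their attribute sets.
I = {
    'swan':  {'bird', 'swims', 'flies'},
    'duck':  {'bird', 'swims', 'flies'},
    'eagle': {'bird', 'flies'},
    'shark': {'swims'},
}

def intent(objs):
    """Attributes shared by every object in objs (the derivation objs')."""
    return set.intersection(*(I[g] for g in objs))

def extent(attrs):
    """Objects possessing every attribute in attrs (the derivation attrs')."""
    return {g for g in I if attrs <= I[g]}

A = {'swan', 'duck'}
B = intent(A)                 # common attributes of the group
is_concept = extent(B) == A   # (A, B) is a formal concept iff this closes
```

Here the intent B is exactly the set of shared attributes that "explains" why swan and duck are grouped together, and the closure check confirms that (A, B) is a formal concept.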

In this talk, we will discuss how notions from cooperative game theory can be used to assess the contribution of individual attributes in classification and clustering processes in concept-based machine learning. To address the third question, we present some ideas on how to reduce the number of attributes using similarities in large contexts.
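The game-theoretic idea can be made concrete with the classical Shapley value, where attributes play the role of players and a characteristic function records whether a coalition of attributes suffices for a positive classification. Below is a minimal exact computation over a made-up characteristic function (only an illustration of the general mechanism, not the method presented in the talk):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a characteristic function `value`
    defined on frozensets of players."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                S = frozenset(coal)
                # Weight of coalition S in the Shapley average.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[p] += w * (value(S | {p}) - value(S))
    return phi

# Toy characteristic function: the classification is positive iff the
# attribute set contains 'a' and at least one of 'b' or 'c'.
def suffices(S):
    return 1.0 if 'a' in S and ({'b', 'c'} & S) else 0.0

phi = shapley_values(['a', 'b', 'c'], suffices)
```

By the efficiency axiom, the values sum to the worth of the full attribute set, and the indispensable attribute 'a' receives the largest share (2/3 here, against 1/6 each for the interchangeable 'b' and 'c').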

Santo Fortunato

Santo Fortunato is the Director of the Indiana University Network Science Institute (IUNI) and a faculty member at the Luddy School of Informatics, Computing, and Engineering. Previously he was a professor of complex systems at the Department of Computer Science of Aalto University, Finland. Prof. Fortunato received his PhD in Theoretical Particle Physics at the University of Bielefeld in Germany. He then moved to the field of complex systems via a postdoctoral appointment at the Luddy School of Informatics, Computing, and Engineering of Indiana University. His current focus areas are network science (especially community detection in graphs), computational social science, the science of science, and climate change. His research has been published in leading journals, including Nature, Science, PNAS, Physical Review Letters, Reviews of Modern Physics, and Physics Reports, and has collected over 33,000 citations (Google Scholar). His review article "Community detection in graphs" (Physics Reports 486, 75-174, 2010) is one of the best-known and most-cited papers in network science. He received the Young Scientist Award for Socio- and Econophysics 2011, a prize given by the German Physical Society, for his outstanding contributions to the physics of social systems. He is the Founding Chair of the International Conference on Computational Social Science (IC2S2) and Chair of Networks 2021, the first merger of the NetSci and Sunbelt conferences, possibly the largest ever event in network science.

Consensus clustering in networks

Abstract: Algorithms for community detection are usually stochastic, leading to different partitions for different choices of random seeds. Consensus clustering is an effective technique to derive more stable and accurate partitions than the ones obtained by the direct application of the algorithm. Here we will show how this technique can be applied recursively to improve the results of clustering algorithms. The basic procedure requires the calculation of the consensus matrix, which can be quite dense if (some of) the clusters of the input partitions are large. Consequently, the complexity can get dangerously close to quadratic, which makes the technique inapplicable to large graphs. Hence we also present a fast variant of consensus clustering, which calculates the consensus matrix only on the links of the original graph and on a comparable number of additional node pairs, suitably chosen. This brings the complexity down to linear, while the performance remains comparable to that of the full technique. Therefore, the fast consensus clustering procedure can be applied to networks with millions of nodes and links.
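The core step of the fast variant, scoring only the original graph's edges by how often their endpoints are co-clustered across runs, can be sketched as follows. The toy graph and the hand-made partitions are purely illustrative (standing in for the output of a stochastic community detection algorithm):

```python
def consensus_on_edges(edges, partitions):
    """For each edge, the fraction of input partitions that place its two
    endpoints in the same cluster (fast variant: only the original graph's
    edges are scored, not all node pairs)."""
    return {(u, v): sum(p[u] == p[v] for p in partitions) / len(partitions)
            for u, v in edges}

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

correct = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
noisy1 = {**correct, 2: 'B'}   # one run misplaces node 2
noisy2 = {**correct, 3: 'A'}   # another run misplaces node 3

# Five runs of a (hypothetical) stochastic algorithm: three agree, two are noisy.
partitions = [correct, correct, correct, noisy1, noisy2]

weights = consensus_on_edges(edges, partitions)
# Intra-community edges get high consensus weight, the bridge gets a low one,
# so re-clustering the weighted graph sharpens the community structure.
```

Since each edge is scored in time proportional to the number of input partitions, the whole pass is linear in the number of links, which is what allows the procedure to scale to very large networks.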

Industry talks

Nikita Semenov

Nikita Semenov is Head of the Machine Learning Department at MTS AI. His recent interests focus on spoken language understanding and speech generation, with particular attention to contextual dependence and the emotional component of generated speech.

Text and speech processing projects at MTS AI

Abstract: How does one build effective systems for processing and understanding speech in a large corporation, and what challenges does the researcher face? It is no secret that any solution has its own life cycle, including solutions based on natural language processing technologies. We will dwell in particular on the components of the life cycle of solutions built on NLP and ASR technologies.

Ivan Smurov

Ivan Smurov graduated from the Department of Mechanics and Mathematics of Moscow State University in 2012, where he specialized in mathematical linguistics. He has a decade of experience bringing NLP research and industry closer together: applying cutting-edge NLP models to industrial cases as well as conducting research in the areas most useful for practical applications. Currently Ivan is the head of NLP in the Advanced Research Department at ABBYY and a professor at the Moscow Institute of Physics and Technology. At the moment he is most interested in syntactic and semantic parsing, relation extraction, and summarization.

When CoNLL-2003 is not enough: are academic NER and RE corpora well-suited to represent real-world scenarios?

Abstract: Many business applications require named entity recognition (NER) or relation extraction (RE). There exist several well-studied academic corpora, and the scores obtained on them are typically high. Taking recent advances in NER into account, one could even assume that it is a largely solved task.

However, business applications seldom enjoy the high scores reported in academia. One can theorize that the main reason is that both the text sources and the entities dealt with in industry differ noticeably from those studied in academia.

In my talk, I will highlight the key differences between a typical academic NER/RE corpus and the data often dealt with in industry. Further, I will describe our attempt to bridge the gap between business and researchers by creating the RuREBus corpus and conducting the RuREBus shared task. Finally, I will provide some insights into the practical interpretation of the shared task results.