Research Center in Artificial Intelligence

CRIA Seminars


Speaker Staffan LARSSON
Abstract How are meanings of utterances related to the world and our perception of it? What is meaning, and how is it created? How do word meanings contribute to utterance meaning? We are working towards a formal semantics that aims to provide answers to these and related questions, starting from situated interaction between agents. The meanings of many expressions can be modeled as classifiers of real-world information. Expressions can be single words, or phrases and sentences whose meanings are composed from the meanings of their constituents. By interacting, agents coordinate on meanings by training classifiers. To make formally explicit the notions of coordination, compositionality and classification, and to relate these notions to each other, we use TTR (a type theory with records).
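A minimal toy illustration of the "meanings as classifiers" idea mentioned above: the meaning of a word is a classifier over perceptual features, updated each time a dialogue partner accepts or corrects the speaker's use of the word. The features, data, and perceptron-style update are invented for the example and are not the TTR formalization discussed in the talk.

```python
# Toy sketch: the meaning of "red" as a classifier over (R, G, B) features,
# coordinated through interaction.  Invented example, not the speaker's model.
import numpy as np

w = np.zeros(3)                     # "meaning" of the word = classifier weights
b = 0.0

def applies(rgb):                   # does the word apply to this percept?
    return np.dot(w, rgb) + b > 0

def coordinate(rgb, partner_says_red, lr=0.1):
    """Perceptron-style update after feedback from the dialogue partner."""
    global w, b
    y = 1.0 if partner_says_red else -1.0
    if (np.dot(w, rgb) + b) * y <= 0:        # use of the word misaligned with the partner
        w += lr * y * np.asarray(rgb)
        b += lr * y

# A few interaction episodes: (colour percept, partner judged it "red").
for rgb, judged_red in [((0.9, 0.1, 0.1), True), ((0.1, 0.8, 0.2), False),
                        ((0.8, 0.2, 0.1), True), ((0.2, 0.2, 0.9), False)] * 5:
    coordinate(rgb, judged_red)

print(applies((0.95, 0.05, 0.1)))   # True: the classifier now treats this percept as "red"
print(applies((0.1, 0.9, 0.1)))     # False
```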
When and where Thursday 8th April, 10:30AM on Zoom
Biography Staffan Larsson (b. 1969) was educated at the University of Gothenburg (1992-1996) and gained a PhD in Linguistics there (1997-2002). Since 2013, he has been Professor of Computational Linguistics at the Department of Philosophy, Linguistics and Theory of Science at the University of Gothenburg. He is also a member of CLASP (Centre for Research on Linguistic Theory and Studies in Probability) and co-founder and Chief Science Officer of Talkamatic AB. His areas of interest include dialogue, dialogue systems, language and perception, pragmatics, formal semantics, semantic coordination, and philosophy of language.
Speaker Alberto TESTOLIN
Abstract Mathematics is one of the most impressive achievements of human cultural evolution. Although we perceive it as highly abstract, it is widely believed that mathematical skills are rooted in a phylogenetically ancient "number sense", which allows us to approximately represent quantities. However, the relationship between the number sense and the subsequent acquisition of symbolic mathematical concepts remains controversial. In this seminar I will discuss how recent advances in AI and deep learning research might allow us to investigate how the acquisition of numerical concepts could be grounded in sensorimotor experiences. Success in this challenging enterprise would have immediate implications for cognitive science, but also far-reaching impact for educational practice and for the creation of the next generation of intelligent machines.
When and where Thursday 1st April, 10:30AM on Zoom
Biography Dr. Alberto Testolin received the M.Sc. degree in Computer Science and the Ph.D. degree in Psychological Sciences from the University of Padova, Italy, in 2011 and 2015, respectively. In 2019 he was Visiting Scholar at the Department of Psychology at Stanford University. He is currently Assistant Professor at the University of Padova, with a joint appointment at the Department of Information Engineering and the Department of General Psychology. He is broadly interested in artificial intelligence, machine learning and cognitive neuroscience. His main research interests are statistical learning theory, predictive coding, sensory perception, cognitive modeling and applications of deep learning to signal processing and optimization. He is an active member of the IEEE Task Force on Deep Learning.
Speaker Jun TANI
Abstract The focus of my research has been to investigate how cognitive agents can acquire structural representation via iterative interaction with the world, exercising agency and learning from resultant perceptual experience. For this purpose, my group has investigated various models analogous to predictive coding and active inference frameworks. For the past two decades, we have applied these frameworks to develop cognitive constructs for robots. My talk attempts to clarify underlying cognitive and mind mechanisms for compositionality, social cognition, and consciousness from analysis of emergent phenomena observed in these robotics experiments.
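A minimal numerical sketch of the predictive-coding idea referred to in the abstract: an internal state is adjusted by gradient descent so that the prediction it generates matches the incoming observation. The generative mapping and all numbers are invented for illustration and are not the speaker's robot models.

```python
# Toy predictive coding: infer a hidden state z by minimizing prediction error.
import numpy as np

W = np.array([[1.0, 0.5],
              [0.0, 2.0]])            # fixed toy generative mapping: prediction = W @ z

def infer(observation, steps=200, lr=0.1):
    z = np.zeros(2)                   # internal (hidden) state
    for _ in range(steps):
        error = observation - W @ z   # prediction error
        z += lr * (W.T @ error)       # descend the squared-error gradient w.r.t. z
    return z, W @ z

obs = np.array([2.0, 1.0])
z, prediction = infer(obs)
print(z, prediction)                  # the prediction converges toward the observation
```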
When and where Wednesday 31st March, 7:00PM (exceptionally) on Zoom
Biography Jun Tani received the D.Eng. degree from Sophia University, Tokyo in 1995. He started his research career with Sony Computer Science Lab. in 1993. He became a Team Leader of the Laboratory for Behavior and Dynamic Cognition, RIKEN Brain Science Institute, Saitama, Japan in 2001. He became a Full Professor with the Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon, South Korea in 2012. He is currently a Full Professor with the Okinawa Institute of Science and Technology, Okinawa, Japan. His current research interests include cognitive neuroscience, developmental psychology, phenomenology, complex adaptive systems, and robotics.
Speaker Gary LUPYAN
Abstract That people are able to communicate on a wide range of topics with reasonable success is often taken as evidence that we have a largely overlapping conceptual repertoire. But where do our concepts come from and how similar are they, really? On one widespread view, humans are born with a core-knowledge system and a set of conceptual categories onto which words map. Alternatively, many of our concepts — including some that seem very basic — may derive from our experience with and use of language. On this view, language plays a key role in both constructing and aligning our conceptual spaces. I will argue in favor of the second view, present evidence for the causal role of language in categorization and reasoning, and describe what consequences this position has for the theoretical possibility of telepathy.
When and where Thursday 25th March, 10:30AM on Zoom
Biography Gary Lupyan is a professor of psychology at the University of Wisconsin-Madison. He obtained his doctorate in 2007 at Carnegie Mellon with Jay McClelland, followed by postdocs in cognitive (neuro)science at Cornell University and the University of Pennsylvania. At the center of his research interests is the question of whether and how our cognition and perception are augmented by language. What does language *do* for us? Other major research interests have spanned top-down effects in perception, the evolution of language, iconicity, and causes of linguistic diversity (do languages adapt to different socio-demographic environments?).
Speaker Tali Leibovich-Raveh
Abstract I will discuss the integration of non-numerical magnitudes during a quantity comparison task in humans and in an animal model – the archerfish. Then, I will discuss the influence of bottom-up and top-down factors on the automatic processing of quantities when adults are asked to compare a specific non-numerical magnitude (convex hull, total surface area, or the average diameter of the dots). I will briefly present one way in which studying the influence of non-numerical magnitudes can contribute to early mathematics education.
When and where Thursday 4th February 2021, 10:30AM on Zoom
Biography Tali Leibovich-Raveh (B.Sc. in Medical Laboratory Science, M.Sc. in Human Genetics, PhD in Cognitive Sciences) is a senior lecturer in the Department of Mathematical Education at Haifa University, head of the "Brain and Math Education" internship, and an editorial board member of the Journal of Numerical Cognition.
Speaker Ahmed HALIOUI
Abstract In technical domains such as bioinformatics, acquiring the knowledge involved in problem-solving processes (e.g., workflows) raises several challenges, related both to the data and tools used and to the representation of the application domain, e.g., phylogenetic analysis. Generalized (abstract) process patterns can then serve to further guide an interactive construction of problem-solving tasks. Here, the space of generalized patterns is irregular because it is induced by a process schema, itself drawn from a domain ontology, with hierarchies dedicated to the process components (tasks, parameters, constraints on data, etc.) and to the interactions between these components. Although workflow structures are DAGs (directed acyclic graphs), we show that our problem can be represented by a space of generalized sequential patterns with labeled links between elements. In this work we define the task of exploring this space and propose a depth-first search method that uses only a set of primitive refinement operations exploiting the structure of the ontology. Moreover, the component hierarchies are suitably indexed to provide a total order over pattern components, which allows an exhaustive yet non-redundant exploration of the pattern space. A recommendation-based evaluation indicates that our method performs satisfactorily in terms of predictive efficiency and accuracy.
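As a rough illustration of the kind of search described above, the toy sketch below enumerates generalized sequential patterns over a small invented concept hierarchy, using depth-first refinement with support-based pruning. The hierarchy, workflows, and refinement operator are hypothetical and far simpler than the ontology-driven operators of the talk.

```python
# Toy mining of generalized sequential patterns over a concept hierarchy.
HIERARCHY = {                      # child -> parent ("is-a" links of a tiny invented ontology)
    "ClustalW": "Alignment", "MAFFT": "Alignment",
    "RAxML": "TreeInference", "PhyML": "TreeInference",
    "Alignment": "Task", "TreeInference": "Task",
}

def ancestors(concept):
    """Return the concept and all its ancestors in the hierarchy."""
    out = [concept]
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        out.append(concept)
    return out

def matches(pattern, sequence):
    """A pattern matches if its items map, in order, to sequence items
    that are equal to or more specific than the pattern items."""
    i = 0
    for item in sequence:
        if i < len(pattern) and pattern[i] in ancestors(item):
            i += 1
    return i == len(pattern)

def support(pattern, sequences):
    return sum(matches(pattern, s) for s in sequences)

def dfs(pattern, sequences, concepts, min_sup, found):
    """Depth-first refinement: extend the pattern by one concept and keep
    only refinements that remain frequent (anti-monotone pruning)."""
    for c in concepts:
        refined = pattern + [c]
        if support(refined, sequences) >= min_sup:
            found.append(refined)
            dfs(refined, sequences, concepts, min_sup, found)

workflows = [["MAFFT", "RAxML"], ["ClustalW", "RAxML"], ["MAFFT", "PhyML"]]
concepts = sorted(set(HIERARCHY) | set(HIERARCHY.values()))
patterns = []
dfs([], workflows, concepts, 2, patterns)
print(patterns)   # e.g. ['Alignment'], ['Alignment', 'TreeInference'], ...
```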
Date and place Thursday 13 February 2020, 10:30, Room PK-5115
Biography Dr. Ahmed Halioui holds a PhD in cognitive computer science from UQAM. His thesis, entitled "Extraction de flux de travaux abstraits à partir des textes : application à la bioinformatique", was supervised by Abdoulaye B. Diallo and Petko Valtchev. He now works as a data scientist at My Intelligent Machines (MIMs), a start-up combining genomics, bioinformatics and artificial intelligence. His research interests include ontologies, data mining, information extraction and bioinformatics.
Speaker Maxime Radmacher
Abstract The dairy industry is an important economic sector for the country. As part of an industrial project with the company Valacta, our team is developing predictive models of the milk production of the Quebec dairy herd, in order to offer farmers a decision-support tool. The team has developed a multivariate sequence prediction model based on a recurrent neural network (LSTM). The validity of the model remains a key issue before it can be put into production. Considering the current state of the soils and the dairy industry's dependence on fossil fuels, it is unrealistic to think that the coming decade can be properly modeled from the previous one. Building a long-term model must now incorporate an in-depth analysis of the domain and take the climate emergency into account. But how can this be done?
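For readers unfamiliar with the model family mentioned above, here is a minimal sketch of a multivariate LSTM forecaster in PyTorch. The features, dimensions, and training data are invented placeholders, not the project's actual model.

```python
# Minimal multivariate LSTM forecaster (illustrative sketch only).
import torch
import torch.nn as nn

class MilkLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)   # predict the next time step

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # forecast from the last hidden state

# Toy training loop on random data standing in for per-cow monthly records
# (e.g. milk yield, fat %, protein %, somatic cell count).
model = MilkLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(64, 12, 4)        # 64 cows, 12 past months, 4 features
y = torch.randn(64, 4)            # next-month record to predict
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```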
Date and place Friday 7 February 2020, 10:30, Room PK-4610
Biography Maxime is a research engineer in the UQAM bioinformatics laboratory headed by Abdoulaye Baniré Diallo. He holds an engineering degree from the École Polytechnique de Paris, with a specialization in biology.
Speaker Jean Guy Meunier
Date and place Thursday 30 January 2020, 10:30, Room PK-5115
Biography Jean Guy Meunier is an associate professor in the Department of Philosophy, co-director of LANCI, and accredited to the doctoral program in cognitive computer science. He is a member of the Institute of Cognitive Sciences at UQAM and of the International Academy of Philosophy of Science (Brussels). He has been doing research in computer-assisted text analysis since 1968 and is recognized as one of the pioneers of the Digital Humanities. This fall he will publish a book with Bloomsbury (London): Computational Semiotics.
Date and place Friday 13 December 2019, 10:30, Room PK-4610
Abstract This is the call for participation in the inaugural meeting of the *"Hybridation en IA" (Hybridization in AI) discussion group* within CRIA. In keeping with the center's mission, we are interested in the hybridization of connectionist methods with symbolic-processing methods. For this first lunchtime discussion, the proposed theme is "The limits of deep learning and artificial neural networks".
Speaker Discussion format
Date and place 12 December 2019, 10:30, Room PK-5115
Abstract Judea Pearl, recipient of the Turing Award (2011), argues that deep learning algorithms have so far sought to discover associations and to fit pre-existing curves. As long as machines are not endowed with reasoning capabilities, Bengio and Pearl [1, 2] consider that they will not be truly intelligent, and without intelligence the usefulness of machines remains limited. In The Book of Why, Pearl proposes, to this end, an inferential logic with a three-level causal hierarchy: 1) association: being able to find phenomena that are related; 2) intervention: being able to predict the effect of performing an action; 3) counterfactuals: being able to reason about hypothetical situations. In this presentation, I offer an overview of The Book of Why. We will look at the inherent limits of the causal structures and inferential-logic methods proposed by Pearl et al. for building the three levels of the causal hierarchy.
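To make the first two rungs of the hierarchy concrete, here is a small simulation on an invented structural causal model with a confounder. It only illustrates why P(Y | X) and P(Y | do(X)) can differ; it is not an example drawn from the book.

```python
# Association (rung 1) vs. intervention (rung 2) on a toy causal model with confounder Z.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(do_x=None):
    z = rng.random(n) < 0.5                       # confounder
    if do_x is None:
        x = rng.random(n) < np.where(z, 0.8, 0.2) # X depends on Z (observational regime)
    else:
        x = np.full(n, do_x)                      # intervention: set X by fiat
    y = rng.random(n) < 0.3 + 0.2 * x + 0.4 * z   # Y depends on both X and Z
    return x, y

x, y = simulate()
print("P(Y=1 | X=1)    =", y[x == 1].mean())      # association: confounded by Z (~0.82)
x, y = simulate(do_x=True)
print("P(Y=1 | do(X=1)) =", y.mean())             # intervention: Z's influence on X removed (~0.70)
```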
Biography Usef Faghihi is an assistant professor at the Université du Québec à Trois-Rivières. Previously, he was a professor at the University of Indianapolis in the United States. Usef obtained his PhD in cognitive computer science at UQAM, and then went to Memphis, in the United States, for a postdoctoral fellowship with Professor Stan Franklin, one of the pioneers of artificial intelligence. His research interests are cognitive architectures and the implementation of different types of learning in these architectures.
Date and place 29 November 2019, 12:00, Room PK-4610
Abstract Dairy production is an important economic activity for Quebec. As part of an industrial project with the company Valacta, our team is building predictive models from heterogeneous data. The overall goal is to increase herd profitability by helping farmers make the right decisions. The data are organized in a triplestore, in RDF format, and an OWL ontology has been developed to ease the federation of the various sources covering the different aspects of dairy production (animal genotypes and phenotypes, pedigree, diet, health and welfare, quality of the milk produced, etc.). Our own role in the project, apart from building the ontology, is to mine typical characteristics of cows, in the form of recurring data graphs. These graphs should feed the predictive models with features that are both semantically rich and interpretable by the end user, whether a farmer, an agronomist, or a milk-quality specialist. They should also make it easier to compare animals, herds, or farms using similarity measures. In order to inject domain expertise into the data-analysis process, we define these patterns with respect to the ontology we built, which has the beneficial effect of raising their level of abstraction. We discuss the difficulties of mining such "generalized" graph patterns and show some preliminary results.
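As a toy illustration of the data organization described (an RDF triplestore queried for interpretable features), the sketch below uses rdflib with an invented vocabulary; it is not the project's actual ontology or data.

```python
# Minimal RDF/SPARQL sketch with an invented dairy vocabulary.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/dairy#")
g = Graph()
g.add((EX.cow42, RDF.type, EX.Cow))
g.add((EX.cow42, EX.memberOf, EX.herd7))
g.add((EX.cow42, EX.hasBreed, EX.Holstein))
g.add((EX.cow42, EX.monthlyMilkKg, Literal(870)))

# A SPARQL query federating facts about a cow, the sort of small subgraph
# that could later serve as an interpretable feature for a predictive model.
q = """
PREFIX ex: <http://example.org/dairy#>
SELECT ?cow ?breed ?milk WHERE {
    ?cow a ex:Cow ;
         ex:hasBreed ?breed ;
         ex:monthlyMilkKg ?milk .
}
"""
for row in g.query(q):
    print(row.cow, row.breed, row.milk)
```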
Biography Tomas Martin is a PhD candidate in Computer Science at UQAM.
Date and place 22 November 2019, 12:00, Room to be specified
Abstract Evaluating Generative Adversarial Networks (GANs) and Auto-Encoders (AEs) is still challenging, especially for non-image applications. Current distance measures fail to reflect the quality of the generated samples because they do not take into account the shape of the generated data manifold, i.e., its topological features, or the scale at which the manifold should be analyzed. We propose to rely on persistent homology, the study of the topological features of a space at different spatial resolutions, to compare the nature of the original and generated manifolds of generative models. We introduce persistent-homology measures, both qualitative and quantitative, for a non-image application on credit card transactions. We demonstrate how persistent homology provides new insights into quality assessment.
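A hedged sketch of the evaluation idea: compare the persistence diagrams of real and generated samples, here on toy 2-D point clouds standing in for embedded transactions. It assumes the `ripser` and `persim` packages and is an illustration of the principle, not the speaker's exact pipeline or measures.

```python
# Compare real vs. generated point clouds via H1 persistence diagrams (toy data).
import numpy as np
from ripser import ripser
from persim import bottleneck

rng = np.random.default_rng(0)

def ring(n, noise):                       # a data manifold with a 1-dimensional hole
    t = rng.uniform(0, 2 * np.pi, n)
    return np.c_[np.cos(t), np.sin(t)] + noise * rng.standard_normal((n, 2))

real = ring(300, 0.05)                    # "real" data
fake_good = ring(300, 0.10)               # generator that captured the hole
fake_bad = rng.standard_normal((300, 2))  # generator that collapsed the topology

def h1_diagram(points):
    return ripser(points)["dgms"][1]      # H1 persistence diagram (loops)

print("good generator, H1 bottleneck distance:", bottleneck(h1_diagram(real), h1_diagram(fake_good)))
print("bad generator,  H1 bottleneck distance:", bottleneck(h1_diagram(real), h1_diagram(fake_bad)))
```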
Biography Jeremy recently obtained his PhD in computer science from the University of Luxembourg. During his PhD, he focused on homology, neural networks, reinforcement learning and linear algebra. Jeremy was a visiting PhD student at Columbia University. His PhD was done in collaboration with the national bank of Luxembourg, the Spuerkess, focusing on the provision of financial services. Prior to his PhD, Jeremy worked as an economist in Luxembourg for different financial institutions.
Details CRIA (Research Center in Artificial Intelligence) at UQAM is proud to partner with the Canadian Francophonie Scholarship Program to offer postdoctoral fellowships in AI research for the September 2020 academic year.
We shot a video about it, which is on our YouTube channel.
We invite you to watch it for more information.
Target audience Students looking for a postdoctoral internship in AI. Conditions apply.
Date and place November 7, 2019, 13:30, Room PK-2265
Abstract In recent decades, Natural Language Processing (NLP) has undoubtedly made considerable progress in terms of the diversity of linguistic tools available to the community and the quality of the results obtained. Nevertheless, until recently this progress has been limited to a relatively small number of languages, mostly Western.
Under-resourced languages, or π-languages, that is, languages less well computerized than the major vehicular languages (English, Spanish, French, etc.), suffer from a lack of linguistic resources, in particular parallel corpora, which are essential for the development of NLP systems, including machine translation systems; they therefore present several challenges for NLP.
This research project focuses on the under-resourced French-Vietnamese language pair, with the aim of developing an effective machine translation system, focusing on the recognition of named entities and their transliteration. First, a theoretical analysis, from a cognitive point of view, is approached through theories of linguistics and translation. Then, from a computational point of view, the main objective of our research project is to propose an original and reliable method, together with solutions to the problems encountered during the automatic translation of named entities. We also define the sub-objectives of this research project, namely named-entity recognition for Vietnamese and the transliteration of bilingual named entities for the French-Vietnamese language pair. In addition to statistical approaches, we adapt deep learning approaches in our various systems to further improve the quality and efficiency of the automatic translation of named entities. Our contributions to the automatic translation and transliteration of bilingual named entities have not only reduced the rate of out-of-vocabulary, untranslated and/or incorrectly translated words, but also improved the quality of the machine translation system.
Student Ngoc Tan LE, PhD student in Cognitive Computing.
Jury members
  • Yllias Chali, Professor, Department of Mathematics and Computer Science, University of Lethbridge, Alberta (External Member)
  • Grégoire Winterstein, UQAM, Linguistics Department (internal member)
  • Vladimir Makarenkov, UQAM, Computer Science Department (internal member and chairman of the jury)
  • Fatiha Sadat, UQAM, Professor in the Computer Science Department (Research Director)
  • Lucie Ménard, UQAM, Professor in the Department of Linguistics (Co-Research Director)
Speaker Bernabé Batchakui
Date and place October 17, 2019, 10:30, Room PK-5115
Abstract In most sub-Saharan African countries, initial training needs are enormous because of very high population growth. Enrollment in secondary and university institutions is plethoric, while there is a serious lack of infrastructure. This massification contributes to a large decrease in the supervision ratio. Teaching and assessment in this context cannot keep their traditional form; indeed, the mainly transmissive teaching methods currently used are inadequate. The consequence is a school and university failure rate of around 50 to 60%. Our collaborative project with UQAM aims to provide a solution whose expected outcome is a large decrease in this failure rate. The immediate goal is to leverage knowledge engineering to equip existing learning environments with intelligent tools for preparing and monitoring learning. This research framework addresses the following questions:
  • How can the collective behavior of learners in a learning situation be observed? The aim here is to provide teachers and tutors with dashboards that let them follow learners grouped according to their similarity, in order to apply appropriate support strategies to each group. The methodological approach adopted is to use models such as decision trees, rule-based systems, k-means and neural networks to model the data obtained from learning traces, and then to extract patterns that highlight collective dynamics (a minimal clustering sketch follows this list).
  • How can the Internet be used to give authors of educational content the means to build new content adapted to learning situations? Existing search engines are heavily cluttered and do not make it easy to retrieve useful content. The goal here is to add to search environments an additional filter, an ontological layer dedicated to educational content.
  • How can access to training content be ensured in areas that do not have good Internet coverage? Many distance-education initiatives have sunk because of the unavailability of the Internet. The goal here is to enable disadvantaged learners, in the absence of Internet coverage, to continue their training by equipping learning platforms with data recovery and synchronization tools.
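The sketch below illustrates the trace-clustering step mentioned in the first question: grouping learners by similarity with k-means over simple activity features. The feature names and values are invented for the example.

```python
# Cluster learners from toy trace features with k-means (scikit-learn).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per learner: [exercises attempted, success rate, forum posts, time on task (h)]
traces = np.array([
    [40, 0.85, 12, 20.0],
    [38, 0.80, 10, 18.5],
    [12, 0.40,  1,  6.0],
    [10, 0.35,  0,  5.5],
    [25, 0.60,  4, 11.0],
])

X = StandardScaler().fit_transform(traces)                       # put features on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. the first two learners fall in one group, the next two in another
```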
Biography Bernabé Batchakui, computer engineer and PhD, is currently a professor and researcher at the National Polytechnic School of the University of Yaoundé I, where he is responsible for the Techno-Pedagogy unit. His research is in the field of computer-assisted teaching and artificial intelligence, and revolves around knowledge engineering for human learning support and learner data analysis.
Speaker Angel Adrienne Nyamen Tato
Date and place Thursday, October 10, 2019, 10:30, Room PK-5115
Abstract Deep Knowledge Tracing (DKT), like other machine learning approaches, is biased toward the data used during the training stage. Thus, for problems where we have little data, generalization will be weak and the models will tend to give good results on classes containing many examples and poor results on those with few examples. These problems are common in the field of education where, for example, there are skills that are very difficult (floor) or very easy to master (ceiling): there are fewer data on students who answered questions on difficult skills correctly, and on students who failed questions on easy-to-master skills. In this case, DKT is not able to correctly predict learner behavior on questions associated with these skills. As a solution, we propose penalizing the model using the "cost sensitive" technique: we modify the loss function to mask certain skills and force the model to pay attention to the other skills. We tested our solution on a public database and obtained promising results. In addition, to overcome the problem of low data volume, we also propose a hybrid model combining DKT and expert knowledge: DKT is combined with a Bayesian network (built from domain experts) using the attention mechanism. The resulting model accurately tracks students' knowledge in the Logic-Muse intelligent tutoring system (ITS), compared to the original BKT and DKT.
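One simple way to realize the masking idea described above is a per-skill weight on the loss; the sketch below shows such a weighted binary cross-entropy in PyTorch. The weighting scheme and all numbers are illustrative, not the authors' exact formulation.

```python
# A cost-sensitive / masked binary cross-entropy: weight 0 masks a skill,
# a weight > 1 emphasizes it so the model attends to under-represented skills.
import torch
import torch.nn.functional as F

def masked_bce(pred, target, skill_ids, skill_weight):
    """pred, target: (batch,) predicted probabilities and 0/1 labels;
    skill_ids: (batch,) skill index of each answered item;
    skill_weight: (n_skills,) per-skill weight (0 masks the skill)."""
    w = skill_weight[skill_ids]
    loss = F.binary_cross_entropy(pred, target, reduction="none")
    return (w * loss).sum() / w.sum().clamp(min=1e-8)

# Example: 4 answered items over 3 skills; skill 0 is masked, skill 2 emphasized.
pred    = torch.tensor([0.9, 0.2, 0.6, 0.7])
target  = torch.tensor([1.0, 0.0, 1.0, 0.0])
skills  = torch.tensor([0, 1, 2, 2])
weights = torch.tensor([0.0, 1.0, 2.0])
print(masked_bce(pred, target, skills, weights))
```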
Biography Ange Tato is a PhD student (thesis under evaluation) in Computer Science at the Université du Québec à Montréal, under the supervision of professors Roger Nkambou (UQAM) and Aude Dufresne (UdeM). She is interested in fundamental research on machine learning algorithms applied, among other things, to user modeling in intelligent systems for human learning. She graduated as a State Engineer in Information Systems in 2014 from the Mohammedia School of Engineering, then obtained a master's degree in computer science at UQAM in 2015 on the development of an intelligent tutoring system for learning logic. For the past three years, she has worked on:
  • the improvement of first-order optimization algorithms (with gradient descent);
  • the improvement of neural network architectures (inter alia for the processing of multimodal data and the problem of data in insufficient quantity) in order to predict or classify the behaviors of users (players, learners, etc.) of intelligent adaptive systems;
  • the integration of expert knowledge in deep learning models to improve their predictive power and traceability.
Speaker Roger Schank
Date and place 19 September 2019, 10:00, TÉLUQ, 5800 Saint-Denis (metro Rosemont), office 1105, room 11.051 (amphitheater)
Abstract What is artificial intelligence? Why is there so much hype about artificial intelligence now? The idea that we will build machines that are just like people has captivated popular culture for a long time. Nearly every year, a new movie or work of fiction features a new kind of robot that is just like a person, and these robots may even seem to get better and better by interacting with people. But that robot will not be appearing any time soon.
Biography Roger Schank has been a professor of psychology and computer science at Yale University and is the John Evans Emeritus Professor of Computer Science, Psychology and Education at Northwestern University. He was at the center of research on artificial intelligence, with his work on conceptual dependency theory and dynamic memory. According to Google Scholar, Schank has been cited over 50,000 times. Beyond artificial intelligence, he is also interested in human learning, having led multiple innovative education ventures. He now runs Socratic Arts, a company providing e-learning courses.
Speaker Mickael Wajnberg
Date and place September 12, 2019, 10:30, Room PK-4610
Abstract Knowledge extraction is a discipline that seeks to detect patterns, groups, or regularities in large data sets. It has mainly been developed for single data tables (a single type of individual). However, most current information systems are based on multi-relational data sets, that is, data sets with multiple types. Multi-relational data mining aims to address the loss of precision and context that appears when the data of each type in such a data set are analyzed separately.
Relational Concept Analysis (ARC) is a method capable of extracting knowledge from a multi-relational dataset and expressing it in the form of (1) patterns and association rules or (2) homogeneous groups (clusters). It is an extension of the mathematical paradigm of formal concept analysis, which admits a single table. ARC makes it possible, in particular, to treat several types of objects, each represented in its own table (alias context), inter-connected through inter-context binary relations. This format is clearly compatible with linked data and the RDF formalism, and thereby with the rest of the Semantic Web technology stack. The associated analytical method employs propositionalization mechanisms, inspired by description logics, that turn the links between objects into descriptors of those objects.
ARC makes it possible to confront the knowledge implicit in a dataset with what is already known about its domain of origin, expressed, for example, in the form of an OWL ontology. In particular, by exhibiting subgroups of objects with common properties, ARC can detect the existence of relevant subclasses within a known class. Alternatively, through its output in the form of association rules, ARC can validate the relevance of the characteristic attributes of a class, and detect instances with wrong typing or missing characteristic properties.
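Since ARC extends formal concept analysis, a tiny brute-force illustration of the single-table case may help: the sketch below enumerates the formal concepts (maximal object/attribute rectangles) of an invented binary context. Relational contexts add inter-context links on top of this; the data here are made up.

```python
# Naive enumeration of the formal concepts of a small binary context.
from itertools import chain, combinations

context = {                              # object -> set of attributes (toy medical data)
    "p1": {"fever", "cough"},
    "p2": {"fever"},
    "p3": {"cough", "fatigue"},
}
objects = set(context)
attributes = set().union(*context.values())

def common_attrs(objs):                  # attributes shared by all objects in objs
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def common_objs(attrs):                  # objects having all attributes in attrs
    return {o for o in objects if attrs <= context[o]}

concepts = set()
for objs in chain.from_iterable(combinations(objects, r) for r in range(len(objects) + 1)):
    intent = common_attrs(set(objs))
    extent = common_objs(intent)         # closure: (extent, intent) is a formal concept
    concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```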
Biography Mickael Wajnberg holds an engineering degree from Télécom Nancy and a Master's degree in Computer Science from the Université du Québec à Chicoutimi. He is currently a PhD candidate at UQAM under the supervision of professors Alexandre Blondin-Massé and Petko Valtchev, in co-supervision with professors Hervé Panetto and Mario Lezoche of the Université de Lorraine in France. His research interests lie in knowledge discovery from multi-relational data and conceptual modeling. He has developed applications in the analysis of medical, linguistic and production data.
Speaker William CLANCEY
Date and place April 11, 2019, 10:30, Room PK-5115
Abstract Using robotic systems operated from NOAA's ship, the Okeanos Explorer, oceanographers are now able to explore the depths of Earth's oceans without leaving their homes. Unlike missions on Mars, undersea robots can be tele-operated, communicating without noticeable delay, and an international remote science team takes part as the daily investigation unfolds. I conducted an ethnographic study during the American Samoa Expedition, focusing on how the two onboard scientists communicate with the remote scientists and the engineering team controlling the robots. What does their interaction reveal about the requirements for autonomous surveys on Mars or undersea on Europa? What kinds of explanations will unsupervised robots require from the scientists to conduct their journeys? How do these future needs relate to research on "explainable AI" today?
Biography William J. Clancey is a computer scientist whose research relates cognitive and social science to the study of work practices and the design of agent systems. At NASA Ames Research Center, he was Chief Scientist of Human-Centered Computing in the Intelligent Systems Division (1998-2013); his team automated management between Johnson Space Center Mission Control and the International Space Station. His field studies have ranged from the High Arctic to Belize and Polynesia. He is Senior Research Scientist at the Florida Institute for Human and Machine Cognition in Pensacola.
Speaker Julien MERCIER
Date and place 19 April 2019, 12:00, Room PK-4610
Abstract The study of cognition and affect in individuals has traditionally relied on psychological constructs measured with behavioral techniques. Technological and methodological advances in brain imaging and other psychophysiological measures are attracting considerable interest in the applied potential of cognitive and affective neuroscience for understanding how cognitive and affective processes are jointly responsible for a given performance. In particular, a significant benefit of this approach lies in the ability to measure a cognitive or affective phenomenon relatively non-intrusively, at a frequency matching its rate of change during a given performance. A review of recent literature suggests that the interdisciplinary work required to realize the potential of this field currently calls for considerable theoretical and methodological development. The purpose of this presentation is to propose such advances, along with examples of ongoing work at NeuroLab, in order to suggest and illustrate new avenues of research in an emerging field addressing the real-time measurement of affect and cognition.
Biography Julien Mercier is a professor at UQAM and director of NeuroLab, a research infrastructure funded by the Canada Foundation for Innovation. His research interests converge on the study of learning, interpersonal interactions, cognition and affect, combining behavioral and psychophysiological methods.
Speaker Angel Nyamen Tato
Date and place March 21, 2019, 10:30, Room PK-5115
Abstract There are several stochastic optimization algorithms. In most cases, choosing the best optimizer for a given problem is not easy because each of these solutions usually gives good results. So, instead of looking for yet another absolute best optimizer, speeding up those that already exist in the current context can be greatly beneficial. We will present a simple and intuitive technique that, when applied to first-order optimization algorithms (and whose convergence is ensured), improves the speed of convergence and reaches a better minimum of the loss function compared to the original algorithms. The proposed solution modifies the update rule of the learning parameters according to changes in the direction of the gradient. We conducted several comparative tests against state-of-the-art solutions (SGD, AdaGrad, Adam, and AMSGrad) on basic convex and non-convex functions (such as x^2 and x^3) and on problems with public datasets (MNIST, IMDB, CIFAR10). These tests were conducted on different neural network architectures (logistic regression, MLP and CNN). The results show that the proposed technique significantly improves the performance of existing optimizers and works well in practice.
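The abstract does not give the exact update rule, so the sketch below only illustrates the general idea (adapting the per-parameter step when the gradient's direction changes) with a classic sign-based variant on plain SGD; it is not necessarily the speaker's formulation.

```python
# Direction-aware step adaptation (illustrative only): grow the step while the
# gradient sign is stable, shrink it when the sign flips.
import numpy as np

def direction_aware_sgd(grad, x0, lr=0.1, steps=100, up=1.2, down=0.5):
    x = np.asarray(x0, dtype=float)
    step = np.full_like(x, lr)
    prev_g = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        same_dir = np.sign(g) == np.sign(prev_g)        # did the direction change?
        step = np.where(same_dir, step * up, step * down)
        x -= step * np.sign(g)
        prev_g = g
    return x

# Minimize f(x, y) = x^2 + 3*y^2 from a distant starting point.
grad = lambda v: np.array([2 * v[0], 6 * v[1]])
print(direction_aware_sgd(grad, [5.0, -4.0]))            # converges near (0, 0)
```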
Biography Ange Tato is a PhD student (thesis in writing) in Computer Science at the Université du Québec à Montréal. She is interested in fundamental research on machine learning algorithms applied, among other things, to the modeling of users in intelligent systems. She graduated as a State Engineer in Information Systems in 2014 from the Mohammedia School of Engineering, then obtained a master's degree in computer science at UQAM in 2015 on the development of an intelligent tutoring system for learning logic. For the past three years she has been working on 1) the improvement of first-order optimization algorithms (with gradient descent); 2) the improvement of neural network architectures for multimodal data to predict or classify the behaviors of users (players, learners, etc.) of intelligent adaptive systems; and 3) the integration of expert knowledge in deep learning models to improve their predictive power and traceability.
Speaker Komi Sodoke
Date and place February 22, 2019, 12:00, Room PK-4610
Abstract The acquisition of expertise has been studied in several domains, and this research shows paradigm shifts across the phases of evolution from novice to expert. This research project falls within this general framework and focuses specifically on "perceptual-decisional" expertise in the medical field. The approach adopted consists of two phases. The first phase is an exploratory and comparative study of the perceptual-decisional abilities of novices and experts in authentic situations on a high-fidelity simulator, considering perception through the analysis of visual perception data and cognition through the analysis of clinical reasoning (CR). The data will be analyzed to identify the differences and regularities characteristic of novices and experts. The second phase will use the outputs of the first phase to develop specifications for an Intelligent Tutoring System (ITS) that would provide tutoring services to help novices gradually structure their visual perception and decision making like an expert. Specifically, in this seminar we will present the following points:
  • The experimental protocol
  • Comparative analysis of fixations on vital signs and clinical management
  • Behavioral analysis of fixations during clinical complications
  • Visualization of the shape of the overall trajectory of fixations and saccades
  • The classification, based on a deep learning architecture, which achieved an accuracy of more than 90.2% using only visual perception data
  • The basic architecture of the ITS.
Biography Komi Sodoké is interested in fundamental research on the application of psychometric models and artificial intelligence techniques to improve computer-assisted learning and assessment. After working for several years as a programmer-analyst and research and development director in the e-learning industry, he moved to college-level teaching in computer science and the International Baccalaureate. Within the college network, he takes part in projects on the use of advanced technologies in education, such as virtual reality glasses and humanoid robots (Nao and Pepper). As part of his PhD, he is a researcher at the Center for Learning Attitudes and Clinical Skills (CAAHC).
Speaker François Chabot
Company Age of Minds Inc.
Date and place February 15, 2019, 15:00, Room DS-1950, Pavilion J.-A. DeSève
Abstract Machine learning processes used in industry usually fall within the framework of offline learning from large databases, or of reinforcement approaches using simulated environments. Integrating humans into the learning loop, beyond participatory data collection, presents unique challenges. We will present an overview of these challenges, as well as the approach we take to systematically mitigate them.
Biography François Chabot is a veteran of the video game industry (Capcom) and of large-scale data processing (Google). He is the CTO of Age of Minds, a young company founded to explore human-machine relationships in a world where machines are becoming more and more intelligent.
Speaker Dr. Karim JERBI
Date and place February 14, 2019, 10:30, Room PK5115
Abstract Would it be possible to better understand human intelligence through artificial intelligence? This is one of the challenges that research in my lab is trying to address. Several approaches are possible. For example, modeling information processing across brain networks with artificial neural network models is one avenue. In parallel, an increasing number of studies are using machine learning algorithms to classify large, multidimensional and complex brain datasets. These data-driven analysis methods differ from the conventional hypothesis-driven methods of neuroscience. Data mining in brain research has important advantages, but it does not replace traditional hypothesis-based approaches. During this seminar, several studies at the interface between machine learning and neuroscience will be presented. The data presented will focus in particular on electroencephalography (EEG) and magnetoencephalography (MEG) recordings in healthy subjects as well as in clinical populations.
Biography Karim Jerbi holds a Canada Research Chair in Systems Neuroscience and Cognitive Neuroimaging. He is Director of the Cognitive and Computational Neuroscience Laboratory (CoCo Lab) at the Department of Psychology of the University of Montreal. Training: Dr. Jerbi has a background in Biomedical Engineering (University of Karlsruhe, Germany) and a PhD in Cognitive Neuroscience and Neuroimaging (Paris VI University, France). Research: The work carried out in his laboratory seeks to better understand the functional role of the dynamic properties of brain networks, their links with cognition, and their alterations in brain pathologies. To do this, his work combines, among others, advanced methods for recording brain activity (e.g., magnetoencephalography), uni- and multivariate spectral analyses, and tools derived from artificial intelligence (AI). Dr. Jerbi is a member of several research centers in Montreal (e.g., BRAMS, CRIUGM, CRIUSMM, NeuroQAM and the CRM Mathematics Research Center) and has numerous international collaborations at the intersection of neuroscience and AI.
Speaker Dr. Sasha Luccioni
Date and place February 8, 2019, 12:00, Room PK4610
Abstract The financial market generates a lot of real-time data, which presents a range of opportunities for applying machine learning and natural language processing techniques. In this presentation, we will describe the techniques and approaches used at Morgan Stanley to analyze these data and predict future developments, including recommender systems, neural networks, and text mining.
Biography Sasha Luccioni is a member of Morgan Stanley's Center of Excellence on Artificial Intelligence and Machine Learning. She holds a PhD in Cognitive Computing from UQAM, as well as degrees in Linguistics and Cognitive Science. She is interested in basic and applied research in various fields, including Natural Language Processing and Deep Learning.
Speaker Prof. Philippe Fournier-Viger
Date and place January 31, 2019, 10:30, Room PK5115
Abstract In many areas, data are represented as event sequences. For example, in e-learning, the actions taken by a learner in an e-learning system can be viewed as sequences. Several algorithms have been designed to discover interesting patterns in sequences, to better understand the data or to facilitate decision making. A typical type of analysis performed on sequences is the discovery of frequent patterns. Although this type of analysis is useful, an important limitation is that the assumption that what is frequent is interesting does not always hold. For example, when analyzing sequences of transactions made by consumers, measures such as the profit generated by the sale of products may be more interesting than frequency. This presentation will describe our recent research on the development of algorithms for pattern discovery in sequences. In particular, the problem of discovering high-utility patterns will be described ("high utility pattern mining"), where utility is a measure of importance that can represent, for example, the profit generated by a sequence of purchases. In addition, a new extension will be described for discovering patterns offering a good ratio between utility and cost ("cost-effective patterns"). This type of pattern can be applied, for example, in e-learning to discover sequences of low-cost actions (e.g., requiring little time) that yield high utility (e.g., good learning). Another example of application is the discovery of low-cost medical treatment sequences that yield high utility (e.g., healing). Algorithms will be presented, as well as some patterns discovered in real data. In addition, the SPMF pattern-discovery software will be briefly described.
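To make the notion of utility concrete, the sketch below naively enumerates high-utility itemsets over a few invented transactions. The speaker's algorithms (and the sequential case) are far more efficient; this is only an illustration of the definition.

```python
# Naive high-utility itemset enumeration on toy transactions.
from itertools import combinations

# Each transaction maps item -> utility of that item in the transaction
# (e.g. quantity x unit profit); all values are invented.
transactions = [
    {"bread": 2, "milk": 3, "cheese": 10},
    {"bread": 2, "cheese": 12},
    {"milk": 3, "cheese": 8, "wine": 25},
]

def utility(itemset, tx):
    """Utility of an itemset in one transaction (0 if not fully contained)."""
    return sum(tx[i] for i in itemset) if all(i in tx for i in itemset) else 0

items = sorted({i for tx in transactions for i in tx})
min_utility = 20
for r in range(1, len(items) + 1):
    for itemset in combinations(items, r):
        total = sum(utility(itemset, tx) for tx in transactions)
        if total >= min_utility:                 # keep high-utility itemsets, frequent or not
            print(set(itemset), "utility =", total)
```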
Biography Philippe Fournier-Viger (PhD) obtained a PhD in cognitive computer science from UQAM in 2010. He was then a postdoctoral researcher in Taiwan and an assistant professor at the Université de Moncton. In 2015, he received the National Talent Award from the National Science Foundation of China and became Full Professor at the Harbin Institute of Technology in Shenzhen, China, where he is director of the Center for Innovative Industrial Design. He has contributed to more than 200 articles in international conferences and journals. His research interests are centered on the development of algorithms for the discovery of interesting patterns in transactions, sequences and graphs, the prediction of sequences, and applications related to e-learning and social networks. He is the founder of the SPMF data mining library, which has been used in more than 600 articles since 2010. Recently, he edited the book "High Utility Mining: Theory, Algorithms and Applications", to be published by Springer, and he co-organized a workshop at KDD2018 on the same topic.
Speaker Mickael Wajnberg
Date and place Friday, January 25, 2019, 12:00, Room PK4610
Abstract Knowledge extraction is a discipline that seeks to detect patterns, groups, or regularities in large data sets. It has mainly been developed for single data tables. However, most current information systems are based on a multi-table relational database representation, i.e., a multi-relational dataset. Multi-relational data mining aims to address the loss of precision and context that appears when the tables of such a dataset are analyzed separately or, alternatively, by joining all the data into a single table. Relational Concept Analysis (ARC) is a method capable of extracting knowledge from a multi-relational dataset and expressing it in the form of (1) patterns and association rules or (2) homogeneous groups (clusters). It is an extension of the mathematical paradigm of formal concept analysis, which admits a single table. ARC makes it possible, in particular, to treat several types of objects, each represented in its own table (alias context), inter-related through inter-context binary relations. This format is clearly compatible with linked data and the RDF formalism. The associated analytical method employs propositionalization mechanisms, inspired by description logics, that turn the links between objects into descriptors of those objects. A first application of the ARC method is in progress: it consists in crossing spatially accurate functional Magnetic Resonance Imaging data with temporally accurate electroencephalogram data, in order to provide partner neurologists with an interpretation-support tool. In addition, ARC also allows the knowledge contained in a dataset to be compared with what is already known about the dataset's domain of origin, expressed, for example, in the form of an ontology. In particular, by exhibiting subgroups of objects with common properties, ARC can detect the existence of potentially relevant subclasses within a known class. Alternatively, it can validate the relevance of certain property restrictions within a class. Finally, ARC can suggest more restrictive types for the objects populating an ontology, using the associations between descriptors.
Biography Mickael Wajnberg is a PhD student in co-supervision between the Université du Québec à Montréal and the Université de Lorraine (France). He is currently working on relational concept analysis and knowledge extraction. After preparatory classes in mathematics and physics, he graduated from an engineering school in France, Télécom Nancy, and obtained a master's degree from the Université du Québec à Chicoutimi, where he specialized in metaheuristics and, more generally, in algorithmics and theoretical computer science. For more information on this seminar and our upcoming events, visit: http://gdac.uqam.ca/CRIA/events.html
Speaker Claude Coulombe
Date and place January 24, 2019, 10:30, Room PK5115
Abstract In natural language processing, it is not uncommon to end up with amounts of data that are vastly inadequate for training a deep model. This "massive data wall" is a challenge for minority language communities on the Web, as well as for organizations, laboratories and businesses competing with the GAFAM giants. This presentation will discuss the feasibility of various simple, practical and robust text augmentation techniques, based on natural language processing and machine learning, to overcome the lack of textual data for training large statistical models, particularly for deep learning.
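As a flavor of what such augmentation can look like, the sketch below applies two generic text augmentations (synonym replacement and random word swap) with an invented synonym table. These are illustrative examples, not necessarily the techniques evaluated in the talk.

```python
# Two simple text augmentation operators (toy synonym table, illustrative only).
import random

SYNONYMS = {"small": ["little", "tiny"], "quick": ["fast", "rapid"], "house": ["home"]}

def synonym_replace(sentence, p=0.3, seed=0):
    """Replace some words with a synonym, with probability p per word."""
    rng = random.Random(seed)
    words = sentence.split()
    return " ".join(
        rng.choice(SYNONYMS[w]) if w in SYNONYMS and rng.random() < p else w
        for w in words
    )

def random_swap(sentence, n=1, seed=0):
    """Swap n random pairs of word positions to create a noisy variant."""
    rng = random.Random(seed)
    words = sentence.split()
    for _ in range(n):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

s = "the quick fox left the small house"
print(synonym_replace(s))
print(random_swap(s))
```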
Biography Claude Coulombe evolved from the budding scientist who took part in 15 science fairs, with a B.Sc. in physics and a master's degree in AI at UdeM (Homo scientificus), to the passionate high-tech entrepreneur from Quebec, co-founder of Machina Sapiens, where he participated in the creation of a new generation of grammar-correction tools (Homo québecensis). Following the bursting of the technology bubble, Claude moved onto a new evolutionary path to start a family, launch Lingua Technologies, which combines machine translation and Internet technologies, and undertake a doctorate in machine learning at MILA under the direction of Yoshua Bengio (Homo familIA). In 2008, as resources became scarce, Claude mutated into Java Man, specializing in rich web applications with Ajax, HTML5, Javascript, GWT, REST architectures, cloud computing and mobile applications. In 2013, Claude began a PhD in cognitive computer science, participated in the development of two massive open online courses (CLOM/MOOC) at TÉLUQ, and learned Python and deep learning to become Python Man (not to be confused with Piltdown Man). In short, Claude is an old fossil who has evolved, reproduced, created tools and adapted to the rhythm of his passions.
Speaker Jean Massardi
Date and place January 18, 2019, 12:00, Room PK4610
Abstract Plan recognition is the inverse task of planning: from a sequence of observations, it determines the goal behind an agent's actions. There are many algorithms for solving this problem, but most are based on the assumption that the observations are perfect, that is, without noise and without missing observations. For many applications, this assumption is not satisfied. In the case of plan recognition using hierarchical plans, adding imperfect observations leads to a combinatorial explosion. We propose to address this problem by using a particle filter.
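The sketch below gives a generic illustration of how a particle filter can track a distribution over goals from noisy observations. The goals, action models and noise level are invented, and this is not the speaker's algorithm for hierarchical plans.

```python
# Minimal particle filter for goal recognition under noisy observations.
import random

GOALS = {"make_tea": ["boil", "steep", "pour"],
         "make_coffee": ["boil", "grind", "brew"]}
P_NOISE = 0.2                      # probability an observation is corrupted
ACTIONS = sorted({a for plan in GOALS.values() for a in plan})

def obs_likelihood(observed, expected):
    """Probability of seeing `observed` when the true action is `expected`."""
    return (1 - P_NOISE) if observed == expected else P_NOISE / (len(ACTIONS) - 1)

def recognize(observations, n_particles=1000, seed=0):
    rng = random.Random(seed)
    # Each particle is a hypothesis [goal, step index into that goal's plan].
    particles = [[rng.choice(list(GOALS)), 0] for _ in range(n_particles)]
    for obs in observations:
        weights = []
        for p in particles:
            goal, step = p
            expected = GOALS[goal][step] if step < len(GOALS[goal]) else None
            weights.append(obs_likelihood(obs, expected))   # how well this hypothesis explains obs
            p[1] = min(step + 1, len(GOALS[goal]))          # advance the hypothesis
        # Resample particles in proportion to their weights.
        particles = [list(p) for p in rng.choices(particles, weights=weights, k=n_particles)]
    return {g: sum(p[0] == g for p in particles) / n_particles for g in GOALS}

print(recognize(["boil", "grind"]))   # the posterior should favour make_coffee
```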
Biography Jean Massardi is a PhD student in Computer Science at UQAM; he previously graduated from INSA Toulouse in software engineering.
Speaker Prof. Serge Robert
Date and place January 14, 2019, 15:00, Room R-M130 (Pavilion of Management Sciences)
Context This conference is given on the occasion of the first edition of the International Day of Logic (World Logic Day), created by the international association Universal Logic. It is sponsored by the SAI with the support of CRIA. Serge Robert is a professor in the Department of Philosophy, a member of the Institute of Cognitive Sciences and a member of CRIA.
Speaker Menhour Ihssene
Date and place December 4, 2018, 12:00, Room PK4610
Abstract Activity recognition plays an important role in medical monitoring and eldercare applications, whether in smart homes for health, in hospitals, or in retirement homes. For example, activity recognition makes it possible to detect events that could be dangerous for an elderly person and to prevent risky situations. Although there are various possible solution designs (e.g., image recognition from cameras, smart homes equipped with sensors, etc.), we will mainly discuss the use of smartphone sensors to collect data and the development of an activity recognition method based on these data, in order to establish the user's profile.
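A minimal sketch of the kind of pipeline described: slice the accelerometer stream into windows, compute simple features, and train a classifier. The signals, labels and window length are invented toy values, not the speaker's method or data.

```python
# Toy activity recognition from simulated smartphone accelerometer data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def windows(signal, size=50):
    """Split a (time, 3) accelerometer signal into fixed-size windows."""
    n = len(signal) // size
    return signal[: n * size].reshape(n, size, 3)

def features(win):
    """Mean and standard deviation of each axis in a window."""
    return np.concatenate([win.mean(axis=0), win.std(axis=0)])

# Toy data: "walking" has larger oscillations than "sitting".
walking = 1.0 * rng.standard_normal((500, 3))
sitting = 0.1 * rng.standard_normal((500, 3))
X = np.array([features(w) for s in (walking, sitting) for w in windows(s)])
y = np.array([0] * 10 + [1] * 10)       # 0 = walking, 1 = sitting (10 windows each)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_window = windows(0.1 * rng.standard_normal((100, 3)))[0]
print(clf.predict([features(new_window)]))   # expected to be classified as sitting
```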
