For any questions regarding the tutorials, please contact the Tutorial Chair, Dr. Emir Demirović (email@example.com).
Explainable Constraint Solving - A Hands-On Tutorial
Explainable constraint solving is a sub-field of explainable AI (XAI) concerned with explaining constraint (optimization) problems. Although constraint models are explicit (they are written down in terms of the individual constraints that must be satisfied), the solutions to such models can be non-trivial to understand.
Driven by the use case of nurse scheduling, we will demonstrate the types of questions a user may have about (non-)solutions, and review the computational tools available today to answer such questions. We will cover classical methods such as MUS/MCS extraction, as well as more recent advances in the field such as step-wise explanations, constraint relaxation methods, and counterfactual solutions. We will demonstrate, and give special attention to, techniques that we have successfully (re)implemented on top of the CPMpy constraint solving library, and which can hence be readily used today.
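To give a flavour of what MUS extraction does, the sketch below implements the classic deletion-based algorithm in plain Python on a toy nurse-scheduling conflict. This is an illustrative sketch, not the tutorial's CPMpy code: the `satisfiable` brute-force check stands in for a real solver call, and the constraint names are invented for the example.

```python
# Minimal sketch of deletion-based MUS (Minimal Unsatisfiable Subset) extraction.
# Constraints are (name, predicate) pairs over a candidate assignment; a real
# implementation (e.g. on top of CPMpy) would call a constraint solver instead
# of the brute-force satisfiability check used here.
from itertools import product

def satisfiable(constraints, domains):
    """Brute force: does any assignment over the domains satisfy all constraints?"""
    vars_ = sorted(domains)
    for values in product(*(domains[v] for v in vars_)):
        assignment = dict(zip(vars_, values))
        if all(pred(assignment) for _, pred in constraints):
            return True
    return False

def mus(constraints, domains):
    """Deletion-based MUS: try dropping each constraint in turn; if the rest
    stays unsatisfiable, the dropped constraint is not needed for the conflict."""
    core = list(constraints)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if not satisfiable(rest, domains):
            core = rest  # c is redundant for the conflict, drop it for good
    return [name for name, _ in core]

# Toy conflict: a nurse's shift is one of 0 (early), 1 (day), 2 (night).
domains = {"shift": [0, 1, 2]}
constraints = [
    ("no_night",  lambda a: a["shift"] != 2),
    ("late_only", lambda a: a["shift"] >= 2),  # conflicts with no_night
    ("no_early",  lambda a: a["shift"] != 0),  # irrelevant to the conflict
]
print(mus(constraints, domains))  # → ['no_night', 'late_only']
```

The extracted MUS pinpoints the two constraints that jointly cause infeasibility, which is exactly the kind of minimal conflict a scheduler would show a user to explain why no roster exists.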
Check out Tias' code for this tutorial here!
Ignace Bleukx is a second-year PhD student at KU Leuven and a key developer of the CPMpy library, which he presented in a tutorial at IJCAI'22. His PhD centers on explainable constraint solving, with a special interest in explaining optimization problems.
Dimos Tsouros is a postdoctoral researcher in the DTAI lab at KU Leuven. His expertise is in interactive constraint acquisition, for which he earned an honorable mention in the ACP Doctoral Dissertation Award in 2022. He has a strong interest in rich interactive techniques between users and solvers, focusing on a more human-aware approach to the solving process.
Prof. Tias Guns works at the intersection of constraint solving, machine learning and explainable AI. With his prestigious ERC Consolidator grant on 'Conversational Human-aware Technologies for Optimisation' he is pushing the boundaries of what can be done at the intersection of these fields.
Machine Learning for Solvers
From its very beginning as a field, AI has had two primary pillars, namely learning and reasoning. For a variety of reasons, until recently there has been little interaction between these two sub-fields of AI. Fortunately, in the last decade or so, many researchers have been attempting to bridge learning and reasoning, with the goal of solving problems that are otherwise too difficult or impossible to solve. In this tutorial, I will present a particular approach to leveraging learning in service of reasoning in the context of SAT/SMT solvers. I will provide historical context on the different ways in which learning has been used for reasoning, and then zero in on the use of reinforcement learning in solvers. Time permitting, I will also touch upon research in the opposite direction, i.e., the use of solvers in service of learning.
Dr. Vijay Ganesh is an associate professor at the University of Waterloo and the Co-Director of the Waterloo Artificial Intelligence Institute. Prior to joining Waterloo in 2012, he was a research scientist at MIT (2007-2012); he received his PhD in computer science from Stanford in 2007. Vijay's primary area of research is the theory and practice of SAT/SMT solvers aimed at AI, software engineering, security, mathematics, and physics. In this context he has led the development of many SAT/SMT solvers, most notably STP, Z3str, MapleSAT, and MathCheck. He has also proved several decidability and complexity results in the context of first-order theories. He has won over 30 awards, honors, and medals to date for his research, including an ACM Impact Paper Award at ISSTA 2019, an ACM Test of Time Award at CCS 2016, and a Ten-Year Most Influential Paper citation at DATE 2008.