Keynote Speakers


Jürgen Schmidhuber (IDSIA)

Since age 15 or so, the main scientific ambition of Professor Jürgen Schmidhuber (pronounced: You_again Shmidhoobuh) has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. He has pioneered self-improving general problem solvers since 1987 and Deep Learning Neural Networks (NNs) since 1991. The recurrent NNs (RNNs) developed by his research groups at the Swiss AI Lab IDSIA & USI & SUPSI & TU Munich were the first RNNs to win official international contests. They have revolutionized connected handwriting recognition, speech recognition, machine translation, optical character recognition, and image caption generation, and are now in use at Google, Microsoft, IBM, Baidu, and many other companies. Founders & staff of DeepMind (sold to Google for over $600M) include 4 former PhD students from his lab. His team's Deep Learners were the first to win object detection and image segmentation contests, and achieved the world's first superhuman visual classification results, winning nine international competitions in machine learning & pattern recognition (more than any other team). They were also the first to learn control policies directly from high-dimensional sensory input using reinforcement learning. His research group also established the field of mathematically rigorous universal AI and optimal universal problem solvers. His formal theory of creativity & curiosity & fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art. Since 2009 he has been a member of the European Academy of Sciences and Arts. He has published 333 peer-reviewed papers and earned seven best paper/best video awards, the 2013 Helmholtz Award of the International Neural Networks Society, and the 2016 IEEE Neural Networks Pioneer Award. He is also president of NNAISENSE, which aims at building the first practical general-purpose AI.

Daniel Whiteson (UC Irvine)

Open Machine Learning Problems in High Energy Physics

Machine learning tools have revolutionized data analysis in high-energy physics (HEP). But the problems posed by HEP are unique in many respects, presenting challenges that call for novel solutions. I will describe recent progress in tackling these open problems and outline the issues that remain.

Michael Williams (MIT)

Artificial Physicists

Over the past decade, the use of machine learning algorithms to classify event types has become commonplace in particle physics. However, in many cases it is not obvious how to teach the machine what the physicist wants it to learn. I will discuss some modified classifiers developed for use in such cases, and then reflect on the questions: What is it that physicists actually do when analyzing data? How can we teach machines to do this for us, and better than us? Finally, what role will deep learning play in future particle physics experiments?

Timothy Daniel Head (EPFL)

Data science in LHCb

Machine learning is used at all stages of the LHCb experiment. It is routinely applied in deciding which data to record and which to discard forever, in the reconstruction algorithms (feature engineering), and in the extraction of physics results from our data. This talk will highlight current use cases as well as ideas for ambitious future applications, and how we (machine learning experts and physicists) can collaborate on them.

Kyle Cranmer (New York University)

An alternative to ABC for likelihood-free inference

The field of particle physics has the luxury of very predictive models of the data based on quantum field theory; however, the simulation of a complicated experimental apparatus makes it impractical to directly evaluate the likelihood for a given observation. A popular approach to this class of problems is Approximate Bayesian Computation (ABC). I will describe an alternative technique for parameter inference in this “likelihood-free” setting that is based on a parametrized family of classifiers and univariate density estimation. I will end with examples where this technique is being applied to problems at the LHC.
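To make the general idea concrete, here is a minimal sketch (not the speaker's implementation) of classifier-based likelihood-ratio estimation on a toy one-dimensional simulator: a classifier is trained to separate samples simulated at two parameter points, and a univariate density estimate of its score under each hypothesis (here, plain histograms) calibrates that score into an approximate likelihood ratio. The toy simulator, the single fixed pair of parameter points, and all names are illustrative assumptions; the talk's approach uses a family of classifiers parametrized over the model parameters.

```python
# Sketch: approximate p(x | theta0) / p(x | theta1) via a classifier score
# calibrated with univariate (1-D) density estimates of that score.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def simulate(theta, n):
    # Toy "simulator": Gaussian data whose mean is the parameter of interest.
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

theta0, theta1 = 0.0, 1.0
x0, x1 = simulate(theta0, 50_000), simulate(theta1, 50_000)

# Train a classifier to distinguish theta0 samples (label 0) from theta1 samples (label 1).
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
clf = GradientBoostingClassifier().fit(X, y)

# Univariate density estimation of the classifier score under each hypothesis,
# used to calibrate the score into a likelihood ratio.
s0 = clf.predict_proba(simulate(theta0, 50_000))[:, 1]
s1 = clf.predict_proba(simulate(theta1, 50_000))[:, 1]
bins = np.linspace(0.0, 1.0, 51)
p0, _ = np.histogram(s0, bins=bins, density=True)
p1, _ = np.histogram(s1, bins=bins, density=True)

def approx_likelihood_ratio(x):
    # Map each observation to its score bin and take the ratio of calibrated densities.
    s = clf.predict_proba(x)[:, 1]
    idx = np.clip(np.digitize(s, bins) - 1, 0, len(p0) - 1)
    return (p0[idx] + 1e-12) / (p1[idx] + 1e-12)

# Evaluate the approximate ratio on a few observations.
x_obs = np.array([[-1.0], [0.0], [0.5], [2.0]])
print(approx_likelihood_ratio(x_obs))
```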

Balázs Kégl (CNRS / Université Paris-Saclay)

What is wrong with data challenges

We will develop a constructive criticism of the data challenge format as practiced today. It will be illustrated by our story of the HiggsML challenge, but our conclusions go beyond that example. In a nutshell, challenges are long job interviews for participants, publicity for organizers, and benchmarking and teaching aids for the data science community. What are they not? They will not deliver a workable solution to your problem, not even a prototype, partly because the very problem you can squeeze into the competitive gaming mechanism is a diluted or abstract version of the real problem you want to solve. You will have no access to the data scientists participating in the challenge, unless of course you can hire them. They incentivize neither collaboration nor creativity.

In the last third of the talk I will describe the format and tool that we have been developing at the Paris-Saclay Center for Data Science to run collaborative hackathons (RAMPs, for Rapid Analytics and Model Prototyping), which implement some of the features missing from the classical challenge format.
