Final Programme

Programme Overview can be found here

Full programme with detail on individual sessions can be found here

Links to the conference proceedings are below:

https://link.springer.com/book/10.1007/978-3-030-33607-3

https://link.springer.com/book/10.1007/978-3-030-33617-2

Tutorial

Tutorial slides

Theory and Applications of State Space Models for Time Series Data

Abstract:

State space modelling is a popular approach to processing data with temporal dependencies. There is a huge variety of state space models, but they all share the same underlying modelling principle:

Collapse all the ‘necessary’ information about the past into an ‘abstract’ information-processing state, which can be updated recursively as more data arrives. Outputs of such models are then determined by the state information (representing the past) and the current data.
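To make the recursion concrete, here is a minimal, model-agnostic sketch in Python (my own illustration, not part of the tutorial materials): f and g are placeholders for whatever state transition and readout a particular model defines.

def run_state_space_model(f, g, s0, inputs):
    """Generic recursion: s_t = f(s_{t-1}, x_t), y_t = g(s_t, x_t)."""
    s, outputs = s0, []
    for x in inputs:
        s = f(s, x)               # fold the new observation into the state
        outputs.append(g(s, x))   # output = function of past (state) + present input
    return outputs

# Example: an exponential moving average is a (trivial) state space model.
ema = run_state_space_model(
    f=lambda s, x: 0.9 * s + 0.1 * x,  # state: running summary of the past
    g=lambda s, x: s,                  # output: simply report the state
    s0=0.0,
    inputs=[1.0, 2.0, 3.0, 4.0],
)
print(ema)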

In machine learning, state space models take many forms in various contexts, depending, e.g., on:

• state space nature and cardinality (e.g. finite number of states, discrete state space, continuous state space)
• whether temporal mapping (inputs → outputs) or input series modelling is considered (supervised vs. unsupervised learning)
• model formulation framework - e.g. probabilistic vs. non-probabilistic

Some well-known model classes include

• recurrent neural networks (including LSTM) and Echo State Networks - parametrized state space models built on artificial neural network structures. The state space is infinite (uncountable); inputs/outputs can be continuous or discrete.
• finite state machines - with either probabilistic or non-probabilistic transitions. These models have a finite number of possible states and finite alphabets of possible inputs/outputs.
• hidden Markov models - probabilistic formulations of state space models with a finite number of states and discrete/continuous observations (a minimal filtering sketch follows this list).
• (extended) Kalman filters - probabilistic formulations of state space models with continuous state space and typically continuous inputs/outputs.
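As flagged above, here is one concrete instance of the generic recursion: the forward (filtering) recursion of a hidden Markov model, where the ‘state’ carried through time is the vector of filtered state probabilities. This is a standard textbook sketch with made-up toy numbers, not material from the tutorial.

import numpy as np

def hmm_forward(A, B, pi, observations):
    """Filtered state probabilities alpha_t = P(state_t | obs_1..t)."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate state, weight by likelihood
        alpha /= alpha.sum()           # renormalise to a probability vector
    return alpha

# Toy model: two hidden states, two observation symbols.
A  = np.array([[0.9, 0.1],   # state transition probabilities
               [0.2, 0.8]])
B  = np.array([[0.7, 0.3],   # emission probabilities
               [0.1, 0.9]])
pi = np.array([0.5, 0.5])    # initial state distribution
print(hmm_forward(A, B, pi, [0, 0, 1, 1]))

The same shape of computation (propagate the state, then fold in the current observation) reappears in the Kalman filter's predict/update steps and in an RNN's hidden-state update.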

In the tutorial I will present a unified general view of all such systems as non-autonomous, input-driven dynamical systems. I will then explain in detail different model formulations and the adaptation of their free parameters, always making links to the general principles outlined in the first part of the tutorial.
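As a taste of what adapting free parameters can look like in one of the model classes above, here is a toy Echo State Network sketch (my own illustration under assumed hyperparameters, not the tutorial's code): the recurrent dynamics are fixed and random, and only the linear readout is fitted, here by ridge regression.

import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.2 * np.arange(500))           # input series
target = u[1:]                             # predict the next value

n_res, washout, ridge = 100, 50, 1e-6      # assumed hyperparameters
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius below 1

# Drive the fixed random dynamical system with the input; collect states.
states = np.zeros((len(u) - 1, n_res))
s = np.zeros(n_res)
for t in range(len(u) - 1):
    s = np.tanh(W @ s + W_in @ u[t:t+1])
    states[t] = s

# Adaptation step: fit only the linear readout (ridge regression),
# discarding an initial washout period of transient states.
X, y = states[washout:], target[washout:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))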

Bio:

Peter Tino is Professor of Complex and Adaptive Systems at the School of Computer Science, University of Birmingham, UK. He held a Fulbright Fellowship (at NEC Research Institute, Princeton, USA) and a UK-Hong Kong Fellowship for Excellence. Peter is a recipient of three IEEE Computational Intelligence Society Outstanding Paper of the Year awards, in IEEE Transactions on Neural Networks (1998, 2011) and IEEE Transactions on Evolutionary Computation (2010). He has served on the editorial boards of several journals, including IEEE Transactions on Neural Networks and Learning Systems, Scientific Reports (Nature Publishing), IEEE Transactions on Cybernetics and Neural Computation (MIT Press). Peter's scientific interests include machine learning, dynamical systems, evolutionary computation, complex systems, probabilistic modelling and statistical pattern recognition.