Research

At the Centre for AI Fundamentals, we carry out fundamental research in machine learning and collaborate with experts in a range of fields to solve real-world problems.

We are therefore already working closely with an expanding group of leading researchers, spanning key disciplines and a range of cross-cutting themes.

Research themes

Our main themes are:

  • AI for science

    Through research into new methodologies for AI and machine learning, we seek to help translate scientific discovery into real-world applications. The Centre will sit at the interface between AI methods and science and engineering, via virtual (simulation-based) laboratories that apply ML-based probabilistic modelling, simulator-based inference, digital twins and collaborative AI for human-AI teamwork. We anticipate collaborating on this work with a wide range of disciplines and sectors. A minimal sketch of simulator-based inference appears after this list.

  • Decision making in machine learning systems

    Scientific systems that produce huge data volumes, so-called “big science” (e.g., SKA, CERN), increasingly need AI-driven decisions in place of human ones at multiple points within large scientific analyses, as well as in other areas such as facility operations. Our team will research automated AI approaches that ensure such combined systems are robust, safe and accurate. Decision making with AI needs to be interpretable and explainable, so that its decision processes can be interrogated and human trust can be built, and so that ethical and legal implications can be understood and met.

  • Decision making with humans in the loop

    In many cases, a human user is unable to fully specify all the details a computer system would require. By jointly modelling the machine learning task together with the humans in the loop, a system’s decision making can be improved over time. Through this, AI technologies will become more efficient at addressing key challenges such as experimental design from limited data, while also promoting trust in AI-enabled systems. Decision making with Humans in the Loop (DMHL) is a key theme in researching the fundamentals of AI. A minimal human-in-the-loop sketch appears after this list.

  • Theory of machine learning

    We are uncovering the core mathematical principles that underpin machine learning. Theory research can yield overarching ideas that one can rely on when building deployable intelligent machines. Focus areas include: developing generalisation bounds for conventional deep neural systems and operator learning setups; establishing novel principles for provably designing algorithms to train neural networks and perform reinforcement learning; and addressing the mystery of why the size of neural architectures is such a critical factor behind performance, particularly for (operator) neural networks that solve (systems of) partial differential equations. A textbook example of a generalisation bound is shown after this list.

  • Uncertainty in complex systems

    Collaborative decision making between humans and AI requires principled uncertainty quantification. Our researchers develop leading-edge methods in probabilistic machine learning, leveraging uncertainty in a statistical manner to drive the exploration of new parameter spaces and promote scientific discovery. Focusing on both methodological and theoretical aspects, our research aims to help any field where decision making is critical. Uncertainty quantification and modelling underpin the two decision-making themes above. A minimal uncertainty-driven exploration sketch appears after this list.
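
As a minimal illustration of the simulator-based inference mentioned under the AI for science theme, the sketch below runs rejection Approximate Bayesian Computation (ABC) against a toy simulator. The simulator, prior, summary statistics and tolerance are illustrative assumptions for this page, not the Centre's code.

```python
# Minimal rejection-ABC sketch: infer a simulator parameter without a
# tractable likelihood by keeping prior draws whose simulated summaries
# land close to the observed ones. Everything here is a toy assumption.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    """Toy stochastic simulator: Gaussian data with unknown mean theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(x):
    """Summary statistics used to compare simulated and observed data."""
    return np.array([x.mean(), x.std()])

def rejection_abc(observed, n_draws=10_000, tolerance=0.2):
    """Keep prior draws whose simulations match the observed summaries."""
    obs_summary = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)           # draw from a uniform prior
        sim_summary = summary(simulator(theta))  # run the simulator
        if np.linalg.norm(sim_summary - obs_summary) < tolerance:
            accepted.append(theta)
    return np.array(accepted)

# "Observed" data standing in for a virtual-laboratory run (true mean 1.3).
observed = rng.normal(loc=1.3, scale=1.0, size=50)
posterior_samples = rejection_abc(observed)
print(f"posterior mean approx. {posterior_samples.mean():.2f} "
      f"from {posterior_samples.size} accepted draws")
```

In a virtual laboratory the toy simulator would be replaced by a domain simulator, and plain rejection would typically give way to more sample-efficient likelihood-free inference.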
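
As a sketch of decision making with humans in the loop, the toy loop below proposes candidate experiments, takes accept/reject feedback from a (simulated) expert, and updates a Beta-Bernoulli model of each candidate via Thompson sampling, so later proposals improve. The candidates, the expert model and the proposal rule are illustrative assumptions, not the Centre's method.

```python
# Human-in-the-loop sketch: a Beta-Bernoulli model per candidate is
# updated from expert accept/reject feedback; Thompson sampling picks
# which candidate to propose next. All names and numbers are toy values.
import numpy as np

rng = np.random.default_rng(1)

candidates = ["protocol A", "protocol B", "protocol C"]
true_accept_prob = {"protocol A": 0.2, "protocol B": 0.7, "protocol C": 0.5}

# Beta(1, 1) prior on the chance that each candidate satisfies the expert.
alpha = {c: 1.0 for c in candidates}
beta = {c: 1.0 for c in candidates}

def simulated_expert(candidate):
    """Stand-in for a human reviewer: accepts with a fixed probability."""
    return rng.random() < true_accept_prob[candidate]

for _ in range(30):
    # Thompson sampling: draw a plausible acceptance rate per candidate
    # from its posterior and propose the most promising one.
    samples = {c: rng.beta(alpha[c], beta[c]) for c in candidates}
    proposal = max(samples, key=samples.get)

    # The human's feedback updates that candidate's posterior.
    if simulated_expert(proposal):
        alpha[proposal] += 1.0
    else:
        beta[proposal] += 1.0

best = max(candidates, key=lambda c: alpha[c] / (alpha[c] + beta[c]))
print("most promising candidate after feedback:", best)
```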
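
For the theory theme, a textbook Rademacher-complexity generalisation bound illustrates the kind of statement being studied: with probability at least 1 - delta over an i.i.d. sample of size n, and for a loss bounded in [0, 1], the true risk of every hypothesis is controlled by its empirical risk plus a complexity term. This is a standard result quoted for illustration, not one of the Centre's bounds.

```latex
% Standard Rademacher-complexity generalisation bound (loss in [0, 1]):
% with probability at least 1 - \delta, simultaneously for all h in the
% class \mathcal{H}, where \mathfrak{R}_n denotes the Rademacher
% complexity of the loss-composed class on n samples.
\[
  R(h) \;\le\; \widehat{R}_n(h) \;+\; 2\,\mathfrak{R}_n(\mathcal{H})
  \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}} .
\]
```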
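
Finally, for the uncertainty theme, the sketch below fits a conjugate Bayesian linear regression to toy data and proposes the next measurement where the predictive variance is largest, i.e. uncertainty-driven exploration. The data, prior variances and acquisition rule are illustrative assumptions.

```python
# Uncertainty-quantification sketch: a conjugate Bayesian linear
# regression yields a predictive variance, and the most uncertain
# candidate input is proposed as the next experiment. Toy values only.
import numpy as np

rng = np.random.default_rng(2)

# Toy observations of y = 2x - 1 with Gaussian noise, measured on [0, 1].
x_obs = rng.uniform(0.0, 1.0, size=8)
y_obs = 2.0 * x_obs - 1.0 + rng.normal(scale=0.1, size=8)

Phi = np.column_stack([np.ones_like(x_obs), x_obs])  # bias + slope features
sigma2, tau2 = 0.1 ** 2, 10.0 ** 2                   # noise / prior variances

# Conjugate Gaussian posterior over the weights: N(mean, cov).
cov = np.linalg.inv(np.eye(2) / tau2 + Phi.T @ Phi / sigma2)
mean = cov @ Phi.T @ y_obs / sigma2

# Predictive mean and variance over a grid of candidate inputs on [0, 2].
x_grid = np.linspace(0.0, 2.0, 41)
Phi_grid = np.column_stack([np.ones_like(x_grid), x_grid])
pred_mean = Phi_grid @ mean
pred_var = sigma2 + np.einsum("ij,jk,ik->i", Phi_grid, cov, Phi_grid)

# Exploration: propose the candidate input the model is least certain about.
best = int(np.argmax(pred_var))
print(f"next measurement proposed at x = {x_grid[best]:.2f} "
      f"(predicted y = {pred_mean[best]:.2f}, "
      f"predictive std = {pred_var[best] ** 0.5:.3f})")
```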

How we apply our research

Outcomes of our leading-edge work on the fundamentals of AI will be applied across a variety of domains. Examples include: