
Machine Learning

  • May 12, 2023
  • Sheetal Srivastava

Earth & Ocean Image Processing Made Easy with MATLAB – Lunch and Learn

Join us for a lunch-and-learn technical seminar from our MathWorks team on image processing with MATLAB! MathWorks aims to create a connection point for conversations about the landscape of computational languages, and about the broader impact software has in academia and industry today. This session covers the basics of pixel-level image processing and high-level machine learning models for images in MATLAB.
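The kind of pixel-level operations the session covers can be sketched in a few lines. The example below is an illustration in NumPy, not the seminar's MATLAB material; the function names and threshold value are our own choices.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminosity-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def threshold(gray, cutoff):
    """Binary segmentation: True where pixel intensity exceeds cutoff."""
    return gray > cutoff

# Tiny 2x2 RGB image with a single white pixel.
img = np.zeros((2, 2, 3))
img[0, 0] = [1.0, 1.0, 1.0]

gray = to_grayscale(img)      # per-pixel scalar intensity
mask = threshold(gray, 0.5)   # boolean segmentation mask
```

In MATLAB the analogous workflow would chain built-in functions such as grayscale conversion and binarization; the point here is just that classic image processing is arithmetic on pixel arrays.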

  • April 24, 2023
  • Kaleigh O'Merry

Leveraging Simulators for ML Inference in Particle Physics

Abstract: The field of research investigating machine-learning (ML) methods that can exploit a physical model of the world through simulators is rapidly growing, particularly for applications in particle physics. While these methods have shown considerable promise in phenomenological studies, they are also known to be susceptible to inaccuracies in the simulators used to train them. In this work, we design a novel analysis strategy that uses the concept of simulation-based inference for a crucial Higgs boson measurement, where traditional methods are rendered sub-optimal due to quantum interference between Higgs and non-Higgs processes. Our work develops uncertainty quantification methods that account for the impact of inaccuracies in the simulators, uncertainties in the ML predictions themselves, and novel strategies to test the coverage of these quoted uncertainties. These new ML methods leverage the vast computational resources that have recently become available to perform scientific measurements in a way that was not feasible before. In addition, this talk briefly discusses certain ML-bias-mitigation methods developed in particle physics and their potential wider applications.
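A core idea behind simulation-based inference is the likelihood-ratio trick: a classifier trained to separate samples drawn from two simulators recovers their likelihood ratio via r(x) = s(x) / (1 - s(x)). The sketch below demonstrates this identity with the Bayes-optimal classifier for two 1D Gaussians; the distributions and values are illustrative, not taken from the talk.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def optimal_classifier(x, mu1, mu0, sigma):
    """Bayes-optimal score for balanced classes: p1(x) / (p1(x) + p0(x))."""
    p1 = gaussian_pdf(x, mu1, sigma)
    p0 = gaussian_pdf(x, mu0, sigma)
    return p1 / (p1 + p0)

x = 0.3
s = optimal_classifier(x, mu1=1.0, mu0=0.0, sigma=1.0)

# For the optimal classifier, s / (1 - s) equals the likelihood ratio p1 / p0.
ratio = s / (1.0 - s)
exact = gaussian_pdf(x, 1.0, 1.0) / gaussian_pdf(x, 0.0, 1.0)
```

In practice the classifier is a neural network trained on simulator output rather than a closed-form score, which is exactly why simulator inaccuracies propagate into the inference and need the uncertainty quantification the abstract describes.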

  • April 13, 2023
  • Sheetal Srivastava

Beyond classification: using Machine Learning to probe new physics with the ATLAS experiment in “impossible” final states

Abstract: Although the discovery of the Higgs boson is often referred to as the completion of the Standard Model of Particle Physics, the many outstanding mysteries of our universe indicate that […]

  • April 7, 2023
  • Kaleigh O'Merry

Decoding Nature’s Message Through the Channel of Artificial Intelligence

Abstract: Nature contains many interesting physics phenomena we want to search for, but it cannot speak them out loud. Physicists therefore build large particle physics experiments that encode nature’s message into experimental data. My research leverages artificial intelligence and machine learning to maximally decode nature’s message from those data. The question I want to ask nature is: are neutrinos Majorana particles? The answer to this question would fundamentally revise our understanding of physics and the cosmos. Currently, the most effective experimental probe for Majorana neutrinos is neutrinoless double-beta decay (0νββ). Cutting-edge AI algorithms could break down significant technological barriers and, in turn, deliver the world’s most sensitive search for 0νββ. This talk will discuss one such algorithm, KamNet, which plays a pivotal role in the new result of the KamLAND-Zen experiment. With the help of KamNet, KamLAND-Zen provides a limit that reaches below 50 meV for the first time and is the first search for 0νββ in the inverted-mass-ordering region. Looking further ahead, the next-generation 0νββ experiment LEGEND has created the Germanium Machine Learning group to aid all aspects of LEGEND analysis and eventually build an independent AI analysis. As the odyssey continues, AI will brighten the future of experimental particle physics.

  • March 23, 2023
  • Kaleigh O'Merry

Structured Transformer Models for NLP

The field of natural language processing has recently unlocked a wide range of new capabilities through the use of large language models, such as GPT-4. The growing application of these models motivates developing a more thorough understanding of how and why they work, as well as further improvements in both quality and efficiency.

In this talk, I will present my work on analyzing and improving the Transformer architecture underlying today’s language models through the study of how information is routed between multiple words in an input. I will show that such models can predict the syntactic structure of text in a variety of languages, and discuss how syntax can inform our understanding of how the networks operate. I will also present my work on structuring information flow to build radically more efficient models, including models that can process text of up to one million words, which enables new possibilities for NLP with book-length text.
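The information routing the talk refers to is the attention mechanism: each position's output is a weighted mixture of all positions' values, with weights computed from query–key similarity. The sketch below is a minimal single-head self-attention in NumPy; the shapes and random values are illustrative, not from the speaker's work.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(Q, K, V):
    """Each output row mixes rows of V, weighted by how strongly the
    corresponding query attends to each key."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # (n, n) routing matrix
    return weights @ V, weights

n, d = 4, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out, weights = self_attention(Q, K, V)
```

The (n, n) weight matrix is exactly the routing pattern one can inspect to ask whether it tracks syntactic structure, and its quadratic size in sequence length is the bottleneck that structured, more efficient variants aim to break.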

  • March 23, 2023
  • Kaleigh O'Merry

Acceleration in Optimization, Sampling, and Machine Learning

Optimization, sampling, and machine learning are essential components of data science. In this talk, I will cover my work on accelerated methods in these fields and highlight some connections between them.

In optimization, I will present a view of optimization as a two-player zero-sum game: a modular approach for designing and analyzing convex optimization algorithms by pitting a pair of no-regret learning strategies against each other. This approach not only recovers several existing algorithms but also gives rise to new ones. I will also discuss the use of Heavy Ball, a popular momentum method in deep learning, in non-convex optimization. Despite its success in practice, Heavy Ball currently lacks theoretical evidence for its acceleration in non-convex optimization. To bridge this gap, I will present some non-convex problems where Heavy Ball exhibits provable acceleration guarantees.
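For reference, the Heavy Ball update adds a momentum term β(xₖ − xₖ₋₁) to plain gradient descent. The sketch below applies it to a simple quadratic; the step size, momentum coefficient, and test function are illustrative choices, not from the talk.

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Polyak's Heavy Ball: gradient step plus momentum from the previous iterate."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(steps):
        x_next = x - lr * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Minimize f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x_star = heavy_ball(lambda x: x, np.array([5.0, -3.0]))
```

On strongly convex quadratics, tuned Heavy Ball provably accelerates over gradient descent; the open question the abstract addresses is when comparable guarantees hold in non-convex settings.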

In sampling, I will describe how to accelerate a classical sampling method called Hamiltonian Monte Carlo by setting its integration time appropriately, which builds on a connection between sampling and optimization. In machine learning, I will talk about Gradient Descent with pseudo-labels for fast test-time adaptation in the context of tackling distribution shifts.
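For context, Hamiltonian Monte Carlo simulates Hamiltonian dynamics with a leapfrog integrator and accepts or rejects via a Metropolis test; the integration time (step size × number of steps) is precisely the knob the talk's acceleration result concerns. The sketch below targets a 1D standard normal; all tuning values are illustrative.

```python
import numpy as np

def hmc_step(x, log_prob_grad, step_size, n_steps, rng):
    """One HMC transition: sample momentum, leapfrog, Metropolis accept/reject."""
    p = rng.standard_normal()
    x_new, p_new = x, p
    # Leapfrog integration: half momentum step, alternating full steps, half step.
    p_new += 0.5 * step_size * log_prob_grad(x_new)
    for _ in range(n_steps - 1):
        x_new += step_size * p_new
        p_new += step_size * log_prob_grad(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * log_prob_grad(x_new)
    # Hamiltonian for a N(0,1) target: potential 0.5*x^2 plus kinetic 0.5*p^2.
    h_old = 0.5 * x ** 2 + 0.5 * p ** 2
    h_new = 0.5 * x_new ** 2 + 0.5 * p_new ** 2
    return x_new if rng.uniform() < np.exp(h_old - h_new) else x

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(2000):
    # grad of log N(0,1) density is -x
    x = hmc_step(x, lambda z: -z, step_size=0.3, n_steps=10, rng=rng)
    samples.append(x)
```

The resulting samples should be approximately standard normal; choosing the integration time well is what turns this from a slow random walk into the accelerated sampler the abstract describes.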
