
Machine Learning

  • April 13, 2023
  • HDSIComm

Beyond classification: using Machine Learning to probe new physics with the ATLAS experiment in “impossible” final states

Abstract: Although the discovery of the Higgs Boson is often referred to as the completion of the Standard Model of Particle Physics, the many outstanding mysteries of our universe indicate that […]

Read More
  • April 7, 2023
  • Kaleigh O'Merry

Decoding Nature’s Message Through the Channel of Artificial Intelligence

Abstract: Nature contains many interesting physics phenomena we want to search for, but it cannot speak them out loud. Physicists therefore build large particle physics experiments that encode nature's message into experimental data. My research leverages artificial intelligence and machine learning to decode nature's message from those data as fully as possible. The question I want to ask nature is: are neutrinos Majorana particles? The answer would fundamentally revise our understanding of physics and the cosmos. Currently, the most effective experimental probe for Majorana neutrinos is neutrinoless double-beta decay (0νββ). Cutting-edge AI algorithms could break down significant technological barriers and, in turn, deliver the world's most sensitive search for 0νββ. This talk will discuss one such algorithm, KamNet, which plays a pivotal role in the new result of the KamLAND-Zen experiment. With the help of KamNet, KamLAND-Zen provides a limit that reaches below 50 meV for the first time and is the first search for 0νββ in the inverted mass ordering region.

Looking further ahead, the next-generation 0νββ experiment LEGEND has created the Germanium Machine Learning group to aid all aspects of LEGEND analysis and eventually build an independent AI analysis. As the odyssey continues, AI will light the way to a bright future for experimental particle physics.

Read More
  • March 23, 2023
  • Kaleigh O'Merry

Structured Transformer Models for NLP

The field of natural language processing has recently unlocked a wide range of new capabilities through the use of large language models, such as GPT-4. The growing application of these models motivates developing a more thorough understanding of how and why they work, as well as further improvements in both quality and efficiency.

In this talk, I will present my work on analyzing and improving the Transformer architecture underlying today’s language models through the study of how information is routed between multiple words in an input. I will show that such models can predict the syntactic structure of text in a variety of languages, and discuss how syntax can inform our understanding of how the networks operate. I will also present my work on structuring information flow to build radically more efficient models, including models that can process text of up to one million words, which enables new possibilities for NLP with book-length text.
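
As background for the information-routing idea above, here is a toy sketch of scaled dot-product attention, the mechanism Transformers use to mix information across word positions. This is an illustrative sketch in plain Python, not code from the speaker's models:

```python
import math

# Toy scaled dot-product attention over lists of vectors.
# Each output position is a softmax-weighted mix of the value vectors,
# with weights given by query-key similarity.
def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # similarity of this query against every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # softmax over positions (subtract max for numerical stability)
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        # weighted mix of value vectors: this is the "routing" step
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

When two keys are identical, the query attends to both equally, so the output is the average of their values — a small illustration of how attention weights decide which positions' information flows to which.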

Read More
  • March 23, 2023
  • Kaleigh O'Merry

Acceleration in Optimization, Sampling, and Machine Learning

Optimization, sampling, and machine learning are essential components of data science. In this talk, I will cover my work on accelerated methods in these fields and highlight some connections between them.

In optimization, I will present convex optimization as a two-player zero-sum game: a modular approach for designing and analyzing convex optimization algorithms by pitting a pair of no-regret learning strategies against each other. This approach not only recovers several existing algorithms but also gives rise to new ones. I will also discuss Heavy Ball, a popular momentum method in deep learning, in the non-convex setting. Despite its success in practice, Heavy Ball currently lacks theoretical evidence of acceleration in non-convex optimization. To bridge this gap, I will present non-convex problems on which Heavy Ball exhibits provable acceleration guarantees.
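
For readers unfamiliar with the method, the Heavy Ball update adds a momentum term to plain gradient descent. The following is a minimal sketch with illustrative step-size and momentum values, not code from the talk:

```python
# Heavy Ball (momentum) update:
#   x_{k+1} = x_k - alpha * grad_f(x_k) + beta * (x_k - x_{k-1})
# alpha is the step size, beta the momentum coefficient.
def heavy_ball(grad_f, x0, alpha=0.1, beta=0.9, steps=300):
    x_prev, x = x0, x0
    for _ in range(steps):
        x_prev, x = x, x - alpha * grad_f(x) + beta * (x - x_prev)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_star = heavy_ball(lambda x: 2 * (x - 3), x0=0.0)
```

The `beta * (x_k - x_{k-1})` term keeps the iterate moving in its previous direction, which is what yields acceleration on well-conditioned convex problems; the non-convex guarantees mentioned above concern when this same mechanism provably helps.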

In sampling, I will describe how to accelerate a classical sampling method, Hamiltonian Monte Carlo, by setting its integration time appropriately, building on a connection between sampling and optimization. In machine learning, I will talk about gradient descent with pseudo-labels for fast test-time adaptation in the context of tackling distribution shift.
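
To make the "integration time" knob concrete, here is a toy Hamiltonian Monte Carlo sketch for a standard Gaussian target; the integration time is `step_size * n_leapfrog`. Names and defaults are illustrative assumptions of mine, not the tuned settings from the talk:

```python
import math
import random

# One HMC transition: resample momentum, simulate Hamiltonian dynamics
# with the leapfrog integrator, then apply a Metropolis correction.
def hmc_step(x, logp, grad_logp, step_size=0.1, n_leapfrog=10):
    p = random.gauss(0.0, 1.0)                  # resample momentum
    x_new, p_new = x, p
    # leapfrog integration; total integration time = step_size * n_leapfrog
    p_new += 0.5 * step_size * grad_logp(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_logp(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_logp(x_new)
    # Metropolis accept/reject with Hamiltonian H = -log p(x) + p^2 / 2
    h_old = -logp(x) + 0.5 * p * p
    h_new = -logp(x_new) + 0.5 * p_new * p_new
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return x_new
    return x

random.seed(0)
logp = lambda z: -0.5 * z * z                   # standard Gaussian, up to a constant
grad_logp = lambda z: -z
x, samples = 0.0, []
for _ in range(5000):
    x = hmc_step(x, logp, grad_logp)
    samples.append(x)
```

Too-short integration times make successive samples highly correlated, while too-long ones waste gradient evaluations (or double back); choosing the time well is exactly the acceleration question referenced above.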

Read More
  • March 15, 2023
  • Kaleigh O'Merry

Scientific Machine Learning Symposium

Recent progress in Artificial Intelligence (AI) and Machine Learning (ML) has provided groundbreaking methods for processing large data sets. These new techniques are particularly powerful when dealing with scientific data with complex structures, non-linear relationships, and unknown uncertainties that are challenging to model and analyze with traditional tools. This has triggered a flurry of activity in science and engineering, developing new methods to tackle problems that were previously impossible or extremely hard to address.

The goal of this symposium is to bring together researchers and practitioners at the intersection of AI and Science, to discuss opportunities to use AI to accelerate scientific discovery, and to explore the potential of scientific knowledge to guide AI development. The symposium will provide a platform to nurture the research community, to cross-fertilize interdisciplinary ideas, and to shape the vision of future developments in the rapidly growing field of AI + Science.

We plan to use the symposium as the launch event for the AI + Science series, co-hosted by Computer Science and Engineering (CSE), the Halıcıoğlu Data Science Institute (HDSI), and the Scripps Institution of Oceanography (SIO) at UC San Diego. The symposium will include a combination of invited talks, posters, panel discussions, and social and networking events. The first event will place particular emphasis on AI + physical sciences. We will invite contributions and participation from physics, engineering, and oceanography, among other fields. Part of the program will highlight research from climate science, a result of our DOE-funded scientific ML project for tackling climate extremes.

Read More
  • March 15, 2023
  • Kaleigh O'Merry

Optimal methods for reinforcement learning: Efficient algorithms with instance-dependent guarantees | Wenlong Mou

Reinforcement learning (RL) is a pillar of modern artificial intelligence. Compared to classical statistical learning, several new statistical and computational phenomena arise in RL problems, leading to different trade-offs in the choice of estimators, the tuning of their parameters, and the design of efficient algorithms. In many settings, asymptotic and/or worst-case theory fails to provide the relevant guidance.

In this talk, I present recent advances that involve a more refined approach to RL, one that leads to non-asymptotic and instance-optimal guarantees. The bulk of the talk focuses on function approximation methods for policy evaluation. I establish a novel class of optimal and instance-dependent oracle inequalities for projected Bellman equations, as well as efficient computational algorithms that achieve them. Among other results, I will highlight how instance-optimal guarantees guide the selection of tuning parameters in temporal difference methods and tackle the instability issue with general function classes. Drawing on this perspective, I will also discuss a novel class of stochastic approximation methods that yield optimal statistical guarantees for policy optimization problems.
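
As a reminder of what temporal difference methods do, here is a toy TD(0) policy-evaluation sketch on a two-state Markov reward process. The problem, step size, and state space are illustrative assumptions of mine, not the estimators or function classes analyzed in the talk:

```python
import random

# Two states, 0 and 1; from either state the next state is uniform over
# {0, 1}. Rewards: r(0) = 1.0, r(1) = 0.0; discount gamma = 0.9.
# True values solve V(s) = r(s) + gamma * E[V(s')]:
# V(0) = 5.5, V(1) = 4.5.
def td0(steps=20000, alpha=0.01, gamma=0.9, seed=0):
    random.seed(seed)
    v = [0.0, 0.0]                      # tabular value estimates
    s = 0
    for _ in range(steps):
        r = 1.0 if s == 0 else 0.0
        s_next = random.randrange(2)    # uniform transition
        # TD(0) update: move V(s) toward the bootstrapped target
        v[s] += alpha * (r + gamma * v[s_next] - v[s])
        s = s_next
    return v
```

The constant step size `alpha` is exactly the kind of tuning parameter whose choice the instance-dependent guarantees above are meant to inform: larger values adapt faster but leave more steady-state noise.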

Read More