by Bobby Gordon
Several faculty from the Halıcıoğlu Data Science Institute at UC San Diego are featured presenters at the 2020 NeurIPS Annual Meeting this week. Featuring peer-reviewed research and talks by industry and field leaders, “the purpose of the Neural Information Processing Systems (NeurIPS) annual meeting is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects,” according to this year’s Annual Meeting site.
Mikhail (Misha) Belkin, Professor with the Halıcıoğlu Data Science Institute, is leading a presentation based on his ongoing work with his Ph.D. students (Chaoyue Liu & Libin Zhu). “A recent discovery about neural networks is that certain very large neural networks are essentially linear functions of parameters,” says Belkin. He adds, “This is very surprising since the structure of neural networks is highly non-linear and it is not clear why this ‘transition to linearity’ should occur for wide networks. In our work we provide a new perspective on this phenomenon, showing why it happens and demonstrating that it is not a general property of big systems, but is specific to certain architectures.” The team is asking whether neural networks offer advantages over better-understood, more traditional kernel methods, and, if so, whether those advantages can be analyzed. Belkin plans to build on these findings in ongoing research and to incorporate them into his HDSI data science courses as well.
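The phenomenon Belkin describes can be illustrated numerically. The toy script below, a minimal sketch and not the team's actual experiments, builds a one-hidden-layer tanh network with the common 1/√width output scaling, perturbs its hidden weights by a fixed-size random direction, and measures how far the output change deviates from the first-order (linear) prediction. As the width grows, that deviation shrinks, which is the "transition to linearity" in miniature. All function and variable names here are illustrative.

```python
import numpy as np

def f(W1, W2, x):
    """Toy one-hidden-layer network: f(x) = W2 . tanh(W1 x) / sqrt(width)."""
    width = W2.shape[0]
    return W2 @ np.tanh(W1 @ x) / np.sqrt(width)

def linearity_gap(width, d=10, seed=0):
    """How non-linear is f as a function of its hidden weights W1?

    Perturbs W1 by a random direction D of unit Frobenius norm and returns
    |f(W1 + D) - f(W1) - directional derivative|, i.e. the part of the
    output change that a linear (first-order Taylor) model cannot explain.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    W1 = rng.standard_normal((width, d))
    W2 = rng.standard_normal(width)
    D = rng.standard_normal((width, d))
    D /= np.linalg.norm(D)  # fixed perturbation size, independent of width
    # Exact directional derivative: tanh'(z) = 1 - tanh(z)^2.
    pre = W1 @ x
    linear_term = W2 @ ((1 - np.tanh(pre) ** 2) * (D @ x)) / np.sqrt(width)
    true_change = f(W1 + D, W2, x) - f(W1, W2, x)
    return abs(true_change - linear_term)

def avg_gap(width, trials=20):
    """Average the gap over several random draws to smooth out noise."""
    return np.mean([linearity_gap(width, seed=s) for s in range(trials)])

# Wider networks behave more like linear functions of their parameters:
print("width   100:", avg_gap(100))
print("width 10000:", avg_gap(10000))
```

Under this scaling the second-order term of the Taylor expansion shrinks roughly like 1/√width, so the printed gap for the wide network is far smaller than for the narrow one.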
Rose Yu, Assistant Professor with Computer Science and Engineering and the Halıcıoğlu Data Science Institute, is also a featured presenter this week. Her work, supported by an Army Research Office grant in partnership with colleagues from Northeastern University, focuses on building “an intelligent machine that can reason about what objects are, how they move, and what happens when they are missing in videos. We also want to do this in a completely self-supervised fashion without labeled training data,” according to Yu. She adds, “A 5-month-old child can understand that objects continue to exist even when they are unseen, a phenomenon known as ‘object permanence.’ However, current deep learning methods cannot reason about the objects when they are missing in videos. Our method can simultaneously perform object decomposition, latent space disentangling, missing data imputation, and video forecasting.” Yu has already incorporated this work into her curriculum at UC San Diego, and she’s working with HDSI colleagues to apply this model to COVID-19 forecasting.
Julian McAuley, Associate Professor with Computer Science and Engineering and the Halıcıoğlu Data Science Institute, is part of a team project, originally supported by Microsoft Research Asia, that proposes a dynamic acceleration method for large language models. Their unique approach “terminates the forward pass of a neural network early using ‘patience’ as a signal,” according to Canwen Xu, Ph.D. student with Computer Science and Engineering at UC San Diego. “Our idea came from neural network training. We found the similarity between training and inference of a neural network. This research highlights the ‘overthinking’ problem in neural networks and opens a new door for faster inference, and more efficient neural networks,” continues Xu.
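The "patience" signal Xu describes can be sketched as follows: attach a small classifier to each layer of the network and, at inference time, stop the forward pass as soon as several consecutive internal classifiers agree on the prediction. The snippet below is a simplified illustration of that idea, not the team's actual model; the layer and head functions are hypothetical stand-ins.

```python
import numpy as np

def early_exit_inference(hidden, layers, heads, patience=2):
    """Patience-based early exit (simplified sketch).

    Runs the layers in sequence, predicting after each one with that
    layer's internal classifier head. Once `patience` consecutive
    predictions agree, the forward pass stops early and returns the
    stable prediction along with the depth actually used.
    """
    prev_pred, streak = None, 0
    for depth, (layer, head) in enumerate(zip(layers, heads), start=1):
        hidden = layer(hidden)
        pred = int(np.argmax(head(hidden)))
        streak = streak + 1 if pred == prev_pred else 1
        prev_pred = pred
        if streak >= patience:
            return pred, depth  # prediction stabilized: skip later layers
    return prev_pred, len(layers)  # no early agreement: full forward pass

# Toy usage: 6 identical tanh "layers" and heads that read the first
# 3 hidden units as class logits. The prediction stabilizes quickly,
# so the network exits after 2 of 6 layers.
layers = [np.tanh] * 6
heads = [lambda h: h[:3]] * 6
pred, depth = early_exit_inference(
    np.array([0.5, 2.0, -1.0, 0.3]), layers, heads, patience=2
)
print(pred, depth)  # exits well before layer 6
```

The design intuition mirrors early stopping during training: just as validation loss plateauing signals that further epochs will not help, agreement between successive internal classifiers signals that deeper layers are unlikely to change the answer.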
Other Halıcıoğlu Data Science Institute faculty featured in this week’s schedule include Yian Ma, Assistant Professor with the Halıcıoğlu Data Science Institute, and Henrik Christensen, Director of the Contextual Robotics Institute, who also holds appointments in Computer Science and Engineering and the Halıcıoğlu Data Science Institute at UC San Diego.
Learn more about the 2020 NeurIPS Annual Meeting schedule here.
For questions regarding this article and other HDSI information, please contact HDSIComm@ucsd.edu.