Weekly bulletin

Find out what's happening this week at MSI.

May12 Women in Mathematics

  • Tuesday, 12 May 2026, 12 - 1pm
  • Seminar Room 1.33, Hanna Neumann Building #145

May12 is an international initiative celebrating women in mathematics and takes place every year around the world on 12 May, the birthday of Maryam Mirzakhani, the first woman to receive the prestigious Fields Medal.

Join MSI as we celebrate women in mathematics with a special screening of Secrets of the Surface.

This global celebration is an opportunity for the mathematical community to recognise and honour the achievements of women in mathematics, past and present.

The celebration commences at 11.30 am over a late morning tea.

This event is hosted by the MSI Equity and Diversity Committee.

Uniform bounds for fixed vectors in representations of a p-adic GL_N

  • Tuesday, 12 May 2026, 3 - 4pm
  • Seminar Room 1.33, Hanna Neumann Building #145

  • Simon Marshall (University of Melbourne)

Abstract: Let G be a reductive p-adic group, K a compact open subgroup of G, and \pi a representation of G.  Bernstein’s uniform admissibility theorem states that the dimension of fixed vectors in \pi under K, denoted \dim \pi^K, is bounded independently of \pi.  On the other hand, if \pi is fixed and K varies in a family of principal congruence subgroups of G, \dim \pi^K grows in a manner governed by the Gelfand-Kirillov dimension of \pi.  In this talk, I will present a theorem for GL_N that combines these results by proving a bound for \dim \pi^K that is essentially as strong as the Gelfand-Kirillov bound but which is uniform in \pi.  This is joint work with Rahul Dalal and Mathilde Gerbelli-Gauthier.

Quantum Ergodicity in the level aspect for locally symmetric spaces

  • Thursday, 14 May 2026, 10 - 11am
  • Seminar Room 1.33, Hanna Neumann Building #145

  • Simon Marshall (University of Melbourne)

Abstract: I will discuss a variant of Quantum Ergodicity in which the underlying manifold varies, known as Quantum Ergodicity in the level aspect. This was originally established for finite regular graphs by Anantharaman-Le Masson, before being extended to compact hyperbolic manifolds by Le Masson-Sahlsten and Abert-Bergeron-Le Masson. I will present a proof of Quantum Ergodicity in the level aspect for sequences of compact locally symmetric spaces whose associated Lie group has a restricted root system that is either of classical type, or type E7. In particular, this includes all groups of classical type.  This is joint work with Farrell Brumley, Jasmin Matz, and Carsten Peterson.

Margin-Based Thresholds for Learnability in Online Strategic Classification

  • Thursday, 14 May 2026, 1 - 2pm
  • Seminar Room 1.33, Hanna Neumann Building #145

  • Nam Ho-Nguyen (The University of Sydney)

Abstract: We study online strategic classification, in which the learner's deployed classifier influences the data it subsequently observes because agents can strategically alter their reported features. Learnability is well-understood when the agents' underlying true features satisfy a large-margin separability condition. Our goal is to characterize when meaningful learning remains possible under strategic behaviour. We show that learnability is governed by margin-based thresholds. In particular, there are regimes in which no online method can guarantee finitely many prediction errors, and stricter regimes are required to simultaneously control both prediction errors and strategic manipulation. To complement these limits, we develop a general reduction framework that converts standard online large-margin algorithms into algorithms for strategic classification. Using reported features alone yields guarantees in favourable regimes, while a proxy-data construction extends finite-mistake guarantees to the full learnable range. We also study additive noise in reported features and show that the associated learnability thresholds depend sharply on the feedback model and on what information about the perturbation is available to the learner. Counterintuitively, learnability thresholds do not vary continuously with the noise level: the introduction of arbitrarily small additive noise can create a strictly harder learning problem than in the noiseless case, producing a discontinuous jump in the threshold for learnability.
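For readers unfamiliar with the non-strategic baseline the abstract's reduction framework builds on, the sketch below shows a classical online large-margin learner (the perceptron) and its finite-mistake guarantee under margin separability. This is purely illustrative background, not the speaker's method: the data, names, and parameters here are made up, and no strategic manipulation is modelled.

```python
# Illustrative sketch only: the classical online perceptron, a standard
# large-margin base algorithm of the kind the talk's reduction framework
# converts into a strategic-classification algorithm. Not the authors' method.
import numpy as np

def perceptron_online(stream, dim):
    """One online pass; returns (mistake_count, final_weights).

    stream: iterable of (x, y) with x an np.ndarray and y in {-1, +1}.
    """
    w = np.zeros(dim)
    mistakes = 0
    for x, y in stream:
        if y * np.dot(w, x) <= 0:   # prediction error (ties count as errors)
            w += y * x              # standard perceptron update
            mistakes += 1
    return mistakes, w

# Toy linearly separable stream with margin gamma. The classical Novikoff
# bound guarantees at most (R / gamma)^2 mistakes, where R bounds ||x||,
# independently of the stream length -- the "finitely many prediction
# errors" regime referred to in the abstract.
rng = np.random.default_rng(0)
w_star = np.array([1.0, 1.0]) / np.sqrt(2.0)
gamma = 0.2
xs = rng.uniform(-1.0, 1.0, size=(200, 2))
data = [(x, 1 if np.dot(w_star, x) >= 0 else -1)
        for x in xs if abs(np.dot(w_star, x)) >= gamma]  # enforce the margin
mistakes, w = perceptron_online(data, dim=2)
mistake_bound = (np.sqrt(2.0) / gamma) ** 2  # R <= sqrt(2) on [-1,1]^2
```

Under strategic behaviour, agents report perturbed features rather than the true x, which is exactly what breaks this guarantee and motivates the margin thresholds and proxy-data construction described in the abstract.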