9th Workshop on High-Dimensional Approximation (HDA2023)

The High-Dimensional Approximation (HDA) Workshop is a series of biennial international meetings covering current research on high-dimensional problems. This ninth workshop (HDA2023) will be held at ANU in Canberra.

Date & time
20 February 2023, 1 pm – 24 February 2023, 5 pm

Cost
Registration fees 

  • Early bird registration (in-person) $400
  • General registration (in-person) $450
  • Student/retired fellow $200


Contact
Aline Lorieri
High-Dimensional Approximation (HDA2023)


HDA2023 will cover a range of topics central to modern high-dimensional approximation and its applications. Topics include, but are not limited to:

  • Nonlinear approximation
  • Numerical integration
  • Tensor decomposition
  • Sparsity-exploiting approximation
  • Sparse grid methods
  • Quasi-Monte Carlo methods
  • Polynomial chaos expansions
  • Discrepancy and dispersion theory
  • Dimensionality reduction
  • Uncertainty quantification
  • Reduced modelling
  • Inverse problems

Participants are welcome from all around the world. A key feature of this workshop is that there are no parallel sessions and generally all talks are of equal length. Participants are welcome to present work in progress, and time will be set aside for informal discussions. The number of talks will be limited. It is not essential that everyone gives a talk. Collaborators are encouraged to coordinate and elect a representative to present joint work.

All talks are 25 minutes long with 5 minutes for questions at the end.


Warning! Please be aware that scammers are targeting conference participants. A number of participants have received phone calls requesting credit card information for hotel bookings. Under no circumstances give out this information. Please do not hesitate to contact the organisers if you are unsure whether a communication is legitimate.

Abstract and Support Submission

A small amount of funding has been allocated to provide accommodation and cover the conference fees for the visits of students, early career researchers, and attendees from developing nations.



Programme Committee

  • Jochen Garcke (Uni Bonn)
  • Markus Hegland (ANU)
  • Frances Kuo (UNSW)
  • Michael Griebel (Uni Bonn)
  • Ian Sloan (UNSW)

Local Organising Committee

  • James Nichols (ANU)
  • Alexander Gilbert (UNSW)
  • Tiangang Cui (Monash Uni)
  • Markus Hegland (ANU)


Early bird registration: 9 December 2022

Travel support application deadline: 16 December 2022

We will continue to receive and review abstract submissions on a rolling basis.





Welcoming Remarks 

James Nichols / ANU


On tensor approximation of continuous functions

Helmut Harbrecht / Department of Mathematics and Computer Science, University of Basel, Switzerland

In this talk, we analyze tensor approximation schemes for continuous functions. We assume that the function to be approximated lies in either a standard Sobolev space or in a Sobolev space with dominating mixed smoothness. Then, we discuss the cost when approximating this function in a tensor format such as the tensor train format or the H-Tucker format.
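The payoff of tensor formats for smooth functions can be illustrated with the simplest case, a rank-r separable expansion of a bivariate function obtained by truncating the SVD of its grid samples (a toy sketch under simplifying assumptions, not the talk's cost analysis; the function and grid size are illustrative).

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
F = np.exp(-np.outer(x, x))            # samples of f(x, y) = exp(-x*y) on a grid

U, s, Vt = np.linalg.svd(F)
r = 5                                   # truncation rank
F_r = (U[:, :r] * s[:r]) @ Vt[:r, :]    # best rank-r approximation of the samples

err = np.linalg.norm(F - F_r, 2) / np.linalg.norm(F, 2)
# For analytic functions the singular values, and hence err, decay very fast in r.
```

Tensor train and H-Tucker formats generalise exactly this separation of variables to many dimensions, which is where the cost questions of the talk arise.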


Why the ℓ2 → ℓ∞ embedding cannot be solved by randomized algorithms

Marcin Wnuk / Osnabrück University

We work in the framework of information-based complexity; the basic notions will be explained during the talk. Let S be a bounded linear operator between separable Banach spaces. We say that S is solvable by a class of algorithms A if there exists a sequence (An)n of algorithms from A such that the errors made by (An)n when approximating S converge to 0. The definition of the error may, and usually does, depend on the class A under consideration. It is well known that the class of operators solvable by deterministic algorithms coincides with the compact operators. The main question we ask is: which operators are solvable by randomized algorithms? When proving negative results we rely on the Bakhvalov technique. In particular, the results of S. Heinrich from "Lower bounds for the complexity of Monte Carlo function approximation" turn out to be very useful. Applying them, we were able to obtain some elementary results on the nonsolvability of certain operators. The introductory part of the talk will be devoted to presenting those results. In the main part of the talk I would like to focus on one of the simplest operators for which the method developed by Heinrich fails, namely when S is the identical embedding of the sequence space ℓ2 into ℓ∞. Careful analysis of the finite-dimensional subproblems shows that for some constant C > 0, obtaining an error smaller than C in the m-dimensional subproblems requires evaluating roughly log(m) (for non-adaptive randomized algorithms) or log(log(m)) (for adaptive randomized algorithms) functionals. Thus we cannot approximate S with error smaller than C using only finitely many functionals. If time permits I would like to go into some details of the proof, drawing upon classical information theory. The talk is based on joint work with Robert Kunsch (RWTH Aachen) and Erich Novak (University of Jena).


Afternoon Tea


Pre-integration and Dimension Reduction with Generalized Active Subspaces

Sifan Liu / Stanford University

Pre-integration and dimension reduction are useful techniques for improving randomized quasi-Monte Carlo (RQMC) accuracy. To choose the variable to pre-integrate with, one needs to consider both the variable's importance and the tractability of the pre-integration step. For integrals over a Gaussian distribution, one can potentially pre-integrate over any linear combination of variables. I will talk about how to find the important direction to pre-integrate with by active subspaces, while incorporating certain constraints so that the pre-integration step has a closed form. Meanwhile, this generalized active subspaces method can reduce the effective dimension of the integrand. The proposed algorithm is shown to work efficiently in some financial applications, including pricing exotic options and computing Greeks. In particular, the proposed method brings a big variance reduction for pricing Asian options under stochastic volatility models.
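The classical active-subspace computation behind this idea can be sketched as follows. This is a minimal illustration of the standard (unconstrained) active subspace, not the speaker's generalised method with closed-form pre-integration constraints; the integrand f and all parameter values are assumptions for the example.

```python
import numpy as np

# For f(x) = g(a.x) the gradient covariance C = E[grad f(x) grad f(x)^T]
# has rank one, and its leading eigenvector recovers the important direction a.
rng = np.random.default_rng(0)
d = 8
a = np.ones(d) / np.sqrt(d)           # hidden important direction

def grad_f(x):                         # f(x) = sin(a.x)  =>  grad f = cos(a.x) * a
    return np.cos(a @ x) * a

M = 2000
C = np.zeros((d, d))
for _ in range(M):                     # Monte Carlo estimate of C over N(0, I)
    g = grad_f(rng.standard_normal(d))
    C += np.outer(g, g) / M

eigvals, eigvecs = np.linalg.eigh(C)
v = eigvecs[:, -1]                     # leading eigenvector
alignment = abs(v @ a)                 # close to 1: the direction is recovered
```

In the talk's setting one would additionally restrict the search so that integrating along the chosen direction remains analytically tractable.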


Fast computation of matrix-vector products using reduced lattice points

Peter Kritzer / RICAM, Austrian Academy of Sciences

In various applications (e.g., mathematical finance or PDEs with random coefficients) one is interested in approximating integrals of the form

  ∫_D f(xᵀA) dμ(x),

for a domain D ⊆ ℝˢ, an s × t matrix A ∈ ℝ^(s×t), and a function f : D → ℝ, by quasi-Monte Carlo integration rules of the form

  Q_N(f) = (1/N) ∑_{k=0}^{N−1} f(x_kᵀ A).

We are interested in situations where the main computational cost of computing Q_N(f) arises from the vector-matrix multiplication x_kᵀ A for all N points, which requires O(N s t) operations. It was shown by Dick, Kuo, Le Gia, and Schwab that, when using particular types of QMC rules, the cost to evaluate Q_N(f) can be reduced to only O(t N log N) operations. This drastic reduction in computational cost (as long as log N ≪ s) is achieved by a fast matrix-matrix multiplication that exploits the fact that for the chosen point sets the matrix X can be re-ordered to be of circulant structure. The fast multiplication is then realized by the use of the fast Fourier transform (FFT). In our talk, we will outline a different method which can also drastically reduce the computational cost of evaluating Q_N(f). The reduction in computational complexity is achieved by using point sets which possess a certain repetitiveness in their components.
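The baseline being improved upon can be made concrete with a naive evaluation of Q_N(f) at rank-1 lattice points. The generating vector z, the matrix A, and the integrand f below are toy assumptions; the X @ A product is exactly the O(N s t) step whose cost the talk reduces.

```python
import numpy as np

N, s, t = 1024, 4, 3
z = np.array([1, 433, 229, 321])       # toy generating vector (not optimised)
k = np.arange(N)
X = np.mod(np.outer(k, z) / N, 1.0)    # N x s rank-1 lattice points x_k = frac(k z / N)

rng = np.random.default_rng(1)
A = rng.standard_normal((s, t))
Y = X @ A                              # the O(N*s*t) matrix product

f = lambda y: np.cos(2 * np.pi * y).prod(axis=1)
Q_N = f(Y).mean()                      # Q_N(f) = (1/N) sum_k f(x_k^T A)
```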


Constructing Embedded Lattice-based Algorithms to Approximate Multivariate Functions with a Composite Number of Points

Weiwen Mo / KU Leuven

Rank-1 lattice rules, a main family of QMC rules, are characterised by their generating vectors, and the component-by-component (CBC) construction is an efficient way to construct the generating vector. In this talk, we introduce some new results on approximating multivariate one-periodic functions using function values at rank-1 lattice points. The number of points is not limited to a prime number as in the currently available literature, but can take any composite value. The new results cannot be trivially generalised from the existing results for prime numbers, and a new proof technique is required. We also provide a new CBC algorithm to construct embedded lattice sequences for a range of numbers of points. We show theoretically that embedded lattice sequences are essentially as good as lattice rules with the same number of points. With some modifications, the search criterion in the CBC construction under the L2 norm can also be applied to the L∞ norm, which allows fast CBC implementations for both the L2 and L∞ norms.
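For readers unfamiliar with CBC constructions, here is a minimal O(s N²) version for a prime number of points, using the standard worst-case-error criterion of a weighted Korobov-type space (smoothness 2), built from the Bernoulli polynomial B2(x) = x² − x + 1/6. The product weights are an illustrative assumption; fast CBC uses FFTs, and the talk's composite-N embedded construction is not shown.

```python
import numpy as np

def cbc(N, s, gamma):
    """Greedy component-by-component search for a generating vector z.

    Criterion: e^2(z) = -1 + (1/N) sum_k prod_j (1 + gamma_j * 2*pi^2 * B2({k z_j / N})).
    """
    k = np.arange(N)
    prod = np.ones(N)                  # running product over already-chosen components
    z = []
    for j in range(s):
        errs = []
        for zj in range(1, N):         # N prime: every candidate is coprime with N
            x = (k * zj % N) / N
            omega = 2 * np.pi**2 * (x * x - x + 1.0 / 6.0)
            errs.append(np.sum(prod * (1 + gamma[j] * omega)))
        zj_best = int(np.argmin(errs)) + 1
        x = (k * zj_best % N) / N
        prod = prod * (1 + gamma[j] * 2 * np.pi**2 * (x * x - x + 1.0 / 6.0))
        z.append(zj_best)
    return z, prod.mean() - 1.0        # generating vector and squared error

z, e2 = cbc(101, 3, [1.0, 0.5, 0.25])
```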




Self-reinforced Knothe–Rosenblatt rearrangements for high-dimensional stochastic computation

Tiangang Cui / Monash

Characterising intractable high-dimensional random variables is a fundamental task in stochastic computation. It has broad applications in statistical physics, machine learning, uncertainty quantification, and beyond. The recent surge of transport maps offers new insights into this task by constructing variable transformations that couple intractable random variables with tractable reference random variables. In this talk, we will present numerical methods that build the Knothe–Rosenblatt (KR) rearrangement, a family of transport maps in a triangular form, in high dimensions. We first design function approximation tools to realise the KR rearrangement that ensure the order-preserving property with controlled statistical errors. We then introduce a self-reinforced procedure to adaptively precondition the construction of KR rearrangements, significantly expanding their capability of handling random variables with complicated nonlinear interactions and concentrated density functions. We demonstrate the efficiency of the resulting self-reinforced KR rearrangements on applications in statistical learning and uncertainty quantification, including parameter estimation for dynamical systems, PDE-constrained inverse problems, and rare event estimation.
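The triangular structure of a KR rearrangement can be illustrated in two dimensions with a dense-grid construction (a toy sketch; the talk's methods use function-approximation tools precisely to avoid such grids). The map T(u1, u2) = (T1(u1), T2(u1, u2)) pushes Uniform([0,1]²) to a target density by inverting the marginal CDF of x1 and then the conditional CDF of x2 given x1. The banana-shaped target below is an assumption for the example.

```python
import numpy as np

n = 400
x = np.linspace(-4, 4, n)
X1, X2 = np.meshgrid(x, x, indexing="ij")
dens = np.exp(-0.5 * (X1**2 + (X2 - 0.5 * X1**2) ** 2))  # banana-shaped target
dens /= dens.sum()

marg1 = dens.sum(axis=1)               # marginal of x1
cdf1 = np.cumsum(marg1)

def kr_map(u1, u2):
    """Triangular (Knothe-Rosenblatt) map on the grid: uniform -> target."""
    i = np.searchsorted(cdf1, u1 * cdf1[-1])
    cond = np.cumsum(dens[i])          # conditional CDF of x2 given x1 = x[i]
    j = np.searchsorted(cond, u2 * cond[-1])
    return x[i], x[j]

rng = np.random.default_rng(2)
samples = np.array([kr_map(*rng.random(2)) for _ in range(5000)])
```

Note the order-preserving property: each component of the map is monotone in its last argument, which is what the talk's function-approximation tools must preserve.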


Approximating distribution functions in uncertainty quantification using quasi-Monte Carlo methods

Abi Srikumar / UNSW Sydney

As high-dimensional problems become increasingly prevalent in many applications, the effective evaluation of these problems within the limits of our current technology poses a great hurdle due to the exponential increase in computational cost as dimensionality increases. One class of strategies for evaluating such problems efficiently is quasi-Monte Carlo (QMC) methods. Recently, the application of quasi-Monte Carlo methods to approximate expected values associated with solutions to elliptic partial differential equations with random coefficients in uncertainty quantification has been of great interest. In this talk, we look into extending this from the computation of expected values of functionals of the PDE solution to the approximation of distribution functions. This is done by reformulating these functions as expectations of an indicator function. However, due to the discontinuous nature of the indicator function, we do not have an integrand that is conducive to obtaining the optimal rate of error convergence. We seek to alleviate this issue using preintegration, whereby we integrate out a single variable of the discontinuous function in order to obtain a function of one dimension less with a sufficient level of smoothness to apply QMC methods. Some theoretical results regarding the error bounds associated with such approximations and the results of numerical experiments will be presented.
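The smoothing effect of preintegration is easiest to see in the simplest linear-Gaussian setting (a toy assumption, not the PDE setting of the talk): for the distribution function P(a·Z ≤ t) with Z ~ N(0, I), integrating out z1 analytically replaces the discontinuous indicator by the smooth function Φ((t − Σ_{j≥2} a_j z_j)/a_1), one dimension lower.

```python
import math
import numpy as np

Phi = lambda y: 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))  # standard normal CDF

a = np.array([1.0, 0.7, 0.4])          # toy coefficients, a_1 > 0
t = 0.8
exact = Phi(t / np.linalg.norm(a))     # closed form for this toy problem

# Plain MC on the remaining (smooth) integrand; QMC would be used in practice.
rng = np.random.default_rng(3)
M = 20000
zrest = rng.standard_normal((M, 2))
vals = [Phi((t - a[1:] @ z) / a[0]) for z in zrest]
approx = float(np.mean(vals))
```

The integrand after preintegration is infinitely smooth in the remaining variables, which is what makes the QMC error analysis applicable.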


Morning Tea


Quasi-Monte Carlo methods and discontinuous Galerkin

Andreas Rupp / LUT University

In this talk, we design and develop quasi-Monte Carlo (QMC) cubatures for nonconforming discontinuous Galerkin approximations of elliptic PDE problems with random coefficients. In particular, we are interested in using QMC cubatures to compute the response statistics (expectation and variance) of the discretized PDE problem, and we derive rigorous QMC convergence rates for this problem.


Periodic kernel-based high-dimensional approximation

Ian Sloan / University of New South Wales, Sydney Australia

This talk describes a recent periodic-kernel-based QMC method for high-dimensional approximation. In this work, jointly with Vesa Kaarnioja, Yoshihito Kazashi, Frances Kuo and Fabio Nobile, an elliptic partial differential equation with a random input field is modelled in a non-standard way with periodic random variables, together with kernel-based interpolation, giving a computational cost that grows merely linearly with dimensionality. The method is feasible even with hundreds of parameters in the input random field.


Theory and construction of lattice rules after preintegration for option pricing

Alexander Gilbert / UNSW Sydney

Since the 1990s it has been known that quasi-Monte Carlo (QMC) methods are extremely effective at computing the integrals required for pricing Asian options, despite the fact that the kink in the payoff function means that such problems are not smooth enough for the QMC theory to apply. A popular remedy is to first smooth the payoff by preintegration, a specific case of the more general method of conditional sampling. Here one first integrates the non-smooth payoff with respect to a specially chosen variable (or, equivalently, conditions on partial information), with the result being a smooth function, but now in one dimension less. In this talk we look at pricing an Asian option by performing preintegration and then applying a randomly shifted lattice rule to the result. By studying the regularity of the payoff after preintegration, we can choose appropriate weights for each specific Asian option problem, which in turn allows tailored randomly shifted lattice rules to be constructed using the component-by-component algorithm. We give rigorous error bounds with first-order convergence that are explicit in the dependence on dimension. Along the way I will also highlight a key step in the analysis: an equivalence between the Sobolev spaces used to analyse preintegration and the weighted spaces used to analyse randomly shifted lattice rules.
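A randomly shifted lattice rule itself is a few lines of code: average the rank-1 rule over a handful of i.i.d. uniform shifts to obtain an unbiased estimator together with a practical error estimate. The generating vector and integrand below are toy assumptions (the Asian-option payoff and CBC-constructed weights of the talk are not reproduced).

```python
import numpy as np

N, s, R = 257, 4, 16
z = np.array([1, 76, 108, 43])         # toy generating vector (not optimised)
pts = np.outer(np.arange(N), z) / N    # unshifted lattice, before taking mod 1

f = lambda x: np.prod(1 + (x - 0.5), axis=1)   # true integral over [0,1]^s is 1

rng = np.random.default_rng(4)
est = [f(np.mod(pts + rng.random(s), 1.0)).mean() for _ in range(R)]
Q = float(np.mean(est))                         # unbiased estimate
stderr = float(np.std(est, ddof=1) / np.sqrt(R))  # practical error estimate
```

Each shifted rule is unbiased, so the spread over shifts gives a statistically meaningful error bar, something a deterministic QMC rule alone does not provide.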




The Double-Adaptive Tensor Product Multilevel Method

Rüdiger Kempf / University of Bayreuth

The Tensor Product Multilevel Method (TPML) is a combination of Smolyak's construction for a high-dimensional approximation operator with low-dimensional kernel-based multilevel schemes. This yields a high-order method which allows scattered-data approximation on relatively arbitrary high-dimensional domains. Both building blocks of TPML have adaptive versions, which can be combined in the same way as their non-adaptive counterparts. This allows us to iteratively build the index set the Smolyak algorithm uses and/or adaptively choose the sites the multilevel scheme uses. We call the resulting version of TPML the Double-Adaptive Tensor Product Multilevel Method. In this talk we briefly review the basics of TPML and introduce our ideas for the double-adaptive TPML. If time permits, we show the numerical performance of the methods with the help of an example.


Tensor-based methods for sequential state and parameter estimation in state space models

Yiran Zhao / Monash University

Numerous real-world applications involve the filtering problem: one aims to sequentially estimate the states of a (stochastic) dynamical system from incomplete, indirect, and noisy observations over time, in order to forecast and control the underlying system. In addition to the filtering problem, it is often of interest to estimate some parameters that govern the evolution of the system. Both the filtering and the parameter estimation can be naturally formalized under the Bayesian framework. However, the Bayesian solution poses some significant challenges. For example, the most widely used particle filters can suffer from particle degeneracy, and the more robust ensemble Kalman filters rely on rather restrictive Gaussian assumptions. The problem of joint parameter and state estimation in high-dimensional models remains largely unsolved. Exploiting the interplay between the low-rank tensor structure and the Markov property of the filtering problem, we present a tensor-train-based approach for tackling Bayesian filtering and parameter estimation altogether. Our approach aims at exact Bayesian solutions and does not suffer from particle degeneracy. Applications to several challenging numerical examples are presented.


Uniform recovery of periodic functions

David Krieg / Johannes Kepler University, Linz, Austria

We consider the problem of function recovery for classes of high-dimensional periodic and continuous functions in the uniform norm. We show that algorithms based on function values (samples) are as powerful as algorithms based on general linear information (like Fourier coefficients). Moreover, there is a simple algorithm that achieves almost the optimal error with high probability. We also discuss a result for classes of non-periodic functions.


Afternoon tea


Dynamical low-rank approximation of Vlasov-Poisson equations on polygonal spatial domains

Andreas Zeiser / HTW Berlin, Germany

We consider dynamical low-rank approximation (DLRA) for the numerical simulation of Vlasov-Poisson equations based on separation of the space and velocity variables, as proposed in several recent works. The standard approach for the time integration in the DLRA model uses a splitting of the tangent-space projector for the low-rank manifold according to the separated variables. It can also be modified to allow for rank-adaptivity. A less studied aspect is the incorporation of boundary conditions in the DLRA model. We consider a variational formulation of the projector splitting which allows us to handle inflow boundary conditions on piecewise polygonal spatial domains. Numerical experiments demonstrate the feasibility in principle of this approach.


The effects of minimising the training set fill distance in machine learning regression

Paolo Climaco / University of Bonn

Machine learning (ML) regression methods are powerful tools leveraging large datasets for nonlinear function approximation in high-dimensional domains. Unfortunately, learning from large datasets may be infeasible due to computational limitations and the cost of producing labels for the data points, as in the case of quantum-chemistry applications. Therefore, an important task in scientific ML is sampling small training sets from large pools of unlabelled data points to maximise models' performance while keeping the computational effort of the training and labelling processes low. In this talk, we analyse the advantages of a common approach for training set sampling that aims to minimise the fill distance of the selected set, which can be considered a measure of the data distribution. We provide theoretical results showing that minimising the training set fill distance reduces the worst-case approximation error of several ML models, which can be interpreted as an indicator of the robustness of a model's approximation. We support our theoretical results with experiments considering various ML techniques, such as kernel methods and neural networks. The application context is quantum chemistry, where different datasets have been generated to develop effective ML techniques and approximate the functions mapping high-dimensional representations of molecules to their physical and chemical properties.
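A standard way to drive down the fill distance h(T) = max over the pool of the distance to the nearest selected training point is farthest-point (greedy) sampling. The sketch below illustrates the general idea on synthetic data and is not claimed to be the paper's algorithm.

```python
import numpy as np

def farthest_point_sample(pool, m):
    """Greedily pick m points, each maximising its distance to those already picked."""
    dist = np.full(len(pool), np.inf)
    idx = [0]                          # arbitrary first point
    for _ in range(m - 1):
        dist = np.minimum(dist, np.linalg.norm(pool - pool[idx[-1]], axis=1))
        idx.append(int(np.argmax(dist)))
    return idx

def fill_distance(pool, idx):
    """h(T): largest pool-to-selected-set distance."""
    d = np.linalg.norm(pool[:, None, :] - pool[idx][None, :, :], axis=2)
    return d.min(axis=1).max()

rng = np.random.default_rng(5)
pool = rng.random((2000, 3))
sel = farthest_point_sample(pool, 50)

h_greedy = fill_distance(pool, sel)
h_random = fill_distance(pool, list(rng.choice(len(pool), 50, replace=False)))
```

On typical pools the greedy set has a markedly smaller fill distance than a same-size uniformly random subset, which is the quantity the talk's worst-case error bounds are driven by.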


Learning Embeddings of Deformation Patterns of Surface Meshes

Jochen Garcke 

The underlying dynamics and patterns of 3D surface meshes deforming over time can be discovered by unsupervised learning, especially autoencoders, which calculate low-dimensional embeddings of the surfaces. To study the deformation patterns of unseen shapes by transfer learning, we want to train an autoencoder that can analyze new surface meshes without training a new network. Here, most state-of-the-art autoencoders cannot handle meshes of different connectivity and therefore have limited to no generalization capacity to new meshes. Also, reconstruction errors strongly increase in comparison to the errors for the training shapes. To address this, we propose novel spatial and spectral CoSMA (Convolutional Semi-Regular Mesh Autoencoder) networks.





Quasi-Monte Carlo finite element approximation of the Navier–Stokes equations with initial data modeled by log-normal random fields

Guanglian Li / The University of Hong Kong

We analyze the numerical approximation of the Navier–Stokes problem over a polygonal domain in ℝ², where the initial condition is modeled by a log-normal random field. This problem usually arises in the area of uncertainty quantification. We aim to compute the expected value of linear functionals of the solution to the Navier–Stokes equations and perform a rigorous error analysis for the problem. In particular, our method includes the finite element and fully-discrete discretizations, a truncated Karhunen–Loève expansion for the realizations of the initial condition, and a lattice-based quasi-Monte Carlo (QMC) method to estimate the expected values over the parameter space. Our QMC analysis is based on randomly-shifted lattice rules for the integration over the high-dimensional parameter domain, which guarantees that the error decays with O(N^(−1+δ)), where N is the number of sampling points, δ > 0 is an arbitrarily small number, and the constant in the decay estimate is independent of the dimension of integration.


Sparse grid approximation of stochastic dynamic micromagnetics

Andrea Scaglioni / Vienna University of Technology

We consider the stochastic Landau–Lifshitz–Gilbert problem, an SPDE model for dynamic micromagnetism. We first convert the problem to a (highly nonlinear) PDE with parametric coefficients using the Doss–Sussmann transform and the Lévy–Ciesielski parametrization of Brownian motion. We prove analytic regularity of the parameter-to-solution map and derive estimates of its derivatives of arbitrary order. We apply these estimates to prove dimension-independent algebraic convergence of a piecewise-polynomial sparse grid discretization. If time permits, we discuss space and time discretization and efficient multilevel strategies for large-scale simulation.


Morning tea


Reduced dynamics of the inhibitor-activator system

Erika Hausenblas / Montanuniversität Leoben, Austria

Chemical and biochemical reactions can exhibit surprisingly different behaviors, from multiple steady-state solutions to oscillatory solutions and chaotic behaviors. Such behavior has been of great interest to researchers for many decades. The Briggs-Rauscher, Belousov-Zhabotinsky, and Bray-Liebhafsky reactions, for which periodic variations in concentrations can be visualized by changes in color, are experimental examples of oscillating behavior in chemical systems. These types of systems are modeled by a system of partial differential equations coupled with a nonlinearity. However, analyzing the patterns, one may suspect that the dynamics are generated by only a finite number of spatial Fourier modes. In fluid dynamics, it is known that for large times the solution is determined by a finite number of spatial Fourier modes, called determining modes. In the article, we first introduce the concept of determining modes and show that, indeed, it is sufficient to characterize the dynamics by only a finite number of spatial Fourier modes.


Free University of Berlin

Vesa Kaarnioja / Free University of Berlin

Contemporary quasi-Monte Carlo (QMC) methods are based on tailoring specially designed cubature rules for high-dimensional integration problems. By exploiting the smoothness and anisotropy of an integrand, it is possible to achieve faster-than-Monte Carlo convergence rates. To this end, QMC methods have become a popular tool for the numerical treatment of partial differential equations involving random coefficients. Meanwhile, the goal in Bayesian optimal experimental design is to maximize the expected information gain for the reconstruction of unknown quantities when there is a limited budget for collecting measurement data. In this talk, we derive tailored QMC cubature rules to alleviate the computational burden associated with Bayesian optimal experimental design problems governed by partial differential equations.


The fixed vector randomised lattice algorithm for high-dimensional integration

Laurence Wilkes / KU Leuven

The recently developed fixed vector algorithm offers a very simple solution to the problem of producing a lattice-based randomised algorithm for numerical integration with the optimal randomised error. The essential idea of this algorithm is to shift the construction of the generating vector of the lattice to a precomputation, so that the only randomised element is the choice of the number of function evaluations from a suitable range. In the talk, we will look at the existence result for such a fixed generating vector, a method to construct the vector, and finally at how the algorithm can be generalised to work in the half-period cosine space and in a Sobolev space with a lower smoothness parameter.
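Schematically, the randomisation step looks as follows: the generating vector z is fixed once in a precomputation, and at run time only the number of points n is drawn, here uniformly from the primes in (N/2, N]. The vector z, the prime range, and the integrand are toy assumptions for illustration, not the talk's construction.

```python
import numpy as np

def primes_in(lo, hi):
    """Primes p with lo < p <= hi, via a simple sieve."""
    sieve = np.ones(hi + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(hi**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = False
    return [p for p in range(lo + 1, hi + 1) if sieve[p]]

N = 512
z = np.array([1, 97, 181, 239])        # toy fixed generating vector
f = lambda x: np.prod(1 + np.cos(2 * np.pi * x), axis=1)  # true integral = 1

rng = np.random.default_rng(6)
n = int(rng.choice(primes_in(N // 2, N)))          # the only random choice
pts = np.mod(np.outer(np.arange(n), z) / n, 1.0)   # rank-1 lattice with n points
Q = float(f(pts).mean())
```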




Excursion / Collaboration

Excursion / Desks and meeting rooms available for collaboration




Sparse functional representations for Neuro-medical imaging

Leo Lebrat / CSIRO

The study of numerous neurodegenerative diseases relies on the reconstruction and analysis of the brain cortices from magnetic resonance imaging (MRI). Traditional volumetric representations for these surfaces fail to capture fine-grained cortical foldings. In this talk, we will present memory-efficient functional representations for those surfaces and exhibit empirical evidence for their robustness to dataset shift. Leveraging analogous sparse mathematical objects, we will also present a novel explainability method allowing us to produce plausible counterfactual examples.

[1] Lebrat, Leo, Santa Cruz, Rodrigo, et al. "CorticalFlow: a diffeomorphic mesh transformer network for cortical surface reconstruction." Advances in Neural Information Processing Systems 34 (2021).
[2] Santa Cruz, Rodrigo, Lebrat, Leo, et al. "DeepCSR: A 3D deep learning approach for cortical surface reconstruction." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021.
[3] Peters, Joshua, Lebrat, Leo, et al. "DBCE: A Saliency Method for Medical Deep Learning Through Anatomically-Consistent Free-Form Deformations." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023.


High dimensional inner products of SGD iterates in exponential family models resemble neural network kernels

Russell Tsuchida / Data61-CSIRO

We discuss a problem involving computing with point estimates of exponential family likelihoods in a high-dimensional regime. We present background on exponential families, a general and mathematically convenient family of probability distributions studied by probabilists, statisticians and machine learners alike. In classical applications, a dependent variable follows an exponential family likelihood with an expectation that is a linear function of the independent variable. In an unsupervised setting, the independent variable is learnt, possibly together with the linear function. We consider the setup of updating the independent variable using stochastic gradient descent (SGD) in such an unsupervised setting. We find that in a high-dimensional limit, the inner product between any two learnt independent variables can be updated exactly using a deterministic rule. This derived update rule resembles previously introduced infinitely wide neural network kernels. As part of our analysis, we are able to draw new connections between the statistician's inverse link function and the machine learner's activation function. We describe an interesting property of SGD in this high-dimensional limit: even though individual iterates are random vectors, inner products of any two iterates are deterministic, and can converge to a unique fixed point as the number of iterates increases. Our update rule allows us to compute with infinite-dimensional independent variables using a popular class of machine learning algorithms called kernel methods.


Morning tea


Recovery from corrupted data: recent results for various models

Simon Foucart / Texas A&M University

The goal of Optimal Recovery is to uncover procedures that are worst-case optimal for the task of learning / recovering unknown functions from a priori information (the model) and a posteriori information (the observations). Optimal recovery procedures can sometimes be chosen as linear maps, at least when the observations are accurate. When they are corrupted, however, the problem becomes more complicated, especially if the focus is put on the practical construction of the (near-)optimal procedure. This talk showcases positive results in three situations: deterministic observation errors bounded in ℓ2; deterministic observation errors bounded in ℓ1; and stochastic log-concave observation errors.


Integration and Function Recovery on Hermite Spaces

Michael Gnewuch / Osnabrück University

We consider spaces of functions of infinitely many variables that are defined with the help of all univariate Hermite polynomials, which (properly normalized) form an orthonormal basis of the space of all square-integrable functions over the real line endowed with the standard normal distribution. Those spaces belong to the class of reproducing kernel Hilbert spaces of increasing smoothness, and their elements are defined on proper subsets of the sequence space (i.e., the space of all real-valued sequences). Their norms are induced by an underlying function space decomposition, namely the infinite-dimensional ANOVA decomposition. We discuss further properties of those spaces and present sharp results on numerical integration and on function recovery. The talk is based on joint work with Mario Hefter, Aicke Hinrichs, Klaus Ritter and Robin Rüßmann.


A Space-Time Adaptive Low-Rank Method for High-Dimensional Parabolic PDEs

Manfred Faldum / RWTH Aachen University

In this talk, we present the construction and analysis of a space-time adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables. As in the existing adaptive low-rank method for elliptic problems [1], we use a perturbed Richardson iteration, applying two reduction operators to the iterates to keep both the support and the arising ranks of the low-rank approximations near-optimal for a given error tolerance. This perturbed Richardson iteration is applied to a bi-infinite matrix-vector problem based on the space-time variational formulation [2], which is equivalent to the parabolic initial-boundary value problem. For the analysis of the method, we propose a new approximation class for the temporal operator, made necessary by the interaction between hierarchical tensor formats at different time indices. One of the main challenges is that the parabolic operator is an isomorphism with respect to spaces not endowed with a cross norm. Therefore, as in [1], we precondition operators in low-rank format by exponential sum approximations. The method is shown to converge and to satisfy complexity bounds similar to those of the existing adaptive low-rank methods for elliptic problems [1], establishing its suitability for parabolic problems on high-dimensional spatial domains without suffering from the curse of dimensionality. The construction also yields computable, rigorous a posteriori bounds for the total error in terms of the activated basis functions and ranks of the approximation. The results are illustrated by numerical experiments for the heat equation in high dimensions, demonstrating practical efficiency.

[1] M. Bachmayr, W. Dahmen, Adaptive Low-Rank Methods: Problems on Sobolev Spaces, SIAM Journal on Numerical Analysis 54(2), pp. 744-796
[2] C. Schwab, R. Stevenson, Space-Time Adaptive Wavelet Methods for Parabolic Evolution Problems, Mathematics of Computation
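As a rough illustration of the idea behind such perturbed iterations (my own toy sketch, not the method of the talk), here is a Richardson iteration in which a simple reduction operator zeroes out small coefficients after each update, keeping the support of the iterate small in the spirit of the low-rank/wavelet recompression described above:

```python
import math

def richardson_truncated(A, b, omega=0.1, eps=1e-3, tol=1e-8, iters=500):
    """Richardson iteration x <- x + omega*(b - A x), perturbed by a
    coarsening step that zeroes coefficients below a relative threshold.
    This loosely mimics the reduction operators that keep the support
    (and, in the tensor setting, the ranks) of the iterates small."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if math.sqrt(sum(ri * ri for ri in r)) < tol:
            break
        x = [x[i] + omega * r[i] for i in range(n)]
        cut = eps * max(abs(xi) for xi in x)             # relative threshold
        x = [xi if abs(xi) >= cut else 0.0 for xi in x]  # reduction operator
    return x
```

The coarsening tolerance eps trades accuracy against sparsity, just as the rank and support tolerances do in the adaptive low-rank setting.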




Energy and Discrepancy of Greedy Sequences on the Sphere

Ryan W Matzke / Vanderbilt University

Greedy algorithms are surprisingly effective in a wide range of optimization problems, but have only recently been considered as a way to find point configurations with low discrepancy or energy. On the sphere and other manifolds, greedy algorithms minimizing certain discrepancies or energies at each step have been shown to produce uniformly distributed sequences. In this talk, we further quantify some of this behavior on the sphere, discussing the effectiveness of greedily generated point configurations as minimizers of Riesz energies and of the quadratic spherical cap discrepancy. In particular, we show that they are optimal for singular Riesz energies. This is based on joint work with Dmitriy Bilyk, Michelle Mastrianni, and Stefan Steinerberger.
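A minimal sketch of such a greedy sequence (my own illustration, not the talk's construction): each new point is chosen from a finite candidate set, here a Fibonacci lattice on S^2, to minimize the Riesz s-energy it adds to the points already selected.

```python
import math

def fibonacci_sphere(n):
    """A quasi-uniform candidate set on the unit sphere S^2 (Fibonacci lattice)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        rho = math.sqrt(1.0 - z * z)
        pts.append((rho * math.cos(golden * i), rho * math.sin(golden * i), z))
    return pts

def greedy_riesz(candidates, m, s=1.0):
    """Greedy sequence: each new point minimizes the Riesz s-energy it adds,
    sum_j |x - x_j|^(-s), over the remaining candidates."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    chosen = [candidates[0]]
    remaining = set(range(1, len(candidates)))
    for _ in range(m - 1):
        best = min(remaining,
                   key=lambda i: sum(dist(candidates[i], q) ** (-s) for q in chosen))
        chosen.append(candidates[best])
        remaining.remove(best)
    return chosen
```

Because the energy is singular at coincident points, the greedy step automatically pushes new points away from the existing ones, which is the mechanism behind the well-separation results discussed in the talk.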


High-Dimensional Optimization with a Novel Nonlocal Gradient

Hoang Tran / Oak Ridge National Laboratory

The problem of minimizing multi-modal loss functions with a large number of local optima frequently arises in machine learning and model calibration. Since the local gradient points in the direction of steepest slope in an infinitesimal neighborhood, an optimizer guided by the local gradient is often trapped in a local minimum. To address this issue, we develop a novel nonlocal gradient that skips small local minima by capturing the major structure of the loss landscape in black-box optimization. The nonlocal gradient is defined by a directional Gaussian smoothing (DGS) approach. The key idea of DGS is to conduct 1D long-range exploration with a large smoothing radius along d orthogonal directions in R^d, each of which defines a nonlocal directional derivative as a 1D integral. Such long-range exploration enables the nonlocal gradient to skip small local minima. The d directional derivatives are then assembled to form the nonlocal gradient. We use the Gauss-Hermite quadrature rule to approximate the d 1D integrals and obtain an accurate estimator. We provide a convergence theory for the scenario where the objective function is a convex function perturbed by highly oscillating, deterministic noise. We prove that our method converges exponentially to a tightened neighborhood of the solution, whose size is characterized by the noise wavelength. The superior performance of our method is demonstrated on several high-dimensional benchmark tests, machine learning and model calibration problems.
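The DGS construction can be sketched in a few lines (a hypothetical illustration, not ORNL's implementation): along each coordinate direction e_k, the derivative of the Gaussian-smoothed objective d/du E[f(x + u e_k)], u ~ N(0, sigma^2), is estimated with a 3-point Gauss-Hermite rule, and the d values are assembled into the nonlocal gradient.

```python
import math

# 3-point Gauss-Hermite rule for the weight exp(-v^2): nodes and weights
GH_NODES = (-math.sqrt(1.5), 0.0, math.sqrt(1.5))
GH_WEIGHTS = (math.sqrt(math.pi) / 6.0,
              2.0 * math.sqrt(math.pi) / 3.0,
              math.sqrt(math.pi) / 6.0)

def dgs_gradient(f, x, sigma=1.0):
    """Nonlocal gradient via directional Gaussian smoothing: for each
    coordinate direction, the 1D integral defining the smoothed directional
    derivative is approximated by Gauss-Hermite quadrature. The smoothing
    radius sigma sets how large a neighborhood each 1D exploration sees."""
    grad = []
    for k in range(len(x)):
        acc = 0.0
        for v, w in zip(GH_NODES, GH_WEIGHTS):
            y = list(x)
            y[k] += sigma * math.sqrt(2.0) * v  # long-range sample along e_k
            acc += w * v * f(y)
        grad.append(math.sqrt(2.0) / (sigma * math.sqrt(math.pi)) * acc)
    return grad
```

For a smooth quadratic the smoothed derivative coincides with the exact gradient; the payoff of a large sigma appears on oscillatory landscapes, where small wiggles are averaged out of the estimate.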


Preparing Hamiltonians for Quantum Simulation: A Computational Framework for Cartan Decomposition via Lax Dynamics

Moody Chu / North Carolina State University

Simulating the time evolution of a Hamiltonian system on a classical computer is hard: the computational power required to even describe a quantum system scales exponentially with the number of its constituents, let alone integrate its equations of motion. Hamiltonian simulation on a quantum machine is a possible solution to this challenge. Assuming that a quantum system composed of spin-½ particles can be manipulated at will, it is tenable to engineer the interaction between those particles according to the one that is to be simulated, and thus predict the value of physical quantities by simply performing the appropriate measurements on the system. Establishing a linkage between the unitary operators described mathematically as a logic solution and the unitary operators recognizable as quantum circuits for execution is therefore essential for algorithm design and circuit implementation. Most current techniques are fallible because of truncation errors or stagnation at local solutions. This work offers an innovative avenue by tackling the Cartan decomposition with the notion of Lax dynamics. With controllable integration error, this approach gives rise to a genuine unitary synthesis that is not only numerically feasible, but can also be used to gauge the quality of results produced by other means, and extends the knowledge to a wide range of applications. This paper aims at establishing the theoretical and algorithmic foundations by exploiting the geometric properties of Hamiltonian subalgebras and describing a common mechanism for deriving the Lax dynamics.
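The core mechanism behind Lax dynamics, a flow dL/dt = [B(L), L] that preserves the spectrum of L while steering it toward a structured form, can be illustrated for a symmetric 2x2 matrix. The Toda-type toy flow below is my own illustration, not the paper's algorithm; it diagonalizes L while (up to integration error, here forward Euler) keeping its eigenvalues fixed.

```python
import math

def toda_flow_2x2(a, b, c, h=1e-4, steps=160000):
    """Isospectral Lax flow dL/dt = [B(L), L] for the symmetric matrix
    L = [[a, b], [b, c]], with B(L) = [[0, -b], [b, 0]] its skew part.
    The flow drives the off-diagonal b -> 0 while preserving the spectrum,
    up to the (controllable) error of the forward Euler integrator."""
    for _ in range(steps):
        da = -2.0 * b * b          # entrywise form of [B(L), L]
        dc = 2.0 * b * b
        db = b * (a - c)
        a, b, c = a + h * da, b + h * db, c + h * dc
    return a, b, c

def eig2(a, b, c):
    """Eigenvalues of [[a, b], [b, c]] in closed form, for checking."""
    m = 0.5 * (a + c)
    d = math.sqrt(0.25 * (a - c) ** 2 + b * b)
    return m - d, m + d
```

The trace is conserved exactly by the scheme, and the eigenvalues drift only at the order of the integration error, which is the sense in which the talk's "controllable integration error" yields a genuine (here: orthogonal) synthesis.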


Afternoon tea


MSI Colloquium - Analytical and numerical methods in shape optimization

Helmut Harbrecht / Department of Mathematics and Computer Science, University of Basel, Switzerland

Shape optimization is indispensable for designing and constructing industrial components. Many problems that arise in applications, particularly in structural mechanics and in the optimal control of distributed parameter systems, can be formulated as the minimization of functionals defined over a class of admissible domains. The application of gradient-based minimization algorithms involves the shape functional's derivative with respect to the domain under consideration. Such derivatives can be computed analytically by means of shape calculus and enable the paradigm "first optimize, then discretize". In particular, by identifying the sought domain with a parametrization of its boundary, solving the shape optimization problem becomes equivalent to solving a nonlinear pseudodifferential equation for the unknown parametrization. The present talk surveys analytical and numerical methods for shape optimization. Besides several applications of shape optimization, the following items will be addressed:

• first and second order optimality conditions
• discretization of shapes
• existence and convergence of approximate shapes
• efficient numerical techniques to compute the state equation
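A deliberately naive toy version of boundary-parametrized shape optimization (my own illustration, discretize-first with finite-difference gradients standing in for the analytic shape derivative): the boundary of a planar domain is parametrized in polar coordinates by a few Fourier coefficients, and gradient descent minimizes the perimeter under an area penalty, so the minimizer should flatten toward a circle.

```python
import math

N = 128
THETAS = [2.0 * math.pi * i / N for i in range(N)]

def radius(coeffs, t):
    # Boundary parametrization r(theta) = c0 + c1*cos(2 theta) + c2*cos(3 theta)
    return coeffs[0] + coeffs[1] * math.cos(2 * t) + coeffs[2] * math.cos(3 * t)

def objective(coeffs, lam=5.0):
    """Perimeter plus a quadratic penalty keeping the enclosed area near pi,
    so the unconstrained minimizer is (close to) the unit circle."""
    dt = 2.0 * math.pi / N
    per, area = 0.0, 0.0
    for i in range(N):
        r = radius(coeffs, THETAS[i])
        dr = (radius(coeffs, THETAS[(i + 1) % N])
              - radius(coeffs, THETAS[i - 1])) / (2.0 * dt)
        per += math.sqrt(r * r + dr * dr) * dt   # arc length element
        area += 0.5 * r * r * dt                  # polar area element
    return per + lam * (area - math.pi) ** 2

def descend(coeffs, lr=0.002, iters=800, fd=1e-6):
    """Plain gradient descent; central finite differences over the boundary
    coefficients replace the shape-calculus derivative used in practice."""
    c = list(coeffs)
    for _ in range(iters):
        g = []
        for k in range(len(c)):
            cp, cm = list(c), list(c)
            cp[k] += fd
            cm[k] -= fd
            g.append((objective(cp) - objective(cm)) / (2.0 * fd))
        c = [c[k] - lr * g[k] for k in range(len(c))]
    return c
```

In the "first optimize, then discretize" paradigm of the talk, the finite-difference step would instead be the analytically derived shape gradient, discretized only afterwards.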


Workshop Dinner 

Venue: Ovolo Nishi / Address: Ground/25 Edinburgh Ave, Canberra ACT 2601 / Time: Arrival from 6pm, dinner will be served at 6:30pm / A bar tab is available; once it is exhausted, participants are to purchase their own drinks.


All day

Desks and meeting rooms available for collaboration.



Seminar Room 1.33 & 1.37,
Hanna Neumann Building #145, Science Road,
The Australian National University

-35.275527301698, 149.11904038165

General information

Information about Canberra, visas, transportation, accommodation, emergency, etc.

About Canberra

Canberra is located in the Australian Capital Territory, on the ancient lands of the Ngunnawal people, who have lived here for over 20,000 years. Canberra’s name is thought to mean ‘meeting place’, derived from the Aboriginal word Kamberra. European settlers arrived in the 1830s, and the area was selected by ballot as the site of the federal capital in 1908. Since then the ‘Bush Capital’ has grown to become the proud home of the Australian story, with a growing population of around 390,000.

Canberra hosts a wide range of tourist attractions, including various national museums, galleries and Parliament House, as well as beautiful parks and walking trails. Several attractions are within walking distance of the ANU campus, including the National Museum of Australia and the Australian National Botanic Gardens. Canberra is also a fantastic base from which to explore the many treasures of the surrounding region, including historic townships, beautiful coastlines and the famous Snowy Mountains. Learn more about what to do and see during your stay in Canberra.


International visitors to Australia require a visa or an electronic travel authority (ETA) prior to arrival. It is your responsibility to ensure documentation is correct and complete before you commence your journey. Visit the Australian Immigration and Citizenship website for information about visas and ETAs.


There are many ways to get around Canberra. Below is some useful information about Bus & Taxi transport around the ANU, the Airport and surrounding areas.


If you are catching a taxi or Uber to the ANU Mathematical Sciences Institute, ask to be taken to Building #145, Science Road, ANU. We are located close to the Ian Ross Building and the ANU gym. A Taxi from the airport will usually cost around $40 and will take roughly 15 minutes. Pricing and time may vary depending on traffic.

Phone Canberra Elite Taxis on 13 22 27 to book a taxi.


Canberra buses are a cheap and easy way of getting around town once you're here. View bus services and fares information on the Transport Canberra website.

To travel, just use your MyWay card or pay a cash fare to the driver when boarding. A single adult trip paid in cash costs $4.80, with cheaper fares for students and children. Significant savings can be made when travelling with MyWay. View MyWay and fares information.

Canberra Airport

The ACT government operates a public bus service between the CBD and Canberra Airport via bus Route 3, seven days a week.

For more information about buses to Canberra Airport, see the Transport Canberra website.


Below are some accommodation options for your visit to Canberra.

Emergency Information

In case of an emergency, please evacuate to the lawns on the southern side of the building (colloquially known as "University Avenue").

We have dedicated first aiders on staff; please seek them out if you require assistance. There is also a first aid room on the first floor, behind the women's bathroom, which contains basic supplies.