9th Workshop on High-Dimensional Approximation (HDA2023)
The High-Dimensional Approximation (HDA) Workshop is a series of biennial international meetings covering current research on high-dimensional problems. This ninth workshop (HDA2023) will be held at ANU in Canberra.
Cost
Registration fees
 Early bird registration (in-person) $400
 General registration (in-person) $450
 Student/retired fellow $200
Description
HDA2023 will cover a range of topics central to modern high-dimensional approximation and their applications. Topics include, but are not limited to:
 Nonlinear approximation
 Numerical integration
 Tensor decomposition
 Sparsity-exploiting approximation
 Sparse grid methods
 Quasi-Monte Carlo methods
 Polynomial chaos expansions
 Discrepancy and dispersion theory
 Dimensionality reduction
 Uncertainty quantification
 Reduced modelling
 Inverse problems
Participants are welcome from all around the world. A key feature of this workshop is that there are no parallel sessions and all talks are generally of equal length. Participants are welcome to present work in progress, and time will be set aside for informal discussions. The number of talks will be limited; it is not essential that everyone gives a talk. Collaborators are encouraged to coordinate and elect a representative to present joint work.
All talks are 25 minutes long with 5 minutes for questions at the end.
Warning! Please be aware that scammers are targeting conference participants. A number of participants have received phone calls requesting credit card information for hotel bookings. Under no circumstances should you give out this information. Please do not hesitate to contact the organisers if you are unsure whether a communication is legitimate.
Abstract and Support Submission
A small amount of funding has been allocated to provide accommodation and cover the conference fees for the visits of students, early career researchers, and attendees from developing nations.
Programme Committee
 Jochen Garcke (Uni Bonn)
 Markus Hegland (ANU)
 Frances Kuo (UNSW)
 Michael Griebel (Uni Bonn)
 Ian Sloan (UNSW)
Local Organising Committee
 James Nichols (ANU)
 Alexander Gilbert (UNSW)
 Tiangang Cui (Monash Uni)
 Markus Hegland (ANU)
Timeline
Early bird registration deadline: 9 December 2022
Travel support application deadline: 16 December 2022
We will continue to receive and review abstract submissions on a rolling basis.
Schedule
Program
Time  Session 

13:00 
Welcoming Remarks. James Nichols / ANU 
13:30 
On tensor approximation of continuous functions. Helmut Harbrecht / Department of Mathematics and Computer Science, University of Basel, Switzerland. In this talk, we analyze tensor approximation schemes for continuous functions. We assume that the function to be approximated lies either in a standard Sobolev space or in a Sobolev space with dominating mixed smoothness. Then, we discuss the cost of approximating this function in a tensor format such as the tensor train format or the H-Tucker format. 
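As a concrete illustration of the tensor train format mentioned in the abstract above, here is a minimal TT-SVD sketch. This is a toy construction by the editor, not the speaker's method; the test function and grid sizes are illustrative assumptions.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose a d-way array into tensor-train (TT) cores via successive truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores, r = [], 1
    C = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        C = C.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r_new = max(1, int(np.sum(s > tol * s[0])))      # truncate at relative tolerance
        cores.append(U[:, :r_new].reshape(r, shape[k], r_new))
        C = s[:r_new, None] * Vt[:r_new]
        r = r_new
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full array (only for checking the error)."""
    res = cores[0]                                       # shape (1, n_0, r_1)
    for core in cores[1:]:
        r = core.shape[0]
        res = res.reshape(-1, r) @ core.reshape(r, -1)
    return res.reshape([c.shape[1] for c in cores])

# Rank-2 example: sin(x1 + x2 + x3) has exact TT-ranks (2, 2).
x = np.linspace(0.0, 1.0, 30)
T = np.sin(x[:, None, None] + x[None, :, None] + x[None, None, :])
cores = tt_svd(T)
print([c.shape for c in cores], np.abs(T - tt_to_full(cores)).max())
```

The point of the example is that the storage drops from n³ entries to a few rank-2 cores while the reconstruction error stays at machine precision.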
14:00 
Why the ℓ2 → ℓ1 embedding cannot be solved by randomized algorithms. Marcin Wnuk / Osnabrück University. We are working in the framework of information-based complexity; the basic notions will be explained during the talk. Let S be a bounded linear operator between separable Banach spaces. We say that S is solvable by a class of algorithms A if there exists a sequence (An)n of algorithms from A such that the errors made by (An)n when approximating S converge to 0. The definition of the error may, and usually does, depend on the class A under consideration. It is well known that the class of operators solvable by deterministic algorithms coincides with the compact operators. The main question we ask is: which operators are solvable by randomized algorithms? When proving negative results we rely on the Bakhvalov technique. In particular, the results of S. Heinrich from "Lower bounds for the complexity of Monte Carlo function approximation" turn out to be very useful. Applying them, we were able to obtain some elementary results on the non-solvability of certain operators. The introductory part of the talk will be devoted to presenting those results. In the main part of the talk I would like to focus on one of the simplest operators for which the method developed by Heinrich fails, namely when S is the identical embedding of the sequence space ℓ2 into ℓ1. Careful analysis of the finite-dimensional subproblems shows that, for some constant C > 0, obtaining error smaller than C in the m-dimensional subproblems requires evaluating roughly log(m) (for non-adaptive randomized algorithms) or log(log(m)) (for adaptive randomized algorithms) functionals. Thus we cannot approximate S with error smaller than C using only finitely many functionals. If time permits I would like to go into some details of the proof, drawing upon classical information theory. The talk is based on joint work with Robert Kunsch (RWTH Aachen) and Erich Novak (University of Jena). 
14:30  
15:00 
Afternoon Tea 
15:30 
Preintegration and Dimension Reduction with Generalized Active Subspaces. Sifan Liu / Stanford University. Preintegration and dimension reduction are useful techniques for improving randomized quasi-Monte Carlo (RQMC) accuracy. To choose the variable to preintegrate with, one needs to consider both the variable's importance and the tractability of the preintegration step. For integrals over a Gaussian distribution, one can potentially preintegrate over any linear combination of variables. I will talk about how to find the important direction to preintegrate with by active subspaces, while incorporating certain constraints so that the preintegration step has a closed form. Meanwhile, this generalized active subspaces method can reduce the effective dimension of the integrand. The proposed algorithm is shown to work efficiently in some financial applications, including pricing exotic options and computing Greeks. In particular, the proposed method brings a big variance reduction for pricing Asian options under stochastic volatility models. 
16:00 
Fast computation of matrix-vector products using reduced lattice points. Peter Kritzer / RICAM, Austrian Academy of Sciences. In various applications (e.g., mathematical finance or PDEs with random coefficients) one is interested in approximating integrals of the form ∫_D f(xᵀA) dμ(x), for a domain D ⊆ Rˢ, an s × t matrix A ∈ R^(s×t), and a function f : D → R, by quasi-Monte Carlo integration rules of the form Q_N(f) = (1/N) Σ_{k=0}^{N-1} f(x_kᵀ A). We are interested in situations where the main computational cost of computing Q_N(f) arises from the vector-matrix multiplication x_kᵀ A for all N points, which requires O(N s t) operations. It was shown by Dick, Kuo, Le Gia, and Schwab that, when using particular types of QMC rules, the cost of evaluating Q_N(f) can be reduced to only O(t N log N) operations. This drastic reduction in computational cost (as long as log N is small compared to s) is achieved by a fast matrix-matrix multiplication that exploits the fact that, for the chosen point sets, the matrix X of integration nodes can be reordered to have circulant structure. The fast multiplication is then realized by means of the fast Fourier transform (FFT). In our talk, we will outline a different method which can also drastically reduce the computational cost of evaluating Q_N(f). The reduction in computational complexity is achieved by using point sets which possess a certain repetitiveness in their components. 
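The plain-vanilla computation that the talk starts from (before any fast-multiplication tricks) can be sketched as follows. The generating vector z and the linear test integrand are ad hoc choices for illustration, not taken from the talk.

```python
import numpy as np

# Rank-1 lattice points: x_k = frac(k * z / N), k = 0..N-1.
N = 1024
z = np.array([1, 433, 141, 619])   # ad hoc generating vector; odd components, so coprime to N
k = np.arange(N)[:, None]
X = (k * z % N) / N                # (N, s) lattice point set in [0,1)^s

rng = np.random.default_rng(0)
s, t = 4, 3
A = rng.random((s, t))             # the s x t matrix appearing in the integrand f(x^T A)

f = lambda Y: Y.sum(axis=1)        # toy integrand with known integral 0.5 * A.sum()

Y = X @ A                          # the O(N s t) matrix-vector products the talk targets
Q_N = f(Y).mean()                  # Q_N(f) = (1/N) sum_k f(x_k^T A)

exact = 0.5 * A.sum()
print(abs(Q_N - exact))            # small: lattice coordinate means are 0.5 - 1/(2N)
```

For a linear integrand the lattice rule's error here is exactly A.sum()/(2N), which makes the rule easy to sanity-check; the talk's point is that the dominant cost is the X @ A step, not the averaging.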
16:30 
Constructing Embedded Lattice-based Algorithms to Approximate Multivariate Functions with a Composite Number of Points. Weiwen Mo / KU Leuven. Rank-1 lattice rules, a main family of QMC rules, are characterised by their generating vectors, and the component-by-component (CBC) construction is an efficient way to construct the generating vector. In this talk, we introduce some new results on approximating multivariate one-periodic functions using function values at rank-1 lattice points. The number of points is not limited to a prime number as in the currently available literature, but can take any composite value. The new results cannot be trivially generalised from the existing results for a prime number of points, and a new proof technique is required. We also provide a new CBC algorithm to construct embedded lattice sequences for a range of numbers of points. We show theoretically that embedded lattice sequences are essentially as good as lattice rules with the same number of points. With some modifications, the search criterion in the CBC construction under the L2 norm can also be applied to the L∞ norm, which allows fast CBC implementations for both the L2 and L∞ norms. 
Program
Time  Session 

09:30 
Self-reinforced Knothe-Rosenblatt rearrangements for high-dimensional stochastic computation. Tiangang Cui / Monash. Characterising intractable high-dimensional random variables is a fundamental task in stochastic computation. It has broad applications in statistical physics, machine learning, uncertainty quantification, and beyond. The recent surge of transport maps offers new insights into this task by constructing variable transformations that couple intractable random variables with tractable reference random variables. In this talk, we will present numerical methods that build the Knothe-Rosenblatt (KR) rearrangement, a family of transport maps in triangular form, in high dimensions. We first design function approximation tools to realise the KR rearrangement that ensure the order-preserving property with controlled statistical errors. We then introduce a self-reinforced procedure to adaptively precondition the construction of KR rearrangements to significantly expand their capability of handling random variables with complicated nonlinear interactions and concentrated density functions. We demonstrate the efficiency of the resulting self-reinforced KR rearrangements on applications in statistical learning and uncertainty quantification, including parameter estimation for dynamical systems, PDE-constrained inverse problems, and rare event estimation. 
10:00 
Approximating distribution functions in uncertainty quantification using quasi-Monte Carlo methods. Abi Srikumar / UNSW Sydney. As high-dimensional problems become increasingly prevalent in many applications, the effective evaluation of these problems within the limits of our current technology poses a great hurdle due to the exponential increase in computational cost as dimensionality increases. One class of strategies for evaluating such problems efficiently is quasi-Monte Carlo (QMC) methods. Recently, the application of QMC methods to approximate expected values associated with solutions to elliptic partial differential equations with random coefficients in uncertainty quantification has been of great interest. In this talk, we look into extending this from the computation of expected values of functionals of the PDE solution to the approximation of distribution functions. This is done by reformulating these functions as expectations of an indicator function. However, due to the discontinuous nature of the indicator functions, we do not have an integrand that is conducive to obtaining the optimal rate of error convergence. We seek to alleviate this issue using preintegration, whereby we integrate out a single variable of the discontinuous function in order to obtain a function in one dimension less with a sufficient level of smoothness to apply QMC methods. Some theoretical results regarding the error bounds associated with such approximations and the results of numerical experiments will be presented. 
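A toy sketch of the indicator reformulation and the preintegration step described above: the function phi, the threshold tau, and all constants are illustrative assumptions by the editor, not from the talk. Integrating out the first Gaussian variable in closed form turns the discontinuous indicator into a smooth integrand with visibly lower variance.

```python
import numpy as np
from math import erf

Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0))))  # standard normal CDF

rng = np.random.default_rng(1)
n, d = 4000, 8                           # d Gaussian inputs; we preintegrate over the first
tau = 0.0
phi = lambda Z: 0.3 * Z.sum(axis=1)      # smooth function of the remaining d-1 variables

Z = rng.standard_normal((n, d))
# Plain estimator: discontinuous indicator of the event {Z_1 + phi(Z_2..d) <= tau}.
ind = (Z[:, 0] + phi(Z[:, 1:]) <= tau).astype(float)
# Preintegrated estimator: Z_1 integrated out exactly, leaving the smooth integrand Phi(tau - phi).
smooth = Phi(tau - phi(Z[:, 1:]))

print(ind.mean(), smooth.mean())         # both estimate P(Z_1 + phi <= tau), which is 0.5 here
```

Here plain Monte Carlo is used for simplicity; the talk's point is that the smoothed integrand is the one to which QMC rules can then be applied at their optimal rate.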
10:30 
Morning Tea 
11:00 
Quasi-Monte Carlo methods and discontinuous Galerkin. Andreas Rupp / LUT University. In this talk, we design and develop quasi-Monte Carlo (QMC) cubatures for nonconforming discontinuous Galerkin approximations of elliptic PDE problems with random coefficients. In particular, we are interested in using QMC cubatures to compute the response statistics (expectation and variance) of the discretized PDE problem, and we derive rigorous QMC convergence rates for this problem. 
11:30 
Periodic kernel-based high-dimensional approximation. Ian Sloan / University of New South Wales, Sydney, Australia. This talk describes a recent periodic-kernel-based QMC method for high-dimensional approximation. In this work, jointly with Vesa Kaarnioja, Yoshihito Kazashi, Frances Kuo and Fabio Nobile, an elliptic partial differential equation with a random input field is modelled in a non-standard way with periodic random variables, combined with kernel-based interpolation, giving a computational cost that grows merely linearly with dimensionality. The method is feasible even with hundreds of parameters in the input random field. 
12:00 
Theory and construction of lattice rules after preintegration for option pricing. Alexander Gilbert / UNSW Sydney. Since the 1990s it has been known that quasi-Monte Carlo (QMC) methods are extremely effective at computing the integrals required for pricing Asian options, despite the fact that the kink in the payoff function means that such problems are not smooth enough for the QMC theory to apply. A popular remedy is to first smooth the payoff by preintegration, which is a specific case of the more general method of conditional sampling. Here one first integrates the non-smooth payoff with respect to a specially chosen variable (or, equivalently, conditions on partial information), with the result being a smooth function, but now in one dimension less. In this talk we look at pricing an Asian option by performing preintegration and then applying a randomly shifted lattice rule to the result. By studying the regularity of the payoff after preintegration, we can choose appropriate weights for each specific Asian option problem, which in turn allows tailored randomly shifted lattice rules to be constructed using the component-by-component algorithm. We give rigorous error bounds with first-order convergence which are explicit in their dependence on dimension. Along the way I will also highlight a key step in the analysis: an equivalence between the Sobolev spaces used to analyse preintegration and the weighted spaces used to analyse randomly shifted lattice rules. 
12:30 
Lunch 
13:30 
The Double-Adaptive Tensor Product Multilevel Method. Rüdiger Kempf / University of Bayreuth. The Tensor Product Multilevel Method (TPML) is a combination of Smolyak's construction for a high-dimensional approximation operator with low-dimensional kernel-based multilevel schemes. This yields a high-order method which allows scattered data approximation on relatively arbitrary high-dimensional domains. Both building blocks of TPML have adaptive versions, which can be combined in the same way as their non-adaptive counterparts. This allows us to iteratively build the index set the Smolyak algorithm uses and/or adaptively choose the sites the multilevel scheme uses. We call the resulting version of TPML the Double-Adaptive Tensor Product Multilevel Method. In this talk we briefly review the basics of TPML and introduce our ideas for the double-adaptive TPML. If time permits, we show the numerical performance of the methods with the help of an example. 
14:00 
Tensor-based methods for sequential state and parameter estimation in state space models. Yiran Zhao / Monash University. Numerous real-world applications involve the filtering problem: one aims to sequentially estimate the states of a (stochastic) dynamical system from incomplete, indirect, and noisy observations over time to forecast and control the underlying system. In addition to the filtering problem, it is often of interest to estimate parameters that govern the evolution of the system. Both filtering and parameter estimation can be naturally formalized under the Bayesian framework. However, the Bayesian solution poses some significant challenges. For example, the most widely used particle filters can suffer from particle degeneracy, and the more robust ensemble Kalman filters rely on rather restrictive Gaussian assumptions. Parameter and state estimation in high-dimensional models remains largely unsolved. Exploiting the interplay between the low-rank tensor structure and the Markov property of the filtering problem, we present a tensor-train-based approach for tackling Bayesian filtering and parameter estimation altogether. Our approach aims at exact Bayesian solutions and does not suffer from particle degeneracy. Applications to several challenging numerical examples are presented. 
14:30 
Uniform recovery of periodic functions. David Krieg / Johannes Kepler University, Linz, Austria. We consider the problem of function recovery for classes of high-dimensional periodic and continuous functions in the uniform norm. We show that algorithms based on function values (samples) are as powerful as algorithms based on general linear information (like Fourier coefficients). Moreover, there is a simple algorithm that achieves almost the optimal error with high probability. We also discuss a result for classes of non-periodic functions. 
15:00 
Afternoon tea 
15:30 
Dynamical low-rank approximation of Vlasov-Poisson equations on polygonal spatial domains. Andreas Zeiser / HTW Berlin, Germany. We consider dynamical low-rank approximation (DLRA) for the numerical simulation of Vlasov-Poisson equations based on separation of space and velocity variables, as proposed in several recent works. The standard approach for the time integration in the DLRA model uses a splitting of the tangent space projector for the low-rank manifold according to the separated variables. It can also be modified to allow for rank-adaptivity. A less studied aspect is the incorporation of boundary conditions in the DLRA model. We consider a variational formulation of the projector splitting which allows us to handle inflow boundary conditions on piecewise polygonal spatial domains. Numerical experiments demonstrate the feasibility of this approach in principle. 
16:00 
The effects of minimising the training set fill distance in machine learning regression. Paolo Climaco / University of Bonn. Machine learning (ML) regression methods are powerful tools leveraging large datasets for nonlinear function approximation in high-dimensional domains. Unfortunately, learning from large datasets may be unfeasible due to computational limitations and the cost of producing labels for the data points, as in the case of quantum-chemistry applications. Therefore, an important task in scientific ML is sampling small training sets from large pools of unlabelled data points to maximise model performance while keeping the computational effort of the training and labelling processes low. In this talk, we analyse the advantages of a common approach for training set sampling that aims to minimise the fill distance of the selected set, which can be considered a measure of the data distribution. We provide theoretical results showing that minimising the training set fill distance reduces the worst-case approximation error of several ML models, which can be interpreted as an indicator of the robustness of a model's approximation. We support our theoretical results with experiments considering various ML techniques, such as kernel methods and neural networks. The application context is quantum chemistry, where different datasets have been generated to develop effective ML techniques and approximate the functions mapping high-dimensional representations of molecules to their physical and chemical properties. 
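Fill-distance minimisation of the kind discussed above is commonly approximated by greedy farthest-point sampling. The following sketch is an editorial illustration (not the speaker's code): it selects a subset from a random 2D pool and compares its fill distance against a uniformly random subset of the same size.

```python
import numpy as np

def fill_distance(pool, subset):
    """Max over pool points of the distance to the nearest selected point."""
    d = np.linalg.norm(pool[:, None, :] - subset[None, :, :], axis=2)
    return d.min(axis=1).max()

def farthest_point_sampling(pool, k, start=0):
    """Greedy selection: repeatedly add the pool point farthest from the chosen set."""
    chosen = [start]
    dist = np.linalg.norm(pool - pool[start], axis=1)   # distance to the chosen set
    for _ in range(k - 1):
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pool - pool[nxt], axis=1))
    return pool[chosen]

rng = np.random.default_rng(2)
pool = rng.random((500, 2))
greedy = farthest_point_sampling(pool, 20)
random_sub = pool[rng.choice(500, 20, replace=False)]
print(fill_distance(pool, greedy), fill_distance(pool, random_sub))
```

Greedy farthest-point sampling is a classical 2-approximation to the minimal fill distance, which is why it serves as a practical stand-in for the exact minimiser studied in the talk.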
16:30 
Learning Embeddings of Deformation Patterns of Surface Meshes. Jochen Garcke. The underlying dynamics and patterns of 3D surface meshes deforming over time can be discovered by unsupervised learning, especially autoencoders, which calculate low-dimensional embeddings of the surfaces. To study the deformation patterns of unseen shapes by transfer learning, we want to train an autoencoder that can analyze new surface meshes without training a new network. Here, most state-of-the-art autoencoders cannot handle meshes of different connectivity and therefore have limited to no generalization capacity to new meshes. Also, reconstruction errors strongly increase in comparison to the errors for the training shapes. To address this, we propose novel spatial and spectral CoSMA (Convolutional Semi-Regular Mesh Autoencoder) networks. 
Program
Time  Session 

09:30 
Quasi-Monte Carlo finite element approximation of the Navier-Stokes equations with initial data modeled by lognormal random fields. Guanglian Li / The University of Hong Kong. We analyze the numerical approximation of the Navier-Stokes problem over a polygonal domain in R², where the initial condition is modeled by a lognormal random field. This problem usually arises in the area of uncertainty quantification. We aim to compute the expectation of linear functionals of the solution to the Navier-Stokes equations and perform a rigorous error analysis for the problem. In particular, our method includes the finite element and fully discrete discretizations, a truncated Karhunen-Loève expansion for the realizations of the initial condition, and a lattice-based quasi-Monte Carlo (QMC) method to estimate the expected values over the parameter space. Our QMC analysis is based on randomly shifted lattice rules for the integration over the domain in high-dimensional space, which guarantees that the error decays with O(N^(-1+δ)), where N is the number of sampling points, δ > 0 is an arbitrarily small number, and the constant in the decay estimate is independent of the dimension of integration. 
10:00 
Sparse grid approximation of stochastic dynamic micromagnetics. Andrea Scaglioni / Vienna University of Technology. We consider the stochastic Landau-Lifschitz-Gilbert problem, an SPDE model for dynamic micromagnetism. We first convert the problem to a (highly nonlinear) PDE with parametric coefficients using the Doss-Sussmann transform and the Lévy-Ciesielsky parametrization of Brownian motion. We prove analytic regularity of the parameter-to-solution map and derive estimates of its arbitrary derivatives. We apply these estimates to prove dimension-independent algebraic convergence of a piecewise-polynomial sparse grid discretization. If time permits, we discuss space and time discretization and efficient multilevel strategies for large-scale simulation. 
10:30 
Morning tea 
11:00 
Reduced dynamics of the inhibitor-activator system. Erika Hausenblas / Montanuniversität Leoben, Austria. Chemical and biochemical reactions can exhibit surprisingly different behaviors, from multiple steady-state solutions to oscillatory solutions and chaotic behaviors. Such behavior has been of great interest to researchers for many decades. The Briggs-Rauscher, Belousov-Zhabotinskii, and Bray-Liebhafsky reactions, for which periodic variations in concentrations can be visualized by changes in color, are experimental examples of oscillating behavior in chemical systems. These types of systems are modeled by a system of partial differential equations coupled with a nonlinearity. However, analyzing the pattern, one may suspect that the dynamics are generated by only a finite number of spatial Fourier modes. In fluid dynamics, it is shown that for large times the solution is determined by a finite number of spatial Fourier modes, called determining modes. In the article, we first introduce the concept of determining modes and show that, indeed, it is sufficient to characterize the dynamics by only a finite number of spatial Fourier modes. 
11:30 
Vesa Kaarnioja / Free University of Berlin. Contemporary quasi-Monte Carlo (QMC) methods are based on tailoring specially designed cubature rules for high-dimensional integration problems. By exploiting the smoothness and anisotropy of an integrand, it is possible to achieve faster-than-Monte Carlo convergence rates. To this end, QMC methods have become a popular tool for the numerical treatment of partial differential equations involving random coefficients. Meanwhile, the goal in Bayesian optimal experimental design is to maximize the expected information gain for the reconstruction of unknown quantities when there is a limited budget for collecting measurement data. In this talk, we derive tailored QMC cubature rules to alleviate the computational burden associated with Bayesian optimal experimental design problems governed by partial differential equations. 
12:00 
The fixed vector randomised lattice algorithm for high-dimensional integration. Laurence Wilkes / KU Leuven. The recently developed fixed vector algorithm offers a very simple solution to the problem of producing a lattice-based randomised algorithm for numerical integration with the optimal randomised error. The essential idea of this algorithm is to shift the construction of the generating vector of the lattice to a precomputation, so that the only randomised element is the choice of the number of function evaluations from a suitable range. In the talk, we will look at the existence result for such a fixed generating vector, a method to construct the vector, and finally at how the algorithm can be generalised to work in the half-period cosine space and in a Sobolev space with a lower smoothness parameter. 
12:30 
Lunch 
13:30 
Excursion / Collaboration. Desks and meeting rooms available for collaboration. 
Program
Time  Session 

09:30 
Sparse functional representations for neuromedical imaging. Leo Lebrat / CSIRO. The study of numerous neurodegenerative diseases relies on the reconstruction and analysis of the brain cortices from magnetic resonance imaging (MRI). Traditional volumetric representations of these surfaces fail to capture fine-grained cortical foldings. In this talk, we will present memory-efficient functional representations for those surfaces and exhibit empirical evidence of their robustness to dataset shift. Leveraging analogous sparse mathematical objects, we will also present a novel explainability method allowing us to produce plausible counterfactual examples. [1] Lebrat, Leo, Santa Cruz, Rodrigo et al. "CorticalFlow: a diffeomorphic mesh transformer network for cortical surface reconstruction." Advances in Neural Information Processing Systems 34 (2021). [2] Santa Cruz, Rodrigo, Lebrat, Leo et al. "DeepCSR: a 3D deep learning approach for cortical surface reconstruction." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021. [3] Peters, Joshua, Lebrat, Leo et al. "DBCE: A Saliency Method for Medical Deep Learning Through Anatomically-Consistent Free-Form Deformations." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023. 
10:00 
High-dimensional inner products of SGD iterates in exponential family models resemble neural network kernels. Russell Tsuchida / Data61, CSIRO. We discuss a problem involving computing with point estimates of exponential family likelihoods in a high-dimensional regime. We present background on exponential families, a general and mathematically convenient family of probability distributions studied by probabilists, statisticians and machine learners alike. In classical applications, a dependent variable follows an exponential family likelihood with an expectation that is a linear function of the independent variable. In an unsupervised setting, the independent variable is learnt, possibly together with the linear function. We consider the setup of updating the independent variable using stochastic gradient descent (SGD) in such an unsupervised setting. We find that in a high-dimensional limit, the inner product between any two learnt independent variables can be updated exactly using a deterministic rule. This derived update rule resembles previously introduced infinitely wide neural network kernels. As part of our analysis, we are able to draw new connections between the statistician's inverse link function and the machine learner's activation function. We describe an interesting property of SGD in this high-dimensional limit: even though individual iterates are random vectors, inner products of any two iterates are deterministic and can converge to a unique fixed point as the number of iterates increases. Our update rule allows us to compute with infinite-dimensional independent variables using a popular class of machine learning algorithms called kernel methods. 
10:30 
Morning tea 
11:00 
Recovery from corrupted data: recent results for various models. Simon Foucart / Texas A&M University. The goal of Optimal Recovery is to uncover procedures that are worst-case optimal for the task of learning/recovering unknown functions from a priori information (the model) and a posteriori information (the observations). Optimal recovery procedures can sometimes be chosen as linear maps, at least when the observations are accurate. When they are corrupted, however, the problem becomes more complicated, especially if the focus is put on the practical construction of the (near) optimal procedure. This talk showcases positive results in three situations: deterministic observation errors bounded in ℓ2; deterministic observation errors bounded in ℓ1; and stochastic log-concave observation errors. 
11:30 
Integration and Function Recovery on Hermite Spaces. Michael Gnewuch / Osnabrück University. We consider spaces of functions of infinitely many variables that are defined with the help of all univariate Hermite polynomials, which (properly normalized) form an orthonormal basis of the space of all square-integrable functions over the real line endowed with the standard normal distribution. Those spaces belong to the class of reproducing kernel Hilbert spaces of increasing smoothness, and their elements are defined on proper subsets of the sequence space (i.e., the space of all real-valued sequences). Their norms are induced by an underlying function space decomposition, namely the infinite-dimensional ANOVA decomposition. We discuss further properties of those spaces and present sharp results on numerical integration and on function recovery. The talk is based on joint work with Mario Hefter, Aicke Hinrichs, Klaus Ritter and Robin Russmann. 
12:00 
A Space-Time Adaptive Low-Rank Method for High-Dimensional Parabolic PDEs
Manfred Faldum / RWTH Aachen University
In this talk, we present the construction and analysis of a space-time adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables. As in the existing adaptive low-rank method for elliptic problems [1], we use a perturbed Richardson iteration, applying two reduction operators to the iterates to keep both the support and the arising ranks of the low-rank approximations near-optimal for a given error tolerance. This perturbed Richardson iteration is applied to a bi-infinite matrix-vector problem based on the space-time variational formulation [2], which is equivalent to the parabolic initial boundary value problem. For the analysis of the method, we propose a new approximation class for the temporal operator, which is necessary due to the interaction between hierarchical tensor formats at different time indices. One of the main challenges is that the parabolic operator is an isomorphism with respect to spaces not endowed with a cross norm. Therefore, as in [1], we precondition operators in low-rank format by exponential sum approximations. The method is shown to converge and to satisfy complexity bounds similar to those of the existing adaptive low-rank methods for elliptic problems [1], establishing its suitability for parabolic problems on high-dimensional spatial domains; it does not suffer from the curse of dimensionality. The construction also yields computable, rigorous a posteriori error bounds for the total error depending on the activated basis functions and ranks in the approximation. The results are illustrated by numerical experiments for the heat equation in high dimensions, demonstrating practical efficiency.
[1] M. Bachmayr and W. Dahmen, Adaptive Low-Rank Methods: Problems on Sobolev Spaces, SIAM Journal on Numerical Analysis 54(2), pp. 744-796.
[2] C. Schwab and R. Stevenson, Space-Time Adaptive Wavelet Methods for Parabolic Evolution Problems, Mathematics of Computation.
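A toy version of such a perturbed iteration can be sketched as follows. This is my own illustrative construction, not the authors' method: a plain Richardson step followed by SVD rank truncation of the reshaped iterate, standing in for the rank-reduction operator of adaptive low-rank solvers.

```python
import numpy as np

def lowrank_richardson(A, b, shape, rank, omega, n_iter=100):
    """Richardson iteration with a rank-reduction operator (toy sketch):
    after each update, the iterate reshaped as a matrix is recompressed
    to a fixed rank by truncated SVD."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + omega * (b - A @ x)                          # Richardson step
        U, s, Vt = np.linalg.svd(x.reshape(shape), full_matrices=False)
        x = ((U[:, :rank] * s[:rank]) @ Vt[:rank]).ravel()   # rank truncation
    return x

# trivial SPD system 2*x = b whose solution is rank-1 when reshaped to 6x6
u, v = np.arange(1.0, 7.0), np.ones(6)
xstar = np.outer(u, v).ravel()
x = lowrank_richardson(2 * np.eye(36), 2 * xstar, (6, 6), rank=1, omega=0.4)
```

Because every iterate of this toy system is a multiple of the rank-1 solution, the truncation is exact and the iteration converges geometrically; the adaptive methods in [1] replace the fixed rank by tolerance-driven reduction operators with complexity guarantees.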
12:30 
Lunch 
13:30 
Energy and Discrepancy of Greedy Sequences on the Sphere
Ryan W. Matzke / Vanderbilt University
Greedy algorithms are surprisingly effective in a wide range of optimization problems, but have only recently been considered as a possible way to find point configurations with low discrepancy or energy. On the sphere and other manifolds, greedy algorithms minimizing certain discrepancies or energies at each step have been shown to produce uniformly distributed sequences. In this talk, we further quantify some of this behavior on the sphere, discussing in particular the effectiveness of greedily generated point configurations as minimizers of Riesz energies and the quadratic spherical cap discrepancy. In particular, we show that they are optimal for singular Riesz energies. This is based on joint work with Dmitriy Bilyk, Michelle Mastrianni, and Stefan Steinerberger.
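The greedy construction can be sketched in a few lines (an illustrative sketch under my own simplifications: the minimization at each step is over a finite random candidate set rather than the whole sphere, and the parameters are arbitrary):

```python
import numpy as np

def greedy_riesz_sphere(m, s=1.0, n_candidates=4000, seed=0):
    """Greedy sequence on S^2: each new point is the candidate minimising
    the Riesz s-energy sum_j |x - x_j|^(-s) against the points chosen so far."""
    rng = np.random.default_rng(seed)
    c = rng.normal(size=(n_candidates, 3))
    c /= np.linalg.norm(c, axis=1, keepdims=True)    # project onto the sphere
    pts = [c[0]]
    for _ in range(m - 1):
        # pairwise distances from every candidate to the chosen points
        d = np.linalg.norm(c[:, None, :] - np.asarray(pts)[None, :, :], axis=2)
        inv = np.maximum(d, 1e-12) ** (-s)
        inv[d < 1e-12] = np.inf                      # exclude already-chosen points
        pts.append(c[np.argmin(inv.sum(axis=1))])
    return np.asarray(pts)

pts = greedy_riesz_sphere(50)
```

Each step is a one-point energy minimization; the talk's results concern how well the resulting sequence does against the optimal m-point Riesz energy and spherical cap discrepancy.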
14:00 
High-Dimensional Optimization with a Novel Nonlocal Gradient
Hoang Tran / Oak Ridge National Laboratory
The problem of minimizing multimodal loss functions with a large number of local optima frequently arises in machine learning and model calibration. Since the local gradient points in the direction of steepest slope in an infinitesimal neighborhood, an optimizer guided by the local gradient is often trapped in a local minimum. To address this issue, we develop a novel nonlocal gradient that skips small local minima by capturing major structures of the loss landscape in black-box optimization. The nonlocal gradient is defined by a directional Gaussian smoothing (DGS) approach. The key idea of DGS is to conduct 1D long-range exploration with a large smoothing radius along d orthogonal directions in R^d, each of which defines a nonlocal directional derivative as a 1D integral. Such long-range exploration enables the nonlocal gradient to skip small local minima. The d directional derivatives are then assembled to form the nonlocal gradient. We use the Gauss-Hermite quadrature rule to approximate the d 1D integrals to obtain an accurate estimator. We provide a convergence theory in the scenario where the objective function is a convex function perturbed by a highly oscillating, deterministic noise. We prove that our method converges exponentially to a tightened neighborhood of the solution, whose size is characterized by the noise wavelength. The superior performance of our method is demonstrated in several high-dimensional benchmark tests, machine learning and model calibration problems.
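The DGS estimator described above can be sketched as follows (an illustrative reimplementation from the abstract's description, with my own parameter choices; the coordinate axes stand in for a general orthonormal frame):

```python
import numpy as np

def dgs_gradient(f, x, sigma=1.0, n_quad=7):
    """Directional-Gaussian-smoothing (DGS) gradient estimator (sketch).
    Each component is a Gaussian-smoothed 1D directional derivative,
    approximated by Gauss-Hermite quadrature."""
    d = len(x)
    # probabilists' Gauss-Hermite nodes/weights for the weight exp(-v^2/2)
    v, w = np.polynomial.hermite_e.hermegauss(n_quad)
    w = w / np.sqrt(2 * np.pi)
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0
        vals = np.array([f(x + sigma * vj * e) for vj in v])
        # smoothed derivative along e_i:  E[v * f(x + sigma*v*e_i)] / sigma
        grad[i] = (w * vals * v).sum() / sigma
    return grad

g = dgs_gradient(lambda z: (z**2).sum(), np.array([1.0, -2.0]))
# for a quadratic the smoothed gradient matches the exact gradient: ≈ [2., -4.]
```

A large smoothing radius sigma is what lets the estimator average over small oscillations of the loss and follow its large-scale structure instead.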
14:30 
Preparing Hamiltonians for Quantum Simulation: A Computational Framework for Cartan Decomposition via Lax Dynamics
Moody Chu / North Carolina State University
Simulating the time evolution of a Hamiltonian system on a classical computer is hard: the computational power required even to describe a quantum system scales exponentially with the number of its constituents, let alone to integrate its equations of motion. Hamiltonian simulation on a quantum machine is a possible solution to this challenge. Assuming that a quantum system composed of spin-1/2 particles can be manipulated at will, it is tenable to engineer the interaction between those particles according to the one that is to be simulated, and thus predict the value of physical quantities by simply performing the appropriate measurements on the system. Establishing a linkage between the unitary operators described mathematically as a logic solution and the unitary operators recognizable as quantum circuits for execution is therefore essential for algorithm design and circuit implementation. Most current techniques are fallible because of truncation errors or stagnation at local solutions. This work offers an innovative avenue by tackling the Cartan decomposition with the notion of Lax dynamics. With controllable integration error, this approach gives rise to a genuine unitary synthesis which not only is numerically feasible, but also can be used to gauge the quality of results produced by other means, and extends the knowledge to a wide range of applications. This work aims at establishing the theoretical and algorithmic foundations by exploiting the geometric properties of Hamiltonian subalgebras and describing a common mechanism for deriving the Lax dynamics.
15:30 
Afternoon tea 
16:00 
MSI Colloquium - Analytical and numerical methods in shape optimization
Helmut Harbrecht / Department of Mathematics and Computer Science, University of Basel, Switzerland
Shape optimization is indispensable for designing and constructing industrial components. Many problems that arise in applications, particularly in structural mechanics and in the optimal control of distributed parameter systems, can be formulated as the minimization of functionals which are defined over a class of admissible domains. The application of gradient-based minimization algorithms involves the shape functionals' derivative with respect to the domain under consideration. Such derivatives can be computed analytically by means of shape calculus and enable the paradigm "first optimize, then discretize". In particular, by identifying the sought domain with a parametrization of its boundary, the solution of the shape optimization problem becomes equivalent to solving a nonlinear pseudodifferential equation for the unknown parametrization. The present talk surveys analytical and numerical methods for shape optimization. In particular, besides several applications of shape optimization, the following items will be addressed:
• first- and second-order optimality conditions
• discretization of shapes
• existence and convergence of approximate shapes
• efficient numerical techniques to compute the state equation
18:00 
Workshop Dinner
Venue: Ovolo Nishi / Address: Ground/25 Edinburgh Ave, Canberra ACT 2601 / Time: Arrival from 6pm, dinner will be served at 6:30pm / A bar tab is available; once it is finished, participants are to purchase their own drinks.
All day
Desks and meeting rooms available for collaboration.
Location
Seminar Room 1.33 & 1.37,
Hanna Neumann Building #145, Science Road,
The Australian National University
General information
Information about Canberra, visas, transportation, accommodation, emergency, etc.
About Canberra
Canberra is located in the Australian Capital Territory, on the ancient lands of the Ngunnawal people, who have lived here for over 20,000 years. Canberra’s name is thought to mean ‘meeting place’, derived from the Aboriginal word Kamberra. European settlers arrived in the 1830s, and the area was selected by ballot as the site of the federal capital in 1908. Since then the ‘Bush Capital’ has grown to become the proud home of the Australian story, with a growing population of around 390,000.
Canberra hosts a wide range of tourist attractions, including various national museums, galleries and Parliament House, as well as beautiful parks and walking trails. Several attractions are within walking distance of the ANU campus, including the National Museum of Australia and the Australian National Botanic Gardens. Canberra is also a fantastic base from which to explore the many treasures of the surrounding region, including historic townships, beautiful coastlines and the famous Snowy Mountains. Learn more about what to do and see during your stay in Canberra.
Visas
International visitors to Australia require a visa or an electronic travel authority (ETA) prior to arrival. It is your responsibility to ensure documentation is correct and complete before you commence your journey. Visit the Australian Immigration and Citizenship website for information about visas and ETAs.
Transportation
There are many ways to get around Canberra. Below is some useful information about Bus & Taxi transport around the ANU, the Airport and surrounding areas.
Taxi
If you are catching a taxi or Uber to the ANU Mathematical Sciences Institute, ask to be taken to Building #145, Science Road, ANU. We are located close to the Ian Ross Building and the ANU gym. A taxi from the airport will usually cost around $40 and will take roughly 15 minutes. Pricing and time may vary depending on traffic.
Phone Canberra Elite Taxis on 13 22 27 to book a taxi.
Buses
Canberra buses are a cheap and easy way of getting around town once you're here. View bus services and fares information on the Transport Canberra website.
To travel, just use your MyWay card or pay a cash fare to the driver when boarding. A single adult trip when paying cash will cost $4.80, with cheaper fares for students and children. Significant savings can be made when travelling with MyWay. View MyWay and Fares information.
Canberra Airport
The ACT government operates a public bus service between the CBD and Canberra Airport via bus Route 3, seven days a week.
For more information about buses to Canberra Airport, see the Transport Canberra website.
Accommodation
Below are some accommodation options for your visit to Canberra.
Emergency Information
In case of an emergency, please evacuate to the lawns on the southern side of the building (colloquially known as "University Avenue").
We have dedicated first aiders on staff; please seek them out if you require assistance. There is also a first aid room on the first floor, behind the women's bathroom, which contains basic supplies.