# Challenges in high performance computing

Scientific computing is often described as the "third way to do science", alongside theory and experiment. The focus of the workshop is to investigate the current challenges of solving large-scale problems on high-performance computers. To achieve optimal performance it is critical to incorporate techniques at the forefront of both the mathematical and computer sciences. Consequently, the workshop has a strong multidisciplinary focus covering five important areas: Algorithms, Applications, Middleware, Resilience and Software.

Each day of the conference will address one of these topics. A review lecture will be given in the morning by an eminent researcher in that area. Participants are invited to give more specialised talks in the afternoon, followed by a discussion session.

The major aim of the workshop is to foster cooperation and communication between members of each of these five different communities, as well as to strongly encourage student participation.

This event is part of the MSI Special Year 2019 in Computational Mathematics.

### Invited Speakers:

- Algorithms: David Keyes, King Abdullah University of Science and Technology
- Resilience: Ulrich Ruede, Friedrich-Alexander-University of Erlangen-Nuremberg
- Software: Lois Curfman McInnes, Argonne National Laboratory
- Applications: Raquel Salmeron, Airservices Australia
- Middleware: George Bosilca, University of Tennessee

### Organising committee:

- Brendan Harding, University of Adelaide
- Stuart Hawkins, Macquarie University
- Lilia Ferrario, Australian National University
- Linda Stals, Australian National University
- Peter Strazdins, Australian National University

### Welcome reception

Come and meet the other participants at the conference welcome reception. The reception will be held on Sunday 1st September from 4pm at the Mathematical Sciences Building, ANU. Food and drinks will be provided.

### Women in Computational Mathematics luncheon

The conference will host a Women in Computational Mathematics pizza lunch on Wednesday 4th September to celebrate women in the field.

### Abstract submission

Abstract submission is now closed.

### Code of conduct

The Mathematical Sciences Institute (MSI) special year is committed to ensuring all workshops, conferences and seminars are accessible to a diverse range of participants. We aim to create a safe, respectful and supportive environment to allow free flow of information, discussions and ideas. All staff and students have the right to be treated with courtesy, fairness and professionalism. Discriminatory or harassing behaviour will not be tolerated.

An essential part of maintaining a safe and respectful work environment is ensuring that individuals bring any witnessed or experienced discrimination or harassment to the organisers' attention, or to a member of staff they feel comfortable talking to. If you would like to contact the department anonymously, please email admin.research.msi@anu.edu.au.

We ask all participants to review the ANU code of conduct and to uphold its principles for the duration of the workshop.

https://policies.anu.edu.au/ppl/document/ANUP_000388

### Conference dinner and excursion

The conference dinner will be held on Wednesday 4th September at the Australian National Botanic Gardens. A guided tour of the gardens will be organised for participants before the dinner.

Attendance at the conference dinner and excursion incurs a $20 fee, which will be charged through the registration page.

## Sessions

Time | Session |
---|---|
4pm | Welcome reception |

Time | Session | |
---|---|---|
8:30am | Registration | |

9am | High Performance Computing - Air Traffic Management Raquel Salmeron, Airservices Australia High performance computing (HPC) is widely used in virtually all branches of industry and science, in order to advance competitiveness, generate new knowledge and propel the rapid technological developments of the modern world. HPC is being used to address key social challenges such as health care, climate change, public safety and the impacts of weather, to name just a few examples. Indeed, HPC may be transforming the scientific method itself by opening up new avenues of discovery not available via experimentation. High performance computing is crucially important for the aviation industry, as it faces the most disruptive decade since the first powered flight over a century ago. New types of aircraft and autonomous navigation systems, from drones to flying vehicles, will add to an increasingly complex and congested air traffic network. Vast amounts of data will need to be collected and analysed to drive operational efficiency and effectiveness, with insight-driven value propositions becoming the standard. In my talk I will first briefly discuss high performance computing as an enabler of key developments and discoveries in different environments, as well as the benefits and drivers of new advances in performance. I will then discuss current initiatives in Airservices Australia to harness the power of HPC to deliver new, innovative products and services, and create value for the aviation industry. | |

10:15am | Morning tea | |

10:45am | Case Studies in the GPU Acceleration of Two Earth Sciences Applications Peter Strazdins, Australian National University (Download slides) Modern graphics processing units (GPUs) have become powerful and cost-effective computing platforms. Parallel programming standards (e.g. CUDA) and directive-based programming standards (like OpenHMPP and OpenACC) are available to harness this tremendous computing power to tackle large-scale modelling and simulation in scientific areas. In this talk, we give a brief overview of GPUs and their programming, followed by our experiences in accelerating the following two applications. ANUGA is a tsunami modelling application which is based on unstructured triangular meshes and implemented in Python/C. We found that host-device data transfer overheads necessitated an advanced approach where all key data structures are mirrored on the host and the device. This in turn requires systematic debugging infrastructure, and to this end we developed a generic Python-based implementation of the relative debugging technique. Our CUDA version of ANUGA achieved a 28x speedup, and the OpenHMPP version achieved 16x; in terms of productivity, however, OpenHMPP achieved significantly better speedup per hour of programming effort. The HYSPLIT air concentration model is an operational Lagrangian trajectory and dispersion model that calculates the concentration or the distribution of pollutants by releasing and tracking particles or puffs. The model had to be non-trivially analysed, profiled and extensively refactored to remove barriers to parallelization. We found that host-device transfer overhead and low GPU utilization were limiting factors, and that these could be alleviated by GPU coarse-grained parallelism, with a maximum 12.9x speedup. | |

12pm | Lunch | |

1:30pm | An overview of the fault tolerant combination technique Brendan Harding, University of Adelaide The sparse grid combination technique is a powerful method for approximating solutions to high dimensional problems. It essentially involves the computation of the same problem on several coarse anisotropic grids which are then combined to approximate a full/fine grid solution. Apart from significantly reducing the cost of high dimensional approximation, it also features an additional layer of parallelism which can improve scalability on high performance computers. The fault tolerant combination technique is a generalisation of this method which allows accurate solutions to be recovered in the event of node failures without the need for a checkpoint-restart mechanism. The key is to exploit inherent redundancies within the method, and even add a few more, so that a full/fine grid can be approximated in a large number of ways independent of any one coarse approximation. I'll describe the method and a couple of different ways the combination coefficients can be determined. | |
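As a concrete illustration of the combination idea, here is a sketch of the classical two-dimensional combination formula (my own illustration of the standard technique, not code from the talk): the level-n combination adds the solutions from grids on the diagonal i + j = n and subtracts those on i + j = n - 1.

```python
# Classical 2D sparse grid combination formula (illustrative sketch):
#     u_n  ~=  sum_{i+j=n} u_{i,j}  -  sum_{i+j=n-1} u_{i,j}
# where u_{i,j} is the solution computed on a coarse anisotropic grid
# with roughly 2^i x 2^j cells.

def combination_coefficients(n):
    """Return {(i, j): coefficient} for the level-n 2D combination."""
    coeffs = {}
    for i in range(n + 1):
        coeffs[(i, n - i)] = 1       # grids on the diagonal i + j = n
    for i in range(n):
        coeffs[(i, n - 1 - i)] = -1  # grids on the diagonal i + j = n - 1
    return coeffs

def combine(component_solutions, n):
    """Combine per-grid solutions (scalars here; whole fields in practice)."""
    coeffs = combination_coefficients(n)
    return sum(coeffs[k] * component_solutions[k] for k in coeffs)
```

The fault tolerant variant discussed in the talk exploits the fact that other coefficient sets also yield valid combinations, so when a component grid is lost the coefficients can be recomputed rather than restarting from a checkpoint.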

2pm | Preconditioners for a saddle point problem for use with the multigrid method Shilu Feng, Australian National University Preconditioning techniques form a crucial part of building efficient solvers for saddle point problems, and are often combined with Krylov subspace methods. One such saddle point system arises from the discretisation of a thin plate spline, a popular data fitting technique. This system is poorly conditioned owing to a penalty term. We consider some common saddle point preconditioners, and also propose a new, problem-dependent preconditioner for use with the multigrid method that facilitates the smoothing effect. | |

2:30pm | Topic: Coding in academia Panel Session | |

3:30pm | Afternoon tea | |

4pm | Dynamic earthquake rupture simulations on non-planar faults embedded in 3D heterogeneous elastoplastic solids Kenneth Duru, Australian National University Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of elastic-plastic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation with off-fault plasticity on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates, we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. Drucker-Prager plasticity is added as a source term and updated using the return-mapping algorithm. 
We will present numerical experiments using both slip-weakening and rate-and-state friction laws on non-planar faults, including recent benchmark problems proposed by Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Project. We also show simulations on fractal faults reveali | |

4:30pm | Digital Earth Australia: Scaling machine learning to Petascale datasets Dale Roberts, Australian National University I will give an overview of the Digital Earth Australia (DEA) initiative that I have been part of for the last 6 years in collaboration with Geoscience Australia. The aim is to use satellite data to detect physical changes across Australia in unprecedented detail, such as identifying soil and coastal erosion, crop growth, water quality and changes to cities and regions. I will discuss our results and also the challenges of developing operational algorithms that scale to some of the largest spatio-temporal datasets in existence. |

Time | Session | |
---|---|---|
9am | Hierarchical Algorithms on Hierarchical Architectures David Keyes, King Abdullah University of Science and Technology (Download slides) Some algorithms, such as multigrid, achieve optimal arithmetic complexity but have low arithmetic intensity (operations per byte moved). Others, such as dense Gaussian elimination, possess high arithmetic intensity but lack optimal complexity. A special group of algorithms, Fast Multipole and its H-matrix generalizations, realizes a combination of optimal complexity and high intensity. Hierarchically low-rank linear algebra is bringing about a renaissance in linear algebra, offering data sparsity to problems formally defined as dense, and thus significantly increasing the range of problem sizes that can be accommodated in (among others) integral equations, covariance matrices in statistics, and Hessians in optimization. Implemented with task-based dynamic runtime systems, these hierarchical methods also have potential for relaxed synchrony, which is important for future energy-austere architectures, since there may be significant nonuniformity in processing rates of different cores even if task sizes can be controlled. We describe modules of KAUST's Hierarchical Computations on Manycore Architectures (HiCMA) software toolkit that illustrate these features and are intended as building blocks of more sophisticated applications, such as matrix-free higher-order methods in optimization. HiCMA's target is hierarchical algorithms on emerging architectures, which have hierarchies of their own that generally do not align with those of the algorithm. Some modules of this open source project have been adopted in the software libraries of major vendors. | |
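The "data sparsity" of formally dense problems can be shown with a deliberately simple toy (my own sketch, not HiCMA code): a separable kernel gives an exactly rank-1 matrix, so a dense matrix-vector product collapses to linear work and storage.

```python
import math

# The formally dense n x n matrix K[i][j] = exp(x_i) * exp(y_j) is exactly
# rank 1, so it can be stored as two length-n vectors and applied to a
# vector in O(n) work instead of O(n^2).

def dense_matvec(x, y, v):
    """Apply K to v the naive way: O(n^2) work and storage."""
    n = len(v)
    return [sum(math.exp(x[i]) * math.exp(y[j]) * v[j] for j in range(n))
            for i in range(n)]

def lowrank_matvec(x, y, v):
    """Apply K to v via the rank-1 factorisation: O(n) work."""
    s = sum(math.exp(yj) * vj for yj, vj in zip(y, v))  # one inner product
    return [math.exp(xi) * s for xi in x]               # one scaling pass
```

H-matrix methods generalize this observation: off-diagonal blocks of many dense operators are numerically low rank even when the full matrix is not, which is what makes the complexity and intensity combination described above possible.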

10:15am | Morning tea | |

10:45am | David Keyes, King Abdullah University of Science and Technology (Download slides) | |

12pm | Lunch | |

1:30pm | Application of HPC for stochastic wave propagation models Stuart Hawkins, Macquarie University We present some recent results on the application of HPC for stochastic wave propagation models in which three dimensional PDEs must be solved thousands of times for different values of parameters governing the PDE. The PDE models a wave propagating through a medium described by the parameters. Pertinent examples include clouds of particles whose positions are uncertain, or media containing particles whose shape or composition is uncertain. The stochastic models facilitate uncertainty quantification of particular quantities of interest, or application of Bayesian techniques to recover model parameters from measured data. Solving the wave propagation PDE for a single set of fixed parameters is computationally challenging in its own right, and solving thousands of such PDEs requires careful implementation on large scale HPC platforms. | |

2pm | Large-scale Applications made Fault-tolerant using the Sparse Grid Combination Technique Peter Strazdins, Australian National University Many petascale and exascale scientific simulations involve the time evolution of systems modelled as Partial Differential Equations (PDEs). The sparse grid combination technique (SGCT) is a cost-effective method for solving time-evolving PDEs, especially for higher-dimensional problems. It consists of evolving the PDE over a set of grids of differing resolution in each dimension, and then combining the results to approximate the solution of the PDE on a grid of high resolution in all dimensions. It can also be extended to support algorithm-based fault tolerance, which is also important for computations at this scale. In this talk, we first present two new parallel algorithms for the SGCT that support full distributed memory parallelization over the dimensions of the component grids, as well as over the component grids themselves. The direct algorithm is so called because it directly implements an SGCT combination formula. The second algorithm converts each component grid into its hierarchical surpluses, and then uses the direct algorithm on each of the hierarchical surpluses. An analysis indicates that the direct algorithm minimizes the number of messages, whereas the hierarchical surplus algorithm offers a reduction in bandwidth by a factor of 1-2^{-d}, where d is the dimensionality of the SGCT. However, this is offset by its incomplete parallelism and load imbalance in practical scenarios. Experimental results indicate that, for scenarios of practical interest, both are sufficiently scalable to support the large-scale SGCT, but the direct algorithm has generally better performance. We then present how this algorithm was used to make three pre-existing large-scale applications fault-tolerant using this technique. These are the GENE gyrokinetic plasma, Taxila Lattice Boltzmann Method, and Solid Fuel Ignition applications. 
We use an alternate component grid combination formula by adding some redundancy on the SGCT to recover data from lost processes. User Level Failure Mitigation (ULFM) MPI is used to recover the processes (and communica | |

2:30pm | Topic: Code validation Panel Session | |

3:30pm | Afternoon tea | |

4pm | Sparse-Grid-Based Uncertainty Quantification Applied to Tsunami Run-up and Inundation Stephen Roberts, Australian National University Given a numerical simulation, the objective of uncertainty quantification is to provide an output distribution for a quantity of interest given a distribution of uncertain input parameters. However, exploring this output distribution using, for instance, a Monte Carlo strategy requires a high number of numerical simulations, which can make the problem impracticable within a given computational budget. A well-known approach to reduce the number of required simulations is to construct a surrogate, which — based on a set of training simulations — can provide an inexpensive approximation of the simulation output for any parameter configuration. To further reduce the total cost of the simulations, we can introduce alternative sampling strategies such as sparse grid sampling, which can lead to a substantial cost reduction in the construction of a surrogate. An additional strategy is to augment a reasonably small number of high-resolution training simulations with many cheap low-resolution simulations. This technique can lead to orders of magnitude increase in efficiency in the construction of surrogate models with reasonably high (8-15) dimensional input parameter spaces. In this talk I will present some methods based around sparse grid approximation for producing efficient surrogate models and demonstrate these methods applied to quantifying the uncertainty in the height and extent of tsunami inundation, which has application in evacuation planning. | |
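A minimal sketch of the surrogate idea (hypothetical model and names, not the tsunami code): sample an "expensive" model at a few training points, fit a cheap interpolant, then direct the Monte Carlo sampling at the surrogate rather than the simulator.

```python
import bisect
import random

def expensive_model(p):
    """Stand-in for a full simulation run (hypothetical toy model)."""
    return (p - 0.3) ** 2

train = [i / 8 for i in range(9)]            # 9 training "simulations"
values = [expensive_model(p) for p in train]

def surrogate(p):
    """Piecewise-linear interpolation of the training runs."""
    i = min(max(bisect.bisect_right(train, p) - 1, 0), len(train) - 2)
    t = (p - train[i]) / (train[i + 1] - train[i])
    return (1 - t) * values[i] + t * values[i + 1]

# 10,000 Monte Carlo queries cost 10,000 cheap surrogate evaluations,
# but only 9 expensive simulations.
random.seed(0)
samples = [surrogate(random.random()) for _ in range(10_000)]
mean = sum(samples) / len(samples)  # estimate of E[f(P)] for P ~ U(0, 1)
```

Sparse grid sampling plays the role of the uniform training grid here: in 8-15 dimensions a full tensor grid of training runs is unaffordable, and sparse grids keep the number of expensive simulations tractable.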

4:30pm | The HPC age from a plasma theorist’s perspective Robert Dewar, Australian National University The HPC age from a plasma theorist’s perspective: “Computing is for insight, not numbers” --- Richard Hamming After a brief historical tour of the “high-performance” computers I have run across in my career (starting with CSIRAC), I focus on the challenge of understanding magnetohydrodynamic equilibrium and stability of toroidal plasmas using a symbiotic combination of numerical analysis, computational implementation, and mathematical analysis. |

Time | Session | |
---|---|---|
9am | Middleware George Bosilca, University of Tennessee (Download slides) | |

10:15am | Morning tea | |

10:45am | Middleware George Bosilca, University of Tennessee (Download slides) | |

12pm | Lunch | |

1:30pm | Equation-free toolbox for multiscale modelling Judy Bunder, University of Adelaide Numerical experimentation is increasingly used as a predictive tool in engineering and science, but often the known microscopic model is too complex to permit a full solution within a realistic time. The ‘Equation-free Toolbox’ (https://github.com/uoa1184615/EquationFreeGit) implements patch dynamics and projective integration to compute the microscopic model, but only within sparsely separated patches which are coupled across unsimulated space and time. Reducing the domain to coupled patches decreases the simulation time and permits an efficient and accurate simulation of the emergent macroscale dynamics, leading to improved prediction and understanding of the significant features of complex microscale systems at the scale relevant to engineers and scientists. In this presentation I will discuss the theory underlying the Equation-free Toolbox and present a practical guide to implementation. | |

2pm | Status and challenges of the stepped pressure equilibrium code Zhisong Qu, Australian National University The magnetic field line is a dynamical system which can easily have non-integrable solutions. To solve the plasma force balance equation which gives the magnetic field in 3D, one has to consider the co-existence of flux surfaces, islands at rational rotation numbers, and chaos arising from the interaction between islands. Instead of attempting to capture the fractal structure of chaos, we seek a “weak” solution in which the volume is partitioned into sub-domains separated by a set of Kolmogorov–Arnold–Moser (KAM) surfaces. Within each volume the magnetic field is a Beltrami field. Between the volumes, force balance is satisfied. This leads to the development of the Stepped Pressure Equilibrium Code, or SPEC for short. In this talk, we will describe the numerical algorithm used by SPEC. The Beltrami field within each volume is solved using a regular Galerkin method with a Chebyshev-Fourier basis. The force balance condition between volumes, however, requires a nonlinear optimizer, which is causing problems. We will show a few examples to explain the current challenges, the possible causes, and ideas for improvement. | |

2:30pm | Break out session | |

3:30pm | Afternoon tea | |

4pm | Excursion | |

6pm | Conference dinner |

Time | Session | |
---|---|---|
9am | Extreme Scale Resilient Multigrid Solvers Ulrich Ruede, Friedrich-Alexander-University of Erlangen-Nuremberg (Download slides) Multigrid methods can solve discretized PDEs with a cost that is proportional to the number of unknowns. This algorithmic scalability is required to design scalable solvers, i.e. parallel methods that can scale up to solving very large systems. Indeed, we will demonstrate that multigrid can solve geophysical flow problems leading to more than ten trillion (10^13) degrees of freedom. This means that the solution vector alone requires 80 TBytes of memory in double precision, and consequently storing the finite element stiffness matrix would exceed the memory capacity of even the largest computers on the planet. Matrix-free techniques are therefore a must. On future exascale systems, it is also expected that faults will become more frequent. Clearly, for computations this large, classical resilience techniques, such as checkpoint-restart, will become very expensive. We will therefore show that the data lost by hard faults can be recovered algorithmically, and that only minimal delay is caused by such faults when the recovery is executed asynchronously and the local recovery is accelerated by a superman strategy. | |

10:15am | Morning tea | |

10:45am | Adaptive refinement recovery after fault simulation Linda Stals, Australian National University We will present a parallel adaptive multigrid method that uses dynamic data structures to store a nested sequence of meshes and the iteratively evolving solution. After a fault, the data residing on the faulty processor will be lost. However, with suitably designed data structures, the neighbouring processors contain enough information so that a consistent mesh can be reconstructed in the faulty domain with the goal of resuming the computation without having to restart from scratch. | |

11:15am | Building simple lumped hydrological models from complex distributed models Barry Croke, Australian National University Modelling water resources generally involves estimation of streamflows at specific locations (e.g. inflow into dams, or at key ecological sites, …). This is most efficiently done using lumped models that treat the entire catchment as a single unit. Such models are based on the signals that can be seen in the available weather and streamflow data. The problem is the high level of uncertainty in the observations, which limits our ability to uniquely define suitable functional forms for the key processes involved. This leads to very simple models that have good precision, but not necessarily good accuracy. Complex spatially distributed models like HydroGeoSphere attempt to reproduce the spatial variability and processes in much more detail, at the cost of many more parameters needing to be calibrated. This leads to good accuracy, but poor precision. This project aims to develop a virtual lab based on HydroGeoSphere models of synthetic and real catchments, and then explore the use of model emulation methods (e.g. Gaussian Processes, Polynomial Chaos Expansion) to build new structures for lumped models. The performance of such models will then be compared with existing lumped models to determine whether the new models perform better than the existing models. | |

12pm | Lunch | |

1:30pm | Resilience in the Asynchronous Partitioned Global Address Space Programming Model Josh Milthorpe, Australian National University The asynchronous partitioned global address space (APGAS) programming model provides a simple way to decompose a distributed computation into nested task groups, each of which is managed by a ‘finish’ that signals the termination of all tasks within the group. Ensuring correct operation of application programs in the presence of hard failures requires resilience of both control and data. We will discuss a multi-resolution approach to resilience in the APGAS model, in which high-level, productive frameworks for resilient application programming are composed from efficient lower-level constructs. For common classes of scientific application codes, this approach provides resilience with low runtime overhead and minimal programmer effort. | |

2pm | Optimizing workflow scheduling and capacity management of high performance cycling systems Paul Leopardi, Bureau of Meteorology Many national or regional weather forecasting services run high performance computing facilities that feature a largely predictable cycling workflow. The workflow is made up of operational numerical weather forecasting suites. These suites consist of tasks that consume observations and previous forecasts, and produce new forecasts. This workflow is usually scheduled by two levels of scheduler: a higher level scheduler that is aware of suites and schedules tasks according to time and precedence constraints, and a lower level scheduler that schedules tasks according to resource capacity constraints. In this context, suites are occasionally replaced by a new suite with improved forecast skill and greater resource requirements. This talk outlines the optimization problems involved in introducing a new suite into such a workflow. | |

2:30pm | Topic: Future directions Panel Session | |

3:30pm | Afternoon tea | |

4pm | Error indicators of discrete thin-plate splines Lishan Fang, Australian National University The thin-plate spline is a technique for interpolating and smoothing surfaces over scattered data in many dimensions. It is a type of polyharmonic spline that appears in various applications, including image processing and correspondence recovery. It has some favourable properties, such as being insensitive to noise in the data. One major limitation of the thin-plate spline is that the resulting system of equations is dense and its size depends on the number of data points, which is impractical for large datasets. A discrete thin-plate spline smoother has been developed to approximate the thin-plate spline with piecewise linear basis functions. The resulting system of equations is sparse and its size depends only on the number of nodes in the finite element grid. | |

4:30pm | The combination technique applied to the computation of quantities of interest in GENE Markus Hegland/Yuancheng Zhou, Australian National University We will discuss how to compute a special kind of high dimensional integral using the sparse grid combination technique. Many physical quantities in GENE (Gyrokinetic Electromagnetic Numerical Experiment) are integrals of this form. We will show how to improve the computation of these quantities without changing the legacy code GENE. Two different sparse grid combination techniques, based on different error splitting models, are used in the computation. |

Time | Session | |
---|---|---|
9am | Toward Community Software Ecosystems for High-Performance Computational Science Lois Curfman McInnes, Argonne National Laboratory (Download slides) Software---cross-cutting technology that connects advances in mathematics, computer science, and domain-specific science and engineering---is a cornerstone of long-term collaboration and progress in computational science and engineering (CSE). As we leverage unprecedented high-performance computing resources to work toward predictive science, software complexity is increasing due to multiphysics and multiscale modeling, the coupling of simulations and data analytics, and the demand for greater reproducibility and sustainability, all in the midst of disruptive architectural changes. Applications increasingly require the combined use of independent software packages, whose development teams have diverse sponsors, priorities, software engineering expertise, and processes for development and release. The developers of open-source scientific software are increasingly encouraging community contributions and considering more effective strategies for connections among complementary packages. In this presentation I will discuss work toward broader software interoperability and scientific software ecosystems needed to support next-generation CSE. | |

10:15am | Morning tea | |

10:45am | Why Computers Lie Badly At Alarming Speed and the Unum Promise Lev Lafayette, University of Melbourne The translation of arithmetic to physical hardware using the numerical representation employed by the IEEE standard is fraught with difficulty. As is well known to anyone who has used even a pocket calculator, computer processors are imprecise, with dangerous rounding errors which vary on different systems. Further, the standard representation method, IEEE 754 "Standard for Floating-Point Arithmetic" (1985, revised 2008), is extremely inefficient from an engineering perspective, with increasing physical cost when additional precision is sought. The basic issue is the limitations in converting decimal or floating point notation into binary form. The IEEE standard suggests that when a calculation overflows the value +inf should be used instead, and when a number is too small the standard says to use 0 instead. Inserting infinity to represent "a very big number" or 0 to represent a "very small number" will certainly cause computational issues. Floating point operations have additional issues when employed in parallel, breaking associativity: the expression (a + b) + (c + d) evaluated in parallel will not, in general, equal ((a + b) + c) + d evaluated in serial. These issues have been known in computer science for some decades (Goldberg, 1991). In recent years an attempt has been made to reconstruct the physical implementation of arithmetic by providing a superset of IEEE's 754 standard and IEEE 1788, Standard for Interval Arithmetic. This number format, the Unum (Gustafson, 2015), consists of a bit string of variable length with six sub-fields: a sign bit, exponent, fraction, uncertainty bit, exponent size, and fraction size. The uncertainty bit, or ubit, specifies whether or not there are additional bits after the fraction, instead of rounding; in other words, a precise interval. 
This means that numbers that are close to zero or infinity are treated as such and are never represented as zero or infinity. To date, Unums have not been translated into hardware as they require more logic than floati | |
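The associativity and overflow behaviour described in the abstract is easy to reproduce in any IEEE 754 double-precision environment, for example:

```python
# Reproducing the pitfalls above with IEEE 754 doubles (Python floats).
a, b, c, d = 1e16, 1.0, 1.0, 1.0

parallel_style = (a + b) + (c + d)  # pairwise grouping, as a reduction tree would do
serial_style = ((a + b) + c) + d    # left-to-right, as a serial loop would do
# parallel_style == 1e16 + 2.0, but serial_style == 1e16: each lone 1.0 is
# absorbed by rounding, so the two groupings disagree.

overflow = 1e308 * 10       # too large: IEEE replaces the result with +inf
underflow = 1e-308 / 1e308  # too small: the result underflows to 0.0
```

This is exactly the substitution of +inf for "a very big number" and 0 for "a very small number" that the talk argues against, and the grouping-dependent sum is why parallel reductions are not bitwise reproducible.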

Computation of Probabilistic Saturations and Alt's Problem in Mechanism Design

Martin Helmer, Australian National University

Alt's problem, formulated in 1923, is to count the number of four-bar linkages whose coupler curve interpolates nine general points in the plane. It can be formulated as counting the number of solutions to a system of polynomial equations, and was first solved numerically using homotopy continuation by Wampler, Morgan, and Sommese in 1992. Since there is still no proof that all solutions were obtained, we consider upper bounds for Alt's problem by counting the number of solutions outside the base locus of a system arising as a general linear combination of polynomials. In particular, we derive effective symbolic and numeric methods for studying such systems using probabilistic saturations, which can be employed over both finite fields and in floating-point computations. The methods are probabilistic, and we give bounds on the size of the finite field required to achieve a desired level of certainty. In this talk I will discuss the computational challenges involved in solving Alt's problem and the theoretical and practical techniques used to overcome them. This talk is based on joint work with Jonathan Hauenstein (University of Notre Dame).
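The trade-off between finite-field size and certainty mentioned in the abstract can be illustrated with a Schwartz–Zippel style randomized identity test (this sketch is only an illustration of the general idea, not the authors' saturation method): to check whether two polynomial expressions agree, evaluate both at random points modulo a prime p; for distinct polynomials of degree d, each trial fails to detect the difference with probability at most d/p, so a larger field gives higher certainty.

```python
import random

P = 1_000_003  # prime modulus; per-trial failure probability <= deg / P

def f(x, y):
    # (x + y)**2, written in expanded form
    return (x * x + 2 * x * y + y * y) % P

def g(x, y):
    return ((x + y) ** 2) % P

def probably_equal(f, g, trials=20):
    """Randomized polynomial identity test over GF(P).

    By the Schwartz-Zippel lemma, if f != g as polynomials, a single
    random point reveals the difference with probability >= 1 - deg/P.
    """
    for _ in range(trials):
        x, y = random.randrange(P), random.randrange(P)
        if f(x, y) != g(x, y):
            return False
    return True

print(probably_equal(f, g))  # True: the two expressions agree identically
```

The same principle underlies probabilistic symbolic computation generally: working modulo a sufficiently large prime keeps coefficients small while the error probability stays quantifiably tiny.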

12pm | Lunch

## Registration fees

- General registration $160
- AMSI institution members $120
- AMSI student/retired fellow $80

The workshop will provide participants with morning tea, lunch and afternoon tea.

### Registration closes August 19, 2019

### Conference dinner

The conference dinner will be held on Wednesday 4th September at the Australian National Botanic Gardens. A guided tour of the gardens will be organised for participants on the Wednesday afternoon, leading on to the dinner.

Attendance at the conference dinner costs $20; the fee will be charged through the registration page.

## Funding Support

### AMSI funding

This event is sponsored by the Australian Mathematical Sciences Institute (AMSI). AMSI allocates a travel allowance annually to each of its member universities (for list of members, see www.amsi.org.au/members).

Students or early career researchers from AMSI member universities without access to a suitable research grant or other source of funding may apply (with approval of their Head of Mathematical Sciences) for subsidy of travel and accommodation out of their home departmental travel allowance.

Seminar Room 1.33 & 1.37, Building #145, Science Road, The Australian National University

## Map

## About Canberra

Canberra is located in the Australian Capital Territory, on the ancient lands of the Ngunnawal people, who have lived here for over 20,000 years. Canberra’s name is thought to mean ‘meeting place’, derived from the Aboriginal word Kamberra. European settlers arrived in the 1830s, and the area won selection by ballot for the federal capital in 1908. Since then the ‘Bush Capital’ has grown to become the proud home of the Australian story, with a growing population of around 390,000.

Canberra hosts a wide range of tourist attractions, including various national museums, galleries and Parliament House, as well as beautiful parks and walking trails. Several attractions are within walking distance of the ANU campus, including the National Museum of Australia and the Australian National Botanic Gardens. Canberra is also a fantastic base from which to explore the many treasures of the surrounding region, including historic townships, beautiful coastlines and the famous Snowy Mountains. Learn more about what to do and see during your stay in Canberra at https://visitcanberra.com.au

## Transport

There are many ways to get around Canberra. Below is some useful information about Bus & Taxi transport around the ANU, the Airport and surrounding areas.

### Taxi

If you are catching a taxi or Uber to the ANU Mathematical Sciences Institute, ask to be taken to Building #145, Science Road, ANU. We are located close to the Ian Ross Building and the ANU gym. A taxi will generally cost around $40 and take roughly 15 minutes, though pricing and time may vary depending on traffic.

Taxi bookings can be made through Canberra Elite Taxis - 13 22 27.

### Transport from the airport

The ACT government operates a public bus service between the CBD and Canberra Airport via Routes 11 and 11A, seven days a week. Services run approximately every half hour on weekdays (more frequently at peak times) and every hour on weekends.

To travel, just use your MyWay card or pay a cash fare to the driver when boarding. A single adult trip paid in cash costs $4.80, with cheaper fares for students and children. Significant savings can be made when travelling with MyWay.

View MyWay and fares information.

More information about buses to Canberra Airport is available online.

### Action Buses

Canberra buses are a cheap and easy way of getting around town once you're here.

More information about bus services and fares is available online.

### Light rail

If you are staying in the north of Canberra, the light rail may be a suitable option. The line runs from the Gungahlin marketplace to the Canberra city centre. To travel, just use your MyWay card or purchase a ticket from the vending machines located at all stops and bus interchanges. A single-trip ticket includes a free 90-minute transfer period.

For more information on the light rail go to: https://www.transport.act.gov.au/about-us/public-transport-options/light...

## Accommodation

Below are some accommodation options for your visit to Canberra.

## Cafés and Dining

Below are some café, restaurant and bar options in the Canberra city and surrounding areas.