Invariant domain preserving approximations for hyperbolic systems and related problems
Many important physical phenomena are modeled by nonlinear systems of hyperbolic conservation laws. When approximating such problems one encounters sharp interfaces, contact discontinuities, shocks, and other nonlinear wave interactions. In this talk we will present a construction of invariant domain preserving methods for such systems. The invariant domain depends on the system at hand. For the compressible Euler equations, preserving the invariance means that the numerical method produces physically relevant quantities: positive density, positive internal energy, and a minimum principle on the specific entropy. Our methods are high-order accurate and efficient, and preservation of the desired properties is guaranteed under a standard CFL condition; hence the methods are robust. Numerical examples will be presented for various applications: the compressible Euler equations, the Navier-Stokes equations, and some shallow water models. We demonstrate the scalability of our research codes by running massively parallel algorithms with up to 90 billion spatial degrees of freedom on up to 100 thousand computing nodes. This is joint work with Jean-Luc Guermond.
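For the compressible Euler equations, the invariant set described above can be written out concretely; in one standard formulation (the notation here is mine, not necessarily the speakers'), with density $\rho$, momentum $\mathbf{m}$, and total energy $E$:

```latex
\mathcal{A} = \Bigl\{ (\rho, \mathbf{m}, E) \;:\; \rho > 0, \quad
  \varepsilon(\rho,\mathbf{m},E) := E - \frac{|\mathbf{m}|^2}{2\rho} > 0, \quad
  s \ge s_{\min} \Bigr\},
```

where $\varepsilon$ is the internal energy and $s$ the specific entropy; a scheme is invariant domain preserving when every time step maps states in $\mathcal{A}$ back into $\mathcal{A}$.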
In this talk, I will review several different algebras that appear in 2D conformal field theory (CFT). These include infinite-dimensional Lie algebras such as affine Kac-Moody algebras and the Virasoro algebra. Their commutation relations can be encoded in a Lie bracket depending on a formal variable, which leads to the notion of a Lie conformal algebra. A vertex algebra axiomatizes the algebraic properties of quantum fields in 2D CFT, while the semi-classical limit of a vertex algebra is a Poisson vertex algebra. I will explain how all these algebras are related to each other and will present a unified approach to them as Lie algebras in certain pseudo-tensor categories, or equivalently, as morphisms from the Lie operad to certain operads. As an application, I will introduce a cohomology theory of vertex algebras analogous to Lie algebra cohomology. (Based on joint work with Alberto De Sole, Reimundo Heluani, Victor Kac, and Veronica Vignoli.)
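As a concrete instance of the commutation relations mentioned above, the Virasoro algebra is spanned by generators $L_n$, $n \in \mathbb{Z}$, together with a central element $c$, subject to

```latex
[L_m, L_n] = (m - n)\, L_{m+n} + \frac{c}{12}\,(m^3 - m)\,\delta_{m+n,0}.
```

Packaging families of relations like this into a bracket depending on a formal variable is what leads to the Lie conformal algebra structure discussed in the talk.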
Isotropic Killing vector fields and complex surfaces
In a 4-dimensional vector space with metric of signature (2,2), two isotropic vectors spanning an isotropic plane determine a canonical action of the split quaternions. We notice that on an oriented manifold with two isotropic Killing vector fields spanning an isotropic plane everywhere, the induced almost para-hypercomplex structure is integrable. Based on the classification of compact complex surfaces, this allows us to describe the topology of the compact 4-manifolds with such vector fields. In the talk I'll discuss the relation of the result to other geometric properties of split-signature 4-manifolds, as well as present examples of para-hyperhermitian structures admitting two null Killing vector fields on most of these manifolds. (Joint work with J. Davidov and O. Mushkarov.)
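For reference, the split quaternions mentioned above form the real algebra generated by $i$ and $j$ subject to

```latex
i^2 = -1, \qquad j^2 = 1, \qquad k := ij = -ji, \qquad k^2 = 1,
```

so that, unlike the usual quaternions, two of the three imaginary units square to $+1$; this is the algebra acting canonically on a (2,2)-signature space with a distinguished isotropic plane.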
Given a set of integers, we wish to know how many primes there are in the set. Modern tools allow us to obtain an asymptotic for the number of primes, or at least a lower bound of the expected order, assuming Type-I information (the distribution of the sequence in arithmetic progressions) and Type-II information (bilinear sums over the sequence) of a certain strength. The methods used previously, especially Harman's sieve, are largely ad hoc and shed little light on the limitations of the methods. In joint work with James Maynard, we develop a systematic framework for understanding the theoretical limits of these prime detecting sieves, which allows us, in principle, to answer these questions for any given Type-I and Type-II information.
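Schematically (in my notation, not necessarily the authors'), for a sequence $\mathcal{A} \subset [1, x]$, Type-I information is control of congruence sums up to some level $x^{\theta}$, and Type-II information is control of bilinear sums in some range:

```latex
\text{Type I:}\quad
\sum_{d \le x^{\theta}} \Bigl|\, \#\{ n \in \mathcal{A} : d \mid n \} - \frac{\#\mathcal{A}}{d} \Bigr|
\ \text{small},
\qquad
\text{Type II:}\quad
\sum_{\substack{mn \in \mathcal{A} \\ x^{\alpha} < m \le x^{\beta}}} \alpha_m \beta_n
\ \text{small for bounded } \alpha_m, \beta_n.
```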
Letting the steam off STEAM education (Developing and utilizing intuition, imperfection and gamification in STEAM)
“Wabi-sabi”, “kintsugi” and “ma” are words that capture the essence of Japanese culture and expose an in-depth perception of the world. Their concepts could be applied to STEAM education by shifting the focus from the main STEAM components (S, T, E, A and M) to what exists between them. This presentation proposes a point of view on STEAM education from the perspective of these three Japanese concepts. Applying wabi-sabi reduces the educational load by cognitive optimization. Kintsugi introduces carefully crafted imperfections by interpreting ‘optimal’ as a relative rather than absolute category. Ma flips STEAM components inside-out and refocuses attention on their interconnected nature. Several examples from the author’s experience will be presented, including a newly developed set of gamified educational modules. These modules are the first educational applications that combine mobile 3D graphics with SCORMs (Shareable Content Object Reference Model). They are used for learning, evaluation and assessment in a bachelor course at the Faculty of Mathematics and Informatics, Sofia University. The set of games presents different problems, such as color composition, normal vectors, transformation matrices, angular velocities, the Euler characteristic, the Cohen-Sutherland line clipping algorithm, and more. The main challenge is to provide activities that do not enforce building formal solutions, but rely on soft skills, intuition and imperfection. Preliminary results and feedback from students’ use of these modules will be presented, as well as ideas on how gamification could help educators reduce the anticipated negative impact of AGI (artificial general intelligence) and ChatGPT.
This presentation will summarize some of the approaches developed by the author, in collaboration with Jean-Luc Guermond (Texas A&M University), for efficient integration of the incompressible Navier-Stokes equations. Particular attention will be paid to the suitability of these algorithms for parallel implementation. The approaches that will be discussed include a generalization of the projection method, an artificial compressibility method of arbitrary accuracy, and a method based on a novel gradient formulation of the equations of incompressible flow (developed in collaboration with Petr Vabishchevich, RAS).
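As an illustration of one of the methods listed above: classical (Chorin-style) artificial compressibility relaxes the incompressibility constraint with a small parameter $\epsilon > 0$ (a standard formulation; the arbitrary-accuracy variant discussed in the talk refines this):

```latex
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} - \nu \Delta \mathbf{u} + \nabla p = \mathbf{f},
\qquad
\epsilon\, \partial_t p + \nabla\cdot \mathbf{u} = 0,
```

which replaces the global elliptic pressure solve by a local update and is therefore attractive for parallel implementation.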
The capabilities of these methods will be demonstrated on examples of complex flows and fluid-structure interaction problems. The parallel performance of the methods will also be demonstrated on some test problems.
Acknowledgements. This work was supported by a Discovery grant of the Natural Sciences and Engineering Research Council of Canada, and by grant #55484-ND9 of the Petroleum Research Fund of the American Chemical Society.
Fejér-Riesz factorizations and Bernstein-Szegő measures
A classical theorem of Fejér and Riesz says that a non-negative trigonometric polynomial of a single variable can be represented as the modulus squared of an algebraic polynomial. Though simple to prove, this result has been useful in a number of contexts. The natural question of whether a positive bivariate trigonometric polynomial admits such a representation amounts to a more difficult problem. I will explain how this question can be related to spectral properties of Bernstein-Szegő measures and discuss some applications. The talk is based on joint work with J. Geronimo.
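In symbols, the Fejér-Riesz theorem states that

```latex
f(\theta) = \sum_{k=-n}^{n} c_k\, e^{ik\theta} \ \ge\ 0 \quad \text{for all } \theta
\qquad \Longleftrightarrow \qquad
f(\theta) = \bigl|\, p(e^{i\theta}) \bigr|^2 \ \ \text{for some } p(z) = \sum_{k=0}^{n} a_k z^k.
```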
Geodesics in the space of convex and plurisubharmonic functions
We endow the space of strictly convex functions vanishing on the boundary of a fixed domain with the structure of an infinite dimensional Riemannian manifold. The geodesic equation can then be reformulated as the homogeneous Monge-Ampère equation. We shall then address the question of when two convex functions can be joined by a geodesic. We shall also discuss partial results in the complex setting, with convex functions replaced by plurisubharmonic ones. This is joint work with S. Abja.
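Schematically (in the real setting, in my notation, not necessarily the speakers'): if $(u_t)_{t\in[0,1]}$ is a path of convex functions on the domain $\Omega$ and $U(x,t) := u_t(x)$, the geodesic equation takes the form of the homogeneous Monge-Ampère equation

```latex
\det D^2_{(x,t)} U = 0 \quad \text{on } \Omega \times (0,1),
```

with the two given convex functions as data at $t = 0$ and $t = 1$.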
Finite-state machines and neural networks for language modelling
Language models are an essential tool for solving many complex tasks such as natural language generation, translation and understanding, information extraction, speech communication, and many others. Traditionally, language models have been represented with finite-state machines. In the last few years, however, neural network based language models have shown superior performance and achieved human-like capabilities in many natural language understanding tasks.
In the upcoming lecture, we first give a brief overview of traditional representations of language models based on finite-state machines. We present a framework for building probabilistic and conditional probabilistic finite-state transducers used for the implementation of language models. Next, we show commonly used neural network architectures for language modelling. We discuss their advantages over the models based on finite-state transducers. In the third part of the lecture, we present our approach for constructing finite-state machines by utilizing the deep learning framework. In this way we are able to transfer the advantages of the neural networks for language modelling to the finite-state machines. We show that a class of neural network architectures is computationally isomorphic to a class of finite-state machines. Finally, we present empirical experiments showing that with our approach, language models realized with finite-state machines achieve perplexity competitive with that of the neural networks.
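As a toy illustration of the first part of the lecture (not the authors' system): a bigram language model is equivalent to a weighted finite-state machine whose states remember the previous token, and its quality is measured by perplexity, the quantity compared in the experiments. A minimal sketch in Python, with add-one smoothing and a made-up two-word corpus:

```python
import math
from collections import defaultdict

def train_bigram(corpus, vocab_size):
    """Return an add-one-smoothed bigram probability function.

    The model is a weighted finite-state machine: each state is the
    previous token, each arc carries P(word | previous token).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            counts[prev][word] += 1
    V = vocab_size + 1  # +1 for the end-of-sentence symbol

    def prob(prev, word):
        total = sum(counts[prev].values())
        return (counts[prev][word] + 1) / (total + V)  # Laplace smoothing

    return prob

def perplexity(prob, corpus):
    """Perplexity of the model on a corpus given as lists of tokens."""
    log_sum, n = 0.0, 0
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            log_sum += math.log2(prob(prev, word))
            n += 1
    return 2.0 ** (-log_sum / n)

# Tiny hypothetical corpus over the two-word vocabulary {"a", "b"}.
corpus = [["a", "b"], ["a", "b"]]
model = train_bigram(corpus, vocab_size=2)
ppl = perplexity(model, corpus)
```

Every transition in the training data has smoothed probability 3/5, so the perplexity on the training corpus is 5/3, well below the uniform baseline of 3.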