## Third Meeting of Young Bulgarian Mathematicians

### Speakers

**Martin Vechev**, ETH Zurich, Switzerland & INSAIT, Bulgaria – *invited speaker*

**Borislav Mladenov**, UC Berkeley, USA

**Ina Petkova**, Dartmouth College, USA

**Nikola Konstantinov**, INSAIT, Bulgaria

**Stoyan Apostolov**, Sofia University “St. Kliment Ohridski”, Bulgaria

**Stoyan Dimitrov**, Rutgers University, USA


### Program and abstracts

Creating deep learning models that are provably robust, fair and secure is a fundamental challenge of societal importance. In this lecture I will discuss some of the latest and most promising research results and future directions we are exploring towards addressing this challenge. These directions include new verification techniques based on convex relaxations and branch-and-bound methods, as well as new certified training methods and optimization problems that produce more verifiable machine learning models.
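To give a concrete flavour of the convex relaxations mentioned above, here is a minimal sketch of interval bound propagation, one of the simplest such relaxations. The tiny network, its weights, and the certified property are all illustrative choices, not taken from the talk.

```python
# Minimal sketch of interval bound propagation (IBP), one of the simplest
# convex relaxations used to certify neural network robustness.
# The network, weights and property below are illustrative only.

def affine_bounds(lo, hi, W, b):
    """Propagate elementwise input bounds [lo, hi] through x -> W x + b."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(h)
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval bounds to interval bounds."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Tiny 2-2-1 network; certify that the output stays positive for every
# input in an L-infinity ball of radius eps around x0.
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.5]
x0, eps = [1.0, 0.5], 0.1

lo = [x - eps for x in x0]
hi = [x + eps for x in x0]
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)
certified = lo[0] > 0.0  # True => output provably positive on the whole ball
print(certified, round(lo[0], 3))
```

Branch-and-bound methods refine exactly this kind of bound by splitting the input region when the relaxation is too loose to decide.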

I will start by explaining Kapustin’s conjectural Seiberg–Witten duality between A- and B-branes on hyperkähler manifolds. I will use this conjectural framework, together with work of Ivan Smith and of Solomon–Verbitsky on the A side, to motivate results and explicit conjectures on the B side. In particular, I will calculate the space which counts massless open strings connecting a D-brane wrapped on a holomorphic Lagrangian L, with suitable gauge bundle, to itself. This space is the cohomology of a differential graded algebra. I will state a formality result for this dga and, time permitting, mention various generalisations to pairs of Lagrangians, DG categories, and some open questions.
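For orientation, the following is an illustrative computation of the kind of B-side space in question; it is not taken from the abstract. For a holomorphic Lagrangian $L$ in a hyperkähler $X$, the holomorphic symplectic form identifies the normal bundle with the cotangent bundle, so, assuming the relevant local-to-global Ext spectral sequence degenerates, the self-Ext algebra of the structure sheaf is computed by Hodge cohomology:

```latex
% Illustrative sketch, assuming N_{L/X} \cong \Omega^1_L (via the
% holomorphic symplectic form) and degeneration of the spectral sequence.
\mathrm{Ext}^k_X(\mathcal{O}_L, \mathcal{O}_L)
  \;\cong\; \bigoplus_{p+q=k} H^q\!\left(L, \Omega_L^p\right)
```

Formality of the underlying dga is exactly the kind of statement that upgrades such an additive isomorphism to multiplicative control.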

Modern machine learning methods often require large amounts of labeled data for training. Therefore, it has become standard practice to collect data from external sources, e.g. via crowdsourcing, by web crawling, or through collaboration with other institutions. Unfortunately, the quality of these sources is not always guaranteed, and this may result in noise, biases and even systematic manipulations entering the training data.

In this talk I will present some results on the statistical limits of learning in the presence of training data corruption. In particular, I will speak about the hardness of achieving algorithmic fairness when a subset of the data is prone to adversarial manipulations. I will also discuss several results on the sample complexity of learning from multiple unreliable data sources. Finally, I will present recent work that provides statistical and stochastic optimization guarantees for collaborative learning in the presence of conflicting participants’ incentives.
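As a toy illustration of learning from multiple unreliable sources (a hypothetical example, not a method from the talk), the classical median-of-means aggregator averages within each source and then takes the median across sources, so a small minority of corrupted sources cannot drag the estimate far:

```python
import random
from statistics import mean, median

# Hypothetical illustration: estimate a mean from several data sources
# when a minority of sources is adversarially corrupted.
random.seed(0)
true_mu = 2.0
honest = [[random.gauss(true_mu, 1.0) for _ in range(200)] for _ in range(8)]
corrupt = [[100.0] * 200 for _ in range(2)]      # two adversarial sources
sources = honest + corrupt

naive = mean(x for src in sources for x in src)  # ruined by the corruption
robust = median(mean(src) for src in sources)    # median-of-means estimate

print(round(naive, 2), round(robust, 2))
```

With 2 of 10 sources corrupted, the naive pooled mean is pulled far from 2.0, while the median across per-source means stays close to it.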

We investigate the most important transversality concepts in variational analysis, discuss some of the motivation behind them, and survey their applications. We derive new, metric-style characterizations in a unified manner, which also establishes previously unknown relations between the concepts. This shows that, while some of them were originally defined via the dual structure of the space, they are essentially metric properties. We also use these characterizations to give new proofs of characterizations of the respective metric regularity counterparts. In this way one sees that the theory can be built starting from transversality rather than from regularity (as was originally done).
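For readers unfamiliar with the regularity side, the standard definition (included here for context, not quoted from the abstract) is purely metric: a set-valued mapping $F \colon X \rightrightarrows Y$ is metrically regular at $\bar{x}$ for $\bar{y} \in F(\bar{x})$ with modulus $\kappa > 0$ if

```latex
% Standard definition of metric regularity (context only).
d\bigl(x, F^{-1}(y)\bigr) \;\le\; \kappa \, d\bigl(y, F(x)\bigr)
\quad \text{for all } (x, y) \text{ near } (\bar{x}, \bar{y})
```

Characterizations of transversality in this same distance-estimate style are what allow the theory to be developed without appealing to dual constructions.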

While counting questions were among the first that people asked and pursued, enumerative combinatorics as a branch of mathematics is relatively new. We will discuss some recent results on various enumerative questions, illustrating applications of different kinds in theoretical computer science and statistics. The talk is intended for a general audience.
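As a simple example of the kind of enumerative question meant above (my illustration, not an example from the talk): the number of balanced bracket sequences of length $2n$ is the $n$-th Catalan number, computed by the standard convolution recurrence $C_{n+1} = \sum_{i=0}^{n} C_i C_{n-i}$.

```python
# Illustrative enumerative example: counting balanced bracket sequences
# of length 2n (the Catalan numbers) via the convolution recurrence
# C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}.

def catalan(n):
    C = [1]  # C_0 = 1: the empty sequence
    for m in range(n):
        C.append(sum(C[i] * C[m - i] for i in range(m + 1)))
    return C[n]

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```

The same numbers count Dyck paths, binary trees, and many other structures, which is typical of how enumerative results connect to computer science.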