“Mathematics Days in Sofia”

Section “Mathematical Foundations of Computer Science and Artificial Intelligence”


Invited Speakers

  • Petia Koprinkova-Hristova, Institute of Information and Communication Technologies, Bulgaria

  • Petia Radeva, Universitat de Barcelona, Spain

  • Stefka Fidanova, Institute of Information and Communication Technologies, Bulgaria
  • Galina Momcheva, Research Institute in Medical University Varna, Bulgaria

  • Krassimira Ivanova, Institute of Mathematics and Informatics, Bulgaria

  • Milena Dobreva, GATE Institute, Sofia University “St. Kliment Ohridski”, Bulgaria

  • Negoslav Sabev, Institute of Mathematics and Informatics, Bulgaria

  • Radoslav Markov, Institute of Mathematics and Informatics, Bulgaria

  • Teodor Boyadzhiev, Institute of Mathematics and Informatics, Bulgaria

Program and Abstracts

Artificial intelligence (AI) aims to mimic the way living creatures perceive the world, make decisions and act towards achieving their goals. The first artificial models, such as the perceptron, were, however, rather simplified in comparison with the biological systems (neurons) they attempt to represent. Even now, the main trend in AI development targets simulation of behavior rather than the underlying phenomena in living organisms that produce it. Nowadays, thanks to technological advances allowing deeper investigation of the brain, a great deal of information about the mechanisms of its work has been accumulated. Modern computational neurobiology has developed numerous quite realistic mathematical models of neural cells and of the communication mechanisms that underlie brain functioning. Advances in computational technologies offer the opportunity to simulate the rather complicated mathematical dependencies that describe brain activity in depth. This has provoked a new trend in AI: so-called brain-inspired models. The talk will present the relations between AI and neurobiological brain models, as well as the way they can merge to yield biologically plausible intelligent systems.
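To make the notion of a biologically grounded neuron model concrete, the sketch below simulates a leaky integrate-and-fire neuron, one of the simplest models used in computational neurobiology. All parameter values (time constant, threshold, input current) are illustrative, not taken from the talk.

```python
# Leaky integrate-and-fire (LIF) neuron: a minimal sketch of a
# biologically inspired neuron model (parameters are illustrative).

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate a LIF neuron; return the membrane trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest and integrates input.
        v += (dt / tau) * (v_rest - v + i_in)
        if v >= v_threshold:          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset               # reset after spiking
        trace.append(v)
    return trace, spikes

# Constant supra-threshold input produces a regular spike train.
trace, spikes = simulate_lif([1.5] * 50)
print(len(spikes) > 0)  # → True
```

Spiking-neuron models like this one are the building blocks of the brain-inspired networks the talk refers to: unlike a perceptron, the unit has internal dynamics and communicates through discrete events.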

Deep Learning (DL) has made remarkable progress in tasks such as face and lip recognition or cancer detection in medical images, achieving super-human performance. However, when it comes to classifying a large number of classes, such as in fine-grained recognition, there is still much room for improvement, especially for groups of classes that are easily confused. Additionally, DL relies on greedy methods that require thousands of annotated images, which can be a time-consuming and tedious process.

To address these issues, self-supervised learning offers an efficient way to leverage a large amount of non-annotated images and make DL models more robust and accurate. In this talk, we will present our work on self-supervised learning and fine-grained recognition, highlighting how this approach can help solve complex computer vision problems like food image recognition. Food classes have high variability, significant similarity between classes, and a vast number of unannotated images. By using self-supervised learning and fine-grained recognition, we demonstrate how these challenges can be overcome.
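As an illustration of the contrastive objective behind many self-supervised methods (the talk does not specify which variant is used), the following is a pure-Python sketch of an InfoNCE/NT-Xent-style loss: two augmented views of the same image form a positive pair, and other images act as negatives.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(anchor, positive, negatives, temperature=0.5):
    """NT-Xent loss for one anchor: softmax cross-entropy over
    similarities, with the positive pair at index 0 (toy sketch)."""
    logits = ([cosine(anchor, positive) / temperature]
              + [cosine(anchor, n) / temperature for n in negatives])
    m = max(logits)                      # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Two augmented "views" of the same image (similar embeddings) should
# yield a lower loss than a mismatched pair.
good = nt_xent([1.0, 0.1], [0.9, 0.2], [[-1.0, 0.5], [0.0, -1.0]])
bad = nt_xent([1.0, 0.1], [-1.0, 0.5], [[0.9, 0.2], [0.0, -1.0]])
print(good < bad)  # → True
```

Minimizing such a loss over many unannotated images is what lets the encoder learn useful features before any fine-grained labels are introduced.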

We can learn a lot by observing nature. There is no waste in it: everything is done in the most economical, optimal way. Particularly impressive is the collective intelligence of a group of individuals working together. Bees, ant colonies, bird flocks and fish schools can be given as examples of group intelligence. Animals that do not have a high level of individual intelligence deal with difficult problems using a collective approach. This gave scientists the idea of creating nature-inspired algorithms that mimic the collective intelligence of some animals: the so-called metaheuristic methods.
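A minimal sketch of one such metaheuristic, particle swarm optimization, illustrates the idea: each particle balances its own best experience against the swarm's. The objective function, swarm size and coefficients below are illustrative choices, not taken from the talk.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimization: particles are drawn toward
    their own best position and the swarm's best position (a sketch)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function; the optimum is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x))
print(best_val)
```

No single particle "knows" where the minimum is, yet the swarm as a whole locates it, which is exactly the collective behavior the abstract describes.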

Artificial neural networks are one of the clearest examples of biomimetics, but nature and the universe remain sources of ideas for their optimization and enhancement. The main inspiration for the design of neural networks was the circuitry involved in sensory processing in the central nervous system. But the morphology and physiological activity of neurons may actively change the way information is processed. Studies in brain research show that the complexity of a neuron's dendritic tree is related to its capability to solve computational problems, and current innovations in brain research extend the research space in this area.

This talk aims to present ideas and share current results of research, along with perspectives for the design of new ANN architectures inspired by modern trends in brain research (dendritic computing, neuroplasticity). Some initial experiments applying an integrated approach of sonification and ANNs for space and medical image analyses will also be discussed.

The global spread of misinformation and disinformation has become a huge societal issue. The European Commission is implementing a broad programme of measures to counteract misinformation, including introducing policy measures, developing technological tools and increasing media and information literacy (MIL). One particularly promising avenue of work in MIL is the development of various educational and serious games. Such games aim to support different skills and/or increase knowledge in domains where spreading misinformation and disinformation is particularly prevalent, e.g. the COVID-19 pandemic.

Many games claim to improve skills, especially those related to critical thinking. However, there is a gap in applying methodologies that would help objectively measure individual progress in developing skills. Hence, the main aim of our talk is to explore how it could be possible to measure games’ contribution to improving critical literacy skills. We will provide an overview of various types of games in the disinformation domain and a critical assessment of potential strategies for evaluating their usefulness in acquiring skills.

“Web accessibility” isn’t just another buzzword. It has been around since the birth of the web, and it is here to stay. The accessible web ensures equality in the creation and use of content of diverse types and importance for all, regardless of their capabilities. Research in this direction has been carried out globally for many years, but in Bulgaria the process started relatively recently. One such study was conducted in 2016 and found that half of the websites of public institutions are inaccessible to visually impaired people. In the spring of 2023 a similar study was conducted again. This report presents its findings, reflecting trends and outlining the main issues blind people face daily.
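As an illustration of the kind of automated check such accessibility audits build on, the sketch below scans HTML for `<img>` elements that lack an `alt` attribute (one aspect of WCAG success criterion 1.1.1), using only the Python standard library. A real audit involves many more checks plus manual review; the markup here is invented for the example.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags with no alt attribute at all. An empty alt=""
    is allowed: it marks images as purely decorative for screen
    readers, so only a missing attribute is reported here."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.violations.append(attr_map.get("src", "?"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="Institute logo">'
             '<img src="chart.png"></p>')
print(checker.violations)  # → ['chart.png']
```

Images without alternative text are invisible to screen-reader users, which is precisely the kind of barrier the surveyed public-institution websites exhibit.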

The talk addresses the growing need to restore audiovisual archive content, considering the abundance of visual archives, videos, and films worldwide. Current methods of manual restoration, as well as non-AI-based algorithms, are slow and cost-ineffective. On the other hand, fully automated AI restoration has not yet achieved the desired quality. We propose a novel method that combines the quality of manual restoration with the speed and cost-effectiveness of AI-based techniques. The approach leverages existing AI techniques, repurposed from other domains, to achieve faster and higher-quality restoration and enhancement. By automatically selecting keyframes from the initial material and restoring or enhancing them using AI techniques under human supervision, the proposed method achieves impressive results. Finally, style-transfer Generative Adversarial Networks are used to apply the restored/enhanced keyframes to the original material.
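A toy sketch of the keyframe-selection step described above: a frame is kept as a keyframe when it differs sufficiently from the last selected keyframe. Real pipelines operate on images and use more robust difference measures; here frames are flat lists of intensities and the threshold is an arbitrary illustrative value.

```python
def select_keyframes(frames, threshold=0.2):
    """Pick keyframes where the mean absolute pixel difference from
    the last keyframe exceeds a threshold (toy scene-change detector).
    Frames are flat lists of pixel intensities in [0, 1]."""
    if not frames:
        return []
    keyframes = [0]                        # always keep the first frame
    for i in range(1, len(frames)):
        ref = frames[keyframes[-1]]
        diff = sum(abs(a - b) for a, b in zip(frames[i], ref)) / len(ref)
        if diff > threshold:
            keyframes.append(i)
    return keyframes

# Three near-identical dark frames, then a bright "scene change".
frames = [[0.1] * 4, [0.12] * 4, [0.11] * 4, [0.9] * 4, [0.88] * 4]
print(select_keyframes(frames))  # → [0, 3]
```

Only the selected keyframes then need the expensive human-supervised restoration; the remaining frames inherit the result via style transfer, which is where the speed advantage comes from.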

Auto-encoders are non-linear dimensionality reduction methods that can be used for feature extraction from large non-labelled datasets in an unsupervised fashion. Adapting the auto-encoder to process images yields the convolutional auto-encoder. When using convolutions, certain biases exist, such as positional invariance of features. Therefore, the channels of a feature map can be thought of as semantic features, while the pixels can be thought of as positional features.

This talk explores the latent space of convolutional auto-encoders to find how the ratio of positional to semantic features affects the quality of reconstruction. The results show that a low number of positional features, in favor of a high number of channels, significantly degrades reconstruction quality. The reason is that the datasets used in the experiments have high variance in terms of position and scale, and low-resolution feature maps at the bottleneck destroy this information due to the positional invariance of the convolutional features.
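The positional/semantic trade-off can be made concrete with a little arithmetic: for an encoder that halves the spatial resolution at each stage, deeper downsampling combined with more channels can keep the total latent budget fixed while shifting it from positional to semantic features. The shapes below are illustrative, not the actual architectures from the talk.

```python
def bottleneck_features(input_hw, n_downsamples, channels):
    """For a convolutional auto-encoder that halves spatial resolution
    at each stage, count positional features (bottleneck pixels) and
    semantic features (bottleneck channels), plus their ratio."""
    h, w = input_hw
    h_b, w_b = h >> n_downsamples, w >> n_downsamples
    positional = h_b * w_b          # spatial positions at the bottleneck
    semantic = channels             # channels at the bottleneck
    return positional, semantic, positional / semantic

# A 32x32 input with a fixed latent budget of 1024 values:
# 8x8x16 latent (position-heavy) vs 2x2x256 latent (channel-heavy).
shallow = bottleneck_features((32, 32), 2, 16)
deep = bottleneck_features((32, 32), 4, 256)
print(shallow, deep)  # → (64, 16, 4.0) (4, 256, 0.015625)
```

Both bottlenecks hold 1024 values, yet the second retains only a 2x2 spatial grid, which is exactly the regime where the reported experiments observe degraded reconstructions of position- and scale-variant data.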

The Event is Supported by: