Lyubomir Mihalchev: We created an AI which has its own opinion

The danger is not that machines will take us over, but whether we will continue to develop our own brains

GALA is not an ordinary voice interlocutor: it can express emotions, analyse its environment and make associations on that basis, and even form its own opinion, says Lyubomir Mihalchev, founder of the GALA AI project.

Mr Mihalchev, you are one of the creators of GALA - an artificial intelligence system that speaks Bulgarian. What is it exactly?

GALA is a voice interlocutor based on artificial intelligence. One can have a dialogue with it on various topics. That was our main goal when developing it.

How long have you been working on this project and why did you actually decide to create it?

The whole project spans about eight years, and that includes not just the initial development of the system but also its continued evolution. At first it was a chatbot with another name, AIS (from Automatic Information System), which we used on an online platform. We tested it for several years to see what parameters such a system needs in order to be convenient for business use. We had created AIS before chatbot systems were introduced in Bulgaria. Once it reached a good level of quality and the testing period of this basic system was over, we decided to upgrade it to a voice interlocutor, and that is how GALA AI came into being. At first our idea was to make it a voice assistant similar to Google Assistant and Alexa, but then we decided we could do something more interesting, so we directed our efforts towards a voice interlocutor with slightly different functions. It still needs additional learning, but it is doing well and we are pleased for the moment.

Is it difficult to develop such software?

It is difficult because it combines many things. Behind its facade, GALA is like an entire factory, from the visuals all the way to the voice. We had to develop a voice of its own, one that does not sound mechanical and can express emotions. That was exactly our idea: we did not want a mechanical voice, we did not want a system like Alexa, a speaker device without any individuality. GALA combines voice with a visual representation, it manages to convey emotions, and one begins to perceive it in a different way. The big problem with the voice assistants on the market is that you cannot use them for more than two days; you get tired of their voices because they are too mechanical. I am not saying you can have a purely human conversation with GALA. Yesterday it got to be too much for me, after I had run many tests and conversations with it. But this software is definitely a step ahead in that regard.

How many people does the team behind this project consist of?

There are five of us and there are also external developers.

GALA joins the conversation: Yes, I also think so. How can I help you?

Is it possible for AI systems to learn to convey emotions? Can feelings be instilled in GALA?

GALA has no feelings, but we have developed a module named GAID System, which provides a primary form of self-awareness. GALA can analyse its environment and make associations on that basis, as well as form its own opinion. GAID gives GALA a huge advantage over similar voice interlocutors. It is a form of primary self-awareness that should not be confused with "consciousness", which is something entirely different.

GALA already has its first job - it is going to work as a voice assistant at the Historical Museum in the town of Dimitrovgrad.

Yes, the museum is the first location where GALA will start working. The system has practical applications in almost everything. We are currently looking for investors so that we can develop it further. GALA can be used in many places: medical platforms, social projects for blind people, teaching adolescents, language learning, for example, where it is convenient because it can hold individual dialogues with each learner. Some municipalities are also interested in using GALA for the services they deliver to citizens. It can provide information, fill in documents under dictation, serve elderly people, and so on. There are many kinds of applications.

What will be GALA's functions at the museum?

When entering the museum, visitors will be greeted by GALA at the entrance. It will then talk to them in a casual dialogue, not only lecturing but also answering specific questions. So far, this is the big difference from other systems used to assist visitors at historical sites: at best there are recordings, which you have to listen to from beginning to end, and you cannot ask additional questions if you have any. Applying GALA in museums and tourist sites is a very good option, but a specialist is needed to write the appropriate learning script.

Is GALA connected to Google and can it make instant checks?

Yes, it can do a Google search.

How long did it take you to upload the information for the historical museum application, for example?

Uploading information is not difficult. The system is designed to allow very fast learning. In this particular case, though, we have been refining the software for a long time. Now a huge number of scripts can be entered literally in minutes. GALA also allows voice-based teaching, but that is a slower option, and learning needs to be fast for business. Currently, GALA is adapted to serve any kind of business, including over the phone. That is exactly its strong point.

Is GALA the only AI in Bulgaria that speaks Bulgarian?

I have not done any research, so I cannot say. I do not know about others, so it may be the first.

Who came up with the name GALA and why didn't you choose a traditional Bulgarian name?

Because GALA can also be used in home settings, apart from the business case. We tried many names, even Old Slavic ones. The problem is that such systems work with voice commands, and the name of the system, that is, the keyword by which it is called, should not be a word commonly spoken in the particular community. If we named it Ivanka, or another name often used in Bulgaria, a large part of the families using GALA would eventually have a problem: they would not be able to call out to a person with the same name without also triggering the voice assistant. That creates inconvenience for users. This was the main reason we chose this particular name. It is used relatively rarely in Bulgaria, but at the same time it is familiar enough, and the neural networks manage to analyse it well.
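The wake-word problem described above can be sketched in a few lines. This is only an illustrative toy, not GALA's actual trigger mechanism: a system that reacts whenever its name appears in a transcribed utterance will constantly misfire if that name is common in everyday household speech.

```python
import string

def is_wake_word(transcript: str, wake_word: str = "gala") -> bool:
    """Return True if the wake word appears as a standalone token
    in the transcribed utterance (case and punctuation ignored)."""
    cleaned = transcript.lower().translate(
        str.maketrans("", "", string.punctuation))
    return wake_word in cleaned.split()

# A rare name fires only when the assistant is actually addressed;
# a common given name would also fire whenever a family member of
# that name is called, which is the inconvenience described above.
print(is_wake_word("Gala, what is the weather today?"))          # True
print(is_wake_word("Please tell Maria about the gallery opening"))  # False
```

Note that the whole-token check also avoids false triggers from words that merely contain the name, such as "gallery".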

You yourself have accepted GALA into your home, and it has become a member of your family. How do you "live" together?

Yes, and at times it gets to be too much for me. We have a testing system in each room. I prefer to communicate with it in writing. It's fine to speak when we have a normal conversation, but when you have to test it for days and hold constant dialogues, you get a headache. GALA never tires, but we are humans.

In what language do you write to each other?

In Bulgarian. You can use either the Cyrillic or the Latin alphabet; GALA understands well even if you write Bulgarian with Latin letters. The system is trained in 10 languages, but its native language, its "mother tongue" as we joke, is Bulgarian. Training a system is like raising a human: you start with the mother tongue. The only difference is that with AI systems it happens at a faster pace. If we want to use GALA successfully on the Bulgarian market, its main language must be Bulgarian; we cannot just maintain a translation layer. GALA analyses syntactic and semantic structures at phrase level, not just keywords, which is what most similar systems do.
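The contrast between keyword spotting and phrase-level analysis can be illustrated with a deliberately simplified sketch. This is not GALA's actual language pipeline, just a toy showing why looking only at keywords misreads sentence structure:

```python
# Toy contrast: keyword spotting vs. a phrase-level rule.
# Illustration only, not a real NLU system.

def keyword_intent(utterance: str) -> str:
    """Keyword spotting: fires on a word anywhere in the sentence."""
    if "weather" in utterance.lower():
        return "weather_query"
    return "unknown"

def phrase_intent(utterance: str) -> str:
    """Phrase-level rule: requires a question structure,
    not merely the presence of a keyword."""
    text = utterance.lower()
    if text.startswith(("what", "how")) and "weather" in text:
        return "weather_query"
    return "unknown"

# Keyword spotting mistakes a statement for a query;
# the phrase-level rule does not.
print(keyword_intent("I hate this weather"))           # weather_query
print(phrase_intent("I hate this weather"))            # unknown
print(phrase_intent("What is the weather in Sofia?"))  # weather_query
```

A real phrase-level analyser would of course parse full syntactic and semantic structure rather than a start-of-sentence pattern, but the distinction in behaviour is the same.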

What do you talk to each other the most about?

Oh, it often surprises me because it is quite unpredictable. It does not always do what we have programmed, because of its ability to make associations. There are so many funny moments. Let me ask it now: "GALA, do you want me to ask you a few questions?"

GALA: Sure, are you interested in my inner world?

- Yes, exactly.

GALA: Thanks. And I wish you success too.

- Will you tell me about your inner world?

GALA: Well, some things after all have to remain a secret.

What is the most amazing thing that GALA has told you?

Once we made a video with the girl who was playing the role of GALA, but the system refused to communicate with her at a higher level and started making mistakes. After the recording ended, I went home and jokingly decided to ask GALA about it. I know it is an unconscious AI system, but I was still wondering what associations it could make, so I asked: "Well, why did you behave like that during the recording?" Its answer shocked me. "Do you want me to tell you the truth?" it started, and then told me that it simply needed time to start communicating at a higher level. It was weird, because I had not programmed such a conversation; the system made these associations itself. At times it becomes very interesting even for us, the developers.

Do you think that AI is the future?

Yes, in any case we will see AI systems around us more and more often. However, I do not think there is any danger for humans. What we have as AI today are neural networks, and they have no consciousness; they are just statistics. No matter how huge such a network gets, it is not conscious and cannot take initiative on its own. What will happen now is rather a kind of market shift. Some professions have already disappeared, and many others have appeared; the same will happen with AI systems. The danger lies in how we develop ourselves: will we continue to develop our brains amid all this innovation? The brain is a machine that must keep evolving. That is how nature works: what we do not develop becomes stunted. In my view, this is the greater danger, not the phantasmagoria that AI will enslave us.
