The current state of Artificial Intelligence


General AI (Artificial Intelligence) is coming closer thanks to the combination of neural networks, narrow AI and symbolic AI. Yves Mulkers, data strategist and founder of 7wData, talked to Wouter Denayer, Chief Technology Officer at IBM Belgium, who shared his enlightening insights on where we are and where we are going with Artificial Intelligence.

Join us in our chat with Wouter.

Yves Mulkers
Hi and welcome. Today we're together with Wouter Denayer, Chief Technology Officer at IBM. Wouter, you're something of an authority on artificial intelligence in Belgium, and I think beyond its borders as well. Can you tell me a bit more about what you're doing at IBM and what keeps you busy most days?

Wouter Denayer
Yeah, Yves, thank you, and thanks for having me. If you call me an authority already, well, I think if you call yourself an authority, then something is wrong. It's almost impossible to follow everything that's going on in AI; the progress is actually amazing. You are right, I do love to follow everything that's going on as much as possible, especially focusing on what IBM Research is doing; we can come back to that later. That's part of my role.

In my role as CTO for IBM Belgium, I communicate a lot with C-level people at our strategic clients, sometimes global clients, who really want to know what's coming. What is this AI thing? We're beyond that; people understand more or less. But they want to know what's coming, what's next. How can this influence my business? How can I make the most of this technology, and how does it fit into my current approach? That's what I try to do: to explain, to evangelize in a way, but with some deep technical understanding of how things work. I try to really understand and convey how things work, what they can do, and also what they cannot yet do. There is a lot of hype going on, and I do my best to avoid all of that and to clarify the reality of things.

Yves Mulkers
Like you say, you try to stay on top of it and see what the best match is. We've come a long way with artificial intelligence, a big hype word. It goes back to the 60s; you said the first AI we saw was able to recognize six or 16 words, I don't recall anymore. And we all have these ideas that the computer will take over. So I think we've had a big learning path in recent years. You called what we see evolving "general AI" in previous discussions. Where do you see us on this turning point with AI? What did we achieve in recent years? And what are the bottlenecks?

Wouter Denayer
General AI is this big idea that computers will basically be able to think and reason like humans. That's been the goal, or the level of ambition, since the beginning of computers, because from the absolute beginning people tried to use computers to mimic the human brain. Alan Turing, for example, created the first chess game that ever existed for a computer, although the computers, the hardware, were not there yet, so he had to do all of it in his mind. He is the grandfather of computing; he basically invented the fundamental principle. Just as an anecdote: he had to play this game by hand, and calculating each move took him half an hour, but they did it. And so he proved, basically, that a chess program could be built. Chess has always been considered the realm of people: if you're a smart person, your intellect is kind of epitomized by chess.

People thought: if a computer can master chess, then we're there, then we have intelligence, or general intelligence. This also culminated much later in the big IBM Deep Blue win against Kasparov, the greatest chess player of all time, we can probably say. But still, it turned out that even that is not intelligence, that more is needed and that it does not generalize: the software could play chess amazingly well, could win against the best human player ever, but could not think about anything else. It could not do anything outside of chess. That's the principle of general intelligence, that we can have a computer intellect that can do all the things that humans can do, so it gets broader. We are not there by a long shot.

That's not to say that we did not make a lot of progress; we did, and it's quite amazing. Look at what has happened in the last eight years with neural networks. There's been an explosion in capability, especially in visual recognition, and for example in audio, in speech recognition: we can all talk to our mobile phones, and speech is recognized pretty well. People thought, wow, this is amazing, and it is. But then people extrapolate: if we can do all this in such a short period of time, eight years, then surely, very soon, we will have a system that can think like a human. And of course the science fiction movies get into the picture and people think that the evil AI is coming. I can reassure you, it's not coming. It's not coming because we are not there by a long shot. We actually don't know how to build such a system. I'm not afraid, and I don't think you should be afraid, Yves, because we are very, very far away from that. We make progress, making speech recognition better, making image recognition better, but we are making hardly any progress towards general AI.

Yves Mulkers
What do you see? People are frightened by this. They say: we can't see what the program is doing, what the expected result is. I think the fear comes from that. You explained a bit about explainability, and that's part of what will help people overcome it and adopt, or accept, these ways of working with hardware and software. What do you see that could help bring that acceptance to the people working with, let's call it, an AI?

Wouter Denayer
Indeed, we are making progress, and smallish AI systems are finding their way into the workplace. We are being helped; they are a kind of helpers. But people are a bit wary, or maybe not afraid, but they don't really understand. To be able to really make full use of AI systems, in the workplace or outside it for that matter, we need to trust the system. And that is the keyword: trust. And we don't trust things that we don't understand.

First of all, one of the things I do is explain where we really are, why you should not be afraid of the evil robots, and what we can really do. That's one part. The second part is: if we accept that, then we still need to understand why a certain decision was made. Take, for example, a loan. If I go online to my bank's website, I type in my information and I ask them: will you give me this loan? And the bank, on their side, wants to assess the risk. That's what they do; they want to make sure that you pay back your loan. If, for example, they say "dear client, we cannot give you this loan", then I want to understand why. It's plain and simple: I don't just accept "no", and they cannot say "the machine says no". I need to understand.

The problem with AI systems today is that we use neural networks, inspired by the brain, that are essentially a black box. They do their magic, and they work pretty well, but we cannot look inside of them. We cannot understand what's going on in that black box. We give an input: this is my file, I make so much money, this is my age, I'm married or not, or whatever. And the output is: yes, you get the loan, or no, you cannot get the loan. Essentially, up to now, we didn't understand what was going on in between. We knew the system was correct in 99.8% of the cases, which means it's wrong in 2 out of 1000 cases. We could not understand or see inside the system, and that is problematic, because if I don't understand why something is decided, I cannot change anything about it. I cannot see if it is fair: is that the correct decision, or is the system making a mistake in this case?

Is there maybe some bias going on, maybe some sexism? That is always possible; maybe the system was not trained very well. It happens, and there are very well known published cases about this. So essentially it comes down to understanding. If I don't understand the reasoning of the system, I will not trust the system. It is very simple. As a client, I am disappointed because the bank cannot explain to me why the loan was denied. For the bank itself it is a problem, because now they have an upset or angry customer. And the regulator will also not be amused, because the regulator must make sure that the bank is fair, that it is a fair business and is not, for example, biased. For all these different cases it is necessary to look inside, to explain, and that's where we use the word "explainability": the ability to explain, to look inside of AI.

Until very recently, this was really problematic, and now we are making progress. For example, within IBM Research we're able to look inside the neural networks and translate a complex decision into simpler, understandable parameters: make me understand that maybe my income was not high enough, or maybe there was just not enough money in my account, or maybe I should have my salary deposited into an account at this bank and not at another bank. Once I understand, I can also act, and the bank can give me advice: please save a bit more money before you ask for the loan, or please wait a bit longer, or please put your salary in our account. We can give advice for this problem, and we can go from a no-decision to a yes-decision and a happy customer.
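The loan example can be made concrete with a minimal sketch of one simple explainability technique: treat the model as a black box, replace one input feature at a time with a neutral baseline, and see whether the decision flips. Everything here is hypothetical, the toy scoring rule, the feature names and the thresholds alike; it is not a real bank model or IBM's actual tooling, just an illustration of the idea.

```python
# Toy black-box loan model plus a crude perturbation-based explainer.
# All features, weights, and thresholds are invented for illustration.

def loan_model(applicant):
    # Stand-in for an opaque neural network: some hidden scoring rule.
    score = (applicant["income"] / 1000
             + 5 * applicant["salary_at_bank"]   # 1 if salary lands at this bank
             + applicant["savings"] / 2000)
    return score >= 50  # True = loan approved

def explain(applicant, baselines):
    """For each feature, swap in a neutral baseline value and check
    whether the decision flips - a rough measure of that feature's
    influence on this particular decision."""
    original = loan_model(applicant)
    influences = {}
    for feature, baseline in baselines.items():
        perturbed = dict(applicant)
        perturbed[feature] = baseline
        influences[feature] = (loan_model(perturbed) != original)
    return original, influences

applicant = {"income": 45000, "salary_at_bank": 1, "savings": 6000}
baselines = {"income": 0, "salary_at_bank": 0, "savings": 0}
decision, influences = explain(applicant, baselines)
print(decision)    # True: loan approved
print(influences)  # income and salary_at_bank were decisive, savings was not
```

Real explainability methods (for instance LIME or SHAP-style attribution) are far more refined, but the advice Wouter describes, "put your salary in our account", is exactly this kind of output: the feature whose change flips the decision.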

Yves Mulkers
Not only do you get the bylaws and the policies and have to make sense of them yourself, but you understand, in a conversational way, why the decision was taken. What do you see helping in the future, in the next steps, or let's call it the next AI evolution? What are the other aspects that could help adoption?

Wouter Denayer
One of the things we see is that AI systems, neural network systems, are based on statistics. They make mistakes. It's a bit like the human brain: it is very optimized, and we are all proud of our brains, as we should be, but still, we make mistakes a lot of the time. So we want to make sure that these statistical systems, these neural networks, don't make the same mistakes, and we want to infuse common sense, some of the common-sense rules of the world. For example, in visual recognition, many mistakes are made that a human would never, ever make. There is a well-known example of a stop sign that a self-driving car recognizes not as a stop sign but as, for example, a speed limit sign, because of a small manipulation: some people put black stickers in just the right places. They fooled the AI into recognizing the stop sign as a speed limit sign, which is a totally different sign, I'm sure you agree, and which is problematic. If I'm driving my self-driving car, this is a real problem. And at the same time, a human would never, ever make that mistake. So we need to find ways to act against these attacks and to make the systems more robust. That's one thing: getting more robust. The second thing is using common sense, and common sense is knowledge of the world, knowing that certain things are impossible, that they cannot just be like that.
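The stop-sign story can be illustrated with a deliberately tiny, hypothetical example: a linear classifier on a four-"pixel" image, flipped by a small nudge to each pixel in the direction that most hurts the correct class. This is the intuition behind gradient-based attacks such as FGSM; a real vision model is vastly more complex, and the numbers below are invented, but the mechanics are the same.

```python
# Hypothetical toy, not a real vision model: a linear "sign classifier"
# on four pixel values, and a small adversarial perturbation ("stickers")
# that pushes the image across the decision boundary.

weights = [0.9, -0.5, 0.3, -0.7]   # positive total score means "stop sign"
bias = 0.0

def classify(pixels):
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "stop" if score > 0 else "speed limit"

def sign(x):
    return (x > 0) - (x < 0)

image = [0.8, 0.2, 0.6, 0.2]       # correctly classified as a stop sign
epsilon = 0.3                      # small per-pixel change
# Nudge every pixel in the direction that lowers the "stop" score.
attacked = [p - epsilon * sign(w) for p, w in zip(image, weights)]

print(classify(image), "->", classify(attacked))  # stop -> speed limit
```

A human looking at `attacked` would still see essentially the same image; the classifier does not, because small coordinated changes add up through the weights. Defenses (adversarial training, input smoothing) target exactly this gap.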

Common sense is a problem, because for a long time people thought that if you train a big neural network, a big AI system, then it will learn common sense by itself. We feed it, for example, all the text that is available on the internet, and then it will also learn common sense. It turns out that it doesn't. An example I sometimes use is the colour of sheep. If I ask you, Yves, what is the colour of a sheep? You say white, no? White, yes. Well, if you look at the text evidence, and if you ask the same question to an AI system, it will say black. And the reason, it's crazy, is that in the literature we don't say that sheep are white, because of course they are white. It's not something we have to say, so it's not present in the text. And of course we know the expression "black sheep"; it is a well-known expression. That's what the AI is taking into account.

So, statistically, for the AI, sheep are black. This doesn't make any sense, but it clearly shows you there is something missing, and so we need to bring common sense into the system. It's not working out through statistics or through reading massive amounts of text. It's not working out through making the AI systems bigger and bigger and bigger. We need something else. And that is interesting: to solve that problem, we are basically going back to the beginning. I talked about Alan Turing and his chess program. To play chess you need a number of rules.

What Alan Turing did was put the rules into the system; a human put the rules into the system. It's called symbolic AI because it works with symbols, in this case chess pieces, and we tell the computer what the rules are: this chess piece can only move this way, and that piece can only move that way. We give it knowledge of the world. We also give it a big database of all the previous historical chess games and say: you can reason with that. We feed it a lot of information. That's how we worked in the past. Then neural networks came, and we said: creating all these rules is too much work, the rules change all the time, we cannot do it anymore, let's learn everything from the ground up. That turns out to also have its challenges. Now we combine both of them. We take the best of the neural networks, statistical or probabilistic learning from the ground up, and we bring back the rule systems, the symbolic systems. That is the next avenue.

The next wave in AI now is basically bringing these two paths together in the best possible way to solve the common-sense problem. By the way, it also solves another problem that we have today: we need a lot of examples. If we want to recognize cats and dogs and other animals, we need a thousand examples of a cat, a thousand pictures of a dog, and so forth. It's time-consuming to gather all these examples to feed into the system, and in some cases it's also not possible. What we want is an AI that can learn with fewer examples. As a human, I don't need to see a thousand dogs to recognize a dog; a child can do this from one or two examples. This is inspiring for us: how can we build a system based on the symbolic approach of the past combined with the neural network approach of today, to build systems that can learn with less data? This is very promising. We're putting a lot of attention on that path now to move forward towards general AI, this holy grail again. So yeah, that's the new movement.

Yves Mulkers
That means that with less data you become better, or you create the right context and the right connections between things. I'm just thinking of how my mind sometimes jumps from one thought to another, connecting things. You get these theories, and sometimes you get frightened as well, and you have to step back and reflect: hey, is this right, what I got from the input and the associations I'm making? That kind of rule base will become a combination of human common sense and common sense that the machine is learning.

Wouter Denayer
There is a big history and a big body of work that was built up over the last 50 years around symbolic AI. There are many problems that we can solve in the traditional symbolic way, and we can now use all of these things that we have learned and combine them with the new neural approach to AI. For example, say we want to solve a puzzle, one of these sliding puzzles where you have nine images and you have to slide them around to make a dinosaur, like the ones for little five-year-old kids, like you got with Tic Tac in the past.

This problem is very easily solved. It's a planning problem: you have to plan your moves, first I move this piece here and then that piece there. It is easily solved if I give all the information to the AI system; then I can use a classical planner. I tell it: these are the pieces that you can move, these are the moves that you can make. You can only slide a piece up, down, left or right, and only into an empty space next to it. If I give the computer all that information, it is really an absurdly simple problem to solve. But suppose I don't give it any of that. I just give it the unsolved puzzle, I tell it "this is the solution, but you figure it out yourself", and I show it only a very few examples of a slide move, without telling it anything else. That's an extremely hard problem for a computer to solve, because it doesn't have any context; it has to learn everything by itself: what the pieces are, what the possible actions are, everything. That's quite amazing. IBM Research proposed a solution to that, combining symbolic AI with neural AI and the strengths of both, where with a very, very small number of examples the AI is able to solve that puzzle, which is really quite amazing. An interesting thing is that once it has figured out what the symbols are, we can use a classical, traditional planner algorithm. So we combine the best of both worlds; we bring back into the system all that we learned in the last fifty years. It's quite amazing.
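The "easy" half of that story, the classical planner that works once the rules are given up front, can be sketched in a few lines: breadth-first search over sliding-puzzle board states. The encoding below (tuples with 0 as the empty slot) is the usual textbook formulation, not IBM Research's actual system.

```python
# A minimal classical planner for the sliding puzzle, assuming the rules
# (board layout, legal moves) are handed to the system. Breadth-first
# search over board states; 0 marks the empty slot.

from collections import deque

def neighbours(state, width=3):
    """All states reachable by sliding one tile into the empty slot."""
    blank = state.index(0)
    row, col = divmod(blank, width)
    for drow, dcol in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + drow, col + dcol
        if 0 <= r < width and 0 <= c < width:
            swap = r * width + c
            nxt = list(state)
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            yield tuple(nxt)

def plan(start, goal):
    """Shortest sequence of board states from start to goal (BFS)."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # two slides away from solved
solution = plan(start, goal)
print(len(solution) - 1)  # number of moves: 2
```

The hard part Wouter describes is learning what `neighbours` encodes, the pieces and the legal moves, from a handful of raw examples; once those symbols are recovered, a classical search like this takes over.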

Yves Mulkers
That makes me think of something I read about evolution theory. If you rely on natural selection alone, it would take much longer than the evolution we actually observed; the idea now is that some shortcuts have been taken. It relates to what you just told me about symbolic AI, and putting that together with the classical solution.

Wouter Denayer
It's a good analogy, Yves, indeed. We as humans, when we're born, are not empty; our brain is not empty. We get the structure in our brain from millennia of evolution, since the beginning of time. The brain has a certain structure, that's nature, that's what we're born with, and then we learn on top of that. And you're correct, we're bringing that back: we bring back structure, and then we learn on top of that. A quite smart approach.

Yves Mulkers
Makes me think of my science lessons back in the day, where the teacher said: all we do as humans is copy nature and try to put that into some type of solution. Thinking back to those school days, a lot of knowledge is going to be needed in the future to support and build that next-gen AI. How do you see AI helping to bridge that skills shortage for AI? Is there a kind of solution to it? What do you see, Wouter?

Wouter Denayer
What we are trying to do here is use AI to build AI. Let me explain. When you build an AI system today, it is quite a lot of work, and you need very deep expertise. You need to find your data, you have to clean your data, you have to define the neural network models that you will be using. You have to tune them, you have to train them, you have to try again and again. And it's more of an art than a science.

There's a lot of evolution; it's very specialized work. Sometimes it can take months. If top performance is really important in your use case, it can take a large team quite a bit of time and resources, compute resources, to actually come to a satisfying solution. Now, what if we could have an AI that actually helps you look for the optimal solution? For example, if I do visual recognition, there are certain neural network patterns, think of them as connections in your brain: I can do it this way, I can do it another way. There are so many different ways I can do this. What if I could have an AI that figures out, that looks for, the optimal way? It helps me tune, it tries a thousand different ways, and it tells me: probably this one is the best, because I tried all of them. It compresses the time that I need as a data scientist to figure out all these different options. Otherwise what I would do is always use my favourite option, the one I know and used in the past. But of course things evolve, and there are new ways and new tuning parameters; I cannot be aware of all of that.

What if you could have a tool that is aware of all that and can just test all of them and tune for you? We are at a point where these automated systems, Auto AI, we call it, are almost as good as the best human data scientists could possibly be. But instead of taking months, they can do it in an hour. That's just mind-boggling. That is changing the game. It gives data scientists a powerful tool that really helps them accelerate, and helps you accelerate as a business the time to market of the best possible solution. That's Auto AI: it helps data scientists figure out much more quickly what the optimal way is to solve this particular problem. This is exploding now. At IBM we put it in our cloud-based tooling, which makes it very easy for you to do, the same way we embed explainability. I think we are in the next step of artificial intelligence development. In the past, people were playing around, working with a lot of libraries, figuring it out by themselves and building their own pipelines; that's how development always started in the past. It's new, it's adventurous, we bring together everything that we need. At some point you need to professionalize.
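The core loop behind such automated model search can be sketched in miniature: generate many candidate configurations, score each one, keep the best. The `evaluate` function below is a deliberately simplified stand-in for "train a model and measure validation accuracy", with a hidden sweet spot the search has to find; none of this is IBM's actual Auto AI, just the underlying idea.

```python
# Toy random-search sketch of automated hyperparameter tuning.
# evaluate() is a hypothetical stand-in for real training + validation.

import random

def evaluate(config):
    # Peak "accuracy" sits at learning_rate ~ 0.01 and depth ~ 6;
    # moving away from either reduces the score.
    return (1.0
            - abs(config["learning_rate"] - 0.01) * 40
            - abs(config["depth"] - 6) * 0.02)

def auto_search(trials=500, seed=0):
    """Try many random configurations; return the best one found."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {
            "learning_rate": rng.uniform(0.0001, 0.1),
            "depth": rng.randint(2, 12),
        }
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

config, score = auto_search()
print(config, round(score, 3))
```

Production systems refine this loop with smarter strategies (Bayesian optimization, early stopping, full pipeline search over feature engineering and model choice), but the data scientist's benefit is the same: hundreds of options explored automatically instead of by hand.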

We are now at the point where this AI pipeline, like a DevOps for AI, is being professionalized. That means that I control everything: I control my data, I control all the steps, I can have this Auto AI play a role, I can embed explainability and all those things, I can check for bias in my data. I can do it in a controlled way without jeopardizing the creativity of the data scientists, because we don't want to do that; they know their domain, they know what they're doing, and we need to keep that freedom. But as a company you also want governance, you also want control. We try to bring all of that together. That's the next wave: we are really professionalizing now, which is really a good thing.

Yves Mulkers
That's the struggle I picked up from data science projects. I see it in AI, and we saw it back in the day, even with development and typically in business intelligence. So history repeats itself. But do you also see AI in development for people, where it guides you in which curriculum you can take to develop your path? Or where you say: this is the knowledge I want to achieve, whereas now only a predefined curriculum exists. Could it be helpful if the AI looks outside, on the internet, and finds: this is a piece of information that is helpful for you? I already see it in the content curation platforms that I'm using: the AI supports me in selecting the content relevant to me, based on the topics I say I like to follow. So where do you see that evolution going?

Wouter Denayer
You see that as well in learning platforms, for example. Especially now, I would say, in the special situation we are all in with COVID-19, people spend a bit more time on education. But where do you start? It's very hard. There are these massive libraries of online trainings; that's all well and good, and the quality is very high, but what do I do first? I have a certain skill set. I also have certain expectations from my employer, skills that I should have. I also have a career ambition: I am here now, but I want to go there. How do I bring all of this together in a learning plan, a learning plan that is customised for me? Within IBM, we have been using a platform we call YourLearning, which basically tries to do that. It's curated content: we know what the content is about and what the skill level of the content is. But the platform also knows about me: it knows who I am, my position in the company, what kind of projects I work on, what my specialties are, my certification level and my profession. I am an architect, but I could also be a salesperson, a project manager or a consultant. It knows all of this about me, and I can also tell it what my ambitions are. And it knows all these things about the content. So it's clear we have a matching problem. That's what we start to see.

When I come to the IBM YourLearning platform, I am proposed certain learning curricula. Some are mandatory, things I have to do; others are optional; some help me towards my next certification level. We keep it up to date. We see a lot of evolution, and we have also pushed this platform outside: we made it available to clients. In Belgium, KBC is using it, implementing the IBM learning platform. The nice thing is that it's not limited to your own content: we also bring in content from Coursera and from Udemy. It is all controlled by your company, and they track what you do and how many hours you spend on learning. I have a learning target, a minimum number of hours I should spend, and you can always overachieve.

It's really changing the aspect of learning. Whereas in the past the catalog was central, the library of learning, now the person is central. I am central to the whole learning experience. But my employer has plans too: they want to make sure that on the whole they have a sufficient number of people with certain skills at certain levels, and they also want to track that, because the marketplace is a tough place and we have to be competitive.


Yves Mulkers

Data Strategist at 7wData

Yves is a Data Architect, specialised in Data Integration. He has a wide focus and domain expertise on All Things Data. His skillset ranges from the Bits and Bytes up to the strategic level on how to be competitive with Data and how to optimise business processes.
