What Technology Infrastructure Do You Need For Artificial Intelligence (AI)
- by 7wData
When I work with companies and organizations that are interested in Artificial Intelligence and what it can do for them, some of the most frequent questions I am asked are:
· What technology do we need to start working with AI?
· What are our infrastructure needs?
· How do we need to rethink our current approach to information technology?
Recently I got the chance to sit down and talk to Ivo Koerner, VP of IBM Systems, and I took the opportunity to get his views on the subject.
Part of the issue is that because AI as we know it today – which generally means machine learning, deep learning, and neural networks – is such a new and fast-developing field, there are as yet no hard-and-fast rules about how to do it.
And because, until very recently, it was generally only very large, well-resourced companies that could get involved in the "AI gold rush," there aren't many examples out there to learn from!
During our conversation, Ivo told me that there are two primary requirements for any organization wanting to move towards a more automated and intelligent approach to doing business and running their essential processes and operations. Firstly, there is the need for enough (and the correct type of) compute power to carry out the high-speed, high-volume number-crunching that makes AI happen.
The second requirement – and this is the one that generally needs the most careful consideration – is the data itself. AI today usually means machine learning – literally computers that are able to learn for themselves and become increasingly good at carrying out tasks. This learning requires data – usually lots of it, and the data must be accurate, up-to-date, and easily accessible.
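To make the idea of "computers that learn from data" concrete, here is a minimal, illustrative sketch in plain Python – not any particular vendor's stack – that fits a simple linear model to data using closed-form least squares. The function name and data are invented for illustration; real machine learning systems apply the same principle to far larger and messier datasets, which is why data quality and accessibility matter so much.

```python
# Minimal sketch of "learning from data": fit y ≈ w*x + b by least squares.
# Illustrative only – production ML frameworks generalize this idea to
# millions of examples and millions of parameters.

def fit_line(xs, ys):
    """Return slope w and intercept b minimizing squared error on (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x, drive the slope estimate.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Clean, accurate training data following y = 2x + 1:
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.0, 5.0, 7.0, 9.0, 11.0]
w, b = fit_line(xs, ys)
print(w, b)  # → 2.0 1.0
```

The key point the sketch illustrates: the model's quality is entirely a function of the data it is given. Feed it stale or inaccurate values and the learned parameters degrade accordingly.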
Discussing the compute requirements first, Ivo explained that the hardware typically involves two kinds of data-processing technology – CPUs (central processing units) and GPUs (graphics processing units). A CPU is the regular, everyday computer chip that can simply be thought of as the "brains" of any computer. It carries out logical operations and simple mathematics on the data it is fed – and in the case of modern CPUs, it does this very, very quickly.
GPUs are more specialized hardware, originally designed to carry out the complex mathematical operations needed to generate high-end computer imagery of the type seen in movies and video games. Over the last decade, it became apparent that these specialized maths chips are also highly suitable for machine learning workloads.
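The reason GPUs suit machine learning so well is that the number-crunching at the heart of neural networks is overwhelmingly matrix arithmetic, and every cell of a matrix product can be computed independently – exactly the kind of work a GPU's thousands of cores can do in parallel. A pure-Python sketch of the operation (illustrative only; frameworks such as TensorFlow or PyTorch dispatch this same computation to GPU hardware for matrices with millions of entries):

```python
# Matrix multiplication: the core operation GPUs accelerate in deep learning.
# Each output cell is an independent dot product, which is why thousands of
# GPU cores can compute them simultaneously. Pure-Python sketch for clarity.

def matmul(a, b):
    """Multiply matrix a (m×k) by matrix b (k×n), given as lists of rows."""
    k = len(b)        # shared inner dimension
    n = len(b[0])     # columns of the result
    return [[sum(row[i] * b[i][j] for i in range(k)) for j in range(n)]
            for row in a]

a = [[1, 2],
     [3, 4]]
b = [[5, 6],
     [7, 8]]
print(matmul(a, b))  # → [[19, 22], [43, 50]]
```

On a CPU these dot products run largely one after another; on a GPU they run side by side, which is where the orders-of-magnitude speedup for training comes from.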
For both your compute and your data storage requirements, however, an important decision needs to be made early on in the process. Are you going to host all of the infrastructure yourself, or are you going to rely on one of the readily-available cloud-based platform-as-a-service (PaaS) providers?
Relying on a cloud provider (three examples being IBM Cloud, Amazon Web Services, or Google Cloud) may seem the obvious way to go. Initial setup costs are likely to be lower, and the platforms are built to scale as your company requires – while you pay by the hour, or by volume of data, for the service you use.