What exactly is machine learning in layman’s terms? How does machine learning work? If you’re asking these questions, you’re in the right place. Here, I will explain ML in very simple terms. So, let’s begin:
The term machine learning is often thrown around in discussions about AI, by several machine learning companies, and by me when I’m dealing with clients curious about artificial intelligence. But in the end, I’m always left asking myself: how well do they actually understand machine learning? For that matter, do they even know what AI is?
These are questions I think my prospective clients should be able to answer, and so should you, even if you’re not one of them.
Now I’ll try to answer some of these questions, while also addressing some misconceptions about the differences between machine learning programming and general software without AI.
By the time you finish reading this, I hope you will have a better understanding of AI and its crucial subset: machine learning.
But what is AI, though?
In order to know why AI matters to your business, you first need to know what artificial intelligence really is. In the simplest accurate terms, AI is anything, whether a machine or a program, that can simulate human intelligence.
If you pay enough attention, this definition will lead to two bigger questions, which are:
But what is human intelligence? And how can AI simulate it?
The fact that you’re able to read this makes you intelligent. Human intelligence is basically everything that enables us to think, question, contemplate and take action.
So once again, how can AI simulate this human intelligence?
To answer how AI simulates human intelligence, it’s important to understand the concept of algorithms. What is an algorithm? An algorithm can be defined simply as a set of rules that tells a software program what action to take. Algorithms normally work on the basis of “if” and “when” conditions. If your car’s sensors detect that the sky is dark, an algorithm instructs your car to turn on the headlights. That’s the kind of use case where an algorithm is needed.
In fact, computer programs themselves function on these same “if” and “when” principles. Algorithms tell a computer program what it needs to do to fulfill a specific objective.
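The headlight example above can be sketched as a plain “if” rule. This is a hypothetical illustration; the sensor scale and the threshold are invented, not taken from any real car system:

```python
# A hypothetical "if" rule for automatic headlights.
# The sensor scale and threshold are invented for illustration.

def should_turn_on_headlights(light_level: float) -> bool:
    """Return True when the ambient light reading falls below a threshold."""
    DARKNESS_THRESHOLD = 0.3  # assumed scale: 0.0 = pitch dark, 1.0 = bright daylight
    return light_level < DARKNESS_THRESHOLD

print(should_turn_on_headlights(0.1))  # dark evening: True
print(should_turn_on_headlights(0.9))  # sunny day: False
```

Note that this rule never changes on its own: a human wrote the threshold, and a human would have to rewrite it.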
So, do algorithms help simulate human intelligence?
If you think of an algorithm as something that can simulate human intelligence, you can hardly be blamed for assuming so. After all, humans themselves rely on certain nature-built algorithms to trigger and satisfy basic drives such as hunger and thirst.
But merely having such drives doesn’t make a human intelligent, right? The same applies to algorithms. Merely equipping a program with a set of algorithms isn’t enough to market a product as AI and hype its functions with terms such as “smart”. So no, algorithms alone DO NOT simulate human intelligence.
So what do algorithms really need to do in order to simulate human intelligence? The answer is: change.
Yes, change is what makes humans intelligent and smart. Change is what motivates a human to improve, and change is what could make an algorithm revise its rules to aim for a higher reward, or to achieve the same reward in a more efficient way.
So if algorithms alone cannot simulate human intelligence, and change is one of the ingredients of human intelligence, how can we apply the same to algorithms? How can we “change” algorithms?
Software algorithms can change in two ways: through human intervention, or through data and predictions. To put it in perspective, humans learn and gain intelligence from experience, mistakes, and memory. Computers and software don’t have such luxuries, so they have to rely on one of the two ways mentioned above.
But if software algorithms are modified through human intervention, we don’t call it AI; it’s simply a software program. To call something AI, the algorithms have to change through data and predictions. This is what we call machine learning programming.
Defining Machine Learning
I believe we’re now closer to defining what machine learning truly is. Machine learning can be defined as the state in which a software program is able to change its own algorithms using data, without human intervention, making predictions along the way until it reaches the desired result.
How does machine learning work?
Traditional algorithms rely solely on an “if” and “when” approach, and this approach powers nearly all the common software-driven devices we use today. But when the intention is to simulate human intelligence, the traditional approach simply doesn’t hold its ground.
This is why machine learning programming processes data through an error function, a loss function, and an objective function, which together enable a software program to learn by itself.
An objective function defines the goal of the program, i.e., what it needs to achieve. The other two functions use data to reduce the difference between the desired goal and the actual results produced by the program, by modifying the algorithm as needed.
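As a rough illustration of this feedback loop, here is a minimal sketch, assuming the simplest possible setup: one adjustable number, a squared-error loss, and an objective of producing the value 10. None of this comes from a real ML library; it only shows how measuring the error lets the program adjust itself:

```python
# Minimal sketch (invented example): a program that adjusts its own
# parameter to shrink the gap between the desired and actual result.

def loss(prediction: float, target: float) -> float:
    """Squared difference between the actual and the desired result."""
    return (prediction - target) ** 2

weight = 0.0         # the "rule" the program is allowed to change
target = 10.0        # the objective: produce this value
learning_rate = 0.1  # how big each adjustment step is

for step in range(100):
    prediction = weight * 1.0            # the program's current output
    error = prediction - target          # error function: the signed gap
    weight -= learning_rate * 2 * error  # nudge the rule to reduce the loss

print(round(weight, 2))  # close to 10.0 after repeated adjustment
```

Each pass through the loop measures the error, and the update nudges the program’s only “rule” (the weight) in the direction that shrinks the loss, which is exactly the self-correction described above.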
To put it in perspective, let’s say that you want a program to identify a comic book without your intervention. In a traditional algorithm-based approach, you’d code this program, in other words write an algorithm by hand, using what you know about the structure and style of a typical comic book.
However, in a machine learning based approach, the algorithm learns to identify comic books all by itself, picking up the style and structure of a typical comic book from the data on, say, a million comic books that you feed it.
While traditional algorithms are created and perform with the intention of producing the desired result, machine learning algorithms are created with the intent of having the program “learn” how to produce the desired result through the use of structured data.
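To make the contrast concrete, here is a hedged sketch that revisits the earlier headlight idea: instead of a human hard-coding the darkness threshold, the program derives it from a handful of labelled examples. The data and the midpoint rule are invented purely for illustration:

```python
# Hedged sketch: "learn" a rule from examples instead of hard-coding it.
# The labelled data below is invented for illustration.

# (light_level, headlights_should_be_on) pairs a human has labelled
examples = [(0.05, True), (0.2, True), (0.35, False), (0.6, False), (0.9, False)]

# Learn the threshold: the midpoint between the brightest "on" reading
# and the darkest "off" reading in the data.
brightest_on = max(level for level, on in examples if on)
darkest_off = min(level for level, on in examples if not on)
learned_threshold = (brightest_on + darkest_off) / 2

def headlights_on(light_level: float) -> bool:
    return light_level < learned_threshold

print(headlights_on(0.1))  # True, using a threshold derived from data
```

The resulting threshold was never written by a human; change the examples, and the learned rule changes with them.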
So, in a nutshell, machine learning algorithms use error, loss, and objective functions together with data, and keep predicting results until the difference between the produced results and the desired results is as close to zero as possible.