Artificial Intelligence 13 Dec, 2024
Non-Technical Guide to Machine Learning and AI
16 Nov, 2017
For as long as new technologies have been invented, someone somewhere has freaked out. Fire? The world will end! The wheel? The sky is surely falling! The printing press? The devil’s work and a sign of the end times! Computers? The machines are going to kill us all!
By and large, we come to appreciate these new inventions, even if we did kick and scream at first. Yet, when it comes to computers, the anxiety hasn’t ever fully abated. Make no mistake, we love them. In addition to the ones we use at home, we carry them with us everywhere as more and more people own smartphones, which essentially puts a very tiny computer in your pocket. In fact, we’re so attached to them that we start feeling waves of anxiety when our battery gets low and we can’t find an outlet to charge it.
The Persistent Unease: Why AI and Machine Learning Still Make Us Nervous
Yet, alongside our deep attachment to our relatively new ability to look up obscure facts wherever we are, come concerns. Sure, you’re probably spending too much time looking at a screen, but that’s not what we mean. Rather, one issue comes up over and over and has yet to be satisfactorily resolved: machine learning and AI (artificial intelligence).
Many times when we talk about AI (artificial intelligence) or ML (machine learning), it’s mind-numbingly dull. Oh good, more esoteric statistical information that you can(not) break out at a dinner party. We’re not going to do that here because, a) we like you, and b) these are real issues that impact all of us. That means you, even if you’re the least tech-savvy person on the planet, need to understand it. Why? Because at its core, conversations about machine learning, which isn’t going away anytime soon, are dealing with questions about what it means to be a human. That’s right, technology just got philosophical.
We’ll start by understanding what machine learning and artificial intelligence are and what they’re used for. Then, we’ll take a look at the key tasks given over to machine learning and the types of machine learning that are out there. Finally, we’ll dive into a brief look at the three most common AI models and why it’s important that we know about and understand them.
Machine Learning and Artificial Intelligence: What Are They and What’s the Difference?
Machine learning is a field of study. In it, by using the principles of computer science, statistical models are generated. These models typically get used for two things:
Inference – discovering patterns in data
Prediction – making predictions about future data based on past data
While no universally accepted distinction exists between ML and AI, artificial intelligence typically involves programming computers to make decisions, while machine learning primarily focuses on making predictions. Because the two are so interconnected, people often treat them as the same when discussed in a non-technical sense.
An iPhone gives us a very simple way to see how ML and AI work. When you first set your phone up and go to type out a message, the predictive text is a mixed bag – sure, it’s getting a lot right, but you definitely weren’t trying to write “I ducking love Game of Thrones”. Over time, as you use your phone, the predictive text gets more accurate or, put another way, your tiny pocket computer is demonstrating machine learning in a very real way. Your Spotify recommendations are another example of ML doing its thing.
Machine Learning’s Three Key Tasks
Now that we have a basic understanding of machine learning and artificial intelligence and what they are, let’s take a look at the kinds of tasks that we frequently see them put to: classification, regression, and ranking.
Classification can be seen in software and tools that deal with image recognition. With this, you can input the image and then predict the primary subject matter of said image.
Regression deals with predicting the real numerical value(s) from an input, such as predicting the future value of a home.
Ranking is fairly straightforward. This is about predicting which items are “best” in a given setting, such as filtering search results to yield the most relevant ones for a given query.
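For readers who like seeing things in code, the three tasks can be sketched in a few lines of plain Python. Everything here is a toy: the functions, thresholds, and numbers are invented for illustration, not real models.

```python
# Classification: input -> a category label.
# A real model would learn from labeled images; this rule is made up.
def classify_image(pixels):
    return "cat" if sum(pixels) > 100 else "dog"

# Regression: input -> a real numerical value.
# A real model would be fit to past sales data; this line is invented.
def predict_home_value(square_feet):
    return 150 * square_feet + 50_000

# Ranking: inputs -> ordered from most to least relevant.
# Here "relevance" is just how often the query appears in each result.
def rank_results(results, query):
    return sorted(results, key=lambda r: r.lower().count(query.lower()),
                  reverse=True)

print(classify_image([10, 20, 90]))   # a category
print(predict_home_value(1_200))      # a number
print(rank_results(["ml intro", "ml ml basics", "cooking"], "ml"))
```

The point isn’t the arithmetic – it’s the shape of the answer each task produces: a label, a number, or an ordering.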
Model Selection and Understanding Data
Let’s take a look at the kinds of models that arise from all of this.
Once you’ve used inference or prediction to get your data, you need to feed it into a machine learning model. No, not the Cindy Crawford type, we mean the kind of model that is a representation of how the world (theoretically) works. You feed in functions estimated from datasets, which gives you a model that can be used to predict future data.
Functions are estimated using algorithms. In this context, an algorithm is best understood as being like a recipe, you have to follow certain steps in a certain order to get to your desired result.
All types of ML require the building blocks of models and algorithms in order to make evolving predictions and inferences about the world. Broadly speaking, there are two types of machine learning but in order to discuss those, we need to understand a few key concepts.
Because any model requires some type of data input, it’s important to understand the types of data used. First, we have training data. This data comes from past observations and presents itself in an easy-to-understand way, such as an Excel spreadsheet where each row represents a datum (or, more simply put, an individual observation) and each column represents a predictor (again, more simply, this means different features). Training data also involves the response variable and this is the column that shows the data you’re hoping to predict. This is also known as the dependent variable or the output variable.
The vast majority of models accept this kind of data and it’s called structured data.
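To make “rows are observations, columns are predictors, one column is the response” concrete, here is a tiny structured dataset sketched in plain Python. The column names and prices are made up purely for illustration:

```python
# Each dict is one row (a datum / one observation).
# "square_feet" and "bedrooms" are predictors; "price" is the response
# variable -- the column we hope to predict.
training_data = [
    {"square_feet": 1000, "bedrooms": 2, "price": 200_000},
    {"square_feet": 1500, "bedrooms": 3, "price": 290_000},
    {"square_feet": 2000, "bedrooms": 4, "price": 380_000},
]

predictors = ["square_feet", "bedrooms"]
response = "price"

for row in training_data:
    features = {name: row[name] for name in predictors}
    print(features, "->", row[response])
```

In practice this would live in a spreadsheet or a data-frame library, but the structure is the same: predictors in, response out.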
Even Machines Have Different Learning Styles
With us so far? Good, because now we’re going to take that information and talk about how statistical models are constructed and the two key types of machine learning that enable this.
Supervised Learning is the more common of the two learning styles. With supervised learning, you use data that includes a response variable to make predictions (remember when we talked about that up above?).
Unsupervised Learning, on the other hand, doesn’t use a response variable and is mainly used to find interesting patterns – this is the inference we discussed earlier.
Although unsupervised learning is important, it’s less common (and much more difficult to explain) so, from here out, we’re largely going to restrict our discussion to supervised learning.
Building a Statistical Model and Why You Should Care
Once you have your dataset and you think there’s probably some relationship between the predictors and the response, you can start building a statistical model. First, you need to pick which model to use. But before we get to that, here’s why non-technical people should care about this concept: a poorly chosen model will produce inaccurate predictions, which in turn lead to ill-informed policies and actions.
Generally, there are two categories of models:
Regression Models are used when the response variable is continuous. Put another way, its values can all be placed and ordered on a number line.
Classification Models are used for data that doesn’t have a numerical ordering (alternately called categorical data).
To pick between these you need to first determine whether your response variable is quantitative or categorical.
Regression Model: Linear Regression
You use a linear regression model when you believe you can describe the relationship between your features and your response with a straight (linear) line.
To start, you teach your model the dataset you are working from because the model learns by applying an algorithm to this information. It will then produce a line that can be used for prediction.
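Here is what that looks like under the hood, using the classic least-squares formula for fitting a line. The data points below are invented for illustration; a real dataset would come from actual observations:

```python
# Toy dataset: five (x, y) observations that roughly follow a line.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates for simple linear regression:
# slope = sum((x - x_mean)(y - y_mean)) / sum((x - x_mean)^2)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Use the fitted line to predict y for a new x."""
    return slope * x + intercept

print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
print("prediction for x = 6:", round(predict(6.0), 2))
```

This is the whole idea of “training”: the algorithm turns past observations into a line, and the line turns new inputs into predictions.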
There are a few things well worth remembering here. The first is that as intelligent as these models may be, they can’t tell whether the training data they were given was any good. This means that if you include unrepresentative, biased, or even simply incorrect training data, you’ll get equally faulty and biased predictions. Depending on what data you’re working from, this can have significant real-world ramifications. Machine learning models learn from the past and they assume that the future will behave similarly, which we, as humans, know isn’t necessarily true.
The lesson we should take from this is that as amazing as machine learning is, it’s not infallible and it’s not all-knowing (not yet at least!).
Classification Model: KNN
K-Nearest Neighbors (KNN) is a conceptually intuitive classification algorithm. Imagine that you’re trying to determine the rainfall in a given area to create an accurate dataset for local governments. This data would help them better prepare for flooding and other natural disasters. You would start by identifying nearby areas and the rainfall they experience (or don’t). The goal would be to train the classification model to predict rainfall in different areas.
With this model, the phrase “nearest neighbors” refers to the already classified dataset. The “K” in the name represents the number of neighbors you consider. This is used when determining how to classify a test point. The K-value that you choose will depend entirely on the problem you’re trying to solve. Typically, data scientists have to try many different values in order to see which leads to the most accurate classifications possible. It’s worth remembering that the model doesn’t necessarily learn K through training; instead, we explicitly set it as a special parameter. These parameters are called hyperparameters.
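A bare-bones version of KNN fits in a dozen lines of Python. The coordinates and labels below are made up (think of each point as a nearby area with a known rainfall category), and notice that K is passed in by us rather than learned:

```python
from collections import Counter

def knn_classify(train, point, k=3):
    """Classify `point` by majority vote among its k nearest neighbors.
    `train` is a list of ((x, y), label) pairs; k is a hyperparameter."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], point))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

# Invented "neighboring areas" with known rainfall outcomes.
train = [((0, 0), "rain"), ((1, 0), "rain"), ((0, 1), "rain"),
         ((5, 5), "dry"), ((6, 5), "dry"), ((5, 6), "dry")]

print(knn_classify(train, (0.5, 0.5), k=3))  # surrounded by rainy areas
print(knn_classify(train, (5.5, 5.5), k=3))  # surrounded by dry areas
```

If you changed `k=3` to `k=6` here, every vote would end in a tie between all six points – a small demonstration of why the choice of K matters and why data scientists experiment with different values.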
Hyperparameters are important to know about because most machine learning libraries ship with default hyperparameters, so if you forget to set one, the model will quietly use the default, and this can have significant implications for its results.
Artificial Neural Nets
People can say a lot about artificial neural nets. They are complex models used in nearly every technology that requires machine learning. For our purposes, we’re going to keep it simple and brief.
Artificial neural nets are biologically inspired models. In them, collections of interconnected units or nodes (‘neurons’) work together to transform input data into output data. Every connection between nodes represents a different parameter of the model, which means a neural net with millions of units will have millions upon millions of parameters. While this can make the model incredibly powerful, it also invites a common criticism: neural nets are often seen as too difficult to interpret.
Where a simple linear regression model is vastly easier to understand, it’s not as powerful as the neural net models. Whether or not the tradeoff is worth it, is down to what you’re trying to achieve.
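To keep our promise of staying simple and brief, here is a deliberately tiny neural net: two inputs, two hidden neurons, one output. Every weight and bias below is one of the “parameters” the text mentions, and all of the numbers are arbitrary placeholders (a real network would learn them from data):

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

def forward(x1, x2):
    # Hidden layer: each neuron weighs both inputs and adds a bias.
    # The weights (0.5, -0.3, ...) are invented, not trained.
    h1 = sigmoid(0.5 * x1 - 0.3 * x2 + 0.1)
    h2 = sigmoid(-0.2 * x1 + 0.8 * x2 - 0.1)
    # Output neuron combines the two hidden activations.
    return sigmoid(1.0 * h1 + 1.0 * h2 - 0.5)

print(forward(1.0, 0.0))  # some value between 0 and 1
```

Even this toy net has nine parameters (six weights and three biases). Scale the same structure up to millions of units and you can see both where the power comes from and why nobody can easily explain what any single parameter “means”.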
Why Should You Care?
Increasingly, we rely on technology and machines for nearly everything. It’s useful to understand the basics of how the things that we use daily actually work. Now you know that Spotify isn’t actually listening in on your conversations to gain information about you – it’s using machine learning.
More importantly, as machine learning advances, it’s replacing a lot of jobs. It’s also creating new jobs, as companies need people to maintain these data systems. However, in the process, many people are facing seismic changes in their professions. For example, consulting firm Opimas believes that artificial intelligence could eliminate nearly 230,000 financial services jobs by 2025. Opimas suggests that those who work in asset management will be hardest hit, with nearly 100,000 jobs likely to vanish. Why? Well, asset managers charge huge fees, so clients are already looking for ways to reduce those fees. Humans are fallible and simply can’t keep track of data at that scale. Machine learning gets the advantage here, as it can manage shockingly large datasets to provide increasingly accurate predictions.
If you’re a developer, understanding AI is key to whatever you do. If you’re a layperson, understanding AI is key to grasping the changes coming to our world and our professions. Machine learning can’t do everything, but it can do a lot. As such, we need to be knowledgeable about the things that are increasingly shaping the world we live in.
Jourdan Chohan
Jourdan Chohan is the Vice President of Product Strategy and Client Engagement at Cubix. He oversees various teams at Cubix with his focus on improving processes to enhance the client on-boarding and engagement process. He also helps to formulate strategies to help with new partnerships, affiliations, business development, and product development. Part of his role also focuses on implementing new tools, technologies, strategies, and processes to constantly optimize the development side of things to ensure a successful launch for all internal and consumer products.