
Exploring the Definitions of Artificial Intelligence (AI) and Machine Learning (ML): What Are the Key Differences and Considerations?


Written by Bob LoGalbo on March 30, 2023

The concept of intelligent machines entered popular culture largely as something to be distrusted, much like SkyNet (from the movie “The Terminator”), a self-aware and competitive threat to humanity. Yet intelligent machines have become ubiquitous in our everyday experiences, from voice control to image recognition, and society has at least come to trust that their utility can be leveraged. The terminology used to describe how they work, such as artificial intelligence (AI) and machine learning (ML), is rarely defined in everyday publications, even though these terms have had definitions in the statistical community for decades. Making those definitions commonly understood could go a long way toward demystifying intelligent machines and making them less intimidating and more accessible.

Machine Learning (ML) is the more direct of the two terms to define, and an example helps. Programmers often sit down to code already knowing the states the code will define, along with the boundaries and events that separate them. Transitioning between states requires conditions that are typically known a priori. When writing an if-then statement, for instance, the boundary conditions are generally known at the time of coding, and the actual data is applied against them at runtime.
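
As a minimal, purely illustrative sketch of this conventional approach (the passing-score threshold and function below are invented for the example, not drawn from any particular system), the boundary condition is written by the programmer before any data is seen, and runtime data is simply tested against it:

# Hypothetical rule-based example: the boundary condition (a passing score
# of 60) is chosen by the programmer a priori, before any data is seen.
PASSING_SCORE = 60  # boundary condition known at the time of coding

def grade_status(score):
    # The if-then boundary is fixed in advance; only the data varies at runtime.
    if score >= PASSING_SCORE:
        return "pass"
    return "fail"

# At runtime, the actual data is applied against the hand-coded boundary.
print(grade_status(72))  # -> pass
print(grade_status(48))  # -> fail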


With ML, things work in reverse order. The machine uses a priori data to derive the boundary conditions, whereas in the previous example the human programmer derives the necessary boundary conditions from the data[1]. What makes this such a remarkable invention is that complex, multi-dimensional decision boundaries can be derived, for example with Support Vector Machines (SVMs) or deep learning neural networks, both among the more sophisticated ML algorithms[2].
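
As a rough sketch of that reversed workflow, the example below uses a Support Vector Machine via the scikit-learn library (an assumption made for illustration; the article names no specific tool, and the toy data points are invented). The decision boundary is not written by the programmer at all; it is derived by the machine from labeled example data, and new observations are then classified against that learned boundary:

# Sketch: the machine derives the decision boundary from a priori labeled data.
from sklearn.svm import SVC

# Toy training data: feature pairs with known labels (invented for illustration).
X = [[1.0, 1.2], [1.5, 0.8], [0.9, 1.1],   # examples of class 0
     [3.2, 3.0], [3.5, 3.3], [2.9, 3.1]]   # examples of class 1
y = [0, 0, 0, 1, 1, 1]

# Fitting the SVM is where the boundary condition is learned rather than hand-coded.
model = SVC(kernel="linear")
model.fit(X, y)

# At runtime, new data is classified against the derived boundary.
print(model.predict([[1.1, 1.0], [3.4, 3.2]]))  # -> [0 1]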


The definition of AI, on the other hand, is not uniformly agreed upon. Most agree that AI solutions are at least partially composed of individual ML algorithms, but AI is generally more than that. Because different types of animals show varying degrees of intelligence, some AI experts specify in their peer-reviewed work which animal’s intelligence the term AI equates to. For example, deep learning used for image recognition closely resembles a dragonfly’s vision and its ability to react to specific images[3]. ChatGPT, on the other hand, is at a minimum an intriguing emulator of human dialogue, to say nothing of its ability to learn, write, code, and classify[4].

However, AI generally still lacks the ability to solve even basic physical problems. For example, if you watch the dog solve the spatial problem in this video, it becomes clear that AI is not really there yet today. The concept of a ‘world model,’ complete with the basics of animal-world interaction, is largely missing from AI today, including accessible examples like ChatGPT. According to Yann LeCun, a leading AI expert and one of a handful of Turing Award winners for AI[5], a world model would go a long way toward constructing AI solutions that approach mammalian problem-solving ability[6]. In other words, again using ChatGPT as the quintessential AI development of today, despite its exponential growth, the addition of plugins, and the leaps in deductive ability that have come with GPT-4, all of its learning thus far is derivational and based on trust: trust in the information it has been provided and to which it has access. Its knowledge is not experiential or learned firsthand, nor can it seek out information to derive its own hypotheses, test them, and develop analogies in response to its own experiments, all of which are minimum components of human (mammalian) intelligence.

The takeaway from this discussion is that ML is well defined by general consensus, but the definition of AI is arbitrary enough that a meaningful conversation about AI requires a good amount of time just to reach consensus, since no single definition among the many can be commonly assumed. This can be a good thing: taking the time to come to a common understanding of AI in any conversation should help eliminate the misunderstanding that a self-aware SkyNet is the fated end game for all AI.

[1] https://www.youtube.com/watch?v=mbyG85GZ0PI

[2] https://en.wikipedia.org/wiki/Decision_boundary

[3] https://spectrum.ieee.org/fast-efficient-neural-networks-copy-dragonfly-brains

[4] https://www.makeuseof.com/things-you-can-do-with-chatgpt/

[5] https://amturing.acm.org/byyear.cfm

[6] https://www.technologyreview.com/2022/06/24/1054817/yann-lecun-bold-new-vision-future-ai-deep-learning-meta/

