AI: The basics

Andrew Douglas
10 min read · Feb 7, 2020
Photo by Hitesh Choudhary on Unsplash

Ask 100 AI leaders to define artificial intelligence (“AI”), and you will get 100 different definitions. This is part of the challenge that businesses and individuals have when understanding and attempting to implement AI. How can we hope to successfully utilise a technology if we can’t first define and understand what it is that we are using?

The first concrete definition of AI came from cognitive scientist Marvin Minsky and computer scientist John McCarthy, who in the 1950s defined AI as:

…machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. … For the present purpose, the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.

This is a very broad and basic definition, but it allows us to set a standard definition for AI, as proposed by the original researchers. By this definition, AI is any action undertaken by a computer that, had it been undertaken by a human, would have been said to require “intelligence”. This does not dictate a level for the intelligence, but we can assume that simple repetitive tasks or formulaic activities with predictable outcomes can be disregarded.

The McCarthy and Minsky definition helps to remove some of the perceived “magic” around AI as a concept. However, what does AI mean today, after more than 60 years of development?

The common practice today is to split the topic of AI into two distinct but related areas:

Narrow AI — Systems which are “taught” or “learn” to undertake a specific task, such as vision or speech recognition. Narrow AI systems cannot undertake tasks other than those they have been taught, and they are only suitable for the use case for which they were designed. This does not mean that narrow AI systems are not powerful: the majority of AI systems in use today are narrow, and they have greatly improved our ability to process complex information sets.

General AI — (Also referred to as “Artificial General Intelligence”.) General AI can broadly be defined as any AI system which has the ability to exhibit attributes of the human intellect, such as adapting to new tasks, flexibility in handling new types of incoming information, and learning new skills. This form of AI draws far more attention in the media and has been the mainstay of sci-fi for the past 60 years. The future of AI will be general AI, but we are still a long way from functioning implementations.

Uses for AI

As it stands today, the deployment of AI is still in its infancy. Many companies are calling their technologies AI when they are not necessarily so. As we have seen, this can easily happen because the original definition of AI is very broad. Companies can claim to use AI even when the underlying algorithms are predefined and only react to prescribed inputs. The question this poses, therefore, is whether complexity equals intelligence.

So what are some of the valid uses for AI today? I’ve split this into the two classifications of AI presented above:

Narrow AI — Digital assistants (Siri, Alexa, …), Optical recognition (driverless cars, …), Financial trading, Handwriting recognition, Facial recognition, Market analysis, Speech recognition, Natural language processing (chatbots, …), Pattern recognition (in data for example), Game theory optimisation, Trend identification.

General AI — Currently research only; in the future, potentially any task currently solvable only by a human.

This list is not exhaustive, and it should be said that many of the categories of Narrow AI use are applicable across many industries. For example, pattern and trend recognition can be applied to almost any data set (although it should be noted that many problems tackled with “AI” approaches can be solved just as efficiently using traditional statistical methods).

AI vs ML

Machine learning (“ML”) is a branch of AI: one of the ways we expect to be able to achieve AI. It was best defined in the 1990s by pioneering computer scientist Tom Mitchell as:

… the study of computer algorithms that allow computer programs to automatically improve through experience.

In general, ML involves using a data set to “train” a computer program to undertake a task. In ML these programs are referred to as “models”, and training is the process of refining and adjusting a model to improve the accuracy of its outputs.

The most famous type of ML model currently in use is the artificial neural network. These networks are constructed from connected nodes known as “artificial neurons”, which loosely model the neurons in a biological brain. Neurons are connected by pathways which are weighted (i.e. given more or less significance relative to one another). Training adjusts these weights to produce different outcomes.
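To make that structure concrete, here is a minimal sketch of a single artificial neuron in Python: a weighted sum of the inputs, plus a bias, passed through a sigmoid activation function. The input values and weights below are invented purely for illustration.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs plus a bias,
    squashed into the range (0, 1) by a sigmoid activation function."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Illustrative, hand-picked values: the second input carries the largest
# weight, so it dominates the neuron's output.
output = artificial_neuron(inputs=[0.5, 0.9, 0.1],
                           weights=[0.2, 1.5, -0.4],
                           bias=0.0)
print(output)  # ~0.80
```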

For example, a model could be produced to classify images of red balls. A data set of 100,000 pictures, where it is known whether each picture contains a red ball or not, is provided as training data. The model takes a picture as input and outputs a single decimal number: 0 where no ball is present and 1 where a red ball is present. Training runs each picture in the data set through the model and then compares the output value with the value recorded for that picture in the training set (which will be either 0 or 1). Between runs, the model is tweaked (typically the weights), and then the data set is re-run and the outputs compared again. This process continues until the model’s outputs are acceptably close to those recorded in the training set.
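The following is a minimal sketch of that run-compare-tweak loop. It assumes synthetic feature vectors in place of real pictures, and a single-neuron model whose weights are adjusted by gradient descent (one common way, though not the only way, of tweaking weights):

```python
import numpy as np

# Synthetic stand-ins for the labelled pictures: each "picture" is reduced
# to 8 numeric features, labelled 1 (red ball) or 0 (no ball).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
hidden_rule = rng.normal(size=8)
y = (X @ hidden_rule > 0).astype(float)  # labels drawn from a hidden rule

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

weights = np.zeros(8)  # the values the training process will "tweak"
learning_rate = 0.1

for run in range(200):
    outputs = sigmoid(X @ weights)  # run every picture through the model
    errors = outputs - y            # compare with the recorded labels
    weights -= learning_rate * (X.T @ errors) / len(X)  # tweak, then re-run

accuracy = ((sigmoid(X @ weights) > 0.5) == y).mean()
print(f"agreement with training labels: {accuracy:.1%}")
```

In practice the data would also be split into separate training and test sets, so the model can be checked against pictures it never saw during training.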

ML-based approaches such as neural networks benefit from being able to model complex systems to a much higher degree than traditional logic- and statistics-based approaches. In addition, training allows the model to be improved much faster (and in ways that might have been missed in traditional human design → implement → measure development iterations). These systems have therefore proved useful in image- and sound-based tasks, where the inputs may be hard to classify and the data sets are large.

A future blog post will look in more detail at ML. For this post, it is important to take away that AI does not equal ML, but that ML can enable AI.

Common misconceptions of AI

2019 was the year the hype surrounding AI reached new heights. Walk any trade conference hall, or browse any website or piece of marketing material, and the letters “AI” were sure to be found.

My company needs AI

No, your company probably does not need AI. That is probably not the most popular statement, but it is probably true. Not every problem has a solution in AI, and often focusing on your customers and providing value faster with a classical approach will be more beneficial (it just doesn’t sound as sexy).

However, every company should have a strategy for AI, even if the outcome is that it will not use AI. Evaluate your options with regard to how you are solving your customers’ problems and whether and how AI can help. Having a strategy also allows you to take things such as investors and funding into account. Recent studies have shown that startups utilising AI receive 15%+ more funding, so your core business may not need AI, but your investment strategy may benefit from the associated hype.

We are close to General AI

As we touched on above, we have learned to engineer impressive solutions using Narrow AI. In areas such as image and speech recognition, AI has become the standard approach. However, we are still years away from producing a generalised system which could exhibit even a small amount of the flexibility and reasoning of humans.

AI is free of bias

Computers will famously provide perfect output if given perfect input. Unfortunately, AI models, algorithms and systems are written by people, with all their inbuilt biases and flaws. A prejudiced data scientist will likely produce a prejudiced model. The same applies to the data sets we use to train our AI systems: bias there can lead otherwise fair models to trend towards biased outcomes.

As with any computer architecture, bias can be filtered out with good design, rigorous testing, monitoring and iterative improvement. However, many companies rush into AI without these in place, putting their outcomes at risk.

ML is the same as AI

As noted above, ML is a part of AI. ML is the ability of machines to predict outcomes and give recommendations without explicit instructions from programmers. AI, on the other hand, is much larger in scope.

AI can solve all our data processing problems

Many companies tout AI as a way to solve their customers’ data issues. From alerting and trend detection to automated assistance based on features found in data, AI can be useful. However, Narrow AI needs to be built for a specific task, or at least be working on data which has been “labelled” to make it accessible to the AI. Without this preparation of the model and data, AI-based outcomes can be worse than those of classical approaches; and all of this preparation takes time and resources. So the question to ask is: where is our current (non-AI) approach failing us, and are the overheads of running an AI solution worth it?

General AI may be able to help companies with data processing by being more flexible and capable of consuming arbitrary data and making decisions, but as we have discussed General AI is still some way off.

Challenges of deploying AI for businesses

As we have discussed, not all companies need AI, but every company should have a strategy for AI. When devising such a strategy, what challenges should companies consider?

In 2019, O’Reilly published an ebook summarising a survey on AI adoption in the enterprise (https://www.oreilly.com/data/free/ai-adoption-in-the-enterprise.csp). In it, they listed some of the most common factors holding back further AI implementation. This list provides a great starting point for any company wanting to discuss the challenges of AI adoption.

Company culture

Company culture not being ready for AI was identified in the O’Reilly survey as the top challenge to AI adoption. Misconceptions about AI, scare stories in the media, hype pushed by vendors (along with the inevitable cost increases), and a perception that AI is likely to be expensive and disruptive all contribute to management’s hesitancy to consider AI.

The solution is education, from the top of the company to the bottom (although we often see this education occurring naturally from the bottom up). Managers and decision-makers should have a clear understanding of the possibilities, limits, benefits and risks of adopting AI. This education can draw on internal resources or external training. Any education piece should conclude with the company having that all-important strategy for AI.

Quality of available data

Any company implementing AI today will be implementing Narrow AI. This requires two things: a clear design based on an individual company’s use case; and well-labelled data for training and/or testing.

This issue encompasses a whole set of challenges and will be the subject of a future blog post.

In general, AI data requirements fall into three categories:

Data quantity — For training and testing AI systems (especially ML-based approaches), the more data that can be fed to the model the better. Models improve through numerous iterations, each of which requires different input data. AI requires a large quantity of data to “learn”: much larger than a human might require.

Data quality — Microsoft famously launched an AI Twitter bot and allowed it to learn from the interactions it had on the platform. Within a few days, developers had to shut the bot down as it had become racist and offensive: it had learned from abuse sent to it by malevolent Twitter users and adapted its behaviour to suit the incoming data. As noted earlier in this post, the data used to train and test any AI model is key, as this is how the model will “learn”. Companies must have a plentiful source of data which is representative of the use case they are trying to solve, and this data should be as unbiased as possible. The key is to know what data a company has and what data is needed; with that understanding, a plan can be drawn up to expand the training data sets.

Data labelling — As discussed above, ML systems in particular “learn” by comparing labelled input data with outcomes after every training run. Having data labels which are as accurate and reliable as the input data itself is important for training, and for assuring that any AI system is accurate. There are many ways to label data, including: using an internal data-science team; external consultants; using external labelled data sources; crowdsourcing; programming a classifier (see the sketch below); or using a feedback loop. Having well-labelled data is often the key technical challenge that companies need to overcome when adopting AI.
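As one example of the “programming a classifier” option, here is a minimal sketch of a heuristic labeller for the red-ball data set described earlier. The colour threshold is an invented assumption, and in practice such machine-generated labels would be sampled and verified by humans.

```python
# "Programming a classifier" as a labelling method: a hand-written rule
# produces candidate labels cheaply, which humans then spot-check. The rule
# below (a red ball means mostly strongly-red pixels) is purely illustrative.

def heuristic_label(pixels):
    """pixels: a list of (r, g, b) tuples for one picture.
    Returns 1 if the picture is dominated by strongly red pixels, else 0."""
    red = sum(1 for r, g, b in pixels if r > 2 * max(g, b))
    return 1 if red / len(pixels) > 0.3 else 0

# A toy "picture": 40 red-ish pixels and 60 grey-ish pixels.
picture = [(200, 30, 20)] * 40 + [(50, 60, 55)] * 60
print(heuristic_label(picture))  # -> 1: queued for human verification
```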

Staffing

There is currently a shortage of data scientists in the job market, especially those trained in AI. Outside of the AI behemoths such as Google, Facebook and Apple, companies may find it hard to recruit data scientists who are skilled in AI. As an SME, even if you find a skilled professional, how will you know if they are a good fit for your business use case? Again, education is key here in helping management identify good candidates and provide them with clear goals. One solution may be to use external specialists to fill the skills gap. This may work in the short term to help a company produce its AI strategy, but in the long term someone internal will be required to maintain and improve any deployed AI system.

Business use case

Identifying a business use case for AI requires managers to have a clear understanding of AI technologies, their possibilities and their limitations. A lack of AI know-how may hinder adoption in many companies. Once that knowledge has been obtained, clear plans, objectives and KPIs can be produced to allow management to benchmark any AI-based approach. As with any decision about applying a technology to a business problem, it is key that the technology is shaped to fit the problem, not the problem to the technology.

The promise of AI is still to be realised for many businesses where the technology is applicable. Many challenges remain to be overcome, but they can be, provided businesses have a good education in AI and a clear AI strategy.
