Analytics, Machine Learning, AI and Automation

In the last few years, buzzwords such as Machine Learning (ML), Deep Learning (DL), Artificial Intelligence (AI) and Automation have taken over from the earlier excitement around Analytics and Big Data.

Often ML, DL and AI are placed in the same context, especially in product and job descriptions. This not only creates confusion about the end goal, it can also lead to loss of credibility and wasted investment (e.g. in product development).

Figure 1: Framework for Automation

Figure 1 shows a simplified version of the framework for automation. It shows all the required ingredients to automate the handling of a ‘System’. The main components of this framework are:

  1. A system to be observed and controlled (e.g. telecoms network, supply chain, trading platform, deep space probe …)
  2. Some way of getting data (e.g. telemetry, inventory data, market data …) out of the system via some interface (e.g. APIs, service endpoints, USB ports, radio links …) [Interface <1> Figure 1]
  3. A ‘brain’ that can effectively convert input data into actions or output data. It contains one or more ‘models’ (e.g. trained neural networks, decision trees) that encode its ‘understanding’ of the system being controlled. The ‘training’ interface that creates the model(s) and helps maintain them is not shown separately
  4. Some way of getting data/commands back into the system to control it (e.g. control commands, trade transactions, purchase orders, recommendations for next action etc.) [Interface <2> Figure 1]
  5. Supervision capability which allows the ‘creators’ and ‘maintainers’ of the ‘brain’ to evaluate its performance and, if required, manually tune the system using generated data [Interface <3> Figure 1] – this is itself another Brain (see Recursive Layers)

This is a so-called automated ‘closed-loop’ system with human supervision. In such a system the control can be fully automated, fully manual, or any combination of the two for different types of actions. For example, in safety-critical systems the automated closed loop can have cut-out conditions that disable Interface <2> in Figure 1. When these trigger, all control passes to the human user (via Interface <4> in Figure 1).
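
As a rough sketch of how these pieces might hang together in code (all class and function names below are hypothetical, invented purely for illustration), one pass of the loop could look something like this:

# Hypothetical sketch of the closed-loop framework in Figure 1.
# The telemetry, actuator and supervisor objects are assumed to exist;
# none of these names refer to a real library.

class Brain:
    """Converts observations into proposed actions using one or more models."""
    def __init__(self, model):
        self.model = model  # e.g. a trained classifier or a set of rules

    def decide(self, observation):
        return self.model(observation)

def run_closed_loop(telemetry, brain, actuator, supervisor, safety_check):
    for observation in telemetry:               # Interface <1>: data out of the system
        action = brain.decide(observation)
        supervisor.record(observation, action)  # Interface <3>: supervision data
        if safety_check(observation, action):
            actuator.apply(action)              # Interface <2>: automated control
        else:
            supervisor.hand_over(observation)   # Interface <4>: cut out to manual control

The safety_check hook is where the cut-out conditions mentioned above would live.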

A Note about the Brain

The big fluffy cloud in the middle called the ‘Brain’ hides a lot of complexity, not so much in terms of algorithms and infrastructure as in terms of how we even talk about the differences between things like ML, DL and AI.

There are two useful concepts for putting all these buzzwords in context when it comes to the ‘Brain’ of the system. In other words, the next time some clever person tells you that there is a ‘brain’ in their software/hardware that learns, ask them two questions:

  1. How old is the brain?
  2. How dense is the brain?

Age of the Brain

Age is a very important criterion for most tasks. Games that preschool children struggle with are ‘child’s play’ for teenagers. Voting and driving are reserved for ‘adults’. In the same way, for an automated system the age of the brain says a lot about how ‘smart’ it is.

At its simplest a ‘brain’ can contain a set of unchanging rules that are applied to the observed data again and again [so-called static rule-based systems]. This is similar to a newborn baby that has fairly well-defined behaviours (e.g. hungry -> cry). This sort of brain is fairly helpless when the data has large variability. It will not be able to generate insights about the system being observed, and the rules can quickly become error prone (thus the age-old question – ‘why does my baby cry all the time!’).
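
For concreteness, a minimal sketch of such a ‘newborn’ brain might be nothing more than a fixed list of condition/action pairs; the rules, thresholds and field names below are invented for illustration:

# A static rule-based 'brain': the rules never change, whatever the data does.
STATIC_RULES = [
    (lambda reading: reading["temperature_c"] > 80, "raise_alarm"),
    (lambda reading: reading["packet_loss_pct"] > 5, "restart_link"),
]

def newborn_brain(reading):
    for condition, action in STATIC_RULES:
        if condition(reading):
            return action
    return "do_nothing"  # anything the rules don't cover is simply ignored

print(newborn_brain({"temperature_c": 85, "packet_loss_pct": 0}))  # -> raise_alarm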

Next comes the brain of a toddler, which can think and learn, but only along straight lines and only after extensive training and explanation (unless you are a very ‘lucky’ parent and your toddler is great at solving ‘problems’!). This is similar to a ‘machine learning system’ that is specialised to handle specific tasks. Give it a task it has not been trained for and it falls apart.

Next comes the brain of a pre-teen, which is maturing and learning all kinds of things with or without extensive training and explanation. ‘Deep learning systems’ have similar properties. For example, a Convolutional Neural Network (CNN) can extract features (such as edges) from a raw image without requiring hand-crafted pre-processing, and can be used on different types of images (generalisation).
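
As a toy illustration of the ‘edges’ point, the snippet below applies a hand-written convolution kernel to a small image array. In a real CNN the kernel weights would be learned from data during training rather than fixed by hand; this is only a sketch of the operation involved.

import numpy as np

# Toy edge detection with a fixed 3x3 kernel (in a CNN these weights are learned).
def convolve2d(image, kernel):
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

vertical_edge_kernel = np.array([[1, 0, -1],
                                 [1, 0, -1],
                                 [1, 0, -1]])

image = np.zeros((6, 6))
image[:, 3:] = 1.0                                  # a dark/bright boundary down the middle
print(convolve2d(image, vertical_edge_kernel))      # strong response along the boundary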

At its most complex (e.g. a healthy adult), the ‘brain’ is able not only to learn new rules but, more importantly, to evaluate existing rules for their usefulness. Furthermore, it is capable of chaining rules and applying often unrelated rules to different situations. Processing different types of input data is also relatively easy (e.g. facial expressions, tone and gestures alongside other data). This is what you should expect from ‘artificial intelligence’. In fact, with a true AI Brain you should not need Interface <4>, and perhaps only a very limited Interface <3> (almost a psychiatrist/psycho-analyst to a brain).

Brain Density

Brain density increases as we age, then stops increasing and eventually starts to decrease. From a processing perspective, it is as if the CPU in your phone or laptop kept adding extra processors and so became capable of handling more complex tasks.

Static rule-based systems may not require massive computational power. Here, more processing power may be needed at Interfaces <1> and <2> to prepare the data for input and output than in the ‘brain’ itself.

Machine-learning algorithms definitely benefit from massive computational power, especially while the ‘brain’ is being trained. Once the model is trained, however, applying it may not require much computing power. Again, more power may be needed to massage the data to fit the model’s expected inputs than to actually use the model.

Deep-learning algorithms require computational power throughout the cycle of prep, train and use. Training and inference times are massively reduced when using special-purpose hardware (e.g. GPUs for neural networks). One rule of thumb: ‘if it doesn’t need special-purpose hardware then it’s probably not a real deep-learning brain; it may simply be a machine learning algorithm pretending to be one’. CPUs are mostly good for the data-prep tasks before and after the ‘brain’ has done its work.

Analytics System

If we were to have only Interfaces <1> and <3> (see Figure 1), we could call it an analytics solution. This type of system has no ability to influence the system; it is merely an observer. This is very popular, especially on the business support side. Here Interface <4> may not always be something tangible (such as a REST API or a command console); it might instead represent strategic and tactical decisions. The ‘Analytics’ block in this case consists of data visualisation and user interface components.

True Automation

To enable true automation we must close the loop (i.e. Interface <2> must exist). But there is something important for true automation that I have not shown in Figure 1: the ability to process event-based data. This matters especially for systems that are time dependent – real-time or near-real-time – such as trading systems, network orchestrators etc. This is shown in Figure 2.

Figure 2: Automation and different types of data flows

Note: Events are not generated only by the System being controlled but also by the ‘Brain’. Therefore, the ‘Brain’ must be capable of handling both time-dependent and time-independent data. It should also be able to generate commands that are time-dependent as well as time-independent.
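
A minimal sketch of such a ‘brain’ (the names and structure are assumptions, not a real framework) might keep a time-ordered event queue alongside its batch knowledge:

import heapq
import time

# Hypothetical sketch: the 'Brain' consumes both batch (time-independent) data
# and time-stamped events, and can itself place new events on the queue.
class EventDrivenBrain:
    def __init__(self):
        self.event_queue = []    # (due_time, event_name) pairs kept in time order
        self.knowledge = {}      # built up from time-independent (batch) data

    def ingest_batch(self, records):
        """Time-independent data, e.g. inventory or configuration snapshots."""
        self.knowledge.update(records)

    def emit_event(self, due_time, event_name):
        """Events can come from the controlled System or from the Brain itself."""
        heapq.heappush(self.event_queue, (due_time, event_name))

    def process_due_events(self, now=None):
        """Handle every event whose timestamp has passed; return commands."""
        now = time.time() if now is None else now
        commands = []
        while self.event_queue and self.event_queue[0][0] <= now:
            _, event_name = heapq.heappop(self.event_queue)
            commands.append(("handle", event_name, self.knowledge.get(event_name)))
        return commands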

Recursive Layers

Recursive Layering is a powerful concept where an architecture allows its implementations to be layered on top of each other. This is possible with ML, DL and AI components. The System in Figures 1 and 2 can itself be another combination of a Brain and a controlled System, whose various outputs are fed into another Brain (a super-brain? a supervisor brain?). An example is shown in Figure 3. This is a classic Analytics-over-ML example where the ‘Analytics’ block from Figures 1 and 2 has a Brain inside it (it is not restricted to visualisation and UI). It may be a simple newborn brain (e.g. static SQL data processing queries) or a sophisticated deep learning system.

Figure 3: Recursive layering in ML, DL and AI systems.

The Analytics feed is another API point that can serve as an input data source (Interface <1>) to another ‘Brain’ that is, say, supervising the one generating the analytics data.
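
To make the layering concrete, here is a small illustrative sketch (all functions invented) where the analytics feed of one brain becomes Interface <1> of a supervising brain:

# Illustrative only: a 'supervisor' brain whose input (its Interface <1>)
# is the analytics feed produced by a lower-level brain.
def lower_brain(observation):
    return {"action": "scale_up" if observation > 0.8 else "hold",
            "confidence": abs(observation - 0.5) * 2}

def analytics_feed(observations):
    """What the lower layer exposes upwards, e.g. via an API."""
    for obs in observations:
        decision = lower_brain(obs)
        yield {"observation": obs, **decision}

def supervisor_brain(feed, confidence_floor=0.3):
    """Flags any decision the lower brain made with low confidence."""
    return [record for record in feed if record["confidence"] < confidence_floor]

print(supervisor_brain(analytics_feed([0.10, 0.55, 0.90, 0.45])))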

Conclusion

So next time you get a project that involves automation (implementing or using it), think about the interfaces and components shown in Figure 1. Think about what type of brain you need (its age and density).

If you are on the product side then by all means make bold claims, but not illogical or blatantly false ones. Just as you would not ask a toddler to do a teenager’s job, don’t advertise one as the other.

Finally, think hard about how the users will be included in the automation loop. What conditions will disable Interface <2> in Figure 1 and cut over to manual control? How can the users monitor the ‘Brain’? Fully automated closed-loop systems are not good for anyone (just ask John Connor from the Terminator series, or the people from Knight Capital https://en.wikipedia.org/wiki/Knight_Capital_Group). Humans often provide deeper insights, based on practical experience and knowledge, than ML or DL is capable of.

Reduce Food Wastage using Machine Learning

A scenario the reader might be familiar with: food items hiding in our refrigerator way past their expiry date. Once discovered, these are quickly transferred to the bin with promises to ourselves that next time it will be different for sure, or, worse yet, we stuff the items in the freezer!

Estimates of waste range from 20% to 50% (in countries like the USA). This is a great shame given that hundreds of millions of people around the world don’t have any form of food security and face acute shortages of food.

What can we do about this? 

One solution is to help people be a bit more organised by reminding them of the expiry dates of various items. The registration of items has to be automated and smart. 

Automated:

If we insist on manual entry of items with their expiry dates, people are unlikely to want to do this, especially right after a long shop! Instead, as the items are checked out at the shop, an option should be available to email the receipt, which should also contain an electronic record of the expiry date of each purchased item. This should cover groceries as well as ‘ready to eat’ meals. Alternatively, one could provide different integration options using open APIs and some sort of mobile app.
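
As a sketch of what such an electronic record might look like (the shop name, field names and dates are made up; a real integration would pull this from the shop’s API or an emailed receipt rather than a hard-coded dictionary):

from datetime import date

# Hypothetical shape of an electronic receipt record.
receipt = {
    "shop": "example_supermarket",
    "purchased_on": "2024-05-01",
    "items": [
        {"name": "chicken", "expiry": "2024-05-04"},
        {"name": "cream",   "expiry": "2024-05-06"},
        {"name": "potato",  "expiry": "2024-05-20"},
    ],
}

def register_items(receipt):
    """Turn a receipt into (name, expiry_date) entries for the reminder app."""
    return [(item["name"], date.fromisoformat(item["expiry"]))
            for item in receipt["items"]]

for name, expiry in register_items(receipt):
    print(f"{name} expires on {expiry}")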

Smarter:

Once we have the expiry dates we need to ensure we provide the correct support and advice to the users of the app. To make it more user-friendly we should suggest recipes based on the purchased groceries and put them on the calendar, creating a ‘burn-down’ chart for the groceries (taking inspiration from Agile) that optimises for things like the freshness of groceries, minimising the use of ‘packaged foods’, and maintaining variety in the recipes.

Setup:

Steps are as follows:

  1. When groceries are bought, their expiry and nutrition information are loaded into the system
  2. Using a matrix of expiry dates to items and items to recipes (for raw groceries), we get an optimised ordering of usage dates mapped to recipes
  3. With this item-consumption/recipe schedule we can then interleave ready-to-eat items, take-away days and calendar entries related to dinner/lunch meetings (all of these are constraints)
  4. Add a feedback loop allowing users to report which recipes they cooked, which they didn’t, which items were wasted, and where ‘unscheduled’ ready-to-eat items were used or a take-away called for
  5. This will help in encouraging users to buy the items they actually consume and warn against buying (or deprioritise?) items that users ‘ignore’

I provide a dummy implementation in Python using Pandas to sketch out some of the points and to bring out some tricky problems.

The output is a list of purchased items and a list of available recipes, followed by a list of recommendations with a ‘score’ metric that rewards ingredient use and penalises delay in usage (a simplified sketch of this scoring appears after the recommendations below).

Item: 0:cabbage
Item: 1:courgette
Item: 2:potato
Item: 3:meat_mince
Item: 4:lemon
Item: 5:chicken
Item: 6:fish
Item: 7:onion
Item: 8:carrot
Item: 9:cream
Item: 10:tomato


Recipe: 0:butter_chicken
Recipe: 1:chicken_in_white_sauce
Recipe: 2:mince_pie
Recipe: 3:fish_n_chips
Recipe: 4:veg_pasta
Recipe: 5:chicken_noodles
Recipe: 6:veg_soup

Recommendations

butter_chicken:         Score: 30    Percentage items consumed: 36%
chicken_in_white_sauce: Score: 26    Percentage items consumed: 27%
Not enough ingredients for mince_pie
fish_n_chips:           Score: 20    Percentage items consumed: 27%
veg_pasta:              Score: 26    Percentage items consumed: 27%
chicken_noodles:        Score: 28    Percentage items consumed: 36%
veg_soup:               Score: 20    Percentage items consumed: 27%

The recommendation is to start with ‘butter chicken’ as we use up some items that have a short shelf life. Here is a ‘real’ recipe – as a thank you for reading this post: 

http://maunikagowardhan.co.uk/cook-in-a-curry/butter-chicken-murgh-makhani-chicken-cooked-in-a-spiced-tomato-gravy/h
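
Before moving on to the open issues, here is a much-simplified sketch of the scoring idea behind those recommendations. The shelf lives and recipe lists are invented and the formula is only indicative, so it will not reproduce the exact numbers above; it simply shows how a pandas-based score can reward using up short-dated items.

import pandas as pd

# Simplified re-sketch of the scoring idea, not the original dummy implementation.
# Days left until expiry and recipe ingredient lists below are invented.
items = pd.DataFrame({
    "item": ["chicken", "cream", "tomato", "potato", "fish", "onion"],
    "days_left": [2, 4, 3, 20, 1, 15],
}).set_index("item")

recipes = {
    "butter_chicken": ["chicken", "cream", "tomato", "onion"],
    "fish_n_chips": ["fish", "potato"],
    "mince_pie": ["meat_mince", "onion", "potato"],
}

def score(recipe_items, items, horizon=21):
    """Higher score = the recipe uses more items that are closer to expiry."""
    available = [i for i in recipe_items if i in items.index]
    if len(available) < len(recipe_items):
        return None  # an ingredient is missing
    urgency = (horizon - items.loc[available, "days_left"]).sum()
    consumed_pct = 100 * len(available) / len(items)
    return urgency, consumed_pct

for name, ingredients in recipes.items():
    result = score(ingredients, items)
    if result is None:
        print(f"Not enough ingredients for {name}")
    else:
        print(f"{name}: Score:{result[0]}  Percentage items consumed:{result[1]:.0f}%")

A fuller version would also have to respect the schedule constraints from steps 3 and 4 of the setup above.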

Tricky Problems:

There are some tricky bits that can be solved but will need some serious thinking:

  1. Updating recommendations as recipes are cooked
  2. Updating recommendations as unscheduled things happen (e.g. item going bad early or re-ordering of recipes being cooked)
  3. Keeping track of cooked items and other interleaved schedules (e.g. item being frozen to use later)
  4. Learning from usage without requiring the user to update all entries (e.g. using RFID tags, or deep learning on images taken of the fridge with the door open)
  5. Coming up with innovative metrics to encourage people to eat healthily and eat fresh – lots of information (e.g. nutritional information) can be extracted if we have a list of purchased items
  6. Scheduling recipes around other events in a calendar or routine items (e.g. avoiding a heavy meal before a scheduled gym appointment)