Most Keras examples show neural networks built with the Sequential class. This is the simplest kind of Keras model: a single input flows through a linear stack of layers to a single output. The Sequential constructor takes a list of layers; the lowest layer is the first in the list and the highest layer is the last. It can also be pictured as a stack of layers (see Figure 1). In Figure 1 the arrow shows the flow of data when we are using the model in prediction mode (feed-forward).
Figure 1: Stack of layers
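As a quick illustration, a minimal Sequential model might look like the sketch below (the layer sizes and activations are arbitrary, chosen purely for illustration):

import tensorflow as tf

# A minimal Sequential model: layers are passed as a list,
# data flows from the first layer in the list to the last.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),                      # single input
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # single output
])
model.summary()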
The Sequential class does not let us build more complex models that require joining two different sets of layers or forking out from a layer.
Why would we need such a structure? Consider a video-processing model with two types of input: an audio stream and a video stream. For example, if we are attempting to classify a video segment as fake or not, we might want to use both the video stream and the audio stream to help in the classification.
To do this we would want to pass the audio and video through encoders trained for the specific input type and then in a higher layer combine the features to provide a classification (see Figure 2).
Figure 2: Combining two stacked network layers.
Another use-case is to generate lip movements based on audio segments (Google LipSync3D) where a single input (audio segment) generates both a 3D mesh around the mouth (for the lip movements) and a set of textures to map on the 3D mesh. These are combined to generate a video with realistic facial movements.
This common requirement of combining two stacks or forking from a common layer is the reason why we have the Keras Functional API and the Model class.
Keras Functional API and Model Class
The Functional API gives full freedom to create neural networks with non-linear topologies.
The key class here is tf.keras.Model which allows us to build a graph (a Directed Acyclic Graph to be exact) of layers instead of restricting us to a list of layers.
Make sure you use tf.keras.utils.plot_model to keep an eye on the graph you are creating (see below). Figure 3 shows an example of a toy model with two input encoding stacks feeding a common output stack. This is similar to Figure 2 except the inputs are shown at the top.
Code for this can be seen below. The main difference is that instead of passing layers in a list we have to assemble a stack of layers (see input stacks 1 and 2 below), each starting with a tf.keras.layers.Input layer, and connect them through a special merging layer (tf.keras.layers.concatenate in this example) to the common part of the network. The Model constructor takes a list of these Input layers as well as the final output layer.
The Input layers mark the starting points of the graph and the output layer (in this example) marks the end of the graph. Activations flow from the Input layers to the output layer.
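A minimal sketch of such a two-input model is given below; the layer sizes, names and the binary output are illustrative choices, not taken from the original code:

import tensorflow as tf

# Input stack 1 (e.g. the 'video' branch in Figure 2)
input_1 = tf.keras.layers.Input(shape=(64,), name="input_1")
x1 = tf.keras.layers.Dense(32, activation="relu")(input_1)

# Input stack 2 (e.g. the 'audio' branch in Figure 2)
input_2 = tf.keras.layers.Input(shape=(16,), name="input_2")
x2 = tf.keras.layers.Dense(32, activation="relu")(input_2)

# Merge the two stacks and add the common output stack
merged = tf.keras.layers.concatenate([x1, x2])
common = tf.keras.layers.Dense(16, activation="relu")(merged)
output = tf.keras.layers.Dense(1, activation="sigmoid")(common)

# The Model constructor takes the list of Input layers and the final output layer
model = tf.keras.Model(inputs=[input_1, input_2], outputs=output)

# Keep an eye on the graph you are creating (requires pydot and graphviz)
tf.keras.utils.plot_model(model, show_shapes=True)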
We know GPT stands for Generative Pre-trained Transformer. But what does ‘Chat’ mean in ChatGPT and how is it different from GPT-3.5, the OpenAI large language model?
And the really interesting question for me: Why doesn’t ChatGPT say ‘Hi’?
The Chat in ChatGPT
Any automated chat product must have the following capabilities to do useful work:
1. Understand the entity (who), the context (background of the interaction), the intent (what do they want) and, if required, the user sentiment.
2. Trigger an action/workflow to retrieve data for the response or to carry out some task.
3. Generate a natural-language response.
The first step is called Natural Language Understanding and the third step is called Natural Language Generation. In traditional systems the language generation part usually involves mapping the results of step (1) onto a pre-written response template. If the response is generated on the fly, without using pre-written responses, then the model is called a generative AI model, as it is generating language.
ChatGPT is able to do both (1) and (3) and can be considered as generative AI as it does not depend on canned responses. It is also capable of generating a wide variety of correct responses to the same question.
With generative AI we cannot be 100% sure about the generated response. This is not a trivial issue because, for example, we may not want the system to generate different terms and conditions in different interactions. On the other hand we would like it to show some ‘creativity’ when dealing with general conversation to make the whole interaction ‘life-like’. This is similar to a human agent reading off a fixed script (mapping) versus being allowed to give their own response.
Another important point specific to ChatGPT is that, unlike an automated chat product, it does not have access to any back-end systems to do ‘useful’ work. All the knowledge it has (up to the year 2021) is stored within the 175-billion-parameter neural network model. There is no workflow or actuator layer (as yet) in ChatGPT that would allow it to sequence out requests to external systems (e.g. Google Search) and incorporate the fetched data in the generated response.
Opening the Box
Let us now focus on ChatGPT specifics.
ChatGPT is a Conversational AI model based on the GPT-3.5 large language model (as of writing this). A language model is an AI model that encapsulates the rules of a given language and is able to use those rules to carry out various tasks (e.g. text generation).
The term language can be understood as a means of expressing something by assembling a finite set of tokens according to a set of rules. This applies to human language (expressing our thoughts by assembling letters and words), computer code (expressing software functionality by assembling keywords and variables) as well as protein structures (expressing biological behaviour by assembling amino acids).
The term large refers to the number of parameters (175 billion) within the model, which are required to learn those rules. Think of the model as a sponge and complex language rules as water. The more complex the rules, the bigger the sponge you need to soak it all up. If the sponge is too small the rules will start to leak out and we won’t get an accurate model.
Now a large language model (LLM) is the core of ChatGPT but it is not the only thing. Remember our three capabilities above? The LLM is involved in step (3) but there is still step (1) to consider.
This is where the ChatGPT model comes in. The ChatGPT model is specifically a fine-tuned model based on GPT-3.5 LLM. In other words, we take the language rules captured by GPT-3.5 model and we fine tune it (i.e. retrain a part of the model) to be able to answer questions. So ChatGPT is not a chat platform (as defined by the capability to do Steps 1-3 above) but a platform that can respond to a prompt in a human-like way without resorting to a library of canned responses.
Why do I say ‘respond to a prompt’? Did you notice that ChatGPT doesn’t greet you? It doesn’t know when you have logged in and are ready to go, unlike a conventional chatbot that chirps up with a greeting. It doesn’t initiate a conversation, instead it waits for a prompt (i.e. for you to seed the dialog with a question or a task). See examples of some prompts in Figure 1.
Figure 1: ChatGPT example prompts, capabilities and limitations. Source [https://chat.openai.com/chat]
This concept of needing a prompt is an important clue to how ChatGPT was fine-tuned from the GPT-3.5 base model.
Fine Tuning GPT-3.5 for Prompts
As the first step, GPT-3.5 is fine-tuned using supervised learning on prompts sampled from a prompt database. This is quite a time-consuming process because, while we may have a large collection of prompts (e.g. https://github.com/f/awesome-chatgpt-prompts) and a model capable of generating a response to a prompt, it is not easy to measure the quality of the response except in the simplest of cases (e.g. factual answers).
For example, if the prompt was ‘Tell me about the city of Paris’ then we have to ensure the facts are correct (e.g. Paris is the capital of France) and clearly presented. Furthermore, we have to ensure correct grouping and flow within the text. It is also important to understand where an opinion is presented as a fact (hence the second limitation in Figure 1).
Human in the Loop
The easiest way to do this is to get a human to write the desired response to the sampled prompt (from a prompt dataset), based on model-generated suggestions. This output, when formatted into a dialog structure (see Figure 2), provides labelled data for fine-tuning the GPT-3.5 model using supervised learning. This basically teaches GPT-3.5 what a dialog is.
Figure 2: Casting text as a set of dialogs.
But this is not the end of Human in the Loop. To ensure that the model can self-learn, a reward model is built. This reward model is built by taking a prompt and a few generated outputs (from the fine-tuned model) and asking a human to rank them in order of quality.
This labelled data is then used to create the reward function. Reward functions are found in Reinforcement Learning (RL) systems, which also allow self-learning. Therefore there must be some RL going on in ChatGPT training.
Reinforcement Learning: How ChatGPT Self-Learns
ChatGPT uses the Proximal Policy Optimization (PPO) RL algorithm (https://openai.com/blog/openai-baselines-ppo/#ppo) in a game-playing setting to further fine-tune the model. The action is the generated output and the input is the reward value from the reward function (as above). Using this iterative process the model can be continuously fine-tuned using simple feedback (e.g. the ‘like’ and ‘dislike’ buttons that come up next to the response). This is very much the wisdom of the masses being used to direct the evolution of the model. It is not clear, though, how much of this feedback is reflected back into the model. Given the public-facing nature of the model you would want to carefully monitor any feedback that is incorporated into the training.
What is ChatGPT Model doing?
By now it should be clear that ChatGPT is not chatting at all. It is filling in the next bit of text in a dialog. This process starts from the prompt (seed) that the user provides. This can be seen in the way it is fine-tuned.
ChatGPT generates its responses token by token. A token is (as per my understanding) typically a few characters long and can be a full word or part of one. It can create text consisting of up to 2048 tokens (which is a lot of text!).
Figure 3: Generating response as a dialog.
The procedure for generating a response (see Figure 3, and the sketch after this list) is:
1. Start with the prompt.
2. Take the text so far (including the prompt) and process it to decide what goes next.
3. Add that to the existing text and check if we have encountered the end.
4. If yes, then stop; otherwise go to step 2.
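In pseudo-Python the loop looks something like this. Note that model.tokenize, model.predict_next, model.end_of_text_token and model.detokenize are placeholders standing in for the real tokenizer and model, not actual API calls:

def generate_response(prompt, model, max_tokens=2048):
    # Step 1: start with the prompt (the seed)
    tokens = model.tokenize(prompt)                # placeholder tokenizer
    while len(tokens) < max_tokens:
        # Step 2: process the text so far to decide what comes next
        next_token = model.predict_next(tokens)    # placeholder model call
        # Step 3: add it to the existing text
        tokens.append(next_token)
        # Step 4: stop if we have reached the end marker, otherwise repeat
        if next_token == model.end_of_text_token:
            break
    return model.detokenize(tokens)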
This allows us to answer the question: why doesn’t ChatGPT say ‘Hi’?
Because if it seeded the conversation with some type of greeting then we would be bounding the conversation trajectory. Imagine starting with the same block in Figure 3 – we would soon find that the model keeps going down a few select paths.
ChatGPT confirms this for us:
I hope you have enjoyed this short journey inside ChatGPT.
In the previous post we built a simple time-series forecasting model using a custom neural network model as well as a SARIMAX model from Statsmodels TSA library. The data we used was the monthly house sales transaction counts. Training data was from 1995 – 2015 and the test data was from 2016 – end 2022.
But there are only about 340 data points available (Figure 1a) and the data is quite spiky (see Figures 1a/1b). Note the outliers in the Test data histogram (Figure 1b). The distribution plot was produced using Seaborn (the .distplot() method).
Figure 1a: Monthly transaction counts – showing test/train split.
Figure 1b: Density plots for Training (blue) and Test (orange) data.
Any model that is trained on this data set is likely to not perform well especially after 2016 (see Figure 1) due to the sharp rise and fall. In fact this is the time period we would use to validate our model before attempting to forecast beyond the available time horizon.
We can see from the performance of the Neural Network (NN) trained on the available data (Figure 2, Bottom) that in the areas where the data is spiky (orange line) the trained network (blue line) doesn’t quite reach the peaks (see boxes).
Figure 2 Top: Neural network trained using over sampled data; Bottom: Neural network trained using available data.
Similarly, if we oversample from the available data, especially around the spiky regions, the performance improves (Figure 2, Top). We can see the predicted spikes (blue line) are a lot closer to the actual spikes in the data (orange line). If they seem ‘one step ahead’ it is because this is a ‘toy’ neural network model which has learnt to follow the last observed value.
As an aside we can see the custom Neural Network is tracking SARIMAX quite nicely except that it is not able to model the seasonal fluctuation.
More sophisticated methods such as RNNs will produce very different output. We can see in Figure 3 how RNNs model the transaction data. This is just modelling the data, not doing any forecasting yet. The red line in Figure 3 is a custom RNN implementation and the orange line is the TensorFlow RNN implementation.
Sampling
To understand why oversampling has that effect, let us look at how we sample.
Figure 4 shows this sampling process at work. We normalise the data and take a chunk of data around a central point. For this chunk we calculate the standard deviation. If the standard deviation of the chunk is more than 0.5, the central point is accepted into the sample.
As we decrease the chunk size, fewer points are collected. The smaller the chunk size, the more variability is needed in the neighbourhood for the central point to be sampled. For example, for a chunk size of two (which means two points either side of the central point – see Figure 4), we find ourselves sampling from areas of major changes in transaction counts.
Figure 4: Sample points (orange) collected from the Training data using different chunk sizes.
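A minimal sketch of this sampling rule is shown below. The 0.5 threshold and the chunk-size interpretation follow the description above; the input array of normalised monthly counts is assumed:

import numpy as np

def oversample_spiky_points(series, chunk_size=2, std_threshold=0.5):
    # series: 1-D numpy array of normalised monthly transaction counts (assumed)
    sampled_idx = []
    for i in range(chunk_size, len(series) - chunk_size):
        # chunk of points either side of the central point i
        chunk = series[i - chunk_size : i + chunk_size + 1]
        # accept the central point if its neighbourhood is 'spiky' enough
        if np.std(chunk) > std_threshold:
            sampled_idx.append(i)
    return np.array(sampled_idx)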
The other way of getting around this is to use a Generative AI system to create synthetic data that we can add to the sample set. The generative AI system will have to create both the time value (month-year) as well as the transaction count.
In my previous post I described how to set up time-series collections in MongoDB and run time-series queries on the property transaction dataset.
In this post I am going to showcase some data pipelines that use MongoDB aggregations, and some time-series forecasts using the TensorFlow and statsmodels libraries (as a comparison).
Feature File associated with the forecasting task which is used by the…
Python program to build a forecasting model using Statsmodels and Tensorflow
These map to the AI-IA reference architecture as below:
The Aggregation Pipeline
We leave the heavy lifting of feature creation to MongoDB. The pipeline takes the ‘raw’ data and groups it by month and year of the transaction – taking the average price and the count of transactions in a given month. Then we sort by the _id (month-year) to get the data in chronological order.
The feature that we will attempt to forecast is the monthly transaction count.
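A sketch of such a pipeline using pymongo is shown below. The database, collection and field names ('timestamp', 'price') are assumptions based on the document structure used in these posts:

from pymongo import MongoClient

client = MongoClient()                    # assumes a local MongoDB instance
coll = client["prop"]["price_paid"]       # database/collection names are assumptions

pipeline = [
    # group by month-year of the transaction timestamp
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m", "date": "$timestamp"}},
        "avg_price": {"$avg": "$price"},
        "transaction_count": {"$sum": 1},
    }},
    # sort by _id (month-year) to get chronological order
    {"$sort": {"_id": 1}},
]

features = list(coll.aggregate(pipeline))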
The output looks something like the plot below, where the X-axis shows the month-year of the transactions and the Y-axis the monthly transaction count.
Looking at the above we can see straight away that this is going to be an interesting forecasting problem. The data has three big features: the drop in transaction volumes after the 2008 financial crash, the transaction spike in 2014 just before stamp duty changes, and towards the right-hand side the pandemic – recovery – panic we are in the middle of.
The features data is stored as a local file which is consumed directly by the Python program.
Forecasting Model
I used the Statsmodels TSA library for its ‘out of the box’ SARIMAX model builder. We can use the AIC (Akaike Information Criterion) to choose the orders of the auto-regressive, differencing and moving-average parts of SARIMAX. Trying different order values I found the following to give the best (minimum) AIC: [AR(4), Diff(1), MA(2)].
I used Keras to build a ‘toy’ NN-model that takes the [i-1,i-2,…, i-m] values to predict the i’th value as a comparison. I used m = 12 (i.e. 12 month history to predict the 13th month’s transaction count).
The key challenge is to understand how the models deal with the spike and collapse as we attempt to model the data. We will treat the last 8 years of data as the ‘test’ data (covering 2015 – end 2022). Data from 1996 – 2014 will be used to train the model. Further, we will forecast the transaction count through to the end of 2023 (12 months).
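A sketch of the SARIMAX fit and the ‘toy’ NN setup described above is given below. The train/test split dates and the (4, 1, 2) order follow the text; monthly_counts is assumed to be a pandas Series indexed by month, and all other details (layer sizes, epochs) are illustrative:

import numpy as np
import tensorflow as tf
from statsmodels.tsa.statespace.sarimax import SARIMAX

# monthly_counts: assumed pandas Series of monthly transaction counts (DatetimeIndex)
train, test = monthly_counts[:"2014"], monthly_counts["2015":]

# SARIMAX with the order that gave the minimum AIC: AR(4), Diff(1), MA(2)
sarimax = SARIMAX(train, order=(4, 1, 2)).fit(disp=False)
print("AIC:", sarimax.aic)
sarimax_forecast = sarimax.forecast(steps=len(test) + 12)  # through end of 2023

# 'Toy' NN: use the previous 12 months to predict the 13th
m = 12
X = np.array([train.values[i - m:i] for i in range(m, len(train))])
y = train.values[m:]

nn = tf.keras.Sequential([
    tf.keras.Input(shape=(m,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
nn.compile(optimizer="adam", loss="mse")
nn.fit(X, y, epochs=100, verbose=0)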
The result can be seen below.
The orange line is the monthly count of transactions from 2015 till Dec. 2022. We can see a complete crash thanks to the jump in mortgage interest rates after the now-infamous ‘mini-budget’ under Liz Truss. The blue line is the value predicted by the ‘toy’ NN model; you can see the gap as the model ‘catches up’ with the actual values. The green line is the forecast obtained using the same NN model (beyond Dec. 2022). The red line is the forecast obtained using the Statsmodels SARIMAX implementation.
We can see that the NN model fails to follow the dip and shows an increase as we reach the end of 2023. The SARIMAX model shows a similar trend but with a few more dips.
Next: Using a recurrent neural network – digging deep 🙂
In my previous post I took the England and Wales property sale data and built a Google Cloud dashboard project using Terraform. In this post I am going to use the same dataset to try MongoDB’s new time-series capabilities.
This time I decided to use my personal server instead of a cloud platform (e.g. MongoDB Atlas).
Step 1: Install the latest version of MongoDB Community Server
It can be downloaded from here; I am using version 6.0.3. Installation is pretty straightforward and installers are available for most platforms. Installation is well documented here.
Step 2: Create a database and time-series type of collection
Time-series collections in MongoDB are structured slightly differently compared to collections that store documents. This difference comes from the fact that time-series data is usually bucketed by some interval. But that difference is abstracted away by a writeable non-materialized view – in other words, a shadow of the real data optimized for time-based queries [1].
The creation method for time-series collections is slightly different, so be careful! If you want to convert an existing collection to a time-series variant you will have to dump the data out, create a time-series collection and import the dumped data into it. Not trivial if you are dealing with large amounts of data!
There are multiple ways of interacting with MongoDB:
Let us see how to create a time-series collection using the CLI and Python:
CLI:
db.createCollection(
    "<time-series-collection-name>",
    {
        timeseries: {
            timeField: "<timestamp field>",
            metaField: "<metadata group>",
            granularity: "<one of seconds, minutes or hours>"
        }
    })
Python:
db.create_collection(
"<time-series-collection-name>",
timeseries = {
"timeField": "<timestamp field>",
"metaField": "<metadata group>",
"granularity": "<one of seconds, minutes or hours>"
})
The value given to the timeField is the key in the data that will be used as the basis for time-series functions (e.g. Moving Average). This is a required key-value pair.
The value given to the metaField is the key that represents a single or group of metadata items. Metadata items are keys you want to create secondary indexes on because they show up in the ‘where’ clause. This is an optional key-value pair.
The value for granularity is set to the closest possible arrival interval scale for the data and it helps in optimizing the storage of data. This is an optional key-value pair.
Any other top-level fields in the document (alongside the required timeField and optional metaField) should relate to the measurements that we want to operate over. Generally these will be the values over which we apply an aggregate function (e.g. sum, average, count) in a time window based on the timeField.
Step 3: Load some Time-series Data
Now that everything is set – we load some time series data into our newly created collection.
We can use the mongoimport CLI utility, write a program, or use MongoDB Compass (UI). Given this example uses the approx. 5 GB CSV of property sale data, I suggest either the mongoimport CLI utility or MongoDB Compass. If you want to get into the depths of high-performance data loading then the ‘write a program’ option is quite interesting; you will be limited by the MongoDB driver support in your language of choice.
The mongoimport CLI utility takes approximately 20 minutes to upload the full 5 GB file when running locally. But we need to either convert the file from CSV into JSON (a fairly easy one-time task to write the converter) or use a field file, as the original CSV file does not have a header line.
Note: you will need to convert to JSON if you want to load into a time-series collection, as CSV is a flat format and a time-series collection requires a ‘metaField’ key, which in this case points to a nested document.
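A rough sketch of such a converter is shown below. The column names, column order, input/output file names and the date format are all assumptions for illustration; the real Price Paid CSV has its own fixed column order that you would need to map:

import csv
import json
from datetime import datetime

# Hypothetical column layout - adjust to the actual Price Paid field order
FIELDS = ["id", "price", "date", "post_code", "property_type",
          "new_build", "hold_type", "address", "city", "area"]

with open("pp-complete.csv", newline="") as src, open("pp-complete.json", "w") as dst:
    for row in csv.DictReader(src, fieldnames=FIELDS):
        doc = {
            # timeField: date of the transaction (extended JSON for mongoimport)
            "timestamp": {"$date": datetime.strptime(
                row["date"], "%Y-%m-%d %H:%M").strftime("%Y-%m-%dT%H:%M:%SZ")},
            # metaField: nested document of the terms we may want to query by
            "metadata": {k: row[k] for k in
                         ["address", "city", "hold_type", "new_build",
                          "post_code", "area", "property_type"]},
            # top-level measurement
            "price": int(row["price"]),
        }
        dst.write(json.dumps(doc) + "\n")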
Compass took about 1 hour to upload the same file remotely (over the LAN).
Compass is also quite good for checking the data load. If you open the collection in it you will see the ‘Time-Series’ badge next to the collection name (‘price_paid’ in the image below).
Let us look at an example of a single document (image above) to understand how we have used the timeField, metaField and top-level fields.
timeField: “timestamp” – the date of the transaction
metaField: “metadata” – address, city, hold-type, new-built, post_code and area, property type – all the terms we may want to query by
top level: “price” – to find average, max, min etc. of the price range.
Step 4: Run a Query
Now comes the fun bit. Below is an example of a simple group-by aggregation that uses the date of the transaction and the city the property is in as the grouping clause. We get the average price of transactions, the total number of transactions and the standard deviation of the price in that group:
The aggregation query above uses the ‘$setWindowFields‘ stage – partitioning by ‘city’ and sorting by the time field (‘timestamp’). We use a window of 6 months before the current timestamp value to calculate the 6-month moving average price. This can be persisted to a new collection using the ‘$out‘ stage.
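A sketch of that window stage written as a pymongo pipeline is shown below. The field names follow the document structure described above; the database, collection and output collection names are assumptions:

from pymongo import MongoClient

client = MongoClient()                    # assumes a local MongoDB instance
coll = client["prop"]["price_paid"]       # database/collection names are assumptions

moving_avg_pipeline = [
    {"$setWindowFields": {
        # partition by city, order by the time field
        "partitionBy": "$metadata.city",
        "sortBy": {"timestamp": 1},
        "output": {
            "moving_avg_price_6m": {
                "$avg": "$price",
                # window: 6 months before the current timestamp up to 'now'
                "window": {"range": [-6, "current"], "unit": "month"},
            }
        },
    }},
    # persist the result to a new collection
    {"$out": "moving_avg_price_6m"},
]

coll.aggregate(moving_avg_pipeline)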
The image above shows the 3 (green), 6 (orange) and 12 (blue) month moving average price for the city of Bristol. The image below shows the same for London. These were generated using matplotlib via a simple python program to query the relevant persisted moving average collections.
Reflecting on the Output
I am using an Intel NUC (dedicated to MongoDB) with an i5 processor and 24 GB of DDR4 RAM. In spite of the fact that MongoDB attempts to corner about 50% of total RAM, and the large amount of data we have, I found that 24 GB of RAM is more than enough. I got decent performance for the first-level grouping queries that look at the full dataset: no query took more than 2-3 minutes.
When I tried the same query on my laptop (i7, 16GB RAM) the execution times were far longer (almost double).
The single CSV data file (going back almost 30 years) is approximately 5 GB.
I wanted to use this as a target to learn more about the cloud as it is not a ‘toy’ dataset. The idea is to do everything using cloud components and services. I decided to use GCP as it has a generous ‘free’ tier.
Also, I wanted to automate as much of the deployment as I could using Terraform Infrastructure as Code (IaC). This makes maintaining the project, as well as making changes, a lot easier. One thing to remember is that even with IaC there are many components that only support a ‘config change’ by tearing down and recreating the resource.
Terraform IaC
Terraform is at its core four things:
A language (HashiCorp Configuration Language) to describe the configuration to be applied
A set of Providers that convert configuration described in the Terraform language into API commands for the target platform
The ‘terraform’ command that brings together the Terraform config files and Providers
A configuration state store
The first feature allows one language to describe all kinds of configurations (IaC). The second feature abstracts away the application of that config via Providers (adapters). The third feature allows us to interact with Terraform to get the magic to work. The fourth feature provides a ‘deployed’ state view.
A key artefact when applying config is what Terraform calls a ‘plan’ that explains what changes will be made.
Terraform interacts with target platforms (like Google Cloud, AWS, Azure) using ‘provider’ code blocks and with components made available by those platforms using ‘resource‘ code blocks. Each ‘resource’ block starts with the resource type it represents and its name.
One thing to remember is that a Provider (defined using the ‘provider’ code block) has no special way of interacting with the target platform; it ultimately depends on the APIs exposed by that platform. If a target platform forces you to use its UI for certain management scenarios then you won’t be able to use Terraform for those.
Also, cloud providers generally have their own IaC capabilities, for example Google Cloud Deployment Manager and AWS CloudFormation.
The Project (ver. 1)
Given that I am using the ‘free’ tier of Google Cloud and want to learn those capabilities step-by-step (while avoiding the ‘paid’ tier) I am building my cloud architecture using basic services such as Google Cloud Storage, BigQuery and Looker.
My cloud architecture v1 looks like this:
The CSV file containing the full housing dataset gets uploaded to a GCS bucket. From there a BigQuery Data Transfer Service job parses the CSV file and loads it into a BigQuery table. We then run a set of saved queries to create views for our dashboard. Finally those views are consumed from a Looker Studio dashboard.
The Terraform symbol in the figure indicates automated resource creation. In the current version I am auto-creating the GCS bucket, BigQuery Data Transfer Service and the BigQuery table. The CSV upload can also be automated by using the ‘google_storage_bucket_object’ resource.
The reason I started with the above pieces of automation is that I found repeating these steps again and again very frustrating – especially creating the table in BigQuery (the schema is fairly complex) and creating a data transfer job with the correct parsing parameters.
You can filter by property type: Terraced (T), Flat (F), Other (O), Semi-detached (S) and Detached (D). Then you can see how the combined old + new transactions signal splits if we see only the New Builds (top – right) or only the Resales (Old Builds in bottom – right).
The Terraform code is given below; items in <> need to be filled in before running the script:
To run the above, simply save it as a Terraform file (extension .tf) in its own folder. Make sure Terraform is installed, on the path and associated with your GCP account. Then run the following from the same directory as the .tf file:
terraform validate – validate the file
terraform plan – review the config change plan without deploying anything
terraform apply – apply config changes to the selected Google Cloud project
terraform destroy – destroy changes made by apply
DBSCAN is quite an old algorithm, having been proposed in 1996 [1], but that doesn’t make it any less exciting. Unlike K-Means clustering, DBSCAN does not need the number of clusters as a parameter because it is ‘density’-based. What we provide instead are: the size of the neighbourhood (epsilon – based on a distance metric) and the minimum number of points needed to form a dense region (including the point being examined).
Minimum Number of Points: this is usually the dimensionality of the data + 1. If this is a large number we will find it difficult to designate a dense region.
Epsilon – Neighbourhood Size: this should be chosen keeping in mind that a high value will tend to group regions together into large clusters and a low value will result in no clustering at all.
The algorithm attempts to sort each point in the dataset into either a noise point (an outlier – not in any cluster) or a cluster member.
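Before walking through the figures, here is a compact sketch of that logic. This is not the code from the linked repo, so the [line …] references in the walkthrough below point to cluster.py in the repo, not to this sketch; epsilon and min_pts are the two parameters described above:

import numpy as np

NOISE = -1

def dbscan(points, eps, min_pts):
    # labels: NOISE, or a cluster id >= 0; None means 'not yet visited'
    labels = [None] * len(points)
    cluster = -1

    def neighbours(i):
        # all points within distance eps of point i (including i itself)
        return [j for j in range(len(points))
                if np.linalg.norm(points[i] - points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:            # not a dense region: noise point
            labels[i] = NOISE
            continue
        cluster += 1                        # start a new cluster
        labels[i] = cluster
        seeds = list(nbrs)                  # the seed set
        while seeds:
            j = seeds.pop(0)
            if labels[j] == NOISE:          # border point reclaimed by the cluster
                labels[j] = cluster
            if labels[j] is not None:       # already assigned, skip
                continue
            labels[j] = cluster
            j_nbrs = neighbours(j)
            if len(j_nbrs) >= min_pts:      # core point: extend the seed set
                seeds.extend(j_nbrs)
    return labels

# e.g. labels = dbscan(np.array(data_points), eps=0.5, min_pts=3)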
Walking through the DBSCAN Algorithm
I will walk through the algorithm linking it with specific code segments from: https://github.com/amachwe/db_scan/blob/main/cluster.py. Figure 1 shows some data points in 2d space. Assume minimum number of points is 3 (2d + 1).
Figure 1: Dataset
In the first loop of the iteration we select a point and identify its neighbourhood [line 31]. From Figure 2 we see 3 points in the neighbourhood, therefore the point being examined is not a noise point [line 33]. We can therefore assign the point to a cluster (orange) [line 37].
Figure 2: Finding the neighbourhood and identifying if a noise point or not
Figure 3 shows the seed set for the point being examined [lines 38-40] as the green dotted line.
Figure 3: Seed set creation for point under examination
We take the first point in the seed set (marked by the blue arrow) [line 40], mark it as belonging to the current cluster (orange) [line 48] and then identify its neighbourhood [line 49]. This is shown in Figure 4.
Figure 4: Taking first point from seed set (blue arrow) and repeating neighbourhood finding procedure
Figure 5 shows the expanded seed set [lines 51-56] because of the new neighbourhood of the current seed set point being examined.
Figure 5: Current seed set being extended
We keep repeating the process till all points in the seed set are processed (Figures 6-8) [lines 40-56]. Even though the seed set contains the first point we started with, line 44 checks whether the point being processed has already been assigned to a cluster.
Figure 6: Processing next point in Seed Set
Figure 7: Current seed set after 3 points have been added to a cluster
Figure 8: Final point of the seed set being examined
Noise Points
Figure 9 shows the state after the first four points have been processed and identified as belonging to a cluster (orange). Now we get a point that we can visually confirm is an outlier, but we need to understand how the algorithm deals with it. We can see the selected point (red) has no neighbours within distance epsilon. Therefore the condition on line 33 kicks in and this point is identified as a noise point.
Figure 9: Noise point identification
The algorithm then continues to the next point (see Figure 10) after incrementing the cluster label [line 58] and starts a new cluster (green). We again identify its neighbourhood points and create a new seed set (Figure 11).
Figure 10: Starting a new cluster
Figure 11: Starting a new seed set for the Green cluster
So you used the goodness of DevOps and Agile methodologies to release an app that delivers real business value by automating processes, reducing swivel-chair work and simplifying the user journeys (process before tools!). You made the app container-based, with microservice goodness, compliant REST APIs and five types of databases (cloud native of course!).
As your users start using this app – you are amazed at all the high quality and timely business data that is being collected thanks to shifting away from manual processes. You also get tons of data about the app that helps you support it and improve its reliability.
Suddenly some bright spark has the idea of processing all this business and app data to answer questions like ‘Can we forecast customer order journey time?’ and ‘Can we predict journeys that are likely to get stuck?’
You just got your first business problem with those magic keywords of ‘forecast’ and ‘predict’ that allow you to take scikit-learn for a spin!
You discuss with the benefits owner how they will measure the benefits of this. Thereafter, on a Friday afternoon you find yourself installing python, sklearn and downloading some data.
Congratulations – you have taken your first steps in MLOps – you are planning to build a model, understanding what features to use and thinking about how to measure its business performance.
Over the next week or so you build some forecasting and classification models that give you good results (business results – not AUC!). The business benefit owner is impressed and gives you the go ahead to generate a report every week so that such orders can be triaged early. Now you need to start thinking about rebuilding the model regularly, checking and comparing its performance with the active model. You don’t mind running the model on your laptop and emailing the report every week.
This is your second step in MLOps – to understand how to train/retrain your model and how to select the best one for use. For this you will need to establish feature pipelines that run as part of the training process and ensure the whole thing is a one-command operation so that you can generate the report easily.
Then someone has a good idea – why not create a dashboard that updates daily and provides a list of order journeys with a high probability of getting stuck so that we further improve our response time AND provide the order completion forecast to the customers for better customer service (and to reduce inbound service calls).
This puts you in a fix – because till now you were just running the model on your laptop every week and creating a report in MS Excel – now you need to grow your model and make it part of a product (the app).
It should be deployable outside your laptop, in a repeatable and scalable way (e.g. infra-as-code). Integrations also need to be worked on – your model will need access to feature data as well as an API for serving the results. You also need to stand up your model factory, which will retrain models, compare them with existing ones (quality control) and deploy as required. You will also need to think about infrastructure and model support: since it will be used daily, and some benefit will depend on it working properly, someone needs to keep an eye on it!
This is the third and biggest step in MLOps that moves you from the realm of ad-hoc ML to an ML Product – with product thinking around it like assurance, support, roadmap and feedback.
Now you have to check in all your code so that others can work on the app in case you are chilling on a beach somewhere.
This third big step brings hidden complexity in monitoring requirements. While the model was on your laptop being used on a weekly-basis you did not have to worry about the ‘model environment’ or automated monitoring. You had time for manual monitoring and validation.
Now that the model will be deployed and run daily, with the output being used to drive customer interactions, we cannot depend on manual monitoring. Once the model is deployed it will continuously give forecasts (as orders are generated). If those forecasts are too optimistic then you will increase the inbound call pressure as people chase up their orders. If they are too conservative then you will reduce sales as customers may find the wait too long. Also, unlike software, your ML model could start misbehaving due to data drift (intended or unintended). If you don’t detect and correct for this, the forecast results from the model could stop adding any value (the best case) or actually increase the customer’s pain (the worst case).
In traditional software we would trap these data issues at the edge through validations and we would log these as error events. But here the data can make perfect business sense just that the model can’t make sense of it to give a forecast or a prediction.
Therefore we need to start checking data for data drift, model for bias, model performance, features for validity and the business performance of the app for continued benefit realisation.
Also, this step-by-step approach to AI/ML is old school. We need a more continuous approach to discovering new problems, which can either be farmed off to multi-disciplinary teams if the org is mature enough, or tackled in phases (going from analytics to prediction to prescription) if the org is still developing the right skill sets.
Services contribute, by far, the largest amount to a country’s economic output. Why? Because they are easier to set up, scale up/down and close than, say, a manufacturing unit or a farm.
The last few years have seen massive growth in certain services even as other services declined. One example of the former is the ‘delivery service’ (e.g. Deliveroo) that delivers some product (e.g. food). Covid-19 helped accelerate this growth as people could not go to physical locations to access those products.
But how do these services work? Now that the lockdowns are over and people are not afraid to mingle, what will happen to such services? What factors will impact the future prospects of such services? Let us investigate.
Business Model
To answer the above questions we need to figure out how delivery services interact with the products they deliver in terms of price and value. We know the sale price of any product (the price we see as consumers) includes the cost of the services that went into making that product (e.g. a chef’s services in cooking a dish). One of these services is product delivery, which gives us access to that product (e.g. the salary of the waiter in a restaurant or of the delivery driver).
Delivery Fleet to Delivery Aggregator
Before delivery aggregators gave access to a large pool of delivery personnel, each producer had their own delivery fleet (e.g. Domino’s, local take-aways etc.), the cost of which was either included in the product sale price (see equation 1), added on as a fixed delivery charge, or calculated per delivery (e.g. courier).
Product Sale Price = Base Price + Delivery Price (1)
Consumers could only order certain items, usually from local businesses. Constraints like minimum spend were also a pain point, and you could only order from one business at a time.
Now businesses need not maintain their own delivery fleet, and consumers can order more items from a wider range of producers (with the ability to mix and match) from the comfort of their home/office. This can be thought of as decoupling the product from the delivery channel.
The delivery firms make money out of the perceived value of accessing the product with minimum effort (e.g. walking, driving, parking), where consumers are trading money for time. They save money through economies of scale and by (sometimes) treating employees like they are not employees (which reduces operating costs).
Final Product Price = Price Paid for Delivery + Product Sale Price (2)
We would also expect businesses to reduce the product sale price (see equation 1) because the delivery price is now accounted for separately (see equation 2). This should lead to benefits all around, as consumers should pay a lower cost to access the product at home (which can be checked by comparing, say, a take-away menu price with the price for home delivery via an aggregator). But we know how pricing in general works: the more processing a raw material requires before it can be consumed, the lower the probability its price will ever fall. Why? Because with processing, cost increases and more factors come into play; read more here: Causes of Inflation
Trouble with this Business Model
There are a few issues with this business model, and equation (2) points to a big one: the demand for the service provided by delivery aggregators is entirely dependent on the demand for the products they deliver.
Therefore if the products they deliver are ‘luxuries’ such as take-away meals or restaurant dinners then the demand will go down if an economic downturn is expected (such as now). This is one reason you find delivery aggregators like Uber and Deliveroo are diversifying into daily-use groceries (which are not seen as luxury items).
Impact to Demand for the Service
The other thing that can impact demand for the delivery service is demand for the service itself, independent of demand for the product. Remember, unless the product is sold exclusively through the delivery aggregator, people can still consume the product without consuming the delivery service. That is why you see exclusive tie-ups between producers and delivery aggregators (e.g. Nando’s and Deliveroo).
Figure 1: Some factors that can impact demand for delivery service.
There could be many reasons why demand for the delivery service changes. I have attempted to illustrate some of them in Figure 1, but given seasonal, locational and other factors I am sure there are many more.
Imagine a food delivery scenario where we look at two groups of people: one who live near the city centre (with high concentration of restaurants and take-aways) and the other who live in the suburbs (with low concentration of restaurants and take-aways).
For the City Centre group, distance to the eating joints is probably not a big pain point, but delivery is still convenient (trading money for time). For the Suburbs group it is a pain point, and because the delivery service gives them access to food from the City Centre, they are happy to pay a bit more for delivery as it removes the big pain points of travelling to the City Centre, finding and paying for parking, and so on.
But this can change very easily. For example, if the weather improves the City Centre group might enjoy a walk to the eating joint (even if they get a take-away). Or if the base price of the food increases it might encourage them to walk (especially given the health benefits of exercise).
For the Suburbs group, if the delivery price increases, even while the base price of the food remains the same, they may choose to make the effort to get the food themselves. The delivery price can increase for many reasons – e.g. high demand or the cost of fuel going up. Another factor could be the end of lockdowns: the prospect of going to the City Centre may no longer be such a big pain point (especially when the weather is good or during holidays).
Concepts like ‘dark kitchens’ – where food from different restaurants is cooked in the same kitchen, with such kitchens located in different parts of the city – are emerging to address price variability, improve access and reduce costs.
What Does the Future Hold?
Given the slim margins there is very little room to increase spending without impacting the delivery price. Here are some factors that will decide what direction this space takes:
Regulations: Given that gig workers can be easily left without any cover, unlike regular employees, there is a big push to reclassify delivery personnel as employees, which means giving them paid leave, sick pay and other benefits – and reducing profits for the delivery aggregator
Technology: Delivery is human-labour intensive and we will not be able to reduce costs easily. Technology such as drones could provide the next level of cost reduction but that doesn’t look like something around the corner
Income Levels: Delivery Aggregators depend on disposable income of the consumers so they can pay that little bit extra for delivery. If income levels start to fall all these ‘little extra bits’ will start to bite. This can be seen in other areas as well like Content Platforms (e.g. Netflix, Disney+) where people are cutting down spending
Product Experience: Experience around the product is just as important as the product itself. For example when we go to a grocery store we end up buying items not on the list or discovering new products. With delivery aggregators we cannot get that experience easily
Lifestyle Changes: After the Covid-19 lockdowns and large scale work-from-home most companies are exploring different work arrangements. From flexible working arrangements to a 4-day work week. All these things impact the one thing that delivery aggregators are meant to save – time. With changes to work patterns people have more time to spare. Therefore, the value of time goes down and they may not want to ‘buy’ time with money
In general I don’t think we will see skyrocketing growth in this area, and given the bleak economic outlook one can only predict a short-term decline and longer-term stabilisation.
The word above is on everyone’s mind these days. There is a lot of panic around rising prices. But what is Inflation and how can we understand it? In this post I want to present a simple mental model and work through some sources of Inflation – because not all inflation is the same.
Inflation is defined with respect to the rise in price of a basket of products. These could be consumer products or wholesale products. The key idea is to track the price of that basket. If the price is rising we call it inflation and the rate of change of this price is the often quoted ‘rate of inflation’.
Central banks usually target an inflation rate slightly greater than 0%; the Bank of England, for example, targets 2%. But why do we need this to be a small positive number? Why can’t we freeze prices? Wouldn’t that be good for all? Let us try and understand this using a simple model.
The Price Model
To understand inflation we need to first understand how products and services are priced. That will help us understand how it can change.
Production Model with Factors of Input
When we consume anything – be it a service (e.g. Uber ride) or a product (e.g. chocolate bar) we pay a price that includes all the factors that went into production. The common factors are stated in the image above: Land, Capital (money and goods), Labour (skilled and unskilled), Raw Materials and Energy.
The price also includes the perceived value of the product or service, margins, the cost of hidden services (e.g. HR, Legal, Finance) and taxes. This is on top of the cost of production.
The input factors underlined in green can change very quickly. It is said that the current bout of inflation is due to rising energy prices (oil and gas) caused by the Ukraine war.
There are also hidden services that are part of production (e.g. logistics) that are impacted by rising input costs such as fuel.
Therefore:
Price = Perceived Value + Cost of Input Factors + Cost of Hidden Services + Margins + Taxes
Furthermore, Demand for a product, along with its Price, gives us one part of the Profit equation. After all, the aim of selling a product or service is to make a Profit – which means the Demand and the Price should provide enough money to not only cover the outflows in terms of costs, margins and taxes but also to generate a return for the shareholders.
Origins of Inflation: Supply Side
Now let us look at the Supply Side (the producer’s point of view) sources of price rise (inflation). A common thread in all of these is ‘expectation’. If as a producer I expect people can pay a bit more, I will try to raise the price before the competition catches on and make that bit of extra profit. Similarly, if as a seller I expect my input costs to rise (e.g. a rise in interest rates, raw material costs or the salary increases needed to retain talent), I will raise prices pre-emptively.
Profit Motive and Perceived Value
The price can increase if, as a producer, I feel that either:
there is extra money for people to spend (e.g. during a lockdown) and
the Perceived Value of my product is significant
or
supply is reducing and
the Perceived Value of my product is significant
In either case I know that increasing the price should not badly impact demand. And since the above information is not secret, if many producers increase their prices it can lead to a wider price rise as knock-on effects kick in. A perfect case in point: hand sanitisers during the early days of the Covid-19 pandemic.
While not a driver for inflation – one example where we can see two different producers copying each other is the pricing of Apple smartphones compared with Samsung smartphones. In the beginning Apple devices were more expensive than Samsung Galaxy S devices. Now Samsung Galaxy S series is more expensive given that they have no ‘iPhone 13 mini’ equivalent and are therefore firmly aimed at the value buyer.
Factor of Production Price Increase/Supply Decrease
This is where the web of inputs and outputs transmits a price rise from the source to the target (i.e. our pockets). It is also a lot easier to understand. For example: fuel gets more expensive (e.g. due to a war), which leads to a wholesale food price increase (as logistics depends on diesel), which leads to an increase in the retail price of food and eating out. This leads to people demanding higher salaries to compensate, which in turn starts impacting other industries (as their skilled/unskilled labour costs increase). Because salaries don’t rise as quickly as prices, people end up cutting expenditure.
Rising Cost of Capital
If the cost of capital increases (i.e. interest rates rise) then not only do producers find it hard to borrow money to expand, consumers also find it hard to borrow money to buy things (e.g. houses, cars, phones). Producers find it harder to repay debts (e.g. commercial loans, mortgages) and are forced to raise prices.
Expectation
If there is an expectation that prices will rise, they will rise. For example, in the current scenario where there is massive press coverage of a pending energy price rise, businesses will start raising prices because energy bills and other costs are difficult to trim down quickly. Cost-of-living increases will also force labour costs up, which will accelerate the price rise.
Origin of Inflation: Demand Side
The other side of inflation is the Demand Side (the consumer’s point of view). The standard story is that when there is too much money in the market the ‘value of money goes down’, therefore you have to pay more for the same product. But I see that as expectation-driven too. Consumers don’t set the price; they set the demand where possible. I say ‘where possible’ because you can only reduce expenses to an extent. The way money floods the market is also asymmetric.
The ‘right’ price is not a single thing. As I explained with my pricing model various objective and subjective factors go into setting the price.
Therefore Demand Side inflation can be thought of as a game. Producers and Consumers are trying to find the ‘right’ price that maximises the return for Producers and the benefit (the so-called utility) for Consumers. The rules of the game are simple:
If the price falls below a certain level where Producers don’t generate any surplus at that level of Demand (i.e. their income), then there is no point in continuing to Produce, because otherwise the Producer won’t be able to Consume anything (unless they have other sources of income)
If the price rises above a certain level where it impacts more and more people, so that Demand reduces to an extent that Producers no longer generate any surplus (at that price level), then again there is no point in continuing to Produce
Therefore the game is to maintain the price between the two extremes while compensating for external influences (e.g. cost of capital, perceived value etc.)
Why a small positive number for Rate of Inflation?
This is because only if there is expected growth in demand will there be real growth in supply (no one will Produce excess goods unless they can be sold). With that in mind, for there to be an expectation of growth in demand either prices must fall or more money must be made available to buy the excess goods. Since it is difficult for prices to fall (especially given the Profit Motive and the fact that excess demand for Input Factors is likely to increase those prices), more money must be made available. And with more money being made available we are seeding inflation.
What if we kept strict price controls?
What if there was a product P which didn’t depend on any imports – not even for energy or logistics (no diesel trucks) or any outsourced service? I bet you are finding it difficult to think of many (any?) such products.
But let us assume there are products like that and that we can keep the prices of all the input factors constant (and the input factors going into producing those input factors constant, and so on – it is a web at the end of the day).
So, with that magic in place, if we kept producing P at a fixed price then to still avoid inflation we must keep demand the same. Why? Because if demand is increasing (say due to population increase or the expectation of a shortage) and we don’t increase supply, then there is an opportunity for arbitrage: people can buy at the fixed price but sell at a higher price to someone with a greater need. Now if we increase supply to match the demand, we transmit that demand down the supply web (i.e. we will need more input factors etc.) and at the end we end up with someone owning raw materials (like oil, ore, IPR etc.) who will need to spend more money (e.g. wages, capital) to extract more. They won’t be able to transfer the increased demand further down the web because they are at the edge. At that point these edge producers have a way to avoid raising prices: get additional labour or output without increasing the salary or interest bill, or reduce the profit margin. Assuming they reduce the profit margin (as generally people won’t work more for the same salary or accept less interest on a loan), they will avoid a price rise.
But what happens when demand rises again (the population keeps on increasing)? There will come a time when the profit margin cannot be reduced any further – and it will no longer be worth the edge producer’s while to remain in business. This is what happened during Covid-19, when edge producers (in China) stopped producing at the same levels – leading to price shocks around the world.
Conclusion
Hopefully I have peeled back some of the confusion around Inflation and its sources.
Some key points:
Inflation is driven more by expectation than anything else
Shocks can kick start inflation but it is the expectation that those shocks give rise to that really ramps up the price rise
Shock -> Expectation -> Panic is the perfect storm for price rise
Early inflation can lead to arbitrage opportunities where people buy cheap, hold and sell at a higher price