DBSCAN: Clustering Algorithm

DBSCAN is quite an old algorithm, having been proposed in 1996 [1], but that doesn't make it any less exciting. Unlike K-Means clustering, DBSCAN does not need the number of clusters as a parameter because it is 'density'-based. What we provide instead are the size of the neighbourhood (epsilon, based on a distance metric) and the minimum number of points required to form a dense region (including the point being examined).

Minimum Number of Points: this is usually the dimensionality of the data + 1. If this is set to a large number, it becomes difficult for any region to qualify as dense.

Epsilon – Neighbourhood Size: this should be chosen keeping in mind that a high value will tend to merge regions into a few large clusters, while a low value will result in no clustering at all (every point ends up as noise).

The algorithm classifies each point in the dataset as either a noise point (an outlier belonging to no cluster) or a member of a cluster.
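The procedure described above can be sketched in a few lines of plain Python. This is my own minimal sketch, not the code from the linked repo (whose line numbers the walkthrough below references); it uses a simple brute-force neighbourhood search and invented variable names:

```python
import math

NOISE = -1  # label for outliers

def neighbourhood(data, i, eps):
    """Return indices of all points within distance eps of point i (inclusive)."""
    return [j for j, q in enumerate(data) if math.dist(data[i], q) <= eps]

def dbscan(data, eps, min_pts):
    """Label each point with a cluster id (0, 1, ...) or NOISE (-1)."""
    labels = [None] * len(data)          # None = not yet visited
    cluster = 0
    for i in range(len(data)):
        if labels[i] is not None:
            continue                      # already assigned or flagged as noise
        seeds = neighbourhood(data, i, eps)
        if len(seeds) < min_pts:
            labels[i] = NOISE             # too few neighbours: noise point
            continue
        labels[i] = cluster               # start a new cluster with this point
        k = 0
        while k < len(seeds):             # the seed set can grow while we loop
            j = seeds[k]
            k += 1
            if labels[j] == NOISE:
                labels[j] = cluster       # border point: absorb into the cluster
            if labels[j] is not None:
                continue                  # skip points already dealt with
            labels[j] = cluster
            nbrs = neighbourhood(data, j, eps)
            if len(nbrs) >= min_pts:      # j is a core point: extend the seed set
                seeds.extend(nbrs)
        cluster += 1                      # next cluster gets a fresh label
    return labels

points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5), (10, 10)]
print(dbscan(points, eps=1.5, min_pts=3))   # [0, 0, 0, 1, 1, 1, -1]
```

On the toy data, the first three points form one dense cluster, the next three another, and the isolated point at (10, 10) is flagged as noise.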

Walking through the DBSCAN Algorithm

I will walk through the algorithm, linking each step with specific code segments from: https://github.com/amachwe/db_scan/blob/main/cluster.py. Figure 1 shows some data points in 2d space. Assume the minimum number of points is 3 (2d + 1).

Figure 1: Dataset

In the first iteration we select a point and identify its neighbourhood [line 31]. From Figure 2 we see 3 points in the neighbourhood, therefore the point being examined is not a noise point [line 33]. We can therefore assign the point to a cluster (orange) [line 37].

Figure 2: Finding the neighbourhood and identifying if a noise point or not

Figure 3 shows the seed set for the point being examined [lines 38-40] as the green dotted line.

Figure 3: Seed set creation for point under examination

We take the first point in the seed set (marked by the blue arrow) [line 40], mark it as belonging to the current cluster (orange) [line 48], and then identify its neighbourhood [line 49]. This is shown in Figure 4.

Figure 4: Taking first point from seed set (blue arrow) and repeating neighbourhood finding procedure

Figure 5 shows the seed set expanded [lines 51-56] with the new neighbourhood of the seed-set point currently being examined.

Figure 5: Current seed set being extended

We keep repeating the process until all points in the seed set are processed (Figures 6-8) [lines 40-56]. Although the seed set contains the first point we started with, line 44 checks whether the point being processed has already been assigned to a cluster, so no point is handled twice.

Figure 6: Processing next point in Seed Set
Figure 7: Current seed set after 3 points have been added to a cluster
Figure 8: Final point of the seed set being examined

Noise Points

Figure 9 shows the state after the first four points have been processed and identified as belonging to a cluster (orange). Now we reach a point that we can visually confirm is an outlier, but we need to understand how the algorithm deals with it. The selected point (red) has no neighbours within distance epsilon, therefore the condition on line 33 kicks in and the point is identified as a noise point.

Figure 9: Noise point identification

The algorithm continues to the next point (see Figure 10) after incrementing the cluster label [line 58] and starts a new cluster (green). We again identify its neighbourhood points and create a new seed set (Figure 11).

Figure 10: Starting a new cluster
Figure 11: Starting a new seed set for the Green cluster

Testing

The testing script (Python Notebook) can be found here: https://github.com/amachwe/db_scan/blob/main/cluster_test.ipynb

The first block contains some data generation functions – a combination of commonplace ones such as Iris and Make Moons as well as some custom ones.

Make Moons Function

No noise points. Number of Clusters: 2.

Figure 12: Half moons – comparing custom implementation with SKLearn

Iris Dataset

Noise match between the two algorithms: 85.71%. Number of clusters: 3.

Figure 13: Iris first two dimensions – comparing custom implementation with SKLearn
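The notebook reports a 'noise match' percentage between the two implementations. Its exact definition isn't shown here, but one plausible reading is the share of agreement among the points that either labelling flags as noise. A sketch with hypothetical label vectors:

```python
def noise_match(labels_a, labels_b, noise=-1):
    """Percentage agreement on noise, over points that either labelling flags as noise.

    This is one plausible definition; the notebook's exact metric may differ.
    """
    flagged = [(a == noise, b == noise)
               for a, b in zip(labels_a, labels_b)
               if a == noise or b == noise]
    if not flagged:
        return 100.0               # neither labelling found any noise
    agree = sum(1 for a, b in flagged if a and b)
    return 100.0 * agree / len(flagged)

# Hypothetical labellings: both flag six of the seven candidate noise points.
a = [-1, -1, -1, -1, -1, -1, -1, 0, 1]
b = [-1, -1, -1, -1, -1, -1,  0, 0, 1]
print(round(noise_match(a, b), 2))   # 85.71
```

With six agreements out of seven flagged points this comes out to 85.71%, the same order as the figure reported for the Iris comparison.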

A Voyage of MLOps Discovery

So you used the goodness of DevOps and Agile methodologies to release an app that delivers real business value by automating processes, reducing swivel-chair work and simplifying the user journeys (process before tools!). You built the app container-based with microservice goodness, compliant REST APIs and five types of databases (cloud native, of course!).

As your users start using this app – you are amazed at all the high quality and timely business data that is being collected thanks to shifting away from manual processes. You also get tons of data about the app that helps you support it and improve its reliability.

Suddenly some bright spark has the idea of processing all this business and app data to answer questions like ‘Can we forecast customer order journey time?’ and ‘Can we predict journeys that are likely to get stuck?’

You just got your first business problem with those magic keywords of 'forecast' and 'predict' that allow you to take scikit-learn for a spin!

You discuss with the benefits owner how they will measure the benefits of this. Thereafter, on a Friday afternoon, you find yourself installing Python and scikit-learn and downloading some data.

Congratulations – you have taken your first steps in MLOps – you are planning to build a model, understanding what features to use and thinking about how to measure its business performance.

Over the next week or so you build some forecasting and classification models that give you good results (business results – not AUC!). The business benefit owner is impressed and gives you the go ahead to generate a report every week so that such orders can be triaged early. Now you need to start thinking about rebuilding the model regularly, checking and comparing its performance with the active model. You don’t mind running the model on your laptop and emailing the report every week.

This is your second step in MLOps – to understand how to train/retrain your model and how to select the best one for use. For this you will need to establish feature pipelines that run as part of the training process and ensure the whole thing is a one-command operation so that you can generate the report easily.

Then someone has a good idea – why not create a dashboard that updates daily and provides a list of order journeys with a high probability of getting stuck so that we further improve our response time AND provide the order completion forecast to the customers for better customer service (and to reduce inbound service calls). 

This puts you in a fix, because until now you were just running the model on your laptop every week and creating a report in MS Excel. Now you need to grow your model and make it part of a product (the app).

It should be deployable outside your laptop, in a repeatable and scalable way (e.g. infra-as-code). Integrations also need to be worked on – your model will need access to feature data as well as an API for serving the results. Also you need to stand up your model factory that will retrain models, compare with existing ones (quality control) and deploy as required. You will also need to think about infrastructure and model support. Since it will be used daily and some benefit will depend on it working properly – someone needs to keep an eye on it!

This is the third and biggest step in MLOps that moves you from the realm of ad-hoc ML to an ML Product – with product thinking around it like assurance, support, roadmap and feedback.

Now you have to check in all your code so that others can work on the app in case you are chilling on a beach somewhere.

This third big step brings hidden complexity in monitoring requirements. While the model lived on your laptop and was used on a weekly basis, you did not have to worry about the 'model environment' or automated monitoring. You had time for manual monitoring and validation.

Now that the model will be deployed and run daily, with its output driving customer interaction, we cannot depend on manual monitoring. Once deployed, the model will continuously give forecasts (as orders are generated). If those forecasts are too optimistic then you will increase the inbound call pressure as people chase up on orders. If they are too conservative then you will reduce sales, as customers might find the wait too long. Also, unlike traditional software, your ML model could start misbehaving due to data drift (intended or unintended). If you don't detect and correct for this, the forecasts could stop adding any value (the best case) or actually increase customers' pain (the worst case).

In traditional software we would trap these data issues at the edge through validations, and we would log them as error events. But here the data can make perfect business sense; it is just that the model cannot make sense of it to give a forecast or a prediction.

Therefore we need to start checking data for data drift, model for bias, model performance, features for validity and the business performance of the app for continued benefit realisation. 

Also, this step-by-step approach to AI/ML is old school. We need a more continuous approach to discovering new problems, which can either be farmed off to multi-disciplinary teams if the org is mature enough, or tackled in phases (going from analytics to prediction to prescription) if the org is still developing the right skill sets.

More on this later.

Survival/Demise of the Delivery Service Economy

Services contribute, by far, the largest amount to a country's economic output. Why? Because they are easier to set up, scale up/down and close than, say, a manufacturing unit or a farm.

The last few years have seen massive growth in certain services even as other services declined. One example of the former is the 'delivery service' (e.g. Deliveroo) that delivers some product (e.g. food). Covid-19 helped accelerate the growth as people could not go to physical locations to access those products.

But how do these services work? Now that the lockdowns are over and people are not afraid to mingle, what will happen to such services? What factors will impact the future prospects of such services? Let us investigate.

Business Model

To answer the above questions we need to figure out how delivery services interact with the products they deliver in terms of price and value. We know the sale price of any product (the price we see as consumers) includes the service costs that went into making that product (e.g. the chef's services in cooking a dish). One of these services is product delivery, which gives us access to the product (e.g. the salary of the waiter in a restaurant, or of the delivery driver).

Delivery Fleet to Delivery Aggregator

Before delivery aggregators gave access to a large pool of delivery personnel, each producer had their own delivery fleet (e.g. Dominos, local take-aways etc.), the cost of which was either included in the product sale price (see equation 1), added on as a fixed delivery charge, or calculated per delivery (e.g. courier).

Product Sale Price = Base Price + Delivery Price (1)

Consumers could only order certain items, usually from local businesses, and only from one business at a time. Constraints like minimum spend were also a pain point.

Now businesses need not maintain their own delivery fleet, and consumers can order more items from a wider range of producers (with the ability to mix and match) from the comfort of their home or office. This can be thought of as decoupling the product from the delivery channel.

The delivery firms make money out of the perceived value of accessing the product with minimum effort (e.g. walking, driving, parking) where consumers are trading money for time. They save money through economies of scale and by (sometimes) treating employees like they are not employees (which reduces operating costs).

Final Product Price = Price Paid for Delivery + Product Sale Price (2)

We would also expect businesses to reduce the product sale price (see equation 1) because the delivery price is now accounted for separately (see equation 2). This should lead to benefits all around, as consumers should pay a lower cost to access the product at home (which can be checked by comparing, say, the take-away menu price against the price for home delivery via an aggregator). But we know how pricing in general works: the more processing a raw material requires before it can be consumed, the less likely its price is to ever fall. Why? Because with processing, cost increases and more factors come into play; read more here: Causes of Inflation

Trouble with this Business Model

There are a few issues with this business model, and equation (2) points to a big one. The demand for the service provided by delivery aggregators is entirely dependent on the demand for the products they deliver.

Therefore if the products they deliver are ‘luxuries’ such as take-away meals or restaurant dinners then the demand will go down if an economic downturn is expected (such as now). This is one reason you find delivery aggregators like Uber and Deliveroo are diversifying into daily-use groceries (which are not seen as luxury items).

Impact to Demand for the Service

The other thing that can impact demand for the delivery service is whether the service is needed at all. Remember, unless the product is exclusively sold through the delivery aggregator, people can still consume the product without consuming the delivery service. That is why you have exclusive tie-ups between producers and delivery aggregators (e.g. Nando's and Deliveroo).

Figure 1: Some factors that can impact demand for delivery service.

There could be many reasons why demand for the delivery service changes. I have attempted to illustrate some of them in Figure 1, but given seasonal, locational and other factors I am sure there are many more.

Imagine a food delivery scenario where we look at two groups of people: one who live near the city centre (with high concentration of restaurants and take-aways) and the other who live in the suburbs (with low concentration of restaurants and take-aways).

For the City Centre group, distance to the eating joints is probably not a big pain point, but delivery is still convenient (trading money for time). For the Suburbs group, however, it is a pain point; because the delivery service gives them access to food from the City Centre, they are happy to pay a bit more for delivery, as it removes the hassle of travelling to the City Centre, finding and paying for parking, and so on.

But this can change very easily. For example, if the weather improves, the City Centre group might enjoy a walk to the eating joint (even if they get a take-away). Or if the base price of the food increases, it might encourage them to walk (especially given the health benefits of exercise).

For the Suburbs group, if the delivery price increases (even if the base price of the food remains the same) they may choose to make the effort to get the food themselves. The delivery price can increase for many reasons, e.g. high demand or a rise in the cost of fuel. Another factor is the end of lockdown: going to the City Centre may no longer be such a big pain point (especially when the weather is good or during holidays).

Concepts like 'dark kitchens', where food from different restaurants is cooked in the same kitchen, with kitchens located in different parts of the city, are emerging to address price variability, improve access and reduce costs.

What Does the Future Hold?

Given the slim margins there is very little room to increase spending without impacting the delivery price. Here are some factors that will decide what direction this space takes:

  1. Regulations: Given that gig workers can be easily left without any cover unlike regular employees there is a big push to reclassify delivery personnel which means giving them paid leave, sick pay and other benefits and reducing profits for the delivery aggregator
  2. Technology: Delivery is human-labour intensive and we will not be able to reduce costs easily. Technology such as drones can provide the next level of cost reduction but that doesn’t look like something around the corner
  3. Income Levels: Delivery Aggregators depend on disposable income of the consumers so they can pay that little bit extra for delivery. If income levels start to fall all these ‘little extra bits’ will start to bite. This can be seen in other areas as well like Content Platforms (e.g. Netflix, Disney+) where people are cutting down spending
  4. Product Experience: Experience around the product is just as important as the product itself. For example when we go to a grocery store we end up buying items not on the list or discovering new products. With delivery aggregators we cannot get that experience easily
  5. Lifestyle Changes: After the Covid-19 lockdowns and large scale work-from-home most companies are exploring different work arrangements. From flexible working arrangements to a 4-day work week. All these things impact the one thing that delivery aggregators are meant to save – time. With changes to work patterns people have more time to spare. Therefore, the value of time goes down and they may not want to ‘buy’ time with money

In general I don't think we will see skyrocketing growth in this area; given the bleak economic outlook one can only predict a short-term decline and longer-term stabilisation.

Inflation

The word above is on everyone’s mind these days. There is a lot of panic around rising prices. But what is Inflation and how can we understand it? In this post I want to present a simple mental model and work through some sources of Inflation – because not all inflation is the same.

Inflation is defined with respect to the rise in price of a basket of products. These could be consumer products or wholesale products. The key idea is to track the price of that basket. If the price is rising we call it inflation and the rate of change of this price is the often quoted ‘rate of inflation’.

Central Banks usually target an inflation rate slightly greater than 0%; the Bank of England, for example, targets a rate of 2%. But why do we need this to be a small positive number? Why can't we freeze prices? Won't that be good for all? Let us try to understand this using a simple model.

The Price Model

To understand inflation we need to first understand how products and services are priced. That will help us understand how it can change.

Production Model with Factors of Input

When we consume anything – be it a service (e.g. Uber ride) or a product (e.g. chocolate bar) we pay a price that includes all the factors that went into production. The common factors are stated in the image above: Land, Capital (money and goods), Labour (skilled and unskilled), Raw Materials and Energy.

The price also includes the perceived value of the product or service, margins, cost of hidden services (e.g. HR, Legal, Finance) and tax costs. This is above the cost of production.

The input factors underlined in green can change very quickly. It is said that the current bout of inflation is due to rising energy prices (oil and gas) caused by the Ukraine war.

There are also hidden services that are part of production (e.g. logistics) that are impacted by rising input costs such as fuel.

Therefore:

Price = Perceived Value + Cost of Input Factors + Cost of Hidden Services + Margins + Taxes

Furthermore, demand for a product, along with its price, gives us one part of the profit equation. After all, the aim of selling a product or service is to make a profit, which means the demand and the price should provide enough money not only to cover the outflows in terms of costs, margins and taxes but also to generate a return for the shareholders.
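That profit relationship can be sketched numerically. All figures below are invented purely for illustration:

```python
# All figures are hypothetical, purely for illustration.
price = 10.0                   # sale price per unit
demand = 1_000                 # units sold at that price
cost_per_unit = 6.0            # input factors + hidden services, per unit
taxes = 1_500.0                # total taxes
shareholder_return = 2_000.0   # return expected by shareholders

revenue = price * demand
profit = revenue - cost_per_unit * demand - taxes
print(profit)                         # 2500.0
print(profit >= shareholder_return)   # True: this price/demand level is viable
```

If either demand falls or input costs rise, the surplus shrinks below the expected return and the producer is pushed towards raising the price.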

Origins of Inflation: Supply Side

Now we look at the Supply Side (the producer's point of view) sources of price rise (inflation). A common thread in all of these is 'expectation'. If, as a producer, I expect people can pay a bit more, I will try to raise the price before the competition catches on and make that bit of extra profit. Similarly, I will raise prices pre-emptively if, as a seller, I expect my input costs to rise (e.g. a rise in interest rates, raw material costs, or the salary increases needed to retain talent).

Profit Motive and Perceived Value

Price can increase if, as a producer, I feel:

  • there is extra money for people to spend (e.g. during a lockdown) and
  • the Perceived Value of my product is significant

or

  • supply is reducing
  • the Perceived Value of my product is significant

then I know increasing the price should not badly impact demand. And since the above information is not secret, if many producers increase their prices it can lead to a wider price rise as knock-on effects kick in. A perfect case in point: hand sanitisers during the early days of the Covid-19 pandemic.

While not a driver of inflation, one example where we can see two producers copying each other is the pricing of Apple smartphones compared with Samsung smartphones. In the beginning Apple devices were more expensive than Samsung Galaxy S devices. Now the Samsung Galaxy S series is more expensive, given that Samsung has no 'iPhone 13 mini' equivalent firmly aimed at the value buyer.

Factor of Production Price Increase/Supply Decrease

This is where the web of inputs and outputs transmits price rises from source to target (i.e. our pockets). It is also a lot easier to understand. For example: fuel gets more expensive (e.g. due to a war), which leads to a wholesale food price increase (as logistics depends on diesel), which leads to an increase in the retail price of food and eating out. This leads to people demanding higher salaries to compensate, which in turn starts impacting other industries (as their skilled/unskilled labour cost increases). Because salaries don't rise as quickly as prices, people end up cutting expenditure.

Rising Cost of Capital

If the cost of capital increases (i.e. rise of interest rates) then not only do producers find it hard to borrow money to expand, the consumers find it hard to borrow money to buy things (e.g. houses, cars, phones). Producers find it harder to repay debts (e.g. commercial loans, mortgages) and are forced to raise prices.

Expectation

If there is an expectation for prices to rise they will rise. For example, in the current scenario where there is massive press coverage of pending energy price rise, businesses will start raising prices because energy bills and other costs are difficult to trim down quickly. Cost of living increases will also force labour costs to go up which will accelerate price rise.

Origin of Inflation: Demand Side

The other side of inflation is the Demand Side (the consumer's point of view). The standard story is that when there is too much money in the market the 'value of money goes down', therefore you have to pay more for the same product. But I see that as expectation-driven too. Consumers don't set the price; they set the demand, where possible. I say where possible because you can only reduce expenses to an extent. The way money floods the market is also asymmetric.

The ‘right’ price is not a single thing. As I explained with my pricing model various objective and subjective factors go into setting the price.

Therefore Demand Side inflation can be thought of as a game. Producers and Consumers are trying to find what is the ‘right’ price that maximises the return for Producers and benefit for Consumers (or so called utility). The rules of the game are simple:

  • If the price falls below a certain level where Producers don't generate any surplus at that level of Demand (i.e. their income), then there is no point in continuing to Produce, because otherwise the Producer won't be able to Consume anything (unless they have other sources of income)
  • If the price rises above a certain level where it impacts more and more people, reducing Demand to an extent that Producers no longer generate any surplus (at that price level), then there is again no point in continuing to Produce
  • Therefore the game is to maintain the price between the two extremes while compensating for external influences (e.g. cost of capital, perceived value etc.)

Why a small positive number for Rate of Inflation?

This is because only if there is expected growth in demand will there be real growth in supply (no one will Produce excess goods unless they can be sold). For there to be an expectation of growth in demand, either prices must fall or more money must be made available to buy the excess goods. Since it is difficult for prices to fall (especially given the Profit Motive, and the fact that excess demand for Input Factors is likely to increase those prices), more money must be made available. And with more money being made available, we are seeding inflation.

What if we kept strict price controls?

What if there were a product P which didn't depend on any imports? Not even for energy or logistics (no diesel trucks), or any outsourced service. I bet you are finding it difficult to think of many (any?) such products.

But let us assume there are products like that and we can keep prices of all the input factors constant (and the input factors going into producing those input factors and so on constant as well – it is a web at the end of the day).

So with that magic in place, if we kept producing P at a fixed price, then to still avoid inflation we must keep demand the same. Why? Because if demand is increasing (say due to population increase, or the expectation of a shortage) and we don't increase supply, then there is an opportunity for arbitrage: people can buy at the fixed price but sell at a higher price to someone with greater need. Now if we increase supply to match the demand, we transmit that demand down the supply web (i.e. we will need more input factors, and so on), and at the end we will find someone owning raw materials (like oil, ore, IPR etc.) who will need to spend more money (e.g. wages, capital) to extract more. They cannot transfer the increased demand further down the web because they are at the edge. At that point these edge producers have two options to avoid raising prices: either get additional labour or output without increasing the salary or interest bill, or reduce the profit margin. Assuming they reduce the profit margin (as generally people won't work more for the same salary or accept less interest on a loan), they will avoid a price rise.

But what happens when demand rises again (the population keeps increasing)? There will come a time when profit cannot be reduced any further, and it will no longer be worth the edge producer's while to remain in business. This is what happened during Covid-19, when the edge producers (in China) stopped producing at the same levels, leading to price shocks around the world.

Conclusion

Hopefully I have peeled back some of the confusion around Inflation and its sources.

Some key points:

  1. Inflation is driven more by expectation than anything else
  2. Shocks can kick start inflation but it is the expectation that those shocks give rise to that really ramps up the price rise
  3. Shock -> Expectation -> Panic is the perfect storm for price rise
  4. Early inflation can lead to arbitrage opportunities where people buy cheap, hold and sell at a higher price

Thank you for reading!

2021 Cricket T20 World Cup in Numbers

The 2021 edition of the T20 World Cup (Men's), postponed from 2020, has been a strange one: the delay and change of venue caused by Covid-19, and the 3-stage format that saw Associate teams like Namibia and Papua New Guinea battling it out with past champions like Sri Lanka for a place in the second stage, to name a few oddities. For the second stage, the asymmetric nature of the two groups was also puzzling. But for me the most important 'failure' was the result depending on the toss.

At the beginning of the game a coin is tossed and the winning Captain gets to choose whether to bat first or bowl first. In this post I analyse who won the toss and what they decided to do and the impact it had on the result of the game.

The top-left pie in Figure 1 shows the outcome of winning the toss (independent of the decision) and how it relates to the result. We can see that 2/3 (66%) of the matches were won by the team that won the toss. If we look at the bottom-left pie, which takes into account whether the toss winner batted or bowled first, we see that again about 2/3 (66%) of the teams that decided to bowl first won the match.

Figure 1: Toss – Result and Batting First – Result Outcomes

The left column shows data from all the matches, so some errors can creep in, because matches against Associate members are less likely to be impacted by the toss: an experienced team is likely to dominate an Associate member irrespective of whether they bat or bowl first. The right column shows the data from matches that did not involve an Associate team (i.e. no matches against Afghanistan, Scotland, Netherlands, Namibia, Oman or PNG).

Here the relationship between winning the toss, choosing to bowl first and winning is even stronger: about 76% of the teams that won the toss won the match, and 4/5 (80%) of the teams that decided to bowl first won the match.

Below we combine the toss outcome, decision and match result into one of four buckets: [Toss Won-Bat First-Won], [Toss Won-Bat First-Lost], [Toss Won-Bowl First-Won] and [Toss Won-Bowl First-Lost].
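The bucketing itself is straightforward to reproduce. The match records below are hypothetical placeholders; the real analysis would use the actual tournament scorecards:

```python
from collections import Counter

# Hypothetical records: (toss winner's decision, did the toss winner win?).
# The real analysis would use the actual 2021 World Cup scorecards.
matches = [
    ("bowl", True), ("bowl", True), ("bowl", True), ("bowl", False),
    ("bat", True), ("bat", False), ("bowl", True), ("bat", False),
]

def bucket(decision, won):
    """Map one match record to its [Toss Won-...] bucket name."""
    first = "Bat" if decision == "bat" else "Bowl"
    result = "Won" if won else "Lost"
    return f"Toss Won-{first} First-{result}"

counts = Counter(bucket(d, w) for d, w in matches)
for name, n in counts.most_common():
    print(f"{name}: {100 * n / len(matches):.0f}%")
```

With these placeholder records, [Toss Won-Bowl First-Won] comes out as the largest bucket (50%), the pattern the real data shows even more strongly.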


Figure 2: Toss Outcome, Decision and Match Outcome

Looking again at all the matches, in Figure 2 we see that >50% of them fall in the [Toss Won-Bowl First-Won] bucket.

Figure 3: Toss Outcome, Decision and Match Outcome – No Associates

In Figure 3 we remove any matches that involve Associates. We find >75% matches fall in the [Toss Won-Bowl First-Won] bucket.

This tells me that in this T20 World Cup the toss decided, to a great extent, what the result would be. Most teams, upon winning the toss, decided to bowl first (>90% in non-Associate matches and about 70% in all matches) and ended up winning.

Maybe it was the grounds being in the same region that resulted in the above? Would we have seen a different result, one less dependent on the toss, if the WC had been played in India?

The Inelastic Health

At the start of this Parliament, resource spending on healthcare was £133bn. Today’s Spending Review confirms that by the end of this Parliament, it will increase by £44bn to over £177bn.
And the extra revenue we’re now forecast to raise from the Health and Social Care Levy is going direct to the NHS and social care as promised.
The health capital budget will be the largest since 2010.
Record investment in health R&D, including better new-born screening…
…as campaigned for by the Member for the Cities of London and Westminster.
40 new hospitals.
70 hospital upgrades.
More operating theatres to tackle the backlog.
And 100 community diagnostic centres.
All staffed by a bigger, better-trained workforce, with 50,000 more nurses and 50 million more primary care appointments.

– Budget Speech (Oct 2021) by Rishi Sunak [https://www.gov.uk/government/speeches/autumn-budget-and-spending-review-2021-speech]

Healthcare is a tough topic to deal with. It impacts everyone, directly or indirectly. Therefore, I wanted to analyse the part of the budget speech that dealt with healthcare and NHS (see quote above).

We can break down the above quote into promises that:

  1. Increase Supply:
    1. Building up infrastructure (new hospitals, upgrades etc.)
    2. Creating a skilled labour pool (more doctors, nurses etc.)
  2. Reduce Demand:
    1. Offloading (outreach to promote healthy living options etc.)

The overall promise is about £44 billion over the next 3 years (approx.). So let us see how this money could be used, and how realistic the plan is.

Building Infrastructure and Creating a Skilled Labour Pool

Hospitals take a long time to build; upgrades can happen faster. But all of that will need more staff to operate: more specialised nurses, doctors and technicians. It also takes time to train and onboard healthcare professionals. Therefore the supply of healthcare cannot be expanded quickly (it is 'inelastic', in economic speak).

As per Gov.uk [REF] the number of doctors increased by 6,600 and nurses by 10,900 in the year to Dec. 2020. The intake target for Medicine during the 2021-22 academic year is 8,313 [REF]. If that sounds impressive, remember the UK has one of the lowest numbers of doctors per capita [REF] in the OECD group of countries. In 2020-21, 1,358 doctors (GPs and hospital doctors) retired [REF]. Taking into account drop-outs (from the data, about 10-11%), retirement and people leaving the profession, we should see about the same net increase per year (6,500). This gives us a supply of about 19,500 doctors over the next 3 years.

So if we were to create 40 more hospitals and upgrade 70 others and improve access to healthcare then we need:

  1. 80-100 doctors per new hospital (say)
  2. 10-20 doctors per upgraded hospital (say)
  3. 3 GPs to replace 1 current GP [REF]

Let's take the lower numbers over the next 3 years:

80 doctors per new hospital = 40*80 = 3,200 doctors for those new hospitals

10 doctors per upgraded hospital = 70*10 = 700 doctors for the upgraded hospitals

Replacement rate of 20% of current GP (35,000 approx.) to account for retirement and increasing demand – far lower than the 3 for 1 = 7,000 GPs per year = 21,000 GPs over 3 years

Total: about 25,000 doctors over the next 3 years.

At a 10% replacement rate: about 3,500 GPs per year = 11,000 GPs over 3 years, giving a total requirement over the same period of about 15,000 doctors.

At a 15% replacement rate: about 16,000 GPs over 3 years, giving a total requirement over the same period of about 20,000 doctors.

In the worst case we are looking at a shortfall of 500-5,000 doctors; in the best case, a surplus of about 4,000. Given the current issues with waiting times and delays across the board, the surplus scenario looks unlikely.
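The back-of-envelope calculation above can be sketched in Python. All inputs are the rough assumptions from the text (not official statistics), and the text's figures are rounded, so the outputs here differ slightly:

```python
# Rough supply/demand model for doctors over the next 3 years.
# All figures are the approximate assumptions stated in the text above.

YEARS = 3
supply = 6_500 * YEARS          # assumed net increase of ~6,500 doctors/year

NEW_HOSPITALS, DOCS_PER_NEW = 40, 80          # 80 doctors per new hospital
UPGRADES, DOCS_PER_UPGRADE = 70, 10           # 10 doctors per upgraded hospital
CURRENT_GPS = 35_000                          # approximate current GP count

def demand(gp_replacement_rate: float) -> int:
    """Total doctors needed over 3 years for a given GP replacement rate."""
    hospitals = NEW_HOSPITALS * DOCS_PER_NEW + UPGRADES * DOCS_PER_UPGRADE
    gps = round(gp_replacement_rate * CURRENT_GPS) * YEARS
    return hospitals + gps

for rate in (0.10, 0.15, 0.20):
    gap = demand(rate) - supply
    label = "shortfall" if gap > 0 else "surplus"
    print(f"{rate:.0%} replacement: demand {demand(rate):,}, {label} of {abs(gap):,}")
```

This reproduces the shape of the argument: a surplus at 10% replacement, roughly break-even at 15%, and a clear shortfall at 20%.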

How to fill the gap?

Assuming a small gap of, say, 500 doctors, we would need to fill it from outside the UK. That means a small but significant part of the £44 billion might go towards visa infrastructure and enforcement. There will be earnings from this as well, because there are usually visa fees involved, which will offset some of the spending. Currently about 15% of NHS staff are not British citizens [REF].

This will have a second-order impact in the country of origin, though. It will lead to brain drain, as typically the best doctors and nurses will qualify to work in the UK, resulting in a higher required replacement rate in the country of origin. Since the biggest source of such talent is India, it will be interesting to study the effect this has on the quality of healthcare there, at a time when India is already dealing with a shortage of doctors [REF].

Other Solutions

It is always good to offer ideas alongside investigating issues, so here are a few:

  • Create 1,000 medical seats per year in UK universities, providing full scholarships and the chance to work in the NHS to the students selected. This could target international students, or a mix, through a common entrance exam, and would attract the best talent from around the world.
  • Push offloading measures such as promoting a healthier lifestyle – e.g. subsidies on exercise equipment, financial incentives for weight loss and 'cashback'-style benefits for those who have not used the NHS.
  • Increase automation and connected healthcare, where sensors can be used to monitor those at risk.
  • Increase salaries of NHS staff to attract more people (it is dangerous to pay more and expect longer working hours, because that can lead to mistakes, stress and burnout).

Interesting Fact:

Currently India’s doctors-per-1,000-people ratio is about 0.7 [REF]; the WHO recommends 1.0. If all the Indian-origin doctors (i.e. doctors trained in India) were to go back to India, this would [REF] make a big impact on the shortfall.

World Mobile Token

Service coverage is a fundamental limitation for Communication Service Providers (CSPs). If people cannot get good service coverage, it becomes harder for CSPs to sell their products. Service coverage cannot be expanded or improved without careful planning, due to cost and logistics constraints.

Currently service coverage planning is centralised and driven by various internal and external factors like regulation and prediction of consumer behaviour.

This results in overprovisioning in dense areas (e.g. city centres) and under-provisioning in rural areas. The impact of this can be seen in rural areas where significant initial investments are required for site prep and connectivity.

Events like the COVID-19 pandemic can then upset these plans: due to work from home, mobile data traffic decreased by 5% [1], leaving overprovisioned sites significantly under-utilised.

Enter the World Mobile Token

What if there was a way of providing access devices for consumers and businesses to deploy and maintain, who could then be incentivised to provide local coverage? This could be as simple as deploying a set of antennas and a box that provides connectivity to the internet.

If the platform can be compressed into a box similar to the one most CSPs send out when you buy broadband/TV from them, then you just set up an antenna, connect the box to it, and not only provide connectivity to your local area but also earn compensation.

One issue with doing this is that CSPs cannot handle the massive distributed transactional load it would generate. Imagine the volume of transactions as people use your node and move away: your node has to send data to the CSP to validate transactions, and the CSP has to pay you compensation. At the end of the day, the people using the node should pay the people hosting the node. Why do we need a CSP to play middle-person (along with a bank, perhaps)? Then there are the questions of ID management, privacy and securing the network.

We do know of one technology that provides distributed, trust-less transaction support: distributed ledger technology, most commonly implemented using blockchain. World Mobile Token combines distributed ledger technology with telco functions to facilitate demand-based coverage growth. Keeping the environment in mind, and given its focus on rural connectivity, especially in developing countries, the solution is based on a low-power architecture powered by solar and battery.

The World Mobile Token (WMT) is the ‘coin of the realm’ for the network and will be used to buy and consume services on the network. So when a user does a balance top-up they will be buying WMT which they will be spending based on usage. This is promised to be a seamless and transparent operation. Part of that spend will go towards compensating node operators whether in local currency or WMT (more on this later). The supply of WMT is finite (2 billion) with a fraction of that being circulated in the beginning.

Architecture

The WMT network consists of three types of nodes:

  1. Earth Nodes
  2. Air Nodes
  3. Aether Nodes

Earth Nodes

These nodes contain the core logic of the WMT system. The major modules can be seen in Figure 1.

Figure 1: Modules of the Earth Node

Distributed Identity – this module provides a decentralised authentication layer.

Distributed Ledger – this module contains the blockchain implementation to support user account balances, transactions, rewards and secure storage of user data. To ensure privacy, the ledger encrypts and splits information into public and private parts: the transactions are public but their details are held privately. The good thing about the implementation is that, unlike the proof-of-work consensus algorithm used in Bitcoin, it uses proof of stake [2], which requires you to stake some money (skin in the game!) to be trusted, rather than wasting energy solving meaningless maths puzzles. At the time of writing you need to stake 100,000 WMT to run an Earth Node.

Telco Functions – this module supports the utility of WMT – the telco network functions. These include: routing of data, signalling, service management (create/modify/cease), service assurance, self-healing.

Internode API – the most important component as it enables communication between all nodes and sub-systems. It can be thought of as a universal bus that touches all types of components, nodes and external systems. This is also responsible for Authentication, Accounting and Authorization functions and integration of data with the blockchain for transaction processing, smart contracts and enforcing rules.

Earth Nodes can be used for BSS-type services (i.e. buying telco services, using credit to use services etc.) or telco-type services (i.e. making a call etc.). The two main types of telco services Earth Nodes offer are voice/SMS and Internet access. More sophisticated add-ons, like content delivery, are expected to follow.

From what I can make out in the whitepaper, Earth Node operators are compensated in WMT for blockchain functions. This is similar to other utility-backed coins. But I am not sure whether, for example, Earth Nodes will be paid for telco services in local currency or in WMT.

Air Nodes

Air Nodes form the access layer for the network. As the name suggests these are the Radio Access nodes. These nodes come in different hardware configurations based on capacity and location requirements.

Figure 2: Interaction between Air and Earth nodes to enable connectivity.

Figure 2 shows the critical role that the Air Nodes play. These are the units that retail and business users can deploy and run to provide connectivity, and their operators get compensated for this role. With a normal mobile network provider, all this infrastructure would be created and maintained by the provider or delegated to another organisation.

One interesting point is that Air Node operators are paid compensation in the local currency of the area of operation. This ensures all the downstream operational payments are seamless (e.g. for electricity, maintenance, connectivity).

Aether Nodes

This is where new-school telco meets the real world to provide access to the Internet and enable connectivity (e.g. voice, SMS) with users on other providers’ networks. There needs to be at least one of these in a country before service can be provided there. The operator(s) of the Aether Node need the necessary licences from the regulator in that country.

Aether Node operation requires staking 1,000,000 WMT and compensation for running it is also paid in the local currency of the operating region.

Network Architecture

Given the ‘demand’-driven nature of the operations, we expect lots of Air Nodes deployed around Earth Nodes, linking with one or more Aether Nodes. Air Nodes can also mesh with each other to provide remote connectivity (e.g. between Town C and a village near it in Figure 3).

Figure 3: Network Architecture with Air, Earth and Aether Nodes

We see two types of paths in Figure 3: the pink one shows a call between two subscribers of the WMT Network; the green path shows a call between a WMT Network subscriber and a subscriber on some other network.

Figure 3 shows the network architecture as I understood it from the whitepaper [3]. In terms of routing, the nearest Earth Node is preferred, to maintain quality of service. Otherwise you could form a cartel of Earth Nodes that send traffic to each other (to earn compensation) even if it reduces the overall customer experience. For example, in Figure 3 the Earth Node (EN) owner in Town A could form a cartel with the EN owner in Town B so that they send traffic via each other regardless of whether it is heading to Town C or outside the WMT Network. This would overload the link between the Town A and Town B ENs and reduce the quality of connectivity.

The WMT network also publishes node quality metrics, which allows different selection mechanisms for route calculation and promotes transparency. For example, we always want diverse paths through the network as long as service quality is not impacted. Why? Because here traffic processing means money, which means incentives. Distributing traffic evenly also means links don’t get congested.

Conclusions

This is a very interesting project which should grab the interest of operators in developing regions (e.g. MTN, Airtel, Vodafone). The Aether Node operation requires regulatory approvals which are not easy to get.

But there are many questions that I have not been able to get an answer to. Maybe I missed something in the whitepaper [3].

Q1: What happens with the spectrum for the Air Nodes? Do they operate in unlicensed spectrum (e.g. WiFi)?

Q2: How do we incentivise connectivity between the nodes because that is one of the most difficult problems when it comes to rural connectivity?

Figure 4: Connectivity challenge.

For example, looking at Figure 4 (based on Figure 3): if we assume that between Village B and Town B there exists another Village A, then here we are able to deploy an Air Node (for the village) using some sort of radio mesh technology. But what about Village C, which has no such transition point? In the WMT Network, how will we incentivise people to create longer-distance connectivity using mechanisms such as microwave or fibre? One option is community-led fibre projects (by Village C in this case) that then recoup their money via Air Node operation. But these kinds of longer-distance connectivity projects can be very expensive, so it will be interesting to see how they solve this issue.

Q3: Using the WMT coin instead of local currency can lead to complexities. Over time, with Central Bank Digital Currencies (CBDCs), it may be easier to use those instead of the WMT coin, as a CBDC will be easily convertible to the local currency. So how will this system evolve and work over time?

Disclaimer: this post represents my personal opinion and I am in no way associated with World Mobile Token or promoting them through this post.

Gameplaying in a Multi-Agent Environment (2): Exploration vs Exploitation

Most automated game playing involves some sort of algorithm or model that can manage and use limited resources to attain the goal of the game without human guidance. The limited resource may be lives (e.g. Super Mario), playing pieces (e.g. Chess), number of turns or time. Usually we choose one or more actions which use one or more resources (e.g. using our turn to move a piece in Chess). To train the model, the same game is usually played multiple times to build up knowledge. See this post for a more detailed description of automated game playing.

The automated game player’s (AGP) primary task is to balance exploration of the state space of the game with exploiting (i.e. taking specific actions) to ‘win’ the game.

Exploration

The exploration task deals mainly with finding information about the state space, the value associated with each state and how actions influence state transitions. This allows us to select the optimum actions in the exploitation stage. Exploration is not just about movement actions: the AGP may want to spend a turn ‘exploring’ a new action, like extracting a resource or attacking an enemy, and assessing the corresponding reward.

Exploitation

Exploitation is about using the information we have gained from exploration to choose one or more actions. Since exploitation also uses up resources, we want to balance information gathering with information use. For example, in many games there is a choice between ‘fight or flee’. The AGP could adopt a policy of always fleeing; it could explore the fight action and update its policy to a mixture of both; or it could find that exploring the fight action led to much better outcomes and decide on a pure fight policy. But in a game there are limited opportunities to choose what to do, a potentially infinite state space, and state transitions that are not deterministic. Let’s pick these apart one at a time using the Knights and Dragon game.

State Space and State Transition from the Knights and Dragon Game

State space is a set of values that the AGP can use to reason about the game environment. It can represent the game directly (e.g. a chess grid with the players’ pieces) or indirectly (e.g. the field of view in first-person games like Doom).

To explain this we can look at the Knights and Dragon game. The game is a simple 10×10 grid with green squares representing farmland. There are two knights and a dragon. The dragon is controlled by a set of simple rules: detect nearby players, move towards the closest player, then attack the closest player if in range.
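The dragon’s rule set is simple enough to sketch directly. A minimal version follows; the grid coordinates, detection range and attack range here are my own illustrative assumptions, not values from the actual game code:

```python
# Hypothetical sketch of the dragon's rules: detect, chase, attack.
def dragon_turn(dragon_pos, knights, detect_range=4, attack_range=1):
    """Return ('move', new_pos), ('attack', knight_pos) or ('idle', None)."""
    def dist(a, b):
        # Manhattan distance on the grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    # Detect nearby players
    visible = [k for k in knights if dist(dragon_pos, k) <= detect_range]
    if not visible:
        return ("idle", None)

    # Move towards / attack the closest player
    target = min(visible, key=lambda k: dist(dragon_pos, k))
    if dist(dragon_pos, target) <= attack_range:
        return ("attack", target)

    # Step one square towards the target (x first, then y)
    x, y = dragon_pos
    tx, ty = target
    if x != tx:
        x += 1 if tx > x else -1
    else:
        y += 1 if ty > y else -1
    return ("move", (x, y))
```

For example, a dragon at (0, 0) ignores a knight at (9, 9), steps towards one at (2, 0), and attacks one at (1, 0).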

The knights are controlled by the AGP. The knights know nothing about the state space to begin with. Based on the current state, they have the following actions available: rest, grow food, search for food, attack, and move (up, down, left, right).

The state space for a knight is made up of the following six values: knight’s health (max 10), knight has food (Boolean 1/0), current position of the knight (x and y), food on the current tile (max 10), dragon’s health (max 30).

Actions transition the knights from one point in this 6d space to another. For example, take a knight with state (10,1,5,5,10,30). If the knight takes a ‘search for food’ action then it moves to (10,1,5,5,8,30), because that action extracted 2 food from the current cell (the fifth value, dropping from 10 to 8).

In general the state transition for Knight 1 in current state S1 taking action ‘move’ can be defined as a mapping.

Figure 1: One-to-Many State Transitions

The picture above shows the one-to-many nature of state transitions in this game where S1 to S6 are states in the game. Taking the ‘move’ action when in state S1 can take us to states S3, S5 or S6. This is due to the dynamic nature of the cell state (representing the food resource).

Initially Knight 1 has no idea about state transition mappings. Therefore there is no option but to explore and build up the state transition map before the AGP has enough information to start exploiting.
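A simple way to build up this map is to record, for each (state, action) pair, the set of next states observed so far. This is just a sketch; the actual structure in rl1.py may differ:

```python
from collections import defaultdict

# (state, action) -> set of next states observed so far (one-to-many).
transitions = defaultdict(set)

def record(state, action, next_state):
    """Add one observed transition to the map."""
    transitions[(state, action)].add(next_state)

# Example: 'search' from the same state can land in different next states,
# because the food on the tile changes between visits.
s = (10, 1, 5, 5, 10, 30)
record(s, "search", (10, 1, 5, 5, 8, 30))
record(s, "search", (10, 1, 5, 5, 6, 30))
print(transitions[(s, "search")])
```

Each new observation simply grows the set, capturing the one-to-many nature of the transitions shown in Figure 1.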

State Value

As the state transition map is built up, we need a way to attach a value to each state so that we can start exploiting it when selecting the best action to take. In the current environment the reward is simply survival at the end of each turn: each Knight gets +1 reward if they end their turn with full health (= 10). We can use Monte Carlo or Temporal Difference (TD) methods to iteratively build up the value of each state.

Since in this case we do not have a 1-1 mapping between ‘before’ and ‘after’ states, the AGP cannot directly evaluate the utility of taking an action. If there were a 1-1 mapping, the value of taking an action would be the same as the value of the ‘after’ state. One way of finding a combined value for an action is to take the average of the values of the states [S1, …, Sn] reachable by taking it. For example, going back to Figure 1: if we have built up state values for S1, S2, …, S6 using the TD method, then when we are in state S1 we can evaluate the utility of taking the ‘Move’ action by averaging the state values of S3, S5 and S6, because so far those are the three next states the AGP has learnt for this current state and action.
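Estimating an action’s value by averaging the learned values of its observed next states can be sketched as follows (the names `V` and `transitions`, and the default for unexplored actions, are my own illustrative choices):

```python
def action_value(V, transitions, state, action, default=0.0):
    """Average value of the next states observed so far for (state, action)."""
    next_states = transitions.get((state, action), set())
    if not next_states:
        return default  # unexplored action: fall back to a default estimate
    return sum(V.get(s, 0.0) for s in next_states) / len(next_states)

# Example using Figure 1's transitions: 'Move' from S1 can reach S3, S5 or S6.
V = {"S3": 2.0, "S5": 4.0, "S6": 6.0}
transitions = {("S1", "move"): {"S3", "S5", "S6"}}
print(action_value(V, transitions, "S1", "move"))  # average of 2, 4 and 6 -> 4.0
```

As more next states are discovered for the pair, the average automatically incorporates them.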

The phrase ‘so far’ used above is important to explain. Typically the AGP has only one task: to ‘win’. The AGP should not need to know every state transition to ‘win’, because that would be quite difficult for anything but the simplest of games. Therefore, as the AGP plays and discovers more of the transitions, it will reach a point where it can ‘win’ by utilising what it has explored. Before that point, the AGP can only decide based on the transitions that have been explored. After that point, there is no real incentive to explore unless the reward function is modified.

Figure 2:

Looking at Figure 2, we may find that ‘winning’ requires the AGP to transition to State S7 – which at this point we are not aware of as we have not come across a combination of current state and action that takes us to S7. Once we discover S7 (in this example from S2 taking the ‘Move Down’ action) we will quickly build up its value using methods like TD.

Another important element for the AGP is memory. If the AGP forgets the state information between games, then it will be forever exploring and never reach a state where it can exploit, unless the games are long enough to build up the transition mapping every time. But that would be like learning the different strategies of Chess from scratch every time we start to play.

Sample of three states with corresponding state values:
(10, 1, 10, 30, 1, 1): 9.6557
(10, 0, 10, 30, 1, 1): 7.9430
(10, 1, 8, 30, 1, 1): 7.9784

TD Update Equation:
V(s) <- V(s) + a*{(R + g*V(s')) - V(s)}

V(s): Value of current state (s)
V(s'): Value of new state (s')
R: reward when transitioning to new state (s')
a: alpha - constant
g: gamma - reward discount

Above we can see a sample of three states with their corresponding state values, along with the TD update equation. All state values are initialised to 0. Reward (R) is +1 for every step that ends with full health. Over time, as we explore more of the state transitions, the state values converge. We keep following the chain of states until we reach a terminal state, defined as:

  1. Failure: Health of any Knight drops to 0
  2. Success: Knights survive 250 turns each

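The TD update above is a one-liner in code. A sketch, with illustrative alpha and gamma values:

```python
def td_update(V, s, s_next, reward, alpha=0.1, gamma=0.9):
    """Apply V(s) <- V(s) + a*{(R + g*V(s')) - V(s)} from the equation above."""
    V.setdefault(s, 0.0)       # all state values are initialised to 0
    V.setdefault(s_next, 0.0)
    V[s] += alpha * ((reward + gamma * V[s_next]) - V[s])
    return V[s]

# Reward is +1 for a step that ends with full health.
V = {}
td_update(V, "s0", "s1", reward=1.0)
print(V["s0"])  # moved a step towards the discounted return
```

Repeatedly applying this along the chains of visited states is what makes the values converge over games.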
Results

To see the exploration vs exploitation in action we can run some easy experiments with our Knights. We start without any state information. We prioritise exploration by ensuring all actions are tried from the current state.

Once the AGP finishes exploring all the actions for a state, the next time it encounters that state it will start using estimated next state value to select the action. This estimation becomes more accurate as the state values converge over games.

We can tune this by adding a probability that the highest-valued action will be disregarded and another action chosen randomly.
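This tuning is the classic epsilon-greedy scheme. A sketch, where `value_of` stands for whatever value estimate the AGP has built up (the function and parameter names are mine):

```python
import random

def choose_action(actions, value_of, epsilon=0.01, rng=random):
    """Pick the highest-valued action, but with probability epsilon
    disregard it and pick uniformly among the other actions."""
    best = max(actions, key=value_of)
    if len(actions) > 1 and rng.random() < epsilon:
        return rng.choice([a for a in actions if a != best])
    return best

# epsilon=0.0 corresponds to the 'no randomness' blue line in Figure 3;
# 0.01, 0.1 and 0.5 correspond to the other lines.
values = {"farm": 9.6, "search": 7.9, "move_up": 1.2}
print(choose_action(list(values), values.get, epsilon=0.0))  # prints farm
```

With epsilon at 0 the AGP settles into the winning routine once found; at 0.1 or 0.5 the random choices keep knocking it off that routine, as the results below show.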

Figure 3: Game lengths as we increase randomness

In Figure 3 we can see different probabilities of disregarding the highest valued action and how it impacts the game outcome. The game length in number of turns is given on the Y axis and number of games on the X axis. Whenever game length reaches 500 we assume that the AGP (controlling the Knights) won the game.

No Randomness: In the case where we never disregard the highest value action (blue line), we see that after exploring and learning the state transitions (around the 19th game) the AGP discovers a good solution and continues to exploit it. Following this discovery, all games reach the full 500 turns required to win.

1% Chance of Ignoring Highest Value Action: In this case we discard the highest value action 1% of the time (orange line). We see that the learning approaches the winning tactic, but the randomness does not allow the AGP to settle into a winning routine.

10% and 50% Chance of Ignoring Highest Value Action: As expected, in these cases (red and green lines) the learning is completely overshadowed by the randomness and the AGP does not settle into any winning routine.

Finally, the winning tactic

So what is this winning tactic that the Knights have converged on? It is to ignore all actions except grow food and search for food. This is a ‘farmer’ strategy: the Knights settle down, don’t move, and never get close enough for the dragon to detect them. It’s a perfect loop where following this strategy ensures all games are won. That is why we see the perfect success record (blue line) once this strategy is ‘discovered’. No other state value will come close to beating this.

Figure 4: Steady state win solution – Knights stay put and become a farmer – grow food and consume it

Code:

Code can be found at: https://github.com/amachwe/pgame/tree/mdp

rl1.py – contains the state transition building logic and the ‘brain’ for the AGP: https://github.com/amachwe/pgame/blob/mdp/rl1.py

player_ga.py – contains a simple fully informed version of the ‘brain’ where we can workout the exact consequence of taking an action as the transition functions are available: https://github.com/amachwe/pgame/blob/mdp/player_ga.py

I have not cleaned the code but if you are interested I will be happy to explain the basics! Just leave a comment.

Gameplaying and Reinforcement Learning in a Multi-Agent Environment (1)

Gameplaying is no child’s play! Simple games can have deep tactical and strategic choices with a big impact on the end result. Modern board games include multiple gameplay mechanics, like dice rolling for probabilistic decision making, decision making under incomplete information, resource management and so on. They have also given way to games that can be played by a single player, where either a set of rules (e.g. Legend of Drizzt) or a smartphone app (e.g. Mansions of Madness) plays the role of the Game Keeper/director.

Each game has a target/goal that helps define the win/loss conditions. This may be static (kill your opponent’s ‘king’), evolving (find something, then do something with that object) or time-based (e.g. survive 10 turns).

Game turns are usually divided into phases which group the behaviour of different classes of agents (such as human players, monsters, world effects). So each game turn might begin with a preparation phase, followed by a hero-action phase, a monster-action phase and so on. In the hero-action phase the human players can (usually) choose one or more actions from a set (e.g. move, explore/search, attack etc.) and execute them by spending some limited resource (e.g. action points, potions, items etc.). This is where tactics and strategy come into the picture. Similarly, during the monster-action phase we may use some rule-based mechanic to decide what the monster does (e.g. the monster moves one square closer to the nearest hero), or this can be decided by the Game Keeper/director. Finally, there are always constraint mechanisms that help decide the reward signal (e.g. health, turns left) as a consequence of our actions. For example: my hero can choose to fight a monster or attempt to sneak away – what should I choose? We might make a tactical choice to sneak away if our health is low, or risk an encounter if we feel we are much stronger than the monster. Games can also have a time constraint where, once the time runs out, something good/bad happens.

Game environments usually consist of some grid system which allows for exploration, plus a set of agents and game elements (e.g. resources, locations) oriented on the grid. The grid system may provide complete information (i.e. all parts of the grid are visible) or incomplete information (i.e. the grid is built up as we play and make choices).

Similar concepts also exist in video games, but they are a lot less discrete. For example, when we play a shooting game like Doom we move in a continuous way; there is no grid (or the grid has infinitesimally small squares). We still have mechanics around action choices and reward signals.

As an example for a common game: Chess –

target is to kill your opponent’s king

grid is the chess board

agents are the two players (or 1 player and 1 AI) – chess pieces are the tokens used to play the game

phase in chess consists of a phase for each player to move

action just one basic action of movement – this includes the action to ‘kill’ an opposing piece

reward win/loss, number of pieces left etc.

constraints number of pieces left, in some fixed length games – the number of turns etc., number of squares on the chess board

Game Playing Gym

So why this lengthy discussion? I want to explore concepts around automated gameplaying, and I was also interested in building a game environment. Therefore, I built a ‘gym’ environment using Python and PyGame where I can play around with different automated game-playing methods.

target is for both knights to survive 250 game turns per knight

grid is the environment consisting of mountain, water and grass squares

agents are two knights and a dragon. The dragon operates on a fixed rule-set (scan, move and attack nearest). Knights will be controlled by automated gameplaying methods.

phase – there is a hero phase for each knight followed by a monster phase for the dragon

action seven actions to choose from: move (x4 up, down, left, right), search (for food), farm (grow food), rest (heal)

reward food quantity, health quantity, amount of area explored

constraints health, food – if food goes to zero health starts to drop – all actions cost 1 food


Figure 1: Example of the Game Environment

In Figure 1 we can see the game environment. Green blocks are grasslands where food is available and farming can be done. Brown blocks are mountains and blue blocks are water. The yellow block is a grassland block exhausted by searching for food. Exhausted blocks can be regenerated by using the ‘farm’ action.

The two white blocks below the barren block are the two knights, and the white block to the right of it is the dragon. Next to each Knight we can see their food (uncapped) and health value (capped at 10).

Reward Maximisation – No Constraints

Video 1: Basic survival – gather and move

In the above video you can see the dragon taking a snooze while the knights are busy exploring the land. You might think that the knights have been ‘programmed’ to avoid mountains and navigate through them. But that is not the case: we use reward signals to decide where the knights should go.

As we know in any kind of agent-based system we have a loop like below:

  1. Agent reads state of the environment (or observes the environment)
  2. Agent reads the reward signal (zero for first loop)
  3. Agent decides and executes next action on the environment
  4. This causes the state of the environment to change and a reward signal to be generated
  5. Check state/observation for Termination condition, if not met then go to (1)

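The five steps above can be sketched as a generic loop. The `env`/`agent` interfaces and the toy stubs here are illustrative, not taken from the repository:

```python
# Generic agent-environment loop (a sketch of the five steps above).
def run_episode(env, agent, max_steps=250):
    state, reward = env.reset(), 0.0
    for step in range(max_steps):
        action = agent.decide(state, reward)    # steps 1-3: observe, read reward, act
        state, reward, done = env.step(action)  # step 4: environment changes, reward emitted
        if done:                                # step 5: termination check
            break
    return step + 1  # number of turns the episode lasted

# Toy stubs to show the loop running.
class CountdownEnv:
    """Toy environment: terminates after 3 steps."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3

class DoNothingAgent:
    def decide(self, state, reward):
        return "rest"

print(run_episode(CountdownEnv(), DoNothingAgent()))  # prints 3
```

In the Knights and Dragon game the environment is the grid plus the dragon, the agent is the AGP controlling a knight, and `max_steps` corresponds to the survival target.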
In our case the knights are getting the full state of the environment as opposed to observing/sampling the state indirectly. The environment state consists of their health, their food and their location (which includes cell type and the food available if a grassland type of cell).

Since state transitions are not stochastic and the knights have full access to the state transition functions, they can evaluate each of the seven actions at each stage and select the one that maximises the reward (in which health is the most important component) in the next step. This simple rule can lead to a lot of the sophisticated behaviour you see in the above video. Note how there are absolutely no constraints on the knights: they can keep moving, searching and stockpiling food (see the food number go up!). These knights are just greedy!

We can make the state transition stochastic by randomly changing the amount of food the agent gets when searching or the direction of movement when selecting a move action.

Again, if we have the transition graph (i.e. the Markov Decision Process) available, then we can easily use Value Iteration or Policy Iteration to come up with an optimum policy for each state. I will show an example of this in the next post.
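As a taste of what that looks like, here is a minimal Value Iteration sketch on a toy MDP. The transition table is purely illustrative (a two-state world where farming pays a reward), not the actual game:

```python
# Toy MDP: for each (state, action), a list of (probability, next_state, reward).
P = {
    ("s0", "farm"): [(1.0, "s0", 1.0)],                     # farming pays +1
    ("s0", "move"): [(0.5, "s0", 0.0), (0.5, "s1", 0.0)],   # stochastic move
    ("s1", "move"): [(1.0, "s0", 0.0)],
}
STATES = {"s0", "s1"}

def value_iteration(P, states, gamma=0.9, tol=1e-6):
    """Iterate the Bellman optimality update until values stop changing."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            actions = [a for (st, a) in P if st == s]
            best = max(
                sum(p * (r + gamma * V[ns]) for p, ns, r in P[(s, a)])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(P, STATES)
print(V)  # V(s0) converges to 1/(1-0.9) = 10; V(s1) to 0.9 * V(s0) = 9
```

The optimal policy falls out directly: in each state, pick the action achieving the max, which here is ‘farm’ in s0 – a toy version of the farmer strategy from the previous post.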

Reward Maximisation with Constraints

Let us make life slightly difficult for our greedy knights (well at least for one of them). In the video below we just change the starting position of the dragon. Now the dragon is going to continuously harass one of the knights.

Remember, we have not changed anything else in the program! We just moved the dragon closer to one of the knights so that the knight enters the dragon’s area of awareness. The simple rule-based AI of the dragon will ensure it tracks the knight.

Survival under constraints

Now our knight has to be slightly cleverer than before! It can no longer just gather food: it now has to gather food, lose food to heal (if the dragon has attacked successfully) and move. It may also choose to regenerate an area if it can afford to. Remember again that the reward signal has health as its most important component. Using reward maximisation, the knight automatically starts performing these actions in sequence. We see that it keeps moving and surviving (keeping health > 0) while it collects food to heal. This places a big constraint on its food-gathering capability and doesn't allow it to stockpile any food. The other knight, not bothered by a dragon, is able to continue its greedy behaviour. We can tune the aggressiveness of the dragon by tuning its attack probability.
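The dragon's aggressiveness reduces to a single probability: each turn it attacks the tracked knight with some probability p. A minimal sketch, with a hypothetical function name and damage value:

```python
import random


def dragon_turn(knight_health, attack_probability=0.9, damage=2):
    """One dragon turn: attack succeeds with the given probability."""
    if random.random() < attack_probability:
        return knight_health - damage   # successful attack
    return knight_health                # dragon misses this turn
```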

We can see in Figure 2 that, for a high attack probability (0.9), the knight under attack (knight 2) is overwhelmed and killed (compare with knight 1, which is steadily stockpiling food). This means that knight 2 is not able to gather, heal and move fast enough under the relentless attack of the dragon. We might improve its survival chances by giving it an 'attack' action.

Figure 2: Knight food levels under heavy attack

We can see in Figure 3 that, for a lower attack probability (0.6), the knight under attack (knight 1) is not overwhelmed. Whilst it still falls behind the other knight, it is far from any real danger of death. In this case the dragon is a bit lazy about attacking!

Figure 3: Knight food levels with lower attack probability

TicTacToe: Playing Against the Machine

TicTacToe is a simple game, but it does have some tactical depth. With limited moves and turns, it is easy to create exhaustive-search-based solutions to implement the logic for a Player vs Computer game.

It is also the perfect game for anyone who wants to get started with AI programming (e.g. Reinforcement Learning). With this in mind, I created a test-bed with two 'brains': one that uses Induction (taking advantage of the finite state space) and one that uses Pattern Matching (taking advantage of the fixed-grid nature of the game).

The Python code can be found on GitHub:

https://github.com/amachwe/tictactoe

Here is the readme that explains how to use it:

https://github.com/amachwe/tictactoe/blob/main/README.md

Currently neither the Induction brain nor the Pattern Matching brain really learns anything. This is because we can actively search the full state space to plan each move (given the relatively small size of the state space).
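To see why full state-space search is feasible here, consider a generic minimax sketch for TicTacToe. This is not the repository's Induction brain, just an illustration: even a naive recursive search over every reachable position is cheap at this scale.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals


def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None


def minimax(board, player):
    """Return (score, move) for `player`: 'X' maximises, 'O' minimises."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                      # draw
    best = None
    for m in moves:
        board[m] = player                   # try the move in place
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "                      # undo it
        if best is None:
            best = (score, m)
        elif player == "X" and score > best[0]:
            best = (score, m)
        elif player == "O" and score < best[0]:
            best = (score, m)
    return best
```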

Patterns in the Pattern Matching brain are provided as 'knowledge', but these could just as well be learnt using Reinforcement Learning methods.
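As a sketch of the pattern-matching idea: hand-provided patterns are checked against the fixed 3x3 grid in priority order (win, block, centre, corner). The patterns and priorities here are illustrative, not the repository's own knowledge base.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals


def find_line_move(board, mark):
    """Return the cell completing a line that already holds two `mark`s."""
    for line in LINES:
        values = [board[i] for i in line]
        if values.count(mark) == 2 and values.count(" ") == 1:
            return line[values.index(" ")]
    return None


def pattern_move(board, me, opponent):
    # Priority: winning move > blocking move > centre > corner > anything.
    for move in (find_line_move(board, me),         # complete my own line
                 find_line_move(board, opponent),   # block opponent's line
                 4 if board[4] == " " else None):   # take the centre
        if move is not None:
            return move
    for corner in (0, 2, 6, 8):
        if board[corner] == " ":
            return corner
    return next(i for i, c in enumerate(board) if c == " ")
```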

Have fun playing the game or using it as a test-bed for your own AI Tic Tac Toe playing brain!

Feel free to comment to ask questions/provide suggestions!