
So an AI walks into a Pub..

There is a joke, or perhaps a useful analogy, that explains in very simple terms how modern AI systems work at a fundamental level, despite all their complexities and technicalities..

An AI walks into a pub and goes up to the bar. The bartender greets the newcomer and asks what they would like to drink..

“What’s everyone else drinking…”

Good, is it not…

An AI, or specifically an LLM, is a reflection of its training data and is looking for the most statistically relevant, or in simple terms “most common”, response to any question you give it.. what most people are drinking, in the bar analogy..
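The bartender’s strategy can be sketched in a few lines of Python; this is a toy illustration only, and the drinks and their counts are invented:

```python
from collections import Counter

# Toy "training data": the drink orders our bartender has seen (invented figures).
orders = ["guinness"] * 7 + ["lager"] * 2 + ["espresso martini"]

def bartender_answer(orders):
    # Like an LLM favouring its highest-probability response, the bartender
    # simply returns the statistically most common order.
    return Counter(orders).most_common(1)[0][0]

print(bartender_answer(orders))  # guinness
```

Change the mix of orders, and the answer changes with it.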

“What’s everyone else drinking…”

The reason I bring this up is a trip I made to Dublin last week, and a visit to one or two bars in that fine city.

What is everyone else drinking… well, in the Temple Bar area of Dublin it is going to be a pint of Guinness.. and perhaps in most of the city that will be the case.

But how representative is this.. the bartender in the joke/analogy is of course the training data used to train our model, so while Guinness has statistical significance in a Temple Bar pub, is that the case for Dublin, or indeed the rest of Ireland?

If we expanded our sample of bartenders to include all of Ireland, Guinness might have less significance; on the other hand, if we focused on some of Dublin’s more upmarket bars we might find a lot of espresso martinis consumed..

Bars in Dallas, Sydney, or Bangkok are all likely to produce different responses from our imaginary bartender..

The moral of this is clearly that models are very sensitive to their training data and to how representative that data is of the subject of interest. In almost all cases it may not be as representative as we might like, and an important question for the industry is what to do in those circumstances.

How we alter the response (weighting) of a system based on a foundation model to take into account the limitations of its data is the real “Question for our Times”, and indeed it’s also important to remember that sometimes the data is actually an accurate reflection of reality, even if we might not like it..

In AI data is the code

In AI, data is the code, so we really need to understand all aspects of it: not just how representative it is, but its antecedents, who created it, and for what original purpose.

More thinking along these lines to follow…


Why Concorde is the only airliner at Heathrow on Google Maps…

It was 20 years ago today…


BA002 on finals returning from New York JFK, the last commercial operation by Concorde, on 24th October 2003, marking the end of the era of supersonic air travel. Filled with celebrities, G-BOAF touched down at 4:05pm, and I was lucky enough to be there amongst the many thousands of Concorde fans to see it and the two preceding Concorde landings..

Today G-BOAF, also the last Concorde ever to fly, is on display at the excellent Aerospace Bristol museum, while at Heathrow her sister aircraft G-BOAB resides outside the British Airways maintenance facility, seemingly unloved and forgotten.

However, all is not lost: if you look at Heathrow Airport on Google Maps you will find that Concorde is the only airliner visible!

Heathrow, the airport without airliners?

This is not the result of a satellite or aerial photograph captured during a very quiet period, or during the COVID lockdown when of course there were many aircraft parked at the airport; instead it is the result of image processing and the use of AI techniques to remove moving objects..

As Concorde G-BOAB has not moved in many years, it is the only airliner at Heathrow..

A last laugh for Alpha Bravo!


Beware Mechanical Turks..

The Mechanical Turk is a recurring warning to technologists, and perhaps wider society, of the need for a healthy degree of scepticism when it comes to demos of what appear to be game-changing advances in technology.

The original Mechanical Turk, or Automaton Chess Player, was a fraudulent chess-playing machine demonstrated in the late 18th century. The automaton took the form of a box, on top of which was a life-sized model of a human head and torso with a black beard, dressed in Ottoman robes and a turban. The automaton was able to “play” games of chess with considerable skill, facing notable figures of the time including Napoleon Bonaparte and Benjamin Franklin!

Of course the Turk was in fact a mechanical illusion: the box allowed a human chess master hiding inside to operate the machine.

In December 2022 OpenAI released ChatGPT, a demonstration artificial intelligence chatbot to highlight advances made in dialogue-based user interfaces. Behind the chatbot is a very large language model fine-tuned with both supervised and reinforcement learning techniques.

ChatGPT has been lauded as making AI mainstream, and indeed on first impressions it seems to produce very credible results: answering specific fact-based questions, creating detailed and consistent articles, generating scripts in the style of particular authors, and generating working code to carry out defined tasks in many programming languages.


ChatGPT is a very accessible interface to OpenAI’s GPT-3.5 language model, with some 175 billion parameters trained on a wide corpus of internet content, but of course that is both its strength and its weakness. To be clear, I am in no way critical of OpenAI’s work with GPT, but perhaps the nature of a chatbot interface and the way in which the chatbot answers questions with confidence can be problematic…

Let me demonstrate..

I asked ChatGPT to write a critical review of what3words, well known in the UK as a geocoding system using short addresses of three discrete words.

ChatGPT dialogue

Let’s look at the response, which at first seems quite impressive…

what3words is a geocoding system that divides the world into a grid of 3m x 3m squares and assigns each square a unique three-word address. While the idea of using three words to identify a specific location is innovative and has potential, there are several limitations to the system.

This is an accurate summary of how what3words works, very much in the style of a standard essay response. As I have asked for a critical review, ChatGPT has found some limitations…

First, the use of three words as an address can be confusing and difficult to remember, especially for people who are not familiar with the system. In contrast, traditional street addresses and geographic coordinates are typically easier to remember and communicate.

This is really interesting. Now, I am in no way a massive fan of what3words, but this statement would seem to be the direct opposite of the basic proposition of what3words… traditional geographic coordinates are hard to remember, what3words addresses are not? There may be a more nuanced argument here, that traditional addresses are social constructs that are understood where they are widely used, but this is not exposed by GPT.

Second, the what3words system is not universally adopted, so not all locations have a three-word address. This can make it difficult to use the system to communicate the location of a particular place, especially if the recipient is not familiar with the system.

Here is an example of so-called hallucination: while what3words is of course not universally adopted, it is completely wrong to suggest that not all locations have a three-word address, as the system was designed to provide a global reference system.
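Some back-of-envelope arithmetic shows why a global reference system of three-word addresses is feasible; the 40,000-word vocabulary below is an assumption for illustration, not a figure taken from what3words:

```python
# Rough check that three words can label every 3 m x 3 m square on Earth.
earth_surface_m2 = 510_000_000 * 1_000_000  # ~510 million km^2, in m^2
squares = earth_surface_m2 // (3 * 3)       # ~57 trillion grid squares

vocab = 40_000                              # assumed word-list size
combinations = vocab ** 3                   # ~64 trillion ordered triples

print(f"squares: {squares:.2e}, combinations: {combinations:.2e}")
assert combinations > squares               # enough triples for every square
```

So a modest word list, taken three at a time, comfortably covers the whole planet, sea included.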

Hallucination is perhaps not the best term here; the anthropomorphic reference to a false perception is not quite right because, and this is a key point, there is no understanding or intelligence displayed by GPT. Instead, the terms with the highest probability of use within the context of the preceding or similar content are selected.
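This selection-by-probability can be illustrated with a toy next-word model; a deliberately trivial bigram sketch on an invented corpus, nothing like a real LLM:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real model trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Select the highest-probability next term given the previous one --
    # no understanding, just frequency.
    return following[prev].most_common(1)[0][0]

print(next_word("the"))  # cat
```

The model will cheerfully emit a fluent continuation whether or not it corresponds to anything true; that is the “hallucination” in miniature.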

Third, the what3words system is proprietary and owned by a private company, which means that it is not an open, standardized system that can be freely used by anyone. This lack of openness and standardization can limit the potential uses of the system and create barriers to its adoption.

A good, well-argued point re openness, although of course there are many highly successful proprietary services online.

In conclusion, while the idea of using three words to identify a specific location is interesting, the limitations of the what3words system make it difficult to use and not as effective as other methods of identifying locations. As a result, it may not be the best solution for many applications.

A good concluding paragraph of the kind you might expect from an undergraduate essay on the topic. Indeed, in many answers provided by the current language model “AIs”, especially concerning specialised subjects, there is the unmistakable influence of the academic literature which forms an important constituent part of the LLMs themselves!

So maybe an important question in assessing the usefulness of LLM-based AIs may be.. “Where did you go to college?”

“Where did you go to college ?”

One final point: while we are all fixated on generative AI and especially LLMs, we should not forget that the majority of applications of Machine Learning (Machine Learning, not AI – more on this another time), and especially applications of ML in geospatial technology, are not generative and are instead focused on the classification and extraction of features from existing datasets.

The Google Open Buildings dataset, for example, is derived from a deep learning model trained to determine the footprints of buildings from high-resolution satellite imagery.