The Mechanical Turk is a recurring warning to technologists, and perhaps wider society, about the need for a healthy degree of scepticism when it comes to demos of what appear to be game-changing advances in technology.
The original Mechanical Turk, or Automaton Chess Player, was a fraudulent chess-playing machine demonstrated in the late 18th century. The automaton took the form of a box, on top of which was a life-sized model of a human head and torso with a black beard, dressed in Ottoman robes and a turban. The automaton was able to “play” games of chess with considerable skill, facing notable figures of the time including Napoleon Bonaparte and Benjamin Franklin!
Of course, the Turk was in fact a mechanical illusion: the box allowed a human chess master hiding inside to operate the machine.
In December 2022 OpenAI released ChatGPT, a demonstration artificial intelligence chatbot to highlight advances made in dialogue-based user interfaces. Behind the chatbot is a very large language model fine-tuned with both supervised and reinforcement learning techniques.
ChatGPT has been lauded as making AI mainstream, and indeed on first impressions it seems to produce very credible results: answering specific fact-based questions, creating detailed and consistent articles, generating scripts in the style of particular authors and generating working code to carry out defined tasks in many programming languages.
But…
ChatGPT is a very accessible interface to OpenAI’s GPT 3.5 language model, a model of 175 billion parameters trained on a wide corpus of internet content, but of course that is both its strength and its weakness. To be clear, I am in no way critical of OpenAI’s work with GPT, but perhaps the nature of a chatbot interface and the way in which the chatbot answers questions with confidence can be problematic…
Let me demonstrate…
I asked ChatGPT to write a critical review of what3words, well known in the UK as a geocoding system using shortcodes of three discrete words.
Let’s look at the response, which at first seems quite impressive…
what3words is a geocoding system that divides the world into a grid of 3m x 3m squares and assigns each square a unique three-word address. While the idea of using three words to identify a specific location is innovative and has potential, there are several limitations to the system.
This is an accurate summary of how what3words works, very much in the style of a standard essay response. As I asked for a critical review, ChatGPT has found some limitations…
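To make the mechanism that summary describes concrete, here is a toy sketch of grid-based word addressing. This is emphatically not the real what3words algorithm, which is proprietary; the tiny word list, the cell-size approximation and the function names are all invented for illustration.

```python
# Toy illustration of three-word grid addressing (NOT the real,
# proprietary what3words algorithm). With only eight words this toy
# can distinguish just 8**3 = 512 cells before repeating; a real
# system needs a vocabulary of tens of thousands of words to cover
# the world's ~57 trillion 3m x 3m squares uniquely.

WORDS = ["apple", "brick", "cloud", "daisy", "ember", "flint", "grove", "honey"]

CELL_DEG = 3 / 111_320  # roughly 3 metres expressed in degrees of latitude

def cell_index(lat: float, lon: float) -> int:
    """Map a coordinate to a single integer grid-cell index."""
    row = int((lat + 90) / CELL_DEG)
    col = int((lon + 180) / CELL_DEG)
    cols_per_row = int(360 / CELL_DEG)
    return row * cols_per_row + col

def three_words(lat: float, lon: float) -> str:
    """Encode the cell index in base len(WORDS) as a word triple."""
    n = cell_index(lat, lon)
    words = []
    for _ in range(3):
        n, remainder = divmod(n, len(WORDS))
        words.append(WORDS[remainder])
    return ".".join(words)
```

The point of the sketch is simply that a three-word address is a deterministic re-encoding of a grid cell, which is why every location on the grid gets one.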
First, the use of three words as an address can be confusing and difficult to remember, especially for people who are not familiar with the system. In contrast, traditional street addresses and geographic coordinates are typically easier to remember and communicate.
This is really interesting. Now, I am in no way a massive fan of what3words, but this statement would seem to be the direct opposite of the basic proposition of what3words… traditional geographic coordinates are hard to remember, what3words addresses are not? There may be a more nuanced argument here, that traditional addresses are social constructs that are widely used and understood where they are established, but this is not exposed by GPT.
Second, the what3words system is not universally adopted, so not all locations have a three-word address. This can make it difficult to use the system to communicate the location of a particular place, especially if the recipient is not familiar with the system.
Here is an example of so-called hallucination: while what3words is of course not universally adopted, it is completely wrong to suggest that not all locations have a three-word address, as the system was designed to provide a global reference system.
Hallucination is perhaps not the best term here: the anthropomorphic reference to a false perception is not quite right because, and this is a key point, there is no understanding or intelligence displayed by GPT. Instead, the terms with the highest probability of use within the context of the previous or similar content are selected.
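That selection process can be sketched in a few lines. The toy distribution below is entirely invented; a real LLM derives its next-token probabilities from billions of parameters, but the essential point stands: the model picks a plausible continuation, with no mechanism for checking whether it is true.

```python
import random

# Invented toy distribution over next tokens given a two-word context.
# A real model computes these probabilities; it never checks facts.
NEXT = {
    ("not", "all"): {"locations": 0.6, "users": 0.3, "systems": 0.1},
}

def pick_next(context, temperature=0.0):
    """Select the next token: greedy (most probable) at temperature 0,
    otherwise sample in proportion to probability."""
    dist = NEXT[context]
    if temperature == 0.0:
        return max(dist, key=dist.get)  # highest probability, true or not
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]
```

Whether the chosen word yields a factually correct sentence is simply not part of the calculation, which is why confident-sounding errors emerge.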
Third, the what3words system is proprietary and owned by a private company, which means that it is not an open, standardized system that can be freely used by anyone. This lack of openness and standardization can limit the potential uses of the system and create barriers to its adoption.
A fair and well-argued point on openness, although of course there are many highly successful proprietary services online.
In conclusion, while the idea of using three words to identify a specific location is interesting, the limitations of the what3words system make it difficult to use and not as effective as other methods of identifying locations. As a result, it may not be the best solution for many applications.
A good concluding paragraph of the kind you might expect from an undergraduate essay on the topic. Indeed, in many answers provided by the current language model “AIs”, especially concerning specialised subjects, there is the unmistakable influence of the academic literature which forms an important, consistent part of the LLMs themselves!
So maybe an important question when assessing the usefulness of LLM-based AIs may be… “Where did you go to college?”
One final point: while we are all fixated on Generative AI, and especially LLMs, we should not forget that the majority of applications of Machine Learning (Machine Learning, not AI – more on this another time) in general, and especially applications of ML in Geospatial Technology, are not generative and are instead focused on the classification and extraction of features from existing datasets.
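As a minimal sketch of that non-generative kind of ML, here is a nearest-centroid classifier labelling pixels by spectral band values, the sort of land-cover classification common in geospatial work. The band values and class centroids are invented for illustration, not taken from any real sensor.

```python
# Minimal nearest-centroid pixel classifier: classification, not
# generation. Centroid reflectance values below are invented.

CENTROIDS = {
    "water":      (0.05, 0.02),  # (red, near-infrared) reflectance
    "vegetation": (0.08, 0.45),
    "urban":      (0.30, 0.28),
}

def classify(pixel):
    """Label a pixel with the class whose centroid is closest
    in squared Euclidean distance."""
    def dist2(centroid):
        return sum((p - c) ** 2 for p, c in zip(pixel, centroid))
    return min(CENTROIDS, key=lambda name: dist2(CENTROIDS[name]))
```

Unlike a generative model, this produces no new content at all: it only assigns each existing data point to a known category, which is exactly the shape of most geospatial ML in practice.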