My first instinct with ChatGPT was to see what all the fuss was about. So I signed up to the new artificial intelligence platform that has become a global sensation in recent days.
It was a simple, easy-to-use process with a nice user interface on the OpenAI website and a free-text facility not dissimilar to a search engine like Google.
And then, it’s over to you. The world is your proverbial oyster. What sort of questions would you like to ask a chatbot that, like most things these days, seems to have split opinion right down the middle?
In my own case – and I’m sure I’m not alone – I thought it would be quite nice to know whether I will be made redundant in the near future.
Phew, is all I can say.
But to more serious matters:
Now, I don’t think that either of the two questions I asked yielded results that I had not expected, or that are in any way revelatory, but the answers are well structured, persuasive and clear.
You can see the beginnings of something that can make everyday writing that follows certain formulae – for example legal text drafting – somewhat less of a chore. In process-driven environments, the potential for ChatGPT as an assistive and augmentative technology is clear.
However, humans are emotional creatures and they seek meaning and connection through the experience of and interaction with others. It is questionable whether a robot is ever likely to provide that kind of ‘service’, although I wouldn’t rule it out completely.
But I digress, and it’s time to let the actual experts have their say. So thanks go to the following contributors from our tech community and industries in Scotland, whose varied views on ChatGPT make for an illuminating read on what is fast becoming the biggest tech story of the year.
Brian Hills, CEO of The Data Lab, said: “It is important that we, in the tech industry, learn from our experience of hype cycles. As Sam Altman, CEO of OpenAI, said himself: ‘fun, creative inspiration; great! reliance for factual queries; not such a good idea.’ This is because the output we are seeing from ChatGPT has a confident, authoritative tone; however, the reality is an overconfidence based on the current training data.
“Additionally, as leading AI expert Andrew Ng commented, the next breakthrough will come when ‘…large language models that can accurately decide when to be confident and when not to will reduce their risk of misinformation and build trust’. We have a responsibility to dispel the hype and evangelise the debate on the reality, opportunity and challenges of this emerging technology.”
David Irvine, CEO of Maidsafe, said: “I feel ChatGPT is a game-changer, which is evident if you are an innovator. Yes, you can trick it, its maths can be wrong, and it can and does use stock phrases, occasionally in the wrong place. However, every tool works like this. Hold a hammer the wrong way or use the wrong end of a fork, and you get the point.
“So, knowing the limitations and working with them whilst knowing your field, it’s very powerful. Learning a new field is also very powerful, as long as you treat it like a better version of a Google search and instead ask it to drill down, expand, or ‘explain like I am five’, and in seconds you are deep in an interesting new field.
“I had a conversation with some colleagues, and we were all amazed at the possibilities. We all agreed it was better than a human research partner or pair programmer. It truly is powerful. A colleague was speaking about the J/AML/Q programming languages; I said I did not know them, but I would in two minutes, and he replied, ‘I know kung fu’.
“Hysteria, no; trepidation, yes; excitement, absolutely. This glimpse of the progress from GPT-3 to 3.5 is a real ‘eyes wide open’ moment, not just at what we can achieve today, but at how fast this technology will change all of our lives (with an upgrade to 4 pre-Xmas). Inquisitive and curious minds will see this opportunity and grab it. Smartasses will post about how they confused this early-stage AI, much like the journalists in the 90s laughing about this stupid internet and how it would never take off, or the bitcoin deniers. There will be polarisation, as with all previous innovations like this, but it’s happening and we cannot stop it.
“Gutenberg allowed us to progress fast, the phone and communications made us faster, and the internet put us into top gear. This, I feel strongly, is our warp drive in terms of innovation and progress. That is beyond exciting and, at the same time, very scary, but I love it.”
Paul Winstanley, CEO of CENSIS, said: “Recently, a close colleague and I discussed ChatGPT. He has to make a speech in January about the entrepreneurial mindset in the public sector allied to innovation, and had asked ChatGPT to write the speech.
“When he shared this with me, it scared me – it was written in a form where I could imagine my colleague delivering this speech; the content and tone were spot on to his style. This piqued my curiosity, so I have been testing ChatGPT with a range of different obscure topics to try and get a feel for its capabilities and limitations.
“What are my thoughts? Firstly, the outputs are too high-level for much of the communication I do; I like to give examples and anecdotes to try and bring the content to life and make it authentic. This gives rise to a thought – ChatGPT rapidly created an excellent framework that I could build upon. That made me reflect – this is the point of AI: to augment and enhance an individual’s performance, not to replace them.”
Jarmo Eskelinen, executive director, Data-Driven Innovation Initiative, University of Edinburgh, said: “ChatGPT is an impressive example of the rapid acceleration of the capabilities of learning AI systems. We’re witnessing just the start of this development, and such text and image generators will change the ways we produce content and may disrupt several professions.
“When using these generative tools, it is critical to understand how they operate and which tasks they excel in. Large language models like ChatGPT have been developed by feeding them massive amounts of information. ChatGPT processes new material fast and works well for searching for information in the vastness of the internet, and for tasks such as writing and debugging code.
“It is also capable of producing text which sounds convincing, on almost any topic. However, that does not mean the text is factually correct. AI can write scientific text, with quotations and references to background material. However, those articles have often been proven to be completely bogus; they are just snippets of text hallucinated by ChatGPT.”
Donald MacAskill, chief executive, Scottish Care, said: “Whether or not the existence of ChatGPT and its promise is more permanent than the winter snow is debatable, yet what is clear is that technologies of this type are going to become commonplace.
“Yet in social care, life is not just about the chronology of a heartbeat; it’s about quality of relationship and belonging to community. It’s more than the tricks of a chatbot, but contains the subtleties of unsuspecting conversation and surprising encounter. The predictabilities of pre-determined AI need to leave scope for the pain and pathos of messy contradictory relatedness. Reducing unnecessary operational duplication is a yearnful desire; replacing contradictory human presence is a nightmare.”
Nick Freer, founding director, Freer Consultancy, said: “OpenAI’s ChatGPT is definitely making waves and will only grow in its adoption, but there are also well-documented limitations. What we know is that the AI can produce well-written copy in English, or any other language, and create blocks of computer code at astonishing speed.
“On the writing front, ChatGPT and, over time, the proliferation of other chatbots like it will be a game-changer for businesses, which will be able to create content in a fraction of the time it takes today.
“On the disclaimer side of things, even OpenAI’s CEO Sam Altman is playing things down, recently tweeting: ‘It’s a mistake to be relying on it for anything important for now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.’
“The words of Greek philosopher Plato come to mind here: ‘Not truth but only the semblance of truth’.
“In addition to simulating human conversation, ChatGPT can write code, web pages and applications in programming languages like JavaScript. While it cannot yet write complex code, commentators believe it will become a more proficient coder in the years ahead.
“So, did I use ChatGPT to pull together any of this piece? My chatbot and I would rather not say.”
Declan Doyle, head of professional services at Scottish Business Resilience Centre, said: “I see it becoming an integral part of the more tedious but fairly simple tasks that we have to do, for example, getting it to write a follow-up email where we discussed x, y and z.
“I am also encouraging my team of ethical hackers to use it if they are having some writer’s block on how to start a report (report writing can be tough for the techies). For example, telling ChatGPT to write a vulnerability assessment where we identified default credentials in use. They can take the output and then build upon it themselves. It won’t provide a complete report, but it will help them get started, especially if they struggle with writing.”
We have been talking about AI for quite some time now in Scotland, especially in the public sector where there has even been an effort to put a national strategy in place.
As much as there are fine words in the document, one wonders whether any government can truly keep up with, let alone regulate, a technology that is advancing at breathtaking speed.
Even if in Scotland we pledge to abide by standards whereby AI is done in a way that is ‘trustworthy, ethical and inclusive’, the virtues are somewhat lost if people turn in their droves to platforms like those developed by OpenAI, which attracted over a million users in a single week.
Aside from regulation, there are other legitimate concerns. Already, we can see a tricky problem for the education sector and how it might respond to students using chatbots to complete coursework assignments. I personally also foresee trouble for what is rather buzzword-ily described as modern ‘content marketing’, in other words the text, words and images used to sell us stuff. Other than the images, which involve design, something ChatGPT explicitly does not do (DALL-E, also from OpenAI, does provide images), much of the internet is awash with third-rate puffery used by brands to promote their products.
Whilst human understanding of what motivates consumer behaviour will always be a factor, the outsourcing of content farming and SEO to specialist writers may be one of the first casualties of ChatGPT. I’d like to think ChatGPT may even help free our social media feeds from the equivalent of the Royal Mile’s tartan tat, but that may be a step too far.
AI-fuelled industry power-grabs aside, the augmentative effects of tools like ChatGPT are where we will, I believe, see the most noticeable and near-term impacts. Already doctors are using AI to assist in fields as diverse as cancer diagnosis and radiology. The advent of ChatGPT will only add to that trend, with a doctor in Australia using it to help diagnose a patient in the last week alone. Perhaps a bigger and scarier challenge is dealing with the effect of patients turning to the platform to diagnose themselves, without medical training or insight into what the right questions might be. Fortunately (and I did check), ChatGPT offers some insights into what symptoms might mean, but advises you to consult a doctor and stresses it does ‘not have access to personal information or the ability to conduct medical examinations’.
The impact of ChatGPT is nonetheless profound and represents a harnessing of artificial intelligence in ways we have not yet seen. It is, in ChatGPT’s own words, ‘a large language model trained by OpenAI, and my knowledge comes from the text that I have been trained on, which includes a wide range of documents and websites.’
When interrogated further the algorithm simply adds that those documents and websites include ‘books, articles, websites, and other written material from a wide range of sources’. It does not, at this stage, have the ability to browse the internet or contextualise any of the information it has been trained on, and studiously avoids giving opinions or making judgements about people or their actions.
When it comes to essay writing, I think the education sector might need to reflect on the fact that – as part of the academic process in subjects other than those that are purely logic-driven – it is necessary to interrogate sources and make arguments based on available facts and evidence. We are still some way from ChatGPT being able to do that, although that point may be reached sooner than we think.
The final thing to say is that – as I was informed by David Irvine at Maidsafe – artificial intelligence is on a pathway to evolve to mimic the human brain itself, looking for new connections without having to be ‘trained’ on data in the way it is now. In David’s words it would be ‘a general purpose learns everything tool’, and may even one day help us stop the ageing process. At that point we will probably need to drop the word ‘artificial’ altogether, and perhaps even reconsider what it is to be human at all.