GPT, the model powering the disruptive ChatGPT, is changing the world at a remarkable pace. The stunning quality of the bot signals that AI is going to be the next revolution, the new Internet, and it is arriving far sooner than most predicted. Some experts are alarmed enough to call for a pause in development before things get out of control.

How Does GPT Work? 

GPT learns from enormous amounts of human-produced content, mostly text from the Internet. It establishes relationships between words across the many contexts found in articles, codebases, opinion pieces, news, and more. These learned relationships (the trained model) let GPT guess the next suitable word, taking the whole conversation so far as input: all of the user's prompts plus everything it has generated up to that point. The guess, however, is not deterministic; it includes a random element. This is why GPT can generate a different answer every time you ask it a question, even the exact same one. The answer stays coherent despite the randomness because, for a given input, some words are far more probable than others. For example, if GPT has just generated the word ‘He’, the probability of choosing ‘is’ or ‘was’ next is much higher than that of choosing ‘are’ or ‘were’, so the result is almost always consistent with grammar and context. You can learn more about this process by reading this simplified guide.
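The next-word step described above can be sketched as weighted random sampling. The vocabulary and probabilities below are invented purely for illustration; a real model computes such a distribution from billions of trained parameters rather than a hand-written table:

```python
import random

# Toy next-word distribution, pretending the model has just produced "He".
# These probabilities are made up for illustration only.
next_word_probs = {
    "is": 0.45,
    "was": 0.40,
    "said": 0.13,
    "are": 0.01,
    "were": 0.01,
}

def sample_next_word(probs):
    """Pick the next word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Repeated sampling gives varied output, yet grammatical continuations
# ("is", "was") dominate, which keeps the text coherent.
counts = {w: 0 for w in next_word_probs}
for _ in range(10_000):
    counts[sample_next_word(next_word_probs)] += 1
```

Each call can return a different word, but ungrammatical choices like ‘are’ are so unlikely that they almost never appear, mirroring how GPT stays fluent despite its randomness.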

Based on that, we can say that GPT does not generate new knowledge and does not (yet) have the ability to extrapolate to cases or ideas it was never trained on. Instead, it can only compose answers that resemble some human contribution somewhere on the Internet, or a combination of such contributions. In other words, it interpolates human knowledge and serves it to the consumer.

Yet the majority of our needs and applications do not require ‘new knowledge’. Perhaps GPT can never be the next Einstein to classical physics, or the Steve Jobs to basic mobile phones. But how many of us humans do truly innovative work? What we call innovation is usually just reusing previous knowledge, methods, or ideas in new contexts. Although we tend to overestimate our creativity, we can rarely come up with things we have never seen before. Instead, we mostly reuse or recompose ideas we have encountered and adapt them to the problems we face.

GPT and Medical Doctors

Doctors, like most workers, are not really innovating when they follow a medical procedure: they apply existing knowledge to prescribe drugs or devise a treatment plan. Accordingly, there is no reason to believe this occupation is immune to disruptive new technologies like GPT.

A recent study demonstrated that GPT performed at or near the passing threshold on all three exams of the USMLE, one of the most difficult medical examinations, without any specialized training or reinforcement. These results, the study points out, suggest that large language models may have the potential to assist with medical education and, potentially, clinical decision-making. A statement by USMLE officials clarifies that GPT may not be able to answer all types of questions at the moment, but maintains that they would not be surprised to see AI models improve their performance dramatically as the technology evolves. Another study demonstrated that GPT could be used to support decisions about prescribing antibiotics to patients. It is clear, then, that GPT is not far from changing the medical field.

Family practice doctors provide care to people of all ages. These generalists treat chronic conditions, evaluate symptoms, offer preventive care, and let people know when they need to see a specialist. The work is mainly about listening to the patient, running routine tests, giving tailored advice based on well-known medical literature, and referring cases to specialized doctors when needed. That sounds like a perfect candidate for GPT to replace.

In reality, GPT is more likely to replace some responsibilities of doctors than whole specialties: determining when to see a doctor, making an initial diagnosis, recommending early measures, answering post-treatment questions, and more. The aggregate demand for human doctors may therefore drop, since each doctor could treat more patients, just as developers can now write more code in a day. At least that kind of shift seems imminent. So, in the near future, GPT may not end the need for doctors, but it can significantly transform medical practice in our day-to-day lives.



This blog post is part of the AI4D project. In the Middle East, the International Development Research Centre (IDRC) is funding several non-governmental initiatives to tackle challenges that governments give little priority, such as fostering inclusive, human-rights-based AI and mitigating the possible negative effects of AI, such as job losses. One of these initiatives, AI4D, aims to enhance the infrastructure capacity and skills of the MENA healthcare sector.

In the absence of AI regulation, such initiatives may play a crucial role in steering AI development toward being safe, reliable, and fair.