
GPT-4 — a shift from ‘what it can do’ to ‘what it augurs’


  • U.S. company OpenAI recently released GPT-4, its latest AI model.
  • This large language model can understand and produce language that is creative and meaningful, and will power an advanced version of the company’s chatbot, ChatGPT.

GPT-4 and what it can do

  • GPT-4 is more conversational and creative.
  • It can accept text and image input simultaneously, and consider both while drafting a reply.
  • The model can purportedly grasp human emotion — for example, explaining why a picture is humorous.
  • GPT-4 performed much better than the average human on several exams designed for humans, e.g. the bar examination.
  • ChatGPT-generated text infiltrated school essays and college assignments almost instantly after its release; its prowess now threatens examination systems as well.
  • OpenAI has released preliminary data to show that GPT-4 can do a lot of white-collar work, especially programming and writing jobs, while leaving manufacturing or scientific jobs relatively untouched.
  • Wider use of language models will have effects on economies and public policy.

Ethical questions

  • GPT-4 is still prone to many of the flaws its predecessors had.
  • Its output may not always be factually correct — a trait OpenAI has called “hallucination”.
  • GPT-4 has been trained on data scraped from the Internet that contains several harmful biases and stereotypes.
  • There is also an assumption that a large dataset is also a diverse dataset and faithfully representative of the world at large.
  • The moderation model is trained to detect only the biases we are already aware of, and mostly in the English language.
  • This model may be ignorant of stereotypes prevalent in non-western cultures, such as those rooted in caste.
  • There is vast potential for GPT-4 to be misused as a propaganda and disinformation engine.
  • The larger question here is where the decision not to do the wrong thing should reside: in the machine’s rules or in the human’s mind.

A ‘stochastic parrot’

  • In essence, GPT-4 is a machine that predicts the next word in an unfinished sentence, based on probabilities it learned as it trained on large corpora of text.
  • This is why it’s being called a “stochastic parrot”, speaking in comprehensible phrases without understanding the meaning.
  • But Microsoft Research has maintained that GPT-4 does understand what it is saying, and that not all intelligence is a type of next-word prediction.
  • Apart from OpenAI’s models, AI company Anthropic has introduced a ChatGPT competitor named Claude.
  • Google recently announced PaLM, a model trained to work with more degrees of freedom than GPT-3.
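The next-word prediction described above can be illustrated with a deliberately tiny sketch. The snippet below builds a toy bigram model — counting, for each word in a small hypothetical corpus, which words follow it and how often — and then samples a continuation in proportion to those learned frequencies. This is a vast simplification of GPT-4, which learns from billions of words with far richer context, but the core idea of "predict the next word from learned probabilities" is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to learned frequencies."""
    words, weights = zip(*follows[prev].items())
    return rng.choices(words, weights=weights, k=1)[0]

# In this corpus "the" is followed by cat (2x), mat (1x), fish (1x),
# so "cat" is the most probable continuation of "the".
print(follows["the"].most_common(1)[0][0])  # -> cat
```

Because the model only tracks which word tends to follow which, it can emit fluent-sounding phrases without any notion of meaning — which is precisely the "stochastic parrot" critique.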

Conclusion

  • More broadly, efforts are underway worldwide to build a model with a trillion degrees of freedom.
  • These will be truly colossal language models that elicit questions about what they cannot do. But such concerns would be red herrings, distracting us from a deeper question: whether we should be building models that simply test the limits of what is possible, to the exclusion of society’s concerns.
