By Blessing Enechojo Abu
OpenAI has released GPT-4, a new ChatGPT model that understands both images and text, which the company calls “the latest milestone in its effort in scaling up deep learning.”
Since its launch in November, ChatGPT, which runs on a technology called GPT-3.5, has been hugely impressive, partly because it represents a quantum leap from the capabilities of GPT-2, released a few years earlier.
Comparing GPT-3.5 and GPT-4, OpenAI states that although the distinction can be subtle, “the difference comes out when the complexity of the task reaches a sufficient threshold — GPT-4 is more reliable, creative and able to handle much more nuanced instructions than GPT-3.5.”
The clearest distinction, however, is that GPT-4 can read and analyze images in addition to text, so it can generate text from both text and image inputs. GPT-4 can caption, and even interpret, relatively complex images, acting as a virtual eye for visually impaired users.
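The text-plus-image capability can be pictured as a single request that pairs a prompt with an image. Below is a minimal sketch following the shape of OpenAI's Chat Completions message format; the model identifier, image URL, and helper function are illustrative assumptions, and the snippet only assembles the request payload rather than calling the API.

```python
# A minimal sketch of a multimodal request to a GPT-4-class model,
# modeled on the OpenAI Chat Completions message format. The model
# name and image URL are assumptions for illustration; no API call
# is made here.

def build_image_caption_request(image_url: str, prompt: str) -> dict:
    """Assemble a chat payload pairing a text prompt with an image."""
    return {
        "model": "gpt-4",  # assumed model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    # A text part and an image part in one user message.
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_image_caption_request(
    "https://example.com/photo.jpg",
    "Describe what is happening in this image.",
)
print(payload["messages"][0]["content"][0]["text"])
```

In this shape, the model receives the image alongside the prompt in a single user turn, which is what lets it caption or interpret the picture in its reply.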
For now, using the latest model isn’t free: users must pay $20 per month for ChatGPT Plus, the premium version of the ChatGPT bot.
For all the assistance GPT-4 offers its users, it has flaws. As with previous versions, GPT-4 can “relay misinformation or be misused to share controversial content, like instructions on how to cause physical harm or content to promote political activism.”
According to OpenAI’s internal evaluations, GPT-4 is 40 percent more likely to produce factual responses than GPT-3.5, and 82 percent less likely to respond to requests for disallowed content.
According to its makers: “GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021), and does not learn from its experience.”
“It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.”