ChatGPT was launched in November 2022 and has been making headlines for its sheer brilliance ever since. Whether acing competitive exams, helping students with their assignments, turning tutor in some cases, or composing music and poetry, the AI chatbot has passed many of the tests people have put it through. However, all this time, one concern has particularly stood out – the accuracy of its responses. And according to a new study, the chatbot’s accuracy is once again under the scanner, as it failed to answer 52 per cent of software engineering questions correctly.
ChatGPT’s wrong responses to software engineering questions
According to an IANS report, a study by Purdue University in the US has raised questions about the chatbot’s accuracy. Researchers analysed ChatGPT’s responses to 517 questions from Stack Overflow (SO) and found that 52 per cent of these responses were inaccurate and 77 per cent were ‘verbose’. The team also found that the inaccurate answers were largely due to the AI chatbot’s failure to understand the concepts behind the questions.
The researchers added that even when ChatGPT did understand the question, it often could not arrive at a solution, which contributed to a higher number of conceptual errors. The team also questioned the AI tool’s limited reasoning ability.
“In many cases, we saw ChatGPT give a solution, code, or formula without foresight or thinking about the outcome,” the team of researchers told IANS.
“Prompt engineering and human-in-the-loop fine-tuning can be helpful in probing ChatGPT to understand a problem to some extent, but they are still insufficient when it comes to injecting reasoning into LLM. Hence it is essential to understand the factors of conceptual errors as well as fix the errors originating from the limitation of reasoning,” they added.
OpenAI to go bankrupt?
On a related note, a recent report by Analytics India Magazine highlighted that ChatGPT’s user base is declining. The chatbot saw a fall in users in June, and the drop continued the following month: while 1.7 billion people were using the viral AI chatbot in June, the figure declined by about 12 per cent in July to 1.5 billion active users. The report added that OpenAI is not yet profitable and that Microsoft’s investment of USD 10 billion might be what is keeping the company going at the moment, since the company’s losses are mounting and its money is coming from investors’ pockets.
Further, it reportedly costs the company about USD 700,000 a day to operate ChatGPT. Thus, if OpenAI does not become profitable soon, it might eventually go bankrupt.
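For reference, the decline percentage and the annualised running cost implied by these figures can be checked with a few lines of Python. The numbers below are simply those quoted in the reports above, not independent data:

```python
# Figures quoted in the reports (user counts in billions, cost in USD).
june_users = 1.7e9   # reported June active users
july_users = 1.5e9   # reported July active users

# Percentage decline from June to July
decline_pct = (june_users - july_users) / june_users * 100
print(f"User decline: {decline_pct:.1f}%")  # ~11.8%, i.e. roughly 12 per cent

# Reported daily running cost, annualised
daily_cost = 700_000
annual_cost = daily_cost * 365
print(f"Annualised cost: ${annual_cost:,}")  # $255,500,000
```

The exact decline works out to about 11.8 per cent, consistent with the "12 per cent" figure in the report, and the daily cost implies an annual running bill of roughly USD 255 million.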