Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success.
The general buzz around AI is not fading anytime soon. It’s popping up in almost every industry, from customer service to medicine. However, the technical conversation about what these tools can actually do remains complicated.
Large Language Models (LLMs) can emulate human-like conversation, but they cannot truly understand it. They are trained on huge volumes of data to produce a particular output for a specific input, yet they lack the ability to comprehend the true meaning behind the words. Any response generated by an LLM lacks a genuine understanding of context.
While LLMs can produce formulaic, structured prose and poetry, the writing is often uninspired and dull. OpenAI’s ChatGPT is an LLM that generates new text after training on vast amounts of data. While teachers fear that ChatGPT’s popularity will spell the end of take-home assignments and exams, a close examination of how ChatGPT works reveals its inability to produce creative, interesting, human-like prose. This shortcoming raises a fundamental question about the technology’s usefulness for solving business problems.
According to market estimates, the chatbot market is expected to grow at a CAGR of 23.5%, reaching $10.6 billion by 2026.
ChatGPT is the most popular generative AI chatbot, but it is far from the first; it competes in a market full of language bots. The free version of ChatGPT gained momentum after reaching 1 million users in just one week. ChatGPT depends on large numbers of human workers performing data classification, tagging, labeling and annotation on massive volumes of data to enhance its capabilities. Some have even speculated that ChatGPT could replace Google’s search engine.
However, the probability of inaccuracies in ChatGPT’s answers forces users to verify them against external sources. Such verification can be complicated, because ChatGPT gives concrete answers without linking to sources (unlike Google) or stating its level of confidence. Fears that it will replace Google therefore seem overblown.
Shortcomings of ChatGPT
As discussed above, ChatGPT can write prose and poetry, answer sophisticated questions, and engage in conversations, but certain shortcomings cannot be overlooked. Some of them include the following:
Inaccurate answers
ChatGPT is an extensive LLM that improves the accuracy of its responses through continuous training. However, because the model is still relatively new, it has not yet undergone enough training and can give inaccurate answers.
Because of this, Stack Overflow has banned ChatGPT-generated answers, saying they are harmful to the community and to users looking for correct answers. Although ChatGPT generates inaccurate answers at a high rate, it delivers every answer with such confidence that they feel not only correct but authoritative.
Limitations in training data
Like all other AI models, ChatGPT suffers from limitations in its training data. Constraints and biases in that data can produce inaccurate results, disproportionately impact minority groups and perpetuate stereotypical representations. Improving data transparency is imperative to reduce such biases.
High running costs
ChatGPT is free to use, but running the technology is extremely expensive: the cost is estimated at around $100,000 per day, or $3 million per month. This raises questions about its long-term sustainability. OpenAI’s partnership with Microsoft may lower some costs, but this operation is not cheap in any sense.
Advances in AI: A rocky road ahead
While many technology determinists have called ChatGPT “code red” for Google, the reality is far from it. Testing has demonstrated that ChatGPT produces “mindless discomprehension”: incoherent answers that reveal the system does not understand what it is talking about. While it guardrails offensive responses (the main problem with other generative AI bots), it does so using keyword filters and does not understand what it is guarding against.
The other, more significant problem with ChatGPT is hallucination: it stitches together related-seeming material that does not correctly answer the question. Essentially, it paraphrases and combines different pieces of information from its training data that may be only loosely or randomly related. The result can sound plausible and credible while being far from reality.
Unlike traditional chatbots, which map keywords to intents, LLMs like ChatGPT are text predictors. They learn statistical relationships between words, sentences and longer passages of text, and they use those relationships to predict the next string of tokens.
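The text-prediction idea can be illustrated with a deliberately tiny toy: a word-bigram model that counts which word tends to follow which. This is an illustrative sketch only; the corpus, function names and approach here are assumptions for demonstration, and GPT-class models actually use transformer networks over subword tokens at vastly larger scale.

```python
from collections import defaultdict

# Toy next-word predictor: count, for each word, which words follow it,
# then predict the most frequent follower. This mirrors (very crudely)
# the "predict the next token" objective described above.

def train_bigram(text):
    """Build a table of follower counts for each word in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Hypothetical mini-corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Note that the model has no idea what a cat or a mat is; it only knows co-occurrence statistics. That gap between statistical prediction and comprehension is exactly the limitation the article describes, just at a much smaller scale.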
While a Google search costs less than a penny, each ChatGPT response is expensive once data collection, manual data work and massive computing are counted. ChatGPT also takes noticeably longer to compose a response, while Google searches are near-instantaneous. These cost and speed issues put ChatGPT behind Google.
The above discussion around LLMs and ChatGPT demonstrates that the hype around ChatGPT may be exaggerated. A lot of excitement gets drummed up when people start imagining the possibilities. However, after a short period of time, those actually testing the parameters of these tools in specific business scenarios reveal that we’re still a ways from the great AI singularity in the sky.
Srini Pagidyala is the cofounder of aigo.ai.