Chinmay Jog, Pangiam

DataDecisionMakers Author

Chinmay Jog is a senior machine learning engineer at Pangiam.




The 'strawberrry' problem: How to overcome AI's limitations

By now, large language models (LLMs) like ChatGPT and Claude have become household names around the globe. Many people worry that AI is coming for their jobs, so it is ironic to see almost all LLM-based systems flounder at a straightforward task: counting the number of "r"s in the word "strawberry." They do not fail only on the letter "r"; other examples include counting the "m"s in "mammal" and the "p"s in "hippopotamus." In this article, I will break down the reason for these failures and provide a simple workaround.
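To underline how trivial this task is outside an LLM, here is a minimal Python sketch of the letter-counting examples mentioned above. The specific words and letters come from the article; the counting approach is just ordinary string handling, not any particular model's method.

```python
# Count how often a given letter appears in a word -- the task
# that LLMs often get wrong when asked to do it "in their head".
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

# The examples from the article:
print(count_letter("strawberry", "r"))    # 3
print(count_letter("mammal", "m"))        # 3
print(count_letter("hippopotamus", "p"))  # 3
```

A single `str.count` call solves deterministically what a model predicting tokens can stumble over, which is the gap the rest of the article explores.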
