The growing field of software supply chain security differs from traditional application security in nearly every way we interact with it, and it requires different tools and techniques.
We are witnessing an increasing trend in software supply chain attacks. Gartner predicts that “by 2025, 45% of organizations worldwide will have experienced attacks on their software supply chains, a three-fold increase from 2021.” For security professionals who have worked in application security for years, viewing these new threats through the same lens offers false comfort. It is easy to treat software supply chain vulnerabilities as if they were just someone else’s web bugs, but managing the risk that comes from the software supply chain requires different tools and techniques.
In this blog post, we contrast software supply chain security with application security in concrete terms by showing how the two differ in the context of the DevOps process. This lets us describe the options you have for securing your software supply chain.
The diagram below shows which application security and software supply chain risks exist in a typical DevOps-driven software development process. The chartreuse-colored numbers 1 and 7 are application security risks: the introduction and exploitation of application vulnerabilities, respectively. The dark-gray-colored numbers 2 through 6 are software supply chain risks.
A typical application security vulnerability is introduced in the coding or planning phase.
Vulnerabilities created during the planning phase typically stem from a failure to fully consider the security implications and requirements of the system. For example, failing to recognize the security implications of allowing JNDI lookups in log data can lead to surprising results, as the Log4Shell vulnerability demonstrated.
Threat modeling performed by experienced analysts can go a long way toward preventing these lapses.
We can assume that the typical professional software developer doesn’t go out of their way to write insecure code unless their intent is malicious. Producing secure software simply takes effort. An inexperienced developer may concatenate user input into a SQL query instead of devising a parameterization strategy, or innocently reflect unsanitized data to a web browser without jumping through all the hoops to properly encode it as HTML. Even classic memory-safety bugs like Heartbleed’s buffer over-read take effort to avoid.
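To make the reflected-output problem concrete, here is a minimal sketch of an HTML escaper in C. It is an illustrative, hypothetical helper, not a production-grade encoder; real applications should rely on a vetted templating or encoding library rather than hand-rolled escaping.

```c
#include <stdio.h>
#include <string.h>

/* Minimal HTML-escaper sketch: replaces the characters that let reflected
   user input break out of an HTML text context. Illustrative only. */
void html_escape(const char *in, char *out, size_t outsz) {
    size_t o = 0;
    for (; *in && o + 7 < outsz; in++) {   /* 7 = longest entity + NUL */
        const char *rep = NULL;
        switch (*in) {
            case '<':  rep = "&lt;";   break;
            case '>':  rep = "&gt;";   break;
            case '&':  rep = "&amp;";  break;
            case '"':  rep = "&quot;"; break;
            case '\'': rep = "&#39;";  break;
        }
        if (rep) {
            size_t n = strlen(rep);
            memcpy(out + o, rep, n);
            o += n;
        } else {
            out[o++] = *in;
        }
    }
    out[o] = '\0';
}
```

With this in place, a reflected payload like `<script>alert(1)</script>` is rendered inert as `&lt;script&gt;alert(1)&lt;/script&gt;` instead of executing in the browser.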
The application security industry has built a wide array of countermeasures for implementation bugs. SAST, DAST, SCA, penetration testing, bug bounties, and secrets scanning tools are all geared toward identifying these defects between implementation and production.
Malicious code can be injected during the coding phase of software development, and malicious third-party packages can be introduced, intentionally or unintentionally, when the code is built into executables and packaged for deployment.
Malicious code can be inserted into source code management systems by legitimate developers or through compromised developer accounts. Finding backdoors, logic bombs, Trojan horses, and the like in code is difficult to automate.
What do I mean by that? Take the PHP backdoor from March 2021. On receipt of a specific HTTP header, the backdoor uses a string sent with that header as executable PHP code:
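The actual change lived deep inside the PHP interpreter’s C source. The sketch below is a simplified, self-contained mock of the pattern: the real zend_eval_string call is replaced with a stub that records what would have been executed, and the trigger string and handler name are placeholders, not the actual values from the compromised commit.

```c
#include <stdio.h>
#include <string.h>

/* Stub standing in for PHP's zend_eval_string(): records the code that
   would have been evaluated instead of actually running it. */
char evaluated[256];
void zend_eval_string_stub(const char *code) {
    snprintf(evaluated, sizeof evaluated, "%s", code);
}

/* Sketch of the backdoor pattern: if an attacker-controlled HTTP header
   value starts with a magic trigger string, everything after the trigger
   is handed to the interpreter as PHP code. */
void handle_header(const char *value) {
    const char *trigger = "TRIGGER:";   /* placeholder trigger string */
    size_t n = strlen(trigger);
    if (strncmp(value, trigger, n) == 0) {
        zend_eval_string_stub(value + n);   /* remote code execution */
    }
}
```

An ordinary header value passes through untouched; a header beginning with the trigger hands the attacker arbitrary code execution in the server process.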
Any static analysis tool that looks for command injection should flag this because this backdoor is implemented as command injection by using the function “zend_eval_string”.
Compare this to a Linux kernel backdoor attempt from 2003 in which the wait4 system call was modified to add this code:
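The attempted change amounted to something like the following. This is a self-contained mock, not the kernel patch itself: the option constants and the global uid variable are simplified stand-ins for the kernel’s wait-option flags and current->uid.

```c
#define OPT_WCLONE 0x80000000u   /* stand-in for __WCLONE */
#define OPT_WALL   0x40000000u   /* stand-in for __WALL */

unsigned current_uid = 1000;     /* stand-in for the kernel's current->uid */

/* Mock of the backdoored wait4() check. At a glance it looks like it
   returns an error when a root user passes an invalid option combination.
   But `current_uid = 0` is an ASSIGNMENT, not a comparison: calling with
   both options set silently makes the caller root, and because the
   assignment evaluates to 0 the whole condition is false, so -EINVAL is
   never actually returned. */
int check_wait_options(unsigned options) {
    int retval = 0;
    if ((options == (OPT_WCLONE | OPT_WALL)) && (current_uid = 0))
        retval = -22;   /* -EINVAL */
    return retval;
}
```

Any other option combination leaves the uid alone, so the backdoor is invisible in normal use and fires only for a caller who knows the magic combination.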
Unless you are a very conscientious C programmer, this code looks like it checks to see if a few options are set and if the current user ID is zero, and if so, sets a return value to indicate an invalid condition has occurred. Since those two options should never be set at the same time, this would make sense. However, this code sets the current user’s ID to zero. That second condition in the if statement is a single-equals (‘=’) assignment operation, not a double-equals (‘==’) comparison operation. And setting your user ID to zero in the kernel is a privilege escalation to Linux’s all-powerful root user.
Static analysis would whizz right by this code. I’ve conducted penetration tests of many Unix and Linux systems and never have I written a custom program which calls arbitrary system calls with arbitrary options to see if it results in a privilege escalation.
Fortunately, the Linux kernel has many very conscientious C programmers who perform code reviews as part of a formal approval system. They don’t merely rubber-stamp changes; they carefully analyze them before they make their way into the kernel. For this reason, this backdoor was never deployed.
When Dominic Tarr was contacted by fellow open-source software developer “right9ctrl” asking to take over maintenance of “event-stream”, one of his several hundred projects, he said yes. Feature requests were being made to event-stream, and Tarr didn’t have the bandwidth to care for a package he didn’t even use anymore. Right9ctrl then infected event-stream with a backdoored dependency, and event-stream thereby infected the 3,900+ packages that use it as a dependency, in a targeted attack against bitcoin wallet software.
Most organizations do not perform in-depth code reviews of third-party dependencies. Instead, package risk scoring tools such as OpenSSF’s Scorecard project or Black Duck’s Open Hub can help identify immature, under-maintained, and therefore risky dependency packages.
In either the code or build stage, you can fight this risk by identifying risky code changes. This is the space that Arnica works in. We will be announcing much more about our capabilities later.
In August 2018, Snapchat accidentally released parts of its source code tree as part of its iOS app release. Mobile app packages are compressed zip files holding everything the app needs to run on a target mobile device and, frequently, lots of things it doesn’t. Snapchat accidentally copied some of its source into that zip file, where it was subsequently found by a curious Snapchat user and shared on the internet.
The accidental or intentional exfiltration of code can happen at various points in the lifecycle; for Snapchat, it was in the release. Anyone with access to your code at any time can exfiltrate it, so curbing excessive permissions is key to preventing the majority of this risk. Threat intelligence can alert you that your code has been disclosed publicly or on the dark web, and anomaly detection and, sometimes, data loss prevention can identify your code as it’s on its way out the door.
This risk encompasses the exfiltration not just of code but of other software development artifacts as well. When Anthony Levandowski allegedly stole almost 10 gigabytes of design documentation from self-driving car pioneer Waymo to found his own company, Uber quickly bought that small company for over half a billion dollars. The resulting civil suit was settled for 0.34% of Uber’s equity.
Fortunately for Waymo, they had security systems which let them observe the downloads of the design documents and were able to confirm through external intelligence that Uber was using proprietary features of Waymo’s design which were documented amongst the stolen materials.
Sometimes malicious external actors compromise IT infrastructure to inject malicious code into innocent code. This happened with the Webmin backdoor of 2019: the SourceForge package of this very popular UNIX administration software was manipulated to include a command injection vector in its change-password functionality, while the source and packages available from the project’s GitHub repo were untouched. Anyone who deployed Webmin by downloading packages from SourceForge was potentially affected.
Protecting the integrity of your code is not a simple matter because of the size and complexity of your pipeline’s attack surface. Reducing that attack surface where you can is a good first step, but it’s big and complex for a reason: it has a big, complex job to do. Nonetheless, hardening configurations and minimizing the permissions of each component in the pipeline is a worthwhile exercise. Signing code and binaries early is useful, but only if those signatures are built on secure, distributed enclaves to prevent tampering and are validated in later phases. Distributed enclaves are a novel approach: after its breach, SolarWinds experimented with running builds in parallel across redundant, distributed build systems so that a similar attacker would have to be in three places at the same time. Having a trusted package distribution system and performing software composition analysis to identify publicly known vulnerable third-party packages can also help manage this risk.
Cross-environment compromise occurs when authorization to one environment leads to unwanted authorization to another. One simple way this can happen is through the reuse of credentials across different environments. Another is through transitive trust issues in infrastructure, for example, deploying a service to a network/pod of another service.
Centralized secrets managers can prevent the former risk from occurring. Separate pipelines—treating each environment as a different tenant—can help with the latter.
We’ve seen how many different ways software supply chain vulnerabilities can be exploited throughout the DevOps process. Sometimes they are exploited in production as well. This happened in the Siemens logic bomb attack, wherein a contractor allegedly created self-destructing Excel macros for the purpose of garnering additional support contracts.
For software products that run on customer premises, that means customer production, as with the SolarWinds cyberattack.
If you have custom backdoors being accessed in production, there’s not a lot you can do at this point. XDR tools might detect especially loud network anomalies. Any other internal controls contributing to your defense in depth strategy can help. For example, API threat protection on SaaS products can identify abnormal API calls. If your intruder is accessing APIs as a part of their attack campaign, this control can help.
Finally, we come full circle on the application security side. Operations is the only phase where application security vulnerabilities are exploited.
There are many productized countermeasures in this space, such as API Threat Protection, web application firewalls, and endpoint detection and response systems.
Managing down the risk of software supply chain attacks requires security activities at many points in the DevOps process. While there might be a lot of manual work to do today, software supply chain tools are maturing as awareness grows throughout the DevSecOps industry. Don’t be discouraged. Join our journey at Arnica.