Most teams rely on more than just their application code to ship software. What happens when one of those tools falls victim to an attack? We recently got a demonstration of exactly that when the popular security scanning tools Trivy and KICS were caught up in an attack.
The attackers leveraged the compromised tooling (distributed as GitHub Actions) in a supply chain attack to harvest credentials from any consuming repository. With those additional credentials, they could expand their reach until they achieved the foothold they were looking for.
I've noticed this risk is not unique to security scanners. While teams commonly consider the libraries their applications directly depend on, there is considerable attack surface in what Node calls "devDependencies" and in the additional tooling your CI/CD pipeline pulls in during execution: a test framework, a formatter, or a linter.
Pipelines are high-value targets given the secrets and systems they can access, and teams must be more intentional about the steps they take to protect them. Just because a system does not accept requests from the internet does not mean it can't have a huge impact on your company's data. I've seen many teams adopt lock files for their application dependencies, but far less rigor is applied to this additional tooling.
For example, the overwhelming majority of consumers of a given GitHub action reference a major version tag (`uses: actions/setup-go@v6`). These tags are mutable, which means anyone with write access to the action's repository can change the backing code without any change on the consumer's side. This increases the blast radius an attack can have.
We can learn from some of the approaches teams already take for their application dependencies. As with many dependency files, teams can reference a more specific minor or patch version so that new releases are not consumed automatically. This does not resolve the mutability problem, but it avoids picking up new versions without review.
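In a workflow file, the difference is a single line. The specific version number below is illustrative, not a real `setup-go` release:

```yaml
# Floating major tag: new releases (and tag rewrites) are picked up automatically
- uses: actions/setup-go@v6

# Specific release tag: upgrades require an explicit change in this file,
# though the tag itself is still mutable by the action's maintainers
- uses: actions/setup-go@v6.0.1
```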
Some programming languages (like Go via `go.sum`) take it a step further and verify that the code for a version matches a recorded checksum, informing the user if it has changed. Actions can similarly be referenced by their commit SHA, which prevents the code from shifting beneath a consumer's feet. This tradeoff is not without cost, as automation is needed to make regular updates. However, I'm coming around to the idea that upgrading the moment a new release is available is not always best.
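Pinning to a SHA looks like the following. The commit hash here is illustrative rather than a real `setup-go` commit; the trailing comment is a common convention that records the human-readable version and lets update tooling propose bumps:

```yaml
# Pinned to an immutable commit SHA; only this exact code can run,
# even if the upstream tag is later moved
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v6.0.1
```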
A growing number of package management systems now support "cooldowns". With this configuration, tools like Dependabot, uv, or pnpm will not suggest upgrading to a newer version until a set time period has passed since its release. If an attack is detected and resolved within a few hours or days, consumers with a cooldown configured avoid the impact altogether. A couple of days to a week sounds like a reasonable tradeoff, without letting security vulnerabilities go unpatched for too long.
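As a sketch, a Dependabot cooldown for action upgrades might look like the configuration below in `.github/dependabot.yml`. This feature is relatively new, so treat the exact keys as an assumption and check the current Dependabot documentation:

```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      # Wait 7 days after a release before proposing the upgrade
      default-days: 7
```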
Another approach to consider is eliminating the impact by removing unneeded dependencies. Is a given third-party GitHub action truly needed, or is it really just two commands wrapped in an action? Focus on the minimal required set, rather than letting the list grow endlessly.
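A hypothetical example: a formatting-check action that only wraps a couple of shell commands can be replaced with the commands themselves, removing one third-party dependency from the pipeline (the wrapper action named in the comment is made up):

```yaml
# Instead of a third-party wrapper action...
#   - uses: some-org/gofmt-check-action@v2   # hypothetical
# ...run the underlying commands directly
- name: Check formatting
  run: |
    # Fail the step if gofmt would reformat any file
    test -z "$(gofmt -l .)"
```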
You can also consider vendoring. With vendoring, you take a snapshot of the code and store it locally, using it in place of dynamically fetching from upstream. This removes the ability of new upstream releases to directly impact your codebase, but it requires additional work to update the snapshot and make the vendored packages available to the build.
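With GitHub Actions, one form of vendoring is copying an action's code into your own repository and referencing it by path. The local directory below is an assumption about where you choose to store the snapshot:

```yaml
# Runs the copy vendored at .github/actions/setup-tool in this repo,
# instead of fetching the action from its upstream repository at run time
- uses: ./.github/actions/setup-tool
```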
With custom development, a team can write their own version of a given tool rather than putting their trust in a dependency. While this creates an additional burden on the team, the tool can be scoped to only the functions that are truly needed.
Platform and security teams must do the work to enable teams to navigate these options. As I've previously explored, defaults in templates and tooling around upgrades can help protect development teams at scale. Additional controls, such as an allow list of approved actions or defaulting workflow tokens to read-only permissions, can also reduce the impact of an attack. When I implemented the allow list approach, we eliminated the potential for action typosquatting attacks, while also providing evaluation criteria to teams considering new actions.
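For example, defaulting a workflow's `GITHUB_TOKEN` to read-only limits what a compromised step can do with it; jobs that genuinely need more can elevate explicitly. The workflow below is a minimal sketch:

```yaml
on: push

# Workflow-level default: the token can only read repository contents
permissions:
  contents: read

jobs:
  release:
    runs-on: ubuntu-latest
    # This job explicitly requests only the extra scope it needs
    permissions:
      contents: write
    steps:
      - run: echo "publish release here"
```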
As these attacks have demonstrated, it's no longer enough to only focus on the code your application uses. What tooling do your pipelines depend on? What might happen if one of those tools was compromised?