Again and again, the IT news reports newly discovered security vulnerabilities. The more severe a vulnerability's classification, the more attention it receives in the general press. Most of the time, you never hear or read anything about the many vulnerabilities that are less prominent than, say, the SolarWinds hack.
But what is the typical lifecycle of such a security gap?
From Creation to Discovery
Let's start with the birth of a vulnerability, which can come about in two differently motivated ways. On the one hand, any developer can inadvertently create a security hole through an unfortunate combination of source code. On the other hand, a vulnerability can be the result of targeted manipulation.
Whichever of these two scenarios applies, intentional or unintentional, it has essentially no effect on the rest of the vulnerability's lifecycle.
In the following, we assume that a vulnerability has been created and is now active in software, whether in executable programs or in libraries that other software projects pull in as dependencies.
From Discovery to Publicly Available
In most cases, it is impossible to determine precisely when a vulnerability was created. So let's simply assume that a vulnerability exists and that it will eventually be found. Who finds it, which person or which team, has a severe impact on the subsequent course of its history.
Let's start with the assumption that the vulnerability is found by people who want to exploit it themselves or have it exploited by other parties. In that case, the information is either kept under lock and key or offered for sale in the relevant corners of the Internet. The motives here are primarily financial or political, and I do not want to go into them. What matters at this point is that the information flows into channels that are not available to the general public.
However, if the vulnerability is found by people or groups who are interested in making the knowledge available to the general public, various mechanisms come into effect. One must not forget that commercial interests will still play a role in most cases; only the motivation differs. If the company or project itself is affected by the vulnerability, there is usually an interest in presenting the information as harmlessly as possible. The fear of damage can even lead to the vulnerability being fixed while knowledge of it is kept hidden. This approach should be viewed critically, because one must assume that other groups or individuals will gain the same knowledge.
But let's assume the vulnerability is found by people who are not directly involved with the affected components. In most cases, their motivation is to sell the knowledge. Besides the affected projects or products, there are also providers of vulnerability databases, and these companies have a direct and obvious interest in acquiring such knowledge. But to which company will the finder sell it? Most likely to whoever pays the better price. This has a side effect on the classification of the vulnerability: many vulnerabilities are assessed using CVSS, and the base score is assigned by different people. Different people have different personal interests, which are then reflected in the score.
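To make this room for interpretation concrete, here is a minimal sketch of the CVSS v3.1 base score calculation in Python. The metric weights and the rounding rule follow the published specification; the two example calls are illustrative and show how two assessors who disagree on a single base metric (here, Privileges Required) arrive at noticeably different scores for the same flaw.

```python
# Metric weights from the CVSS v3.1 specification (scope: unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

def roundup(x):
    """Round up to one decimal, exactly as defined in the v3.1 spec."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# The same flaw, rated by two assessors who disagree on one metric:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # PR: None -> 9.8 (Critical)
print(base_score("N", "L", "L", "N", "H", "H", "H"))  # PR: Low  -> 8.8 (High)
```

A single judgment call, whether exploitation requires low privileges or none at all, moves the score across the boundary between "High" and "Critical", which is exactly the kind of threshold that automated tooling later acts on.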
Whatever detours the knowledge takes on its way to the vulnerability databases, only once the information has reached one of these points can one assume that it will become available to the general public over time.
From Publicly Available to Consumable
However, one fact is very clear at this point: whichever vulnerability database provider you choose, its data set will only ever contain a subset of all known vulnerabilities.
As an end consumer, there is only one sensible way to obtain this information. Instead of contacting the individual providers directly, you should rely on integrators: services that combine various sources themselves and offer the result processed and merged. It is also essential that the findings are prepared for further machine processing, which means that meta-information such as the CVE identifier and the CVSS score is supplied.
Only then can other programs work with this information. Take the CVSS score as an example: it is used in CI environments to interrupt further processing when a particular threshold is reached. Only when the information is prepared in this way and delivered to the end user does it become consumable. Since this information generally represents considerable financial value, it can be assumed in the vast majority of cases that commercial providers of such data sets will have access to updated information faster than freely available collections.
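Such a threshold check can be sketched in a few lines of Python. The findings structure and its field names below are invented for this example; a real setup would read them from the report format of whichever scanner or aggregator is in use.

```python
# Hypothetical scan result, as an aggregator might deliver it after merging
# several vulnerability feeds. Field names are illustrative, not a real schema.
SCAN_FINDINGS = [
    {"cve": "CVE-2021-0001", "component": "logging-lib:2.14.0", "cvss": 9.8},
    {"cve": "CVE-2021-0002", "component": "json-lib:1.2.3",     "cvss": 5.3},
]

def gate(findings, threshold=7.0):
    """Return the findings that should break the build."""
    return [f for f in findings if f["cvss"] >= threshold]

def ci_step(findings, threshold=7.0):
    """Simulated pipeline step: non-zero exit code stops further processing."""
    blockers = gate(findings, threshold)
    for f in blockers:
        print(f"BLOCKED by {f['cve']} ({f['component']}) CVSS {f['cvss']}")
    return 1 if blockers else 0
```

With a threshold of 7.0, the critical logging finding stops the pipeline while the medium one merely remains on record; raising or lowering the threshold is a policy decision per project.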
From Consumable to Running in Production
If the information can now be consumed, that is, processed by the tools used in software development, the storyline begins in your own projects.
Whichever provider you have decided on, the information becomes available to you at a particular point in time, and from then on you can react to it. The goal is to get the necessary changes into production as quickly as possible, because this is the only way to limit the potential damage resulting from the vulnerability. This places various requirements on your software development processes.
The most obvious requirement concerns throughput times. Only those who have a high degree of automation in their delivery processes can achieve short response times. It also helps if the team concerned can make the necessary decisions itself, and quickly. Lengthy approval processes are not just annoying at this point; they can cause the company extensive damage.
Another point with high potential is making security-critical information available at all stages of production. The earlier the data is taken into account, the lower the cost of removing a vulnerability. We'll come back to this in more detail when we discuss the shift-left topic.
Another question that arises is which mechanisms are actually effective against vulnerabilities.
From Test Coverage to Mutation Testing
The best knowledge of security vulnerabilities is of no use if it cannot be put into practice.
But what tools does software development offer for taking efficient action against known vulnerabilities? I would like to highlight one metric in particular: the test coverage of your own source code. With strong test coverage, you can make changes to the system and rely on the test suite. If all affected system components pass their tests, nothing stands in the way of releasing the software from a technical point of view.
But let's take a step back and look at the situation more closely. In most cases, known vulnerabilities are removed by switching to a different version of the same dependency. Efficient version management therefore gives you the agility you need to react quickly. Only in rare cases does an affected component have to be replaced by a semantic equivalent from another vendor. And to validate the new composition of versions of the same ingredients, strong test coverage is required. Manual tests would far exceed the available time frame and cannot be carried out with the same quality in every run. But what is strong test coverage?
Here I use the technique of mutation testing. It yields more meaningful coverage than conventional line or branch coverage usually does. A complete description of the procedure is beyond the scope of this article.
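To give at least an impression of the idea, here is a minimal, hand-rolled mutation-testing sketch in Python. It produces mutants by swapping a comparison operator in the function under test and then checks whether a test suite notices the change. Real tools such as PIT for Java or mutmut for Python do this systematically; the function and the two suites below are made up purely for illustration.

```python
import ast

# Source of the function under test, kept as text so we can mutate its AST.
SRC = "def is_adult(age):\n    return age >= 18\n"

def weak_suite(f):
    # High line coverage, but the boundary value 18 is never tested.
    return f(30) and not f(10)

def strong_suite(f):
    # Also checks the boundary value.
    return f(30) and not f(10) and f(18)

class MutateCompare(ast.NodeTransformer):
    """Replace every comparison operator with a given substitute."""
    def __init__(self, new_op):
        self.new_op = new_op
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [self.new_op() for _ in node.ops]
        return node

def mutation_score(suite):
    killed = 0
    mutants = [ast.Gt, ast.Lt]           # '>=' becomes '>' and '<', one mutant each
    for op in mutants:
        tree = MutateCompare(op).visit(ast.parse(SRC))
        ast.fix_missing_locations(tree)
        ns = {}
        exec(compile(tree, "<mutant>", "exec"), ns)
        if not suite(ns["is_adult"]):    # the suite fails: mutant is killed
            killed += 1
    return killed, len(mutants)

print(mutation_score(weak_suite))    # the '>' mutant survives the weak suite
print(mutation_score(strong_suite))  # every mutant is killed
```

Both suites achieve full line coverage of `is_adult`, yet only the strong suite kills every mutant. That is precisely the gap between conventional coverage metrics and mutation testing.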
From Logical Central Point to Finding Vulnerabilities
If we now assume that we want to search for known vulnerabilities during both the development and the operation of our software, we need a place where these search processes can be carried out.
Different areas are suitable here. However, an efficient scan across all the technologies in use imposes one requirement: a logical central point through which all binaries must pass. By this I mean not just the JAR files declared as dependencies, but all other files as well, such as Debian packages or Docker images. Artifactory is suitable as this central hub, since it supports pretty much every package manager in one place. Because the individual files and their metadata are known, the following evaluations also become possible.
First, you can capture more than the direct dependencies. Knowledge of the structures of the package managers in use means that all indirect dependencies are known as well.
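The principle can be sketched with a simple graph traversal. The package names and the vulnerable component below are invented for this example; in practice, the dependency graph is derived from the package managers' metadata.

```python
from collections import deque

# Toy dependency graph: each package maps to its declared dependencies.
DEPENDENCY_GRAPH = {
    "my-app":        ["web-framework", "json-lib"],
    "web-framework": ["logging-lib", "http-client"],
    "json-lib":      [],
    "logging-lib":   [],
    "http-client":   ["logging-lib"],
}

# Components flagged by a (fictional) vulnerability feed.
KNOWN_VULNERABLE = {"logging-lib"}

def resolve_all(root):
    """Breadth-first walk collecting every direct and indirect dependency."""
    seen, queue = set(), deque(DEPENDENCY_GRAPH[root])
    while queue:
        dep = queue.popleft()
        if dep in seen:
            continue
        seen.add(dep)
        queue.extend(DEPENDENCY_GRAPH.get(dep, []))
    return seen

def vulnerable_components(root):
    return sorted(resolve_all(root) & KNOWN_VULNERABLE)

print(vulnerable_components("my-app"))
```

Note that `my-app` never declares `logging-lib` itself; the vulnerable component is only reachable transitively, which is exactly why scanning direct dependencies alone is not enough.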
Second, cross-technology evaluations become possible: the full impact graph, which captures the practical significance of each vulnerability. The JFrog tool that provides this overview is JFrog Xray, which connects directly to JFrog Artifactory. Whichever tool you choose, it is crucial that you scan more than one technology layer. Only a comprehensive look at the entire tech stack can guarantee that no known vulnerabilities, direct or indirect, make it into production.
We have seen that we have little influence on most stages of the typical lifecycle of an IT security vulnerability.
In fact, there are only two stages that we can influence directly. The first is gaining the quickest and most comprehensive possible access to reliable vulnerability databases. It is essential not to entrust yourself to a single provider, but to rely on so-called "mergers" or "aggregators". Using such supersets can compensate for the economically motivated gaps of the individual providers. I named JFrog Xray as an example of such an aggregator.
The second stage of the typical lifecycle is in your own house: as soon as you know about a vulnerability, you have to act. Robust automation and a well-coordinated DevSecOps team help here. We will deal with precisely this stage from the security perspective in another article. However, we have already seen that strong test coverage is one of the critical elements in the fight against vulnerabilities. Here I would like to point once again to mutation testing, which is a very effective tool in TDD.
And what can I do right now?
If you don't want to wait that long to read my next article, you can, of course, take a look at my YouTube channel and find out a little more about the topic there. I would be delighted to welcome you as a new subscriber!