Feature Articles: Reducing Security Risks in Supply Chains by Improving and Utilizing Security Transparency
Vol. 22, No. 11, pp. 64–68, Nov. 2024. https://doi.org/10.53829/ntr202411fa9

Efforts to Improve and Utilize Security Transparency in Software Supply Chains

Abstract

Reviewing both the expectations of various stakeholders for using visualization data to reduce risks in software supply chains and the reality that such use has not progressed, we introduce the latest research trends for addressing issues in the use of visualization data, along with the security transparency technologies that NTT Social Informatics Laboratories is investigating.

Keywords: visualization data, SBOM, LLM

1. Introduction

In response to government trends in Japan and overseas, each business operator in a software supply chain is required to provide visualization data, including a software bill of materials (SBOM), and to respond to security issues. However, actually implementing these measures involves various practical and technical issues. In this article, we introduce the issues that businesses face when responding to various regulatory systems, the technical issues involved in producing and using visualization data, and the technologies that NTT Social Informatics Laboratories is investigating to expand the use of visualization data.

2. Expectations of various stakeholders

Institutions in the United States, the EU, and Japan require not only the provision of visualization data including SBOMs but also the management of visualization data and vulnerability management using such data to reduce supply chain security risks. However, from the perspective of each business operator in a supply chain, the requirements imposed by established systems and guidelines cannot be immediately reflected in system operations.
For example, there are various tools and technologies for generating visualization data, and it is necessary to understand and select appropriate ones. Business operators will also need knowledge and best practices to manage and operate these tools. In accordance with the systems and guidelines of each country [1], the following issues need to be addressed.

First, there are issues in providing visualization data. Each business operator is required to provide an SBOM in various situations, but the content of the information and the format of data and files may differ depending on the requester. As we explain later, even the same format can contain different information. Business operators that generate visualization data need to select tools that meet these requirements and learn how to use them.

Next, there are issues related to operations using visualization data. Guidelines stipulate that an SBOM must be retained for a certain period, and it is also necessary to determine and manage the frequency of updating the SBOM, such as when software is updated.

Security risk management using visualization data is also an issue. In addition to using visualization data to reduce security risks, each country's system requires business operators to certify compliance with security requirements, address vulnerabilities, and disclose information. Therefore, business operators need to consider how to prove security using visualization data, what to disclose, and how to continuously ensure security with visualization data. To address these issues, it is important for those who produce visualization data and those who use it to cooperate.

While the popularization of SBOMs has made it possible to identify software names and versions, it was reported in March 2024 that attackers had spent a long time infiltrating the development project of the open source software (OSS) XZ Utils and planting a backdoor [2].
This suggests a new challenge: using visualization data not only to check whether suspicious software has been mixed in but also to check whether even legitimate software is behaving illegitimately.

3. The status of visualization data

Software supply chains are formed across multiple different organizations and are becoming more complex. In addition, the reuse of OSS makes software security threats more serious. Visualization data are expected to increase the transparency of software and serve as a means to combat these threats, but they are not yet fully utilized. This is due to various issues related to visualization data. The life cycle of visualization data is roughly divided into the generation phase, the collection/management phase, and the utilization phase, and each phase has its own issues.

The generation phase is the phase in which users generate visualization data to manage device and system dependencies, licenses, and other configuration information. There are several issues with the software composition analysis (SCA) tools used to generate the visualization data. One is that differences in the specifications of SCA tools cause inconsistencies in the character strings output to the visualization data. Examples include the difference between the prefixes "Person:" and "Organization:" that SCA tools assign to the supplier-name string, and the difference between "organization-name inc" and "organization-name llc". Companies have been addressing this issue individually, and some have implemented matching methods using their own databases [3]. Another issue is that different SCA tools have different analysis performance. As an example from our research data, we examined MongoDB image files from Docker Hub and found that the SCA tool Syft output 295 dependency packages, while Trivy output 136.
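The supplier-name inconsistency described above can be partially absorbed by normalizing the strings before matching. The following is a minimal sketch of such a normalization step; the prefix handling follows the SPDX "Person:"/"Organization:" convention, but the suffix list and matching rules are illustrative assumptions, not any particular company's method.

```python
import re

def normalize_supplier(raw: str) -> str:
    """Normalize an SBOM supplier string for matching across SCA tools.

    Strips SPDX-style "Person:"/"Organization:" prefixes and common
    legal-entity suffixes, lowercases, and collapses whitespace.
    The suffix list here is illustrative, not exhaustive.
    """
    s = raw.strip()
    s = re.sub(r"^(Person|Organization):\s*", "", s, flags=re.IGNORECASE)
    s = s.lower()
    s = re.sub(r"[.,]", " ", s)
    # Drop corporate suffixes so "X inc" and "X llc" compare equal.
    s = re.sub(r"\b(inc|llc|ltd|corp|corporation|co|gmbh)\b", " ", s)
    return re.sub(r"\s+", " ", s).strip()

# Two tool outputs naming the same supplier differently:
a = normalize_supplier("Organization: Example-Name Inc.")
b = normalize_supplier("example-name llc")
assert a == b == "example-name"
```

Database-backed matching, as mentioned above [3], would go further by mapping normalized names to canonical supplier identities.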
An SCA tool should be selected according to the configuration information to be visualized and the purpose for which the tool is used. However, some research results suggest that no SCA tool meets the minimum requirements [4] in the guidelines issued by the National Telecommunications and Information Administration (NTIA) in the United States [5, 6].

In the collection/management phase, the generated visualization data are collected and managed. The issue is that it is difficult to handle visualization data in a unified way because of the lack of compatibility between formats. There are two major formats for visualization data, SPDX and CycloneDX: the former has many license-information items, and the latter has many security-information items. If collected visualization data are managed in only one of the two formats, some items will be missing. Therefore, it is important to consider a comprehensive format model and the development of an integrated platform to maintain compatibility [7].

In the final utilization phase, there is the security issue of sharing visualization data across different organizations. In other words, it is an issue of data integrity and access control: ensuring that visualization data are not illegally rewritten in the process of being shared. To ensure the authenticity of visualization data, technologies that apply the verifiable-credentials model on a blockchain to supply chains are being studied [8].

Issues related to visualization data differ by phase, and there are both microscopic issues, such as inconsistent representation in visualization data, and macroscopic issues related to their management and utilization. Since these issues are not independent, it is difficult for a single company to address the barriers to the penetration and utilization of visualization data.
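The format-compatibility issue above can be illustrated with a merged model that keeps both strengths. The sketch below uses heavily simplified subsets of the real SPDX JSON and CycloneDX field names ("versionInfo", "licenseConcluded", the "licenses" and "vulnerabilities" structures); a real comprehensive model would need to cover far more fields.

```python
from dataclasses import dataclass, field

@dataclass
class UnifiedComponent:
    """Minimal merged model: license fields (where SPDX is rich) plus
    security fields (where CycloneDX is rich)."""
    name: str
    version: str
    licenses: list = field(default_factory=list)
    vulnerabilities: list = field(default_factory=list)

def from_spdx(pkg: dict) -> UnifiedComponent:
    # Field names follow SPDX JSON packages, simplified here.
    return UnifiedComponent(
        name=pkg["name"],
        version=pkg.get("versionInfo", ""),
        licenses=[pkg.get("licenseConcluded", "NOASSERTION")],
    )

def from_cyclonedx(comp: dict, vulns: list) -> UnifiedComponent:
    # CycloneDX keeps vulnerabilities in a separate top-level array,
    # linked by bom-ref; here the matching entries are passed directly.
    return UnifiedComponent(
        name=comp["name"],
        version=comp.get("version", ""),
        licenses=[l["license"]["id"] for l in comp.get("licenses", [])],
        vulnerabilities=[v["id"] for v in vulns],
    )
```

Loading both formats into one model like this is one way an integrated platform could avoid losing items when only a single format is stored.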
The direction of solutions based on organizing the various issues of visualization data has been discussed [9–11], but only a few papers have made technical proposals based on actual issues. In the Security Transparency Consortium, each company shares its knowledge and exchanges technical opinions to popularize visualization data.

4. Enhancing security operations using visualization data

Vulnerability management is a security operation that will be greatly changed by using visualization data. Vulnerability management involves the collection of vulnerability information, confirmation of vulnerability risks, and analysis of the impact on the organization [12]. The first step in vulnerability management is to understand the configuration of the hardware and software used by the organization. Accurately identifying the configuration makes it possible to accurately identify vulnerabilities. Examples of methods for identifying the configuration include the use of management sheets and package-management systems. Because the system configuration changes with system updates, and some software is not managed by package-management systems, these methods have problems such as omissions from management and an increase in management workload. These problems can be solved by using visualization data to obtain accurate and up-to-date configuration information.

Other problems may then arise. In vulnerability management, multiple pieces of information are used to analyze the impact of vulnerabilities. Examples include the severity of the vulnerability, availability of exploit code, actual damage status, communication status, and process status. Because a security operator or developer uses these data to determine impact, visualization data enable vulnerabilities to be grasped accurately, including vulnerabilities that were previously overlooked; as a result, vulnerabilities can no longer be managed with the same approach as before.
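Once accurate configuration information is available, identifying vulnerabilities reduces to matching components against advisories. The following toy sketch shows the idea with an advisory feed keyed by exact package name and version; the data are illustrative (except CVE-2024-3094, which is the real identifier for the XZ Utils backdoor), and real matching uses identifiers such as CPE or PURL and affected-version ranges.

```python
# Toy SBOM-derived component list and a hypothetical advisory feed.
sbom = [
    {"name": "xz-utils", "version": "5.6.0"},
    {"name": "zlib", "version": "1.3.1"},
]
advisories = {
    ("xz-utils", "5.6.0"): "CVE-2024-3094",  # the XZ Utils backdoor
}

def match_vulnerabilities(components, feed):
    """Return each component that matches an advisory, with its CVE id."""
    findings = []
    for c in components:
        cve = feed.get((c["name"], c["version"]))
        if cve:
            findings.append({**c, "cve": cve})
    return findings

print(match_vulnerabilities(sbom, advisories))
# [{'name': 'xz-utils', 'version': '5.6.0', 'cve': 'CVE-2024-3094'}]
```

The accuracy of this matching is only as good as the configuration information, which is why obtaining it from visualization data rather than manual management sheets matters.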
Therefore, along with research and development (R&D) on visualization data, R&D on vulnerability countermeasures is also required. The following are two such countermeasures.

The first is a technology that visualizes the communication activities occurring on devices to narrow down the vulnerabilities that need to be addressed first. With this technology, communication events can be linked to information on the software that generated them. Therefore, information such as "software X version Y communicated with the global Internet protocol (IP) address Z" can be visualized. Since communication information is used to determine the impact of a vulnerability, it can be used to narrow down, on the basis of the communication destination, the high-risk vulnerabilities that need to be addressed preferentially.

The second technology analyzes and visualizes the programs that are executed when a device starts up and those that are executed periodically. Since whether software is running is part of the information used to determine the impact of a vulnerability, this makes it possible to narrow down the software on a device that should be given priority during a vulnerability check. We hope to advance the use of visualization data by developing technologies that solve the problems associated with its use.

5. Initiatives to expand the use of visualization data

The device and system-configuration information described in visualization data is mainly used for understanding dependencies and managing vulnerabilities. However, these are only examples of using visualization data in isolation.
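The first countermeasure above amounts to a join between observed connections and the software that owns each process. The sketch below is a hypothetical illustration of that join, not the actual technology; all names and data are toy values, and the IP addresses come from documentation ranges.

```python
# Observed outbound connections, keyed by process id (toy data).
connections = [
    {"pid": 412, "remote_ip": "203.0.113.10"},
    {"pid": 981, "remote_ip": "192.0.2.55"},
]
# Mapping from process to the package that owns it (toy data).
process_to_package = {
    412: {"name": "curl", "version": "8.5.0"},
    981: {"name": "openssh-server", "version": "9.6"},
}
# Packages with open vulnerabilities, e.g. from SBOM-based matching.
vulnerable = {"curl"}

def prioritized_findings(conns, pkg_map, vulnerable_names):
    """Yield (software, version, destination) records for vulnerable
    software that actually communicated, for preferential handling."""
    out = []
    for c in conns:
        pkg = pkg_map.get(c["pid"])
        if pkg and pkg["name"] in vulnerable_names:
            out.append((pkg["name"], pkg["version"], c["remote_ip"]))
    return out
```

Here a vulnerability in curl would be prioritized because the software both has a known vulnerability and communicated externally, while openssh-server's connection alone does not raise its priority.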
By using multiple sets of visualization data across a supply chain, more extensive use can be expected, such as identifying erroneous configuration information from the differences among sets of visualization data, or compensating for configuration information missing due to SCA performance on the basis of the dependency and co-occurrence characteristics of configuration information. By dependency and co-occurrence characteristics we mean, for example, that if package D is described in the visualization data and D depends on packages A and C, then D, A, and C have a co-occurrence relationship. We are building a platform to manage visualization data on a large scale and examining techniques to capture patterns by analyzing the characteristics of configuration information as described above. We are also investigating techniques that use large language models (LLMs) to estimate and supplement missing packages. We believe that these techniques will increase the value of the configuration information in visualization data, contribute to the future spread of visualization data, and lead to the construction of highly transparent software supply chains.

To further strengthen supply chain security, it is important not only to increase transparency and visualize risks as described above but also to deal with risks appropriately and use the experience for subsequent measures. In addition to visualization data, we are developing technologies to visualize risks in the development phase. We have established a source-code-dependency analysis technology that comprehensively detects risks through lexical string analysis. We are also conducting technical verification of source-code analysis using an LLM and aim to establish a new risk-detection technology that semantically analyzes processing content, which was difficult with lexical features alone [13].
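The co-occurrence idea described above can be sketched as a simple inference over a dependency graph learned from many SBOMs: if a listed package is known to depend on others, their absence suggests a gap caused by SCA-tool performance. The package names and graph below are toy data, and this sketch ignores the statistical weighting a real technique would need.

```python
# Dependency relations aggregated from many SBOMs (toy data):
# D depends on A and C; C depends on B.
known_dependencies = {
    "D": {"A", "C"},
    "C": {"B"},
}

def infer_missing(observed: set, dep_graph: dict) -> set:
    """Return packages implied by dependency co-occurrence but absent
    from the observed SBOM contents."""
    implied = set()
    for pkg in observed:
        implied |= dep_graph.get(pkg, set())
    return implied - observed

# An SBOM listing D and C but not A or B likely has two gaps:
assert infer_missing({"D", "C"}, known_dependencies) == {"A", "B"}
```

A candidate set produced this way could then be passed to an LLM-based estimation step to decide which supplements are plausible.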
We are also investigating automated vulnerability analysis and risk estimation, which require a high level of expertise. This task is intertwined with natural language processing of vulnerability descriptions, source-code processing, analysis capabilities, and personal experience and knowledge, so it requires a high level of technology to automate. We are actively using LLMs, which are strong in natural language and code analysis, breaking the task down into smaller tasks and testing their effectiveness. In one validation, we tested whether an LLM could identify whether a code change was a fix addressing the point at which a vulnerability is triggered. The results indicate that the identification accuracy was somewhat high even with zero-shot prompts. However, the accuracy tended to be lower for vulnerability types with many trigger points, making it difficult to treat all vulnerabilities in the same way [14]. Another study demonstrated that it is difficult to automatically fix vulnerabilities when the fixes span multiple files [15]. On the basis of these findings, we think it is important to properly decompose the actual problem and examine the extent to which domain specialization is possible with an LLM. In the area of vulnerability, no single technology can replace all others.

By pursuing both these studies and the technical studies for ensuring transparency through visualization data, gaps between the specifications and the usage of each technology will be reduced, leading to practical technologies that enhance the security of software supply chains.

6. Conclusion

This article introduced NTT Social Informatics Laboratories' research activities in the competitive domain based on social issues discussed in the collaborative domain of the Security Transparency Consortium. We will continue our research activities so that visualization data enable users to use software with confidence.

References