In the kitchen and the software supply chain

By Harish · May 6, 2025 · 18 min read


Software supply chain security has become more relevant in the last decade as more and more organizations consume, develop and deploy containerized workloads. 

Software is inherently complex, so an analogy from an area of life that we can all relate to should help. Here’s a conversation about cooking lasagna!

“Do you need any help?”

“No, it’s fine. I have done this a thousand times, thanks.”

“That meat packaging is unusual. It’s just a thin plastic bag. Where did you get that?”

“It was a bargain. A young chap knocked on the door earlier and said he was selling meat. He had a cooler full and I got 500g of meat for £1.00.”

“So, where did it come from and what even is it? It’s a strange colour.”

“I don’t know. It looks a bit like minced pork so it should be fine.”

“But he didn’t say what it was – he just said it was ‘meat’?”

“I’m sure it will be fine, trust me.”

“Sorry, but I’m not eating that. We don’t know what it is, where it came from, how it’s been stored. I don’t care what it looks like. I want meat that has come from a reputable butcher. There’s even an animal welfare question here – I bet the poor animals, whatever they were, have been stolen. It happens.”

It is probably fair to assume that the evening meal was quickly changed to cheese-on-toast and the suspect “meat” was despatched to the bin.

In choosing to buy a cheap or free product from an unknown source, the person cooking had subverted the supply chain. Ordinarily, the shopping process involves going to a known and trusted supermarket or local butcher, picking a product from the carefully monitored refrigerated shelf, and examining the label and the “use by” date. Trust evolves over time, and we all have our favourite suppliers for specific products, such as food and even cars. That trust is based on prior shopping experience, recommendations from others, online reviews and, of course, value for money. In the case of cars, that trust extends to reliability, comfort and safety.

The conversation shown above can easily be related to containerized software products consumed by an organization. One of the clear differences is that you can’t smell software or decide not to use it because it “looks a funny colour.” There are many other characteristics of software assets that we need to examine in order to satisfy ourselves that the software is safe to use.

The software supply chain

In some cases, software is written and consumed within the confines of the same company, which can create an expectation that the provenance of the components is assured. But what if the base container image used to host an application has questionable origins?

In other scenarios, software is written and assembled specifically for an organization by a third-party provider, with the same questions to be asked over the base image, together with a need to understand what has been added to deliver the required functionality.

Any time something is taken from a third party or from a repository of available content, it establishes a perceived relationship of trust, either by design or by implication.

Container image maintenance

Many container images offering a range of useful software are available on a number of public repositories. The consumers of these images are responsible for due diligence regarding the quality and currency of the content. When an issue is found, there is no guarantee of how quickly the image will be updated, often leaving users trying to maintain the image themselves with varying levels of success.

As a consequence, many organizations opt to use a base container image that is as small as possible, only containing the bare minimum of core components. It is then up to the organization to layer onto the container the components that they need. Examples of the components to be added include language frameworks, middleware, integration technology and runtimes, to name just a few.

This approach enables the organization to maintain the container image accurately and quickly when issues arise: if you have built an image from a simple, well-understood source, then you can probably maintain it over time. This technique can also lead to some of the smallest container images possible, which is an important consideration for operational efficiency and provides a basis for good cyber hygiene processes.
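As a concrete sketch of this layering, the Dockerfile below starts from Red Hat’s UBI minimal image and adds just a Java runtime and the application. The image tag, package name and application path are illustrative assumptions rather than details from any particular pipeline.

    # Start from a deliberately small base image
    FROM registry.access.redhat.com/ubi9/ubi-minimal:latest

    # Add only the components the application actually needs
    RUN microdnf install -y java-17-openjdk-headless && microdnf clean all

    # Copy in the application (the path and jar name are illustrative)
    COPY target/app.jar /opt/app/app.jar

    # Run as a non-root user as part of good cyber hygiene
    USER 1001
    CMD ["java", "-jar", "/opt/app/app.jar"]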

Building the container image

Containerized applications typically involve a runtime element such as a compiled Java application or a packaged Node.js application. The software components required to build an application may not be necessary, or even appropriate, to include in the container image that is used to run the application in production. It is sensible to keep production images as small as possible, and it is also good practice to keep extraneous components out of the image: tools used for compilation and build automation could be used to perform an unwanted action by an attacker who manages to gain access to the running pod.

In another food-related analogy, the building of container images can be compared to homemade soup. You may combine all the ingredients together and cook the soup in an old and clearly well-used saucepan, but you then transfer the soup to a clean, decorative and warmed serving tureen to present it to your guests at the table.

The equivalent of transferring the soup to a new receptacle is the extraction of the executable application from the builder container image and moving it to a runtime container image in which only the components necessary to run the application exist.
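In Docker terms, this transfer is a multi-stage build. Here is a minimal sketch for a Maven-built Java application; the image names and the jar path are illustrative assumptions.

    # Stage 1: the well-used saucepan - a builder image with a JDK and Maven
    FROM docker.io/library/maven:3.9-eclipse-temurin-17 AS builder
    WORKDIR /build
    COPY pom.xml .
    COPY src ./src
    RUN mvn -q package

    # Stage 2: the serving tureen - a small runtime-only image
    FROM docker.io/library/eclipse-temurin:17-jre
    # Only the built artefact is carried over; Maven, the JDK and the
    # source code stay behind in the builder stage
    COPY --from=builder /build/target/app.jar /opt/app/app.jar
    CMD ["java", "-jar", "/opt/app/app.jar"]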

Can the code changes be trusted?

When we investigate the application in the container image, we can see that it behaves as expected: it responds with the correct answers when exercised by specific testing activities, following static and dynamic application security testing (SAST and DAST) methodologies. But can we be certain that it was actually built from our source code, or is there a subtle difference in the code that contains malicious content intended to send private data to a third party? One of the worst kinds of attack occurs when the application continues to behave as it should while additionally stealing data or having a negative impact on another system.

Our cooking analogy is not particularly helpful here, so instead we turn to history. For many centuries, personal seals have been used to assert the authenticity of documents. The wax seal, used on a folded document, provides evidence that the missive has not been tampered with in transit from the author to the recipient.

In software terms, there is a requirement to sign artefacts in two different areas of container development. The first is with respect to each code change committed to a source code repository, which can attest to the identity of the person or organization performing the commit. In the same manner that the seal attests to the identity of the author, a signed commit of a change to source code undeniably identifies the person who made the change. Any changes to the code that do not have an associated signature should be rejected, as shown in the automated software build process in Figure 1.

Figure 1: Container image creation with source code commit signing

The developer interaction with the process, shown on the left-hand side of Figure 1, is to create and commit a code change with an associated signature. The source code management system then triggers an automated build process, and the first action of the build is to validate that the new commit has an acceptable signature. If the signature is not acceptable, the build will not proceed and a new container image will not be created. If the build is rejected, further action is required to investigate and reverse the commit.
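The process described here is tool-agnostic, but Git’s built-in signing support illustrates both halves of it. The sketch below assumes GPG keys; SSH-key signing or Sigstore’s gitsign would work the same way, and the key ID is a placeholder.

    # Developer side: configure Git to sign every commit
    git config --global user.signingkey <KEY_ID>
    git config --global commit.gpgsign true
    git commit -m "Fix input validation"

    # Pipeline side: verify the signature on the new commit and
    # stop the build if verification fails
    git verify-commit HEAD || { echo "Unsigned or invalid commit"; exit 1; }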

Vulnerabilities in container images

In the above process, a new container image is produced, assuming the signature is accepted. Even after the best efforts of the development team, there may still be vulnerabilities in one or more components used within the container. Vulnerabilities can come from two places: a vulnerable component in the base image, or a vulnerability introduced by one of the steps of the container build process shown in Figure 1.

Container base image management

In the case of the base container image, teams should regularly perform vulnerability scans on these images and, either manually or automatically, rebuild them when necessary. This process is shown in Figure 2.

Figure 2: Container build process with base image vulnerability maintenance

The top part of Figure 2 is the base image management process. The first step of this process is to pull the base image from the container registry and perform a vulnerability scan. If the image doesn’t have any vulnerabilities, then there is no need to take further action and the pipeline ends. If vulnerabilities are detected, however, the base image should be updated, or a new image created from a Dockerfile. Once the new base image is created, it is stored in the container registry, ready for use by the application build process shown in Figure 1 and in the bottom section of Figure 2.
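As a hedged sketch of the top half of Figure 2, the commands below use Trivy, one widely used open source scanner (Clair, which backs Red Hat Quay, plays the same role). The registry name, image name and severity gate are illustrative.

    # Pull the current base image and scan it for known vulnerabilities;
    # a non-zero exit code signals that a rebuild is needed
    podman pull registry.example.com/base/runtime:latest
    if ! trivy image --severity HIGH,CRITICAL --exit-code 1 \
          registry.example.com/base/runtime:latest; then
        # Rebuild the base image from its Dockerfile and store the
        # updated image back in the registry for the application builds
        podman build -t registry.example.com/base/runtime:latest \
            -f Dockerfile.base .
        podman push registry.example.com/base/runtime:latest
    fi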

Vulnerabilities in the new container image

After the new container image has been created, it should be scanned for vulnerabilities to make sure none were introduced during the build process. This scan should be performed before the image is moved to an area of the container registry from which other automated processes, or users, will pull the image. This process is shown as an expansion of the container build process in Figure 3. 

Figure 3: Container build process with application image vulnerability scan

Not every vulnerability in a component will be exploitable, since exploitability depends on the context in which the component is used. It is still good practice to keep each component up to date to avoid as many issues as possible.

As shown in Figure 3, if the container image passes the vulnerability scan, it is safe to store it in the registry. 
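In pipeline terms, the scan becomes a gate between a staging area of the registry and the released area that other processes pull from. A minimal sketch, again using Trivy plus skopeo, with illustrative registry paths:

    # Scan the freshly built image; promote it to the released area
    # of the registry only if the scan passes
    trivy image --severity HIGH,CRITICAL --exit-code 1 \
        registry.example.com/staging/myapp:1.2.3 \
      && skopeo copy \
           docker://registry.example.com/staging/myapp:1.2.3 \
           docker://registry.example.com/released/myapp:1.2.3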

Built right, by the right people?

For an organization creating container images, the actual pipeline will probably involve a few more steps than the simple example shown in Figure 3. It is likely that source code analysis will be used to check that the code complies with standards and best practices, and various other tests are likely to be run on the container image in addition to the vulnerability scan already discussed. These additional steps may involve Red Hat solutions or third-party products.

When examining a container image, how can we be certain that these steps have been performed properly and that the tests and assessments have all had the right policies and rules applied? There is also a need to establish if the container image was actually built by the team that we expected to build it. Has a third party intercepted the build process and injected a new container image that looks like the image we expect, but actually contains harmful code?

At this stage of the process we are calling into question two things with respect to the container image: 

  • The authenticity of the image: Was it produced by the right people?
  • The provenance of the image: Was it produced by the right process?

To assert the authenticity of the image we turn to a second signing process—signing the container image itself. In the same manner as signing code commits, a container signature identifies the person or organization who created the container image. 

The presence and validity of the container signature can then be verified when it is deployed into an environment, giving the consumer of the image confidence that the image has been built appropriately and an unauthorized image has not been added to the registry by a nefarious third party. This process is even more important where there is a physical gap between the organization that produces the container image and the organization that consumes it.

For example, a company may subcontract the production of a specific container image to a specialist third party company. In this scenario, the container image is transferred between the two companies, usually via a shared container registry. This increases the risk of a man-in-the-middle attack to replace or tamper with the image. In this situation an agreed process for signing and validating container images is critical. 
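Sigstore’s cosign is one common open source tool for signing and verifying container images (and it underpins Red Hat Trusted Artifact Signer). A minimal key-pair sketch, with an illustrative image reference:

    # Producer side: sign the image after pushing it to the shared registry
    cosign sign --key cosign.key registry.example.com/released/myapp:1.2.3

    # Consumer side: verify the signature before deploying the image;
    # verification fails if the image was replaced or tampered with
    cosign verify --key cosign.pub registry.example.com/released/myapp:1.2.3

In a Kubernetes environment, the same verification can also be enforced automatically at deployment time by an admission controller, so that unsigned images are never scheduled.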

Through the use of signatures we now know that the source code changes have been performed by appropriate individuals and the container images have been validated. There is little debate concerning who performed those actions.

Beyond confidence in who performed the changes to code and who built the container, the testing phase provides a level of confidence in the quality of the application within the container image. It is important to verify that all steps in the container build and validation process have been performed correctly and to maintain an audit record of the operations within that process.

This brings us back to our cooking analogy.

“Did you follow the recipe accurately for the lasagna?”

“Of course, but I have done it so many times that I really don’t need the book.”

“Are you sure you didn’t miss a step because it looks a little pale? I’m not sure you cooked the sauce for long enough.”

“Well I think I did it correctly, but I’m not absolutely certain.”

“Hmm, have you also remembered that we have a lactose-intolerant guest this evening and we needed to use vegan cheese? Did you use vegan cheese on the lasagna or ordinary cheese?”

“I think I used vegan cheese, but both packages are open so I can’t be sure.”

“Really? How can we serve this to everyone now? It could make someone really unwell.”

It is probably fair to assume that the guest that evening was offered the easy standby of (vegan) cheese-on-toast.

In software terms, how can we be certain that the container image build process has been performed in its entirety following all the right steps and the right testing processes? Storing a record of the build process performed in an immutable repository gives us a way to verify the exact steps performed, the source code used and the container image produced. Even years after the event, it should be possible to validate who did what, to which assets, with which result.
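One way to implement this, sketched below with cosign and illustrative file and image names, is to attach a signed build attestation (for example, SLSA provenance) to the image itself, so that the record travels with the artefact:

    # Attach a signed attestation recording how the image was built
    cosign attest --key cosign.key --type slsaprovenance \
        --predicate provenance.json registry.example.com/released/myapp:1.2.3

    # Later, even years after the build, verify and inspect the record
    cosign verify-attestation --key cosign.pub --type slsaprovenance \
        registry.example.com/released/myapp:1.2.3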

What’s in the container image?

Many people will purchase pre-packaged sandwiches, salads, cakes and treats when they go out for a day. For most, the only consideration is what they are in the mood to eat, but others need to give a great deal of consideration to the ingredients for religious, ethical or medical reasons. Most suppliers of such items are very good at disclosing everything that is in the products, giving the consumer an informed choice. Finding out exactly what is in a software container image may not always be as simple. 

Software components often have multiple layers of dependencies, resulting in a large number of components being pulled into a container image. This can make it difficult to get an accurate view of everything that is being used. A software bill of materials (SBOM) acts like the detailed ingredient list on the side of a packaged sandwich, providing a detailed inventory of everything in the container image. The production of an SBOM should be part of the container build process, as shown in the outline pipeline in Figure 4.

Figure 4: Creating and storing a software bill of materials as part of the container build process

SBOMs can be stored alongside the container image in the image registry so that they are easily accessible for examination. 
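As a sketch of that step, Syft is one open source tool that can generate an SBOM from a container image, and cosign can attach the result to the image in the registry. The format, file name and image reference below are illustrative:

    # Generate an SBOM for the built image in SPDX JSON format
    syft registry.example.com/released/myapp:1.2.3 -o spdx-json > sbom.spdx.json

    # Store the SBOM alongside the image in the registry
    cosign attach sbom --sbom sbom.spdx.json registry.example.com/released/myapp:1.2.3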

How Red Hat can help

The Red Hat Trusted Software Supply Chain is a collection of Red Hat solutions that help organizations build security into the components, processes and practices for the creation and delivery of cloud-native applications. The relationships of the various elements are shown in Figure 5 with a description of each element following.

Figure 5: Red Hat Trusted Software Supply Chain

Red Hat Trusted Application Pipeline

The Red Hat Trusted Application Pipeline is a grouping of products that delivers the following capabilities.

Red Hat Developer Hub

Red Hat Developer Hub is an internal developer portal that enables platform engineers to deliver software templates that simplify the creation of a complex set of assets for delivering a container build and deployment process. Developer Hub abstracts the complexity of the process from developers and other users so they can focus on the elements of the process that deliver genuine business value.

Think of Developer Hub as the supplier of lasagna sheets, chopped tinned tomatoes and cheese. Yes, you can make all of these things from flour, tomato plants and raw milk but who has the time for that?

Red Hat Trusted Artifact Signer

Red Hat Trusted Artifact Signer enables teams to cryptographically sign and verify software artefacts, helping them to have greater confidence in the security and trustworthiness of their software supply chain.

Think of Trusted Artifact Signer as the mechanism for providing a quality assurance label on food packaging, stating where and when the food was produced and by which farmer.

Red Hat Trusted Profile Analyzer

Red Hat Trusted Profile Analyzer enables teams to manage the Software Bill of Materials (SBOM) produced as part of a software delivery pipeline. To do this, Trusted Profile Analyzer uses two sources of information:

  • Vulnerability Exploitability eXchange (VEX) information
  • Common Vulnerabilities and Exposures (CVE) information

Trusted Profile Analyzer helps teams to get a real-world view of the vulnerability of a component within the context of how the component is used. This provides a more useful risk profile based on the use of in-house software, third-party content and open source content.

Think of Trusted Profile Analyzer as a mechanism to assess all of the content being prepared in the kitchen within the context of your guests’ tastes and dietary requirements. Full-fat cheese and meat may be acceptable one evening but not on another when you have vegetarian and lactose-intolerant guests.

Red Hat Universal Base Images

Red Hat Universal Base Images are a curated collection of container images that anyone can use. Universal Base Images are available in varying sizes with different content included, such as different language frameworks. Users may freely share the images they build on Universal Base Images with anyone.

Think of Universal Base Images as the base building blocks of a recipe that give you a simple starting point in which you can have confidence. They are the contents of the spice rack, the cupboard and the fridge that make life easier, but they are not a complete ready-to-go meal.


Red Hat OpenShift

Red Hat OpenShift is the orchestration and delivery platform on which all elements described so far operate. OpenShift includes a fully functional continuous integration and continuous delivery (CI/CD) solution based on the popular open source projects Tekton and Argo CD. However, any CI/CD solution can be used alongside the other offerings described above.

Think of OpenShift as the kitchen, the pots, the pans, the utensils, the dining room, crockery, knives, forks and wine glasses.

Red Hat Advanced Cluster Security for Kubernetes

Red Hat Advanced Cluster Security for Kubernetes focuses on the security and compliance of cloud-native applications through the build, deploy and run phases. Operating across Kubernetes distributions on cloud platforms and in on-premises environments, Red Hat Advanced Cluster Security enables everyone to participate in the security of applications by presenting clear, actionable information about vulnerabilities and policy violations.

With direct integration into the build and deploy automation systems, Red Hat Advanced Cluster Security helps secure workloads and assert the compliance of the entire platform.

Think of Red Hat Advanced Cluster Security as a mechanism to assess the food quality and cleanliness of the kitchen as meals are prepared and served.

Red Hat Quay

Red Hat Quay is a scalable private registry for the management of container images. Quay enables teams to share and store container images between development, quality assurance and production environments whether they be geographically close or distributed around the world. Quay also includes a vulnerability assessment process to validate the containers stored.

Think of Quay as a refrigerator that securely stores food at the right temperature to keep it safe.

Final thoughts

The creation and delivery of containerized workloads is a complex process, often involving many organizations and locations. Validating the assets used and the containers produced requires a number of steps to verify that there has been no tampering or replacing of assets in the supply chain. The Red Hat solutions that form the Trusted Software Supply Chain can help organizations gain confidence in the assets they use and the processes they have adopted to more securely create and deploy containerized workloads.


