
Feature Request: Limit Container Images to those that are inherited from another feed



  • Requested Feature:
    Have a "Inheritance" check for a Docker Feed. This setting would point to another Docker Feed (lets call it the "Base Images" feed). When a container is pushed to a docker repository ProGet checks to see if it has the Inheritance check setup.

    If it does, then it calls docker image history on the container that was just pushed. This will list the IDs of the container images that were used to create the recently pushed image.

    ProGet will then check the list of image IDs returned, comparing them against the images in the "Base Images" feed. If the pushed image does not have one of the images from the "Base Images" feed in its history, then it inherited from a non-approved source, and the push is blocked.
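
    A rough sketch of how the check's input could be gathered (the image name here is made up for illustration):

    ```sh
    # Sketch only -- the image name is hypothetical.
    # List the IDs of the layers/images in the just-pushed image's history:
    docker image history --no-trunc --format "{{.ID}}" myapp:1.0

    # The proposed feature would compare these IDs against the images in the
    # designated "Base Images" feed and reject the push if none of them match.
    ```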

    Reason for Feature:
    Many companies want to control the containers that get used in their systems. However, this is difficult to do without overly restricting development.

    This feature would allow a group of "blessed" containers to be created and placed in a highly restricted feed, while other container feeds (with more permissive permissions) could contain any container, as long as it inherits from a "blessed" container somewhere in its history.

    This lets operations teams know that no rogue containers with bad dependencies are being staged for deployment.


  • inedo-engineer

    Hi @Stephen-Schaff_8186, thanks for the feature request!

    I like it. This is a good idea, and it sounds like a way for us to enforce, on the ProGet side, the "base image" pattern we want to promote with our products. This is how we do it on the BuildMaster side: https://inedo.com/buildmaster/containerized-development-in-buildmaster

    Would you mind taking a look at it, and sharing your opinion on the approach? This is a solution we developed from the "outside looking in", since we simply don't have firsthand experience with this problem.

    We are focusing a lot of our efforts on what I call "why to" content. This is even more important than the "how to" approach (which the above article demonstrates), because it speaks to a problem many folks won't even realize they have. "This is just the way Docker is" is something I hear in the field all the time. My take is -- if we don't have a lot of detailed "why to" content about a feature, then it may as well not exist.

    To give you an idea of what I'm talking about, our team just finished writing a pretty comprehensive and easy-to-understand guide on .NET 5 migration; the articles are here -- https://blog.inedo.com/tag/net-5 (fill out the form on any of those posts to get a copy).

    We've got some other guides in the pipeline, and I want to start on containerized development as well. So the more feedback from advanced folks in the field, like yourself, who think of these ideas, the better!

    Cheers.



  • @apxltd

    I like the idea of baking the concept of a base image into the build layer, though I see a few issues with it (one of which, from reading the document, you may have dealt with already).

    The first is that the Dockerfile is usually thought of as the source of truth for the container build. Any configuration set up in the auto-build should look at the final stage in the Dockerfile (so you get the real base image if it is a multistage file). It could allow the user to select any image from that container's history as the "base" image, but just selecting any base image could put the selection at odds with the Dockerfile.
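
    To illustrate (a minimal sketch; the image names are just examples), only the last FROM in a multistage file determines what the pushed image actually inherits from:

    ```dockerfile
    # Minimal multistage sketch (image names are illustrative).
    FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
    # ...build/publish steps would go here...

    # The pushed image inherits from this final stage's base,
    # not from the sdk image above.
    FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
    ```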

    BuildMaster seems to have a "we will build your Dockerfile for you" feature, which is fine until you need some complicated stuff in the Dockerfile that is not supported as a feature.

    The second part is that this only works for developers who know about it and want to follow the policy that says to use it. From an operations standpoint, more surety is needed. ProGet is where containers are pulled from for operations. If there were no way for a container image to get into a feed without inheriting from a "base" container image, then we would know we can trust that at least our images come from approved sources.

    I think that when this becomes a "Must Have" for me, I will see if I can meet the need with a webhook.

    Either way, I appreciate you taking a look at the feature request.


  • inedo-engineer

    @Stephen-Schaff thanks for taking a look!

    In addition to container development patterns, I'm also researching so-called software supply-chain attacks... and I think that the inherent complexity of Docker and containerized development dramatically expands the attack surface.

    The first is that the Dockerfile is usually thought of as the source of truth for the container build. Any configuration set up in the auto-build should look at the final stage in the Dockerfile (so you get the real base image if it is a multistage file). It could allow the user to select any image from that container's history as the "base" image, but just selecting any base image could put the selection at odds with the Dockerfile.

    Can I pick your brain a bit more on this? What do you mean by "source of truth for the container build"?

    Is this Dockerfile typically stored in source control, alongside the application source code? In a world of cookie-cutter microservices and microapps, it seems these Dockerfiles should be nothing more than the absolute basics on top of a specific base image?

    Separately, but related... I just read up on multi-stage builds and the Builder pattern vs. Multi-stage builds in Docker, but I'm struggling to see how this all adds up to real-world use inside of organizations?

    Is this really a pattern that is emerging? Giving development teams not only control over their own base image, but over the tools to compile/build their code?

    This just seems to add another layer to what ought to be a very simple process, and more opportunities for supply-chain attacks.

    BuildMaster seems to have a "we will build your Dockerfile for you" feature, which is fine until you need some complicated stuff in the Dockerfile that is not supported as a feature.

    Definitely something I want our content to address; I think the "build it for you" approach is good for these cookie-cutter microservices/microapps, which seem like a better way to go than pushing all the Docker complexity onto development teams.

    I think that when this becomes a "Must Have" for me, I will see if I can meet the need with a webhook.

    Well, you definitely get the value in it 😉 -- but convincing others of the importance of this sort of feature is a real challenge, and that's where the content I mentioned comes in.

    Question: how much more would you pay (or encourage your organization to pay) for ProGet if this feature were there? 20%? 100%? 1000%? Why would it be worth that extra?

    I ask because that's an important way for us to start thinking about "value", which is what the check-signers want... and ultimately how we need to think about which features and content to invest in developing.



  • @apxltd said in Feature Request: Limit Container Images to those that are inherited from another feed:

    Is this Dockerfile typically stored in source control, alongside the application source code? In a world of cookie-cutter microservices and microapps, it seems these Dockerfiles should be nothing more than the absolute basics on top of a specific base image?
    Separately, but related... I just read up on multi-stage builds and the Builder pattern vs. Multi-stage builds in Docker, but I'm struggling to see how this all adds up to real-world use inside of organizations?
    Is this really a pattern that is emerging?

    This all goes together, as one is the cause of the other. In many ways, microservices are and should be cookie-cutter. But in many ways they cannot be (and still allow reasonable dev practices).

    Here is an example:

    Consider a medium-sized company that has 3 microservices. They use .NET Core 3.1.12 for their auto build. They set up their build servers and get things working great.

    A month goes by and they need a new microservice. They go to develop it, but .NET Core has been updated to a newer version. They are now faced with a choice:

    1. They update the build server to the newer version.
    2. They make a new build server.
    3. They develop with the older version of .NET Core.

    All three options have issues:

    1. Updating the build server means that the 3 microservices are also getting upgraded, something that may not be on their prioritized list of work at this time. (We almost never choose this option.)

    2. Making a new build server gets expensive, as almost all dependencies are always putting out new versions. (We have quite a few build servers because of this.)

    3. Using older tech stifles innovation and keeps features and bug fixes from later versions of dependencies out of the final product.

    We trade off between numbers 2 and 3, as number 1 is too risky for my company.

    Multistage Dockerfiles help fix this issue. Each build is done in its own sandbox. The initial 3 microservices can keep using .NET Core 3.1.12 (or a company-approved descendant image); they keep it because their Dockerfile has not changed.

    But the new microservice can use the latest version of the .NET Core SDK container (or a company-approved descendant image) in its Dockerfile. And, because of Docker, they can all run on the same build server.
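
    As a sketch of what this looks like in practice (service names and image tags are made up):

    ```sh
    # Each service's Dockerfile pins its own toolchain, e.g.:
    #   service-a/Dockerfile:  FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
    #   service-b/Dockerfile:  FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build

    # Both builds run side by side on the same build server,
    # each in its own sandbox:
    docker build -t service-a:2.3 ./service-a
    docker build -t service-b:1.0 ./service-b
    ```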

    Giving development teams not only control over their own base image, but over the tools to compile/build their code? This just seems to add another layer to what ought to be a very simple process, and more opportunities for supply-chain attacks.

    As my company's IT Software Architect, this is something I worry about a lot! Hence my request that started this thread. I need to know that all the containers in the multistage build file came from an approved ProGet source (not just the final one).

    We use mostly Microsoft technologies at my company. So if some developer goes and downloads a PHP or Ruby container and uses it to build a containerized application, I want them blocked as soon as they try to upload the image to ProGet. This lets me enforce our technology stack.

    How much more would you pay (or encourage your organization to pay) for ProGet if this feature were there?

    The costs to a company of allowing developers to "just use what works best for them" are big. Management wants to be able to move developers between teams. And as product dependencies get old and need to be replaced or upgraded, each migration plan takes time to come up with. If you have to make custom migration plans for .NET, Angular, React, Vue.js, Ruby, PHP, etc. as each has a new major version, it becomes very expensive.

    But developers really like to use the new shiny stuff, and many will just do it if they can. Multistage Dockerfiles make it even easier, because they don't have to ask for anything to be installed on the build server anymore. If you are not really careful, you can get into trouble fast.

    I am still planning out how I will enforce our tech stack as we move to containers in earnest. It could be parsing the Dockerfile, or it could be a ProGet webhook.
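
    For the Dockerfile-parsing option, a first cut might look something like this (a sketch only; the approved registry prefix is made up, and a real check would also need to skip FROM lines that reference earlier build stages):

    ```sh
    #!/bin/sh
    # Sketch: fail the build if any FROM line pulls from outside the
    # approved feed. APPROVED_PREFIX is a made-up example value.
    APPROVED_PREFIX="proget.example.com/base-images/"

    status=0
    for image in $(grep -iE '^FROM[[:space:]]' Dockerfile | awk '{print $2}'); do
      case "$image" in
        "$APPROVED_PREFIX"*) echo "approved: $image" ;;
        *) echo "NOT approved: $image" >&2; status=1 ;;
      esac
    done
    exit $status
    ```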

    Still, if there were a solution that just worked, it would be worth money to my company. I don't get to approve purchases, but I do recommend them a lot. (For example, I just got another ProGet license approved as a sandbox instance.)

    My company starts to balk after you get over a few thousand dollars unless the value is really big. For example, the Enterprise version is too expensive for just HA features (from my company's point of view). If this container feature were in the Basic version and the price went up around 30%, I don't think my company would complain.

    But small companies just don't need a feature like this. (Not sure if you have small companies in your customer demographics.)

    And in general you have to be careful about price hikes. ProGet has a nice position in the market right now. It has great features and is much cheaper than its competitors. You could lose some market share if you raise your prices too high.


  • inedo-engineer

    @Stephen-Schaff thanks again for all the thoughts. This is making a lot of sense.

    To summarize the multistage approach: "each build is done in its own sandbox so that the tools used to build the software are kept in lockstep with the software," or something like that.

    Anyways, I really dig it, and I like where your thoughts are on this! This is the sort of "engineering/process rigor" I always envisioned our tools helping companies achieve.

    Actually, I'm already thinking of how we can do all of this not just with ProGet (probably using a function I spec'd out ages ago... happy to share that idea with you at some point; I think it'd totally work), but also by integrating some great self-service workflows into BuildMaster for this sort of thing.

    Now all that said.... I'm an engineer by trade and by training, and maybe ten years ago I would jump on this idea, and code it all myself over a weekend 😉

    My company starts to balk after you get over a few thousand dollars unless the value is really big.

    As well they should! I do the same 😉

    The reason I ask the "how much more would you pay" question is to get a "gut feeling" for demonstrating/articulating value. It sounds like this gap is pretty big right now, and that's something I'd like to work on.

    This is more a marketing exercise, and not something I'd normally do in a venue like this, but the next InedoCon is a ways off, and you seem to have a great grasp on these topics already!

    Now, just to be clear, we don't plan on price hikes, especially for a feature like this, but one of our biggest challenges (thanks to an engineering CEO 😅) is demonstrating value to buyers.

    So working on this, some quick back-of-the-napkin math shows that "30% of $2k/year" is $600, or at most 6 developer hours of internal value at roughly $100/hour (whether that means saved or gained). That's not very valuable.

    What I'd like to demonstrate is the business case for why this feature is easily worth $10,000/year (and probably even $100k/year) to companies with 50-100+ developers. That would be very valuable.

    This is a really difficult exercise...


    It's actually pretty easy to demonstrate this kind of value with the high-availability feature alone. When 100 developers lose an hour of productivity due to a failure of ProGet, that's $10,000 lost. If your ProGet Basic instance goes kaput, you'll lose a LOT more than that.

    The problem with the high-availability feature, however, is the skills and mindset required to monitor/maintain high-availability systems. If you don't have an infrastructure team on call that knows how to monitor, how to respond, how to support developers, etc., then it's not a feature you can use.


    The value of this feature idea isn't so clear, at least in terms of Time (delivering quicker, increasing productivity), Money (lowering labor costs, increasing profits), or Risk reduction.

    But even if we figure those out, the major problem I see with this feature is the skill/complexity in using it.

    With the high-availability feature, you need to have a high-availability organization, but a lot of organizations already have that, so it fits right in.

    The knowledge/skills gaps seem much larger, and the problems an organization would need to have are complex. A buyer might think, "So an 'unapproved container' made its way to production, despite passing all the acceptance tests... is it really a problem?"

    Anyways, just brainstorming! But this is all part of the process.



  • @apxltd

    We do high availability on some of our systems, and it may be that we move to it for ProGet as we start doing more with containers.

    The value of this feature idea isn't so clear

    It is possible that you are underestimating the value that enterprises place on governance of their processes.


    As a note, I have done some more research into this feature request, and it will not be possible based on the container image alone. Multistage Dockerfiles do not pass any part of the Dockerfile that was used in a previous stage on to the later stages (aside from any files copied).
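
    This is easy to see firsthand (illustrative; the image name is made up):

    ```sh
    # The history of a multistage build's output contains only the final
    # stage's layers; the SDK image from an earlier FROM line is not
    # recorded in the pushed image at all.
    docker image history service-a:2.3
    ```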

    I will have to find some sort of process change for this part of our governance.


  • inedo-engineer

    @Stephen-Schaff said in Feature Request: Limit Container Images to those that are inherited from another feed:

    It is possible that you are underestimating the value that enterprises place on governance of their processes.

    It's really valuable, but from a marketing/sales perspective, it's hard to "compete" with the likes of ServiceNow for governance improvement. Containers are already too deep in the weeds for the folks with governance problems/pain points... and they can just "throw a guy they trust at the details, like Stephen" 😆

    @Stephen-Schaff said in Feature Request: Limit Container Images to those that are inherited from another feed:

    Multistage Dockerfiles do not pass any part of the Dockerfile that was used in a previous stage on to the later stages (aside from any files copied).

    That makes sense, but you could still validate that containers "inherit" from a known base image. In ProGet, you can navigate across base images.

