I have a particular hatred for the sandwich pattern: a Dockerfile sets up a "build" environment, you enter it and do a bunch of stuff in there, and then the results get ADDed into a different Dockerfile for the publishable asset. I know multi-stage builds mostly address this need, but a lot of people aren't using them, or have found cases where it's desirable to have the "running" context for the container, for example when a bunch of stuff is being volumed in (which is now possible at build time, but janky).
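For reference, the multi-stage form collapses the two Dockerfiles of the sandwich into one file, with only the artifact crossing the boundary (a generic sketch; the Go app and image names are made up for illustration):

```dockerfile
# Stage 1: the "build" environment half of the sandwich
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app ./cmd/app

# Stage 2: the publishable asset; COPY --from replaces the manual ADD step
FROM gcr.io/distroless/static
COPY --from=builder /out/app /app
ENTRYPOINT ["/app"]
```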
Or there's the "copy my Ansible playbooks into the container and run them there against localhost" approach, which Packer mitigates somewhat by at least running the playbook over SSH. But then Packer has no concept of multi-stage builds or of copying assets in from other containers.
There's a lot of room to do better in this space.
I don't like the sandwich either, it's too easy to accidentally miss a dependency. What I find myself doing more is building one Dockerfile as a base, and then doing `FROM $BASE` in the second to ease readability, caching, image pinning, etc.
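A minimal sketch of that split, assuming a published base image (registry, tag, and package names here are placeholders):

```dockerfile
# Dockerfile.base: the slow-moving build environment, built and pushed separately
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential ca-certificates \
 && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# Dockerfile: the app image, pinned to a specific published base
ARG BASE=registry.example.com/myteam/build-base:2024-01
FROM ${BASE}
COPY . /src
RUN make -C /src install
```

Bumping the base is then an explicit, reviewable change to one `ARG`, and the base layer stays cached across app rebuilds.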
I think it often comes about because developers want to be able to locally reproduce what happens on CI (good!), and an end-to-end Dockerfile can be a great piece of that (yay!). But as soon as devs are debugging a problem, changing a dependency, attaching a debugger, or whatever else, it becomes too burdensome to rebuild the whole thing every time. So you end up with the setup and finalization of the build in Dockerfiles, and some nebulous semi-manual stuff occurring in the middle.
This is workable-ish for the basic "develop and ship a product" use case, but it creates a bit of a crisis of repeatability: you've got a complicated CI config implementing all the mystery meat that falls in the middle of the sandwich, and separate documentation somewhere else explaining what the dev workflow is supposed to be, but no one is explicitly tasked with keeping the CI script and the dev workflow in sync, so they inevitably drift over time.
Plus, long and complicated CI-specific scripts are a smell all on their own, as was discussed here on HN a few days ago.
> But then Packer has no concept of multi-stage or copying in assets from other containers.
I don't know about containers specifically, since I've never bothered to use Packer for that process, but it does seem that Packer supports multi-step artifact production, and their example even uses Docker to demonstrate it: https://github.com/hashicorp/packer/blob/v1.9.5/website/cont...
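For what it's worth, a rough sketch of what that looks like in Packer's HCL config, using the `docker` builder with a post-processor chain (image and repository names are placeholders, and note this still isn't multi-stage in the Docker sense: the steps run in one container, with no `COPY --from` between containers):

```hcl
source "docker" "build" {
  image  = "debian:bookworm-slim"
  commit = true
}

build {
  sources = ["source.docker.build"]

  # Runs inside the container, much like RUN steps in a Dockerfile
  provisioner "shell" {
    inline = [
      "apt-get update",
      "apt-get install -y ca-certificates",
    ]
  }

  # The multi-step artifact handling happens here, after the container commit
  post-processor "docker-tag" {
    repository = "myorg/app"
    tags       = ["latest"]
  }
}
```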