a_t48 5 days ago

This is very cool. I often find myself composing together a mixture of Dockerfiles and shell scripts for various images (build X Dockerfile for Y arch, save it as a base, build Z Dockerfile on top of the last thing, with W args), and then having to run it different ways with different mounts, depending on whether you're on a developer machine/robot/CI, or running as a service/ad hoc/attaching. This _looks_ like it could solve some of that messiness. I like that it can just reference the output of a build without having to muck around with tags (which are system-global and can collide if you're not careful).

4
mikepurvis 5 days ago

I have a particular hate on for the sandwich pattern, where a Dockerfile defines a "build" environment that then gets entered, a bunch of stuff gets done in there, and the results are ADDed into a different Dockerfile for the publishable asset. I know multi-stage builds sort of address this need, but a lot of people aren't using them, or have found cases where it's desirable to have the "running" context for the container, for example if a bunch of stuff is being volumed in (which is now possible at build time, but janky).
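For comparison, the multi-stage version of that sandwich collapses into one Dockerfile, something like this (a sketch with made-up names, assuming a Go toolchain):

  FROM golang:1.22 AS build
  WORKDIR /src
  COPY . .
  # all the "build environment" work happens in this stage
  RUN go build -o /out/app ./cmd/app

  FROM alpine:3.19
  # only the artifact crosses over; no second Dockerfile, no manual ADD
  COPY --from=build /out/app /usr/local/bin/app
  ENTRYPOINT ["/usr/local/bin/app"]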

Or there's the "copy my Ansible playbooks into the container and run them there against localhost" approach, which Packer mitigates somewhat by at least running the playbook over SSH. But then Packer has no concept of multi-stage builds or copying in assets from other containers.

There's a lot of room to do better in this space.

a_t48 5 days ago

I don't like the sandwich either, it's too easy to accidentally miss a dependency. What I find myself doing more is building one Dockerfile as a base, and then doing `FROM $BASE` in the second to ease readability, caching, image pinning, etc.
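Concretely, that pattern looks something like this (image names made up):

  # Dockerfile.base is built and pushed separately, then pinned here.
  ARG BASE=myorg/app-base:2024.05
  FROM ${BASE}

  # Only app-level layers live here, so base churn doesn't bust this cache.
  COPY ./app /opt/app
  RUN /opt/app/setup.sh

and you can override the pin with `docker build --build-arg BASE=... .` when testing a new base.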

mikepurvis 5 days ago

I think it often comes about because there's a desire for developers to be able to locally reproduce what is happening on CI (good!) and having an end-to-end Dockerfile can be a great piece of that (yay!), but as soon as devs are debugging a problem, wanting to change a dependency, attaching a debugger, or whatever else, it becomes too burdensome to rebuild the whole thing every time. So you end up with the setup and finalization of the build in Dockerfiles, and then some nebulous semi-manual stuff occurring in the middle.

This is workable-ish for the basic develop-locally-and-ship-a-product use case, but it creates a bit of a crisis of repeatability: you've got a complicated CI config implementing all the mystery meat that falls in the middle of the sandwich, and separate documentation somewhere else explaining what the dev workflow is supposed to be, but no one is explicitly tasked with keeping the CI script and the dev workflow in sync, so they inevitably drift over time.

Plus, long and complicated CI-specific scripts are a smell all on their own, as was discussed here on HN a few days ago.

a_t48 4 days ago

Definitely. The more that can be shared between these flows, the better.

mdaniel 4 days ago

> But then Packer has no concept of multi-stage or copying in assets from other containers.

I don't know about containers specifically, since I've never bothered to use Packer for that process, but it does seem that Packer supports multi-step artifact production, and their example even uses Docker to demonstrate it: https://github.com/hashicorp/packer/blob/v1.9.5/website/cont...

a_t48 5 days ago

OMG https://docs.dagger.io/features/debugging - Docker hasn't supported shelling into a broken build for a while now, so this sounds so nice. I see there's some Dockerfile support; I wonder if I can just use this as a drop-in replacement for `docker build` and get a nicer experience.

shykes 5 days ago

Yes you can.

Dagger is built on the same underlying tech as docker build (buildkit). So the compatibility bridge is not a re-implementation of Dockerfile, it's literally the official upstream implementation.

Here's an example that 1) fetches a random git repo, 2) builds from its Dockerfile, 3) opens an interactive terminal to look inside, and 4) publishes to a registry once you exit the terminal:

  git https://github.com/goreleaser/goreleaser |
  head |
  tree |
  docker-build |
  terminal |
  publish ttl.sh/goreleaser-example-image

philsnow 5 days ago

> 3) opens an interactive terminal to look inside 4) publish to a registry once you exit the terminal

It seems like it would be good to be able to prevent the pipeline from publishing the image, if the inspection with 'terminal' shows there's something wrong (with e.g. 'exit 1'). I looked a little bit into the help system, and it doesn't seem that there's a way from inside the 'terminal' function to signal that the pipeline should stop. Semantics like bash's "set -e -o pipefail" might help here.

with-exec lets you specify that you want a command to succeed with e.g.

  container | from alpine | with-exec --expect SUCCESS /bin/false | publish ...

If you try that, the pipeline will stop before publishing the image.

shykes 5 days ago

Huh, you're right, I would expect `exit 1` in the terminal to abort the rest of the pipeline.

By the way, in your example: `--expect SUCCESS` is the default behavior of with-exec, so you can simplify your pipeline to:

  container | from alpine | with-exec /bin/false | publish ...

Thank you! Would you be willing to open an issue on our GitHub repo? If not, I will take care of it.

a_t48 5 days ago

If that docker-build command fails, will the interactive debugger do something sensible?

shykes 5 days ago

Assuming you use `dagger --interactive`, the debugger will kick in so you can inspect the failed state.

In the specific case of Dockerfile compatibility, I don't actually know if it will be smart enough to drop you into the exact intermediate state that the Docker build failed in, or if it atomically reverts the whole 'docker build' operation.

levlaz 5 days ago

You sure can :)

Dagger can read Dockerfiles as-is: https://docs.dagger.io/cookbook#build-image-from-dockerfile

verdverm 5 days ago

Being able to drop into failed container builds/runs has been game-changing; it's saved me mucho time. If you're being programmatic, you can also inject the drop into interactive mode anywhere you want, without needing an error. You can use this to build up working environments into which you drop developers (kind of like dev containers).
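Using the shell syntax from upthread, injecting that drop looks roughly like this (untested sketch):

  container |
  from alpine |
  with-exec apk add build-base |
  terminal |
  publish ttl.sh/dev-env-example

i.e. `terminal` can sit anywhere in the chain, not just after a failure.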

levlaz 5 days ago

Yeah for sure, this was specifically designed to solve the problems you describe!

CBLT 5 days ago

I solve most of those issues with a Docker Bakefile; I'm confident I could solve the rest with Bakefiles if I had to. Reasonable developer experience.
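For reference, the base-then-overlay flow from upthread maps onto Bake roughly like this (made-up names, untested):

  # docker-bake.hcl
  target "base" {
    dockerfile = "Dockerfile.base"
    platforms  = ["linux/amd64", "linux/arm64"]
  }

  target "app" {
    dockerfile = "Dockerfile.app"
    contexts   = { base = "target:base" }  # referenced as FROM base
    args       = { BUILD_TYPE = "release" }
  }

and `docker buildx bake app` builds both targets in dependency order, no intermediate tags required.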

a_t48 4 days ago

The last time I tried something beyond Buildx that Docker itself put out, the experience was bad - something about not caching properly. I'll have to give this another shot sometime, though.