solatic 10 days ago

Or, you can write an actual shell script file (e.g. with a .sh extension) stored in your repository, ADD it in a throwaway context (e.g. a stage in a multi-stage build), then RUN --mount=type=bind to put it into a temporary directory in the build container so you can execute it. This way, the script doesn't pollute the final image, and you get proper separation of concerns, including the ability to use library functions, run shell linters directly, or use a higher-level language like Python if you really need it for some reason.
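
A minimal sketch of what I mean (stage and file names are placeholders; this assumes BuildKit, which is what provides --mount for RUN):

    # syntax=docker/dockerfile:1
    FROM busybox AS scripts              # throwaway stage that only holds the script
    ADD provision.sh /scripts/provision.sh

    FROM alpine
    # Bind-mount the script from the throwaway stage for this one RUN step;
    # nothing under /scripts is written into the final image's layers.
    RUN --mount=type=bind,from=scripts,source=/scripts,target=/tmp/scripts \
        sh /tmp/scripts/provision.sh

(You can also bind-mount straight from the build context, since that's the default source for a bind mount, but the explicit throwaway stage makes the intent obvious.)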

xenophonf 9 days ago

That's bad practice because it hides build steps from `docker inspect`. Per https://github.com/docker-library/official-images#clarity:

> Try to make the Dockerfile easy to understand/read. It may be tempting, for the sake of brevity, to put complicated initialization details into a standalone script and merely add a RUN command in the Dockerfile. However, this causes the resulting Dockerfile to be overly opaque, and such Dockerfiles are unlikely to pass review. Instead, it is recommended to put all the commands for initialization into the Dockerfile as appropriate RUN or ENV command combinations. To find good examples, look at the current official images.

solatic 9 days ago

That's advice specifically for official images, and it dates back to before multi-stage builds existed. Most people are not building official images. Most people benefit from clear encapsulation and separation of concerns: the Dockerfile sets up the environment to run the provisioning script, and the provisioning script does the actual provisioning. Official images are different because the provisioning logic is usually hidden in an OS package installed with e.g. apk add (or are we going to pretend that OS packages are bad practice for the same reason?).
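
To make the separation concrete (the file name is hypothetical), the standalone script can be linted and reviewed on its own, outside any image build, which you can't do with commands inlined into RUN:

    # CI or local check of the provisioning script itself
    shellcheck provision.sh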

yjftsjthsd-h 9 days ago

Don't multi-stage builds already break `docker inspect`?