If you're still struggling with the build workflow, it's probably not yet the right time to take that leap.
It's not rocket science, of course. You build an image somewhere (your local machine, a CI server, anywhere), push to a registry, and when you want to run the image, you pull from the registry and run it. ("docker run" will, by default, automatically pull when you ask it to run something.)
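Roughly, with made-up registry and image names:

    # build from the Dockerfile in the current directory
    docker build -t registry.example.com/myapp:1.0 .

    # publish it
    docker push registry.example.com/myapp:1.0

    # on any machine: run it (pulls automatically if it's not present)
    docker run registry.example.com/myapp:1.0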
I don't quite understand what your Compose problem is. Is the Compose file referencing images published to, say, Docker Hub? If so, the image obviously has to be built and published beforehand. However, it's also possible to point Compose at local checkouts and run "docker-compose up --build", e.g.:
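Something like this, with made-up service names and paths:

    version: "3"
    services:
      myapp:
        build: ./myapp    # built from ./myapp/Dockerfile
        ports:
          - "3000:3000"
      worker:
        build: ./worker   # another local checkout

and so on.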
There's a whole ecosystem of tools built around Docker for building, testing, deploying and orchestrating Docker applications. Kubernetes is one. If you're having issues with the Docker basics, however, I wouldn't consider any of these systems quite yet, although you should consider automating your building and testing with a CI (continuous integration) system, rather than making your devs build and test on their local machines.
As with anything, to actually use Docker in production you'll need an ops person/team that knows how to run it. That could range from something as simple as a manual "docker run" or "docker-compose" to something much more complex such as Kubernetes. This is the complicated part.
The problem I was referring to with docker-compose:
let's say I update my Dockerfile and change from `FROM ruby:2.3.4` to `FROM ruby:2.5.1` and commit the Dockerfile change, merge it to master, etc.
Our developers have to remember to manually run "docker-compose up --build", or to remove their old containers and create new ones, which would get them rebuilt... I couldn't find anything that would warn them if they're running off of stale images, or better, simply rebuild them automatically when the Dockerfile changes.
Part of the benefit of Docker is creating a repeatable environment with all sub-components on all dev machines, isn't it?
Maybe our devs should only pull remote images and never build them, but then wouldn't I have the same problem, in that docker-compose won't force or remind the developers to pull unless they explicitly tell it to? And also, wouldn't this detach the development process around the Dockerfiles/builds themselves from the rest of the dev process?
If you run with "docker-compose up --build", it should automatically build. This requires that any app you want to work on references the local Dockerfile, not a published image, the same way as in my paste. I.e. "build: ./myapp" or whatever.
Edit the code, then restart Compose, and repeat. It will build each time. If you want to save time and you have some containers that don't change, you can "pin" those containers to published images — e.g., the main app is in "./myapp", but it depends on two apps "foo:08adcef" and "bar:eed2a94", which don't get built every time. This speeds up development.
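In a Compose file, the pinning looks something like this (using the tags from above; the rest is made up):

    services:
      myapp:
        build: ./myapp      # rebuilt on every "docker-compose up --build"
      foo:
        image: foo:08adcef  # pinned to a published image; pulled, never built
      bar:
        image: bar:eed2a94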
Building on every change sounds like a nightmare, though. It's more convenient to use a file-watching system such as nodemon and map the whole app to a volume. Here's a blog article about it that also shows how you'd use Compose with multiple containers that use a local Dockerfile instead of a published one: https://medium.com/lucjuggery/docker-in-development-with-nod....
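The gist of that setup, assuming a Node app (the file names and container paths here are guesses; see the article for the real thing):

    services:
      myapp:
        build: ./myapp
        command: nodemon server.js   # restarts the process on file changes
        volumes:
          - ./myapp:/usr/src/app     # map the source tree into the container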
We're not building every time. But sometimes, as in the example above, we do need to build. The problem, however, is that this becomes a fairly manual process. If a developer forgets to do it, they will keep running with an older base image, and all the consistency benefits across developers are gone.
In any case, thanks for your suggestions. I think there was some misconception on my part about how docker-compose should behave.
So to me it's starting to sound like "developers forgetting" is your problem. Not Docker or Compose.
The solution I've used in the multiple companies I've started is to maintain a developer-oriented toolchain that encodes best practices. You tell the devs to clone the toolchain locally and you build in a simple self-update system so it always pulls the latest version. Then you provide a single tool (e.g. "devtool"), with subcommands, for what you want to script.
For example, "devtool run" could run the app, calling "docker-compose up --build" behind the scenes. This ensures that they'll always build every time, and never forget the flag.
If you have other common patterns that have multiple complicated steps or require "standardized" behaviour, bake them into the tool: "devtool deploy", "devtool create-site", "devtool lint", etc.
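A minimal sketch in shell (the "devtool" name and the commands it wraps are just examples):

    #!/bin/sh
    # devtool: single entry point that encodes the team's best practices
    set -e

    case "$1" in
      run)
        # always rebuild, so nobody can forget the flag
        exec docker-compose up --build
        ;;
      *)
        echo "usage: devtool {run|deploy|lint|...}" >&2
        exit 1
        ;;
    esac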
We've got tons of subcommands like this. One of them is "preflight", which runs a bunch of checks to make sure that the local development environment meets our requirements (Docker version, Kubectl version, whether Docker Registry auth works, SSH config, etc.), and fixes issues where it can (e.g. if the Google Cloud SDK isn't installed, it can install it). It's a good pattern that also simplifies onboarding of new developers.
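A "preflight" can be as simple as a list of checks like this (the exact checks are ours; yours will differ):

    #!/bin/sh
    # devtool preflight: sanity-check the local dev environment
    check() {
      if "$@" >/dev/null 2>&1; then
        echo "ok:   $*"
      else
        echo "FAIL: $*" >&2
      fi
    }

    check docker version
    check kubectl version --client
    check gcloud version   # offer to install the Google Cloud SDK on failure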
That's a great suggestion! Thanks. We're doing parts of it, but I just need to expand it to work with docker-compose. As I mentioned, I probably had the wrong preconceptions about it "figuring out" when components were stale... I guess a few simple bash scripts can work wonders to make it more intelligent :)