Could someone explain why tip #9 is a good idea? To me it makes more sense to build the application in the CI pipeline and use the Dockerfile only to package the app.
The post is focused on Java apps but, for example, there is a distinction between runtime and SDK images in .NET Core. If you want to build in Docker, you have to pull the heavier SDK image. If you copy the already-built binaries into the image, you can use the runtime image instead. I guess there could be similar situations on other platforms too.
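Roughly what I mean, as a sketch (the image tag and paths are just illustrative, not from the post):

    # Binaries were already built by CI, so only the slim runtime image is pulled
    FROM mcr.microsoft.com/dotnet/runtime:8.0
    WORKDIR /app
    COPY ./publish .
    # MyApp.dll is a placeholder for whatever your CI publish step produced
    ENTRYPOINT ["dotnet", "MyApp.dll"]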
Other than that, it looks like a decent guide. Thanks to the author.
For me the big advantage of doing more in Docker and less in the CI environment is that I have less lock-in/dependency on whatever my CI provider does. I try to reduce my CI scripts to something like
    docker build
    docker run image test
All the complexity of building and collecting dependencies goes in Dockerfiles, so I can reproduce it locally, or anywhere else. And importantly, without messing with any system settings/packages. No more Makefiles or shell scripts that assume a ton of things on your laptop are set up just right to build something from source; just docker build and off you go. It's such a hassle when you need to follow pages of README just to build something from source (plus install a lot of dependencies that you have to clean up afterwards).
The same problems that apply to production environments also apply to CI systems: you need to make sure those build agents are project-aware and up to date. If you decide to move one project to a new JDK, you'll need to update your build servers, and good luck to you if you want to update only some of your projects.
The appeal of Docker is completely & reproducibly owning production (what runs on your laptop runs on prod), and that also applies to the build (what builds on your laptop builds on prod). Not to mention the add-on benefits: you can now use standard build agents across every tech stack and project, with no need to customize them or keep them up to date, etc.
With multi-stage builds you get a bunch of benefits. You can pull the heavy SDK when you start building the app, and that gets cached. Then when you package the image, you copy the jar that was built, but not the heavy SDK. When you run this again, the heavy/expensive steps are skipped because they're cached. Now you have a single set of operations to build your app and its production image, so there's no chance of inconsistency.
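Something like this, for a Maven-based Java app (a sketch; the image tags and paths are my own assumptions, not from the article):

    # Build stage: pulls the heavy SDK image, cached between runs
    FROM maven:3.9-eclipse-temurin-17 AS builder
    WORKDIR /app
    # Copying the pom first lets Docker cache dependency resolution separately
    COPY pom.xml .
    RUN mvn dependency:go-offline
    COPY src ./src
    RUN mvn package -DskipTests

    # Production stage: only the jar is copied in, not the SDK
    # (assumes the build produces a single jar in target/)
    FROM eclipse-temurin:17-jre
    COPY --from=builder /app/target/*.jar /app/app.jar
    ENTRYPOINT ["java", "-jar", "/app/app.jar"]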
In addition, you can build a separate container from a specific stage of your multi-stage build (for example, if you want to build more apps based on the SDK stage, or run tests that require debugging). So from one Dockerfile you can have multiple images or tags to use for different parts of your pipeline. The resulting production image is still built from the same source, so you have more confidence that what's going to production is what was tested in the pipeline.
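For example, using the stage names from the sketch above (the tags are made up; --target is a standard docker build flag):

    # Build an image from just the builder stage and run the tests in it
    docker build --target builder -t myapp:build .
    docker run --rm myapp:build mvn test

    # Build the full production image from the same Dockerfile
    docker build -t myapp:prod .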
Furthermore, devs can iterate on this Dockerfile locally, rather than trying to replicate the CI pipeline in an ad-hoc way. The more of your pipeline you stuff into a Dockerfile, the less you have to focus on "building your pipeline".
As I read it, the tip is to always build in a consistent environment. I think a CI pipeline counts in that regard.
The way I read it, they're talking more about local development: everyone should build the application inside a container rather than on their personal machines with differing setups.
Hi @aphyr. I'm a great fan of your work with Jepsen, although I know very little about the fault tolerance of distributed systems. Are there any resources you would recommend on the subject? I am an application developer, so I don't see myself writing a database in the future. Still, it would be great to learn about the concepts.
Recently I have been thinking about whether any of our units make sense from a cosmic perspective. Take the speed of light, for example. It's approximately 300,000 km/s. But then what is a second? It's 1/60 of a minute, which is 1/60 of an hour, which is 1/24 of a day (and so it goes), and all those numbers are arbitrary. A day doesn't make any sense outside our planet anyway; I doubt there is another celestial body in the universe that takes the same time to complete a rotation. The period of some natural phenomenon (like an atomic electron transition) sounds better as a unit, but it's a really tiny period of time, so we have to scale it to make it practical for us. We would use the decimal numeral system to do that, another arbitrary choice. What if we had 12 fingers, or 8? This can be extended to all kinds of measurements, so I wonder if any of this would make sense to another civilization. What would a cosmic system of units look like? Any reading about this would be greatly appreciated.
The definition of a second isn't based on the Earth's motion, but on a natural phenomenon like you recommended: "The second is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom." [1]
You might be thinking along the lines of natural units [1] or even Planck units [2], where you set some fundamental constants to 1 and take it from there.
I would suggest that Planck units scaled by powers of 2 is the closest we can get to a cosmic system of units. The choice of binary is non-arbitrary, as it's the smallest base you can choose that still provides positional scaling.
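For reference, the Planck base units come from setting c, G, and ħ to 1; in SI terms (standard values, quoted from memory):

    t_P    = \sqrt{\hbar G / c^5} \approx 5.39 \times 10^{-44}\,\mathrm{s}
    \ell_P = \sqrt{\hbar G / c^3} \approx 1.62 \times 10^{-35}\,\mathrm{m}
    m_P    = \sqrt{\hbar c / G}   \approx 2.18 \times 10^{-8}\,\mathrm{kg}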
"The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom."
> Could you enlighten me on how JWST could find the evidence of life?
Near-infrared spectroscopy [0]. JWST will be by far the most capable instrument in space for NIR spectroscopy, which will be used to analyze the starlight shining through exoplanets' atmospheres for evidence of biosignature molecules.