Drawbacks of developing in containers

Monday, February 1, 2021

It seems like everyone these days is developing in containers. Specifically, I mean running your local development environment almost entirely in containers. There's a lot to be said for it, especially when you're bringing new developers into a project: it can be an invaluable way for newcomers to quickly get all the tooling set up, with far fewer "works on my machine" issues. But everything involves tradeoffs, so today we're going to focus on some of the drawbacks of containers. Also, it's my blog, so I write about what I want, dammit, and containers have been bugging me a little lately!

Here are some of the things that have been bugging me, and that make me just want to run processes directly on my host.

Installing and managing packages takes extra work. Using Python as an example, it's pretty easy to get packages installed for any project that uses a modern dependency manager like poetry or pipenv. You just run something like poetry install and it will set up a virtualenv for you (essential for reproducible builds that are isolated between projects). The tool is there, and you just use it, though you can wrap it in a Makefile for convenience if you like. But if you're using Docker, you get a lot of extra steps. For one, you have to make sure the dependency manager's generated virtualenv is persisted on your host, not just in the container, or you'll be reinstalling those dependencies every time you start a container. (Or you can install the dependencies into the image, but then adding a single package just to test something out means rebuilding the entire image and reinstalling every dependency, which adds unnecessary friction.)
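For illustration, here's roughly what the volume-mount approach can look like. This is a minimal sketch: the image name myproject-dev is hypothetical and assumed to have poetry installed, and the paths will vary by project.

    # Tell poetry to create the virtualenv inside the project directory,
    # so a single bind mount covers both the code and the dependencies.
    poetry config virtualenvs.in-project true

    # Mount the project directory (including .venv) from the host so
    # installed packages survive container restarts.
    docker run --rm -it \
        -v "$(pwd):/app" \
        -w /app \
        myproject-dev \
        poetry install

Without the mount (or some equivalent), the .venv created inside the container vanishes along with it.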

Actually, everything takes extra commands. You want to install a new package? Open a shell into your container, then install it. You want to compile your program? Open a shell into your container, then compile it. You want to launch the development server? You guessed it... And yes, I know the pedantic comments are going to be "well, actually you can do this in one command with docker run"—okay, but that's not better: now you've just made the one command you need to run far longer.
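To make that concrete, here's a hedged comparison; the container name app and the image name myproject-dev are hypothetical stand-ins.

    # On the host, adding a dependency is one short command:
    poetry add requests

    # Against a running container, every command grows a prefix:
    docker exec -it app poetry add requests

    # And the "one command" docker run version is longer still:
    docker run --rm -it -v "$(pwd):/app" -w /app myproject-dev poetry add requests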

File permissions are a pain in the butt. If you have a directory mounted both on the host and in the container (node_modules, for example), then in all likelihood the files created inside the container are going to be owned by root, since everything in the container runs as root by default. Back outside the container as your normal user, you'll have trouble doing things like deleting that directory—and it produces a lot of noisy output when you use something like grep, because suddenly there are a lot of files you cannot read! There are solutions to this, like making the container use the same user ID as your user on the host, but I have not found one that works well. It's just not worth the effort, and you're left with a nuisance every time you generate files inside a container.
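The usual workaround is to map your host user into the container, something like this sketch (the image name is hypothetical and assumed to have npm available):

    # Run the container as your host user so files created in the bind
    # mount are owned by you instead of root.
    docker run --rm -it \
        --user "$(id -u):$(id -g)" \
        -v "$(pwd):/app" \
        -w /app \
        myproject-dev \
        npm install

    # Caveat: this UID has no entry in the container's /etc/passwd, so
    # anything that looks up the current user (home directory, "I have
    # no name!" prompts) tends to misbehave.

That caveat is a good example of why none of these solutions have felt worth the effort to me.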

Retaining history is difficult. Getting your bash history to persist between containers is a challenge, but I understand it's doable. Then you use a Python shell, so you need to configure that for history as well. Then you start using Elixir, so you need to configure history for it, too. Everything that normally works out of the box when you're developing directly on your host? You have to configure it and figure out which files need to be shared between the container and the host to make it work. You usually discover each gap through the pain of not having the history, and eventually someone on the team will patch it to make it work (mostly). But... at what cost? You get this for free by just, you know, using the host.
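For what it's worth, the fix tends to look like this sketch: one bind mount per tool, each discovered the hard way. The paths here are the defaults bash and the Python REPL use; other shells need their own configuration, and the image name is again hypothetical.

    # Create the history files up front: if a host path doesn't exist,
    # Docker creates it as a directory, which silently breaks the mount.
    mkdir -p "$HOME/.container-history"
    touch "$HOME/.container-history/bash_history" \
          "$HOME/.container-history/python_history"

    # Bind-mount each tool's history file into the container.
    docker run --rm -it \
        -v "$HOME/.container-history/bash_history:/root/.bash_history" \
        -v "$HOME/.container-history/python_history:/root/.python_history" \
        myproject-dev \
        bash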

Performance is horrible on Macs. I use Linux as my only operating system (Ubuntu, specifically), but many colleagues past and present prefer Macs, and with Apple's latest hardware advances that preference isn't going away. And yet Docker's performance is tangibly worse on Macs, because Linux containers have to run inside a VM there. (Windows narrows the gap with WSL 2, which runs a real Linux kernel in a lightweight, tightly integrated VM; macOS could certainly invest in a similar approach in the future, if Apple prioritized developer experience. I am not optimistic.)

Integration with tooling isn't great. We've had a tough time at work getting IDEs like PyCharm or VS Code to play nicely with our containerized setup. They support containers, but they don't seem to support containerized monorepos very well. At least, that's what I've gathered from the problems my colleagues have run into; I've only fiddled with it a bit myself. As a vim user, it doesn't affect me personally, but it is a big issue when you consider the impact on the team at large.

So, yeah, containers are pretty great. I'm glad we have them, and I love using them in production environments. But locally? Most of the time, I think you're better off running what you're developing directly on your machine. You get a robust, easy-to-use setup that doesn't throw away all the features you get for free (history!). There are benefits to either approach, but the old tried-and-true methods deserve some love.

