
I mean, it could just be many containers adding up. And in many cases you'll have loads of images close to 1GB unless you go to great lengths to shrink them (e.g. anything that uses Python).


I can get an Alpine Linux container running Perl with a chunk of CPAN on it to 60MB by just stripping the build tools once I've installed everything I want. Is Python really a GB bigger, or are people not taking the most basic of steps?
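For reference, the pattern is roughly this; a minimal sketch, where the exact package and module names are just examples:

    # Install build tools as a named "virtual" group, build what you need,
    # then delete the whole group so it never ships in the final image.
    FROM alpine:3.18
    RUN apk add --no-cache perl perl-app-cpanminus \
        && apk add --no-cache --virtual .build-deps build-base perl-dev \
        && cpanm --notest Mojolicious \
        && apk del .build-deps

It all has to happen in a single RUN, otherwise the build tools get committed into an earlier layer and deleting them later doesn't shrink anything.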


Basic steps. Welcome to the long September of Docker. Whereas most folks who use it regularly know basic maintenance patterns, new users (myself included) must wade through inscrutable documentation or (worse) poorly written blogfarm posts in order to bootstrap up to a level of proficiency that passes interview smell tests.


Do you have any higher-level documentation (or blogs) that you would recommend? I too have been using Docker for a while, and recently also for certain programs on my desktop (mainly firefox, github cli, azure cli, aws cli, terraform).

I like to think I am somewhat proficient in using Docker correctly by now, but I am still discovering new tidbits of practical knowledge, tips and good practices every few weeks. And I always want more. :)


Itamar Turner-Trauring's blog has a number of good articles on Docker best practices:

https://pythonspeed.com/docker/


I do not. I'm a new user who has found resource identification difficult.


So maybe I was quick to comment, but I glanced at the images I have sitting around and python:3.7 is 876MB. However, python:3.7-slim is 112MB (still nearly twice the size of the image you describe, though). But it's conceivable that many people running into limitations with the slim image would just opt for the full Python image instead.
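Easy to check locally if you're curious (the tags here are just the ones I happened to have; sizes vary by release):

    docker pull python:3.7
    docker pull python:3.7-slim
    docker images python    # compare the SIZE column across tags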

Maybe a better example is the Tensorflow image, which is close to 4GB for the GPU version. It's also the kind of thing that would be a pain to rebuild from scratch if you wanted to find a way to save space.


Do you also understand that once you have any image built on top of python:3.7, that 876MB doesn't need to be downloaded again for subsequent images?
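You can see the sharing directly; a rough check, assuming you have a couple of local images built FROM python:3.7:

    docker history python:3.7    # the base layers every child image reuses
    docker system df -v          # per-image SHARED SIZE vs UNIQUE SIZE

The 876MB shows up in each image's reported size, but it's only stored (and downloaded) once.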

> the Tensorflow image is really big

This is kind of meaningless without comparing it to how large Tensorflow and its related dependencies are if you just install them centrally.
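For a ballpark of the "install it centrally" side (a throwaway virtualenv; this is the CPU package, the GPU variants are bigger):

    python3 -m venv /tmp/tf
    /tmp/tf/bin/pip install tensorflow
    du -sh /tmp/tf/lib/python*/site-packages    # TF plus its dependencies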


Yes, I understand that Docker caches layers. Like I said, if docker system prune is clearing a huge amount of space, it's likely a large number of different images and containers that might not share layers. If you're building, running, starting, and stopping containers all day, you might not be terribly careful about exactly what you're doing each time. That's why system prune is so useful.
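For what it's worth, you can see the breakdown before reclaiming anything:

    docker system df       # space used by images, containers, volumes, build cache
    docker system prune    # drops stopped containers, dangling images, unused networks
    docker system prune -a # also drops images not used by any container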



