# contributing-to-airbyte
d
This is a general think-out-loud point for other Airbyters - I’m running stable now and it actually took me 3 minutes to download all the images at about 11 MB/s. I don’t remember it taking this long previously. Looks like our total image footprint is now ~3 GB, which isn’t surprising since we’ve been adding more code and functionality. We should keep image size in mind going forward. Hopefully getting rid of the scheduler will help this a bit.
```sh
➜  ~ docker images | head -n 30 | grep 0.30.23-alpha
airbyte/webapp      0.30.23-alpha   705cee2dda23   6 days ago   53MB
airbyte/server      0.30.23-alpha   fee3d39ea555   6 days ago   800MB
airbyte/worker      0.30.23-alpha   9f0cac3aa409   6 days ago   1.2GB
airbyte/scheduler   0.30.23-alpha   786de4d6c25a   6 days ago   759MB
airbyte/db          0.30.23-alpha   73344a9224f2   6 days ago   192MB
```
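(Aside for anyone reproducing this: `docker system df` gives the combined on-disk image total directly, without grepping and summing rows by hand. Note that it counts every local image, not just the airbyte/* ones.)

```sh
# Summary of disk usage by images, containers, and volumes; the Images row
# reports the combined uncompressed footprint of all local images.
docker system df
```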
u
Just curious, what is the target size for the final package? (assuming less than 1 GB?)
u
No target. We mainly need to be aware of not including unnecessary layers.
u
I was noticing this slowness on Friday when mine updated. Because I could see the logs on screen, I knew it was the Docker downloads being slow; but if it weren’t for that, the hanging UI would have confused me a lot. I think it might be handy in the OSS version to have the UI show a note like “thank you for your patience as Docker image updates are downloaded,” or something like that. That way it’s not just hanging on a spinner and a blank screen.
d
^^ cc @John (Airbyte) as he’s been thinking about design elements for the UI during the slower update steps; docker image downloads may be one of those to consider.
u
I would be curious to know how they've grown over time. I'm wondering if we can track it back to specific changes. Would love to feel better that all the extra size is actually intentional.
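(A sketch of one way to do that, assuming nothing beyond stock Docker: `docker history` prints per-layer sizes together with the instruction that created each layer, so diffing its output across two released tags usually points at the responsible Dockerfile change. The older tag below is hypothetical, just for illustration.)

```sh
# Per-layer sizes plus the creating Dockerfile instruction; compare two tags
# side by side to see which layer grew.
docker history --no-trunc airbyte/server:0.30.23-alpha
docker history --no-trunc airbyte/server:0.29.0-alpha  # hypothetical older tag
```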
u
Likewise, maybe we’re missing a clean-and-purge step for yum package files, or something similarly trivial to fix in the Dockerfile. Do we have a ticket to investigate the size and make sure it’s necessary?
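(For reference, a minimal sketch of the kind of fix meant here, assuming a yum-based base image; the package name is a placeholder. The install and the purge have to happen in the same RUN, otherwise the cache is already baked into an earlier layer.)

```dockerfile
# Install and purge in a single RUN so the yum cache never persists in a layer.
# "some-package" is a placeholder, not an actual Airbyte dependency.
RUN yum install -y some-package \
    && yum clean all \
    && rm -rf /var/cache/yum
```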
u
It’s worth noting that these are the uncompressed local sizes. Docker Hub shows the compressed sizes (550 MB for the server). The compressed base Java image is already 220 MB, and the tar file for the server, for example, is 184 MB. That leaves 146 MB unaccounted for, which isn’t a huge amount to optimize. Switching
```dockerfile
COPY build/distributions/${APPLICATION}-0*.tar ${APPLICATION}.tar
RUN tar xf ${APPLICATION}.tar --strip-components=1
```
to
```dockerfile
ADD build/distributions/${APPLICATION}-0*.tar /app
```
actually shrinks the local uncompressed size from 800 MB to 600 MB and the compressed size from 550 MB to 385 MB. The worker is a bit different since we’re installing some stuff; we can optimize that a little more.
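(A plausible explanation for the saving, not stated above: COPY materializes the tar in its own layer and the RUN then writes the extracted files into a second layer, so the archive's bytes effectively ship twice, whereas ADD auto-extracts a local tar in a single layer. One caveat worth flagging:)

```dockerfile
# Single layer: the tar's bytes never persist separately in the image.
# Caveat: ADD has no equivalent of --strip-components, so the archive's
# top-level directory survives under /app instead of being flattened.
ADD build/distributions/${APPLICATION}-0*.tar /app
```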