# box-products
a
Slightly lazy question. Is there something up with https://downloads.ortussolutions.com/debs/noarch ATM? Rebuilding a Docker container that has some commandbox install stuff in it; usually works fine (and indeed worked fine y/day), but today it's just spinning trying to do a GET from there for something. Tried a coupla times.
g
The site comes up for me. Is there a specific path that isn't working?
a
The part of the Docker build process that it's stopping at is
```
=> => # Get:2 https://downloads.ortussolutions.com/debs/noarch  commandbox 5.9.0-1 [103 MB]
```
TBH I don't really understand what all the telemetry is telling me when I'm building a container 😉 But I left that there for 15min and that's as far as it got. Same a second time now. Y/day... no problem.
g
@jclausen @bdw429s?
a
It's part of the execution of this line in my Dockerfile:
```
RUN echo "deb [signed-by=/usr/share/keyrings/ortussolutions.gpg] https://downloads.ortussolutions.com/debs/noarch /" | tee /etc/apt/sources.list.d/commandbox.list
```
Which I took from https://commandbox.ortusbooks.com/setup/installation#stable, and modified for a Dockerfile
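For reference, the full set of steps adapted from that docs page looks roughly like this (a sketch, not my exact file; the GPG key URL follows the same downloads.ortussolutions.com pattern the docs use, so verify it against the page before relying on it):

```dockerfile
# Sketch of the apt-based CommandBox install adapted from the docs page above.
# Assumptions: the key URL, and that curl/gpg are already installed in the image.
RUN curl -fsSL https://downloads.ortussolutions.com/debs/gpg \
        | gpg --dearmor -o /usr/share/keyrings/ortussolutions.gpg \
    && echo "deb [signed-by=/usr/share/keyrings/ortussolutions.gpg] https://downloads.ortussolutions.com/debs/noarch /" \
        > /etc/apt/sources.list.d/commandbox.list \
    && apt-get update \
    && apt-get install -y apt-transport-https commandbox
```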
b
Interesting, that's just an S3 bucket behind a CloudFront distribution.
If HTTP calls to it are hanging, it would prolly be an issue with AWS
You can't hit that first URL directly as "folders" aren't browsable and in fact don't really even exist in S3.
a
I can curl the URL fine. It responds with "bad key" or some such, but that's predictable & it is responding.
a
Yeah if I curl that I get a 200-OK
b
I wonder if you're on a newer version of Ubuntu that's changed their signing requirements or something
Do you have a copy of that "bad key" error?
I don't use the apt install path very often.
a
To be clear, I was just copying the URL from the Docker output and hitting it with a browser to see if that gave more helpful feedback
j
@Adam Cameron Have you thought about starting from the Ortus Docker image? It is already multi-arch for x86 and ARM
a
I'm specifically testing the CF2023 image. I'm only installing commandbox at all so as to install testbox.
So commandbox is very secondary to the task @ hand here.
TBH it's not holding me up... I can just sling the testbox dir in there by hand. Was more asking here in case there was an issue y'all didn't know about
j
Ah! Got it.
b
Another quick workaround is to just curl the binary from https://s3.amazonaws.com/downloads.ortussolutions.com/ortussolutions/commandbox/5.9.0/commandbox-bin-5.9.0.zip, unzip it, and toss it into a bin folder. It doesn't answer your question about the apt key, but it's just a manual install.
If you can, let me know what distro the container is and what the error message is so we can peek at it
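A sketch of that manual route as a Dockerfile step (same S3 URL as above; the `/usr/local/bin` target path is an assumption):

```dockerfile
# Sketch: skip apt entirely and install CommandBox from the binary zip.
# Assumes curl and unzip are present in the image.
RUN curl -fsSL -o /tmp/commandbox.zip \
        https://s3.amazonaws.com/downloads.ortussolutions.com/ortussolutions/commandbox/5.9.0/commandbox-bin-5.9.0.zip \
    && unzip /tmp/commandbox.zip -d /usr/local/bin \
    && rm /tmp/commandbox.zip
```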
a
The Dockerfile I'm using is @ https://github.com/adamcameron/cf2021_and_mariadb/blob/main/docker/cf2021/Dockerfile. (and compose file one level up, and shell script to capture the correct pwds in the correct place there also. But don't sink time into it unless yer short of other stuff to do).
That repo is my CF2021 one, but I'm currently trying to compare with CF2023 as 2023 is misbehaving (see other threads on adobe channel)
b
Yep, saw your other thread 👍
a
The 2023 one will be "identical" except 2021^h^h23
I've extracted the commandbox stuff from the Dockerfile, and running it separately after the container is up. I think this is the culprit:
```
10.9 kB/s 2h 23min 34s
```
My connection is 500Mbps down, 200 up. That's when doing the `apt-get install apt-transport-https commandbox` bit.
Doing it straight in my shell, rather than the container's, is about the same.
It's my end. I think WSL was getting tired. I rebooted & it seems OK now.
d
I sometimes get weird connectivity when building Docker images locally, and have to trigger the build multiple times to get it past some kind of download step. Haven't had the motivation to dig deeper and figure it out.
a
I find WSL networking gets flaky sometimes, full stop. But it usually manifests as no connectivity at all, hence not testing that earlier in this thread. Noted for "next time"
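For that "next time": curl can report its average transfer speed, which makes it quick to compare throughput from WSL vs. the host vs. inside the container. A sketch (shown against a local throwaway file so it runs offline; point it at the Ortus URL, or any large file, for the real test):

```shell
# Create a throwaway 5 MB file so the example runs without network access;
# swap the file:// URL for the real download URL to test the network path.
dd if=/dev/zero of=/tmp/speedtest.bin bs=1M count=5 2>/dev/null

# -w '%{speed_download}' prints the average download speed in bytes/sec.
curl -s -o /dev/null -w 'avg speed: %{speed_download} bytes/sec\n' \
  file:///tmp/speedtest.bin
```

Running that from a WSL shell and then from a plain Windows or container shell against the same URL makes it obvious which layer is the bottleneck.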