# support
g
Ok guys, active storage and caching is breaking my brain. We have a server we just set up that will end up being the production server when we flip things live. Our staging server works perfectly (along with our local environments, but we have caching disabled there, so that's likely why), but we have some pages where product images work for a few minutes before displaying broken images. If we clear the cache the images come back, but only for a bit before turning into broken images again. I'm sure this is something in our config somewhere; we got things working on staging, and as far as I can tell we've completely replicated the environment settings. Can y'all think of anything we might have missed, or point us in a direction for getting images to stick around?
j
This has to be the most common issue reported here at this point. 💀
g
I thought I searched for it, I must have missed it, my brain is mush trying to figure stuff out 😛
j
Basically, yeah, it's caching. Most default configurations have ActiveStorage generating expiring URLs when using a storage service like S3.
Nah, it's probably rotated out of Slack history at this point.
When the expiring URL gets cached, it works for a bit and then expires and doesn't work anymore.
You're using something like S3?
g
config.active_storage.service = :local
that's where it would say, right?
actually, let me check that config and see what "local" means
j
Oh, this is a similar, related issue.
What are you using for application hosting?
Local means the files are stored on the filesystem of the web server.
g
actual servers are digital ocean, but we're running through hatchbox
j
oh interesting, I don't know a ton about hatchbox, but it doesn't really matter: unless your application is hosted by a single web server, you can't use local in production. As soon as you scale horizontally (introduce more than one application instance) you'll run into issues, because only one of the servers will have any given file on its filesystem. It'll be a crapshoot whether your request gets routed to the server that has the file you're looking for.
You'll want to set up some kind of object storage, like AWS S3 or one of the many alternatives to store your files.
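For anyone reading along, a minimal sketch of what that looks like, assuming an AWS S3 bucket (the bucket name, region, and credential keys below are placeholders, not anything from this thread):

```yaml
# config/storage.yml — hypothetical S3 service definition
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: my-app-production
```

Then point ActiveStorage at it in config/environments/production.rb with config.active_storage.service = :amazon.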
g
that all makes sense, and I see how that all works... the part that has us confused is that our staging server is also on digital ocean (even the same server) and also running through hatchbox
so in my head we would be seeing the same caching issue on our staging server, right?
j
it's not likely a caching issue; that only happens if you have something like S3 set up. Regardless, I don't have an explanation for why you're only seeing it in production
I wouldn't worry about it though, since you'd just be debugging a storage setup you shouldn't be using anyway. Whatever service you choose, you'll want to configure the service+ActiveStorage so that it's generating non-expiring public URLs for the files. Expiring URLs and caching don't mix.
g
no worries, you've given me some ideas for new stuff to look into
right, that makes sense
t
thanks for pointing us in the right direction. we wound up fixing it for now by setting the URLs to expire later. not sure if this is gonna come back to bite us in 30 days. I believe it was the service_urls setting that fixed the issue.
config.active_storage.service_urls_expire_in = 30.days
config.active_storage.urls_expire_in = 30.days
j
if your URL expiry is longer than your cache expiry, you should be fine
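To illustrate the relationship being described (a sketch, assuming fragment caching of views that embed the image URLs — the durations here are examples, not recommendations):

```ruby
# config/environments/production.rb — URL expiry must outlive
# whatever cache is holding the generated URLs
config.active_storage.urls_expire_in = 30.days

# Any fragment cache that embeds those URLs should expire sooner, e.g.:
#   <% cache product, expires_in: 1.day do %> ... <% end %>
# so a cached page never outlives the signed URLs inside it.
```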
b
I ran into the issue described regarding S3 and Solidus active storage. What is the correct solution for this scenario? Do I change the URL expiration as Tyler did? Or is there a way to disable the temporary URLs generated by Solidus, Active Storage, or S3? Not sure where to look... If this is a Solidus configuration problem, could it be mentioned in the docs since it's so common?
j
You can get ActiveStorage to generate public URLs. It's a Rails configuration issue, not a Solidus one, though I'm in favour of us making it clear in the docs regardless.
For S3 specifically, I know that if the bucket is configured such that everything is always public, then you'll end up with public URLs. I think there are other paths though, in terms of configuring ActiveStorage, but like I said, I'm not an ActiveStorage guy.
t
Would that be by putting public: true in storage.yml in the local section? hatchbox recommended that and we haven't tried it yet.
j
That might be it, yeah.
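For reference, that setting would look something like this (a sketch; I believe Rails 6.1+ supports public: true on storage services, which makes ActiveStorage generate permanent rather than expiring URLs):

```yaml
# config/storage.yml — hypothetical public Disk service
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
  public: true
```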
b
However, this seems like it might not be ideal as a global setting? For example, digital downloads would need temporary URLs, while images should not.
j
Yeah, and it's something that varies based on the service you're using to store your files.
j
You can override the url method in the Spree attachment class to add special logic for those use cases.
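A self-contained sketch of that prepend-an-override pattern (the Attachment class, the URLs, and the public_asset? predicate here are stand-ins for illustration, not the real Spree/Solidus API — the actual class and method signature depend on your Solidus version):

```ruby
# Stand-in for a Spree attachment class.
class Attachment
  def url
    # what a private service would generate: an expiring, signed URL
    "https://bucket.s3.amazonaws.com/key?X-Amz-Expires=300"
  end
end

# Override: serve a permanent URL for public assets, fall back to
# the expiring one otherwise.
module PublicUrlOverride
  def url
    public_asset? ? "https://cdn.example.com/key" : super
  end

  def public_asset?
    true # hypothetical predicate; decide per attachment type
  end
end

# prepend puts the module ahead of the class in method lookup,
# so `super` still reaches the original url.
Attachment.prepend(PublicUrlOverride)
puts Attachment.new.url # → https://cdn.example.com/key
```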
j
With things configured correctly, you shouldn't need to, but yes, that is true.
j
Yeah, we changed our attachment to work through our asset host for the CDN, but didn't see a way to get ActiveStorage to use that, so that's a separate issue.