# durable-objects
  • e

    Erwin

    09/29/2021, 4:23 AM
    What do you mean by “using cache?” Do you mean in-memory?
  • h

    habitat

    09/29/2021, 7:28 AM
I mean using the Workers Cache API
  • h

    habitat

    09/29/2021, 7:28 AM
the `cache.put(request, response)` method
  • e

    Erwin

    09/29/2021, 7:30 AM
    Ahh.. DOs don't run in every colo. So obviously depending on a million other things, there could be scenarios where you want to cache the output of a DO in the cache of the Worker's colo.
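A minimal sketch of the pattern being discussed here, assuming nothing about habitat's actual setup: a Worker that checks its own colo's cache before forwarding to the Durable Object, then stores the DO's response with `cache.put`. The binding name, cache key, and TTL are made up for illustration.

```ts
export interface Env {
  // Hypothetical Durable Object namespace binding, not from the conversation.
  COUNTER: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;

    // Check the cache of the colo this Worker is running in (assumes GET requests).
    const cached = await cache.match(request);
    if (cached) return cached;

    // Miss: route to the Durable Object, which may live in a distant colo.
    const id = env.COUNTER.idFromName(new URL(request.url).pathname);
    const stub = env.COUNTER.get(id);
    const response = await stub.fetch(request);

    // Make the headers mutable, set a TTL, and store a copy in this colo's cache.
    const toCache = new Response(response.body, response);
    toCache.headers.set("Cache-Control", "max-age=60");
    ctx.waitUntil(cache.put(request, toCache.clone()));
    return toCache;
  },
};
```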
  • h

    habitat

    09/29/2021, 7:32 AM
    If we assume the user calling that DO will always be nearest to the same colo, and the DO just returns something from persistent storage, is there any performance benefits of using the cache api?
  • h

    habitat

    09/29/2021, 7:34 AM
    basically what is the added latency of spinning up a DO + routing the message to it + persistent storage versus cache
  • h

    habitat

    09/29/2021, 7:35 AM
    Like I'm achieving 50-80ms requests for this particular DO for example
  • h

    habitat

    09/29/2021, 7:35 AM
    Cold start is 300ms+
  • e

    Erwin

    09/29/2021, 7:35 AM
    That depends greatly on how close the nearest colo with a DO is
  • e

    Erwin

    09/29/2021, 7:36 AM
    And what do you mean with cold start? The first time you create a DO for a particular user?
  • h

    habitat

    09/29/2021, 7:36 AM
    Yeah
  • h

    habitat

    09/29/2021, 7:36 AM
    or even if the DO has been evicted, there's a slight delay, no?
  • e

    Erwin

    09/29/2021, 7:36 AM
Yeah.. if you use `idFromName`, that can take a while because of the global coordination required
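For reference, a hedged sketch of the call Erwin is describing; the binding name and key are invented, but `idFromName`, `newUniqueId`, and `idFromString` are the actual namespace methods.

```ts
export interface Env {
  ROOM: DurableObjectNamespace; // hypothetical binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Deterministic id derived from a string. The first request sent to a
    // name-derived id has to verify globally that no other instance exists for
    // that name, which is the coordination cost mentioned above.
    const id = env.ROOM.idFromName("user-or-room-key");

    // Alternatives: env.ROOM.newUniqueId() creates an id that is unique by
    // construction (no global check on first contact), and idFromString()
    // rebuilds an id you previously saved with id.toString().
    const stub = env.ROOM.get(id);
    return stub.fetch(request);
  },
};
```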
  • e

    Erwin

    09/29/2021, 7:37 AM
Of all the overhead that comes with DOs, by far the most impactful is the latency between the Worker's colo and the DO's colo
  • e

    Erwin

    09/29/2021, 7:37 AM
    (Besides the initial creation based on a name that is)
  • e

    Erwin

    09/29/2021, 7:38 AM
    Right now, most DO clusters are in the US and Europe.. so average latency will be low there..
  • e

    Erwin

    09/29/2021, 7:38 AM
Whereas in the Southern Hemisphere, it gets to be more
  • h

    habitat

    09/29/2021, 7:39 AM
    I see, that makes sense. The distance between the worker and DO seems to be the deciding factor
  • h

    habitat

    09/29/2021, 7:39 AM
    so that's where the cache comes into play
  • e

    Erwin

    09/29/2021, 7:39 AM
    Yeah.. with the speed of light still being fast, but also rather finite 😉
  • e

    Erwin

    09/29/2021, 7:40 AM
    So yeah.. depending on your use-case, you could end up wanting to use the cache
  • h

    habitat

    09/29/2021, 7:41 AM
    I think I do want to use cache, but there's just one problem. It's critical in my particular use-case that the first load is fast regardless of location. I need to cache and potentially refresh/purge data in the cache
  • h

    habitat

    09/29/2021, 7:41 AM
    I haven't been able to find a suitable solution to this
  • h

    habitat

    09/29/2021, 7:42 AM
    I need to pre-populate the cache and then refresh/purge when changes are detected on the backend
  • h

    habitat

    09/29/2021, 7:42 AM
    The initial load needs to be sub-50ms
  • h

    habitat

    09/29/2021, 7:42 AM
    that is my main requirement
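A hedged sketch of the refresh/purge side of that requirement, to go with the cache-first read path sketched earlier. The `/refresh` route, query parameter, binding, and TTL are all assumptions; only `cache.delete` and `cache.put` are real Cache API calls. One caveat: the Workers cache is per-colo, so a delete here only evicts the copy in the data center handling the request, and there is no way to pre-warm every location ahead of its first visit, so a globally fast first load generally needs zone-level purge-by-URL or a different storage layer.

```ts
export interface Env {
  DATA: DurableObjectNamespace; // hypothetical binding
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname !== "/refresh") {
      return new Response("not found", { status: 404 });
    }

    // e.g. POST /refresh?path=/some/resource, called when the backend detects a change.
    const targetPath = url.searchParams.get("path") ?? "/";
    const cacheKey = new Request(new URL(targetPath, url.origin).toString());
    const cache = caches.default;

    // Evict the stale copy held in this colo (other colos keep theirs until expiry)...
    await cache.delete(cacheKey);

    // ...and immediately re-warm it from the Durable Object.
    const id = env.DATA.idFromName(targetPath);
    const fresh = await env.DATA.get(id).fetch(cacheKey);
    const toCache = new Response(fresh.body, fresh);
    toCache.headers.set("Cache-Control", "max-age=300");
    ctx.waitUntil(cache.put(cacheKey, toCache));

    return new Response("refreshed");
  },
};
```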
  • e

    Erwin

    09/29/2021, 7:43 AM
I would need to know a bit more about your requirements to give you a decent answer. But it gets to be a bit much to do that here. Can you DM me a paragraph or two about what you need to do and where the data is coming from?
  • e

    Erwin

    09/29/2021, 7:44 AM
Unless you can wait a little over a week and want to work with @User to present this as a case to me on our next Workers for Fun and Profit episode 😄
  • h

    habitat

    09/29/2021, 7:45 AM
    Yeah I might be able to drop in! Thanks for the help!