# durable-objects
  • b

    brett

    11/29/2022, 5:26 PM
    Yeah, if you're OK with losing a little, then you could set a timeout to persist state every N unit time or something
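    A minimal sketch of that periodic-persist idea, assuming a plain Durable Object class and the runtime's setInterval (the class name, storage key, and 10-second interval below are illustrative, not an official pattern):

    ```ts
    // Illustrative sketch: keep hot state in memory and flush it to durable
    // storage every PERSIST_INTERVAL_MS. An eviction or crash between flushes
    // can still lose up to one interval's worth of changes.
    const PERSIST_INTERVAL_MS = 10_000; // the "N unit time" from the message above

    export class Counter {
      private value = 0;
      private dirty = false;
      private timer: ReturnType<typeof setInterval> | null = null;

      constructor(private state: DurableObjectState) {
        // Load the last persisted value before the first request is processed.
        state.blockConcurrencyWhile(async () => {
          this.value = (await state.storage.get<number>("value")) ?? 0;
        });
      }

      async fetch(_request: Request): Promise<Response> {
        this.value++;
        this.dirty = true;

        // Start the periodic flush lazily; a pending timer does not guarantee
        // the object stays resident, which is exactly the caveat discussed here.
        this.timer ??= setInterval(() => {
          if (this.dirty) {
            this.dirty = false;
            void this.state.storage.put("value", this.value); // fire-and-forget write
          }
        }, PERSIST_INTERVAL_MS);

        return new Response(String(this.value));
      }
    }
    ```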
  • h

    HardAtWork

    11/29/2022, 6:08 PM
    It would be cool though if there were a way to fire a handler whenever the runtime knows that it will soon be evicted, even if that won’t cover every eviction case
  • n

    nclevenger

    11/29/2022, 6:32 PM
    is there ever an eviction while a request is active? If not, is there a way to extend that behavior a bit? like with ctx.waitUntil?
  • b

    brett

    11/29/2022, 6:35 PM
    I think our fear is that people will use it assuming it's reliable, but I do agree it'd be a nice to have. Another issue is how long to give someone to do extra work. Also if you mean just eviction because the object is idle, you could implement your own with a 30 second timer after the latest request, and debounce it anytime a new request comes in.
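    A minimal sketch of that debounce, assuming the flush target is the DO's own storage (the class name, key scheme, and 30-second window are illustrative):

    ```ts
    // Illustrative sketch of the debounced idle flush described above: every
    // request cancels the pending flush and restarts the clock, so the write
    // only happens once no new request has arrived for IDLE_MS. A crash or
    // eviction before the timer fires still loses the un-flushed writes.
    const IDLE_MS = 30_000;

    export class IdleFlusher {
      private hot = new Map<string, string>(); // in-memory working set
      private idleTimer: ReturnType<typeof setTimeout> | null = null;

      constructor(private state: DurableObjectState) {}

      async fetch(request: Request): Promise<Response> {
        const key = new URL(request.url).pathname;
        if (request.method === "PUT") {
          this.hot.set(key, await request.text());
        }

        // Debounce: reset the 30-second countdown on every request.
        if (this.idleTimer !== null) clearTimeout(this.idleTimer);
        this.idleTimer = setTimeout(() => {
          // Batched put; subject to the storage API's per-call key limit.
          void this.state.storage.put(Object.fromEntries(this.hot));
        }, IDLE_MS);

        return new Response(this.hot.get(key) ?? "", { status: 200 });
      }
    }
    ```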
  • n

    nclevenger

    11/29/2022, 6:36 PM
    That's exactly what I was thinking
  • b

    brett

    11/29/2022, 6:36 PM
    Depends how you define eviction. Since hardware can crash, you can't always do stuff when a DO instance is being killed. The only other time an instance becomes "broken" is if there is something like an error accessing storage, or if you do a code update and try to use storage.
  • n

    nclevenger

    11/29/2022, 6:38 PM
    I guess I mean eviction not from failure like hardware crashes, but standard eviction behavior like when worker updates are deployed
  • n

    nclevenger

    11/29/2022, 6:39 PM
    The use case I was playing with was to keep in-memory state with a 20-30 second timer in ctx.waitUntil ... and write to KV if another request doesn't come in ... for very large & changing datasets the pricing of DOs is really expensive - so for some of these use cases we're writing to KV from DOs ...
  • u

    Unsmart | Tech debt

    11/29/2022, 6:39 PM
    Maybe the function it calls can just be like unsafeEviction if you guys do add it
  • b

    brett

    11/29/2022, 6:42 PM
    Yeah I think the timer thing I mentioned above would cover you pretty well except for code updates. We went back and forth on updates when we were designing the system
  • b

    brett

    11/29/2022, 6:42 PM
    It could be cool to have a "let the old instance finish up work before starting the new instance" but that does imply requests would need to block until that work finished
  • s

    Skye

    11/29/2022, 6:44 PM
    If I know that all of my requests are hitting essentially 1 colo, it would be faster to have a durable object + some cache store things in that colo over KV, right?
  • h

    HardAtWork

    11/29/2022, 6:45 PM
    Probably, as long as that colo supports DOs, the values are small, and you wouldn't overload the DO with unique requests.
  • s

    Skye

    11/29/2022, 6:47 PM
    All I'd use the DO for would be a cache essentially - the rest of the processing could happen in the normal worker. But yeah I'm storing like <1kb values
  • s

    Skye

    11/29/2022, 6:47 PM
    (as for "why not the cache api?" - the less common writes can come from anywhere, so I need it to update in this colo asap)
  • n

    nclevenger

    11/29/2022, 6:47 PM
    If all invocations were in a single colo, then would you even need a DO?
  • n

    nclevenger

    11/29/2022, 6:48 PM
    i guess if you have conflicting writes ...
  • h

    HardAtWork

    11/29/2022, 6:48 PM
    A DO in the same colo would probably respond faster than a KV origin in a completely different colo
  • s

    Skye

    11/29/2022, 6:48 PM
    Writes (much less common) can come from anywhere, reads will come from just one place
  • s

    Skye

    11/29/2022, 6:49 PM
    I estimate the read:write ratio to be something like 100:1
  • b

    brett

    11/29/2022, 6:57 PM
    I assume you're OK with the cache value being evicted
  • b

    brett

    11/29/2022, 6:57 PM
    It's even possible (though unlikely) your DO is in Colo 1, has set a Cache value of A. Then that colo has issues and we move the DO to colo 2, where it sets a Cache value of B. Then the DO moves back to Colo 1 where it reads a Cache value of A.
  • b

    brett

    11/29/2022, 6:59 PM
    And your normal Worker requests can be routed to different places for many different reasons, so I don't know if you can really depend on always hitting the same colo as your DO just because you observe that to be the case right now
  • s

    Skye

    11/29/2022, 7:02 PM
    I'm basing this on the fact that about 99% of requests go to IAD at the moment, so putting my cache in a DO in IAD would be beneficial for performance over just using KV.
  • s

    Skye

    11/29/2022, 7:03 PM
    And yeah, eviction is fine, I don't expect these values to change often, but when they do, I'd like the cache up to date asap (which is why the cache API is not ideal - the update could come from anywhere, as opposed to the read, just IAD)
  • b

    brett

    11/29/2022, 7:14 PM
    We have a number of colos in IAD that will have their own colo caches
  • b

    brett

    11/29/2022, 7:14 PM
    It seems like your normal Worker could try the cache, fall back to the DO, then put to Cache if it wasn't there? Depends on the details, I guess
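    A minimal sketch of that read path, assuming a Worker with a Durable Object binding named CACHE_DO; the binding name, the "global" object name, and the 60-second TTL are assumptions:

    ```ts
    // Illustrative read-through pattern: try the colo-local cache, fall back
    // to the Durable Object, then back-fill the cache without blocking the
    // response.
    interface Env {
      CACHE_DO: DurableObjectNamespace;
    }

    export default {
      async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
        const cache = caches.default;

        // 1. Colo-local cache first (fastest when ~all reads hit one colo).
        const cached = await cache.match(request);
        if (cached) return cached;

        // 2. Miss: ask the Durable Object, which holds the authoritative value.
        const stub = env.CACHE_DO.get(env.CACHE_DO.idFromName("global"));
        const fresh = await stub.fetch(request);

        // 3. Back-fill this colo's cache. Each colo caches independently, so a
        //    write elsewhere only becomes visible here once this entry expires
        //    or is overwritten.
        if (request.method === "GET" && fresh.ok) {
          const toCache = new Response(fresh.clone().body, fresh);
          toCache.headers.set("Cache-Control", "max-age=60");
          ctx.waitUntil(cache.put(request, toCache));
        }

        return fresh;
      },
    };
    ```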
  • s

    Skye

    11/29/2022, 7:15 PM
    It's a bit of a tricky one, isn't it 😅
  • s

    Skye

    11/29/2022, 7:15 PM
    A very specific need, will have to think more about it
  • n

    nclevenger

    11/29/2022, 7:20 PM
    Oh interesting ... how many colos have independent caches? Would that tend to be the same colos that have enough redundancy to also support DOs? Or is that an outlier given how core IAD is?