# durable-objects
  • h

    HardAtWork

    04/15/2023, 1:16 PM
    Doesn’t it also stop when you begin streaming a Response
  • h

    HardAtWork

    04/15/2023, 1:16 PM
    Or does that only apply for regular Workers
  • u

    user6251

    04/15/2023, 1:17 PM
    awesome, thank you for your explanations, @Unsmart | Tech debt and @zegevlier, very helpful!
  • t

    Tino

    04/17/2023, 7:17 AM
    Data is stable after it was written. The problem is that I am moving legacy environments to new wrangler2 workers. The old Durable Object is stuck with the legacy environment names, so I am basically trying to move its data to a new one, same script, single DO (well, 2 of them, since there are 2 legacy dash environments)
  • e

    Erwin

    04/17/2023, 3:03 PM
    I don’t think you would need to migrate the data for that, but that would be better answered by someone who has a much better idea about wrangler1 vs wrangler2 environments etc.
  • u

    user6251

    04/19/2023, 1:33 PM
    I am currently evaluating Durable Objects as an alternative to Queues and Pub/Sub. Does the below solution make sense for our situation?
    1. We have 1000s of concurrent data streams that would ideally be kept separate from each other.
    2. Each data stream sends messages (on average 1 every 5 seconds, each under 1 KB).
    3. Every x seconds (e.g. 300) or y messages (e.g. 100), all the data from a data stream needs to be forwarded to a downstream consumer.
    4. The consumer reads all the messages, aggregates them, and writes the data to a downstream consumer.
    In other words, 1000s of data streams fill up their dedicated buckets with data, and every time a bucket is full (y messages) or time is up (x seconds), a consumer takes the accumulated data and processes it. If we create one DO instance for each of our data streams and persist the data via the storage API, the data should be able to accumulate across requests, even if they are minutes apart, I think. And by counting the messages while they are being added, and using the alarm feature to wake up the DO instance every x seconds, we should be able to write the accumulated data to a downstream consumer based on the number of messages or elapsed time, right? To those with experience with DO, does this make sense? 😉 If my assumptions are correct, DO looks like the perfect solution. Queues wouldn't be able to separate the data (100 queues max), and Pub/Sub wouldn't be able to aggregate, because there is no support for non-streaming pulls yet, I think.
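[Editor's note: the design above (per-stream DO buffering messages, flushing on a count or time threshold) can be sketched as below. This is a minimal sketch, not a confirmed implementation: the real Durable Object storage and alarm APIs are replaced with a tiny in-memory stub so the logic runs standalone, and `MAX_MESSAGES`, the `forward` callback, and all names are assumptions. In a real DO, `push` would be the `fetch` handler, the time-based trigger would use `storage.setAlarm()`, and `alarm()` would also call `flush()`.]

```ts
// y-threshold: flush after this many buffered messages. The x-threshold
// (e.g. 300 s) would be handled by storage.setAlarm() in a real DO.
const MAX_MESSAGES = 100;

// In-memory stand-in for the Durable Object storage API (assumption:
// real code would use state.storage, which has a similar surface).
class MemoryStorage {
  private data = new Map<string, unknown>();
  async get<T>(key: string): Promise<T | undefined> {
    return this.data.get(key) as T | undefined;
  }
  async put(key: string, value: unknown): Promise<void> {
    this.data.set(key, value);
  }
  async list(prefix: string): Promise<Map<string, unknown>> {
    return new Map([...this.data].filter(([k]) => k.startsWith(prefix)));
  }
  async deleteAll(): Promise<void> {
    this.data.clear();
  }
}

class StreamBuffer {
  constructor(
    private storage: MemoryStorage,
    // Hypothetical downstream consumer callback.
    private forward: (batch: unknown[]) => Promise<void>,
  ) {}

  // One call per incoming message (the DO's fetch handler in practice).
  async push(msg: string): Promise<void> {
    // Persist each message so it survives eviction between requests.
    const count = ((await this.storage.get<number>("count")) ?? 0) + 1;
    await this.storage.put(`msg:${count}`, msg);
    await this.storage.put("count", count);
    if (count >= MAX_MESSAGES) await this.flush(); // size-based trigger
  }

  // Also what alarm() would call for the time-based trigger.
  async flush(): Promise<void> {
    const msgs = await this.storage.list("msg:");
    await this.forward([...msgs.values()]);
    await this.storage.deleteAll(); // start the next batch empty
  }
}
```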
  • h

    HardAtWork

    04/19/2023, 1:37 PM
    Yeah, that idea looks good to me
  • u

    user6251

    04/19/2023, 1:38 PM
    Awesome, thank you! 🙂
  • u

    user6251

    04/19/2023, 1:41 PM
    Also, would you guys recommend using something like https://github.com/kwhitley/itty-durable or would it be smarter to start without that abstraction layer?
  • h

    HardAtWork

    04/19/2023, 1:44 PM
    It's up to you. Both approaches should work fine; it is just a difference in how you build them.
  • u

    user6251

    04/19/2023, 3:20 PM
    I'm new to DO and best learn by example. I already found this (https://github.com/saibotsivad/cloudflare-durable-object-sessions/blob/main/src/objects/user.js) helpful. If anybody here knows of other good examples, please share. Similar to the example, we intend to store user session data.
  • j

    john.spurlock

    04/19/2023, 3:42 PM
    seeing a bunch of `Network connection lost` from eyeballs to DOs in the last few minutes...
  • j

    john.spurlock

    04/19/2023, 3:42 PM
    known outage?
  • b

    Ben Caimano

    04/19/2023, 3:45 PM
    Yep, we should have a status page up soon.
  • j

    john.spurlock

    04/19/2023, 3:46 PM
    thanks!
  • b

    Ben Caimano

    04/19/2023, 3:58 PM
    https://www.cloudflarestatus.com/incidents/4h34s9dst3g6
  • c

    ckoeninger

    04/19/2023, 4:06 PM
    @john.spurlock impact should have been around 15:35-15:45, you see anything outside of that time frame? anything other than `Network connection lost`?
  • j

    john.spurlock

    04/19/2023, 4:10 PM
    let's see: nope, I think that was it. Sudden spike of unreachability between eyeballs from various colos -> DOs with `network connection lost` even after retries, then it cleared up
  • t

    Tarnadas

    04/19/2023, 7:09 PM
    Hey, I have an issue with DOs where I send multiple requests to the same DO. For some reason requests get cancelled and I don't know why. In my prior implementation I was awaiting responses from DOs, but I need to handle lots of data and had performance issues due to the many fetch requests to DOs. I am now trying to await as rarely as possible, but I definitely await all requests before my worker returns its final response
  • t

    Tarnadas

    04/19/2023, 7:15 PM
    hm I might have an idea. I also create new instantiations of DO stubs every time before I make a fetch request. I assume this might invalidate the old fetch? So if I do something like this:
    ```ts
    while (true) {
      const addr = c.env.LIQUIDITY.idFromName(cancelOrder.order.pair_id);
      const obj = c.env.LIQUIDITY.get(addr);
      // no await here
      obj.fetch(/* ... */);
    }
    ```
    this will cancel the old fetch?
  • u

    Unsmart | Tech debt

    04/19/2023, 7:25 PM
    you should always await the fetch otherwise requests will be cancelled
  • t

    Tarnadas

    04/19/2023, 7:33 PM
    but what is meant with the ordering then below here? https://developers.cloudflare.com/workers/runtime-apis/durable-objects/#object-stubs
  • u

    Unsmart | Tech debt

    04/19/2023, 7:34 PM
    That's just about ordering requests; if you don't await a request it will get cancelled.
  • t

    Tarnadas

    04/19/2023, 7:35 PM
    but if I await them anyway, they are always in order? what's the purpose of this explanation then?
  • t

    Tarnadas

    04/19/2023, 7:38 PM
    "When you make multiple calls to the same stub, it is guaranteed that the calls will be delivered to the remote Object in the order in which you made them" — if I await requests to the same stub, they are obviously in order. This sentence doesn't make any sense if I can't send multiple requests without awaiting, or am I missing something?
  • h

    HardAtWork

    04/19/2023, 8:01 PM
    Basically, requests will be processed in the order they are sent, awaited or not. The issue is, once the initial request to the Worker stops being processed, it will immediately cancel all of the requests that haven't completed yet. If you want to run as many requests in parallel as possible, you could try placing the promises into an array, and then awaiting them at the end, like so:
    ```ts
    const promiseArray: Promise<Response>[] = [];
    while (true) {
      const addr = c.env.LIQUIDITY.idFromName(cancelOrder.order.pair_id);
      const obj = c.env.LIQUIDITY.get(addr);
      // no await here
      promiseArray.push(obj.fetch(...));
      // Some exit condition must exist
    }
    await Promise.all(promiseArray);
    ```
  • t

    Tarnadas

    04/19/2023, 8:09 PM
    oh yeah I actually do this, but I have some additional chaining logic. Hm idk I need to investigate further. Will get back to you
  • t

    Tarnadas

    04/19/2023, 8:23 PM
    ok, fixed something with the chaining, but it still seems like I can only make around ~10 requests in parallel before they randomly start to get cancelled. Maybe I'll just limit to some max number of parallel requests and await in between
  • e

    erik-beus

    04/20/2023, 8:29 AM
    Hi 🙂 I'm working with DOs and delayed `alarm()`s. I have some stuff saved in memory on my DO that I want to upload after a period of time with the `alarm` function. However, the `alarm` function doesn't seem to have access to the DO instance when accessing `this.myVariable`, for instance (which I guess makes sense). Instead, I'm trying to invoke the DO over HTTP (using fetch) from the `alarm` function, but I keep getting a `403 Forbidden` response. Do you know if there's a limitation in place so that an `alarm` function can't call its own DO using fetch? I know that it's possible to use the DO's `state.storage` to save data, but in our case we're exceeding the memory limits for the storage and hence we keep it in memory (only for 10 seconds at a time though). I appreciate any help on this 🙂
  • h

    HardAtWork

    04/20/2023, 8:31 AM
    The fact that the `alarm()` function doesn't have access to `this.myVariable` means that the DO has recently been restarted, and thus does not have it in memory. If it isn't saved to `state.storage`, then there isn't anything you can do to recover.
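[Editor's note: the "persist, then rehydrate" pattern implied by this answer can be sketched as below: any state that `alarm()` needs must be mirrored to storage whenever it changes, because the instance may restart (and lose its memory) before the alarm fires. This is a sketch under assumptions: the storage stub and the `upload` callback stand in for the real `state.storage` API and downstream call, and all names are hypothetical.]

```ts
// Minimal in-memory stand-in for the Durable Object storage API (assumption).
class MemoryStorage {
  private data = new Map<string, unknown>();
  async get<T>(key: string): Promise<T | undefined> {
    return this.data.get(key) as T | undefined;
  }
  async put(key: string, value: unknown): Promise<void> {
    this.data.set(key, value);
  }
  async delete(key: string): Promise<void> {
    this.data.delete(key);
  }
}

class Uploader {
  private pending: string[] | undefined; // in-memory cache, lost on restart

  constructor(
    private storage: MemoryStorage,
    // Hypothetical downstream upload callback.
    private upload: (items: string[]) => Promise<void>,
  ) {}

  async add(item: string): Promise<void> {
    this.pending = [...(this.pending ?? []), item];
    // Mirror every change to storage so alarm() can recover it after a restart.
    await this.storage.put("pending", this.pending);
  }

  async alarm(): Promise<void> {
    // After a restart this.pending is undefined; fall back to the durable copy.
    const items =
      this.pending ?? (await this.storage.get<string[]>("pending")) ?? [];
    if (items.length > 0) await this.upload(items);
    this.pending = undefined;
    await this.storage.delete("pending");
  }
}
```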