# durable-objects
  • Deleted User
    04/14/2021, 2:27 PM
    or could return as an object instead
  • Deleted User
    04/14/2021, 2:27 PM
    so you could:
    ```js
    const { client, server } = new WebSocketPair();
    ```
  • jed
    04/14/2021, 2:31 PM
    probably better not to break a GA API.
  • kristian
    04/14/2021, 2:48 PM
    noted! i'll pass that along to the folks implementing
  • GrygrFlzr
    04/14/2021, 2:56 PM
    doesn’t have to break the API if the old form is just left as deprecated and the new API is added
  • Deleted User
    04/14/2021, 3:05 PM
    yea - note that you could make both work at the same time
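
    (A hypothetical sketch of how both shapes could coexist; `DualWebSocketPair` is an invented name, not part of the runtime:)
    ```js
    // Invented illustration only: a pair whose instances satisfy both the
    // existing index-based destructuring and the proposed named properties.
    class DualWebSocketPair {
      constructor() {
        const [client, server] = Object.values(new WebSocketPair());
        this[0] = client;     // existing GA shape
        this[1] = server;
        this.client = client; // proposed shape
        this.server = server;
      }
    }

    // Old call sites keep working...
    const [c, s] = Object.values(new DualWebSocketPair());
    // ...and the proposed style works too.
    const { client, server } = new DualWebSocketPair();
    ```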
  • Deleted User
    04/14/2021, 3:12 PM
    ()
  • matt
    04/14/2021, 3:17 PM
    whoops, sorry about this. You could split your DO into another project, and have the calling worker be written in the service-worker format to get access to `request.cf`
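
    (A minimal sketch of that workaround; the `COUNTER` binding name and the forwarding header are assumptions, not from this thread:)
    ```js
    // Service-worker-format calling worker: bindings are globals here, and
    // request.cf is available on incoming requests.
    addEventListener('fetch', (event) => {
      event.respondWith(handle(event.request));
    });

    async function handle(request) {
      const id = COUNTER.idFromName('singleton'); // COUNTER: assumed DO binding
      const stub = COUNTER.get(id);
      // Ship the per-request cf fields along explicitly for the DO to read.
      return stub.fetch(request.url, {
        method: request.method,
        headers: { 'X-Request-Cf': JSON.stringify(request.cf || {}) },
      });
    }
    ```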
  • matt
    04/14/2021, 3:17 PM
    we're still thinking about the best way to expose things that vary per-request to module worker handlers
  • matt
    04/14/2021, 3:22 PM
    This is a docs bug -- you need to remove the binding to your durable object (the entry in the `[durable_objects]` section in `wrangler.toml`) when you delete the class that it binds to; otherwise, creating the binding will fail because the class has been deleted (note that if any part of script upload fails, migrations are rolled back)
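
    (For illustration, a hypothetical `wrangler.toml` fragment; all class and binding names are invented:)
    ```toml
    # Deleting MyDurableObject requires dropping its binding in the same
    # upload, or the upload fails and the migration is rolled back.

    [durable_objects]
    bindings = [
      # { name = "MY_DO", class_name = "MyDurableObject" }  # remove with the class
      { name = "OTHER_DO", class_name = "OtherObject" },
    ]

    [[migrations]]
    tag = "v2"
    deleted_classes = ["MyDurableObject"]
    ```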
  • alex.b
    04/14/2021, 3:56 PM
    So I did a bit of testing today using the chat room example. I basically added a `const inMemoryStorage = []` array and pushed a `new Uint8Array(5000000)` (5 MB) to it after every received message, and there was seemingly no limit to how big it would get (I surpassed 128 MB, if my code was working right). Also, I couldn't get two rooms to share the `inMemoryStorage` array. Is this just a fluke, or does each instance of a durable object get its own worker/runtime instance?
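
    (A sketch of the experiment as described; the chat-room plumbing is elided and the class name is assumed:)
    ```js
    // Module-level array: shared by everything in this isolate, grows forever.
    const inMemoryStorage = [];

    export class ChatRoom {
      async fetch(request) {
        // Each received message pins another ~5 MB that is never released.
        inMemoryStorage.push(new Uint8Array(5000000));
        const usedMiB = (inMemoryStorage.length * 5000000) / (1024 * 1024);
        return new Response(`holding ~${usedMiB.toFixed(1)} MiB in memory`);
      }
    }
    ```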
  • matt
    04/14/2021, 4:07 PM
    going to dm you
  • matt
    04/14/2021, 4:09 PM
    as far as sharing the `inMemoryStorage` goes, there are certain conditions where multiple durable objects will share global scope, but it's not something you should rely on. you should have your rooms call each other
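
    (A sketch of rooms calling each other instead of sharing globals; the `ROOMS` binding name and the URL are assumptions:)
    ```js
    export class ChatRoom {
      constructor(state, env) {
        this.state = state;
        this.env = env; // env carries the namespace binding for other rooms
      }

      async notifyRoom(roomName, message) {
        // Explicit request to the other room's stub; any shared state
        // travels in the request itself, never through global scope.
        const id = this.env.ROOMS.idFromName(roomName);
        const stub = this.env.ROOMS.get(id);
        return stub.fetch('https://room/broadcast', {
          method: 'POST',
          body: JSON.stringify({ message }),
        });
      }
    }
    ```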
  • Reaver
    04/14/2021, 4:26 PM
    Anyone have an idea if Workers KV will eventually start using Durable Objects?
  • alex.b
    04/14/2021, 4:37 PM
    👍 thanks for the clarification. The main reason I was concerned about sharing memory is to figure out what happens when the shared memory limit is reached. Would every object be killed, then reinitialised? If so, mine would all then make requests to an external server and essentially refill their memory, so they would get killed again shortly after. Is there any system that prevents, or at least makes it unlikely, that all the objects get restarted together on the same instance?
  • GrygrFlzr
    04/14/2021, 4:38 PM
    Wouldn’t that kill part of the performance advantages?
  • eidam | SuperSaaS
    04/14/2021, 5:17 PM
    Someone from the team mentioned that DOs might eventually be a backend for KV (at least part of it; can't remember the details)
  • Kevin W - Itty
    04/14/2021, 7:38 PM
    is the storage (like the keys) locally scoped to a single DO instance or is that shared?
  • Kevin W - Itty
    04/14/2021, 7:39 PM
    (hoping/suspecting the former)
  • brett
    04/14/2021, 7:43 PM
    it's scoped to each DO instance
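
    (For illustration, a minimal counter showing that scoping; names are arbitrary. Two instances can both use the key `count` without ever seeing each other's value:)
    ```js
    export class Counter {
      constructor(state) {
        this.state = state;
      }

      async fetch(request) {
        // "count" is private to this instance's transactional storage;
        // another Counter instance reads and writes its own "count".
        let count = (await this.state.storage.get('count')) || 0;
        count += 1;
        await this.state.storage.put('count', count);
        return new Response(String(count));
      }
    }
    ```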
  • Greg-McKeon
    04/14/2021, 8:21 PM
    Hey Alex, yep, when you hit the memory limit your Object will be allowed to finish processing its current Request and then be killed. It will then be re-instantiated on the next request to it, with its in-memory state reset.

    To your earlier question, we can't scale out a Durable Object because it is by definition a singleton, so you're hard-limited to 128 MB of memory per Object. This is why you should choose Object IDs that represent the smallest unit of state possible in your application.

    As Matt said, you shouldn't share global scope across DOs, and you shouldn't need to worry about what instance a DO is actually running on. If you got into a crash loop like you described while staying under 128 MB per DO, we'd consider it a bug. If you used more than 128 MB per DO, you would observe this behavior.
  • Greg-McKeon
    04/14/2021, 8:21 PM
    For runtime limits, websocket establishment counts as a request. Future messages do not count as a request. Durable Objects are currently not being billed for, and we're still working out the best way to price WebSockets with them (would love feedback from you all on this!)
  • Greg-McKeon
    04/14/2021, 8:34 PM
    The issue we have today with websockets pricing in a Durable Object is that DO charges for duration in wall-clock time, so a mostly-idle websocket will be expensive. We've discussed a number of options:
    * We know that we want to share the duration charge across multiple WebSockets connected to the same DO. This reduces the cost for a DO with many WebSockets connected, since as you connect more WebSockets, the likelihood that you're actually processing a message increases.
    * We've considered adding a "hibernation" API, where after a certain amount of inactive time (on the order of tens of seconds) a DO hibernates its WebSockets and billing is paused. When a new WebSocket message comes back in, the WebSocket connection is delivered back to the DO via the Durable Object constructor.
    * We've also considered billing for Durable Objects based on active CPU time, plus a small per-minute WebSocket connection charge whenever a WebSocket connection is open to a Durable Object. This charge would be much smaller than the current duration charge.
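
    (Purely speculative at the time of this discussion: a sketch of what the hibernation option might look like from the DO's side. The method names here are assumptions, not a shipped API:)
    ```js
    export class Room {
      constructor(state) {
        this.state = state;
      }

      async fetch(request) {
        const [client, server] = Object.values(new WebSocketPair());
        // Hand the socket to the runtime so the isolate can be evicted
        // (and billing paused) while the connection sits idle.
        this.state.acceptWebSocket(server);
        return new Response(null, { status: 101, webSocket: client });
      }

      // Invoked on a fresh instance after a cold start; `ws` is the
      // hibernated connection handed back by the runtime.
      async webSocketMessage(ws, message) {
        ws.send(`echo: ${message}`);
      }
    }
    ```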
  • vans163
    04/14/2021, 9:08 PM
    I think #3 makes the most sense; hibernation seems like it would affect realtime performance. Say your use case is stock alerts: the DO waking up from hibernation could take an extra second, for example, causing unneeded performance regressions. Hibernation is usually used to compact (major GC) something running that seldom gets messages, at a large cost to start/stop it. Since DOs are limited to 128 MB anyway, I don't think hibernation would win much of anything.
  • jed
    04/14/2021, 9:09 PM
    this is the closest i've seen: https://discord.com/channels/595317990191398933/773219443911819284/810713916283551746
  • Kevin W - Itty
    04/14/2021, 9:16 PM
    so, adding tests and fleshing out the docs on itty-durable... anyone have a solution for mocking DO stubs/objects (or really just how to test around them in general)?
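
    (Not an official answer, but one workable approach: a stub is just an object with a `fetch()` method, so a test can swap in a hand-rolled fake namespace. All names below are invented:)
    ```js
    function makeFakeNamespace(handler) {
      return {
        idFromName: (name) => name,
        newUniqueId: () => 'test-id',
        get: (id) => ({
          // The fake stub routes straight to an in-process handler.
          fetch: (url, init) => handler(new Request(url, init), id),
        }),
      };
    }

    // Usage: the worker under test sees env.ROOMS as a real-looking
    // namespace, but every stub.fetch() hits this local function.
    const env = {
      ROOMS: makeFakeNamespace(async (request, id) => {
        return new Response(`fake room ${id} saw ${request.method}`);
      }),
    };
    ```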
  • Kevin W - Itty
    04/14/2021, 9:17 PM
    also, exposing a helper method (the one the withDurables middleware uses) to let you have the same DO stub interface within another DO (as in, executing methods directly off the stub)
  • kenton
    04/14/2021, 10:21 PM
    Hmm it sounds like you're thinking of a different, more-specific meaning of "hibernation" than we intended. All we mean here is some way that the isolate can shut down while the WebSocket is idle, and be started back up on-demand when a message arrives. This would be an isolate "cold start" which in Workers takes only a few milliseconds.
  • vans163
    04/14/2021, 10:24 PM
    @User the fate of the TCP socket would be complex in that case, I think. This would totally work with HTTP/3 (QUIC over UDP) though, I think, because you would have to start the DO back up on the same machine, right? Unless you use some kind of outer machine as a holder/router/proxy of the websocket to the inside?
  • vans163
    04/14/2021, 10:26 PM
    I'm not actually sure if HTTP/3/QUIC has anything available for streaming, though. HTTP/2 totally bombed that aspect and forwent websockets completely.