# durable-objects
  • b

    brett

    12/29/2022, 2:53 PM
    Yeah, so long as the binding they use refers to the same DO Namespace, and they speak to the same object(s)
  • s

    sks

    12/29/2022, 2:54 PM
    Okay, understood. Thanks for the input.
  • b

    brett

    12/29/2022, 2:54 PM
    DOs are, for the most part, normal Workers; the Cache API should work the same. Are these trying to write to the cache, or read from it? My only guess about your problem is that the DO may not be running in the same datacenter as the Worker where you previously (?) wrote to the Cache. Cache is per-datacenter.
  • s

    sks

    12/29/2022, 2:56 PM
    How do I forward the request? Is there demo code somewhere? I found the chat demo and wrote this code:
    export default {
        async fetch(request, env, ctx) {
            // Route every request to a single object named "default"
            // in the TestObject namespace.
            let id = env.TestObject.idFromName("default");
            let stub = env.TestObject.get(id);
            return await stub.fetch(request);
        },
    };
    Is this the correct way to proxy it?
  • b

    brett

    12/29/2022, 2:57 PM
    Yeah, that's right. You could inspect the request or do some authentication or other work beforehand, but the important bit is that you eventually return the response from the DO. Even if it's a WebSocket that remains open all day long, the Worker in the middle will turn into a dumb/free proxy as soon as you do that.
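    A minimal sketch of what brett describes: doing a quick check in the fronting Worker before handing the request to the Durable Object. The header check and error response here are hypothetical, not from the thread.
    export default {
        async fetch(request, env, ctx) {
            // Hypothetical auth check before forwarding to the DO.
            const token = request.headers.get("Authorization");
            if (!token) {
                return new Response("Unauthorized", { status: 401 });
            }

            // Forward to the single object; the returned Response (including
            // a 101 WebSocket upgrade) is passed straight through to the client.
            const id = env.TestObject.idFromName("default");
            const stub = env.TestObject.get(id);
            return stub.fetch(request);
        },
    };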
  • s

    sks

    12/29/2022, 3:00 PM
    I see, I assumed this was the case, so I wrote my code like this:
    export default {
        async fetch(request, env, ctx) {
            // Forward everything to a single object named "default".
            let id = env.TestObject.idFromName("default");
            let stub = env.TestObject.get(id);
            return await stub.fetch(request);
        },
    };


    /* Durable Object */
    export class TestObject {

        constructor(state, env) {
            this.state = state;
            this.env = env;
            // In-memory only: reset whenever the object is evicted and recreated.
            this.count = 0;
            this.connections = [];
        }

        async fetch(request) {
            const upgradeHeader = request.headers.get('Upgrade');
            if (upgradeHeader && upgradeHeader.toLowerCase() === 'websocket') {
                const pair = new WebSocketPair();
                this.handleSocket(pair[1]);
                return new Response(null, { status: 101, webSocket: pair[0] });
            } else {
                return new Response("Count HTTP: " + this.count);
            }
        }

        handleSocket(socket) {
            socket.accept();
            this.connections.push(socket);
        }
    }
    This is just test code; I'm only today beginning my journey with Durable Objects.
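    As a side note, the test class above stores sockets in `this.connections` but never sends anything to them. A minimal sketch of a broadcast helper (the method name and message shape are made up for illustration) might look like:
    // Inside TestObject: push a message to every connected socket,
    // dropping sockets that have already closed.
    broadcast(message) {
        const text = JSON.stringify(message);
        this.connections = this.connections.filter((socket) => {
            try {
                socket.send(text);
                return true;
            } catch (err) {
                return false; // socket was closed; remove it
            }
        });
    }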
  • s

    sks

    12/29/2022, 3:00 PM
    Thank you very much @brett for the help!
  • r

    ryucode

    12/29/2022, 3:02 PM
    Hey guys, I'd like to ask for some advice on a design decision with Durable Objects. I'm currently building a simple CRM application that uses a NoSQL database for all the user-related data. The database itself follows a single-table design pattern, so all the data (including relational data) is structured so it can be accessed without making joins etc. My question is: is it a good idea to ditch the database and use Durable Objects alone, keeping all of a user's data (including relational data) within one instance of an object, meaning each user gets their own unique object?
  • b

    brett

    12/29/2022, 3:04 PM
    Not knowing much about your app, more objects are better. Object per user or per domain-object is the preferred default, yeah.
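    A sketch of the object-per-user routing brett suggests, assuming a `USERS` Durable Object binding and a user ID taken from the URL path (both hypothetical, not from the thread):
    export default {
        async fetch(request, env) {
            // e.g. /users/1234/... -> one Durable Object per user ID
            const url = new URL(request.url);
            const userId = url.pathname.split("/")[2];
            if (!userId) {
                return new Response("Missing user ID", { status: 400 });
            }
            const id = env.USERS.idFromName(userId);
            return env.USERS.get(id).fetch(request);
        },
    };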
  • r

    ryucode

    12/29/2022, 3:12 PM
    thanks @brett, do you see any issue with using Durable Objects as a drop-in replacement for a database? Like, in terms of: if something goes wrong, can we somehow take backups of the objects? Or, as an admin, can I list and inspect them to see what data each holds?
  • b

    brett

    12/29/2022, 3:14 PM
    You can list objects by ID, and then you'd need to contact each of them via some admin API if you want to introspect them. We do our own backups but we don't expose a backup API to users currently. Things are relatively primitive today, you may have to build some of your own tooling. But they are definitely designed to replace a typical app backed by a DB.
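    One way to do the "contact each of them via some admin API" part is a self-built admin route on the object that dumps its stored data. Everything here (the route name, the shape of the handler) is hypothetical, not from the thread:
    // Hypothetical admin route inside a Durable Object's fetch handler:
    // dump everything in storage so an operator can inspect or back it up.
    // Assumes the constructor stored `state` as this.state.
    async fetch(request) {
        const url = new URL(request.url);
        if (url.pathname === "/admin/dump") {
            const entries = await this.state.storage.list(); // returns a Map
            return new Response(JSON.stringify(Object.fromEntries(entries)), {
                headers: { "Content-Type": "application/json" },
            });
        }
        // ...normal request handling would go here...
        return new Response("Not found", { status: 404 });
    }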
  • r

    ryucode

    12/29/2022, 3:17 PM
    got that, thank you @brett! 🙌
  • d

    davidfm

    12/29/2022, 3:27 PM
    Thanks @brett for the quick response.
    - The `fetch` done from the DO alarm handler is performed with cache options set to `cacheEverything: true` and a specific `cacheKey`. When the `fetch` completes, I expect it to write to the Cache with that `cacheKey`.
    - I'd need another Worker, running independently of the DO, to find the same resource in the cache using the same `cacheKey` and not hit the origin. I somewhat expect the cache to propagate/sync globally eventually... or would it stay local to the datacenter? Will there be cache misses in each and every datacenter (one origin call per datacenter)? What's the best way, in your view, for the other Worker to find the element already fetched by the DO alarm?
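    For reference, the alarm-side setup described above corresponds roughly to the sketch below; `ASSET_URL` and `CACHE_KEY` are placeholders, and as brett notes next, whatever gets cached this way stays local to the datacenter that ran the fetch.
    // In the DO's alarm handler: fetch the origin and ask Cloudflare to cache
    // the response under a specific key.
    async alarm() {
        await fetch(ASSET_URL, {
            cf: { cacheEverything: true, cacheKey: CACHE_KEY },
        });
    }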
  • b

    brett

    12/29/2022, 3:38 PM
    Cache is not synced with any other colos. If you need to reliably read some data from multiple different colos then you could use Workers KV, hit a DO, etc.
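    A sketch of the KV route brett mentions, assuming the DO stored `env` in its constructor and has a `CACHE_KV` KV namespace binding; the binding, key name, and TTL are all illustrative assumptions:
    // In the DO alarm handler: write the fetched body into KV.
    async alarm() {
        const res = await fetch(ASSET_URL); // ASSET_URL is a placeholder
        await this.env.CACHE_KV.put("latest-resource", await res.text(), {
            expirationTtl: 3600, // keep it around for an hour, for example
        });
    }

    // In the independent Worker, readable from any datacenter:
    const body = await env.CACHE_KV.get("latest-resource");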
  • d

    davidfm

    12/29/2022, 5:16 PM
    Thanks a lot @brett, this clarifies things. We will have to make the Worker read from KV/R2 or similar... and have the DO put the resource there. Is there a preferable, clean way to push it to the cache too from the Worker? The Cache API's `cache.put`? Or again a `fetch` with `cf` object options (to the DO, for example...)? In the limit, we need the resource pre-fetched and the cache pre-warmed from the DO's request/response, ahead of the Worker request coming in and asking for it (it should find it somewhere other than the origin). But ideally the resource should end up in the cache, whether or not it initially needs to be in KV. Hopefully this makes sense, and thanks for the prompt responses.
  • c

    ckoeninger

    12/29/2022, 5:19 PM
    KV uses the same cache as the Cache API, so it's redundant to try to manually cache things coming out of KV.
  • a

    aarhus

    12/29/2022, 10:42 PM
    Have been working with itty-durable and do-taskmanager and have created my own base class. Might be useful: https://gist.github.com/aarhus/d7d6d7e1778367994f9e33c37a08074e If you notice anything glaringly wrong, please let me know! 🙂
  • s

    sks

    12/30/2022, 6:06 AM
    Are all Durable Objects allocated only 128 MB of memory? I'm looking at the chatroom example, and I can see WebSocket connections to each individual client being active at all times. Doesn't that mean that if I have a million people connected to the same chatroom (which would mean the same Durable Object), I would have to manage all million connections inside that Durable Object, which would take both a lot of time to execute and a lot of memory? If that happens, how does Cloudflare handle the increasing memory? Does the 128 MB limit ever increase? Do the WebSocket connections drop? Is there an example that can help with this use case?
  • s

    sks

    12/30/2022, 6:16 AM
    Suppose I am building a broadcast-type Durable Object, such as, let's say, a stock market chat. Around 1 million people are connected via WebSocket to the DO. It is all read-only, and only around 1 request per second would write to the DO. When data is updated (around 1 request per second), all the WebSocket connections receive the data. What should I do in this use case? Are there ways to make sure the Durable Object won't fail or be evicted? I have read the limits (the documentation says around 100 req/s is an estimated limit). I was thinking of using KV for this high-read use case, but I am not sure KV would be a good fit for stock-market-style data, which needs to be delivered with precise timing.
  • h

    HardAtWork

    12/30/2022, 8:42 AM
    You could use a fan-out model. Instead of clients connecting via WS directly to the DO with the data, they connect to a proxy, which then connects to the data source.
  • s

    sks

    12/30/2022, 8:43 AM
    So the proxy would be another DO. Which would handle a small number of clients?
  • h

    HardAtWork

    12/30/2022, 8:52 AM
    Yeah. You can probably do ~100 clients per proxy because, remember, every time a message is received, it has to be forwarded to all of the clients before the next message arrives. So you probably don't want to go much higher (though this also depends on how often events fire).
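    A rough sketch of the fan-out HardAtWork describes: the edge Worker spreads clients across N relay objects, the data-source object pushes each update to every relay, and each relay forwards it to its own sockets. The binding name, shard count, and /push mechanism are assumptions, not anything from the thread.
    const RELAY_COUNT = 100; // ~100 relays x ~100 clients each

    export default {
        async fetch(request, env) {
            // Spread incoming clients across the relay objects.
            const shard = Math.floor(Math.random() * RELAY_COUNT);
            const id = env.RELAY.idFromName(`relay-${shard}`);
            return env.RELAY.get(id).fetch(request);
        },
    };

    export class Relay {
        constructor(state, env) {
            this.sockets = [];
        }

        async fetch(request) {
            const url = new URL(request.url);

            // The data-source object POSTs each update to /push on every relay.
            if (url.pathname === "/push") {
                const update = await request.text();
                this.sockets = this.sockets.filter((ws) => {
                    try { ws.send(update); return true; } catch { return false; }
                });
                return new Response("ok");
            }

            // Otherwise treat it as a client WebSocket connection.
            if (request.headers.get("Upgrade")?.toLowerCase() !== "websocket") {
                return new Response("Expected WebSocket", { status: 426 });
            }
            const pair = new WebSocketPair();
            pair[1].accept();
            this.sockets.push(pair[1]);
            return new Response(null, { status: 101, webSocket: pair[0] });
        }
    }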
  • s

    sks

    12/30/2022, 8:54 AM
    I see. But how do I keep track of underused or overused Durable Objects? I guess I could do some ID tracking, but I am not sure.
  • k

    Kai

    12/30/2022, 1:20 PM
    Is there a reason to group multiple WebSockets together? The way I've thought of it is just one WS = one DO. DOs are unlimited & only billed for usage, so I'd think it doesn't matter in terms of cost, and it just gives CF more chances to place DOs close to users?
  • j

    jed

    12/30/2022, 3:21 PM
    DO billing is wall-time, so this will get expensive quickly.
  • k

    Kai

    12/30/2022, 4:59 PM
    Is a connected but not sending WS considered active? Didn't look into the billing too much yet, as my usage is low for now.
  • e

    ehesp

    12/30/2022, 5:00 PM
    If it's connected, then the DO is alive and running.
  • k

    Kai

    12/30/2022, 5:01 PM
    Huh, looks like I'll have to make some changes 😛 Thanks
  • m

    muslax

    12/30/2022, 6:09 PM
    Is there a limit in KB for transient (in-memory) state for each Durable Object?
  • c

    ckoeninger

    12/30/2022, 6:23 PM
    There's a limit of 128 MB for a given Durable Object class on a given physical server.