# durable-objects
  • ItsWendell (08/15/2021, 10:48 AM)
    It is possible to run a setInterval within a DO to perform a certain task every X seconds, right? E.g. ping a WebSocket or process some sort of queue?
  • ItsWendell (08/15/2021, 10:48 AM)
    That will prevent it from being evicted, right?
  • albert (08/15/2021, 10:49 AM)
    I am not sure, haven't tried doing that. Wouldn't the wait caused by `setInterval()` still use memory though?
  • albert (08/15/2021, 11:13 AM)
    I tried hammering a single Durable Object (several hundred requests/second) and it started returning 500 errors. Is there any way to have requests queue up until they're able to be processed?
  • albert (08/15/2021, 11:15 AM)
    If I do this manually by adding a try-catch block that retries after 100ms, I'm able to get a single DO to handle 533 requests/second - albeit with quite a lot of latency. A better, built-in way of doing this would be nice though.
  • ItsWendell (08/15/2021, 11:29 AM)
    Interesting, I'm attempting to build some type of scalable pub / sub system where 'nodes' have a maximum amount of listeners. If that amount is exceeded, a new node is created that proxies requests from another node back to the listeners of that node.
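That fan-out idea can be sketched roughly like this. Everything here is hypothetical: `MAX_LISTENERS`, the `node-N` naming scheme, and `pickNode` are illustration only, not a real API.

```javascript
// Hypothetical sketch of the pub/sub fan-out scheme: subscribers attach to a
// node until it is full, then a fresh node is created that would proxy
// messages from its parent down to its own listeners.
const MAX_LISTENERS = 100 // assumed per-node cap

// Given the current listener count of each existing node, pick the node name
// a new subscriber should attach to, overflowing to a brand-new node if all
// existing nodes are full.
function pickNode(listenerCounts) {
    for (let i = 0; i < listenerCounts.length; i++) {
        if (listenerCounts[i] < MAX_LISTENERS) return `node-${i}`
    }
    // Every existing node is full: the next node index becomes a new proxy node.
    return `node-${listenerCounts.length}`
}
```

The returned name could then be fed to `idFromName()` to reach the chosen node's Durable Object.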
  • ItsWendell (08/15/2021, 11:32 AM)
    What's the latency difference you're getting?
  • algads (08/15/2021, 11:49 AM)
    Ah, that's specifically what I was looking for. So if I create enough of idFromName('1'), idFromName('2'), idFromName('...') and they are all created on the same machine, they likely won't each have 128 MB of memory but would instead be sharing the memory pool.
  • ItsWendell (08/15/2021, 12:05 PM)
    Yes indeed, that's how I understand it!
  • Wallacy (08/15/2021, 12:16 PM)
    If I’m not wrong, the max overlap is the same as the regular worker. Up to 6 connections per instance. Then they will create another one.
  • Wallacy (08/15/2021, 12:16 PM)
    Sometimes there’s no overlap anyway. Even with only one connection.
  • Wallacy (08/15/2021, 12:17 PM)
    There’s no clear rule on that.
  • Wallacy (08/15/2021, 12:17 PM)
    But there’s limits on this behavior.
  • albert (08/15/2021, 1:36 PM)
    __**Result of 20,000 requests in 19.991 seconds**__
    Best: 22 ms, Worst: 955 ms, Average: 763 ms
    50th percentile: 794 ms, 75th percentile: 825 ms, 99th percentile: 920 ms, 99.9th percentile: 943 ms
    Worker:
    ```js
    async function handleRequest(request, env) {
        const start = Date.now()
        const url = new URL(request.url)
        const id = env.VOLATILECOUNTER.idFromName(url.pathname)
        const stub = env.VOLATILECOUNTER.get(id)
        while (true) {
            const response = await stub.fetch(request)
            const stop = Date.now()
            if (response.status === 200) {
                return new Response(JSON.stringify({start, stop}), {headers: {'content-type': 'application/json'}})
            }
            // Retry with a small random delay if the DO rejected the request
            await new Promise(resolve => setTimeout(resolve, Math.random() * 10))
        }
    }
    ```
    Durable Object:
    ```js
    export class VolatileCounter {
        constructor(state, env) {
            this.state = state
        }

        async initialize() {
            this.value = 0
        }

        async fetch(request) {
            // Check against undefined: `!this.value` would also match 0
            if (this.value === undefined) {
                await this.initialize()
            }

            this.value++

            return new Response(this.value.toString())
        }
    }
    ```
    These results come from 10 parallel HTTP/2 connections (on different cores), each making 2,000 parallel requests. The response time of the Durable Object was measured inside the Worker, as can be seen in the code.
  • albert (08/15/2021, 1:47 PM)
    I ran some more tests with lower numbers of requests just to see how that would affect the response time. The average response time started taking a hit beyond 100 requests and got really bad beyond 1000 requests.
    __**Result of 5000 requests in 5.047 seconds**__
    Best: 22 ms, Worst: 980 ms, Average: 463 ms, 50th percentile: 449 ms, 75th percentile: 715 ms, 99th percentile: 940 ms, 99.9th percentile: 974 ms
    __**Result of 2000 requests in 2.396 seconds**__
    Best: 29 ms, Worst: 528 ms, Average: 358 ms, 50th percentile: 363 ms, 75th percentile: 469 ms, 99th percentile: 515 ms, 99.9th percentile: 524 ms
    __**Result of 1000 requests in 1.517 seconds**__
    Best: 19 ms, Worst: 235 ms, Average: 139 ms, 50th percentile: 154 ms, 75th percentile: 181 ms, 99th percentile: 225 ms, 99.9th percentile: 229 ms
    __**Result of 250 requests in 0.901 seconds**__
    Best: 20 ms, Worst: 124 ms, Average: 65 ms, 50th percentile: 70 ms, 75th percentile: 79 ms, 99th percentile: 112 ms, 99.9th percentile: 124 ms
    __**Result of 100 requests in 0.758 seconds**__
    Best: 19 ms, Worst: 52 ms, Average: 29 ms, 50th percentile: 27 ms, 75th percentile: 33 ms, 99th percentile: 47 ms, 99.9th percentile: 52 ms
    __**Result of 50 requests in 0.739 seconds**__
    Best: 20 ms, Worst: 44 ms, Average: 27 ms, 50th percentile: 26 ms, 75th percentile: 29 ms, 99th percentile: 42 ms, 99.9th percentile: 44 ms
  • albert (08/15/2021, 1:48 PM)
    (sorry for the long messages 😅)
  • Erwin (08/15/2021, 2:04 PM)
    Yeah, that is to be expected indeed. The requests are processed serially on the DO, which is why, for any system that needs a lot of throughput, you need to shard your requests across multiple DOs. But I am pretty impressed with even the performance of a single DO, tbh. Of course, this also greatly depends on what you do inside the DO.
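A minimal sketch of spreading requests over multiple DOs, reusing the VOLATILECOUNTER binding from the benchmark above. The shard count, the hash function, and the naming scheme are assumptions, not anything Workers prescribes:

```javascript
// Sketch: shard load across several Durable Objects instead of one.
const SHARD_COUNT = 16 // assumption: tune to your throughput needs

// Deterministically map a key (e.g. a client or session ID) to a shard index.
function shardFor(key, shardCount = SHARD_COUNT) {
    let hash = 0
    for (const ch of key) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0 // simple unsigned rolling hash
    }
    return hash % shardCount
}

// Worker: route each request to one of SHARD_COUNT objects instead of one.
// For a pure counter, a read would then need to sum across all shards.
async function handleRequest(request, env) {
    const url = new URL(request.url)
    const shard = shardFor(request.headers.get('cf-connecting-ip') || '')
    const id = env.VOLATILECOUNTER.idFromName(`${url.pathname}#${shard}`)
    return env.VOLATILECOUNTER.get(id).fetch(request)
}
```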
  • albert (08/15/2021, 2:08 PM)
    It's just an in-memory atomic counter 🙂
  • albert (08/15/2021, 2:08 PM)
    The performance is perfectly acceptable, but what I don't like is the need to do something like this:
    ```js
    while (true) {
        const response = await stub.fetch(request)
        if (response.status === 200) {
            return response
        }
        await new Promise(resolve => setTimeout(resolve, Math.random() * 10))
    }
    ```
  • albert (08/15/2021, 2:09 PM)
    If I just do `return stub.fetch(request)`, I start getting status 500 under heavy load.
  • albert (08/15/2021, 2:10 PM)
    Perhaps there's a limit to how many requests can be queued at a time? @User
  • kenton (08/15/2021, 4:12 PM)
    If you put your fetch in a try/catch (make sure to `await` inside the try/catch), then you should be able to catch any exceptions, and they should have descriptions that explain what went wrong.
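A sketch of that advice; `fetchWithRetry` and the backoff values are assumptions for illustration, not part of the Workers API:

```javascript
// Await inside try/catch so a rejected stub.fetch() is actually caught here,
// with a simple retry loop on top.
async function fetchWithRetry(stub, request, attempts = 5) {
    for (let i = 0; i < attempts; i++) {
        try {
            // `return stub.fetch(request)` (without await) would escape the
            // try/catch before the promise rejects; `await` keeps the failure
            // inside this block so err.message can be inspected.
            return await stub.fetch(request)
        } catch (err) {
            if (i === attempts - 1) throw err // out of retries: surface the error
            // Back off briefly before retrying (assumed jitter of up to 100 ms)
            await new Promise(resolve => setTimeout(resolve, Math.random() * 100))
        }
    }
}
```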
  • kenton (08/15/2021, 4:13 PM)
    There is indeed a limit on queuing. If the DO can't keep up with the requests it's getting, eventually some requests will error out early rather than just letting the queue grow infinitely.
  • albert (08/15/2021, 4:15 PM)
    Alright, that explains the errors during high load 🙂
  • john.spurlock (08/15/2021, 5:51 PM)
    When DO instances live in the same isolate, they share all of the same global state like any other JS environment. If you keep a simple static array or `Map` of all instances, adding `this` to the static container in your DO constructor, you can easily make direct in-memory calls between them. However, if an instance gets a handle to another instance's `state.storage`, it is prevented from making I/O calls on it:
    Error: Cannot perform I/O on behalf of a different Durable Object. I/O objects (such as streams, request/response bodies, and others) created in the context of one Durable Object cannot be accessed from a different Durable Object in the same isolate. This is a limitation of Cloudflare Workers which allows us to improve overall performance.
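A minimal sketch of that registry pattern; the class and method names (`Node`, `notify`, `broadcast`) are hypothetical:

```javascript
// Instances of the same DO class that land in the same isolate share this
// static Map, so they can call each other's plain JS methods directly.
// (Would be `export class` in a real Worker script.)
class Node {
    static instances = new Map()

    constructor(state, env) {
        this.state = state
        this.received = []
        // Register this instance so siblings in the same isolate can find it.
        Node.instances.set(String(state.id), this)
    }

    // A direct in-memory call from a sibling. Safe because it only touches
    // this instance's own JS state, never another object's state.storage.
    notify(message) {
        this.received.push(message)
    }

    // Fan a message out to every co-located sibling instance.
    broadcast(message) {
        for (const node of Node.instances.values()) {
            if (node !== this) node.notify(message)
        }
    }
}
```

Note that nothing guarantees two given instances share an isolate, so this only works as a fast path alongside regular `stub.fetch()` calls.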
  • ItsWendell (08/15/2021, 6:55 PM)
    Haha that's interesting!
  • Greg-McKeon (08/15/2021, 8:07 PM)
    setInterval won't prevent eviction today. We're working on something here, but no timeline yet.
  • Greg-McKeon (08/15/2021, 8:10 PM)
    You might also be getting 500s from overloading the Worker that's connecting to your DO. HTTP/2 will send requests to the same machine; try using HTTP/1 when running your load test.
  • algads (08/15/2021, 10:55 PM)
    What determines if they execute in the same isolate? Is it when the same DO class is executed on the same machine in the same PoP? If I call idFromName('1') and idFromName('2') and they then run on the same machine in a PoP, they will run in the same isolate even though their storage (both persistent and in-memory) is distinct. However, limits such as memory apply to the isolate, and therefore the two separate DO instances have fewer resources than if they ran on different machines in the same PoP.
  • john.spurlock (08/15/2021, 11:18 PM)
    Currently, only objects in the same colo and same DO namespace (a namespace = script+classname+name/environment) are eligible to be placed into the same isolate, but this may change in the future. https://discord.com/channels/595317990191398933/773219443911819284/870304711666966588