# durable-objects
  • l

    Lockface77

    01/25/2023, 9:19 PM
Hello, I am currently testing Cloudflare Durable Objects for performance. To test the performance, I created a Worker endpoint that calls a random Durable Object out of 100 possible Durable Objects. I load tested this endpoint with Fortio. Here are the results:
> In 30 seconds, I was able to make 7236 calls to my object with a QPS of 237
Also, to make sure that my computer is strong enough to make a lot of requests, I tried one of my remote Go servers with the same parameters; here are the results as a comparison:
> In 30 seconds, I was able to make 25248 calls with a QPS of 839
Did I do the performance tests correctly? Because if so, why is there such a big performance difference between those two tests?
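(For context, a minimal sketch of the kind of Worker endpoint described above, routing each request to one of 100 Durable Objects picked at random; the binding name `COUNTER` and the `do-N` naming scheme are assumptions for illustration, not from the original message.)

```ts
// Sketch: forward each incoming request to one of 100 Durable Objects
// chosen at random. Binding name COUNTER is an assumption.
export interface Env {
  COUNTER: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Pick one of 100 object names, e.g. "do-0" .. "do-99".
    const n = Math.floor(Math.random() * 100);
    const id = env.COUNTER.idFromName(`do-${n}`);
    const stub = env.COUNTER.get(id);
    // Forward the request to the chosen Durable Object instance.
    return stub.fetch(request);
  },
};
```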
  • h

    HardAtWork

    01/25/2023, 9:21 PM
    If you are comparing local/server, the difference there might be down to device-to-edge network latency.
  • l

    Lockface77

    01/25/2023, 9:21 PM
    Both are remote servers
  • l

    Lockface77

    01/25/2023, 9:22 PM
The Go server is hosted by Oracle
  • h

    HardAtWork

    01/25/2023, 9:22 PM
    Still might be the case, depending on network speeds?
  • u

    Unsmart | Tech debt

    01/25/2023, 9:22 PM
so you're comparing a Go service to a single-threaded DO?
  • h

    HardAtWork

    01/25/2023, 9:23 PM
    I think they are load-testing DOs from a Go service
  • h

    HardAtWork

    01/25/2023, 9:23 PM
    Where’s the Fortio server running?
  • l

    Lockface77

    01/25/2023, 9:23 PM
    On my computer
  • l

    Lockface77

    01/25/2023, 9:24 PM
Yes, but actually there are 100 different DOs, which kinda makes it 100-threaded? My Go server runs on a 4 OCPU Oracle server
  • h

    HardAtWork

    01/25/2023, 9:24 PM
    That’s probably the difference then? Network speed/CPU differences
  • u

    Unsmart | Tech debt

    01/25/2023, 9:24 PM
    ^
  • h

    HardAtWork

    01/25/2023, 9:25 PM
    But also, the best way to measure performance in this case would be to only hit a single DO
  • h

    HardAtWork

    01/25/2023, 9:25 PM
    Any results you get here, up to a point, would be the same, just multiplied by ~100
  • u

    Unsmart | Tech debt

    01/25/2023, 9:25 PM
DO performance also depends entirely on the code you write. A very, very simple (single) DO that only counts requests I've tested up to 1,000 RPS with no errors and a p90 latency of 200 ms.
  • l

    Lockface77

    01/25/2023, 9:25 PM
    I already tried, but it gets even slower with one single DO
  • h

    HardAtWork

    01/25/2023, 9:26 PM
Well, 100 rps is about the limit. But if you add multiple DOs (or a DO pool in your case), you could theoretically run requests as fast as you are willing to pay for
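(A rough sketch of what such a fixed-size DO pool could look like, deterministically hashing a key to one of N shards; the binding name `POOL`, the shard count, and the key scheme are assumptions for illustration.)

```ts
// Sketch: spread load over a fixed pool of Durable Objects by hashing a key
// to one of POOL_SIZE shards. Binding name POOL and the key scheme are
// assumptions for illustration.
const POOL_SIZE = 100;

function shardFor(key: string): number {
  // Simple deterministic FNV-1a hash over the key.
  let h = 2166136261;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return (h >>> 0) % POOL_SIZE;
}

export default {
  async fetch(request: Request, env: { POOL: DurableObjectNamespace }): Promise<Response> {
    const key = new URL(request.url).pathname;
    const id = env.POOL.idFromName(`shard-${shardFor(key)}`);
    return env.POOL.get(id).fetch(request);
  },
};
```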
  • u

    Unsmart | Tech debt

    01/25/2023, 9:31 PM
100 rps is not the limit of DOs; it entirely depends on the code. You can get some pretty insane performance out of a DO if it does very little. As an example, a very, very simple counter DO I wrote was easily able to handle 1000 rps and keep an accurate count of the number of requests without any errors.
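(A counter DO along those lines might look roughly like this; the class name and the `"count"` storage key are illustrative, not the exact code that was tested.)

```ts
// Sketch of a minimal request-counting Durable Object. The storage API's
// built-in cache keeps the value in memory after the first get().
export class Counter {
  state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(_request: Request): Promise<Response> {
    const current = (await this.state.storage.get<number>("count")) ?? 0;
    const next = current + 1;
    // The write is coalesced by the storage layer; reads stay in-memory.
    await this.state.storage.put("count", next);
    return new Response(String(next));
  }
}
```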
  • l

    Lockface77

    01/25/2023, 9:31 PM
What did you use to test this?
  • u

    Unsmart | Tech debt

    01/25/2023, 9:31 PM
    1000rps is about the most requests I could send before my computer hated me
  • u

    Unsmart | Tech debt

    01/25/2023, 9:32 PM
I had a k6 script:
```ts
import http from 'k6/http'

export const options = {
    scenarios: {
        open_model: {
            executor: 'constant-arrival-rate',
            rate: 1000,
            timeUnit: '1s',
            duration: '8m',
            preAllocatedVUs: 10000
        },
    },
    userAgent: 'Testing Garret / 1.0.0',
    discardResponseBodies: true
}

export default function () {
    http.get('https://example.com')
}
```
  • u

    Unsmart | Tech debt

    01/25/2023, 9:32 PM
But as I said, the performance of a DO heavily depends on the code you write, so you might not be able to get 1000 rps to your DO
  • u

    Unsmart | Tech debt

    01/25/2023, 9:33 PM
A Go server will pretty much always be able to handle more load, because it's a multi-threaded Go service rather than a single-threaded DO
  • l

    Lockface77

    01/25/2023, 9:34 PM
My method from the DO is very simple:
```ts
    async get(request: Request): Promise<Response> {
        const ret: string | undefined = await this.state.storage.get("value");
        return new Response(JSON.stringify(ret), {
            headers: {
                "Content-Type": "application/json"
            }
        });
    }
```
  • s

    Skye

    01/25/2023, 9:35 PM
    You could so easily cache that between requests
  • u

    Unsmart | Tech debt

    01/25/2023, 9:35 PM
It's cached automatically
  • s

    Skye

    01/25/2023, 9:35 PM
    In memory?
  • u

    Unsmart | Tech debt

    01/25/2023, 9:35 PM
That code could easily get to 1000 rps if that's the only request running at all
  • u

    Unsmart | Tech debt

    01/25/2023, 9:35 PM
    Yes @Skye
  • s

    Skye

    01/25/2023, 9:35 PM
    til
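(For reference: the point above is that the DO storage API's built-in read cache already keeps the value in memory after the first `get`. An explicit in-memory version, largely redundant given that built-in cache, might look something like this; the class, property, and key names are illustrative.)

```ts
// Sketch: cache the stored value on the object instance so repeated requests
// skip the storage call entirely. Names are assumptions for illustration.
export class ValueObject {
  state: DurableObjectState;
  cached?: string;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async get(_request: Request): Promise<Response> {
    if (this.cached === undefined) {
      // Only read from storage on the first request after a cold start.
      this.cached = await this.state.storage.get<string>("value");
    }
    return new Response(JSON.stringify(this.cached ?? null), {
      headers: { "Content-Type": "application/json" },
    });
  }
}
```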