https://discord.cloudflare.com
# workers-discussions
  • Jup (05/04/2023, 9:47 PM)
    something up there ig
  • Jup (05/04/2023, 9:50 PM)
    yup, it doesn't like
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36",
  • Jup (05/04/2023, 9:50 PM)
    weird
  • panic_macc (05/04/2023, 10:43 PM)
    I'm getting started with Workers and loving it so far. I'm curious about something I'm seeing in the UI: I create a project using wrangler. In my wrangler.toml I specify my base environment (which ideally I want to be dev) and one or two additional environments (like staging and prod, following the tutorials). That works, but in the dashboard it results in one service being created for each environment. Moreover, the dashboard card for every service says it has 1 environment, and that environment is production. Is this expected behavior, or am I maybe doing it wrong?
  • Chaika (05/04/2023, 10:47 PM)
    That is expected, yeah. There are service environments (dashboard), Wrangler environments, and deployments. Service environments are gone besides the label in the dashboard, Wrangler environments create different Workers, and deployments are WIP and only historic right now. James did a write-up on this on the community; it is a bit confusing right now: https://community.cloudflare.com/t/environments-vs-deployments/451036
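For reference, the setup being described looks roughly like the fragment below: each `[env.*]` block in wrangler.toml deploys as its own Worker/service, which is why the dashboard shows one service per environment. All names here are placeholders, not taken from the conversation.

```toml
# Hypothetical wrangler.toml sketch; each named environment
# deploys as a separate Worker/service in the dashboard.
name = "my-worker"               # the base (top-level) environment

[env.staging]
name = "my-worker-staging"       # shows up as its own service

[env.production]
name = "my-worker-production"    # shows up as its own service
```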
  • panic_macc (05/04/2023, 10:48 PM)
    Ah, that clears it up, helps a lot, thank you! I will roll with it, then.
  • dave (05/05/2023, 5:27 AM)
    How do I actually use this in a typescript program? https://github.com/auth70/bcrypt-wasi
  • dave (05/05/2023, 5:28 AM)
    Cannot find module './bcrypt-wasi.wasm' or its corresponding type declarations.ts(2307)
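TS2307 here is a type-checking error, not a runtime one: TypeScript has no type declaration for the `.wasm` import. A common fix, assuming the bundler (as wrangler does) exposes the file as a compiled `WebAssembly.Module`, is an ambient declaration such as:

```typescript
// global.d.ts (sketch; the export shape is an assumption about the bundler)
declare module "*.wasm" {
  const wasm: WebAssembly.Module;
  export default wasm;
}
```

With that declaration in the project, `import wasm from './bcrypt-wasi.wasm'` type-checks.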
  • mekpans (05/05/2023, 10:12 AM)
    Hey everyone. I was looking to understand more about subrequest limits. I'm having trouble tracking exactly how many have occurred within a particularly beefy request. Is there a way to get that information during a worker invocation?
  • zegevlier (05/05/2023, 10:13 AM)
    No, not really. You would have to keep track of that yourself to know it
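The "keep track of that yourself" suggestion could look like the sketch below: route every outbound call through one wrapper so the invocation can count its own subrequests. This is not an official Workers API, just a hypothetical pattern.

```typescript
// Sketch: a wrapper that counts how many times it dispatches a request.
// `inner` is whatever actually performs the call, e.g. the global fetch.
class SubrequestCounter<T> {
  count = 0;

  constructor(private inner: (url: string) => Promise<T>) {}

  fetch(url: string): Promise<T> {
    this.count++; // incremented before dispatch, so failed requests count too
    return this.inner(url);
  }
}
```

In a Worker you would construct this once per request around the global `fetch` and check `count` against your budget before each call.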
  • mekpans (05/05/2023, 10:15 AM)
    Is that commonly done? Or should I just let the subrequest limit exception be the signal?
  • mekpans (05/05/2023, 10:15 AM)
    Seeing big differences in local dev with wrangler compared to production too. Locally I don't hit limits for some reason.
  • zegevlier (05/05/2023, 10:15 AM)
    I'm not sure, but I haven't heard of people really doing that much before. Are you making a variable number of subrequests?
  • zegevlier (05/05/2023, 10:16 AM)
    I believe local dev might not have those limits
  • mekpans (05/05/2023, 10:17 AM)
    Yes, could be thousands, so my plan would be to trigger a chain of worker requests each limited by the subrequest limit.
  • zegevlier (05/05/2023, 10:18 AM)
    Perhaps #1008691665688604783 would work better for that? There you can split it up in batches, and limit the number of batches that get into the consumer at once to make sure you never go over the limits.
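The batching idea maps onto a Queues consumer configuration. A hypothetical wrangler.toml fragment (queue name and values are placeholders) that caps how much work each consumer invocation receives:

```toml
# Sketch: cap the work handed to each consumer invocation so a single
# invocation stays well under the subrequest limits.
[[queues.consumers]]
queue = "subrequest-fanout"   # assumed queue name
max_batch_size = 25           # messages delivered per consumer invocation
max_batch_timeout = 5         # seconds to wait while filling a batch
```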
  • mekpans (05/05/2023, 10:20 AM)
    Good idea, but either way it still relies on me being able to accurately count how many subrequests will be used, and I'm not having a lot of luck with that.
  • zegevlier (05/05/2023, 10:20 AM)
    What is the number based on? The response you get from previous requests?
  • mekpans (05/05/2023, 10:23 AM)
    For a bunch (n) of things, I do some things, differing based on thing-specific details. The things I do are a combination of KV, Durable Object, D1, and regular fetch invocations.
  • mekpans (05/05/2023, 10:24 AM)
    I tried to keep a count of subrequests for all those invocations, but I always get the too many subrequests error earlier than I would expect (1000 total).
  • zegevlier (05/05/2023, 10:36 AM)
    Internal and external requests have a different counter. If you're on bundled, you have a limit of 50 fetch requests, while the limit of internal (so to KV, DOs, D1, etc) is 1000. Could that be causing what you're seeing?
  • mekpans (05/05/2023, 10:38 AM)
    I'm on Unbound, but I was confused about whether they were separate counters or not. Separate counters means I should have a limit of 1000 in each, right? I'm seeing this exception before I reach even a combined count of 1000 (according to my count).
  • mekpans (05/05/2023, 10:38 AM)
    I'll go over my logic and see if I'm missing any.
  • zegevlier (05/05/2023, 10:39 AM)
    I believe they are separate counters, yes
  • mekpans (05/05/2023, 10:41 AM)
    alrighty, I'll track them separately and go over my logic again
  • mekpans (05/05/2023, 10:41 AM)
    thanks
  • sathoro (05/05/2023, 3:21 PM)
    Be careful with CF Queues: only 10 concurrent consumers and a 15-minute runtime are hard limits right now.
  • sathoro (05/05/2023, 3:22 PM)
    if you are doing thousands of requests you might hit that but I am not sure how long your requests are. you can implement a queue in DO using alarms. split the tasks up so you don't hit the request limit
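The "split the tasks up" advice amounts to chunking the work so each Worker (or DO alarm) invocation handles one batch that fits under the subrequest budget. A hypothetical helper:

```typescript
// Sketch: split n tasks into batches small enough that one invocation,
// processing one batch, stays under its subrequest limit.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Each batch can then be enqueued (or scheduled via a DO alarm) and processed by a fresh invocation with its own subrequest allowance.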
  • Unsmart | Tech debt (05/05/2023, 3:30 PM)
    The counters are like so:
      • All internal products share: 1000
      • All external fetches share: 50 on bundled, 1000 on unbound
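Following that breakdown, tracking the two pools separately could look like the sketch below. The limit values assume the Unbound plan as described above (external would be 50 on Bundled); the function names are hypothetical.

```typescript
// Sketch: separate running counts for internal (KV/DO/D1) and
// external (fetch) subrequests, checked against their own budgets.
const LIMITS = { internal: 1000, external: 1000 }; // assumed Unbound values
const counts = { internal: 0, external: 0 };

function recordSubrequest(kind: keyof typeof counts): void {
  counts[kind]++;
  if (counts[kind] > LIMITS[kind]) {
    // Fail early instead of waiting for the runtime's own exception.
    throw new Error(`over the ${kind} subrequest budget`);
  }
}
```

Calling `recordSubrequest("internal")` before each KV/DO/D1 operation and `recordSubrequest("external")` before each fetch keeps the two counters from being conflated.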
  • Unsmart | Tech debt (05/05/2023, 3:30 PM)
    Although I believe the cache api uses the external fetch share instead of internal unless that has been changed.