# do-alarms
  • u

    Unsmart | Tech debt

    10/22/2022, 4:37 PM
    but luckily DOs are transactional 🙂
  • e

    Erwin

    10/23/2022, 9:02 PM
    I wrote a library that handles all of that for you. https://github.com/evanderkoogh/do-taskmanager Schedule multiple tasks with context, such as an array of keys to expire
  • u

    Unsmart | Tech debt

    10/23/2022, 9:06 PM
    Yeah that does help quite a bit 🙂
  • m

    matt

    10/25/2022, 4:04 PM
    Are you doing something like one "expirable value" per DO, with an alarm to handle the TTL? A scheme like @Unsmart | Tech debt mentioned above where you put multiple keys in one DO will help with costs
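The multi-key scheme matt describes could be sketched roughly like this: one DO tracks many expiries and keeps a single alarm set to the earliest one. This is a hand-rolled illustration, not the do-taskmanager API; names like `TtlBook`, `register`, and `popDue` are hypothetical.

```typescript
// Sketch of the "many TTL keys per DO" scheme: track many expiries,
// keep one alarm pointed at the earliest. Hypothetical names throughout.
class TtlBook {
  // key -> absolute expiry in ms since epoch
  private expiries = new Map<string, number>();

  // Record a key's TTL and return the earliest pending expiry, which
  // the DO would pass to storage.setAlarm().
  register(key: string, expiresAtMs: number): number {
    this.expiries.set(key, expiresAtMs);
    return this.nextAlarm()!;
  }

  // Remove and return every key whose TTL has passed; the alarm()
  // handler would then delete these from KV/R2/etc. in one batch.
  popDue(nowMs: number): string[] {
    const due: string[] = [];
    this.expiries.forEach((at, key) => {
      if (at <= nowMs) due.push(key);
    });
    due.forEach((key) => this.expiries.delete(key));
    return due;
  }

  // Earliest remaining expiry, or undefined when nothing is pending
  // (i.e. no alarm needs to be set).
  nextAlarm(): number | undefined {
    let min: number | undefined;
    this.expiries.forEach((at) => {
      if (min === undefined || at < min) min = at;
    });
    return min;
  }
}
```

In a real DO, `alarm()` would call `popDue(Date.now())`, batch-delete those keys, then re-arm the alarm with `nextAlarm()` if anything remains.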
  • k

    Kevin W - Itty

    10/25/2022, 4:53 PM
    Wait, how would sharing DOs cut cost in this case? If I'm paying per request, what's the diff between 100 DOs being called twice (once to init, once for cleanup) and 1 DO being called 200 times (100 times to register a TTL, and 100 more times to execute the cleanups)?
  • k

    Kevin W - Itty

    10/25/2022, 4:54 PM
    Otherwise, sure I could do it - I prob wouldn't want to use a single DO (scaling issues once it's handling millions of requests), but I could certainly shard it
  • k

    Kevin W - Itty

    10/25/2022, 4:57 PM
    additionally, if I were to use one/few DOs to handle multiple TTLs, I'd need to actually use storage to track them. Currently the single-DO-per-entry method allows me to skip storage entirely, as it sets an alarm upon invocation and stores no info needed for cleanup. (edit: aside from the alarm itself being in storage... not sure if that invalidates this theory)
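The one-DO-per-entry scheme Kevin describes can be sketched as follows: the only persisted state is the alarm itself, and the cleanup work is derived from which DO is firing rather than from stored data. `AlarmStorage` and `ExpiringEntry` are stand-in names for illustration, not Workers APIs.

```typescript
// Sketch of one-DO-per-entry TTLs: schedule an alarm on creation,
// store nothing else, run side-effect cleanup when the alarm fires.
interface AlarmStorage {
  setAlarm(scheduledTimeMs: number): Promise<void>;
}

class ExpiringEntry {
  constructor(
    private storage: AlarmStorage,
    // Side effects to run at expiry, e.g. deletes against KV/R2/Supabase.
    private cleanup: () => Promise<void>,
  ) {}

  // Invoked once when the entry is created: schedule the TTL, store nothing.
  async schedule(ttlMs: number, nowMs: number = Date.now()): Promise<void> {
    await this.storage.setAlarm(nowMs + ttlMs);
  }

  // Fires when the alarm comes due; after this the DO holds no state at all.
  async alarm(): Promise<void> {
    await this.cleanup();
  }
}
```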
  • m

    matt

    10/26/2022, 4:11 PM
    I assume you were worried about the duration cost, which is separate from the per-request cost -- When actually deleting the keys, if you can delete multiple keys at once from one DO you are saving on duration compared to 1 key per DO, since you're paying for the duration of a single DO being active vs several
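A back-of-envelope sketch of matt's duration point, assuming each active DO accrues duration at a fixed 128 MB (0.125 GB) memory grant; real billing has more nuance (request costs, minimum active windows), so treat the numbers as illustrative only.

```typescript
// Rough duration cost in GB-seconds, assuming each active DO is
// metered at 128 MB (0.125 GB) for as long as it stays active.
function durationGbSeconds(activeObjects: number, activeSecondsEach: number): number {
  const GB_PER_ACTIVE_DO = 0.125; // 128 MB expressed in GB
  return activeObjects * activeSecondsEach * GB_PER_ACTIVE_DO;
}

// 100 single-key DOs active ~10s each vs one DO active ~10s doing
// all 100 deletes in a batch:
// durationGbSeconds(100, 10) -> 125 GB-s
// durationGbSeconds(1, 10)   -> 1.25 GB-s
```

Under these assumptions the batched-DO approach is two orders of magnitude cheaper on duration, which matches the shape of the numbers Kevin reports further down.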
  • m

    matt

    10/26/2022, 4:11 PM
    alarms are stored the same way as normal DO storage keys, so you're still using storage if you're using them
  • m

    matt

    10/26/2022, 4:12 PM
    It probably depends on your exact usecase whether or not the math here works out and it ends up cheaper
  • k

    Kevin W - Itty

    10/26/2022, 4:13 PM
    I may have to give it some thought, trial and error, etc. Each one fires some async cleanup tasks, so maybe that’s the challenge. Each removes an entry from KV, R2, and Supabase simultaneously
  • k

    Kevin W - Itty

    10/27/2022, 9:08 PM
    Is there a way to see errors caused within our alarms?
  • k

    Kevin W - Itty

    10/27/2022, 9:09 PM
    This is specifically from a DO that is basically just an alarm... don't see any errors with my Worker call to set the alarm either
  • m

    matt

    10/28/2022, 9:03 PM
    I believe if you `wrangler tail` while an alarm runs, you should see any thrown exceptions/console logs
  • j

    john.spurlock

    10/28/2022, 9:06 PM
    interested to know if this has been fixed, last time i checked, tailing didn't work from anything inside the `alarm` handler
  • k

    Kevin W - Itty

    10/28/2022, 11:57 PM
    So I never saw anything while tailing, but wrangler dev and console logs sorted me out
  • k

    Kevin W - Itty

    10/28/2022, 11:59 PM
    Still looks like I may have to follow the suggestion earlier about grouping alarms. A few hundred DOs w alarms that just run some r2/KV deletes (on a single key) was showing a few hundred GBs, compared to a single DO called a few hundred times at around 1.5GBs total…
  • e

    ehesp

    11/09/2022, 8:06 PM
    I'm seeing logs from DO when tailing (from cli and console), but exceptions don't show - I have to manually try/catch , log and throw to see them
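The workaround ehesp describes (manually try/catch, log, and rethrow so exceptions become visible) could be wrapped up like this. `runAlarmLogged` is a hypothetical helper, not a Workers or wrangler API.

```typescript
// Wrap an alarm body so exceptions are logged explicitly before being
// rethrown, since thrown exceptions may not surface in `wrangler tail`
// on their own. Rethrowing preserves the runtime's alarm retry behavior.
async function runAlarmLogged<T>(
  body: () => Promise<T>,
  logError: (e: unknown) => void = console.error,
): Promise<T> {
  try {
    return await body();
  } catch (e) {
    logError(e); // make the failure visible in logs
    throw e;     // rethrow so the runtime still sees the failure
  }
}
```

Inside a DO this would be called from `alarm()`, e.g. `alarm() { return runAlarmLogged(() => this.doCleanup()); }`.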
  • b

    Ben

    11/14/2022, 7:54 PM
    Do alarms not have a status at all?
  • b

    Ben

    11/14/2022, 7:54 PM
    Seems like they might head off into the void. I'm using the do-taskmanager for reference
  • e

    ehesp

    11/14/2022, 10:08 PM
    That package keeps track of the retry status, but in general no they don't
  • e

    ehesp

    11/14/2022, 10:09 PM
    It internally uses DO storage, but a native DO doesn't, as far as I know
  • b

    Ben

    11/14/2022, 11:06 PM
    thank you
  • e

    Erwin

    11/16/2022, 2:33 PM
    Exactly what @ehesp is saying. The `do-taskmanager` package keeps track of multiple tasks with a context, keeps track of retries, and will automatically schedule the next one. Let me know if you have any questions 🙂
  • b

    Ben

    11/17/2022, 6:43 PM
    Thanks, will do!
  • o

    osa

    11/18/2022, 9:05 AM
    I'm trying to use `do-taskmanager` for my project, but I have a problem. Is it by design that you cannot call `state.blockConcurrencyWhile` in the constructor of a class that implements `TM_DurableObject`?
    ✘ [ERROR] Uncaught TypeError: Illegal invocation
    
      state.blockConcurrencyWhile(async () => {
            ^
          at TestDO (/Users/petteri/dev/github/osaton/do-taskmanager/worker/src/index.ts:26:10)
          at construct (/Users/petteri/dev/github/osaton/do-taskmanager/src/index.ts:117:20)
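For context, the init pattern osa is attempting might look like the sketch below. `FakeState` is a minimal stand-in for `DurableObjectState` so the sketch runs anywhere; "Illegal invocation" is the error a native method throws when called with the wrong `this` (e.g. through an unbound wrapper or proxy), and the stand-in reproduces that behaviour.

```typescript
// Minimal stand-in for DurableObjectState, mimicking the native
// method's "Illegal invocation" when detached from its receiver.
class FakeState {
  blockConcurrencyWhile(fn: () => Promise<void>): Promise<void> {
    if (!(this instanceof FakeState)) {
      return Promise.reject(new TypeError("Illegal invocation"));
    }
    return fn();
  }
}

// Typical constructor pattern: defer all requests until async init done.
class TestDO {
  loaded = false;
  constructor(state: FakeState) {
    // Incoming requests are deferred until this promise settles.
    state.blockConcurrencyWhile(async () => {
      this.loaded = true; // e.g. load config from storage here
    });
  }
}
```

A wrapper that forwards a state object needs to keep its methods bound to the original receiver (e.g. `state.blockConcurrencyWhile.bind(state)`), which is the kind of binding issue the error above points at.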
  • o

    osa

    11/18/2022, 9:59 AM
    Created a PR: https://github.com/evanderkoogh/do-taskmanager/pull/8
  • m

    mrbarletta

    11/18/2022, 6:20 PM
    Hey Folks, I'm seeing `Unknown Event` in my alarms (wrangler tail on the Worker) and have found not a single mention of it:
    Unknown Event - Unknown @ 12/31/1969, 6:00:00 PM
    We are debugging an issue with alarm processing kind of halting.. I don't know if that event is cancelling the execution.
  • e

    Erwin

    11/22/2022, 6:09 AM
    That was absolutely not by design! And I have merged the PR and released a new version. Unfortunately, because of another inconsistency, I had to release it as a new major version.. but `2.0.0-rc0` should be good now 🙂
  • w

    Walshy | Pages

    01/11/2023, 3:06 AM
    ------------------------------ *This channel is archived, please use #773219443911819284 * ------------------------------