# durable-objects
  • g

    Greg-McKeon

    07/22/2021, 4:02 PM
    where are you sending the data once it's exported? is it just for taking a backup?
  • w

    Wallacy

    07/22/2021, 7:12 PM
    Right now to Backblaze B2 ... I can download and apply the data again if I need to restore the backup.
  • j

    john.spurlock

    07/23/2021, 4:36 PM
    Just catching up with the DO news from last week after a vacation, sounds like there were some very nice additions: https://community.cloudflare.com/t/2021-7-16-workers-runtime-release-notes/287327
  • j

    john.spurlock

    07/23/2021, 4:40 PM
    Does `state.blockConcurrencyWhile()` take a void-returning async closure? I'm assuming it holds pending subsequent requests in order until it completes? If the closure throws, what happens to the pending requests?
  • k

    kenton

    07/23/2021, 8:37 PM
    It takes an async closure returning anything -- the return value is propagated as the return value of `blockConcurrencyWhile()` itself. Everything else going on in the object is paused while the callback runs (including while the callback awaits things) -- so no new requests will be delivered, but also any subrequest responses will be paused too (unless those subrequests were made by the callback itself), etc. If it throws an exception, then we assume the durable object has been left in a bad state and we reset it.
  • k

    kenton

    07/23/2021, 8:39 PM
    it's great for doing initialization in the constructor, or for cases where you need to carefully synchronize with something remote and need to make sure nothing else intrudes and modifies the object state while you're waiting for the remote party to respond.
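A minimal sketch of the constructor-initialization pattern described here, assuming a standard class-based Durable Object; the class name and storage key are illustrative, not from this discussion:

```ts
// Sketch only: load state before any requests are delivered to the object.
export class Counter {
  state: DurableObjectState;
  value = 0;

  constructor(state: DurableObjectState) {
    this.state = state;
    // Nothing else runs in this object until the callback resolves; if it
    // throws, the object is reset rather than left half-initialized.
    this.state.blockConcurrencyWhile(async () => {
      this.value = (await this.state.storage.get<number>('value')) ?? 0;
    });
  }

  async fetch(request: Request): Promise<Response> {
    this.value++;
    await this.state.storage.put('value', this.value);
    return new Response(String(this.value));
  }
}
```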
  • k

    kenton

    07/23/2021, 8:42 PM
    The release notes don't actually go into details about the biggest change, which is an in-memory caching layer. Scroll up and read my recent messages for a bit of info on that, but there will be more in a blog post. The post is on hold for a bit because we have one of our "weeks" happening next week and it didn't really fit in...
  • k

    kenton

    07/23/2021, 8:43 PM
    I have to admit I am shocked, SHOCKED that no one reported any problems with this rollout, considering it completely changed how the storage API is implemented. 😅
  • w

    Walshy | Pages

    07/23/2021, 8:45 PM
    Kenton you broke all my services!!! 😠 /s
  • j

    john.spurlock

    07/23/2021, 8:46 PM
    hehe yea you had mentioned you were rolling it out right before i went on vacation - my DO stuff actually did break while I was gone (requests hanging indefinitely) so I had to disable my DO load last week - gonna look into it tomorrow
  • k

    kenton

    07/23/2021, 9:11 PM
    oh dang, maybe I did break something. 😦
  • k

    kenton

    07/23/2021, 9:22 PM
    Oh but you reported storage hangs before we pushed the big changes... hmm...
  • j

    john.spurlock

    07/23/2021, 9:54 PM
    yes, but i'd already disabled the highly contentious DO that was hanging before - the ones that broke after the rollout last week had never locked up before - couldn't dig into it at the time so i just turned them all off - I guess this is why the vacation laptop exists : )
  • j

    john.spurlock

    07/23/2021, 10:21 PM
    ok, started looking into it now - all storage `list` calls never return, and deadlock the DO instance for all subsequent storage ops. if `get` is called prior to `list`, it will succeed - but if after, it hangs
  • j

    john.spurlock

    07/23/2021, 10:24 PM
    passing a `limit` to `list` seems to have no effect - same problem
  • j

    john.spurlock

    07/23/2021, 10:49 PM
    narrowing it down: it repros only after the DO instance is accessed for the first time by a reader that lives in another worker (a script-based worker if that matters)
  • k

    kenton

    07/23/2021, 11:22 PM
    We actually had an internal team report a hang to us today, which we've managed to reproduce and are working to debug. It might be the same thing.
  • j

    john.spurlock

    07/23/2021, 11:31 PM
    Think I narrowed it down: my reader worker script calls stub.fetch, the DO side initializes on first access using a `get`, then a `list` with a `prefix`. If the `list` `limit` is not provided (I was not providing one), or too large, the `list` will never return, and will lock out all subsequent storage operations for that instance from any caller. The list call is at most 4096 items. Would it be possible to throw in this case instead of hanging? It is basically unrecoverable. I will retry using multiple small list calls and see if that works around it so I'm not completely dead in the water.
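A rough sketch of the access pattern being described (class, key, and prefix names are illustrative, not the actual code); the hang was observed on the unbounded `list()` call:

```ts
// Sketch of the reported repro: a get succeeds, then an unbounded list hangs.
export class Index {
  state: DurableObjectState;
  records = new Map<string, unknown>();
  initialized = false;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(request: Request): Promise<Response> {
    if (!this.initialized) {
      await this.state.storage.get('meta');                         // a get before the list succeeded
      const rows = await this.state.storage.list({ prefix: 'r.' }); // no limit: this is where the hang was seen
      for (const [key, value] of rows) this.records.set(key, value);
      this.initialized = true;
    }
    return new Response(String(this.records.size));
  }
}
```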
  • k

    kenton

    07/23/2021, 11:54 PM
    Well it should just return the results, not hang or throw. I don't know why it would be hanging, I guess we'll have to track that down.
  • j

    john.spurlock

    07/24/2021, 12:10 AM
    That would be even better! Obviously it would need some max if no limit was provided, but it would be great if it returned the first page of results with some default limit. Not sure why this surfaced after the storage rewrite, perhaps the effective list limit decreased dramatically? In any case, the source of the hang is much easier to track down when it happens every time, vs once every few days : )
  • k

    kenton

    07/24/2021, 12:13 AM
    the 128MB memory limit applies. But it doesn't sound like you're getting close to that.
  • k

    kenton

    07/24/2021, 12:14 AM
    I don't think we can apply a default limit because that might make an app incorrectly think that the rest of the list range is empty. But I definitely do recommend using a limit when reading a large number of keys, and then do multiple requests to read them all.
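A minimal sketch of that recommendation: paging through a key range with an explicit `limit` rather than one unbounded `list()`. The helper name and page size are illustrative:

```ts
// Sketch only: read a large prefix range in pages instead of one big list().
async function listAll(
  storage: DurableObjectStorage,
  prefix: string,
  pageSize = 512,
): Promise<Map<string, unknown>> {
  const all = new Map<string, unknown>();
  let start: string | undefined;
  while (true) {
    const page = await storage.list({ prefix, limit: pageSize, ...(start !== undefined ? { start } : {}) });
    for (const [key, value] of page) all.set(key, value);
    if (page.size < pageSize) break; // a short page means we've reached the end of the range
    const lastKey = [...page.keys()].pop()!;
    start = lastKey + '\u0000'; // smallest key strictly after lastKey, since start is inclusive
  }
  return all;
}
```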
  • w

    Wallacy

    07/24/2021, 12:15 AM
    A few hours later I hit a problem that I'd never seen before… but I discovered that it was my fault. I blamed the update for a few minutes.
  • j

    john.spurlock

    07/24/2021, 1:03 AM
    also: DOs that were staying around (prior to last week) for subsequent requests are now only alive for one request - i'm relying on caching data in an in-memory map for subsequent requests and this is also kind of a deal breaker. I wonder if my in-memory map + the new storage cache is bringing the total > 128MB. Is there a way I could opt out of the new caching layer to check?
  • k

    kenton

    07/24/2021, 1:21 AM
    currently, the limit is actually enforced separately on the isolate memory and the cache, i.e. each one can get up to 128MB. The total of the two doesn't matter.
  • k

    kenton

    07/24/2021, 1:29 AM
    But it does sound like your object is getting reset for some reason. An error should be thrown from any `fetch()` calls that are interrupted by the reset... you aren't seeing anything thrown on the stateless worker side?
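For reference, a rough sketch of checking for that on the stateless worker side; the `MY_DO` binding name and module-worker shape are assumptions, not the actual setup:

```ts
// Sketch only: an interrupted call into the object should surface as a throw here.
interface Env {
  MY_DO: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const stub = env.MY_DO.get(env.MY_DO.idFromName('example'));
    try {
      return await stub.fetch(request);
    } catch (err) {
      // A reset mid-request is expected to throw rather than silently
      // re-instantiate the object with no visible error.
      return new Response(`durable object call failed: ${err}`, { status: 503 });
    }
  },
};
```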
  • j

    john.spurlock

    07/24/2021, 2:36 PM
    There is no error thrown (I disabled my transparent retries), every call succeeds, but for some of the DO instances with larger amounts of memory in a Map, they are reinstantiated (and thus need to reload from storage, the slow path) on every call. This makes my entire prototype infeasible, I'm hoping that something can be fixed/reverted on the cf side, this was working great before last week's storage change. What's strange is that nothing has changed on my side since before the storage rewrite, the size of the data held in memory is the same, and the codebase has not changed. It seems like somehow the memory ceiling has been decreased, or a VM change where the same code somehow allocates more memory.
  • j

    john.spurlock

    07/24/2021, 3:02 PM
    by "larger amounts of memory" I mean a single Map<string, Record> where there are at most 4096 keys, and the size of all values reaches 12.5mb (as measured by JSON.stringify(record).length) - I figured that a single 128mb DO instance could hold this amount of data easily with plenty of headroom for the future.
  • k

    kenton

    07/24/2021, 6:15 PM
    hmm, JSON often consumes much more memory after parsing than the size of the serialized form, especially if it has deep structure. I could easily see 12.5MB of serialized JSON being a problem to parse. But, we didn't change anything about that recently.
  • j

    john.spurlock

    07/24/2021, 6:52 PM
    I'm not parsing any json, just estimating the size of the structured cloneable values coming out of durable object storage, so just serialized each one to json as they came out of `list` and summed up the byte size - doesn't seem to be a large amount of data to store in a memory structure. It was working just fine with this amount of data prior to the big storage change. Maybe `list` itself creates unreclaimable memory? I'm now doing multiple `list` calls with a `limit` of 512 to work around that hanging-list-of-death issue.