# help
r
I'm rolling a simple home-grown event sourcing system using SST, and I have two approaches I'd like feedback on: 1) Publish the event to EventBridge, and have one of the consumers read it and write it to S3 for long-term storage of all these events. Or 2) Write/save the event directly to S3, then have an S3 event trigger a Lambda that publishes it to EventBridge. Both approaches save the event to S3; which do you prefer? Any pitfalls to either approach you're aware of?
t
I'd recommend #1 - EB is the most flexible + scalable in terms of reacting to events
Archiving can probably be thought of as a downstream concern of the EB event
Also in my experience EB flow is faster, but I don't have specific benchmarks to share
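A rough sketch of option 1 as discussed above, with the archiving treated as a downstream consumer. Names like `app-bus` and `myapp` are placeholders, not from the thread: the producer builds an entry in the shape EventBridge `PutEvents` expects, and the archiving Lambda derives an S3 key partitioned by type and date so events can be replayed later.

```python
import json
from datetime import datetime, timezone

def make_put_events_entry(event_type: str, detail: dict, bus_name: str = "app-bus") -> dict:
    """Build one entry for EventBridge PutEvents (bus/source names are hypothetical)."""
    return {
        "EventBusName": bus_name,
        "Source": "myapp",
        "DetailType": event_type,
        "Detail": json.dumps(detail),  # EventBridge expects Detail as a JSON string
    }

def archive_key(event_type: str, event_id: str, ts: datetime) -> str:
    """S3 key for the archiving consumer: partitioned by type and date for easy replay."""
    return f"events/{event_type}/{ts:%Y/%m/%d}/{event_id}.json"

entry = make_put_events_entry("OrderPlaced", {"orderId": "123"})
key = archive_key("OrderPlaced", "123", datetime(2022, 5, 1, tzinfo=timezone.utc))
# The producer would call events.put_events(Entries=[entry]); the archiving
# consumer would call s3.put_object(Bucket=..., Key=key, Body=entry["Detail"]).
```

The actual `put_events`/`put_object` calls are left as comments since they need AWS credentials; the key layout is just one reasonable convention.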
r
Yeah, that makes sense. Direct to EB will remove any I/O scaling issues.
Even though S3 is very scalable, like you said the flow might be "slower" going direct to S3.
But there are some events, perhaps not all types of events, I'd like to keep a permanent record of (S3).
Could evolve into something using Event Store (or similar), but for now a home-grown thing will do.
a
I'm currently building out an event sourcing & CQRS system with the following flow: API Gateway -> DynamoDB -> EventBridge -> Lambda consumers
It's worked really well so far. Once we figured out our event structure, development has been a breeze.
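For the DDB-first flow above, a minimal sketch of what the event write might look like, assuming a single-table event store keyed by aggregate and version (the `pk`/`sk` naming is hypothetical, not from the thread):

```python
def event_item(aggregate_id: str, version: int, event_type: str, payload: dict) -> dict:
    """One DynamoDB item per event; items for an aggregate sort by version."""
    return {
        "pk": f"AGG#{aggregate_id}",
        "sk": f"V#{version:010d}",  # zero-padded so lexical order == numeric order
        "type": event_type,
        "payload": payload,
    }

item = event_item("42", 3, "OrderPlaced", {"total": 10})
# The API handler would put_item this (with a condition on sk for optimistic
# concurrency), then the event flows on to EventBridge.
```

Zero-padding the version in the sort key is what makes a plain `Query` return events in order; the conditional write mentioned in the comment is how event stores typically reject conflicting versions.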
t
my setup goes apig -> eb -> dynamo - curious what advantages you see going to DDB first
a
I don't think we considered that structure, but now you have me thinking about it: it could enable a better solution for emitting subsequent events after an object has been written to the read DB.
@thdxr In your architecture, do you stream the DDB updates elsewhere or is it simply a consumer of EB?
t
Simply a consumer at the moment
DDB streams are cool, but they're limited (pretty sure you can only have 2 consumers max), so I save that for syncing non-event data to a data warehouse for analytics (Rockset)
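For anyone wiring up a stream consumer like that: DDB stream records arrive in DynamoDB's attribute-value format, so the handler usually unmarshals the image first. A minimal sketch covering only the common types (in practice boto3's `TypeDeserializer` does the full job):

```python
def from_ddb(av: dict):
    """Convert a DynamoDB attribute value (stream-image format) to a plain
    Python value. Handles only S/N/BOOL/M/L; real code should use boto3's
    TypeDeserializer."""
    (t, v), = av.items()
    if t == "S":
        return v
    if t == "N":
        return float(v) if "." in v else int(v)  # DDB sends numbers as strings
    if t == "BOOL":
        return v
    if t == "M":
        return {k: from_ddb(x) for k, x in v.items()}
    if t == "L":
        return [from_ddb(x) for x in v]
    raise ValueError(f"unsupported attribute type: {t}")

# Shape of a NewImage as it appears in a stream record:
image = {"id": {"S": "e-1"}, "amount": {"N": "25"}, "tags": {"L": [{"S": "new"}]}}
record = {k: from_ddb(v) for k, v in image.items()}
# record == {"id": "e-1", "amount": 25, "tags": ["new"]}
```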
a
Did AWS raise the quota limits on EventBridge? I've been avoiding it in part due to the low invocation limits. Looks like they are soft limits now. (Also, the 500ms latency kills it for anything interactive.)
r
This is interesting, just posted on YouTube.

https://youtu.be/WtCfHP6rUAY

They use DynamoDB streams.
a
But why? Can't the first Lambda just do what the second Lambda does - without passing through DynamoDB?
r
I guess DDB is their event store, then after that other stuff happens.
Also no event bridge.
a
I thought DDB was the event store, but then saw that they shove it into S3.
r
There's event driven, then also event store.
Yeah, it's different. I guess many ways to achieve the same goals.
a
It probably is the right architecture for them, but without any why or contrast, it wasn't helpful. (There was a nice why for the choice of FIFO.)