# sst
a
Question about shared libraries for micro-services: if you create a shared library that is used by many different functions, and in that logic you need, for example, to trigger an event on EventBridge, then every Lambda consumer of that library will need access to EventBridge, with permissions. It will possibly also need an environment variable with the EventBridge location, because the library doesn't know which AWS resource to use. Other stuff could be needed too. And that's not clear to the consumer: you are "consuming" a library which ideally should be a "black box". For example, a "Users Library" which contains "Update User", which triggers a "User Updated" event. This also creates coupling between those "consumers" of the library and every resource that is needed. Which could be OK, but it can get messy if you have many shared things. This is why I prefer to encapsulate that logic in a separate Lambda instead of a library, doing something like "Lambda calling Lambda" (or maybe a REST API) that other functions can consume. That way we encapsulate the logic and permissions of the "shared code". Thoughts? @thdxr @Frank
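For illustration, a minimal sketch (TypeScript, AWS SDK v3) of the coupling being described; `updateUser` and the `EVENT_BUS_NAME` variable are hypothetical names, not from the thread:

```ts
// Sketch: a shared "Users Library" whose Update User logic publishes a
// "User Updated" event. Every Lambda that imports it inherits two hidden
// requirements, even though the consumer only sees `updateUser`.
import {
  EventBridgeClient,
  PutEventsCommand,
} from "@aws-sdk/client-eventbridge";

const eventBridge = new EventBridgeClient({});

export async function updateUser(id: string, fields: Record<string, unknown>) {
  // ... persist the update somewhere ...

  // Hidden requirement #1: the consumer must pass the bus name in its env.
  const busName = process.env.EVENT_BUS_NAME;
  if (!busName) throw new Error("EVENT_BUS_NAME is not set");

  // Hidden requirement #2: the consumer's role needs events:PutEvents.
  await eventBridge.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: busName,
          Source: "users",
          DetailType: "User Updated",
          Detail: JSON.stringify({ id, fields }),
        },
      ],
    })
  );
}
```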
m
When using EventBridge this way, a primary point is loosely coupled contexts. Services/contexts ideally don't talk to each other directly; that is tight coupling. All producers (things calling PutEvents) and consumers of those events do need the necessary permissions to communicate with the event bus, but that's all. If a User service puts a "user updated" event, it should not be concerned with what targets EventBridge triggers and what they do with that information, i.e. a black box. If the calling service is interested in events from another context, a rule to invoke a trigger in that service would be added. Target invocations can come and go by listening to events on the bus.
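A minimal sketch of that rule-based decoupling, assuming CDK v2; the `listenForUserUpdated` helper and the `users` / `User Updated` pattern values are illustrative:

```ts
// The consuming service owns this rule; the User service only puts events
// and never knows which targets exist on the shared bus.
import { Construct } from "constructs";
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";
import * as lambda from "aws-cdk-lib/aws-lambda";

export function listenForUserUpdated(
  scope: Construct,
  bus: events.IEventBus,
  handler: lambda.IFunction
) {
  new events.Rule(scope, "OnUserUpdated", {
    eventBus: bus,
    eventPattern: { source: ["users"], detailType: ["User Updated"] },
    targets: [new targets.LambdaFunction(handler)],
  });
}
```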
o
We’ve been thinking about this topic too. Right now the library throws an error on initialisation if, for example, the EventBridge bus it needs isn’t declared as an env variable. Another approach that someone on my team tried out this week was for the library to accept a “sendEvent” function. Then it’s up to the Lambda that calls it to initialise and wrap EventBridge. This also makes unit testing of the library possible; right now most of our tests are integration tests against the handler, which is great for coverage but slow.
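A minimal sketch of that injection approach; `makeUsersLib` and the `SendEvent` signature are hypothetical:

```ts
// The library takes a sendEvent function instead of talking to EventBridge
// itself, so the calling Lambda owns the wiring and unit tests can pass a stub.
export type SendEvent = (detailType: string, detail: unknown) => Promise<void>;

export function makeUsersLib(sendEvent: SendEvent) {
  return {
    async updateUser(id: string, fields: Record<string, unknown>) {
      // ... persist the update somewhere ...
      await sendEvent("User Updated", { id, fields });
    },
  };
}

// In a unit test, no AWS access is needed:
// const sent: unknown[] = [];
// const users = makeUsersLib(async (type, detail) => { sent.push({ type, detail }); });
// await users.updateUser("123", { name: "Ada" });
```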
Not a fan of Lambdas calling Lambdas because of the increased latency and cost. For me the only resources shared between services are DBs and EventBridge.
t
Yeah, there is some complexity here and I'm not even 100% certain I'm committed to my approach - it's just DDD warped to serverless. In terms of configuration, I put basically every resource (queues, event buses, etc.) into SSM with something like `<stage>/DYNAMO_TABLE=<arn>`, and my core library loads all of that, so the functions don't need any env vars passed in or need to be aware of what they're going to use. My functions do call "blackbox" library functions like `User.create` - they have no idea if that's writing to Dynamo, a queue, EventBridge, etc. But because the config is available, it doesn't matter here. Think of the config as "service discovery".

Where it does matter is permissions - this is where the coupling really shows up. If you create a new Lambda function calling a blackbox `User.create`, you don't know what permissions you need to grant it. What I've been doing for now is granting permissions globally across all functions. I know this isn't best practice, but I only have a few of these - usually Dynamo + EB. Pretty much every function needs access to these anyway, but from a purist POV it's not good practice. I prefer the benefits of DDD (hiding implementation details) over the loss in security.
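A minimal sketch of loading that SSM config with AWS SDK v3; the parameter path convention is the one from the message, everything else is assumed:

```ts
// Load every parameter under the stage prefix once at cold start, so handlers
// need no per-resource env vars (e.g. /<stage>/DYNAMO_TABLE=<arn>).
import { SSMClient, GetParametersByPathCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

export async function loadConfig(
  stage: string
): Promise<Record<string, string>> {
  const config: Record<string, string> = {};
  let nextToken: string | undefined;
  do {
    const page = await ssm.send(
      new GetParametersByPathCommand({
        Path: `/${stage}/`,
        NextToken: nextToken,
      })
    );
    for (const p of page.Parameters ?? []) {
      // Strip the stage prefix so keys read like env vars: DYNAMO_TABLE, etc.
      if (p.Name && p.Value) config[p.Name.replace(`/${stage}/`, "")] = p.Value;
    }
    nextToken = page.NextToken;
  } while (nextToken);
  return config;
}
```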
a
Thanks guys for your comments; apparently I’m not the only one with these concerns, haha.
Still, I think it’s better to encapsulate in a Lambda / API instead of a library.
s
Yeah, me too. Kind of feel this is Step Function territory. I like the idea of typedoc annotations or a decorator that declares a function's required permissions, failing at compile time after inspecting the role of the Lambda principal within the template (possible brain fart).
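A very rough sketch of that idea, entirely hypothetical: attach the required IAM actions to a library function as metadata, so a build step could later compare them against the role attached to each function in the synthesized template:

```ts
// Tag a library function with the IAM actions it needs.
export interface NeedsPermissions {
  requiredActions: string[];
}

export function requires<F extends (...args: any[]) => any>(
  actions: string[],
  fn: F
): F & NeedsPermissions {
  return Object.assign(fn, { requiredActions: actions });
}

// Usage: the blackbox function now carries its own permission manifest.
const createUser = requires(
  ["dynamodb:PutItem", "events:PutEvents"],
  async (id: string) => {
    /* ... */
  }
);

// A build step could read createUser.requiredActions and fail if the
// Lambda's role in the template is missing any of them.
console.log(createUser.requiredActions);
```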
t
The other complication is some of the infrastructure is an implementation detail. For example you might decide to implement `Users.import(csv)` as a complex step function. Where would you put the Lambda handlers for those steps? Gets tricky.
m
We manage bulk CSV processes using a custom construct that provisions the architecture, in our case step functions. Each domain (context) can create as many as needed. I don't think there is a way to avoid global infrastructure such as a bucket and/or an event bridge without running into limits.
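A minimal sketch of that kind of construct, assuming CDK v2; the `CsvProcessor` name and its props are invented for illustration:

```ts
// Each domain instantiates its own copy: a bucket for uploads plus a state
// machine that drives the processing Lambda.
import { Construct } from "constructs";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as sfn from "aws-cdk-lib/aws-stepfunctions";
import * as tasks from "aws-cdk-lib/aws-stepfunctions-tasks";
import * as lambda from "aws-cdk-lib/aws-lambda";

export interface CsvProcessorProps {
  /** Handler that processes one batch of CSV rows. */
  processor: lambda.IFunction;
}

export class CsvProcessor extends Construct {
  public readonly bucket: s3.Bucket;
  public readonly stateMachine: sfn.StateMachine;

  constructor(scope: Construct, id: string, props: CsvProcessorProps) {
    super(scope, id);

    this.bucket = new s3.Bucket(this, "Uploads");

    const processTask = new tasks.LambdaInvoke(this, "ProcessBatch", {
      lambdaFunction: props.processor,
    });

    this.stateMachine = new sfn.StateMachine(this, "Pipeline", {
      definition: processTask,
    });

    this.bucket.grantRead(props.processor);
  }
}
```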
It also gets muddy when using something like AppSync, which does communicate across services.
t
Yeah, it's really the `api` that suffers the most from this. That's what synthesizes things between domains.
Any backend / async process can usually be cleanly thought of as a service
m
And I think that's OK. Even in the backend we manage cross communication through the API. Never access cross-domain Lambdas/databases directly.
a
I would like to try a “service discovery” model, using AWS Cloud Map, and create libraries to “consume” those services. So from the implementation perspective, there are still libraries, but those just hide the details of how to consume a service.
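A minimal sketch of that Cloud Map approach with AWS SDK v3; the namespace, service name, and `url` attribute are assumptions:

```ts
// The library hides how a service is discovered, so consumers get a plain
// function instead of wiring env vars themselves.
import {
  ServiceDiscoveryClient,
  DiscoverInstancesCommand,
} from "@aws-sdk/client-servicediscovery";

const sd = new ServiceDiscoveryClient({});

export async function getUsersServiceUrl(): Promise<string> {
  const res = await sd.send(
    new DiscoverInstancesCommand({
      NamespaceName: "myapp.local",
      ServiceName: "users",
    })
  );
  const url = res.Instances?.[0]?.Attributes?.["url"];
  if (!url) throw new Error("users service not registered in Cloud Map");
  return url;
}
```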
d
I agree with all of what @thdxr said above, except for the security part. I think it is worth the pseudo-coupling to grant specific permissions, even if it doesn't cover a whole lib. I might be paranoid though.
a
@Derek Kershner so what should we do then?
Or do you share my concern too?
d
At some point, something is going to have to call a service directly. For me, it doesn't really matter where that is; just do whatever makes the most sense for your context. @Adrián Mouly