# developer-productivity-engineering
a
I think it can be a mistake to think about productivity in terms of the impact on the code itself. It's better to think in terms of product metrics, then look at the number of developer hours it takes to improve those. In our case, we look at the aggregate of FTE / Daily Active Users. This framing makes sense once you consider cases where removing bad or unused features has a positive effect on your users but actually hurts "code velocity"
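A minimal sketch of the FTE-per-DAU ratio described above, with entirely hypothetical feature names and numbers (the real aggregation at Meta is not specified here):

```python
# Hypothetical data: staffing (FTE) and daily active users per feature area.
features = {
    "search":      {"fte": 4.0, "daily_active_users": 120_000},
    "legacy_feed": {"fte": 3.5, "daily_active_users": 900},
}

def fte_per_dau(stats):
    """FTE spent per daily active user; lower means more efficient investment."""
    return stats["fte"] / stats["daily_active_users"]

# Rank feature areas by cost per active user rather than by code velocity.
ranked = sorted(features, key=lambda name: fte_per_dau(features[name]))
for name in ranked:
    print(f"{name}: {fte_per_dau(features[name]):.6f} FTE/DAU")
```

On this view, a feature with few users and meaningful staffing cost (like the hypothetical `legacy_feed`) surfaces as a removal candidate even though deleting it would register as negative "code velocity".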
n
Thanks so much for the response. For context, I'm the tech lead on our Mobile Infrastructure team at LinkedIn, and our leadership chain is pressing us hard to figure out our "time it takes to deliver a feature", since most of our work is tied to what the dev uses while coding (think GraphQL, for example). Judging by your response, I'm guessing Meta does not measure anything like this, but please lmk of any insights you might have around this topic. 🙂
a
We absolutely measure feature development time, but not in terms of code velocity. We establish critical feature sets and measurable human-facing aspects of the product, then monitor those as we develop products. But the biggest difference is that we drive most of our changes based on what the people doing the work come up with, rather than what the leadership chain wants. In general, our managers and directors set priorities and facilitate resourcing; they aren't trying to make artificial timelines happen.

All that said, the best way to alleviate concerns over "time it takes to deliver a feature", in my experience, is to establish a tech-ladder-style delivery path where each piece is delivered and incrementally shows value.

I'd guess that you hear a lot about the need for "predictability", and that usually betrays a wrong-headed view of how to make development organizations productive. It's almost always a losing proposition to think of development as a feature factory where the whole cost of development for a feature is knowable. You get over-planning and generally worse outcomes than if you're only setting direction and letting your people deliver. It often boils down to a lack of trust in the people doing the work, and that's where the core of the problem lies. When leadership trusts the engineers to do the work, the engineers produce more and better work product.
n
Super helpful thoughts. You've helped me gain clarity on a problem space I've been thinking about for quite some time. Thanks again! 🙂
m
@Adam Woods-Mccormick Identifying feature sets and how fast they iterate or change can make sense, but only if you're talking about a team that owns the frontend and its largest dependencies end to end, right? And if those dependencies mostly only feed into that feature set. It's harder (or silly, perhaps?) to measure how a library would impact that, which I think is the problem that @Nikhil Bedi might have. Any thoughts about measuring the impact of libraries? I've always thought it's something specific, on a per-library basis, that the library team should come up with themselves after defining their goals.
a
@Nikhil Bedi Imo, "time it takes to deliver a feature" is essentially all of developer productivity, so you aren't going to come up with one metric for it. You'll have macro measurements such as lead time (from when code is ready to review to when it's in the hands of users, beta or otherwise) and micro measurements such as build time. I'd suggest reading up on the SPACE framework if you haven't yet; you can probably pull some insights from it to share with your leadership.
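The lead-time definition above (review-ready to in-users'-hands) can be sketched as a simple computation; the timestamps here are made up for illustration:

```python
from datetime import datetime
from statistics import median

def lead_time_days(review_ready, released):
    """Macro metric: days from 'code ready for review' to 'in users' hands'."""
    return (released - review_ready).total_seconds() / 86400.0

# Hypothetical changes: (review-ready, released) timestamp pairs.
changes = [
    (datetime(2024, 3, 1, 9), datetime(2024, 3, 4, 9)),   # 3 days
    (datetime(2024, 3, 2, 9), datetime(2024, 3, 9, 9)),   # 7 days
    (datetime(2024, 3, 3, 9), datetime(2024, 3, 5, 9)),   # 2 days
]

# Median is more robust than mean here: one stuck release won't dominate.
median_lead_time = median(lead_time_days(a, b) for a, b in changes)
```

Tracked over time, the distribution of this number (not a single target) is what tells you whether delivery is getting faster or slower.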
n
@Adam Rogal (DoorDash) Just read through the SPACE framework -- super helpful. Thanks so much for sharing, I'll definitely be sharing with my leadership
a
@Max Kanat-Alexander Libraries are still collections of features; the customers are just different. When you build a library, you need to be much more conscious of your downstream dependencies and the amount of churn you cause for them, but in general you should focus on the feature set you provide and the health of that code. I have advocated for two big groups of metrics for libraries: adoption metrics to look at growth, and crash/bug consistency to look at ongoing health and churn. The specifics of those metrics depend a lot on your delivery mechanism.
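The two metric groups named above might be sketched like this; the snapshot data and field names are hypothetical, not any particular team's schema:

```python
# Hypothetical weekly snapshots for an internal library.
snapshots = [
    {"adopting_targets": 40, "crashes": 2},
    {"adopting_targets": 55, "crashes": 2},
    {"adopting_targets": 70, "crashes": 5},
]

def adoption_growth(prev, curr):
    """Group 1 (adoption): week-over-week growth in downstream build targets."""
    return (curr["adopting_targets"] - prev["adopting_targets"]) / prev["adopting_targets"]

def crashes_per_adopter(snap):
    """Group 2 (health): crash load normalized by adoption, so raw growth
    doesn't mask a stability regression."""
    return snap["crashes"] / snap["adopting_targets"]
```

Normalizing crashes by adopters matters: in the sample data, absolute crashes more than doubled in week 3, and the per-adopter rate confirms that's a real regression rather than just more users.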
m
@Adam Woods-Mccormick Yeah, those two work fine unless it's an internal library that's the only game in town (in which case raw adoption doesn't matter) and doesn't have anything to do with app stability, like a UI styling library or something. But your general point about focusing on the feature set you provide makes sense.
a
@Max Kanat-Alexander At least at our scale there's always the option of a group rolling their own and then competing for internal users. When everyone has to use your library, you'll make choices that are bad for your users just because you can get away with it. When I've been in that position, we've succeeded when we act as if we could lose our customers at any time and think of COTS (commercial off-the-shelf) products as our competition
m
@Adam Woods-Mccormick Definitely agreed. Thanks!