Continuous Delivery in 4 months for 15 teams and their single monolith - Thierry de Pauw

[applause] All right, this is working. You hear me, right? Okay, I need to say this, because I didn't expect to ever say it again. I've been speaking for seven years, and when I started I used to say: look, I'm very shy, this is challenging, please bear with me. This is the first time in two and a half years that I'm speaking in front of an actual audience and not a screen. Right, that's settled, I can start. Oh, and yes, I walk a lot. This is me coping with stress.

So I'm going to share with you the story of 15 teams from a Belgian federal agency that wanted to adopt continuous delivery in an incredibly short time frame. I'm going to start by borrowing the words of Woody Zuill, who quoted Peter Block: I'm not here to tell you whether you should do what we did with these 15 teams, or whether you should not. It worked for these 15 teams. Unfortunately, I've never had the occasion to repeat this with another client that was so open to it, but I still see a lot of value in what we did, and that's why I'm here to share it with you. All of these 15 teams were working on a single, central, huge, big old monolith, and each of the 15 teams consisted of software engineers, test engineers and analysts. On the side, there were a couple of transversal support teams involved: the architects, the build and release engineers, the DBAs, and the operations and infrastructure engineers. Back then, so we're speaking 2018, the monolith was quite old: 15 years old, written in Java using the EJB technology, deployed on a single JBoss application server, using one single central big fat Oracle database. Classic. And all of this was running in the private cloud of the Belgian federal public administrations. Don't be too excited, this isn't AWS, but they were happy: they had a private cloud, and it worked, somehow. The monolith was quite important for the agency, because it was the single piece of software that ran the whole business of the agency and that was serving the 11 million Belgian citizens. So, quite a big thing.

The agency used to have two to four planned major releases per year, and those major releases were quite a big deal. Whenever a major release came near, stress rose and development stopped for weeks of code freeze, during which you had rooms full of users in test sessions doing user acceptance testing and regression testing, making sure new functionality worked as expected and no regressions were introduced. And yet, after each and every release, they had to apply a series of urgent hotfixes. Also classic. Now, inside the agency there were plenty of people who weren't too happy with those major releases, and among them were the domain experts, the people in between the business and IT. They wanted to see features in production more often because they wanted faster feedback: they wanted to know faster how the features were being used and how they behaved in production. And so, late 2017, a handful of people came together to discuss how to get rid of those major releases. They soon realized that wouldn't be easy and that they needed management support for it. Luckily, management was quite favorable to the idea, but they wanted a case before approving. So they started to estimate the effort of one major release, and the outcome was quite surprising: each major release had a deployment lead time, from code freeze till production, of 28 days, and during those 28 days they consumed 334 person-days. Quite impressive. With that, the case was made: management approved the project of improving the release process.

Early September 2018, the agency got in touch with me, asking: can you help us achieve continuous delivery with fortnightly releases? Yes, of course I can help you. Oh, and by the way, we would like to have this by the end of December, giving us barely four months to achieve it. Well, that's interesting. You can imagine my surprise. In my humble career I had never seen an organization that big move that quickly towards continuous delivery. We're speaking here about an organization of 2,000 employees with an IT department of 250 people. But the good thing was: they actually came to me. Now, this was going to be my very first experience working with 15 teams. Before that, I had an experience working with seven teams distributed over Belgium and India.

I wasn't too happy with the outcome of that one, but apparently the organization was, because they came back to me to help another department. And before that, I had mainly worked with single teams, which is pretty easy. So this time I wanted some sort of plan, some sort of structure, to help me do this, and I didn't want to use maturity models, as I have often seen used in digital transformations, because maturity models are fundamentally flawed.

They assume that improvement is linear, context-free and repeatable across teams and organizations: what you did in one team, you can just reapply in another team. Well, this is not true. They define a static level of technological and organizational changes to achieve, and they focus on arriving at a finished, mature state, and then they call it done. So, inevitably, you end up with the classic digital transformation project with a defined begin date and a forecasted end date, while we all know that a digital transformation is actually never done and never finished. Instead of that, we should adopt capabilities and outcomes, and we should adopt a continuous improvement paradigm. That is why I wanted to try out the Improvement Kata, which is a continuous improvement framework that helps you reach goals under uncertainty. At the same time, my dear friend Steve Smith was finishing his book Measuring Continuous Delivery, wherein he explains how organizations can measure the adoption of continuous delivery.

Now, interestingly, Steve was also using the Improvement Kata, but he extended it with Theory of Constraints applied to the end-to-end IT delivery process, in order to find the bottleneck and so to find the experiments most likely to succeed. Whenever I suggest this way of working to a prospect, most of the time I get silence at the other side of the phone and I don't hear back from them. And one day I got the reaction: but, but, but this is going to impact our whole release process! And then there was silence on my side, like: what shall I say? Well, never mind. But this time was different. I was lucky: my contact person was the internal agile coach of the agency, and he had a strong lean and Kanban background, so he was very favorable to the idea of running experiments and making data-driven decisions. Later during the journey I got informed that the IT management of the agency wasn't too happy with the approach. They wanted a roadmap, and obviously I cannot give them a roadmap, because every organization is unique. Now, the first thing we did was to set up the core team who would lead the adoption of continuous delivery, and the reason for this was pretty simple.

I wasn't going to arrive with an army of coaches to help the agency; it was just me, on my own. So getting in touch with all these 15 teams, in this limited time frame and with my limited availability, was just impossible. Instead, I was going to interact with the core team, and the core team was going to interact with the 15 teams. Setting up this core team was pretty simple: the internal agile coach just sent out an email to the whole IT organization, inviting everyone who had enough interest, was motivated enough and had enough background knowledge about the system to participate. And so we started with 20 people in that core team, representing the different roles from the 15 teams and from the support teams: software engineers, test engineers, analysts, one architect, one build engineer and one release engineer. However, we never saw the DBAs, nor the operations and infrastructure engineers, around the table, although there were quite some problems in that area. You have to know that in public administrations in Belgium we have quite a lot of middle management. Those 15 teams were sitting in two departments, each with a line manager; then we had the DBAs in another department, with a different line manager; and then the infrastructure and operations engineers in yet another department with yet another line manager. You can imagine the communication overhead that creates. And so, in the case of the DBAs, it happened from time to time that, when going into production, a database change was not applied to the production database. Whoops. Yeah, happens. This problem never really got solved during those four months; we had to wait until a crisis somewhere in March 2019 before it got tackled and we got to a better solution, not yet how it should be, but already better. Now, investing in the practices that make up continuous delivery is actually very valuable, and this was confirmed by the academic research done by Dr. Nicole Forsgren and by the book Accelerate, also by Dr. Nicole Forsgren and friends. Nicole found out that adopting continuous delivery will actually improve your IT delivery process and, together with the adoption of lean product management and a generative organizational culture, it will improve your organizational performance, like really money-wise: turnover, market share.

But you have to understand that continuous delivery is actually a holistic approach to achieving the right stability and the right speed in order to satisfy business demand. So continuous delivery is not only about speed, as many tend to think; it is most importantly about stability and about quality. You need the stability and the quality in order to improve speed, because stability and quality prevent rework. And you need the speed to improve stability and quality, because speed gives you fast feedback that allows you to improve stability and quality. However, to be honest, if you are ever confronted with the choice of either improving speed or improving stability and quality, well, in my humble opinion, and I'm not the only one thinking that, you should focus on improving stability and quality, because speed will always follow.

Now, in order to achieve the right stability and the right speed, you need to apply a large number of technological and organizational changes to your organization, and you need to apply them to the unique circumstances and the unique constraints of your organization. This is what makes the adoption of continuous delivery so difficult and so hard. So where do we start? Do we start by applying technological changes or organizational changes? Which change should we do first? Which change should we do next? Which change will work in our context and which will not? In order to move forward in these uncertain conditions, we need something that will help us with that, and this is where the Improvement Kata comes in. The Improvement Kata is a continuous improvement framework that helps you execute and measure organizational change.

It's a framework for reaching goals in uncertain conditions, and it was described by Mike Rother in his book Toyota Kata. It consists of four stages that we repeat in cycles. First, we define a vision, a goal, a direction that we want to achieve in a far, far future and that we may eventually never achieve, which is okay. Once we have defined this vision, we iterate towards it using target conditions with a horizon going from one week to three months. So, once the direction is set, we start by analyzing our current condition: what does our current process look like? What data do we have? Which facts do we have about this current condition? Once that is done, we define our first target condition, the first improvement we want to achieve. It has a date by when we want to achieve that improvement, and it has a measurable target, an acceptance criterion, so to say, that tells us when we have achieved it. Now, you have to understand that during this planning phase the team does not know how to reach the target condition. They do not plan how to reach it. It's only during the execution phase that they start to run lots and lots of experiments with technological and organizational changes, using the PDCA cycle: we plan an experiment and its expected outcome, we execute the experiment and collect the data, then we compare the results with the expected outcome. If it was successful, we include the change into the baseline; if it wasn't, we discard the change and start over again.
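
To make the shape of one Kata cycle a bit more tangible, here is a minimal, purely illustrative sketch in Python of a target condition and a PDCA-style experiment loop. The class and field names are my own invention, not anything the agency actually built or used.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, List

@dataclass
class TargetCondition:
    description: str      # the improvement we want to achieve
    metric: str           # what we measure, e.g. "deployment pipeline failure rate"
    target_value: float   # the measurable target, the acceptance criterion
    achieve_by: date      # horizon of one week to three months

@dataclass
class Experiment:
    change: str                  # Plan: the technological or organizational change
    expected: float              # Plan: the outcome we predict for the metric
    run: Callable[[], float]     # Do: execute the experiment and collect the data

def pdca(experiments: List[Experiment]) -> List[str]:
    """Check/Act: keep changes whose observed result meets the prediction,
    discard the others and move on to the next experiment."""
    baseline: List[str] = []
    for exp in experiments:
        observed = exp.run()               # Do
        if observed <= exp.expected:       # Check (lower is better for a failure rate)
            baseline.append(exp.change)    # Act: fold the change into the baseline
        # otherwise the change is discarded and we start over with a new experiment
    return baseline
```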

Now, for the agency, the Kata looked like this. In order to satisfy the demand of the domain experts, the agency understood they had to adopt fortnightly releases, and so, to achieve continuous delivery, they had to move from releasing twice a year to releasing once every two weeks. The direction was pretty simple: a last major release would happen end of December; starting in January, the first fortnightly release would happen, and from then on, every two weeks, a new release would go out. To analyze the current condition, we were going to run a value stream mapping workshop in order to map the technology value stream. The technology value stream is all the steps that happen from committing code into version control to getting that code into the hands of the users in production. For this value stream mapping workshop we were going to use sticky notes, and we were going to do this as a group, where everyone involved in the whole end-to-end IT release process was present in the room. The benefit of sticky notes over drawings is that they are more fluid: you can move them around more easily, you can rearrange them, and so you can iterate quickly over your value stream map. And when you do this in a group, what happens? Well, it starts messy, then it gets messier, and then it gets really, really messy. But as people refine the value stream map, they are, with each iteration, building on top of each other's knowledge. You have to understand that an organization is a complex adaptive system wherein everyone has only a limited amount of information; no one has a complete overview of the whole IT delivery process. So you end up with a value stream map that integrates the knowledge everyone has about the delivery process. And this resulted in the following value stream map for the six-month major release process.

During five months, there was an accumulation of features. Then it was followed by a three-week code freeze, during which we had rooms full of users performing regression testing and user acceptance testing. This is where the 334 person-days started to be consumed. This is also where awareness started to arise that quality is actually a thing. Yeah, obviously. Then this was followed by a one-week production release, and finally, we had champagne. Well, you've just spent six months to get something into production, so you might as well celebrate it, don't we? But the exercise also revealed a second value stream: a much faster patch release process designed for production fixes that was happening every fortnight and that occasionally, but increasingly, was also used to release features into production.

Look at that. How convenient can it be? Now, this is a classic anti-pattern that happens in organizations: when the transaction cost to release features is too high, a truncated value stream often arises for production fixes, and you end up with dual value streams. You have a feature value stream with lead times of months, and you have a fix value stream with lead times of weeks. Although it is an anti-pattern, it was quite key to the success of this journey, because the patch release process showed the potential the agency had to adopt continuous delivery. So from this moment on, we just ditched the major release process and only focused on this patch release process. Some people were confused by this decision and tried to get my attention back on the major release process, asking: why aren't we trying to reduce the lead time of the major release process? And I told them: look, you are already doing this. You are already releasing every fortnight, but you are doing it in a hidden way, under the radar. Now we are going to make this very transparent, very visible, and we will make this process more robust so it can replace the existing major release process. Then, in order to identify the first target condition, we were going to apply Theory of Constraints to this patch release process.

Theory of Constraints is a management paradigm introduced by Eliyahu Goldratt in his seminal 1984 book The Goal, and the central premise is that every system has one governing bottleneck, and optimizing anything other than the bottleneck is just an illusion. So if we have a linear system that consists of steps A, B and C, and B is our bottleneck: losing an hour on B is an hour lost for the whole system, but gaining an hour on A or C is just a mirage. Increasing throughput on A will only result in more inventory piling up in front of B, and increasing throughput on C will only result in more work starvation for C. Now, we can apply Theory of Constraints to the technology value stream, because the technology value stream should be a homogeneous process wherein every step is deterministic, just like in manufacturing. By doing this, we will find the bottleneck in our technology value stream, and so we will find the experiments most likely to succeed. In order to find the bottleneck, I asked everyone in the room to pick a green sticky note and estimate the duration of every activity, and then to pick a red sticky note and estimate the failure rate of every activity.
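
A tiny throughput calculation makes the premise concrete. This is just an illustrative Python sketch with made-up rates; the point is that only the bottleneck determines system throughput.

```python
def system_throughput(rates_per_hour: dict) -> int:
    """A linear A -> B -> C flow can never go faster than its slowest step."""
    return min(rates_per_hour.values())

rates = {"A": 10, "B": 4, "C": 8}   # items per hour; B is the bottleneck

print(system_throughput(rates))                  # 4
print(system_throughput({**rates, "A": 20}))     # still 4: inventory just piles up in front of B
print(system_throughput({**rates, "C": 16}))     # still 4: C is starved even more often
print(system_throughput({**rates, "B": 6}))      # 6: only improving the bottleneck helps
```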

Now, it doesn't need to be very precise, because most of the time organizations don't have data about this. It's just guessing, and that is okay; we only want an order of magnitude, an idea. Interestingly, the bottleneck was not the manual regression testing, nor the manual testing that happened in pre-production, which was quite surprising, because everyone would have expected that, including me. The bottleneck was actually the execution of the automated tests. Huh, how is that possible? Moreover, the execution of the automated tests only took four hours, whereas the manual testing took somewhere between half a day and a day. So how can the automated tests be the bottleneck? Well, for various reasons. First of all, the failure rate of the automated tests was quite high, resulting in lots of rework and lots of re-executions of the automated tests, which adds up to the total lead time. Second, every lane you see, the three lanes on the value stream map, represents a version control branch. So on every merge of a branch, they had to re-execute the automated tests again, adding to the total lead time. And up until then, every one of those 15 teams managed their own set of automated tests, and they decided on an ad hoc basis whether to run the automated tests and which ones to execute. So no one really had an overview of how many automated tests existed in total for the monolith, nor how stable they were.
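
One way to see why a four-hour automated test stage can still be the bottleneck is to fold the rework caused by failures and re-runs into an effective duration. A hedged sketch, with durations and failure rates that are invented but in the same ballpark as the sticky-note estimates:

```python
def effective_duration(hours: float, failure_rate: float) -> float:
    """Expected time per successful pass: with failure rate p,
    a step runs on average 1 / (1 - p) times before it succeeds."""
    return hours / (1.0 - failure_rate)

estimates = {
    # activity: (estimated duration in hours, estimated failure rate)
    "automated tests":               (4.0, 0.7),
    "manual regression testing":     (8.0, 0.2),
    "manual pre-production testing": (8.0, 0.1),
}

for activity, (hours, rate) in estimates.items():
    print(f"{activity:30s} {effective_duration(hours, rate):5.1f} h effective")
# the automated tests come out around 13 effective hours,
# well above either of the manual testing activities
```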

So, as a first experiment, I suggested: let's implement a deployment pipeline so that all of these tests get executed. The deployment pipeline is a key design pattern from continuous delivery; it is the automated manifestation of getting code out of version control into the hands of the users in production. Its purpose is to create transparency and visibility about the release process, which increases feedback and creates empowered teams. Now, in our case, the deployment pipeline stopped after the execution of the automated tests, because deployment to the different environments was done by a different tool. Whatever, it's not important. I also suggested they should collect some metrics: the lead time of the deployment pipeline and the failure rate of the deployment pipeline. The lead time was eight hours, which is quite long, and the failure rate was 70 percent. Whoops, that is quite high, isn't it? The reasons for the very high failure rate were quite diverse. The first one was that the automated tests were sitting in a different Git repository from the production code, resulting in automated tests getting out of sync with production code. The second thing, which was quite important, is that the concept that one failing test fails a release candidate was totally unknown to the agency up until then. They applied something called test failure analysis, where they tried to find the root cause of a failing test: if a failing test was caused by a recent change to functionality covered by that test, then the release candidate was discarded; but if it was not caused by a recent change, well, I guess it is okay and we can accept the release candidate. Yeah, I don't see where the problem is. Furthermore, test data was not cleaned up before executing the automated tests; manual testing and automated tests were using the same set of test data in the same environment; and lots of tests depended on third-party services that were sometimes available and sometimes not. So this brings us to this current condition for the agency.
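
The two pipeline metrics are straightforward to derive from run records. A minimal sketch of that kind of calculation, over a hypothetical list of runs rather than the agency's actual CI data:

```python
from datetime import datetime, timedelta

# Hypothetical pipeline run records: (start of run, end of run, all tests green?)
runs = [
    (datetime(2018, 10, 1, 8, 0), datetime(2018, 10, 1, 16, 0), False),
    (datetime(2018, 10, 2, 8, 0), datetime(2018, 10, 2, 16, 0), True),
    (datetime(2018, 10, 3, 8, 0), datetime(2018, 10, 3, 16, 0), False),
]

lead_time = sum(((end - start) for start, end, _ in runs), timedelta()) / len(runs)
failure_rate = sum(1 for *_, green in runs if not green) / len(runs)

print(f"average pipeline lead time: {lead_time}")    # 8:00:00
print(f"pipeline failure rate: {failure_rate:.0%}")  # 67%
```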

We have two value streams: a feature value stream, the major release process, and a fix value stream, the patch release process. For the patch release process, the deployment pipeline had a lead time of eight hours and a failure rate of 70 percent. The core team decided that the first target condition would be one of stability improvement: they wanted to improve the failure rate from 70 to 30 percent in one month's time, which is a fair choice. And because the lead time of the deployment pipeline was eight hours, they could only execute the deployment pipeline once a day, so over one week they had five executions of the deployment pipeline. So by the time they reached their target condition, they would accept that two out of five pipeline runs would fail. And then they started defining experiments, and the first experiment was, well, setting up the deployment pipeline.
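
As a small worked check of that tolerance, assuming one pipeline run per working day:

```python
import math

runs_per_week = 5              # an eight-hour pipeline fits once per working day
target_failure_rate = 0.30     # target condition: down from 70 percent
tolerated_red_runs = math.ceil(target_failure_rate * runs_per_week)
print(tolerated_red_runs)      # 2 -> accept at most two red pipeline runs out of five per week
```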

That was done, check. Then they wanted to do a daily evaluation of the failing tests, to actually do something about them. They wanted a dedicated environment for the automated tests. They wanted to recreate the database before running the automated tests, so that test data was automatically cleaned up. They wanted to stub the third-party services, so that the dependency on third-party services would disappear. And they wanted to automatically collect the lead time and failure rate of the deployment pipeline, so that they could create dashboards and see the evolution of those metrics: are we going in the right direction, is this all going down, or is it going up and are we going in the wrong direction? Now, that was the plan.
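
Of those experiments, stubbing the third-party services is the most code-shaped one, so here is a minimal sketch of the idea in Python. The PartnerClient interface and its methods are hypothetical stand-ins for whatever the monolith actually called:

```python
class PartnerClient:
    """Hypothetical interface to a third-party service the automated tests depend on."""
    def lookup_citizen(self, national_id: str) -> dict:
        raise NotImplementedError

class HttpPartnerClient(PartnerClient):
    """Production implementation: calls the partner over the network, sometimes unavailable."""
    def lookup_citizen(self, national_id: str) -> dict:
        ...  # real HTTP call, omitted here

class StubPartnerClient(PartnerClient):
    """Stub used in the deployment pipeline: deterministic and always available."""
    def lookup_citizen(self, national_id: str) -> dict:
        return {"national_id": national_id, "name": "Test Citizen", "status": "active"}

def open_case(client: PartnerClient, national_id: str) -> str:
    citizen = client.lookup_citizen(national_id)
    return f"case opened for {citizen['name']}"

# In the automated tests, inject the stub so an unavailable partner can no longer fail a run:
assert open_case(StubPartnerClient(), "00000000097") == "case opened for Test Citizen"
```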

The execution went slightly differently than expected, and it took me another six months after the agency reached continuous delivery to realize, and in fact it's not me who realized it, it was with the help of Douglas Squirrel and Jeffrey Fredrick, that this was an example of fear conversations. Fear conversations are one of five types of conversations that an organization should adopt in order to become a high-performing organization. Squirrel and Jeffrey describe those different kinds of conversations in their book Agile Conversations, and they were so kind to include this case as an example in their book. Thank you. These fear conversations helped us surface the fears that existed inside the teams and the stakeholders, then mitigate those fears, and then navigate all the difficult conversations we had to have in order to reach our goal. And the first fear was one of complexity.

When I arrived in September, the last major release cycle had already started in June, so during two months they had had the time to accumulate features. I was hoping there was a way we could move the features planned for this last major release towards the existing patch release process that was happening every fortnight, because I had some concerns and was a bit afraid of a big bang switch between the last major release in December and then starting in January with fortnightly releases. So I was hoping there was a way to gradually move away from the major release process, eventually discard that last major release and pick up with the fortnightly releases. Now, it took me quite some time, lots of conversations and lots of drawings, to understand and accept that this was actually not possible. The organization had a quite complex branching strategy in place, a version control branching strategy. They were using something that resembles Git Flow: they had a long-running develop branch, all right, and then a semi-long-running main branch, this is getting interesting. Some teams were committing directly into develop, some teams were committing into feature branches, other teams were committing into team branches, yet other teams were using a combination of feature and team branches. Yeah, crazy situation. And all of these branches lasted for weeks or months. During a patch release, fixes and features planned for the patch release were cherry-picked from develop into the main branch, resulting in develop deviating more and more from the main branch, and making it nearly impossible to merge develop back into the main branch. Right? So what happened during a major release? Well, deployment happened from develop.

This was possible because develop contained both the fixes and features planned for the past patch releases, as well as the features planned for the next major release. So this is okay. Then the main branch was deleted, all right, and a new main branch was created from develop, and we start over again. Right? You can imagine it took me some time to just absorb this. So with a big sigh, especially from me, we accepted to mitigate this complexity by simply accepting that a last major release would happen in December and that then we would switch to fortnightly releases. In hindsight, I have to admit it wasn't such a big risk, because any improvement we decided to apply to the patch release process, which would essentially become the fortnightly release process, was immediately implemented and immediately tested during the next patch releases, and so we knew it would work when we did the switch. They also wanted to implement a proper Git Flow branching strategy after the last major release. People who know me know that I am not a big fan of branching strategies, but given the context, it made sense. So sometimes you just have to shut up. They also promised they would work on reducing the lifetime of their branches, so that eventually they could move away from Git Flow towards GitHub Flow. Six months later, this had still not happened. In the meantime, the lead time of the deployment pipeline reduced from eight hours to four hours, which is a hundred percent increase in feedback.

So now they could execute the deployment pipeline twice a day, which is quite impressive. The number of automated tests grew. Not because teams were frantically writing new tests and adding them to the deployment pipeline, no, no, no, just because they found out there were tests they had forgotten about. Oh look, another bunch. Right, I had quite a lot of surprises. But the stability of the deployment pipeline did not improve, absolutely not. We had to wait until somewhere late November to have our first green deployment pipeline, where all tests were green, and then it was red again for a couple of weeks. Not very reassuring, knowing what we wanted to do a month later. Yeah, well, right.

The next fear was one of deadlines. Being part of government, the agency had legally mandated deadlines. Missing a deadline was not an option. So how would they be able to hit those hard targets, knowing that a single failing test could block a release, and given the state of the automated tests? IT management was quite concerned about this. Well, I was also concerned about this. To mitigate it, the core team decided to introduce a manual overrule of the deployment pipeline.

So, despite a failing deployment pipeline near a hard deadline, the team could choose to accept a bad release candidate and then mitigate the risk with sufficient manual testing. Personally, I wasn't too happy with that decision, because I feared it would remove the pressure to actually do something about the stability of the automated tests. And that fear was confirmed six months later, because they were still struggling to obtain a stable deployment pipeline. But it had a benefit: it introduced transparency and visibility about those manual overrules, because they were already happening in the past, but nobody was aware of it. Now they were being recorded, and we could radiate that information, together with all the problems that go with it.

The last fear was one of bugs, and this was the biggest concern for IT management. A single severe issue could land the agency on the front page of every Belgian newspaper, damage its reputation and damage the careers of IT management. Obviously. And again, given the state of the automated tests, this was a well-founded fear. Many of the tests were flaky and non-deterministic; re-executing a failing test in isolation often made it pass and turn green.

To mitigate this, the QA expert who was present in the core team suggested splitting the automated tests into two sets: one set of stable automated tests, where any failure would block the release candidate, and one set of unstable, quarantined tests that were only executed at night and that would never block the release candidate when they failed. Interestingly, this also had the effect of reducing the lead time from four hours to two hours, so all of a sudden they could execute the deployment pipeline four times a day. An incredible increase in feedback.
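
A hedged sketch of that splitting rule: a hypothetical quarantine list decides which failures block the release candidate and which are only reported from the nightly run. The test names are invented.

```python
QUARANTINED = {"test_legacy_batch_export", "test_partner_timeout"}  # known flaky tests (invented names)

def evaluate_run(failed_tests: set) -> tuple:
    """Return (release candidate blocked?, quarantined failures to report from the nightly run)."""
    blocking = failed_tests - QUARANTINED     # a failure in the stable set blocks the release
    quarantined = failed_tests & QUARANTINED  # a quarantined failure is reported, never blocks
    return bool(blocking), quarantined

blocked, flaky = evaluate_run({"test_partner_timeout"})
assert not blocked        # only a quarantined test failed: the release candidate survives
blocked, flaky = evaluate_run({"test_tax_calculation"})
assert blocked            # one failing stable test fails the release candidate
```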

The last major release happened as planned at the end of December, without any major issues, which helped reassure IT management to move on with the plan. The first fortnightly release was planned for the third week of January. Everyone was holding their breath, expecting a storm of problems, and in fact it went very smoothly, without much of a problem. It felt like a normal patch release, although the underlying process had changed dramatically. So, by making those fears discussable and then mitigating them, we provided the agency with enough protection to make this first fortnightly release happen as planned. From then on, releases came out every fortnight, like clockwork. And it actually never stopped; they just continued. On my first visit after the first fortnightly release, I found smiles all over the place. Everyone was super excited. They had never imagined such a change was possible inside that organization. They just did it, which is very rewarding.

But are fortnightly releases considered to be continuous delivery? Well, continuous delivery has a dynamic success threshold that depends on business need. We say that an organization is in a state of continuous delivery if its IT services have the right stability and the right speed to satisfy market demand. For the agency, the domain experts wanted more frequent releases; with the fortnightly releases we satisfied the demand of the domain experts. So yes, they achieved continuous delivery. However, during the journey I also realized, and this was a big surprise, that it is actually possible to reach continuous delivery without ever achieving continuous integration.

Yeah, this was a surprise. I wasn't expecting this one, because I thought they would never do it. Look, the deployment pipeline is not stable, and yet they did it. Continuous integration has a static success threshold: we say that an organization is in a state of continuous integration when everyone commits at least once a day into mainline, every commit triggers an automated build and execution of the automated tests, and whenever the build fails, it gets fixed within 10 minutes. Well, with the metrics we had, because we collected metrics, we knew the agency was not in a state of continuous integration; they never reached continuous integration. Now, is that a good thing? Well, no, absolutely not. This is not good, because you will never be able to sustain continuous delivery in the long run. You will be operating with a higher risk of delayed delivery and production failures than people can actually tolerate, and this will create a higher risk of stress, fatigue and burnout. You don't want this. So you need continuous integration. Many times I was wondering: what am I doing here? The improvement process was extremely slow, experiments did not get implemented as agreed, the stability of the deployment pipeline did not improve. I often felt very little, very insignificant and exhausted, and I had the impression this was going nowhere. And yet it happened, in the most unexpected way. The reason why it worked was that the core team was this very jelled, very motivated team that absolutely wanted to achieve this; the internal agile coach worked a lot on creating a safe environment; the very short time frame created a sense of urgency; the core team communicated a lot inside the IT organization about what they were doing, what was to be expected, what would change; and, quite important, the core team used the inertia of the organization to their advantage.
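
Steve Smith's static threshold mentioned above can, in principle, be checked mechanically from version control and build data. A minimal, hypothetical sketch of such a check, not the measurement tooling the agency or Steve's book actually uses:

```python
from datetime import timedelta

def in_continuous_integration(daily_mainline_commits_per_person: dict,
                              every_commit_builds_and_tests: bool,
                              typical_fix_time: timedelta) -> bool:
    """Static threshold: everyone commits to mainline at least once a day,
    every commit triggers an automated build plus the automated tests,
    and a broken build is fixed within ten minutes."""
    everyone_daily = all(n >= 1 for n in daily_mainline_commits_per_person.values())
    fast_fixes = typical_fix_time <= timedelta(minutes=10)
    return everyone_daily and every_commit_builds_and_tests and fast_fixes

# With long-lived team and feature branches, daily mainline commits were rare:
print(in_continuous_integration(
    {"team A": 0.2, "team B": 0.0, "team C": 1.0},
    every_commit_builds_and_tests=False,
    typical_fix_time=timedelta(hours=4)))   # False: not in a state of continuous integration
```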

They just acted and decided faster than the opposition. Yeah, sometimes people had questions: but why, why do you plan to do this? Yeah, well, it's already done, sorry. Now, the biggest outcome for the agency was the creation of transparency, the creation of visibility. The deployment pipeline made very visible all the problems everyone was already aware of, and once dashboards were in place showing the evolution of lead time and failure rate, IT management got more and more interested, and that unlocked budget for tech coaching to support the teams. From my side, I only spent 12 days of my time on this: two days of kickoff, during which we set up the Improvement Kata, ran the value stream mapping workshop, applied Theory of Constraints to the patch release process and defined the experiments, followed by one day of follow-up every week. My role was mainly one of guidance and teaching principles and concepts. All the hard work, all the improvement work, was done by the agency, not by me. And that was the story.

So thank you for your time.
