eXtreme Programming folks complain about BDUF (Big Design Up Front) processes and occasionally accuse FDD of being a BDUF process because it builds an initial overall model.
My response is that FDD is not BDUF but JEDI - Just Enough Design Initially.
The rebuff is usually: how do you know what is 'Just Enough'? To which I usually reply, tongue in cheek, 'it takes ability and experience to be a Jedi Master'.
However, my basic rule of thumb for knowing when enough modeling has been done up front is this: when, after one pass through the envisioned scope of the software in question, modeling in small groups does not produce any new classes or associations of real significance.
Steve
Wide rather than deep
Steve,
I love the humor in this post!
I've seen Jeff use the term "Wide rather than Deep". This to me is the secret and perhaps your rule of thumb is how to measure it.
I believe that one of perhaps four core tenets of agile is 'small batch sizes' (and note - not timeboxes).
In FDD we keep the vertical slice batch size small in the Design-by-feature, Build-by-feature stages using a rule of thumb which says a batch of Features (a Chief Programmer Worksheet, CPW) should never take more than 2 weeks to complete. This is a guideline as to how big the batch size should be.
I believe that batch size should be thought of as an area (depth x breadth). With modeling we are saying "go wide and not deep" and implying 'keep the batch size small'. The UML in Color (Archetypes and DNC) technique is extremely helpful in this regard. The technique is also very fast - 1 week of color modeling is probably worth 4 to 10 weeks of "first list all the verbs and nouns in the use cases, now decide which nouns should be classes".
The BDUF methods suffer because they don't have a modeling technique which easily lends itself to "wide rather than deep". Older techniques require too much effort and too much elaboration to produce the same goal - an illumination onto the domain in order to create a realistic plan and estimate. IOW, BDUF methods have an overly large batch size at the front of the process.
Standard queuing theory says, don't put big batch transfers at the front of a process because it creates a delay and a bottleneck downstream. Both of these contribute to long lead times, which contribute to project risk as requirements go stale, which results in change requests, which elongates the process and delays the delivery, which results in more changes and thus a vicious cycle has begun.
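To make that concrete, here is a minimal back-of-the-envelope sketch (the throughput figure is invented purely for illustration, not taken from any real project) of how the size of the batch handed over at the front of the process stretches the lead time to the first delivery:

// A minimal sketch of why large up-front batch transfers lengthen lead time.
// The downstream rate of 5 features per week is an assumption for illustration.
public class BatchLeadTime {

    static double weeksToFirstDelivery(int batchSize, double featuresPerWeek) {
        // Nothing downstream can ship until the whole batch has been worked through,
        // so the first delivery waits on the entire batch.
        return batchSize / featuresPerWeek;
    }

    public static void main(String[] args) {
        double rate = 5.0; // assumed downstream completion rate (features per week)
        System.out.println("Batch of 10 features:  first delivery after "
                + weeksToFirstDelivery(10, rate) + " weeks");
        System.out.println("Batch of 100 features: first delivery after "
                + weeksToFirstDelivery(100, rate) + " weeks");
        // The bigger the batch pushed through the front, the longer requirements
        // sit in queue going stale before anything reaches the customer.
    }
}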
Hence, FDD's secret is "small batch of design upfront which still manages to cover the entire scope with shallow (but useful) depth"
David
David J. Anderson
http://www.uidesign.net/
The Webzine for Interaction Designers
Shape and BDUF
I've not used the phrase wide vs deep or breadth vs depth for a long time now. I simply describe it as "shape modeling" which I know is not a clear or crisp phrase itself but that's because there are many issues in play here. That is - there isn't one word or simple phrase that can capture it all.
By shape modeling, I mean we are after all the classes and how they connect to one another. We're not worried about all the attributes or all the methods in any class. Obviously, there are attributes and methods identified during shape modeling and these are recorded, but the point is that we don't try and fill out a class with all its attributes and methods.
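If it helps to picture that level of detail in code terms, a shape-level model looks roughly like the following (the class names here are mine, purely illustrative, not from any particular project):

import java.util.List;

// A minimal sketch of "shape": the classes and how they connect to one another,
// with attributes and methods deliberately left out for now.
class Customer {
    List<Order> orders;      // a Customer places many Orders
}

class Order {
    Customer customer;       // each Order belongs to one Customer
    List<OrderLine> lines;   // an Order has many OrderLines
}

class OrderLine {
    Product product;         // each line refers to one Product
}

class Product {
    // no attributes or methods yet - at shape level it is enough that the class
    // exists and that its connections are right
}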
I'm not sure about the area formula of width times depth here (however, I could be too literal in my interpretation of what you're saying). It makes me think of the scenes in the film Dead Poets Society where the introduction to that poetry text talks about a similar formula to determine the worthiness of a poem.
Now, BDUF is actually being used incorrectly here, but to keep the terminology consistent...
The BDUF methods suffer because things like requirements are hard, and the structured and disciplined tasks in these areas are usually done poorly. A brilliant paper related to this is Parnas and Clements, "A Rational Design Process: How and Why to Fake It," IEEE Transactions on Software Engineering, Feb 1986.
Now, part of why process 1 - Develop an Overall Model - in FDD (where shape modelling is done) is so important is that it also recognises De Luca's first law of object modeling, which is "when we're doing object modeling, we are also doing requirements and requirements analysis."
Where some other methods differ is they too recognise that these things are poorly done and so the reaction is to skip them. To not waste time on them. The term BDUF itself is pejorative.
FDD's heritage is very much from DUF. However, it takes known and measured best practices, such as DUF and formal design and code inspections, and makes them far easier to implement than with larger-grained or monolithic processes (because of shape modeling, where we cover all the classes and their connections, and because of the granularity of a feature, which makes inspections easier to implement).
During process 1 - Develop an Overall Model - as we are doing shape modeling we are also doing requirements and requirements analysis with the domain experts. Peter Coad's contribution here of modeling in small teams, but each team modeling the same domain area and the teams having a mix of people, is a big help.
Models are very expressive. Domain experts can quickly understand them to the level necessary during the Develop an Overall Model process. There are many other spinoffs as well. Models have very little "wiggle room." That is, they very much help challenge and confirm the requirements. Also, domain experts can come to understand the difference between a multiplicity of 1 and a multiplicity of * on an association, which is a big deal to developers. A 1 means a simple attribute, a set and a get. A * means a collection. What kind of collection? What are the rules for inserting into the collection? Deleting from the collection? And what is the meaning of the * - is it many addresses at the same time or a history of addresses over time? And so on. It is 10 times the effort. Now, picture a typical domain model and it doesn't take many 1s becoming *s to blow a plan.
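A rough code sketch of that difference (the class names are just for illustration) shows where the extra effort comes from:

import java.util.ArrayList;
import java.util.List;

// Multiplicity 1: a simple attribute, a set and a get.
class PersonWithOneAddress {
    private Address address;

    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}

// Multiplicity *: a collection, and every question the * raises needs an answer.
class PersonWithAddressHistory {
    private final List<Address> addresses = new ArrayList<>();

    public void addAddress(Address address) {                  // rules for inserting
        if (address == null) throw new IllegalArgumentException("address required");
        addresses.add(address);
    }

    public void removeAddress(Address address) {               // rules for deleting
        addresses.remove(address);
    }

    public Address currentAddress() {                          // many at once, or a history over time?
        return addresses.isEmpty() ? null : addresses.get(addresses.size() - 1);
    }

    public List<Address> addressHistory() {
        return List.copyOf(addresses);
    }
}

class Address { /* details omitted */ }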
As to the selection of features for chief programmer workpackages (CPWs) this I'll explain in a future newsletter on FDD Workflow. There is still quite a bit done by the CPs in terms of planning and scheduling the detailed workpackage tasks and this I think is what you are referring to when you speak of batch sizes.
Paul Szego's not around at the moment, but hopefully when he's back he'll post in at this point as he can explain far better than I the details that go on at this level.
Jeff
A bit of Pottery
Not long ago I worked on a project where we used FDD, although I was the only one with FDD experience. When we began the modelling process, there was a great tendency to model all the way down to the ground; as soon as there is a box on the model, people want to fill out attributes and give it methods. There was a misconception that modelling meant "painting a detailed picture", and that it wasn't finished until it was detailed.
When we workshopped it, I frequently had to pull people back and remind them that they were working more at a conceptual level; that the overall shape was more important than any of the detail, and the relationships between the classes were the key part.
After a few iterations, the model settled into what felt right, and it ended up being quite different to the first cut. We embellished the model with just enough detail to see how things would hang together properly, and then moved on to the next phase, and we came back later to flesh out the rest.
Had we put in the level of detail that they initially wanted, it would have been that much harder to get the shape right. It seems to me that "shape models" are more fluid, as the lack of detail minimises coupling and assumptions that you may make about how things are going to work. It therefore makes it easier to mould and shape into what will be the "final" design.
So that's what made me think of pottery - you don't get out all those groovy little cutting tools to do the detailed work until you have worked out the gross shape with your hands.
So I guess potters are against BDUF too!
:: Gavin
How many features in a CP Work Package?
I also read the "batch size" to mean something like "how much do I do in one iteration of processes 4 and 5: Design and Build by Feature". The short answer is: just the right amount.
In practice this sorts itself out, as incorrect work package sizing self-corrects, but to understand why, a short description of the FDD workflow is helpful.
From the feature list and development plan the PM and/or DM will schedule the next features to be developed by assigning them to a CP (there's more to this, but not strictly relevant here). The CP will have their virtual "inbox" of assigned features waiting to be kicked off, and it's from here they will select what goes into the next work packages to be designed and built using processes 4 and 5: Design and Build by Feature.
There are a number of forces the CP is trying to balance here, but I'll come back to them later. What's more important to understand is why the size self-corrects, and why any attempts to artificially specify it are counter-productive.
CPs soon discover that the administrative overhead of a work package is similar, no matter the number of features. So they'll first try to maximise the number of features in a package, in order to reduce the amount of administrivia. THEN they'll consider the other factors. If it gets too big, i.e. over 2 weeks, they cut it down. If other factors make it too hard, they change something.
The most common factor is class ownership: you want to select a set of features that hit classes owned by a small group of owners. This makes it easier to schedule as developer contention is less likely. The way feature lists are written this is usually quite easy, as small "chunks" of features can be grabbed straight from the features list.
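As a purely hypothetical illustration (the feature and owner names below are invented), the CP is in effect weighing candidate bundles something like this:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A sketch of the class-ownership heuristic: prefer bundles of features whose
// classes belong to few owners, so fewer developers need to be scheduled together.
public class WorkPackageSelection {

    record Feature(String name, Set<String> classOwners) {}

    // How many distinct class owners would this candidate bundle involve?
    static int ownersNeeded(List<Feature> bundle) {
        Set<String> owners = new HashSet<>();
        for (Feature f : bundle) {
            owners.addAll(f.classOwners());
        }
        return owners.size();
    }

    public static void main(String[] args) {
        List<Feature> bundleA = List.of(
                new Feature("Calculate the total of an order", Set.of("Ann", "Raj")),
                new Feature("Apply a discount to an order", Set.of("Ann")));
        List<Feature> bundleB = List.of(
                new Feature("Calculate the total of an order", Set.of("Ann", "Raj")),
                new Feature("Archive the history of a customer", Set.of("Mei", "Tom")));

        System.out.println("Bundle A needs " + ownersNeeded(bundleA) + " class owners");
        System.out.println("Bundle B needs " + ownersNeeded(bundleB) + " class owners");
        // Fewer owners means less developer contention and easier scheduling.
    }
}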
Most other factors relate to schedule. The CP is constantly monitoring their own "pipeline", and must also be wary to ensure that assigned features are completed by their target MM/YY date. The pipeline is simply the CP planning ahead, and as a rule of thumb we like to ensure that you never have less than 2 weeks' worth of work packages kicked off.
So the bottom line is that CPs will juggle the work package sizes to make their life simpler, which is good for the project as work gets done faster! Developers like developing; they don't like administration. They don't like waiting on dependencies; they love getting things "finished" each week. If the process encourages and rewards them for doing what they like best, without sacrificing the other necessary steps, they'll find a way to make it happen themselves.
Optimal efficiency is self organizing - interesting!
Paul,
This is very interesting - the idea that optimal batch size occurs naturally.
First of all you point at the "efficiency problem". What you call "administrivia" is formally known as "setup". Any batch has a setup. To maximize "efficiency" mass production and cost accounting have encouraged manufacturers to maximize batch size i.e. minimize the setup time per unit. The FDD equivalent is minimize the administrivia per CP.
In system thinking terms, there appears to be a natural balancing loop which causes the system to converge. The balancing effect is that of the morale of the team which goes down if the batch size is too big - they just don't get to be finished often enough.
However, I have not seen it as you describe. I have seen a tendency for batch sizes to get too big. This causes the team to evade design and code reviews because they are too big and daunting and this causes quality to drop off. In an effort to improve CP efficiency i.e. reduce admin per CPW, the overall system throughput falls off.
Hence, I'm wondering if there is another element missing in your description - some leadership and compelling force from the CP to achieve the correct balance, i.e. the CP enforces the balancing loop.
As you say, you end up with just the right size by magic. I believe that you are effectively saying that you as CP intuitively know when the team is being most "effective" (not "efficient") i.e. producing the optimal amount of Features with the minimal acceptable overhead and optimal quality.
I am left wondering whether this would always be the case with a less experienced CP.
Hence, I still believe that a formal method of explaining what represents an optimal batch size i.e. optimal CPW-size for stages 4 & 5, is desirable.
Finally, you talk about the "inbox" - this would be described as a "buffer" or "queue" in formal production process theory. The Theory of Constraints would explain it this way...
The CP is a constraint on the throughput of the system (of software development). The CP must be exploited to the full. In order to fully exploit the CP, the CP must never be idle. In order to ensure that the CP is never idle, a buffer of pending work must be placed in front of the CP to protect the CP from idleness.
You could go on to write this again for the Feature Team and the individual developers.
Where I am going with this thread is that it ought to be possible to generate a general theory of agile development, and FDD ought to fit that general theory. I believe that the existing Theory of Constraints IS that general theory, and that FDD evolved the way it did because all of those originally involved naturally thought in a pattern of "identify and eliminate that which is constraining me from doing optimal work". Furthermore, they understood the holistic system of software development - "the system I'm building is the process" - and because of this the choices made in FDD are globally optimal choices rather than locally optimal ones.
As a final comment on queues in this case, there is another aspect to queuing which is hard to describe in formal process theory. The CP is aware of what is in the queue. This allows the CP to act in a predictive fashion. Predictive control systems always outperform reactive systems.
If you drive a manual shift car you'll understand this: the driver, predicting the future speed of the car, shifts gear appropriately. This makes a manual shift more responsive than an automatic. Automatic gearboxes always react to changes in the speed of the car, and this causes a time delay in selecting the correct gear.
The same principle can be applied to management processes. Predictive processes react better than reactive processes. The overall effect is higher throughput and a need for smaller queues or buffers.
This is an important point because most Agile processes are entirely reactive, e.g. XP. FDD can be differentiated in several ways from other popular Agile methods because of this predictive quality, e.g. modeling as stage 1 and planning as stage 3.
David
--
David J. Anderson
http://www.uidesign.net/
The Webzine for Interaction Designers
It's not magic
I think this has been covered well in another thread on this site. However, I want to be clear that it isn't "magic" and Paul didn't say it was. As discussed in the other thread, it's about feedback loops, visual control, self-organising within planned assembly.
Administrivia is not setup
What I flippantly called "administrivia" became "...is formally known as setup" in your reply. That is definitely not what I was trying to say.
What I am talking about is "all the other work that a CP has to do in their role as a CP". It is ongoing. It does not happen once at the start of an iteration (as the term "setup" implies), nor is the effort necessarily proportional to any measure of the "size" of an iteration (e.g. number of features).
Each weekly release meeting involves such work. Each stage of a DBF/BBF iteration *may* involve such work. Managing your pipeline is certainly part of this. Only some of it results directly from DBF/BBF iterations. It can vary wildly based on many factors.
Setup implies one-time
Well spotted Paul, I missed that. Yes, to me setup means one time and done before the rest. That is not the case for the CP tasks you are talking about. David - unless setup has some special meaning in the manufacturing context you were referring to?
Jeff
Intellectual Efficiency
Does this work better for you if we say "setup and maintenance"?
In manufacturing and cost accounting, Efficiency is defined as:
Cost expended on processing / (Cost expended on setup (and overheads) + Cost expended on processing)
If we re-draw this as Intellectual Efficiency and say:
Hours spent programming Features / (Hours spent on CPW setup (and maintenance) + Hours spent programming Features)
does this begin to make sense?
The concept is that in manufacturing cost accounting encourages a behavior which makes batch sizes large because the metric used for measurement and control is Cost Efficiency.
The CP who is attracted to the intellectual challenge of coding (or they wouldn't have become a CP / software engineer) is mentally attracted to the larger batch size because it is more intellectually efficient. This behavior may be reinforced by a senior management requirement to have very efficient programmers who spend most of their hours coding. Design and code can be capitalized under GAAP rules; in fact, software is valued based on the number of hours input to it. These rules are based on a manufacturing factory principle of added value, where inventory increases in value as it passes through a process. Hence, there are two factors encouraging large batch sizes in software development - in systems thinking this is known as a reinforcing loop.
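A quick, purely illustrative calculation (the hours below are assumed, not measured from any project) shows how this ratio rewards ever-larger work packages when the per-CPW overhead is roughly fixed:

// A minimal sketch of the Intellectual Efficiency ratio above.
// The overhead and per-feature hours are invented for illustration only.
public class IntellectualEfficiency {

    static double efficiency(double programmingHours, double overheadHours) {
        // Hours spent programming Features /
        //   (Hours spent on CPW setup (and maintenance) + Hours spent programming Features)
        return programmingHours / (overheadHours + programmingHours);
    }

    public static void main(String[] args) {
        double overheadPerCpw = 8.0;   // assumed roughly fixed setup/maintenance hours per work package
        double hoursPerFeature = 6.0;  // assumed programming hours per feature

        for (int features : new int[] {5, 10, 20}) {
            double eff = efficiency(features * hoursPerFeature, overheadPerCpw);
            System.out.printf("CPW of %2d features -> intellectual efficiency %.2f%n", features, eff);
        }
        // The ratio climbs as the batch grows - exactly the pull toward big
        // batches that a lead-time (throughput) metric is needed to counteract.
    }
}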
Without some balancing force this quickly turns into good old SDLC - heavyweight process. What FDD introduces is the appropriate focus on balancing forces which limit the size of any CPW to a size which produces optimal throughput or value efficiency.
The difference between Mass Production and Lean Production is that Mass Production uses cost accounting to measure cost efficiency - the percentage of the total cost of operating a machine that is spent actually producing throughput, i.e. number of widgets per dollar (processed locally). Lean Production uses throughput accounting to measure value efficiency. This is typically reported as "number of hours to build a car", i.e. it is the lead time which is important - how quickly the material can be pushed through the process.
If you use a Lean Production type metric for software, it becomes more important to have a 2-week lead time for a CPW than it is to maximize the intellectual efficiency or the cost efficiency.
"Tell me how you will measure me, and I'll tell you how I will behave"
If you tell me that I will be measured on client valued function delivered and lead time for delivering it, I will want to keep my batches small and deliver them often. By setting a guideline of 2 weeks, you help me to focus on a target.
On the other hand, if you ask me to fill out a timesheet and measure me by number of hours spent designing or coding against hours spent on all other overhead - which is not assignable to the balance sheet under GAAP rules - then I will seek to maximize my time spent coding and minimize my time spent on other things. This is most easily achieved by maximizing the batch size.
If you measure me with cost accounting rules, then you get big batch sizes.
If you don't believe me - try working as a line manager in a Fortune 100 company for 3 years and having to produce the right numbers to hit the capitalization of development target for the year.
David
--
David J. Anderson
http://www.uidesign.net/
The Webzine for Interaction Designers
There is no batch size
I'm also still totally uncomfortable with this notion of a batch size.
I'm not sure if you've moved away from it now or not (as your other comment about the feedback loop seems to suggest).
An iteration of DBF/BBF is not a batch. There is no "optimal batch size" because the dynamics of the environment in which this is executed are so fluid.
There is a feedback loop, but the way you have described it is too simplistic. i.e. that it simply settles or converges on a single ideal "value".
In the very brief description of the FDD workflow I mentioned only some of the factors that a CP is balancing. They are doing this all the time!
In fact one of the problems we look out for is when CPs do start to settle into a regular iteration schedule, as most often this means planning around reporting milestones (e.g. the weekly release meeting) rather than focussing on the more important goal: overall progress. This is not to say that it's never valid, but from experience it is something to watch out for.
But back to the point about CPs - they *are* the leadership and compelling force that achieves the correct balance. It's just that it's not a simple balancing loop as you describe. You don't end up with a "right size" that can be applied to all work packages; more importantly, you end up with just the right work being done at the right time by the right people.
Your observation about a less experienced CP is spot on - because it's not simply a case of minimizing overhead and maximising output! There are so many factors to consider, albeit ones that an experienced developer / team lead / CP lives and breathes, that someone without the experience would have a harder time. Note that I'm not talking about super-hero developer types here - these are all normal skills and knowledge for any experienced developer / team lead. What FDD gives us is the framework, and a clear understanding of the constraints and responsibilities, within which they can apply this experience.
To me the more interesting flip-side to this is how we can utilise FDD to grow CPs in fewer years. A light-weight process is much easier to understand and come to grips with. Most aspects of FDD have been known to be best practice for years, so there are volumes of material already published by some of the giants in IT that can be easily located. In an FDD project everything is published - so working on an FDD project is the ideal opportunity to understand how it all works. The surgical team structure provides the ideal communication paths not only for CP-developer communication, but also for a mentor-trainee relationship (which again can be many at once, and changing over time). With such small iterations, a CP "trainee" can easily cut their teeth on smaller work packages under the guidance of a more experienced CP. This could be nothing more than formalising all of the inspections to include additional reviewers.
As far as CPs being a constraint on throughput - exactly! Harlan Mills' surgical teams are designed this way, as described by Uncle Fred in The Mythical Man-Month over 25 years ago. It's the reason why they're used in FDD. If you want to scale up, get more CPs. It's all about lines of communication - a matrix between CPs, but running from CP to developer within each team.
And as far as teams evading reviews goes, it's pretty simple: that's not FDD. Reviews are a fundamental part of the design and build processes. We have milestones that make this explicit, and the ETVX layout of the process descriptions makes it clear that you're not "finished" until the exit criteria are met. If you think you're doing FDD and teams manage not to do reviews, then you're in a world of pain that no process is going to fix: you've got people problems.
One final thing I noticed: you seem to contradict yourself on the issue of large batch sizes. You first talk about a balancing loop where increased batch sizes lead to lower morale, but then relate your experience of a tendency for batch sizes to get too big. I'm not sure that I understand how you arrived at this assertion.
JEDI is easy to determine
From our experience it's always blatantly obvious when we've done enough modelling. And I mean obvious to all the people that participate in the modelling sessions; it is not just an arbitrary decision by the lead modeller (or chief architect, as the role is described in the process descriptions).
One of the greatest benefits of having all the participants in process #1 - Develop an Overall Model - is that we get a common understanding of the domain through this group exercise. That includes both the domain experts and the developers. And remember we're developing UML diagrams here, which don't offer the same "wriggle room" that we get when interpreting written text requirements. Given a domain walkthrough of a component, it's always obvious whether we've captured the requirements or not (at least to the level of shape modelling). If we can't take any given piece of the domain walkthrough and see precisely where it is represented in the model, we're not done. This is one of the tasks the chief architect is performing at this stage: ensuring coverage of the domain by the models produced. In reality, however, this is usually more than adequately handled by the multiple teams approach (that doesn't mean always!).
There may be other related issues that have been identified (and recorded, usually on the "rat hole" list, to be dealt with later), and possibly several alternate models presented. But at the end of the session we arrive at a consensus model via the split into teams / present / merge process.
So, given a clear scope from the domain walkthrough and this group approach to building our model, it's always obvious if we've left anything out or are straying beyond the scope of the requirements.
Due to this we only ever need to do one pass through each "piece" of the domain, as presented by the domain experts. Sometimes as we start modelling we discover that the scope is too large, so we may decide to break it down further into more manageable chunks. In this case we are usually modelling in groups already, so we simply pause to communicate the adjusted scope to all present.
Other times we may find during the model merge that different teams have all focused on different parts of the walkthrough presented, but that all are valid. In some cases we may do a merge here, but more often we'll realise that the scope was too large and again break it down further. The reason for doing this, rather than just a merge, is that we haven't gained the benefit of multiple teams all looking at the same part of the model, which is a great advantage of this approach.
From a practical people and facilitation point of view this process may be broken up simply to allow people to function. Modelling is hard work, and we ensure that there are plenty of breaks so people's minds don't turn to mush. The lead modeller may wish to stop the modelling at an early point, so that initial shapes from different groups can be contrasted before continuing. They may throw up a strawman shape at this point, or perhaps direct the group in some other way. We may do several small "bites" of the model as a model / present / merge process if there is likely to be huge variation among the groups (to lessen the effort of a merge at the end). For example we may say to start with "look for the key Moment-Interval class(es) only" or maybe further down the track "look for no more than 3 interesting attributes for each class".
Each of these micro-steps is not an iteration - we are still *building* the same component model. The group is never under any illusion that these intermediate points represent a "good enough" component model.
The only caveat here is of course when you have inexperienced modellers, but that's the same no matter what process you follow. In FDD it's during process #1 that their inexperience has the most visible impact, and where most of the techniques for handling this come into play. Everything I've said holds true, but you're obviously more reliant on the more experienced modellers.
So the bottom line is that there's no real mystery, or "magic" going on here. It's obvious to all participating when a model addresses all the requirements adequately. It doesn't take a guru.