Refinement of UML models through Analysis - Design - Code

I'm currently looking at a major J2EE project running over a period of 3 years, with 6 analysts and a dozen programmers. The analysts are responsible for creating business models/specifications and application requirements. During this period, parts of the system will go into production and be extended along the way, so there will be several releases while analysis and development continue (1.0, 1.1, 2.0, ...).

The analysts will develop an overall domain model, and describe use cases and features.
A list of features will be specified to be implemented in version 1.0 of the application.

Suppose there is only one model, which is equal to the implementation at all times (the TogetherJ approach). Also suppose version 1.0 is currently under development, while the analysts have already started analyzing features for version 1.1. Of course these new features cannot be added to the same model, for several reasons:
- these new features will not be released in version 1.0
- the developers cannot afford to have analysts creating classes that cannot be implemented as such
- the analysts probably won't recognize their models anymore after the designers/programmers have reshuffled them

During development, however, it must also be possible for the analysts to specify changes/updates/improvements to the requirements that do need to be added to version 1.0. These updates need to be reflected later in the specifications for version 1.1 and so on.

How do others manage this refinement of models from analysis to design and implementation?
Is there only one model, meaning that the analysis model is the same as the design model and the implementation model?
Or does each phase have its own models, which need to be kept synchronized?

szego's picture

n+1 development

When you're doing "N+1" type development, where a new version is being prepared while the existing one is maintained, most shops will branch the development. This is most often reflected in their SCM tools, e.g. CVS.
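
For example, with CVS the release branch might be cut roughly like this (a sketch only; the module and tag names are invented):

    # mark the 1.0 release point and create a maintenance branch from it
    cvs rtag REL_1_0 myapp
    cvs rtag -b -r REL_1_0 REL_1_0_BRANCH myapp

    # 1.0 maintenance work happens in a workspace on the branch...
    cvs checkout -r REL_1_0_BRANCH myapp

    # ...while mainline development of the next version continues as normal
    cvs checkout myapp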

I'd take a similar approach to the model: there's one that reflects the mainline development of the next version, and others that reflect previous versions.

Since I only have one model, and it's in sync with the code at all times due to tools like Together, that's hardly a surprising approach. I guess the trick is that you need to do the branch before you start modelling, so you've got something you can tweak without disturbing the existing working version.

As far as retrofitting features goes, it's similar to the approach taken with bug fixes: whenever you have branched development and a defect is found, you have to check what other branches might be affected and address each one in turn. It's the same thing with enhancements.

I don't ever use different models for analysis/design/implementation. They're all the same model. And using FDD we don't have the case where distinct sets of people are involved exclusively in each of those activities - CPs and developers are involved all the way through, so there's no discontinuity as you describe.

PaulS :)

Jeff De Luca's picture

Program Management

Very well said, Paul. The other thing you must consider for a project like this is that it is actually a series of projects. Thus program management could (I would say should) be applied, with each of your releases (branches) being a discrete project.

It is also common in a program for one project to also define a subsequent project.

Jeff

steve palmer's picture

One model

IMHO

In the end there is only one model worth worrying about, and that is the one the developers are using, because it is the only one that accurately reflects the system being built. I don't care what any of the gurus say; I have never in practice seen any value come from 'analysis models' that were out of step with the developed system, or seen any project that has, over the course of time, had the resources to keep multiple models in sync.

I'm not completely opposed to analysts/domain experts drawing pictures to help them organize their thoughts as they prepare for domain walkthroughs with developers, but there are dangers in doing so: pride of ownership, the temptation to over-analyse and slip into design, obscuring the actual problem domain with newly invented conceptual solutions, arguments about whose model is right, development drifting off course without being seen to do so until it is too late to easily correct, etc.

The idea that developers should be able to reshuffle models to the point that analysts no longer recognise them is problematic. What value is there in the output produced by the analysts if it does not reflect the system being built? Much better to have the analysts and developers working from the same script, talking and working together rather than communicating through documented models. It's not that hard these days to have different views of the same model showing different levels of detail as appropriate.

Paul's suggestion of using VCS branching is a good one. It does require a well-managed and disciplined approach to version control; something not every organization is capable of (although one might question whether such an organization should be doing software development at all).

Have fun

Steve (www.step-10.com)

Jeff De Luca's picture

Don't be humble about it!

Shout it loudly - you are absolutely correct.

Rudy: can you explain exactly what your analyst role is and who performs the role? In FDD-speak, where are your Chief Programmers? Or, put another way, what other roles (if any) are the people playing the Chief Programmer role playing?

Jeff

How to fit in the analyst

The analyst role is performed by an IT analyst who's part of the IT department, and has the following responsibilities:

  • Capture domain knowledge from domain experts
  • Create an overall business model and specify business rules and constraints
  • Derive functional requirements (features, feature sets)
  • Hand over specifications/requirements/business model to developers
  • Manage the project, follow-up of planning and progress

The analyst is definitely not a programmer, and doesn't necessarily know anything about Java, J2EE or object-oriented design. That's why I fear the analysts starting to make changes to a design established by experienced designers and developers.

The analysts hand over the 'WHAT TO DO' to the developers, the second important group in the organization. These people are the Chief Programmers, designers, developers, etc. They decide 'HOW TO IMPLEMENT' the specifications.

I agree with all the people commenting on this issue, yet the analyst as described above doesn't seem to fit in well.

Jeff De Luca's picture

Roles - get the Roles right, then the People to play them

Well, you've got all the problems mentioned in this thread, but you've also got your IT Analyst role, which in fact comprises multiple other roles.

From your first few bullet points it sounds like your IT Analyst is what most call a BA or business analyst. It's a role in an IT department that essentially acts as a proxy for the users. This is a reasonably common shape in large enterprises. It's not as good as directly involving the users, but it certainly can be made to work.

If that speculation is right, then your IT Analysts should be the Domain Experts in the FDD Develop an Overall Model process.

This is how you fix part of the massive disconnect you currently have in roles and handoffs. That is - the Chief Programmers are not involved in the Develop an Overall Model process (in fact, they are not involved in the Features List process either since, as you state above, this is done by your analysts).

The next problem is one of the issues raised by Steve: your use of the language "business model". As Paul Szego loves to say, "model the domain, Luke." If there was only one object modeling tip I could ever give, this would be it. After all, this is what OO is really supposed to be about: more flexible and sustainable systems, because we are modeling the underlying framework of the business (the domain) itself rather than the ad-hoc functional requirements of whatever today's problem happens to be. Hence, the model reflects the domain and is as sustainable and as flexible as the domain itself.

A good object model of the domain is then simply implemented. There is no business model and then an implementation model.

So, go back and look at the FDD processes and think of your Analysts as the FDD Domain Expert role, and think of your best programmers as the Chief Programmer role.

Finally, your analysts also seem to be playing the Project Manager and/or Development Manager role (your last bullet point). Since they are the source of the requirements and so on in your world, that is not a good shape.

Jeff

I'll third that!

It is a myth that you can produce a domain model directly from the requirements, and it's one of my major raps against UML. Requirements define scope. A domain model is derived from facts about the business domain within the scope of the requirements.

To give a simple example, a customer has various immutable (my favourite word at the moment) facts about them that no possible requirement can change. A customer has an address (or addresses), and your requirements cannot change that fact. However, your requirements do affect the scope of the facts about a customer you are concerned about. So your requirements may determine that you are only interested in a single (primary residence) address, or all addresses, or only electronic addresses, etc.
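
A minimal sketch of that distinction in Java terms (all names invented for illustration):

    import java.util.ArrayList;
    import java.util.List;

    // Immutable domain fact: a customer has addresses.
    class Address {
        final String kind;   // e.g. "primary", "electronic"
        final String value;
        Address(String kind, String value) { this.kind = kind; this.value = value; }
    }

    class Customer {
        private final List<Address> addresses = new ArrayList<Address>();
        void addAddress(Address a) { addresses.add(a); }

        // The requirements only scope which addresses we care about; a v1.0
        // release might expose just the primary residence, for example.
        Address primaryResidence() {
            for (Address a : addresses)
                if (a.kind.equals("primary")) return a;
            return null;
        }
    }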

Phil

Understanding refinement when using Feature Team Areas

The 'Design By Feature' FDD process introduces the 'Feature Team Area'. If I understand correctly, this is a private workspace for a feature team to design each feature of a selected feature set. Would you please verify the following statements?

This private workspace or 'Work Package' is used to refine (add, delete, update) the overall object model, create sequence diagrams, update classes with method prologues, etc. To accomplish this in a multi-user environment, I suppose we would need a branch of the main line of development (which contains source code, models, etc.). If we don't use a branch, we could e.g. copy the main branch to a shared directory - which I don't think you would recommend.

I suppose that in the DBF process we don't just start with an empty work package, but with a branch of the main branch. Why? Because we need all the existing models and source code to start from. If we don't take the complete branch, but e.g. just the classes we will change, we will probably not be able to test our implemented features. I also suppose that the DBF activity 'refine the object model' refers to the overall object model we created in process 1 - Develop an Overall Object Model.

This private workspace is also used for the 'Build By Feature' process, which builds out further - and possibly refactors - the current implementation.

When the features are implemented, we promote to the build.

  • Does this mean we always create a branch to design and develop features, and never develop in the main branch?
  • Also suppose we are working with 3 branches for 3 DBF/BBF packages; does this mean users can't test one of these implementations before it is promoted to the build? Unless of course we deploy each branch as a separate build, each on a different application server?
  • Does this also mean that features are only promoted to the build when the work package is completely finished?
  • When the work package is promoted to the build, is it necessary to merge the changed models and the possibly refactored design?

Thanks!

I faced this same question!

The most sensitive point is where you mention that there might be a new proposed model for the Overall Domain Model (part of the domain model needs to be changed).

I think that Feature Teams don't have the autonomy to change any part of the Overall Domain Model. That is, if a change is needed it can only be evaluated and authorized by the Chief Architect, in conjunction with any person he sees fit, and he has the responsibility to change the overall domain model accordingly. That is, Feature Teams can propose changes to the Domain Model but never make those changes effective themselves.

This is the only sensitive change control activity. Sequence diagrams, class prologues, and coding are all local. That is, Feature Teams have full accountability for them and can do as they see fit, so there is no need to consult anyone else (besides domain experts for validation).

This simplifies your branching. Just check out the needed classes, make a snapshot of the part of the domain model that is required, and expand the model (the snapshot) with implementation-specific classes. You can create any sequence diagrams you see fit, as they are not shared with any other feature team, unlike the domain model.

After the work is done, just check in everything (including the new expanded model and sequence diagrams).

Remember that classes being changed by a feature team cannot be changed by any other team, as for that period of time the team "owns" the class owner.

Nuno Lopes
PS: If "several" domain classes need to be refactored, something is really wrong with them in the first place. And if something is wrong with them, then something is fishy with the overall domain model (the problem of the untrusted domain model) - bang, back to phase 1.

Commit sequence diagrams to the main model

Thank you for your interesting reply.

FDD seems to specify that a Work Package is only promoted to the build after 1) the DBF has been completed, and 2) the BBF has been completed.

Would you propose not branching the main line of development, and committing sequence diagrams and code into the main branch while still busy doing DBF or BBF respectively? (Suppose a Feature Team consists of multiple developers, all committing their intermediate work into the main line of development.) Or would you create a branch to do DBF, and not create a branch for BBF (committing changed code immediately into the main branch)?

It seems you also commit/promote sequence diagrams to the main branch and add them to the overall model. Is this correct? What happens when a future feature requires a redesign that changes the way former features were designed, e.g. introducing some design pattern? Do you then update all the existing sequence diagrams of all the features affected?

Do you also update the corresponding Work Packages, which are e.g. published on the team intranet? As Work Packages are released on their own when finished, the newer design or refactoring might have invalidated them.

Configuration Management Model isn't flat in FDD

I've seen this type of confused conversation before. I think that it stems from a lack of strong definition of what is expected for a configuration management model in FDD.

Typically, it should have 3 tiers - not 1. I have noticed that many teams are heavily influenced by the Martin Fowler / XP school of thought, which uses a flat, single-tiered model where developers check in straight back into the mainline build that runs against the continuous integration tests. This approach doesn't work well in FDD.

It's better for developers and Feature Teams to have their own space and promote up into the main build.

I'll let someone more knowledgeable on this topic elaborate the point. Overall I feel there is a need for an article on this site describing an ideal config management system for an FDD project whilst comparing/contrasting it with an XP style CM setup.

David
--
David J. Anderson
author of "Agile Management for Software Engineering"
http://www.agilemanagement.net/

szego's picture

Promote to Build

FDD seems to specify that a Work Package is only promoted to the build after 1) the DBF has been completed, and 2) the BBF has been completed.

No - we only promote once. Right at the very end of the entire DBF+BBF process, for the entire work package. It's the very last step. After that there's nothing else to do for those features.

Note that all the changes made for that work package are promoted together.

szego's picture

Class Owner team membership

Remember that classes being changed by a feature team cannot be changed by any other team, as for that period of time the team "owns" the class owner.

Not quite - remember that a person typically owns many classes. I may in fact be part of more than one feature team at the same time, because I own multiple classes.

So strictly speaking, the work package forces serialisation on the class and not the class owner.

Class Owner team membership

You are correct of course!!

I explained it badly. Nevertheless, all I wanted to emphasise is that a class owner and his class are assigned to only one feature team at a given moment in time. So no two feature teams work with the same class at a given time, no changes overlap, and one can check out and lock the classes to be updated for the time it takes to implement the features selected for the Work Package.

This, in the end, means that there is no need to spin off RC (Revision Control) branches for each Work Package in order to safeguard multiple updates to the same files (that is a bad use of branching anyway).

The Chief Programmer assigned to the Work Package is the one responsible for checking in all classes, new sequence diagrams, expanded domain model snapshots, etc., when all the work defined in the Work Package is done (after the cycle has ended). Note that a cycle is composed of several DBF/BBFs, one for each feature.

During the development of the features selected for the Work Package, one may ask what kind of RC one can use. For a team as small as 3-4 people, each owning their own classes (files), this is not such a problem. That is, there is no need for RC, although the CP should perform backups of the files checked out if the cycle takes more than 1 day.

Nuno

szego's picture

Revision Control

Hi Nuno,

it's an interesting observation that people's approach to using revision control changes when (a) the team size is smaller, and (b) the cycle times for a checkout-edit-checkin get shorter. My view though is that no matter what the situation, you should always take the same approach.

We ensure that the box hosting the revision control system is backed up daily, so developers who find themselves working on a class for more than a day or so will often check in an intermediate revision to ensure there is a backup.

I'd also argue that no matter what the team size, you should always use some formal RCS. There are several advantages, not just tracking a history of changes. If you're not going to use an RCS, you still have to face the problems of communicating changes among the team and how you're going to implement a "common codebase". So even for just a few people you're going to have to solve problems that something like CVS (which is cheap and relatively easy to use) solves for you.

But it's not just the tool here; there are the manual processes that we wrap around the technology. And it's here that I think people tend to get less formal with a lightweight team. For example: not bothering with a BUILD tag, or not tagging each work package.

Part of the reason for a process is to make things repeatable. With things like tools and environment we tend to build up a "bag of tricks" over time, some of which are based on the tools in use. For example, grabbing a version of the codebase and automatically publishing the JavaDoc to the project intranet, or generating a report that says "build XYZ contains the following new features". So for us, electing NOT to use an RCS would mean we'd have to find another way to do all of these things too.
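
As a sketch of the kind of automation I mean, assuming CVS (the repository path, module name and output directory are invented):

    # grab the "promoted to build" codebase...
    cvs -d /var/cvs checkout -r BUILD myapp

    # ...and publish the JavaDoc to the project intranet
    javadoc -d /var/www/intranet/apidocs \
            -sourcepath myapp/src -subpackages com.acme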

Maybe there's an argument that says you don't need all of that stuff with a small team either. Maybe, maybe not. Common sense always applies, but in general I'd tend to disagree about skipping the RCS. In fact, in many cases I'd suggest that you're better off sticking to a formal approach to counter the human tendency to think "it's only trivial, we don't need to bother with that step".

PaulS.

Class ownership in practice

Does class ownership really work? Suppose implementing a feature affects classes A, B, and C, which are all owned by different class owners - different people in this case. Each class owner makes the modifications necessary to his class: adding methods and attributes, changing signatures, etc. Each developer promotes his work to the shared workspace. Who ensures that this workspace compiles? Even when doing extensive design in the DBF process, how can you be sure not to have missed any detail needed for these 3 classes to cooperate? And most importantly, who is responsible for testing this feature? And when?

At what point do we need the workspace to compile? Is that only once the 'promote to the build' task starts?

Rudy

szego's picture

No Branches

* Does this mean we always create a branch to design and develop features, and never develop in the main branch?

No - we never create a branch for this reason. Ever. Branches are only for reflecting the product releases, not the internals of the development process.

* Also suppose we are working with 3 branches for 3 DBF/BBF packages, does this mean users can't test one of these implementations before it is promoted to the build?

For starters we'd never have *any* branches. But you are correct that nothing is visible outside the feature team until the work package has been promoted to build. That means not only the users, but the rest of the development team also.

* Does this also mean that features are only promoted to the build when the work package is completely finished?

Absolutely! When all the work is done for that workpackage, everything gets promoted.

* When the work package is promoted to the build, it is necessary to merge the changed models and the possibly refactored design?

Again - there's no branching going on, so there's no merge.

Refactoring is not a part of FDD - we strive to get the model shape correct the first time (there are other discussion threads here which talk about this at length). Any shape change encountered during the execution of DBF/BBF iterations causes alarm bells to go off!

We expect some minor tweaks as we go along, but significant change indicates we got something wrong very early on and further investigation is necessary to determine the impact of the change.

A single class being part of two Work Packages at the same time

* Again - there's no branching going on, so there's no merge.

Is it allowed for a single class to be part of two Work Packages, which are being designed/coded at the same time? That would also require the class owner to be part of two feature teams at the same time. Suppose a single class has been changed in both Work Packages, and that the first Work Package is promoted to the build first. I can then imagine that, when promoting the second Work Package to the build, a merge is necessary.

szego's picture

There can be only one :)

In short - no.

It's as Nuno stated above in the post "Class Owner team membership".

Perhaps there's some confusion regarding "class owner" - there's both the person and the role. In that role, e.g. "class owner of com.acme.pd.Foo", they're only in one feature team. The person may be in another team playing the class owner role for a different class. In practice they're usually in multiple feature teams at once.

But now I'm going to muddy the waters a bit: we sometimes streamline the process somewhat, where we'll slightly overlap the work packages that are serialised on a class, say X. For example one work package (A) might be done coding, with everything checked in pending the code inspection. There's nothing stopping us from kicking off a subsequent work package, say (B), by doing the domain walkthrough and the sequence diagrams on paper. We do this a lot.

Any more overlap than this and you're in the situation where the revision history for that class is not serialised, and you get into potential nightmares.

szego's picture

Implementing Feature Team Areas

Hi Rudy,

the concept of the feature team area is simply a "shared workspace". That's all. We want a way for team members to be able to safely share their work with each other, but not have it impact anyone outside the feature team.

There are several ways this can be implemented, depending on the tools and environment at your disposal.

Despite any other problems it might have, ClearCase allows you to do this very well - by identifying each file checked in with a tag specific to that work package, we can easily tell ClearCase to make our view something like: "always use a locally modified copy first; otherwise, if there's a revision checked in with a tag from this work package, use it; otherwise use the one with the BUILD tag on it".
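
As a rough sketch, the config spec for such a view might look like this (the work package label SC034-042 is only an example):

    element * CHECKEDOUT     # locally modified copies first
    element * SC034-042      # then revisions labelled for this work package
    element * BUILD          # otherwise fall back to the build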

When using something more like CVS, a lot of people do use a shared directory. That's the way we first did it in Singapore - and it works fine. Another approach when using CVS is to use a "BUILD" tag. When a revision is promoted to build we simply apply a tag with the name "BUILD" (most likely moving it from some earlier revision). Each developer can now easily see the "promoted to build" codebase by using a sticky tag of "BUILD". If you want to see the changes made by other team members, simply remove the sticky tag for the files being modified for this work package and you'll get the latest revisions checked in.
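
In CVS terms that might look something like this (file names invented):

    # work against the promoted-to-build codebase (sticky BUILD tag)
    cvs update -r BUILD

    # for the files in this work package, drop the sticky tag to see
    # the latest revisions checked in by other team members
    cvs update -A Foo.java Bar.java

    # at promote time, move the BUILD tag onto the final revisions
    cvs tag -F BUILD Foo.java Bar.java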

I *think* this might be what David is talking about with the 3 levels - for each team member they have their own personal workspace, then there's the feature team area, and then there's "the build". Changes are made by the developer in their personal workspace, then files can be exposed into the feature team area for other team members to see, and then finally the entire work package is promoted up into the build.

Note that this approach is NOT new! I first saw it in the IBM labs over 10 years ago. That was the most comprehensive implementation I ever saw, running on mainframes! So there were a few reasons why we first started doing things this way, but there turned out to be a lot more advantages than expected.

The first driver was simple - once you've done the sequence diagrams, how do you get the changed files to the class owners? You don't want anyone outside the team to see the changes, because the files might not even compile yet. Along comes the feature team area. Then later during BBF we find that developer A needs to see changes made by developer B. Again we used the shared feature team area. What about unit testing? If I needed some other piece of the work package to be done in order for my unit tests to run, we'd share the changes via the feature team area.

Note that developers don't work in the shared area. They have their own local workspaces. The shared area simply allows a way to share changes to the codebase among the team. Code at this level should ideally compile - but might not. It's a lot less stable than code that's promoted to the build. Typically there's no formal control over the shared area - developers are generally smart enough people to be able to handle this among themselves.

szego's picture

DBF+BBF and the Feature Team Area

Hi Rudy,

some more info I hope will help. I've posted a number of small replies as you've touched on quite a few different subjects (and I'm home with the flu and have the attention span of a housefly).

I'll try to walk through the typical steps of a DBF+BBF iteration. I'm obviously going to gloss over a lot here, but I'll try to address the questions you've raised. I'll use the example of a Java project using CVS (pretty typical) and Together/J. Other tools exist, but the bottom line is that I want simultaneous round-trip engineering. And let's say I'll use the tags-in-CVS approach to a feature team area, not a shared directory.

After the optional domain walkthrough the team does a design. Remember we've identified the required classes ahead of time, and from this derived the team membership. Typically we sit around our sheet of flip-chart paper and draw a sequence diagram. We get consensus, take notes, and then go back to our desks.

Changes are made to the latest revision of each affected file, so in CVS I'd do an update and specify something like "cvs update -A" to reset the BUILD sticky tag. The CP diagrams up the sequences using Together/J. We end up with a number of new methods, and perhaps attributes. We might have made some shape changes, but that's rare (and another topic). The changed files are put into the feature team area, i.e. checked back into CVS. The sequence diagram is published to the project portal or intranet (actually I'd generate the HTML doc from Together).

Each team member updates their workspace from the feature team area, by checking out the latest revision of the affected files, and documents the changes. For example a new method requires the JavaDoc comments to be written for it. The documented files are checked back into CVS by each team member. Once all changes are documented, the entire design is published. This includes the sequence diagram(s), class diagram(s) for changed components, JavaDoc output, design notes, etc.

A design review is held. An inspection log is prepared and published as part of the work package. If any changes are required the necessary steps are repeated, e.g. sequence diagrams, model changes, document, review, fix.

Once the design is inspected and passed, the BBF process starts. Each class owner "fills in the blanks". They update their local files from CVS, again getting the most recent copy (since they reset the sticky tag earlier). Typically there's a handful of new methods that are empty. They are well documented, and there's a sequence diagram showing what they need to do. The class owners code up their changes. They write unit tests. They push the changed files out into the feature team area, if others need to see their changes, by checking them back into CVS.

Once done, they prepare the code for inspection. The file is checked in, and the revision noted. The code is effectively "frozen" once it's ready for review. Each class owner prints up the changes to their files and hands them to the CP. Once all classes are ready the CP distributes copies to all reviewers. The inspection is held and an inspection log prepared. Fixes are made as necessary.

Eventually the code is ready to be promoted to the build. We apply the BUILD tag to every file changed by this work package. The revision of every affected file is noted in the work package documentation. Note that there may be some lag between the final changes being checked in and the promote to build, and later revisions of the file may exist already.

To easily identify the latest changes to a file for a work package we use work package specific tags in addition to the BUILD tag (e.g. 'SC034-042'). We also ensure the first line of the comments when checking in changes contains nothing but the work package name. This allows us to not only see why each change was made, but to automate some of the reports (e.g. "what new work packages are in build X").
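
A sketch of what that looks like at the command line (file names invented):

    # check in; the first line of the log message is just the work package name
    cvs commit -m "SC034-042" Order.java

    # tag the final revisions belonging to this work package
    cvs tag SC034-042 Order.java Customer.java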

Once the BUILD tag has been applied / moved each team member re-applies the "BUILD" sticky tag, marking the end of the life of the feature team area.

Note that we don't use a branch. Our view of the world, for this feature team, includes the latest revision of any file that's part of this work package and the BUILD revision of every other file. This is our "starting point" if you like, and that's how we have a set of code to work on.

Your observation about "refine the model" is spot on - we start with the overall model from process one and continually add content to it one work package at a time. There is only one model, one codebase, one line of development.

HTH, Paul.

Hi, Another Question

What do you mean by refining the model?

Does it include refactoring the domain classes? The reason I ask is that this task may change the shape of the domain model substantially (associations between objects) - a domain model that the "client" had already validated and accepted as correct.

In other words, refactoring domain classes may change the model shape. How do you solve this?

Nuno Lopes

szego's picture

Iterative vs. Incremental

We don't refactor.

In process one the deliverable is what's sometimes called a "shape model". The focus is on getting the shape correct: what are the classes, and how are they associated. We sometimes get a few key attributes and methods, but that's secondary at this stage.

When we're doing DBF+BBF iterations we're mostly adding methods and attributes to the existing shape. It's a very incremental process, which is different from iterative. The goal is to add new code, not to change existing code.

So "refining the model" means mostly adding new methods and attributes. We don't refactor - the shape should already be correct. There are of course small "tweaks" made, but these are most often in the form of derived associations or low-level implementation details. They should have only minor impact on the shape.

This is such a strong concept that if we do find that shape changes are made during a DBF+BBF iteration then we see this as a trigger to indicate that something might be very wrong! In such cases we will investigate exactly what's happening, as often it means that we got something wrong back in process one.

This rarely happens, as the involvement of the domain experts and the techniques used in process one (e.g. modelling in teams) gives us an incredibly good picture early on. Combined with the fact that we model the domain, we very rarely need to go back and revisit the model shape once we've started.

Need to clarify the semantics of "the model"

Hello everyone. It might be interesting to clarify the exact semantics of "the model", which has been used many times throughout this discussion. From this thread I've learned that the "overall model" created in process 1 is pretty important, and that alarm bells should go off when a change to this "business model" or "domain model" is needed. Everyone seems to agree that there is only one "domain model". I suppose this overall domain model is what people in this thread mean by "the model"?

What about the other classes, however, which we could for now call "implementation classes"? These are all the classes necessary to realize a multi-tier application, including a presentation tier, a business tier, and a data management tier. I'm supposing that the overall domain model classes are not included in this definition of "implementation classes".

When doing DBF, I suppose adding implementation classes is allowed and is not considered an alarm-bell-ringing change to "the model"?

Suppose we have implemented many features, documented by sequence diagrams. I suppose these sequence diagrams also contain interactions among the implementation classes, and not only among the domain classes? Now suppose we want to introduce some design patterns to decouple the presentation tier from the business tier. By doing this, some of the existing design packages - including the sequence diagrams - might no longer be valid. What do you do? Update all sequence diagrams? What about the published Work Package designs and documents?

Rudy

Need to clarify the semantics of "the model"

I'll try to answer from what I've learned on this site, from books and from some experience, but I'm not the best one to provide you with a complete explanation.

1) There is only one model (the model).
2) Several pictures can be taken from the model; one of them is the domain model. But there is also the implementation picture, etc.
3) One starts modeling by "drawing" a picture of the business scoped by the problem to solve or the processes to automate. So we create the domain model. A picture is all about shape and colours (phase 1).
4) The domain model shapes the core of the business tier of an application.

> I suppose these sequence diagrams also contain interactions among the implementation classes, and not only among the domain classes?

All classes are implementation classes.

Sequence diagrams are used to establish how objects collaborate with each other to enforce business rules. Business rules change less often than, for example, UI rules (UI interactions) or persistence rules (database access). Changes to business rules mainly affect collaborations, and therefore sequence diagrams (unless new business objects need to be represented). Changes in collaboration do not change the picture of the domain model.

If you use MVC to implement the UI, or another approach that enforces the creation of a layer between the Business Tier and the Interface Tier, then your Business Tier is "safe".

The sequence diagrams of the business tier only become invalid if the business rules change. Changes to the UI should not require changes in the business tier; the domain model should not be tightly coupled to the UI, a coupling that is prevented in phase 1.
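
A minimal sketch of such a layer in Java (all names invented):

    import java.util.List;

    // The contract the business tier offers to the presentation tier.
    interface OrderService {
        String placeOrder(String customerId, List<String> skus); // returns an order id
    }

    // The UI depends only on the interface, never on the domain classes
    // directly, so reworking the UI cannot disturb the business tier.
    class OrderController {
        private final OrderService service;
        OrderController(OrderService service) { this.service = service; }
        String submit(String customerId, List<String> skus) {
            return service.placeOrder(customerId, skus);
        }
    }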

Nuno Lopes

What to do when sequence diagrams become invalid

Thank you for your reply! I'm still wondering about the evolution of the design and the implementation. Suppose some former design packages, including design notes and sequence diagrams, become invalid. These design packages were promoted to the build successfully in the past, but they no longer match the latest changes to the application.

What to do? Does FDD allow these designs to be a representation of what the system looked like in the past, even if these designs are no longer valid? This would also mean that we currently don't have a design document for how some features are implemented; only the code "documents" their design. It also means that "the model" currently contains sequence diagrams which are invalid!

Or does FDD force us to update all former design packages and/or sequence diagrams, which might be a nightmare to complete?

Rudy

What to do when sequence diagrams become invalid?

Hi,

"What to do? Does FDD allow these designs to be a representation of what the system looked like in the past, even if these designs are no longer valid?"

This is the kind of facility that the Revision Control of your Software Management & Control tool should provide (CVS, or whatever). It has little to do with FDD, IMHO (anyone, correct me please).

Having said this, one should establish a build policy that includes a schedule (build once every day, once a week, every two days, etc.). The build plan can/should include updating the revision number of every single file (document, code, sequence diagram, etc.) that belongs to the project (e.g. revision code: software version.software release.interim-revision.build-number).

This means that any updates are recorded and can be recalled. If you want to see how the system looked in the past (including the documentation), just recall a previous revision or build.

"It also means that "the model" currently contains sequence diagrams which are invalid!"

No, only valid content should have its file revisions updated upon a build. If you get the latest build then you should not get invalid sequence diagrams (as they are stored in files).

"Or does FDD force us to update all former design packages and/or sequence diagrams, which might be a nightmare to complete?"

Nope, a lightweight change control policy does that for you easily.

For instance:

* One should not create a new sequence diagram to replace a previously effective (valid) one, but create a new revision of it.

Some SCSs allow you to establish that on every checkout the interim revision number is updated automatically. If a file is checked out but no changes are made, there should be an option in the SCS that allows one to undo the checkout (so no revision is incremented), thus keeping the system updated and "correct".

Hope it helps,

Best regards
Nuno Lopes

What to do when sequence diagrams become invalid?

Ok. So all sequence diagrams affected by some design change must be updated and a new revision must be committed.

Rudy

What to do when sequence diagrams become invalid?

Yes, but this is not an "artificial" task done just to keep code and diagrams in sync.

Usually sequence diagrams are affected due to some change in business rules (and even then the sequence diagrams often don't change, as they are mostly concerned with the sequence of messages). Changes to business rules require you to re-evaluate certain features.
Re-evaluating certain features requires you to create a new Work Package. The whole DBF/BBF cycle is then applied to the features affected by the business rule change (or whatever it is). DBF requires one to re-evaluate the existing sequence diagrams (create a new revision of them, change them). After all the work is done, the "changes" are promoted to the build.

Nuno Lopes
PS: If you don't evaluate the impact of change then you can't predict its outcome. "Blind" change is the source of most bugs, IMHO.

What to do when sequence diagrams become invalid?

I personally don't describe everything, down to every message call, in the sequence diagrams of the Business Tier. I tend to leave out the following:

* Messages to the Persistence Layer;
* Any IO messages;
* Any UI control messages (new dialogs, etc.);
* Remoting.

That is, I tend to consider any layer below the current layer a System Layer. This is because I'm assured that refactoring the layer above or below does not have an impact on the sequencing of messages (the collaboration between objects within the layer concerned) of the current layer.

How can I be assured that there will be no impact across tiers? Unit testing centered on each tier. That is, unit tests should be written not only for objects but also for the contracts between tiers.

If I refactor and the unit tests pass, then messages across tiers should be "safe". If they don't pass, there can be a problem with the current unit tests, or with the code modified in the refactoring, or the object interfaces (services across tiers/contracts/facades) were actually changed without notice to the Chief Architect.
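
A minimal sketch of such a contract test in JUnit (all names invented; the trivial implementation stands in for a real business tier):

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class BusinessTierContractTest {

        // The contract the business tier offers to the tier above it.
        interface CustomerFacade {
            boolean register(String name);
        }

        // Stand-in implementation; a real one would delegate to domain classes.
        static class SimpleCustomerFacade implements CustomerFacade {
            public boolean register(String name) {
                return name != null && name.length() > 0;
            }
        }

        // If a refactoring below the facade breaks the contract, this fails.
        @Test
        public void registrationContractHolds() {
            CustomerFacade facade = new SimpleCustomerFacade();
            assertTrue(facade.register("ACME Corp"));
        }
    }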

Nuno

Revision Control of FDD artifacts

Nuno, it would be great if you - or anyone else - could describe a typical directory layout. Where does the object model go, and where do the work packages, including sequence diagrams, etc., go?

It would indeed be nice if an article were published on this site about Revision Control of FDD artifacts and related issues.

Rudy

Jeff De Luca's picture

No, all completed sequence diagrams are not updated (I think)

I'm not sure I'm following this part of the thread correctly... but here goes anyway.

Here is what I think you are saying: A sequence diagram is the design for some feature (feature A). Feature A is completed (promoted to build). Subsequently, another feature invalidates the sequence diagram for feature A.

Well, that means that Feature A itself is invalidated. This then is the rework or defect or bug fix scenario which has been described at least twice elsewhere on this site.

You would not go back and redraw the sequence diagram for feature A - that wouldn't make sense. You are simply now doing DBF-BBF for a new feature (or features) representing the defect or fix.

Perhaps I'm not following what you mean?

Jeff

Jeff De Luca's picture

This thread has spawned another

This (great) thread has spawned another thread.