DNC and the Moment-Interval Archetype

Hi,

This post stems from a pet project of mine: the development of an integrated suite of tools for managing projects following FDD.

I've been trying to create a static model using the DNC and its rules, but I'm finding it hard, if not impossible, to follow them. In my tools I want PMs, CPs and Customers to be able to assess the amount of work that has been done, including the level of completeness (Percentage Complete) and what has been completed.

To do this I've defined each stage of FDD as a class respecting the moment-interval archetype, plus another one called Project. As I see them, all of these are "moment intervals", in the sense that each of them has a Start Date, Completion Date and Status (planned and actual).
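As a minimal sketch of what such a stage class might look like (the names and fields here are my own illustration of the description above, not taken from any of the cited papers):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MomentInterval:
    """Minimal shape of a pink moment-interval as described above:
    planned vs. actual start/completion dates plus a status."""
    name: str
    planned_start: Optional[date] = None
    planned_end: Optional[date] = None
    actual_start: Optional[date] = None
    actual_end: Optional[date] = None
    status: str = "planned"  # e.g. planned / in-progress / complete

    def is_complete(self) -> bool:
        # a stage counts as complete once it is closed with an actual end date
        return self.status == "complete" and self.actual_end is not None
```

Each FDD stage (and the Project itself) would then be an instance, or subclass, of this shape.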

My problem is that Project completeness is defined by the FDD stages, which are moment intervals themselves (meaning that for a project to be complete and closed, each stage must be completed and closed). I believe this in turn violates the rules of the DNC, in the sense that mi-details are not defined as moment intervals.

So the question is: if I'm violating what seem to be the MI rules, does that mean I do not have a good model?

I've read an article on www.uidesign.com that arrives at more or less the same conclusion as I have, namely that the DNC probably should allow mi-details to be moment-intervals themselves; but then again, maybe I'm defining an erroneous model. Even in that article the conclusion is debatable.

While modelling FDD, I'm coming up with a lot of MIs. For instance, I'm considering a WorkPackage as a moment-interval because it has a creation date and a status. Also, its life depends on the assessment of completion of a feature under development (a Role of a Feature when participating in the DBF and BBF stages of FDD). You can see that project completeness and work packaging are related, but only because their completeness has part of the FDD stages in common.

Hope some expert can shed some light on this dilemma.

Thanks in advance,

Nuno Lopes


I think I got it!

Probably my conclusion will help others too.

After thinking about how MIs relate to each other, I believe that the DNC as it is defined is enough after all. That is, there is no need for mi-details to be moment-intervals themselves.

My reasoning "glitch" was in considering that subsequent moment-intervals (next-mi) were events (or whatever qualifies as a moment interval) happening only after preceding moment intervals (prev-mi). If I consider that a next-mi can occur during the observed moment interval, then everything fits together correctly.

So in my previous example I was considering that each Stage of an FDD process needed to be a detail of an all-encompassing moment-interval class, the Project. Furthermore, each Stage of an FDD process was a moment-interval, hence the apparent need for mi-details to be moment-intervals too. This was because I thought they could not be considered next-mis, since they were "happening" while a project was in course, not after it. But examples provided by "Java Modeling in Colour" suggest otherwise.

So using the DNC, I now have "Domain Modelling" (a moment-interval) as a next-mi of Project (a moment-interval), and so on as described by FDD.
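A toy sketch of that chain, with Project completeness derived from the stage MIs that follow it (class and method names are my own illustration):

```python
class MI:
    """A moment-interval whose subsequent MIs (next-mi) may start
    while this one is still in progress."""
    def __init__(self, name):
        self.name = name
        self.next = []        # subsequent moment-intervals
        self.complete = False

    def then(self, other):
        """Link a next-mi and return it, so chains read naturally."""
        self.next.append(other)
        return other

    def percent_complete(self):
        """A Project's completeness derives from its chain of stage MIs,
        not from mi-details."""
        stages = []
        def walk(mi):
            stages.append(mi)
            for n in mi.next:
                walk(n)
        for n in self.next:
            walk(n)
        if not stages:
            return 100.0 if self.complete else 0.0
        return 100.0 * sum(s.complete for s in stages) / len(stages)

project = MI("Project")
odm = project.then(MI("Develop Overall Model"))
bfl = odm.then(MI("Build Feature List"))
odm.complete = True  # one of two stages done -> project is 50% complete
```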

This actually has a deep impact on how I perceive the relationships between moment-intervals.

For instance, I now believe that the article "Observations on the DNC" (http://www.uidesign.net/2000/papers/ObservationsOnDNC.html) actually "forces" a new formulation of the DNC due to more or less the same reasoning "glitch" that I had. There is no need for a DNC re-formulation as far as I can now see.

Whether an aggregation is needed or not is, IMHO, irrelevant to the overall DNC balance.

So in the example from the article, a Presentation should be a next-mi of a Conference, and not an mi-detail. That is, a Presentation does not qualify as an mi-detail according to the DNC pattern; although there is an aggregation there, its need is at least debatable.

Best regards,

Nuno Lopes

Jeff De Luca's picture

Yes, the next mi can occur during the previous mi (interval)

If I consider that a next-mi can occur during the observed moment interval then everything fits together correctly.

Correct!

Jeff

Observations on the DNC

Hi Nuno,

I notice that you read the paper I published along with Pawel Pietrusinksi, a couple of years ago about the DNC

Observations on the DNC

which addresses the issues you raised in the initial post on this thread.

However, your observation that it may not be necessary may be true in this case. I know that we have built an FDD tool where I currently work, and we didn't run into the modeling issues you describe. Where I was going with the "Observations" article was the notion that the Archetypes and the DNC could be used in a variant of the UML MOF for auto-verification of domain models (L1 models). This is seen by some as an academic exercise, but it would have a practical use in business process modeling tools which try to dynamically generate service-oriented architecture code.

David J. Anderson
author of "Agile Management for Software Engineering"
http://www.agilemanagement.net/

szego's picture

Don't forget what an Archetype is all about.

I think the revised model you present is simply wrong, but more than that you also seem to forget what an archetype is all about: it's a shape from which things more or less follow. The DNC, and later the archetypal domain shape (ADS), show how things fit together more often than not.

The fact that the diagram shown in figure 1 doesn't exhibit a shape identical to something on the ADS doesn't mean it's broken. It stands up very well - there's one association, in the place corner, that needs a tweak. So? You were able to arrive at this model very quickly and very easily using the ADS. And YES: it's normal that some things don't fit exactly into the archetypal shape. But you're 99% of the way there, and it's obvious with a little exercising of the grey matter how the rest should look.

As for figure 2 in your paper, the shape simply makes no sense at all. It neither follows on from figure 1, as you assert, nor does it make any sense in a generic fashion. I struggle to even contrive requirements that would justify the shape you've shown. Figure 3 looks close to the mark, but there are still a few details that look dubious.

Figure 4, the simplified model, simply has no value. It doesn't appear justified by the disjoint examples provided, nor can I say it's a shape that appears in practice more often than the existing archetypal model. I'm sure there might be cases where it's suitable, but as a generic archetypal shape I see no justification for it. You say as much yourself - that you've attempted to take one example model and reverse a more generic shape out of it. Perhaps we should wait for at least those other two cases you mention to appear before we consider it as an alternative to the shape Peter distilled from his 20 years or so of modelling.

P.S. if anyone wants to see the tweaks we've made to the ADS based on experiences on real projects over the past 5 years, let me know and I'll get them posted up here.

Continued research and debate is good

Paul wrote: P.S. if anyone wants to see the tweaks we've made to the ADS based on experiences on real projects over the past 5 years, let me know and I'll get them posted up here.

Yes, this would be good - please do.

Continued research and debate on this topic is good. There is a lot more to be gained from furthering the work that we all started with Peter 5 years ago. We have all learned the strength of this.

I happen to agree with your sentiment that the DNC adheres to the Pareto Principle: it will get you 80% of the way there for 20% of the effort. It is also true that for practical application in the field, I would preach your approach - "don't get too anal about the precise shape - use it to get you started on the correct path".

However, you are clearly unaware of academic research in the object field and the academic definitions of "strong" versus "weak" meta-modeling. Archetypes fall between the two stools. They aren't "weak" - mere «stereotypes» - but neither are they strong.

You have to take a long, hard, dispassionate, external-observer look at the language you have just used, and the language Jeff and Pete and Steve have used about Archetypes, and then recall what we all preach about "repeatability". To the disconnected observer, Archetypes don't sound very repeatable - in fact they sound downright vague.

The concept of a "strong" L2 entity is one which can be used to rigorously define an L1 entity - a class. There is an implied inheritance from an L2 in an L1 entity. When we color classes to give them an archetype, we are implying some form of inherited properties, but I have yet to see anyone define these in an intelligible fashion.

An L3 entity could be a pattern of L2s which make sense in a given domain, context or application. This notion is very powerful. It could be particularly valuable given the move towards SOA and BPM. Finding better ways of doing this could lead to much more powerful tools.

Granted this is academic in nature and not everyone is interested in that. For some their interests will lie in pragmatic application of the DNC (ADS?) using the Pareto Principle rule that 80% is a good start. For others it may be interesting to push the boundaries and see how far it can be taken.

With specific reference to the paper, it is almost 3 years old and I haven't spent any more time thinking about this since then. However, I think that the DNC represented the start of something truly useful and I'd like to see more work done on it.

David

--

David J. Anderson

author of "Agile Management for Software Engineering"

http://www.agilemanagement.net/

Footnote

I should have added that ...

3 years on since the "Observations" paper I still use and teach the DNC in its original form from the JMCU book. To Paul's point - let's wait until there are more examples before making a change.

However, there are good reasons why we need to debate the DNC and tighten up its definition. Looseness and "[80%] of the way there" are not things much liked by geeks. Paul Glen explains the geek psyche in his 2003 book, "Leading Geeks". He lists the 13 attributes of the geek psyche: passion for reason; problem-solution mindset; early success; joy of puzzles; curiosity; preference for machines (over people); self-expression == communication; my facts are your facts; judgement is swift and merciless; my work, my art; reverence for smart people; loyalty to technology and profession; always seek fairness and meritocracy.

The current position on the DNC - it's good enough and gets you most of the way there - goes against the geek psyche element of "passion for reason" and invokes the element of "judgement is swift and merciless". Hence, psychologically some geeks find the DNC incomplete and are therefore prone to dismiss it as "useless" or a "waste of time".

Tightening up on how we talk about and teach it, so that it appeals to the extreme geek psyche will be one way of increasing its adoption.

David
--
David J. Anderson
author of "Agile Management for Software Engineering"
http://www.agilemanagement.net/

There is no magic formula

However, there are good reasons why we need to debate the DNC and tighten up its definition

I disagree with this idea completely. The archetypal shape is just that - it's not meant to be something from which you can create a model in cookie-cutter fashion and expect to get the perfect model every time. Synaptic activity is mandatory! Modelling is always an exercise in trade-offs, balancing all the constraints and investigating the possibilities. I don't agree with the sentiment that there will, or should, ever be some perfect "meta model".

This is NOT what the ADS is about. Every single attribute, method, association and cardinality in that model has had countless hours of scrutiny, over many years, by many people, from use on real projects! It's certainly not a case of "good enough" - everything in there is very precise.

You can see similar lines of thinking within the patterns community, which I think got off the rails very early on. From my understanding it goes against the intent of Alexander's work, on which the original concept was based. From Alexander himself:

"Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice."

Even in the intro to the GoF book, they use the analogy of plots for novelists or playwrights, citing patterns such as "Tragically Flawed Hero" etc. I think these are more illustrative of the kind of thinking that should be going on here - focussing on the shape of the solution, and not getting too uptight about nailing it down to the n-th degree, as IT people seem to want to do.

I suspect that part of this problem lies in the way we describe patterns. We present "a solution", which is meant to illustrate a way to solve the stated problem. I think the success of the GoF book, which dealt only with fairly low-level design and implementation patterns, obscured the bigger picture for a lot of people. Because the solutions from that book could almost be cut-and-pasted into code (as many tools do for you today), it skewed thinking about patterns towards the "cookie cutter" view of their application. I don't think this was the book's intent.

I don't think "tightening up" the ADS is what's needed. Perhaps a catalog of examples that illustrate how it might apply in given scenarios would be more useful. Possibly through some worked examples we might provide the missing link for those that insist on seeing something more concrete (i.e. code). As long as they're carefully done to show alternatives suggested by the ADS, and not interpreted as "the solution". I do agree that there's almost nothing out there in the way of instruction on how to apply the archetypal shape. Peter Coad's writing has always set the bar very high, and was rarely accessible to a novice. And it's not obvious just from looking at it. From experiences in teaching its use, the best way has always been by example.

An interesting note: Peter Coad back in 1992 was one of the first to note the link between Alexandrian patterns and software architecture in a CACM article!

ADS Updates

Would love to see the ADS tweaks based on your experiences. Please post if you are able...

Thanks
Greg

szego's picture

Modelling MI's

Sounds like you got it all sorted out, but it's an interesting modelling question in general. MI's are probably the hardest part of the shape to grapple with, as there's usually more than one way to look at things.

You've already got the major breakthrough - the previous/next MI shape vs. the MI/MI-detail shape. Often we also consider whether to model something as one interval-like MI class, or as two moment-like MI classes (representing the start and end of the interval). In your case the moment makes more sense (the big clue is usually found in how you express the concept in spoken language when you talk about it), but often just considering these two alternative shapes will also highlight the other dimension of the problem that you've encountered: the previous/next vs. MI/MI-detail alternatives.

In practice I've found that the MI-detail generally only appears in a couple of specific cases. The first is the typical transaction / line-item case, e.g. Order/OrderItem or similar. The other is the more traditional sense of aggregation: the MI-detail really is obviously a piece of the MI. Unless one of these two cases really jumps out at you and smacks you in the face as being obvious, then generally the previous/next MI shape is more applicable.
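The transaction / line-item case mentioned here can be sketched in a few lines (a hypothetical minimal example; the class and method names follow the archetypal `addDetail`/`calc` naming loosely):

```python
class OrderItem:
    """MI-detail: a line item with no life of its own outside its Order."""
    def __init__(self, description, qty, unit_price):
        self.description = description
        self.qty = qty
        self.unit_price = unit_price

class Order:
    """The pink moment-interval; it owns and aggregates its detail lines."""
    def __init__(self):
        self.items = []

    def add_detail(self, description, qty, unit_price):
        self.items.append(OrderItem(description, qty, unit_price))

    def calc_total(self):
        # the MI derives its value from its details
        return sum(i.qty * i.unit_price for i in self.items)
```

When the Order goes away, its OrderItems go with it - which is exactly the "obvious piece of the MI" test described above.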

Aggregation is the key

Nuno, I agree with Paul entirely on this. It's good advice on DNC modeling which needs to be captured more formally.

The MI - MI-Detail generally only applies to aggregations. A process of steps tends to be the next-MI, prev-MI example. Paul also highlights a powerful option, the concept of separating out into edges rather than levels, i.e. a Moment at each end of an Interval.

My additional hint as to whether an Interval or two edge Moments are required is to think about what actions significant to the business take place, and precisely when they happen. Is it the state of a business process which is important, or the trigger which changed the state?

As for your specific problem of modeling FDD for a KMS tool, I see that very much as a next-MI, prev-MI thing. If I can get permission, I'll post the object model for our own KMS, which was developed by Jason Marshall - who shows up here occasionally.

In the example you mentioned originally, from the "Observations" paper, the Presentation(s) are definitely an aggregate part of the Conference. The next-MI, prev-MI thing could only be applied there if it was a small conference with a linear series of events. The need for a parallel series of events creates the need for an aggregate. Other models for this problem might be possible.

David
--
David J. Anderson
author of "Agile Management for Software Engineering"
http://www.agilemanagement.net/

Aggregation is Key?

Wow, so much input. Thanks a lot. For me, moment-interval relationships are the most complex to understand - not because they are complex by nature, but simply because most people, like me, are not used to modelling things in this manner. That is, relationships expressing business process dependencies are usually "constraints" left out of the class model and put on Sequence Diagrams.

IMHO, this is usually rooted in the fact that most people I have met (including myself) actually build UML models using data modelling precepts and then attach behaviour (we are used to describing data dependencies, not process dependencies, in a class model). Unfortunately, the moment-interval and the relationships between moment-intervals make up the least explained archetype in the papers I have found, and yet the most semantically powerful.

David wrote:


In the example, you mentioned originally from the "Observations" paper, the Presentation(s) are definitely an aggregate part of the conference

Let me see if I understand (I need help!). I believe there are two distinct semantics of aggregation to consider here. One is the aggregation of MI-Details into a moment-interval. This is an unquestionable pattern. I understand this aggregation within the context of constraining the object life cycle: aggregated objects are destroyed if their "container" (the moment-interval) is destroyed.

As I understand it, the term "aggregation", as used in the Conference model, has richer semantics:

* A Presentation occurs during a Conference

* In this domain, a Presentation occurs inside the scope of a Conference (or is scoped by).

If I start declaring aggregations based on this kind of semantics, then I suspect that potentially all objects are aggregated by some aggregator (container semantics, occurs-during, etc.), including moment intervals (this is actually the subject of the paper).

As far as I understood the DNC, its authors tried to avoid delving into the semantics of aggregation unless it was perfectly obvious (as with the mi-detail). This is fine, because complex constraints can always be explained using UML Notes.

If one states that the relationship between the Conference and the Presentation is an aggregation in the UML model (using the UML aggregation symbol), several problems arise. For instance, shouldn't all subsequent moments of a Presentation also be aggregated, given that the Presentation occurred during the Conference?

I have faced exactly the same issue when modelling the domain for my FDD tools. In my model I have a Project. Everything, including the stages of an FDD process, occurs during a Project. Not only does it occur during the Project - the Project also scopes both its lifecycle and its context.


I had two options:

1) Associate (by aggregation) each FDD stage with the Project and state a constraint like "any two associated FDD Stages occur within the same Project". Now, this raises the issues against the DNC that prompted my post. Furthermore, it is harder to define a normalized data model (3rd normal form) this way.

2) Clearly a Project is a moment-interval, and so are the FDD stages. Now, what is the "best practice" when modelling associations between moment-intervals? As I understand the DNC, it is to leave out aggregations and use Notes. The reason, as I understand it, is that if I put an aggregation on an association between two moment intervals (A->B), it raises complex scoping issues for the subsequent moment-intervals of B in relation to that aggregation (I'd need to describe them in text). To what extent does the aggregation have impact? I'd better stick to Notes.


I have chosen option two:


Project <-> Overall Domain Model <-> Build a Feature List <-> Plan Development <-> ....


The advantages are:

1) It respects the constraint "Any two associated FDD Stages occur within the same Project".

2) By transitivity I can figure out the Project in which any stage occurred, thus complying with the normalization theory of relational modelling and ORM. No need to state a fact "twice" when it can be deduced!
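That deduction by transitivity amounts to a trivial walk back along the prev-mi links (an illustrative sketch with my own names; the head of the chain is the Project):

```python
class Stage:
    """A moment-interval in the chain; only its link to the
    preceding MI is stored - the Project is never stated twice."""
    def __init__(self, name, prev=None):
        self.name = name
        self.prev = prev  # preceding moment-interval, if any

def owning_project(stage):
    """Walk the prev-mi links back to the head of the chain,
    which by construction is the Project."""
    node = stage
    while node.prev is not None:
        node = node.prev
    return node

# the chain from option two: Project <-> Overall Domain Model <-> ...
project = Stage("Project")
odm = Stage("Overall Domain Model", prev=project)
bfl = Stage("Build a Feature List", prev=odm)
```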

David wrote:

The next-MI, prev-MI thing could only be applied there if it was a small conference with a >linear series of events. The need for a parallel series of events creates the need to create an aggregate.

I don't think I understand this. I previously wrote the following, and I thought that everyone concurred:

If I consider that a next-mi can occur during the observed moment interval then everything fits together correctly.

Isn't this the basis for a parallel sequence of events?

Furthermore, if I understand the DNC correctly, although there seems to be only one association between a moment-interval and its subsequent moment-interval, we can actually have many such associations, each with multiple classes of "subsequents".

For instance:

Project -> (1:1) Overall Domain Modelling -> ....


Project -> (1:n) Project Assignment

Both the Overall Domain Modelling and the Project Assignment are moment-intervals that can occur in parallel. I can imagine the impact if we apply this rule recursively to each moment-interval: we can have a massively parallel system.
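Those two parallel subsequent associations can be sketched like this (hypothetical classes, following the 1:1 and 1:n cardinalities above):

```python
class OverallDomainModelling:
    """A 1:1 subsequent moment-interval of the Project."""
    pass

class ProjectAssignment:
    """A 1:n subsequent moment-interval of the Project."""
    def __init__(self, person):
        self.person = person

class Project:
    """One MI with several distinct 'subsequent' associations,
    all of which may run in parallel with the Project itself."""
    def __init__(self):
        self.domain_modelling = OverallDomainModelling()  # 1:1
        self.assignments = []                             # 1:n

    def assign(self, person):
        pa = ProjectAssignment(person)
        self.assignments.append(pa)
        return pa

p = Project()
p.assign("Alice")
p.assign("Bob")  # assignments proceed in parallel with modelling
```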

As you can see, David, I need more input, because I'm confused now regarding your last statement. I know what you are after when you state that aggregation is key:

Where I was going with the "Observations" article was the notion that the Archetypes and the DNC could be used in a variant of the UML MOF for auto-verification of domain models (L1 models).

Probably, but then UML needs to be enriched with more artefacts in order for us to express what we mean by aggregation. IMHO, at the moment I have to use Notes and Sequence Diagrams and stick with the basics (my rule). That is, the artefacts we have for making class diagrams are not enough (even the DNC uses them, but requires imagination outside the scope of the rules of UML to fully understand how it works).


Hope someone can correct my thoughts, because right now as a noob I’m puzzled.


Thanks in advance for any help.


Regards,


Nuno Lopes

PS: Sorry for the long post, but I did not have the time to write a shorter one :)

szego's picture

Applying the ADS

Furthermore, if I understand the DNC correctly, just because there seams to exist only one association between a moment-interval and its Subsequent moment-interval actually we can have many associations, each with multiple classes of “Subsequents”.

You can have as many "subsequents" as you want to! Go crazy!

The ADS is not trying to provide the shape of the overall model - only to point out the typical ways in which archetypes connect to one another. By using it you should more easily be able to arrive at the overall shape, but that's all! So there's really no such thing as "can't". There are a few things that don't make any sense, but not many (e.g. having a yellow without a green is pretty meaningless). Remember what an archetype is all about: it's a more or less thing, i.e. here's a shape we see more often than not. There's really two aspects to this.

The first is the way in which the four archetypes connect together: a blue description to a green party/place/thing to a yellow role to a pink moment-interval. But this is not set in concrete, and there are some good tips on the ADS diagram itself about some of the variations. You might not always have a yellow role class. If you don't need one, then drop it. You may not need a blue description class - no problem! Or you might put a blue description on a yellow, or even a pink. If that's what you need, then do it. You might have blues that are associated with or even aggregate other blues - not uncommon in product catalogs. These are just some common variations - the ones we see most often.

The second aspect is the three "corners" of blue/green/yellow/pink that we often see associated with a moment-interval: the party, place and thing corners. That doesn't mean that all three have to be there. You might have only one, or two, or even none. And within each corner you might have any number of variations on how that corner is laid out. That's fine also. It just depends on the domain that you're modelling, and what makes sense.

Now consider the subsequent / next links on the MI class. Say we have MI classes A -- B -- C so that A is previous to B which is previous to C. Then for each of A, B and C we'd try to apply the ADS. So for A, is there a party corner? A place corner? How about a thing corner? For each corner, which of the blue/green/yellow's apply? Maybe just one or two, perhaps none! Having done this for each of the three MI's there's most likely some overlap here: e.g. chances are that A and B might have something in common, like a yellow role class or maybe two distinct roles back to the same party class. But that's not in the ADS? Fine: it's not trying to address how all the classes fit together. It's just giving us some really good clues about the kinds of things each archetype would typically be associated with, and how those associations typically look.

Consider the infamous convenience store example: a blue description class "ItemDescription" that represents some type of thing that can be sold. In the "sales" component it's associated with a pink MI-detail class "SaleItem". Over in the inventory component it might be associated with a yellow Supplier class via a pink SupplyAgreement class. You can see now that once we merge these component models into one big diagram, things aren't going to look much like the ADS anymore. But that's OK - the ADS has given us some great clues: in the sales component we looked down from the pink Sale MI, realised an MI-detail was appropriate, and considering the 'thing' corner arrived at the blue ItemDescription class. Over in the inventory component a similar thing happened: we had the concept of a Supplier, but the ADS helped us classify this as a yellow Role, and then we considered using a pink SupplyAgreement rather than statically associating suppliers with their products. From there the connection down to the 'thing' corner and ItemDescription was pretty obvious.
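The overlap described here - one blue class serving two components - might look roughly like this (a sketch; the names are taken from the description above, the fields are my own assumptions):

```python
class ItemDescription:
    """Blue: describes a type of thing that can be sold."""
    def __init__(self, sku, name):
        self.sku = sku
        self.name = name

class SaleItem:
    """Pink MI-detail in the sales component."""
    def __init__(self, description, qty):
        self.description = description  # -> blue ItemDescription
        self.qty = qty

class SupplyAgreement:
    """Pink MI in the inventory component."""
    def __init__(self, supplier, description):
        self.supplier = supplier        # yellow Supplier role
        self.description = description  # -> the same blue ItemDescription

# the same blue instance is shared by both components once merged
milk = ItemDescription("001", "Milk")
sale_line = SaleItem(milk, 2)
agreement = SupplyAgreement("DairyCo", milk)
```

The merged diagram no longer looks like the ADS, but each fragment was arrived at by applying it.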

Instead of a blueprint for the overall model shape, consider the ADS a little "template of a building block" from which you'll be able to discover small fragments of the overall model. Don't expect your overall model shape to necessarily look like the ADS itself. And when you find something that doesn't fit into the ADS, don't be too surprised either: you've just hit something on the 'less' side of 'more or less'.

Hope this helps - it's something I normally explain in front of a whiteboard where I can wave my arms around a lot. I too need more time to get this description shortened.

PaulS :)

KMS tool model

David,

Any chance you can post the KMS model you referenced in this post? I'm new to object modeling and it would be great to see some practical examples...

Thanks
Greg

Sorry, I can't

I did go looking for it, but the Chief Programmer responsible for it couldn't find it. It was lost in a system upgrade when something wasn't backed up properly. I can't believe these kinds of things happen in this day and age, but they do.

Naturally, we could reverse engineer it out of the code but we haven't had time. We're not working on the KMS tool any more.

David

David J. Anderson
author of "Agile Management for Software Engineering"
http://www.agilemanagement.net/

heptaman's picture

KMS Model

Hi David!

Regarding the KMS model, I really want to get an idea of what it could look like. It doesn't need to be too comprehensive - just the FDD-related part will help a lot.

Can you provide it for me (us)? :^)

[]s,

Adail

Modelling MI's

Hi Szego,

Thanks for your kind and informative reply. Glad I'm not totally off track. Especially since I don't have anyone to talk to about the DNC in my country.

Szego wrote:


In practice I've found that the MI-detail generally only appears in a couple of specific cases. The first is the typical transaction / line-item case, e.g. Order/OrderItem or similar going on.

I also suspected that but now I'm sure.

Szego wrote:

The other is in the more traditional sense of aggregation: the MI-detail really is obviously a piece of the MI. Unless one of these two cases really jumps out at you and smacks you in the face as being obvious, then generally the previous/next MI shape is more applicable.

Yes, but what if it smacks me in the face on both sides (and I'm not even offering both cheeks to be smacked ;)) - how can I decide? That is, I'm observing a phenomenon that is both a moment interval and is aggregated within another, like a Project delimiting the life span of FDD Stages. In other words, if I destroy the Project, I destroy the FDD Stages (isn't this the most "aggressive" notion of part-of/aggregation?). The same happens with a Conference and its Presentations.

In modelling the FDD stages, let's say I destroy a Domain Overall Modelling phase: shouldn't I destroy all subsequent phases, as they don't make sense now? Isn't there some kind of aggregation? These things are smacking me in the face all the time on this small project.
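(As a sketch, that cascading destruction can be expressed directly over the next-mi links, whichever way the aggregation question on the diagram is settled; class names here are illustrative.)

```python
class Stage:
    """A moment-interval with links to its subsequent stages."""
    def __init__(self, name):
        self.name = name
        self.next = []            # subsequent stages
        self.status = "planned"

    def then(self, other):
        self.next.append(other)
        return other

def cancel(stage):
    """Cancelling a stage invalidates every subsequent stage -
    the lifecycle constraint, whether or not it's drawn as aggregation."""
    stage.status = "cancelled"
    for nxt in stage.next:
        cancel(nxt)

odm = Stage("Develop Overall Model")
dbf = odm.then(Stage("Design By Feature"))
bbf = dbf.then(Stage("Build By Feature"))
cancel(odm)  # the whole downstream chain no longer makes sense
```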

Please read my other post to check if I got out of this well.

Best regards

Nuno Lopes

szego's picture

Tough One

Hi,

Yes but what if it smacks in the face on both sides

Duck!

I'm not surprised that aggregation causes so much grief, given that there is no consensus, even among the OO "gurus", on what exactly it means. They've even dropped the two different types from UML, so that now there's just the one symbol - due to the general confusion and lack of common understanding of what it was all about.

To answer your question, if neither case jumps out at you as being clearly the better solution you simply have to investigate both. We do this quite often, particularly in process one when doing a model merge.

The best advice here: draw both shapes, put them up on the wall (or whiteboard or whatever), stand back and discuss them. Another key: look at behaviour, and not data, when considering the alternatives.

This is one of the (rare) cases where we might run a sequence or two early on, just to explicitly expose the dynamic aspects of the model that aren't as immediately obvious when staring at class diagrams. This is what I do in my head when I'm looking at two different shapes - thinking about how the classes are going to interact with each other.

It appears from your post that you've started thinking this way, but maybe you want to bypass the lifecycle and CRUD-type stuff and consider the "business" methods instead. What kind of behaviour will the business require of the model? Look at the notes from your domain walkthrough.

This is another place where the ADS can help - take a look at the archetypal methods in the relevant classes. With the ADS in one hand, and your domain walkthrough notes in the other, stare at your model on the wall and see if you can come up with domain specific equivalents for some of the archetypal methods. There are often some great clues here.

Peter Coad mentions very briefly how the colour archetypes add an implicit layer of dynamics over the top of a class diagram. Once you get familiar with these archetypal methods, and the way they interact among the archetypes, I've found you can quite easily visualise these interactions on a largely static class diagram. They spring to life: I can see little message-sends wandering along associations and triggering other message-sends.

For the particular model you've presented I can't add much without a lot of speculation. I don't know exactly what features you want to support. And there's also the problem that I've actually modelled this domain A LOT in the past, and the particular features I was trying to address would certainly impact my thinking on the subject.

This is getting long... I'll sign off and add one more point in another comment. Hope this helps some.

PaulS.

Again Thanks!

Thanks for your kind answers and remarks.

I can see what you mean, as I've applied that methodology myself. I don't actually consider deciding whether something is an aggregation or not of much importance as a concept if it's not evident (part-of, container, etc.). I tend to focus on more specific issues, like business rules and collaborations (stating them in text when complex).

Unfortunately "Java Modeling in Color" does not present a clear "model" for collaboration (dynamics). In other words, most archetypal methods are readers (like assess, estimate, calc, find, etc.). Only in the moment-interval archetype do I find mutators/creators like generate, make and addDetail.

For instance, neither the book nor the papers I've read on the subject present object-creation examples, rule-checking examples, etc.
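For what it's worth, the few creators/mutators the moment-interval archetype does list can be sketched quickly. The class and detail names below are mine (taken from the FDD domain in this thread), not the book's:

```java
import java.util.ArrayList;
import java.util.List;

// An mi-detail: here, one feature tracked inside a work package.
class WorkPackageDetail {
    final String featureName;
    WorkPackageDetail(String featureName) { this.featureName = featureName; }
}

// A moment-interval with the two archetypal mutators/creators mentioned
// above: a static "make" factory and an addDetail method.
class WorkPackage {
    private final List<WorkPackageDetail> details = new ArrayList<>();

    // Archetypal "make" creator on the MI class itself.
    static WorkPackage makeWorkPackage() { return new WorkPackage(); }

    // Archetypal addDetail mutator: attach an mi-detail to this MI.
    void addDetail(WorkPackageDetail d) { details.add(d); }

    int detailCount() { return details.size(); }
}
```

This only shows the signatures' shape; it says nothing about who is allowed to call them, which is the collaboration question the book leaves open.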

When explaining dynamics, the book always refers to the initiator as aSender. The question I always ask is: what kind of sender? Is it OK for it to be a Role, a Party, a Place, or a Thing in the dynamics being presented?

For instance:

If I want to get the Roles of a Person (a Party) participating in a moment-interval, should I ask the moment-interval, say:

aMI.getRoles(aPerson); /* several disadvantages */

or

aPerson.getRoles(aMI); /* several disadvantages */

or

or should it be a static method of an MI class, or a Role class, or something similar (like a call to a singleton, aManager)? I don't like this last one.

The same thing goes for creating a role. Should it be created by the moment-interval (mi.assign(aPerson, roleType))? By the Person (aPerson.assign(aMI))? Or should it be created externally by a Sender and provided to the moment-interval (aRole = new Role(aPerson); aMI.addRole(aRole)), etc.?
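To make the alternatives concrete, here is one possible shape: letting the moment-interval coordinate both Role creation and Role lookup, so that a participation rule ("a person is assigned a given role type at most once per MI") lives in one place. This is a sketch under my own assumptions, not the book's prescription, and all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

class Person {
    final String name;
    Person(String name) { this.name = name; }
}

class Role {
    final Person person;
    final String type;
    Role(Person person, String type) { this.person = person; this.type = type; }
}

class MomentInterval {
    private final List<Role> roles = new ArrayList<>();

    // mi.assign(aPerson, roleType): the MI creates and owns the Role,
    // enforcing its own participation rule before adding the detail.
    Role assign(Person p, String roleType) {
        for (Role r : roles) {
            if (r.person == p && r.type.equals(roleType)) {
                return r; // already assigned: the business rule is enforced here
            }
        }
        Role role = new Role(p, roleType);
        roles.add(role);
        return role;
    }

    // aMI.getRoles(aPerson): the query from the MI side.
    List<Role> getRoles(Person p) {
        List<Role> result = new ArrayList<>();
        for (Role r : roles) if (r.person == p) result.add(r);
        return result;
    }
}
```

The trade-off is the one raised above: centralising creation in the MI keeps the rule in one place, at the cost of coupling the MI to Role construction; the Person-side or external-Sender variants move that coupling elsewhere rather than removing it.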

Although these can be argued to be implementation issues, at a higher level the question is how objects should collaborate to enforce business rules.

I have already ordered "Streamlined Object Modeling" to get more info about this.

Nuno Lopes
PS: Thanks for all the help anyway. I guess this is DBF/BBF stuff.

Whole-Part Relationships

Nuno,

I think you would greatly enjoy the Streamlined Object Modeling book I mentioned in another thread. In it, Peter Coad's former co-authors, Jill Nicola and Mark Mayfield, examine in great detail aggregation relationships (they call them whole-part), including those associated with <<transactions>> (a.k.a. <<moment-intervals>>). I believe the thinking in this really excellent book will help you work through the issues Paul points out above.

Regards,

David

--

David J. Anderson

author of "Agile Management for Software Engineering"

http://www.agilemanagement.net/

Hi

Hi David,

The best OO advice I've been given in almost 6 years.

Thanks,

Nuno