Thursday, December 28, 2006

The cost of building DSL / DSM tools

Most of the costs of developing software are attributed to the maintenance phase (often I hear 80%).

I never heard anyone disagree with that.

It makes sense to get your design right from the start, as mistakes made in the design phase are normally the most costly ones to fix later on. A good initial design, though, does not eliminate or even necessarily reduce the cost that goes into the maintenance phase.


Often I hear the question of how software development with DSLs relates to this wisdom. It's a good question with a simple answer, but there is a better question.

The simple answer first: DSM languages (or DSLs) aim at development at a more abstract level. The maintenance work, therefore, is also done at that higher level of abstraction, making it more effective and thus less costly: a change in requirements is much easier to implement. When it comes to getting the initial design right, DSM languages provide better support than general-purpose modeling or programming languages: the wisdom of the expert and the rules imposed by the underlying architecture are (or should be) encapsulated in the modeling language and prevent silly designs. This does not mean that the developer can be a brainless chimp; in fact, the more knowledgeable he or she is about the problem domain the better (just as when not using DSLs), but the support he or she gets from the language in getting the design right is simply more prominent.
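To make that concrete, here is a minimal sketch of what encapsulating expert rules in the language means. It is plain Python with hypothetical names (it is not MetaEdit+, Microsoft's DSL Tools, or any real tool's API); the point is only that the design is checked against the domain rules before anything gets generated.

    # Hypothetical sketch: a tiny "modeling language" for a voice-menu
    # domain that encapsulates two expert rules and rejects silly designs.

    class Menu:
        def __init__(self, name, options, has_back_option):
            self.name = name
            self.options = options
            self.has_back_option = has_back_option

    def check_model(menus):
        """Collect rule violations instead of letting a broken design
        slip through to code generation."""
        errors = []
        for menu in menus:
            if not menu.has_back_option:
                errors.append(f"Menu '{menu.name}' offers no way back (architecture rule)")
            if len(menu.options) > 9:
                errors.append(f"Menu '{menu.name}' exceeds the 9 keypad options")
        return errors

    model = [Menu("Main", ["News", "Weather"], has_back_option=False)]
    for error in check_model(model):
        print(error)  # caught at design time, not in the maintenance phase

The rules live in the language, not in each developer's head; that is the support I mean.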

The better question, however, focuses on the tools for implementing DSM languages and generators (or MDD if you will):

How does the issue of maintenance and "getting the design right" relate to DSL tools?

Why should building your own model-based code generation tool (or tool-chain) be any different when it comes to the design and maintenance issue? Isn't it likely that 80% of the cost of building your DSL tool (or MDD tool chain) will come down to maintaining it? I dare say that the 80% is a rather mild estimate.

Sure, you can get an MDA tool based on the industry-standard UML. You will draw complicated pictures that are outdated immediately, you won't generate anything useful, and you will have very little tool maintenance. You can claim you do MDD, but you never even have the time to do it, or to evaluate the true MDD alternative, because you're so darn busy all the time maintaining your code. For you, reverse engineering must be a blessing...

With DSL/DSM tools you shift (most of) the maintenance effort to the language (and generators) you are using. Just like your code in software development today, your modeling language(s) and generator(s) will need maintenance: you will not get them right immediately, and even once you do get them right, you will still need to change them later on.
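To illustrate the shift (again a hypothetical Python sketch, not any particular tool's generator language): the generator becomes the artifact you maintain, and a change made there once propagates to everything it generates.

    # Hypothetical sketch: the generator, not the generated code, is what
    # gets maintained. A requirements change (say, logging on every state
    # entry) is one edit to this template plus regeneration, instead of
    # N hand edits scattered over generated files.

    STATES = ["Idle", "Dialing", "Connected"]  # in reality: read from the models

    def generate_state_class(state):
        return (f"class {state}State(BaseState):\n"
                f"    def enter(self):\n"
                f"        log.info('entering {state}')\n")

    for state in STATES:
        print(generate_state_class(state))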

So, are we no step further with DSM? Do we just shift the maintenance work to another, more abstract level? No: with DSM fewer developers do the maintenance work, preferably the smarter ones, and their work helps the less smart ones (who are still smart, just a bit less so) do smarter things faster.

When using DSM/DSL technology correctly, the major chunk of your work over time will be focused on updating your modeling languages and generators. In my opinion it makes sense to choose a DSL/DSM technology that supports this fact: you do not want the majority of your developers waiting a couple of days or weeks until the expert provides them with a new, thoroughly tested version of the modeling language, do you? And what about having your developers manually update all models made with the previous version of your modeling language when a new version comes around? That's a lot of maintenance work, man. You can claim you're doing DSM or DSL, or even that you have a Software Factory, but the factory sucks, and you probably do not have the time to claim it.
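This is exactly where the supporting technology earns its keep: the model update can be automated. A minimal sketch, assuming a hypothetical model format and property names, purely to show the shape of the idea:

    # Hypothetical sketch: language version 2 splits the v1 "timeout"
    # string ("30s") into a value and a unit. One migration script updates
    # every stored model; no developer touches the models by hand.

    def migrate_v1_to_v2(model):
        for element in model["elements"]:
            if "timeout" in element:
                raw = element.pop("timeout")        # e.g. "30s"
                element["timeout_value"] = int(raw[:-1])
                element["timeout_unit"] = raw[-1]
        model["language_version"] = 2
        return model

    old_model = {"language_version": 1,
                 "elements": [{"name": "Dialing", "timeout": "30s"}]}
    print(migrate_v1_to_v2(old_model))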

Microsoft has a pretty good solution to this problem with their DSL Tools: a factory for making factories. I do hope they improve it, though, as I hear it does have some significant drawbacks. It has been ridiculed, but they did think a lot further than the people behind Eclipse GMF/GEF. The latter have not even started thinking about the maintenance problem, making the framework ideal for code-savvy pet-techies who just like to build stuff and care less about doing (or staying in) business.

The risk involved with DSM/DSL technology can be reduced in several areas:
  • Making sure you build a good language and generator
  • Adopting a supporting technology that is forgiving to the mistakes that you WILL make
  • Getting tool and consulting support from an organization that has experience in this area
  • Assigning a tool-evaluating techie who understands that technology is there to support the business, not for technology's sake
  • Understanding the social and organizational changes that DSM/DSL technology will introduce in your organization and getting support for it



Monday, December 18, 2006

Vendors and tool comparisons

I've been pretty amazed (yep... understatement) at how some vendor-executed (fabricated is probably a better description) tool comparisons happen to end up in tool evaluation reports, and even more amazed that they get copy/pasted there in the first place.

Food for thought: Vendor-executed tool comparisons are always subjective

Don't use a vendor-provided tool comparison in your evaluation report. The fact that a vendor-executed tool comparison shows the vendor's own tool scoring best in most categories (some vendors go so far as to score best in all... ghehe) should ring a bell: something fishy is probably going on.

It's easy to twist a tool comparison to your own advantage:
a) you lie (it's called marketing)
b) you base the competition's data on some old, no-longer-supported product version that no one really remembers
c) you pick only those categories (and give them complicated names) that you score well in, and lie about the competition

It's not the vendor's fault, really. It is, however, embarrassing to see a tool comparison literally copy/pasted from a white paper or website end up in a tool evaluation report. Sure, it saves time making the report, but the potential for losing a year or two of extra work due to a wrong choice, or laziness in making the report, is very much there.

In a bid to help domain-specific modeling tool evaluators, we added a MetaEdit+ feature list to our website. We did not feel our opinion about a competing tool should matter to you, hence we left the column "other tool" empty for you to fill out. Nor did we feel qualified to categorize our tool features as "best", "average", etc., so we just kept it to "supported" or not. Should you feel we missed some categories, we will be glad to add them.

Friday, December 15, 2006

More on templates

It was nice to see Microsoft's Gareth J agree with me that providing DSL tool users with ready-made language templates is generally a bad idea. It just leaves me to wonder about Microsoft's DSL tool strategy of providing users with complete, ready-made languages (not stopping at templates but going the whole nine yards: providing organization A with a method that organization B says is probably best for them, without organization B knowing a whole lot about how exactly software is developed in organization A). At least, this is what the media tell me is the strategy Redmond will follow. Of course, the media do not always get it right (most often when editors think they understand a new technology... but then they don't... and still write a story on it), but my guess is that they often do get it right on strategy, especially when interviewing and quoting team members and project leaders.

I wrote my earlier post on language templates with the idea we at MetaCase hold in mind: it is probably best to let the organization that uses the DSL define that DSL themselves, and to make defining tool support for that DSL (in the end, tool support is what we need) convenient.
There are several reasons for this conclusion:
  1. MetaCase employees know a lot about Domain-Specific Modeling tools, but only that; we do not pretend to know how software is or should be developed in the various vertical problem domains
  2. Providing a ready-made language to many (say, more than two) organizations ALWAYS leans toward the one-size-fits-all problem that we see with UML: users want to change it to become more domain-specific, and if you do, the result often suffers from the legacy of its original version: the template problem
  3. Allowing users to define their own language makes them think more, and in different ways, about their problem domain. As a result, they start to understand their problem domain better, which is undeniably a big advantage in itself and can lead to more concrete advantages
Of course, one may argue that MS focuses on horizontal domains instead of the presumably vertical domains that we at MetaCase focus on, but even then I feel that many of my arguments still stand: it is better to let companies define their own DSLs and code generators. That is what makes them really domain-specific, and that is what provides the biggest benefits.