I've been pretty amazed (yep... an understatement) at how some vendor-executed (fabricated is probably a better description) tool comparisons happen to end up in tool evaluation reports, copy/pasted straight from the vendor's material.
Food for thought: Vendor-executed tool comparisons are always subjective
Don't use a vendor-provided tool comparison in your evaluation report. The fact that the vendor-executed comparison shows that vendor's tool scoring best in most categories (some vendors go so far as to score best in all... ghehe) should ring a bell: something fishy is probably going on.
It's easy to twist a tool comparison to your own advantage:
a) you lie (it's called marketing)
b) you base the competition's data on some old, no-longer-supported product version that no one really remembers
c) you take only those categories (and give them complicated names) that you score well in, and lie about the competition
It's not really the vendor's fault. It is, however, embarrassing to see a tool comparison copy/pasted literally from a white paper or website end up in a tool evaluation report. Sure, it saves time writing the report, but the potential for losing a year or two of extra work due to a wrong choice, or to laziness in making the report, is very real.
In a bid to help domain-specific modeling tool evaluators, we added a MetaEdit+ feature list to our website. We did not feel our opinion about a competing tool should matter to you, so we left the "other tool" column empty for you to fill out. Nor did we feel qualified to rank our tool's features as "best", "average", etc., so we kept it to simply "supported" or not. Should you feel we missed some categories, we will be glad to add them.