The Forrester Wave™: In-Memory Databases, 2017 Q1 – Caveat Lector

The Good, the Bad, and the Ugly…of Analyst Reports

With each new year and the corresponding publication of new analyst reports, I’m reminded of the critical importance of fully understanding what you are reading and of thinking for yourself. The new Forrester Wave for In-Memory Databases (Q1 2017) has just been released (you can get a free copy here). It can be a useful additional data point when deciding which in-memory database products to evaluate for your own particular needs, but it should not be the only data point you use.

This cycle of the Wave was somewhat frustrating for me, and I wasn’t entirely pleased with the results. I spent some time on the phone with the Wave team from Forrester trying to explain some of my disagreements with their conclusions, but that had no effect on the final report. VoltDB was judged to be a “Strong Performer” by Forrester, but I believe we should have been scored higher. I know — that is what every vendor other than the top scorer will be saying about this report — but I will elaborate on why we deserved a higher score, and also on why I said thinking for yourself is important.

The report is based on a set of attribute scores, which are then weighted by a relative importance number. It is important to look not only at the attribute score but also at the weighting Forrester assigns to the attribute, as that impacts the vendor’s final score.

In looking at some of the weightings, I noticed some that didn’t make much sense to me. For example, in the final report, “Professional services” was weighted at 30% while “Support” was weighted at 25%. That means within this report and scoring, a vendor offering lots of professional services outscored a vendor that offered superior support. In my experience, having a product remain up and running in a reliable manner is extremely important – and if it happens to fail or have issues, customers really want the vendor to be able to help resolve the problem quickly and easily through customer support. They don’t want to hear that the vendor has a large staff of consultants they’d happily put on the clock to assist the customer in getting back up and running. And what if one product is significantly simpler to deploy than another and doesn’t need significant (or any) professional services? VoltDB’s presales and support teams help get our customers into production and keep them in production, without the need for paid professional services. So which do you think is more important, Support or Professional Services?

If you read “The Forrester Wave Methodology”, you will see:

“We set default weightings to reflect our analysis of the needs of large user companies — and/or other scenarios as outlined in the Forrester Wave document — and then score the vendors based on a clearly defined scale. These default weightings are intended only as a starting point, as we encourage readers to adapt the weightings to fit their individual needs through the Excel-based tool.”

I, too, would encourage readers to use their own weightings.
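To see why re-weighting matters, here is a minimal sketch of how a Wave-style weighted score works. The vendors, attribute scores, and weights below are all made up for illustration (the real report uses many more attributes, and the spreadsheet tool handles nested category weights); the point is simply that the same underlying scores can produce opposite rankings under different weightings.

```python
# Hypothetical illustration of weighted scoring: two made-up vendors,
# two attributes, and two different weighting schemes. All numbers are
# invented for this example, not taken from the Forrester report.

def weighted_score(scores, weights):
    """Weighted average of attribute scores; weights sum to 1.0."""
    return sum(scores[attr] * w for attr, w in weights.items())

# Attribute scores on a 0-5 scale.
vendor_a = {"professional_services": 5.0, "support": 2.0}
vendor_b = {"professional_services": 2.0, "support": 5.0}

# A default-style weighting that favors professional services.
default_w = {"professional_services": 0.55, "support": 0.45}
# A buyer who cares more about support than about consultants.
my_w = {"professional_services": 0.30, "support": 0.70}

print(weighted_score(vendor_a, default_w))  # ~3.65: A ranks first
print(weighted_score(vendor_b, default_w))  # ~3.35
print(weighted_score(vendor_a, my_w))       # ~2.90: ranking flips
print(weighted_score(vendor_b, my_w))       # ~4.10: B ranks first
```

Under the default weighting, vendor A wins; under the support-heavy weighting, vendor B wins — which is exactly why plugging your own priorities into the Excel tool can change the picture.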

The meaning of the scores themselves also needs to be scrutinized. One I took particular issue with was Scalability. The scoring of Scalability was based exclusively on the largest number of nodes being used by any of the vendor’s customers. If you had a customer running a cluster of 100 nodes, you scored really well. If your “largest” customer was running only 7 servers, you scored poorly.

This is the part I don’t get: even if both of those customers were supporting the exact same use case, the vendor requiring 100 nodes outscored the vendor requiring only 7! All of our customers want to reduce server counts wherever and whenever possible. If they have a choice between running 100 servers and running 7, they will take 7 without any hesitation. We have had customers move off other database products, reducing their node count from 100 down to 7. VoltDB was designed to make much more efficient use of hardware, so those customers were able to get better performance from our 7 nodes than from their previous 100-node configuration of another database.

This all points to the fact that you need to look closely at analyst reports and try to interpret based on your own requirements. If you are in need of a new fast data management system, download the new Wave and read it — it provides some valuable information for comparing vendors and solutions — but don’t rely solely on the Wave (or any other analyst report) to decide which product is best for you.

Get your complimentary copy of the Forrester Wave: In-Memory Databases, 2017 Q1 here.

  • Peter Booth

    That’s a great point about scalability. I once had the unusual experience of being interviewed for essentially the same job at two different companies. Both firms were equity option market makers. Firm A was a well-known investment bank that disappeared during the financial crisis. Firm B was a private company that did one thing: equity option market making. Both shops had a five-person team building their trading system. The interviewers from both teams seemed very smart. Both systems were two-year-old home-grown Java apps running on Red Hat Linux on dual-socket Xeon Dell servers. They couldn’t have been more similar. But …

    Firm B had 5x the market share of Firm A. Firm A was very proud of its cluster of 82 Linux servers, which sounded huge to me. So two weeks later I’m being interviewed at Firm B and I ask,
    “So how many production servers do you have?”
    “Well…” he says sheepishly.
    “We have three servers, but we haven’t spent time tuning, so I think we will need to deploy a fourth pretty soon.”

    I nearly fell out of my chair. It wouldn’t have been OK to say anything, but I was taken aback by the idea that the shop with the smaller workload had 25x as many servers. That was when I realized that most of our industry has no clue about scalability.