One thing so many people tell you, at small and big companies alike (especially the big ones), is “you can’t measure [XXX].” PR firms always tell you that you can’t measure the value of press. Engineers tell you that you can’t measure development; it’s a soft science. Corporate marketers tell you that you can’t measure the value of brand-focused advertising.
You know what? Try telling that to a VP of Sales, where every salesperson is measured every hour, and every bit of revenue is tracked in real time (hourly, daily, monthly, quarterly, and annually) in Salesforce.
Turns out you can measure everything.
And if you measure everything, you’ll do better, every quarter, like clockwork, it turns out.
Here are some examples of what we did. I know bigger/better/more sophisticated folks have even better ideas, but these worked:
1. How to Measure PR Efficacy. I got this idea from Tien Tzuo, CEO of Zuora and former CSO at Salesforce, and then modified it. We developed an almost Fibonacci-esque scale for all PR hits, from 1 to 15 (1, 2, 3, 5, 8, 15). {I don’t know why it was 15 and not 13 ..} TechCrunch, the WSJ, and the NYT were a 15. A 1 was a blog written by someone decent but with little traffic. Then our VP of Marketing would assign scores for the outlets in between.
Then, we’d take a multiplier. x1 if EchoSign was just prominently mentioned in the piece. x3 if the article was solely about EchoSign.
So all press/PR hits were worth from 1-45.
Then, we’d track our progress in PR. The goal was to increase 20% quarter-over-quarter. Several of the PR firms we worked with quit when we tried to measure them this way as well. But. It worked.
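For the quantitatively inclined, here’s a minimal sketch of that scoring scheme. The base scores (1, 2, 3, 5, 8, 15), the x1/x3 prominence multiplier, and the 20% quarter-over-quarter goal are from above; the specific hits are invented for illustration.

```python
# Outlet base scores on the near-Fibonacci 1-15 scale.
# (Illustrative subset; the VP of Marketing would score the rest.)
BASE_SCORES = {"TechCrunch": 15, "WSJ": 15, "NYT": 15, "decent_blog": 1}

def hit_score(outlet: str, dedicated: bool) -> int:
    """Score one PR hit: base score x3 if the piece is solely
    about you, x1 if you're just prominently mentioned."""
    return BASE_SCORES[outlet] * (3 if dedicated else 1)

def quarter_score(hits) -> int:
    """Total PR score for a quarter's list of (outlet, dedicated) hits."""
    return sum(hit_score(outlet, dedicated) for outlet, dedicated in hits)

# Hypothetical quarters:
q1 = [("TechCrunch", True), ("decent_blog", False)]          # 45 + 1 = 46
q2 = [("WSJ", True), ("NYT", False), ("decent_blog", True)]  # 45 + 15 + 3 = 63

qoq_growth = quarter_score(q2) / quarter_score(q1) - 1  # goal: >= 0.20
```

So each hit lands between 1 and 45, and the quarterly total becomes a single number you can put a growth target on.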
2. How to Measure Dev Productivity. OK, everyone has their own way of doing this, but surprisingly, many don’t do it at all. We did something similar to PR. We used a scale of 1-8 measured in weeks, which is how long we “knew” a scaled, senior developer would take to develop a feature (from 1 to 8 weeks). Then, we’d tally up what each developer built during each quarterly release, and track the results across the entire team each month. We decided to give no credit for bug fixes unless they were large and took 1+ weeks (basically a feature anyway). Then, we removed names (of course) from any external presentation. Know what? It worked. Productivity went up each release, and we could measure it not only on an absolute basis, but on an aggregated feature points/developer ratio, to track the overall feature cost-effectiveness of our developers. Today, there are plenty of tools here.
As soon as we tracked it this way (productivity/developer on average, per month and quarter), then it became clear: we could afford to hire a lot more developers. More features = more, larger customers. Because it became clear how accretive feature development was by adding engineers.
Also, for the first time, the true “cost” of slipping a feature or pushing out a release — which in isolation can be a very rational move — became very apparent, and more of a logical than an emotional discussion. You slip the feature, the score goes down by 8 that quarter … pushing out a release a week probably will cost you 20 points …
It also enabled us to judge the efficacy of the product development team.
3. How to Measure Revenue/Lead/SalesRep. This was an important project in getting us cash-flow positive and in letting us scale sales. Everyone measures $$$/rep — that’s basic Salesforce 101. What fewer people do, unless they have sophisticated sales ops, is carefully measure how effective each rep is at closing the leads they get. I.e., closed $$$/lead by sales rep, as well as by lead type and category. By doing so, we quickly learned that sending an SMB rep more than 150 leads/month, and sending a “run-rate” rep more than 100 leads/month, led to a dramatic fall-off in $$$ yield to the company. Yes, beyond 150/100 leads per month the reps would close more, but not that much more. They were not only too busy, but they were neglecting the lower-quality leads — human nature when you have a surplus.
So by measuring $$$/lead/rep across different deal sizes, we also learned exactly how many reps to hire to scale and optimize $$$/lead. We knew we always needed to have enough scaled sales reps to handle the 150/100 leads per month, which made it crystal clear what our minimum hiring goals needed to be to feed the engine.
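A rough sketch of that math, assuming the 150 (SMB) and 100 (“run-rate”) monthly lead caps from above; the rep data and helper names are invented for illustration.

```python
import math

# Monthly lead caps beyond which $/lead yield falls off dramatically.
LEAD_CAP = {"smb": 150, "run_rate": 100}

def dollars_per_lead(closed_dollars: float, leads_routed: int) -> float:
    """The core per-rep metric: closed $$$ per lead routed."""
    return closed_dollars / leads_routed

def min_reps_needed(monthly_leads: int, segment: str) -> int:
    """Minimum reps to hire so no rep gets more leads than the cap
    for their segment -- the hiring floor to 'feed the engine'."""
    return math.ceil(monthly_leads / LEAD_CAP[segment])

# Hypothetical: 1,000 leads/month coming in per segment.
min_reps_needed(1000, "smb")       # 7 SMB reps
min_reps_needed(1000, "run_rate")  # 10 run-rate reps
```

The point isn’t the arithmetic, which is trivial. It’s that once $$$/lead/rep is tracked, the minimum hiring plan falls out of it mechanically instead of being a gut call.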
I could go on. I know every demand-gen marketer today is meticulous about measurement. So are the best folks in consumer internet on funnels and A/B and multivariate product testing.
But measurement shouldn’t be left just to them. Measure everything. Publish the results. And watch them magically grow, if not month-over-month, then certainly quarter-over-quarter.