Hacker News

The subtle dig at getsatisfaction was interesting. (For history: http://37signals.com/svn/posts/1650-get-satisfaction-or-else ).

This is a good show of selective transparency. The last 100 ratings should make potential customers feel warm and fuzzy without showing anything that might discourage them (unless, of course, the last 100 happened to be :( complaints).



For the record, it's not a subtle dig at getsatisfaction. "Customer satisfaction" is a term companies have used for decades. We don't think satisfaction is a very high bar. We're fighting against low expectations.


Do you publish response rates? Things like this always scream selection bias to me. If the mix were, say, 90/7/3, but it took 300 interactions to get those 100 responses, that would seem like a misrepresentation.

That being said - I really appreciate the idea, but it only really works if you trust the company already. I have plenty of reason to believe 37s (I even applied to be part of the team being tracked), but if this were State Farm or Comcast or something like that, it would be a pretty laughable concept.


I believe around 15% of our customers respond with a rating. From what I understand that's pretty high as far as these sorts of things go. I think that would be considered a reasonable sample.

We don't ask randomly, we ask everyone, so everyone is free to offer up a rating.

CORRECTION: Just reviewed the stats again and it looks like the response rate is in the mid-30s (percent).


Jason, the issue is not whether the sample is large enough--a 30+% response rate strikes me as very high, and I suspect given your support volume you're getting several hundred responses per day.

The issue, instead, is whether the people who choose to respond--to take another step after their support encounter and rate the quality of service--are representative of all users, including the majority who do not rate the experience. That's a tough hypothesis to test, and I suspect it could cut either way: the happiest customers may choose to respond since they so enjoyed the experience, or the angriest may choose to respond since they want you to be aware of their disappointment. Finding out which (if either) of these is occurring is tricky, though.
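The effect being described can be made concrete with a little arithmetic. The sketch below is purely illustrative and uses hypothetical numbers (not 37signals' actual data): it shows how an observed rating mix diverges from the true mix when unhappy customers respond at a different rate than happy ones.

```python
# Illustrative sketch of response bias, with hypothetical numbers.
# Each group's true share is weighted by its probability of responding;
# the observed mix is the normalized result.

def observed_mix(true_happy, true_ok, true_sad, p_happy, p_ok, p_sad):
    """Given the true share of each rating group and each group's
    probability of responding, return the mix you would observe."""
    responses = {
        "happy": true_happy * p_happy,
        "ok":    true_ok * p_ok,
        "sad":   true_sad * p_sad,
    }
    total = sum(responses.values())
    return {k: round(v / total, 3) for k, v in responses.items()}

# True mix 80/10/10, but unhappy customers respond 3x as often:
print(observed_mix(0.80, 0.10, 0.10, 0.20, 0.20, 0.60))
# -> {'happy': 0.667, 'ok': 0.083, 'sad': 0.25}

# Overall response rate in this scenario:
rate = 0.80 * 0.20 + 0.10 * 0.20 + 0.10 * 0.60
print(f"overall response rate: {rate:.0%}")  # -> 24%
```

In this made-up scenario a company with 80% genuinely happy customers would show only ~67% smiley faces, which is why benchmarking the numbers against themselves over time is safer than reading them as absolute levels.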

That said, I certainly don't mean to impugn what you're doing here. I love the transparency and I think you probably are getting a fairly good snapshot of overall customer satisfaction. Also, even if there is a response bias, you can safely use these numbers to benchmark over time, which I suspect would be very useful.


Right, I didn't mean to imply that this was deliberately misleading, just that I would imagine these things to be far more useful internally than to the casual observer (i.e. great for spotting consistent sore spots and the like, but not necessarily a statistically valid indicator of customer happiness).

My guess would be that the biggest benefit from this is going to be when people hear about how great the customer service was at 37s, stop by to check it out, and see a chart full of smiley faces, each one potentially representing an echo of the great experience they heard about.

PS: I would like to add a frowny face for never getting a response to my application :(


I don't understand this observation. The alternative would be to show, what, a paginated array of every face? Who would benefit from that, besides people who want to snipe at them anonymously on message boards?

Incidentally, in real companies, all transparency is selective. They've selected not to be transparent about any number of things, from their monthly top-line revenue to their feature roadmap.


I think GetSatisfaction did a great job responding to this post at the time and it's important to give them some credit for a good response to criticism.



