Hacker News | new | past | comments | ask | show | jobs | submit | clain's comments

Jason, the issue is not whether the sample is large enough--a 30+% response rate strikes me as very high, and I suspect given your support volume you're getting several hundred responses per day.

The issue, instead, is whether the people who choose to respond--to take another step after their support encounter and rate the quality of service--are representative of all users, including the majority who do not rate the experience. That's a tough hypothesis to test, and I suspect it could cut either way: the happiest customers may choose to respond since they so enjoyed the experience, or the angriest may choose to respond since they want you to be aware of their disappointment. Finding out which (if either) of these is occurring is tricky, though.
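The distortion this kind of self-selection can produce is easy to see in a toy simulation (all numbers below are hypothetical, chosen only to illustrate the "happiest respond" scenario):

```python
import random

random.seed(0)

N = 10_000
# True (unobserved) satisfaction: assume 80% of all customers are happy.
true_happy = [random.random() < 0.80 for _ in range(N)]

# Hypothetical differential response rates: happy customers rate the
# experience 40% of the time, unhappy customers only 20%.
def responds(happy: bool) -> bool:
    return random.random() < (0.40 if happy else 0.20)

responses = [h for h in true_happy if responds(h)]

true_rate = sum(true_happy) / N
observed_rate = sum(responses) / len(responses)
print(f"true satisfaction:     {true_rate:.1%}")
print(f"observed satisfaction: {observed_rate:.1%}")
```

Under these made-up rates, the survey overstates satisfaction by several points (roughly 89% observed vs. 80% true), even though nothing about the support itself changed. Flip the response rates and it understates instead. Note, though, that if the bias is stable, changes in the observed number over time still track changes in the true number, which is why benchmarking remains useful.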

That said, I certainly don't mean to impugn what you're doing here. I love the transparency and I think you probably are getting a fairly good snapshot of overall customer satisfaction. Also, even if there is a response bias, you can safely use these numbers to benchmark over time, which I suspect would be very useful.

