Hacker News | madworld's comments

How long did you give them to fix the issue after they reproduced it? I looked up the thread on their mailing list, and you seemingly jumped the gun a bit with your conclusions.


Feel free to investigate further. I had to move on.


So what you are saying is that I was right. Thank you. People who report a bug and then give someone less than half a day to investigate have never dealt with a vendor like Oracle or IBM. This tells me you haven't had a data problem before, and your willingness to give up so quickly leads me to believe you won't end up with the kind of data problems this article is talking about anyway.


Ha. I've had and have plenty of data problems. After 2 days of making adjustments as per Basho's suggestions to try and improve the write throughput, I moved on. You seem to be making a lot of judgments and assumptions about that decision based on very little information. I guess this is troll food.


Meanwhile, back in Postgres-and-MySQL land we're wondering why we should have to entertain this kind of ridiculousness.


Exactly: hook them in, so that when it falls on its face they're left questioning whether or not it's worth dealing with the problems.


What they are doing with Riak isn't sharding; Riak was designed from the ground up as a distributed database. They didn't want to go horizontal when, based on Mongo's claims, they really shouldn't have had to at their data size. The problem is that Mongo lies about what its database can do, and once Kiip figured that out they didn't want to bother scaling out with Mongo as a band-aid for its problems. It was better for them to just use something made to scale. That's how I read it, based on that blog post and his comments here.


That matches most people's findings. If your dataset fits in RAM [1] and you don't care about your data being safe, then there might be an argument for MongoDB. Once you care about your data, things like Voldemort, Riak, and Cassandra will eat Mongo's lunch on speed.

[1] But as Artur Bergman so eloquently points out, if your data fits in RAM, just use a native data structure (http://youtu.be/oebqlzblfyo?t=13m35s)
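Bergman's point can be sketched in a few lines. This is a toy illustration (the keys, values, and helper names are made up, not from the talk): a plain native dict already gives constant-time key/value access with no network hop, serialization, or lock contention.

```python
# Minimal sketch: when the working set fits in RAM, a native data
# structure is the simplest possible "database". Everything below is
# illustrative, not a real storage engine.
store = {}

def put(key, value):
    store[key] = value

def get(key, default=None):
    return store.get(key, default)

put("user:42", {"name": "alice", "score": 7})
```

Obviously this punts on durability and replication, which is exactly the trade-off the talk is pointing at.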


How is the global write lock "fixable" without a major rewrite of the codebase?

Like the article suggested, it would be one thing if they had done it for transaction support. In reality, from looking at the code, it seems the global write lock came from not wanting to solve the hard problems other people are solving.
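A hypothetical sketch of why this hurts (a simplification, not MongoDB's actual code): with one process-wide write lock, writes to completely unrelated databases still queue behind the same mutex.

```python
import threading

# One global lock guards every write in the process -- this is the
# illustrative simplification, not MongoDB's real locking code.
GLOBAL_WRITE_LOCK = threading.Lock()
databases = {"analytics": {}, "sessions": {}}

def write(db, key, value):
    with GLOBAL_WRITE_LOCK:          # every writer serializes here...
        databases[db][key] = value   # ...even for a different database

threads = [threading.Thread(target=write, args=(db, f"k{i}", i))
           for db in databases for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All 200 writes succeed, but at no point do writers to "analytics" and "sessions" proceed concurrently, which is the whole complaint.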


A major rewrite of the core engine is exactly what's needed. Sounds like a fun project. If they don't do it, someone else will.

Adoption is hard to replace. Modifying APIs is really hard. Rewriting a core engine? Reasonable, and in this case probably necessary given the issues.

There are lots of people here who could replace that core engine, and someone should if the MongoDB guys can't.


DB-level locking is planned for MongoDB 2.2, which should be out within a few months.

https://jira.mongodb.org/browse/SERVER-4328


Meh, if your other option is PostgreSQL with row-level locks, a DB-level lock is still a fail.
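To make the granularity difference concrete, here's a hedged sketch (illustrative only, not how MongoDB or PostgreSQL are actually implemented): a per-database lock still serializes every write to the same database, while a per-row lock only makes writers of the same row wait on each other.

```python
import threading

databases = {"app": {}}
db_locks = {"app": threading.Lock()}  # one lock per database
row_locks = {}                        # one lock per (db, key), made lazily

def write_db_locked(db, key, value):
    with db_locks[db]:                # all writes to "app" contend here
        databases[db][key] = value

def write_row_locked(db, key, value):
    lock = row_locks.setdefault((db, key), threading.Lock())
    with lock:                        # only same-row writers contend
        databases[db][key] = value

write_db_locked("app", "a", 1)
write_row_locked("app", "b", 2)
```

The finer the lock, the more concurrent writers you can admit; the cost is bookkeeping a lock per row instead of one per database.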


And here's a great post that provides some insight on how much effort has been put in by RDBMS vendors to handle locking:

http://stackoverflow.com/a/872808


> This is the main reason my large employer didn't even bother to seriously look at what their products had to offer. Both 10gen (Mongodb) and one of the companies offering Cassandra support contracts were a lot more reasonable.

When we contacted both 10gen and Basho a few months ago, Basho's support rates were cheaper than 10gen. We didn't look at Datastax at the time, so I can't comment on that comparison.


Maybe that's why they don't list their prices. Having my company as a client would help with PR and marketing.

