Jan 29, 2010
 

It’s a super busy week – I believe it’s the busiest week in the past couple of months, but hey, vacation is ahead :D.

Suddenly I have so many meetings to attend, and serious discussions about how the new project will look. Another project also ran into a serious issue, and it took me quite some time to help them dig out the root cause.

Anyway, I will be on vacation next week (actually starting this weekend :-$). I hope I can get some good rest, recharge, and do better in the week after.

Jan 22, 2010
 

I “unfollowed” all the Linux distros as they didn’t give me much useful information. I’m still following CouchDB, Cassandra, and MongoDB; so far their tweets have been fairly helpful.

I also subscribed to Cassandra’s user mailing list; there are lots of interesting topics there.

Jan 20, 2010
 

Finally, someone in the company said congratulations to me.

It was my boss’ boss. This morning he was in the same elevator and said “congratulations”, which made me feel better :). But when he asked me “how does it feel?”, my answer was “way too long”.

Is it the right answer? 😀

Jan 19, 2010
 

I set up a testing environment on a couple of company boxes to see how Cassandra performs on real machines (“real” here means powerful enough to be a data node). Here are the details of the environment:

  • Two client nodes and one server node, all running RHEL 4.x. I use two client nodes because I found during the performance tests that a single client machine cannot generate enough load
  • All three machines have 8 cores and 16 GB of memory (well, memory is not a big deal for my tests)
  • Running Cassandra 0.5.0 RC3 (built from svn last night)
  • The client is written in Python (a rough sketch of the load generator follows this list)
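
The load generator is roughly along these lines; this is a minimal sketch rather than the exact script behind the numbers below, and lookup_user() is just a hypothetical placeholder for the real read issued through the Thrift-based Python client:

    # Minimal load-generator sketch; not the exact script used for the
    # numbers below. Thread count and duration are assumptions.
    import random
    import threading
    import time

    NUM_THREADS = 8      # concurrent worker threads per client box (assumed)
    DURATION = 60        # seconds per run (assumed)

    def lookup_user(user_id):
        # Hypothetical placeholder: replace with the actual single-key read
        # made through the Thrift-based Python client.
        return None

    def worker(latencies):
        end = time.time() + DURATION
        while time.time() < end:
            start = time.time()
            lookup_user(random.randint(1, 1000000))
            latencies.append(time.time() - start)

    def run():
        latencies = []
        threads = [threading.Thread(target=worker, args=(latencies,))
                   for _ in range(NUM_THREADS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        qps = len(latencies) / float(DURATION)
        avg_ms = 1000.0 * sum(latencies) / max(len(latencies), 1)
        print("QPS: %.0f, average latency: %.1f ms" % (qps, avg_ms))

    if __name__ == "__main__":
        run()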

Here is the graph for simple requests (single-key lookups):

The result seems pretty encouraging: the server’s queries per second grow almost linearly, and at about 5,000 QPS the overall CPU utilization is still under 40% (25% user, 12% sys). I cannot get more client boxes to test with, but if the trend continues and we take 80% as the CPU utilization threshold, then this kind of box can handle roughly 10K QPS with latency at around 3 ms.

Note that the CPU utilization, per-client QPS, and latency are not quite clear here since the overall QPS is too high, but you can get some idea from the next graph…

Here is the graph for the application workload (login, which does one user lookup and then 10~100 more user lookups, one per buddy to fetch that buddy’s information):

This result kind of worries me: the CPU utilization is already at 70% (45% user and 25% sys), so it seems 200 QPS is about what the cluster can provide. However, considering that the login operation does far too many table lookups (55 per login on average), this actually matches the simple-lookup numbers discussed above (roughly 10K QPS per box), while latency is at around 80 ms.
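
Just to write down the back-of-envelope math behind those two statements (the 80% ceiling is the same assumption as above):

    # Back-of-envelope math for the two graphs above.

    # Simple lookups: ~5,000 QPS at ~37% total CPU (25% user + 12% sys).
    measured_qps = 5000.0
    measured_cpu = 0.25 + 0.12
    cpu_threshold = 0.80   # the 80% CPU ceiling assumed above
    print("max simple-lookup QPS: %.0f"
          % (measured_qps * cpu_threshold / measured_cpu))   # roughly 10,800

    # Login workload: ~200 logins/sec, ~55 lookups per login on average.
    print("implied lookups/sec: %d" % (200 * 55))   # 11,000, which matches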

Actually, 20% sys time is pretty bad; it means the kernel is busy context switching (I didn’t check vmstat at the time, but this is a reasonable guess). Then again, this may be expected, since the machine is handling 16 active clients sending a bunch of requests while it has only 8 physical cores, so context switching is unavoidable.

Since everything is linear, I can assume a 4-core box can offer 5,000 QPS with reasonable latency. I will do similar tests with MySQL and memcached, and with multiple data nodes as well, since I got the impression that multiple data nodes are far slower than a single node (inter-node communication?).

Jan 18, 2010
 

Actually this applies to both adding new nodes and removing existing ones:

  • Add the new node, making sure AutoBootstrap is set to true so it will bootstrap itself; for removing an old node, just shut it down
  • For all nodes currently in the ring, calculate the new token for each node so the ring is split into equal ranges; the formula is 2^127/nodes (see the sketch after this list)
  • Now, on each node, run “nodeprobe move token-for-this-node”; run it on only one node at a time, since data will be moved around
  • After all nodes finish the move, run “nodeprobe cleanup” to remove the now-useless entries
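
A quick sketch of the token calculation in step 2; the node count is just an example, and assigning the i-th node the token i * 2^127/nodes is one straightforward way to end up with equal ranges:

    # Evenly spaced tokens for a ring of NUM_NODES nodes:
    # each node owns a range of size 2^127 / NUM_NODES.
    NUM_NODES = 4                      # example only; use the real ring size

    range_size = 2 ** 127 // NUM_NODES
    tokens = [i * range_size for i in range(NUM_NODES)]

    for i, token in enumerate(tokens):
        # Run this on node i, one node at a time as noted above:
        print("node %d: nodeprobe move %d" % (i, token))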

Serving should not be interrupted during the operation, though performance may be affected.

The “loadbalance” operation does not make things perfect, though the Cassandra guys mentioned it should be good enough.

Jan 18, 2010
 

It seems Cassandra creates a big pile of threads for different tasks. I didn’t dig into the details, but I’m pretty sure it has more than 40 threads with the default settings on a 2-node cluster. So multi-core may not be a concern, since all these threads can run on different cores to fully utilize the CPU.

However, my tests show something that really worries me: a multi-node cluster performs worse than a single node (due to inter-node communication, I believe), and a multi-core deployment is slower than a single-core one (this is something I don’t quite understand; maybe because of L1/L2 cache effects?).

I need some real hardware to test on as well, since VMs are not that good for this kind of test. Then I suddenly recalled that I still have some 8-core/16G boxes at the company sitting idle; I can surely use one or two for the test 😉

Jan 17, 2010
 

I just realized that I haven’t done any research comparing Xen and KVM. Both obviously claim to perform much better than VMware, which I believe is true, but I haven’t read anything about Xen vs. KVM yet.

I will post something here, but I won’t do my own comparison as it would be too much for me.

Jan 17, 2010
 

Ten years ago I joined this company. I never doubted that I would stay here for 10 years, though people questioned it from time to time.

I wish to stay here as long as I can, but now I do doubt whether I can stay for another 10 years. Let’s see. 🙂

Jan 17, 2010
 

I feel like writing something today, but I just cannot recall what I wanted to write. Bad memory, indeed.

It could be one of these things:

  • Avatar, an idiotic story with excellent special effects
  • CouchDB, still far away from what I want
  • Cassandra, which runs way too many threads, so it should be able to utilize multiple cores
  • Desktop development, starting to use VC++ Express again (it’s the 2010 beta now)

I will post once I can recall what I wanted to say.
