Ubuntu 22.04

Apr 26 2022
 

Ubuntu 22.04 was just released, and I managed to upgrade my WordPress site (running in a VM and only publishing a static site). It was not as smooth as I thought: there were quite a few small problems here and there, but it eventually worked after about 4 hours of troubleshooting.

As a side note, I’m no longer running Parallels, since I found it had tricked me into a subscription at $79.99 per year; luckily I noticed the weird charge on my credit card and was able to cancel it the same day. I’m running my VM on UTM now. I haven’t noticed a performance difference between UTM and Parallels, though Parallels does provide more convenient features like desktop integration, snapshots, etc.

I think I’ve been running Ubuntu since 10.04, but I’m seriously thinking of moving away from it now because of snap: more and more applications on Ubuntu have dropped deb support and ship solely as snaps. I don’t mind a distro changing its package management tool, like yum to dnf, but I just don’t want to have two package management mechanisms at the same time.

I may not run Fedora as I need an LTS, and I cannot choose Rocky as it (actually RHEL) lacks aarch64 support. Most likely I’m going to play with Debian or Arch.

Mar 05 2010
 

This is my .screenrc:

startup_message off
vbell off
split
screen -t s1 1 ssh s1
split -v
focus down
screen -t s2 2 ssh s2
focus down
screen -t s3 3 ssh s3
split -v
focus down
screen -t s4 4 ssh s4
focus top

And you can see the screenshot at right.

This is exactly what I want. 😀

Jan 13 2010
 

Since I keep getting myself mixed up with all sorts of distros, here are some things I have to write down to keep track.

To keep a Linux/FreeBSD distro up to date:

  • Debian & Ubuntu:
    alias update='sudo apt-get -y update && sudo apt-get -y upgrade && sudo apt-get -y dist-upgrade'
  • CentOS & Fedora:
    alias update='sudo yum -y update'
  • openSUSE:
    alias update='sudo zypper refresh && sudo zypper --no-gpg-checks -n update'
  • Gentoo:
    alias update='sudo emerge --sync && sudo emerge --update --deep --newuse world'
  • FreeBSD (needs portmanager installed):
    alias update='sudo portmanager -u'
Jan 01 2010
 

Here is the current setup:

  • data server d1: Fedora 12
  • data server d2: openSUSE 11.2
  • data server d3: CentOS 5.4
  • shared host f5: Debian 5.0.3, this machine acts as the NFS server, LVS director, and login bridge box
  • client machine c1: Ubuntu 9.10, the pure client host, doing all development and initiating test traffic
  • client machine c2: Gentoo 1.12.13, actually this is the web server, running Apache + WSGI

d1~d3 and f5 are running on the dedicated box with Proxmox, while c1 and c2 are running on a Windows Vista machine with VirtualBox.

Dec 30 2009
 

Brand new 😀

Playing with Sabayon now, an “out-of-the-box” Gentoo. Seems fun, but I have to write down a few things, as I have never touched this before:

  • rc-update/rc-status to add/list currently enabled services
  • "emerge --sync" to sync package metadata, "emerge --search" to search, then "emerge pkg" to install package "pkg" (it takes some time, as everything is built from source)
  • change /boot/grub/menu.lst to vga=normal to avoid booting into graphics mode

There could be more; I will post as I go.

Dec 28 2009
 

I was running everything on Fedora (7 VMs), but steadily, I changed some of them to different distros … just for fun.

Now I have two Fedora 12, one CentOS 5.4, one openSUSE 11.2, one Debian 5.0.3, and two Ubuntu 9.10. I will think about converting one of the Fedora machines and one of the Ubuntu machines to something else, but I haven’t decided yet.

Dec 08 2009
 

Testing a prototype that uses Cassandra as the back-end storage. The simple application does user authentication: it logs the user in, gets the user’s profile, and then shows the details on a web page.

I hit a performance problem with buddy-related operations. Every user may have 0~20 buddies, and I want to show each buddy’s last login time on the result page, so I actually have to retrieve everything for those buddies. The most direct implementation, which I did first, uses a user object per buddy to get the data back. Obviously this is not good: for every user object, the client needs to access the Cassandra cluster to get the data back, and the TCP round trips would be a pain.

Then I added something to the user object’s constructor to load all buddies’ info in one shot (Cassandra’s multiget_slice API). Things got better, but this doesn’t seem reasonable to me: most of the time (such as during authentication) we don’t need buddy info, and fetching it back is just a waste of time.

So I added a new method to the user class, load_buddies, which loads buddies’ info on demand. This keeps authentication fast, while still keeping the ability to load buddies’ info in batch mode.
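The pattern can be sketched roughly like this; the storage client below is a fake in-memory stand-in I made up for illustration (the real code goes through Thrift, and the batch call corresponds to multiget_slice):

```python
# Sketch of lazy, batched buddy loading. FakeStore is an illustrative
# stand-in for the Cassandra client; it counts round trips so the effect
# of batching is visible.

class FakeStore:
    def __init__(self, rows):
        self.rows = rows
        self.round_trips = 0

    def get(self, key):
        # One key, one round trip.
        self.round_trips += 1
        return self.rows[key]

    def multiget(self, keys):
        # Many keys, still one round trip (like multiget_slice).
        self.round_trips += 1
        return {k: self.rows[k] for k in keys}


class User:
    def __init__(self, store, uid):
        self.store = store
        self.data = store.get(uid)   # profile only; enough for authentication
        self._buddies = None         # buddy info not loaded yet

    def load_buddies(self):
        # On demand: fetch all buddies in a single batched round trip,
        # and cache the result for later calls.
        if self._buddies is None:
            self._buddies = self.store.multiget(self.data.get("buddy_ids", []))
        return self._buddies
```

Authentication constructs the User and never calls load_buddies, so it pays one round trip; the buddy page pays exactly one more, no matter how many buddies there are.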

After all this the performance is … still not good. My test case has one login failure every ten requests, and for a successfully logged-in user I show each buddy’s id and last access time, and also update the user’s last login time. With my current setup, the worst response time is about a second, while 90% of requests finish in less than 600ms.

There must be something that can be tuned, though the VMs could be the reason for the slowness. I will check the following:

  • Apache HTTPd configuration: it seems prefork performs better than worker, and there may be more to tune in both HTTPd and WSGI
  • Python class optimization: I will review the implementation of the user class, as I don’t want it to become too complicated to use
  • Cassandra performance: this is actually what I’m worried about, as during the tests the Cassandra boxes’ CPU utilization is about 80% (roughly 70% user, 10% sys); it could be the bottleneck

Without the buddy operation everything’s fine: the worst response time is about 600ms while 90% of requests are below 400ms. Relationships are a pain, and they are the bottleneck, but in this social era no web application can live without relationships …

BTW, my testing environment:

  • Test client running on a PowerBook, using ab; I will check whether there is anything else that could be useful
  • Servers all run on the same physical box controlled by Proxmox; this includes a web server, an LVS director (to load balance the Cassandra nodes), and 3 Cassandra nodes
  • The server box is on Ethernet while the PowerBook is on wireless; I don’t think this is an issue, as connect times are pretty low.
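For reference, the percentile numbers that ab reports are easy to reproduce by hand; here is a rough sketch (not a replacement for ab, and the URL, request count, and concurrency are placeholders):

```python
# Minimal latency measurement sketch: fire N requests through a thread pool
# and report the worst and 90th-percentile response times.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def measure(url, requests=100, concurrency=10):
    def one(_):
        start = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = sorted(pool.map(one, range(requests)))
    # times is sorted ascending, so the last entry is the worst case.
    return {"worst": times[-1], "p90": times[int(len(times) * 0.9) - 1]}
```

Something like `measure("http://c2/login?user=test")` would give numbers comparable to the ones above (the hostname and endpoint are made up).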
Nov 26 2009
 

It seems ipvsadm does not do health checks, etc., so I turned to keepalived. While keepalived does a great job of adding live nodes back and removing dead nodes, it does not have a good way to dump the current rules and stats, and it also has problems at startup time.

I think the latest keepalived may have solved these problems, though I haven’t tried it yet. If it works fine I will leave ipvsadm installed as a utility tool only (i.e. disable the service coming from ipvsadm).

I will look around to see whether there is anything else similar to keepalived; there should be, I think.

Nov 25 2009
 

Playing with LVS, so that I don’t have to connect to an individual Cassandra server.

What I planned for LVS:

  • 192.168.1.99 will be the VIP
  • f5 (192.168.1.205) will be the LVS director … you are right, its name is f5 😉
  • f1 (192.168.1.101) to f4 (192.168.1.104) will be the real servers
  • will use DR mode (I think it is called single-B on most L4 switches …)

The configuration is actually pretty simple, as long as you get it right. On the LVS director (f5):

  • Configure the VIP on eth0:0 as an alias; the netmask should be 255.255.255.255 and the broadcast is the VIP itself
  • Add the following rules with ipvsadm (yes … you need to install this package):
    -A -t 192.168.1.99:9160 -s wlc
    -a -t 192.168.1.99:9160 -r 192.168.1.101:9160 -g -w 1
    -a -t 192.168.1.99:9160 -r 192.168.1.102:9160 -g -w 1
    -a -t 192.168.1.99:9160 -r 192.168.1.103:9160 -g -w 1
    -a -t 192.168.1.99:9160 -r 192.168.1.104:9160 -g -w 1
  • Start LVS (after restarting the network to make sure the VIP is in effect) with "ipvsadm --start-daemon master"
  • If you want to stop LVS, run "ipvsadm --stop-daemon master"

Now let’s turn to the real servers. All real servers (f1~f4) get the same treatment:

  • Add these lines to /etc/sysctl.conf and then run "sysctl -p":
    net.ipv4.conf.dummy0.arp_ignore = 1
    net.ipv4.conf.dummy0.arp_announce = 2
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
  • Configure the VIP on dummy0 as an alias; the netmask should be 255.255.255.255 and the broadcast is the VIP itself
  • Change ThriftAddress in Cassandra’s storage-conf.xml to 0.0.0.0 so that Thrift serves on all interfaces
  • Remember to restart Cassandra so the new configuration takes effect

Now launch your favorite client and connect to 192.168.1.99:9160; you should get everything back as if you were connected to an individual server.
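A quick sanity check that the VIP is actually answering (plain TCP connect, no Thrift involved; the helper name is mine):

```python
# Check whether a TCP port accepts connections, e.g. the LVS VIP at
# 192.168.1.99:9160 from the setup above. This is connection-level only;
# it does not speak the Thrift protocol.
import socket

def can_connect(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect("192.168.1.99", 9160)
```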

Nov 20 2009
 

I recently spent a lot of time reading articles from Linux Magazine, as it always introduces fairly new (or not new, but less known) technologies and products. This time it is Proxmox, an open source virtualization product.

I’m running VMware Go at home at this moment, but I failed to solve the license issue, and every 60 days I have to re-install everything. I guess I got the wrong ISO, so what I installed was actually for vSphere, but I just don’t want to spend too much time digging into it, as obviously VMware does not make it easy to get the right edition.

Xen is another story; at least to me it is not easy to use, and maybe the next version will be easier (I should give it a try too, but I lack machines …). Also, I still have the impression that running Ubuntu with Xen is painful, as Xen is, kind of, tightly bound to the RedHat distros.

Now here comes Proxmox, which seems promising. I will give it a try today (maybe over the weekend as well); if it works I will stick with it, and if not … I will try out Xen.

Let’s see.