Observations from an ebook noob

I’ve been the owner of an ebook reader (see the previous post) for all of two weeks now. I have used my Kindle nearly daily, and it’s a handy thing to have around. So far, I’ve mostly been reading issues of Linux Journal (which moved to a digital-only format two years (?) back), the scifi book by MK Wren that I mentioned, and then various tests.

But about the medium. Surprisingly, I fucking hate that there are format restrictions, DRM and all that jazz. Why have two formats that do essentially the same thing on different devices? Profits, probably. Businessy stuff that I don’t understand. There are, of course, ways around things like this. I read somewhere that you can root a Kindle, which enables functionality not found on the retail device. There are also various converters, such as Calibre, which can manage a library and convert between formats. I have read conflicting things about whether an (un-rooted?) Kindle will accept books that have been un-DRM-ified using a converter, or whether it will read converted books at all. I haven’t tried the software yet, so I’ll have to get back to you.

The issue of DRM is a difficult one. I do not believe in crippling content and/or software. Your product should be good enough that people want to pay for it. And I will. The amount of money I spend on software, movies and music in a given year is not small. We own several shelves of music, several gigabytes of digital music, and probably in the neighborhood of 500 DVDs and Blu-rays. I prefer FLOSS, but if it doesn’t do what I need it to do, I’ll probably buy something. I own my copies of Windows, on all of my hardware. And so on. Okay, disclaimers aside, the point I was trying to make is: if your content is good and there is a need for it, people will pay for it. DRM will never be an effective solution, ever. People will always find a way around it.

Okay, done venting!

I’m still miffed that I can’t read my technical manuals or whitepapers, which are in PDF format, on my Kindle. I would really find it useful to carry them with me on consulting gigs, so I could pull up any number of manuals when I’m in a server room somewhere doing an install. Yeah, I can use a laptop, but that will run out of battery on most install gigs, and it’s not comfortable to have when you’re behind a rack, for instance. Printing them is also out of the question, as they might be hundreds of pages. This is really a use case I can get behind, though I do admit it’s more a comfort thing than a necessity for me.

I ran into that pesky “out of memory” message trying to read a tiny 15 MB PDF. I don’t get it. Surely the device has more than 15 megs of RAM, and I’d hope it doesn’t cache the entire document when you read it. Maybe a slight read-ahead and read-back? Conversion might be the answer here, but, as I said, I’ll have to get back to you when I’ve tried it.
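For when I do get around to trying it: Calibre ships a command-line tool, ebook-convert, which should be able to do the conversion in one go. A minimal sketch, assuming Calibre is installed and the Kindle mounts at /media/Kindle (the filename and mount point are both made up):

```shell
# Hypothetical flow: convert a PDF to MOBI, then copy it to the Kindle over USB.
# The guard keeps this safe to paste on a machine without Calibre.
in=manual.pdf
out="${in%.pdf}.mobi"    # manual.pdf -> manual.mobi
if command -v ebook-convert >/dev/null 2>&1; then
  ebook-convert "$in" "$out" --output-profile kindle
  cp "$out" /media/Kindle/documents/
fi
```

Whether the result is readable is another matter; re-flowing a fixed-layout PDF is hit-or-miss.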

As for the content: I have not bought anything from Amazon yet. I have bought The Book of PF (3rd edition) from No Starch (really like their stuff!), some indie content, and then the scifi books through.. whatever it was. Paid by PayPal or credit card, then transferred them over USB to the Kindle. Works fine.

There is in-device buying. I’ve seen ads for $1.99 books on the Kindle, and sooner or later, I’ll click on one. It will be interesting to see if there are regional restrictions on that. I bought the Kindle in the States, sure, but can I buy books from Amazon when I’m in Finland? Amazon.co.uk tells me to go to Amazon.com (eerily similar to my first tries of buying a Kindle). I simply don’t understand this. I get it that they need to like.. pay distributors and what not, but.. Just let me pay you for your stuff! I have the money! You have the stuff! Let’s transact!

You can also move content by sending an email to your “Kindle email address”, which was created when you first registered your Kindle. You can probably use Wi-Fi too (haven’t tried it). USB is fine for me.

Even if I have to live without content from Amazon’s stores, there’s still plenty for me to read, and plenty of good publishers that provide me with cheap, compatible books.

Compatible books. What a laugh-riot.

 

The Trouble with Tumblr…and other stuff

So.. I’ve been thinking about two things in relation to Tumblr, the popular image-blogging site.

1) Why is it so hard to get an image in the original size? Sure, this may be theme-related stuff; I’m not tumblr’d enough to say. But when you see a thumbnail on a site, shouldn’t you just be able to click it and get the original? This has been the use case since the very early days of the internet. The point of the thumbnail is to, one, decrease load times by displaying a smaller ‘preview’ image first and letting the user decide whether he wants to load the larger image, and two, to save layout space on your site by not covering the entire screen with one image. On Tumblr, what often happens is that I click on an image and get taken to the comments page, where I can either click the “source” link under the image, or the link for the person (‘via xxxxx’) who reblogged the image from the original poster. Then I might get the large, original picture. Or not! I find this extremely disturbing. If it is a theme issue, then okay, fine. But then most people are using very broken themes. It also might signify that most people have no idea how to fix the theme, or even what makes up a ‘theme’ on Tumblr. Which might, or might not, say something about the blogger on Tumblr. But enough about this angle! I digress!

2) Who provides the original content? Pick any tumblr, save for say, the official Tumblr page for a celebrity or so. Look at the images. Are they all reposts/reblogs of some other image? In some cases the reblog chain for an image is stuponfuciously long. Is there original content on Tumblr, or is like, everything a reblog of a reblog of a reblog of a reblog of some picture someone found somewhere, which was still not the original source?

Okay, I realize this is a silly thing to get annoyed over, but that’s me.. for you.

On to other things!

I’m moving. Again. I seem to live in one apartment for two to three years at a time. But this time, it’ll be different! It’s a sweet pad. Built in 2011. Four rooms, a big washroom and a sauna. 98 m^2. Huge living room (I’m looking into the crystal ball and I’m seeing, yes.. a projector…). And, again, a hacking room. Same as in my last apartment. I missed that place. A room that I can fill top-to-bottom with hardware, books, whatever. A place where I can sit down, close the door and do whatever. I’m getting fuzzies just thinking about it. It’ll be great. Also, nobody will be disturbed by the humming. It’ll just be there, and it’ll be sweet.

What else what else. Didak has posted some new pics of his famous Home Office, version 7. They are the sweetness. Check them out. Waiting for a writeup or something, or a making of article. I’ve really enjoyed those in the past.

I wanted ESXi 5.5 on some Dell and HP boxes. I had no joy booting from a USB stick that was made using Unetbootin or Win32DiskImager. It simply wouldn’t boot. Note that the same image would eventually boot correctly via iLO/iDRAC using the virtual media feature. It might be a problem with the USB media I was using, or the software I used to create the bootable media, or the specific server hardware, or their BIOS/UEFI settings, or UEFI in general. I googled for a solution, and I found one. Here it is! Following those instructions, I now have properly bootable (on any machine I’ve encountered so far) media with ESXi 5.5 on it. It might be helpful for you. Also, remember to use the vendor-specific media for both HP and Dell, and not the generic VMware image. They contain diagnostic tools, drivers and other stuff that will be useful later. You can find the vendor-specific bootable media for HP and Dell in those two links there. These may or may not be current, but they’ll take you somewhere. For Dell, google for the ESXi version, and then A0x, where x is a number. When I was installing, the latest was A01.

What I’ve been reading lately: Tom Clancy’s Threat Vector (his last book?). Okay for a Clancy, and pretty eerily realistic. After that I started on Neal Stephenson’s The Diamond Age, which has been moving a bit slowly at times. It goes from okay to excellent between chapters, so sometimes I’m reading twenty pages in one go, and sometimes it’s more like sixty or eighty. It’s a curious book, that. There are absolutely brilliant parts, and then some parts that are, to put it bluntly, boring. But I’ve been meaning to read it for a while now, and I’ll be happy to finish it soon. Snow Crash was excellent, and so was Reamde. After this I will either read The Baghdad Blog, by ‘Salam Pax’, or another Clancy, perhaps? I have like five books on my reading shelf.

 

Saying Hi to 2014

So.. 2014. Real psyched! Not really. No big plans for 2014, but there are a few milestones coming up nonetheless. My son will start elementary school this fall, which will obviously be HUGE. We’re not going abroad this year; we’re sort of planning a bigger trip in 2015 (possibly Japan), so that’s okay. It doesn’t rule out shorter trips, though. Maybe we’ll drive up north, or something.

I’m starting to get settled at my new job, and things are going smoothly. Lots to do, which is not a bad thing at all. Loads of new stuff to learn, which is always great, and I need that. Not much I can speak of, except a FreeNAS build that I’m working on. I’ll maybe do a post or two on that build once it’s done. Sure, it’s not the EMC or IBM storage systems that I often work with, but this is a cool project that’s kind of on the back burner. It’ll be done when it’s done.

At home, I’ve added a two-disk Iomega NAS to my network. It’s an IX-200, running in RAID1. So far, a good experience. Sure, it’s not a honking big ZFS FreeNAS box or anything, but it’ll do great as a dump/one of my backup locations. Quoting whoever said it first: “If it isn’t in at least three locations, it doesn’t really exist”, referring to backups/data in general. I’ve also switched out my trusty Linksys WRT54GL for a TP-Link WR841N. It’s essentially a cheap-ish 802.11n access point that can also run custom firmware. Currently on stock, but I’m planning on putting in either OpenWRT or DD-WRT at some point. Both are installable and supported. I’ve gotten speeds of 144Mbps from the device (a laptop) to the AP, so it’s not all too bad. The stock firmware doesn’t actually look too bad on that thing either. Gotta see if there’s something that actually is missing, which would warrant replacing it.

What else what else.. Well, I got a Sanyo Eneloop quick charger, with four AA Eneloop batteries. Everyone’s been saying good things about the Eneloop-series of products, so I have great expectations.

Updated my pf-box to OpenBSD 5.4. No biggie here.

One kind of “Wanna”-project for this year would be a ZFS box. Let’s see if I have the funds for it. It’ll be a RAM-heavy box with both SSDs and spinny disks. WD Red is really pleasing me (running that disk in two of my boxes currently), so that’s probably my go-to for the rotating rust. SSDs will either be in there or not. If they are, they’ll most probably be Intel’s. My use cases for an L2ARC and a ZIL are speculative at best, so the only reason I’d have them is for practice and fun.
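For my own future reference, the pool I’m picturing would be built roughly like this. The device names are pure placeholders, and the log/cache devices are the speculative part:

```shell
# Sketch of the planned box: mirrored WD Reds, with optional SSDs for ZIL/L2ARC.
# Device names are made up; the guard means this only runs where they exist.
pool=tank
if command -v zpool >/dev/null 2>&1 && [ -e /dev/ada0 ]; then
  zpool create "$pool" mirror /dev/ada0 /dev/ada1   # the rotating rust
  zpool add "$pool" log /dev/ada2                   # ZIL (SLOG), speculative
  zpool add "$pool" cache /dev/ada3                 # L2ARC, speculative
  zpool status "$pool"
fi
```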

Bleh. My muse is not present, so this’ll be it.

Removing trickier VMFS-datastores

Ok, maybe tricky isn’t the right word, but at least I couldn’t find anything written on this particular issue. Maybe it’s too simple a solution even for the VMware KB, but anyway.

I was cleaning out some local datastores (Smart Array 420 and 420i controllers) and ran into an issue where I was unable to remove the VMFS datastore because of a “file in use” error. It didn’t give me specifics; it just told me that there were file(s) in use, and/or that the datastore was busy. After a fair amount of googling I started throwing commands at it over SSH. There’s a vmkfstools command that can break any existing locks, and it warns you that it will do so forcibly. So I tried that, given that there was nothing on the datastore that I couldn’t afford to lose (the point, after all, was to remove it). Despite the grave warnings, vmkfstools was unable to break the lock and didn’t really give me a proper reason.

Looking at the vmkernel logs (/var/log/vmkernel.log by default), I saw the same references to files being in use, but no exact reference as to what files and where. No virtual machines were running anymore, and I had deleted most everything that I could off the datastore by hand already. There was a rather specific error message relating to corruption, and googling that got me exactly diddley. The datastore had had some problems previously, some hardware had been replaced, so there were a lot of variables and things that could have affected the case.

The solution, however, was much simpler. ESXi (5.1 Update 1), on a standalone server not attached to any cluster, was shoving logfiles onto the datastore I was trying to remove. Obviously, there would be ‘file in use’ errors. D’uh. So, from the host level, I went to the Configuration tab, and from there to Advanced Settings. From there, Syslog -> Syslog.global.logDir. If it is null (and it can be null), the logs are all reset if and when you reboot the host. If there’s a path, in the style of [datastore]/path, it’ll use that instead.

So for this particular case, I set a null path, which raises a warning that logs are being stored in a non-persistent location, but it then allowed me to delete the datastore (and/or detach it first) without issue.
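The same redirect can also be done over SSH instead of the client; a sketch, with an example datastore path in the [datastore]/path style (the esxcli syslog namespace is there in 5.x):

```shell
# Point the host's logs at a datastore you intend to keep (path is an example),
# then reload syslog so it takes effect. Guarded so it only runs on an ESXi host.
logdir="[datastore1]/systemlogs"
if command -v esxcli >/dev/null 2>&1; then
  esxcli system syslog config set --logdir="$logdir"
  esxcli system syslog reload
fi
```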

I was probably thrown off by the vmkernel messages about corruption, though they may have played a part in why certain files and folders couldn’t be deleted by hand using datastore browser or the command line.

After everything was done, I redirected the logs back to one of the datastores, which clears the warning (no reboot needed here, or when I set the null path earlier).

I tried to find the specific error messages but I couldn’t. I may have them somewhere so I’ll shove them in here if I find them.

Some of the commands that helped me along were:

esxcli storage filesystem list ## This lists the filesystems that the server knows about, including their UUID, label and path. These are needed for many vmkfstools commands, so it’s a good place to start

vmkfstools -B /vmfs/devices/disks/naa.unique_disk_or_partition_goes_here ## This tries to ‘forcibly’ break any existing locks to the partition that may prevent you from proceeding. Didn’t work in my case, but also didn’t tell me anything useful..

vmkfstools -V ## re-read and reload vmfs metadata information

Some of the sites and blogs that helped me along:

VMware KB article 1009570
VMware KB article 2004201
VMware KB article 2032823
VMware KB article 1009565
http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html
http://kb4you.wordpress.com/2012/04/23/unpresenting-a-lun-in-esxi-5/
VMware KB article 2011220
VMware KB article 2004605
http://arritdor.e-wilkin.com/2012/03/removing-vmfs-datastore.html

Thanks to everyone who wrote those.

Pi musings

So now I’ve gone and done it! I am doing something with my Pi. What I’ve done is install nginx in a chroot on it. Why? Just because I haven’t done that before. I’ll talk a bit more about what I did, and how, in this post.

Why nginx? Well, the primary reason is that it’s growing in market share, and I have very little hands-on experience with it. Also because I have this idea in my head that it’s slightly less bulky than, say, Apache. Many Pi-specific pages also recommend lighttpd, but since nginx is more prevalent on the net, I chose that.

Note! You could prepare the chroot environment beforehand. If you wish to do so, jump to the appropriate heading and then come back here. This is the order that I did things in, so if you, for some yahoo reason want to follow that, read on.

The Raspbian repositories contain a version of nginx, but it’s supposedly very old. I opted to compile from source, which seemed like a good idea after the repositories listed for a more current version didn’t work properly for my version of Raspbian / the architecture of the Pi. Obviously, compiling on the Pi is a rather slow process, but this isn’t a rush order. To start off, I installed some necessary tools so I could compile from source:

sudo apt-get -y install wget build-essential libpcre3-dev libpcre++-dev zlib1g-dev libssl-dev

After this, wget the latest source package for nginx from http://nginx.org/en/download.html, and unpack it to a location of your choosing:

wget http://nginx.org/download/nginx-1.5.6.tar.gz

and the PGP signature:

wget http://nginx.org/download/nginx-1.5.6.tar.gz.asc

Get the public key for the signer of the package (in this case Maxim Dounin): wget http://nginx.org/keys/mdounin.key

Import it: gpg --import mdounin.key

And finally run gpg nginx-1.5.6.tar.gz.asc (gpg finds the matching nginx-1.5.6.tar.gz next to it and verifies the signature)

You should get a message about a good signature, however, it’ll not be a trusted signature. You can’t be sure it belongs to the owner. The key would need to be signed by trusted sources, in order to establish the web of trust properly. But for now, we are content.

Then once you are all wrapped in tin foil, go prepare a pot of your favorite coffee and start compiling nginx. Change, add, remove options as needed. This is just from another howto, so you might like different locations for your logs, or include modules that are not included here:

cd nginx-$VERSION
./configure \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --pid-path=/var/run/nginx.pid \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --with-http_ssl_module \
  --without-http_proxy_module
make

After this, you could potentially start nginx using /usr/sbin/nginx, but we’re not done yet.

Chroot

Here, we want to do some potential damage control. The webserver is living inside its own little world, and if someone gets into that world, it’s kind of small and boring, and has no real access to the underlying OS.

We can do this either manually, or by giving the chroot directory (the new root) as a variable:

D=/example
mkdir $D

After this, we need to create necessary directories inside the chroot directory for nginx to work properly.

# mkdir -p $D/etc
# mkdir -p $D/dev
# mkdir -p $D/var
# mkdir -p $D/usr
# mkdir -p $D/usr/local/nginx
# mkdir -p $D/tmp
# chmod 1777 $D/tmp
# mkdir -p $D/var/tmp
# chmod 1777 $D/var/tmp
# mkdir -p $D/lib

Note that we also give permissions to /tmp and /var/tmp at this stage, to keep them writable by everyone, just like they are in the base OS. That makes it easier for non-privileged users to write temporary files during installs, or stuff needed when you are running the server. Some instructions (like the one on Nixcraft that I relied on heavily while doing this) create a lib64 directory inside the chroot. I didn’t even have such a directory in the base Raspbian, so I followed suit inside the chroot by making a lib directory.

Next, create the following device nodes inside the chroot’s dev directory, but first check their special attributes using:

# ls -l /dev/{null,random,urandom}

You’ll get something like:

crw-rw-rw- 1 root root 1, 3 Jan  1  1970 /dev/null
crw-rw-rw- 1 root root 1, 8 Jan  1  1970 /dev/random
crw-rw-rw- 1 root root 1, 9 Jan  1  1970 /dev/urandom

Note column five: the major and minor device numbers, 1,3 and 1,8 and 1,9. You need to set these attributes inside the chroot too. Do a:

# /bin/mknod -m 0666 $D/dev/null c 1 3
# /bin/mknod -m 0666 $D/dev/random c 1 8
# /bin/mknod -m 0444 $D/dev/urandom c 1 9

Next, you’ll copy all the nginx files from your base OS inside the chroot. For instance:

# /bin/cp -farv /usr/local/nginx/* $D/usr/local/nginx

and

# /bin/cp -farv /etc/nginx/* $D/etc/nginx

Next, a trickier part: copying all the libraries nginx needs into the chroot. You can find out what you need by doing a:

ldd /usr/sbin/nginx

You’ll get an output similar to:

/usr/lib/arm-linux-gnueabihf/libcofi_rpi.so (0xb6f94000)
libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xb6f6a000)
libcrypt.so.1 => /lib/arm-linux-gnueabihf/libcrypt.so.1 (0xb6f33000)
libpcre.so.3 => /lib/arm-linux-gnueabihf/libpcre.so.3 (0xb6ef2000)
libssl.so.1.0.0 => /usr/lib/arm-linux-gnueabihf/libssl.so.1.0.0 (0xb6ea2000)
libcrypto.so.1.0.0 => /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.0.0 (0xb6d3f000)
libdl.so.2 => /lib/arm-linux-gnueabihf/libdl.so.2 (0xb6d34000)
libz.so.1 => /lib/arm-linux-gnueabihf/libz.so.1 (0xb6d16000)
libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0xb6cee000)
libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xb6bbf000)
/lib/ld-linux-armhf.so.3 (0xb6fa1000)

All of these need to go to the corresponding locations inside the chroot. There are scripts floating around for checking what you need and copying them over; I just copied them manually because I’m a pleb. You can always come back later; nginx and any other tools you use will tell you if you are missing any libraries, and you can copy them then.
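If you’d rather not copy them by hand, a small helper can walk the ldd output and mirror each library into the chroot. This is my own sketch, not one of the scripts floating around:

```shell
# copy_libs BIN DEST: copy every library ldd lists for BIN into DEST,
# preserving the directory layout. e.g. copy_libs /usr/sbin/nginx /nginx
copy_libs() {
  bin=$1; dest=$2
  # pick every whitespace-separated field that looks like an absolute path
  ldd "$bin" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }' |
  while read -r lib; do
    mkdir -p "$dest$(dirname "$lib")"
    cp -f "$lib" "$dest$lib"
  done
}
```

nginx will still complain at startup if something is missing, so you can just re-run the helper after adding modules.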

Copy the relevant contents of /etc to the chroot. I had problems with the users inside the chroot, but it might have been something I messed up. I was unable to run it using nobody:nogroup, and had to resort to using the uid and gid, but more on that later. If someone knows what I fucked up, and happens to read this, use the comments, thanks! But the copying I mentioned (again thanks to Nixcraft):

# cp -fv /etc/{group,prelink.cache,services,adjtime,shells,gshadow,shadow,hosts.deny,localtime,nsswitch.conf,nscd.conf,prelink.conf,protocols,hosts,passwd,ld.so.cache,ld.so.conf,resolv.conf,host.conf} $D/etc

And some directories (though my raspbian install didn’t have prelink.conf.d at all):

# cp -avr /etc/{ld.so.conf.d,prelink.conf.d} $D/etc

We’re just about done. Kill any existing nginx processes using pkill nginx, or something like killall -9 nginx to do it more violently. Then we can run a test of nginx inside the chroot. This will tell you what is missing (libraries, files etc.), or if your config syntax is wrong:

# /usr/sbin/chroot /nginx /usr/local/nginx/sbin/nginx -t

To run it for real, remove the -t at the end. As I mentioned, at this point I had issues with a line in the nginx config file (/etc/nginx/nginx.conf), namely “user nobody;”. For the life of me I could not get it to run using this user, even though I had it inside the chroot’s /etc/passwd and group files. It just told me unknown user and so on. Changing the user also had no effect; I tried creating a fresh user, but to no avail. Finally, I ended up running nginx with:

/usr/sbin/chroot --userspec=65534:65534 /nginx /usr/sbin/nginx

Where 65534 is the uid and gid (respectively) of nobody and nogroup. Note that we are chrooting into /nginx (my chroot directory for nginx) and then, from there, running /usr/sbin/nginx, which is the nginx binary. After this, we have nginx running under the correct user and group:

nobody    4355  0.0  0.1   4984   724 ?        Ss   Oct07   0:00 nginx: master process /usr/sbin/nginx
nobody    4356  0.0  0.2   5140  1228 ?        S    Oct07   0:00 nginx: worker process

To be absolutely sure that nobody runs the “base OS” version of nginx, you can remove the associated directories, or rename the executable file under /usr/sbin (I called mine nginx_nonchroot), so you can verify that file isn’t being run. Or remove the execute bit with chmod -x /usr/sbin/nginx.

When starting nginx at boot, be sure you are doing it in the right way to ensure it’s inside the chroot:

# echo '/usr/sbin/chroot /nginx /usr/sbin/nginx' >> /etc/rc.local

To verify that your nginx is running inside the chroot, use the process id (second column when you run ps aux | grep nginx; in my example, 4355), by running:

# ls -la /proc/4355/root/

…and you’ll get the contents of the chroot root, i.e. all the directories that sit under the chroot’s /:

drwxr-xr-x 10 root root 4096 Oct  7 19:00 .
drwxr-xr-x 24 root root 4096 Oct  6 23:24 ..
drwxr-xr-x  2 root root 4096 Oct  7 19:11 bin
drwxr-xr-x  2 root root 4096 Oct  6 23:25 dev
drwxr-xr-x  5 root root 4096 Oct  7 19:43 etc
drwxr-xr-x  3 root root 4096 Oct  6 23:36 lib
drwxr-xr-x  2 root root 4096 Oct  7 00:03 run
drwxrwxrwt  2 root root 4096 Oct  6 23:23 tmp
drwxr-xr-x  5 root root 4096 Oct  6 23:27 usr
drwxr-xr-x  5 root root 4096 Oct  7 19:51 var
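A shortcut for the same check: readlink on the proc entry prints the root directory the process sees directly.

```shell
# Prints the root a process sees: /nginx for the chrooted nginx master,
# plain / for a normal process. Substitute your nginx PID for $$ (e.g. 4355).
pid=$$
readlink "/proc/$pid/root"
```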

You can also change the default index page so you can see that that’s the one being loaded. In my case /nginx/usr/local/nginx/html/index.html. You can reload the chrooted nginx using:

# /usr/sbin/chroot /nginx /usr/sbin/nginx -s reload

You can now make sure nginx is listening on your Pi by using:

netstat -pantu | grep nginx

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      4355/nginx   

Browse to the IP assigned to your Pi and see your webpage! Make sure you lock things down with iptables: allow traffic only to the ports you want, and from the addresses you want.
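What that lockdown could look like, sketched out. The ports and LAN range here are my assumptions, and the block only echoes the rules unless you set APPLY=1 and run it as root:

```shell
# Minimal ruleset sketch: keep established traffic, allow HTTP from anywhere
# and SSH from the LAN only, drop everything else. Adjust ports and ranges.
rules="-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
-P INPUT DROP"
printf '%s\n' "$rules" | while read -r r; do
  if [ "${APPLY:-0}" = "1" ]; then
    iptables $r          # apply for real (root only)
  else
    echo "iptables $r"   # dry run: just print what would be applied
  fi
done
```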

Infinite props to Nixcraft for this article, which helped me along the way. The main reason I wrote this was that my install was slightly different, and I figured I’d type my own problems and solutions down. Also, Raspbian has changed slightly (I guess?), so here you are. This howto was also very helpful, thanks to elinux.org.


LSI Updates and Pi

There’s no possible way to make a Raspberry Pi-joke that hasn’t already been made.

LSI

So far so good. Things’ve been working fine, though I have to look into disabling the BIOS, since I’m not booting from any drives that are behind the LSI card. Boot times are three times as long as without the card, even though the OS is loading from the Samsung 840 Pro SSD.

I used MegaRAID Storage Manager for Windows to install the latest BIOS for my card. I went to the LSI site, searched for Host Bus Adapters -> LSI SAS 9211-8i -> Firmware, and downloaded the only available package (at the time this was named “9211-8i_Package_P17_IR_IT_Firmware_BIOS_for_MSDOS_Windows”, released Aug 09, 2013; the same package as for the IR firmware installed in the previous post). Inside the archive, you will find various folders. Look in the folder “sasbios_rel” and check that you have mptsas2.rom in there. That’s the BIOS image.

The good news is, as I mentioned, once you have the Storage Manager software installed, and your card is recognized, you can flash the BIOS from Windows without issues. This should also work for Firmware, but I haven’t tried this yet, as I am already running the latest IR-firmware. Open up SM, and somewhere in the middle you will find Update Firmware. There, select BIOS (middle selection for me), and browse to the folder mentioned earlier. Inside, select the mptsas2.rom file. Hit OK, and it will ask you to check a box and confirm that you want to update the BIOS. After that, it’ll flash, and tell you when it is done. It will show you the old BIOS version until you reboot. My card was 7.29.0.0, and is now 7.33.0.0. Improvements are minimal, but there were some.

One note on the Write Cache, mentioned in the last post. I was unable to enable this from Storage Manager. Perhaps due to the fact that there is no battery backup unit. I’ll have to look more into this at a later date.

Pi

Got me a Pi. The B model, from the local RS reseller, Yleiselektroniikka. Cost me 47 bucks including taxes. It’s the revised Model B, with 512MB of memory. I also got a transparent case, which was 10 bucks. I didn’t get a power supply, because I have plenty of USB chargers for various devices (and a few generic ones) that provide 1A+ @ 5V. My HTC Desire Z charger powered the Pi just fine, even though there’ve been reports of “flaky” mobile phone chargers not working with the Pi.

I have an 8 GB Verbatim SD card for this project, and I dropped the latest NOOBS image from the Raspberry Pi homepage onto the card after formatting it as FAT. I then installed Raspbian from the NOOBS installer, and proceeded to do an apt-get update && apt-get upgrade, which also upgraded the Pi bootloader to the latest version (as was recommended by the small booklet that came with the Pi).

I haven’t done much with the device yet (joining the club of Pi owners everywhere! :)), except hook things up and try it out a bit. It works great! Or just as advertised. Obviously the boot is a little slow, but nothing out of the ordinary, considering the specs. HDMI out works fine; I use an HDMI -> DVI cable for this.

Adventures in LSI-land

I bought an IBM M1015 RAID card, which is actually an LSI 9240 containing a SAS2008 chip. It is a basic card, with no battery backup. It has two mini-SAS ports that can be split into 8 SAS/SATA devices (four per port), which can then be configured as either RAID0 or RAID1. With a feature key (a little IBM fob that is slapped onto the card), you can also get RAID5 and other modes. RAID1, however, is what I wanted.

So, it turns out the IBM M1015 is not detected by three of the four motherboards I tested it on. It worked on an Intel reference board (?) in some old workstation, but not on:

  • MSI MS-7497
  • MSI M2N-E
  • Asus P8Z68-V GEN3

Out of those, the 7497 and the Asus did not even boot, just a blank screen if the card was in (any PCI-E slot). I also couldn’t find any BIOS settings that would make it boot. Nothing would ever come on screen. The M2N booted just fine, but no card was detected.

So, instead of giving up, I decided to try and flash the card with a different firmware. In a process known as crossflashing, you essentially clean up the firmware on the card, and flash something that wasn’t originally intended to go on the card.

For this card, there are three alternatives (at least):

  • Original IBM M1015 (LSI 9240) firmware, as provided by IBM
  • LSI 9211-8i IT-firmware (tried this one too, machine booted fine and detected the drives behind the card)
  • LSI 9211-8i IR-firmware (I picked this one)

The latter two are provided by the actual manufacturer of the card, LSI. The differences (as listed by this site) are as follows:

IBM M1015 firmware

  • Can do RAID0 and RAID1. Contains Web-Bios for controlling settings, perhaps other IBM branding

LSI IT-firmware

  • Can do straight passthrough, without RAID. Apparently ideal for ZFS for instance.

LSI IR-firmware

  • Can do RAID0 and RAID1, 1E and 10, as well as passthrough

The flashage!

So, to flash the card, you need a machine that (obviously) can recognize the card. I had two of them, the Intel reference motherboard-box, and an IBM x3690 X5 server (UEFI, more on this later). On top of this, you need a bootable USB stick. I used a Kingston U3 USB stick, which is recognized by most machines, and works great. On that stick, I have FreeDOS, the LSI Megacli/megarec tools, as well as the required firmware- and bios-images. I can make a package of the files that I have, so you can slap em on a card. To get FreeDOS on a stick, check here and then here, for instance. I also needed the UEFI Shell file, again, more on that later. But you might as well put that file on your stick too.

To start off, you need to clean out whatever is on the 16MB flash chip on the card. First, note down the ‘SAS Address’ of the card (it’s printed on a sticker on the card itself); you’ll need it to restore the address after flashing. Then boot up to FreeDOS, move to the directory where you have megarec and the firmware files, and run:

megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

After this, you have a card with pretty much nothing on it. If you do not flash a firmware onto the chip, you have what is effectively a dead card. Reboot the computer. You should no longer see the machine detect the card, as it will not load the BIOS of the card. Now, there are two ways to move forward: either boot back into the FreeDOS environment and flash the correct firmware to the card, or load up a UEFI shell (depending on your hardware) and do the flashing from there. You should start with the FreeDOS way:

sas2flsh -o -f 2118ir.bin -b mptsas2.rom
sas2flsh -o -sasadd put_your_SAS_Address_here

A note about the first command: choose which firmware you want to flash, either IT or IR. Note that you can flash between any of the firmwares after the fact; just do megarec -cleanflash 0, and then proceed to the second step with the new firmware of your choice. You can leave out the -b mptsas2.rom part. That is the BIOS of the card, which you do not need if you do not intend to boot off a RAID array behind the card. Boot times will be faster if the BIOS isn't loaded. I put it in just for good measure (and yes, the boot slow-down is noticeable).
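For example, to move from the IR firmware to the IT firmware later on, the sequence would roughly be: wipe the flash, reboot back into FreeDOS, then flash the IT image and re-program the SAS address. Note that 2118it.bin is the file name used in the LSI firmware packages I have seen; yours may differ:

megarec -cleanflash 0

sas2flsh -o -f 2118it.bin
sas2flsh -o -sasadd put_your_SAS_Address_here

(with the reboot happening between the first and second command, just like in the initial flash).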

The UEFI caveat

If, when running the first command, you get "ERROR: Failed to initialize PAL. Exiting program.", there is a problem with your motherboard's BIOS and/or you have UEFI instead of a BIOS. I can confirm that this happened on a regular old workstation (3 years old, maybe?) which does not have UEFI (or then I'm blind and dumb), so I'm not exactly sure what causes the error in that case. Either way, I had to move the card to a server that actually has UEFI, in my case the IBM x3690 X5. This server, however, does not ship with a UEFI shell, for some inconceivable reason. But I was able to boot the UEFI Shell .efi file that I downloaded previously by entering UEFI setup, going to Boot Manager, and selecting Boot From File. Then I navigated to the USB stick where the .efi file was, hit enter, and soon I was in the UEFI shell.

Some notes about the shell. It's Unix-like, but there are a few specific commands you need here. Firstly, to switch between the disks it detected, use fs0:, fs1: and so on. In my case, the USB stick was fs0. After that, you can use either standard DOS or Unix commands to list files, so either ls or dir. Navigate to the directory where you have your megacli and whatnot using cd, as usual. There, you can use the following commands to flash the firmware (and BIOS):

sas2flash.efi -o -f 2118ir.bin -b mptsas2.rom
sas2flash.efi -o -sasadd put_your_SAS_Address_here

Again, you can leave out the -b mptsas2.rom if you don’t need the BIOS. This time, I had success in flashing the card.

After the commands are done, reboot the machine. You should now see it loading an LSI 2008 banner instead of the IBM M1015 one. You can use Ctrl-C to enter the LSI configuration utility, where you can set card options and create RAID arrays.

Performance and management

A note about performance: when I created the RAID 1 array (consisting of two WD 1TB Red drives), the background initialization that started (there was apparently also a fast initialization option) had a significant performance impact. Running CrystalDiskMark x64, I got around 85MB/s sequential reads. When the init was done, these were the figures:

LSI 9211-8i performance with two drives in RAID 1

Noteworthy is the write performance. After the init was done, I got a log entry stating the write cache is disabled. Since this card has no battery backup (being an entry-level card), the write cache probably should be disabled. If it were enabled from the card options (I might try this later), write performance would be significantly better. But since this is mostly for storage (far more reads than writes), this is not a concern for me. Data integrity is more important.
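If I do try enabling the write cache later, MegaCLI should be able to switch the write policy per logical drive. The following is a sketch based on general MegaRAID documentation, not something I have verified on this particular card:

MegaCli -LDSetProp WB -LAll -aAll
MegaCli -LDSetProp WT -LAll -aAll

The first sets write-back (cache enabled), the second write-through (cache disabled).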

In Linux, you can use the same MegaCLI from LSI to manage and view the card status. In Windows, you can use a similar graphical program from LSI called MegaRAID Storage Manager, which supports most versions of Windows desktop and server. To download either of these, visit here and select Host Bus Adapters, then LSI SAS 9211-8i, and your relevant download category (e.g. Management Software and Tools). Also get the driver for your operating system from the same site, even though both Windows 7 and Linux supported the card out of the box.
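For basic status checks from the command line, these MegaCLI invocations should be a reasonable starting point (on 64-bit Linux the binary may be called MegaCli64; treat the exact spelling as approximate):

MegaCli -AdpAllInfo -aAll
MegaCli -LDInfo -LAll -aAll
MegaCli -PDList -aAll

The first prints adapter information, the second the state of the logical drives (including any running initialization), and the third lists the physical drives.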

Oh, also, in case you were wondering: the cable I got to hook up the SATA drives to the card was this one, the DeLOCK Mini SAS 36 pin (SFF-8087) -> 4 SATA cable. The price at the time of this article was 17,90€.

Sources:

I would like to thank the following pages. Without them, this would not have progressed.

http://blog.monsted.dk/?q=node/5
http://www.servethehome.com/ibm-serveraid-m1015-part-4/

Upgrading hard drives – part deux

So all is now well and installed. The motherboard SATA mode is set to “RAID” (was “AHCI”).  The configuration is now as follows:

  • 256GB Samsung 840 Pro SSD – System drive for Windows 7 and Linux
  • 2 x Western Digital Red 1TB in RAID1 configuration (using the built-in RAID chip on the motherboard, an Asus P8Z68) – for games, and data that I use actively. Other data, stuff I don't need often, is on other systems.

I had to take out the motherboard, all expansion cards etc., because I had installed my previous SSD behind the motherboard tray in the Fractal Design Define R4 case. It has space for two 2.5″ drives back there. I was sweating buckets, because it's still incredibly hot here (unseasonably so), and because one of the mounting screws for the motherboard had gotten stuck. One Leatherman, one pair of pliers and a screwdriver or two later, it was out safely, without scratching the motherboard or prying loose a single component! Empty bucket of sweat, and get back to work.

Reinstalling Windows 7 was a breeze. Empty SSD, and the RAID1 array, all were detected during the install with no additional drivers needed. Easy!

I’ve yet to install Linux (I’m now on Manjaro, a nice Arch-spinoff), and still kind of sorting out drivers and applications in Windows. But, I did manage to run Crystal Disk Mark on the drives. And here are the results. Everything I had expected:

256GB Samsung 840 Pro SSD
2x1TB WD Red in RAID1

Upgrading hard drives

So this post will be about my new hard drives, which arrived today. I got one 256GB Samsung 840 Pro SSD, and two Western Digital Red 1 TB drives. They will replace my current drives, which are a mixed bag, topped off with a 128GB Samsung 830 SSD.

The 830 is obviously still one of the better models out there, even compared to later ones (though it takes a beating from the 840 Pro), and would have been good enough for years to come. But I wanted more space, and I didn't want to switch to something slower than the 830; ergo, I ordered the 840 Pro for around 220 euros.

As for the Western Digitals, they will be running in RAID1. My storage is elsewhere; this is for local stuff, which doesn't need much space but would benefit from redundancy. So instead of the current two 500GB drives and one 1TB drive (a WD Green, ugh), I'll have the 1TB RAID1 and the 256GB SSD. Fewer drives, more modern drives (lower power consumption), and faster drives. All around a good deal. For future reference, the WDs were 80-something euros apiece.

I will start out using the on-board RAID functionality that comes with my Asus P8Z68 motherboard. It'll probably not be that good, but it'll work. Then, once I get the SFF-8087 cable that is on backorder, I can move to the IBM M1015 RAID card that I bought. It's a PCI Express x8 card that does RAID 0 and RAID 1, and has two SFF-8087 connectors, each of which can support four SATA/SAS drives.

And yes, I realize I’ll have to wipe the drives when moving from the internal to the discrete RAID-card. This doesn’t bother me.

More on all of this later, once I have things installed. Meanwhile, have a picture.

2 x WD Red 1TB, Samsung 840 Pro 256GB

Blabbity blab

Nothing specific to talk about, but I felt like writing anyway.

Don’t multihome vmk ports in ESXi

Multihoming vmk ports on ESXi 5 (?) and later is not kosher. It'll allow you to make the config, and it'll even work, for a random period of time. You probably want separate physical ports for management and vMotion, so you're bound to have two vmk ports; just don't put them on the same subnet/VLAN. This was perhaps supported in ESX 4 and earlier, but not in any later version of the VMware hypervisor. This KB article helped out a lot, as well as this quickhand on ESXi shell network commands. The setup was roughly the following:

  • vmk0 – management – vSwitch0 – 10.10.10.1
  • vmk1 – vmotion – vSwitch1 – 10.10.10.2

One host with this config dropped off the network, and the management port wouldn't respond. The other vmk interface still responded perfectly, and since the virtual machines were on separate vmnics and vSwitches, they were unaffected as well. But vCenter lost connectivity to the host. Obviously, migrating the VMs off the host was not an option, as there was no way to reach it through the vSphere Client. The cluster did not have HA enabled.

To fix it, the steps were roughly:

  1. Enable ESXi Shell, if it isn’t already, through the DCUI -> Troubleshooting options -> Enable ESXi Shell
  2. Hit Alt-F1 to go to the shell
  3. Disable the physical NIC backing the non-management vmk (in our example, vmk1, used for vMotion) using esxcli network nic down -n vmnicX   ##make sure you get the right vmnic, double-check in DCUI
  4. You can Alt-F2 back to DCUI and check out the network settings to verify that it’s down. Once the conflicting vmk is down, the primary one should start working, and you’ll have management back. If necessary, restart management agents / network from DCUI.
  5. There’s also esxcfg-vmknic -d (for delete, -D for disable) portgroup. To list the portgroups, use esxcfg-vmknic -l (and locate the conflicting, non-management vmk, and check the name of it)
  6. When management is restored (you can verify by running Test Management Network in DCUI, and by pinging your management IP), do the rest from the vSphere Client: restore whatever vmk you disabled, and the functionality it had (be it vMotion or so). This time, make sure you use a separate subnet/VLAN, not the same as for management
  7. Also NOTE that if you used the ESXi Shell to disable a NIC, you have to enable it from there as well (esxcli network nic up -n vmnicX). I've found no way to bring a vmnic up in the vSphere Client. If you know of a way, please let me know in the comments. I had to make an extra trip to the data center to get the interface up, and then finalize the config in the vSphere Client.
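Condensed, the shell side of the recovery looks roughly like this; vmnic1 here is only a stand-in for whichever NIC carries the conflicting vmk, so double-check in DCUI before taking anything down:

esxcfg-vmknic -l
esxcli network nic list
esxcli network nic down -n vmnic1
esxcli network nic up -n vmnic1

The first two commands identify the conflicting vmk and its backing NIC, the third takes that NIC down so the management vmk recovers, and the last brings it back up once the vmk has been reconfigured onto its own subnet/VLAN.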

Considering a Soekris or Mikrotik

For years (uh, say, 8 years?) I've used an older workstation PC as my network firewall/router: two Intel 1Gbps NICs, lately an SSD, plus OpenBSD & pf. It's a rather clunky solution for a simple task, but it has served me well for years, without too many problems. After listening to TechSNAP (the latest couple of episodes, I guess), I've been thinking about replacing that box with something smaller, such as hardware from Soekris or Mikrotik. Soekris boxes are a bit expensive, but they are perhaps more fully fledged than the Mikrotik. Both, as I understand it, allow for your own choice of OS. I would still be running BSD (be it Free or Open), because that's what I sort of trust with these matters. The other option is to buy an Atom board, slap on 2-4GB of memory, two NICs (or a multiport NIC) and the SSD that I already have, and run that in a smaller form factor case. I'm more of a do-it-yourself kind of guy, so I might end up going that route anyway.

Reading stuff

I’ve been reading a lot lately. Well the past 10 years maybe. My dad tends to remind me that back in school I didn’t like reading too much (perhaps because I didn’t usually need to work too hard to pass courses (except for math), or maybe I just hadn’t found my thing yet. Or maybe I was an immature brat? Perhaps. Anyway. What I’m reading right now is the Bridge Trilogy, by William Gibson. No big shocker here, I’ve read his works multiple times. I think this trilogy is the one I’ve read the least. That’s not to say it isn’t good, but it’s just gotten less attention from me. I’m on the final book now, ‘All tomorrow’s parties”. After that I’ll hop away from Gibson, and move on to James Bamford’s “The Shadow Factory”, a book on the NSA.

Since I misplaced my copy of Stealing the Network – How to Own a Shadow (probably lent it out to someone who doesn't remember, or who really liked the book), I ordered a used copy from Amazon. The condition was listed as very good, and it came exactly in that shape…

…only it smells like weed. You know? Mary Jane? Now, it might just be from hemp-scented incense, or maybe a pot-head security guy. I don't mind really, but I still put the book outside for a while to get the worst fumes out. Luckily nobody had ripped out pages to roll their joints in. I guess the book would then have been listed as… Cannabilized. Get it!?!