Patching ScaleIO 2.0 VMware Hosts

Recently did an ESXi patch run, along with some BIOS and firmware updates, on a ScaleIO 2.x environment (more precisely 2.0.5014.0). The environment consists of some Dell PowerEdge servers, some running ESXi 6.0 build 3380124, others being Linux-based, non-virtualized hosts. Luckily this environment was ScaleIO 2.x, because this version has a real maintenance mode in it (1.3.x did not). This means that while I can only patch one host at a time in this layout, I can do it fairly quickly and in a controlled fashion.

ScaleIO Maintenance Mode vs. ESXi Maintenance Mode

These are, obviously, two different things. With ScaleIO maintenance mode, you can put one SDS host (the component providing storage services) into maintenance mode at a time (at least in this configuration with two MDMs) without an adverse impact on the cluster. The remaining SDS nodes will take care of operations, provided they don't break or go down at the same time. After you are done patching, you exit maintenance mode, which then makes sure all changes are rebuilt and synced across the cluster nodes. This takes some time depending on the amount of data involved.

ESXi maintenance mode on the other hand, deals with putting the VMware hypervisor layer into maintenance mode so you can patch and perform other operations on it with no VMs running. The order is:

  1. ScaleIO
  2. VMware ESXi

And when coming out of the maintenance break, it’s the reverse.

I left the SVM (the virtual machine on the host which takes care of the host's different ScaleIO functions; technically a SLES appliance) on the host I was patching, but I powered it down gracefully before putting the host into maintenance mode.

So accounting for all these things, my order was:

  1. Migrate all running VMs except the SVM off of the host using vMotion
  2. When the host is empty (bar the SVM), put ScaleIO into maintenance mode
    1. This is done via the ScaleIO GUI application, on the Backend page, by right-clicking on the host. I did not have to use the force option, and neither should you…
  3. Shut down the SVM via “Shut Down Guest” in vCenter
  4. Put the host into maintenance mode without moving the SVM off the host (I suppose you could move it, but I didn’t)
  5. Scan and Remediate and install other patches (I installed BIOS, iDRAC and some other various updates via iDRAC; I had set them to “Install next reboot” so they would be installed during the same reboot as ESXi does remediation)
  6. Once you are satisfied, take the host out of maintenance mode
  7. Start the SVM on that host
  8. Wait for it to boot
  9. Exit ScaleIO maintenance mode (see 2.)
  10. Check to see that rebuild goes through (ScaleIO GUI application, either the Dashboard or Backend page)
  11. Make sure all warnings and errors clear. During host remediation and patching, I had the following errors:
    1. High – MDM isn’t clustered (this is because you’ve shut down one of the SVMs containing the MDM role)
    2. Medium – SDS is disconnected (for the host being remediated)
    3. Low – SDS is in maintenance mode (for the host being remediated)
  12. After the SVM starts, it should clear all but the last alert, and once you have exited maintenance mode, the final alert should clear
Exiting maintenance mode in ScaleIO GUI application
Rebuilding after exiting maintenance mode in ScaleIO
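
For reference, steps 2 and 9 can also be done from the ScaleIO CLI on the Master MDM instead of the GUI. A minimal sketch, assuming scli is available on the MDM's SVM; the SDS name is hypothetical, so check the syntax against the user guide for your version:

    scli --login --username admin                       # prompts for the MDM password
    scli --enter_maintenance_mode --sds_name sds-esx01  # step 2; no --force needed on a healthy cluster
    # ...patch the host, reboot, start the SVM...
    scli --exit_maintenance_mode --sds_name sds-esx01   # step 9; kicks off the rebuild/sync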

(Expected) Alerts during maintenance

As mentioned, you will have alerts and warnings during this operation. I had the following:

  • First, when putting the SDS into maintenance mode in ScaleIO, one warning about SDS being in maintenance mode:
SDS still on, ESXi not in maintenance
  • After SVM is shut down and ESXi is also placed in maintenance, two more:
All three alerts after host is in maintenance and SVM has been shut down
  • Then once you have remediated and taken the host out of maintenance, and started the SVM, you’re back to one, as in the first picture.
  • When you take the SDS out of maintenance, it will clear the last alert

Note that the highest-rated alert, the High "MDM isn't clustered" one, is actually noteworthy. It means that the SDS you are taking down for maintenance holds an MDM role (critical for the management of ScaleIO). Normally you'd have another MDM, and you shouldn't proceed with any of this if you can only find one MDM, or if you already had this (or any other) alert before starting.

EMC has this to say about MDMs (also see the document h14036-emc-scaleio-operation-ensuring-non-disruptive-operation-upgrade.pdf):

Currently, an MDM can manage up to 1024 servers. When several MDMs are present, an SDC may be managed by several MDMs, whereas, an SDS can only belong to one MDM. ScaleIO version 2.0 and later supports five MDMs (with a minimum of three) where we define a Master, Slave and Tie-breaker MDM.
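
Before starting any of this, it's worth confirming the MDM cluster state and which node holds which role. A minimal sketch from the Master MDM's SVM (hedged; the exact output varies by version):

    scli --login --username admin   # log in to the MDM first
    scli --query_cluster            # shows cluster mode, Master/Slave MDMs and Tie-breaker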

Roles / Elements in ScaleIO

You can see the installed roles in VMware in the notes field, like so:

Roles in the Notes field in VMware

Elements or roles are (may not be a complete list):

  • MASTER_MDM – Master MDM node, Meta Data Manager, enables monitoring and configuration changes
  • SLAVE_MDM – Secondary MDM node, will take over if Master is unavailable
  • SDS – Storage node, ScaleIO Data Server, provides storage services through HDD, SSD, NVMe etc.
  • SDC – ScaleIO Data Client, consumer of resources (e.g. a virtualization host)
  • RFCACHE – Read-only cache consisting of SSD or Flash
  • RMCACHE – RAM based cache
  • LIA – Light installation agent (on all nodes, creates a trust between node and Installation Manager)
  • TB – Tie-breaker, used in case of conflicts inside the cluster; counted as a type of MDM, non-critical except in HA/conflict situations

ESXi funny business…

While running remediation on the hosts, every single one failed when installing patches.

Scary Fatal Error 15 during remediation

A very scary-looking Fatal Error 15. However, there's a KB on this here.

So: (warm) reboot the host again, wait for ESXi to load the old pre-update version, and re-remediate without using the Stage option first. I had staged the patches, as I'm used to doing; apparently this breaks things. Sometimes.

And to reiterate, I was patching using vCenter Update Manager (VUM), from 6.0 build 3380124 to build 5050593.
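
After the second, successful remediation pass, you can confirm the running build from the ESXi shell or over SSH. A quick sanity check along these lines (the build number in the comment is just the target from above):

    vmware -vl                  # e.g. "VMware ESXi 6.0.0 build-5050593"
    esxcli system version get   # same information, esxcli style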

Sources

docu82353_ScaleIO-Software-2.0.1.x-Documentation-set.zip from support.emc.com (not actually for the version in use, but similar enough in this case; use at your own risk)

ScaleIO v2.0.x User Guide.pdf, contained in the above-mentioned set

https://community.emc.com/thread/234110?start=0&tstart=0

https://www.emc.com/collateral/white-papers/h14344-emc-scaleio-basic-architecture.pdf

https://www.emc.com/collateral/white-papers/h14036-emc-scaleio-operation-ensuring-non-disruptive-operation-upgrade.pdf

Home Lab Xeon

The current home lab setup consists of an Intel Core i3-2100 with 16GB of DDR3, a USB drive for ESXi (on 6.5 right now) and a 3TB WD for the VMs. While the Intel i3 performs perfectly for my needs, I came across a Xeon E3-1220 (SR00F, Sandy Bridge), which should be even better!

For the specs, we have the following differences:

  Model:                  Intel Xeon E3-1220               Intel Core i3-2100
  Released:               Q2 2011                          Q1 2011
  Manufacturing process:  32 nm                            32 nm
  Original price:         189-203 USD (more in euroland)   120 USD
  Core count:             4 cores                          2 cores
  Hyperthreading:         No                               Yes
  Base frequency:         3.10 GHz                         3.10 GHz
  Turbo frequency:        3.40 GHz                         No
  TDP:                    80 W                             65 W
  Max memory:             32 GB ECC DDR3                   32 GB non-ECC DDR3
  L1 cache:               128 + 128 KB                     64 + 64 KB
  L2 cache:               1 MB                             512 KB
  L3 cache:               8 MB                             3 MB

So we can see that the Xeon is a 4-core processor without hyperthreading: real cores, as opposed to the i3's threads. It's more power hungry, which is to be expected, but it can also turbo at a higher frequency than the i3. The Xeon also has more cache, which is likewise to be expected with a server-grade component.

A notable thing is that the Xeon, being a server part, does not include the GPU component, so I'll have to add a GPU at least for the installation. I run the server headless anyway, but I want to see it POST at least. A plain PCI graphics card is out, as the board has no PCI slots, and the only PCIe x16 slot is used by the NIC (there is an x1 slot too, but I have no x1 cards). The motherboard is an Asrock H61M-DGS R2.0, which has one x16 slot and one x1 slot. Maybe I'll do it all headless and hope it POSTs? Or take out the NIC for the installation?

Some yahoo also tried running an x16 card in an x1 slot here. I might try that, but since I'd have to melt off one end of the x1 slot, probably not.

There are apparently some x1 graphics cards, but as mentioned, I don't have one. An option could be the Zotac GeForce GT 710, which can be had for 60 euros as of this post.

Preparations

I went to the pharmacy to get some pure isopropyl alcohol. It wasn't on the shelf, so I had to ask for it. I told the lady I needed some isopropyl alcohol, as pure as possible. She looked at me funny and said they had some in stock. I told her I'm using it to clean electronics, so she wouldn't suspect I'm some sort of cringey, soon-to-be-blind (not sure if you go blind from this stuff, but it can't be good for you) wannabe alcoholic, to which she replied that she doesn't know what I'll do with it, or how it will work for that. She got the bottle, which is described as "100 ml Isopropyl Alcohol". There is a mention of cleaning vinyl discs and tape recorder heads on the back, so I was vindicated. There's no indication of purity on the bottle, but the manufacturer lists above 99.8% purity here. Doesn't exactly match the bottle, but it's close.

Why did I get isopropyl alcohol? Well, because people on the internet said it's good for cleaning residual thermal paste off processors and CPU coolers. With common sense 2.0, I can also deduce that anything with a high alcohol content will evaporate and not leave behind anything conductive to mess things up. Oh, and it cost 6,30€ at the local pharmacy. It's not listed on their website (or it says it's no longer part of their selection).

Let’s see how it performs. I’m using cotton swabs, but I suppose I could use a paper towel. If it leaves behind cotton pieces, I’ll switch to something else.

The Xeon originally had a passive CPU block and a bunch of loud, small case fans, but I will use the same cooler as for the i3.

Take out the i3 and the cooler. Clean the cooler off with the isopropyl:

Isopropyl worked wonders

Put in the E3 and apply new thermal paste. I used some trusty Arctic Silver 5.

Thermal paste added, note artistic pattern

Re-attach the cooler and we're off to the races. I'll note here that I hate the push-through-and-turn type attachments of the stock Intel cooler. Oh well, it'll work.

 

Powering on

Powering the thing on was the exciting part. Will there be blue smoke? Will it boot headless? Will it get stuck in some POST screen and require me to press a button to move on? Maybe even go into the BIOS to save settings for the new CPU?

Strangely enough, after a while, I started getting ping replies from ESXi, meaning the box had booted.

There's really nothing left to do. ESXi 6.5 recognized the new CPU, and VMs started booting shortly after.

Xeon E3 running on ESXi 6.5
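
If you'd rather double-check the swap from the shell than from the vSphere client, something like this should do (a sketch; enable SSH or the ESXi Shell first):

    esxcli hardware cpu global get   # should now show 1 package, 4 cores, 4 threads, no hyperthreading
    esxcli hardware platform get     # board and vendor details, for the curious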

My Intel Core i5 Skylake Build

After four years on an Intel i5-2500, I decided it was time for an upgrade, partly because I want to pass down some components to the other people in the household. While the i5-2500 (bought in 2012) still performs admirably, and the new CPU will not be significantly faster, it will sit in a motherboard with modern connectors (USB 3.0, 3.1, M.2, etc.), and the memory gets brought up to DDR4. The i5-2500 wasn't the latest processor when I bought it either; Ivy Bridge (the 3xxx series) was out, but still a bit costly. This time, I plunked down for the latest generation, simply because it's the second 14nm CPU family from Intel, the first being Broadwell; it represents the "tock" in the (now defunct?) Intel tick/tock development model. The tock stays on the same manufacturing process as the previous tick, but optimizes performance and reduces power consumption. The CPU I went with, the Core i5-6600K, is similar to the one I have. The main difference (other than the four generations of Intel CPUs in between) is the K, signifying an unlocked CPU. While I don't usually go for overclocking, I might want to squeeze some extra performance out of this one at some later date, seeing as my upgrades are few and far between.

As an interesting fact, this Intel Core i5-6600K cost 270€, while the i5-2500 cost a little under 200€ back in 2012 (196€ I think). The next step up would basically have been a locked i7-6700, at 354€. The locked version of the CPU I got (the plain i5-6600) would have been 253€, so 17 euros less.

For the motherboard, I picked the Asus Z170 Pro Gaming. There are cheaper alternatives (B and H chipsets, starting at around 60€), but I figured that with a semi-expensive CPU, I'd better not cheap out on the motherboard. I actually bought a bundle, which contained the motherboard and an Asus ROG Gladius mouse (which isn't actually that bad; it costs around 60€ bought separately).

For the CPU cooler, I didn't want to go all out for a Noctua at 70-90 bucks. I instead opted for the fairly well-priced Cooler Master Hyper 212 EVO. At 42€ it's a mid-range cooler, which has done fairly well in the reviews I read.

Rounding everything off, I got 16 GB of Kingston's HyperX Fury memory, operating at 2666 MHz: a kit of two 8 GB sticks, which set me back 83€. I could have opted for cheaper memory; words like "Fury" or "Hyper" do not really factor into my daily usage profile. But it was a 16 GB kit certified compatible with the motherboard, as per Asus' documentation. That is important to me. (The cheapest 16GB DDR4 kit/stick right now costs 64€.)

Pile o' Parts

Preparations

Installation started with a backup of my system. I use Veeam Endpoint Backup Free (v. 1.5), backing up to a 3TB NAS. In case I need a bare-metal recovery, there's an ISO file that I can burn to a disc or throw on a USB stick. I probably won't need it, but you can never be too sure. What's being removed is:

  • Asus P8Z68-V Gen. 3 Motherboard
  • Intel Core i5-2500
  • 16 GB of DDR3 memory (4x4GB sticks)
  • Noctua NH-U12P (dual fan)

So I’m not touching the case (Fractal Design Define R4 Pearl Black), storage (Samsung 840 Pro 256GB SSD, 1 x WD Red 2TB drive + Intel 910 SSD PCI-E card, 400GB), PSU (Corsair 650 TX), GPU (Nvidia GTX 960) and other assorted bits and bobs.

Oh, I am getting rid of my Razer Blackwidow keyboard and my Razer Deathadder Chroma, because Razer software is shit. It annoyed me to the point of throwing out 200 € worth of Razer stuff. The mouse is already replaced, the keyboard will wait for Assembly 2016, where I will buy a Ducky. Maybe this will be another blog post later on.

I am also taking the time to clean out the case of dust and so on. A good thing to remember during the hot summer. Dust makes for bad air flow, and bad air flow makes for hot computers. And I don’t mean the sexy kind!

Tear-down

The case was absolutely full of dust. Luckily there is at least *a* filter in the bottom of the case, but most fans were still in pretty bad shape. I started by disconnecting all case-to-motherboard cables, as well as PSU-to-motherboard cables. After that, I removed the motherboard/CPU/memory/cooler combo. I had forgotten how heavy the Noctua NH-U12P was!

A quick dust-off, and the case was ready to receive the new parts.

Build.. up?

Ah! Forgot about the I/O backplate. Remove that, and insert the new one that came with the Z170 motherboard.

Check that motherboard standoffs are all in shape and tighten them.

I opted to install the CPU and cooler prior to putting the motherboard in the case. Socket 1151 installation was very simple with the included CPU installation tool: snap the CPU into the plastic install tool, put the tool with the CPU inside into the socket, and close the socket latch. I was surprised that you can actually leave the tool in place, but it fits, and the instructions tell you to do just that.

First, attach the plate that goes behind the motherboard for cooler mounting. This wasn't too hard, but it was nice to have an extra pair of hands to help. You have to flip the board in order to attach the bolts on the other side. A handy tool is included for tightening them.

Small dab of thermal paste in the middle of the CPU (I always do it this way), and attach the cooler to the previously attached backplate. Very easy, although you do have to apply a small amount of force to get the spring-attached screws to bite properly.

Smoke test

I like to run Memtest86 for a night after a new build is done. Also a few hours of furmark / prime95 just to see that things are stable. Some people advocate even longer tests, and there might be arguments for this, but I’m content. Temps for the new build are very good, even with the budget-priced Cooler Master. I can readily recommend this combination (i5 Skylake + CM 212 EVO) based on my experiences.

Idle temps are X degrees, and during testing (say 3DMark), CPU reaches around Y degrees C.

Conclusion and final words

The performance increase isn't really noticeable. Not that I expected one. Here are some 3DMark results comparing the previous build (i5-2500 with DDR3 memory) and the current build (i5-6600K with DDR4). Most other components are the same.

3DMark Firestrike: 6401 vs 6608 (where the biggest differentiator was the physics score, in which this Skylake build scored 1000 points more than the older i5)

3DMark Sky Diver: 17444 vs 18394 (again CPU bound tests made the difference)

3DMark Cloud Gate: 15435 vs 17454

And finally, just as a joke, 3DMark 11: 9250 vs 9550 (the CPU score again differed by about 1000 points in favor of the Skylake)

After writing this article, I've upgraded the BIOS twice: once to version 1901 and then to 1904. Both have been stable, with no noticeable differences. I've used the EZ upgrade thing in the BIOS, and it's worked fine. It can also connect to the internet, but that requires an extra reboot, so I've just placed the file on a disk and then browsed to that disk in the BIOS. We've come a long way from booting to FreeDOS or something from a floppy or USB stick and then flashing! There's also an option to do this from Windows, but I've usually opted to do it in the BIOS. Just feels safer.

Messy build is done
Sure, it’s not cable managed and there’s no color coordination. Sorry!

Oh, and also: I ended up getting a Turtle Beach Impact 500 at Assembly Summer 2016. There were no Ducky keyboards for sale (typical, they've been there every year..). But on the other hand, this 69€ keyboard has performed like a goddamn champ! Cherry MX Blue switches, tenkeyless, with uh.. 6-key rollover? Enough for my needs. Very good feel, compact, solid build and a detachable cable. Based on two months of usage, get this thing if you're looking for a cheap, minimalistic mechanical keyboard!

Turtle Beach Impact 500

All of my Razer stuff is in the garbage now. Adios!

Lenovo Thinkpad T460s First Impressions

I recently switched laptops from the T440s to the T460s. I've long been a fan of the Thinkpads, both during the IBM period and the Lenovo reign of late. The T440s was a bit of a mistake in my opinion. Sure, it performed as you'd expect, but the mouse was a huge pile of dung, and the keyboard wasn't nice either. My favorite is still the T410s, which had the non-chiclet keyboard, similar or identical to the one the old IBM Thinkpads had. I had a bunch of issues with the T440s over its lifespan of two years and some-odd months. The SSD broke early on and had to be replaced. I broke the keyboard (no fault of Lenovo's, but still), and one USB port is unusable (not sure why). Battery life is still good after two years of business use, and it has no technical faults other than the ones I listed. It'll still serve as my secondary machine, and probably do so for quite some years.

Plain old packaging

I got the T460s hot off the press, just a week or so after release. I opted for the 20F9-0043MS model, which has the full-HD matte screen, 4 + 4GB of RAM (which I expanded to 20GB by switching out the removable 4GB stick for a 16GB one), a Core i7-6600U processor, and so on.

Hardware

First, let’s look at the hardware. We have output from CPU-Z first, showing the features of the CPU:

Detail of the main page, showing Skylake U/Y series CPU. Note the rather cool 15W TDP and 4MB L3 cache, plus the awesome 14nm manufacturing process.
Detail of memory page. Total of 20GB DDR4, 4GB internal soldered on the motherboard, + 16GB SO-DIMM
Mainboard details. Proprietary Lenovo motherboard, running 1.05 BIOS (later upgraded to 1.08)
CPU-Z Cache page listing the CPU caches

Then GPU-Z, showing the integrated Intel HD Graphics 520:

GPU-Z output. Chip is Skylake GT2 from last fall

 

Then we move on to the SSD, which is an M.2 type drive and not your standard 2.5″ SSD. I'll get an internal picture for you later, but opening the bottom of the machine (which is much easier than on the T440s, which had icky plastic tabs that were too easy to break off) shows you all the user-replaceable parts, which are very easily accessible! The SSD is manufactured by Samsung, however the model seems to be something sold to OEMs (the catchy MZNLN256HCHP). Some forums speculate that it is similar to the 850 (EVO?) model, but nothing certain.

Here’s some output from SSD-Z:

Some data on the Samsung SSD. Sata-3 bus, 256GB

 

CrystalDiskMark 5.1.2 results for the T460s

If you want to compare performance (I'm not saying CrystalDiskMark is the ultimate tool, and these are not official testing conditions, but they are .. comparable, I would wager) to some select SSDs, here are my Intel 910 (PCI-E card) results, the Samsung 840 Pro results, the T440s results and finally the venerable T410s results. All results with 64-bit CrystalDiskMark version 5.1.2, default settings.

Mobile Connectivity

There's a 4G/LTE card in this model, which is a Sierra Wireless EM7455 Qualcomm Snapdragon X7 LTE-A WWAN modem. The fun part was taking out the SIM caddy, which was surprisingly already occupied! There was a "Lenovo Connect" SIM card inside. Apparently, Lenovo has partnered with a number of carriers worldwide (115 countries according to Lenovo). But since that costs extra, and I already have such connectivity in the countries I need to travel to, I took the SIM out. You might want to have a look at it, but it looks like most packages have data caps, which I dismiss on principle. The prices don't look.. bad, I suppose. Here's the link: http://shop.lenovo.com/fi/fi/lenovoconnect/index.html

As for the 4G performance, I tested it in Lapland, which has superb 4G connectivity (probably due to the low number of subscribers per cell). It works fine without additional software in Windows 10. Speedtest gave me the following results (DNA is the carrier).

Speedtest run in April of 2016 in Finnish Lapland

The WiFi card is an Intel Dual Band Wireless-AC 8260, and the gigabit NIC is an Intel I219-LM. Both are bog-standard Intel quality and have worked fine.

There is one thing that annoyed the piss out of me. Clicking the Notifications icon in the systray…

..this one!

You get the otherwise handy Action Center / notification bar thing, where you can turn off things like Bluetooth, wireless, and yes, even cellular (though it is not showing here right now). Well, what happens if you turn off cellular here and want it back? Naturally, instinct tells you to open the Action Center again and re-enable it! But what if it doesn't show up (like it didn't for me)? What then? Well, the next step is to go to Network Connections, look at the adapters and enabl… oh, but wait, it's already enabled. But it's still off, and you can't connect? Crap!

Handy action center! Not showing cellular because of reasons?

So after an unreasonable amount of googling, I found some people with similar issues. Apparently you can't re-enable it anywhere in Windows proper (if you can, please tell me in the comments). No amount of enabling and disabling the card in Network Connections or Device Manager brings it back, nor does toggling airplane mode or.. whatever. Instead, what you need to do is sign out, and on the login screen, click the connectivity icon (the wireless symbol). From there, you can re-enable the radio of the WWAN card. Horse shit, I say!

Clean install of Windows 10

I don't care for manufacturer-bloated OSes, so I did a clean re-install of Windows 10 Enterprise, version 1511. Because I'm a dummy, I didn't initially realize my mistake and attempted to install from my Easy2Boot USB drive. And that works too, if you've read the instructions and understand what you are doing… Here's what I did wrong, so you don't have to do the same:

  1. Easy2Boot works fine, but you have to understand that if the install image is of the UEFI type (which the Windows image is), you can't just copy the image to the Windows directory like other images
  2. Instead, you have to make the Windows install image into an imgPTN image and then try again, following these instructions: http://www.easy2boot.com/add-payload-files/adding-uefi-images/
  3. Or, alternatively, get a suitably sized USB stick (4GB should do, 8GB will most definitely do), and use the Windows Media Creation Tool (only for Home and Pro versions), or use Rufus but select the "GPT partition scheme for UEFI" option under 'Partition Scheme and Target System Type', or it won't boot correctly. Or use the Windows 7-era tool (step 12 onwards): https://blogs.technet.microsoft.com/ptsblog/2015/08/19/how-to-create-a-bootable-usb-stick-or-a-bootable-dvd-for-windows-10/
  4. In my case, it did boot, but failed to find suitable devices to install to, or was lacking other drivers
  5. And no, adding SATA or other disk-related drivers during install did nothing to fix this – it's a UEFI issue
  6. Changing BIOS settings between UEFI only, Legacy only, and Legacy first (and the CSM setting) also didn't help in this case

After learning about UEFI stuff, installation was straightforward. The only Lenovo tool I like to install is the excellent Lenovo System Update, which keeps track of the correct drivers and helper software and makes sure everything is up to date. It also updates your BIOS, which is pretty useful. As of this date, BIOS 1.08 (or.. UEFI, I guess).

There's more to write, but so far, I'm very pleased with the T460s. Much more than with the T440s. The hardware is easily accessible, it's performant, and the mouse is much improved. To quote Wil Wheaton: "Later, nerds."

 

 

MicroATX Home Server Build – Part 4

After a longish break, here's the next installment! The server has been in production since last September and is running very well. Since the previous post, this is what's happened:

  • Installed ESXi 6.0 update 1 + some post u1 patches
  • Installed three VMs: an OpenBSD 5.8 PF router/firewall machine, a Windows Server 2016 Technical Preview to run Veeam 9 on, and an Ubuntu PXE server to test out PXE deployment
  • Added a 4 port gigabit NIC that I got second hand

In this post, I’ll be writing mostly about ESXi 6.0 and how I’ve configured various things in there.

For the hypervisor, I bought a super small USB memory, specifically a Verbatim Store n’ Stay (I believe this is the model name) 8GB, which looks like a small Bluetooth dongle. It’s about as small as they get. Here’s a picture of it plugged in:

The Verbatim Store N Go plugged in

Using another USB stick created with Rufus, which had the ESXi 6 U1 installation media on it, I installed ESXi on the Verbatim. Nothing worth mentioning here. Post-installation, I turned on ESXi Shell and SSH, because I like having local console and SSH access for multiple reasons, one of which I'll get to shortly (hint: it's about updating).

Since I didn’t want to use the Realtek NIC on the motherboard to do anything, I used one of the ports on the 4 port card for the VMkernel management port. One of the ports I configured as internal and one as external. The external port is hooked up straight to my cable modem, and it will be passed through straight to the OpenBSD virtual machine, so it can get an address from the service provider. The cable modem is configured as a bridge.

The basic network connections therefore look like this:

Simple graph of my home network

After the installation, multiple ESXi patches have been released. Those can be found under my.vmware.com, using this link: https://my.vmware.com/group/vmware/patch#search. Patches for ESXi can be installed in two ways: either through vCenter Update Manager (VUM) or by hand over SSH / the local ESXi shell. Since I will not be running vCenter Server, VUM is out of the question. Installing patches manually requires you to have a datastore on the ESXi server where you can store the patch while you are installing. The files are .zip files (you don't decompress them before installation) and are usually a few hundred megabytes in size.

To install a patch, I uploaded the zip file to my datastore (in this case the 2TB internal SATA drive) and through SSH logged on to the host. From there, you just run: esxcli software vib install -d /vmfs/volumes/volumename/patchname.zip

Patches most often require a reboot, so prepare for one, but you don't have to do it right away.
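
Putting it together, a typical patch run on this standalone host looks something like the following over SSH. A sketch only: the datastore and patch file names are examples, not the exact ones I used.

    esxcli system maintenanceMode set --enable true   # no VMs running on this host at that point
    esxcli software vib install -d /vmfs/volumes/datastore1/ESXi600-201605001.zip
    reboot                                            # most patches want one
    esxcli system maintenanceMode set --enable false  # once the host is back up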

Update 2 installed on a standalone ESXi host through SSH

Edit: As I’m writing this, I noticed Update 2 has been released. I’ll have to install that shortly..  Here’s the KB for Update 2 http://kb.vmware.com/kb/2142184

A one-host environment is hardly a configuration challenge, but some of the stuff that I’ve set up includes:

  • Don't display a warning about SSH being on (this is under Configuration -> Advanced Settings -> UserVars -> UserVars.SuppressShellWarning "1"; see the shell equivalent after this list)
  • Set hostnames, DNS, etc. under Configuration -> DNS and Routing (also made sure that the ESXi host has a proper dns A record and PTR, too; things just work better this way)
  • Set NTP server to something proper under Configuration -> Time Configuration
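
The SSH-warning suppression from the first bullet can also be set from the shell. A minimal sketch; the option path mirrors the UserVars setting above:

    esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1
    esxcli system settings advanced list -o /UserVars/SuppressShellWarning   # verify the new value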

For the network, nothing complicated was done, as mentioned earlier. The management interface is on vmnic0, vSwitch0, with a VMkernel port that has the management IP address. You can easily share management and virtual machine networking if you want to, though that's not a best practice. In that scenario, you would create a port group under the same vSwitch and call it something like "Virtual Machine port group", for instance. That port group doesn't get an IP; it's just a network label you refer to when assigning networking for your VMs. Whatever settings are on the physical port / vSwitch / port group apply to the VMs assigned to that port group.
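
If you do want a separate VM port group on the same vSwitch, it's one command from the shell. A sketch with a hypothetical port group name:

    esxcli network vswitch standard portgroup add --portgroup-name="VM Network 2" --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup list   # confirm it shows up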

By the way, after the install of Update 2, I noticed something cool on the ESXi host web page:

VMware Host..client?

Hold on, this looks very similar to the vSphere Web Client, which has been available for vCenter since 5.1?

Very familiar!

Very familiar, in fact! This looks awesome! Looks like yet another piece VMware needs in order to finally kill off the vSphere Client. Not sure I'm ready to give it up just yet, but the lack of a tool to configure a standalone host was one of the key pieces missing so far.

Host web client after login

In the next post, I will be looking at my VMs and how I use them in my environment.

Relevant links:

https://rufus.akeo.ie/
http://www.verbatim.com/prod/flash-memory/usb-drives/everyday-usb-drives/netbook-usb-drive-sku-97463/
The Host UI web client was previously a Fling, something you could install but that wasn’t released with ESXi https://labs.vmware.com/flings/esxi-embedded-host-client
But now it’s official: http://pubs.vmware.com/Release_Notes/en/vsphere/60/vsphere-esxi-60u2-release-notes.html

MicroATX Home Server Build – Part 3

Because I am impatient, I went ahead and got a motherboard, processor and memory. The components that I purchased were:

  • Asrock H61M-DGS R2.0 (Model: H61M R2.0/M/ASRK, Part No: 90-MXGSQ0-A0UAYZ)
  • 16 GB (2x8GB) Kingston HyperX Fury memory (DDR3, 1600MHz, HX316C10FBK2/16, individual memories are detected as: KHX1600C10D3/8G)
  • Intel i3-2100 (2 cores, with hyperthreading)

I ended up with this solution because I realized I may not have enough money to upgrade my main workstation, which would have freed up the parts from that machine for this one. I also didn't have the funds to get a server-grade processor, and getting an mATX server motherboard turned out to be difficult on short notice (did I mention I'm an impatient bastard?).

I ended up paying 48€ for the motherboard, 45€ for the processor (used, including Intel stock cooler) and 102 bucks for the 16GB memory kit.

The motherboard has the following specs:

  • 2 x DDR3 1600 MHz slots
  • 1 x PCIe 3.0 x16 slot
  • 1 x PCIe 2.0 x1 slot
  • 4 x SATA2
  • 8 USB 2.0 (4 rear, 4 front)
  • VGA and DVI outputs

The factors that led to me choosing this motherboard were mainly: Price, availability, support for 2nd and 3rd generation Intel Core processors (allowing me to use the i3 temporarily, and upgrade to the i5 later if I feel the need), and the availability of two PCIe slots. All other features were secondary or not of importance.

The reductions in spec that I had to accept were: no support for 32GB of memory (as mentioned in the previous post), and no integrated Intel NIC (this board has a crappy Realtek NIC, but I might still use that for something inconsequential like management; probably not, though).

These pitfalls may or may not be corrected at a later date, when I have more money to put toward the build and patience to wait for parts.

The CPU is, as mentioned, an Intel i3-2100. It runs at 3.1 GHz, has two cores and four threads (due to HT), 3MB of Intel 'SmartCache', and a 65W TDP. It does support 32GB of memory on a suitable motherboard. I doubt the CPU will become a bottleneck anytime soon, even though it is low-spec (it originally retailed for ~120€ when it was released in 2011). The applications and testing I intend to do are not CPU-heavy work, and since I have four logical processors to work with in ESXi, I can spread the load out some.

Putting it all together

Adding the motherboard was fairly easy. There were some standoffs already in the case, but I had to add a few to accommodate the mATX motherboard. There's plenty of space for cabling from the PSU, and I paid literally zero attention to cable management at this point. The motherboard only has two fan headers: one for the CPU fan (obviously mandatory..) and one for a case fan. I opted to hook up the rear fan (included with the case) to blow hot air out from around the CPU. I left the bottom fan in; I may hook it up later, or replace it with the 230mm fan from Bitfenix.

Initially, I did not add any hard drives. ESXi would run off a USB 2.0 memory stick (Kingston Data Traveler 4GB), and the VMs would probably run from a NAS. I ended up changing my mind (more on this in the next post). For now, I wanted to validate the components. I opted to run trusty old MemTest86+ for a day or so. Here’s the build running MemTest:

Build almost complete, running MemTest86+

Looks to be working fine!

Here’s a crappy picture of the insides of the case, only covered by the HDD mounting plate:

Side panel open, showing HDD mounting plate, side of PSU

One thing to note here is that if you want the side panel completely off, you need to disconnect the cables seen to the front left. These are for the power and reset buttons, USB 2.0 front ports and HDD led. They are easy to remove, so no biggie here.

One note on the motherboard: there has only ever been one release of the BIOS, version 1.10, which was installed at the factory (obviously, as no other versions had been released at the time of writing). If you do get this board, make sure you are running the latest BIOS. Check for new versions here: http://www.asrock.com/mb/Intel/H61M-DGS%20R2.0/?cat=Download&os=BIOS

So this is the current state of the build. Next up…

  • Installing ESXi 6.0U1 (just released in time for this build)
  • Deciding on where the VMs would run
  • Adding NIC and possible internal storage
  • Configuring ESXi
  • Installing guest VMs

Stay tuned!

Relevant links:

http://ark.intel.com/products/53422
http://www.asrock.com/mb/Intel/H61M-DGS%20R2.0/

http://www.kingston.com/datasheets/HX316C10FBK2_16.pdf
https://pubs.vmware.com/Release_Notes/en/vsphere/60/vsphere-esxi-60u1-release-notes.html

MicroATX Home Server Build – Part 2

First, an editorial correction to the previous post: an Intel B85 chipset motherboard will not support my current LGA1155 socket i5 processor, because that chipset is meant for the 4th generation stuff (i.e. Haswell). Forget I wrote that.

And meanwhile, back at the content:

The case arrived last Friday, and it's a nice one! I've already stripped out the stuff I don't need (mainly the 5,25″ bay internals), and installed the Corsair VX 450W PSU I had lying around from a previous build. A few notes:

  • The PSU installation was tricky
  • There are plenty of fans included, but they can easily be replaced. I'm thinking of getting Bitfenix's own 230mm (!) fan for the bottom of the case, since it should be fairly quiet
  • The handles on the bottom and top are a mixed bag. They are flexible, yet solid, so I wouldn't worry about breaking them per se. I did end up removing the bottom handles (are they still handles even though they are on the bottom?) because the case felt wobbly with them. I don't want it to sway if I touch it.
  • Plenty of slots for 2.5″ and 3.5″ HDDs. Very nice! All with removable mounting brackets of sorts
  • The case was wider than I thought, but this isn't a bad thing
  • Most things are toolless, but there were some (easily removable) screws for certain parts
  • A nice selection of screws, rubber grommets, standoffs and other bits and bobs were included

The PSU installation

The PSU is installed in the front of the case, but not the way you'd think. I am not entirely sure why they opted for this method, but a standard power cable runs from the PSU in the front, under the case, to a standard power plug at the rear. This is so that you can have all cables running to the rear of the case, even though the PSU isn't physically there.

The problem is that when you mount the PSU (it's mounted top down, instead of on its side like usual), the regular power plug, which you would normally plug into a wall outlet, is used for the internal run, which ends in a 90-degree angled plug. It was *very* hard to fit, as you can see from the pictures. If your PSU has the plug near the edge of the PSU, it might even be impossible to fit the cable.

Detail of case bottom. Note PSU placement and power cable. Also note the rubber feet to lift the case slightly and allow at least minimal airflow below the case

From a space utilization perspective, I see why they did this. But the practicalities are, well.. not. I seriously hope I didn't break the cable, because the fit is so tight. If I did, it'll probably blow a fuse the minute I turn it on, since the cable would then be in contact with the metal of the case, causing a short.

Removing the bottom handle

I'm not sure why they made the bottom handle rounded too. The top one I get: it's ergonomic, and it looks good. But the bottom? I don't want the case to be a rocking chair. I want it to sit still on the floor, shut up, and do what it is told.

Luckily, removing the bottom handles is an easy task: remove four screws, pull slightly and lift them out. The result isn't pretty, but then, this is a home server build, not a beauty pageant. Someone asked Bitfenix if they'd consider different kinds of handles, or kits to cover the void left by removing a handle. The answer at the moment seems to be no, and I understand why. As they say in the post, plastic is cheap, but making new molds isn't.

I suppose if the visuals are a dealbreaker for you, either leave the handles in, or cover it with black gaffer tape or something. You can see the end result in the pictures.

Because of airflow and stability, I added rubber feet to the bottom of the case. They seem to work fine. Whether I need more of a gap between floor and case remains to be seen. I have bigger rubber feet, and I'll swap them in if it seems necessary.

Lower case after handle is removed.
Full side view of case after lower handle removed

..and for my next trick

I am currently looking for a motherboard. I’m basically down to two choices:

Intel DH61ZE – https://www-ssl.intel.com/content/www/us/en/motherboards/desktop-motherboards/desktop-board-dh61ze.html

A cheaper desktop board with the same H61 chipset: http://www.asrock.com/mb/Intel/H61M-DGS%20R2.0/ or https://www.asus.com/Motherboards/H61MK/specifications/

Price for the former is ~80€, price for the latter: 45-60€

I might not move the i5-2500 to this board after all. I’ve been looking at a used i3-2100, which has 2 cores and hyperthreading, making it nice for an ESXi box. They are priced at around 40-60 euros used.

Memory will come from an existing stash, but will be limited to 16GB by the motherboard. Just something I'll have to live with unless I dish out more money for a modern board, or a proper server-grade board and processor.

MicroATX Home Server Build – Part 1

Today I officially started my new home server build by ordering a case. The requirements for building a new home server are the following:

  • It needs to be physically small
  • It needs to be able to operate quietly
  • It needs to utilize some current hardware to reduce cost
  • It needs to be able to run VMware ESXi 6
  • Needs to support 32GB RAM for future requirements
  • Needs to accommodate or contain at least 2 Intel Gigabit NICs

Having run a number of machines at home over the past three decades, some of these have become more or less must-haves. Others are more of a nice-to-have. I've had some real server hardware running at home, but most of the hand-me-down stuff has been large, power-hungry and/or loud to the point where running it has been a less than pleasurable experience.

The last candidate was an HP ProLiant 350 G5 (or so?), which was otherwise nice, but too loud.

You will note that power isn't a requirement. I don't care, really. My monthly power bill for a 2.5-person household of 100 m² is in the neighborhood of a few dozen euros. I really don't know, or care. I'm finally at a position where I can pick one expense that I don't have to look at so closely. For me, that expense is power. Case closed.

The conditions I've set forth rule out using a classic desktop-machine-cum-server thing. Those are usually not quiet, they use weird form factors for the motherboard, seldom support large amounts of RAM, etc. A proper modern server can be very quiet and quite scalable, as most readers will know. A new 3rd or 4th generation Xeon machine in the 2U or tower form factor can be nigh silent when running at lower loads, and support hundreds of gigabytes of RAM. They are, however, outside my price range, and do not observe the "needs to utilize some current hardware to reduce cost" condition.

Astute readers will also pipe up with, "Hey, this probably means you won't use ECC memory! That's bad!" And I'll agree! However, ECC is not a top priority for me, as I am not running data- or time-sensitive applications on this machine. Data will reside elsewhere, and be backed up to yet another "elsewhere", so even if there is a crash with loss of data (which is still unlikely, even *with* non-ECC memory), I'll just roll back a day or so, not losing much of anything. A motherboard supporting ECC would be nice, but definitely not a requirement.

Ruling out classic desktop workstations and expensive server builds I am left with two choices:

  1. Get a standard mATX case + motherboard
  2. Get a server grade mATX motherboard and some suitable case

The case would probably end up being the same either way, as the only criteria are that it is small and can accommodate quiet (meaning non-small) fans. The motherboard presents a bigger question, and is one that I have yet to solve.

I could either go with a Supermicro, setting me back 200-400€, and get a nice server-grade board, possibly with an integrated Intel NIC, out-of-band management etc., or I could go with a desktop motherboard that just happens to support 32GB of memory. There are such motherboards around for less than 100€ (for instance, Intel B85 chipset motherboards from many vendors).

Here's the tricky part: I could utilize my current i5-2500 (socket LGA1155) in this build, and its associated memory. This would mean the motherboard would obviously need to support that socket. Note! The 1155 socket is not the current Intel socket. We're now at generation 6 (Skylake), which uses an altogether different socket (1151) that is compatible with neither generations 2 & 3 (which used 1155) nor generations 4 & 5 (which used 1150).

Using my current processor would save some money. Granted, I'd have to upgrade the machine currently running that processor (meaning a motherboard, CPU and memory upgrade, probably to Haswell or Broadwell, i.e. socket 1150), meaning the cost would be transferred there. But then again, I tend to run the most modern hardware in my main workstation, as it's my daily driver. The server has usually been re-purposed older hardware.

Case selection

I've basically decided on the form factor, which will be microATX (or mATX or µATX or whatever), so I can go ahead and buy a case. Out of the options, I picked something that is fairly roomy inside and somewhat pretty on the outside, and doesn't cost over 100€. The choice I ended up with was the Bitfenix Prodigy mATX Black.

Here’s the case, picture from Bitfenix (all rights belong to them etc.):


Some features include:

  • mATX or mITX form factor
  • 2 internal 3.5″ slots
  • Suitable for a standard PS2 standard ATX PSU (which I happen to have lying around)
  • Not garish or ugly by my standards

I ordered the case today from CDON, who had it for 78,95€ + shipping (which was 4,90€). Delivery will happen in the next few days.

The current working idea is to get an mATX motherboard which supports my i5-2500 and 32GB of DDR3 memory. I’ve been looking at some boards from Gigabyte, Asrock and MSI. MSI is pretty much out, just because I’ve had a lot of bad experience with their kit in the past. May be totally unjustified, but that’s the way it feels right now.

I still haven't ruled out getting a Supermicro board, something like this one: http://www.supermicro.nl/products/motherboard/Xeon/C202_C204/X9SCM-F.cfm but that would rule out using my current CPU and memory. I'd have to get a new CPU, which, looking at the spec, would either be a Xeon E3 or a 2nd or 3rd generation i3 (as i5s and i7s are for some reason not supported). An i3 would probably do well, but I would take a substantial CPU performance hit going from a Xeon or i5 down to an i3. I'd lose at least 2 cores, which are nice to have in a virtualized environment such as this.

Getting the board would set me back about 250€, and the CPU, even if I got it used, would probably be around 100€. Compare this against an 80-100€ desktop motherboard, using my existing CPU and existing memory (maybe?). Then again, I'd have to upgrade my main workstation if I steal the CPU from there. Oh well. More thinking is in order, methinks.

 

Last minute edit:

The hardware I have at my disposal is as follows:

  • Intel NICs in the PCI form factor
  • Some quad-NIC thing, non-Intel, PCIe
  • Corsair ATX power supply
  • Various fans
  • If I cannibalize my main rig:
    • i5-2500
    • 16GB DDR3 memory (4x4GB)

Flow of things

A very long time has passed since I last posted anything. In that time, I've done an ass-ton (metric; in imperial that'd be approximately 45/64ths of one quarter-cup liquid ounce of.. inches?) of work, been to Switzerland and back, had my son start elementary school, and various other bits and bobs. Maybe that's why? Anyway, I'll start rambling off things that come to mind.

So I went to Switzerland, Geneva to be more exact. And to be even more exact, I visited CERN! The inner geek in me is still excited. That place is, to put it bluntly, amazing. We started by checking in at the visitor center, where we got our badges. I took the opportunity (at the recommendation of one of our hosts) to visit the gift shop and pick up a t-shirt and a coffee mug. The mug has the four component formulas for, well, everything important, i.e. the Standard Model Lagrangian. Don't ask me to explain it, because I'm pretty sure I couldn't. The t-shirt I can explain. Not only was it made somewhere in Asia, but it also has on it the original Tim Berners-Lee proposal for the World Wide Web. The back has his boss's comment, "Vague, but exciting", on it. Both items are in frequent use.

At CERN, I visited the control room for ATLAS, one of the experiments using the Large Hadron Collider. The LHC itself was being upgraded to allow for higher energy level collisions in the future. Pity we couldn't visit the actual detector, or see the actual uhm.. tube where the particles travel in a circle before hitting each other every once in a while. We also visited the computer center. As a computer guy, I was pretty darn impressed. The amount of hardware in there is staggering, and the connections to the outside world are even more impressive. I was told there wasn't "much" science going on, and still the aggregate bandwidth of connections between the facility and research facilities around the world was at over 7 GiB/s, with over 200 000 running jobs. They told us it gets to around 13-15 GiB/s when there's a real buzz. There was a nifty touch screen in the lobby of the computer center, built around Google Earth, that you could spin around to see the different connections around the globe. Finland's share? A meager 0,3% of the computing being done. Meh. The lobby also had some display cases with various old hardware: old modems, fiber optics, hard drives and so on.

Geneva was a nice place in general. The climate was nice, the views spectacular and the people generally very nice. I had that same nagging feeling that I had in Paris in 07, where the French speaking people were just acting.. weird. We had a waiter that was muttering something under his breath the whole time he was serving us. There was that same air of arrogance and displeasure at having to speak English. The hotel was a refreshing exception (as it was in Paris), and I can easily recommend it for anyone looking for a reasonably priced hotel in Geneva. We stayed at, *drumroll* the Hotel de Geneve! Located at 1, pl. Isaac Mercier, Geneva 1201, Switzerland, it seemed to be a fairly central location. It was a short 10 minute walk from the train station, and not far from the river for instance.

On our second day, we took the train to Lausanne. I had perch. Nice expensive looking place by the shores of Lac Léman (Lake Geneva). The train ride was maybe an hour, or a little less and very smooth. Saw an Aston Martin Vanquish drive by. The whole place seemed to be in a perpetual slow motion, and somehow at ease or at rest. Didn’t really see much of the city, we just had lunch, but what little I saw was nice.

The journey back was uneventful, save for a small incident at the airfield in Geneva. We were taken to our plane (an Embraer 190) by bus, and had to wait outside the plane for a considerable amount of time as the idiots piled into the plane (how hard is it to just find your place and stow your luggage?). While waiting, I figured I'd take a few pictures. I took a picture of one of my traveling companions, with the plane in the background, and then turned around to take a picture of the scenic mountains that basically surround the whole place. At this point, one of the yellow-vested… whatever she was, told me to put the phone away. No pictures! Put it in your bag! I told her there were no signs posted anywhere saying I couldn't take a picture, but she would have none of it, and I yielded, putting my phone in my pocket.

Now, I am aware that standing on the tarmac, there is in theory a risk that something will happen that requires my attention. On the other hand, if a plane lands on us, I doubt I would have time to do some kind of Die Hard-type jump to safety, phone or no phone. There were also no spinning propellers that I could accidentally walk into. I think there was even a small roped fence preventing us from wandering onto the runway or other areas around the plane.

I was not given any reason why I couldn't take a picture. This always irks me. If there is no sign prohibiting photography, or an announcement, and I have used my common sense to assess that taking a picture does not pose a risk to my or anyone else's health, I'm going to take pictures. I have no reason to fight with airport people. They are doing their job. I still fail to see how my photography could cause any harm. Also note, the queue into the plane was *not* moving, so I was not holding up the plane, telling everyone "hold on, I need to tweet this shit!".

“Is this not a reasonable place to park?”

Enough about travel again! Seems I can't get enough of it. Later this year, though, I'm flying over to Edinburgh, which might be the place to be now that they are voting for independence! I might get a chance to visit the newest independent country in the world. Or maybe not, in case the No vote wins. The vote might be today?

On the hardware side of life, I've been doing some upgrades to my backup and storage infrastructure. For local onsite backups, I now have an Iomega IX2-200 (Cloud Edition), with twin 3TB Western Digital Red drives in RAID1. It's not the newest or the fastest NAS out there, but it works. On my main workstation, I have replaced the previous 2x1TB RAID1 set with a 2x2TB RAID1 set. Just added one terabyte. I now have a bunch of spare 1TB disks, which will probably be incorporated into a FreeNAS build I'm working on. I had some issues trying it out earlier this month, but I think it was just Samba misbehaving. It would disconnect in the middle of a file transfer and tell me the path is not accessible. According to FreeNAS, things were a-ok. It's not like I'm a FreeNAS guru or anything, so I'll have to put more hours into that build to get it working. It might end up being up to 8x1TB. Currently I have only 8GB of RAM (ECC, though), but I'll probably want to upgrade that to at least 16, maybe even 32. The thing is, that means I have to get a different motherboard, processor and.. Oh well.

 

Lenovo Thinkpad T440s – 6 months in

I've now had the Lenovo Thinkpad T440s as a work machine for the past 6 or 7 months. Here are some short observations: things I like, things I don't like, things that broke, etc.

Things I do not like:

  • The gorram touchpad. Get it out of here! It's horrible the way there is basically a single button (the size of the entire touchpad), with a certain area for the right mouse button, etc. Just unusable in my opinion
  • The keyboard used to be better… now it resembles something that comes from Cupertino, and is not as comfortable to use as the previous thinkpad-y keyboards
  • No more nipple buttons! How am I supposed to use the trackpoint (a.k.a. the nipple) without the two buttons below the keyboard? I’m not, that’s how! External mouse is basically an absolute necessity
  • They've slimmed it down so much that the keyboard leaves marks on the screen when the lid is closed. It'll only get worse, and I hope it doesn't permanently damage the screen. I do have a screen filter in between, so hopefully that protects the LCD slightly.

Things I like:

  • The screen is great. 14″, Full HD (yes, it's not 1440p or whatever). You could get it with either a touchscreen or not. Obviously mine isn't a touchscreen, as I was buying a laptop, not a tablet
  • 256GB SSD. Not the fastest out there, but I like
  • Connectivity. With the docking station, I have enough ports to fill my needs. USB 3.0, 2x Display Port (which I have connected to my two external Dell screens), etc. etc. I’ve missed the optical drive a few times. But not enough to get an external drive to lug around
  • The overall form factor, size and weight

Things that have broken or failed more than once, or annoyed me

This list is longer than I would like. Compared to previous Thinkpads that I have used or owned, this is unusual.

  • SSD. Started failing when I was saving files (for instance), and eventually stopped being detected at boot. Replacement was sent by Lenovo, and I swapped it out. In hindsight, do not do this on your own. The case is a bitch to open. Get their onsite tech to do it.
  • Keyboard. Broke a button while fiddling with it. A piece of plastic came off and the button was forever broken. My fault entirely. Ordered a replacement keyboard, swapped it out. Easier than the SSD. A bit harder than some Thinkpad models in the past.
  • The piece of metal that keeps the ethernet cable in place! This is incredibly annoying. For some reason, the ethernet cable doesn't *click* into place anymore; something is missing. Not sure this is a warranty thing. I'll just survive, I think. I use it in the dock about 70% of the time anyway
  • Issues with the external screens when docked. I have two Dell U2713HM screens attached via DisplayPort cables to the dock. Randomly, the screens will go blank, even when the laptop is securely seated (and locked) in the dock. Sometimes resolutions get messed up, so that one screen has a lower resolution. This might be a Windows 8.1 issue too, but still, annoying. There are also issues waking up from sleep or power save
  • Serviceability. I wish it was easier to open the case. Granted, I don't have to do it myself; I can get their onsite support to do it. But I liked how you could open the slot for the memory, or the hard drive, or whatever, and not have to rip the entire case to bits. Screws are also not enough; there are plastic clips that *will* break if you are not careful when opening the case. I wish it was more like my T410s, where everything, more or less, was behind its own hatch and/or easily replaceable
  • Not available with more than 12 GB of memory. Why? Why the i7 processor, but then limit the memory to 12GB? Doesn't make sense in 2014…

Not sure I can recommend this laptop. There are a lot of annoying things with this machine. When docked, it works mostly great, and with the 256GB SSD, i7 processor and dual DP ports supporting large external screens, it is a powerful rig. But a lot of annoying issues. Not sure what I would get if I didn't get this one. Apple is out, never liked HP.. what other business-type machines are there that I would like? Dell? Always thought they were a bit clunky.. I dunno.