

Wednesday, March 11, 2015

Are Google server nodes the new mainframe?

A friend mentioned that small and mid-sized companies are still using mainframes, but I think an argument could be made that Google, Facebook, and friends are really creating the new mainframe with their various server nodes, which are often made up of custom, task-designed hardware and servers.

One of the explanations of what made a mainframe a mainframe that I learned over the years was that a mainframe had custom hardware for specific components of the system, such as a hard disk controller that could be given a list of blocks or files and would go fetch the data and place it into system memory or return it to the program, or a network controller that would do the same with network I/O.

Have Google, Facebook, and friends created the same thing by networking together their server nodes into what could effectively be called a mainframe? For example, if you consider 1, 10, or even 1,000 server nodes running memcached to be a storage controller, the programmer can, in a single function call, task those memcached servers with fetching and returning literally thousands of requests for data, all returned over the network in what many people consider the equivalent of drinking from a fire hose, because 1,000 nodes are essentially trying to return the data at wire speed. Various other technologies, such as RESTful APIs, exist that allow servers to store chunks of data on, or return them from, other servers.
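The single-call, thousand-request pattern is essentially scatter-gather. Here is a toy sketch in shell, with a function standing in for the network round trip to one memcached node (the keys and the node_fetch function are made up for illustration):

```shell
#!/bin/sh
# node_fetch stands in for one memcached server answering a get;
# in real life each call would be a network round trip to a different node.
node_fetch() {
    echo "$1=value-of-$1"
}

# Scatter: one request per key, all in flight at once.
# Gather: wait collects everything coming back "off the wire".
results=$(
    for key in user:42 cart:42 session:42; do
        node_fetch "$key" &
    done
    wait
)
echo "$results" | sort
```

With 1,000 real nodes the pattern is the same; the & and wait just become the client library's multi-get.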

Since a mainframe is not limited to a single box, or a single size of box, why not consider one rack of server nodes, or even one or more rows of a data center, a single system? Projects like Mesos and Kubernetes allow the programmer to treat the full cluster of nodes as a single system. Certainly, back in the 60s, 70s, and 80s mainframes were made from a lot of custom parts rather than commodity parts, but these companies build custom nodes too: some tuned for disk storage, others tuned specifically for networking or loaded with RAM, and in the future they will be moving to GPU-capable nodes with one or more GPU processors on board. Yes, it would be the equivalent of making a mainframe out of Lego blocks.
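As a taste of that single-system view, this is roughly what Kubernetes looks like from the command line: you hand work to the cluster and the scheduler picks a physical node, much as an OS picks a CPU for a process. A sketch assuming a working cluster and kubectl; the pod name and image are placeholders:

```shell
# Run a container somewhere in the cluster; you do not say where.
kubectl run worker --image=busybox --restart=Never -- sleep 3600

# Only afterwards do you find out which physical node the scheduler chose.
kubectl get pods -o wide
```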

First new post after a long rest...

Well, time to wake up this blog; I will be posting stuff again, beware.

Tuesday, February 07, 2012

N40L awesome deal, my new toy

It's quiet enough to live in my living room, though it is currently in my basement. When I installed Nexenta it was sitting on my dining room table about 2 ft from my ears, and it was barely noticeable. It has a single low-RPM 120mm fan; unless you somehow get a bad fan in yours, it should be fine.

HP ProLiant N40L Ultra Micro Tower Server System AMD Turion II Neo N40L 1.5GHz 2C 2GB (1 x 2GB) 1 x 250GB LFF SATA 658553-001 

  • Memory: 2GB
  • Memory Type: DDR3
  • MAX Memory Capacity: 8GB
  • Memory Features: 2 DIMM slots Unbuffered ECC
  • Model #: 658553-001
  • $249.99
    The system arrived last night, along with the 8GB of ECC memory, an extra $60 but still less than the best price I had seen for the server alone. When I ordered mine the deal was $299 and included a free 2TB hard drive; the deal is not quite as good now, but the server is still an excellent choice. Nexenta Community version booted right up on it; I had to install via a USB CD-ROM or DVD-ROM. There is a SATA port and a power plug for an optical drive, but given the process of fishing the cable through the tight case, and since I was only going to use it for installation, I grabbed a USB DVD-ROM and gave it a try, and it worked, smooth as silk the entire process. I will be working on adding a second 2TB drive to the one that came with the system, and will post benchmark numbers soon. I will try to do a better job keeping my blog up to date.

Tuesday, August 16, 2011

Node on Nexenta

I discovered node.js, which Joyent is using extensively to work its cloud magic. What is node.js? Check out this video for info; it seems very cool.

Now the next question becomes: how do I get it running on Nexenta? My first hope was that it was already there, considering the announcement of the Joyent/Nexenta partnership:

jamesd@amd:~/nodejs/node$ apt-cache search node | grep ^node

Nope, no luck; okay, that was out. Since I wanted to give it a try, I popped over to my Ubuntu box and built node there, and it worked fine and was painless, except that the node git repository has moved.
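For completeness, the Ubuntu build was just the stock routine, something like the following; the repository URL is the post-move Joyent location as I remember it, so double-check it before relying on it:

```shell
# Fetch the source and do the usual configure-and-make dance.
git clone https://github.com/joyent/node.git
cd node
./configure
make
sudo make install

# Quick sanity check that the fresh binary runs.
node -e 'console.log(process.platform)'
```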

opteron:nodejs$ uname -av
Linux 2.6.32-33-generic-pae #71-Ubuntu SMP Wed Jul 20 18:46:41 UTC 2011 i686 GNU/Linux
opteron:nodejs$ node
> process.platform

Well now for a leap of faith... back to Nexenta and give it a try.

To make a short story even shorter, it WORKS!!! Using everything just the same.

jamesd@amd:~/nodejs/node$ uname -av
SunOS amd 5.11 NexentaOS_134f i86pc i386 i86pc Solaris
jamesd@amd:~/nodejs/node$ node
> process.platform


Wednesday, May 04, 2011

Moving onto Nexenta Community edition

A while ago I managed to screw up my ZFS pool by having it use files on other ZFS pools as cache and slog devices. After six different versions/distributions/releases, I have given up and recreated pools that are much saner in layout.

The fileserver now runs Nexenta Community edition

With the following pools:
  • 250GB root pool, or syspool as Nexenta calls it
  • 3x 500GB in a mirrored layout in a pool called tank. I may break it into a 2-way mirror later and then create another 2x 500GB mirror vdev in the pool. Currently this pool is mostly being used for ESX NFS storage. I wanted to go with a 3-drive-wide mirror for read speed and improved write performance over raidz.
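For anyone recreating something similar, the layout above boils down to a couple of commands. A sketch with placeholder device names (yours will differ; check the output of format):

```shell
# Three-way mirror of the 500GB disks; device names are examples only.
zpool create tank mirror c1t1d0 c1t2d0 c1t3d0

# A filesystem for the ESX datastore, shared over NFS.
zfs create tank/esx
zfs set sharenfs=on tank/esx
```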

I was going to use SXCE on the system, but during the install it hung at 18%, so I gave up and went with Nexenta since I had the disc sitting next to the machine anyway. I am mostly enjoying it; I did have to add another repository to get a bind9 build for it, but with that plus about 15 minutes of configuration I was able to get Nexenta to be a DNS server, thus making NFS happy.


Well, I don’t expect to be using this fileserver long, since I want to move to an all-in-one box solution for ESXi+ZFS using an HP ML350 G6, but this will allow me to add more storage to my existing ESXi 3.5 box; its built-in 60GB is showing its age, with SCSI drives that are expensive for anything larger than 36GB and use lots of power in return. I will be picking up components destined for the ML350 that can live in this box until the main (expensive) bits are ready.
I will probably get 4x 2TB drives and put them in this box; perhaps even the SSDs for l2arc/slog can go in this box as well.

Sunday, April 17, 2011


Well, after 4 days of struggles and unexpected patching sessions, I now have my new 750GB SATA drive in my laptop. Once I started using samba, it all went much better than I expected. To make things faster, be sure to choose to restore the partition, not files; restoring files takes a lot more time. Despite your first guess, restoring by partition doesn’t require extra steps if you installed a larger drive; it uses the partition scheme you set up on your new drive, and you just need to specify where you want your files restored to. The process goes much faster: when I tried the file method it gave me a time of 19 hours, occasionally going up to 3-5 days. When I switched over to partitions in the morning, while the timer still gave an ETA of 18 hours, the estimate went down to 2 hours, though it took about 3.5 hours to complete the process.

When I rebooted the laptop after the restore completed, it immediately started booting Windows: no hiccups, no little tweaks, it all just worked. I was surprised and impressed at how well it did the job. Of course, this raises the question: could I have used a full-drive backup, gotten better performance, and still restored to a larger partition?

After Windows booted, it did have to look for and install a driver for the hard disk. I didn’t change the controller, so I was surprised it needed to install a driver for a SATA hard drive that replaced a SATA hard disk. But oh well, it worked; I will have to see next time I reboot whether it gives any performance enhancement.

The reason I purchased Acronis and didn’t use a Clonezilla-and-gparted type solution was that I was afraid I wouldn’t be able to make Vista boot on the new disk; all the how-tos I read required repair work and/or recovery disks. My laptop didn’t come with a Vista install disc; I did create the recovery disks nearly 3 years ago, but damned if I know where those disks are hiding. So in the end Acronis did what I expected; the backup process wasn’t as easy as I expected, but the restore pretty much exceeded my expectations by far.

But it is nice to see 487GB of free space, and I even gave 60GB of space to a Linux partition should I choose to dual-boot sometime in the future.

Lessons learned in this process.
  • Install as Administrator; an administrator-enabled account isn’t enough, and after the restore process it still doesn’t like running as a normal user.
  • Don’t bother using FTP with Acronis; it really doesn’t play well with others.
  • Defragment your drive before you start; backups go faster with it defragged.
  • Use the fastest transport media possible; wired is much better than WiFi, even “N” class wireless.
  • Gigabit would have helped, but wasn’t an option on this laptop.
  • Don’t use dynamic-style disks; the restore process doesn’t like them, and you can change to them later.
  • Be sure to have another computer available, if possible, for the several days while the backup/restore is proceeding.
  • Install gkrellm on your fileserver so you can see throughput and be sure that Acronis is doing something.

Saturday, April 16, 2011


I was never able to make Acronis TrueImage 2011 complete a full backup to any Linux-based FTP server. I finally just found the simplest samba configuration document and created a samba share, which worked painlessly. I even took the extra couple of hours for it to complete a full validation of the backup before beginning the drive replacement procedure. Not sure if I read something for an older version about not using samba for the backup or what; it was completely painless, except for the fact that it took all the way to Sunday to complete a full backup of a 180GB disk, something that should have been done in 8 hours.
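The “simplest samba configuration” really is only a few lines. A sketch of a share like the one that worked for me; the share name and path are examples, not my actual config:

```shell
# Write a minimal share definition to a scratch file; append it to
# /etc/samba/smb.conf and restart smbd to make it live.
cat > /tmp/smb-share.conf <<'EOF'
[laptopbackup]
   path = /export/backup
   read only = no
   guest ok = no
EOF
```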

I bought the Acronis Disk Manager Home along with TrueImage Home; it was about $65 for the two together, not bad, and it made the job of formatting my new 750GB 2.5” SATA disk go smoothly and allowed me to set up Linux partitions for ext3 and swap. BTW, if you are in the market for laptop hard drives, the current 1TB drives are 12mm in height and most laptops can only take a 9mm-tall drive, so don't order the 1TB drive that you really think you need before you verify it will fit in your laptop, and save yourself the return and re-order hassle.

I sit here and type this as the recover operation runs, so we shall see how this all turns out in a few hours. It's only reading data off the samba server at about 7-8MB/s, compared to the 12MB/s that it wrote at; I guess the Linux-based recovery program is slower than the Windows hard-disk-based version. I'm pretty sure the new hard disk is faster than the old one it replaced, and the 1.5TB disk in the Sun Ultra 20, with its dual-core 2.6GHz Opteron, should be up to the task of filling the 100Mbit Ethernet to the laptop; the Ultra 20 has a Gigabit Ethernet port and is plugged into my gigabit wifi router.

The process has been going on for about 30-40 minutes, and it still hasn't provided any hints as to how long the recovery is going to take. Thankfully, I have gkrellm running on the Ultra 20, so I can see that it is sending between 1.5 and 6MB/s across the network. It still doesn't appear to have put any color in the progress bar on the Acronis recovery dialog box.

I guess I'm going to take my own advice in these situations, walk away and let the computer do what it does best.

UPDATE: well, after about an hour it did give me an ETA for completion. I really don't like the number: 19. 19 hours, but what can I do? It's the story of this process: everything takes much longer than I think it should!

This Close!!!

I decided to buy TrueImage 2011 Home to back up my laptop, since I will be replacing its hard disk. I am seriously thinking of returning the software and asking for a refund; it’s been a total pain. The first issue had to do with permissions and it not liking some software I had installed previously, not sure which. I had to disable my virus-checking software and every service not written by Microsoft, and then run it as “Administrator”. Not sure which one of those things killed the initial startup, but I did finally get it going. But that was only the first stumbling block.

I am using the FTP destination for backing up, which basically went like this: the first time I tried it over plain wireless, with an N-class network adapter, after it uploaded its first gigabyte it estimated an ETA of 2 days and 8 hours. That would never do. So I hooked up a wire to see if it could recover; it can’t. FAIL. Really, it can’t restart an FTP connection? How lame.

But it gets better. I restarted the backup process, and this time it gave a better speed: only 8 hours to back up the 160GB hard drive. Well, by then it was about 1am, and I had to work in the morning, so I locked my screen and headed off to bed. I woke up in the morning thinking it would be done, or perhaps have an hour or two left. You guessed it: it wasn’t done. I sat looking at my login screen; Microsoft had decided it was the right night to install patches and reboot my machine around 2am…. Not sure how much I blame Acronis for this issue, but it could have made Microsoft wait… I kicked the backup off again and headed off to work. I got home to find an error message: the backup had failed. I used their troubleshooting tools that link into their knowledge base, got a few hits, but nothing seemed valid, so I tried again, and 8 hours later, FAIL again. Lots of fighting with FTP connections; sure, you can create them, but they don’t give you a way to delete them and start over. Talk about a second-class destination.

Well, this process started Wednesday at around 6pm, and it is now Friday 11:39pm. I finally got tired of fighting this and decided to open a ticket; I had avoided this because it feels too much like being at work opening tickets. Of course, their automated system decided to use Microsoft Outlook to send off an email; I have Outlook installed but not configured, so, fail again. I then spent another 20 minutes looking for the way to upload it manually. After I uploaded their “troubleshooting packet”, the upload process told me it knew a possible solution, so I read the first link, and lo and behold up comes a page mentioning that it doesn’t like proftpd, which is what I had been trying to use. Okay, if you know your software hates proftpd and needs features that some FTP servers don’t support, why not check for them and complain about the lack of them, preferably in plain English? FAIL again. Further, the doc doesn’t mention an FTP server that they do support. FAIL again. If you Google “Acronis ftp server”, you get their software for backing up Linux servers. So we have a company that supports Linux but won’t give you a list of FTP servers that work in Linux with Acronis TrueImage, and I couldn’t even find a list of supported FTP servers on their forum. It seems like their whole error reporting is for machines: every error is a hex code, and they can’t print out what the error means in English. This is a home product…

For those who found this blog entry because you have been bitten by the FTP problem: I did find an FTP server that appears to work, pure-ftpd, which Ubuntu has; all you have to do is install it.
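On Ubuntu the install really is a one-liner (the package name is as I recall it from the releases current at the time):

```shell
# pure-ftpd comes up with a workable default config right after install.
sudo apt-get install pure-ftpd
```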

Saturday, April 09, 2011

All in One?

Well, the debate continues. In an earlier blog post I discussed my thoughts on my new servers for a fileserver and an ESXi box (debating new home hardware); my current thought has now changed from getting two servers to getting just one, running Solaris ZFS in a guest on it. I know ZFS loves memory, so to make Solaris or Nexenta ZFS perform its best I would get one HP ML350 G6 but upgrade it into the middle of its capacity: 2x quad-core E5606 (3.13GHz, 4MB of L3 cache) and 4GB of memory from HP (2x 2GB DIMMs), plus 4x 8GB DIMMs from a 3rd-party memory supplier, for a total of 36GB of memory. For the controller I will use the P410 and grab a 512MB cache with battery backup to get full write performance from the disks. I will add 2x 250GB drives from HP to hold Solaris and a couple of other guests, 4x 2TB SATA drives from a 3rd party in a RAID 1+0 layout, and two 460-watt PSUs. Even with the extra 8GB of memory and the extra power supply, it works out to be $800 less, and more importantly it also uses less power, so a lower electric bill long term.

I think the above configuration should give good enough performance. But if it still needs more, the Napp-it all-in-one documents using a SAS controller in PCI pass-through mode with good results. The card they recommend is the LSI 1068E SAS controller, which Amazon has on sale for $160, but then I may need to get a separate disk box or put the disks in the non-hot-swap bays. I don’t think my storage needs require overly fast performance, and I could easily dedicate 8-12GB of memory to the Solaris/Nexenta guest, perhaps even more. Third-party 8GB DIMMs are only $270, and the ML350 G6 has 18 DIMM slots, so it’s not like I’m going to run out. Or I could possibly get an SSD, attach it to the P410 controller, and pass that into the Solaris/Nexenta guest.

Loving Autohotkey

As a Unix administrator, I try to use my keyboard more than my mouse, and I love automating things and processes. Until I found Autohotkey I really didn’t have a good way to do that in Windows; sure, you can record macros or write small scripts in Office Basic or whatever they call it now, but that is limiting, or I could write scripts in Visual Basic: too much work.
Autohotkey really makes automating things like password entry easy. I know it’s a “bad thing”, but when you are at a new job site, and they really don’t understand the ease, power, and security of ssh shared keys, but they accept Autohotkey, what can you do? When in Rome, I say.

It would be nice if Autohotkey had a tool that allowed you to encrypt a string and decrypt it easily, so that you wouldn’t have to put passwords unencrypted in your script files. Sure, someone who really wanted access could crack them, but it would at least keep the casual person from reading all my passwords.

Something like
Send unencrypt(“%myPasswd”)

Here is my current Autohotkey script on my home system.

; Prompt for a search term and hand it to Chrome
; (the search URL is an example; substitute your preferred engine)
InputBox, SearchTerm, Search
if SearchTerm <> ""
    Run, chrome.exe "https://www.google.com/search?q=%SearchTerm%"

#w::run winword.exe
#e::run excel.exe
#m::run chrome.exe