Politics and Technology.

Monday, December 22, 2008

The $102 Million Monster On The Raritan

So I received this piece of garbage from McCormick a couple of weeks back and it ticked me off, but I ultimately decided that there were enough other things ticking me off that I could sit this one out.

What got me about this note were two things: the inexplicable firing of Bob Mulcahy, and the joining of the stadium construction to a jobs program envisioned by McCormick.

I don't know much about Mulcahy, but with what's going on at the University, as a tuition payer and a taxpayer, I want to know "what's up with that?" The action has garnered attention and McCormick feebly attempted to address it in a Jedi "these are not the droids you're looking for" kind of way.

As for the stadium construction providing "jobs for hundreds of working people at a challenging time for our state’s job market", when the hell did RU put social work over education? Am I going to foot this bill? I'd rather save that money for my family, pal.

Then I see this little piece of work with McCormick's stamp on it, and I can't hold back any more.


I'll save you the trouble of parsing everything and summarize it like this.

Dear Mr. Obama,

We need money. Please make the check out to Rutgers University or we will raise tuition.


36 University Presidents, including Richard McCormick

What the hell? It's not enough that my tuition dollars and NJ state income tax dollars go toward this White Elephant in Piscataway, but damn it all if McCormick squeezes money from my Federal tax bill for it too.

For crying out loud, who the hell is looking over McCormick's shoulder from the RU board of governors?

It's time to slay the $102 Million Monster On The Raritan!

Below is the December 12th message from McCormick (emphasis added by me).

---------- Forwarded message ----------
From: Richard L. McCormick
Date: Fri, Dec 12, 2008 at 4:37 PM
Subject: Athletics and Rutgers Stadium
To: PRESIDENT_ALLSTAFF@rams.rutgers.edu, PRESIDENT_ALLSTUDENTS@rams.rutgers.edu

Members of the Rutgers Community:

Over the past several months, information we have received about the Division of Intercollegiate Athletics has pointed to the need for tighter financial controls and for stronger administrative oversight by the Rutgers Board of Governors and by me. I want to report to you on the progress we are making and on important decisions that have been made this week.

Today was the first meeting of the full Board of Governors since the Athletics Review Committee (ARC) issued its report. The ARC report made strong recommendations for change in the areas of governance, departmental operations, and authority to enter into contracts on behalf of Rutgers. As I made clear in my initial statement on the ARC report, my administration is committed to working in concert with the Board to make the improvements spelled out in the report. Specific actions taken today by the Board expand the duties of the newly titled Senior Vice President and General Counsel, Jonathan Alger, to include service as the university’s chief compliance officer and establish a new process of approval for all salaries above $300,000 – a process that will involve both the president and members of the Board. In addition, my administration will increase the size of the Internal Audit Department over the next several years. I have appointed a committee to review signatory authority on contracts that Rutgers employees enter into on behalf of the university. And we are updating and revising university policy and practices related to sponsorship agreements. (See http://speakup.rutgers.edu/intercollegiate.shtml for information on the ARC report and the steps we are taking.)

Earlier this week, I removed Bob Mulcahy from his position as the Director of Intercollegiate Athletics, effective at the end of this month. I did so with reluctance, because Bob has achieved a great deal in his decade of leadership. The successes of our sports teams on his watch have opened windows onto many other Rutgers programs. Our victories have energized the people of New Jersey and caused them to admire Rutgers. Bob has worked tirelessly and skillfully on behalf of our student athletes, the coaching staff, and the many thousands of Scarlet Knight fans. His achievements can be seen everywhere in the program.

Now it is time to seize the opportunity to build on all that he has accomplished for Rutgers and for the people of New Jersey. I believe, based on recent events, that the time is right for new leadership for our athletics program. We will act quickly, and search nationally, for Bob’s successor. That man or woman will inherit a legacy of accomplishment – both on the field of play and in the classroom. Rutgers will go forward and will continue to show that success in intercollegiate athletics and success in our academic programs are truly complementary.

Finally, I want to report on the decision the Board has made regarding Rutgers Stadium.

Two months ago, I asked University Facilities and the Division of Intercollegiate Athletics to develop options that would continue to place priority on constructing additional seats while reducing the overall cost. I was concerned that the bids we received for the second phase of the project – construction of more than 11,000 end zone seats – were too high and would exceed the $102 million threshold established by the Board. We needed to consider ways to complete the project at a lower cost, especially because private fundraising for the project was moving very slowly.

We evaluated a number of options, and it became clear that some models might have immediate political appeal but would fail economically both in the near and long term. In the end, there was a clear best option that was the right course of action for Rutgers and was financially sound. I recommended, and today the Board approved, continuing the stadium expansion at a cost of $102 million, the project budget the Board approved in January. To stay within that budget, we will have to scale back the scope of the project. The plan approved by the Board puts aside the project’s non-revenue-generating features, including locker rooms and the media room, but moves ahead on the revenue-generating elements, including seats and concessions.

The Board granted the university authority to issue additional debt obligations up to the $102 million limit, which will be financed by revenues generated by the 12,000 new seats and other revenues from the stadium itself. While we are confident that we can cover the debt service on the full $102 million project, we will continue aggressively to pursue private fundraising because every dollar we raise for this purpose will be put toward reducing the debt and helping achieve the expressed goal of moving our athletics program toward greater financial self-sufficiency.

This direction makes economic sense for Rutgers. It is not truly feasible to shut the project down, even for a year, because that would cost us millions of dollars. We need to build – and fill – the seats. Completing the stadium also makes sense for New Jersey. This is a major capital project that will provide jobs for hundreds of working people at a challenging time for our state’s job market. (See http://speakup.rutgers.edu/stadium.shtml for a press release and other information on the stadium expansion.)

Rutgers is committed to a high quality, well-managed, and competitive athletics program to complement our excellence in academics, research, and outreach. The actions we are taking, both in the management of the Division of Intercollegiate Athletics and in the construction and financing of Rutgers Stadium expansion, will serve the university, the state, and our many Rutgers fans and supporters well.

Richard L. McCormick
Rutgers, The State University of New Jersey

Sunday, December 21, 2008

Edna on Mac OS X

While I am warming up to iTunes thanks to my new AppleTV, there are times that I think it is more than what I need to listen to my music collection. Call me old school, but there's a certain elegance to simply browsing a folder and clicking on a song and expecting it to just play without bells and whistles.

While Finder can do this locally on a Mac, I want access to my home library of music remotely from work. Using iTunes and Mojo is an option for remote access to my iTunes library, but I somehow feel this combo is going a little overboard. It feels like I'm using a sports car to drive from my garage to my curbside mailbox to pick up the mail.

That's where Edna comes in. Made "back in the day", she's a simple Python script that serves up m3u's of your audio files just as you have them organized in directories and sub-directories. A subsequent hack made it possible to do simple searches on the directory trees (one-word searches, apparently). Before it gave up the ghost, I used Edna on my CentOS box and forwarded the port through my firewall.

Since my jump to AppleTV, I've come to rely more and more on my iTunes for storing my music. I decided to get Edna working on Mac OS X and point it to my iTunes collection.

So to share this experience with those interested, I have included my steps here. I put together a tarball with all of the necessary files for the reader's convenience. If the reader decides to use my tarball, they should read the readme files and understand how to edit the configuration file for edna, at least. For instance, I believe the conf file in the tarball points to "/Volumes/itunes/music", not "/Users/jason/Music/iTunes".

First I had to get the Edna package and patch it with the search patch.

Then I had to install it with make install ("make" comes with the Xcode package).

Without make from Xcode, you can run the install commands below by hand instead. I had a difficult time getting python to see the "/usr/lib/edna" directory, so I had to soft link the two include files (ezt.py and MP3Info.py). Oddly, as a side note, when I upgraded my Macbook from Tiger to Leopard, and python 2.5 replaced 2.3, these links went away, but edna still worked. When I did a fresh install of Edna on a Leopard box, I still had to make the soft links. Weird.

sudo install edna.py /usr/bin/edna
sudo install -d /usr/lib/edna /usr/lib/edna/templates /usr/lib/edna/resources
sudo install ezt.py /usr/lib/edna
sudo install MP3Info.py /usr/lib/edna
sudo install -m644 templates/* /usr/lib/edna/templates
sudo install -m644 resources/* /usr/lib/edna/resources
cd /usr/lib/python2.5
sudo ln -s /usr/lib/edna/MP3Info.py
sudo ln -s /usr/lib/edna/ezt.py

Then I had to configure the conf file. The conf file in my tarball serves on port 8000.

sudo cp edna.conf /etc
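For those not using my tarball: edna's conf is a simple INI-style file, and from memory the relevant bits look roughly like this (the section and key names here are my recollection of edna's sample conf, so verify against the copy that ships with the package; the paths and label are just examples):

```ini
[server]
# port edna listens on
port = 8000

[sources]
# dirN = path = label shown in the browser (the label is optional)
dir1 = /Users/jason/Music/iTunes = iTunes Library
```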

Test it.

sudo /usr/bin/edna -c /etc/edna.conf
telnet localhost 8000

Then I had to create a service to run it at boot up. Using the Property List Editor, I created a StartupParameters.plist file with the requisite entries. I hacked together the startup file using the typical rc.common entries.
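For reference, my startup file boiled down to something like the sketch below. This is from memory, not a verbatim copy: the ConsoleMessage strings and the killall approach are my own choices, and the service name must match the "Description" in the plist.

```shell
#!/bin/sh
# /Library/StartupItems/MyEdnaServer/MyEdnaServer
# Sketch of an rc.common-style startup item for Edna.

. /etc/rc.common

StartService () {
    ConsoleMessage "Starting Edna"
    /usr/bin/edna -c /etc/edna.conf &
}

StopService () {
    ConsoleMessage "Stopping Edna"
    killall edna
}

RestartService () {
    StopService
    StartService
}

RunService "$1"
```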

sudo mkdir /Library/StartupItems/MyEdnaServer
sudo cp MyEdnaServer StartupParameters.plist /Library/StartupItems/MyEdnaServer

SystemStarter can be used to test functionality. Note: if you are going to roll your own start-up and plist file, be sure to use the same name for your service as the "Description" in your plist file. It turns out that SystemStarter is pretty moody when it comes to descriptions.

sudo SystemStarter start MyEdnaServer
telnet localhost 8000

Using the handy "defaults" command, I then changed the firewall rules to allow connections to this new daemon. You could use the GUI instead, but as it was I did this remotely from work when I realized WHY I was having so much trouble connecting to Edna when it worked well at home that morning. Duh.

sudo defaults write /Library/Preferences/com.apple.sharing.firewall firewall -dict-add 'MyEdnaServer' '<dict><key>editable</key><integer>1</integer><key>enable</key><integer>1</integer><key>port</key><array><string>8000</string></array><key>row</key><integer>100</integer></dict>'

And that was that!

Monday, October 13, 2008


I bought an AppleTV this weekend and am in love with it. I can easily play many of my videos, all of my music, and also be entertained by YouTube and podcasts.

My Linux "entertainment center" started to die last week. I believe it is an issue with the NIC that is causing the kernel to hang. Unfortunately, the NIC is integrated into the motherboard, so I believe it was a bad omen.

Unbelievably at the same time, my 6 year old G4 iMac died. My kids used it to play games. Imagine how insufferable my daughter became when this happened: she kept saying over and over "I won't know how to spell my name anymore!"

SO, I bought an AppleTV, a Mac Mini, and a 640GB external USB hard drive. It took me several hours to transfer my mp3 music and mp4 video collection off of the old Linux box, through the network, onto the HFS+ formatted external drive connected to my Macbook, as the Linux kernel would hang every 30 minutes or so, killing the transfer. Once I finished that, I plugged the drive into the (prepped and patched) Mac Mini, positioned far enough away that the kids can't accidentally knock it over.

I added iTunes to my daughter's login start up items as "hidden", unchecked the "Copy files to iTunes Music folder when adding to library" box in preferences, and then told iTunes to add that drive to its library. To be safe, I set Finder preferences to hide hard drives from the desktop (nothing like a 1-year-old pounding on the keyboard and changing drive names to ruin your day).

I then plugged in the AppleTV using the component out (my HDTV is 6 years old and predates the popularity of HDMI), went through the setup dialogue, and pointed it to the Mac Mini. That involved restarting iTunes on the Mac Mini so it would immediately see the Bonjour broadcast of the AppleTV, then selecting it under "Devices" and entering the passcode given by the AppleTV. I then opened up iTunes on my Macbook and on my wife's Dell to share their libraries with the AppleTV. That involved some firewall changes on the Macbook and the Dell, as TCP 3689 needs to be open for iTunes sharing.

The downside: out of the box, AppleTV won't play my library of DIVX avi's. It is pretty specific when it comes to following the iTunes standards of mp4. Since most of my movie library was in mp4, I wasn't that disappointed. At least, not disappointed enough to re-encode my avi's. Also, I need to ensure that iTunes is kept up and running on the Mac Mini, something that I am sure will be a chore since it is on the kids' computer.

I also noticed that every now and again, AppleTV gives me an annoying pause when navigating the menus. I think that has more to do with the vast size of my mp3 library as it has a lot of work to do to synchronize itself.

Saturday, September 27, 2008

Active Directory Authentication and RHEL, pt1

Ok, so it, again, has been a couple of months since my last technical post. The whole RHEL cluster thing has been tossed to the side for now so there was nothing to report.

In searching for the command to extend the Satellite Server's database, I came across my own blog entry for just that. I realized that I've neglected my own blog so I figured I should add something.

One project I've been working on over the past month or so has to do with overcoming the expected abysmal connectivity from Red China during the 2008 Olympics. The application folks were turned on to Aspera by our colleagues at Fox News. This software is, essentially, a fork of OpenSSH. It is designed to send file transfers through a UDP-based data channel (they call it "fasp"), which has the effect of beating TCP-based transfers over high-latency connections. Basically, Aspera transfers beat FTP hands down. They beat it so well that transfers can move at wire speed and overwhelm all other traffic. Fortunately, Aspera has built throttling mechanisms into fasp that help guard against killing networks.

So, what does this have to do with Active Directory authentication on RHEL? Plenty for us. Since we'd be placing this thing on the bad-bad internet expecting reporters from Beijing to upload hours' worth of the Star Spangled Banner from any ol' internet cafe, authentication is a concern. The speed of this project forced us to put it in place before getting any SA or Help Desk resources on board with support. I don't like the idea of an SA or engineer getting called at 3am because some luddite in Beijing can't figure out their password. Linking logins on the Aspera server to AD was a critical component.

Since Aspera runs on Solaris, RHEL and Windows, we expected to stick it on Windows and call it a day. Alas, there was one problem: lockout DoS attacks. Windows natively doesn't protect against this. RHEL does with pam_tally. RHEL also can handle LDAP authentication with an AD server. In trying to stick with a non-complicated roll-out in order to make ongoing SA support easy, RHEL became the OS of choice here.

So, there we were with the following solution: Aspera and RHEL with pam_tally and Active Directory authentication. Unfortunately, a lot became trial and error since there seems to be a dearth of real world examples on how to get RHEL to talk to an AD server. It turns out to be very straightforward and simple.

The weird thing about Aspera's software, however, is that they spent a lot of effort making fasp and then slapped on a generic web interface. They expect you to hire consultants to make the web interface professional looking. We tried in vain to get Apache to authenticate via LDAP without opening ourselves up to the lockout DoS. We even tried to compile in "mod_auth_pam" to take advantage of pam_tally, but it seems to refuse to use LDAP over SSL through pam, even though LDAP in the clear worked fine. I believe it has something to do with getting non-privileged UID access to the OpenSSL libraries or maybe the originating LDAPS port.

In the interest of getting up and running in time to capture Team USA kicking ass on video, we gave up trying to figure out the issue and decided to stick with a regular "htpasswd" authentication scheme. In the end, I left mod_auth_pam in just to use pam_tally.

What follows is the fruit of our labors in this endeavor.

Install RHEL 5

Install Apache

Download the Aspera Enterprise Server for RHEL from the Aspera web site.


First, install the RPM as you would any other RPM. The bulk of the software installs in /opt, though binaries drop into the usual places under /usr. Next, configure the license by placing the provided license key into “/opt/aspera/etc/fasplicense.txt”.

Configure the Aspera Enterprise Server Software

Make a dedicated unix account for transfers (say, “ASPERA”). It is OK to lock the account from logins and even give it a shell of “/bin/false”. Set the home directory to the directory all file transfers will work out of (“/home/ASPERA” is sufficient). We will use a UID of 7000 in our setup.

Now, add user ASPERA's UID and home directory to “/opt/aspera/etc/docroot” on one line, separated by a “:”.

# cat > docroot
7000:/home/ASPERA
^D
# cat docroot

Set world writable and executable permissions on user ASPERA’s home directory so that everyone can access its contents.

# chmod 777 /home/ASPERA

Finally, you will need to hack one of Aspera’s perl scripts to allow everyone to share the same directory for file transfers. Near line 826 of “/opt/aspera/var/webtools/scripts/aspera-dirlist.pl”, change the command

$sought = $id.':';

to read

$sought = '7000:';

where “7000” is the UID of the unix account “ASPERA”.
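If you'd rather not make that edit by hand, a sed one-liner like this should do the same thing (an assumption on my part that you have GNU sed; the -i.bak keeps a backup of the original script):

```shell
# swap the per-user docroot lookup for the shared UID, keeping a .bak copy
sudo sed -i.bak "s/\$sought = \$id\.':';/\$sought = '7000:';/" \
    /opt/aspera/var/webtools/scripts/aspera-dirlist.pl
```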

Edit “/opt/aspera/etc/aspera.conf” to reflect these settings for bandwidth caps and target rate flows.

Under "Central":


Under "Web":

Be sure to restart the aspera service.

# /usr/sbin/service asperacentral restart

Configure Apache

Within “/opt/aspera/etc”, create a group login and password in the “htpasswd” format within a file called “webpasswd”. We chose to use “ASPERA” as the username.

# htpasswd -b -c webpasswd ASPERA SOMEPASSWORD

Add the “ASPERA” stanza to the bottom of “/etc/httpd/conf/httpd.conf”. We want everyone to be able to reach the Aspera directory without using "http://asperaserver.corp.com/aspera/user" all the time. Keeping root pointed at "aspera/user" is a good idea for this.

(httpd.conf entry to follow on another post)
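Until that post materializes, here is a rough sketch of the shape such a stanza typically takes. To be clear, this is not the config we actually ran: the AuthName and the assumption that the password file sits at “/opt/aspera/etc/webpasswd” are mine.

```apache
# Hypothetical sketch of an htpasswd-protected Aspera directory
<Directory "/opt/aspera/var/webtools">
    AuthType Basic
    AuthName "Aspera Transfers"
    AuthUserFile /opt/aspera/etc/webpasswd
    Require valid-user
</Directory>
```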

Set DocumentRoot to be the Aspera directory.

DocumentRoot "/opt/aspera/var/webtools"

Copy “.htaccess” from “/opt/aspera/var/webtools/user” to “/opt/aspera/var/webtools”.

Configure LDAP

Create a new “/etc/ldap.conf” with the following entries. LDAP will be used to talk to the Active Directory servers, over SSL, to provide password authentication. You need to have a read only AD account created for the initial binding. This user account is used to lookup accounts to provide the password hash.

host WINDOWSDC1.domain.corp.com WINDOWSDC2.domain.corp.com
base dc=domain,dc=corp,dc=com
binddn CN=READONLYUSER,OU=Users,OU=USA,DC=domain,DC=corp,DC=com
timelimit 120
bind_timelimit 120
idle_timelimit 3600
pam_login_attribute samaccountname
ssl on
tls_cacertdir /etc/pki/tls/certs
debug 5

Download your Certificate Authority certificates and be sure that they are in PEM format. Ours come as PC-friendly pkcs7.

# openssl pkcs7 -inform DER -in CA.p7b -print_certs -text -out CA.pem

Our CA certs have multiple entries, so the certificates from our pem file had to be separated out into different files. Fortunately, someone has addressed this issue; the following Perl script can do the work for you.

# Splits a certificate file with multiple entries up into
# one certificate per file
# Artistic License
# v0.0.1 Nick Burch

my $filename = shift;
die("Usage:\n cert-split.pl <certificate-file>\n") unless($filename);
open INP, "<$filename" or die("Unable to load \"$filename\"\n");

my $ifile = "";
my $thisfile = "";
while(<INP>) {
    $ifile .= $_;
    $thisfile .= $_;
    if($_ =~ /^\-+END(\s\w+)?\sCERTIFICATE\-+$/) {
        print "Found a complete certificate:\n";
        print `echo "$thisfile" | openssl x509 -noout -issuer -subject`;
        print "\n";
        print "What file should this be saved to?\n";
        my $fname = <STDIN>;
        chomp($fname);
        open CERT, ">$fname" or die("Unable to write \"$fname\"\n");
        print CERT $thisfile;
        close CERT;
        $thisfile = "";
        print "Certificate saved\n\n";
    }
}
close INP;
print "Completed\n";

Next, move these new PEM files to “/etc/pki/tls/certs” and softlink their x509 hash values to their names. Hashes are calculated with “/usr/bin/openssl x509 -noout -hash -in FILENAME”. OpenSSL matches the x509 hash of a certificate’s subject against a filename of the same name in order to know which certificate to use to verify authenticity. Observe the soft linking example below.

# cp *.pem /etc/pki/tls/certs; cd /etc/pki/tls/certs
# ls -alF
total 376
drwxr-xr-x 2 root root   4096 Jul  2 11:59 ./
drwxr-xr-x 7 root root   4096 Jul  2 10:22 ../
-rw-r--r-- 1 root root 249373 Jan 12  2007 ca-bundle.crt
-rw-r--r-- 1 root root   5054 Jul  2 10:31 ca1.pem
-rw-r--r-- 1 root root   4066 Jul  2 10:31 ca2.pem
-rw-r--r-- 1 root root   5060 Jul  2 10:31 ca3.pem
-rw-r--r-- 1 root root   5055 Jul  2 10:31 ca4.pem
-rw-r--r-- 1 root root   5054 Jul  2 10:31 ca5.pem
-rw-r--r-- 1 root root    610 Jan 12  2007 make-dummy-cert
-rw-r--r-- 1 root root   1832 Jan 12  2007 Makefile

# for i in `ls *pem`; do ln -s $i `openssl x509 -noout -hash -in $i`.0; done
# ls -alF
total 396
drwxr-xr-x 2 root root   4096 Jul  2 12:06 ./
drwxr-xr-x 7 root root   4096 Jul  2 10:22 ../
lrwxrwxrwx 1 root root      9 Jul  2 12:06 184d21f0.0 -> ca1.pem
lrwxrwxrwx 1 root root      9 Jul  2 12:06 3cea7904.0 -> ca4.pem
lrwxrwxrwx 1 root root      9 Jul  2 12:06 64cc7d0a.0 -> ca2.pem
lrwxrwxrwx 1 root root      9 Jul  2 12:06 a4a62ce2.0 -> ca5.pem
-rw-r--r-- 1 root root 249373 Jan 12  2007 ca-bundle.crt
lrwxrwxrwx 1 root root      9 Jul  2 12:06 d32b38ca.0 -> ca3.pem
-rw-r--r-- 1 root root   5054 Jul  2 10:31 ca1.pem
-rw-r--r-- 1 root root   4066 Jul  2 10:31 ca2.pem
-rw-r--r-- 1 root root   5060 Jul  2 10:31 ca3.pem
-rw-r--r-- 1 root root   5055 Jul  2 10:31 ca4.pem
-rw-r--r-- 1 root root   5054 Jul  2 10:31 ca5.pem
-rw-r--r-- 1 root root    610 Jan 12  2007 make-dummy-cert
-rw-r--r-- 1 root root   1832 Jan 12  2007 Makefile


Configure PAM

In “/etc/pam.d/sshd”, configure the pam stack to include pam_tally and pam_ldap. The pam_tally module will prevent failed password attempts on this server from triggering an account lockout from the Active Directory. If a user enters a wrong password five times (“deny=5” in the option list), they will be locked out from this server only for 61 minutes (one minute longer than Active Directory’s automatic reset). After 61 minutes, pam_tally will automatically reset and allow them to try again.

The pam_ldap module will allow users to authenticate their passwords from Active Directory rather than from “/etc/shadow”. Password management for those accounts should then come from the Dow Jones Help Desk, as with all other Active Directory passwords.

The order in which these modules are referenced within “/etc/pam.d/sshd” is important. The example below allows a local user account to authenticate against “/etc/shadow” after trying against the Active Directory server. This allows for normal Unix logins.

# cat /etc/pam.d/sshd
auth       requisite   pam_tally.so onerr=fail deny=5 unlock_time=3660
auth       sufficient  pam_ldap.so
auth       include     system-auth
account    required    pam_nologin.so
account    requisite   pam_tally.so
account    sufficient  pam_ldap.so
account    include     system-auth
password   include     system-auth
session    optional    pam_keyinit.so force revoke
session    include     system-auth
session    required    pam_loginuid.so

Create User Accounts
Create user accounts for every person expected to use this software. The user account names must match their Windows Active Directory names and their shell must be “/bin/aspshell”. Their unix passwords in “/etc/shadow” can remain locked for added security.
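A sketch of what that looks like per user (the usernames here are hypothetical; local passwords stay locked because authentication happens via pam_ldap):

```shell
# hypothetical sketch: one local account per AD user, restricted shell,
# local password left locked
for u in jdoe asmith; do
    sudo useradd -m -s /bin/aspshell "$u"
    sudo passwd -l "$u"
done
```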

Thursday, September 25, 2008

Thoughts on the current financial crisis

Here is the content of an email I sent out to some friends explaining the current financial crisis in the US:

"Bailout" is certainly the wrong name for this.

Picture this: you, me, and Frank are playing 5 card stud, no peek, jacks or better split the pot. Anyone who rolls a 2 of clubs immediately has to fold and loses their ante and bets. The ante is high, say, the title to your car. Are you going to play? A rational person probably would not, even though you have a poor chance of being dealt the 2 of clubs.

That's the game we're playing. No one knows who has a rotten hand, and that is keeping everyone from playing.

So, with that game in mind, picture this: you're the CFO of a small bank and you are sitting on $1 million. You can't keep that in your safe, so you have to invest it somewhere. You shop around and find an investment that is likely to yield a percent or two better than a Treasury bond and is given a high rating by S&P and Moody's. It is backed, in part or in whole, by mortgages. You buy it, not just because it seems safe, but because you were encouraged by the fact that everyone was buying them. You did your job: maximizing contemporaneous shareholder value (biggest share price now).

Meanwhile, some circus monkeys in an investment firm somewhere weren't entirely truthful in creating an investment vehicle that was based on a pool of mortgages. When working with the rating agencies to gauge the overall risk, they kinda hid the truth behind the mortgages they were bundling. The guys working at the ratings agencies are folks who didn't do well enough in business school to go to work for an investment bank, so they ended up working for what are essentially (apologies to my co-workers) newspapers (S&P and Moody's) and are not the sharpest pencils in the box. They even allowed themselves to be "wined and dined" by the firms while working on the ratings, because at $50,000 a year, a stay in a nice hotel with a lobster dinner is a nice change of scenery. Between mouthfuls of lobster, they gave their thumbs up.

These investment vehicles were then divided up and resold in packages with other investments by some other unwitting but enterprising young business school guy working at a different investment firm, which were in turn chopped up again by some other guy and resold with other entirely different assets. So now there is this investment vehicle that might be backed in part by someone who will not make his mortgage, but for the most part, the investment is probably backed by entirely other things that have nothing to do with home mortgages.

You bought this investment with the $1 million you had before. Now, there's no way for you to know just what portion of that $1 million you will see evaporate because someone can't pay their mortgage. It could be, literally, $1, or it could be the whole $1 million. You just don't know, and you can't know, as it would probably take $1 million to figure it out. If you and everyone else who bought these things pooled together, you all would lose maybe 2% (I'm making this number up, but it is much less than 15%) of your overall investment due to delinquent mortgages. It's really nothing overall, but no one knows for sure how much of that 2% they own themselves.

No one is going to buy it from you because no one can put a finger on its worth. Worse still, you can't use it as collateral for a loan from a bigger bank because no one will take it. In January you could use it as the basis for $10 million in loans to small business. Thanks to the recent "mark to market" accounting rule, you now can't use that investment vehicle anymore and you've essentially had to remove $10 million from the economy. That $10 million is no longer available for the corner cabinet shop to borrow to make this month's payroll while they're waiting on their customers to pay for the cabinets they've ordered.

Now, the government is coming along saying they're willing to buy this investment from you based upon an auction. It'll probably mean you will have to sell your $1 million investment for something like $500,000. $500,000 is better than $0, and it allows you to get back in the business of using at least $500,000 as collateral or loaning out $5 million to that cabinet shop to make payroll.

Should you lose your promised compensation package because you unloaded this to the government? That's unfair, in my opinion. In fact, it creates what is called an agency problem. If I'm that CFO, I might think twice about participating, and I might tell my boss that we're better off waiting for this whole thing to blow over as we'll likely to get more than $500,000 for it. Meanwhile, that cabinet maker has to close down because he can't make payroll.

In my opinion, yes, the guys that made the crappy bundles in the first place and the rating agencies bear most of the blame. The thing about the ratings agencies is that they are free and independent and there are only two of them. The whole world relies on them so you can't beat them up just yet and you can't do it so publicly. God, the horror that would entail if they had to stop doing ratings suddenly!

As far as nailing the guys who did the bundling, that market is so complicated and intertwined, the guilty will all be dead before we figure out who did what. I remember a year or two ago specialists had to be brought in to meet with the Fed to explain how these bundles worked. It went even above THEIR heads!

Yes, this is directly related to Fannie and Freddie [Mac]. They are the largest holders of these investments. The more "illiquid" these papers became, the more pressure they had to come up with other assets for collateral, until they had no other assets available to use.

The systemic risk is based on the fact no one knows who has what exposure so no one is willing to part with their cash. Investment banks who trade with each other thousands of times a day, using blind faith, are eyeing each other suspiciously. Banks are told they need to come up with more collateral than they ever thought they would need. In the end, it is the small business guy who can't get a loan to operate. Our economy is based on the short-term use of cash. I borrow today, loan tomorrow, and we all get to do things like make payroll, buy office equipment, build inventory, etc. off of this practice. That's the lifeblood of business and it is quickly drying up.

Thursday, May 1, 2008

RHEL 5 Cluster and GFS, the saga, part II

Surprise, surprise: the "group" commands that go undocumented in RHEL's yum man page do indeed install suites of software. The installation of GFS and RHEL Cluster is simple.

# yum groupinstall "Cluster Storage" "Clustering"

That's it. Now to actually get it working....
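For the curious, yum can also tell you what those groups actually drag in. These are the group names from my install; double-check yours against grouplist:

```shell
# List the package groups the subscribed channels offer
yum grouplist

# Show which packages a group contains, before or after installing it
yum groupinfo "Cluster Storage"
yum groupinfo "Clustering"
```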

Monday, April 28, 2008

RHEL 5 Cluster and GFS, the saga begins

Well, my next techie project is to get GFS up and running. To do this, we needed to reinstall several servers with RHEL 5. That was no sweat. Now, to use GFS, we need to install RHEL Cluster first. Ok, I can understand that: GFS is a part of RHEL Cluster and draws on its components to do its job.

First problem: how on earth do you install RHEL Cluster? Our first victim is registered with our Satellite server and subscribed to the appropriate channels (don't even get me started on how much effort it was to get the Satellite server to subscribe to the RHEL 5 cluster channels in the first place). The manual for RHEL 5 cluster is not very helpful. It basically says to install it like you would RHEL 5.

"...secure and install the software as you would with Red Hat Enterprise Linux software."


I want to avoid running yum a dozen or so times just to suck down the right rpm's. Apparently in the olden days, you could instruct up2date to pull down the full cluster suite. The man page for yum on RHEL 5 doesn't allude to similar functionality, though there are these curiously undocumented switches with the word "group" in them.
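If you want to poke at those switches yourself, the least destructive one just lists what's available. The grep pattern below is my guess at how the cluster groups are named, so treat it as a starting point:

```shell
# See whether the cluster channels expose installable package groups
yum grouplist | grep -i clust
```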

Man pages. Yet another reason why I was a big fan of OpenBSD. It packages the best damn man pages a Unix guy could ask for. RHEL's man pages were written by people who apparently work the Help Desk phones at my office.

I suspect this will be a long thread, so rather than just wait for the end and blog the results, I'll blog the experience and finish with a how-to. Hopefully the next entry will be a no-hassle how-to for installing Cluster with yum.

Friday, April 18, 2008

VVR and DD

It's time to start this blog back up.

Recently, I've had the pleasure of having to wipe clean disk storage arrays that hold Veritas volumes in an RVG. The pleasure did not come from executing "vxdg destroy", but from having to figure out a way to preserve the data at both sides of the rlink without performing a full resynchronization.

In this case a full resynchronization would have taken 22 days, a time period not acceptable to the client. One site is in South Brunswick, NJ, and the other is located somewhere near the Kingdom of the Rat. Our bandwidth and latency between the two sites are not the best that money can buy.

The answer was to do a block level copy of the volumes to portable USB drives and ship those drives between Orlando and South Brunswick.

I created a wiki page on our internal site that outlined my experience and I have copied over the content below. The setup involved was:

Veritas Cluster Server (VCS)
Veritas Global Cluster Manager (GCM)
Veritas File System (VxFS)
Veritas Volume Manager (VxVM)
Veritas Volume Replicator (VVR)
1 Pair of RHEL on Dell servers (RHEL 4 on Dell 2950's) in Orlando, connected to a single AX150 (a rebranded Clariion)
1 Single RHEL on Dell in South Brunswick, connected to a single AX150.

The AX150's handled the RAID setup, and EMC Powerpath handled the redundant HBA connections.

We had to convert the storage at both ends from RAID 5 to RAID1+0. Since the AX150 ONLY allows you to present disks as RAID 5 or RAID 1+0, and not as a JBOD, we couldn't just let VxVM handle the RAID. Furthermore, VVR does not support replicating RAID 5 Veritas Volumes, so when the client wanted to go cheap and do a RAID 5 (against our advice, of course), we had to use an external RAID solution.

After everything was built and right before the go-live date, we demonstrated how poor the performance of the whole solution was with RAID 5. The client then coughed up the money for the extra disks needed for RAID 1+0 and we set about trying to do the conversion as fast as possible.

1. Make the vxmake files

Create vxmake input files for each volume at both sites. Do this for every volume in the rvg, '''including the srl volume'''. Keep these files safe and make copies.

# vxprint -hmvpsQq -g DISKGROUP VOLUMENAME > /var/tmp/VOLUMENAME.vxprint
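Rather than typing that once per volume, a quick loop can sweep the whole disk group. This assumes the volume name sits in the second column of vxprint's volume listing, so eyeball the output on your version first:

```shell
# Capture a vxmake description for every volume in the disk group,
# including the SRL volume
for vol in $(vxprint -g DISKGROUP -vQq | awk '{print $2}'); do
    vxprint -hmvpsQq -g DISKGROUP "$vol" > /var/tmp/"$vol".vxprint
done
```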

Though it won't be used, it is also probably a good idea to capture the RVG and RLink information for reference.

# vxprint -Vm -g DISKGROUP > /var/tmp/RVGNAME.vxprint
# vxprint -Pm -g DISKGROUP > /var/tmp/RLINKNAME.vxprint

2. Prep the systems

Mount the portable USB drives.

# mount /dev/DEVICE /mnt

Stop Veritas Cluster and unmount the RVG volumes. You may have to stop service groups and completely turn off VCS on both the primary and secondary. Ensure that the volumes are unmounted but in a 'started' state within VxVM.
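The VCS shutdown itself is the usual drill; something along these lines, with your own group and host names substituted:

```shell
# Take the service groups offline, then stop VCS cluster-wide
hagrp -offline SERVICEGROUP -sys HOSTNAME
hastop -all
```

hastop -all will offline the groups on its own, but doing it explicitly first leaves no doubt about what stopped where.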

# vxinfo -g DISKGROUP

3. Backup the data

On the primary, perform a block-level copy of each volume to the portable disk using 'dd'. The block size is set high to speed the copy process.

# dd if=/dev/vx/rdsk/DISKGROUP/VOLUMENAME of=/mnt/VOLUMENAME.dd bs=1M
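I'd also suggest looping the copies and stashing a checksum next to each image; it's cheap insurance for data that's about to ride in a truck. Volume names here are placeholders:

```shell
# Copy each volume raw and record an md5 to verify after shipping
for vol in VOLUME1 VOLUME2; do
    dd if=/dev/vx/rdsk/DISKGROUP/"$vol" of=/mnt/"$vol".dd bs=1M
    md5sum /mnt/"$vol".dd >> /mnt/checksums.md5
done
```

At the far end, "md5sum -c /mnt/checksums.md5" confirms nothing got mangled in transit.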

4. Reconfigure VxVM

Destroy the disk groups at both sites.

# vxdg destroy DISKGROUP

Add the new hardware to the hosts. You may have to reboot (ensure that VCS '''won't''' restart), run '''fdisk''', move the "/etc/vx/*info" files aside, run '''vxdctl enable''', rerun '''vxdiskadm''', or even run '''vxinstall'''. Add your new hardware to VxVM into a diskgroup of the same name that was used before. It is important to use the same disk names as before and have disks of the same size. If not, some heavy manual editing of the vxprint files will be necessary.

Recreate the volumes at both sites. Run '''vxmake''' using the vxprint files you created. You will need to edit the vxprint output files to remove the "rvg" references and set the "path" stanza to the correct device names.
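My edits boiled down to deleting the rvg field and retargeting the device paths, which sed handles in one pass. The toy record below just illustrates the substitution; the field names are from memory, so check them against your own vxprint output:

```shell
# A toy record standing in for real vxprint output
cat > /tmp/demo.vxprint <<'EOF'
vol VOLUMENAME
        rvg=RVGNAME
        path=/dev/vx/dmp/OLDDEVICE
EOF

# Strip the rvg association and retarget the device path
sed -i -e '/rvg=/d' \
       -e 's|/dev/vx/dmp/OLDDEVICE|/dev/vx/dmp/NEWDEVICE|' /tmp/demo.vxprint
```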

# vxmake -g DISKGROUP -d /var/tmp/VOLUMENAME.vxprint

Initialize the volumes at both sites. For the RVG data volumes, merely enable each of them since you will dd the data back on to the RVG volumes.

# vxvol -g DISKGROUP init enable VOLUMENAME

For the SRL volumes, zero them out. This process will take a long time so you may wish to hold off on doing this until there is a suitable time to execute it. A good time is while waiting for the portable USB drive to arrive at the secondary site.

# vxvol -g DISKGROUP init zero SRLVOLUME

5. Restore the data

Primary Site

Mount the portable USB disks to the primary server and copy each volume's data from the portable disk back onto the primary. Use "rdsk" in the path, NOT "dsk".

# dd if=/mnt/VOLUMENAME.dd of=/dev/vx/rdsk/DISKGROUP/VOLUMENAME bs=1M

Once the data is copied over, activate the volumes. '''''DO NOT MOUNT THE VOLUMES YET AS THAT WILL TAINT THE DATA!!!'''''

# vxvol -g DISKGROUP init active VOLUMENAME

Secondary Site

Unmount the portable and transport it to the secondary machine.

# umount /mnt

Mount the portable USB disks to the secondary server and copy each volume's data from the portable disk back onto the secondary. Use "rdsk" in the path, NOT "dsk".

# dd if=/mnt/VOLUMENAME.dd of=/dev/vx/rdsk/DISKGROUP/VOLUMENAME bs=1M

Once the data is copied over on the secondary, activate the volumes. '''''DO NOT MOUNT THE VOLUMES YET AS THAT WILL TAINT THE DATA!!!'''''

# vxvol -g DISKGROUP init active VOLUMENAME

6. Recreate the RVG

Prep the VVR network. Plumb up and activate the VVR IP's by hand on both the primary and secondary.
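On these RHEL boxes that plumbing is just an ifconfig alias; the interface and addresses below are made up, so substitute your replication subnet:

```shell
# Bring up the dedicated VVR address as an alias on the replication NIC
ifconfig eth1:1 192.168.50.10 netmask 255.255.255.0 up

# Make sure the far end answers before touching vradmin
ping -c 3 192.168.50.20
```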

On the primary, create the RVG with '''vradmin'''.
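The actual command didn't survive the copy from the wiki; from memory, the standard form is the following, with the data volumes listed ahead of the SRL (names here are placeholders):

```shell
# Create the primary RVG from the data volumes and the SRL
vradmin -g DISKGROUP createpri RVGNAME VOLUME1,VOLUME2 SRLVOLUME
```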


On the primary, add the secondary volumes as secondaries with '''vradmin'''.
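Again the command got eaten in the copy; if memory serves, it goes like this, with vradmin matching up the identically named volumes on the secondary:

```shell
# Attach the secondary host's matching disk group and volumes
vradmin -g DISKGROUP addsec RVGNAME PRIMARY_HOSTNAME SECONDARY_HOSTNAME
```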


7. Verify and Start Replication

On the primary, run the sync with verify command on the volumes for the whole RVG. With the "verify" option, the systems exchange and compare checksums and do not transfer data. This should take a fraction of the time a full resynchronization would take. For instance, an RVG set that would normally take three weeks to synchronize should take only six hours to verify.
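The command itself was lost in the copy; as best I recall it looks like this:

```shell
# Compare checksums across the rlink without transferring data
vradmin -g DISKGROUP -verify syncrvg RVGNAME SECONDARY_HOSTNAME
```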

When completed, the output should report that "the volumes are verified as identical." The RVG will then be ready to be started.


Forcibly start the replication.
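Another command that fell out of the copy; from memory:

```shell
# Force-start replication; the volumes were just verified identical,
# so there is nothing for autosync to do
vradmin -g DISKGROUP -f startrep RVGNAME SECONDARY_HOSTNAME
```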


At this point you can mount the volumes on the primary host to check the data.

8. Clean up and reboot

Ensure that VCS will start at boot up, unmount the usb drives, etc. Reboot both sides. Sometimes VVR will take several minutes to recognize itself on both sides after the first time VCS starts it up.

9. Special Note

With VVR pairs that have high network latency, low bandwidth, or a high number of hops in between, it may be necessary to keep the VVR packets from fragmenting. This is easily accomplished with vradmin by setting the VVR packet_size to something small enough to stay under the 1500 byte threshold once header information is added.

# vradmin -g DISKGROUP pauserep RVGNAME
# vradmin -g DISKGROUP set RVGNAME packet_size=1480
# vradmin -g DISKGROUP resumerep RVGNAME

Setting packet_size can only be done when UDP is the transport of choice.