We've started to evaluate Stratus' FT4300 using Red Hat Enterprise Linux.
What we've learned so far is impressive: the server operates as robustly with RHEL as it would with VOS. After installation, we unplugged modules while streaming a video from the RHEL server, and it never missed a beat.
It should be noted, however, that I/O redundancy is handled by RHEL itself: the hardware presents all of the I/O channels individually rather than some kind of metadevice representing a redundant pair of paths.
We did entitle the RHEL server to our Satellite server and noticed one problem right off the bat: the RHEL image installed by Stratus expects connectivity to Stratus' own yum server for updates. We commented this line out of /etc/sysconfig/rhn/sources:
yum Stratus_Technologies_ft_Linux_4.0 http://pman3.com/ftLinux/4.0/
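After the edit, the relevant part of /etc/sysconfig/rhn/sources looks roughly like this (the "up2date default" entry is the usual stock line and is an assumption here; only the commented yum line is from the actual file):
up2date default
#yum Stratus_Technologies_ft_Linux_4.0 http://pman3.com/ftLinux/4.0/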
Then we ran up2date, and it complained about initscripts right away:
Testing package set / solving RPM inter-dependencies...
There was a package dependency problem. The message was:
To solve all dependencies for the RPMs you have selected, The following
packages you have marked to exclude would have to be added to the set:
Package Name Reason For Skipping
======================================================================
initscripts-7.93.29.EL-1 Config modified
Now, one would be tempted to run "up2date -f" and force the issue, but I knew right away that Stratus had to have its hooks in somewhere. You see, even though the I/O redundancy is handled by RHEL, the hardware remains aware of when redundancy is lost. For instance, after restoring power to one of the modules while testing its resiliency, RHEL's MD had to remirror the root drive. During this time, the hardware flashed its LEDs in that characteristic way that VOS systems do to signal that the system is not currently in a redundant, fail-safe mode. Once mdadm showed the mirroring to be complete, the lights stopped flashing.
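For reference, this is roughly how we watched the rebuild; the md device name is an assumption for illustration:
cat /proc/mdstat
mdadm --detail /dev/md0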
A quick run of rpm's verify mode showed exactly which configuration files had been altered (the S, 5, and T flags indicate that the size, MD5 checksum, and modification time differ from the packaged versions):
[root@sbkrhelstratp01 rhn]# rpm -Vc initscripts
S.5....T. c /etc/inittab
S.5....T. c /etc/rc.d/rc.sysinit
[root@sbkrhelstratp01 rhn]#
A careful scan of the rc.sysinit file drew my attention to several operations annotated with Stratus comments; these seemed to involve RAID. I fully expected to find something in inittab requiring respawns, and sure enough, there they were:
osm:2345:respawn:/opt/ft/sbin/osm
ftmo:12345:respawn:/opt/ft/sbin/miceope
My options at this point include saving copies of these files and forcing the up2date, or perhaps reaching out to the yum server at Stratus to see whether it keeps an updated initscripts RPM.
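If I go the force route, the rough sequence would be something like this; I have not actually run it yet, and merging the Stratus changes back in by hand afterward is assumed:
cp -p /etc/inittab /etc/inittab.stratus
cp -p /etc/rc.d/rc.sysinit /etc/rc.d/rc.sysinit.stratus
up2date -f initscripts
diff /etc/inittab.stratus /etc/inittab
diff /etc/rc.d/rc.sysinit.stratus /etc/rc.d/rc.sysinit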
Ideally, I'd like to create a custom software channel on my satellite server based upon the yum server at Stratus.
Right now, I'm stalled.
Friday, June 29, 2007
Friday, June 22, 2007
SSH Key Agent and Screen
I love screen. I use it whenever I can. I even experimented a bit with ratpoison, that's how much I love screen. One thing that drove me mad, though, was that SSH's key agent (ssh-agent) and screen are not good buddies. The problem is that old screen windows keep pointing at the agent socket from the login that started them. If I detach my screen session, log out, log back in later, and reattach to that session, SSH inside it still points at the old, now-dead socket. What's the point of screen if I can't log out and log back in while keeping a persistent state of things? With SSH being core to everything I do, I can't go without it. At work, key agents are especially important with our smartcards.
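You can see the symptom from inside a reattached window; the stale socket path below is made up for illustration:
$ echo $SSH_AUTH_SOCK
/tmp/ssh-XXXXabcd/agent.1234   (left over from the old login; the socket no longer exists)
$ ssh-add -l
Could not open a connection to your authentication agent.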
So, I made a hack to allow me to forward my key info through my screen sessions. This hack is, well, a hack, but it works for me.
First things first, edit your .screenrc file to contain a line like this:
setenv SSH_AUTH_SOCK $HOME/tmp/socket
This makes every window in your screen session point to a custom socket path rather than the system-set socket for your key agent.
Next, make a script that does something like this:
#!/bin/sh
# Re-point our static socket path at the real agent socket for this login.
/usr/bin/rm -f "$HOME/tmp/socket"
/usr/bin/ln -s "$SSH_AUTH_SOCK" "$HOME/tmp/socket"
This script creates a softlink from our own socket to the real key agent socket as presented by SSH_AUTH_SOCK. I called this script "screen-ssh-agent" and stuck it in my personal bin directory. Now, for your login, you need something like this to execute:
~/bin/screen-ssh-agent
Old-timey SAs like myself use tcsh, so I just added this to my ".login". (Bash users can do the same in ".bash_profile".)
Now, after I log in to this box and kick off screen, running ssh from any window inside will refer to the statically named file "tmp/socket", which links to the real socket that sshd creates and uniquely names every time I log in.
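A quick sanity check from inside any screen window after reattaching, assuming the agent has keys loaded:
ls -l ~/tmp/socket
ssh-add -l
If the link points at a live socket, ssh-add lists your keys instead of complaining that it can't reach the agent.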
One key to rule them all!
Veritas Volume Replicator and Red Hat
Here's some advice: if you plan on using Veritas Volume Replicator with Red Hat Enterprise Linux, AVOID USING EXT3FS!
I've seen the combination simply hang while sending updates to the remote secondary. It would sit there failing to drain the Storage Replicator Log (SRL); vxrlink status just showed the log not draining.
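For the record, this is the sort of check we were watching; the disk group and rlink names here are placeholders:
vxrlink -g datadg status rlk_secondary_datarvg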
To remedy it, we had to force the SRL to overflow into DCM logging by creating a big enough bogus file (with mkfile or dd if=/dev/zero), and then use vradmin to resync. Forcing it to clear the DCM was the only way to make the problem go away and resume replication!
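Roughly, the workaround looked like this; the mount point, disk group, RVG name, and file size are placeholders, and the bogus file just has to be large enough to overflow the remaining SRL space:
dd if=/dev/zero of=/data/bogusfile bs=1M count=20000
vradmin -g datadg resync datarvg
rm /data/bogusfile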
This problem would appear almost at random, and we eliminated volume size as a factor: one volume was in the terabytes while another was several gigabytes, and both exhibited the problem.
Both ext2fs and vxfs worked fine. To preserve file system journaling, we went with vxfs. We had no reason not to; we had only chosen ext3fs in the first place because of performance questions about vxfs from a past project on Solaris. Since this new project was on RHEL, we found no reason not to convert to vxfs.
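For what it's worth, putting vxfs on a volume is straightforward once the data is backed up; the disk group and volume names here are placeholders:
mkfs -t vxfs /dev/vx/rdsk/datadg/datavol
mount -t vxfs /dev/vx/dsk/datadg/datavol /data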
So, stick with vxfs under VVR. YMMV, but it worked well for us.
Newly Gray
This is the first entry for my blog, Graying Matter. Posts will include matters relating to politics and technology.