How to suppress "The number of vSphere HA heartbeat datastores for this host is 1 which is less than required 2"

Whilst preparing my ApplicationHA lab for the SE interlock and Symantec Vision in the next couple of weeks, I was busily making sure that no errors or alarms were triggered and everything looked clean. One thing I did notice, and it has been there since I started building the lab, was the message "The number of vSphere HA heartbeat datastores for this host is 1 which is less than required 2".

Heartbeating over a datastore is something that was added with vSphere 5.0: the new FDM HA agent uses the storage subsystem as an alternate communication path and typically wants two datastores to communicate over. It's actually similar to Veritas Cluster Server, which heartbeats over two extra NICs and can also use disk-based heartbeats or a Coordination Point Server (CPS).

OK, so how do you remove or suppress the alarm?

There are a couple of ways to stop the message from appearing. The recommended way is to add redundancy by adding another shared datastore to the vSphere cluster; alternatively, you can set the "das.ignoreInsufficientHbDatastore = true" attribute in the advanced vSphere HA settings.

1. From within the vSphere Client right-click on the cluster and edit settings.

2. Select the vSphere HA feature and click “Advanced Options”

3. In the options column enter the tag “das.ignoreInsufficientHbDatastore”

4. In the value column enter “true” for the value.

5. Click on “OK” and then “OK” again to make the changes in the advanced options

6. Right-click on the ESX server that is displaying the warning symbol and select "Reconfigure for vSphere HA"

That's it. Once the reconfigure task completes, the warning should disappear and you are left with a clean ESX server, free of warning icons.

Converting a virtual IDE disk to a virtual SCSI disk

When converting a physical machine to a virtual machine using VMware Converter or vCenter Converter Enterprise, if an adapter type is not selected during the initial customization, the resulting virtual machine may contain an IDE disk as the primary OS disk.

You must convert the IDE disk to SCSI to get the best performance. If the primary disk is an IDE virtual disk, the newly converted virtual machine may fail to boot because the guest OS does not support the driver. A second reason for this issue is that in ESX 4.x the default disk type for Windows XP 32-bit virtual machine creation is IDE. This default can be changed manually during the virtual machine creation wizard by selecting the Custom option. Windows XP 64-bit will still use SCSI by default.

Note: For newer versions of Windows and Linux guests, the typical SCSI adapter types are the LSI Logic controllers. When using LSI Logic SCSI controllers in a Windows XP virtual machine, be sure to download and install the appropriate LSI driver before proceeding. For more information on downloading and installing LSI Logic SCSI drivers, see Storage Drivers for ESX 3.5.x and Microsoft Windows XP When Using the VMware LSI Logic Storage Adapter (1007035).
In situations where you are manually changing an IDE disk to a SCSI disk that holds a Windows operating system volume, you may need to repair the master boot record of the disk. Please see Repairing boot sector problems in Windows NT-based operating systems (1006556).

To convert the IDE disk to SCSI:

  1. Locate the datastore path where the virtual machine resides.

    For example:

    # cd /vmfs/volumes/<datastore_name>/<vm_name>/

  2. From the ESX Service Console, open the primary disk (.vmdk) using the vi editor. For more information, see Editing files on an ESX host using vi or nano (1020302).
  3. Look for the line:

    ddb.adapterType = "ide"

  4. To change the adapter type to LSI Logic change the line to:

    ddb.adapterType = "lsilogic"

    To change the adapter type to Bus Logic change the line to:

    ddb.adapterType = "buslogic"

  5. Save the file.
  6. From VMware Infrastructure/vSphere Client:
    1. Click Edit Settings for the virtual machine.
    2. Select the IDE virtual disk.
    3. Choose to Remove the Disk from the virtual machine.
    4. Click OK.

      Caution: Make sure that you do not choose delete from disk.

  7. From the Edit Settings menu for this virtual machine:
    1. Click Add > Hard Disk > Use Existing Virtual Disk.
    2. Navigate to the location of the disk and select to add it into the virtual machine.
    3. Choose the same controller type as in Step 4 as the adapter type. The SCSI ID should read SCSI 0:0.
  8. If a CD-ROM device exists in the virtual machine, it may need to have its IDE channel adjusted from IDE 0:1 to IDE 0:0. If this option is greyed out, remove the CD-ROM from the virtual machine and add it back. This sets it to IDE 0:0.
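If you prefer, the descriptor edit from steps 2 through 5 can be scripted rather than done in vi. This is only a sketch, assuming GNU sed is available and using a made-up descriptor file name (vm_name.vmdk); always back up the descriptor before changing it:

```shell
# Made-up descriptor fragment for illustration only; a real .vmdk
# descriptor contains many more ddb.* entries.
cat > vm_name.vmdk <<'EOF'
ddb.adapterType = "ide"
EOF

# Rewrite the adapter type in place (use "buslogic" for Bus Logic instead)
sed -i 's/^ddb.adapterType = "ide"$/ddb.adapterType = "lsilogic"/' vm_name.vmdk

grep adapterType vm_name.vmdk
# ddb.adapterType = "lsilogic"
```

A typo in the descriptor can leave the disk unopenable, so verify the file with grep (as above) before removing and re-adding the disk.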


Unix commands and tools you just can’t live without

Are you someone who never met a Unix command you didn’t like? OK, maybe not. But are there commands you just can’t imagine living without? Let’s look at some that have made a big difference on my busiest days and those that people I’ve worked with over the years have said are their most important essentials.

On Unix systems, there are commands that do what they need to do and then there are commands that knock your socks off day after day, saving you gobs of time, taking the tedium out of systems administration and giving you the insights that you need to keep your systems humming without making you work too hard. Here are some of the I-can't-live-without commands that my Unix buddies and I need every day, and some of the ways we use them.

top

There's just no getting along without top. While there are other performance commands that provide a lot more detail on how a system is performing, top provides the most critical information about how your system is working in the most succinct fashion. Fortunately, top is installed by default on a lot of systems. If you don't have it, get it. This command shows what your system's load looks like and highlights the processes that are hogging the bulk of your system's resources. Top also displays memory stats and swapping activity. Top is one of my all-time favorite Unix commands and one I couldn't manage without.

ping

The ping command was one of the first things that a Unix consultant I worked with many years ago taught me, and I've used it many thousands of times since. This command can tell you whether other systems are up and even whether your own system is functioning on the network. If I'm sitting in my home office and wondering whether my network connection is up, I'll ping a familiar system much sooner than I'll go look at the state of my network interface, and I'll generally know very quickly whether my connection is up.

tail -f

The tail command is handy for many things, but the -f option is special. The tail -f command allows you to watch as entries are being added to your log files. Much better than just using tail, it shows entries as they're being added. Do something in one window and watch the resultant log entries in another. This helps you tie cause and effect together without having to think too hard about which log entries relate to which activities.

grep

I doubt that a day goes by without my “grepping” on something. I may be checking on running processes, pulling lines from a log file or looking through text files to analyze a problem, but grep is always at the tip of my tongue.

tee

I don’t use the tee command much at all, but friends of mine swear by it. They say that being able to add command output to log files while examining it saves them tons of time and helps them tremendously. They view the output that tells them what’s happening on their systems while creating a record of their output at the same time.
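A minimal sketch of that pattern (the message and log file name are invented):

```shell
# Show output on screen while appending a copy to a log file
echo "backup finished" | tee -a activity.log

# Without -a, tee would truncate activity.log rather than append to it
tail -n 1 activity.log
# backup finished
```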

find

I still use find quite often to hunt down large files or files with permissions I’d rather not support. The find command is wonderfully versatile in that you can search for files by so many different criteria — ownership, size, permissions, type, modification date, inode number, group, whether it’s newer than some reference file, the number of links … and, of course, name! You can even use find to locate files that have no recognized group or owner (i.e., no groups or owners on the system that are associated with the particular GIDs and UIDs). And then you can decide what to do with your finds — just print the information or take some action such as removing the files or changing their permissions or ownership.
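For example, the hunt for large files and loose permissions might look like this (the demo directory, file names and size threshold are all invented):

```shell
# Set up a small demo tree
mkdir -p demo
dd if=/dev/zero of=demo/big.dat bs=1024 count=2048 2>/dev/null   # 2 MB file
chmod 644 demo/big.dat
touch demo/small.txt
chmod o+w demo/small.txt   # world-writable: permissions I'd rather not support

# Regular files larger than 1 MB
find demo -type f -size +1M
# demo/big.dat

# World-writable regular files
find demo -type f -perm -o+w
# demo/small.txt
```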

du -sk [dir]

The du command is, of course, valuable when evaluating disk space. You can use the du -sk * command to see how much space each file and directory in your current file system location is using, or du -sk . to see the space occupied by everything in your current directory. I've become particularly fond of these commands.
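A quick sketch of both forms (the directory and file names are invented; the KB figures will depend on your file system):

```shell
mkdir -p project/logs
printf 'some data\n' > project/logs/app.log

du -sk project      # one summary line: total KB used, then "project"
du -sk project/*    # one line per entry inside project/
```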

df -k .

I may have come to where I am sitting in the file system by some circuitous route, following symbolic links or not. This df command both shows me what file system I’m sitting in and how much space is available in it.

$ df -k .
Filesystem           1K-blocks      Used Available Use% Mounted on
boson:/data          201582336   4991232 186351104   3% /data/boson

lsof

The lsof (list open files) command is a powerful tool for displaying open files. It doesn’t matter what kind of files are open or even if they’re the kind of thing that most of us don’t normally think of as files — such as pipes, character and block special files, directories and sockets. The lsof command will provide valuable information. Want to see all open files? Just use the lsof command by itself. Want to see what processes are using a specific file? Use the command lsof filename. The lsof -u username command will show you all files currently open by a particular user. Very valuable information indeed!

fuser

The fuser command is one which I only learned sometime in the last ten years or so (i.e., recent for me). It is definitely the right tool for the job when you want to know what process is using a particular file or why you can't unmount a file system that the system keeps saying is "busy".

netstat

I truly appreciate the netstat command, especially netstat -rn, which shows you a system's routing table, and netstat -a | grep "LISTEN ", which shows you listening ports on Linux (netstat -a | grep LISTEN on Solaris).

awk

Another all-time winner for me is awk. Being able to select a single column from a file or from command output provides a huge number of shortcuts. I often use awk to manipulate huge data files. If I want to know, for example, all the possible values that the third field in such a file can assume, a command like this works wonders:

awk -F: '{print $3}' datafile | sort | uniq -c

That command will show me each unique value along with a count of how many times each appears in the file. The -F: option tells awk to use the colon character to determine where "fields" start and stop. Commands like this are invaluable for getting quick answers from unwieldy data files.
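As a concrete illustration of the command above, using a small invented colon-delimited file:

```shell
cat > users.txt <<'EOF'
alice:x:100
bob:x:200
carol:x:100
EOF

# Count how often each value of field 3 appears
awk -F: '{print $3}' users.txt | sort | uniq -c
# shows 100 appearing twice and 200 once
```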

sed and tr

I use sed as needed. Some of my Unix buddies use it as much as they do awk, but I use awk probably 50 times as often as I use sed. It’s still among the basic tools that I need, just not as beloved as awk. I also use tr at least as often as I use sed. Both commands provide a way to modify text between pipes, just differently.
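Two tiny examples of the difference, run on an invented line of text:

```shell
# sed rewrites matches of a pattern in the stream
echo "error: disk full" | sed 's/error/WARNING/'
# WARNING: disk full

# tr translates one character set to another; here, lowercase to uppercase
echo "error: disk full" | tr 'a-z' 'A-Z'
# ERROR: DISK FULL
```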

rsync

I’ve been deeply impressed by rsync ever since I was first introduced to the command. For super efficient synchronization of files and directories between servers, rsync is a godsend. And, yes, I can’t imagine working without it.

scp

When I have to copy a file or set of files from one system to another, scp is my friend. I like that I can set it up for password-free operation for those automatic file transfers that I have to do from time to time.

perl

I'm still not a wizard when it comes to Perl scripting, but I'm good enough to do a lot of really cool file reformatting and manipulation. Perl's use of regular expressions gives it a high ranking in my list of vital tools.

sar

I can't say that I use sar every day, but I definitely benefit from it every day. I get email from sar scripts that send me performance reports on some of my most critical servers. Every day. Long gone are the days when I only looked into system performance data when something was definitely wrong with my servers. These days, I look at performance data every day, because it comes to me, and I know what normal performance generally looks like on my systems.

for loops

Lastly, but not leastly, I depend heavily on for loops. I can't go a day without some form of for SOMETHING in `some command`. For loops save me lots of time every single day. And I can't imagine how I'd get all my work done or stay focused if I didn't have the option of looping through a complete set of values, regardless of their source. Whether systems, files or values of some other kind, for loops take the tedium out of having to check N things or run the same command for a large number of members of a particular data set.
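A typical day-to-day shape for one of these loops (the host names are made up):

```shell
# Run the same step for every member of a list
for host in web1 web2 db1; do
    echo "checking $host"
    # a real loop body might run ping -c 1 "$host" or ssh "$host" uptime here
done
# checking web1
# checking web2
# checking db1
```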

And, of course, I take vi/vim and commands like date for granted, like breathing. And cron for getting work done while I'm asleep. Unix commands let me get a lot of work done, but some let me do amazing things without breaking a sweat.

Read more of Sandra Henry-Stocker’s Unix as a Second Language blog and follow the latest IT news at ITworld, Twitter and Facebook.

Cheap and Dirty Windows Log File Archiving

So your company has a requirement to maintain log files for a year?

You don’t know how to go about it and you need to implement it now?

I have a solution for you and, best of all, it's free. This solution, however, is not supported by me; there will be no bug fixes from me, and any damage you cause to your own servers is your own fault. That is my one-sentence disclaimer to tell you that you truly are on your own.

For this solution or temporary fix (depending on your organization) you are going to need the following helper programs:

Info-zip – we'll use this to compress files and save space; specifically, we need the zip.exe file.

MD5SUMS – this allows us to generate MD5 checksums to verify whether any file tampering has taken place after the fact. Specifically, we need the md5sums.exe file.

Dump Event Log (Dumpel.exe) – this is a tool offered by Microsoft to dump your event logs to a text file. Though this link is part of the Windows 2000 Resource Kit, I have tested it and it does work with Windows XP and Windows Server 2003 log files.

We take these 3 programs and wrap them to work together via a batch file. All 3 of these programs MUST be in the same directory as the batch file for it to work as designed. Here is the batch file:

@echo off
REM Sets date variables for file name
for /F "tokens=1,2" %%d in ('date /T') do set day=%%d & set date=%%e
set yyyy=%DATE:~6,4%
set dd=%DATE:~3,2%
set mm=%DATE:~0,2%
set startDate=%yyyy%-%mm%-%dd%
REM Adds Computer to prefix the date
set outputname=%computername%-%startdate%
REM Cleans out previous zip files from a bad run
if exist %outputname%.zip del %outputname%.zip
REM Dumps each of the log files going back for 2 days
REM allowing for overlaps we may miss due to time changes
dumpel -f %outputname%.sec -l security -d 2
dumpel -f %outputname%.app -l application -d 2
dumpel -f %outputname%.sys -l system -d 2
REM creates an MD5 hash for verification checking
md5sums %outputname%.sec >%outputname%.md5
md5sums %outputname%.app >>%outputname%.md5
md5sums %outputname%.sys >>%outputname%.md5
REM Compresses the 4 files
zip %outputname%.zip %outputname%.*
REM Cleans up the unneeded files to save space
del %outputname%.sec
del %outputname%.app
del %outputname%.sys
del %outputname%.md5

I’ve included my comments in the batch file – but let’s go through it a section at a time so you can fully understand it.

@echo off

If you don't know, @echo off suppresses the echoing of commands to your screen in a batch file. I wouldn't suggest modifying my script if that's news to you, since this is batch file programming 101.

for /F "tokens=1,2" %%d in ('date /T') do set day=%%d & set date=%%e
set yyyy=%DATE:~6,4%
set dd=%DATE:~3,2%
set mm=%DATE:~0,2%
set startDate=%yyyy%-%mm%-%dd%
REM Adds Computer to prefix the date
set outputname=%computername%-%startdate%

This section builds the prefix for all of the files we are going to create; it lets us work with file names that include the name of the computer you are running this on and the date on which it was run.


if exist %outputname%.zip del %outputname%.zip

This check cleans up the zip file if the script has already been run during the current day. Mine is modified to delete the zip file completely, since at the end of my script I move my files to a remote location and don't need archived logs filling up my hard drive quickly.

dumpel -f %outputname%.sec -l security -d 2
dumpel -f %outputname%.app -l application -d 2
dumpel -f %outputname%.sys -l system -d 2

This area does the actual dumping of the log files. Dumpel is the command. The -f switch allows us to specify a file name; notice that I used %outputname% as the first part of the file name, with a suffix indicating which log it is. The -l switch lets us specify which log we are dumping from the event log (security, application, or system). The -d switch allows us to specify how many days of events we wish to save. I chose 2 days to allow some overlap in the log files, which is good for security reasons since we shouldn't miss any events if the time of day the script runs changes. It also gives us two more log files to verify the authenticity of the log data we are looking at.

When this section is done running you should have three files. If your computer's name was BOB-SERVER and the date you ran this was January 3, 2007, the file names would read like this: BOB-SERVER-2007-01-03.sec for your security log, BOB-SERVER-2007-01-03.app for your application log, and BOB-SERVER-2007-01-03.sys for your system log.

md5sums %outputname%.sec >%outputname%.md5
md5sums %outputname%.app >>%outputname%.md5
md5sums %outputname%.sys >>%outputname%.md5

This section generates an MD5 hash of the log file data, allowing you to see whether the data has been tampered with since it was originally generated. It is next to impossible to edit a file and maintain the same hash, which gives you some assurance that your log files are authentic. For those wondering "well, can't I just rerun the md5 program, generate a new hash and save that after modification?", I have your answer. I didn't include how to store these files after they are generated; we will touch upon that question under "What do you do now?" at the bottom. The command reads your three BOB-SERVER-2007-01-03 files and writes the hashes to a single BOB-SERVER-2007-01-03.md5 file that includes a section for each of the files above. I decided personally that I didn't need an md5 file for each of them; feel free to modify this if your needs differ.

zip %outputname%.zip %outputname%.*

This compresses the dumped event logs down to a manageable size. I managed to get a 60 MB log file that I generated during varying testing phases down to just over 6 MB. I also managed to get 480 KB of log files down to 14 KB. At this point you should have a zip file which includes your three event logs and your md5 file.

del %outputname%.sec
del %outputname%.app
del %outputname%.sys
del %outputname%.md5

This section cleans up the files outside of the zip. Since I managed to get these files down 90% in size, I don't need the originals eating up extra space.

What do you do now?

From here I would add a line at the end to move the logs to another server where you can store them for the length of time your organization deems necessary. To help combat the MD5 re-engineering I mentioned above, I would copy the compressed archive to two locations on your network. This will help make having an MD5 meaningful. Another option is adding a script that e-mails you the MD5 hash so you have it saved for reference. Having the MD5 and collecting 2 days of information from the logs would mean an attacker may have to edit 2-4 archives and regenerate MD5s for them; double that if you store a second set of archives in another location.
While this may fulfill your needs for log file capture and give you an easy way to store the files, it does not address easy log file auditing and tracking down events. There are all-in-one solutions out there for you to use, and I don't claim in any way that this is one of those. You need to address your own needs and decide what works for you. This is to give you some time until you decide what you are going to do.