Deploying Custom Registry Changes through Group Policy

The Scenario

You’re administering thousands of Vista workstations and their applications, and you spend a lot of your day connecting to them for troubleshooting and maintenance. You’ve found that you’re using Windows Calculator all the time to convert hex to decimal and back; it’s the best way to search for error codes online, after all. After the hundredth time you’ve had to switch the calculator from Standard to Scientific mode, you’ve decided to make it default to Scientific. So let’s figure out where this value actually gets set, and then how we can control it.

Figuring out the registry entry

It stands to reason that Vista’s Calculator has to store which mode it starts in somewhere, and that somewhere is probably the registry. So let’s download Process Monitor and use it for some light reverse engineering. We’re guessing that CALC.EXE reads and writes the setting, and that it’s registry related. So we start ProcMon.exe, then set a filter for a process of calc.exe and an operation of RegSetValue, like so:

We then start the calculator, and we switch it over to scientific mode. The filtered results are pretty short, and we see:

It’s doubtful the cryptography entries are anything but chaff, so let’s focus on the setting change for HKCU\Software\Microsoft\Calc. We right-click that line and choose ‘Jump to…’

This takes us into the registry editor, where we see what actually got changed. Pretty slick!

It looks like the DWORD value name ‘layout’ is our guy. We confirm by setting it to 1 and restarting calculator. It’s back to Standard mode. We restart calculator with the value set to 0 and now it’s Scientific again. So I think we’ve got what we need to do some group policy work.
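If you’d rather flip the value from a command prompt than in Registry Editor, something like this should work (just a sketch using the built-in reg.exe; the key and value names are the ones we found with Process Monitor above):

:: Default Calculator to Scientific mode (0) for the current user
reg.exe add "HKCU\Software\Microsoft\Calc" /v layout /t REG_DWORD /d 0 /f

:: Put it back to Standard mode (1)
reg.exe add "HKCU\Software\Microsoft\Calc" /v layout /t REG_DWORD /d 1 /f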

The Easier Way

We’re just making a simple registry value change here, so why not use REGEDIT.EXE in silent mode to set it? To do this we:

1. Export this registry value to a file called SciCalc.reg

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Calc]
"layout"=dword:00000000

2. We create a new Group Policy object and link it to the OU we have configured for all the administrative users in the domain (ourselves and our super powerful colleagues).

3. We open it up and edit “User Configuration | Windows Settings | Scripts (Logon/Logoff)”.

4. Under the Logon node, we add our script entry so that regedit.exe calls our SciCalc.reg file silently (with the /s switch) – see the sketch after this list:

5. We click Show Files and drop our SciCalc.reg into SYSVOL.

6. Now we’re all set. After this policy replicates around the Domain Controllers and we log on to the various Vista workstations in the domain, Windows Calculator will always start in Scientific mode. Neat.
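For reference, the logon script entry from step 4 boils down to a single command line – here is a sketch of how it might be split across the Script Name and Script Parameters fields of the Add a Script dialog (assuming SciCalc.reg sits in the same SYSVOL scripts folder):

:: Script Name:       regedit.exe
:: Script Parameters: /s SciCalc.reg
regedit.exe /s SciCalc.reg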

The Harder Way

The logon script method is pretty quick and dirty. While it works, it’s not very elegant, and it means we’re making registry changes without really leveraging Group Policy’s registry-based policy extensions. It’s pretty opaque to other administrators as well: all you can tell about a logon script is that it ran – not whether it was successful, what it was really doing, why it exists, etc. So what if we make a new custom ADM template file and apply the setting that way?

ADM files are the building blocks of registry-based policy customization. They use a controlled format to make changes, and they can be used to set boundaries on values – for the Calculator example there are only two valid values, 0 or 1, as a DWORD (32-bit) value. Using an ADM lets us control what can be chosen, and it also gives a good explanation of what we’re accomplishing. Plus it’s really cool.

So taking what we know from our registry work, let’s dissect an ADM file that will do the same thing:

<ADM Starts Here>

CLASS USER
CATEGORY Windows_Calculator_(CustomADM)
  POLICY Mode
    EXPLAIN !!CalcHelp
    KEYNAME Software\Microsoft\Calc
    PART !!Calc_Configure DROPDOWNLIST REQUIRED
      VALUENAME "layout"
      ITEMLIST
        NAME !!Scientific VALUE NUMERIC 0 DEFAULT
        NAME !!Standard VALUE NUMERIC 1
      END ITEMLIST
    END PART
  END POLICY
END CATEGORY

[strings]
WindowsCalculatorCustomADM="Windows Calculator Settings"
Calc_Configure="Set the Windows Calculator to: "
Scientific="Scientific mode"
Standard="Standard mode"

; explains
CalcHelp="You can set the Windows Calculator's behavior to default to Scientific or Standard. Users can still change it but it will revert when group policy refreshes. This sample ADM was created by Ned Pyle, MSFT."

</ADM Ends Here>

• CLASS describes User versus Computer policy.
• CATEGORY describes the node we will see in the Group Policy Editor.
• POLICY describes what we see to actually edit.
• EXPLAIN points to the ‘help’ text for this policy.
• KEYNAME is the actual registry key we’re touching.
• PART is used when we have multiple settings to choose from, and describes how they will be displayed.
• VALUENAME is the registry value we’re editing.
• NAME describes the friendly name and the literal data to be written.

So here we have a category that will appear as “Windows_Calculator_(CustomADM)” and expose one policy called ‘Mode’. Mode will be a dropdown that can be used to select Standard or Scientific. Pretty simple… so how do we get this working?

1. We save our settings as an ADM file.

2. We load up GPMC, then create and link our new policy to that Admins OU.

3. We open our new policy and under User Configuration we right-click Administrative Templates and select Add/Remove Administrative Templates.

4. We find our ADM and highlight it, then select Add. It will be copied into the policy in SYSVOL automagically.

5. Now we highlight Administrative Templates and select View | Filtering. Uncheck “Only show policy settings that can be fully managed” – our custom ADM writes to a key outside the normal Policies branches, so by default it is hidden as an unmanaged (preference) setting. It will look like this:

6. Now if we navigate to our policy, we get this (see the cool explanation too? No one can say they don’t know what this policy is about!):

7. If we drill into the Mode setting, we have this:

And you’re done. A bit more work, but pretty rewarding and certainly much easier for your colleagues to work with, especially if you have delegated out group policy administration to a staff of less experienced admins.

Notes

My little examples above with Calculator only work on Windows Vista and Windows Server 2008. Prior to those versions, Calculator stored its settings in WIN.INI – D’oh! Now you have a very compelling reason to upgrade… 😉

These sorts of custom policy settings are not managed like built-in group policies – this means that simply removing them does not remove their settings. If you want to back out their changes, you need to create a new policy that removes their settings directly.
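As a sketch of what that back-out could look like, a second .reg file (deployed exactly like SciCalc.reg) can delete the value outright using the standard minus syntax, letting Calculator fall back to its default behavior:

Windows Registry Editor Version 5.00

; removes the layout value our policy created
[HKEY_CURRENT_USER\Software\Microsoft\Calc]
"layout"=-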

ADMX files can also be used on Vista/2008, but I’m saving those for a later posting as they make ADMs look trivial.

This is just a taste of custom ADM file usage. If you want to get really into this, I highly suggest checking out:

Using Administrative Template Files with Registry-Based Group Policy: http://www.microsoft.com/downloads/details.aspx?familyid=e7d72fa1-62fe-4358-8360-8774ea8db847&displaylang=en

Administrative Template File Format: http://msdn2.microsoft.com/en-us/library/aa372405.aspx

 

Source: http://blogs.technet.com/b/askds/archive/2007/08/14/deploying-custom-registry-changes-through-group-policy.aspx


Storage and the VMware VMFS File System


by David Marshall, Stephen S. Beaver, and Jason W. McCarty

When designing a Virtual Infrastructure environment, one of the single most important things to consider and plan for is the storage backend. There are several options available, ranging from local storage to Fibre Channel and iSCSI. The first thing to think about is where you will store and run your virtual machines. VMware’s VMFS file system is specially designed for the purpose of storing and running virtual machines.

Virtual Machine File System

VMware developed its own high-performance cluster file system called VMware Virtual Machine File System, or VMFS. VMFS provides a file system that has been optimized for storing virtual machines through the use of distributed locking. A virtual disk stored on a VMFS partition always appears to the virtual machine as a mounted SCSI disk; the virtual disk, or *.vmdk file, hides the physical storage layer from the virtual machine’s operating system. VMFS versions 1 and 2 were flat file systems and typically housed only .vmdk files. The VMFS 3 file system allows for a directory structure, so VMFS 3 file systems can contain all of the configuration and disk files for a given virtual machine. The VMFS file system is one of the things that sets VMware so far ahead of its competitors: conventional file systems allow only one server to hold read/write access (a lock) on a given file at any given time, whereas VMware’s VMFS allows multiple nodes – multiple VMware ESX servers – to read and write to the same LUN or VMFS partition concurrently.
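If you want to peek at these VMFS properties on a live host, the ESX 3.x service console can query a volume – a minimal sketch, assuming a datastore named datastore1 (the name is just a placeholder):

# Show VMFS version, block size, capacity, and extents for a datastore
vmkfstools -P -h /vmfs/volumes/datastore1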

Now that we know about VMFS, let’s take a look at the different storage options that are made available.

Direct Attached Storage
Direct-attached storage (DAS) is storage that is, as the name implies, directly attached to a computer or server. DAS is usually the first step taken when working with storage. A good example would be a company with two VMware ESX Servers directly attached to a disk array. This configuration is a good starting point, but it typically doesn’t scale very well.

Network Attached Storage
Network-attached storage (NAS) is a type of storage that is shared over the network at a file system level. This option is considered an entry-level or low cost option with a moderate performance rating. VMware ESX will connect over the network to a specialized storage device. This device can be in the form of an appliance or a computer that uses Network File System (NFS).

The VMkernel connects to the NAS device via a VMkernel port and supports NFS version 3 over TCP/IP only. From the standpoint of the VMware ESX servers, NFS volumes are treated the same way VMware ESX treats iSCSI or Fibre Channel storage: you can VMotion guests from one host to the next, create virtual machines, boot virtual machines, and mount ISO images as CD-ROMs when presented to the virtual machines.
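Besides the VI Client, an NFS export can also be added as a datastore straight from the ESX 3.x service console – a sketch, assuming an NFS server named nfs01 exporting /vol/vmstore (both names are placeholders):

# Add the NFS export as a datastore labeled nfs-vmstore
esxcfg-nas -a -o nfs01 -s /vol/vmstore nfs-vmstore

# List the configured NAS datastores to confirm
esxcfg-nas -l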

When configuring access to standard Unix/Linux based NFS devices, some configuration changes need to be made. The file /etc/exports defines the systems that are allowed to access the shared directory, and there are a few options in it that you should be aware of (an example export line follows the list below).

  1. Name the directory to be shared.
  2. Define the subnets that will be allowed access to the share.
  3. Allow both “read” and “write” permissions to the volume.
  4. no_root_squash: The root user (UID = 0) by default is given the least amount of access to the volume. This option will turn off this behavior, giving the VMkernel the access it needs to connect as UID 0.
  5. sync: All file writes MUST be committed to the disk before the client write request is actually completed.
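Putting those options together, a single export line on the NFS server might look like this (the path and subnet are placeholders for your own environment):

# /etc/exports on the NFS server
/vmstore 192.168.10.0/24(rw,no_root_squash,sync)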

Windows Server 2003 R2 also natively provides NFS sharing once the Windows Services for Unix (SFU) components are installed and configured. Windows Server 2003 R2 has this ability out of the box, but SFU can also be downloaded from Microsoft’s website and run on Windows Server 2003 (non-R2) and Windows 2000 Server.

  1. After storage has been allocated, the folders are presented similarly as NFS targets.
  2. Because there is no common authentication method between VMware ESX and a Microsoft Windows server, the /etc/passwd file must be copied to the Windows server, and mappings must be made to tie an account on the ESX server to a Windows account with appropriate access rights.

Fibre Channel SAN
When using Fibre Channel to connect to the backend storage, VMware ESX requires the use of a Fibre Channel switch; using more than one allows for redundancy. The Fibre Channel switches form the “fabric” of the Fibre Channel network by connecting multiple nodes together. Disk arrays in Storage Area Networks (SANs) are one of the main things you will see connected in a Fibre Channel network, along with servers and/or tape drives. Storage processors (SPs) aggregate physical hard disks into logical volumes, otherwise called LUNs, each with its own LUN number identifier. World Wide Names (WWNs) are assigned by the manufacturer to the Host Bus Adapters (HBAs), a concept similar to MAC addresses on network interface cards (NICs). Zoning and pathing are the methods the Fibre Channel switches and the SAN storage processors use to control host access to the LUNs: the SPs use soft zoning, which controls LUN visibility per WWN; the Fibre Channel switch uses hard zoning, which controls SP visibility on a per-switch basis; and LUN masking controls LUN visibility on a per-host basis.

The VMkernel will address the LUN using the following example syntax:

vmhba(adapter#):target#:LUN#:partition#, for example vmhba1:0:0:1
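To see how these paths look on a real ESX 3.x host, the service console can list them – a sketch; the output will vary with your environment:

# List every LUN path known to this host, including its vmhbaA:T:L:P identifier
esxcfg-mpath -l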

So how does a Fibre Channel SAN work anyway? Let’s take a look at how the SAN components will interact with each other. This is a very general overview of how the process works.

  1. When a host wants to access the disks or storage device on the SAN, the first thing that must happen is that an access request for the storage device must take place. The host sends out a block-based access request to the storage devices.
  2. The request is then accepted by the host’s HBA. It is converted from its electrical (binary) form to optical form, which is required for transmission over the fiber optic cable, and then “packaged” based on the rules of the Fibre Channel protocol.
  3. The HBA then transmits the request to the SAN.
  4. One of the SAN switches receives the request and checks to see which storage device the host wants to access. From the host’s perspective this appears as a specific disk, but it is really a logical device that corresponds to some physical device on the SAN.
  5. The Fibre Channel switch will determine which physical devices have been made available to the host for its targeted logical device.
  6. Once the Fibre Channel switch determines the correct physical device, it will pass along the request to that physical device.

Internet Small Computer System Interface
Internet Small Computer System Interface or iSCSI is a different approach than that of Fibre Channel SANs. iSCSI is a SCSI transport protocol which enables access to a storage device via standard TCP/IP networking. This process works by mapping SCSI block-oriented storage over TCP/IP. This process is similar to mapping SCSI over Fibre Channel. Initiators like the VMware ESX iSCSI HBA send SCSI commands to “targets” located in the iSCSI storage systems. iSCSI has some distinct advantages over Fibre Channel, primarily with cost. You can use the existing NICs and Ethernet switches that are already in your environment. This brings down the initial cost needed to get started. When looking to grow the environment, Ethernet switches are less expensive than Fibre Channel switches.

iSCSI has the ability to do long distance data transfers, and iSCSI can use the Internet for data transport. You can have two separate data centers that are geographically far apart from each other and still be able to do iSCSI between them; Fibre Channel must use a gateway to tunnel through, or convert to IP. Performance with iSCSI is increasing at an accelerated pace. As Ethernet speeds continue to increase (10Gig Ethernet is now available), iSCSI speeds increase as well. With the way iSCSI SANs are architected, iSCSI environments continue to increase in speed the more they are scaled out. iSCSI does this by using parallel connections from the storage processor to the disk arrays.

iSCSI is simpler and less expensive than Fibre Channel. Now that 10Gig Ethernet is available, the adoption of iSCSI into the enterprise looks very promising. It is important to really know the limitations, or maximum configurations, that you can use when working with VMware ESX and the storage system on the backend. Let’s take a look at the ones that are most important.

  1. 256 is the maximum number of LUNs per system that you can use and the maximum during install is 128.
  2. 16 is the maximum number of total HBA ports per system.
  3. 4 is the maximum number of virtual HBAs per virtual machine.
  4. 15 is the maximum number of targets per virtual machine.
  5. 60 is the maximum number of virtual disks per Windows and Linux virtual machine.
  6. 256 is the maximum number of VMFS file systems per VMware ESX server.
  7. 2TB is the maximum size of a VMFS partition.
  8. The maximum file size for a VMFS-3 file is based on the block size of the partition. A 1MB block size allows up to a 256GB file size and a block size of 8MB will allow 2TB.
  9. The maximum number of files per VMFS-3 partition is 30,000.
  10. 32 is the maximum number of paths per LUN.
  11. 1024 is the maximum number of total paths.
  12. 15 is the maximum number of targets per HBA.
  13. 1.1GB is the smallest VMFS-3 partition you can create.

So, there you have it, the 13 VMware ESX rules of storage. The block size you set on a partition is the rule you will visit the most. A general best practice is to create LUN sizes between 250GB and 500GB. Proper initial configuration for the long term is essential. For example, if you wanted to P2V a server that has 300GB of total disk space and you did not plan ahead when you created the LUN by using at least a 2MB block size, you would be stuck – the default 1MB block size caps individual files at 256GB. Here is the breakdown:

  1. 1MB block size = 256GB max file size
  2. 2MB block size = 512GB max file size
  3. 4MB block size = 1024GB max file size
  4. 8MB block size = 2048GB max file size.

A VMFS-3 volume can also span up to 32 physical storage extents; at 2TB per extent (8MB block size), that works out to a maximum volume size of 64TB.
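As a rough sketch of how that block size gets locked in at creation time, here is what formatting a LUN as VMFS-3 with a 2MB block size might look like from the ESX 3.x service console (the LUN address and volume label are placeholders; the VI Client’s Add Storage wizard offers the same block-size choice):

# Create a VMFS-3 volume with a 2MB block size (512GB max file size)
vmkfstools -C vmfs3 -b 2m -S VMFS-LUN01 vmhba1:0:0:1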

NOTE
Now would be a very good time to share a proverb that has served me well over my career. “Just because you can do something, does not mean you should.” Nothing could be truer than this statement. There really is no justification for creating volumes that are 64TB or anything remotely close to that. As a best practice, I start thinking about using Raw Device Mappings (otherwise known as RDMs) when I need anything over 1TB. I actually have 1TB to 2TB in my range, but if the SAN tools are available to snap a LUN and then send it to Fibre tape, that is a much faster way to back things up. This is definitely something to consider when deciding whether to use VMFS or RDM.

System administrators today do not always have the luxury of doing things the way they should be done. Money and management ultimately make the decisions, and we are then forced to make do with what we have. In a perfect world, we would design tiered storage for the different applications and virtual machines running in the environment, possibly comprised of RAID 5 LUNs and RAID 0+1 LUNs. Always remember the golden rule: “Spindles equal speed.” As an example, Microsoft is very specific when it comes to best practices with Exchange and the number of spindles you need on the backend to get the performance you expect for the scale of the deployment. Different applications are going to have different needs, so depending on the application you are deploying, the disk configuration can make or break the performance of your deployment.

Summary

So we learned that the number of spindles directly affects the speed of the disks. And we also learned the 13 VMware ESX rules for storage and what we needed to know about VMFS. Additionally, we touched on the different storage device options that have been made available to us. Those choices include DAS, iSCSI and Fibre Channel SAN. We also presented a very general overview on how a Fibre Channel SAN works.

Knowing that one of the biggest gotchas is the block size of VMFS partitions and LUNs, and combining that knowledge with the different storage options available, you can now make the best possible decisions when architecting the storage piece of your Virtual Infrastructure environment. Proper planning up front is crucial to making sure that you do not have to overcome hurdles later pertaining to storage performance, availability, and cost.

 

Source: http://www.ittoday.info/Articles/VMware_VMFS_File_System.htm

What exactly is Non-commercial Use?

“Non-commercial Use” means the use of the Software Product for noncommercial purposes only, and is limited to the following users:

  • Non-profit organizations (charities and other organizations created for the promotion of social welfare)
  • Universities, colleges, and other educational institutions (including, but not limited to elementary schools, middle schools, high schools, and community colleges)
  • Independent contractors who are under contract by the above-stated organizations and using the Software Product exclusively for such nonprofit or educational clients
  • Government organizations and agencies
  • Other individual users who use the Software Product for personal, noncommercial use only (for example, hobby, learning, or entertainment).

Webyog reserves the right to further clarify the terms of non-commercial Use at its sole determination.

Source: http://webyog.com/faq/content/30/131/en/what-is-non_commercial-use.html

ESXi HA – one, two, and maybe done!

Introduction

I am not biased toward one virtualization solution or another, but I know a great product and amazing features when I see them. VMware ESX Server and the VMware Infrastructure suite have a lot of amazing features that really “set the bar” for other virtualization products. One of those features is VMware’s High Availability feature – dubbed VMHA.

When a physical server goes down or loses all network connectivity, VMHA steps in and migrates the virtual guest machines off of that server and onto another server. This way, the virtual machines can be up and running again in just the time that it takes them to reboot.


Figure 1: VMware High Availability (VMHA) – Image Courtesy of VMware.com

This is a very powerful feature because it means that any operating system and appliance can have high availability just by running inside the VMware Infrastructure.

There are a number of requirements to make this happen and there are both good and bad qualities of VMHA. I will cover all of that and show you how to configure VMHA in this article.

Let’s get started.

What is required to make VMHA work?

There are a number of requirements that you will have to meet to make VMHA work. Those requirements are:

  • VMware Infrastructure Suite Standard or Enterprise (no, you cannot do it with the free ESXi, nor can you do it with the VMware Foundations Suite).
  • At least 2 ESX host systems.
  • A shared SAN or NAS between the ESX Servers where the virtual machines will be stored. Keep in mind that with VMHA the virtual disks for the VMs covered by VMHA never move. What happens when a host system fails is that the ownership of those virtual machines is transferred from the failed host to the new host.
  • CPU compatibility between the hosts. The easiest way to test this is to attempt a VMotion of a VM from one server to another and see what happens. Here is what CPU incompatibility looks like when it fails:


Figure 2: CPU Incompatibility

If you cannot achieve CPU compatibility between hosts in the HA resource pool, then you will have to configure CPU Masking (see VMworld: VMotion between Apples and Oranges).

  • Highly Recommended – to have VMware management network redundancy (at least two NICs associated with the VMware port used for VMotion and iSCSI). If you do not have this, you will see:


Figure 3: Configuration issues because there is no VMware management network redundancy

What is great about VMHA?

Here are some of the great features of VMHA:

  • Provides high availability for all virtual machines at a low cost (compared to purchasing a HA solution on a per machine basis).
  • Works for any OS that runs inside VMware ESX. That means that even if I create a Vyatta virtual router running inside ESX Server, and that ESX Server is in an HA resource pool, and the server it is running on goes down, then that Vyatta virtual router will be migrated and rebooted on another ESX host system.
  • VMHA is easy to configure. If you have the right equipment, licenses, and VMware Infrastructure already set up, you can configure VMHA in just a few minutes.
  • Works with DRS (Distributed Resource Scheduler) such that when VMs are going to be brought up on other hosts in the resource pool due to a host failure, DRS is used to determine where that load should be placed and to balance it.

What is “not so great” about VMHA?

Just like with any solution, there are some features of VMHA that are not as great. Those features are:

  • CPUs on each host must be compatible (almost exactly) or you will have to configure CPU masking on every virtual machine.
  • Virtual machines that are on the host system that goes down WILL have to be restarted.
  • VMHA is unaware of the underlying applications on those VMs. That means that if the underlying application data is corrupt from an application crash and server reboot, then even though the VM migrates and reboots from a crashed machine, the application still may be unusable (not that this is necessarily VMware’s fault).

How do I configure VMHA?

Configuration of VMHA is easy, just follow these steps:

Note:
The following assumes that you already have two ESX Server host systems, the VMware Infrastructure Suite (VI Suite), compatible CPUs on the host systems, a shared storage system, and all licensing related to the VMHA feature in place.

  1. In the VI Client, Inventory View, Right-click on your datacenter and click on New Cluster.


Figure 4: Adding a New HA Cluster

  2. This brings up the New Cluster Wizard. Give the Cluster a name and (assuming you are only creating a HA cluster), check the VMware HA cluster feature.


Figure 5: Naming the HA Cluster

  3. Next, you will be given a chance to configure the HA options for this cluster. There is a lot to consider here – how many hosts can fail, if guests will be powered on if the proper amount of resources is not available, host isolation, restart priority, and virtual machine monitoring. To learn more about these settings, please read the VMware 3.5 Documentation.


Figure 6: Configuring HA Options

  4. Select the swapfile location – either with the VM on your shared storage or on the host. I recommend keeping the swapfile with the VM on your shared storage.
  5. And finally, you are shown the “ready to complete” screen where you can review what you are about to do and click Finish.
  6. Once the HA cluster is created, you need to move ESX host systems into the cluster by clicking on them and dragging them into the cluster. You can also move VMs to the cluster in the same way. Here are my results:


Figure 7: HA Cluster created with ESX Server hosts and VMs inside

  7. At this point, you should click on the cluster to see if there are any configuration issues (as you see in Figure 3). Also, notice how the cluster has its own tabs for Summary, Virtual Machines, Hosts, Resource Allocation, Performance, Tasks & Events, Alarms, and Permissions.
  8. Even though I had configuration issues (no redundant management network), my VMHA cluster was still functional. To get around the “insufficient resources to satisfy configured failover level for HA” error message when powering up a VM, I changed the HA configuration to “Allow VMs to be powered on even if they violate availability constraints”.

Let’s test it.

How do I know if VMHA worked?

To test VMHA, I had two low end Dell Servers in my cluster. I had one Windows Server 2008 system running on ESX host “esx4”. To perform a simple HA test, I rebooted host “esx4” without going into maintenance mode. This caused the Windows 2008 Server to move from “esx4” to “esx3” and be rebooted. Here is the “before and after”:


Figure 8: Before causing the failure of server ESX4


Figure 9: After the failure of server ESX4 – proving the VMHA was successful

In this test, we saw that the Windows 2008 VM was moved from “esx4” to “esx3” when “esx4” was restarted.

Conclusion

In this article, you learned what VMware’s High Availability solution is and how to configure it. We started off with the requirements to use VMHA. From there, you saw what was good and what was not so good about VMHA. After showing you how to configure VMHA, I demonstrated exactly how it works in a real server failure. VMHA is really the leader when it comes to virtualization high availability.

Source: http://www.virtualizationadmin.com/articles-tutorials/vmware-esx-and-vsphere-articles/vmotion-drs-high-availability/configure-vmware-high-availability-vmha.html

Two suggestions for resolving the “HA network redundancy” warning and the “Misconfiguration in the host network setup” error

This message appears if the Service Console does not have network redundancy configured properly and can be safely ignored.

To prevent this message from appearing and to comply with proper network redundancy, VMware recommends that you add a second service console on a different vSwitch and subnet. Alternatively, you can add a second vmnic to the service console vSwitch.
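If you take the second route (adding a vmnic to the existing Service Console vSwitch), the service console equivalent is a one-liner – a sketch assuming the Service Console lives on vSwitch0 and vmnic1 is the spare NIC (both are assumptions for the example):

# Attach a second physical uplink to the Service Console vSwitch
esxcfg-vswitch -L vmnic1 vSwitch0

# Verify the uplinks now assigned to vSwitch0
esxcfg-vswitch -l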

To suppress this message on ESX hosts in the VMware High Availability (HA) cluster, or if the warning appears for a host already configured in a cluster, set the VMware HA advanced option das.ignoreRedundantNetWarning to true and reconfigure VMware HA on that host. This advanced option is available in VMware VirtualCenter 2.5 Update 3 and later.

Note: If the warning continues to appear, disable and re-enable VMware High Availability in the cluster.

To set das.ignoreRedundantNetWarning to true:

  1. From VMware Infrastructure Client, right-click on the cluster and click Edit Settings.
  2. Select VMware HA and click Advanced Options.
  3. In the Options column, enter das.ignoreRedundantNetWarning.
  4. In the Value column, enter true.

    Note: Steps 3 and 4 create a new option.
  5. Click OK.
  6. Reconfigure HA.

————————————————————————————————————————

This issue occurs if all the hosts in the cluster do not share the same service console or management network configurations. Some hosts may have service consoles using a different name or may have more service consoles than other hosts.

For example, this error may also occur if the VMkernel gateway settings are not the same across all hosts in the cluster. To reconfigure the setting, right-click on the hosts with this error and select Reconfigure for HA.

Address the network configuration differences between the hosts if you are going to use the Shut Down or Power Off isolation responses, because these options trigger a VMware HA isolation response in the event of Service Console or Management Network failures.

If you are using the Leave VM Powered on isolation response, the option to ignore these messages is available in VMware VirtualCenter 2.5 Update 3.

To configure VirtualCenter to ignore these messages, set the advanced option das.bypassNetCompatCheck to true:

Note: When using the das.bypassNetCompatCheck option, the heartbeat mechanism during configuration used in VirtualCenter 2.5 only pairs symmetric IP addresses within subnets across nodes. For example, in a two node cluster, if host A has vswif0 “Service Console” 10.10.1.x 255.255.255.0 and vswif1 “Service Console 2” 10.10.5.x, and host B has vswif0 “Service Console” 10.10.2.x 255.255.255.0 and vswif1 “Service Console 2” 10.10.5.x, the heartbeats only happen on vswif1. Starting in vCenter Server 4.0, they can be paired across subnets if pings are allowed across the subnets. However, VMware recommends having them within subnets.
  1. Right-click the cluster, then click Edit Settings.
  2. Deselect Turn on VMware HA.
  3. Wait for all the hosts in the cluster to unconfigure HA.
  4. Right-click the cluster, and choose Edit Settings.
  5. Select Turn on VMware HA, then select VMware HA from the left pane.
  6. Select Advanced options.
  7. Add the option das.bypassNetCompatCheck with the value True.
  8. Click OK on the Advanced Options screen, then click OK again to accept the cluster setting changes.
  9. Wait for all the ESX hosts in the cluster to reconfigure for HA.