Month: October 2013

P2V’d VM fails to boot with incorrect disk controller

VMware Converter is a really nice tool for converting physical servers to virtual machines. It's free and fairly simple to use. Most of the time I have used it without any issues, but a recent case where a P2V'd VM failed to boot inspired me to write this blog post.

Back in 2010, when I was first implementing virtualization at my company, I ran into an issue with a P2V of a W2K8 server that wouldn't boot up after the conversion completed. The VM would go into a continuous loop of blue screens and reboots. This was very annoying and made it difficult to determine the source of the problem.

The blue screen gave the generic error that the hardware on the server had changed. DUH…I just P2V'd you from a Dell M610 blade, so of course your hardware has changed, you silly server. The fun part was finding out what the server didn't like about this change enough to blue screen over it.

The blue screen error code indicated that something with the disk had changed. With that lead, I started looking at all of the VM's settings, in particular the disk settings. I noticed that the SCSI controller was set to VMware Paravirtual SCSI, which was different from all of the other VMs I had P2V'd that day; their controllers were set to LSI Logic.

I figured something must have become confused during the conversion of this blade, so I changed the SCSI controller on the VM to LSI Logic SAS within the vSphere client. The settings change worked, and the VM powered up normally.
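
Incidentally, the same check and fix can be scripted with PowerCLI. This is just a minimal sketch, assuming you are already connected to vCenter and using a placeholder VM name (power the VM off before changing the controller):

# Placeholder VM name - adjust for your environment
$vm = Get-VM -Name "MyP2VServer"

# See which controller type the converted VM ended up with
Get-ScsiController -VM $vm | Select-Object Name, Type

# Switch the controller to LSI Logic SAS
Get-ScsiController -VM $vm | Set-ScsiController -Type VirtualLsiLogicSAS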

Fast forward to August 2013, when I encountered a similar problem. This time I was converting a Dell 1855 blade running Windows Server 2003 R2, which also suffered from the infamous blue screen reboot loop. Having seen this issue before, I checked the SCSI controller setting and found that the disk had been converted with an IDE controller. The server OS was not going to boot off an IDE controller, so the VM was stuck until I changed the controller type. When I attempted to change the disk type in the vSphere client, however, the option was not available.

A quick search on the internet led me to VMware KB article 1016192 (Converting a virtual IDE disk to a virtual SCSI disk). According to the article, if no controller is selected during the conversion process, the VM is created with an IDE controller for its system drive.

To fix the issue you have two options:

1. Re-run the conversion, making sure you select a controller type. Do not leave it set to the default of auto-select.

2. Manually change the adapter type inside the vmdk file.

I was under a time constraint and did not have time to re-run the P2V, so I opted to manually change the controller type. Using the instructions from the article, I was able to successfully change the controller.

To manually change the disk controller type I used the following steps:

  1. Log in to the host where the VM resides. If SSH is not enabled, you will need to enable that option before you can connect.

  2. Once logged on to your server, navigate to the datastore path of the VM.

# cd /vmfs/volumes/<datastore_name>/<vm_name>/

  3. Using the vi editor, open the vmdk descriptor file of the VM with the following command.

vi nameofserverfile.vmdk

  4. Find the line that reads:

ddb.adapterType = "ide"

  5. Change the adapter type to LSI Logic. (In vi, press r over a character and type its replacement; repeat until all of the characters have been replaced.)

ddb.adapterType = "lsilogic"

  6. Press the ESC key, then type :wq to save the file.
  7. From the vSphere Client:
    1. Click Edit Settings on the VM.
    2. Select the IDE virtual disk and remove it from the VM, but DO NOT delete the disk.
  8. After the disk has been removed, go back into Edit Settings and re-add the disk.
    1. Click Add > Hard Disk > Use an existing virtual disk.
    2. Navigate to the location of the disk you just removed and select it to add it back to the VM.
    3. Choose the same controller type as in step 5 (LSI Logic). The SCSI ID should read SCSI 0:0.
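
If you prefer to script the remove and re-add from steps 7 and 8, here is a rough PowerCLI sketch. The VM name and vmdk file name are placeholders, so treat it as a starting point rather than a drop-in fix:

# Placeholder names - adjust the VM name and vmdk file name for your environment
$vm = Get-VM -Name "MyP2VServer"
$disk = Get-HardDisk -VM $vm | Where-Object { $_.Filename -like "*nameofserverfile.vmdk" }
$diskPath = $disk.Filename

# Detach the disk from the VM; without -DeletePermanently the vmdk stays on the datastore
Remove-HardDisk -HardDisk $disk -Confirm:$false

# Re-attach the existing vmdk to the VM
New-HardDisk -VM $vm -DiskPath $diskPath

# Verify the controller afterwards; it should show as LSI Logic
Get-ScsiController -VM $vm | Select-Object Name, Type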

SnapManager for Exchange fails to run scheduled snaps after upgrading to 6.0.4

Sometimes fixes and patches introduce a new set of issues, which in turn give way to another round of patches and fixes.

In our case, it was our upgrade to SnapManager for Exchange (SME) 6.0.4, which had fixes for some bugs we were facing. Everything seemed to go really well; the upgrades on the Exchange 2010 DAG member servers didn't hiccup one bit. It seemed too good to be true, an SME upgrade with no issues so far. I had my fingers crossed and was hoping for the best; maybe luck would be in our corner.

No Joy…

After completing the upgrade on all servers, I needed to test some Exchange snaps. Got to make sure it works, right? I started out running manual snaps on all the databases on each node. Those worked great, no problems.

So onward to the next test, which was to kick off a scheduled snap of the DAG databases. After kicking off a scheduled snap through Task Scheduler, the snaps failed to run. After some digging around and a few more tests, my co-worker discovered that there is a bug when you upgrade to SME 6.0.4 that causes scheduled snaps to fail.

According to NetApp KB article 649767, it has to do with the value "0" not being selectable in the "retain up-to-the-minute restorability" option in this release's GUI like it was in previous releases. When running snaps through the SME 6.0.4 GUI, you can manually enter the value "0" and run the job immediately, and the backups will work. The issue occurs when SME creates a scheduled job: it creates the job with the wrong parameter, when it should use NoUtmRestore if you don't want to retain any transaction logs.

http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=649767

Getting Backups to work again…

To get scheduled backups to work again, you will need to do one of two things:

  • Change -RetainUtmDays or -RetainUtmBackups to something other than "0". A non-zero value will retain your transaction logs for that number of days or backups.
  • If you don't want to keep any transaction logs, manually modify the scheduled job, remove the -RetainUtmDays or -RetainUtmBackups parameter, and replace it with -NoUtmRestore (a rough before/after sketch follows this list).
    • If you are running a DAG, remember that you will need to modify the scheduled job on every DAG member that has it.
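
As a rough before/after sketch only, assuming the scheduled task invokes SME's new-backup cmdlet (the server name and retention value below are placeholders, and your task's full command line will contain more parameters than shown here):

# Job as created by SME 6.0.4 (fails when run on a schedule)
new-backup -Server 'EXCH-DAG01' -RetainUtmBackups 0

# Job modified to use -NoUtmRestore instead of the zero-retention parameter
new-backup -Server 'EXCH-DAG01' -NoUtmRestore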

Using Custom Filters For Your Exchange Dynamic Distribution Groups

Dynamic Distribution Groups are distribution groups that build their membership dynamically, based on a set of filters and conditions, each time an email is sent to the group. They are great for mass-mailing a group of users that changes often and would be difficult to maintain by hand.

Exchange offers two ways of creating these groups: the EMC/EAC or PowerShell. I have found that the majority of Dynamic Distribution Groups can be created using the EMC/EAC, which offers the following set of pre-canned filters and conditions (a quick example using them follows the list).

  • IncludedRecipients
  • ConditionalCompany
  • ConditionalDepartment
  • ConditionalStateOrProvince
  • ConditionalCustomAttribute (1 through 15)
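
For example, a group built entirely from the pre-canned filters might look like the sketch below (the group name, OU, and department are just placeholders):

New-DynamicDistributionGroup -Name "SalesTeam" -OrganizationalUnit "your/OU" -IncludedRecipients "MailboxUsers" -ConditionalDepartment "Sales"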

There are times when this pre-canned list just doesn't fit the bill. Let's say you need a dynamic group that filters on users from a certain country, or even on a particular job title. PowerShell to the rescue!

PowerShell offers the pre-canned filters as well as filtering on any of the attributes a user account has, giving you a lot more freedom to create customized Dynamic Distribution Groups. Please note that you cannot combine pre-canned conditional filters and a custom RecipientFilter in the same query.

For example, to create a dynamic group containing only mailbox users in a particular country and company (let's say the US and mycompany), use the following cmdlet:

New-DynamicDistributionGroup -Name "TestGroup" -Alias "TestGroup" -OrganizationalUnit "your/OU" -RecipientFilter {(RecipientType -eq "UserMailbox") -and (CountryOrRegion -eq "United States") -and (Company -eq "mycompany")}
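
And for the job title example mentioned above, a similar sketch (the group name and title value are placeholders):

New-DynamicDistributionGroup -Name "AllManagers" -Alias "AllManagers" -OrganizationalUnit "your/OU" -RecipientFilter {(RecipientType -eq "UserMailbox") -and (Title -like "*Manager*")}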

If you have an existing group that you just need to modify with a custom filter, use the Set-DynamicDistributionGroup cmdlet:

Set-DynamicDistributionGroup -Identity "TestGroup" -RecipientFilter {(RecipientType -eq "UserMailbox") -and (CountryOrRegion -eq "United States") -and (Company -eq "mycompany")}
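
A quick way to double-check who a custom filter actually picks up is to preview the membership with Get-Recipient:

$group = Get-DynamicDistributionGroup -Identity "TestGroup"
Get-Recipient -RecipientPreviewFilter $group.RecipientFilter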

As noted above, when creating your Dynamic Distribution Group with PowerShell you cannot combine pre-canned conditional filters and a custom RecipientFilter. A list of all the available filterable properties can be found on TechNet's site.

http://technet.microsoft.com/en-us/library/bb738157(v=exchg.150).aspx

Exchange 2010 and Active Directory Operation Failed on DC errors

An annoying problem that I have seen since we upgraded to Exchange 2010 is that, in the Exchange Management Console (EMC), you are not able to perform certain tasks because a domain controller could not be contacted. The domain controller in the error is usually one that has been demoted from your environment, but not always. The issue can also occur after recent changes to a DC, which cause the EMC to lose contact with that domain controller.

When this particular scenario was first noticed, it puzzled us because the DC in question was still running and Exchange was able to discover it. We went through all the typical AD and Exchange troubleshooting steps (checked permissions, AD replication, etc.), but none of them fixed the issue; the tech was still not able to create accounts. After some more digging around we found out that some FSMO roles had been moved off that DC. Aha! A major change to the DC.

Common error messages include "Active Directory operation failed on DCxxxx" or "LDAP server was unavailable". When the problem occurs, you are not able to perform certain actions in the EMC, such as creating accounts or moving mailboxes, basically any operation that requires contact with that DC.

 An example of an error is shown below:

(screenshot: "Active Directory operation failed" error in the EMC)

So what’s the problem you ask?

The problem is a result of the Exchange Management Console caching the domain controller details in its MMC settings file. It caches the data, but it's not smart enough to update the data or locate another DC. To fix the issue you have to remove the cached MMC file from the user's profile.

Use the following steps to clear the EMC MMC cache file:

1. Close the EMC if you have it open.
2. Go to the user's profile directory and delete the Exchange Management Console file.
3. The file can be found here:

      • C:\users\<specific user>\AppData\Roaming\Microsoft\MMC\Exchange Management Console

4. Reopen the EMC
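
If you would rather script the deletion, here is a one-line PowerShell sketch that assumes the default profile path from step 3:

# Deletes the current user's cached EMC settings file
Remove-Item -Path "$env:APPDATA\Microsoft\MMC\Exchange Management Console"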

See Microsoft KB article http://support.microsoft.com/kb/2019500