vCenter Events
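
Each row below gives the event type ID, its severity, the group that raises it, the message catalog text (runtime values are substituted for the {placeholder} fields), and the release that introduced the event ("Since").

The IDs in the first column are what the vSphere API exposes as event type IDs, so they can be used directly to filter the vCenter event stream. The following is a minimal pyvmomi sketch, not part of the reference itself; the hostname, credentials, and the two example IDs are placeholders:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; use real values and proper certificate
# verification outside of a lab.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
try:
    event_manager = si.content.eventManager

    # Event type IDs come straight from the ID column of the table below.
    # EventFilterSpec.category accepts the Severity column values
    # ("info", "warning", "error") if you prefer to filter by severity.
    spec = vim.event.EventFilterSpec(eventTypeId=[
        "com.vmware.vc.HA.DasHostFailedEvent",
        "AlarmStatusChangedEvent",
    ])
    for event in event_manager.QueryEvents(spec):
        print(event.createdTime, event.fullFormattedMessage)
finally:
    Disconnect(si)
```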

| ID | Severity | Group | Message Catalog Text | Since |
|----|----------|-------|----------------------|-------|
| AccountCreatedEvent | info | VC | An account was created on host {host.name} | 2.0 |
| AccountRemovedEvent | info | VC | Account {account} was removed on host {host.name} | 2.0 |
| AccountUpdatedEvent | info | VC | An account was updated on host {host.name} | 2.0 |
| ad.event.ImportCertEvent | info | VC | Import certificate succeeded. | 5.0 |
| ad.event.ImportCertFailedEvent | error | VC | Import certificate failed. | 5.0 |
| ad.event.JoinDomainEvent | info | VC | Join domain succeeded. | 5.0 |
| ad.event.JoinDomainFailedEvent | error | VC | Join domain failed. | 5.0 |
| ad.event.LeaveDomainEvent | info | VC | Leave domain succeeded. | 5.0 |
| ad.event.LeaveDomainFailedEvent | error | VC | Leave domain failed. | 5.0 |
| AdminPasswordNotChangedEvent | info | VC | The default password for the root user on the host {host.name} has not been changed | 2.5 |
| AlarmAcknowledgedEvent | info | VC | Acknowledged alarm '{alarm.name}' on {entity.name} | 5.0 |
| AlarmActionTriggeredEvent | info | VC | Alarm '{alarm.name}' on {entity.name} triggered an action | 2.0 |
| AlarmClearedEvent | info | VC | Manually cleared alarm '{alarm.name}' on {entity.name} from {from.@enum.ManagedEntity.Status} | 5.0 |
| AlarmCreatedEvent | info | VC | Created alarm '{alarm.name}' on {entity.name} | 2.0 |
| AlarmEmailCompletedEvent | info | VC | Alarm '{alarm.name}' on {entity.name} sent email to {to} | 2.0 |
| AlarmEmailFailedEvent | error | VC | Alarm '{alarm.name}' on {entity.name} cannot send email to {to} | 2.0 |
| AlarmReconfiguredEvent | info | VC | Reconfigured alarm '{alarm.name}' on {entity.name} | 2.0 |
| AlarmRemovedEvent | info | VC | Removed alarm '{alarm.name}' on {entity.name} | 2.0 |
| AlarmScriptCompleteEvent | info | VC | Alarm '{alarm.name}' on {entity.name} ran script {script} | 2.0 |
| AlarmScriptFailedEvent | error | VC | Alarm '{alarm.name}' on {entity.name} did not complete script: {reason.msg} | 2.0 |
| AlarmSnmpCompletedEvent | info | VC | Alarm '{alarm.name}' on entity {entity.name} sent SNMP trap | 2.0 |
| AlarmSnmpFailedEvent | error | VC | Alarm '{alarm.name}' on entity {entity.name} did not send SNMP trap: {reason.msg} | 2.0 |
| AlarmStatusChangedEvent | info | VC | Alarm '{alarm.name}' on {entity.name} changed from {from.@enum.ManagedEntity.Status} to {to.@enum.ManagedEntity.Status} | 2.0 |
| AllVirtualMachinesLicensedEvent | info | VC | All running virtual machines are licensed | 2.5 |
| AlreadyAuthenticatedSessionEvent | info | VC | User cannot logon since the user is already logged on | 2.0 |
| BadUsernameSessionEvent | warning | VC | Cannot login {userName}@{ipAddress} | 2.0 |
| CanceledHostOperationEvent | info | VC | The operation performed on host {host.name} in {datacenter.name} was canceled | 2.0 |
| ChangeOwnerOfFileEvent | info | VC | Changed ownership of file name {filename} from {oldOwner} to {newOwner} on {host.name} in {datacenter.name}. | 5.1 |
| ChangeOwnerOfFileFailedEvent | error | VC | Cannot change ownership of file name {filename} from {owner} to {attemptedOwner} on {host.name} in {datacenter.name}. | 5.1 |
| ClusterComplianceCheckedEvent | info | VC | Checked cluster for compliance | 4.0 |
| ClusterCreatedEvent | info | VC | Created cluster {computeResource.name} in {datacenter.name} | 2.0 |
| ClusterDestroyedEvent | info | VC | Removed cluster {computeResource.name} in datacenter {datacenter.name} | 2.0 |
| ClusterOvercommittedEvent | warning | Cluster | Insufficient capacity in cluster {computeResource.name} to satisfy resource configuration in {datacenter.name} | 2.0 |
| ClusterReconfiguredEvent | info | VC | Reconfigured cluster {computeResource.name} in datacenter {datacenter.name} | 2.0 |
| ClusterStatusChangedEvent | info | VC | Configuration status on cluster {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name} | 2.0 |
| com.vmware.license.AddLicenseEvent | info | VC | License {licenseKey} added to VirtualCenter | 4.0 |
| com.vmware.license.AssignLicenseEvent | info | VC | License {licenseKey} assigned to asset {entityName} | 4.0 |
| com.vmware.license.DLFDownloadFailedEvent | warning | VC | Failed to download license information from the host {hostname} due to {errorReason.@enum.com.vmware.license.DLFDownloadFailedEvent.DLFDownloadFailedReason} | 4.1 |
| com.vmware.license.LicenseAssignFailedEvent | error | VC | License assignment on the host fails. Reasons: {errorMessage.@enum.com.vmware.license.LicenseAssignError}. | 4.0 |
| com.vmware.license.LicenseCapacityExceededEvent | warning | VC | The current license usage ({currentUsage} {costUnitText}) for {edition} exceeds the license capacity ({capacity} {costUnitText}) | 5.0 |
| com.vmware.license.LicenseExpiryEvent | error | VC | Your host license will expire in {remainingDays} days. The host will be disconnected from VC when its license expires. | 4.0 |
| com.vmware.license.LicenseUserThresholdExceededEvent | warning | VC | Current license usage ({currentUsage} {costUnitText}) for {edition} exceeded the user-defined threshold ({threshold} {costUnitText}) | 4.1 |
| com.vmware.license.RemoveLicenseEvent | info | VC | License {licenseKey} removed from VirtualCenter | 4.0 |
| com.vmware.license.UnassignLicenseEvent | info | VC | License unassigned from asset {entityName} | 4.0 |
| com.vmware.vc.cim.CIMGroupHealthStateChanged | info | VC | Health of [data.group] changed from [data.oldState] to [data.newState]. | 4.0 |
| com.vmware.vc.datastore.UpdatedVmFilesEvent | info | VC | Updated VM files on datastore {ds.name} using host {hostName} | 4.1 |
| com.vmware.vc.datastore.UpdateVmFilesFailedEvent | error | VC | Failed to update VM files on datastore {ds.name} using host {hostName} | 4.1 |
| com.vmware.vc.datastore.UpdatingVmFilesEvent | info | VC | Updating VM files on datastore {ds.name} using host {hostName} | 4.1 |
| com.vmware.vc.dvs.LacpConfigInconsistentEvent | info | VC | Single Link Aggregation Control Group is enabled on Uplink Port Groups while enhanced LACP support is enabled. | 5.5 |
| com.vmware.vc.ft.VmAffectedByDasDisabledEvent | warning | VirtualMachine | VMware HA has been disabled in cluster {computeResource.name} of datacenter {datacenter.name}. HA will not restart VM {vm.name} or its Secondary VM after a failure. | 4.1 |
| com.vmware.vc.guestOperations.GuestOperation | info | VC | Guest operation {operationName.@enum.com.vmware.vc.guestOp} performed on Virtual machine {vm.name}. | 5.0 |
| com.vmware.vc.guestOperations.GuestOperationAuthFailure | warning | VirtualMachine | Guest operation authentication failed for operation {operationName.@enum.com.vmware.vc.guestOp} on Virtual machine {vm.name}. | 5.0 |
| com.vmware.vc.HA.AllHostAddrsPingable | info | VC | All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.AllIsoAddrsPingable | info | VC | All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.AnsweredVmLockLostQuestionEvent | warning | VirtualMachine | Lock-lost question on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} was answered by vSphere HA | 5.0 |
| com.vmware.vc.HA.AnsweredVmTerminatePDLEvent | warning | VirtualMachine | vSphere HA answered a question from host {host.name} in cluster {computeResource.name} about terminating virtual machine {vm.name} | 5.1 |
| com.vmware.vc.HA.AutoStartDisabled | info | VC | The automatic Virtual Machine Startup/Shutdown feature has been disabled on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Automatic VM restarts will interfere with vSphere HA when reacting to a host failure. | 5.0 |
| com.vmware.vc.HA.CannotResetVmWithInaccessibleDatastore | warning | Cluster | vSphere HA did not reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM had files on inaccessible datastore(s) | 5.5 |
| com.vmware.vc.HA.ClusterContainsIncompatibleHosts | warning | Cluster | vSphere HA Cluster {computeResource.name} in {datacenter.name} contains ESX/ESXi 3.5 hosts and more recent host versions, which isn't fully supported. | 5.0 |
| com.vmware.vc.HA.ClusterFailoverActionCompletedEvent | info | VC | HA completed a failover action in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.HA.ClusterFailoverActionInitiatedEvent | warning | Cluster | HA initiated a failover action in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasAgentRunningEvent | info | VC | HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is running | 4.1 |
| com.vmware.vc.HA.DasFailoverHostFailedEvent | error | Cluster | HA failover host {host.name} in cluster {computeResource.name} in {datacenter.name} has failed | 4.1 |
| com.vmware.vc.HA.DasFailoverHostIsolatedEvent | warning | Cluster | Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.DasFailoverHostPartitionedEvent | warning | Cluster | Failover Host {host.name} in {computeResource.name} in {datacenter.name} is in a different network partition than the master | 5.0 |
| com.vmware.vc.HA.DasFailoverHostUnreachableEvent | warning | Cluster | The vSphere HA agent on the failover host {host.name} in cluster {computeResource.name} in {datacenter.name} is not reachable from vCenter Server | 5.0 |
| com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent | error | Cluster | All shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent | error | Cluster | All VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasHostFailedEvent | error | Cluster | A possible host failure has been detected by HA on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasHostIsolatedEvent | warning | Cluster | Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.DasHostMonitoringDisabledEvent | warning | Cluster | No virtual machine failover will occur until Host Monitoring is enabled in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasTotalClusterFailureEvent | error | Cluster | HA recovered from a total cluster failure in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.HA.FailedRestartAfterIsolationEvent | error | VirtualMachine | vSphere HA was unable to restart virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} after it was powered off in response to a network isolation event. The virtual machine should be manually powered back on. | 5.0 |
| com.vmware.vc.HA.HeartbeatDatastoreChanged | info | VC | Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.HeartbeatDatastoreNotSufficient | warning | Cluster | The number of heartbeat datastores for host {host.name} in cluster {computeResource.name} in {datacenter.name} is {selectedNum}, which is less than required: {requiredNum} | 5.0 |
| com.vmware.vc.HA.HostAgentErrorEvent | warning | Cluster | vSphere HA Agent for host {host.name} has an error in {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason} | 5.0 |
| com.vmware.vc.HA.HostDasAgentHealthyEvent | info | VC | HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is healthy | 4.1 |
| com.vmware.vc.HA.HostDasErrorEvent | warning | Cluster | vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error: {reason.@enum.HostDasErrorEvent.HostDasErrorReason} | 5.0 |
| com.vmware.vc.HA.HostDoesNotSupportVsan | error | VC | vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in {datacenter.name} because vCloud Distributed Storage is enabled but the host does not support that feature | 5.5 |
| com.vmware.vc.HA.HostHasNoIsolationAddrsDefined | warning | Cluster | Host {host.name} in cluster {computeResource.name} in {datacenter.name} has no isolation addresses defined as required by vSphere HA. | 5.0 |
| com.vmware.vc.HA.HostHasNoMountedDatastores | error | Cluster | vSphere HA cannot be configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because there are no mounted datastores. | 5.1 |
| com.vmware.vc.HA.HostHasNoSslThumbprint | error | Cluster | vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified. | 5.0 |
| com.vmware.vc.HA.HostIncompatibleWithHA | error | Cluster | The product version of host {host.name} in cluster {computeResource.name} in {datacenter.name} is incompatible with HA. | 5.0 |
| com.vmware.vc.HA.HostPartitionedFromMasterEvent | warning | Cluster | Host {host.name} is in a different network partition than the master {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.HostStateChangedEvent | info | VC | The vSphere HA availability state of the host {host.name} has changed to {newState.@enum.com.vmware.vc.HA.DasFdmAvailabilityState} in {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.HostUnconfiguredWithProtectedVms | warning | Cluster | Host {host.name} in cluster {computeResource.name} in {datacenter.name} is disconnected, but contains {protectedVmCount} protected virtual machine(s) | 5.0 |
| com.vmware.vc.HA.HostUnconfigureError | warning | Cluster | There was an error unconfiguring the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}. To solve this problem, connect the host to a vCenter Server of version 5.0 or later. | 5.0 |
| com.vmware.vc.HA.InvalidMaster | warning | Cluster | vSphere HA Agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised. | 5.0 |
| com.vmware.vc.HA.NotAllHostAddrsPingable | warning | Cluster | The vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} cannot reach some of the management network addresses of other hosts, and thus vSphere HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs} | 5.0 |
| com.vmware.vc.HA.StartFTSecondaryFailedEvent | info | VirtualMachine | vSphere HA agent failed to start Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name} in {datacenter.name}. Reason : {fault.msg}. vSphere HA agent will retry until it times out. | 5.0 |
| com.vmware.vc.HA.StartFTSecondarySucceededEvent | info | VC | vSphere HA agent successfully started Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name}. | 5.0 |
| com.vmware.vc.HA.UserHeartbeatDatastoreRemoved | warning | Cluster | Datastore {dsName} is removed from the set of preferred heartbeat datastores selected for cluster {computeResource.name} in {datacenter.name} because the datastore is removed from inventory | 5.0 |
| com.vmware.vc.HA.VcCannotFindMasterEvent | warning | Cluster | vCenter Server is unable to find a master vSphere HA Agent in {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.VcConnectedToMasterEvent | warning | VC | vCenter Server is connected to the master vSphere HA Agent running on host {hostname} in {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.VcDisconnectedFromMasterEvent | warning | VC | vCenter Server is disconnected from the master vSphere HA Agent running on host {hostname} in {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.VMIsHADisabledIsolationEvent | info | VC | vSphere HA did not perform an isolation response for {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled | 5.1 |
| com.vmware.vc.HA.VMIsHADisabledRestartEvent | info | VC | vSphere HA did not attempt to restart {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled | 5.1 |
| com.vmware.vc.HA.VmNotProtectedEvent | warning | VirtualMachine | VM {vm.name} in cluster {computeResource.name} in {datacenter.name} failed to become vSphere HA Protected and vSphere HA may not attempt to restart it after a failure. | 5.0 |
| com.vmware.vc.HA.VmProtectedEvent | info | VC | VM {vm.name} in cluster {computeResource.name} in {datacenter.name} is vSphere HA Protected and vSphere HA will attempt to restart it after a failure. | 5.0 |
| com.vmware.vc.ha.VmRestartedByHAEvent | warning | VirtualMachine | Virtual machine {vm.name} was restarted on host {host.name} in cluster {computeResource.name} by vSphere HA | 5.0 |
| com.vmware.vc.HA.VmUnprotectedEvent | warning | VirtualMachine | VM {vm.name} in cluster {computeResource.name} in {datacenter.name} is not vSphere HA Protected. | 5.0 |
| com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFull | info | VC | vSphere HA has unprotected virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} because it ran out of disk space | 5.1 |
| com.vmware.vc.host.AutoStartReconfigureFailedEvent | error | VC | Reconfiguring autostart rules for virtual machines on {host.name} in datacenter {datacenter.name} failed | 5.0 |
| com.vmware.vc.host.clear.vFlashResource.inaccessible | info | VC | Host's vSphere Flash resource is restored to be accessible. | 5.5 |
| com.vmware.vc.host.clear.vFlashResource.reachthreshold | info | VC | Host's vSphere Flash resource usage dropped below {1}%. | 5.5 |
| com.vmware.vc.host.problem.vFlashResource.inaccessible | warning | VC | Host's vSphere Flash resource is inaccessible. | 5.5 |
| com.vmware.vc.host.problem.vFlashResource.reachthreshold | warning | VC | Host's vSphere Flash resource usage is more than {1}%. | 5.5 |
| com.vmware.vc.host.vFlash.defaultModuleChangedEvent | info | VC | Any new vFlash cache configuration request will use {vFlashModule} as default vSphere Flash module. All existing vFlash cache configurations remain unchanged. | 5.5 |
| com.vmware.vc.host.vFlash.modulesLoadedEvent | info | VC | vSphere Flash modules are loaded or reloaded on the host | 5.5 |
| com.vmware.vc.host.vFlash.SsdConfigurationFailedEvent | error | ESXHostStorage | {1} on disk '{2}' failed due to {3} | 5.5 |
| com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEvent | info | VC | vSphere Flash resource capacity is extended | 5.5 |
| com.vmware.vc.host.vFlash.VFlashResourceConfiguredEvent | info | VC | vSphere Flash resource is configured on the host | 5.5 |
| com.vmware.vc.host.vFlash.VFlashResourceRemovedEvent | info | VC | vSphere Flash resource is removed from the host | 5.5 |
| com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent | info | VC | Network passthrough is active on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.npt.VmAdapterExitedPassthroughEvent | info | VC | Network passthrough is inactive on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.ovfconsumers.CloneOvfConsumerStateErrorEvent | warning | VC | Failed to clone state for the entity '{entityName}' on extension {extensionName} | 5.0 |
| com.vmware.vc.ovfconsumers.GetOvfEnvironmentSectionsErrorEvent | warning | VC | Failed to retrieve OVF environment sections for VM '{vm.name}' from extension {extensionName} | 5.0 |
| com.vmware.vc.ovfconsumers.PowerOnAfterCloneErrorEvent | warning | VC | Powering on VM '{vm.name}' after cloning was blocked by an extension. Message: {description} | 5.0 |
| com.vmware.vc.ovfconsumers.RegisterEntityErrorEvent | warning | VC | Failed to register entity '{entityName}' on extension {extensionName} | 5.0 |
| com.vmware.vc.ovfconsumers.UnregisterEntitiesErrorEvent | warning | VC | Failed to unregister entities on extension {extensionName} | 5.0 |
| com.vmware.vc.ovfconsumers.ValidateOstErrorEvent | warning | VC | Failed to validate OVF descriptor on extension {extensionName} | 5.0 |
| com.vmware.vc.profile.AnswerFileExportedEvent | info | VC | Answer file for host {host.name} in datacenter {datacenter.name} has been exported | 5.0 |
| com.vmware.vc.profile.AnswerFileUpdatedEvent | info | VC | Answer file for host {host.name} in datacenter {datacenter.name} has been updated | 5.0 |
| com.vmware.vc.rp.ResourcePoolRenamedEvent | info | VC | Resource pool '{oldName}' has been renamed to '{newName}' | 5.1 |
| com.vmware.vc.sdrs.CanceledDatastoreMaintenanceModeEvent | info | VC | The datastore maintenance mode operation has been canceled | 5.0 |
| com.vmware.vc.sdrs.ConfiguredStorageDrsOnPodEvent | info | VC | Configured storage DRS on datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.ConsistencyGroupViolationEvent | warning | VC | Datastore cluster {objectName} has datastores that belong to different SRM Consistency Groups | 5.1 |
| com.vmware.vc.sdrs.DatastoreEnteredMaintenanceModeEvent | info | VC | Datastore {ds.name} has entered maintenance mode | 5.0 |
| com.vmware.vc.sdrs.DatastoreEnteringMaintenanceModeEvent | info | VC | Datastore {ds.name} is entering maintenance mode | 5.0 |
| com.vmware.vc.sdrs.DatastoreExitedMaintenanceModeEvent | info | VC | Datastore {ds.name} has exited maintenance mode | 5.0 |
| com.vmware.vc.sdrs.DatastoreInMultipleDatacentersEvent | warning | VC | Datastore cluster {objectName} has one or more datastores: {datastore} shared across multiple datacenters | 5.0 |
| com.vmware.vc.sdrs.DatastoreMaintenanceModeErrorsEvent | error | VC | Datastore {ds.name} encountered errors while entering maintenance mode | 5.0 |
| com.vmware.vc.sdrs.StorageDrsDisabledEvent | info | VC | Disabled storage DRS on datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsEnabledEvent | info | VC | Enabled storage DRS on datastore cluster {objectName} with automation level {behavior.@enum.storageDrs.PodConfigInfo.Behavior} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsInvocationFailedEvent | error | VC | Storage DRS invocation failed on datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsNewRecommendationPendingEvent | info | VC | A new storage DRS recommendation has been generated on datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsNotSupportedHostConnectedToPodEvent | warning | VC | Datastore cluster {objectName} is connected to one or more hosts: {host} that do not support storage DRS | 5.0 |
| com.vmware.vc.sdrs.StorageDrsRecommendationApplied | info | VC | All pending recommendations on datastore cluster {objectName} were applied | 5.5 |
| com.vmware.vc.sdrs.StorageDrsStorageMigrationEvent | info | VC | Storage DRS migrated disks of VM {vm.name} to datastore {ds.name} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsStoragePlacementEvent | info | VC | Storage DRS placed disks of VM {vm.name} on datastore {ds.name} | 5.0 |
| com.vmware.vc.sdrs.StoragePodCreatedEvent | info | VC | Created datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.StoragePodDestroyedEvent | info | VC | Removed datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sioc.NotSupportedHostConnectedToDatastoreEvent | warning | VC | SIOC has detected that a host: {host} connected to a SIOC-enabled datastore: {objectName} is running an older version of ESX that does not support SIOC. This is an unsupported configuration. | 5.0 |
| com.vmware.vc.sms.datastore.ComplianceStatusCompliantEvent | info | VC | Virtual disk {diskKey} on {vmName} connected to datastore {datastore.name} in {datacenter.name} is compliant from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.datastore.ComplianceStatusNonCompliantEvent | error | VirtualMachine | Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.datastore.ComplianceStatusUnknownEvent | warning | VC | Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.LunCapabilityInitEvent | info | VC | Storage provider system default capability event | 5.0 |
| com.vmware.vc.sms.LunCapabilityMetEvent | info | VC | Storage provider system capability requirements met | 5.0 |
| com.vmware.vc.sms.LunCapabilityNotMetEvent | info | VC | Storage provider system capability requirements not met | 5.0 |
| com.vmware.vc.sms.provider.health.event | info | VC | {msgTxt} | 5.0 |
| com.vmware.vc.sms.provider.system.event | info | VC | {msgTxt} | 5.0 |
| com.vmware.vc.sms.ThinProvisionedLunThresholdClearedEvent | info | VC | Storage provider thin provisioning capacity threshold reached | 5.0 |
| com.vmware.vc.sms.ThinProvisionedLunThresholdCrossedEvent | info | VC | Storage provider thin provisioning capacity threshold crossed | 5.0 |
| com.vmware.vc.sms.ThinProvisionedLunThresholdInitEvent | info | VC | Storage provider thin provisioning default capacity event | 5.0 |
| com.vmware.vc.sms.vm.ComplianceStatusCompliantEvent | info | VC | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is compliant from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.vm.ComplianceStatusNonCompliantEvent | error | VC | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent | warning | VC | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}. | 5.5 |
| com.vmware.vc.spbm.ProfileAssociationFailedEvent | error | VC | Profile association/dissociation failed for {entityName} | 5.5 |
| com.vmware.vc.stats.HostQuickStatesNotUpToDateEvent | info | VC | Quick stats on {host.name} in {computeResource.name} in {datacenter.name} is not up-to-date | 5.0 |
| com.vmware.vc.VCHealthStateChangedEvent | info | VC | vCenter Service overall health changed from '{oldState}' to '{newState}' | 4.1 |
| com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent | info | VC | HA VM Component Protection protects virtual machine {vm.name} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because the FT state is disabled | 4.1 |
| com.vmware.vc.vcp.FtFailoverEvent | info | VC | FT Primary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is going to fail over to Secondary VM due to component failure | 4.1 |
| com.vmware.vc.vcp.FtFailoverFailedEvent | error | VirtualMachine | FT virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to failover to secondary | 4.1 |
| com.vmware.vc.vcp.FtSecondaryRestartEvent | info | VC | HA VM Component Protection is restarting FT secondary virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to component failure | 4.1 |
| com.vmware.vc.vcp.FtSecondaryRestartFailedEvent | error | VirtualMachine | FT Secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart | 4.1 |
| com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent | info | VC | HA VM Component Protection protects virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because it has been in the needSecondary state too long | 4.1 |
| com.vmware.vc.vcp.TestEndEvent | info | VC | VM Component Protection test ends on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.vcp.TestStartEvent | info | VC | VM Component Protection test starts on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.vcp.VcpNoActionEvent | info | VC | HA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to the feature configuration setting | 4.1 |
| com.vmware.vc.vcp.VmDatastoreFailedEvent | error | VirtualMachine | Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {datastore} | 4.1 |
| com.vmware.vc.vcp.VmNetworkFailedEvent | error | VirtualMachine | Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {network} | 4.1 |
| com.vmware.vc.vcp.VmPowerOffHangEvent | error | VirtualMachine | HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} successfully after trying {numTimes} times and will keep trying | 4.1 |
| com.vmware.vc.vcp.VmRestartEvent | info | VC | HA VM Component Protection is restarting virtual machine {vm.name} due to component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.vcp.VmRestartFailedEvent | error | VirtualMachine | Virtual machine {vm.name} affected by component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart | 4.1 |
| com.vmware.vc.vcp.VmWaitForCandidateHostEvent | error | VirtualMachine | HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} after waiting {numSecWait} seconds and will keep trying | 4.1 |
| com.vmware.vc.vm.VmRegisterFailedEvent | error | VC | Virtual machine {vm.name} registration on {host.name} in datacenter {datacenter.name} failed | 5.0 |
| com.vmware.vc.vm.VmStateFailedToRevertToSnapshot | error | VirtualMachine | Failed to revert the execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} to snapshot {snapshotName}, with ID {snapshotId} | 5.0 |
| com.vmware.vc.vm.VmStateRevertedToSnapshot | info | VC | The execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId} | 5.0 |
| com.vmware.vc.vmam.AppMonitoringNotSupported | warning | VC | Application monitoring is not supported on {host.name} in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent | warning | VC | Application heartbeat status changed to {status} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.vmam.VmAppHealthStateChangedEvent | warning | VirtualMachine | vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} | 5.5 |
| com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent | warning | VirtualMachine | Application heartbeat failed for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.VmCloneFailedInvalidDestinationEvent | error | VC | Cannot clone {vm.name} as {destVmName} to invalid or non-existent destination with ID {invalidMoRef}: {fault} | 5.0 |
| com.vmware.vc.VmCloneToResourcePoolFailedEvent | error | VC | Cannot clone {vm.name} as {destVmName} to resource pool {destResourcePool}: {fault} | 5.0 |
| com.vmware.vc.VmDiskConsolidatedEvent | info | VC | Virtual machine {vm.name} disks consolidated successfully on {host.name} in cluster {computeResource.name} in {datacenter.name}. | 5.0 |
| com.vmware.vc.VmDiskConsolidationNeeded | info | VC | Virtual machine {vm.name} disks consolidation is needed on {host.name} in cluster {computeResource.name} in {datacenter.name}. | 5.0 |
| com.vmware.vc.VmDiskConsolidationNoLongerNeeded | info | VC | Virtual machine {vm.name} disks consolidation is no longer needed on {host.name} in cluster {computeResource.name} in {datacenter.name}. | 5.1 |
| com.vmware.vc.VmDiskFailedToConsolidateEvent | error | VirtualMachine | Virtual machine {vm.name} disks consolidation failed on {host.name} in cluster {computeResource.name} in {datacenter.name}. | 5.0 |
| com.vmware.vc.vsan.DatastoreNoCapacityEvent | error | VC | VSAN datastore {datastoreName} in cluster {computeResource.name} in datacenter {datacenter.name} does not have capacity | 5.5 |
| com.vmware.vc.vsan.HostCommunicationErrorEvent | error | ESXHost | event.com.vmware.vc.vsan.HostCommunicationErrorEvent.fullFormat | 5.5 |
| com.vmware.vc.vsan.HostNotInClusterEvent | error | VC | {host.name} with the VSAN service enabled is not in the vCenter cluster {computeResource.name} in datacenter {datacenter.name} | 5.5 |
| com.vmware.vc.vsan.HostNotInVsanClusterEvent | error | VC | {host.name} is in a VSAN enabled cluster {computeResource.name} in datacenter {datacenter.name} but does not have VSAN service enabled | 5.5 |
| com.vmware.vc.vsan.HostVendorProviderDeregistrationFailedEvent | error | VC | Vendor provider {host.name} deregistration failed | 5.5 |
| com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEvent | info | VC | Vendor provider {host.name} deregistration succeeded | 5.5 |
| com.vmware.vc.vsan.HostVendorProviderRegistrationFailedEvent | error | VC | Vendor provider {host.name} registration failed | 5.5 |
| com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEvent | info | VC | Vendor provider {host.name} registration succeeded | 5.5 |
| com.vmware.vc.vsan.NetworkMisConfiguredEvent | error | ESXHostNetwork | VSAN network is not configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | 5.5 |
| com.vmware.vc.vsan.RogueHostFoundEvent | error | VC | Found another host participating in the VSAN service in cluster {computeResource.name} in datacenter {datacenter.name} which is not a member of this host's vCenter cluster | 5.5 |
| com.vmware.vim.eam.agency.create | info | VC | {agencyName} created by {ownerName} | 5.0 |
| com.vmware.vim.eam.agency.destroyed | info | VC | {agencyName} removed from the vSphere ESX Agent Manager | 5.0 |
| com.vmware.vim.eam.agency.goalstate | info | VC | {agencyName} changed goal state from {oldGoalState} to {newGoalState} | 5.0 |
| com.vmware.vim.eam.agency.statusChanged | info | VC | Agency status changed from {oldStatus} to {newStatus} | 5.1 |
| com.vmware.vim.eam.agency.updated | info | VC | Configuration updated {agencyName} | 5.0 |
| com.vmware.vim.eam.agent.created | info | VC | Agent added to host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.destroyed | info | VC | Agent removed from host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.destroyedNoHost | info | VC | Agent removed from host ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterPowerOn | info | VC | Agent VM {vm.name} has been powered on. Mark agent as available to proceed agent workflow ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterProvisioning | info | VC | Agent VM {vm.name} has been provisioned. Mark agent as available to proceed agent workflow ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.statusChanged | info | VC | Agent status changed from {oldStatus} to {newStatus} | 5.1 |
| com.vmware.vim.eam.agent.task.deleteVm | info | VC | Agent VM {vmName} is deleted on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.task.deployVm | info | VC | Agent VM {vm.name} is provisioned on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.task.powerOffVm | info | VC | Agent VM {vm.name} powered off, on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.task.powerOnVm | info | VC | Agent VM {vm.name} powered on, on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.task.vibInstalled | info | VC | Agent installed VIB {vib} on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.agent.task.vibUninstalled | info | VC | Agent uninstalled VIB {vib} on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.cannotAccessAgentOVF | warning | VC | Unable to access agent OVF package at {url} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.cannotAccessAgentVib | warning | VC | Unable to access agent VIB module at {url} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.hostInMaintenanceMode | warning | VC | Agent cannot complete an operation since the host {host.name} is in maintenance mode ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.hostInStandbyMode | warning | VC | Agent cannot complete an operation since the host {host.name} is in standby mode ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.hostPoweredOff | warning | VC | Agent cannot complete an operation since the host {host.name} is powered off ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.incompatibleHostVersion | warning | VC | Agent is not deployed due to incompatible host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.insufficientIpAddresses | warning | VC | Insufficient IP addresses in IP pool in agent's VM network ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.insufficientResources | warning | VC | Agent cannot be provisioned due to insufficient resources on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.insufficientSpace | warning | VC | Agent on {host.name} cannot be provisioned due to insufficient space on datastore ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.missingAgentIpPool | warning | VC | No IP pool in agent's VM network ({agencyname}) | 5.0 |
| com.vmware.vim.eam.issue.missingDvFilterSwitch | warning | VC | dvFilter switch is not configured on host {host.name} ({agencyname}) | 5.0 |
| com.vmware.vim.eam.issue.noAgentVmDatastore | warning | VC | No agent datastore configuration on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.noAgentVmNetwork | warning | VC | No agent network configuration on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.noCustomAgentVmDatastore | error | VC | Agent datastore(s) {customAgentVmDatastoreName} not available on host {host.name} ({agencyName}) | 5.5 |
| com.vmware.vim.eam.issue.noCustomAgentVmNetwork | error | VC | Agent network(s) {customAgentVmNetworkName} not available on host {host.name} ({agencyName}) | 5.1 |
| com.vmware.vim.eam.issue.orphandedDvFilterSwitch | warning | VC | Unused dvFilter switch on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.orphanedAgency | warning | VC | Orphaned agency found. ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.ovfInvalidFormat | warning | VC | OVF used to provision agent on host {host.name} has invalid format ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.ovfInvalidProperty | warning | VC | OVF environment used to provision agent on host {host.name} has one or more invalid properties ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.resolved | info | VC | Issue {type} resolved (key {key}) | 5.1 |
| com.vmware.vim.eam.issue.unknownAgentVm | warning | VC | Unknown agent VM {vm.name} | 5.0 |
| com.vmware.vim.eam.issue.vibCannotPutHostInMaintenanceMode | warning | VC | Cannot put host into maintenance mode ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.vibInvalidFormat | warning | VC | Invalid format for VIB module at {url} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.vibNotInstalled | warning | VC | VIB module for agent is not installed on host {host.name} ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.vibRequiresHostInMaintenanceMode | error | VC | Host must be put into maintenance mode to complete agent VIB installation ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.vibRequiresHostReboot | error | VC | Host {host.name} must be rebooted to complete agent VIB installation ({agencyName}) | 5.0 |
| com.vmware.vim.eam.issue.vibRequiresManualInstallation | error | VC | VIB {vib} requires manual installation on host {host.name} ({agencyName}) | 5.0 |
com.vmware.vim.eam.issue.vibRequiresManualUninstallation

error

VC

com.vmware.vim.eam.issue.vibRequiresManualUninstallation| VIB {vib} requires manual uninstallation on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmCorrupted

warning

VC

com.vmware.vim.eam.issue.vmCorrupted| Agent VM {vm.name} on host {host.name} is corrupted ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmDeployed

warning

VC

com.vmware.vim.eam.issue.vmDeployed| Agent VM {vm.name} is provisioned on host {host.name} when it should be removed ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmMarkedAsTemplate

warning

VC

com.vmware.vim.eam.issue.vmMarkedAsTemplate| Agent VM {vm.name} on host {host.name} is marked as template ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmNotDeployed

warning

VC

com.vmware.vim.eam.issue.vmNotDeployed| Agent VM is missing on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmOrphaned

warning

VC

com.vmware.vim.eam.issue.vmOrphaned| Orphaned agent VM {vm.name} on host {host.name} detected ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmPoweredOff

warning

VC

com.vmware.vim.eam.issue.vmPoweredOff| Agent VM {vm.name} on host {host.name} is expected to be powered on ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmPoweredOn

warning

VC

com.vmware.vim.eam.issue.vmPoweredOn| Agent VM {vm.name} on host {host.name} is expected to be powered off ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmSuspended

warning

VC

com.vmware.vim.eam.issue.vmSuspended| Agent VM {vm.name} on host {host.name} is expected to be powered on but is suspended ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmWrongFolder

warning

VC

com.vmware.vim.eam.issue.vmWrongFolder| Agent VM {vm.name} on host {host.name} is in the wrong VM folder ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmWrongResourcePool

warning

VC

com.vmware.vim.eam.issue.vmWrongResourcePool| Agent VM {vm.name} on host {host.name} is in the resource pool ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.login.invalid

warning

VC

com.vmware.vim.eam.login.invalid| Failed login to vSphere ESX Agent Manager

Since 5.0 Reference

com.vmware.vim.eam.login.succeeded

info

VC

com.vmware.vim.eam.login.succeeded| Successful login by {user} into vSphere ESX Agent Manager

Since 5.0 Reference

com.vmware.vim.eam.logout

info

VC

com.vmware.vim.eam.logout| User {user} logged out of vSphere ESX Agent Manager by logging out of the vCenter server

Since 5.0 Reference

com.vmware.vim.eam.task.scanForUnknownAgentVmsCompleted

info

VC

com.vmware.vim.eam.task.scanForUnknownAgentVmsCompleted| Scan for unknown agent VMs completed

Since 5.0 Reference

com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiated

info

VC

com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiated| Scan for unknown agent VMs initiated

Since 5.0 Reference

com.vmware.vim.eam.task.setupDvFilter

info

VC

com.vmware.vim.eam.task.setupDvFilter| DvFilter switch '{switchName}' is setup on host {host.name}

Since 5.0 Reference

com.vmware.vim.eam.task.tearDownDvFilter

info

VC

com.vmware.vim.eam.task.tearDownDvFilter| DvFilter switch '{switchName}' is teared down on host {host.name}

Since 5.0 Reference

com.vmware.vim.eam.unauthorized.access

warning

VC

com.vmware.vim.eam.unauthorized.access| Unauthorized access by {user} in vSphere ESX Agent Manager

Since 5.0 Reference

com.vmware.vim.eam.vum.failedtouploadvib

error

VC

com.vmware.vim.eam.vum.failedtouploadvib| Failed to upload {vibUrl} to VMware Update Manager ({agencyName})

Since 5.0 Reference
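The entries above can be retrieved programmatically through the vCenter event API. The following is a minimal sketch in Python using the pyvmomi library (pyvmomi itself, the vCenter hostname, and the credentials are assumptions for illustration, not part of this reference); dotted IDs such as com.vmware.vim.eam.login.invalid are EventEx types and are matched through EventFilterSpec.eventTypeId:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; validate certificates in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    event_manager = si.content.eventManager
    # Dotted IDs from this catalog go into eventTypeId; CamelCase event
    # classes (e.g. "DrsVmMigratedEvent") can also be listed by type name.
    spec = vim.event.EventFilterSpec(
        eventTypeId=["com.vmware.vim.eam.login.invalid",
                     "com.vmware.vim.eam.unauthorized.access"])
    for event in event_manager.QueryEvents(spec):
        print(event.createdTime, event.fullFormattedMessage)
finally:
    Disconnect(si)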

com.vmware.vim.vsm.dependency.bind.vApp

info

VC

com.vmware.vim.vsm.dependency.bind.vApp| event.com.vmware.vim.vsm.dependency.bind.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.bind.vm

info

VC

com.vmware.vim.vsm.dependency.bind.vm| event.com.vmware.vim.vsm.dependency.bind.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.create.vApp

info

VC

com.vmware.vim.vsm.dependency.create.vApp| event.com.vmware.vim.vsm.dependency.create.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.create.vm

info

VC

com.vmware.vim.vsm.dependency.create.vm| event.com.vmware.vim.vsm.dependency.create.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.destroy.vApp

info

VC

com.vmware.vim.vsm.dependency.destroy.vApp| event.com.vmware.vim.vsm.dependency.destroy.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.destroy.vm

info

VC

com.vmware.vim.vsm.dependency.destroy.vm| event.com.vmware.vim.vsm.dependency.destroy.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.reconfigure.vApp

info

VC

com.vmware.vim.vsm.dependency.reconfigure.vApp| event.com.vmware.vim.vsm.dependency.reconfigure.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.reconfigure.vm

info

VC

com.vmware.vim.vsm.dependency.reconfigure.vm| event.com.vmware.vim.vsm.dependency.reconfigure.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.unbind.vApp

info

VC

com.vmware.vim.vsm.dependency.unbind.vApp| event.com.vmware.vim.vsm.dependency.unbind.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.unbind.vm

info

VC

com.vmware.vim.vsm.dependency.unbind.vm| event.com.vmware.vim.vsm.dependency.unbind.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.update.vApp

info

VC

com.vmware.vim.vsm.dependency.update.vApp| event.com.vmware.vim.vsm.dependency.update.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.update.vm

info

VC

com.vmware.vim.vsm.dependency.update.vm| event.com.vmware.vim.vsm.dependency.update.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.provider.register

info

VC

com.vmware.vim.vsm.provider.register| event.com.vmware.vim.vsm.provider.register.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.provider.unregister

info

VC

com.vmware.vim.vsm.provider.unregister| event.com.vmware.vim.vsm.provider.unregister.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.provider.update

info

VC

com.vmware.vim.vsm.provider.update| event.com.vmware.vim.vsm.provider.update.fullFormat

Since 5.0 Reference

CustomFieldDefAddedEvent

info

VC

Created new custom field definition {name}

Since 2.0 Reference

CustomFieldDefEvent

info

VC

This event records a custom field definition event.

Since 2.0 Reference

CustomFieldDefRemovedEvent

info

VC

Removed field definition {name}

Since 2.0 Reference

CustomFieldDefRenamedEvent

info

VC

Renamed field definition from {name} to {newName}

Since 2.0 Reference

CustomFieldValueChangedEvent

info

VC

Changed custom field {name} on {entity.name} in {datacenter.name} to {value}

Since 2.0 Reference

CustomizationFailed

warning

VC

Cannot complete customization of VM {vm.name}. See customization log at {logLocation} on the guest OS for details.

Since 2.5 Reference

CustomizationLinuxIdentityFailed

warning

VC

An error occurred while setting up Linux identity. See the log file '{logLocation}' on the guest OS for details.

Since 2.5 Reference

CustomizationNetworkSetupFailed

warning

VC

An error occurred while setting up network properties of the guest OS. See the log file {logLocation} in the guest OS for details.

Since 2.5 Reference

CustomizationStartedEvent

info

VC

Started customization of VM {vm.name}. Customization log located at {logLocation} in the guest OS.

Since 2.5 Reference

CustomizationSucceeded

info

VC

Customization of VM {vm.name} succeeded. Customization log located at {logLocation} in the guest OS.

Since 2.5 Reference

CustomizationSysprepFailed

warning

VC

The version of Sysprep {sysprepVersion} provided for customizing VM {vm.name} does not match the version of guest OS {systemVersion}. See the log file {logLocation} in the guest OS for more information.

Since 2.5 Reference

CustomizationUnknownFailure

warning

VC

An error occurred while customizing VM {vm.name}. For details, see the log file {logLocation} in the guest OS.

Since 2.5 Reference

DasAdmissionControlDisabledEvent

info

VC

HA admission control disabled on cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasAdmissionControlEnabledEvent

info

VC

HA admission control enabled on cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasAgentFoundEvent

info

VC

Re-established contact with a primary host in this HA cluster

Since 2.0 Reference

DasAgentUnavailableEvent

error

Cluster

Unable to contact a primary HA agent in cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasClusterIsolatedEvent

error

Cluster

All hosts in the HA cluster {computeResource.name} in {datacenter.name} were isolated from the network. Check the network configuration for proper network redundancy in the management network.

Since 4.0 Reference

DasDisabledEvent

info

VC

HA disabled on cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasEnabledEvent

info

VC

HA enabled on cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasHostFailedEvent

error

Cluster

A possible host failure has been detected by HA on {failedHost.name} in cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasHostIsolatedEvent

warning

Cluster

Host {isolatedHost.name} has been isolated from cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DatacenterCreatedEvent

info

VC

Created datacenter {datacenter.name} in folder {parent.name}

Since 2.5 Reference

DatacenterRenamedEvent

info

VC

Renamed datacenter from {oldName} to {newName}

Since 2.5 Reference

DatastoreCapacityIncreasedEvent

info

VC

Datastore {datastore.name} increased in capacity from {oldCapacity} bytes to {newCapacity} bytes in {datacenter.name}

Since 4.0 Reference

DatastoreDestroyedEvent

info

VC

Removed unconfigured datastore {datastore.name}

Since 2.0 Reference

DatastoreDiscoveredEvent

info

VC

Discovered datastore {datastore.name} on {host.name} in {datacenter.name}

Since 2.0 Reference

DatastoreDuplicatedEvent

error

VC

Multiple datastores named {datastore} detected on host {host.name} in {datacenter.name}

Since 2.0 Reference

DatastoreFileCopiedEvent

info

VC

File or directory {sourceFile} copied from {sourceDatastore.name} to {datastore.name} as {targetFile}

Since 4.0 Reference

DatastoreFileDeletedEvent

info

VC

File or directory {targetFile} deleted from {datastore.name}

Since 4.0 Reference

DatastoreFileMovedEvent

info

VC

File or directory {sourceFile} moved from {sourceDatastore.name} to {datastore.name} as {targetFile}

Since 4.0 Reference

DatastoreIORMReconfiguredEvent

info

VC

Reconfigured Storage I/O Control on datastore {datastore.name}

Since 4.1 Reference

DatastorePrincipalConfigured

info

VC

Configured datastore principal {datastorePrincipal} on host {host.name} in {datacenter.name}

Since 2.0 Reference

DatastoreRemovedOnHostEvent

info

VC

Removed datastore {datastore.name} from {host.name} in {datacenter.name}

Since 2.0 Reference

DatastoreRenamedEvent

info

VC

Renamed datastore from {oldName} to {newName} in {datacenter.name}

Since 2.0 Reference

DatastoreRenamedOnHostEvent

info

VC

Renamed datastore from {oldName} to {newName} in {datacenter.name}

Since 2.0 Reference

DrsDisabledEvent

info

VC

Disabled DRS on cluster {computeResource.name} in datacenter {datacenter.name}

Since 2.0 Reference

DrsEnabledEvent

info

VC

Enabled DRS on {computeResource.name} with automation level {behavior} in {datacenter.name}

Since 2.0 Reference

DrsEnteredStandbyModeEvent

info

VC

DRS put {host.name} into standby mode

Since 2.5 Reference

DrsEnteringStandbyModeEvent

info

VC

DRS is putting {host.name} into standby mode

Since 4.0 Reference

DrsExitedStandbyModeEvent

info

VC

DRS moved {host.name} out of standby mode

Since 2.5 Reference

DrsExitingStandbyModeEvent

info

VC

DRS is moving {host.name} out of standby mode

Since 4.0 Reference

DrsExitStandbyModeFailedEvent

error

ESXHost

DRS cannot move {host.name} out of standby mode

Since 4.0 Reference

DrsInvocationFailedEvent

error

Cluster

DRS invocation not completed

Since 4.0 Reference

DrsRecoveredFromFailureEvent

info

VC

DRS has recovered from the failure

Since 4.0 Reference

DrsResourceConfigureFailedEvent

error

Cluster

Unable to apply DRS resource settings on host {host.name} in {datacenter.name}. {reason.msg}. This can significantly reduce the effectiveness of DRS.

Since 2.0 Reference

DrsResourceConfigureSyncedEvent

info

VC

Resource configuration specification returned to synchronization after a previous failure on host '{host.name}' in {datacenter.name}

Since 2.0 Reference

DrsRuleComplianceEvent

info

VC

{vm.name} on {host.name} in {datacenter.name} is now compliant with DRS VM-Host affinity rules

Since 4.1 Reference

DrsRuleViolationEvent

warning

VirtualMachine

{vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host affinity rule

Since 4.1 Reference

DrsVmMigratedEvent

info

VC

DRS migrated {vm.name} from {sourceHost.name} to {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DrsVmPoweredOnEvent

info

VC

DRS powered on {vm.name} on {host.name} in {datacenter.name}

Since 2.5 Reference

DuplicateIpDetectedEvent

warning

ESXHostNetwork

Virtual machine {macAddress} on host {host.name} has a duplicate IP {duplicateIP}

Since 2.5 Reference

DvpgImportEvent

info

VC

Import operation with type {importType} was performed on {net.name}

Since 5.1 Reference

DvpgRestoreEvent

info

VC

Restore operation was performed on {net.name}

Since 5.1 Reference

DVPortgroupCreatedEvent

info

VC

Distributed virtual port group {net.name} in {datacenter.name} was added to switch {dvs.name}.

Since 4.0 Reference

DVPortgroupDestroyedEvent

info

VC

Distributed virtual port group {net.name} in {datacenter.name} was deleted.

Since 4.0 Reference

DVPortgroupReconfiguredEvent

info

VC

Distributed virtual port group {net.name} in {datacenter.name} was reconfigured.

Since 4.0 Reference

DVPortgroupRenamedEvent

info

VC

Distributed virtual port group {oldName} in {datacenter.name} was renamed to {newName}

Since 4.0 Reference

DvsCreatedEvent

info

VC

A Distributed Virtual Switch {dvs.name} was created in {datacenter.name}.

Since 4.0 Reference

DvsDestroyedEvent

info

VC

Distributed Virtual Switch {dvs.name} in {datacenter.name} was deleted.

Since 4.0 Reference

DvsEvent

info

VC

Distributed Virtual Switch event

Since 4.0 Reference

DvsHealthStatusChangeEvent

info

VC

Health check status was changed in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}

Since 5.1 Reference

DvsHostBackInSyncEvent

info

VC

The Distributed Virtual Switch {dvs.name} configuration on the host was synchronized with that of the vCenter Server.

Since 4.0 Reference

DvsHostJoinedEvent

info

VC

The host {hostJoined.name} joined the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsHostLeftEvent

info

VC

The host {hostLeft.name} left the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsHostStatusUpdated

info

VC

The host {hostMember.name} changed status on the vNetwork Distributed Switch {dvs.name} in {datacenter.name}

Since 4.1 Reference

DvsHostWentOutOfSyncEvent

warning

ESXHostNetwork

The Distributed Virtual Switch {dvs.name} configuration on the host differed from that of the vCenter Server.

Since 4.0 Reference

DvsImportEvent

info

VC

Import operation with type {importType} was performed on {dvs.name}

Since 5.1 Reference

DvsMergedEvent

info

VC

Distributed Virtual Switch {srcDvs.name} was merged into {dstDvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortBlockedEvent

info

VC

Port {portKey} was blocked in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortConnectedEvent

info

VC

The port {portKey} was connected in the Distributed Virtual Switch {dvs.name} in {datacenter.name}

Since 4.0 Reference

DvsPortCreatedEvent

info

VC

New ports were created in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortDeletedEvent

info

VC

Deleted ports in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortDisconnectedEvent

info

VC

The port {portKey} was disconnected in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortEnteredPassthruEvent

info

VC

dvPort {portKey} entered passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}

Since 4.1 Reference

DvsPortExitedPassthruEvent

info

VC

dvPort {portKey} exited passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}

Since 4.1 Reference

DvsPortJoinPortgroupEvent

info

VC

Port {portKey} was moved into the distributed virtual port group {portgroupName} in {datacenter.name}.

Since 4.0 Reference

DvsPortLeavePortgroupEvent

info

VC

Port {portKey} was moved out of the distributed virtual port group {portgroupName} in {datacenter.name}.

Since 4.0 Reference

DvsPortLinkDownEvent

warning

VC

The port {portKey} link was down in the Distributed Virtual Switch {dvs.name} in {datacenter.name}

Since 4.0 Reference

DvsPortLinkUpEvent

info

VC

The port {portKey} link was up in the Distributed Virtual Switch {dvs.name} in {datacenter.name}

Since 4.0 Reference

DvsPortReconfiguredEvent

info

VC

Reconfigured ports in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortRuntimeChangeEvent

info

VC

The dvPort {portKey} runtime information changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.

Since 5.0 Reference

DvsPortUnblockedEvent

info

VC

Port {portKey} was unblocked in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortVendorSpecificStateChangeEvent

info

VC

The dvPort {portKey} vendor specific state changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.

Since 5.0 Reference

DvsReconfiguredEvent

info

VC

The Distributed Virtual Switch {dvs.name} in {datacenter.name} was reconfigured.

Since 4.0 Reference

DvsRenamedEvent

info

VC

The Distributed Virtual Switch {oldName} in {datacenter.name} was renamed to {newName}.

Since 4.0 Reference

DvsRestoreEvent

info

VC

Restore operation was performed on {dvs.name}

Since 5.1 Reference

DvsUpgradeAvailableEvent

info

VC

An upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} is available.

Since 4.0 Reference

DvsUpgradedEvent

info

VC

Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} was upgraded.

Since 4.0 Reference

DvsUpgradeInProgressEvent

info

VC

An upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} is in progress.

Since 4.0 Reference

DvsUpgradeRejectedEvent

info

VC

Cannot complete an upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name}

Since 4.0 Reference

EnteredMaintenanceModeEvent

info

VC

Host {host.name} in {datacenter.name} has entered maintenance mode

Since 2.0 Reference

EnteredStandbyModeEvent

info

VC

The host {host.name} is in standby mode

Since 2.5 Reference

EnteringMaintenanceModeEvent

info

VC

Host {host.name} in {datacenter.name} has started to enter maintenance mode

Since 2.0 Reference

EnteringStandbyModeEvent

info

VC

The host {host.name} is entering standby mode

Since 2.5 Reference

ErrorUpgradeEvent

error

VC

{message}

Since 2.0 Reference
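The esx.audit.*, esx.clear.*, and esx.problem.* entries that follow are EventEx events: the numbered placeholders {1}, {2}, ... in the catalog text are filled from each event's argument list, and the resolved string arrives in the event's fullFormattedMessage property. As a hedged sketch of watching these live (same pyvmomi and connection assumptions as the earlier example; si is the connected service instance):

import time
from pyVmomi import vim

def tail_events(si, page_size=100):
    # An empty filter spec collects everything; narrow it with eventTypeId,
    # entity, or category as needed.
    collector = si.content.eventManager.CreateCollectorForEvents(
        vim.event.EventFilterSpec())
    collector.SetCollectorPageSize(page_size)
    collector.ResetCollector()  # position the collector at the newest events
    try:
        while True:
            for event in collector.ReadNextEvents(page_size) or []:
                # EventEx carries the dotted ID; classic events have only a class name.
                type_id = getattr(event, "eventTypeId", None) or type(event).__name__
                print(event.createdTime, type_id, event.fullFormattedMessage)
            time.sleep(5)
    finally:
        collector.DestroyCollector()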

esx.audit.dcui.defaults.factoryrestore

warning

VC

esx.audit.dcui.defaults.factoryrestore| The host has been restored to default factory settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.disabled

info

VC

esx.audit.dcui.disabled| The DCUI has been disabled.

Since 5.0 Reference

esx.audit.dcui.enabled

info

VC

esx.audit.dcui.enabled| The DCUI has been enabled.

Since 5.0 Reference

esx.audit.dcui.host.reboot

warning

VC

esx.audit.dcui.host.reboot| The host is being rebooted through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.host.shutdown

warning

VC

esx.audit.dcui.host.shutdown| The host is being shut down through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.hostagents.restart

info

VC

esx.audit.dcui.hostagents.restart| The management agents on the host are being restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.login.failed

error

VC

esx.audit.dcui.login.failed| Authentication of user {1} has failed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.login.passwd.changed

info

VC

esx.audit.dcui.login.passwd.changed| Login password for user {1} has been changed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.network.factoryrestore

warning

VC

esx.audit.dcui.network.factoryrestore| The host has been restored to factory network settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.network.restart

info

VC

esx.audit.dcui.network.restart| A management interface {1} has been restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.esxcli.host.poweroff

warning

ESXHost

esx.audit.esxcli.host.poweroff| The host is being powered off through esxcli. Reason for powering off: {1}. Please consult vSphere Documentation Center or follow the Ask VMware link for more information.

Since 5.1 Reference

esx.audit.esxcli.host.restart

info

ESXHost

esx.audit.esxcli.host.restart| event.esx.audit.esxcli.host.restart.fullFormat

Since 5.1 Reference

esx.audit.esximage.hostacceptance.changed

info

VC

esx.audit.esximage.hostacceptance.changed| Host acceptance level changed from {1} to {2}

Since 5.0 Reference

esx.audit.esximage.install.novalidation

warning

VC

esx.audit.esximage.install.novalidation| Attempting to install an image profile with validation disabled. This may result in an image with unsatisfied dependencies, file or package conflicts, and potential security violations.

Since 5.0 Reference

esx.audit.esximage.install.securityalert

warning

VC

esx.audit.esximage.install.securityalert| SECURITY ALERT: Installing image profile '{1}' with {2}.

Since 5.0 Reference

esx.audit.esximage.profile.install.successful

info

VC

esx.audit.esximage.profile.install.successful| Successfully installed image profile '{1}'. Installed VIBs {2}, removed VIBs {3}

Since 5.0 Reference

esx.audit.esximage.profile.update.successful

info

VC

esx.audit.esximage.profile.update.successful| Successfully updated host to image profile '{1}'. Installed VIBs {2}, removed VIBs {3}

Since 5.0 Reference

esx.audit.esximage.vib.install.successful

info

VC

esx.audit.esximage.vib.install.successful| Successfully installed VIBs {1}, removed VIBs {2}

Since 5.0 Reference

esx.audit.esximage.vib.remove.successful

info

VC

esx.audit.esximage.vib.remove.successful| Successfully removed VIBs {1}

Since 5.0 Reference

esx.audit.host.boot

info

VC

esx.audit.host.boot| Host has booted.

Since 5.0 Reference

esx.audit.host.maxRegisteredVMsExceeded

warning

ESXHost

esx.audit.host.maxRegisteredVMsExceeded| The number of virtual machines registered on host {host.name} in cluster {computeResource.name} in {datacenter.name} exceeded limit: {current} registered, {limit} is the maximum supported.

Since 5.1 Reference

esx.audit.host.stop.reboot

info

VC

esx.audit.host.stop.reboot| Host is rebooting.

Since 5.0 Reference

esx.audit.host.stop.shutdown

info

VC

esx.audit.host.stop.shutdown| Host is shutting down.

Since 5.0 Reference

esx.audit.lockdownmode.disabled

info

VC

esx.audit.lockdownmode.disabled| Administrator access to the host has been enabled.

Since 5.0 Reference

esx.audit.lockdownmode.enabled

info

VC

esx.audit.lockdownmode.enabled| Administrator access to the host has been disabled.

Since 5.0 Reference

esx.audit.maintenancemode.canceled

info

VC

esx.audit.maintenancemode.canceled| The host has canceled entering maintenance mode.

Since 5.0 Reference

esx.audit.maintenancemode.entered

info

VC

esx.audit.maintenancemode.entered| The host has entered maintenance mode.

Since 5.0 Reference

esx.audit.maintenancemode.entering

info

VC

esx.audit.maintenancemode.entering| The host has begun entering maintenance mode.

Since 5.0 Reference

esx.audit.maintenancemode.exited

info

VC

esx.audit.maintenancemode.exited| The host has exited maintenance mode.

Since 5.0 Reference

esx.audit.net.firewall.config.changed

info

VC

esx.audit.net.firewall.config.changed| Firewall configuration has changed. Operation '{1}' for rule set {2} succeeded.

Since 5.0 Reference

esx.audit.net.firewall.disabled

warning

VC

esx.audit.net.firewall.disabled| Firewall has been disabled.

Since 5.0 Reference

esx.audit.net.firewall.enabled

info

VC

esx.audit.net.firewall.enabled| Firewall has been enabled for port {1}.

Since 5.0 Reference

esx.audit.net.firewall.port.hooked

info

VC

esx.audit.net.firewall.port.hooked| Port {1} is now protected by Firewall.

Since 5.0 Reference

esx.audit.net.firewall.port.removed

warning

VC

esx.audit.net.firewall.port.removed| Port {1} is no longer protected by Firewall.

Since 5.0 Reference

esx.audit.net.lacp.disable

info

VC

esx.audit.net.lacp.disable| LACP for VDS {1} is disabled.

Since 5.1 Reference

esx.audit.net.lacp.enable

info

VC

esx.audit.net.lacp.enable| LACP for VDS {1} is enabled.

Since 5.1 Reference

esx.audit.net.lacp.uplink.connected

info

VC

esx.audit.net.lacp.uplink.connected| Lacp info: uplink {1} on VDS {2} got connected.

Since 5.1 Reference

esx.audit.net.vdl2.ip.change

warning

ESXHostNetwork

esx.audit.net.vdl2.ip.change| VDL2 IP changed on vmknic {1}, port {2}, DVS {3}, VLAN {4}.

Since 5.0 Reference

esx.audit.net.vdl2.mappingtable.full

warning

ESXHostNetwork

esx.audit.net.vdl2.mappingtable.full| Mapping table entries of VDL2 network {1} on DVS {2} are exhausted. This network might suffer low performance.

Since 5.0 Reference

esx.audit.net.vdl2.route.change

warning

ESXHostNetwork

esx.audit.net.vdl2.route.change| The default route changed for the VDL2 IP interface on vmknic {1}, DVS {2}, VLAN {3}.

Since 5.0 Reference

esx.audit.shell.disabled

info

VC

esx.audit.shell.disabled| The ESX command line shell has been disabled.

Since 5.0 Reference

esx.audit.shell.enabled

info

VC

esx.audit.shell.enabled| The ESX command line shell has been enabled.

Since 5.0 Reference

esx.audit.ssh.disabled

info

VC

esx.audit.ssh.disabled| SSH access has been disabled.

Since 5.0 Reference

esx.audit.ssh.enabled

info

VC

esx.audit.ssh.enabled| SSH access has been enabled.

Since 5.0 Reference

esx.audit.usb.config.changed

info

VC

esx.audit.usb.config.changed| USB configuration has changed on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

esx.audit.uw.secpolicy.alldomains.level.changed

warning

VC

esx.audit.uw.secpolicy.alldomains.level.changed| The enforcement level for all security domains has been changed to {1}. The enforcement level must always be set to enforcing.

Since 5.0 Reference

esx.audit.uw.secpolicy.domain.level.changed

warning

VC

esx.audit.uw.secpolicy.domain.level.changed| The enforcement level for security domain {1} has been changed to {2}. The enforcement level must always be set to enforcing.

Since 5.0 Reference

esx.audit.vmfs.lvm.device.discovered

info

VC

esx.audit.vmfs.lvm.device.discovered| One or more LVM devices have been discovered on this host.

Since 5.0 Reference

esx.audit.vmfs.volume.mounted

info

VC

esx.audit.vmfs.volume.mounted| File system {1} on volume {2} has been mounted in {3} mode on this host.

Since 5.0 Reference

esx.audit.vmfs.volume.umounted

info

VC

esx.audit.vmfs.volume.umounted| The volume {1} has been safely unmounted. The datastore is no longer accessible on this host.

Since 5.0 Reference

esx.audit.vsan.clustering.enabled

info

VC

esx.audit.vsan.clustering.enabled| VSAN clustering and directory services have been enabled.

Since 5.5 Reference

esx.clear.coredump.configured

info

VC

esx.clear.coredump.configured| A vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved.

Since 5.1 Reference

esx.clear.net.connectivity.restored

info

ESXHostNetwork

esx.clear.net.connectivity.restored| Network connectivity restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up.

Since 4.1 Reference

esx.clear.net.dvport.connectivity.restored

info

ESXHostNetwork

esx.clear.net.dvport.connectivity.restored| Network connectivity restored on DVPorts: {1}. Physical NIC {2} is up.

Since 4.1 Reference

esx.clear.net.dvport.redundancy.restored

info

ESXHostNetwork

esx.clear.net.dvport.redundancy.restored| Uplink redundancy restored on DVPorts: {1}. Physical NIC {2} is up.

Since 4.1 Reference

esx.clear.net.lacp.lag.transition.up

info

VC

esx.clear.net.lacp.lag.transition.up| LACP info: LAG {1} on VDS {2} is up.

Since 5.5 Reference

esx.clear.net.lacp.uplink.transition.up

info

ESXHostNetwork

esx.clear.net.lacp.uplink.transition.up| Lacp info: uplink {1} on VDS {2} is moved into link aggregation group.

Since 5.1 Reference

esx.clear.net.lacp.uplink.unblocked

info

ESXHostNetwork

esx.clear.net.lacp.uplink.unblocked| Lacp error: uplink {1} on VDS {2} is unblocked.

Since 5.1 Reference

esx.clear.net.redundancy.restored

info

ESXHostNetwork

esx.clear.net.redundancy.restored| Uplink redundancy restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up.

Since 4.1 Reference

esx.clear.net.vmnic.linkstate.up

info

ESXHostNetwork

esx.clear.net.vmnic.linkstate.up| Physical NIC {1} linkstate is up.

Since 4.1 Reference

esx.clear.scsi.device.io.latency.improved

info

ESXHostStorage

esx.clear.scsi.device.io.latency.improved| Device {1} performance has improved. I/O latency reduced from {2} microseconds to {3} microseconds.

Since 5.0 Reference

esx.clear.scsi.device.state.on

info

ESXHostStorage

esx.clear.scsi.device.state.on| Device {1} has been turned on administratively.

Since 5.0 Reference

esx.clear.scsi.device.state.permanentloss.deviceonline

info

ESXHostStorage

esx.clear.scsi.device.state.permanentloss.deviceonline| Device {1}, which was permanently inaccessible, is now online. No data consistency guarantees.

Since 5.0 Reference

esx.clear.storage.apd.exit

info

ESXHostStorage

esx.clear.storage.apd.exit| Device or filesystem with identifier [{1}] has exited the All Paths Down state.

Since 5.1 Reference

esx.clear.storage.connectivity.restored

info

ESXHostStorage

esx.clear.storage.connectivity.restored| Connectivity to storage device {1} (Datastores: {2}) restored. Path {3} is active again.

Since 4.1 Reference

esx.clear.storage.redundancy.restored

info

ESXHostStorage

esx.clear.storage.redundancy.restored| Path redundancy to storage device {1} (Datastores: {2}) restored. Path {3} is active again.

Since 4.1 Reference

esx.clear.vsan.clustering.enabled

info

VC

esx.clear.vsan.clustering.enabled| VSAN clustering and directory services have now been enabled.

Since 5.5 Reference

esx.clear.vsan.network.available

info

VC

esx.clear.vsan.network.available| event.esx.clear.vsan.network.available.fullFormat

Since 5.5 Reference

esx.clear.vsan.vmknic.ready

info

VC

esx.clear.vsan.vmknic.ready| event.esx.clear.vsan.vmknic.ready.fullFormat

Since 5.5 Reference

esx.problem.3rdParty.error

error

VC

esx.problem.3rdParty.error| A 3rd party component, {1}, running on ESXi has reported an error. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.

Since 5.0 Reference

esx.problem.3rdParty.info

info

VC

esx.problem.3rdParty.info| event.esx.problem.3rdParty.info.fullFormat

Since 5.0 Reference

esx.problem.3rdParty.warning

warning

VC

esx.problem.3rdParty.warning| A 3rd party component, {1}, running on ESXi has reported a warning related to a problem. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.

Since 5.0 Reference

esx.problem.apei.bert.memory.error.corrected

error

ESXHostHardware

esx.problem.apei.bert.memory.error.corrected| A corrected memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9}, Error type: {10}

Since 4.1 Reference

esx.problem.apei.bert.memory.error.fatal

error

ESXHostHardware

esx.problem.apei.bert.memory.error.fatal| A fatal memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9}, Error type: {10}

Since 4.1 Reference

esx.problem.apei.bert.memory.error.recoverable

error

ESXHostHardware

esx.problem.apei.bert.memory.error.recoverable| A recoverable memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9}, Error type: {10}

Since 4.1 Reference

esx.problem.apei.bert.pcie.error.corrected

error

ESXHostHardware

esx.problem.apei.bert.pcie.error.corrected| A corrected PCIe error occurred in the last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.

Since 4.1 Reference

esx.problem.apei.bert.pcie.error.fatal

error

ESXHostHardware

esx.problem.apei.bert.pcie.error.fatal| The platform encountered a fatal PCIe error in the last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.

Since 4.1 Reference

esx.problem.apei.bert.pcie.error.recoverable

error

ESXHostHardware

esx.problem.apei.bert.pcie.error.recoverable| A recoverable PCIe error occurred in the last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.

Since 4.1 Reference

esx.problem.application.core.dumped

warning

ESXHost

esx.problem.application.core.dumped| An application ({1}) running on the ESXi host has crashed ({2} time(s) so far). A core file might have been created at {3}.

Since 5.0 Reference

esx.problem.coredump.unconfigured

warning

ESXHost

esx.problem.coredump.unconfigured| No vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved.

Since 5.0 Reference

esx.problem.cpu.amd.mce.dram.disabled

error

ESXHostHardware

esx.problem.cpu.amd.mce.dram.disabled| DRAM ECC not enabled. Please enable it in BIOS.

Since 5.0 Reference

esx.problem.cpu.intel.ioapic.listing.error

error

ESXHostHardware

esx.problem.cpu.intel.ioapic.listing.error| Not all IO-APICs are listed in the DMAR. Not enabling interrupt remapping on this platform.

Since 5.0 Reference

esx.problem.cpu.mce.invalid

error

ESXHostHardware

esx.problem.cpu.mce.invalid| MCE monitoring will be disabled as an unsupported CPU was detected. Please consult the ESX HCL for information on supported hardware.

Since 5.0 Reference

esx.problem.cpu.smp.ht.invalid

error

ESXHostHardware

esx.problem.cpu.smp.ht.invalid| Disabling HyperThreading due to invalid configuration: Number of threads: {1}, Number of PCPUs: {2}.

Since 5.0 Reference

esx.problem.cpu.smp.ht.numpcpus.max

error

ESXHostHardware

esx.problem.cpu.smp.ht.numpcpus.max| Found {1} PCPUs, but only using {2} of them due to specified limit.

Since 5.0 Reference

esx.problem.cpu.smp.ht.partner.missing

warning

ESXHostHardware

esx.problem.cpu.smp.ht.partner.missing| Disabling HyperThreading due to invalid configuration: HT partner {1} is missing from PCPU {2}.

Since 5.0 Reference

esx.problem.dhclient.lease.none

error

ESXHostNetwork

esx.problem.dhclient.lease.none| Unable to obtain a DHCP lease on interface {1}.

Since 5.0 Reference

esx.problem.dhclient.lease.offered.error

warning

ESXHostNetwork

esx.problem.dhclient.lease.offered.error| event.esx.problem.dhclient.lease.offered.error.fullFormat

Since 5.0 Reference

esx.problem.dhclient.lease.persistent.none

warning

ESXHostNetwork

esx.problem.dhclient.lease.persistent.none| No working DHCP leases in persistent database.

Since 5.0 Reference

esx.problem.esximage.install.error

warning

VC

esx.problem.esximage.install.error| Could not install image profile: {1}

Since 5.0 Reference

esx.problem.esximage.install.invalidhardware

warning

VC

esx.problem.esximage.install.invalidhardware| Host doesn't meet image profile '{1}' hardware requirements: {2}

Since 5.0 Reference

esx.problem.esximage.install.stage.error

warning

VC

esx.problem.esximage.install.stage.error| Could not stage image profile '{1}': {2}

Since 5.0 Reference

esx.problem.hardware.acpi.interrupt.routing.device.invalid

warning

ESXHostHardware

esx.problem.hardware.acpi.interrupt.routing.device.invalid| Skipping interrupt routing entry with bad device number: {1}. This is a BIOS bug.

Since 5.0 Reference

esx.problem.hardware.acpi.interrupt.routing.pin.invalid

warning

ESXHostHardware

esx.problem.hardware.acpi.interrupt.routing.pin.invalid| Skipping interrupt routing entry with bad device pin: {1}. This is a BIOS bug.

Since 5.0 Reference

esx.problem.hardware.ioapic.missing

warning

ESXHostHardware

esx.problem.hardware.ioapic.missing| IOAPIC Num {1} is missing. Please check BIOS settings to enable this IOAPIC.

Since 5.0 Reference

esx.problem.host.coredump

warning

ESXHost

esx.problem.host.coredump| An unread host kernel core dump has been found.

Since 5.0 Reference

esx.problem.hostd.core.dumped

warning

ESXHost

esx.problem.hostd.core.dumped| {1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped.

Since 5.0 Reference

esx.problem.iorm.badversion

warning

ESXHostStorage

esx.problem.iorm.badversion| Host {1} cannot participate in Storage I/O Control (SIOC) on datastore {2} because the version number {3} of the SIOC agent on this host is incompatible with the version number {4} of its counterparts on other hosts connected to this datastore.

Since 5.0 Reference

esx.problem.iorm.nonviworkload

warning

ESXHostStorage

esx.problem.iorm.nonviworkload| External I/O activity was detected on datastore {1}; this is an unsupported configuration. Consult the Resource Management Guide or follow the Ask VMware link for more information.

Since 4.1 Reference

esx.problem.migrate.vmotion.default.heap.create.failed

error

Cluster

esx.problem.migrate.vmotion.default.heap.create.failed| Failed to create default migration heap. This might be the result of severe host memory pressure or virtual address space exhaustion. Migration might still be possible, but will be unreliable in cases of extreme host memory pressure.

Since 5.0 Reference

esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown

warning

Cluster

esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown| The ESXi host's vMotion network server encountered an error while monitoring incoming network connections. Shutting down listener socket. vMotion might not be possible with this host until vMotion is manually re-enabled. Failure status: {1}

Since 5.0 Reference

esx.problem.net.connectivity.lost

error

ESXHostNetwork

esx.problem.net.connectivity.lost| Lost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups: {3}.

Since 4.1 Reference

esx.problem.net.dvport.connectivity.lost

error

ESXHostNetwork

esx.problem.net.dvport.connectivity.lost| Lost network connectivity on DVPorts: {1}. Physical NIC {2} is down.

Since 4.1 Reference

esx.problem.net.dvport.redundancy.degraded

warning

ESXHostNetwork

esx.problem.net.dvport.redundancy.degraded| Uplink redundancy degraded on DVPorts: {1}. Physical NIC {2} is down.

Since 4.1 Reference

esx.problem.net.dvport.redundancy.lost

warning

ESXHostNetwork

esx.problem.net.dvport.redundancy.lost| Lost uplink redundancy on DVPorts: {1}. Physical NIC {2} is down.

Since 4.1 Reference

esx.problem.net.e1000.tso6.notsupported

error

ESXHostNetwork

esx.problem.net.e1000.tso6.notsupported| Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter.

Since 4.1 Reference

esx.problem.net.fence.port.badfenceid

warning

ESXHostNetwork

esx.problem.net.fence.port.badfenceid| VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: invalid fenceId.

Since 5.0 Reference

esx.problem.net.fence.resource.limited

warning

ESXHostNetwork

esx.problem.net.fence.resource.limited| VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: the maximum number of fence networks or ports has been reached.

Since 5.0 Reference

esx.problem.net.fence.switch.unavailable

warning

ESXHostNetwork

esx.problem.net.fence.switch.unavailable| VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: the dvSwitch fence property is not set.

Since 5.0 Reference

esx.problem.net.firewall.config.failed

error

ESXHostNetwork

esx.problem.net.firewall.config.failed| Firewall configuration operation '{1}' failed. The changes were not applied to rule set {2}.

Since 5.0 Reference

esx.problem.net.firewall.port.hookfailed

error

ESXHostNetwork

esx.problem.net.firewall.port.hookfailed| Adding port {1} to Firewall failed.

Since 5.0 Reference

esx.problem.net.gateway.set.failed

error

ESXHostNetwork

esx.problem.net.gateway.set.failed| Cannot connect to the specified gateway {1}. Failed to set it.

Since 5.0 Reference

esx.problem.net.heap.belowthreshold

warning

ESXHostNetwork

esx.problem.net.heap.belowthreshold| {1} heap free size dropped below {2} percent.

Since 5.0 Reference

esx.problem.net.lacp.lag.transition.down

warning

VC

esx.problem.net.lacp.lag.transition.down| LACP warning: LAG {1} on VDS {2} is down.

Since 5.5 Reference

esx.problem.net.lacp.peer.noresponse

error

ESXHostNetwork

esx.problem.net.lacp.peer.noresponse| Lacp error: No peer response on uplink {1} for VDS {2}.

Since 5.1 Reference

esx.problem.net.lacp.policy.incompatible

error

ESXHostNetwork

esx.problem.net.lacp.policy.incompatible| Lacp error: The current teaming policy on VDS {1} is incompatible; only IP hash is supported.

Since 5.1 Reference

esx.problem.net.lacp.policy.linkstatus

error

ESXHostNetwork

esx.problem.net.lacp.policy.linkstatus| Lacp error: The current teaming policy on VDS {1} is incompatible; only link-status failover detection is supported.

Since 5.1 Reference

esx.problem.net.lacp.uplink.blocked

warning

ESXHostNetwork

esx.problem.net.lacp.uplink.blocked| Lacp warning: uplink {1} on VDS {2} is blocked.

Since 5.1 Reference

esx.problem.net.lacp.uplink.disconnected

warning

ESXHostNetwork

esx.problem.net.lacp.uplink.disconnected| Lacp warning: uplink {1} on VDS {2} got disconnected.

Since 5.1 Reference

esx.problem.net.lacp.uplink.fail.duplex

error

ESXHostNetwork

esx.problem.net.lacp.uplink.fail.duplex| Lacp error: Duplex mode across all uplink ports must be full; VDS {1} uplink {2} has a different mode.

Since 5.1 Reference

esx.problem.net.lacp.uplink.fail.speed

error

ESXHostNetwork

esx.problem.net.lacp.uplink.fail.speed| Lacp error: Speed across all uplink ports must be the same; VDS {1} uplink {2} has a different speed.

Since 5.1 Reference

esx.problem.net.lacp.uplink.inactive

error

ESXHostNetwork

esx.problem.net.lacp.uplink.inactive| Lacp error: All uplinks on VDS {1} must be active.

Since 5.1 Reference

esx.problem.net.lacp.uplink.transition.down

warning

ESXHostNetwork

esx.problem.net.lacp.uplink.transition.down| Lacp warning: uplink {1} on VDS {2} is moved out of link aggregation group.

Since 5.1 Reference

esx.problem.net.migrate.bindtovmk

warning

ESXHostNetwork

esx.problem.net.migrate.bindtovmk| The ESX advanced configuration option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Update the configuration option with a valid vmknic. Alternatively, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank.

Since 4.1 Reference

esx.problem.net.migrate.unsupported.latency

warning

ESXHostNetwork

esx.problem.net.migrate.unsupported.latency| ESXi has detected {1}ms round-trip vMotion network latency between host {2} and {3}. High latency vMotion networks are supported only if both ESXi hosts have been configured for vMotion latency tolerance.

Since 5.0 Reference

esx.problem.net.portset.port.full

warning

ESXHostNetwork

esx.problem.net.portset.port.full| Portset {1} has reached the maximum number of ports ({2}). Cannot apply for any more free ports.

Since 5.0 Reference

esx.problem.net.portset.port.vlan.invalidid

warning

ESXHostNetwork

esx.problem.net.portset.port.vlan.invalidid| {1} VLAN ID {2} is invalid. VLAN ID must be between 0 and 4095.

Since 5.0 Reference

esx.problem.net.proxyswitch.port.unavailable

warning

ESXHostNetwork

esx.problem.net.proxyswitch.port.unavailable| Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch.

Since 4.1 Reference

esx.problem.net.redundancy.degraded

warning

ESXHostNetwork

esx.problem.net.redundancy.degraded| Uplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. Affected portgroups: {3}.

Since 4.1 Reference

esx.problem.net.redundancy.lost

warning

ESXHostNetwork

esx.problem.net.redundancy.lost| Lost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups: {3}.

Since 4.1 Reference

esx.problem.net.uplink.mtu.failed

warning

ESXHostNetwork

esx.problem.net.uplink.mtu.failed| VMkernel failed to set the MTU value {1} on the uplink {2}.

Since 4.1 Reference

esx.problem.net.vdl2.instance.initialization.fail

error

ESXHostNetwork

esx.problem.net.vdl2.instance.initialization.fail| Initialization of the VDL2 instance on DVS {1} failed.

Since 5.0 Reference

esx.problem.net.vdl2.instance.notexist

error

ESXHostNetwork

esx.problem.net.vdl2.instance.notexist| A VDL2 overlay instance was not created on DVS {1} before the VDL2 port or VDL2 IP interface was initialized.

Since 5.0 Reference

esx.problem.net.vdl2.mcastgroup.fail

error

ESXHostNetwork

esx.problem.net.vdl2.mcastgroup.fail| VDL2 IP interface on vmknic: {1}, DVS: {2}, VLAN: {3} failed to join multicast group: {4}.

Since 5.0 Reference

esx.problem.net.vdl2.network.initialization.fail

error

ESXHostNetwork

esx.problem.net.vdl2.network.initialization.fail| Initialization of VDL2 network {1} on DVS {2} failed.

Since 5.0 Reference

esx.problem.net.vdl2.port.initialization.fail

error

ESXHostNetwork

esx.problem.net.vdl2.port.initialization.fail| Initialization of VDL2 port {1} on VDL2 network {2}, DVS {3} failed.

Since 5.0 Reference

esx.problem.net.vdl2.vmknic.fail

error

ESXHostNetwork

esx.problem.net.vdl2.vmknic.fail| VDL2 IP interface failed on vmknic {1}, port {2}, DVS {3}, VLAN {4}.

Since 5.0 Reference

esx.problem.net.vdl2.vmknic.notexist

error

ESXHostNetwork

esx.problem.net.vdl2.vmknic.notexist| VDL2 IP interface does not exist on DVS {1}, VLAN {2}.

Since 5.0 Reference

esx.problem.net.vmknic.ip.duplicate

warning

ESXHostNetwork

esx.problem.net.vmknic.ip.duplicate| A duplicate IP address was detected for {1} on the interface {2}. The current owner is {3}.

Since 4.1 Reference

esx.problem.net.vmnic.linkstate.down

warning

ESXHostNetwork

esx.problem.net.vmnic.linkstate.down| Physical NIC {1} linkstate is down.

Since 4.1 Reference

esx.problem.net.vmnic.linkstate.flapping

warning

ESXHostNetwork

esx.problem.net.vmnic.linkstate.flapping| Taking down physical NIC {1} because the link is unstable.

Since 5.0 Reference

esx.problem.net.vmnic.watchdog.reset

warning

ESXHostNetwork

esx.problem.net.vmnic.watchdog.reset| Uplink {1} has recovered from a transient failure due to watchdog timeout

Since 4.1 Reference

esx.problem.ntpd.clock.correction.error

warning

ESXHost

esx.problem.ntpd.clock.correction.error| NTP daemon stopped. Time correction {1} > {2} seconds. Manually set the time and restart ntpd.

Since 5.0 Reference

esx.problem.pageretire.platform.retire.request

info

VC

esx.problem.pageretire.platform.retire.request| Memory page retirement requested by platform firmware. FRU ID: {1}. Refer to System Hardware Log: {2}

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.host.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.host.exceeded| Number of host physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.kernel.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.kernel.exceeded| Number of kernel physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.userclient.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.userclient.exceeded| Number of physical memory pages belonging to (user) memory client {1} that have been selected for retirement ({2}) exceeds threshold ({3}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.userprivate.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.userprivate.exceeded| Number of private user physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.usershared.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.usershared.exceeded| Number of shared user physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.vmmclient.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.vmmclient.exceeded| Number of physical memory pages belonging to (vmm) memory client {1} that have been selected for retirement ({2}) exceeds threshold ({3}).

Since 5.0 Reference
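Most esx.problem.* rows in this section are scoped to a single host, so a query can also be narrowed by entity and by the severity column. A sketch under the same pyvmomi assumptions as the earlier examples (the host name below is a placeholder):

from pyVmomi import vim

def host_error_events(si, hostname="esx01.example.com"):
    # Resolve the HostSystem by DNS name via the search index.
    host = si.content.searchIndex.FindByDnsName(dnsName=hostname, vmSearch=False)
    if host is None:
        raise LookupError("host not found: " + hostname)
    by_entity = vim.event.EventFilterSpec.ByEntity(
        entity=host,
        recursion=vim.event.EventFilterSpec.RecursionOption.self)
    # category matches the severity column in this catalog: "info",
    # "warning", "error", or "user".
    spec = vim.event.EventFilterSpec(entity=by_entity, category=["error"])
    return si.content.eventManager.QueryEvents(spec)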

esx.problem.scsi.apd.event.descriptor.alloc.failed

error

ESXHostStorage

esx.problem.scsi.apd.event.descriptor.alloc.failed| No memory to allocate APD (All Paths Down) event subsystem.

Since 5.0 Reference

esx.problem.scsi.device.close.failed

warning

ESXHostStorage

esx.problem.scsi.device.close.failed| Failed to close the device {1} properly, plugin {2}.

Since 5.0 Reference

esx.problem.scsi.device.detach.failed

warning

ESXHostStorage

esx.problem.scsi.device.detach.failed| Detach failed for device: {1}. Exceeded the number of devices that can be detached; please clean up stale detach entries.

Since 5.0 Reference

esx.problem.scsi.device.filter.attach.failed

warning

ESXHostStorage