| Event Type ID | Severity | Category | Description | Since |
| --- | --- | --- | --- | --- |
| AccountCreatedEvent | info | VC | An account was created on host {host.name} | 2.0 |
| AccountRemovedEvent | info | VC | Account {account} was removed on host {host.name} | 2.0 |
| AccountUpdatedEvent | info | VC | An account was updated on host {host.name} | 2.0 |
| ad.event.ImportCertEvent | info | VC | Import certificate succeeded. | 5.0 |
| ad.event.ImportCertFailedEvent | error | VC | Import certificate failed. | 5.0 |
| ad.event.JoinDomainEvent | info | VC | Join domain succeeded. | 5.0 |
| ad.event.JoinDomainFailedEvent | error | VC | Join domain failed. | 5.0 |
| ad.event.LeaveDomainEvent | info | VC | Leave domain succeeded. | 5.0 |
| ad.event.LeaveDomainFailedEvent | error | VC | Leave domain failed. | 5.0 |
| AdminPasswordNotChangedEvent | info | VC | The default password for the root user on the host {host.name} has not been changed | 2.5 |
| AlarmAcknowledgedEvent | info | VC | Acknowledged alarm '{alarm.name}' on {entity.name} | 5.0 |
| AlarmActionTriggeredEvent | info | VC | Alarm '{alarm.name}' on {entity.name} triggered an action | 2.0 |
| AlarmClearedEvent | info | VC | Manually cleared alarm '{alarm.name}' on {entity.name} from {from.@enum.ManagedEntity.Status} | 5.0 |
| AlarmCreatedEvent | info | VC | Created alarm '{alarm.name}' on {entity.name} | 2.0 |
| AlarmEmailCompletedEvent | info | VC | Alarm '{alarm.name}' on {entity.name} sent email to {to} | 2.0 |
| AlarmEmailFailedEvent | error | VC | Alarm '{alarm.name}' on {entity.name} cannot send email to {to} | 2.0 |
| AlarmReconfiguredEvent | info | VC | Reconfigured alarm '{alarm.name}' on {entity.name} | 2.0 |
| AlarmRemovedEvent | info | VC | Removed alarm '{alarm.name}' on {entity.name} | 2.0 |
| AlarmScriptCompleteEvent | info | VC | Alarm '{alarm.name}' on {entity.name} ran script {script} | 2.0 |
| AlarmScriptFailedEvent | error | VC | Alarm '{alarm.name}' on {entity.name} did not complete script: {reason.msg} | 2.0 |
| AlarmSnmpCompletedEvent | info | VC | Alarm '{alarm.name}' on entity {entity.name} sent SNMP trap | 2.0 |
| AlarmSnmpFailedEvent | error | VC | Alarm '{alarm.name}' on entity {entity.name} did not send SNMP trap: {reason.msg} | 2.0 |
| AlarmStatusChangedEvent | info | VC | Alarm '{alarm.name}' on {entity.name} changed from {from.@enum.ManagedEntity.Status} to {to.@enum.ManagedEntity.Status} | 2.0 |
| AllVirtualMachinesLicensedEvent | info | VC | All running virtual machines are licensed | 2.5 |
| AlreadyAuthenticatedSessionEvent | info | VC | User cannot logon since the user is already logged on | 2.0 |
| BadUsernameSessionEvent | warning | VC | Cannot login {userName}@{ipAddress} | 2.0 |
| CanceledHostOperationEvent | info | VC | The operation performed on host {host.name} in {datacenter.name} was canceled | 2.0 |
| ChangeOwnerOfFileEvent | info | VC | Changed ownership of file name (unknown) from {oldOwner} to {newOwner} on {host.name} in {datacenter.name}. | 5.1 |
| ChangeOwnerOfFileFailedEvent | error | VC | Cannot change ownership of file name (unknown) from {owner} to {attemptedOwner} on {host.name} in {datacenter.name}. | 5.1 |
| ClusterComplianceCheckedEvent | info | VC | Checked cluster for compliance | 4.0 |
| ClusterCreatedEvent | info | VC | Created cluster {computeResource.name} in {datacenter.name} | 2.0 |
| ClusterDestroyedEvent | info | VC | Removed cluster {computeResource.name} in datacenter {datacenter.name} | 2.0 |
| ClusterOvercommittedEvent | warning | Cluster | Insufficient capacity in cluster {computeResource.name} to satisfy resource configuration in {datacenter.name} | 2.0 |
| ClusterReconfiguredEvent | info | VC | Reconfigured cluster {computeResource.name} in datacenter {datacenter.name} | 2.0 |
| ClusterStatusChangedEvent | info | VC | Configuration status on cluster {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name} | 2.0 |
| com.vmware.license.AddLicenseEvent | info | VC | License {licenseKey} added to VirtualCenter | 4.0 |
| com.vmware.license.AssignLicenseEvent | info | VC | License {licenseKey} assigned to asset {entityName} | 4.0 |
| com.vmware.license.DLFDownloadFailedEvent | warning | VC | Failed to download license information from the host {hostname} due to {errorReason.@enum.com.vmware.license.DLFDownloadFailedEvent.DLFDownloadFailedReason} | 4.1 |
| com.vmware.license.LicenseAssignFailedEvent | error | VC | License assignment on the host fails. Reasons: {errorMessage.@enum.com.vmware.license.LicenseAssignError}. | 4.0 |
| com.vmware.license.LicenseCapacityExceededEvent | warning | VC | The current license usage ({currentUsage} {costUnitText}) for {edition} exceeds the license capacity ({capacity} {costUnitText}) | 5.0 |
| com.vmware.license.LicenseExpiryEvent | error | VC | Your host license will expire in {remainingDays} days. The host will be disconnected from VC when its license expires. | 4.0 |
| com.vmware.license.LicenseUserThresholdExceededEvent | warning | VC | Current license usage ({currentUsage} {costUnitText}) for {edition} exceeded the user-defined threshold ({threshold} {costUnitText}) | 4.1 |
| com.vmware.license.RemoveLicenseEvent | info | VC | License {licenseKey} removed from VirtualCenter | 4.0 |
| com.vmware.license.UnassignLicenseEvent | info | VC | License unassigned from asset {entityName} | 4.0 |
| com.vmware.vc.cim.CIMGroupHealthStateChanged | info | VC | Health of [data.group] changed from [data.oldState] to [data.newState]. | 4.0 |
| com.vmware.vc.datastore.UpdatedVmFilesEvent | info | VC | Updated VM files on datastore {ds.name} using host {hostName} | 4.1 |
| com.vmware.vc.datastore.UpdateVmFilesFailedEvent | error | VC | Failed to update VM files on datastore {ds.name} using host {hostName} | 4.1 |
| com.vmware.vc.datastore.UpdatingVmFilesEvent | info | VC | Updating VM files on datastore {ds.name} using host {hostName} | 4.1 |
| com.vmware.vc.dvs.LacpConfigInconsistentEvent | info | VC | Single Link Aggregation Control Group is enabled on Uplink Port Groups while enhanced LACP support is enabled. | 5.5 |
| com.vmware.vc.ft.VmAffectedByDasDisabledEvent | warning | VirtualMachine | VMware HA has been disabled in cluster {computeResource.name} of datacenter {datacenter.name}. HA will not restart VM {vm.name} or its Secondary VM after a failure. | 4.1 |
| com.vmware.vc.guestOperations.GuestOperation | info | VC | Guest operation {operationName.@enum.com.vmware.vc.guestOp} performed on Virtual machine {vm.name}. | 5.0 |
| com.vmware.vc.guestOperations.GuestOperationAuthFailure | warning | VirtualMachine | Guest operation authentication failed for operation {operationName.@enum.com.vmware.vc.guestOp} on Virtual machine {vm.name}. | 5.0 |
| com.vmware.vc.HA.AllHostAddrsPingable | info | VC | All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.AllIsoAddrsPingable | info | VC | All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.AnsweredVmLockLostQuestionEvent | warning | VirtualMachine | Lock-lost question on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} was answered by vSphere HA | 5.0 |
| com.vmware.vc.HA.AnsweredVmTerminatePDLEvent | warning | VirtualMachine | vSphere HA answered a question from host {host.name} in cluster {computeResource.name} about terminating virtual machine {vm.name} | 5.1 |
| com.vmware.vc.HA.AutoStartDisabled | info | VC | The automatic Virtual Machine Startup/Shutdown feature has been disabled on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Automatic VM restarts will interfere with vSphere HA when reacting to a host failure. | 5.0 |
| com.vmware.vc.HA.CannotResetVmWithInaccessibleDatastore | warning | Cluster | vSphere HA did not reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM had files on inaccessible datastore(s) | 5.5 |
| com.vmware.vc.HA.ClusterContainsIncompatibleHosts | warning | Cluster | vSphere HA Cluster {computeResource.name} in {datacenter.name} contains ESX/ESXi 3.5 hosts and more recent host versions, which isn't fully supported. | 5.0 |
| com.vmware.vc.HA.ClusterFailoverActionCompletedEvent | info | VC | HA completed a failover action in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.HA.ClusterFailoverActionInitiatedEvent | warning | Cluster | HA initiated a failover action in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasAgentRunningEvent | info | VC | HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is running | 4.1 |
| com.vmware.vc.HA.DasFailoverHostFailedEvent | error | Cluster | HA failover host {host.name} in cluster {computeResource.name} in {datacenter.name} has failed | 4.1 |
| com.vmware.vc.HA.DasFailoverHostIsolatedEvent | warning | Cluster | Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.DasFailoverHostPartitionedEvent | warning | Cluster | Failover Host {host.name} in {computeResource.name} in {datacenter.name} is in a different network partition than the master | 5.0 |
| com.vmware.vc.HA.DasFailoverHostUnreachableEvent | warning | Cluster | The vSphere HA agent on the failover host {host.name} in cluster {computeResource.name} in {datacenter.name} is not reachable from vCenter Server | 5.0 |
| com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent | error | Cluster | All shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent | error | Cluster | All VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasHostFailedEvent | error | Cluster | A possible host failure has been detected by HA on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasHostIsolatedEvent | warning | Cluster | Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.DasHostMonitoringDisabledEvent | warning | Cluster | No virtual machine failover will occur until Host Monitoring is enabled in cluster {computeResource.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.HA.DasTotalClusterFailureEvent | error | Cluster | HA recovered from a total cluster failure in cluster {computeResource.name} in datacenter {datacenter.name} | 4.1 |
| com.vmware.vc.HA.FailedRestartAfterIsolationEvent | error | VirtualMachine | vSphere HA was unable to restart virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} after it was powered off in response to a network isolation event. The virtual machine should be manually powered back on. | 5.0 |
| com.vmware.vc.HA.HeartbeatDatastoreChanged | info | VC | Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.HeartbeatDatastoreNotSufficient | warning | Cluster | The number of heartbeat datastores for host {host.name} in cluster {computeResource.name} in {datacenter.name} is {selectedNum}, which is less than required: {requiredNum} | 5.0 |
| com.vmware.vc.HA.HostAgentErrorEvent | warning | Cluster | vSphere HA Agent for host {host.name} has an error in {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason} | 5.0 |
| com.vmware.vc.HA.HostDasAgentHealthyEvent | info | VC | HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is healthy | 4.1 |
| com.vmware.vc.HA.HostDasErrorEvent | warning | Cluster | vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error: {reason.@enum.HostDasErrorEvent.HostDasErrorReason} | 5.0 |
| com.vmware.vc.HA.HostDoesNotSupportVsan | error | VC | vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in {datacenter.name} because vCloud Distributed Storage is enabled but the host does not support that feature | 5.5 |
| com.vmware.vc.HA.HostHasNoIsolationAddrsDefined | warning | Cluster | Host {host.name} in cluster {computeResource.name} in {datacenter.name} has no isolation addresses defined as required by vSphere HA. | 5.0 |
| com.vmware.vc.HA.HostHasNoMountedDatastores | error | Cluster | vSphere HA cannot be configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because there are no mounted datastores. | 5.1 |
| com.vmware.vc.HA.HostHasNoSslThumbprint | error | Cluster | vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified. | 5.0 |
| com.vmware.vc.HA.HostIncompatibleWithHA | error | Cluster | The product version of host {host.name} in cluster {computeResource.name} in {datacenter.name} is incompatible with HA. | 5.0 |
| com.vmware.vc.HA.HostPartitionedFromMasterEvent | warning | Cluster | Host {host.name} is in a different network partition than the master {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.HostStateChangedEvent | info | VC | The vSphere HA availability state of the host {host.name} has changed to {newState.@enum.com.vmware.vc.HA.DasFdmAvailabilityState} in {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.HostUnconfiguredWithProtectedVms | warning | Cluster | Host {host.name} in cluster {computeResource.name} in {datacenter.name} is disconnected, but contains {protectedVmCount} protected virtual machine(s) | 5.0 |
| com.vmware.vc.HA.HostUnconfigureError | warning | Cluster | There was an error unconfiguring the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}. To solve this problem, connect the host to a vCenter Server of version 5.0 or later. | 5.0 |
| com.vmware.vc.HA.InvalidMaster | warning | Cluster | vSphere HA Agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised. | 5.0 |
| com.vmware.vc.HA.NotAllHostAddrsPingable | warning | Cluster | The vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} cannot reach some of the management network addresses of other hosts, and thus vSphere HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs} | 5.0 |
| com.vmware.vc.HA.StartFTSecondaryFailedEvent | info | VirtualMachine | vSphere HA agent failed to start Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name} in {datacenter.name}. Reason: {fault.msg}. vSphere HA agent will retry until it times out. | 5.0 |
| com.vmware.vc.HA.StartFTSecondarySucceededEvent | info | VC | vSphere HA agent successfully started Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name}. | 5.0 |
| com.vmware.vc.HA.UserHeartbeatDatastoreRemoved | warning | Cluster | Datastore {dsName} is removed from the set of preferred heartbeat datastores selected for cluster {computeResource.name} in {datacenter.name} because the datastore is removed from inventory | 5.0 |
| com.vmware.vc.HA.VcCannotFindMasterEvent | warning | Cluster | vCenter Server is unable to find a master vSphere HA Agent in {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.VcConnectedToMasterEvent | warning | VC | vCenter Server is connected to the master vSphere HA Agent running on host {hostname} in {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.VcDisconnectedFromMasterEvent | warning | VC | vCenter Server is disconnected from the master vSphere HA Agent running on host {hostname} in {computeResource.name} in {datacenter.name} | 5.0 |
| com.vmware.vc.HA.VMIsHADisabledIsolationEvent | info | VC | vSphere HA did not perform an isolation response for {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled | 5.1 |
| com.vmware.vc.HA.VMIsHADisabledRestartEvent | info | VC | vSphere HA did not attempt to restart {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled | 5.1 |
| com.vmware.vc.HA.VmNotProtectedEvent | warning | VirtualMachine | VM {vm.name} in cluster {computeResource.name} in {datacenter.name} failed to become vSphere HA Protected and vSphere HA may not attempt to restart it after a failure. | 5.0 |
| com.vmware.vc.HA.VmProtectedEvent | info | VC | VM {vm.name} in cluster {computeResource.name} in {datacenter.name} is vSphere HA Protected and vSphere HA will attempt to restart it after a failure. | 5.0 |
| com.vmware.vc.ha.VmRestartedByHAEvent | warning | VirtualMachine | Virtual machine {vm.name} was restarted on host {host.name} in cluster {computeResource.name} by vSphere HA | 5.0 |
| com.vmware.vc.HA.VmUnprotectedEvent | warning | VirtualMachine | VM {vm.name} in cluster {computeResource.name} in {datacenter.name} is not vSphere HA Protected. | 5.0 |
| com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFull | info | VC | vSphere HA has unprotected virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} because it ran out of disk space | 5.1 |
| com.vmware.vc.host.AutoStartReconfigureFailedEvent | error | VC | Reconfiguring autostart rules for virtual machines on {host.name} in datacenter {datacenter.name} failed | 5.0 |
| com.vmware.vc.host.clear.vFlashResource.inaccessible | info | VC | Host's vSphere Flash resource is restored to be accessible. | 5.5 |
| com.vmware.vc.host.clear.vFlashResource.reachthreshold | info | VC | Host's vSphere Flash resource usage dropped below {1}%. | 5.5 |
| com.vmware.vc.host.problem.vFlashResource.inaccessible | warning | VC | Host's vSphere Flash resource is inaccessible. | 5.5 |
| com.vmware.vc.host.problem.vFlashResource.reachthreshold | warning | VC | Host's vSphere Flash resource usage is more than {1}%. | 5.5 |
| com.vmware.vc.host.vFlash.defaultModuleChangedEvent | info | VC | Any new vFlash cache configuration request will use {vFlashModule} as default vSphere Flash module. All existing vFlash cache configurations remain unchanged. | 5.5 |
| com.vmware.vc.host.vFlash.modulesLoadedEvent | info | VC | vSphere Flash modules are loaded or reloaded on the host | 5.5 |
| com.vmware.vc.host.vFlash.SsdConfigurationFailedEvent | error | ESXHostStorage | {1} on disk '{2}' failed due to {3} | 5.5 |
| com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEvent | info | VC | vSphere Flash resource capacity is extended | 5.5 |
| com.vmware.vc.host.vFlash.VFlashResourceConfiguredEvent | info | VC | vSphere Flash resource is configured on the host | 5.5 |
| com.vmware.vc.host.vFlash.VFlashResourceRemovedEvent | info | VC | vSphere Flash resource is removed from the host | 5.5 |
| com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent | info | VC | Network passthrough is active on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.npt.VmAdapterExitedPassthroughEvent | info | VC | Network passthrough is inactive on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} | 4.1 |
| com.vmware.vc.ovfconsumers.CloneOvfConsumerStateErrorEvent | warning | VC | Failed to clone state for the entity '{entityName}' on extension {extensionName} | 5.0 |
| com.vmware.vc.ovfconsumers.GetOvfEnvironmentSectionsErrorEvent | warning | VC | Failed to retrieve OVF environment sections for VM '{vm.name}' from extension {extensionName} | 5.0 |
| com.vmware.vc.ovfconsumers.PowerOnAfterCloneErrorEvent | warning | VC | Powering on VM '{vm.name}' after cloning was blocked by an extension. Message: {description} | 5.0 |
| com.vmware.vc.ovfconsumers.RegisterEntityErrorEvent | warning | VC | Failed to register entity '{entityName}' on extension {extensionName} | 5.0 |
| com.vmware.vc.ovfconsumers.UnregisterEntitiesErrorEvent | warning | VC | Failed to unregister entities on extension {extensionName} | 5.0 |
| com.vmware.vc.ovfconsumers.ValidateOstErrorEvent | warning | VC | Failed to validate OVF descriptor on extension {extensionName} | 5.0 |
| com.vmware.vc.profile.AnswerFileExportedEvent | info | VC | Answer file for host {host.name} in datacenter {datacenter.name} has been exported | 5.0 |
| com.vmware.vc.profile.AnswerFileUpdatedEvent | info | VC | Answer file for host {host.name} in datacenter {datacenter.name} has been updated | 5.0 |
| com.vmware.vc.rp.ResourcePoolRenamedEvent | info | VC | Resource pool '{oldName}' has been renamed to '{newName}' | 5.1 |
| com.vmware.vc.sdrs.CanceledDatastoreMaintenanceModeEvent | info | VC | The datastore maintenance mode operation has been canceled | 5.0 |
| com.vmware.vc.sdrs.ConfiguredStorageDrsOnPodEvent | info | VC | Configured storage DRS on datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.ConsistencyGroupViolationEvent | warning | VC | Datastore cluster {objectName} has datastores that belong to different SRM Consistency Groups | 5.1 |
| com.vmware.vc.sdrs.DatastoreEnteredMaintenanceModeEvent | info | VC | Datastore {ds.name} has entered maintenance mode | 5.0 |
| com.vmware.vc.sdrs.DatastoreEnteringMaintenanceModeEvent | info | VC | Datastore {ds.name} is entering maintenance mode | 5.0 |
| com.vmware.vc.sdrs.DatastoreExitedMaintenanceModeEvent | info | VC | Datastore {ds.name} has exited maintenance mode | 5.0 |
| com.vmware.vc.sdrs.DatastoreInMultipleDatacentersEvent | warning | VC | Datastore cluster {objectName} has one or more datastores: {datastore} shared across multiple datacenters | 5.0 |
| com.vmware.vc.sdrs.DatastoreMaintenanceModeErrorsEvent | error | VC | Datastore {ds.name} encountered errors while entering maintenance mode | 5.0 |
| com.vmware.vc.sdrs.StorageDrsDisabledEvent | info | VC | Disabled storage DRS on datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsEnabledEvent | info | VC | Enabled storage DRS on datastore cluster {objectName} with automation level {behavior.@enum.storageDrs.PodConfigInfo.Behavior} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsInvocationFailedEvent | error | VC | Storage DRS invocation failed on datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsNewRecommendationPendingEvent | info | VC | A new storage DRS recommendation has been generated on datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsNotSupportedHostConnectedToPodEvent | warning | VC | Datastore cluster {objectName} is connected to one or more hosts: {host} that do not support storage DRS | 5.0 |
| com.vmware.vc.sdrs.StorageDrsRecommendationApplied | info | VC | All pending recommendations on datastore cluster {objectName} were applied | 5.5 |
| com.vmware.vc.sdrs.StorageDrsStorageMigrationEvent | info | VC | Storage DRS migrated disks of VM {vm.name} to datastore {ds.name} | 5.0 |
| com.vmware.vc.sdrs.StorageDrsStoragePlacementEvent | info | VC | Storage DRS placed disks of VM {vm.name} on datastore {ds.name} | 5.0 |
| com.vmware.vc.sdrs.StoragePodCreatedEvent | info | VC | Created datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sdrs.StoragePodDestroyedEvent | info | VC | Removed datastore cluster {objectName} | 5.0 |
| com.vmware.vc.sioc.NotSupportedHostConnectedToDatastoreEvent | warning | VC | SIOC has detected that a host: {host} connected to a SIOC-enabled datastore: {objectName} is running an older version of ESX that does not support SIOC. This is an unsupported configuration. | 5.0 |
| com.vmware.vc.sms.datastore.ComplianceStatusCompliantEvent | info | VC | Virtual disk {diskKey} on {vmName} connected to datastore {datastore.name} in {datacenter.name} is compliant from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.datastore.ComplianceStatusNonCompliantEvent | error | VirtualMachine | Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.datastore.ComplianceStatusUnknownEvent | warning | VC | Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.LunCapabilityInitEvent | info | VC | Storage provider system default capability event | 5.0 |
| com.vmware.vc.sms.LunCapabilityMetEvent | info | VC | Storage provider system capability requirements met | 5.0 |
| com.vmware.vc.sms.LunCapabilityNotMetEvent | info | VC | Storage provider system capability requirements not met | 5.0 |
| com.vmware.vc.sms.provider.health.event | info | VC | {msgTxt} | 5.0 |
| com.vmware.vc.sms.provider.system.event | info | VC | {msgTxt} | 5.0 |
| com.vmware.vc.sms.ThinProvisionedLunThresholdClearedEvent | info | VC | Storage provider thin provisioning capacity threshold reached | 5.0 |
| com.vmware.vc.sms.ThinProvisionedLunThresholdCrossedEvent | info | VC | Storage provider thin provisioning capacity threshold crossed | 5.0 |
| com.vmware.vc.sms.ThinProvisionedLunThresholdInitEvent | info | VC | Storage provider thin provisioning default capacity event | 5.0 |
| com.vmware.vc.sms.vm.ComplianceStatusCompliantEvent | info | VC | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is compliant from storage provider {providerName}. | 5.5 |
| com.vmware.vc.sms.vm.ComplianceStatusNonCompliantEvent | error | VC | Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is not compliant | |
{operationalStatus} from storage provider {providerName}.
Since 5.5 Reference
|
com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent
|
warning
|
VC
|
com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent|
Virtual disk {diskKey} on {vm.name} on {host.name} and
{computeResource.name} in {datacenter.name} compliance status is
unknown from storage provider {providerName}.
Since 5.5 Reference
|
com.vmware.vc.spbm.ProfileAssociationFailedEvent
|
error
|
VC
|
com.vmware.vc.spbm.ProfileAssociationFailedEvent|
Profile association/dissociation failed for {entityName}
Since 5.5 Reference
|
com.vmware.vc.stats.HostQuickStatesNotUpToDateEvent
|
info
|
VC
|
com.vmware.vc.stats.HostQuickStatesNotUpToDateEvent|
Quick stats on {host.name} in {computeResource.name} in
{datacenter.name} are not up-to-date
Since 5.0 Reference
|
com.vmware.vc.VCHealthStateChangedEvent
|
info
|
VC
|
com.vmware.vc.VCHealthStateChangedEvent| vCenter
Service overall health changed from '{oldState}' to
'{newState}'
Since 4.1 Reference
|
com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent
|
info
|
VC
|
com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent| HA
VM Component Protection protects virtual machine {vm.name} on
{host.name} in cluster {computeResource.name} in datacenter
{datacenter.name} as non-FT virtual machine because the FT state is
disabled
Since 4.1 Reference
|
com.vmware.vc.vcp.FtFailoverEvent
|
info
|
VC
|
com.vmware.vc.vcp.FtFailoverEvent| FT Primary VM
{vm.name} on host {host.name} in cluster {computeResource.name} in
datacenter {datacenter.name} is going to fail over to Secondary VM
due to component failure
Since 4.1 Reference
|
com.vmware.vc.vcp.FtFailoverFailedEvent
|
error
|
VirtualMachine
|
com.vmware.vc.vcp.FtFailoverFailedEvent| FT virtual
machine {vm.name} on host {host.name} in cluster
{computeResource.name} in datacenter {datacenter.name} failed to
failover to secondary
Since 4.1 Reference
|
com.vmware.vc.vcp.FtSecondaryRestartEvent
|
info
|
VC
|
com.vmware.vc.vcp.FtSecondaryRestartEvent| HA VM
Component Protection is restarting FT secondary virtual machine
{vm.name} on host {host.name} in cluster {computeResource.name} in
datacenter {datacenter.name} due to component failure
Since 4.1 Reference
|
com.vmware.vc.vcp.FtSecondaryRestartFailedEvent
|
error
|
VirtualMachine
|
com.vmware.vc.vcp.FtSecondaryRestartFailedEvent| FT
Secondary VM {vm.name} on host {host.name} in cluster
{computeResource.name} in datacenter {datacenter.name} failed to
restart
Since 4.1 Reference
|
com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent
|
info
|
VC
|
com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent|
HA VM Component Protection protects virtual machine {vm.name} on
host {host.name} in cluster {computeResource.name} in datacenter
{datacenter.name} as non-FT virtual machine because it has been in
the needSecondary state too long
Since 4.1 Reference
|
com.vmware.vc.vcp.TestEndEvent
|
info
|
VC
|
com.vmware.vc.vcp.TestEndEvent| VM Component
Protection test ends on host {host.name} in cluster
{computeResource.name} in datacenter {datacenter.name}
Since 4.1 Reference
|
com.vmware.vc.vcp.TestStartEvent
|
info
|
VC
|
com.vmware.vc.vcp.TestStartEvent| VM Component
Protection test starts on host {host.name} in cluster
{computeResource.name} in datacenter {datacenter.name}
Since 4.1 Reference
|
com.vmware.vc.vcp.VcpNoActionEvent
|
info
|
VC
|
com.vmware.vc.vcp.VcpNoActionEvent| HA VM Component
Protection did not take action on virtual machine {vm.name} on host
{host.name} in cluster {computeResource.name} in datacenter
{datacenter.name} due to the feature configuration setting
Since 4.1 Reference
|
com.vmware.vc.vcp.VmDatastoreFailedEvent
|
error
|
VirtualMachine
|
com.vmware.vc.vcp.VmDatastoreFailedEvent| Virtual
machine {vm.name} on host {host.name} in cluster
{computeResource.name} in datacenter {datacenter.name} lost access
to {datastore}
Since 4.1 Reference
|
com.vmware.vc.vcp.VmNetworkFailedEvent
|
error
|
VirtualMachine
|
com.vmware.vc.vcp.VmNetworkFailedEvent| Virtual
machine {vm.name} on host {host.name} in cluster
{computeResource.name} in datacenter {datacenter.name} lost access
to {network}
Since 4.1 Reference
|
com.vmware.vc.vcp.VmPowerOffHangEvent
|
error
|
VirtualMachine
|
com.vmware.vc.vcp.VmPowerOffHangEvent| HA VM
Component Protection could not power off virtual machine {vm.name}
on host {host.name} in cluster {computeResource.name} in datacenter
{datacenter.name} successfully after trying {numTimes} times and
will keep trying
Since 4.1 Reference
|
com.vmware.vc.vcp.VmRestartEvent
|
info
|
VC
|
com.vmware.vc.vcp.VmRestartEvent| HA VM Component
Protection is restarting virtual machine {vm.name} due to component
failure on host {host.name} in cluster {computeResource.name} in
datacenter {datacenter.name}
Since 4.1 Reference
|
com.vmware.vc.vcp.VmRestartFailedEvent
|
error
|
VirtualMachine
|
com.vmware.vc.vcp.VmRestartFailedEvent| Virtual
machine {vm.name} affected by component failure on host {host.name}
in cluster {computeResource.name} in datacenter {datacenter.name}
failed to restart
Since 4.1 Reference
|
com.vmware.vc.vcp.VmWaitForCandidateHostEvent
|
error
|
VirtualMachine
|
com.vmware.vc.vcp.VmWaitForCandidateHostEvent| HA VM
Component Protection could not find a destination host for virtual
machine {vm.name} on host {host.name} in cluster
{computeResource.name} in datacenter {datacenter.name} after
waiting {numSecWait} seconds and will keep trying
Since 4.1 Reference
|
com.vmware.vc.vm.VmRegisterFailedEvent
|
error
|
VC
|
com.vmware.vc.vm.VmRegisterFailedEvent| Virtual
machine {vm.name} registration on {host.name} in datacenter
{datacenter.name} failed
Since 5.0 Reference
|
com.vmware.vc.vm.VmStateFailedToRevertToSnapshot
|
error
|
VirtualMachine
|
com.vmware.vc.vm.VmStateFailedToRevertToSnapshot|
Failed to revert the execution state of the virtual machine
{vm.name} on host {host.name}, in compute resource
{computeResource.name} to snapshot {snapshotName}, with ID
{snapshotId}
Since 5.0 Reference
|
com.vmware.vc.vm.VmStateRevertedToSnapshot
|
info
|
VC
|
com.vmware.vc.vm.VmStateRevertedToSnapshot| The
execution state of the virtual machine {vm.name} on host
{host.name}, in compute resource {computeResource.name} has been
reverted to the state of snapshot {snapshotName}, with ID
{snapshotId}
Since 5.0 Reference
|
com.vmware.vc.vmam.AppMonitoringNotSupported
|
warning
|
VC
|
com.vmware.vc.vmam.AppMonitoringNotSupported|
Application monitoring is not supported on {host.name} in cluster
{computeResource.name} in {datacenter.name}
Since 4.1 Reference
|
com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent
|
warning
|
VC
|
com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent|
Application heartbeat status changed to {status} for {vm.name} on
{host.name} in cluster {computeResource.name} in
{datacenter.name}
Since 4.1 Reference
|
com.vmware.vc.vmam.VmAppHealthStateChangedEvent
|
warning
|
VirtualMachine
|
com.vmware.vc.vmam.VmAppHealthStateChangedEvent|
vSphere HA detected that the application state changed to
{state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on
{host.name} in cluster {computeResource.name} in
{datacenter.name}
Since 5.5 Reference
|
com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent
|
warning
|
VirtualMachine
|
com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent|
Application heartbeat failed for {vm.name} on {host.name} in
cluster {computeResource.name} in {datacenter.name}
Since 4.1 Reference
|
com.vmware.vc.VmCloneFailedInvalidDestinationEvent
|
error
|
VC
|
com.vmware.vc.VmCloneFailedInvalidDestinationEvent|
Cannot clone {vm.name} as {destVmName} to invalid or non-existent
destination with ID {invalidMoRef}: {fault}
Since 5.0 Reference
|
com.vmware.vc.VmCloneToResourcePoolFailedEvent
|
error
|
VC
|
com.vmware.vc.VmCloneToResourcePoolFailedEvent|
Cannot clone {vm.name} as {destVmName} to resource pool
{destResourcePool}: {fault}
Since 5.0 Reference
|
com.vmware.vc.VmDiskConsolidatedEvent
|
info
|
VC
|
com.vmware.vc.VmDiskConsolidatedEvent| Virtual
machine {vm.name} disks consolidated successfully on {host.name} in
cluster {computeResource.name} in {datacenter.name}.
Since 5.0 Reference
|
com.vmware.vc.VmDiskConsolidationNeeded
|
info
|
VC
|
com.vmware.vc.VmDiskConsolidationNeeded| Virtual
machine {vm.name} disks consolidation is needed on {host.name} in
cluster {computeResource.name} in {datacenter.name}.
Since 5.0 Reference
|
com.vmware.vc.VmDiskConsolidationNoLongerNeeded
|
info
|
VC
|
com.vmware.vc.VmDiskConsolidationNoLongerNeeded|
Virtual machine {vm.name} disks consolidation is no longer needed
on {host.name} in cluster {computeResource.name} in
{datacenter.name}.
Since 5.1 Reference
|
com.vmware.vc.VmDiskFailedToConsolidateEvent
|
error
|
VirtualMachine
|
com.vmware.vc.VmDiskFailedToConsolidateEvent| Virtual
machine {vm.name} disks consolidation failed on {host.name} in
cluster {computeResource.name} in {datacenter.name}.
Since 5.0 Reference
|
com.vmware.vc.vsan.DatastoreNoCapacityEvent
|
error
|
VC
|
com.vmware.vc.vsan.DatastoreNoCapacityEvent| VSAN
datastore {datastoreName} in cluster {computeResource.name} in
datacenter {datacenter.name} does not have capacity
Since 5.5 Reference
|
com.vmware.vc.vsan.HostCommunicationErrorEvent
|
error
|
ESXHost
|
com.vmware.vc.vsan.HostCommunicationErrorEvent|
event.com.vmware.vc.vsan.HostCommunicationErrorEvent.fullFormat
Since 5.5 Reference
|
com.vmware.vc.vsan.HostNotInClusterEvent
|
error
|
VC
|
com.vmware.vc.vsan.HostNotInClusterEvent| {host.name}
with the VSAN service enabled is not in the vCenter cluster
{computeResource.name} in datacenter {datacenter.name}
Since 5.5 Reference
|
com.vmware.vc.vsan.HostNotInVsanClusterEvent
|
error
|
VC
|
com.vmware.vc.vsan.HostNotInVsanClusterEvent|
{host.name} is in a VSAN enabled cluster {computeResource.name} in
datacenter {datacenter.name} but does not have VSAN service
enabled
Since 5.5 Reference
|
com.vmware.vc.vsan.HostVendorProviderDeregistrationFailedEvent
|
error
|
VC
|
com.vmware.vc.vsan.HostVendorProviderDeregistrationFailedEvent|
Vendor provider {host.name} deregistration failed
Since 5.5 Reference
|
com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEvent
|
info
|
VC
|
com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEvent|
Vendor provider {host.name} deregistration succeeded
Since 5.5 Reference
|
com.vmware.vc.vsan.HostVendorProviderRegistrationFailedEvent
|
error
|
VC
|
com.vmware.vc.vsan.HostVendorProviderRegistrationFailedEvent|
Vendor provider {host.name} registration failed
Since 5.5 Reference
|
com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEvent
|
info
|
VC
|
com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEvent|
Vendor provider {host.name} registration succeeded
Since 5.5 Reference
|
com.vmware.vc.vsan.NetworkMisConfiguredEvent
|
error
|
ESXHostNetwork
|
com.vmware.vc.vsan.NetworkMisConfiguredEvent| VSAN
network is not configured on {host.name} in cluster
{computeResource.name} in datacenter {datacenter.name}
Since 5.5 Reference
|
com.vmware.vc.vsan.RogueHostFoundEvent
|
error
|
VC
|
com.vmware.vc.vsan.RogueHostFoundEvent| Found another
host participating in the VSAN service in cluster
{computeResource.name} in datacenter {datacenter.name} which is not
a member of this host's vCenter cluster
Since 5.5 Reference
|
com.vmware.vim.eam.agency.create
|
info
|
VC
|
com.vmware.vim.eam.agency.create| {agencyName}
created by {ownerName}
Since 5.0 Reference
|
com.vmware.vim.eam.agency.destroyed
|
info
|
VC
|
com.vmware.vim.eam.agency.destroyed| {agencyName}
removed from the vSphere ESX Agent Manager
Since 5.0 Reference
|
com.vmware.vim.eam.agency.goalstate
|
info
|
VC
|
com.vmware.vim.eam.agency.goalstate| {agencyName}
changed goal state from {oldGoalState} to {newGoalState}
Since 5.0 Reference
|
com.vmware.vim.eam.agency.statusChanged
|
info
|
VC
|
com.vmware.vim.eam.agency.statusChanged| Agency
status changed from {oldStatus} to {newStatus}
Since 5.1 Reference
|
com.vmware.vim.eam.agency.updated
|
info
|
VC
|
com.vmware.vim.eam.agency.updated| Configuration
updated {agencyName}
Since 5.0 Reference
|
com.vmware.vim.eam.agent.created
|
info
|
VC
|
com.vmware.vim.eam.agent.created| Agent added to host
{host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.destroyed
|
info
|
VC
|
com.vmware.vim.eam.agent.destroyed| Agent removed
from host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.destroyedNoHost
|
info
|
VC
|
com.vmware.vim.eam.agent.destroyedNoHost| Agent
removed from host ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterPowerOn
|
info
|
VC
|
com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterPowerOn|
Agent VM {vm.name} has been powered on. Mark agent as available to
proceed agent workflow ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterProvisioning
|
info
|
VC
|
com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterProvisioning|
Agent VM {vm.name} has been provisioned. Mark agent as available to
proceed agent workflow ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.statusChanged
|
info
|
VC
|
com.vmware.vim.eam.agent.statusChanged| Agent status
changed from {oldStatus} to {newStatus}
Since 5.1 Reference
|
com.vmware.vim.eam.agent.task.deleteVm
|
info
|
VC
|
com.vmware.vim.eam.agent.task.deleteVm| Agent VM
{vmName} is deleted on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.task.deployVm
|
info
|
VC
|
com.vmware.vim.eam.agent.task.deployVm| Agent VM
{vm.name} is provisioned on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.task.powerOffVm
|
info
|
VC
|
com.vmware.vim.eam.agent.task.powerOffVm| Agent VM
{vm.name} powered off, on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.task.powerOnVm
|
info
|
VC
|
com.vmware.vim.eam.agent.task.powerOnVm| Agent VM
{vm.name} powered on, on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.task.vibInstalled
|
info
|
VC
|
com.vmware.vim.eam.agent.task.vibInstalled| Agent
installed VIB {vib} on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.agent.task.vibUninstalled
|
info
|
VC
|
com.vmware.vim.eam.agent.task.vibUninstalled| Agent
uninstalled VIB {vib} on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.cannotAccessAgentOVF
|
warning
|
VC
|
com.vmware.vim.eam.issue.cannotAccessAgentOVF| Unable
to access agent OVF package at {url} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.cannotAccessAgentVib
|
warning
|
VC
|
com.vmware.vim.eam.issue.cannotAccessAgentVib| Unable
to access agent VIB module at {url} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.hostInMaintenanceMode
|
warning
|
VC
|
com.vmware.vim.eam.issue.hostInMaintenanceMode| Agent
cannot complete an operation since the host {host.name} is in
maintenance mode ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.hostInStandbyMode
|
warning
|
VC
|
com.vmware.vim.eam.issue.hostInStandbyMode| Agent
cannot complete an operation since the host {host.name} is in
standby mode ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.hostPoweredOff
|
warning
|
VC
|
com.vmware.vim.eam.issue.hostPoweredOff| Agent cannot
complete an operation since the host {host.name} is powered off
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.incompatibleHostVersion
|
warning
|
VC
|
com.vmware.vim.eam.issue.incompatibleHostVersion|
Agent is not deployed due to incompatible host {host.name}
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.insufficientIpAddresses
|
warning
|
VC
|
com.vmware.vim.eam.issue.insufficientIpAddresses|
Insufficient IP addresses in IP pool in agent's VM network
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.insufficientResources
|
warning
|
VC
|
com.vmware.vim.eam.issue.insufficientResources| Agent
cannot be provisioned due to insufficient resources on host
{host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.insufficientSpace
|
warning
|
VC
|
com.vmware.vim.eam.issue.insufficientSpace| Agent on
{host.name} cannot be provisioned due to insufficient space on
datastore ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.missingAgentIpPool
|
warning
|
VC
|
com.vmware.vim.eam.issue.missingAgentIpPool| No IP
pool in agent's VM network ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.missingDvFilterSwitch
|
warning
|
VC
|
com.vmware.vim.eam.issue.missingDvFilterSwitch|
dvFilter switch is not configured on host {host.name}
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.noAgentVmDatastore
|
warning
|
VC
|
com.vmware.vim.eam.issue.noAgentVmDatastore| No agent
datastore configuration on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.noAgentVmNetwork
|
warning
|
VC
|
com.vmware.vim.eam.issue.noAgentVmNetwork| No agent
network configuration on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.noCustomAgentVmDatastore
|
error
|
VC
|
com.vmware.vim.eam.issue.noCustomAgentVmDatastore|
Agent datastore(s) {customAgentVmDatastoreName} not available on
host {host.name} ({agencyName})
Since 5.5 Reference
|
com.vmware.vim.eam.issue.noCustomAgentVmNetwork
|
error
|
VC
|
com.vmware.vim.eam.issue.noCustomAgentVmNetwork|
Agent network(s) {customAgentVmNetworkName} not available on host
{host.name} ({agencyName})
Since 5.1 Reference
|
com.vmware.vim.eam.issue.orphandedDvFilterSwitch
|
warning
|
VC
|
com.vmware.vim.eam.issue.orphandedDvFilterSwitch|
Unused dvFilter switch on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.orphanedAgency
|
warning
|
VC
|
com.vmware.vim.eam.issue.orphanedAgency| Orphaned
agency found. ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.ovfInvalidFormat
|
warning
|
VC
|
com.vmware.vim.eam.issue.ovfInvalidFormat| OVF used
to provision agent on host {host.name} has invalid format
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.ovfInvalidProperty
|
warning
|
VC
|
com.vmware.vim.eam.issue.ovfInvalidProperty| OVF
environment used to provision agent on host {host.name} has one or
more invalid properties ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.resolved
|
info
|
VC
|
com.vmware.vim.eam.issue.resolved| Issue {type}
resolved (key {key})
Since 5.1 Reference
|
com.vmware.vim.eam.issue.unknownAgentVm
|
warning
|
VC
|
com.vmware.vim.eam.issue.unknownAgentVm| Unknown
agent VM {vm.name}
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vibCannotPutHostInMaintenanceMode
|
warning
|
VC
|
com.vmware.vim.eam.issue.vibCannotPutHostInMaintenanceMode|
Cannot put host into maintenance mode ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vibInvalidFormat
|
warning
|
VC
|
com.vmware.vim.eam.issue.vibInvalidFormat| Invalid
format for VIB module at {url} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vibNotInstalled
|
warning
|
VC
|
com.vmware.vim.eam.issue.vibNotInstalled| VIB module
for agent is not installed on host {host.name}
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vibRequiresHostInMaintenanceMode
|
error
|
VC
|
com.vmware.vim.eam.issue.vibRequiresHostInMaintenanceMode|
Host must be put into maintenance mode to complete agent VIB
installation ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vibRequiresHostReboot
|
error
|
VC
|
com.vmware.vim.eam.issue.vibRequiresHostReboot| Host
{host.name} must be rebooted to complete agent VIB installation
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vibRequiresManualInstallation
|
error
|
VC
|
com.vmware.vim.eam.issue.vibRequiresManualInstallation|
VIB {vib} requires manual installation on host {host.name}
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vibRequiresManualUninstallation
|
error
|
VC
|
com.vmware.vim.eam.issue.vibRequiresManualUninstallation|
VIB {vib} requires manual uninstallation on host {host.name}
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmCorrupted
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmCorrupted| Agent VM
{vm.name} on host {host.name} is corrupted ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmDeployed
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmDeployed| Agent VM
{vm.name} is provisioned on host {host.name} when it should be
removed ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmMarkedAsTemplate
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmMarkedAsTemplate| Agent VM
{vm.name} on host {host.name} is marked as template
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmNotDeployed
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmNotDeployed| Agent VM is
missing on host {host.name} ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmOrphaned
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmOrphaned| Orphaned agent
VM {vm.name} on host {host.name} detected ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmPoweredOff
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmPoweredOff| Agent VM
{vm.name} on host {host.name} is expected to be powered on
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmPoweredOn
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmPoweredOn| Agent VM
{vm.name} on host {host.name} is expected to be powered off
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmSuspended
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmSuspended| Agent VM
{vm.name} on host {host.name} is expected to be powered on but is
suspended ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmWrongFolder
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmWrongFolder| Agent VM
{vm.name} on host {host.name} is in the wrong VM folder
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.issue.vmWrongResourcePool
|
warning
|
VC
|
com.vmware.vim.eam.issue.vmWrongResourcePool| Agent
VM {vm.name} on host {host.name} is in the wrong resource pool
({agencyName})
Since 5.0 Reference
|
com.vmware.vim.eam.login.invalid
|
warning
|
VC
|
com.vmware.vim.eam.login.invalid| Failed login to
vSphere ESX Agent Manager
Since 5.0 Reference
|
com.vmware.vim.eam.login.succeeded
|
info
|
VC
|
com.vmware.vim.eam.login.succeeded| Successful login
by {user} into vSphere ESX Agent Manager
Since 5.0 Reference
|
com.vmware.vim.eam.logout
|
info
|
VC
|
com.vmware.vim.eam.logout| User {user} logged out of
vSphere ESX Agent Manager by logging out of the vCenter
server
Since 5.0 Reference
|
com.vmware.vim.eam.task.scanForUnknownAgentVmsCompleted
|
info
|
VC
|
com.vmware.vim.eam.task.scanForUnknownAgentVmsCompleted|
Scan for unknown agent VMs completed
Since 5.0 Reference
|
com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiated
|
info
|
VC
|
com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiated|
Scan for unknown agent VMs initiated
Since 5.0 Reference
|
com.vmware.vim.eam.task.setupDvFilter
|
info
|
VC
|
com.vmware.vim.eam.task.setupDvFilter| DvFilter
switch '{switchName}' is set up on host {host.name}
Since 5.0 Reference
|
com.vmware.vim.eam.task.tearDownDvFilter
|
info
|
VC
|
com.vmware.vim.eam.task.tearDownDvFilter| DvFilter
switch '{switchName}' is torn down on host {host.name}
Since 5.0 Reference
|
com.vmware.vim.eam.unauthorized.access
|
warning
|
VC
|
com.vmware.vim.eam.unauthorized.access| Unauthorized
access by {user} in vSphere ESX Agent Manager
Since 5.0 Reference
|
com.vmware.vim.eam.vum.failedtouploadvib
|
error
|
VC
|
com.vmware.vim.eam.vum.failedtouploadvib| Failed to
upload {vibUrl} to VMware Update Manager ({agencyName})
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.bind.vApp
|
info
|
VC
|
com.vmware.vim.vsm.dependency.bind.vApp|
event.com.vmware.vim.vsm.dependency.bind.vApp.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.bind.vm
|
info
|
VC
|
com.vmware.vim.vsm.dependency.bind.vm|
event.com.vmware.vim.vsm.dependency.bind.vm.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.create.vApp
|
info
|
VC
|
com.vmware.vim.vsm.dependency.create.vApp|
event.com.vmware.vim.vsm.dependency.create.vApp.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.create.vm
|
info
|
VC
|
com.vmware.vim.vsm.dependency.create.vm|
event.com.vmware.vim.vsm.dependency.create.vm.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.destroy.vApp
|
info
|
VC
|
com.vmware.vim.vsm.dependency.destroy.vApp|
event.com.vmware.vim.vsm.dependency.destroy.vApp.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.destroy.vm
|
info
|
VC
|
com.vmware.vim.vsm.dependency.destroy.vm|
event.com.vmware.vim.vsm.dependency.destroy.vm.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.reconfigure.vApp
|
info
|
VC
|
com.vmware.vim.vsm.dependency.reconfigure.vApp|
event.com.vmware.vim.vsm.dependency.reconfigure.vApp.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.reconfigure.vm
|
info
|
VC
|
com.vmware.vim.vsm.dependency.reconfigure.vm|
event.com.vmware.vim.vsm.dependency.reconfigure.vm.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.unbind.vApp
|
info
|
VC
|
com.vmware.vim.vsm.dependency.unbind.vApp|
event.com.vmware.vim.vsm.dependency.unbind.vApp.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.unbind.vm
|
info
|
VC
|
com.vmware.vim.vsm.dependency.unbind.vm|
event.com.vmware.vim.vsm.dependency.unbind.vm.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.update.vApp
|
info
|
VC
|
com.vmware.vim.vsm.dependency.update.vApp|
event.com.vmware.vim.vsm.dependency.update.vApp.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.dependency.update.vm
|
info
|
VC
|
com.vmware.vim.vsm.dependency.update.vm|
event.com.vmware.vim.vsm.dependency.update.vm.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.provider.register
|
info
|
VC
|
com.vmware.vim.vsm.provider.register|
event.com.vmware.vim.vsm.provider.register.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.provider.unregister
|
info
|
VC
|
com.vmware.vim.vsm.provider.unregister|
event.com.vmware.vim.vsm.provider.unregister.fullFormat
Since 5.0 Reference
|
com.vmware.vim.vsm.provider.update
|
info
|
VC
|
com.vmware.vim.vsm.provider.update|
event.com.vmware.vim.vsm.provider.update.fullFormat
Since 5.0 Reference
|
CustomFieldDefAddedEvent
|
info
|
VC
|
Created new custom
field definition {name}
Since 2.0 Reference
|
CustomFieldDefEvent
|
info
|
VC
|
This event records
a custom field definition event.
Since 2.0 Reference
|
CustomFieldDefRemovedEvent
|
info
|
VC
|
Removed field
definition {name}
Since 2.0 Reference
|
CustomFieldDefRenamedEvent
|
info
|
VC
|
Renamed field
definition from {name} to {newName}
Since 2.0 Reference
|
CustomFieldValueChangedEvent
|
info
|
VC
|
Changed custom
field {name} on {entity.name} in {datacenter.name} to
{value}
Since 2.0 Reference
|
CustomizationFailed
|
warning
|
VC
|
Cannot complete
customization of VM {vm.name}. See customization log at
{logLocation} on the guest OS for details.
Since 2.5 Reference
|
CustomizationLinuxIdentityFailed
|
warning
|
VC
|
An error occurred
while setting up Linux identity. See log file '{logLocation}' on
guest OS for details.
Since 2.5 Reference
|
CustomizationNetworkSetupFailed
|
warning
|
VC
|
An error occurred
while setting up network properties of the guest OS. See the log
file {logLocation} in the guest OS for details.
Since 2.5 Reference
|
CustomizationStartedEvent
|
info
|
VC
|
Started
customization of VM {vm.name}. Customization log located at
{logLocation} in the guest OS.
Since 2.5 Reference
|
CustomizationSucceeded
|
info
|
VC
|
Customization of
VM {vm.name} succeeded. Customization log located at {logLocation}
in the guest OS.
Since 2.5 Reference
|
CustomizationSysprepFailed
|
warning
|
VC
|
The version of
Sysprep {sysprepVersion} provided for customizing VM {vm.name} does
not match the version of guest OS {systemVersion}. See the log file
{logLocation} in the guest OS for more information.
Since 2.5 Reference
|
CustomizationUnknownFailure
|
warning
|
VC
|
An error occurred
while customizing VM {vm.name}. For details, see the log file
{logLocation} in the guest OS.
Since 2.5 Reference
|
DasAdmissionControlDisabledEvent
|
info
|
VC
|
HA admission
control disabled on cluster {computeResource.name} in
{datacenter.name}
Since 2.0 Reference
|
DasAdmissionControlEnabledEvent
|
info
|
VC
|
HA admission
control enabled on cluster {computeResource.name} in
{datacenter.name}
Since 2.0 Reference
|
DasAgentFoundEvent
|
info
|
VC
|
Re-established
contact with a primary host in this HA cluster
Since 2.0 Reference
|
DasAgentUnavailableEvent
|
error
|
Cluster
|
Unable to contact
a primary HA agent in cluster {computeResource.name} in
{datacenter.name}
Since 2.0 Reference
|
DasClusterIsolatedEvent
|
error
|
Cluster
|
All hosts in the
HA cluster {computeResource.name} in {datacenter.name} were
isolated from the network. Check the network configuration for
proper network redundancy in the management network.
Since 4.0 Reference
|
DasDisabledEvent
|
info
|
VC
|
HA disabled on
cluster {computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
DasEnabledEvent
|
info
|
VC
|
HA enabled on
cluster {computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
DasHostFailedEvent
|
error
|
Cluster
|
A possible host
failure has been detected by HA on {failedHost.name} in cluster
{computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
DasHostIsolatedEvent
|
warning
|
Cluster
|
Host
{isolatedHost.name} has been isolated from cluster
{computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
DatacenterCreatedEvent
|
info
|
VC
|
Created datacenter
{datacenter.name} in folder {parent.name}
Since 2.5 Reference
|
DatacenterRenamedEvent
|
info
|
VC
|
Renamed datacenter
from {oldName} to {newName}
Since 2.5 Reference
|
DatastoreCapacityIncreasedEvent
|
info
|
VC
|
Datastore
{datastore.name} increased in capacity from {oldCapacity} bytes to
{newCapacity} bytes in {datacenter.name}
Since 4.0 Reference
|
DatastoreDestroyedEvent
|
info
|
VC
|
Removed
unconfigured datastore {datastore.name}
Since 2.0 Reference
|
DatastoreDiscoveredEvent
|
info
|
VC
|
Discovered
datastore {datastore.name} on {host.name} in
{datacenter.name}
Since 2.0 Reference
|
DatastoreDuplicatedEvent
|
error
|
VC
|
Multiple
datastores named {datastore} detected on host {host.name} in
{datacenter.name}
Since 2.0 Reference
|
DatastoreFileCopiedEvent
|
info
|
VC
|
File or directory
{sourceFile} copied from {sourceDatastore.name} to {datastore.name}
as {targetFile}
Since 4.0 Reference
|
DatastoreFileDeletedEvent
|
info
|
VC
|
File or directory
{targetFile} deleted from {datastore.name}
Since 4.0 Reference
|
DatastoreFileMovedEvent
|
info
|
VC
|
File or directory
{sourceFile} moved from {sourceDatastore.name} to {datastore.name}
as {targetFile}
Since 4.0 Reference
|
DatastoreIORMReconfiguredEvent
|
info
|
VC
|
Reconfigured
Storage I/O Control on datastore {datastore.name}
Since 4.1 Reference
|
DatastorePrincipalConfigured
|
info
|
VC
|
Configured
datastore principal {datastorePrincipal} on host {host.name} in
{datacenter.name}
Since 2.0 Reference
|
DatastoreRemovedOnHostEvent
|
info
|
VC
|
Removed datastore
{datastore.name} from {host.name} in {datacenter.name}
Since 2.0 Reference
|
DatastoreRenamedEvent
|
info
|
VC
|
Renamed datastore
from {oldName} to {newName} in {datacenter.name}
Since 2.0 Reference
|
DatastoreRenamedOnHostEvent
|
info
|
VC
|
Renamed datastore
from {oldName} to {newName} in {datacenter.name}
Since 2.0 Reference
|
DrsDisabledEvent
|
info
|
VC
|
Disabled DRS on
cluster {computeResource.name} in datacenter
{datacenter.name}
Since 2.0 Reference
|
DrsEnabledEvent
|
info
|
VC
|
Enabled DRS on
{computeResource.name} with automation level {behavior} in
{datacenter.name}
Since 2.0 Reference
|
DrsEnteredStandbyModeEvent
|
info
|
VC
|
DRS put
{host.name} into standby mode
Since 2.5 Reference
|
DrsEnteringStandbyModeEvent
|
info
|
VC
|
DRS is putting
{host.name} into standby mode
Since 4.0 Reference
|
DrsExitedStandbyModeEvent
|
info
|
VC
|
DRS moved
{host.name} out of standby mode
Since 2.5 Reference
|
DrsExitingStandbyModeEvent
|
info
|
VC
|
DRS is moving
{host.name} out of standby mode
Since 4.0 Reference
|
DrsExitStandbyModeFailedEvent
|
error
|
ESXHost
|
DRS cannot move
{host.name} out of standby mode
Since 4.0 Reference
|
DrsInvocationFailedEvent
|
error
|
Cluster
|
DRS invocation not
completed
Since 4.0 Reference
|
DrsRecoveredFromFailureEvent
|
info
|
VC
|
DRS has recovered
from the failure
Since 4.0 Reference
|
DrsResourceConfigureFailedEvent
|
error
|
Cluster
|
Unable to apply
DRS resource settings on host {host.name} in {datacenter.name}.
{reason.msg}. This can significantly reduce the effectiveness of
DRS.
Since 2.0 Reference
|
DrsResourceConfigureSyncedEvent
|
info
|
VC
|
Resource
configuration specification returned to synchronization after a
previous failure on host '{host.name}' in {datacenter.name}
Since 2.0 Reference
|
DrsRuleComplianceEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is now compliant with DRS VM-Host
affinity rules
Since 4.1 Reference
|
DrsRuleViolationEvent
|
warning
|
VirtualMachine
|
{vm.name} on
{host.name} in {datacenter.name} is violating a DRS VM-Host
affinity rule
Since 4.1 Reference
|
DrsVmMigratedEvent
|
info
|
VC
|
DRS migrated
{vm.name} from {sourceHost.name} to {host.name} in cluster
{computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
DrsVmPoweredOnEvent
|
info
|
VC
|
DRS powered on
{vm.name} on {host.name} in {datacenter.name}
Since 2.5 Reference
|
DuplicateIpDetectedEvent
|
warning
|
ESXHostNetwork
|
Virtual machine
{macAddress} on host {host.name} has a duplicate IP
{duplicateIP}
Since 2.5 Reference
|
DvpgImportEvent
|
info
|
VC
|
Import operation
with type {importType} was performed on {net.name}
Since 5.1 Reference
|
DvpgRestoreEvent
|
info
|
VC
|
Restore operation
was performed on {net.name}
Since 5.1 Reference
|
DVPortgroupCreatedEvent
|
info
|
VC
|
Distributed
virtual port group {net.name} in {datacenter.name} was added to
switch {dvs.name}.
Since 4.0 Reference
|
DVPortgroupDestroyedEvent
|
info
|
VC
|
Distributed
virtual port group {net.name} in {datacenter.name} was
deleted.
Since 4.0 Reference
|
DVPortgroupReconfiguredEvent
|
info
|
VC
|
Distributed
virtual port group {net.name} in {datacenter.name} was
reconfigured.
Since 4.0 Reference
|
DVPortgroupRenamedEvent
|
info
|
VC
|
Distributed
virtual port group {oldName} in {datacenter.name} was renamed to
{newName}
Since 4.0 Reference
|
DvsCreatedEvent
|
info
|
VC
|
A Distributed
Virtual Switch {dvs.name} was created in {datacenter.name}.
Since 4.0 Reference
|
DvsDestroyedEvent
|
info
|
VC
|
Distributed
Virtual Switch {dvs.name} in {datacenter.name} was deleted.
Since 4.0 Reference
|
DvsEvent
|
info
|
VC
|
Distributed
Virtual Switch event
Since 4.0 Reference
|
DvsHealthStatusChangeEvent
|
info
|
VC
|
Health check
status was changed in vSphere Distributed Switch {dvs.name} on host
{host.name} in {datacenter.name}
Since 5.1 Reference
|
DvsHostBackInSyncEvent
|
info
|
VC
|
The Distributed
Virtual Switch {dvs.name} configuration on the host was
synchronized with that of the vCenter Server.
Since 4.0 Reference
|
DvsHostJoinedEvent
|
info
|
VC
|
The host
{hostJoined.name} joined the Distributed Virtual Switch {dvs.name}
in {datacenter.name}.
Since 4.0 Reference
|
DvsHostLeftEvent
|
info
|
VC
|
The host
{hostLeft.name} left the Distributed Virtual Switch {dvs.name} in
{datacenter.name}.
Since 4.0 Reference
|
DvsHostStatusUpdated
|
info
|
VC
|
The host
{hostMember.name} changed status on the vNetwork Distributed Switch
{dvs.name} in {datacenter.name}
Since 4.1 Reference
|
DvsHostWentOutOfSyncEvent
|
warning
|
ESXHostNetwork
|
The Distributed
Virtual Switch {dvs.name} configuration on the host differed from
that of the vCenter Server.
Since 4.0 Reference
|
DvsImportEvent
|
info
|
VC
|
Import operation
with type {importType} was performed on {dvs.name}
Since 5.1 Reference
|
DvsMergedEvent
|
info
|
VC
|
Distributed
Virtual Switch {srcDvs.name} was merged into {dstDvs.name} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortBlockedEvent
|
info
|
VC
|
Port {portKey} was
blocked in the Distributed Virtual Switch {dvs.name} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortConnectedEvent
|
info
|
VC
|
The port {portKey}
was connected in the Distributed Virtual Switch {dvs.name} in
{datacenter.name}
Since 4.0 Reference
|
DvsPortCreatedEvent
|
info
|
VC
|
New ports were
created in the Distributed Virtual Switch {dvs.name} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortDeletedEvent
|
info
|
VC
|
Deleted ports in
the Distributed Virtual Switch {dvs.name} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortDisconnectedEvent
|
info
|
VC
|
The port {portKey}
was disconnected in the Distributed Virtual Switch {dvs.name} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortEnteredPassthruEvent
|
info
|
VC
|
dvPort {portKey}
entered passthrough mode in the vNetwork Distributed Switch
{dvs.name} in {datacenter.name}
Since 4.1 Reference
|
DvsPortExitedPassthruEvent
|
info
|
VC
|
dvPort {portKey}
exited passthrough mode in the vNetwork Distributed Switch
{dvs.name} in {datacenter.name}
Since 4.1 Reference
|
DvsPortJoinPortgroupEvent
|
info
|
VC
|
Port {portKey} was
moved into the distributed virtual port group {portgroupName} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortLeavePortgroupEvent
|
info
|
VC
|
Port {portKey} was
moved out of the distributed virtual port group {portgroupName} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortLinkDownEvent
|
warning
|
VC
|
The port {portKey}
link was down in the Distributed Virtual Switch {dvs.name} in
{datacenter.name}
Since 4.0 Reference
|
DvsPortLinkUpEvent
|
info
|
VC
|
The port {portKey}
link was up in the Distributed Virtual Switch {dvs.name} in
{datacenter.name}
Since 4.0 Reference
|
DvsPortReconfiguredEvent
|
info
|
VC
|
Reconfigured ports
in the Distributed Virtual Switch {dvs.name} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortRuntimeChangeEvent
|
info
|
VC
|
The dvPort
{portKey} runtime information changed in the vSphere Distributed
Switch {dvs.name} in {datacenter.name}.
Since 5.0 Reference
|
DvsPortUnblockedEvent
|
info
|
VC
|
Port {portKey} was
unblocked in the Distributed Virtual Switch {dvs.name} in
{datacenter.name}.
Since 4.0 Reference
|
DvsPortVendorSpecificStateChangeEvent
|
info
|
VC
|
The dvPort
{portKey} vendor specific state changed in the vSphere Distributed
Switch {dvs.name} in {datacenter.name}.
Since 5.0 Reference
|
DvsReconfiguredEvent
|
info
|
VC
|
The Distributed
Virtual Switch {dvs.name} in {datacenter.name} was
reconfigured.
Since 4.0 Reference
|
DvsRenamedEvent
|
info
|
VC
|
The Distributed
Virtual Switch {oldName} in {datacenter.name} was renamed to
{newName}.
Since 4.0 Reference
|
DvsRestoreEvent
|
info
|
VC
|
Restore operation
was performed on {dvs.name}
Since 5.1 Reference
|
DvsUpgradeAvailableEvent
|
info
|
VC
|
An upgrade for the
Distributed Virtual Switch {dvs.name} in datacenter
{datacenter.name} is available.
Since 4.0 Reference
|
DvsUpgradedEvent
|
info
|
VC
|
Distributed
Virtual Switch {dvs.name} in datacenter {datacenter.name} was
upgraded.
Since 4.0 Reference
|
DvsUpgradeInProgressEvent
|
info
|
VC
|
An upgrade for the
Distributed Virtual Switch {dvs.name} in datacenter
{datacenter.name} is in progress.
Since 4.0 Reference
|
DvsUpgradeRejectedEvent
|
info
|
VC
|
Cannot complete an
upgrade for the Distributed Virtual Switch {dvs.name} in datacenter
{datacenter.name}
Since 4.0 Reference
|
EnteredMaintenanceModeEvent
|
info
|
VC
|
Host {host.name}
in {datacenter.name} has entered maintenance mode
Since 2.0 Reference
|
EnteredStandbyModeEvent
|
info
|
VC
|
The host
{host.name} is in standby mode
Since 2.5 Reference
|
EnteringMaintenanceModeEvent
|
info
|
VC
|
Host {host.name}
in {datacenter.name} has started to enter maintenance mode
Since 2.0 Reference
|
EnteringStandbyModeEvent
|
info
|
VC
|
The host
{host.name} is entering standby mode
Since 2.5 Reference
|
ErrorUpgradeEvent
|
error
|
VC
|
{message}
Since 2.0 Reference
|
esx.audit.dcui.defaults.factoryrestore
|
warning
|
VC
|
esx.audit.dcui.defaults.factoryrestore| The host has
been restored to default factory settings. Please consult ESXi
Embedded and vCenter Server Setup Guide or follow the Ask VMware
link for more information.
Since 5.0 Reference
|
esx.audit.dcui.disabled
|
info
|
VC
|
esx.audit.dcui.disabled| The DCUI has been
disabled.
Since 5.0 Reference
|
esx.audit.dcui.enabled
|
info
|
VC
|
esx.audit.dcui.enabled| The DCUI has been
enabled.
Since 5.0 Reference
|
esx.audit.dcui.host.reboot
|
warning
|
VC
|
esx.audit.dcui.host.reboot| The host is being
rebooted through the Direct Console User Interface (DCUI). Please
consult ESXi Embedded and vCenter Server Setup Guide or follow the
Ask VMware link for more information.
Since 5.0 Reference
|
esx.audit.dcui.host.shutdown
|
warning
|
VC
|
esx.audit.dcui.host.shutdown| The host is being shut
down through the Direct Console User Interface (DCUI). Please
consult ESXi Embedded and vCenter Server Setup Guide or follow the
Ask VMware link for more information.
Since 5.0 Reference
|
esx.audit.dcui.hostagents.restart
|
info
|
VC
|
esx.audit.dcui.hostagents.restart| The management
agents on the host are being restarted. Please consult ESXi
Embedded and vCenter Server Setup Guide or follow the Ask VMware
link for more information.
Since 5.0 Reference
|
esx.audit.dcui.login.failed
|
error
|
VC
|
esx.audit.dcui.login.failed| Authentication of user
{1} has failed. Please consult ESXi Embedded and vCenter Server
Setup Guide or follow the Ask VMware link for more
information.
Since 5.0 Reference
|
esx.audit.dcui.login.passwd.changed
|
info
|
VC
|
esx.audit.dcui.login.passwd.changed| Login password
for user {1} has been changed. Please consult ESXi Embedded and
vCenter Server Setup Guide or follow the Ask VMware link for more
information.
Since 5.0 Reference
|
esx.audit.dcui.network.factoryrestore
|
warning
|
VC
|
esx.audit.dcui.network.factoryrestore| The host has
been restored to factory network settings. Please consult ESXi
Embedded and vCenter Server Setup Guide or follow the Ask VMware
link for more information.
Since 5.0 Reference
|
esx.audit.dcui.network.restart
|
info
|
VC
|
esx.audit.dcui.network.restart| A management
interface {1} has been restarted. Please consult ESXi Embedded and
vCenter Server Setup Guide or follow the Ask VMware link for more
information.
Since 5.0 Reference
|
esx.audit.esxcli.host.poweroff
|
warning
|
ESXHost
|
esx.audit.esxcli.host.poweroff| The host is being
powered off through esxcli. Reason for powering off: {1}. Please
consult vSphere Documentation Center or follow the Ask VMware link
for more information.
Since 5.1 Reference
|
esx.audit.esxcli.host.restart
|
info
|
ESXHost
|
esx.audit.esxcli.host.restart|
event.esx.audit.esxcli.host.restart.fullFormat
Since 5.1 Reference
|
esx.audit.esximage.hostacceptance.changed
|
info
|
VC
|
esx.audit.esximage.hostacceptance.changed| Host
acceptance level changed from {1} to {2}
Since 5.0 Reference
|
esx.audit.esximage.install.novalidation
|
warning
|
VC
|
esx.audit.esximage.install.novalidation| Attempting
to install an image profile with validation disabled. This may
result in an image with unsatisfied dependencies, file or package
conflicts, and potential security violations.
Since 5.0 Reference
|
esx.audit.esximage.install.securityalert
|
warning
|
VC
|
esx.audit.esximage.install.securityalert| SECURITY
ALERT: Installing image profile '{1}' with {2}.
Since 5.0 Reference
|
esx.audit.esximage.profile.install.successful
|
info
|
VC
|
esx.audit.esximage.profile.install.successful|
Successfully installed image profile '{1}'. Installed VIBs {2},
removed VIBs {3}
Since 5.0 Reference
|
esx.audit.esximage.profile.update.successful
|
info
|
VC
|
esx.audit.esximage.profile.update.successful|
Successfully updated host to image profile '{1}'. Installed VIBs
{2}, removed VIBs {3}
Since 5.0 Reference
|
esx.audit.esximage.vib.install.successful
|
info
|
VC
|
esx.audit.esximage.vib.install.successful|
Successfully installed VIBs {1}, removed VIBs {2}
Since 5.0 Reference
|
esx.audit.esximage.vib.remove.successful
|
info
|
VC
|
esx.audit.esximage.vib.remove.successful|
Successfully removed VIBs {1}
Since 5.0 Reference
|
esx.audit.host.boot
|
info
|
VC
|
esx.audit.host.boot| Host has booted.
Since 5.0 Reference
|
esx.audit.host.maxRegisteredVMsExceeded
|
warning
|
ESXHost
|
esx.audit.host.maxRegisteredVMsExceeded| The number
of virtual machines registered on host {host.name} in cluster
{computeResource.name} in {datacenter.name} exceeded limit:
{current} registered, {limit} is the maximum supported.
Since 5.1 Reference
|
esx.audit.host.stop.reboot
|
info
|
VC
|
esx.audit.host.stop.reboot| Host is rebooting.
Since 5.0 Reference
|
esx.audit.host.stop.shutdown
|
info
|
VC
|
esx.audit.host.stop.shutdown| Host is shutting
down.
Since 5.0 Reference
|
esx.audit.lockdownmode.disabled
|
info
|
VC
|
esx.audit.lockdownmode.disabled| Administrator access
to the host has been enabled.
Since 5.0 Reference
|
esx.audit.lockdownmode.enabled
|
info
|
VC
|
esx.audit.lockdownmode.enabled| Administrator access
to the host has been disabled.
Since 5.0 Reference
|
esx.audit.maintenancemode.canceled
|
info
|
VC
|
esx.audit.maintenancemode.canceled| The host has
canceled entering maintenance mode.
Since 5.0 Reference
|
esx.audit.maintenancemode.entered
|
info
|
VC
|
esx.audit.maintenancemode.entered| The host has
entered maintenance mode.
Since 5.0 Reference
|
esx.audit.maintenancemode.entering
|
info
|
VC
|
esx.audit.maintenancemode.entering| The host has
begun entering maintenance mode.
Since 5.0 Reference
|
esx.audit.maintenancemode.exited
|
info
|
VC
|
esx.audit.maintenancemode.exited| The host has exited
maintenance mode.
Since 5.0 Reference
|
esx.audit.net.firewall.config.changed
|
info
|
VC
|
esx.audit.net.firewall.config.changed| Firewall
configuration has changed. Operation '{1}' for rule set {2}
succeeded.
Since 5.0 Reference
|
esx.audit.net.firewall.disabled
|
warning
|
VC
|
esx.audit.net.firewall.disabled| Firewall has been
disabled.
Since 5.0 Reference
|
esx.audit.net.firewall.enabled
|
info
|
VC
|
esx.audit.net.firewall.enabled| Firewall has been
enabled for port {1}.
Since 5.0 Reference
|
esx.audit.net.firewall.port.hooked
|
info
|
VC
|
esx.audit.net.firewall.port.hooked| Port {1} is now
protected by Firewall.
Since 5.0 Reference
|
esx.audit.net.firewall.port.removed
|
warning
|
VC
|
esx.audit.net.firewall.port.removed| Port {1} is no
longer protected with Firewall.
Since 5.0 Reference
|
esx.audit.net.lacp.disable
|
info
|
VC
|
esx.audit.net.lacp.disable| LACP for VDS {1} is
disabled.
Since 5.1 Reference
|
esx.audit.net.lacp.enable
|
info
|
VC
|
esx.audit.net.lacp.enable| LACP for VDS {1} is
enabled.
Since 5.1 Reference
|
esx.audit.net.lacp.uplink.connected
|
info
|
VC
|
esx.audit.net.lacp.uplink.connected| LACP info:
uplink {1} on VDS {2} was connected.
Since 5.1 Reference
|
esx.audit.net.vdl2.ip.change
|
warning
|
ESXHostNetwork
|
esx.audit.net.vdl2.ip.change| VDL2 IP changed on
vmknic {1}, port {2}, DVS {3}, VLAN {4}.
Since 5.0 Reference
|
esx.audit.net.vdl2.mappingtable.full
|
warning
|
ESXHostNetwork
|
esx.audit.net.vdl2.mappingtable.full| Mapping table
entries of VDL2 network {1} on DVS {2} are exhausted. This network
might suffer low performance.
Since 5.0 Reference
|
esx.audit.net.vdl2.route.change
|
warning
|
ESXHostNetwork
|
esx.audit.net.vdl2.route.change| VDL2 IP interface on
vmknic: {1}, DVS: {2}, VLAN: {3} default route changed.
Since 5.0 Reference
|
esx.audit.shell.disabled
|
info
|
VC
|
esx.audit.shell.disabled| The ESX command line shell
has been disabled.
Since 5.0 Reference
|
esx.audit.shell.enabled
|
info
|
VC
|
esx.audit.shell.enabled| The ESX command line shell
has been enabled.
Since 5.0 Reference
|
esx.audit.ssh.disabled
|
info
|
VC
|
esx.audit.ssh.disabled| SSH access has been
disabled.
Since 5.0 Reference
|
esx.audit.ssh.enabled
|
info
|
VC
|
esx.audit.ssh.enabled| SSH access has been
enabled.
Since 5.0 Reference
|
esx.audit.usb.config.changed
|
info
|
VC
|
esx.audit.usb.config.changed| USB configuration has
changed on host {host.name} in cluster {computeResource.name} in
{datacenter.name}.
Since 5.0 Reference
|
esx.audit.uw.secpolicy.alldomains.level.changed
|
warning
|
VC
|
esx.audit.uw.secpolicy.alldomains.level.changed| The
enforcement level for all security domains has been changed to {1}.
The enforcement level must always be set to enforcing.
Since 5.0 Reference
|
esx.audit.uw.secpolicy.domain.level.changed
|
warning
|
VC
|
esx.audit.uw.secpolicy.domain.level.changed| The
enforcement level for security domain {1} has been changed to {2}.
The enforcement level must always be set to enforcing.
Since 5.0 Reference
|
esx.audit.vmfs.lvm.device.discovered
|
info
|
VC
|
esx.audit.vmfs.lvm.device.discovered| One or more LVM
devices have been discovered on this host.
Since 5.0 Reference
|
esx.audit.vmfs.volume.mounted
|
info
|
VC
|
esx.audit.vmfs.volume.mounted| File system {1} on
volume {2} has been mounted in {3} mode on this host.
Since 5.0 Reference
|
esx.audit.vmfs.volume.umounted
|
info
|
VC
|
esx.audit.vmfs.volume.umounted| The volume {1} has
been safely unmounted. The datastore is no longer accessible on
this host.
Since 5.0 Reference
|
esx.audit.vsan.clustering.enabled
|
info
|
VC
|
esx.audit.vsan.clustering.enabled| VSAN clustering
and directory services have been enabled.
Since 5.5 Reference
|
esx.clear.coredump.configured
|
info
|
VC
|
esx.clear.coredump.configured| A vmkcore disk
partition is available and/or a network coredump server has been
configured. Host core dumps will be saved.
Since 5.1 Reference
|
esx.clear.net.connectivity.restored
|
info
|
ESXHostNetwork
|
esx.clear.net.connectivity.restored| Network
connectivity restored on virtual switch {1}, portgroups: {2}.
Physical NIC {3} is up.
Since 4.1 Reference
|
esx.clear.net.dvport.connectivity.restored
|
info
|
ESXHostNetwork
|
esx.clear.net.dvport.connectivity.restored| Network
connectivity restored on DVPorts: {1}. Physical NIC {2} is
up.
Since 4.1 Reference
|
esx.clear.net.dvport.redundancy.restored
|
info
|
ESXHostNetwork
|
esx.clear.net.dvport.redundancy.restored| Uplink
redundancy restored on DVPorts: {1}. Physical NIC {2} is up.
Since 4.1 Reference
|
esx.clear.net.lacp.lag.transition.up
|
info
|
VC
|
esx.clear.net.lacp.lag.transition.up| LACP info: LAG
{1} on VDS {2} is up.
Since 5.5 Reference
|
esx.clear.net.lacp.uplink.transition.up
|
info
|
ESXHostNetwork
|
esx.clear.net.lacp.uplink.transition.up| LACP info:
uplink {1} on VDS {2} was moved into the link aggregation group.
Since 5.1 Reference
|
esx.clear.net.lacp.uplink.unblocked
|
info
|
ESXHostNetwork
|
esx.clear.net.lacp.uplink.unblocked| LACP error:
uplink {1} on VDS {2} is unblocked.
Since 5.1 Reference
|
esx.clear.net.redundancy.restored
|
info
|
ESXHostNetwork
|
esx.clear.net.redundancy.restored| Uplink redundancy
restored on virtual switch {1}, portgroups: {2}. Physical NIC {3}
is up.
Since 4.1 Reference
|
esx.clear.net.vmnic.linkstate.up
|
info
|
ESXHostNetwork
|
esx.clear.net.vmnic.linkstate.up| Physical NIC {1}
linkstate is up.
Since 4.1 Reference
|
esx.clear.scsi.device.io.latency.improved
|
info
|
ESXHostStorage
|
esx.clear.scsi.device.io.latency.improved| Device {1}
performance has improved. I/O latency reduced from {2} microseconds
to {3} microseconds.
Since 5.0 Reference
|
esx.clear.scsi.device.state.on
|
info
|
ESXHostStorage
|
esx.clear.scsi.device.state.on| Device {1} has been
turned on administratively.
Since 5.0 Reference
|
esx.clear.scsi.device.state.permanentloss.deviceonline
|
info
|
ESXHostStorage
|
esx.clear.scsi.device.state.permanentloss.deviceonline|
Device {1}, which was permanently inaccessible, is now online. No
data consistency guarantees.
Since 5.0 Reference
|
esx.clear.storage.apd.exit
|
info
|
ESXHostStorage
|
esx.clear.storage.apd.exit| Device or filesystem with
identifier [{1}] has exited the All Paths Down state.
Since 5.1 Reference
|
esx.clear.storage.connectivity.restored
|
info
|
ESXHostStorage
|
esx.clear.storage.connectivity.restored| Connectivity
to storage device {1} (Datastores: {2}) restored. Path {3} is
active again.
Since 4.1 Reference
|
esx.clear.storage.redundancy.restored
|
info
|
ESXHostStorage
|
esx.clear.storage.redundancy.restored| Path
redundancy to storage device {1} (Datastores: {2}) restored. Path
{3} is active again.
Since 4.1 Reference
|
esx.clear.vsan.clustering.enabled
|
info
|
VC
|
esx.clear.vsan.clustering.enabled| VSAN clustering
and directory services have now been enabled.
Since 5.5 Reference
|
esx.clear.vsan.network.available
|
info
|
VC
|
esx.clear.vsan.network.available|
event.esx.clear.vsan.network.available.fullFormat
Since 5.5 Reference
|
esx.clear.vsan.vmknic.ready
|
info
|
VC
|
esx.clear.vsan.vmknic.ready|
event.esx.clear.vsan.vmknic.ready.fullFormat
Since 5.5 Reference
|
esx.problem.3rdParty.error
|
error
|
VC
|
esx.problem.3rdParty.error| A 3rd party component,
{1}, running on ESXi has reported an error. Please follow the
knowledge base link ({2}) to see the steps to remedy the problem as
reported by {3}. The message reported is: {4}.
Since 5.0 Reference
|
esx.problem.3rdParty.info
|
info
|
VC
|
esx.problem.3rdParty.info|
event.esx.problem.3rdParty.info.fullFormat
Since 5.0 Reference
|
esx.problem.3rdParty.warning
|
warning
|
VC
|
esx.problem.3rdParty.warning| A 3rd party component,
{1}, running on ESXi has reported a warning related to a problem.
Please follow the knowledge base link ({2}) to see the steps to
remedy the problem as reported by {3}. The message reported is:
{4}.
Since 5.0 Reference
|
esx.problem.apei.bert.memory.error.corrected
|
error
|
ESXHostHardware
|
esx.problem.apei.bert.memory.error.corrected| A
corrected memory error occurred in the last boot. The following details
were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node:
{3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8},
Column: {9} Error type: {10}
Since 4.1 Reference
|
esx.problem.apei.bert.memory.error.fatal
|
error
|
ESXHostHardware
|
esx.problem.apei.bert.memory.error.fatal| A fatal
memory error occurred in the last boot. The following details were
reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3},
Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column:
{9} Error type: {10}
Since 4.1 Reference
|
esx.problem.apei.bert.memory.error.recoverable
|
error
|
ESXHostHardware
|
esx.problem.apei.bert.memory.error.recoverable| A
recoverable memory error occurred in the last boot. The following
details were reported. Physical Addr: {1}, Physical Addr Mask: {2},
Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row:
{8}, Column: {9} Error type: {10}
Since 4.1 Reference
|
esx.problem.apei.bert.pcie.error.corrected
|
error
|
ESXHostHardware
|
esx.problem.apei.bert.pcie.error.corrected| A
corrected PCIe error occurred in the last boot. The following details
were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function:
{4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register:
{8}, Status Register: {9}.
Since 4.1 Reference
|
esx.problem.apei.bert.pcie.error.fatal
|
error
|
ESXHostHardware
|
esx.problem.apei.bert.pcie.error.fatal| The platform
encountered a fatal PCIe error in the last boot. The following details
were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function:
{4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register:
{8}, Status Register: {9}.
Since 4.1 Reference
|
esx.problem.apei.bert.pcie.error.recoverable
|
error
|
ESXHostHardware
|
esx.problem.apei.bert.pcie.error.recoverable| A
recoverable PCIe error occurred in the last boot. The following details
were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function:
{4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register:
{8}, Status Register: {9}.
Since 4.1 Reference
|
esx.problem.application.core.dumped
|
warning
|
ESXHost
|
esx.problem.application.core.dumped| An application
({1}) running on ESXi host has crashed ({2} time(s) so far). A core
file might have been created at {3}.
Since 5.0 Reference
|
esx.problem.coredump.unconfigured
|
warning
|
ESXHost
|
esx.problem.coredump.unconfigured| No vmkcore disk
partition is available and no network coredump server has been
configured. Host core dumps cannot be saved.
Since 5.0 Reference
|
esx.problem.cpu.amd.mce.dram.disabled
|
error
|
ESXHostHardware
|
esx.problem.cpu.amd.mce.dram.disabled| DRAM ECC not
enabled. Please enable it in BIOS.
Since 5.0 Reference
|
esx.problem.cpu.intel.ioapic.listing.error
|
error
|
ESXHostHardware
|
esx.problem.cpu.intel.ioapic.listing.error| Not all
IO-APICs are listed in the DMAR. Not enabling interrupt remapping
on this platform.
Since 5.0 Reference
|
esx.problem.cpu.mce.invalid
|
error
|
ESXHostHardware
|
esx.problem.cpu.mce.invalid| MCE monitoring will be
disabled as an unsupported CPU was detected. Please consult the ESX
HCL for information on supported hardware.
Since 5.0 Reference
|
esx.problem.cpu.smp.ht.invalid
|
error
|
ESXHostHardware
|
esx.problem.cpu.smp.ht.invalid| Disabling
HyperThreading due to invalid configuration: Number of threads:
{1}, Number of PCPUs: {2}.
Since 5.0 Reference
|
esx.problem.cpu.smp.ht.numpcpus.max
|
error
|
ESXHostHardware
|
esx.problem.cpu.smp.ht.numpcpus.max| Found {1} PCPUs,
but only using {2} of them due to specified limit.
Since 5.0 Reference
|
esx.problem.cpu.smp.ht.partner.missing
|
warning
|
ESXHostHardware
|
esx.problem.cpu.smp.ht.partner.missing| Disabling
HyperThreading due to invalid configuration: HT partner {1} is
missing from PCPU {2}.
Since 5.0 Reference
|
esx.problem.dhclient.lease.none
|
error
|
ESXHostNetwork
|
esx.problem.dhclient.lease.none| Unable to obtain a
DHCP lease on interface {1}.
Since 5.0 Reference
|
esx.problem.dhclient.lease.offered.error
|
warning
|
ESXHostNetwork
|
esx.problem.dhclient.lease.offered.error|
event.esx.problem.dhclient.lease.offered.error.fullFormat
Since 5.0 Reference
|
esx.problem.dhclient.lease.persistent.none
|
warning
|
ESXHostNetwork
|
esx.problem.dhclient.lease.persistent.none| No
working DHCP leases in persistent database.
Since 5.0 Reference
|
esx.problem.esximage.install.error
|
warning
|
VC
|
esx.problem.esximage.install.error| Could not install
image profile: {1}
Since 5.0 Reference
|
esx.problem.esximage.install.invalidhardware
|
warning
|
VC
|
esx.problem.esximage.install.invalidhardware| Host
doesn't meet image profile '{1}' hardware requirements: {2}
Since 5.0 Reference
|
esx.problem.esximage.install.stage.error
|
warning
|
VC
|
esx.problem.esximage.install.stage.error| Could not
stage image profile '{1}': {2}
Since 5.0 Reference
|
esx.problem.hardware.acpi.interrupt.routing.device.invalid
|
warning
|
ESXHostHardware
|
esx.problem.hardware.acpi.interrupt.routing.device.invalid|
Skipping interrupt routing entry with bad device number: {1}. This
is a BIOS bug.
Since 5.0 Reference
|
esx.problem.hardware.acpi.interrupt.routing.pin.invalid
|
warning
|
ESXHostHardware
|
esx.problem.hardware.acpi.interrupt.routing.pin.invalid|
Skipping interrupt routing entry with bad device pin: {1}. This is
a BIOS bug.
Since 5.0 Reference
|
esx.problem.hardware.ioapic.missing
|
warning
|
ESXHostHardware
|
esx.problem.hardware.ioapic.missing| IOAPIC Num {1}
is missing. Please check BIOS settings to enable this
IOAPIC.
Since 5.0 Reference
|
esx.problem.host.coredump
|
warning
|
ESXHost
|
esx.problem.host.coredump| An unread host kernel core
dump has been found.
Since 5.0 Reference
|
esx.problem.hostd.core.dumped
|
warning
|
ESXHost
|
esx.problem.hostd.core.dumped| {1} crashed ({2}
time(s) so far) and a core file might have been created at {3}.
This might have caused connections to the host to be
dropped.
Since 5.0 Reference
|
esx.problem.iorm.badversion
|
warning
|
ESXHostStorage
|
esx.problem.iorm.badversion| Host {1} cannot
participate in Storage I/O Control (SIOC) on datastore {2} because
the version number {3} of the SIOC agent on this host is
incompatible with the version number {4} of its counterparts on
other hosts connected to this datastore.
Since 5.0 Reference
|
esx.problem.iorm.nonviworkload
|
warning
|
ESXHostStorage
|
esx.problem.iorm.nonviworkload| External I/O
activity is detected on datastore {1}; this is an unsupported
configuration. Consult the Resource Management Guide or follow the
Ask VMware link for more information.
Since 4.1 Reference
|
esx.problem.migrate.vmotion.default.heap.create.failed
|
error
|
Cluster
|
esx.problem.migrate.vmotion.default.heap.create.failed|
Failed to create default migration heap. This might be the result
of severe host memory pressure or virtual address space exhaustion.
Migration might still be possible, but will be unreliable in cases
of extreme host memory pressure.
Since 5.0 Reference
|
esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown
|
warning
|
Cluster
|
esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown|
The ESXi host's vMotion network server encountered an error while
monitoring incoming network connections. Shutting down listener
socket. vMotion might not be possible with this host until vMotion
is manually re-enabled. Failure status: {1}
Since 5.0 Reference
|
esx.problem.net.connectivity.lost
|
error
|
ESXHostNetwork
|
esx.problem.net.connectivity.lost| Lost network
connectivity on virtual switch {1}. Physical NIC {2} is down.
Affected portgroups: {3}.
Since 4.1 Reference
|
esx.problem.net.dvport.connectivity.lost
|
error
|
ESXHostNetwork
|
esx.problem.net.dvport.connectivity.lost| Lost
network connectivity on DVPorts: {1}. Physical NIC {2} is
down.
Since 4.1 Reference
|
esx.problem.net.dvport.redundancy.degraded
|
warning
|
ESXHostNetwork
|
esx.problem.net.dvport.redundancy.degraded| Uplink
redundancy degraded on DVPorts: {1}. Physical NIC {2} is
down.
Since 4.1 Reference
|
esx.problem.net.dvport.redundancy.lost
|
warning
|
ESXHostNetwork
|
esx.problem.net.dvport.redundancy.lost| Lost uplink
redundancy on DVPorts: {1}. Physical NIC {2} is down.
Since 4.1 Reference
|
esx.problem.net.e1000.tso6.notsupported
|
error
|
ESXHostNetwork
|
esx.problem.net.e1000.tso6.notsupported|
Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets
ignored. Manually disable TSO inside the guest operating system in
virtual machine {1}, or use a different virtual adapter.
Since 4.1 Reference
|
esx.problem.net.fence.port.badfenceid
|
warning
|
ESXHostNetwork
|
esx.problem.net.fence.port.badfenceid| VMkernel
failed to set fenceId {1} on distributed virtual port {2} on switch
{3}. Reason: invalid fenceId.
Since 5.0 Reference
|
esx.problem.net.fence.resource.limited
|
warning
|
ESXHostNetwork
|
esx.problem.net.fence.resource.limited| VMkernel
failed to set fenceId {1} on distributed virtual port {2} on switch
{3}. Reason: maximum number of fence networks or ports have been
reached.
Since 5.0 Reference
|
esx.problem.net.fence.switch.unavailable
|
warning
|
ESXHostNetwork
|
esx.problem.net.fence.switch.unavailable| VMkernel
failed to set fenceId {1} on distributed virtual port {2} on switch
{3}. Reason: dvSwitch fence property is not set.
Since 5.0 Reference
|
esx.problem.net.firewall.config.failed
|
error
|
ESXHostNetwork
|
esx.problem.net.firewall.config.failed| Firewall
configuration operation '{1}' failed. The changes were not applied
to rule set {2}.
Since 5.0 Reference
|
esx.problem.net.firewall.port.hookfailed
|
error
|
ESXHostNetwork
|
esx.problem.net.firewall.port.hookfailed| Adding port
{1} to Firewall failed.
Since 5.0 Reference
|
esx.problem.net.gateway.set.failed
|
error
|
ESXHostNetwork
|
esx.problem.net.gateway.set.failed| Cannot connect to
the specified gateway {1}. Failed to set it.
Since 5.0 Reference
|
esx.problem.net.heap.belowthreshold
|
warning
|
ESXHostNetwork
|
esx.problem.net.heap.belowthreshold| {1} heap free
size dropped below {2} percent.
Since 5.0 Reference
|
esx.problem.net.lacp.lag.transition.down
|
warning
|
VC
|
esx.problem.net.lacp.lag.transition.down| LACP
warning: LAG {1} on VDS {2} is down.
Since 5.5 Reference
|
esx.problem.net.lacp.peer.noresponse
|
error
|
ESXHostNetwork
|
esx.problem.net.lacp.peer.noresponse| Lacp error: No
peer response on uplink {1} for VDS {2}.
Since 5.1 Reference
|
esx.problem.net.lacp.policy.incompatible
|
error
|
ESXHostNetwork
|
esx.problem.net.lacp.policy.incompatible| Lacp error:
Current teaming policy on VDS {1} is incompatible, supported is IP
hash only.
Since 5.1 Reference
|
esx.problem.net.lacp.policy.linkstatus
|
error
|
ESXHostNetwork
|
esx.problem.net.lacp.policy.linkstatus| Lacp error:
Current teaming policy on VDS {1} is incompatible, supported link
failover detection is link status only.
Since 5.1 Reference
|
esx.problem.net.lacp.uplink.blocked
|
warning
|
ESXHostNetwork
|
esx.problem.net.lacp.uplink.blocked| Lacp warning:
uplink {1} on VDS {2} is blocked.
Since 5.1 Reference
|
esx.problem.net.lacp.uplink.disconnected
|
warning
|
ESXHostNetwork
|
esx.problem.net.lacp.uplink.disconnected| Lacp
warning: uplink {1} on VDS {2} got disconnected.
Since 5.1 Reference
|
esx.problem.net.lacp.uplink.fail.duplex
|
error
|
ESXHostNetwork
|
esx.problem.net.lacp.uplink.fail.duplex| Lacp error:
Duplex mode across all uplink ports must be full, VDS {1} uplink
{2} has different mode.
Since 5.1 Reference
|
esx.problem.net.lacp.uplink.fail.speed
|
error
|
ESXHostNetwork
|
esx.problem.net.lacp.uplink.fail.speed| Lacp error:
Speed across all uplink ports must be same, VDS {1} uplink {2} has
different speed.
Since 5.1 Reference
|
esx.problem.net.lacp.uplink.inactive
|
error
|
ESXHostNetwork
|
esx.problem.net.lacp.uplink.inactive| Lacp error: All
uplinks on VDS {1} must be active.
Since 5.1 Reference
|
esx.problem.net.lacp.uplink.transition.down
|
warning
|
ESXHostNetwork
|
esx.problem.net.lacp.uplink.transition.down| Lacp
warning: uplink {1} on VDS {2} is moved out of link aggregation
group.
Since 5.1 Reference
|
esx.problem.net.migrate.bindtovmk
|
warning
|
ESXHostNetwork
|
esx.problem.net.migrate.bindtovmk| The ESX advanced
configuration option /Migrate/Vmknic is set to an invalid vmknic:
{1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for
improved performance. Update the configuration option with a valid
vmknic. Alternatively, if you do not want vMotion to bind to a
specific vmknic, remove the invalid vmknic and leave the option
blank.
Since 4.1 Reference
|
esx.problem.net.migrate.unsupported.latency
|
warning
|
ESXHostNetwork
|
esx.problem.net.migrate.unsupported.latency| ESXi has
detected {1}ms round-trip vMotion network latency between host {2}
and {3}. High latency vMotion networks are supported only if both
ESXi hosts have been configured for vMotion latency
tolerance.
Since 5.0 Reference
|
esx.problem.net.portset.port.full
|
warning
|
ESXHostNetwork
|
esx.problem.net.portset.port.full| Portset {1} has
reached the maximum number of ports ({2}). Cannot apply for any
more free ports.
Since 5.0 Reference
|
esx.problem.net.portset.port.vlan.invalidid
|
warning
|
ESXHostNetwork
|
esx.problem.net.portset.port.vlan.invalidid| {1}
VLANID {2} is invalid. VLAN ID must be between 0 and 4095.
Since 5.0 Reference
|
esx.problem.net.proxyswitch.port.unavailable
|
warning
|
ESXHostNetwork
|
esx.problem.net.proxyswitch.port.unavailable| Virtual
NIC with hardware address {1} failed to connect to distributed
virtual port {2} on switch {3}. There are no more ports available
on the host proxy switch.
Since 4.1 Reference
|
esx.problem.net.redundancy.degraded
|
warning
|
ESXHostNetwork
|
esx.problem.net.redundancy.degraded| Uplink
redundancy degraded on virtual switch {1}. Physical NIC {2} is
down. Affected portgroups:{3}.
Since 4.1 Reference
|
esx.problem.net.redundancy.lost
|
warning
|
ESXHostNetwork
|
esx.problem.net.redundancy.lost| Lost uplink
redundancy on virtual switch {1}. Physical NIC {2} is down.
Affected portgroups:{3}.
Since 4.1 Reference
|
esx.problem.net.uplink.mtu.failed
|
warning
|
ESXHostNetwork
|
esx.problem.net.uplink.mtu.failed| VMkernel failed to
set the MTU value {1} on the uplink {2}.
Since 4.1 Reference
|
esx.problem.net.vdl2.instance.initialization.fail
|
error
|
ESXHostNetwork
|
esx.problem.net.vdl2.instance.initialization.fail|
VDL2 instance on DVS {1} initialization failed.
Since 5.0 Reference
|
esx.problem.net.vdl2.instance.notexist
|
error
|
ESXHostNetwork
|
esx.problem.net.vdl2.instance.notexist| VDL2 overlay
instance is not created on DVS {1} before initializing VDL2 port or
VDL2 IP interface.
Since 5.0 Reference
|
esx.problem.net.vdl2.mcastgroup.fail
|
error
|
ESXHostNetwork
|
esx.problem.net.vdl2.mcastgroup.fail| VDL2 IP
interface on vmknic: {1}, DVS: {2}, VLAN: {3} failed to join
multicast group: {4}.
Since 5.0 Reference
|
esx.problem.net.vdl2.network.initialization.fail
|
error
|
ESXHostNetwork
|
esx.problem.net.vdl2.network.initialization.fail|
VDL2 network {1} on DVS {2} initialization failed.
Since 5.0 Reference
|
esx.problem.net.vdl2.port.initialization.fail
|
error
|
ESXHostNetwork
|
esx.problem.net.vdl2.port.initialization.fail| VDL2
port {1} on VDL2 network {2}, DVS {3} initialization failed.
Since 5.0 Reference
|
esx.problem.net.vdl2.vmknic.fail
|
error
|
ESXHostNetwork
|
esx.problem.net.vdl2.vmknic.fail| VDL2 IP interface
failed on vmknic {1}, port {2}, DVS {3}, VLAN {4}.
Since 5.0 Reference
|
esx.problem.net.vdl2.vmknic.notexist
|
error
|
ESXHostNetwork
|
esx.problem.net.vdl2.vmknic.notexist| VDL2 IP
interface does not exist on DVS {1}, VLAN {2}.
Since 5.0 Reference
|
esx.problem.net.vmknic.ip.duplicate
|
warning
|
ESXHostNetwork
|
esx.problem.net.vmknic.ip.duplicate| A duplicate IP
address was detected for {1} on the interface {2}. The current
owner is {3}.
Since 4.1 Reference
|
esx.problem.net.vmnic.linkstate.down
|
warning
|
ESXHostNetwork
|
esx.problem.net.vmnic.linkstate.down| Physical NIC
{1} linkstate is down.
Since 4.1 Reference
|
esx.problem.net.vmnic.linkstate.flapping
|
warning
|
ESXHostNetwork
|
esx.problem.net.vmnic.linkstate.flapping| Taking down
physical NIC {1} because the link is unstable.
Since 5.0 Reference
|
esx.problem.net.vmnic.watchdog.reset
|
warning
|
ESXHostNetwork
|
esx.problem.net.vmnic.watchdog.reset| Uplink {1} has
recovered from a transient failure due to watchdog timeout
Since 4.1 Reference
|
esx.problem.ntpd.clock.correction.error
|
warning
|
ESXHost
|
esx.problem.ntpd.clock.correction.error| NTP daemon
stopped. Time correction {1} > {2} seconds. Manually set the
time and restart ntpd.
Since 5.0 Reference
|
esx.problem.pageretire.platform.retire.request
|
info
|
VC
|
esx.problem.pageretire.platform.retire.request|
Memory page retirement requested by platform firmware. FRU ID: {1}.
Refer to System Hardware Log: {2}
Since 5.0 Reference
|
esx.problem.pageretire.selectedmpnthreshold.host.exceeded
|
warning
|
ESXHost
|
esx.problem.pageretire.selectedmpnthreshold.host.exceeded|
Number of host physical memory pages that have been selected for
retirement ({1}) exceeds threshold ({2}).
Since 5.0 Reference
|
esx.problem.pageretire.selectedmpnthreshold.kernel.exceeded
|
warning
|
ESXHost
|
esx.problem.pageretire.selectedmpnthreshold.kernel.exceeded|
Number of kernel physical memory pages that have been selected for
retirement ({1}) exceeds threshold ({2}).
Since 5.0 Reference
|
esx.problem.pageretire.selectedmpnthreshold.userclient.exceeded
|
warning
|
ESXHost
|
esx.problem.pageretire.selectedmpnthreshold.userclient.exceeded|
Number of physical memory pages belonging to (user) memory client
{1} that have been selected for retirement ({2}) exceeds threshold
({3}).
Since 5.0 Reference
|
esx.problem.pageretire.selectedmpnthreshold.userprivate.exceeded
|
warning
|
ESXHost
|
esx.problem.pageretire.selectedmpnthreshold.userprivate.exceeded|
Number of private user physical memory pages that have been
selected for retirement ({1}) exceeds threshold ({2}).
Since 5.0 Reference
|
esx.problem.pageretire.selectedmpnthreshold.usershared.exceeded
|
warning
|
ESXHost
|
esx.problem.pageretire.selectedmpnthreshold.usershared.exceeded|
Number of shared user physical memory pages that have been selected
for retirement ({1}) exceeds threshold ({2}).
Since 5.0 Reference
|
esx.problem.pageretire.selectedmpnthreshold.vmmclient.exceeded
|
warning
|
ESXHost
|
esx.problem.pageretire.selectedmpnthreshold.vmmclient.exceeded|
Number of physical memory pages belonging to (vmm) memory client
{1} that have been selected for retirement ({2}) exceeds threshold
({3}).
Since 5.0 Reference
|
esx.problem.scsi.apd.event.descriptor.alloc.failed
|
error
|
ESXHostStorage
|
esx.problem.scsi.apd.event.descriptor.alloc.failed|
No memory to allocate APD (All Paths Down) event subsystem.
Since 5.0 Reference
|
esx.problem.scsi.device.close.failed
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.close.failed| Failed to
close the device {1} properly, plugin {2}.
Since 5.0 Reference
|
esx.problem.scsi.device.detach.failed
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.detach.failed| Detach failed
for device: {1}. Exceeded the number of devices that can be
detached; please clean up stale detach entries.
Since 5.0 Reference
|
esx.problem.scsi.device.filter.attach.failed
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.filter.attach.failed| Failed
to attach filters to device '%s' during registration. Plugin load
failed or the filter rules are incorrect.
Since 5.0 Reference
|
esx.problem.scsi.device.io.bad.plugin.type
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.io.bad.plugin.type| Bad
plugin type for device {1}, plugin {2}
Since 5.0 Reference
|
esx.problem.scsi.device.io.inquiry.failed
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.io.inquiry.failed| Failed to
get standard inquiry for device {1} from Plugin {2}.
Since 5.0 Reference
|
esx.problem.scsi.device.io.invalid.disk.qfull.value
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.io.invalid.disk.qfull.value|
QFullSampleSize should be bigger than QFullThreshold. LUN queue
depth throttling algorithm will not function as expected. Please
set the QFullSampleSize and QFullThreshold disk configuration
values in ESX correctly.
Since 5.0 Reference
|
esx.problem.scsi.device.io.latency.high
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.io.latency.high| Device {1}
performance has deteriorated. I/O latency increased from average
value of {2} microseconds to {3} microseconds.
Since 5.0 Reference
|
esx.problem.scsi.device.io.qerr.change.config
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.io.qerr.change.config| QErr
set to 0x{1} for device {2}. This may cause unexpected behavior.
The system is not configured to change the QErr setting of device.
The QErr value supported by system is 0x{3}. Please check the SCSI
ChangeQErrSetting configuration value for ESX.
Since 5.0 Reference
|
esx.problem.scsi.device.io.qerr.changed
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.io.qerr.changed| QErr set to
0x{1} for device {2}. This may cause unexpected behavior. The
device was originally configured to the supported QErr setting of
0x{3}, but this has been changed and could not be changed
back.
Since 5.0 Reference
|
esx.problem.scsi.device.is.local.failed
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.is.local.failed| Failed to
verify if the device {1} from plugin {2} is a local - not shared -
device
Since 5.0 Reference
|
esx.problem.scsi.device.is.pseudo.failed
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.is.pseudo.failed| Failed to
verify if the device {1} from plugin {2} is a pseudo device
Since 5.0 Reference
|
esx.problem.scsi.device.is.ssd.failed
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.is.ssd.failed| Failed to
verify if the device {1} from plugin {2} is a Solid State Disk
device
Since 5.0 Reference
|
esx.problem.scsi.device.limitreached
|
error
|
ESXHostStorage
|
esx.problem.scsi.device.limitreached| The maximum
number of supported devices of {1} has been reached. A device from
plugin {2} could not be created.
Since 4.1 Reference
|
esx.problem.scsi.device.state.off
|
info
|
VC
|
esx.problem.scsi.device.state.off| Device {1} has
been turned off administratively.
Since 5.0 Reference
|
esx.problem.scsi.device.state.permanentloss
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.state.permanentloss| Device
{1} has been removed or is permanently inaccessible. Affected
datastores (if any): {2}.
Since 5.0 Reference
|
esx.problem.scsi.device.state.permanentloss.noopens
|
info
|
VC
|
esx.problem.scsi.device.state.permanentloss.noopens|
Permanently inaccessible device {1} has no more opens. It is now
safe to unmount datastores (if any) {2} and delete the
device.
Since 5.0 Reference
|
esx.problem.scsi.device.state.permanentloss.pluggedback
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.state.permanentloss.pluggedback|
Device {1} has been plugged back in after being marked permanently
inaccessible. No data consistency guarantees.
Since 5.0 Reference
|
esx.problem.scsi.device.state.permanentloss.withreservationheld
|
error
|
ESXHostStorage
|
esx.problem.scsi.device.state.permanentloss.withreservationheld|
Device {1} has been removed or is permanently inaccessible, while
holding a reservation. Affected datastores (if any): {2}.
Since 5.0 Reference
|
esx.problem.scsi.device.thinprov.atquota
|
warning
|
ESXHostStorage
|
esx.problem.scsi.device.thinprov.atquota| Space
utilization on thin-provisioned device {1} exceeded configured
threshold. Affected datastores (if any): {2}.
Since 4.1 Reference
|
esx.problem.scsi.scsipath.limitreached
|
error
|
ESXHostStorage
|
esx.problem.scsi.scsipath.limitreached| The maximum
number of supported paths of {1} has been reached. Path {2} could
not be added.
Since 4.1 Reference
|
esx.problem.scsi.unsupported.plugin.type
|
warning
|
ESXHostStorage
|
esx.problem.scsi.unsupported.plugin.type| Scsi Device
Allocation not supported for plugin type {1}
Since 5.0 Reference
|
esx.problem.storage.apd.start
|
warning
|
ESXHostStorage
|
esx.problem.storage.apd.start| Device or filesystem
with identifier [{1}] has entered the All Paths Down state.
Since 5.1 Reference
|
esx.problem.storage.apd.timeout
|
warning
|
ESXHostStorage
|
esx.problem.storage.apd.timeout| Device or filesystem
with identifier [{1}] has entered the All Paths Down Timeout state
after being in the All Paths Down state for {2} seconds. I/Os will
be fast failed.
Since 5.1 Reference
|
esx.problem.storage.connectivity.devicepor
|
warning
|
ESXHostStorage
|
esx.problem.storage.connectivity.devicepor| Frequent
PowerOn Reset Unit Attentions are occurring on device {1}. This
might indicate a storage problem. Affected datastores: {2}.
Since 4.1 Reference
|
esx.problem.storage.connectivity.lost
|
error
|
ESXHostStorage
|
esx.problem.storage.connectivity.lost| Lost
connectivity to storage device {1}. Path {2} is down. Affected
datastores: {3}.
Since 4.1 Reference
|
esx.problem.storage.connectivity.pathpor
|
warning
|
ESXHostStorage
|
esx.problem.storage.connectivity.pathpor| Frequent
PowerOn Reset Unit Attentions are occurring on path {1}. This might
indicate a storage problem. Affected device: {2}. Affected
datastores: {3}
Since 4.1 Reference
|
esx.problem.storage.connectivity.pathstatechanges
|
warning
|
ESXHostStorage
|
esx.problem.storage.connectivity.pathstatechanges|
Frequent path state changes are occurring for path {1}. This might
indicate a storage problem. Affected device: {2}. Affected
datastores: {3}
Since 4.1 Reference
|
esx.problem.storage.iscsi.discovery.connect.error
|
warning
|
ESXHostStorage
|
esx.problem.storage.iscsi.discovery.connect.error|
iSCSI discovery to {1} on {2} failed. The iSCSI Initiator could not
establish a network connection to the discovery address.
Since 5.0 Reference
|
esx.problem.storage.iscsi.discovery.login.error
|
warning
|
ESXHostStorage
|
esx.problem.storage.iscsi.discovery.login.error|
iSCSI discovery to {1} on {2} failed. The Discovery target returned
a login error of: {3}.
Since 5.0 Reference
|
esx.problem.storage.iscsi.target.connect.error
|
warning
|
ESXHostStorage
|
esx.problem.storage.iscsi.target.connect.error| Login
to iSCSI target {1} on {2} failed. The iSCSI initiator could not
establish a network connection to the target.
Since 5.0 Reference
|
esx.problem.storage.iscsi.target.login.error
|
warning
|
ESXHostStorage
|
esx.problem.storage.iscsi.target.login.error| Login
to iSCSI target {1} on {2} failed. Target returned login error of:
{3}.
Since 5.0 Reference
|
esx.problem.storage.iscsi.target.permanently.lost
|
error
|
ESXHostStorage
|
esx.problem.storage.iscsi.target.permanently.lost|
The iSCSI target {2} was permanently removed from {1}.
Since 5.1 Reference
|
esx.problem.storage.redundancy.degraded
|
warning
|
ESXHostStorage
|
esx.problem.storage.redundancy.degraded| Path
redundancy to storage device {1} degraded. Path {2} is down.
Affected datastores: {3}.
Since 4.1 Reference
|
esx.problem.storage.redundancy.lost
|
warning
|
ESXHostStorage
|
esx.problem.storage.redundancy.lost| Lost path
redundancy to storage device {1}. Path {2} is down. Affected
datastores: {3}.
Since 4.1 Reference
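When consuming this reference programmatically (for example, to decide which event IDs should raise an alert), each entry reduces to an (id, severity, source) triple. A small sketch, using a hand-copied subset of the storage entries above:

```python
from collections import defaultdict

# Transcribed from the reference entries above: (event id, severity, source).
EVENTS = [
    ("esx.problem.storage.connectivity.lost", "error", "ESXHostStorage"),
    ("esx.problem.storage.redundancy.degraded", "warning", "ESXHostStorage"),
    ("esx.problem.storage.redundancy.lost", "warning", "ESXHostStorage"),
    ("esx.problem.storage.apd.timeout", "warning", "ESXHostStorage"),
]

# Group event IDs by severity so alerting rules can key off the severity level.
by_severity = defaultdict(list)
for event_id, severity, source in EVENTS:
    by_severity[severity].append(event_id)

errors = by_severity["error"]
warnings = by_severity["warning"]
```

The same triples can be fed to a vCenter event filter (for example, an `eventTypeId` list), though that wiring is outside the scope of this sketch.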
|
esx.problem.syslog.config
|
warning
|
ESXHost
|
esx.problem.syslog.config| System logging is not
configured on host {host.name}. Please check Syslog options for the
host under Configuration -> Software -> Advanced Settings in
vSphere client.
Since 5.0 Reference
|
esx.problem.syslog.nonpersistent
|
warning
|
ESXHost
|
esx.problem.syslog.nonpersistent| System logs on host
{host.name} are stored on non-persistent storage. Consult product
documentation to configure a syslog server or a scratch
partition.
Since 5.1 Reference
|
esx.problem.vfat.filesystem.full.other
|
warning
|
ESXHostStorage
|
esx.problem.vfat.filesystem.full.other| The VFAT
filesystem {1} (UUID {2}) is full.
Since 5.0 Reference
|
esx.problem.vfat.filesystem.full.scratch
|
warning
|
ESXHostStorage
|
esx.problem.vfat.filesystem.full.scratch| The host's
scratch partition, which is the VFAT filesystem {1} (UUID {2}), is
full.
Since 5.0 Reference
|
esx.problem.visorfs.failure
|
error
|
ESXHostStorage
|
esx.problem.visorfs.failure| An operation on the root
filesystem has failed.
Since 5.0 Reference
|
esx.problem.visorfs.inodetable.full
|
warning
|
ESXHostStorage
|
esx.problem.visorfs.inodetable.full| The root
filesystem's file table is full. As a result, the file {1} could
not be created by the application '{2}'.
Since 5.0 Reference
|
esx.problem.visorfs.ramdisk.full
|
warning
|
ESXHostStorage
|
esx.problem.visorfs.ramdisk.full| The ramdisk '{1}'
is full. As a result, the file {2} could not be written.
Since 5.0 Reference
|
esx.problem.visorfs.ramdisk.inodetable.full
|
error
|
ESXHostStorage
|
esx.problem.visorfs.ramdisk.inodetable.full| The file
table of the ramdisk '{1}' is full. As a result, the file {2} could
not be created by the application '{3}'.
Since 5.1 Reference
|
esx.problem.vm.kill.unexpected.fault.failure
|
error
|
ESXHost
|
esx.problem.vm.kill.unexpected.fault.failure| The VM
using the config file {1} could not fault in a guest physical page
from the hypervisor level swap file at {2}. The VM is terminated as
further progress is impossible.
Since 5.1 Reference
|
esx.problem.vm.kill.unexpected.forcefulPageRetire
|
error
|
ESXHost
|
esx.problem.vm.kill.unexpected.forcefulPageRetire|
The VM using the config file {1} contains the host physical page
{2} which was scheduled for immediate retirement. To avoid system
instability the VM is forcefully powered off.
Since 5.0 Reference
|
esx.problem.vm.kill.unexpected.noSwapResponse
|
error
|
ESXHost
|
esx.problem.vm.kill.unexpected.noSwapResponse| The VM
using the config file {1} did not respond to {2} swap actions in
{3} seconds and is forcefully powered off to prevent system
instability.
Since 5.0 Reference
|
esx.problem.vm.kill.unexpected.vmtrack
|
error
|
ESXHost
|
esx.problem.vm.kill.unexpected.vmtrack| The VM using
the config file {1} is allocating too many pages while the system is
critically low on free memory. It is forcefully terminated to
prevent system instability.
Since 5.1 Reference
|
esx.problem.vmfs.ats.support.lost
|
error
|
ESXHostStorage
|
esx.problem.vmfs.ats.support.lost|
event.esx.problem.vmfs.ats.support.lost.fullFormat
Since 5.1 Reference
|
esx.problem.vmfs.error.volume.is.locked
|
error
|
ESXHostStorage
|
esx.problem.vmfs.error.volume.is.locked| Volume on
device {1} is locked, possibly because some remote host encountered
an error during a volume operation and could not recover.
Since 5.0 Reference
|
esx.problem.vmfs.extent.offline
|
warning
|
ESXHostStorage
|
esx.problem.vmfs.extent.offline| An attached device
{1} may be offline. The file system {2} is now in a degraded state.
While the datastore is still available, parts of data that reside
on the extent that went offline might be inaccessible.
Since 5.0 Reference
|
esx.problem.vmfs.extent.online
|
info
|
ESXHostStorage
|
esx.problem.vmfs.extent.online| Device {1} backing
file system {2} came online. This extent was previously offline.
All resources on this device are now available.
Since 5.0 Reference
|
esx.problem.vmfs.heartbeat.recovered
|
info
|
ESXHostStorage
|
esx.problem.vmfs.heartbeat.recovered| Successfully
restored access to volume {1} ({2}) following connectivity
issues.
Since 4.1 Reference
|
esx.problem.vmfs.heartbeat.timedout
|
warning
|
ESXHostStorage
|
esx.problem.vmfs.heartbeat.timedout| Lost access to
volume {1} ({2}) due to connectivity issues. Recovery attempt is in
progress and outcome will be reported shortly.
Since 4.1 Reference
|
esx.problem.vmfs.heartbeat.unrecoverable
|
error
|
ESXHostStorage
|
esx.problem.vmfs.heartbeat.unrecoverable| Lost
connectivity to volume {1} ({2}) and subsequent recovery attempts
have failed.
Since 4.1 Reference
|
esx.problem.vmfs.journal.createfailed
|
warning
|
ESXHostStorage
|
esx.problem.vmfs.journal.createfailed| No space for
journal on volume {1} ({2}). Opening volume in read-only metadata
mode with limited write support.
Since 4.1 Reference
|
esx.problem.vmfs.lock.corruptondisk
|
error
|
ESXHostStorage
|
esx.problem.vmfs.lock.corruptondisk| At least one
corrupt on-disk lock was detected on volume {1} ({2}). Other
regions of the volume might be damaged too.
Since 4.1 Reference
|
esx.problem.vmfs.nfs.mount.connect.failed
|
error
|
ESXHostStorage
|
esx.problem.vmfs.nfs.mount.connect.failed| Failed to
mount to the server {1} mount point {2}. {3}
Since 4.1 Reference
|
esx.problem.vmfs.nfs.mount.limit.exceeded
|
error
|
ESXHostStorage
|
esx.problem.vmfs.nfs.mount.limit.exceeded| Failed to
mount to the server {1} mount point {2}. {3}
Since 4.1 Reference
|
esx.problem.vmfs.nfs.server.disconnect
|
error
|
ESXHostStorage
|
esx.problem.vmfs.nfs.server.disconnect| Lost
connection to server {1} mount point {2} mounted as {3}
({4}).
Since 4.1 Reference
|
esx.problem.vmfs.nfs.server.restored
|
info
|
ESXHostStorage
|
esx.problem.vmfs.nfs.server.restored| Restored
connection to server {1} mount point {2} mounted as {3}
({4}).
Since 4.1 Reference
|
esx.problem.vmfs.resource.corruptondisk
|
error
|
ESXHostStorage
|
esx.problem.vmfs.resource.corruptondisk| At least one
corrupt resource metadata region was detected on volume {1} ({2}).
Other regions of the volume might be damaged too.
Since 4.1 Reference
|
esx.problem.vmfs.volume.locked
|
error
|
ESXHostStorage
|
esx.problem.vmfs.volume.locked| Volume on device {1}
locked, possibly because remote host {2} encountered an error
during a volume operation and could not recover.
Since 4.1 Reference
|
esx.problem.vmsyslogd.remote.failure
|
error
|
ESXHost
|
esx.problem.vmsyslogd.remote.failure| The host "{1}"
has become unreachable. Remote logging to this host has
stopped.
Since 5.0 Reference
|
esx.problem.vmsyslogd.storage.failure
|
error
|
ESXHost
|
esx.problem.vmsyslogd.storage.failure| Logging to
storage has failed. Logs are no longer being stored locally on this
host.
Since 5.0 Reference
|
esx.problem.vmsyslogd.storage.logdir.invalid
|
error
|
ESXHost
|
esx.problem.vmsyslogd.storage.logdir.invalid| The
configured log directory {1} cannot be used. The default directory
{2} will be used instead.
Since 5.1 Reference
|
esx.problem.vmsyslogd.unexpected
|
warning
|
ESXHost
|
esx.problem.vmsyslogd.unexpected| Log daemon has
failed for an unexpected reason: {1}
Since 5.0 Reference
|
esx.problem.vpxa.core.dumped
|
warning
|
ESXHost
|
esx.problem.vpxa.core.dumped| {1} crashed ({2}
time(s) so far) and a core file might have been created at {3}.
This might have caused connections to the host to be
dropped.
Since 5.0 Reference
|
esx.problem.vsan.clustering.disabled
|
warning
|
VC
|
esx.problem.vsan.clustering.disabled| VSAN clustering
and directory services have been disabled and are no longer
available.
Since 5.5 Reference
|
esx.problem.vsan.net.not.ready
|
warning
|
ESXHostNetwork
|
esx.problem.vsan.net.not.ready| vmknic {1} that is
currently configured to be used with VSAN doesn't have an IP
address yet. There is no other active network configuration, and
therefore the VSAN node doesn't have network connectivity.
Since 5.5 Reference
|
esx.problem.vsan.net.redundancy.lost
|
warning
|
ESXHostNetwork
|
esx.problem.vsan.net.redundancy.lost| VSAN network
configuration doesn't have any redundancy. This might be a problem
if further network configuration is removed.
Since 5.5 Reference
|
esx.problem.vsan.net.redundancy.reduced
|
warning
|
ESXHostNetwork
|
esx.problem.vsan.net.redundancy.reduced| VSAN network
configuration redundancy has been reduced. This might be a problem
if further network configuration is removed.
Since 5.5 Reference
|
esx.problem.vsan.no.network.connectivity
|
error
|
ESXHostNetwork
|
esx.problem.vsan.no.network.connectivity| VSAN
doesn't have any network configuration. This can severely impact
several objects in the VSAN datastore.
Since 5.5 Reference
|
esx.problem.vsan.vmknic.not.ready
|
warning
|
VC
|
esx.problem.vsan.vmknic.not.ready| vmknic {1} that is
currently configured to be used with VSAN doesn't have an IP
address yet. However, there are other network configurations which
are active. If those configurations are removed, that may cause
problems.
Since 5.5 Reference
|
ExitedStandbyModeEvent
|
info
|
VC
|
The host
{host.name} is no longer in standby mode
Since 2.5 Reference
|
ExitingStandbyModeEvent
|
info
|
VC
|
The host
{host.name} is exiting standby mode
Since 4.0 Reference
|
ExitMaintenanceModeEvent
|
info
|
VC
|
Host {host.name}
in {datacenter.name} has exited maintenance mode
Since 2.0 Reference
|
ExitStandbyModeFailedEvent
|
error
|
ESXHost
|
The host
{host.name} could not exit standby mode
Since 4.0 Reference
|
FailoverLevelRestored
|
info
|
VC
|
Sufficient
resources are available to satisfy HA failover level in cluster
{computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
GeneralEvent
|
info
|
VC
|
General event:
{message}
Since 2.0 Reference
|
GeneralHostErrorEvent
|
error
|
ESXHost
|
Error detected on
{host.name} in {datacenter.name}: {message}
Since 2.0 Reference
|
GeneralHostInfoEvent
|
info
|
VC
|
Issue detected on
{host.name} in {datacenter.name}: {message}
Since 2.0 Reference
|
GeneralHostWarningEvent
|
warning
|
ESXHost
|
Issue detected on
{host.name} in {datacenter.name}: {message}
Since 2.0 Reference
|
GeneralUserEvent
|
user
|
VC
|
User logged event:
{message}
Since 2.0 Reference
|
GeneralVmErrorEvent
|
error
|
VirtualMachine
|
Error detected for
{vm.name} on {host.name} in {datacenter.name}: {message}
Since 2.0 Reference
|
GeneralVmInfoEvent
|
info
|
VC
|
Issue detected for
{vm.name} on {host.name} in {datacenter.name}: {message}
Since 2.0 Reference
|
GeneralVmWarningEvent
|
warning
|
VirtualMachine
|
Issue detected for
{vm.name} on {host.name} in {datacenter.name}: {message}
Since 2.0 Reference
|
GhostDvsProxySwitchDetectedEvent
|
info
|
VC
|
The Distributed
Virtual Switch corresponding to the proxy switches {switchUuid} on
the host {host.name} does not exist in vCenter or does not contain
this host.
Since 4.0 Reference
|
GhostDvsProxySwitchRemovedEvent
|
info
|
VC
|
A ghost proxy
switch {switchUuid} on the host {host.name} was resolved.
Since 4.0 Reference
|
GlobalMessageChangedEvent
|
info
|
VC
|
The message
changed: {message}
Since 2.0 Reference
|
hbr.primary.AppQuiescedDeltaCompletedEvent
|
info
|
VC
|
hbr.primary.AppQuiescedDeltaCompletedEvent|
Application consistent delta completed for virtual machine
{vm.name} on host {host.name} in cluster {computeResource.name} in
{datacenter.name} ({bytes} bytes transferred)
Since 5.0 Reference
|
hbr.primary.ConnectionRestoredToHbrServerEvent
|
info
|
VC
|
hbr.primary.ConnectionRestoredToHbrServerEvent|
Connection to replication server restored for virtual machine
{vm.name} on host {host.name} in cluster {computeResource.name} in
{datacenter.name}.
Since 5.0 Reference
|
hbr.primary.DeltaAbortedEvent
|
warning
|
VC
|
hbr.primary.DeltaAbortedEvent| Delta aborted for
virtual machine {vm.name} on host {host.name} in cluster
{computeResource.name} in {datacenter.name}:
{reason.@enum.hbr.primary.ReasonForDeltaAbort}
Since 5.0 Reference
|
hbr.primary.DeltaCompletedEvent
|
info
|
VC
|
hbr.primary.DeltaCompletedEvent| Delta completed for
virtual machine {vm.name} on host {host.name} in cluster
{computeResource.name} in {datacenter.name} ({bytes} bytes
transferred).
Since 5.0 Reference
|
hbr.primary.DeltaStartedEvent
|
info
|
VC
|
hbr.primary.DeltaStartedEvent| Delta started by
{userName} for virtual machine {vm.name} on host {host.name} in
cluster {computeResource.name} in {datacenter.name}.
Since 5.0 Reference
|
hbr.primary.FailedToStartDeltaEvent
|
error
|
VC
|
hbr.primary.FailedToStartDeltaEvent| Failed to start
delta for virtual machine {vm.name} on host {host.name} in cluster
{computeResource.name} in {datacenter.name}:
{reason.@enum.fault.ReplicationVmFault.ReasonForFault}
Since 5.0 Reference
|
hbr.primary.FailedToStartSyncEvent
|
error
|
VC
|
hbr.primary.FailedToStartSyncEvent| Failed to start
full sync for virtual machine {vm.name} on host {host.name} in
cluster {computeResource.name} in {datacenter.name}:
{reason.@enum.fault.ReplicationVmFault.ReasonForFault}
Since 5.0 Reference
|
hbr.primary.FSQuiescedDeltaCompletedEvent
|
warning
|
VC
|
hbr.primary.FSQuiescedDeltaCompletedEvent| File
system consistent delta completed for virtual machine {vm.name} on
host {host.name} in cluster {computeResource.name} in
{datacenter.name} ({bytes} bytes transferred)
Since 5.0 Reference
|
hbr.primary.InvalidDiskReplicationConfigurationEvent
|
warning
|
VC
|
hbr.primary.InvalidDiskReplicationConfigurationEvent|
Replication configuration is invalid for virtual machine {vm.name}
on host {host.name} in cluster {computeResource.name} in
{datacenter.name}, disk {diskKey}:
{reasonForFault.@enum.fault.ReplicationDiskConfigFault.ReasonForFault}
Since 5.0 Reference
|
hbr.primary.InvalidVmReplicationConfigurationEvent
|
warning
|
VC
|
hbr.primary.InvalidVmReplicationConfigurationEvent|
Replication configuration is invalid for virtual machine {vm.name}
on host {host.name} in cluster {computeResource.name} in
{datacenter.name}:
{reasonForFault.@enum.fault.ReplicationVmConfigFault.ReasonForFault}
Since 5.0 Reference
|
hbr.primary.NoConnectionToHbrServerEvent
|
warning
|
VC
|
hbr.primary.NoConnectionToHbrServerEvent| No
connection to replication server for virtual machine {vm.name} on
host {host.name} in cluster {computeResource.name} in
{datacenter.name}:
{reason.@enum.hbr.primary.ReasonForNoServerConnection}
Since 5.0 Reference
|
hbr.primary.NoProgressWithHbrServerEvent
|
warning
|
VC
|
hbr.primary.NoProgressWithHbrServerEvent| Replication
server error for virtual machine {vm.name} on host {host.name} in
cluster {computeResource.name} in {datacenter.name}:
{reason.@enum.hbr.primary.ReasonForNoServerProgress}
Since 5.0 Reference
|
hbr.primary.QuiesceNotSupported
|
warning
|
VC
|
hbr.primary.QuiesceNotSupported| Quiescing is not
supported for virtual machine {vm.name} on host {host.name} in
cluster {computeResource.name} in {datacenter.name}.
Since 5.0 Reference
|
hbr.primary.SyncCompletedEvent
|
info
|
VC
|
hbr.primary.SyncCompletedEvent| Full sync completed
for virtual machine {vm.name} on host {host.name} in cluster
{computeResource.name} in {datacenter.name} ({bytes} bytes
transferred).
Since 5.0 Reference
|
hbr.primary.SyncStartedEvent
|
info
|
VC
|
hbr.primary.SyncStartedEvent| Full sync started by
{userName} for virtual machine {vm.name} on host {host.name} in
cluster {computeResource.name} in {datacenter.name}.
Since 5.0 Reference
|
hbr.primary.UnquiescedDeltaCompletedEvent
|
warning
|
VC
|
hbr.primary.UnquiescedDeltaCompletedEvent| Delta
completed for virtual machine {vm.name} on host {host.name} in
cluster {computeResource.name} in {datacenter.name} ({bytes} bytes
transferred).
Since 5.0 Reference
|
hbr.primary.VmReplicationConfigurationChangedEvent
|
info
|
VC
|
hbr.primary.VmReplicationConfigurationChangedEvent|
Replication configuration changed for virtual machine {vm.name} on
host {host.name} in cluster {computeResource.name} in
{datacenter.name} ({numDisks} disks, {rpo} minutes RPO, HBR Server
is {hbrServerAddress}).
Since 5.0 Reference
|
HealthStatusChangedEvent
|
info
|
VC
|
{componentName}
status changed from {oldStatus} to {newStatus}
Since 4.0 Reference
|
HostAddedEvent
|
info
|
VC
|
Added host
{host.name} to datacenter {datacenter.name}
Since 2.0 Reference
|
HostAddFailedEvent
|
error
|
VC
|
Cannot add host
{hostname} to datacenter {datacenter.name}
Since 2.0 Reference
|
HostAdminDisableEvent
|
warning
|
VC
|
Administrator
access to the host {host.name} is disabled
Since 2.5 Reference
|
HostAdminEnableEvent
|
warning
|
VC
|
Administrator
access to the host {host.name} has been restored
Since 2.5 Reference
|
HostCnxFailedAccountFailedEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: cannot configure management
account
Since 2.0 Reference
|
HostCnxFailedAlreadyManagedEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: already managed by
{serverName}
Since 2.0 Reference
|
HostCnxFailedBadCcagentEvent
|
error
|
ESXHost
|
Cannot connect
host {host.name} in {datacenter.name} : server agent is not
responding
Since 2.0 Reference
|
HostCnxFailedBadUsernameEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: incorrect user name or
password
Since 2.0 Reference
|
HostCnxFailedBadVersionEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: incompatible version
Since 2.0 Reference
|
HostCnxFailedCcagentUpgradeEvent
|
error
|
ESXHost
|
Cannot connect
host {host.name} in {datacenter.name}. Did not install or upgrade
vCenter agent service.
Since 2.0 Reference
|
HostCnxFailedEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: error connecting to host
Since 2.0 Reference
|
HostCnxFailedNetworkErrorEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: network error
Since 2.0 Reference
|
HostCnxFailedNoAccessEvent
|
error
|
ESXHost
|
Cannot connect
host {host.name} in {datacenter.name}: account has insufficient
privileges
Since 2.0 Reference
|
HostCnxFailedNoConnectionEvent
|
error
|
ESXHost
|
Cannot connect
host {host.name} in {datacenter.name}
Since 2.0 Reference
|
HostCnxFailedNoLicenseEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: not enough CPU licenses
Since 2.0 Reference
|
HostCnxFailedNotFoundEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: incorrect host name
Since 2.0 Reference
|
HostCnxFailedTimeoutEvent
|
error
|
ESXHost
|
Cannot connect
{host.name} in {datacenter.name}: time-out waiting for host
response
Since 2.0 Reference
|
HostComplianceCheckedEvent
|
info
|
VC
|
Host {host.name}
checked for compliance.
Since 4.0 Reference
|
HostCompliantEvent
|
info
|
VC
|
Host {host.name}
is in compliance with the attached profile
Since 4.0 Reference
|
HostConfigAppliedEvent
|
info
|
VC
|
Host configuration
changes applied.
Since 4.0 Reference
|
HostConnectedEvent
|
info
|
VC
|
Connected to
{host.name} in {datacenter.name}
Since 2.0 Reference
|
HostConnectionLostEvent
|
error
|
ESXHost
|
Host {host.name}
in {datacenter.name} is not responding
Since 2.0 Reference
|
HostDasDisabledEvent
|
info
|
VC
|
HA agent disabled
on {host.name} in cluster {computeResource.name} in
{datacenter.name}
Since 2.0 Reference
|
HostDasDisablingEvent
|
info
|
VC
|
HA is being
disabled on {host.name} in cluster {computeResource.name} in
datacenter {datacenter.name}
Since 2.0 Reference
|
HostDasEnabledEvent
|
info
|
VC
|
HA agent enabled
on {host.name} in cluster {computeResource.name} in
{datacenter.name}
Since 2.0 Reference
|
HostDasEnablingEvent
|
warning
|
Cluster
|
Enabling HA agent
on {host.name} in cluster {computeResource.name} in
{datacenter.name}
Since 2.0 Reference
|
HostDasErrorEvent
|
error
|
Cluster
|
HA agent on
{host.name} in cluster {computeResource.name} in {datacenter.name}
has an error {message}:
{reason.@enum.HostDasErrorEvent.HostDasErrorReason}
Since 2.0 Reference
|
HostDasOkEvent
|
info
|
VC
|
HA agent on host
{host.name} in cluster {computeResource.name} in {datacenter.name}
is configured correctly
Since 2.0 Reference
|
HostDisconnectedEvent
|
warning
|
ESXHost
|
Disconnected from
{host.name} in {datacenter.name}. Reason:
{reason.@enum.HostDisconnectedEvent.ReasonCode}
Since 2.0 Reference
|
HostDVPortEvent
|
info
|
VC
|
dvPort connected
to host {host.name} in {datacenter.name} changed status
Since 4.1 Reference
|
HostEnableAdminFailedEvent
|
error
|
VC
|
Cannot restore
some administrator permissions to the host {host.name}
Since 2.5 Reference
|
HostExtraNetworksEvent
|
error
|
ESXHostNetwork
|
Host {host.name}
has the following extra networks not used by other hosts for HA
communication:{ips}. Consider using HA advanced option
das.allowNetwork to control network usage
Since 4.0 Reference
|
HostGetShortNameFailedEvent
|
error
|
ESXHostNetwork
|
Cannot complete
command 'hostname -s' on host {host.name} or returned incorrect
name format
Since 2.5 Reference
|
HostInAuditModeEvent
|
info
|
VC
|
Host {host.name}
is running in audit mode. The host's configuration will not be
persistent across reboots.
Since 5.0 Reference
|
HostInventoryFullEvent
|
warning
|
ESXHost
|
Maximum
({capacity}) number of hosts allowed for this edition of vCenter
Server has been reached
Since 2.5 Reference
|
HostInventoryUnreadableEvent
|
info
|
VC
|
The virtual
machine inventory file on host {host.name} is damaged or
unreadable.
Since 4.0 Reference
|
HostIpChangedEvent
|
info
|
VC
|
IP address of the
host {host.name} changed from {oldIP} to {newIP}
Since 2.5 Reference
|
HostIpInconsistentEvent
|
warning
|
ESXHostNetwork
|
Configuration of
host IP address is inconsistent on host {host.name}: address
resolved to {ipAddress} and {ipAddress2}
Since 2.5 Reference
|
HostIpToShortNameFailedEvent
|
warning
|
ESXHostNetwork
|
Cannot resolve IP
address to short name on host {host.name}
Since 2.5 Reference
|
HostIsolationIpPingFailedEvent
|
warning
|
ESXHostNetwork
|
Host {host.name}
could not reach isolation address: {isolationIp}
Since 2.5 Reference
|
HostLicenseExpiredEvent
|
error
|
VC
|
A host license for
{host.name} has expired
Since 2.0 Reference
|
HostLocalPortCreatedEvent
|
info
|
ESXHostNetwork
|
A host local port
{hostLocalPort.portKey} is created on vSphere Distributed Switch
{hostLocalPort.switchUuid} to recover from management network
connectivity loss on virtual NIC device {hostLocalPort.vnic} on the
host {host.name}.
Since 5.1 Reference
|
HostMissingNetworksEvent
|
error
|
ESXHostNetwork
|
Host {host.name}
does not have the following networks used by other hosts for HA
communication:{ips}. Consider using HA advanced option
das.allowNetwork to control network usage
Since 4.0 Reference
|
HostMonitoringStateChangedEvent
|
info
|
VC
|
Host monitoring
state in {computeResource.name} in {datacenter.name} changed to
{state}
Since 4.0 Reference
|
HostNoAvailableNetworksEvent
|
error
|
ESXHostNetwork
|
Host {host.name}
currently has no available networks for HA Communication. The
following networks are currently used by HA: {ips}
Since 4.0 Reference
|
HostNoHAEnabledPortGroupsEvent
|
error
|
ESXHostNetwork
|
Host {host.name}
has no port groups enabled for HA communication.
Since 4.0 Reference
|
HostNonCompliantEvent
|
warning
|
VC
|
Host {host.name}
is not in compliance with the attached profile
Since 4.0 Reference
|
HostNoRedundantManagementNetworkEvent
|
warning
|
ESXHostNetwork
|
Host {host.name}
currently has no management network redundancy
Since 2.5 Reference
|
HostNotInClusterEvent
|
error
|
Cluster
|
Host {host.name}
is not a cluster member in {datacenter.name}
Since 2.5 Reference
|
HostOvercommittedEvent
|
error
|
VC
|
Insufficient
capacity in host {computeResource.name} to satisfy resource
configuration in {datacenter.name}
Since 4.0 Reference
|
HostPrimaryAgentNotShortNameEvent
|
error
|
ESXHostNetwork
|
Primary agent
{primaryAgent} was not specified as a short name to host
{host.name}
Since 2.5 Reference
|
HostProfileAppliedEvent
|
info
|
VC
|
Profile is applied
on the host {host.name}
Since 4.0 Reference
|
HostReconnectionFailedEvent
|
error
|
VC
|
Cannot reconnect
to {host.name} in {datacenter.name}
Since 2.0 Reference
|
HostRemovedEvent
|
info
|
VC
|
Removed host
{host.name} in {datacenter.name}
Since 2.0 Reference
|
HostShortNameInconsistentEvent
|
warning
|
ESXHostNetwork
|
Host names
{shortName} and {shortName2} both resolved to the same IP address.
Check the host's network configuration and DNS entries
Since 2.5 Reference
|
HostShortNameToIpFailedEvent
|
warning
|
ESXHostNetwork
|
Cannot resolve
short name {shortName} to IP address on host {host.name}
Since 2.5 Reference
|
HostShutdownEvent
|
info
|
VC
|
Shut down of
{host.name} in {datacenter.name}: {reason}
Since 2.0 Reference
|
HostStatusChangedEvent
|
info
|
VC
|
Configuration
status on host {computeResource.name} changed from
{oldStatus.@enum.ManagedEntity.Status} to
{newStatus.@enum.ManagedEntity.Status} in {datacenter.name}
Since 4.0 Reference
|
HostSyncFailedEvent
|
error
|
VC
|
Cannot synchronize
host {host.name}. {reason.msg}
Since 4.0 Reference
|
HostUpgradeFailedEvent
|
error
|
ESXHost
|
Cannot install or
upgrade vCenter agent service on {host.name} in
{datacenter.name}
Since 2.0 Reference
|
HostUserWorldSwapNotEnabledEvent
|
warning
|
VC
|
event.HostUserWorldSwapNotEnabledEvent.fullFormat
Since 4.0 Reference
|
HostVnicConnectedToCustomizedDVPortEvent
|
info
|
VC
|
Host {host.name}
vNIC {vnic.vnic} was reconfigured to use dvPort {vnic.port.portKey}
with port level configuration, which might be different from the
dvPort group.
Since 4.0 Reference
|
HostWwnChangedEvent
|
warning
|
ESXHostStorage
|
WWNs are changed
for {host.name}
Since 2.5 Reference
|
HostWwnConflictEvent
|
error
|
ESXHostStorage
|
The WWN ({wwn}) of
{host.name} conflicts with the currently registered WWN
Since 2.5 Reference
|
IncorrectHostInformationEvent
|
error
|
ESXHost
|
Host {host.name}
did not provide the information needed to acquire the correct set
of licenses
Since 2.5 Reference
|
InfoUpgradeEvent
|
info
|
VC
|
{message}
Since 2.0 Reference
|
InsufficientFailoverResourcesEvent
|
warning
|
Cluster
|
Insufficient
resources to satisfy HA failover level on cluster
{computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
InvalidEditionEvent
|
error
|
VC
|
The license
edition '{feature}' is invalid
Since 2.5 Reference
|
IScsiBootFailureEvent
|
warning
|
VC
|
Booting from iSCSI
failed with an error. See the VMware Knowledge Base for information
on configuring iBFT networking
Since 4.1 Reference
|
LicenseExpiredEvent
|
error
|
VC
|
License
{feature.featureName} has expired
Since 2.0 Reference
|
LicenseNonComplianceEvent
|
error
|
VC
|
License inventory
is not compliant. Licenses are overused
Since 4.0 Reference
|
LicenseRestrictedEvent
|
error
|
VC
|
Unable to acquire
licenses due to a restriction in the option file on the license
server.
Since 2.5 Reference
|
LicenseServerAvailableEvent
|
info
|
VC
|
License server
{licenseServer} is available
Since 2.0 Reference
|
LicenseServerUnavailableEvent
|
error
|
VC
|
License server
{licenseServer} is unavailable
Since 2.0 Reference
|
LocalDatastoreCreatedEvent
|
info
|
VC
|
Created local
datastore {datastore.name} on {host.name} in
{datacenter.name}
Since 2.0 Reference
|
LocalTSMEnabledEvent
|
info
|
VC
|
The Local Tech
Support Mode for the host {host.name} has been enabled
Since 4.1 Reference
|
LockerMisconfiguredEvent
|
warning
|
VC
|
Datastore
{datastore} which is configured to back the locker does not
exist
Since 2.5 Reference
|
LockerReconfiguredEvent
|
info
|
VC
|
Locker was
reconfigured from {oldDatastore} to {newDatastore} datastore
Since 2.5 Reference
|
MigrationErrorEvent
|
error
|
Cluster
|
Unable to migrate
{vm.name} from {host.name} in {datacenter.name}: {fault.msg}
Since 2.0 Reference
|
MigrationHostErrorEvent
|
error
|
Cluster
|
Unable to migrate
{vm.name} from {host.name} to {dstHost.name} in {datacenter.name}:
{fault.msg}
Since 2.0 Reference
|
MigrationHostWarningEvent
|
warning
|
Cluster
|
Migration of
{vm.name} from {host.name} to {dstHost.name} in {datacenter.name}:
{fault.msg}
Since 2.0 Reference
|
MigrationResourceErrorEvent
|
error
|
Cluster
|
Cannot migrate
{vm.name} from {host.name} to {dstHost.name} and resource pool
{dstPool.name} in {datacenter.name}: {fault.msg}
Since 2.0 Reference
|
MigrationResourceWarningEvent
|
warning
|
Cluster
|
Migration of
{vm.name} from {host.name} to {dstHost.name} and resource pool
{dstPool.name} in {datacenter.name}: {fault.msg}
Since 2.0 Reference
|
MigrationWarningEvent
|
warning
|
Cluster
|
Migration of
{vm.name} from {host.name} in {datacenter.name}: {fault.msg}
Since 2.0 Reference
|
MtuMatchEvent
|
info
|
ESXHostNetwork
|
The MTU configured
in the vSphere Distributed Switch matches the physical switch
connected to uplink port {healthResult.uplinkPortKey} in vSphere
Distributed Switch {dvs.name} on host {host.name} in
{datacenter.name}
Since 5.1 Reference
|
MtuMismatchEvent
|
error
|
ESXHostNetwork
|
The MTU configured
in the vSphere Distributed Switch does not match the physical
switch connected to uplink port {healthResult.uplinkPortKey} in
vSphere Distributed Switch {dvs.name} on host {host.name} in
{datacenter.name}
Since 5.1 Reference
|
NASDatastoreCreatedEvent
|
info
|
VC
|
Created NAS
datastore {datastore.name} on {host.name} in
{datacenter.name}
Since 2.0 Reference
|
NetworkRollbackEvent
|
error
|
ESXHostNetwork
|
Network
configuration on the host {host.name} is rolled back as it
disconnects the host from vCenter server.
Since 5.1 Reference
|
NoAccessUserEvent
|
error
|
VC
|
Cannot login user
{userName}@{ipAddress}: no permission
Since 2.0 Reference
|
NoDatastoresConfiguredEvent
|
info
|
VC
|
No datastores have
been configured on the host {host.name}
Since 2.5 Reference
|
NoLicenseEvent
|
error
|
VC
|
A required license
{feature.featureName} is not reserved
Since 2.0 Reference
|
NoMaintenanceModeDrsRecommendationForVM
|
info
|
VC
|
Unable to
automatically migrate {vm.name} from {host.name}
Since 2.0 Reference
|
NonVIWorkloadDetectedOnDatastoreEvent
|
info
|
VC
|
Non-VI workload
detected on datastore {datastore.name}
Since 4.1 Reference
|
NotEnoughResourcesToStartVmEvent
|
info
|
VC
|
Not enough
resources to failover {vm.name} in {computeResource.name} in
{datacenter.name}
Since 2.0 Reference
|
OutOfSyncDvsHost
|
warning
|
VC
|
The Distributed
Virtual Switch configuration on some hosts differed from that of
the vCenter Server.
Since 4.0 Reference
|
PermissionAddedEvent
|
info
|
VC
|
Permission created
for {principal} on {entity.name}, role is {role.name}, propagation
is {propagate.@enum.auth.Permission.propagate}
Since 2.0 Reference
|
PermissionRemovedEvent
|
info
|
VC
|
Permission rule
removed for {principal} on {entity.name}
Since 2.0 Reference
|
PermissionUpdatedEvent
|
info
|
VC
|
Permission changed
for {principal} on {entity.name}, role is {role.name}, propagation
is {propagate.@enum.auth.Permission.propagate}
Since 2.0 Reference
|
ProfileAssociatedEvent
|
info
|
VC
|
Profile
{profile.name} attached.
Since 4.0 Reference
|
ProfileChangedEvent
|
info
|
VC
|
Profile
{profile.name} was changed.
Since 4.0 Reference
|
ProfileCreatedEvent
|
info
|
VC
|
Profile is
created.
Since 4.0 Reference
|
ProfileDissociatedEvent
|
info
|
VC
|
Profile
{profile.name} detached.
Since 4.0 Reference
|
ProfileEvent
|
info
|
VC
|
This event records
a Profile specific event.
Since 4.0 Reference
|
ProfileReferenceHostChangedEvent
|
info
|
VC
|
Profile
{profile.name} reference host changed.
Since 4.0 Reference
|
ProfileRemovedEvent
|
info
|
VC
|
Profile was
removed.
Since 4.0 Reference
|
RecoveryEvent
|
info
|
ESXHostNetwork
|
The host
{hostName} network connectivity was recovered on the management
virtual NIC {vnic} by connecting to a new port {portKey} on the
vSphere Distributed Switch {dvsUuid}.
Since 5.1 Reference
|
RemoteTSMEnabledEvent
|
info
|
VC
|
Remote Tech
Support Mode (SSH) for the host {host.name} has been enabled
Since 4.1 Reference
|
ResourcePoolCreatedEvent
|
info
|
VC
|
Created resource
pool {resourcePool.name} in compute-resource {computeResource.name}
in {datacenter.name}
Since 2.0 Reference
|
ResourcePoolDestroyedEvent
|
info
|
VC
|
Removed resource
pool {resourcePool.name} on {computeResource.name} in
{datacenter.name}
Since 2.0 Reference
|
ResourcePoolMovedEvent
|
info
|
VC
|
Moved resource
pool {resourcePool.name} from {oldParent.name} to {newParent.name}
on {computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
ResourcePoolReconfiguredEvent
|
verbose
|
VC
|
Updated
configuration for {resourcePool.name} in compute-resource
{computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
ResourceViolatedEvent
|
error
|
VC
|
Resource usage
exceeds configuration for resource pool {resourcePool.name} in
compute-resource {computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
RoleAddedEvent
|
info
|
VC
|
New role
{role.name} created
Since 2.0 Reference
|
RoleRemovedEvent
|
info
|
VC
|
Role {role.name}
removed
Since 2.0 Reference
|
RoleUpdatedEvent
|
info
|
VC
|
Modified role
{role.name}
Since 2.0 Reference
|
RollbackEvent
|
info
|
ESXHostNetwork
|
The Network API
{methodName} on this entity caused the host {hostName} to be
disconnected from the vCenter Server. The configuration change was
rolled back on the host.
Since 5.1 Reference
|
ScheduledTaskCompletedEvent
|
info
|
VC
|
Task
{scheduledTask.name} on {entity.name} in {datacenter.name}
completed successfully
Since 2.0 Reference
|
ScheduledTaskCreatedEvent
|
info
|
VC
|
Created task
{scheduledTask.name} on {entity.name} in {datacenter.name}
Since 2.0 Reference
|
ScheduledTaskEmailCompletedEvent
|
info
|
VC
|
Task
{scheduledTask.name} on {entity.name} in {datacenter.name} sent
email to {to}
Since 2.0 Reference
|
ScheduledTaskEmailFailedEvent
|
warning
|
VC
|
Task
{scheduledTask.name} on {entity.name} in {datacenter.name} cannot
send email to {to}: {reason.msg}
Since 2.0 Reference
|
ScheduledTaskEvent
|
info
|
VC
|
This event records
the completion of a scheduled task. The name of the task is
indicated.
Since 2.0 Reference
|
ScheduledTaskFailedEvent
|
warning
|
VC
|
Task
{scheduledTask.name} on {entity.name} in {datacenter.name} cannot
be completed: {reason.msg}
Since 2.0 Reference
|
ScheduledTaskReconfiguredEvent
|
info
|
VC
|
Reconfigured task
{scheduledTask.name} on {entity.name} in {datacenter.name}
Since 2.0 Reference
|
ScheduledTaskRemovedEvent
|
info
|
VC
|
Removed task
{scheduledTask.name} on {entity.name} in {datacenter.name}
Since 2.0 Reference
|
ScheduledTaskStartedEvent
|
info
|
VC
|
Running task
{scheduledTask.name} on {entity.name} in {datacenter.name}
Since 2.0 Reference
|
ServerLicenseExpiredEvent
|
error
|
VC
|
A vCenter Server
license has expired
Since 2.0 Reference
|
ServerStartedSessionEvent
|
info
|
VC
|
vCenter
started
Since 2.0 Reference
|
SessionTerminatedEvent
|
info
|
VC
|
A session for user
'{terminatedUsername}' has stopped
Since 2.0 Reference
|
SV130
|
info
|
ESXHost
|
SV130 Host
{host.name} has entered vCenter maintenance mode
|
SV131
|
info
|
ESXHost
|
SV131 Host
{host.name} has exited vCenter maintenance mode
|
SV132
|
info
|
ESXHostNetwork
|
SV132 Network
connection to Distributed Virtual Switch {DVS.name} has been
restored.
|
SV133
|
error
|
ESXHostNetwork
|
SV133 Network
connectivity issue for distributed virtual switch {DVS.name} in
Datacenter {datacenter.name}. The following host physical network
links are down
|
SV134
|
info
|
ESXHostNetwork
|
SV134 Physical
NICs were not assigned on DVS {DVS.name}
|
SV135
|
error
|
ESXHostStorage
|
SV135 Storage
connectivity issue for host {Host.name}. The following paths to
storage are dead - On storage adapter [Hba.Name]([Hba.model])
Pathname [PathName.name] (Datastore[Datastore.name])
|
SV136
|
info
|
ESXHostStorage
|
SV136 All VMHBA
storage paths are connected for host {Host.name}
|
SV137
|
info
|
ESXHostStorage
|
SV137 Storage
connectivity issue for host {Host.name} is unknown
|
SV138
|
error
|
ESXHostNetwork
|
SV138 Network
connectivity issue for virtual switch {Switch.name} on host
{host.name}. The following host physical network links are down
VMNIC name
|
SV139
|
info
|
ESXHostNetwork
|
SV139 Network
connection to Virtual Switch {Switch.name} has been
restored.
|
SV140
|
info
|
ESXHostNetwork
|
SV140 Physical
NICs were not assigned on switch {switch.Name} on host
{host.name}
|
TaskEvent
|
info
|
VC
|
Task:
{info.descriptionId}
Since 2.0 Reference
|
TaskTimeoutEvent
|
info
|
VC
|
Task:
{info.descriptionId} time-out
Since 2.5 Reference
|
TeamingMatchEvent
|
info
|
ESXHostNetwork
|
Teaming
configuration in the vSphere Distributed Switch {dvs.name} on host
{host.name} matches the physical switch configuration in
{datacenter.name}. Detail:
{healthResult.summary.@enum.dvs.VmwareDistributedVirtualSwitch.TeamingMatchStatus}
Since 5.1 Reference
|
TeamingMisMatchEvent
|
error
|
ESXHostNetwork
|
Teaming
configuration in the vSphere Distributed Switch {dvs.name} on host
{host.name} does not match the physical switch configuration in
{datacenter.name}. Detail:
{healthResult.summary.@enum.dvs.VmwareDistributedVirtualSwitch.TeamingMatchStatus}
Since 5.1 Reference
|
TemplateBeingUpgradedEvent
|
info
|
VC
|
Upgrading template
{legacyTemplate}
Since 2.0 Reference
|
TemplateUpgradedEvent
|
info
|
VC
|
Template
{legacyTemplate} upgrade completed
Since 2.0 Reference
|
TemplateUpgradeFailedEvent
|
info
|
VC
|
Cannot upgrade
template {legacyTemplate} due to: {reason.msg}
Since 2.0 Reference
|
TimedOutHostOperationEvent
|
warning
|
ESXHost
|
The operation
performed on {host.name} in {datacenter.name} timed out
Since 2.0 Reference
|
UnlicensedVirtualMachinesEvent
|
info
|
VC
|
There are
{unlicensed} unlicensed virtual machines on host {host} - there are
only {available} licenses available
Since 2.5 Reference
|
UnlicensedVirtualMachinesFoundEvent
|
info
|
VC
|
{unlicensed}
unlicensed virtual machines found on host {host}
Since 2.5 Reference
|
UpdatedAgentBeingRestartedEvent
|
info
|
VC
|
The agent on host
{host.name} is updated and will soon restart
Since 2.5 Reference
|
UpgradeEvent
|
info
|
VC
|
This event records
that the agent has been patched and will be restarted.
Since 2.0 Reference
|
UplinkPortMtuNotSupportEvent
|
error
|
ESXHostNetwork
|
Not all VLAN MTU
settings on the external physical switch allow the vSphere
Distributed Switch maximum MTU size packets to pass on the uplink
port {healthResult.uplinkPortKey} in vSphere Distributed Switch
{dvs.name} on host {host.name} in {datacenter.name}.
Since 5.1 Reference
|
UplinkPortMtuSupportEvent
|
info
|
ESXHostNetwork
|
All VLAN MTU
settings on the external physical switch allow the vSphere
Distributed Switch maximum MTU size packets to pass on the uplink
port {healthResult.uplinkPortKey} in vSphere Distributed Switch
{dvs.name} on host {host.name} in {datacenter.name}.
Since 5.1 Reference
|
UplinkPortVlanTrunkedEvent
|
info
|
ESXHostNetwork
|
The configured
VLAN in the vSphere Distributed Switch was trunked by the physical
switch connected to uplink port {healthResult.uplinkPortKey} in
vSphere Distributed Switch {dvs.name} on host {host.name} in
{datacenter.name}.
Since 5.1 Reference
|
UplinkPortVlanUntrunkedEvent
|
error
|
ESXHostNetwork
|
Not all the
configured VLANs in the vSphere Distributed Switch were trunked by
the physical switch connected to uplink port
{healthResult.uplinkPortKey} in vSphere Distributed Switch
{dvs.name} on host {host.name} in {datacenter.name}.
Since 5.1 Reference
|
UserAssignedToGroup
|
info
|
VC
|
User {userLogin}
was added to group {group}
Since 2.0 Reference
|
UserLoginSessionEvent
|
verbose
|
VC
|
User
{userName}@{ipAddress} logged in
Since 2.0 Reference
|
UserLogoutSessionEvent
|
verbose
|
VC
|
User {userName}
logged out
Since 2.0 Reference
|
UserPasswordChanged
|
info
|
VC
|
Password was
changed for account {userLogin} on host {host.name}
Since 2.0 Reference
|
UserUnassignedFromGroup
|
info
|
VC
|
User {userLogin}
removed from group {group}
Since 2.0 Reference
|
UserUpgradeEvent
|
user
|
VC
|
{message}
Since 2.0 Reference
|
VcAgentUninstalledEvent
|
info
|
VC
|
event.VcAgentUninstalledEvent.fullFormat
Since 4.0 Reference
|
VcAgentUninstallFailedEvent
|
error
|
VC
|
Cannot uninstall
vCenter agent from {host.name} in {datacenter.name}.
{reason.@enum.fault.AgentInstallFailed.Reason}
Since 4.0 Reference
|
VcAgentUpgradedEvent
|
info
|
VC
|
vCenter agent has
been upgraded on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VcAgentUpgradeFailedEvent
|
error
|
VC
|
Cannot upgrade
vCenter agent on {host.name} in {datacenter.name}.
{reason.@enum.fault.AgentInstallFailed.Reason}
Since 2.0 Reference
|
vim.event.LicenseDowngradedEvent
|
warning
|
VC
|
vim.event.LicenseDowngradedEvent| License downgrade:
{licenseKey} removes the following features: {lostFeatures}
Since 4.1 Reference
|
VimAccountPasswordChangedEvent
|
info
|
VC
|
VIM account
password was changed on host {host.name}
Since 2.5 Reference
|
VmAcquiredMksTicketEvent
|
info
|
VC
|
Remote console to
{vm.name} on {host.name} in {datacenter.name} has been
opened
Since 2.5 Reference
|
VmAcquiredTicketEvent
|
info
|
VC
|
A ticket for
{vm.name} of type {ticketType} on {host.name} in {datacenter.name}
has been acquired
Since 4.1 Reference
|
VmAutoRenameEvent
|
info
|
VC
|
Invalid name for
{vm.name} on {host.name} in {datacenter.name}. Renamed from
{oldName} to {newName}
Since 2.0 Reference
|
VmBeingClonedEvent
|
info
|
VC
|
Cloning {vm.name}
on host {host.name} in {datacenter.name} to {destName} on host
{destHost.name}
Since 2.0 Reference
|
VmBeingClonedNoFolderEvent
|
info
|
VC
|
Cloning {vm.name}
on host {host.name} in {datacenter.name} to {destName} on host
{destHost.name}
Since 4.1 Reference
|
VmBeingCreatedEvent
|
info
|
VC
|
Creating {vm.name}
on host {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmBeingDeployedEvent
|
info
|
VC
|
Deploying
{vm.name} on host {host.name} in {datacenter.name} from template
{srcTemplate.name}
Since 2.0 Reference
|
VmBeingHotMigratedEvent
|
info
|
VC
|
Migrating
{vm.name} from {host.name} to {destHost.name} in
{datacenter.name}
Since 2.0 Reference
|
VmBeingMigratedEvent
|
info
|
VC
|
Relocating
{vm.name} from {host.name} to {destHost.name} in
{datacenter.name}
Since 2.0 Reference
|
VmBeingRelocatedEvent
|
info
|
VC
|
Relocating
{vm.name} in {datacenter.name} from {host.name} to
{destHost.name}
Since 2.0 Reference
|
VmClonedEvent
|
info
|
VC
|
Clone of
{sourceVm.name} completed
Since 2.0 Reference
|
VmCloneFailedEvent
|
error
|
VC
|
Cannot clone
{vm.name}: {reason.msg}
Since 2.0 Reference
|
VmConfigMissingEvent
|
info
|
VC
|
Configuration file
for {vm.name} on {host.name} in {datacenter.name} cannot be
found
Since 2.0 Reference
|
VmConnectedEvent
|
info
|
VC
|
Virtual machine
{vm.name} is connected
Since 2.0 Reference
|
VmCreatedEvent
|
info
|
VC
|
Created virtual
machine {vm.name} on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmDasBeingResetEvent
|
warning
|
VirtualMachine
|
{vm.name} on
{host.name} in cluster {computeResource.name} in {datacenter.name}
reset due to a guest OS error
Since 4.0 Reference
|
VmDasBeingResetWithScreenshotEvent
|
warning
|
VirtualMachine
|
{vm.name} on
{host.name} in cluster {computeResource.name} in {datacenter.name}
reset due to a guest OS error. Screenshot is saved at
{screenshotFilePath}.
Since 4.0 Reference
|
VmDasResetFailedEvent
|
error
|
VirtualMachine
|
Cannot reset
{vm.name} on {host.name} in cluster {computeResource.name} in
{datacenter.name} due to a guest OS error
Since 4.0 Reference
|
VmDasUpdateErrorEvent
|
error
|
VirtualMachine
|
Unable to update
HA agents given the state of {vm.name}
Since 2.0 Reference
|
VmDasUpdateOkEvent
|
info
|
VC
|
HA agents have
been updated with the current state of the virtual machine
Since 2.0 Reference
|
VmDateRolledBackEvent
|
error
|
VirtualMachine
|
Disconnecting all
hosts as the date of virtual machine {vm.name} has been rolled
back
Since 2.0 Reference
|
VmDeployedEvent
|
info
|
VC
|
Template
{srcTemplate.name} deployed on host {host.name}
Since 2.0 Reference
|
VmDeployFailedEvent
|
error
|
VC
|
Cannot deploy
template: {reason.msg}
Since 2.0 Reference
|
VmDisconnectedEvent
|
info
|
VC
|
{vm.name} on host
{host.name} in {datacenter.name} is disconnected
Since 2.0 Reference
|
VmDiscoveredEvent
|
info
|
VC
|
Discovered
{vm.name} on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmDiskFailedEvent
|
error
|
VirtualMachine
|
Cannot create
virtual disk {disk}
Since 2.0 Reference
|
VmDVPortEvent
|
info
|
VC
|
dvPort connected
to VM {vm.name} on {host.name} in {datacenter.name} changed
status
Since 4.1 Reference
|
VmEmigratingEvent
|
info
|
VC
|
Migrating
{vm.name} off host {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmEndRecordingEvent
|
info
|
VC
|
End a recording
session on {vm.name}
Since 4.0 Reference
|
VmEndReplayingEvent
|
info
|
VC
|
End a replay
session on {vm.name}
Since 4.0 Reference
|
VmEvent
|
info
|
VC
|
This is a
catch-all event for various VM events (the type of event is listed
in the event). See VMware's documentation for the list of possible
events.
Since 2.0 Reference
|
VmFailedMigrateEvent
|
error
|
VirtualMachine
|
Cannot migrate
{vm.name} from {host.name} to {destHost.name} in
{datacenter.name}
Since 2.0 Reference
|
VmFailedRelayoutEvent
|
error
|
VirtualMachine
|
Cannot complete
relayout {vm.name} on {host.name} in {datacenter.name}:
{reason.msg}
Since 2.0 Reference
|
VmFailedRelayoutOnVmfs2DatastoreEvent
|
error
|
VirtualMachine
|
Cannot complete
relayout for virtual machine {vm.name} which has disks on a VMFS2
volume.
Since 2.0 Reference
|
VmFailedStartingSecondaryEvent
|
error
|
VirtualMachine
|
vCenter cannot
start the Secondary VM {vm.name}. Reason:
{reason.@enum.VmFailedStartingSecondaryEvent.FailureReason}
Since 4.0 Reference
|
VmFailedToPowerOffEvent
|
error
|
VirtualMachine
|
Cannot power Off
{vm.name} on {host.name} in {datacenter.name}: {reason.msg}
Since 2.0 Reference
|
VmFailedToPowerOnEvent
|
error
|
VirtualMachine
|
Cannot power On
{vm.name} on {host.name} in {datacenter.name}. {reason.msg}
Since 2.0 Reference
|
VmFailedToRebootGuestEvent
|
error
|
VirtualMachine
|
Cannot reboot the
guest OS for {vm.name} on {host.name} in {datacenter.name}.
{reason.msg}
Since 2.0 Reference
|
VmFailedToResetEvent
|
error
|
VirtualMachine
|
Cannot reset
{vm.name} on {host.name} in {datacenter.name}: {reason.msg}
Since 2.0 Reference
|
VmFailedToShutdownGuestEvent
|
error
|
VirtualMachine
|
{vm.name} cannot
shut down the guest OS on {host.name} in {datacenter.name}:
{reason.msg}
Since 2.0 Reference
|
VmFailedToStandbyGuestEvent
|
error
|
VirtualMachine
|
{vm.name} cannot
standby the guest OS on {host.name} in {datacenter.name}:
{reason.msg}
Since 2.0 Reference
|
VmFailedToSuspendEvent
|
error
|
VirtualMachine
|
Cannot suspend
{vm.name} on {host.name} in {datacenter.name}: {reason.msg}
Since 2.0 Reference
|
VmFailedUpdatingSecondaryConfig
|
error
|
VirtualMachine
|
vCenter cannot
update the Secondary VM {vm.name} configuration
Since 4.0 Reference
|
VmFailoverFailed
|
warning
|
VirtualMachine
|
Failover
unsuccessful for {vm.name} on {host.name} in cluster
{computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
VmFaultToleranceStateChangedEvent
|
info
|
VC
|
Fault Tolerance
state on {vm.name} changed from
{oldState.@enum.VirtualMachine.FaultToleranceState} to
{newState.@enum.VirtualMachine.FaultToleranceState}
Since 4.0 Reference
|
VmFaultToleranceTurnedOffEvent
|
info
|
VC
|
Fault Tolerance
protection has been turned off for {vm.name}
Since 4.0 Reference
|
VmFaultToleranceVmTerminatedEvent
|
error
|
VirtualMachine
|
The Fault
Tolerance VM ({vm.name}) has been terminated.
{reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason}
Since 4.0 Reference
|
VMFSDatastoreCreatedEvent
|
info
|
VC
|
Created VMFS
datastore {datastore.name} on {host.name} in
{datacenter.name}
Since 2.0 Reference
|
VMFSDatastoreExpandedEvent
|
info
|
VC
|
Expanded VMFS
datastore {datastore.name} on {host.name} in
{datacenter.name}
Since 4.0 Reference
|
VMFSDatastoreExtendedEvent
|
info
|
VC
|
Extended VMFS
datastore {datastore.name} on {host.name} in
{datacenter.name}
Since 4.0 Reference
|
VmGuestRebootEvent
|
info
|
VC
|
Guest OS reboot
for {vm.name} on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmGuestShutdownEvent
|
info
|
VC
|
Guest OS shut down
for {vm.name} on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmGuestStandbyEvent
|
info
|
VC
|
Guest OS standby
for {vm.name} on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmHealthMonitoringStateChangedEvent
|
info
|
VC
|
VM monitoring
state in {computeResource.name} in {datacenter.name} changed to
{state}
Since 4.0 Reference
|
VmInstanceUuidAssignedEvent
|
info
|
VC
|
Assign a new
instance UUID ({instanceUuid}) to {vm.name}
Since 4.0 Reference
|
VmInstanceUuidChangedEvent
|
info
|
VC
|
The instance UUID
of {vm.name} has been changed from ({oldInstanceUuid}) to
({newInstanceUuid})
Since 4.0 Reference
|
VmInstanceUuidConflictEvent
|
error
|
VirtualMachine
|
The instance UUID
({instanceUuid}) of {vm.name} conflicts with the instance UUID
assigned to {conflictedVm.name}
Since 4.0 Reference
|
VmMacAssignedEvent
|
info
|
VC
|
New MAC address
({mac}) assigned to adapter {adapter} for {vm.name}
Since 2.0 Reference
|
VmMacChangedEvent
|
warning
|
VC
|
Changed MAC
address from {oldMac} to {newMac} for adapter {adapter} for
{vm.name}
Since 2.0 Reference
|
VmMacConflictEvent
|
error
|
VirtualMachine
|
The MAC address
({mac}) of {vm.name} conflicts with MAC assigned to
{conflictedVm.name}
Since 2.0 Reference
|
VmMaxFTRestartCountReached
|
warning
|
VirtualMachine
|
Reached maximum
Secondary VM (with FT turned On) restart count for {vm.name} on
{host.name} in cluster {computeResource.name} in
{datacenter.name}.
Since 4.0 Reference
|
VmMaxRestartCountReached
|
warning
|
VirtualMachine
|
Reached maximum VM
restart count for {vm.name} on {host.name} in cluster
{computeResource.name} in {datacenter.name}.
Since 4.0 Reference
|
VmMessageErrorEvent
|
error
|
VirtualMachine
|
Error message on
{vm.name} on {host.name} in {datacenter.name}: {message}
Since 4.0 Reference
|
VmMessageEvent
|
info
|
VC
|
Message on
{vm.name} on {host.name} in {datacenter.name}: {message}
Since 2.0 Reference
|
VmMessageWarningEvent
|
warning
|
VirtualMachine
|
Warning message on
{vm.name} on {host.name} in {datacenter.name}: {message}
Since 4.0 Reference
|
VmMigratedEvent
|
info
|
VC
|
Migration of
virtual machine {vm.name} from {sourceHost.name} to {host.name}
completed
Since 2.0 Reference
|
VmNoCompatibleHostForSecondaryEvent
|
warning
|
VirtualMachine
|
No compatible host
for the Secondary VM {vm.name}
Since 4.0 Reference
|
VmNoNetworkAccessEvent
|
warning
|
VirtualMachine
|
Not all networks
for {vm.name} are accessible by {destHost.name}
Since 2.0 Reference
|
VmOrphanedEvent
|
warning
|
VirtualMachine
|
{vm.name} does not
exist on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VMotionLicenseExpiredEvent
|
error
|
VC
|
A VMotion license
for {host.name} has expired
Since 2.0 Reference
|
VmPoweredOffEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is powered off
Since 2.0 Reference
|
VmPoweredOnEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is powered on
Since 2.0 Reference
|
VmPoweringOnWithCustomizedDVPortEvent
|
info
|
VC
|
Virtual machine
{vm.name} powered On with vNICs connected to dvPorts that have a
port level configuration, which might be different from the dvPort
group configuration.
Since 4.0 Reference
|
VmPowerOffOnIsolationEvent
|
info
|
VC
|
{vm.name} was
powered Off on the isolated host {isolatedHost.name} in cluster
{computeResource.name} in {datacenter.name}
Since 2.0 Reference
|
VmPrimaryFailoverEvent
|
error
|
VirtualMachine
|
VM ({vm.name})
failed over to {host.name}.
{reason.@enum.VirtualMachine.NeedSecondaryReason}
Since 4.0 Reference
|
VmReconfiguredEvent
|
info
|
VC
|
Reconfigured
{vm.name} on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmRegisteredEvent
|
info
|
VC
|
Registered
{vm.name} on {host.name} in {datacenter.name}
Since 2.0 Reference
|
VmRelayoutSuccessfulEvent
|
info
|
VC
|
Relayout of
{vm.name} on {host.name} in {datacenter.name} completed
Since 2.0 Reference
|
VmRelayoutUpToDateEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is in the correct format and
relayout is not necessary
Since 2.0 Reference
|
VmReloadFromPathEvent
|
info
|
VC
|
{vm.name} on
{host.name} reloaded from new configuration {configPath}
Since 4.1 Reference
|
VmReloadFromPathFailedEvent
|
error
|
VirtualMachine
|
{vm.name} on
{host.name} could not be reloaded from {configPath}
Since 4.1 Reference
|
VmRelocatedEvent
|
info
|
VC
|
Completed the
relocation of the virtual machine
Since 2.0 Reference
|
VmRelocateFailedEvent
|
error
|
VirtualMachine
|
Cannot relocate
virtual machine '{vm.name}' in {datacenter.name}
Since 2.0 Reference
|
VmRemoteConsoleConnectedEvent
|
info
|
VC
|
Remote console
connected to {vm.name} on host {host.name}
Since 4.0 Reference
|
VmRemoteConsoleDisconnectedEvent
|
info
|
VC
|
Remote console
disconnected from {vm.name} on host {host.name}
Since 4.0 Reference
|
VmRemovedEvent
|
info
|
VC
|
Removed {vm.name}
on {host.name} from {datacenter.name}
Since 2.0 Reference
|
VmRenamedEvent
|
warning
|
VC
|
Renamed {vm.name}
from {oldName} to {newName} in {datacenter.name}
Since 2.0 Reference
|
VmRequirementsExceedCurrentEVCModeEvent
|
warning
|
VirtualMachine
|
Feature
requirements of {vm.name} exceed capabilities of {host.name}'s
current EVC mode.
Since 5.1 Reference
|
VmResettingEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is reset
Since 2.0 Reference
|
VmResourcePoolMovedEvent
|
info
|
VC
|
Moved {vm.name}
from resource pool {oldParent.name} to {newParent.name} in
{datacenter.name}
Since 2.0 Reference
|
VmResourceReallocatedEvent
|
info
|
VC
|
Changed resource
allocation for {vm.name}
Since 2.0 Reference
|
VmRestartedOnAlternateHostEvent
|
info
|
VC
|
Virtual machine
{vm.name} was restarted on {host.name} since {sourceHost.name}
failed
Since 2.0 Reference
|
VmResumingEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is resumed
Since 2.0 Reference
|
VmSecondaryAddedEvent
|
info
|
VC
|
A Secondary VM has
been added for {vm.name}
Since 4.0 Reference
|
VmSecondaryDisabledBySystemEvent
|
error
|
VirtualMachine
|
vCenter disabled
Fault Tolerance on VM '{vm.name}' because the Secondary VM could
not be powered On.
Since 4.0 Reference
|
VmSecondaryDisabledEvent
|
info
|
VC
|
Disabled Secondary
VM for {vm.name}
Since 4.0 Reference
|
VmSecondaryEnabledEvent
|
info
|
VC
|
Enabled Secondary
VM for {vm.name}
Since 4.0 Reference
|
VmSecondaryStartedEvent
|
info
|
VC
|
Started Secondary
VM for {vm.name}
Since 4.0 Reference
|
VmShutdownOnIsolationEvent
|
info
|
VC
|
{vm.name} was shut
down on the isolated host {isolatedHost.name} in cluster
{computeResource.name} in {datacenter.name}:
{shutdownResult.@enum.VmShutdownOnIsolationEvent.Operation}
Since 4.0 Reference
|
VmStartingEvent
|
info
|
VC
|
{vm.name} on host
{host.name} in {datacenter.name} is starting
Since 2.0 Reference
|
VmStartingSecondaryEvent
|
info
|
VC
|
Starting Secondary
VM for {vm.name}
Since 4.0 Reference
|
VmStartRecordingEvent
|
info
|
VC
|
Start a recording
session on {vm.name}
Since 4.0 Reference
|
VmStartReplayingEvent
|
info
|
VC
|
Start a replay
session on {vm.name}
Since 4.0 Reference
|
VmStaticMacConflictEvent
|
error
|
VC
|
The static MAC
address ({mac}) of {vm.name} conflicts with MAC assigned to
{conflictedVm.name}
Since 2.0 Reference
|
VmStoppingEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is stopping
Since 2.0 Reference
|
VmSuspendedEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is suspended
Since 2.0 Reference
|
VmSuspendingEvent
|
info
|
VC
|
{vm.name} on
{host.name} in {datacenter.name} is being suspended
Since 2.0 Reference
|
VmTimedoutStartingSecondaryEvent
|
error
|
VirtualMachine
|
Starting the
Secondary VM {vm.name} timed out within {timeout} ms
Since 4.0 Reference
|
VmUnsupportedStartingEvent
|
warning
|
VirtualMachine
|
Unsupported guest
OS {guestId} for {vm.name} on {host.name} in
{datacenter.name}
Since 2.0 Reference
|
VmUpgradeCompleteEvent
|
info
|
VC
|
Virtual hardware
upgraded to version {version}
Since 2.0 Reference
|
VmUpgradeFailedEvent
|
error
|
VirtualMachine
|
Cannot upgrade
virtual hardware
Since 2.0 Reference
|
VmUpgradingEvent
|
info
|
VC
|
Upgrading virtual
hardware on {vm.name} in {datacenter.name} to version
{version}
Since 2.0 Reference
|
VmUuidAssignedEvent
|
info
|
VC
|
Assigned new BIOS
UUID ({uuid}) to {vm.name} on {host.name} in
{datacenter.name}
Since 2.0 Reference
|
VmUuidChangedEvent
|
warning
|
VC
|
Changed BIOS UUID
from {oldUuid} to {newUuid} for {vm.name} on {host.name} in
{datacenter.name}
Since 2.0 Reference
|
VmUuidConflictEvent
|
error
|
VC
|
BIOS ID ({uuid})
of {vm.name} conflicts with that of {conflictedVm.name}
Since 2.0 Reference
|
VmVnicPoolReservationViolationClearEvent
|
info
|
VC
|
The reservation
violation on the virtual NIC network resource pool
{vmVnicResourcePoolName} with key {vmVnicResourcePoolKey} on
{dvs.name} is cleared
Since 5.5 Reference
|
VmVnicPoolReservationViolationRaiseEvent
|
info
|
VC
|
The reservation
allocated to the virtual NIC network resource pool
{vmVnicResourcePoolName} with key {vmVnicResourcePoolKey} on
{dvs.name} is violated
Since 5.5 Reference
|
VmWwnAssignedEvent
|
info
|
VC
|
New WWNs assigned
to {vm.name}
Since 2.5 Reference
|
VmWwnChangedEvent
|
warning
|
VirtualMachine
|
WWNs are changed
for {vm.name}
Since 2.5 Reference
|
VmWwnConflictEvent
|
error
|
VirtualMachine
|
The WWN ({wwn}) of
{vm.name} conflicts with the currently registered WWN
Since 2.5 Reference
|
vprob.net.connectivity.lost
|
error
|
ESXHostNetwork
|
vprob.net.connectivity.lost| Lost network
connectivity on virtual switch {1}. Physical NIC {2} is down.
Affected portgroups:{3}.
Since 4.0 Reference
|
vprob.net.e1000.tso6.notsupported
|
error
|
ESXHostNetwork
|
vprob.net.e1000.tso6.notsupported| Guest-initiated
IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually
disable TSO inside the guest operating system in virtual machine
{1}, or use a different virtual adapter.
Since 4.0 Reference
|
vprob.net.migrate.bindtovmk
|
warning
|
ESXHostNetwork
|
vprob.net.migrate.bindtovmk| The ESX advanced config
option /Migrate/Vmknic is set to an invalid vmknic: {1}.
/Migrate/Vmknic specifies a vmknic that VMotion binds to for
improved performance. Please update the config option with a valid
vmknic or, if you don't want VMotion to bind to a specific vmknic,
remove the invalid vmknic and leave the option blank.
Since 4.0 Reference
|
vprob.net.proxyswitch.port.unavailable
|
error
|
ESXHostNetwork
|
vprob.net.proxyswitch.port.unavailable| Virtual NIC
with hardware address {1} failed to connect to distributed virtual
port {2} on switch {3}. No more ports available on the host proxy
switch.
Since 4.0 Reference
|
vprob.net.redundancy.degraded
|
warning
|
ESXHostNetwork
|
vprob.net.redundancy.degraded| Uplink redundancy
degraded on virtual switch {1}. Physical NIC {2} is down. {3}
uplinks still up. Affected portgroups:{4}.
Since 4.0 Reference
|
vprob.net.redundancy.lost
|
warning
|
ESXHostNetwork
|
vprob.net.redundancy.lost| Lost uplink redundancy on
virtual switch {1}. Physical NIC {2} is down. Affected
portgroups:{3}.
Since 4.0 Reference
|
vprob.scsi.device.thinprov.atquota
|
warning
|
VC
|
vprob.scsi.device.thinprov.atquota| Space utilization
on thin-provisioned device {1} exceeded configured
threshold.
Since 4.1 Reference
|
vprob.storage.connectivity.lost
|
error
|
ESXHostStorage
|
vprob.storage.connectivity.lost| Lost connectivity to
storage device {1}. Path {2} is down. Affected datastores:
{3}.
Since 4.0 Reference
|
vprob.storage.redundancy.degraded
|
warning
|
ESXHostStorage
|
vprob.storage.redundancy.degraded| Path redundancy to
storage device {1} degraded. Path {2} is down. {3} remaining active
paths. Affected datastores: {4}.
Since 4.0 Reference
|
vprob.storage.redundancy.lost
|
warning
|
ESXHostStorage
|
vprob.storage.redundancy.lost| Lost path redundancy
to storage device {1}. Path {2} is down. Affected datastores:
{3}.
Since 4.0 Reference
|
vprob.vmfs.error.volume.is.locked
|
error
|
ESXHostStorage
|
vprob.vmfs.error.volume.is.locked| Volume on device
{1} is locked, possibly because some remote host encountered an
error during a volume operation and could not recover.
Since 5.0 Reference
|
vprob.vmfs.extent.offline
|
warning
|
ESXHostStorage
|
vprob.vmfs.extent.offline| An attached device {1}
might be offline. The file system {2} is now in a degraded state.
While the datastore is still available, parts of data that reside
on the extent that went offline might be inaccessible.
Since 5.0 Reference
|
vprob.vmfs.extent.online
|
info
|
ESXHostStorage
|
vprob.vmfs.extent.online| Device {1} backing file
system {2} came online. This extent was previously offline. All
resources on this device are now available.
Since 5.0 Reference
|
vprob.vmfs.heartbeat.recovered
|
info
|
ESXHostStorage
|
vprob.vmfs.heartbeat.recovered| Successfully restored
access to volume {1} ({2}) following connectivity issues.
Since 4.0 Reference
|
vprob.vmfs.heartbeat.timedout
|
warning
|
ESXHostStorage
|
vprob.vmfs.heartbeat.timedout| Lost access to volume
{1} ({2}) due to connectivity issues. Recovery attempt is in
progress and outcome will be reported shortly.
Since 4.0 Reference
|
vprob.vmfs.heartbeat.unrecoverable
|
error
|
ESXHostStorage
|
vprob.vmfs.heartbeat.unrecoverable| Lost connectivity
to volume {1} ({2}) and subsequent recovery attempts have
failed.
Since 4.0 Reference
|
vprob.vmfs.journal.createfailed
|
warning
|
ESXHostStorage
|
vprob.vmfs.journal.createfailed| No space for journal
on volume {1} ({2}). Opening volume in read-only metadata mode with
limited write support.
Since 4.0 Reference
|
vprob.vmfs.lock.corruptondisk
|
error
|
ESXHostStorage
|
vprob.vmfs.lock.corruptondisk| At least one corrupt
on-disk lock was detected on volume {1} ({2}). Other regions of the
volume may be damaged too.
Since 4.0 Reference
|
vprob.vmfs.nfs.server.disconnect
|
error
|
ESXHostStorage
|
vprob.vmfs.nfs.server.disconnect| Lost connection to
server {1} mount point {2} mounted as {3} ({4}).
Since 4.0 Reference
|
vprob.vmfs.nfs.server.restored
|
info
|
ESXHostStorage
|
vprob.vmfs.nfs.server.restored| Restored connection
to server {1} mount point {2} mounted as {3} ({4}).
Since 4.0 Reference
|
vprob.vmfs.resource.corruptondisk
|
error
|
ESXHostStorage
|
vprob.vmfs.resource.corruptondisk| At least one
corrupt resource metadata region was detected on volume {1} ({2}).
Other regions of the volume may be damaged too.
Since 4.0 Reference
|
vprob.vmfs.volume.locked
|
error
|
ESXHostStorage
|
vprob.vmfs.volume.locked| Volume on device {1}
locked, possibly because remote host {2} encountered an error
during a volume operation and could not recover.
Since 4.0 Reference
|
WarningUpgradeEvent
|
warning
|
VC
|
{message}
Since 2.0 Reference
|
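The message templates above use brace-delimited placeholders such as {vm.name}, {host.name}, and {reason.msg}, which are filled in from the event's properties when the message is rendered. As a minimal sketch of that expansion (a hypothetical helper, not part of any VMware SDK), assuming the event properties are available as a flat key/value map:

```python
import re

def expand_event_message(template: str, props: dict) -> str:
    """Replace {key.path} placeholders with values from a flat property map.

    Placeholders with no matching property are left intact, so partially
    populated events still render something readable.
    """
    def repl(match: re.Match) -> str:
        key = match.group(1)
        return str(props.get(key, match.group(0)))

    return re.sub(r"\{([^{}]+)\}", repl, template)

# Example using the VmMigratedEvent template from the table above:
msg = expand_event_message(
    "Migration of virtual machine {vm.name} from {sourceHost.name} "
    "to {host.name} completed",
    {"vm.name": "web01", "sourceHost.name": "esx-a", "host.name": "esx-b"},
)
```

Note that placeholders like {reason.@enum.VmShutdownOnIsolationEvent.Operation} refer to enum members rather than plain string properties, so a real renderer would map those through the corresponding enum's localized labels first.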