Pages

Friday, August 9, 2019

Creating Visio-Like diagrams for free using VMware stencils



As a MacBook user, I get a little upset when I need to create a Visio diagram, because there is no Visio for Mac. What do you do? Normally I need to power on a Windows VM. Also, many official and unofficial stencils, templates, and related assets were created for Visio. Finally, many master diagrams were created for Visio.

Well, not anymore for me.

I found an amazing tool called draw.io. I have used this tool in the past through the online version, but I discovered a better way to create Visio-like diagrams, even using exactly the same icons and stencils.
Here’s one I made yesterday as a VMC on AWS template:




Here is the draw.io source file for this diagram. My plan is to share all my draw.io files on my public github.com soon. Meanwhile, you can download the .drawio file from here.

The amazing thing is that draw.io supports many formats, even VSSX files, the format for Visio stencil templates. So any VSSX file can be imported.

First I needed to import stencils from these places (just using the Import from File menu option):
I don’t see this as a full replacement for Visio. However, last time I just wanted to sketch up a quick concept for our VMware cluster and VMC on AWS architecture, and it was great for that.

Also, draw.io has announced its desktop clients, which we can download directly from here:

Draw.io has the advantage of including images from Amazon, Microsoft Azure, Veeam, and many others.

Brilliant work, guys. Did I tell you it's free? Goodbye, Visio.

Regards


Thursday, July 25, 2019

VCD 9.7 Custom Branding Logo per-tenant



With the release of vCloud Director 9.7 you can set the logo and the theme for your vCloud Director Service Provider Admin Portal, and now you can also customize the vCloud Director Tenant Portal of each tenant.

 

Provider Portal Branding

The vCloud Director 9.7 UI can be modified for the following elements:
  • Portal name
  • Portal color
  • Portal theme (vCloud Director contains two themes – default and dark.)
  • Logo & Browser icon

 

Customize Portal Name, Portal Color and Portal Theme

To configure the Cloud Provider Portal branding, make a PUT request to the vCloud Director endpoint for the tenant organization as below (T1 is my org name):

  • Headers
    • Accept: application/*;version=32
    • Content-Type: application/json
  • PUT https://<vCD Url>/cloudapi/branding/tenant/T1
  • BODY
    {
      "portalName": "Private Cloud",
      "portalColor": "#009AD9",
      "selectedTheme": {
        "themeType": "BUILT_IN",
        "name": "Default"
      },
      "customLinks": [
        {
          "name": "help",
          "menuItemType": "override",
          "url": "http://www.vlabware.com"
        }
      ]
    }
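If you prefer scripting over Postman, the same request can be sketched in Python with only the standard library. This is just a sketch: the vCD host name and the session token below are placeholders for your own environment, and the Bearer-style Authorization header is my assumption about how your session is authenticated.

```python
import json
import urllib.request

VCD_HOST = "vcd.example.com"   # placeholder: your vCD endpoint
TOKEN = "<session-token>"      # placeholder: a valid vCD session token

body = {
    "portalName": "Private Cloud",
    "portalColor": "#009AD9",
    "selectedTheme": {"themeType": "BUILT_IN", "name": "Default"},
    "customLinks": [
        {"name": "help", "menuItemType": "override", "url": "http://www.vlabware.com"}
    ],
}

# Build the PUT request; urllib.request.urlopen(req) would actually send it
req = urllib.request.Request(
    url=f"https://{VCD_HOST}/cloudapi/branding/tenant/T1",
    method="PUT",
    headers={
        "Accept": "application/*;version=32",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",  # assumption: JWT-style session auth
    },
    data=json.dumps(body).encode("utf-8"),
)
print(req.get_method(), req.full_url)
```

Building the request separately from sending it lets you print and double-check the URL and headers before touching a live cell.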

 

Customize Logo

To change the logo, you need to define the headers and make a PUT request.
  • Headers
    • Accept: image/*;version=32
    • Content-Type: image/png
Note: Unfortunately, some references like VMTECHIE have the Content-Type field wrong, because they add an extra ";version=32". If you use it, you will receive this message in the Chrome console:

Refused to load the image ‘unsafe:data:image…’ because it violates the following Content Security Policy directive: “img-src * data: blob: ‘unsafe-inline’”.



But using the "Content-Type" header with only "image/png", it works well for each tenant.
  • PUT https://<vCD Url>/cloudapi/branding/logo 

  • Body – This is a bit tricky since we need to upload an image as the body.
    • In the Postman client, inside “Body” click on “Binary”, which will allow you to choose a file as the body, and select your logo.
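The same upload can be sketched without Postman, again with only the standard library. The host and token are placeholders, the PNG bytes below are just a stand-in for your real logo file, and the Authorization header is my assumption:

```python
import urllib.request

VCD_HOST = "vcd.example.com"  # placeholder: your vCD endpoint
TOKEN = "<session-token>"     # placeholder: a valid vCD session token

# In practice: logo_bytes = open("logo.png", "rb").read()
logo_bytes = b"\x89PNG\r\n\x1a\n"  # PNG signature as a stand-in payload

req = urllib.request.Request(
    url=f"https://{VCD_HOST}/cloudapi/branding/logo",
    method="PUT",
    headers={
        "Accept": "image/*;version=32",
        "Content-Type": "image/png",  # note: no extra ";version=32" here
        "Authorization": f"Bearer {TOKEN}",  # assumption: JWT-style session auth
    },
    data=logo_bytes,                  # the raw image bytes are the request body
)
print(req.get_method(), req.full_url)
```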


    For a particular tenant, you can selectively override the default logo. Any value that you do not set uses the corresponding system default value.

    By default, no org-specific branding is shown outside of a logged-in session, which means it does not appear on the login and logout pages. Per-tenant branding is not shown outside a logged-in session because that would make it possible for tenants to "discover" one another.

    If you wish to allow branding outside of logged-in sessions, you can use the cell management tool to execute the following command:

     /opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n backend.branding.requireAuthForBranding -v false
    

    The result of the command is:


    Here is the result on the login page:

    Inside you can see this:



    Amazing.

    Regards



Tuesday, July 23, 2019

What is HCX Multi Site Services Mesh?


What is HCX Multi Site Services Mesh?

The Multi-Site Service Mesh enables the configuration, deployment, and serviceability of Interconnect virtual appliance pairs with ease. You now have the choice to deploy and manage HCX services with the traditional Interconnect interface or with the new Multi-Site Service Mesh. To deploy the HCX IX appliances you can choose either method.

Before you plan to use the HCX Multi-Site Service Mesh, let’s have a look at a few of the benefits we get from this feature:
  • Uniformity: the same configuration patterns at the source and remote sites.
  • Re-usability: Once a compute profile is created it can be used to connect to multiple HCX sites. Hence the site administrator need not define the same things again and again.
  • Multisite Ready: Compute Profiles and Network Profiles can be shared across multiple sites.
  • Ease of reconfiguration: New capability to pool datastores or modify them post-Interconnect deployment.
  • Scale-out deployment: The HCX-IX can be deployed per cluster or a single HCX-IX can be shared across multiple clusters.
Apart from that, there are a few usability enhancements that have been introduced:
  • Improved interfaces display clear deployment diagrams.
  • New task-tracking features give step-by-step details of the progress of operations.
  • Preview of required firewall rules for ease of configuration.
Typically, a compute profile looks as shown in the image below.



Once the compute profile is created on both the cloud side and on-prem, we initiate the service mesh creation from the on-prem side. A service mesh can’t be created from the cloud side.

During service mesh creation we map the on-prem compute/network profiles with the profiles created on the cloud side. Once the service mesh mapping is done, we can initiate the deployment of the IX appliances.

Once the appliances are deployed on both the on-prem and cloud sides, we can start consuming the HCX services.

Regards

Friday, August 3, 2018

vExpert 2018 Award Announcement

I am happy to be selected again as part of vExpert 2018; I'm a vExpert for the second year in a row.



https://blogs.vmware.com/vmtn/2018/03/vexpert-2018-award-announcement.html

Proud to be part of this great group again this year; the chance to be in touch with top experts in the field will be helpful for sharing and improving my experience.



vExpert Program Benefits

  • Invite to our private #Slack channel

  • vExpert certificate signed by our CEO Pat Gelsinger.

  • Private forums on communities.vmware.com.

  • Permission to use the vExpert logo on cards, websites, etc. for one year

  • Access to a private directory for networking, etc.

  • Exclusive gifts from various VMware partners.

  • Private webinars with VMware partners as well as NFRs.

  • Access to private betas (subject to admission by beta teams).

  • 365-day eval licenses for most products for home lab / cloud providers.

  • Private pre-launch briefings via our blogger briefing pre-VMworld (subject to admission by product teams)

  • Blogger early access program for vSphere and some other products.

  • Featured in a public vExpert online directory.

  • Access to vetted VMware & Virtualization content for your social channels.

  • Yearly vExpert parties at both VMworld US and VMworld Europe events.

  • Identification as a vExpert at both VMworld US and VMworld EU.
Congratulations to all new and returning vExperts.

Regards

Tuesday, July 31, 2018

vRA 7.3.1 Upgrade Issue (401 error in the Infrastructure tab)

After a recent upgrade of VMware vRealize Automation from 7.3.0 to 7.3.1, I found 401 errors appearing in the Infrastructure tab. After a little while I found the solution.

In the Web_Admin_All.log located in C:\Program Files (x86)\VMware\vCAC\Server\Website\Logs on the IaaS web server, you see errors similar to:
[UTC:2016-03-31 18:18:00 Local:2016-03-31 12:18] 
[Error]: [sub-thread-Id="21" context token] Error occurred writing to the repository tracking log 
System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException:
The handshake failed due to an unexpected packet format. 
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) --- End of inner exception stack trace --- at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context) at System.Net.HttpWebRequest.GetRequestStream() at System.Data.Services.Client.ODataRequestMessageWrapper.SetRequestStream(ContentStream requestStreamContent) at System.Data.Services.Client.BatchSaveResult.BatchRequest() at System.Data.Services.Client.DataServiceContext.SaveChanges(SaveChangesOptions options) at DynamicOps.Repository.RepositoryServiceContext.SaveChanges(SaveChangesOptions options) at DynamicOps.Repository.Tracking.RepoLoggingSingleton.WriteExceptionToLogs(String message, Exception exceptionObject, Boolean writeAsWarning) INNER EXCEPTION: System.IO.IOException: The handshake failed due to an unexpected packet format. at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at 
System.Net.ConnectStream.WriteHeaders(Boolean async)

Note: The preceding log is only an example. Date, time, and environmental variables may vary depending on your environment.

In the web.config file for the web administration service located in C:\Program Files (x86)\VMware\vCAC\Server\Website on the IaaS web server, the repository address is set to localhost on port 80, similar to: <add key="repositoryAddress" value="https://localhost:80/repository/" />

Under some circumstances, the web.config file can be updated with an invalid URL during the update of the IaaS web services. To resolve this issue, update the web.config with the correct URL using the following procedure.

Note: If there is more than one IaaS web server, this procedure must be completed on all nodes.

Solution


  1. Log in to the IaaS web server and navigate to the location of the C:\Program Files (x86)\VMware\vCAC\Server\Website\web.config file.

  2. Back up the website web.config file.

  3. Change the repository address to use the appropriate FQDN for the Model Manager Website, which resides on the IaaS web server(s), similar to the following example: <add key="repositoryAddress" value="https://<IaaS Web FQDN>:443/repository/" /> If there is only a single server, this will likely be the FQDN of the host. If there is more than one server, a VIP FQDN pointing to a load balancer will likely be in use.

  4. Run iisreset from an administrative command prompt to restart the service.
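With multiple nodes, step 3 can be scripted. Here is a minimal Python sketch of the substitution only (the FQDN is a placeholder, and you should still back up web.config first, as in step 2):

```python
import re

NEW_ADDRESS = "https://iaas-web.example.com:443/repository/"  # placeholder FQDN

def fix_repository_address(config_text: str, new_address: str) -> str:
    """Rewrite the value of the repositoryAddress key in web.config text."""
    return re.sub(
        r'(<add key="repositoryAddress" value=")[^"]*(")',
        lambda m: m.group(1) + new_address + m.group(2),
        config_text,
    )

# Example on the broken value described above:
broken = '<add key="repositoryAddress" value="https://localhost:80/repository/" />'
print(fix_repository_address(broken, NEW_ADDRESS))
```

Using a replacement function instead of a backreference string avoids any escaping surprises if the URL ever contains backslashes.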
Reference: https://kb.vmware.com/s/article/2144965

Regards

Tuesday, January 16, 2018

F5 LB common misconfigurations for vRA 7.x

Working with some customers to build vRealize Automation 7.x in production environments, I have had some problems configuring F5 load balancers. Although these errors affected F5, the same mistakes can affect other load balancers as well. These recommendations are based on my own experience, but I also based this article on other blog posts; I just tried to make a summary.

1- Using the load balancer VIP for the initial installation

Please don't try to use the load balancer VIP during the vRA installation. While this will work if set up perfectly, a small mistake in the VIP configuration can make the installation and configuration of vRealize Automation feel impossible. Instead, I would recommend creating the VIP DNS record and pointing it to your first node. Complete your vRA installation and configuration, and only after confirming your setup is stable and fully installed, point your VIP DNS record to your actual VIP IP. This will make your installation go much smoother and give you a much easier path to troubleshooting if you made a mistake during load balancer configuration.



2- Leaving the vRA Virtual Servers load balancing type at “Standard”

F5 load balancers usually offer three Virtual Server load balancing types: “Standard”, “Performance Layer 4”, and “Layer 7”. By default, the F5 vRA Virtual Servers are configured with the “Standard” load balancing type, which does not work well with vRealize Automation. I have seen the network team leave this parameter at the default value of “Standard”, causing vRealize Automation to fail. Below are sample errors faced when using the “Standard” load balancing type:

“Error processing ping response Unable to connect to the remote server Inner Exception: Unable to connect to the remote server”

“Error processing ping response System.Data.Services.Client.DataServiceTransportException: Unable to connect to the remote server —> System.Net.WebException: Unable to connect to the remote server —> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it :443”

The recommended configuration for the F5 Virtual Server load balancing type is “Performance Layer 4”, and using any other type can cause issues. I would recommend sticking with the supported, recommended, and tested configuration here.

3- Forgetting to set the Protocol Profile (Client) to “fastL4”

Not setting the Protocol Profile (Client) to “fastL4” in the F5 can cause issues similar to the ones seen in the point above. Same bad result.

4- Leaving the HTTP Profile at the default “http” in Virtual Servers

By default, F5 is configured with the HTTP Profile “http”, which does not work well with vRealize Automation. The correct value is "None". The behaviour when leaving this setting at "http" is undefined; sometimes it works, sometimes it does not. It looks unstable. With "None" the F5 works normally.




I hope this helps some of you fix issues caused by F5 load balancer configuration when creating a vRA distributed environment.

Regards

Thursday, January 4, 2018

Exam VCP7-CMA (2V0-731) passed

Last month I sat the VCP7-CMA exam (or 2V0-731 as it is affectionately known). The exam is new, but I wanted to give it a shot while I had the chance and before other things consumed the diary.

I got a 335 score; it was close but I managed to pass. For me it was way tougher than VCP6-NV. I had already taken VCP6-CMA last year (2016), but when I saw that VCP7-CMA had been created, I decided to do it. After postponing twice, I was finally able to take it. It was a challenge because, I confess, I almost didn't study; even one day before, I tried to postpone one more time, but I couldn't.

I studied by reading some PDFs from the documentation (reference architecture, foundations, installing, configuring, managing), but I still got caught off guard with stuff like business management and composite blueprints. You should pay special attention to XaaS and vRO topics. I recommend this guide.



The exam is 85 questions in 120 minutes (for non-native English speakers). I used only 80 minutes. On the questions I didn't know or had doubts about, I didn't stop for too long.

Be careful, because the exam is based on vRA 7.2 rather than the latest 7.3, so some little things are different.

One last recommendation: if you want to use an exam dump, be careful; they are all wrong, they have many wrong answers, and they differ among themselves. I preferred to study instead.

Now I'm going for VCAP7-CMA (3V0-732).

Regards and good luck.

Tuesday, December 6, 2016

How to copy VMs directly between ESXi Hosts using ovftool

I needed to copy a virtual machine from one host to another; if you do not have shared storage, this can sometimes be a little difficult. In my HomeLab I have two hosts (micro servers), and I wished to copy, not move, the VM between them. I could leverage tools like VMware Converter, or export the VM to OVF and then re-import it into the destination host, but that could take a while, or I would have to run a Windows system (which I have but don't like). If you are looking for a quick and easy way to copy a VM from one host to another, try using ovftool (yes, I know that PowerCLI now works on a Mac, I even have it, but that topic will be another article on my blog).

My HomeLab's first host has the IP address 192.168.1.80 (the source), and the target host has the address 192.168.1.81.

I had used ovftool before to convert a VMware Fusion VM to an OVF file. But when I tried to use it to export VMs, I got this error:

./ovftool vi://root@192.168.1.80
Segmentation fault: 11


Because I found some references where people had used ovftool to export VMs, like the virtuallyGhetto site, I guessed my problem was the version.

I checked that my Mac had version 3.5.2:

./ovftool -v VMware ovftool 3.5.2 (build-1880279)


I looked for a newer version, and I found version 4.2.0 available on the VMware site (the VMware-ovftool-4.2.0-4586971-mac.x64.dmg file). After installing the new version, I checked again:

./ovftool -v VMware ovftool 4.2.0 (build-4586971)


We are ready. First, I need to check the list of VMs on the source host:

./ovftool vi://root@192.168.1.80
Enter login information for source vi://192.168.1.80/
Username: root
Password: ********
Error: Found wrong kind of object (ResourcePool). Possible completions are: 
  VMware vCenter Orchestrator Appliance
  VMware vCenter Server 6
  VMware vCenter Orchestrator Appliance OTB
  vRealize Infrastructure Navigator
  VMware vRealize Appliance 7.0
  Redhat Enterprise Linux 7.2 x86_64


I chose "vRealize Infrastructure Navigator", just because it's the smallest one. I also need to define the datastore on the target host; I chose the datastore_1_Server1 datastore. We are ready, go:

./ovftool -ds=datastore_1_Server1  vi://root@192.168.1.80/vRealize\ Infrastructure\ Navigator vi://root@192.168.1.81
Enter login information for source vi://192.168.1.80/
Username: root
Password: ********
Opening VI source: vi://root@192.168.1.80:443/vRealize%20Infrastructure%20Navigator
Opening VI target: vi://root@192.168.1.81:443/
Deploying to VI: vi://root@192.168.1.81:443/
Transfer Completed                    
Completed successfully


Also, you can create a simple script to do the task for each VM on the source host; in my case it could be:
#!/bin/bash

OVFTOOL="/Applications/VMware OVF Tool/ovftool"
OIFS="$IFS"
IFS=$'\n'
# One VM name per line, since the names contain spaces
VMs="VMware vCenter Orchestrator Appliance
VMware vCenter Orchestrator Appliance OTB"

for vm in ${VMs}; do
   echo "Copying: ${vm}"
   "$OVFTOOL" -ds=datastore_1_Server1 "vi://root:VMware1!@192.168.1.80/${vm}" "vi://root:VMware1!@192.168.1.81"
done

IFS="$OIFS"
Done. The VMs were copied. Remember, with this method you copy the VM, not move it. So after you are sure the VM was copied fine and you have tested that the new copy works, you can remove the old VM from the source host.

Regards

Wednesday, November 30, 2016

Upgrading from ESXi 5.5 to ESXi 6.x via SSH using esxcli

If you want to upgrade your ESXi 5.5 server to ESXi 6.x, you can do it using the install ISO file. However, it is also possible to perform the upgrade from 5.5 to 6.0 via SSH and esxcli.

To upgrade from ESXi 5.5 to 6.0 using esxcli:

1. Shut down all VMs running on your ESXi host machine.

2. Connect via SSH and run the following command to enter maintenance mode:
vim-cmd /hostsvc/maintenance_mode_enter 


3. After putting ESXi into maintenance mode, run the following command to set the correct firewall rules for the httpClient:
esxcli network firewall ruleset set -e true -r httpClient


4. Next, run the following command to list the available ESXi 6.x updates. You want the latest one that ends in “-standard” for your version of VMware. In my case I want version 6.0.0 with updates from 2016.

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.0.0-2016

ESXi-6.0.0-20160804001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20161104001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160504001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160302001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20161004001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160301001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20161104001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20161004001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20161101001s-standard  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160101001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160104001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160801001s-standard  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160204001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160101001s-standard  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160504001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20161101001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160302001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160804001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160301001s-standard  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160801001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160104001-standard   VMware, Inc.  PartnerSupported
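If you want to script the choice, the build date is embedded in each profile name, so the “-standard” profiles sort chronologically as plain strings. A small illustrative Python sketch over a trimmed copy of the listing above:

```python
# Lines as returned by "esxcli software sources profile list" (trimmed sample)
listing = """\
ESXi-6.0.0-20160804001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20161104001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160504001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160302001-no-tools   VMware, Inc.  PartnerSupported
"""

# Keep only the "-standard" image profiles and sort by the embedded date
standard = sorted(
    line.split()[0]
    for line in listing.splitlines()
    if line.split()[0].endswith("-standard")
)
latest = standard[-1]
print(latest)  # ESXi-6.0.0-20161104001-standard
```

In my case I deliberately picked an older profile (ESXi-6.0.0-20160302001-standard), so treat this as a helper for reading the list, not a rule.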


5. Once you’ve identified the correct version (in my case, ESXi-6.0.0-20160302001-standard), run the following command to download and install the update.
Note: It is very important that you run esxcli software profile update here. Running esxcli software profile install may overwrite drivers that your ESXi host needs.

esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.0.0-20160302001-standard 

Update Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: VMWARE_bootbank_mtip32xx-native_3.8.5-1vmw.600.0.0.2494585, VMware_bootbank_ata-pata-amd_0.3.10-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.600.0.0.2494585, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-via_0.3.3-2vmw.600.0.0.2494585, VMware_bootbank_block-cciss_3.6.14-10vmw.600.0.0.2494585, VMware_bootbank_cpu-microcode_6.0.0-0.0.2494585, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.600.2.34.3620759, VMware_bootbank_elxnet_10.2.309.6v-1vmw.600.0.0.2494585, VMware_bootbank_emulex-esx-elxnetcli_10.2.309.6v-0.0.2494585, VMware_bootbank_esx-base_6.0.0-2.34.3620759, VMware_bootbank_esx-dvfilter-generic-fastpath_6.0.0-0.0.2494585, VMware_bootbank_esx-tboot_6.0.0-2.34.3620759, VMware_bootbank_esx-xserver_6.0.0-0.0.2494585, VMware_bootbank_ima-qla4xxx_2.02.18-1vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.600.0.0.2494585, VMware_bootbank_lpfc_10.2.309.8-2vmw.600.0.0.2494585, VMware_bootbank_lsi-mr3_6.605.08.00-7vmw.600.1.17.3029758, VMware_bootbank_lsi-msgpt3_06.255.12.00-8vmw.600.1.17.3029758, VMware_bootbank_lsu-hp-hpsa-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-2vmw.600.0.11.2809209, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-2vmw.600.0.11.2809209, VMware_bootbank_lsu-lsi-mpt2sas-plugin_1.0.0-4vmw.600.1.17.3029758, VMware_bootbank_lsu-lsi-mptsas-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.600.0.0.2494585, VMware_bootbank_misc-drivers_6.0.0-2.34.3620759, 
VMware_bootbank_net-bnx2_2.2.4f.v60.10-1vmw.600.0.0.2494585, VMware_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.600.0.0.2494585, VMware_bootbank_net-cnic_1.78.76.v60.13-2vmw.600.0.0.2494585, VMware_bootbank_net-e1000_8.0.3.1-5vmw.600.0.0.2494585, VMware_bootbank_net-e1000e_3.2.2.1-1vmw.600.1.26.3380124, VMware_bootbank_net-enic_2.1.2.38-2vmw.600.0.0.2494585, VMware_bootbank_net-forcedeth_0.61-2vmw.600.0.0.2494585, VMware_bootbank_net-igb_5.0.5.1.1-5vmw.600.0.0.2494585, VMware_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-core_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-en_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_net-nx-nic_5.0.621-5vmw.600.0.0.2494585, VMware_bootbank_net-tg3_3.131d.v60.4-2vmw.600.1.26.3380124, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.600.2.34.3620759, VMware_bootbank_nmlx4-core_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_nmlx4-en_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_nmlx4-rdma_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_nvme_1.0e.0.35-1vmw.600.2.34.3620759, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_qlnativefc_2.0.12.0-5vmw.600.0.0.2494585, VMware_bootbank_rste_2.0.2.0088-4vmw.600.0.0.2494585, VMware_bootbank_sata-ahci_3.0-22vmw.600.2.34.3620759, VMware_bootbank_sata-ata-piix_2.12-10vmw.600.0.0.2494585, VMware_bootbank_sata-sata-nv_3.5-4vmw.600.0.0.2494585, VMware_bootbank_sata-sata-promise_2.12-3vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil24_1.1-1vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil_2.3-4vmw.600.0.0.2494585, VMware_bootbank_sata-sata-svw_2.3-3vmw.600.0.0.2494585, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.600.0.0.2494585, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.600.0.0.2494585, VMware_bootbank_scsi-aic79xx_3.1-5vmw.600.0.0.2494585, VMware_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.600.0.0.2494585, VMware_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.600.0.11.2809209, VMware_bootbank_scsi-fnic_1.5.0.45-3vmw.600.0.0.2494585, 
VMware_bootbank_scsi-hpsa_6.0.0.44-4vmw.600.0.0.2494585, VMware_bootbank_scsi-ips_7.12.05-4vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.600.0.0.2494585, VMware_bootbank_scsi-mptsas_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mptspi_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.600.0.0.2494585, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_vsan_6.0.0-2.34.3563498, VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.2.34.3544323, VMware_bootbank_xhci-xhci_1.0-3vmw.600.2.34.3620759, VMware_locker_tools-light_6.0.0-2.34.3620759
   VIBs Removed: VMware_bootbank_ata-pata-amd_0.3.10-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.550.0.0.1331820, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.550.0.0.1331820, VMware_bootbank_ata-pata-via_0.3.3-2vmw.550.0.0.1331820, VMware_bootbank_block-cciss_3.6.14-10vmw.550.0.0.1331820, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.550.0.0.1331820, VMware_bootbank_elxnet_10.0.100.0v-1vmw.550.0.0.1331820, VMware_bootbank_esx-base_5.5.0-2.33.2068190, VMware_bootbank_esx-dvfilter-generic-fastpath_5.5.0-0.0.1331820, VMware_bootbank_esx-tboot_5.5.0-2.33.2068190, VMware_bootbank_esx-xlibs_5.5.0-0.0.1331820, VMware_bootbank_esx-xserver_5.5.0-0.0.1331820, VMware_bootbank_ima-qla4xxx_2.01.31-1vmw.550.0.0.1331820, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.550.0.0.1331820, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.550.0.0.1331820, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.550.0.0.1331820, VMware_bootbank_lpfc_10.0.100.1-1vmw.550.0.0.1331820, VMware_bootbank_lsi-mr3_0.255.03.01-2vmw.550.1.16.1746018, VMware_bootbank_lsi-msgpt3_00.255.03.03-1vmw.550.1.15.1623387, VMware_bootbank_misc-cnic-register_1.72.1.v50.1i-1vmw.550.0.0.1331820, VMware_bootbank_misc-drivers_5.5.0-2.33.2068190, VMware_bootbank_mtip32xx-native_3.3.4-1vmw.550.1.15.1623387, VMware_bootbank_net-be2net_4.6.100.0v-1vmw.550.0.0.1331820, VMware_bootbank_net-bnx2_2.2.3d.v55.2-1vmw.550.0.0.1331820, VMware_bootbank_net-bnx2x_1.72.56.v55.2-1vmw.550.0.0.1331820, VMware_bootbank_net-cnic_1.72.52.v55.1-1vmw.550.0.0.1331820, VMware_bootbank_net-e1000_8.0.3.1-3vmw.550.0.0.1331820, VMware_bootbank_net-e1000e_1.1.2-4vmw.550.1.15.1623387, VMware_bootbank_net-enic_1.4.2.15a-1vmw.550.0.0.1331820, VMware_bootbank_net-forcedeth_0.61-2vmw.550.0.0.1331820, 
VMware_bootbank_net-igb_5.0.5.1.1-1vmw.550.1.15.1623387, VMware_bootbank_net-ixgbe_3.7.13.7.14iov-11vmw.550.0.0.1331820, VMware_bootbank_net-mlx4-core_1.9.7.0-1vmw.550.0.0.1331820, VMware_bootbank_net-mlx4-en_1.9.7.0-1vmw.550.0.0.1331820, VMware_bootbank_net-nx-nic_5.0.621-1vmw.550.0.0.1331820, VMware_bootbank_net-tg3_3.123c.v55.5-1vmw.550.2.33.2068190, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.550.0.0.1331820, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.550.0.0.1331820, VMware_bootbank_qlnativefc_1.0.12.0-1vmw.550.0.0.1331820, VMware_bootbank_rste_2.0.2.0088-4vmw.550.1.15.1623387, VMware_bootbank_sata-ahci_3.0-20vmw.550.2.33.2068190, VMware_bootbank_sata-ata-piix_2.12-10vmw.550.2.33.2068190, VMware_bootbank_sata-sata-nv_3.5-4vmw.550.0.0.1331820, VMware_bootbank_sata-sata-promise_2.12-3vmw.550.0.0.1331820, VMware_bootbank_sata-sata-sil24_1.1-1vmw.550.0.0.1331820, VMware_bootbank_sata-sata-sil_2.3-4vmw.550.0.0.1331820, VMware_bootbank_sata-sata-svw_2.3-3vmw.550.0.0.1331820, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.550.0.0.1331820, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.550.0.0.1331820, VMware_bootbank_scsi-aic79xx_3.1-5vmw.550.0.0.1331820, VMware_bootbank_scsi-bnx2fc_1.72.53.v55.1-1vmw.550.0.0.1331820, VMware_bootbank_scsi-bnx2i_2.72.11.v55.4-1vmw.550.0.0.1331820, VMware_bootbank_scsi-fnic_1.5.0.4-1vmw.550.0.0.1331820, VMware_bootbank_scsi-hpsa_5.5.0-44vmw.550.0.0.1331820, VMware_bootbank_scsi-ips_7.12.05-4vmw.550.0.0.1331820, VMware_bootbank_scsi-lpfc820_8.2.3.1-129vmw.550.0.0.1331820, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.550.0.0.1331820, VMware_bootbank_scsi-megaraid-sas_5.34-9vmw.550.2.33.2068190, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.550.0.0.1331820, VMware_bootbank_scsi-mpt2sas_14.00.00.00-3vmw.550.1.15.1623387, VMware_bootbank_scsi-mptsas_4.23.01.00-9vmw.550.0.0.1331820, VMware_bootbank_scsi-mptspi_4.23.01.00-9vmw.550.0.0.1331820, VMware_bootbank_scsi-qla2xxx_902.k1.1-9vmw.550.0.0.1331820, 
VMware_bootbank_scsi-qla4xxx_5.01.03.2-6vmw.550.0.0.1331820, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.550.0.0.1331820, VMware_locker_tools-light_5.5.0-2.33.2068190
VIBs Skipped: VMware_bootbank_esx-ui_1.0.0-3617585


6. Once the update has been installed and you are prompted to reboot, run the following command to restart:

reboot


7. After your ESXi host restarts, connect via SSH and run the following command to exit maintenance mode:

vim-cmd /hostsvc/maintenance_mode_exit
'vim.Task:haTask-ha-host-vim.HostSystem.exitMaintenanceMode-140'


At this point, your ESXi host should be upgraded to ESXi 6.0.0.

Regards