VMware ESXi
{{Short description|Enterprise-class, type-1 hypervisor for deploying and serving virtual computers}}
{{Use dmy dates|date=April 2016}}
{{Infobox software
| name = VMware ESXi
| logo =
| screenshot = VMwareESXiHostClientSummary.png
| caption =
| developer = VMware (Broadcom)
| released = {{Start date and age|2001|03|23|df=no}}
| latest release version = 8.0 Update 3e (Build 24674464){{cite web|url=https://knowledge.broadcom.com/external/article/316595/build-numbers-and-versions-of-vmware-esx.html|title=Build numbers and versions of VMware ESXi/ESX }}
| latest release date = {{Start date and age|2025|04|10|df=no}}{{cite web|title=VMware ESXi 8.0 Update 3c Release Notes|url=https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/8-0/release-notes/esxi-update-and-patch-release-notes/vsphere-esxi-80u3c-release-notes.html}}
| operating system =
| platform = IA-32 (x86-32) (discontinued in 4.0 onwards),{{cite web
| title = VMware ESX 4.0 only installs and runs on servers with 64bit x86 CPUs. 32bit systems are no longer supported.
| url = https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009080
| publisher = VMware, Inc.}} x86-64, ARM{{cite web
| title = Announcing the ESXi-ARM Fling.
| url = https://blogs.vmware.com/vsphere/2020/10/announcing-the-esxi-arm-fling.html
| publisher = VMware, Inc. }}
| genre = Native hypervisor (type 1)
| license = Proprietary
| website = {{URL|https://www.vmware.com/products/esxi-and-esx.html}}
}}
VMware ESXi (formerly ESX) is an enterprise-class, type-1 hypervisor developed by VMware, a subsidiary of Broadcom, for deploying and serving virtual computers. As a type-1 hypervisor, ESXi is not a software application that is installed on an operating system (OS); instead, it includes and integrates vital OS components, such as a kernel.{{cite web
| url = https://www.vmware.com/support/esx21/doc/esx21_admin_system_architecture.html
| title = ESX Server Architecture
| publisher = VMware
| access-date = 2009-10-22
| archive-url = https://web.archive.org/web/20091107080723/https://www.vmware.com/support/esx21/doc/esx21_admin_system_architecture.html
| archive-date = 2009-11-07}}
After version 4.1 (released in 2010), VMware renamed ESX to ESXi. ESXi replaces Service Console (a rudimentary operating system) with a more closely integrated OS. ESX/ESXi is the primary component in the VMware Infrastructure software suite.[https://www.vmware.com/products/esxi-and-esx/overview.html VMware:vSphere ESX and ESXi Info Center]
The name ESX originated as an abbreviation of Elastic Sky X.{{cite web
| url = http://vmfaq.com/index.php?View=entry&EntryID=32
| title = What does ESX stand for?
| access-date = 3 October 2014
| archive-url = https://web.archive.org/web/20141220115744/http://vmfaq.com/index.php?View=entry&EntryID=32
| archive-date = 20 December 2014
| url-status = dead
| df = dmy-all}}{{cite web
| url = https://www.vmware.com/support/developer/studio/studio25/studio_developer.pdf
| title = Glossary
| year = 2011
| work = Developer's Guide to Building vApps and Virtual Appliances: VMware Studio 2.5
| publisher = VMware
| location = Palo Alto
| page = 153
| access-date = 2011-11-09}} In September 2004, the replacement for ESX was internally called VMvisor, but later changed to ESXi (as the "i" in ESXi stood for "integrated").{{Cite news
|url = http://up2v.nl/2014/05/12/did-you-know-vmware-elastic-sky-x-esx-was-once-called-scaleable-server/
|title = Did you know VMware Elastic Sky X (ESX) was once called 'Scaleable Server'?
|date = 2014-05-12
|work = UP2V
|access-date = 2018-05-09
|language = en-US
|archive-date = 10 June 2019
|archive-url = https://web.archive.org/web/20190610141027/http://up2v.nl/2014/05/12/did-you-know-vmware-elastic-sky-x-esx-was-once-called-scaleable-server/
|url-status = dead}}{{cite web
| url = https://www.vladan.fr/vmware-esxi-was-created-by-a-french-guy/
| title = VMware ESXi was created by a French guy !!! {{!}} ESX Virtualization
| date = 2009-09-26
| work = ESX Virtualization
| access-date = 2018-05-09
| language = en-US}}
Architecture
ESX runs on bare metal (without running an operating system)[https://www.vmware.com/pdf/esx_datasheet.pdf "ESX Server Datasheet"] unlike other VMware products.{{cite web |url= https://www.vmware.com/support/esx21/doc/esx21_admin_system_architecture.html |title=ESX Server Architecture |publisher=Vmware.com |access-date=2009-07-01 |archive-url = https://web.archive.org/web/20070929084239/https://www.vmware.com/support/esx21/doc/esx21_admin_system_architecture.html |archive-date = 29 September 2007}} It includes its own kernel. In the historic VMware ESX, a Linux kernel was started first{{cite web |url= https://www.youtube.com/watch?v=AJ5yM_kdhUk |archive-url=https://ghostarchive.org/varchive/youtube/20211213/AJ5yM_kdhUk |archive-date=2021-12-13 |url-status=live|title=ESX machine boots |publisher=Video.google.com.au |date=12 June 2006 |access-date=2009-07-01}}{{cbignore}} and then used to load a variety of specialized virtualization components, including ESX, which is otherwise known as the vmkernel component.{{cite web |url=https://communities.vmware.com/docs/DOC-5501|title=VMKernel Scheduler|date=27 May 2008|publisher=vmware.com|access-date=2016-03-10}} The Linux kernel was the primary virtual machine; it was invoked by the service console. At normal run-time, the vmkernel was running on the bare computer, and the Linux-based service console ran as the first virtual machine. VMware dropped development of ESX at version 4.1, and now uses ESXi, which does not include a Linux kernel at all.{{cite web|last1=Mike|first1=Foley|title=It's a Unix system, I know this!|url=https://blogs.vmware.com/vsphere/2013/06/its-a-unix-system-i-know-this.html|website=VMware Blogs|publisher=VMware}}
The vmkernel is a microkernel{{cite web |url=https://www.vmware.com/company/news/releases/64bit.html |title=Support for 64-bit Computing |publisher=Vmware.com |date=19 April 2004 |access-date=2009-07-01 |archive-url=https://web.archive.org/web/20090702000340/http://www.vmware.com/company/news/releases/64bit.html |archive-date=2 July 2009 |url-status=dead |df=dmy-all }} with three interfaces: hardware, guest systems, and the service console (Console OS).
=Interface to hardware=
The vmkernel handles CPU and memory directly, using scan-before-execution (SBE) to handle special or privileged CPU instructions[http://markus-gerstel.de/files/2005-Xen.pdf Gerstel, Markus: "Virtualisierungsansätze mit Schwerpunkt Xen"] {{webarchive
| url = https://web.archive.org/web/20131010205239/http://markus-gerstel.de/files/2005-Xen.pdf
| date = 10 October 2013}}[https://www.vmware.com/resources/techresources/1009 VMware ESX]
and the SRAT (system resource allocation table) to track allocated memory.{{cite web
| url = https://www.vmware.com/pdf/esx2_NUMA.pdf
| title = VMware ESX Server 2: NUMA Support
| year = 2005
| publisher = VMware Inc
| location = Palo Alto, California
| page = 7
| access-date = 2011-03-29
| quote = SRAT (system resource allocation table) – table that keeps track of memory allocated to a virtual machine.}}
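The bookkeeping such a table performs can be illustrated with a short sketch: a record of how much memory each virtual machine has been allocated out of the host total. This is a simplified toy model in Python, not VMware's implementation, and the class and method names are hypothetical:

```python
class AllocationTable:
    """Toy model of per-VM memory bookkeeping on a host."""

    def __init__(self, total_mb):
        self.total_mb = total_mb
        self.allocations = {}  # vm_name -> allocated MB

    def free_mb(self):
        return self.total_mb - sum(self.allocations.values())

    def allocate(self, vm_name, mb):
        # Refuse allocations that would overcommit the tracked total.
        if mb > self.free_mb():
            raise MemoryError(f"only {self.free_mb()} MB free")
        self.allocations[vm_name] = self.allocations.get(vm_name, 0) + mb

    def release(self, vm_name):
        # Return the VM's memory to the pool; 0 if it had none.
        return self.allocations.pop(vm_name, 0)

table = AllocationTable(total_mb=4096)
table.allocate("vm1", 1024)
table.allocate("vm2", 2048)
print(table.free_mb())  # 1024
```

A real hypervisor additionally tracks NUMA node placement, ballooning, and shared pages; this sketch only shows the basic allocate/release accounting.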
Access to other hardware (such as network or storage devices) takes place using modules. At least some of the modules derive from modules used in the Linux kernel. To access these modules, an additional module called vmklinux
implements the Linux module interface. According to the README file, "This module contains the Linux emulation layer used by the vmkernel."{{cite web|url=https://www.vmware.com/download/open_source.html |title=ESX Server Open Source |publisher=Vmware.com |access-date=2009-07-01}}
The vmkernel uses the device drivers:
- net/e100
- net/e1000
- net/e1000e
- net/bnx2
- net/tg3
- {{Not a typo|net/forcedeth}}
- net/pcnet32
- {{Not a typo|block/cciss}}
- scsi/adp94xx
- scsi/aic7xxx
- scsi/aic79xx
- scsi/ips
- scsi/lpfcdd-v732
- scsi/megaraid2
- scsi/mptscsi_2xx
- scsi/qla2200-v7.07
- scsi/megaraid_sas
- scsi/qla4010
- scsi/qla4022
- {{Not a typo|scsi/vmkiscsi}}
- scsi/aacraid_esx30
- scsi/lpfcdd-v7xx
- scsi/qla2200-v7xx
These drivers mostly equate to those described in VMware's hardware compatibility list.{{cite web|url=https://www.vmware.com/resources/techresources/cat/119 |title=ESX Hardware Compatibility List |publisher=Vmware.com |date=10 December 2008 |access-date=2009-07-01}} All these modules fall under the GPL. VMware has adapted them to run with the vmkernel, changing the module-loading mechanism and making other minor modifications.
=Service console=
In ESX (and not ESXi), the Service Console is a vestigial general purpose operating system most significantly used as bootstrap for the VMware kernel, vmkernel, and secondarily used as a management interface. Both of these Console Operating System functions are being deprecated from version 5.0, as VMware migrates exclusively to the ESXi model.{{cite web|url=https://blogs.vmware.com/esxi/2009/06/esxi-vs-esx-a-comparison-of-features.html |title=ESXi vs. ESX: A comparison of features |publisher=Vmware, Inc |access-date=2009-06-01}}
In practice, the Service Console is the operating system used to interact with VMware ESX and the virtual machines that run on the server.
=Purple Screen of Death=
In the event of a hardware error, the vmkernel can catch a Machine Check Exception.{{cite web|url=https://kb.vmware.com/kb/1005184 |title=Decoding Machine Check Exception (MCE) output after a purple diagnostic screen |publisher=VMware, Inc.}} This results in an error message displayed on a purple diagnostic screen, colloquially known as the purple screen of death (PSoD, cf. blue screen of death (BSoD)).
Upon displaying a purple diagnostic screen, the vmkernel writes debug information to the core dump partition. This information, together with the error codes displayed on the purple diagnostic screen, can be used by VMware support to determine the cause of the problem.
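On a live host, the configured core dump target can be inspected through VMware's esxcli command set; a brief sketch, assuming shell access to the host (output format varies by ESXi version):

```
# Show the currently configured core dump partition, if any
esxcli system coredump partition get

# List partitions that could serve as a core dump target
esxcli system coredump partition list
```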
Versions
VMware ESX was available in two main types, ESX and ESXi; as of version 5, the original ESX has been discontinued in favor of ESXi.
ESX and ESXi before version 5.0 do not support Windows 8/Windows 2012. These Microsoft operating systems can only run on ESXi 5.x or later.VMware KBArticle [https://kb.vmware.com/selfservice/microsites/microsite.do?cmd=displayKC&docType=kc&externalId=2006859&sliceId=2&docTypeID=DT_KB_1_1 Windows 8/Windows 2012 doesn't boot on ESX], visited 12 September 2012
VMware ESXi, a smaller-footprint version of ESX, does not include the ESX Service Console. Before Broadcom acquired VMware, it was available, without the need to purchase a vCenter license, as a free download from VMware, with some features disabled.{{cite web|title= Download VMware vSphere Hypervisor (ESXi)|url= https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere_hypervisor_esxi/5_5#drivers_tools|website= www.vmware.com|access-date= 22 July 2014}}{{cite web|title= Getting Started with ESXi Installable|url= https://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_i_get_start.pdf|website= VMware|access-date= 22 July 2014}}{{cite web|url= https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1023990 |title= VMware ESX and ESXi 4.1 Comparison |publisher= Vmware.com |access-date= 2011-06-09}}
ESXi stands for "ESX integrated".{{cite web
| url = https://vmin.wordpress.com/2011/08/31/what-do-esx-and-esxi-stand-for/
| title = What do ESX and ESXi stand for?
| date = 2011-08-31
| website = VM.Blog.
| access-date = 2016-06-21
| quote = Apparently, the 'i' in ESXi stands for Integrated, probably coming from the fact that this version of ESX can be embedded in a small bit of flash memory on the server hardware.}}
VMware ESXi originated as a compact version of VMware ESX that allowed for a smaller 32 MB disk footprint on the host. A simple configuration console, used mostly for network configuration, and the remote VMware Infrastructure Client interface allow more resources to be dedicated to the guest environments.
Two variations of ESXi exist:
- VMware ESXi Installable
- VMware ESXi Embedded Edition
The same media can be used to install either of these variations depending on the size of the target media.{{cite web
| url = http://www.v-front.de/2013/03/esxi-embedded-vs-esxi-installable-faq.html
| title = ESXi embedded vs. ESXi installable FAQ
| author = Andreas Peetz
| access-date = 2014-08-11}} One can upgrade ESXi to VMware Infrastructure 3{{cite web
| url = https://www.vmware.com/products/esxi/
| title = Free VMware ESXi: Bare Metal Hypervisor with Live Migration
| publisher = VMware
| access-date = 2009-07-01}} or to VMware vSphere 4.0 ESXi.
Originally named VMware ESX Server ESXi edition, the product became VMware ESXi 3 after several revisions. New editions then followed: ESXi 3.5, ESXi 4, ESXi 5 and ({{as of | 2024 | lc = on}}) ESXi 8.
GPL violation lawsuit
VMware was sued by Christoph Hellwig, a Linux kernel developer, on March 5, 2015, with the suit alleging that VMware had misappropriated portions of the Linux kernel.{{cite web |url= https://sfconservancy.org/news/2015/mar/05/vmware-lawsuit/ |title=Conservancy Announces Funding for GPL Compliance Lawsuit |publisher=sfconservancy.org |date=5 March 2015 |access-date=2015-08-27}}{{cite web|url=https://sfconservancy.org/copyleft-compliance/vmware-lawsuit-links.html |title=Copyleft Compliance Projects - Software Freedom Conservancy |publisher=Sfconservancy.org |date=2018-05-25 |access-date=2020-02-07}}
Following a dismissal by the court in 2016, Hellwig announced he would file an appeal.{{cite web |date=9 August 2016 |title=Hellwig To Appeal VMware Ruling After Evidentiary Set Back in Lower Court |url=http://bombadil.infradead.org/~hch/vmware/2016-08-09.html |url-status=dead |archive-url=https://web.archive.org/web/20200114173909/http://bombadil.infradead.org/~hch/vmware/2016-08-09.html |archive-date=January 14, 2020}}
The appeal was decided February 2019 and again dismissed by the German court, on the basis of not meeting "procedural requirements for the burden of proof of the plaintiff".{{cite web|url=https://www.golem.de/news/gpl-klage-klage-von-hellwig-gegen-vmware-erneut-abgewiesen-1903-139733.html |title=Klage von Hellwig gegen VMware erneut abgewiesen|date=1 March 2019}}
In the last stage of the lawsuit in March 2019, the Hamburg Higher Regional Court also rejected the claim on procedural grounds. Following this, VMware officially announced that they would remove the code in question.{{cite web |date=2019-03-04 |title=VMware's Update to Mr. Hellwig's Legal Proceedings |url=https://www.vmware.com/company/news/updates/march-2019-hellwig-legal-proceedings.html |url-status=deviated |archive-url=https://web.archive.org/web/20210727091510/https://www.vmware.com/company/news/updates/march-2019-hellwig-legal-proceedings.html |archive-date=2021-07-27 |access-date= |website=VMware.com |publisher=}} Hellwig subsequently withdrew his case and refrained from further legal action.{{cite web|url=http://bombadil.infradead.org/~hch/vmware/Pressrelease-2019-04-03.pdf |title=Press release |publisher=bombadil.infradead.org |date=2019 |access-date=2020-02-07}}
Related or additional products
The following products operate in conjunction with ESX:
- vCenter Server, enables monitoring and management of multiple ESX, ESXi and GSX servers. In addition, users must install it to run infrastructure services such as:
- vMotion (transferring virtual machines between servers on the fly whilst they are running, with zero downtime)VMware Blog by Kyle Gleed: [https://blogs.vmware.com/uptime/2011/02/vmotion-whats-going-on-under-the-covers.html vMotion: what's going on under the covers], 25 February 2011, visited: 2 February 2012VMware website [https://web.archive.org/web/20100602001209/http://www.vmware.com/files/pdf/VMware-VMotion-DS-EN.pdf vMotion brochure] . Retrieved 3 February 2012
- svMotion aka Storage vMotion (transferring virtual machines between Shared Storage LUNs on the fly, with zero downtime){{cite web |url=http://www.vmware.com/files/pdf/VMware-Storage-VMotion-DS-EN.pdf |title=Archived copy |website=www.vmware.com |access-date=17 January 2022 |archive-url=https://web.archive.org/web/20091228223853/http://www.vmware.com/files/pdf/VMware-Storage-VMotion-DS-EN.pdf |archive-date=28 December 2009 |url-status=dead}}
- Enhanced vMotion aka {{Proper name|evMotion}} (a simultaneous vMotion and svMotion, supported on version 5.1 and above)
- Distributed Resource Scheduler (DRS) (automated vMotion based on host/VM load requirements/demands)
- High Availability (HA) (restarting of Virtual Machine Guest Operating Systems in the event of a physical ESX host failure)
- Fault Tolerance (FT) (almost instant stateful fail-over of a VM in the event of a physical host failure){{cite web |url=http://www.vmware.com/files/pdf/VMware-Fault-Tolerance-FT-DS-EN.pdf |title=Archived copy |website=www.vmware.com |access-date=17 January 2022 |archive-url=https://web.archive.org/web/20101121142244/http://www.vmware.com/files/pdf/VMware-Fault-Tolerance-FT-DS-EN.pdf |archive-date=21 November 2010 |url-status=dead}}
- Converter, enables users to create VMware ESX Server- or Workstation-compatible virtual machines from either physical machines or from virtual machines made by other virtualization products. Converter replaces the VMware "P2V Assistant" and "Importer" products — P2V Assistant allowed users to convert physical machines into virtual machines, and Importer allowed the import of virtual machines from other products into VMware Workstation.
- vSphere Client (formerly VMware Infrastructure Client), enables monitoring and management of a single instance of ESX or ESXi server. After ESX 4.1, vSphere Client was no longer available from the ESX/ESXi server but had to be downloaded from the VMware web site.
=Cisco Nexus 1000v=
Network connectivity between ESX hosts and the VMs running on them relies on virtual NICs (inside the VM) and virtual switches. The latter comes in two versions: the 'standard' vSwitch, which allows several VMs on a single ESX host to share a physical NIC, and the 'distributed vSwitch', where the {{Proper name|vSwitches}} on different ESX hosts together form one logical switch. Cisco offers in its Cisco Nexus product line the Nexus 1000v, an advanced version of the standard distributed vSwitch. A Nexus 1000v consists of two parts: a supervisor module (VSM) and, on each ESX host, a virtual Ethernet module (VEM). The VSM runs as a virtual appliance within the ESX cluster or on dedicated hardware (Nexus 1010 series), and the VEM runs as a module on each host, replacing a standard dvS (distributed virtual switch) from VMware.
Configuration of the switch is done on the VSM using the standard NX-OS CLI. It offers capabilities to create standard port-profiles which can then be assigned to virtual machines using vCenter.
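A port-profile on the VSM is defined in the NX-OS CLI; a hedged sketch is shown below (the profile name and VLAN number are hypothetical, and exact syntax varies by Nexus 1000v release):

```
port-profile type vethernet WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

Once enabled, such a profile appears in vCenter as a port group that can be assigned to virtual machine NICs.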
There are several differences between the standard dvS and the N1000v; one is that the Cisco switch generally has fuller support for network technologies such as LACP link aggregation, while the VMware switch supports newer features such as routing based on physical NIC load. However, the main difference lies in the architecture: the Nexus 1000v works in the same way as a physical Ethernet switch does, while the dvS relies on information from ESX. This has consequences, for example, in scalability, where the maximum for a N1000v is 2048 virtual ports against 60000 for a dvS.
The Nexus1000v is developed in co-operation between Cisco and VMware and uses the API of the dvS.Overview of the [https://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html Nexus 1000v] virtual switch, visited 9 July 2012
=Third-party management tools=
Because VMware ESX is a leader in the server-virtualization market,[http://www.crn.com/news/virtualization/232900547/vmware-continues-virtualization-market-romp.htm VMware continues virtualization market romp], 18 April 2012. Visited: 9 July 2012 software and hardware vendors offer a range of tools to integrate their products or services with ESX. Examples are the products from Veeam Software with backup and management applicationsAbout [http://www.veeam.com/company/about.html?ad=menu Veeam], visited 9 July 2012 and a plugin to monitor and manage ESX using HP OpenView,[http://www.veeam.com/vmware-esx-monitoring-hp-operations.html Veeam OpenView plugin for VMware], visited 9 July 2012 and Quest Software with a range of management and backup applications; most major backup-solution providers have plugins or modules for ESX as well. Using Microsoft System Center Operations Manager (SCOM) 2007/2012 with a Bridgeways ESX management pack gives the user a real-time ESX datacenter health view.
Hardware vendors such as Hewlett Packard Enterprise and Dell include tools to support the use of ESX(i) on their hardware platforms. An example is the ESX module for Dell's OpenManage management platform.[https://en.community.dell.com/techcenter/systems-management/w/wiki/1977.openmanage-support-for-vmware-esxi-5-0.aspx OpenManage (omsa) support for ESXi 5.0], visited 9 July 2012
Since v5, VMware has offered a Web Client,VMware info about Web Client [https://kb.vmware.com/kb/2005377 – VMware ESXi/ESX 4.1 and ESXi 5.0 Comparison] but it works only with vCenter and does not contain all features.Availability of vSphere Client for Linux systems [https://kb.vmware.com/kb/1006095 – What the web client can do and what not]
Known limitations
As of September 2020, these are the known limitations of VMware ESXi 7.0 U1.
=Infrastructure limitations=
Some maximums in ESXi Server 7.0 may influence the design of data centers:{{cite web | title=What's New with VMware vSphere 7 Update 1 | website=VMware vSphere Blog | date=15 September 2020 | url=https://blogs.vmware.com/vsphere/2020/09/whats-new-with-vmware-vsphere-7u1.html | access-date=9 June 2023}}{{Cite web|url=https://configmax.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%207.0&categories=2-0|title=VMware Configuration Maximum tool}}
- Guest system maximum RAM: 24 TB
- Host system maximum RAM: 24 TB
- Number of hosts in a high availability or Distributed Resource Scheduler cluster: 96
- Maximum number of processors per virtual machine: 768
- Maximum number of processors per host: 768
- Maximum number of virtual CPUs per physical CPU core: 32
- Maximum number of virtual machines per host: 1024
- Maximum number of virtual CPUs per fault tolerant virtual machine: 8
- Maximum guest system RAM per fault tolerant virtual machine: 128 GB
- VMFS5 maximum volume size: 64 TB, but maximum file size is 62 TB − 512 bytes
- Maximum Video memory per virtual machine: 4 GB
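The configuration maximums above can be treated as hard caps when sizing virtual machines. The following is an illustrative sketch in Python, not a VMware tool; the limit values are taken from the list above, and the function name is hypothetical:

```python
# A few of the per-VM maximums for ESXi 7.0 U1, as listed above.
ESXI_70U1_VM_MAXIMUMS = {
    "vcpus": 768,           # virtual CPUs per VM
    "ram_tb": 24,           # guest RAM in TB
    "video_memory_gb": 4,   # video memory in GB
}

def check_vm(vcpus, ram_tb, video_memory_gb=0):
    """Return the list of limits a proposed VM configuration would exceed."""
    proposed = {
        "vcpus": vcpus,
        "ram_tb": ram_tb,
        "video_memory_gb": video_memory_gb,
    }
    return [key for key, limit in ESXI_70U1_VM_MAXIMUMS.items()
            if proposed[key] > limit]

print(check_vm(vcpus=128, ram_tb=1))    # []
print(check_vm(vcpus=1024, ram_tb=32))  # ['vcpus', 'ram_tb']
```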
=Performance limitations=
In terms of performance, virtualization imposes a cost in the additional work the CPU has to perform to virtualize the underlying hardware. Instructions that perform this extra work, and other activities that require virtualization, tend to lie in operating system calls. In an unmodified operating system, OS calls introduce the greatest portion of virtualization "overhead".{{Citation needed|date=January 2013}}
Paravirtualization or other virtualization techniques may help with these issues. VMware developed the Virtual Machine Interface for this purpose, and selected operating systems {{As of|2008|alt=currently}} support this. A comparison between full virtualization and paravirtualization for the ESX Server{{cite web|publisher=VMware, Inc. |url=https://www.vmware.com/pdf/VMware_VMI_performance.pdf |title=Performance of VMware VMI |date=13 February 2008 |access-date=2009-01-22 }} shows that in some cases paravirtualization is much faster.
=Network limitations=
When using the advanced and extended network capabilities provided by the Cisco Nexus 1000v distributed virtual switch, the following network-related limitations apply:
:* 64 ESX/ESXi hosts per VSM (Virtual Supervisor Module)
:* 2048 virtual Ethernet interfaces per VMware vDS (virtual distributed switch)
::* and a maximum of 216 virtual interfaces per ESX/ESXi host
:* 2048 active VLANs (one to be used for communication between VEMs and VSM)
:* 2048 port-profiles
:* 32 physical NICs per ESX/ESXi (physical) host
:* 256 port-channels per VMware vDS (virtual distributed switch)
::* and a maximum of 8 port-channels per ESX/ESXi host
=Fibre Channel Fabric limitations=
Regardless of the type of virtual SCSI adapter used, there are these limitations:{{cite web |title=vSphere 6.7 Configuration Maximums |url=https://configmax.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%206.7&categories=1-0 |website=VMware Configuration Maximum Tool |publisher=VMware |access-date=12 July 2019}}
- Maximum of 4 Virtual SCSI adapters, one of which should be dedicated to virtual disk use
- Maximum of 64 SCSI LUNs per adapter
See also
- Comparison of platform virtualization software
- Kernel-based Virtual Machine (KVM) – an open-source hypervisor platform
- {{Annotated link|Hyperjacking}}
- Proxmox Virtual Environment – a free and open-source competitor of VMware ESX from Proxmox Server Solutions GmbH
- Hyper-V – a competitor of VMware ESX from Microsoft
- Virtual appliance
- Virtual disk image
- Virtual machine
- VMware VMFS
- x86 virtualization
- Xen – an open-source hypervisor platform
References
{{Reflist}}
External links
- [https://www.vmware.com/products/esxi-and-esx.html VMware ESXi product page]
- [https://www.virten.net/vmware/esxi-release-build-number-history/ ESXi Release and Build Number History]
- [https://www.hpe.com/us/en/servers/hpe-esxi.html VMware ESXi image for HPE servers]
{{Virtualization software}}
{{DEFAULTSORT:Vmware Esx}}