Virtual NIC Selection in VMware and Proxmox


Scope

This document provides guidance on selecting and configuring virtual network interface cards (vNICs) in virtualized environments, focusing on VMware ESXi and Proxmox VE. It is adapted from internal engineering guidance, customer experience, and publicly available material on the topic.

It explains the functional and performance differences between emulated NICs (e.g., E1000) and paravirtualized NICs (e.g., VMXNET3 and VirtIO), with emphasis on their impact in performance-sensitive environments such as real-time audio, multicast networking, and latency-sensitive applications.

This document is intended to:

  • Clarify why E1000 emulation may introduce performance and stability issues

  • Provide best-practice recommendations for NIC selection in virtual machines

  • Highlight considerations specific to timing-sensitive and multicast-heavy workloads

Overview

When deploying virtual machines on platforms such as VMware ESXi or Proxmox VE, the choice of virtual network interface card (vNIC) can significantly impact performance, latency, and overall reliability.

A common question is why E1000 emulation frequently causes issues, while VMXNET3 (VMware) and VirtIO (Proxmox/KVM) tend to perform better.


Key Differences

E1000 (Emulated NIC)

  • Emulates an older Intel 1 GbE PCI network adapter (the Intel 82545EM)

  • Designed for maximum compatibility, not performance

  • Requires the hypervisor to simulate real hardware behavior

VMXNET3 / VirtIO (Paravirtualized NICs)

  • Designed specifically for virtual environments

  • Guest OS communicates directly with the hypervisor using optimized drivers

  • Focused on performance, efficiency, and low overhead
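
As a concrete illustration, on Proxmox VE the NIC model is set per interface on the VM. The VM ID (100) and bridge name (vmbr0) below are placeholders; substitute your own:

```shell
# Attach a VirtIO (paravirtualized) NIC to VM 100 on bridge vmbr0.
qm set 100 --net0 virtio,bridge=vmbr0

# Fallback for a guest that lacks VirtIO drivers (e.g., during OS install):
qm set 100 --net0 e1000,bridge=vmbr0
```

The same change can be made by editing the net0 line in the VM's configuration file under /etc/pve/qemu-server/, or via the web UI (Hardware → Network Device → Model).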


Why E1000 Causes Problems

E1000 operates as a full hardware emulation layer. This introduces several inefficiencies:

  • Higher CPU Overhead
    Every packet requires hardware-level emulation (registers, interrupts, descriptors)

  • Increased Interrupt Load
    Less efficient interrupt handling leads to more CPU cycles per packet

  • Latency and Jitter Variability
    Timing-sensitive traffic can suffer due to inconsistent packet handling

  • Buffering Inefficiencies
    Emulated queues are less optimized than paravirtualized ones

  • Driver and Emulation Quirks
    Behavior depends on how accurately the hypervisor mimics hardware

  • Weaker Performance Under Load
    Bursty or high-throughput traffic can overwhelm the emulation layer


Why VMXNET3 and VirtIO Perform Better

Both VMXNET3 and VirtIO are paravirtualized drivers, meaning they are built specifically for virtualization:

  • Lower CPU Usage
    Fewer abstraction layers → more efficient packet handling

  • Improved Throughput
    Better batching and queue management

  • Reduced Latency and Jitter
    More direct communication between guest and host

  • Advanced Offloading Support
    Features like checksum offload, segmentation, and multiqueue

  • Better Scaling Under Load
    Designed to handle modern traffic patterns
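
From inside a Linux guest, you can confirm which driver is actually bound to the vNIC. The interface name ens18 is a placeholder; use the name reported by `ip link` on your system:

```shell
# Show the kernel driver behind the interface:
#   "virtio_net" (Proxmox/KVM) or "vmxnet3" (VMware) = paravirtualized NIC
#   "e1000" = full hardware emulation
ethtool -i ens18 | grep '^driver'
```

If the command reports e1000 on a VM you believed was paravirtualized, the vNIC model in the hypervisor configuration is the place to fix it.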


Real-World Impact

In general IT workloads, E1000 may appear to function adequately. However, issues become more apparent with:

  • Real-time audio such as Livewire or other AoIP standards (AES67, SMPTE ST 2110-30, SMPTE ST 2022-7, etc.)

  • SIP and telephony systems

  • Multicast-heavy environments

  • Timing-sensitive applications (e.g., PTP-adjacent systems)

  • High-throughput or bursty traffic

In these scenarios, E1000 can introduce:

  • Packet loss

  • Increased jitter

  • Timing inconsistencies

  • Unpredictable performance
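
A quick, coarse way to observe jitter from inside a guest is the mdev figure reported by ping. The target below is a placeholder; point it at a host reachable from the VM:

```shell
# Replace with a meaningful target reachable from the guest (placeholder shown).
TARGET=${TARGET:-127.0.0.1}

# Sample round-trip times; the trailing "rtt min/avg/max/mdev" line summarizes them.
# mdev (mean deviation) is a rough jitter indicator: compare it on an E1000 vNIC
# versus a VirtIO/VMXNET3 vNIC under the same traffic.
ping -c 10 "$TARGET" | tail -n 1
```

This is only a sanity check, not a substitute for proper measurement with the actual application traffic.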


Recommendations

Use E1000 only when:

  • Installing an OS that lacks paravirtualized drivers

  • Performing initial setup or recovery

  • Compatibility is the primary concern

Use VMXNET3 (VMware) or VirtIO (Proxmox/KVM) for:

  • Production environments

  • Performance-sensitive applications

  • Any workload involving real-time or multicast traffic
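
On VMware, the adapter type is chosen per NIC when creating or editing the VM. In the VM's .vmx file it appears as follows (the adapter index is a placeholder):

```
ethernet0.virtualDev = "vmxnet3"
```

Note that VMXNET3 requires VMware Tools or open-vm-tools in the guest to supply the driver; until that is installed, the adapter may not function.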


Important Considerations

Even with the correct vNIC, VM networking performance depends on:

  • Physical NIC capabilities and drivers

  • vSwitch / bridge configuration

  • CPU scheduling and host load

  • Interrupt moderation settings

  • Offload features (enabled/disabled appropriately)

Note: Virtual networking is not inherently deterministic. Poor host configuration can negate the advantages of paravirtualized drivers.
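
Two knobs that often matter for multicast- and latency-sensitive traffic, sketched here with placeholder VM ID, bridge, and interface names:

```shell
# Proxmox: enable multiqueue on the VirtIO NIC so packet processing
# can spread across multiple vCPUs (queue count is an example value).
qm set 100 --net0 virtio,bridge=vmbr0,queues=4

# Guest: some real-time stacks behave better with certain offloads disabled.
# Measure before and after rather than assuming; generic receive offload shown.
ethtool -K ens18 gro off
```

Offload settings in particular cut both ways: disabling them can reduce latency for small-packet real-time flows, but hurts throughput for bulk traffic.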


Summary

E1000 is best viewed as a compatibility fallback, not a performance solution.

For most modern deployments:

  • VMXNET3 (VMware)

  • VirtIO (Proxmox/KVM)

…are the preferred choices due to their efficiency, scalability, and suitability for real-time and high-performance workloads.


If you’re working in AoIP or similar environments, this becomes less of a recommendation and more of a requirement.