The OCe14000B family of Converged Network Adapters (CNAs) provides high performance 10Gb Ethernet (10GbE) and 40GbE connectivity delivering multiple benefits for the enterprise cloud, including:
- Increasing data center IT agility and scalability through deployment of a secure multi-tenant cloud
- Driving scalability and flexibility in converged infrastructures
- Optimizing server hardware utilization by scaling high density virtualization
- Choice of single-port, dual-port, or quad-port 10GbE SFP+ or single-port 40GbE QSFP+ PCIe adapters
The OCe14000B family of 10GbE and 40GbE CNAs is designed for the high bandwidth and scalability demands of tier 1 enterprise applications with storage protocol (Fibre Channel over Ethernet [FCoE] and iSCSI) offloads, support for RDMA* over Converged Ethernet v2 (RoCEv2*) fabric, enhanced Single-Root I/O Virtualization (SR-IOV), Network Interface Card (NIC) port partitioning, and cloud optimization using overlay network technology. The 40GbE QSFP+ adapter supports an optical 40GBASE-SR4 connection or copper 40GBASE-CR4 connection, which can be reconfigured to support four ports of 10GBASE-CR for use with a QSFP+ to quad breakout SFP+ direct attach cable (DAC) or Active Optical Cable (AOC).
Emulex Virtual Network Exceleration (VNeX™) overlay network offloads for multi-tenant cloud networking
Scaling existing technologies for private or public multi-tenant infrastructures requires networking solutions that can enable virtual machine (VM)-to-VM communication and virtual workload migration across Layer 2 and Layer 3 boundaries without impacting connectivity or performance. At the same time, these solutions need to ensure isolation and security for thousands or millions of tenant networks. However, with existing technology, the available 4094 VLAN IDs are insufficient to isolate and secure each tenant in a private or hybrid cloud data center. Virtual Extensible Local Area Network (VXLAN) (supported by VMware and Linux) and Network Virtualization using Generic Routing Encapsulation (NVGRE) (supported by Microsoft) are two next-generation overlay networking solutions that address these requirements. These solutions use a frame-in-frame data packet encapsulation scheme, enabling the creation of virtualized Layer 2 subnets that can span physical Layer 3 IP networks. Traffic from each VM is tunneled to a specific virtual network; the packets are then routed transparently over the existing physical infrastructure.
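The frame-in-frame idea above can be sketched in a few lines. This is an illustrative construction of the 8-byte VXLAN header defined in RFC 7348 (not Emulex-specific code); it also shows why a 24-bit VXLAN Network Identifier (VNI) solves the 4094-VLAN limit:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that wraps an inner Ethernet frame.

    Flags byte 0x08 sets the I bit (VNI is valid). The 24-bit VNI allows
    ~16.7 million virtual segments versus 4094 usable 802.1Q VLAN IDs.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # flags (1 byte) + 3 reserved bytes + (VNI << 8 | reserved byte)
    return struct.pack("!B3xI", 0x08, vni << 8)

# The outer encapsulation is ordinary UDP/IP, which is what lets a
# virtual Layer 2 segment span a routed Layer 3 network.
header = vxlan_header(0x123456)
```

The full on-wire packet is outer Ethernet/IP/UDP, then this header, then the original VM frame, so existing routers forward it like any other UDP traffic.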
Emulex VNeX offload technology powered by a multicore adapter ASIC engine accelerates the performance of network virtualization by preserving legacy stateless TCP offloads and scaling methods on encapsulated packets, providing full native network performance in a virtual network environment.
Remote Direct Memory Access (RDMA*) support
The OCe14000B adapters leverage RoCEv2*, enabling server-to-server data movement directly between application memory without CPU involvement. This provides high throughput and data acceleration on a standard Ethernet fabric without the need for any specialized infrastructure or management. The OCe14000B adapters support the new InfiniBand Trade Association (IBTA) RoCEv2* specification for Layer 3 routing.
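What makes RoCEv2 routable at Layer 3 is that it carries the InfiniBand transport headers inside an ordinary UDP/IP datagram on the IANA-assigned UDP port 4791. A minimal, generic classifier sketch (not an Emulex API) shows the principle:

```python
import struct

ROCEV2_UDP_PORT = 4791  # IANA-assigned UDP destination port for RoCEv2

def is_rocev2(udp_header: bytes) -> bool:
    """Classify a UDP header as RoCEv2 by its destination port.

    RoCEv2 places the InfiniBand Base Transport Header inside a normal
    UDP/IP datagram, so standard IP routers can forward RDMA traffic
    across subnets (unlike v1, which was limited to a Layer 2 domain).
    """
    _src_port, dst_port = struct.unpack("!HH", udp_header[:4])
    return dst_port == ROCEV2_UDP_PORT
```

Because the outer packet is plain UDP/IP, no specialized fabric is required; any Ethernet switch or router can carry it.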
Flexible workload storage connectivity with FCoE and iSCSI offloads
The OCe14000B adapters support FCoE hardware-based offload using the same enterprise-class Emulex drivers that work with Emulex LightPulse® Fibre Channel Host Bus Adapters (HBAs). The OCe14000B adapters also support iSCSI hardware-based offload, providing performance that is superior to iSCSI solutions based on software initiators and standard Network Interface Cards (NICs). Finally, the OCe14000B adapters can support NIC, iSCSI, and FCoE offloads concurrently on the same port (i.e., concurrent storage).
Optimized host virtualization density with SR-IOV support
SR-IOV optimizes I/O for VMs, enabling higher host server virtualization ratios to deliver maximum server Return on Investment (ROI). SR-IOV provides a more cost-effective solution than multiple, physical adapter ports. SR-IOV enables multiple VMs to directly access the OCe14000B’s I/O resources, thus allowing VM networking I/O to bypass the host and take a path directly between the VM and the adapter, eliminating redundant I/O processing in the hypervisor. This, in turn, allows higher I/O performance, lower CPU utilization and significantly reduced latency as compared to the alternative of software-emulated NIC devices that are implemented in the hypervisor.
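On Linux hosts, SR-IOV Virtual Functions for any capable PCIe device are exposed through standard sysfs attributes (`sriov_numvfs`, `sriov_totalvfs`). A small generic sketch for inspecting them (the interface name and the `sysfs_root` parameter are illustrative, not an Emulex tool):

```python
from pathlib import Path

def sriov_vf_counts(iface: str, sysfs_root: str = "/sys/class/net") -> tuple[int, int]:
    """Read current and maximum SR-IOV Virtual Function counts for a NIC.

    Relies on the standard Linux sysfs attributes exposed under any
    SR-IOV-capable PCIe device; returns (current_vfs, max_vfs).
    """
    dev = Path(sysfs_root) / iface / "device"
    current = int((dev / "sriov_numvfs").read_text())
    total = int((dev / "sriov_totalvfs").read_text())
    return current, total

# Example (interface name "eth0" is illustrative):
# current, total = sriov_vf_counts("eth0")
```

Each VF enumerated here can be assigned directly to a VM, giving it the hypervisor-bypass data path described above.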
Optimized bandwidth allocation with Emulex Universal Multi-Channel port partitioning
Emulex Universal Multi-Channel (UMC) is ideal for virtualized server environments because bandwidth allocation can be optimized to support virtual machine migration, management and I/O intensive applications. UMC (also known as network partitioning, NPAR) allows multiple Peripheral Component Interconnect (PCI) physical functions (PFs) to be created on each adapter port. As a CNA, each adapter can be configured with up to sixteen functions.
The key benefits of deploying Emulex UMC technology include:
Lower total cost of ownership (TCO)
- Consolidates multiple 1GbE adapters, associated cables and switch ports
- Higher VM workload bandwidth allocation to drive higher VM density on host servers
- Lower per-Gb bandwidth cost compared to deploying multiple 1GbE adapters
Optimized I/O utilization
- Granular bandwidth provisioning minimizes the idle bandwidth wasted by dedicated 1GbE adapters
- Enables Service Level Agreement (SLA) based provisioning and deployment
- UMC is not dependent on specialized OS support
- Works with any 10GbE or 40GbE switch
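The SLA-based provisioning above amounts to carving one physical port's bandwidth into guaranteed shares per PCI function. A minimal sketch of that arithmetic (function names and the percentage-weight scheme are illustrative, not the UMC configuration interface):

```python
def validate_umc_weights(weights: dict[str, int], link_gbps: int = 10) -> dict[str, float]:
    """Check per-function minimum-bandwidth weights and map them to Gb/s.

    Models UMC-style port partitioning: each PCI function on a physical
    port is guaranteed a share of the port's bandwidth, and a CNA can
    expose up to sixteen functions per adapter.
    """
    if len(weights) > 16:
        raise ValueError("at most 16 functions per adapter")
    if sum(weights.values()) != 100:
        raise ValueError("minimum-bandwidth weights must total 100%")
    return {fn: link_gbps * w / 100 for fn, w in weights.items()}

# e.g. carving one 10GbE port into NIC, iSCSI, and VM-migration functions:
plan = validate_umc_weights({"nic": 50, "iscsi": 30, "migration": 20})
# plan == {"nic": 5.0, "iscsi": 3.0, "migration": 2.0}
```

Because the partitioning is done on the adapter, the guarantees hold with any 10GbE or 40GbE switch and without special OS support, as noted above.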
Available Quad-port 10GbE SFP+ Model
Use cases such as Content Delivery Network (CDN) and hosting providers have standardized on 10GbE and often are looking for more bandwidth per server for the services they provide. The Emulex quad-port 10GbE model (OCe14104B) delivers 40GbE full-duplex bandwidth using a single PCIe slot. An additional benefit of this configuration is utilization of existing 10GbE SFP+ switch connections versus buying new 40GbE QSFP+ switches.
Simplified management with the OneCommand® Manager application
The OneCommand Manager application provides centralized management of Emulex OneConnect CNAs and LightPulse® HBAs throughout the data center from a single management console. The OneCommand Manager application provides a graphical user interface (GUI) and a scriptable command line user interface (CLI). OneCommand Manager for VMware is fully integrated with VMware vCenter to simplify management for virtual server deployments.
Fourth generation platform delivers enterprise-class reliability and performance
Leveraging generations of advanced, field-proven controller and adapter technology, OCe14000B CNAs meet the robust interoperability and reliability requirements of enterprise and scale-out data centers.
- Maximizes server hardware ROI with high virtual machine density
- Simplifies deployment of secure, scalable multitenant cloud infrastructures
- Minimizes TCO through deployment of heterogeneous workloads on Converged Infrastructure
- Accelerates applications and storage performance
- Quad-port 10GbE SFP+ or single-port 40GbE QSFP+ Ethernet adapters for applications requiring 40GbE full-duplex bandwidth using a single PCIe slot in slot-constrained server platforms
- Reduces complexity through the deployment of a common network platform
- Reduces management, infrastructure and energy costs
- Superior network convergence—storage and network traffic over a common Ethernet infrastructure
- SR-IOV
- Data acceleration with RoCEv2* (Routable RoCE*) support
- Powerful hardware offloads for:
- Overlay networks (NVGRE & VXLAN)
- Storage protocols: iSCSI and FCoE
- Stateless TCP
- Greater bandwidth with PCIe 3.0
- VMware vSphere NetQueue with RSS support
- Microsoft Windows Server VMQ, Dynamic VMQ, RSS and vRSS support
* RoCE/RDMA functionality is only available as a technical preview and is not intended for a production environment