
Cloud Computing And Big Data

5G Cloud Computing

5G Mobile Backhaul

Because 5G virtualizes the functions of dedicated hardware, such as the Baseband Unit (BBU), in software running on general-purpose servers in data centers, Software Defined Networking (SDN) and Network Function Virtualization (NFV) technologies are being investigated. A centralized BBU in the data center requires a communications bandwidth of 10 Gbit/s per unit, and 5G mobile backhaul requires a maximum capacity of 1 Tbit/s. Meeting these requirements means using optical fiber efficiently. In addition to the conventional Wavelength Division Multiplexing (WDM) and Passive Optical Network (PON) technologies, there is a growing need for next-generation, high-speed wired standards, such as 400 Gbit/s Ethernet. 5G also requires a highly reliable mobile backhaul, so the mainstream focus is on carrier-grade Ethernet supporting functions such as link monitoring, remote fault detection, and automatic re-routing using Operation, Administration and Maintenance (OAM) technologies.
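To put these capacity figures in perspective, the short Python sketch below uses the 10 Gbit/s-per-BBU and 1 Tbit/s numbers from the text; the 100 Gbit/s-per-wavelength WDM channel rate is an illustrative assumption, not a figure from this article.

    import math

    # Figures from the text: each centralized BBU needs 10 Gbit/s,
    # and 5G mobile backhaul must scale to 1 Tbit/s in total.
    BBU_RATE_GBPS = 10
    BACKHAUL_MAX_GBPS = 1000

    # Illustrative assumption: one WDM wavelength carries 100 Gbit/s.
    WDM_CHANNEL_GBPS = 100

    bbus_supported = BACKHAUL_MAX_GBPS // BBU_RATE_GBPS
    wavelengths_needed = math.ceil(BACKHAUL_MAX_GBPS / WDM_CHANNEL_GBPS)

    print(f"BBUs supported at full backhaul capacity: {bbus_supported}")  # 100
    print(f"100G WDM wavelengths for 1 Tbit/s: {wavelengths_needed}")     # 10

Even with efficient 100 Gbit/s wavelengths, a fully loaded 1 Tbit/s backhaul link occupies ten WDM channels, which is one reason next-generation standards such as 400 Gbit/s Ethernet are attractive.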

Data Centers

Data center traffic is increasing year-on-year, with volumes expected to roughly triple between 2014 and 2019. In addition to supporting Cloud Computing services, new applications such as IoT and M2M will appear alongside 5G. Processing this “big data” efficiently will require parallel processing across multiple servers. In most cases, Ethernet is used for server-to-server communications, but InfiniBand is commonly used for High Performance Computing (HPC) applications and enterprise server-level interconnects. InfiniBand is a fast, low-latency standard optimized for HPC, using Remote Direct Memory Access (RDMA) to lower CPU and memory-access loads. It is mostly used today at 10 Gbit/s and 40 Gbit/s but is expected to become mainstream for future 100 Gbit/s and 400 Gbit/s services.
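As a quick sanity check on that growth figure, a threefold increase over the five years from 2014 to 2019 corresponds to an annual growth rate of roughly 25%, as the minimal Python sketch below shows:

    # A threefold traffic increase over five years (2014 to 2019)
    # implies a compound annual growth rate (CAGR) of about 25%.
    growth_factor = 3.0
    years = 2019 - 2014  # 5

    cagr = growth_factor ** (1 / years) - 1
    print(f"Implied annual growth rate: {cagr:.1%}")  # ~24.6%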

Faster Wired Communications

The era of 5G and IoT brings an increasing need for faster bit rates, such as 100 Gbit/s and 400 Gbit/s. This section outlines the physical-layer standards that form the basis for increasing the speed of the Ethernet and InfiniBand communications standards.

The Institute of Electrical and Electronics Engineers, Inc. (IEEE) has standardized gigabit-class Ethernet at 1, 10, 40, and 100 Gbit/s. Work is also underway on 25 Gbit/s and 50 Gbit/s standards to supplement these rates for use as low-cost interconnects in data centers.

Additionally, new 200 Gbit/s and 400 Gbit/s speeds are being examined as new standards. The Non Return to Zero (NRZ) and 4-level Pulse Amplitude Modulation (PAM4) signaling technologies used for the 400 GbE (Gigabit Ethernet) standards under investigation have both been studied extensively. Table 1 lists the 400 GbE standards.
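To illustrate why PAM4 matters at these speeds, the sketch below compares NRZ and PAM4 symbol rates; the 8-lane, 50 Gbit/s-per-lane configuration is an illustrative assumption for a 400 GbE interface, not a statement of the final standard.

    # PAM4 encodes 2 bits per symbol, so for the same lane bit rate its
    # symbol (baud) rate is half that of NRZ (1 bit per symbol).
    def symbol_rate_gbaud(bit_rate_gbps: float, bits_per_symbol: int) -> float:
        return bit_rate_gbps / bits_per_symbol

    # Illustrative assumption: 400 GbE built from 8 lanes of 50 Gbit/s each.
    LANES = 8
    LANE_RATE_GBPS = 50.0

    print(f"Aggregate rate: {LANES * LANE_RATE_GBPS:.0f} Gbit/s")  # 400
    print(f"NRZ symbol rate per lane:  {symbol_rate_gbaud(LANE_RATE_GBPS, 1):.0f} GBaud")  # 50
    print(f"PAM4 symbol rate per lane: {symbol_rate_gbaud(LANE_RATE_GBPS, 2):.0f} GBaud")  # 25

Halving the symbol rate relaxes the analog bandwidth required of the electrical and optical components, at the cost of a tighter signal-to-noise budget, which is why PAM4 test capability features prominently in 400G measurement equipment.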

Products

  • MP1800A - high-speed BERT, Signal Quality Analyzer for 400G PAM4 modulation testing of O/E devices in data-center interconnects and backplanes; 1.6 Tbit/s interconnect test.
  • MT1100A - WDM and core-network physical/transport-layer link performance test and verification.
  • eoLive, eoSight - cross-correlation of multiple KPIs to understand user experience, plus “big data” warehouse analysis of customer behavior and quality of experience.
