
Overcoming High-Speed Interconnect Challenges

By Jon Martens and Bob Buxton

Overview

Cloud computing, smartphones, and LTE services are causing a large increase in network traffic. Instantaneous traffic rates at internet data centers have reached 1 Tbit/s. To support this increased traffic, the speed of IT equipment – such as that used to deliver high-end services in data centers – must increase. Some of these higher rate standards are shown in Table 1. In many of these applications, device interconnects are causing transmission bottlenecks.

Standard | Data Rate | Number of Lanes
CEI-28G-SR | 19.90 to 28.05 Gbit/s | 1 to N
CEI-25G-LR | 19.90 to 25.80 Gbit/s | 1 to N
IEEE 802.3ba 100GBASE-LR4/ER4 | 25.78125 Gbit/s | 4
32G Fibre Channel | 28.05 Gbit/s | 1
InfiniBand 26G-IB-EDR | 25.78125 Gbit/s | 1 to N

Table 1. 20+ Gbit/s High Speed Standards.

This white paper discusses the challenges introduced at these higher data rates and how Vector Network Analyzers (VNAs) can help meet them.

Challenges facing signal integrity engineers

The drive to higher bit rates and ensuring compliance with the standards raises many challenges for signal integrity engineers. These challenges may be summarized as follows:

Cost/performance Trade-Offs

Higher data rates introduce new design challenges, such as conductor skin effects and dielectric losses on PC boards, along with design trade-offs related to the choice of vias, stackups, and connector pins. Evaluating a selection of backplane materials and the impact of various structural designs requires accurate measurements in both the frequency and time domains. Accurate measurements provide the confidence to make cost/performance trade-off decisions. The aim is to evaluate the impact of interconnects on eye closure. Figure 1 shows an example of backplane impact on the eye pattern.


Figure 1: Example of a data signal with integrity degradation caused by frequency-dependent loss and group delay effects in the higher frequency bands resulting from skin effects and dielectric losses on the PC board.

Locating Defects

Sometimes problems are caused by vias, stackup issues, or connector pins. Frequency domain data alone is not enough, however; that data must be transformed into the time domain in order to locate the position of particular problems. Passive components, as well as near-end and far-end points between daughter boards, must be measured in both the frequency and time domains to ensure that the transmission characteristics at each measurement point meet the standards. The best possible resolution improves your ability to locate discontinuities, impedance changes, and crosstalk issues. In addition, many of today's structures are electrically large, which puts pressure on the measurement solution's alias-free range.
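To make these resolution and alias-free range considerations concrete, the following sketch (Python; the sweep settings and board velocity factor are illustrative assumptions, not values from this paper) applies the common rules of thumb that time resolution scales inversely with the swept span while the alias-free range is set by the frequency step size:

```python
# Rule-of-thumb estimates for VNA time-domain work. The exact factors depend on
# window and transform mode; all numeric inputs below are illustrative assumptions.
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def td_figures(f_start_hz, f_stop_hz, n_points, velocity_factor=0.6):
    span = f_stop_hz - f_start_hz
    step = span / (n_points - 1)       # frequency step size
    t_res = 1.0 / (2.0 * span)         # ~ achievable time resolution
    t_alias = 1.0 / step               # alias-free range ~ 1 / frequency step
    v = C0 * velocity_factor           # assumed propagation velocity on the board
    return {
        "resolution_mm": 1e3 * t_res * v / 2.0,   # /2: a reflection travels a round trip
        "alias_free_mm": 1e3 * t_alias * v / 2.0,
    }

# Example: a 70 kHz to 70 GHz sweep with 2000 points
print(td_figures(70e3, 70e9, 2000))
```

With these assumed numbers, the 70 GHz span resolves features less than a millimeter apart, while 2000 points keep the alias-free range beyond two meters of trace.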

Correlation Between Simulation and Measurement

Accurate models help accelerate your design cycle. However, models are only as good as the quality of the data fed into them. Poor causality, where outputs appear to occur in negative time, can result when the data fed into a model lacks sufficiently high frequency content. Poor causality reduces confidence in simulations and leads to potential convergence problems and inaccuracies. Conversely, poor low frequency information, which leads to DC extrapolation errors, also degrades model accuracy and results in poor agreement with 3-D EM simulators.
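As a rough illustration of what a causality check can look like, this sketch (a simple heuristic, not the method of any particular simulator) transforms S21 data into the time domain and reports how much impulse-response energy appears at negative time; a well-measured passive interconnect should score near zero:

```python
import numpy as np

def negative_time_energy_fraction(s21):
    """Heuristic causality check for S21 sampled on a uniform grid
    f = 0, df, 2*df, ..., (N-1)*df (DC point included, assumed ~real).
    Returns the fraction of impulse-response energy at t < 0."""
    # Mirror to a conjugate-symmetric spectrum so the impulse response is real.
    spectrum = np.concatenate((s21, np.conj(s21[-1:0:-1])))
    h = np.fft.ifft(spectrum).real
    n = len(h)
    # By periodicity of the IFFT, the top half of the output corresponds to t < 0.
    return np.sum(h[(n + 1) // 2:] ** 2) / np.sum(h ** 2)
```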

Fixture De-Embedding

There are many situations where it may not be possible to connect directly to the device under test (DUT). In such cases it is necessary to de-embed the DUT from the surrounding test fixtures. Sometimes the opposite is required: it may be useful to take a device and assess its performance as if it were surrounded by other networks. Figure 2 illustrates both cases.


Figure 2: De-embedding can be used to remove test fixture contributions, modeled networks and other networks described by S-parameters (S2P files) from the measurements. Embedding is the reverse process.
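For intuition about the arithmetic behind Figure 2, here is a minimal sketch of the classic cascade-matrix approach for a single-ended 2-port DUT, assuming the fixture halves are already known as S-parameter matrices (for example from S2P files); the function names are illustrative:

```python
import numpy as np

def s_to_t(s):
    """Convert a 2x2 S-parameter matrix to cascade (T) parameters."""
    s11, s12, s21, s22 = s[0, 0], s[0, 1], s[1, 0], s[1, 1]
    return (1.0 / s21) * np.array([[s12 * s21 - s11 * s22, s11],
                                   [-s22, 1.0]])

def t_to_s(t):
    """Convert cascade (T) parameters back to S-parameters."""
    t11, t12, t21, t22 = t[0, 0], t[0, 1], t[1, 0], t[1, 1]
    return (1.0 / t22) * np.array([[t12, t11 * t22 - t12 * t21],
                                   [1.0, -t21]])

def deembed(s_meas, s_fix_in, s_fix_out):
    """Remove known fixture halves: T_meas = T_in @ T_dut @ T_out."""
    t_dut = (np.linalg.inv(s_to_t(s_fix_in)) @ s_to_t(s_meas)
             @ np.linalg.inv(s_to_t(s_fix_out)))
    return t_to_s(t_dut)
```

Embedding a device into modeled networks is the same cascade without the matrix inversions; the extraction methods discussed later differ chiefly in how the fixture S-parameters themselves are obtained.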

However, many passivity and causality problems are due to poor calibration and de-embedding methods. In addition, high fixture loss may affect the accuracy and repeatability of de-embedding.

Solving Today’s Challenges

Fortunately, the latest VNA technology can provide a solution to these challenges.

Challenge | Frequency Range | Time Domain | De-embedding Techniques
Cost/Performance Trade-Offs | X | |
Location of Defects | X | X | X
Correlation Between Simulation and Measurement | X | X |
Fixture De-Embedding | | | X

Table 2. Relevance of aspects of VNA performance to SI Engineer Challenges.

Maximizing Available Frequency Range

The lower and the upper frequency limits of an S-parameter characterization of a backplane or other interconnect both have impacts on the quality of the data and any subsequent modeling, but for different reasons. The following will consider each in turn.

The upper frequency range is what usually comes to mind first, and many people perform measurements to the 3rd or 5th harmonic of the NRZ clock frequency. For a 28 Gbps data rate this means a stop frequency of either 42 GHz or 70 GHz for the S-parameter sweep. There is another way to think about the requirement for the upper measurement frequency: from the viewpoint of causality. When S-parameter data is transformed into the time domain for use in further simulation, causality errors can arise; these are essentially cases where events appear to occur in negative time. This can lead to convergence problems in the simulations and inaccuracies in modeling larger-scale subsystems. While massaging the frequency domain data can reduce these problems, doing so risks distorting the actual physical behavior of the device. It is therefore often safer and more accurate to use as wide a frequency range as possible, up to the point where repeatability and related distortions (e.g., the DUT starts radiating efficiently, making the measurement very dependent on its surroundings) obscure the results. The desire for wider frequency range data becomes more compelling as faster and more complex transients are studied in the higher-level simulations.
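The harmonic arithmetic above is simple enough to capture in a few lines (Python, purely illustrative):

```python
def stop_frequency_ghz(data_rate_gbps, harmonic):
    """For NRZ data the clock fundamental is half the data rate;
    sweep to the chosen odd harmonic of that fundamental."""
    return (data_rate_gbps / 2.0) * harmonic

for h in (3, 5):
    print(f"28 Gbps, harmonic {h}: sweep to {stop_frequency_ghz(28.0, h):.0f} GHz")
# -> 42 GHz and 70 GHz, matching the figures quoted above
```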

The lower frequency bound of the sweep is just as important. Model accuracy generally improves the closer the data is acquired to DC. For example, consider the case where the measured S-parameter data for a backplane is fed into a software model in order to estimate the impact of that backplane on the eye pattern. Figure 3 shows what the eye pattern estimate looks like when the low frequency data has some error. In this example, it was found that a 0.5 dB error distribution on transmission at a lower frequency (10 MHz) could take an 85% open eye to a fully closed eye. Since mid-band (10 GHz) transmission uncertainty may be near 0.1 dB depending on setup and calibration – and higher at low frequencies – this eye distortion effect cannot be neglected. Figure 4 shows the resulting eye pattern when the low frequency measurement data is of good quality and extends down to 70 kHz. This prediction correlates very well with the actual eye pattern measured using an oscilloscope, as shown in Figure 5.


Figure 3: With 0.5 dB insertion loss error at 10 MHz.


Figure 4: Accurate S-parameter data down to 70 kHz reveals an open eye pattern.

Since the non-transitioning parts of the eye-diagram are inherently composed of low frequency behavior, the sensitivity of the calculation to the low frequency S-parameter data makes sense. Because the low frequency insertion losses tend to be small, a large fixed-dB error (which is how VNA uncertainties tend to behave) can be particularly damaging.


Figure 5: Measured eye pattern.

Optimizing Time Domain Resolution

The time domain performance of a VNA is critical when trying to locate defects. In general, the wider the frequency sweep, the better the time resolution and, hence, the spatial resolution. Figure 6 shows the differences in time domain resolution for three different frequency spans: 40, 50, and 70 GHz.

Resolution is maximized when Low-Pass time domain mode is used. This mode also permits characterization of impedance changes on the backplane. Low-Pass mode requires a quasi-harmonically related set of frequencies that starts at the lowest possible frequency. A DC term is extrapolated to provide a phase reference, so the true nature of a discontinuity can be evaluated. Hence, the lower the frequency at which the sweep can start, the better the extrapolation of the DC term.
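A minimal sketch of the Low-Pass mode idea, assuming S21 sampled on a harmonic grid f = Δf, 2Δf, ..., NΔf; the DC extrapolation here is a naive linear fit, purely for illustration (instruments use more careful extrapolations), which is exactly why a lower start frequency improves the result:

```python
import numpy as np

def lowpass_impulse_response(s21):
    """Low-Pass mode sketch for S21 on a harmonic grid df, 2*df, ..., N*df."""
    # Naive linear extrapolation of the DC term from the two lowest points;
    # the lower the start frequency, the smaller this extrapolation error.
    dc = 2.0 * s21[0].real - s21[1].real
    # Mirror into a conjugate-symmetric spectrum -> real impulse response.
    spectrum = np.concatenate(([dc], s21, np.conj(s21[::-1])))
    return np.fft.ifft(spectrum).real
```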


Figure 6: Getting the best time domain resolution requires the most data points, narrowest frequency step size, and widest possible frequency bandwidth.

Using Flexible De-Embedding Techniques

Fixtures and connectors to devices under test come in many forms, and poor de-embedding can lead to both passivity and causality errors. Causality errors were discussed above; passivity errors occur when a passive device appears to have gain or to otherwise create energy. The passivity error caused by small de-embedding problems can be subtle, but it can have large effects on follow-on modeling or simulation, as suggested by the earlier eye-diagram example. The solution is to have a wide range of techniques available that can handle different situations.
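A basic passivity screen is straightforward to express: at every frequency point, the largest singular value of a passive device's S-matrix must not exceed one. A hedged sketch (the tolerance is an arbitrary choice to absorb measurement noise):

```python
import numpy as np

def passivity_report(s_matrices, tol=1e-6):
    """s_matrices: iterable of NxN complex S-matrices, one per frequency point.
    A passive network cannot amplify: max singular value <= 1 everywhere."""
    worst = max(np.linalg.svd(s, compute_uv=False)[0] for s in s_matrices)
    return worst <= 1.0 + tol, worst
```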

The following table lists several extraction methods for de-embedding.

Method | Standard Complexity | Fundamental Accuracy | Sensitivity to Standards | Media Preferences
Type A (adapter removal) | High | High | High (refl.) | Need good reflect and thru stds
Type B (Bauer-Penfield) | Medium | High | High (refl.) | Only need reflect standards; not great for coupled lines
Type C (inner-outer) | High | High | Medium (refl.) | More redundant than A so less sensitive, but still need good stds
Type D (2-port lines) | Medium | Low for low-loss or mismatched fixtures | Medium (line def'n.) | Only need decent lines; match relegated to lower dependence; can handle coupled lines
Type E (4-port inner-outer) | High | High | Medium (refl.) | Somewhat redundant (like C) but need decent standards; best for uncoupled multiport fixtures
Type F (4-port uncoupled) | Medium | Low for low-loss or mismatched fixtures | Medium (line def'n.) | Only need decent lines; match relegated to lower dependence; can handle coupled lines
Type G (4-port coupled) | Medium | Low for low-loss or mismatched fixtures | Medium (line def'n.) | Only need decent lines; match relegated to lower dependence; can handle coupled lines well
Type H (TD-based) | Low | Can be low for lossy or complex fixtures (many structures per wavelength) | Low | Best for electrically large structures

Table 3. De-embedding Methods.

As can be seen, there are many extraction methods available, and the choice is somewhat context dependent. For signal integrity applications, the most likely choices are Type F or Type G. There is not space in this white paper to go into the details of all of these methods; they will be covered in more detail in a future white paper.

Conclusion

Higher data rates require accurate measurements to provide the confidence needed to make cost/performance decisions. Measurement tools must help shorten design times and ensure stable signal integrity in mass production. Vector Network Analyzers play a key role in helping the signal integrity engineer meet the challenges of increasing data rates: making appropriate cost/performance trade-offs, achieving correlation between simulation and measurement, and extracting the effects of fixtures. When selecting a VNA, the user should look at characteristics such as upper and lower frequency limits, time domain performance, and the breadth of advanced calibration and de-embedding techniques available.