By Jon Martens and Bob Buxton
Introduction
Higher data rates introduce new challenges for test solutions. Several 20+ Gbit/s high speed standards (Table 1) are driving the upper end of the test spectrum to 70 GHz and even 110 GHz. Accurate measurements are needed to better understand higher order harmonics, as well as new challenges related to conductor skin effects and dielectric losses on PC boards, along with the design trade-offs related to choices of vias, stackups, and connector pins.
Standard | Data Rate | Number of Lanes
CEI-25G-SR | 19.90 to 28.05 Gbit/s | 1 to N
CEI-25G-LR | 19.90 to 25.80 Gbit/s | 1 to N
IEEE802.3ba 100GBASE-LR/ER | 25.78125 Gbit/s | 4
32G Fiber Channel | 28.05 Gbit/s | 1
Infiniband 26G-IB-EDR | 25.78125 Gbit/s | 1 to N
Table 1. 20+ Gbit/s High Speed Standards.
When using Vector Network Analyzers (VNAs) to evaluate backplanes and interconnects, one of the most basic considerations is the frequency range over which to make S-parameter measurements. The choice of frequency range affects the ability to locate defects, the correlation between simulations and measurements, and ultimately the ability to make good decisions concerning cost/performance trade-offs.
This white paper discusses the importance of both the high and low frequency test limits, as well as their impact on time domain results.
High-Speed Harmonics Increase Required Frequency Range
It will come as no surprise that as bit rates increase, the upper frequency limit for evaluating backplane and interconnect transmission characteristics must also increase. Higher speeds translate into higher test frequencies: measurements need to extend to the 3rd or, ideally, the 5th harmonic of the NRZ clock frequency. For a 28 Gbps data rate, this means either a 42 GHz or a 70 GHz stop frequency for an S-parameter sweep. Figure 1 shows the spectrum of a 14 GHz square wave, which would be the clock frequency for a 28 Gbps NRZ signal; in this example the signal has been passed through a connector/cable assembly. Attenuating the harmonics of the clock frequency distorts the signal, hence the need to characterize the frequency response of transmission media to higher frequencies, ideally to at least the 5th harmonic.
Figure 1: Harmonic Content of 28 Gbps NRZ Clock Signal.
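As a quick arithmetic check, the required stop frequency follows directly from the bit rate. The minimal Python sketch below (the function name is our own) reproduces the 42 GHz and 70 GHz figures quoted above:

```python
def nrz_stop_frequency_ghz(bit_rate_gbps: float, harmonic: int = 5) -> float:
    """Suggested S-parameter stop frequency for an NRZ signal.

    The NRZ clock (fundamental) frequency is half the bit rate, and the
    sweep should reach at least the 3rd, ideally the 5th, harmonic.
    """
    clock_ghz = bit_rate_gbps / 2.0
    return harmonic * clock_ghz

print(nrz_stop_frequency_ghz(28, harmonic=3))  # 42.0 GHz
print(nrz_stop_frequency_ghz(28, harmonic=5))  # 70.0 GHz
```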
Upper Frequency Data Impact on Simulation Stability
There is another way to think about the upper measurement frequency requirement: from the viewpoint of causality. Causality is simply the statement that the output of an electrical network should occur after the stimulus. Lack of causality, where the output appears to occur prior to the stimulus, can be observed when poor S-parameter data is transformed into the time domain for use in circuit or other simulations. Non-causal S-parameter data can cause unstable simulations that fail to converge to a solution or produce inaccurate results.
One cause of poor S-parameter data is insufficient high frequency content. For ideal causality, S-parameter data would be available from DC to infinity, which is not a very practical situation, at least at the upper limit. Figure 2 shows the time domain representation computed from S-parameter datasets with upper frequency limits of 40 GHz and 110 GHz. As can be seen, the negative-time (non-causal) energy is much greater in the 40 GHz case. The vertical scale in Figure 2 is small, but even this level of energy can matter in many simulations.
Figure 2: Non-Causal Results for Various Data-Set Bandwidths
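The effect can be reproduced with a simple numerical experiment. The sketch below uses our own illustrative DUT model (skin-effect-style loss plus a 200 ps delay, not measured data), truncates its S21 at two different stop frequencies, and compares the fraction of impulse-response energy landing at negative time:

```python
import numpy as np

def noncausal_fraction(f_stop_hz, n_points=4001, tau=200e-12, a=3e-5):
    """Fraction of impulse-response energy at negative time when a
    band-limited S21 is transformed. DUT model (an assumption):
    skin-effect-style loss exp(-a*sqrt(f)) plus a pure delay tau."""
    f = np.linspace(0.0, f_stop_hz, n_points)
    s21 = np.exp(-a * np.sqrt(f)) * np.exp(-2j * np.pi * f * tau)
    h = np.fft.irfft(s21)                  # real impulse response
    t = np.arange(len(h)) / (2.0 * f_stop_hz)
    neg = t > t[-1] / 2                    # circular wrap: t < 0 aliases here
    return np.sum(h[neg] ** 2) / np.sum(h ** 2)

for f_stop in (40e9, 110e9):
    print(f"{f_stop/1e9:.0f} GHz data: non-causal energy fraction "
          f"= {noncausal_fraction(f_stop):.1e}")
```

Because the assumed loss keeps rising with frequency, truncating at 110 GHz cuts the spectrum where its magnitude is far smaller, so much less ringing leaks into negative time, consistent with Figure 2.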
In theory, massaging the frequency domain data can reduce these problems; however, this risks distorting the actual physical behavior of the device. It is therefore often safer and more accurate to use as wide a frequency range as possible, up to the point where repeatability and related distortions (e.g., the DUT starts radiating efficiently, making the measurement very dependent on the surroundings) obscure the results. The desire for wider frequency range data becomes more compelling as faster and more complex transients are studied in higher level simulations.
Near-DC Measurements – No Less Important
Once the upper frequency requirement has been addressed, it is time to look at the other end of the spectrum. Accurate measurements down to the lowest possible frequency remain very important for signal integrity applications.
Often, the accuracy of your models can be improved by measuring down as close to DC as possible. For example, consider the case where the measured S-parameter data for a backplane is fed into a software model in order to estimate the impact of that backplane on the eye pattern. Figure 3 shows what the eye pattern estimate looks like when the low frequency data contains some error. In this example, it was found that a 0.5 dB error injected on transmission at a low frequency (<10 MHz) could take an 85% open eye to a fully closed eye. Since mid-band (10 GHz) transmission uncertainty may be near 0.1 dB depending on setup and calibration, and higher at low frequencies, this eye distortion effect cannot be neglected.
Figure 3: With 0.5 dB insertion loss error at 10 MHz the eye pattern appears to be closed.
Figure 4 shows what the resulting eye pattern will look like if the low frequency measurement data is of good quality and extends down to 70 kHz. This prediction correlates very well with the actual eye pattern measured using an oscilloscope as shown in Figure 5.
Figure 4: Accurate S-parameter data down to 70 kHz reveals an 85% open eye pattern.
Figure 5: Measured eye pattern on an oscilloscope verifies the accurate S-parameter result.
Since the non-transitioning parts of the eye diagram are inherently composed of low frequency behavior, the sensitivity of the calculation to the low frequency S-parameter data makes sense. And because low frequency insertion losses tend to be small, a large fixed-dB error (which is how VNA uncertainties tend to behave) can be particularly damaging.
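To put a number on this, a fixed-dB error is multiplicative in linear terms, so when the true low frequency loss is only tenths of a dB, a 0.5 dB uncertainty swamps it. A quick check:

```python
# A 0.5 dB error scales the transmitted amplitude by 10**(-0.5/20),
# i.e. roughly a 6 % error in the signal levels that the low frequency
# data controls -- large compared with a sub-0.5 dB true loss.
err_db = 0.5
amplitude_error = 1 - 10 ** (-err_db / 20)
print(f"{amplitude_error:.1%}")   # ~5.6 %
```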
Time Domain – It’s a Question of Both Upper and Lower Frequency Data
Passive components, as well as near-end and far-end points between daughter boards, must be measured in both the frequency and time domains to ensure that the transmission characteristics at each measurement point meet the standards. Using the best resolution capability improves your ability to locate discontinuities, impedance changes, and crosstalk issues.
The time domain performance of a VNA is critical when trying to locate defects. In general, the wider the frequency sweep, the better the time, and hence spatial, resolution. Figure 6 clearly shows the benefits of using S-parameter data captured over a wide frequency bandwidth.
Figure 6: The time domain resolution benefits of wider frequency bandwidths are clearly shown in these measurements of a short at the end of a fixture arm based on 40 GHz and 110 GHz data.
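As a rough rule of thumb (an approximation; the exact figure depends on windowing and processing), the time resolution of a low-pass transform is on the order of one over twice the sweep bandwidth. The sketch below converts that to distance using an assumed effective dielectric constant for a PC board:

```python
C0 = 299_792_458.0            # speed of light, m/s

def spatial_resolution_mm(f_stop_hz, eps_r_eff=3.5):
    """Approximate one-way spatial resolution of a reflection
    measurement: dt ~ 1/(2*f_stop), halved again for the round trip.
    eps_r_eff is an assumed effective dielectric constant."""
    dt = 1.0 / (2.0 * f_stop_hz)
    v = C0 / eps_r_eff ** 0.5
    return v * dt / 2.0 * 1e3

print(f"{spatial_resolution_mm(40e9):.2f} mm")    # ~1.0 mm at 40 GHz
print(f"{spatial_resolution_mm(110e9):.2f} mm")   # ~0.36 mm at 110 GHz
```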
Lack of good low frequency S-parameter data can also lead to further complications when converting to the time domain, whether for measuring impedance changes along a line or for modeling. Resolution is maximized when Low-Pass time domain mode is used; this mode also permits characterization of impedance changes on the backplane. Low-Pass mode requires a quasi-harmonically related set of frequencies starting at the lowest frequency possible, as sketched below. A DC term is then extrapolated to provide a phase reference, so the true nature of a discontinuity can be evaluated. Hence, the lower the start frequency, the better the extrapolation of the DC term.
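A low-pass sweep plan is simply a harmonic grid anchored at the instrument's lowest start frequency (a minimal sketch; the 70 kHz start echoes the example above):

```python
import numpy as np

f_start = 70e3                                 # lowest available start frequency
n_points = 1000
freqs = f_start * np.arange(1, n_points + 1)   # f1, 2*f1, ..., n*f1
# The transform then needs one extrapolated DC term prepended to this grid.
```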
Figure 7 shows how the DC extrapolation of the data can vary significantly depending on the low frequency measurement cut-off point. In this case, extrapolation of measurement results from an analyzer with a minimum start frequency of f1 would predict one value for the DC term, whereas a set of results that more closely approaches zero Hz would provide a better DC extrapolation.
Figure 7: DC Extrapolation from measured S-parameter data.
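The sensitivity of the extrapolation to the start frequency is easy to demonstrate. In the sketch below, the DUT is our own assumed model: a 50-ohm line whose series resistance rises as sqrt(f) (skin effect), so its true DC reflection coefficient is exactly zero. A linear extrapolation from a 50 MHz start misses badly, while one from a 300 kHz start lands close:

```python
import numpy as np

def s11(f):
    """Assumed DUT: 50-ohm line with skin-effect series resistance,
    R(f) = 1e-3 * sqrt(f) ohms, so S11(0) = 0 exactly."""
    r = 1e-3 * np.sqrt(f)
    return r / (r + 100.0)

def dc_extrapolated(f_start, n_fit=2):
    f = f_start * np.arange(1, n_fit + 1)      # lowest harmonic points
    return np.polyval(np.polyfit(f, s11(f), 1), 0.0)

print(f"start 50 MHz : DC estimate = {dc_extrapolated(50e6):+.4f}")
print(f"start 300 kHz: DC estimate = {dc_extrapolated(300e3):+.4f}")
print("true DC value = +0.0000")
```

The sqrt(f) curvature near DC is exactly the kind of behavior a linear fit from too high a start frequency cannot capture, which is the point Figure 7 makes graphically.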
A poorly estimated DC term then leads to an erroneous view of the device under test. Figure 8 shows the situation for a step change in reflection coefficient: prior to 200 ps the impedance of the line is 50 ohms, and after that point it is zero ohms. With a bad DC extrapolation, the 50 ohm section clearly shows a sloping impedance along its length, whereas with a good extrapolation the 50 ohm line is seen correctly.
Figure 8: Impact of poor DC extrapolation on time domain results.
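The sloping-impedance artifact can be reproduced numerically. The following sketch (again an assumed ideal DUT, not measured data) transforms the S11 of a 50-ohm line shorted at 200 ps using a correct DC term of -1 and a deliberately poor one, then reads the impedance at three points along the 50-ohm section:

```python
import numpy as np

tau = 200e-12                                  # one-way delay to the short
f1, n = 100e6, 700                             # harmonic grid up to 70 GHz
f = f1 * np.arange(1, n + 1)
s11 = -np.exp(-4j * np.pi * f * tau)           # ideal shorted 50-ohm line
window = 0.5 * (1 + np.cos(np.pi * np.arange(n + 1) / n))  # tame ringing

for dc, label in ((-1.0, "good DC term"), (0.0, "poor DC term")):
    spectrum = window * np.concatenate(([dc], s11))
    rho = np.cumsum(np.fft.irfft(spectrum))    # low-pass step response
    t = np.arange(len(rho)) / (2.0 * f[-1])
    z = 50.0 * (1.0 + rho) / (1.0 - rho)       # impedance profile
    picks = np.searchsorted(t, [50e-12, 200e-12, 350e-12])
    vals = ", ".join(f"{z[i]:.1f}" for i in picks)
    print(f"{label}: Z at 50/200/350 ps = {vals} ohm")
```

With the correct DC term, the 50-ohm section reads flat; with the poor one, the DC error spreads as a constant offset across the impulse response, and its cumulative sum produces exactly the upward-sloping impedance seen in Figure 8.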
Getting the Best of Both Worlds
From the foregoing it can be seen that the ideal VNA would have as low a start frequency as possible and a stop frequency as high as required by the bit rate and causality concerns. A VNA depends on directional devices to sample forward and reverse traveling signals in order to make the ratioed measurements from which S-parameters are computed. Typically, microwave couplers are used. Unfortunately, the coupling performance of these devices degrades at lower frequencies, below say 1 GHz, which reduces dynamic range and increases the uncertainty of lower frequency measurements. A bridge allows one to go much lower in frequency without performance degradation since it does not rely on pure geometrical length (in wavelengths) to accomplish the coupling. Rather, the bridge uses lumped impedance sources at low frequencies and distributed ones at higher frequencies. Analyzers are available that use a hybrid approach: coupler based architectures at higher frequencies and bridge based coupling devices at lower frequencies.
Conclusion
As data rates increase, signal integrity engineers require ever wider frequency ranges and test equipment that maintains the accuracy they are used to. Vector Network Analyzers play a key role in helping them meet these challenges so they can make appropriate cost/performance trade-offs. When selecting a VNA, the user should look for the highest possible stop frequency and the lowest possible start frequency.