PVT, Derating and STA

What is the derate value that can be used?
  • For a setup check, derate the data path by 8% to 15%; no derate on the clock path.
  • For a hold check, derate the clock path by 8% to 15%; no derate on the data path. (A pt_shell sketch follows.)
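A minimal pt_shell sketch of the above (the 10% figure is an assumed value inside the quoted 8-15% range, and in practice the setup and hold derates are applied in separate late/early analyses):

  • pt_shell> set_timing_derate -data -late 1.10    (setup run: data path 10% slower, clock path untouched)
  • pt_shell> set_timing_derate -clock -late 1.10   (hold run: capture clock path 10% slower, data path untouched)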


What are the corners you check for timing sign-off? Are there any changes in the derate value for each corner?
  • Corners: worst, best, typical.
  • The same derating value is used for the best and worst corners; for typical it can be less.

Write the setup and hold equations.
  • Setup equation: T_launch_clock + T_clk-to-q(max) + T_combo(max) <= T_capture_clock - (T_setup + T_skew)
  • Hold equation: T_launch_clock + T_clk-to-q(min) + T_combo(min) >= T_capture_clock + (T_hold - T_skew)  (a worked example follows)
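As a quick worked check of the setup equation with illustrative numbers (assumed values, not from any library): take T_launch_clock = 0 ns, T_clk-to-q(max) = 0.3 ns, T_combo(max) = 1.5 ns, T_capture_clock = 2 ns (one period later), T_setup = 0.1 ns and zero skew. Then 0 + 0.3 + 1.5 = 1.8 ns <= 2 - 0.1 = 1.9 ns, so the check passes with 0.1 ns of setup slack.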

Where do you get the WLMs? Do you create WLMs? How do you specify them?
  • Wire load models (WLMs) are available from the library vendors.
  • We don't create WLMs.
  • WLMs can be specified depending on the area. (A dc_shell sketch follows.)
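A minimal dc_shell sketch of specifying an area-based WLM (the model name "area_40K" and the library name "my_stdcell_lib" are placeholders, not values from the text):

  • dc_shell> set_wire_load_model -name area_40K -library my_stdcell_lib
  • dc_shell> set_wire_load_mode top    (estimate all nets using the top-level design's WLM)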

Where do you get the derating value? What are the factors that decide the derating factor?
  • The derating value is decided based on guidelines and suggestions from the library vendor and on previous design experience.
  • PVT variation is the factor that decides the derating factor.

What factors decide the setup time of a flip-flop?
  • D-pin transition and clock transition.

Why don't you derate the clock path by -10% for worst-corner analysis?
  • We can, but it may not be as accurate as the data path derate.

Leakage Power Trends

Development of digital integrated circuits is challenged by increasing power consumption. The combination of higher clock speeds, greater functional integration, and smaller process geometries has contributed to significant growth in power density. At 90 nm and below, leakage power management is essential in the ASIC design process. As supply voltages scale downward with the geometries, threshold voltages must also decrease to gain the performance advantages of the new technology, but leakage current then increases exponentially. Thinner gate oxides have also led to an increase in gate leakage current.

Scaling improves transistor density and functionality on a chip. It increases speed and frequency of operation and hence performance. At the same time, power dissipation increases; to counteract the increase in active and leakage power, Vth must also be scaled. Leakage power is catching up with dynamic power in VDSM CMOS circuits, as shown in Figure 1.


Figure 1. Leakage vs. Dynamic power [3]


According to Sung Mo Kang et al. [1] and Anantha P. Chandrakasan et al. [2], power consumption in a circuit can be divided into three components:

1) dynamic,

2) static (or leakage), and

3) short-circuit power consumption.

Dynamic (or switching) power is consumed when signals passing through the CMOS circuit change their logic state, charging and discharging the output node capacitance.
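In symbols, the standard CMOS switching-power expression is P_dynamic = a * C_load * V_dd^2 * f_clock, where a is the switching activity factor; this is the same form as the P_average formula used later in the temperature discussion.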

Leakage power is the power consumed by subthreshold currents and by reverse-biased diodes in a CMOS transistor.

Short-circuit power is consumed during switching, when both the NMOS and PMOS transistors in the circuit conduct simultaneously for a short time.


Leakage Power

The power consumed by subthreshold currents and by reverse-biased diodes in a CMOS transistor is considered leakage power. The leakage power of a CMOS logic gate does not depend on input transitions or load capacitance, and hence it remains constant for a logic cell.



Figure 2. Leakage power components in an inverter [5]


Leakage Components in Bulk CMOS

The different leakage power components are classified as follows and are shown in Figure 3.

  • Reverse-biased diode current: drain- and source-substrate junction band-to-band tunneling (BTBT) – I1
  • Subthreshold current – I2
  • Gate-induced drain leakage (GIDL) – I3


Figure 3. Major leakage components in a transistor [2] [3]


As the technology node shrinks towards 45 nm and below, gate leakage (i.e. leakage current due to direct tunneling) increases owing to the increased electric field; this is why the supply voltage is scaled down to around 1 V. Improvements in the manufacturing process and materials have helped to control the other leakage components, such as subthreshold leakage, GIDL and junction reverse-bias leakage. A comparative graphical representation of the different leakage currents at different technology nodes is shown in Figure 4.



Figure 4. Technology shrinking vs. Leakage components


Subthreshold leakage is controlled by having more control over the threshold voltage. Older process technologies showed up to 50% threshold-voltage variation, but newer technologies produce very low threshold-voltage deviation, 30 mV being a typical maximum. Decreases in junction area and voltage automatically decrease junction reverse-bias leakage and GIDL respectively, but the tunneling effect threatens further decreases in device dimensions. Reducing GIDL, reverse-bias leakage and gate leakage due to tunneling is directly related to improvements in the fabrication chemistry of the device, whereas the designer has little control over the threshold voltage. The other way a designer can control these leakage components is to switch off the device itself in a controlled fashion: low power techniques like "power gating" do this effectively, and the "back bias" technique controls the threshold voltage.

Xiaodong Zhang [3] has studied the impact of dynamic and leakage power as the technology node reaches the deep submicron level. A summary of those results, together with the leakage trends studied by Massoud Pedram [4], is shown in Table 1.



Table 1. Leakage power trends


A wide variety of techniques has been developed to address the various aspects of the power problem and to meet power specifications. These techniques include clock gating, multi-threshold (multi-Vt) voltage cells, multiple voltage domains, substrate biasing, dynamic voltage and frequency scaling (DVFS), and power gating [1].


References

[1] Sung Mo Kang and Yusuf Leblebici, “CMOS Digital Integrated Circuits: Analysis and Design”, Tata McGraw Hill, Third Edition, New Delhi, 2003.

[2] Anantha P. Chandrakasan, Samuel Sheng and Robert W. Brodersen, “Low Power CMOS Digital Design”, IEEE Journal of Solid-State Circuits, vol. 27, no. 4, pp. 472-484, April 1992.

[3] Xiaodong Zhang, “High Performance Low Leakage Design Using Power Compiler and Multi-Vt Libraries”, Synopsys, SNUG Europe, 2003, www.synopsys.com, accessed 10/9/2007.

[4] Massoud Pedram, “Leakage Power Modeling and Minimization”, University of Southern California, Dept. of EE-Systems, Los Angeles, CA 90089, ICCAD 2004 Tutorial, www.ceng.usc.edu, accessed 10/10/2007.

[5] Michael Keating, David Flynn, Robert Aitken, Alan Gibbons and Kaijian Shi, “Low Power Methodology Manual for System on Chip Design”, Springer, New York, 2007, www.lpmm-book.org, accessed 4/9/2007.

Process-Voltage-Temperature (PVT) Variations and Static Timing Analysis

ASIC design faces both microscopic and macroscopic challenges [1]. The microscopic issues are ultra-high speeds, power dissipation, supply rail drop, the growing importance of interconnect, noise, crosstalk, reliability, manufacturability and clock distribution. The macroscopic issues are time to market, design complexity, high levels of abstraction, reuse, IP portability, systems on a chip and tool interoperability.

To meet the design challenge of clock distribution, timing analysis is performed. Timing analysis estimates when the output of a given circuit becomes stable. Timing Analysis (TA) is a design automation program which provides an alternative to hardware debugging of timing problems. The program establishes whether all paths within the design meet the stated timing criteria, that is, that data signals arrive at storage elements early enough for valid gating but not so early as to cause premature gating. The output of timing analysis includes a “slack” value at each block to provide a measure of the severity of any timing problem [13].


Static vs. Dynamic Timing Analysis

Timing analysis can be static or dynamic.

Static Timing Analysis (STA) works with timing models, whereas Dynamic Timing Analysis (DTA) works with SPICE models. STA is more pessimistic and thus gives the maximum delay of the design. DTA overcomes this difficulty because it performs full timing simulation; the problem with DTA is the computational complexity of finding the input pattern(s) that produce the maximum delay at the output, which makes it slow. A static timing analyzer will report the following delays: register-to-register delays, setup times of all external synchronous inputs, clock-to-output delays, and pin-to-pin combinational delays. The clock-to-output delay is usually reported as simply another pin-to-pin combinational delay. Timing analysis reports are often pessimistic since they use worst-case conditions.

The widespread use of STA can be attributed to several factors [2]:

The basic STA algorithm is linear in runtime with circuit size, allowing analysis of designs in excess of 10 million instances.

The basic STA analysis is conservative in the sense that it will over-estimate the delay of long paths in the circuit and under-estimate the delay of short paths in the circuit. This makes the analysis “safe”, guaranteeing that the design will function at least as fast as predicted and will not suffer from hold-time violations.

The STA algorithms have become fairly mature, addressing critical timing issues such as interconnect analysis, accurate delay modeling, false or multi-cycle paths, etc.
Delay characterization for cell libraries is clearly defined, forms an effective interface between the foundry and the design team, and is readily available. In addition to this, the Static Timing Analysis (STA) does not require input vectors and has a runtime that is linear with the size of the circuit [9].
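As a toy illustration of that linear-time core (a sketch only, not any tool's implementation; the gate names and delays are made-up placeholders), a single Tcl pass over the gates in topological order computes worst-case arrival times, visiting each gate and each fanin edge exactly once:

    # longest-path arrival times in one topological pass
    array set delay {g1 1.0 g2 2.0 g3 1.5}      ;# gate -> own delay (placeholder values)
    array set fanin {g1 {} g2 {g1} g3 {g1 g2}}  ;# gate -> list of fanin gates
    foreach g {g1 g2 g3} {                      ;# gates assumed already in topological order
        set at 0.0
        foreach f $fanin($g) {
            set at [expr {max($at, $arrival($f))}]
        }
        set arrival($g) [expr {$at + $delay($g)}]
    }
    puts "arrival(g3) = $arrival(g3)"           ;# worst-case arrival: 4.5

Because each gate and each edge is processed once, the runtime grows linearly with circuit size.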


PVT vs. Delay

Sources of variation can be:

  • Process variation (P)
  • Supply voltage (V)
  • Operating Temperature (T)



Process Variation [14]


This variation accounts for deviations in the semiconductor fabrication process. Usually process variation is treated as a percentage variation in the performance calculation. Variations in process parameters include impurity concentration densities, oxide thicknesses and diffusion depths; these are caused by non-uniform conditions during deposition and/or diffusion of the impurities, and they introduce variations in the sheet resistance and in transistor parameters such as the threshold voltage. There are also variations in the dimensions of the devices, mainly resulting from the limited resolution of the photolithographic process, which cause (W/L) variations in MOS transistors.

Process variations are due to variations in manufacturing conditions such as temperature, pressure and dopant concentrations. ICs are produced in lots of 50 to 200 wafers with approximately 100 dice per wafer. The electrical properties in different lots can be very different, and there are also slighter differences within each lot, even within a single manufactured chip. Process parameters vary across a whole chip; as a consequence, transistors have different lengths throughout the chip. This makes the propagation delay differ from place to place on a chip, because a smaller transistor is faster and therefore has a smaller propagation delay.


Supply Voltage Variation [14]


The design’s supply voltage can vary from the established ideal value during day-to-day operation. Often a complex calculation (using a shift in threshold voltages) is employed, but a simple linear scaling factor is also used for logic-level performance calculations.

The saturation current of a cell depends on the power supply, and the delay of a cell depends on the saturation current; in this way, the power supply influences the propagation delay of a cell. Throughout a chip the power supply is not constant, and hence the propagation delay varies across the chip. The voltage drop is due to nonzero resistance in the supply wires; a higher voltage makes a cell faster, and hence the propagation delay is reduced. The decrease is exponential over a wide voltage range. The self-inductance of a supply line also contributes to a voltage drop: for example, when a transistor switches high, it draws a current to charge up the output load, and this time-varying current (for a short period of time) causes an opposing self-induced electromotive force. The amplitude of the voltage drop is given by dV = L * dI/dt, where L is the self-inductance and I is the current through the line.


Operating Temperature Variation [14]


Temperature variation is unavoidable in the everyday operation of a design. Effects on performance caused by temperature fluctuations are most often handled as linear scaling effects, but some submicron silicon processes require nonlinear calculations.

When a chip is operating, the temperature can vary throughout the chip due to the power dissipation in the MOS transistors. The power consumption is mainly due to switching, short-circuit and leakage power. The average switching power dissipation, approximately P_average = C_load * V_supply^2 * f_clock, is the energy required to charge the parasitic and load capacitances. The short-circuit power dissipation is due to the finite rise and fall times: the nMOS and pMOS transistors may conduct simultaneously for a short time during switching, forming a direct current path from the power supply to ground. The leakage power consumption is due to the nonzero reverse leakage and subthreshold currents. The biggest contribution to the power consumption is switching, and the dissipated power increases the surrounding temperature. The electron and hole mobility depend on the temperature: the mobility (in Si) decreases with increasing temperature for temperatures above -50 °C. The temperature at which the mobility starts to decrease depends on the doping concentration; a starting temperature of -50 °C holds for doping concentrations below 10^19 atoms/cm³, and for higher doping concentrations the starting temperature is higher. When the electrons and holes move more slowly, the propagation delay increases; hence, the propagation delay increases with temperature. There is a second, competing temperature effect: the threshold voltage of a transistor also depends on the temperature, and a higher temperature decreases the threshold voltage. A lower threshold voltage means a higher current and therefore better delay performance. This effect depends strongly on the power supply, threshold voltage, load and input slope of a cell. The two effects compete, and generally the mobility effect wins.


The following figure shows the PVT operating conditions.




The best and worst design corners are defined as follows:

  • Best case: fast process, highest voltage and lowest temperature

  • Worst case: slow process, lowest voltage and highest temperature


On Chip Variation


On-chip variation consists of minor differences between different parts of the chip within one operating condition. On-chip variation (OCV) delays vary across a single die due to:
  • Variations in the manufacturing process (P)

  • Variations in the voltage (due to IR drop)

  • Variations in the temperature (due to local hot spots etc)

These variations are modeled by scaling the delay coefficients. Delays are uncertain because Process (P), Voltage (V) and Temperature (T) vary across large dies. On-chip variation analysis accounts for the delay variations due to PVT changes across the die, providing more accurate delay estimates.





Timing Analysis With On-Chip Variation

  • For cell delays, the on-chip variation is between 5 percent above and 10 percent below the SDF back-annotated values.

  • For net delays, the on-chip variation is between 2 percent above and 4 percent below the SDF back-annotated values.

  • For cell timing checks, the on-chip variation is 10 percent above the SDF values for setup checks and 20 percent below the SDF values for hold checks.

In PrimeTime, OCV derates are applied using the following commands (a note on inspecting the results follows the list):

  • pt_shell> read_sdf -analysis_type on_chip_variation my_design.sdf

  • pt_shell> set_timing_derate -cell_delay -min 0.90 -max 1.05    (cell delays: 10% below to 5% above the SDF values)

  • pt_shell> set_timing_derate -net_delay -min 0.96 -max 1.02     (net delays: 4% below to 2% above the SDF values)

  • pt_shell> set_timing_derate -cell_check -min 0.80 -max 1.10    (timing checks: setup +10%, hold -20%)
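Once the derates are set, the late and early analyses can be inspected separately with standard PrimeTime reporting commands (the exact report options used on a given project will vary):

  • pt_shell> report_timing -delay_type max    (setup/late paths, with the -max derates applied)
  • pt_shell> report_timing -delay_type min    (hold/early paths, with the -min derates applied)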



In the traditional deterministic STA (DSTA), process variation is modeled by running the analysis multiple times, each at a different process condition. For each process condition, a so-called corner file is created that specifies the delay of the gates at that process condition. By analyzing a sufficient number of process conditions, the delay of the circuit under process variation can be bounded.

The uncertainty in the timing estimate of a design can be classified into three main categories.

  • Modeling and analysis errors: inaccuracy in device models, in the extraction and reduction of interconnect parasitics, and in the timing analysis algorithms. For instance, the STA tool might use a conservative delay noise algorithm, resulting in certain paths operating faster than predicted.

  • Manufacturing variations: uncertainty in the parameters of fabricated devices and interconnects, from die to die and within a particular die.

  • Operating context variations: uncertainty in the operating environment of a particular device during its lifetime, such as temperature, supply voltage, mode of operation and lifetime wear-out.

Environmental uncertainty and uncertainty due to modeling and analysis errors are typically covered by worst-case margins, whereas uncertainty in process is generally treated statistically.

Taxonomy of Process Variations

As process geometries continue to shrink, the ability to control critical device parameters is becoming increasingly difficult and significant variations in device length, doping concentrations and oxide thicknesses have resulted [9]. These process variations pose a significant problem for timing yield prediction and require that static timing analysis models the circuit delay not as a deterministic value, but as a random variable.

Process variations can be either systematic or random.

  • Systematic variation: systematic variations are deterministic in nature and are caused by the structure of a particular gate and its topological environment. They are the component of variation that can be attributed to layout or manufacturing-equipment effects, and they generally show spatially correlated behavior.

  • Random variation: Random or non-systematic variations are unpredictable in nature and include random variations in the device length, discrete doping fluctuations and oxide thickness variations. Random variations cannot be attributed to a specific repeatable governing principle. The radius of this variation is comparable to the sizes of individual devices, so each device can vary independently.

Process variations can also be classified as follows:

  • Inter-die or die-to-die variation: inter-chip variations occur from one die to the next, meaning that the same device has different features on different dice of one wafer, from wafer to wafer and from wafer lot to wafer lot. Die-to-die variations have a variation radius larger than the die size, including within-wafer, wafer-to-wafer, lot-to-lot and fab-to-fab variations [12].

  • Intra-die or within-die variation: Intra-die variations are the variations in device features that are present within a single chip, meaning that a device feature varies between different locations on the same die. Intra-chip variations exhibit spatial correlations and structural correlations.


  • Front-end variation: Front-end variations mainly refer to the variations present at the transistor level. The primary components of the front end variations entail transistor gate length and gate width, gate oxide thickness, and doping related variations. These physical variations cause changes in the electrical characteristics of the transistors which eventually lead to the variability in the circuit performance.

  • Back-end variation: Back-end variations refer to the variations on various levels of interconnecting metal and dielectric layers used to connect numerous devices to form the required logic gates.
In practice, device features vary among the devices on a chip, and the likelihood that all devices have a worst-case feature is extremely small. With increasing awareness of process variation, a number of techniques have been developed that model random delay variations and perform STA. These can be classified into full-chip analysis and path-based analysis approaches.


Full Chip Analysis

Full-chip analysis models the delay of a circuit as a random variable and endeavors to compute its probability distribution. The proposed methods are heuristic in nature and have a very high worst-case computational complexity. They are also based on very simple delay models, in which the dependence of gate delay on slope variation at the gate's input and load variation at its output is not modeled. When run time and accuracy are considered, full-chip STA is not yet practical for industrial designs.


Path Based STA


Path based STA provides statistical information on a path-by-path basis. It accounts for intra-die process variations and hence eliminates the pessimism in deterministic timing analysis, based on case files. It is a more accurate measure of which paths are critical under process variability, allowing more correct optimization of the circuit. This approach does not include the load dependence of the gate delay due to variability of fan out gates and does not address spatial correlations of intra-die variability.

To compute the intra-die path delay component of process variability, first the sensitivity of gate delay, output slope and input load with respect to slope, output load and device length are computed. Finally, when considering sequential circuits, the delay variation in the buffered clock tree must be considered.

In general, the fully correlated assumptions will under-estimate the variation in the arrival times at the leaf nodes of the clock tree which will tend to overestimate circuit performance.


References

[1] http://www.ecs.umass.edu/ece/vspgroup/burleson/courses/558/558%20L01.pdf
[2] David Blaauw, Kaviraj Chopra, Ashish Srivastava and Lou Scheffer, “Statistical Timing Analysis: From Basic Principles to State of the Art,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (T-CAD), invited review article, to appear.
[3] Andrew B. Kahng, Bao Liu and Xu Xu, “Statistical Timing Analysis in the Presence of Signal-Integrity Effects,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 22, no. 10, Oct. 2007.
[4] http://eetimes.com/news/design/showArticle.jhtml?articleID=163703301
[5] Jinjun Xiong, Vladimir Zolotov, Natesan Venkateswaran and Chandu Visweswariah, “Criticality Computation in Parameterized Statistical Timing,” DAC 2006, pp. 63-68.
[6] http://www.cdnusers.org/Interviewsstastratosphere/tabid/418/Default.aspx
[7] http://www.edadesignline.com/showArticle.jhtml;jsessionid=1ISIZARO0KMGMQSNDLOSKH0CJUNN2J
[8] A. Nardi, E. Tuncer, S. Naidu, A. Antonau, S. Gradinaru, T. Lin and J. Song, “Use of Statistical Timing Analysis on Real Designs,” Proceedings of the IEEE Design, Automation & Test in Europe Conference & Exhibition, pp. 1-6, April 2007.
[9] A. Agarwal, D. Blaauw, V. Zolotov, S. Sundareswaran, M. Zhao, K. Gala and R. Panda, “Statistical Delay Computation Considering Spatial Correlations,” Proceedings of the ASP-DAC 2003, pp. 271-276, Jan 2003.
[10] Aseem Agarwal, David Blaauw and Vladimir Zolotov, “Statistical Timing Analysis for Intra-Die Process Variations with Spatial Correlations,” IEEE Transactions on Computer-Aided Design, pp. 900-907, Nov 2003.
[11] Aseem Agarwal, David Blaauw and Vladimir Zolotov, “Statistical Clock Skew Analysis Considering Intra-Die Process Variations,” IEEE Transactions on Computer-Aided Design, vol. 23, no. 8, pp. 1231-1242, Aug. 2004.
[12] Ayhan Mutlu, Kelvin J. Le, Mustafa Celik, Dar-sun Tsien, Garry Shyu and Long-Ching Yeh, “An Exploratory Study on Statistical Timing Analysis and Parametric Yield Optimization,” Proceedings of the 8th International Symposium on Quality Electronic Design, pp. 677-684, 2007.
[13] Robert B. Hitchcock, Sr., Gordon L. Smith and David D. Cheng, “Timing Analysis of Computer Hardware,” IBM Journal of Research and Development, vol. 26, no. 1, Jan 1982.

The link below was contributed by Rajneesh. Thanks, Raj.
[14] “Investigation of typical 0.13 μm CMOS technology timing effects in a complex digital system on-chip”, www.diva-portal.org/diva/getDocument?urn_nbn_se_liu_diva-2118-1__fulltext.pdf

Authors
1) Sowmya Yadala, MS in VLSI System Design from MSRSAS, Bangalore. She can be reached at: sowmyayadala@gmail.com
2) Myself!

Backend (Physical Design) Interview Questions and Answers

  • Below is the sequence of questions asked of a physical design engineer.


In which field are you interested?

  • The answer to this question depends on your interest, your expertise and the requirement for which you have been interviewed.
  • Well... this candidate's answer was: low power design

Can you talk about low power techniques?
How are low power design and the latest 90 nm/65 nm technologies related?
  • Refer here and browse for the different low power techniques.

Do you know about the input-vector-controlled method of leakage reduction?
  • The leakage current of a gate also depends on its inputs. Hence, find the set of inputs that gives the least leakage; applying this minimum-leakage vector while the circuit is in standby mode decreases its leakage current. This method is known as the input-vector-controlled method of leakage reduction. For example, a two-input NAND gate typically leaks least with both inputs at 0, because the stacked off nMOS transistors raise the effective threshold (the stack effect). A toy search sketch follows.
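A toy Tcl sketch of the minimum-leakage-vector search (the per-state leakage numbers are illustrative placeholders, not characterized data):

    # enumerate the input states of a 2-input gate and pick the least leaky one
    array set leak {00 10 01 25 10 40 11 30}    ;# input state -> leakage in nA (made-up)
    set best ""
    set bestLeak 1e9
    foreach v {00 01 10 11} {
        if {$leak($v) < $bestLeak} { set bestLeak $leak($v); set best $v }
    }
    puts "minimum leakage vector: $best ($bestLeak nA)"

In a real flow the per-vector leakage comes from library characterization or SPICE, and the chosen vector is forced onto the inputs by standby control logic.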

How can you reduce dynamic power?
  • Reduce switching activity by designing good RTL
  • Clock gating (a dc_shell sketch follows this list)
  • Architectural improvements
  • Reduce the supply voltage
  • Use multiple voltage domains (multi-Vdd)
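A minimal dc_shell sketch of enabling clock gating during synthesis (the style options shown are illustrative assumptions, not values from the text):

    # latch-based integrated clock gating; gate only registers at least 4 bits wide
    set_clock_gating_style -sequential_cell latch -minimum_bitwidth 4
    compile_ultra -gate_clock    ;# Power Compiler inserts ICG cells on qualifying registers
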
What are the vectors of dynamic power?
  • Voltage and Current

How will you do power planning?
  • Refer here for power planning.

If you have both IR drop and congestion, how will you fix them?
  • Spread macros
  • Spread standard cells
  • Increase strap width
  • Increase the number of straps
  • Use proper blockages

Are increasing the power line width and providing more straps the only solutions to IR drop?
  • Spread macros
  • Spread standard cells
  • Use proper blockages

In a reg-to-reg path with a setup problem, where will you insert a buffer: near the launching flop or near the capture flop? Why?
  • (Buffers are inserted to fix fanout violations, and in doing so they can reduce setup violations; otherwise we first try to fix setup violations by sizing cells. For this question, assume you must insert a buffer!)
  • Near the capture flop.
  • Because there may be other paths passing through, or originating from, the flop nearer the launch flop, buffer insertion there may affect those paths as well; it may improve or degrade all of them. If all those paths have violations, then you may insert the buffer nearer the launch flop, provided it improves the slack.

How will you decide best floorplan?
  • Refer here for floor planning.

What is the most challenging task you handled?
What is the most challenging job in a P&R flow?
  • It may be power planning, because you found excessive IR drop
  • It may be a low power target, because you had too much dynamic and leakage power
  • It may be macro placement, because a macro had many connections to standard cells or other macros
  • It may be CTS, because you needed to handle multiple clocks and clock domain crossings
  • It may be timing, because sizing cells in the ECO flow was not meeting timing
  • It may be library preparation, because you found some inconsistency in the libraries
  • It may be DRC, because you faced thousands of violations

How will you synthesize a clock tree?
  • Single clock: normal synthesis and optimization
  • Multiple clocks: synthesize each clock separately
  • Multiple clocks with domain crossings: synthesize each clock separately and balance the skew

How many clocks were there in this project?
  • This is specific to your project
  • The more clocks, the more challenging!

How did you handle all those clocks?
  • Multiple clocks --> synthesize separately --> balance the skew --> optimize the clock tree

Do they come from separate external sources or from a PLL?
  • If they come from separate clock sources (i.e. asynchronous, from different pads or pins), then balancing the skew between these clock sources becomes challenging.
  • If they come from a PLL (i.e. synchronous), then skew balancing is comparatively easy.

Why are buffers used in the clock tree?
  • To balance the skew (i.e. flop-to-flop delay)

What is crosstalk?
  • Switching of a signal on one net can interfere with a neighbouring net through cross-coupling capacitance. This effect is known as crosstalk. Crosstalk may lead to setup or hold violations.

How can you avoid crosstalk?
  • Double spacing => more spacing => less coupling capacitance => less crosstalk
  • Multiple vias => less resistance => less RC delay
  • Shielding => constant cross-coupling capacitance => known value of crosstalk
  • Buffer insertion => boosts the victim's strength

How does shielding avoid the crosstalk problem? What exactly happens there?
  • High-frequency noise (or a glitch) is coupled to VSS (or VDD), since the shield wires are connected to either VDD or VSS.
  • The coupling capacitance to VDD or VSS remains constant.

How does spacing help in reducing crosstalk noise?
  • More spacing between the two conductors => less cross-coupling capacitance => less crosstalk

Why are double spacing and multiple vias used for the clock?
  • Why the clock? Because it is the one signal that changes state regularly and more often than any other signal. If another signal also switches fast, double spacing can be used there too.
  • Double spacing => more separation => less coupling capacitance => less crosstalk
  • Multiple vias => resistances in parallel => less resistance => less RC delay


How can a buffer be used on a victim net to avoid crosstalk?
  • Buffers increase the victim signal's strength and break the net into shorter segments => the victim becomes more tolerant of signals coupled from the aggressor.