
Friday, 18 September 2015

Topics Covered in Static Timing Analysis (STA)

Statistical static timing analysis

Conventional static timing analysis (STA) has been a stock analysis algorithm for the design of digital circuits over the last 30 years. However, in recent years the increased variation in semiconductor devices and interconnect has introduced a number of issues that cannot be handled by traditional (deterministic) STA. This has led to considerable research into statistical static timing analysis, which replaces the normal deterministic timing of gates and interconnects with probability distributions, and gives a distribution of possible circuit outcomes rather than a single outcome.

Limits of conventional STA



STA, while very successful, has a number of limitations:
  • Cannot easily handle within-die correlation, especially if spatial correlation is included.
  • Needs many corners to handle all possible cases.
  • If random variations are significant, the margins needed to stay conservative in every case become so pessimistic that the resulting products are not competitive.
  • Changes made to address various correlation problems, such as CPPR (Common Path Pessimism Removal), make the basic algorithm slower than linear time, or non-incremental, or both.
SSTA attacks these limitations more or less directly. First, SSTA uses sensitivities to find correlations among delays. Then it uses these correlations when computing how to add statistical distributions of delays.
Interestingly, there is no technical reason why deterministic STA could not be enhanced to handle correlation and sensitivities, by keeping a vector of sensitivities with each value as SSTA does. Historically, this seemed like a large burden to add to STA, whereas it was clearly needed for SSTA, so no one complained. See some of the criticism of SSTA below, where this alternative is proposed.
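The vector-of-sensitivities idea above can be sketched with a toy first-order "canonical" delay model, a common SSTA formulation: delay = mean + sum of sensitivities to shared variation sources + an independent random term. All class and variable names here are illustrative, not any tool's API.

```python
import math

class CanonicalDelay:
    def __init__(self, mean, sens, rand):
        self.mean = mean   # nominal delay
        self.sens = sens   # sensitivities to shared global sources X_1..X_n
        self.rand = rand   # coefficient of the independent random term

    def add(self, other):
        # Adding two delays in series: means add, shared sensitivities add
        # term-by-term (this is what captures correlation), and independent
        # random parts add in quadrature (root-sum-square).
        return CanonicalDelay(
            self.mean + other.mean,
            [a + b for a, b in zip(self.sens, other.sens)],
            math.hypot(self.rand, other.rand),
        )

    def sigma(self):
        # Standard deviation, assuming unit-variance, independent X_i and R.
        return math.sqrt(sum(s * s for s in self.sens) + self.rand ** 2)

# Two gates that both speed up or slow down with the same process parameter:
g1 = CanonicalDelay(20.0, [2.0], 1.0)
g2 = CanonicalDelay(20.0, [2.0], 1.0)
path = g1.add(g2)
print(path.mean, round(path.sigma(), 3))  # 40.0 4.243
```

Because the shared sensitivities add linearly while the random parts add in quadrature, the correlated path is wider (sigma = sqrt(18) ≈ 4.24) than two fully independent gates would be (sqrt(10) ≈ 3.16).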

Methods of SSTA



There are two main categories of SSTA algorithms: path-based and block-based methods.
A path-based algorithm[1] sums gate and wire delays on specific paths. The statistical calculation is simple, but the paths of interest must be identified before running the analysis. There is the potential that other relevant paths are never analyzed, so path selection is important.
A block-based algorithm[2] generates the arrival (and required) times for each node, working forward (and backward) from the clocked elements. The advantage is completeness, with no need for path selection. The biggest problem is that a statistical max (or min) operation that also considers correlation is needed, which is a hard technical problem.
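Why the correlated statistical max is hard can be seen with a small Monte Carlo sketch (the distributions and numbers below are invented for illustration): the mean of the max of two arrival times depends strongly on how correlated they are, so an analysis that ignores correlation gets it wrong.

```python
import random
import statistics

random.seed(0)
N = 50_000

def sample_max(rho):
    # Two arrival times, each ~N(100, 5^2), with correlation rho built
    # from a shared Gaussian component.
    vals = []
    for _ in range(N):
        shared = random.gauss(0, 1)
        a = 100 + 5 * (rho ** 0.5 * shared + (1 - rho) ** 0.5 * random.gauss(0, 1))
        b = 100 + 5 * (rho ** 0.5 * shared + (1 - rho) ** 0.5 * random.gauss(0, 1))
        vals.append(max(a, b))
    return statistics.mean(vals)

# Independent arrivals push the mean of the max well above 100; fully
# correlated arrivals leave it near 100. Treating correlated paths as
# independent therefore overestimates the worst arrival time.
print(sample_max(0.0), sample_max(1.0))
```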
There are SSTA cell characterization tools that are now available such as Altos Design Automation's Variety tool.

References

https://en.wikipedia.org/wiki/Statistical_static_timing_analysis 

Saturday, 22 August 2015

Advanced On-Chip Variation

What is Advanced OCV -



AOCV uses intelligent techniques for context specific derating instead of a single global derate value, thus reducing the excessive design margins and leading to fewer timing violations. This represents a more realistic and practical method of margining, alleviating the concerns of overdesign, reduced design performance, and longer timing closure cycles.

Advanced OCV determines derate values as a function of logic depth and/or cell and net location. These two variables provide further granularity to the margining methodology by determining how much a specific path in a design is impacted by process variation.

There are two kinds of variations.
1) Random Variation
2) Systematic Variation

Random Variation- 
Random variation is proportional to the logic depth of each path being analyzed.
The random component of variation occurs from lot-to-lot, wafer-to-wafer, on-die and die-to-die. Examples of random variation are variations in gate-oxide thickness, implant doses, and metal or dielectric thickness.


Systematic Variation-
Systematic variation is proportional to the cell location of the path being analyzed.

The systematic component of variation is predicted from the location on the wafer or the nature of the surrounding patterns. These variations relate to proximity effects, density effects, and the relative distance of devices. Examples of systematic variation are variations in gate length or width and interconnect width. 



Take the example of random variation. Given the buffer chain shown in Figure 1, with a nominal cell delay of 20, the nominal path delay at stage N is N * 20. In a traditional OCV approach, timing derates are applied to scale the path delay by a fixed percentage:

set_timing_derate -late 1.2
set_timing_derate -early 0.8




Figure 1: Depth-Based Statistical Analysis
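As a quick arithmetic check of the flat derating just described, using the nominal cell delay of 20 from Figure 1 and the 1.2/0.8 derates from the set_timing_derate commands (the function name is just for illustration):

```python
NOMINAL = 20.0  # nominal cell delay from Figure 1

def flat_ocv_path_delay(stages, derate):
    # Flat OCV scales the whole path by one fixed factor, regardless of
    # how deep the path is.
    return stages * NOMINAL * derate

print(flat_ocv_path_delay(5, 1.2))  # late:  5 * 20 * 1.2 = 120.0
print(flat_ocv_path_delay(5, 0.8))  # early: 5 * 20 * 0.8 = 80.0
```

The same 20% margin is applied whether the path has 2 stages or 50, which is exactly the pessimism that depth-based Advanced OCV removes.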

Statistical analysis shows that random variation is smaller for deeper timing paths, because not all cells are simultaneously fast or slow. Using statistical HSPICE models, Monte Carlo analysis can be performed to measure the delay variation accurately at each stage. Advanced OCV derate factors can then be computed as a function of cell depth to apply accurate, less pessimistic margins to the path.
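A hedged sketch of that Monte Carlo observation, using an assumed 10% per-stage sigma rather than real HSPICE data: when per-stage random variation is independent, it partially cancels along the path, so the derate needed to cover mean + k*sigma shrinks roughly as 1/sqrt(depth).

```python
import random
import statistics

random.seed(1)
NOMINAL = 20.0
SIGMA = 0.10 * NOMINAL   # assumed 10% per-stage variation (illustrative)
TRIALS = 20_000

def depth_derate(depth, k=3.0):
    # Late derate that covers mean + k*sigma of the sampled path delay,
    # normalized to the nominal path delay depth * NOMINAL.
    delays = [
        sum(random.gauss(NOMINAL, SIGMA) for _ in range(depth))
        for _ in range(TRIALS)
    ]
    mu = statistics.mean(delays)
    sd = statistics.stdev(delays)
    return (mu + k * sd) / (depth * NOMINAL)

# Deeper paths need a smaller late derate:
print(round(depth_derate(1), 3), round(depth_derate(16), 3))
```

For depth 1 the derate is about 1.3 (mean 20, 3-sigma 6), while for depth 16 the path sigma grows only as sqrt(16) = 4x, so the derate drops to roughly 1.075.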



Figure 2a shows an example of how PrimeTime Advanced OCV determines the path depth for both launch and capture. These values index the derate table to select the appropriate derate values.

                                  Fig 2a-Depth Based Advanced OCV




The effect of systematic variation is that paths composed of cells in close proximity exhibit less variation relative to one another. Using silicon data from test chips, Advanced OCV derate factors based on relative cell location are applied to further improve accuracy and reduce pessimism on the path. Advanced OCV computes the length of the diagonal of the bounding box, as shown in Figure 2b, to select the appropriate derate value from the table.


Fig 2b - Distance Based Advanced OCV
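A minimal sketch of that distance-based lookup: compute the diagonal of the bounding box of the path's cell locations, then pick a derate from a distance-indexed table. The table values below are invented for illustration, not silicon-derived numbers.

```python
import math

# (distance_threshold_um, late_derate) pairs, sorted by distance.
# Values are illustrative only.
DERATE_TABLE = [(100, 1.05), (500, 1.08), (1000, 1.10), (float("inf"), 1.12)]

def bounding_box_diagonal(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return math.hypot(max(xs) - min(xs), max(ys) - min(ys))

def distance_derate(points):
    d = bounding_box_diagonal(points)
    for threshold, derate in DERATE_TABLE:
        if d <= threshold:
            return derate

# Cells placed close together get the smaller (less pessimistic) derate:
close = [(0, 0), (30, 40)]      # diagonal = 50 um
spread = [(0, 0), (600, 800)]   # diagonal = 1000 um
print(distance_derate(close), distance_derate(spread))  # 1.05 1.1
```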







PrimeTime Advanced OCV Flow -
PrimeTime internally computes depth and distance metrics for every cell arc and net arc in the design. It picks the conservative values of depth and distance, thus bounding the worst-case path through a cell.


Fig-3

Saturday, 8 August 2015

STA

Why is timing analysis important when designing a chip?

STA Introduction


Timing is important because just designing the chip is not enough; we need to know how fast the chip will run, how fast it will interact with other chips, how fast an input reaches an output, and so on. Timing analysis is a method of verifying the timing performance of a design by checking for all possible timing violations across all possible paths.


  • Static timing analysis is a method of validating the timing performance of a design by checking all possible paths for timing violations.
  • Static Timing Analysis (STA) is a method of computing the expected timing of a digital circuit without requiring simulation.
  • STA is an exhaustive method of analyzing, debugging and validating the timing performance of a design.
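To make the "all possible paths, without simulation" point concrete, here is a toy sketch: on a combinational netlist modeled as a DAG, the latest arrival time at every node falls out of a single traversal, with no path enumeration and no input vectors. Gate names and delays are made up for the example.

```python
# gate -> (delay, list of fanin gates); primary inputs have no fanins
CIRCUIT = {
    "in1": (0.0, []),
    "in2": (0.0, []),
    "g1":  (2.0, ["in1", "in2"]),
    "g2":  (3.0, ["in1"]),
    "out": (1.0, ["g1", "g2"]),
}

def arrival_times(circuit):
    arr = {}
    def visit(node):
        if node in arr:
            return arr[node]
        delay, fanins = circuit[node]
        # Worst (latest) arrival over all fanins, plus this gate's delay.
        arr[node] = delay + max((visit(f) for f in fanins), default=0.0)
        return arr[node]
    for node in circuit:
        visit(node)
    return arr

print(arrival_times(CIRCUIT)["out"])  # max(2.0, 3.0) + 1.0 = 4.0
```

Every path through the circuit is covered implicitly, which is why STA is exhaustive where vector-based simulation is not.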

Major Tool Used

PrimeTime (from Synopsys)

Input Required for STA

1. Netlist
2. .lib for standard cells
3. .lib for hard macros
4. SPEF/SDF files
5. Constraints files (.sdc)


Advanced Timing Analysis

  • Analysis Modes
  • Data to Data Checks
  • Case Analysis
  • Multiple Clocks per Register
  • Minimum Pulse Width Checks
  • Derived Clocks
  • Clock Gating Checks
  • Netlist Editing
  • report_clock_timing
  • Clock Reconvergence Pessimism
  • Worst-Arrival Slew Propagation
  • Debugging Delay Calculation

PrimeTime Timing Models Support

PrimeTime offers the following timing models to address STA needs for IP, large hierarchical designs, and custom design:

  1. Quick Timing Model (QTM)
  2. Extracted Timing Model (ETM)
  3. Interface Logic Model (ILM)
  4. Stamp Model