Saturday, 26 December 2015

Capacitive loading & its effect on slew rate

By definition, the slew rate of a circuit is the rate at which it can charge and discharge capacitance. This capacitance may be an external load capacitor CL or the gate capacitances Cg of the transistors connected to the circuit's output.

During switching, a digital circuit must charge or discharge CL or Cg quickly, and this charging rate depends on the output current the circuit can supply.

Capacitive loading occurs when this output current is insufficient to drive the load capacitance CL and the one or more gates connected to the circuit. As a result, the slew rate of the circuit decreases and the circuit becomes slow (it takes more time to charge the capacitors connected to it).
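To make the idea concrete, here is a tiny Python sketch (all numbers are made up for illustration) of the approximation slew rate ≈ output current / total load capacitance: adding more gate loads or an external CL directly slows the edge.

def slew_rate(i_out_amps, c_load_farads):
    # Approximate slew rate dV/dt = I / C.
    return i_out_amps / c_load_farads

I_OUT = 100e-6   # 100 uA of available drive current (assumed)
CG = 2e-15       # 2 fF per driven gate input (assumed)
VDD = 1.0        # 1 V swing (assumed)

for fanout, cl in [(1, 0.0), (4, 0.0), (4, 20e-15)]:   # last case adds CL = 20 fF
    c_total = fanout * CG + cl
    sr = slew_rate(I_OUT, c_total)
    print(f"fanout={fanout} CL={cl * 1e15:.0f} fF -> "
          f"slew ~ {sr / 1e9:.1f} V/ns, full-swing time ~ {VDD / sr * 1e12:.0f} ps")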

Is it Possible to have Zero skew??

Theoretically it is possible....!
Practically it is impossible....!!

Practically, we can't reduce any delay to zero; delay will always exist. Hence we try to make the skew "equal" (or the same) rather than "zero". With this optimization, all flops get the clock edge with the same delay relative to each other, so we can virtually say they have "zero skew", or that the skew is "balanced".

Chip Level Vs Block level design

·         Chip design has I/O pads; block design has pins.
·         Chip design uses all available metal layers; block design may not use all metal layers.
·         A chip is generally rectangular in shape; blocks can be rectangular or rectilinear.

·         Chip design ends in packaging; block design ends in a macro.

Fall Time

·         Fall time is the difference between the time when the signal crosses a high threshold and the time when the signal crosses the low threshold.
·         The low and high thresholds are fixed voltage levels around the mid voltage level, typically either 10% and 90% or 20% and 80% of the swing. The percent levels are converted to absolute voltage levels at the time of measurement by calculating percentages of the difference between the starting voltage level and the final settled voltage level.
·         For an ideal square wave with 50% duty cycle, the rise time will be 0. For a symmetric triangular wave, this is reduced to just 50%.

·         The rise/fall definition is set on the meter to 10% and 90% based on the linear power in watts. These points translate into the -10 dB and -0.5 dB points in log mode (10 log 0.1 and 10 log 0.9). The rise/fall time values of 10% and 90% are calculated based on an algorithm that looks at the mean power above and below the 50% points of the rise/fall times.

Rise Time

·         Rise time is the difference between the time when the signal crosses a low threshold and the time when the signal crosses the high threshold. It can be expressed in absolute or percent terms.

·         The low and high thresholds are fixed voltage levels around the mid voltage level, typically either 10% and 90% or 20% and 80% of the swing. The percent levels are converted to absolute voltage levels at the time of measurement by calculating percentages of the difference between the starting voltage level and the final settled voltage level, as in the sketch below.
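The following Python sketch (with a hypothetical sampled waveform; it is only an illustration, not how a real tester or STA tool measures it) shows how the percent thresholds are turned into absolute levels and how rise and fall time are then measured between the threshold crossings.

def crossing_time(times, values, level):
    # Linear-interpolated time of the first crossing of 'level'.
    for i in range(1, len(values)):
        v0, v1 = values[i - 1], values[i]
        if (v0 - level) * (v1 - level) <= 0 and v0 != v1:
            return times[i - 1] + (level - v0) * (times[i] - times[i - 1]) / (v1 - v0)
    raise ValueError("level never crossed")

def edge_time(times, values, low_pct=0.10, high_pct=0.90):
    # Convert the percent thresholds to absolute levels using the starting
    # level and the final settled level, then measure between the crossings.
    start, final = values[0], values[-1]
    th_low = start + low_pct * (final - start)
    th_high = start + high_pct * (final - start)
    return abs(crossing_time(times, values, th_high) -
               crossing_time(times, values, th_low))

t = [0, 10, 20, 30, 40, 50]                # ps (hypothetical samples)
rising = [0.0, 0.1, 0.4, 0.7, 0.95, 1.0]   # V, a rising edge
falling = [1.0, 0.9, 0.6, 0.3, 0.05, 0.0]  # V, a falling edge
print(f"rise time (10%-90%) = {edge_time(t, rising):.1f} ps")
print(f"fall time (90%-10%) = {edge_time(t, falling):.1f} ps")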

Latch Vs Flipflops

·         Both latches and flip-flops are circuit elements whose output depends not only on the present inputs, but also on previous inputs and outputs.
·         They are hence both referred to as "sequential" elements.
·         In electronics, a latch is a kind of bistable multivibrator, an electronic circuit which has two stable states and can thereby store one bit of information. Today the word is mainly used for simple transparent storage elements, while slightly more advanced non-transparent (or clocked) devices are described as flip-flops. Informally, as this distinction is quite new, the two words are sometimes used interchangeably.
·         In digital circuits, a flip-flop is a kind of bistable multivibrator, an electronic circuit which has two stable states and is thereby capable of serving as one bit of memory. Today, the term flip-flop has come to generally denote non-transparent (clocked or edge-triggered) devices, while the simpler transparent ones are often referred to as latches.
·         A flip-flop is controlled by (usually) one or two control signals and/or a gate or clock signal.
·         Latches are level sensitive, i.e. the output captures the input when the clock signal is high; so as long as the clock is logic 1, the output can change if the input also changes.
·         Flip-flops are edge sensitive, i.e. a flip-flop stores the input only on a rising or falling edge of the clock (see the behavioural sketch after this list).
·         A positive-level latch is transparent during the positive level of the enable, and it latches the last input value before the enable changes level (i.e. before the enable goes to '0', or before the clock goes to its negative level).
·         A positive-edge flop updates its output only when the clock input changes from the '0' to the '1' state ('1' to '0' for a negative-edge flop).
·         Latches are faster; flip-flops are slower.
·         A latch is sensitive to glitches on its enable pin, whereas a flip-flop is immune to glitches.
·         Latches take fewer gates (less power) to implement than flip-flops.
·         A D flip-flop is built from two latches in a master-slave configuration.
·         A latch may be clocked or clockless, but a flip-flop is always clocked.
·         For a transparent latch, the D-to-Q propagation delay is generally of interest, while for a flop the clock-to-Q delay and the setup and hold times are very important.
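Here is a small behavioural Python sketch (purely illustrative, not a hardware model) of the level-sensitive versus edge-sensitive behaviour described above: the latch output follows D whenever the enable/clock is high, while the flip-flop output changes only at the rising edge.

class DLatch:
    # Level-sensitive: transparent while the enable/clock is high,
    # holds the last value while it is low.
    def __init__(self):
        self.q = 0

    def update(self, d, enable):
        if enable:
            self.q = d
        return self.q


class DFlipFlop:
    # Edge-sensitive: captures D only on the rising edge of the clock.
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def update(self, d, clk):
        if clk == 1 and self._prev_clk == 0:   # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q


latch, flop = DLatch(), DFlipFlop()
# D changes while the clock is still high: the latch follows it,
# the flop keeps the value captured at the rising edge.
for d, clk in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    print(f"d={d} clk={clk}  latch_q={latch.update(d, clk)}  "
          f"flop_q={flop.update(d, clk)}")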

Saturday, 12 December 2015

Common Path Pessimism removal

Timing defines the performance of a chip. If timing constraints are not met, the chip is as good as dead. Any extra pessimism in timing analysis not only requires more time to fix the critical paths but could negatively impact other important parameters such as power and area. In the worst case, it might leave no option but to reduce the functional frequency of the design. On the other hand, optimism in timing analysis might result in silicon failure. Finding a bug in silicon can be a ponderous task, not to mention the monetary and goodwill loss for design companies. It is therefore prudent to remove undue pessimism and optimism from timing analysis.

Clock architectures have become fairly complex for modern SoCs. In synchronous design, clock controls the switching of sequential elements of the design and functionality of logic is ensured through meeting the required setup and hold checks. Timing engineers must remove any undue pessimism/optimism in the calculation of clock path delay because it can be detrimental for the design.

On-chip variation is one of the most important factors that necessitate pessimism introduction in timing analysis. It refers to the intra-die variations that may exist between different cells in different parts of the chip under the same operating condition. These variations may be:
  • Variation in the manufacturing process
  • Variation in the voltage: Due to different IR drops for different cells
  • Variation in the temperature: Due to formation of localized hot-spots on the chip.
Timing engineers model these variations in the form of derates. Applying derates on clock paths is the most popular and accepted way to model these variations. Assuming 10% derates for both early and late paths, a delay X for a cell at an operating point is modeled as 0.9X on the capture clock path and 1.1X on the launch clock path for setup analysis.
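As a quick illustration, here is a minimal Python sketch (with a hypothetical 1 ns cell delay and the 10% derates assumed above) of how the early/late derates are applied for a setup check.

def derated_delays(cell_delay, early_derate=0.90, late_derate=1.10):
    # For setup analysis the launch clock path is derated late and the
    # capture clock path early.
    return cell_delay * late_derate, cell_delay * early_derate

launch, capture = derated_delays(cell_delay=1.0)   # 1.0 ns nominal delay (assumed)
print(f"launch (late) = {launch:.2f} ns, capture (early) = {capture:.2f} ns")
# launch (late) = 1.10 ns, capture (early) = 0.90 ns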

Common clock path pessimism is one of the most common sources of pessimism in the design.

Common path pessimism
Common path pessimism arises when the launching and capturing clocks share a common path. The difference between the max delay and min delay of this common clock path segment is called the common path pessimism. EDA tools take care of this using Common Path Pessimism Removal (CPPR). 

Figure 1: Common Clock Path Pessimism
There are two ways of calculating common path pessimism:
  1. Critical-path based approach (CPPR):
    a) The timing analysis tool finds the top critical paths with CPPR off.
    b) Only these critical paths are re-evaluated considering CPPR for the common clock path.

    While this method offers the advantage of being relatively fast compared to the Exhaustive Approach, it can miss critical paths. Hence for some corner cases, it might lead to an optimistic timing analysis.
  2. Exhaustive approach: This method does an exhaustive CPPR analysis and therefore does not miss any critical path. However, analysis requires more memory and CPU resources compared to those required for the critical-path based method.
Assuming two reg-to-reg timing paths with the same data path delay, the path with the smaller common clock path segment might get missed by the critical-path based approach. Consider the following example:

Figure 2: Case study showing how path-based approach can be optimistic
As evident from Figure 2, path 2 was critical with CPPR off, but with CPPR on, path 1 became more critical.
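Since the actual numbers of Figure 2 are not reproduced here, the following Python sketch uses hypothetical delays (in ns) to show the same effect: the path with the larger common clock segment looks worse with CPPR off, but gets a larger CPPR credit and ends up less critical than the other path.

def setup_slack(period, common_late, common_early, launch_rest, capture_rest,
                data_delay, setup, apply_cppr=False):
    launch_arrival = common_late + launch_rest      # launch clock uses late delays
    capture_arrival = common_early + capture_rest   # capture clock uses early delays
    slack = (period + capture_arrival) - (launch_arrival + data_delay + setup)
    if apply_cppr:
        slack += common_late - common_early         # credit back the common-path pessimism
    return slack

paths = {
    # path1 shares only a small clock segment, path2 a large one (made-up values)
    "path1": dict(common_late=0.06, common_early=0.05,
                  launch_rest=1.00, capture_rest=0.92),
    "path2": dict(common_late=0.50, common_early=0.40,
                  launch_rest=0.56, capture_rest=0.50),
}

for name, p in paths.items():
    no_cppr = setup_slack(period=2.0, data_delay=1.5, setup=0.1, **p)
    with_cppr = setup_slack(period=2.0, data_delay=1.5, setup=0.1,
                            apply_cppr=True, **p)
    print(f"{name}: slack without CPPR = {no_cppr:.2f}, with CPPR = {with_cppr:.2f}")
# Without CPPR path2 looks more critical (0.24 vs 0.31); with CPPR its larger
# credit flips the order and path1 becomes the more critical path (0.32 vs 0.34).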

Timing engineers therefore tend to analyze their design using the exhaustive approach once the design has achieved logic freeze.

Corner Cells

Corner cells are used for pad-ring connectivity. They contain only metal layers, no active layers. These special cells contain metal structures bent at 45 degrees to maintain continuity of the I/O power bus structures around the corners of the pad ring.

Useful Skew

Useful skew: if the clock is skewed intentionally to resolve timing violations, it is called useful skew.

For example, if there is a setup violation in the design, we add some skew along the clock path in order to eliminate the setup violation.


Suppose you have three registers A, B, and C, each receiving the same clock, clkA.
Now, because of the clock latencies of clkA at A and B, the path launched from A and captured at B has a slack of, say, -12, so there is a setup violation at B. There are many ways of fixing a setup violation, such as adding buffers to speed up the data path, but let us assume none of those can be used and the only possible option is to tweak the clock path (quite dangerous). Since setup slack = required time (clock) - arrival time (data), if I increase the required time, the slack violation will go away. So I add delay buffers in the clock path after the point where A taps clkA (not before it), so that only the clock reaching B is delayed.
It is interesting to note that delaying the clock to B also delays the launch of the data path from B to the flop its output drives (assume C). Only if C has enough setup margin to absorb that added delay can you use the skew. This is called useful skew, which fixes violations by taking slack from one path and giving it to another.
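A small numeric sketch of the same idea (all values in ns are hypothetical): delaying the capture clock of B improves the A-to-B setup slack by the added delay and worsens the B-to-C slack by the same amount, so the fix only works if C has margin to give.

def setup_slack(t_clk, capture_clock_delay, t_c2q, t_comb, t_setup):
    # Setup slack = (Tclk + capture clock delay) - (Tc2q + Tcomb + Tsetup),
    # with the launch clock taken as the time reference.
    return (t_clk + capture_clock_delay) - (t_c2q + t_comb + t_setup)

T_CLK, T_C2Q, T_SETUP = 2.0, 0.10, 0.10

# Before useful skew: A->B violates, B->C has plenty of margin.
ab = setup_slack(T_CLK, 0.0, T_C2Q, 2.00, T_SETUP)   # -0.20, a violation
bc = setup_slack(T_CLK, 0.0, T_C2Q, 1.30, T_SETUP)   # +0.50

# Delay the clock to B by 0.25 ns: B captures later (helps A->B) but also
# launches later (hurts B->C by the same amount).
delay_b = 0.25
ab_new, bc_new = ab + delay_b, bc - delay_b

print(f"A->B slack: {ab:.2f} -> {ab_new:.2f}   B->C slack: {bc:.2f} -> {bc_new:.2f}")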




Max Cap Violations

The capacitance on a node is a combination of the fan-out pin capacitance at the output pin and the capacitance of the net. This check ensures that the device does not drive more capacitance than it is characterized for (a small sketch of the check follows the list below).
  1. The violation can be removed by increasing the drive strength of the cell.
  2. By buffering some of the fan-out paths to reduce the capacitance seen by the output pin.
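A minimal sketch (hypothetical pin and wire capacitances) of the check itself: the total load is the sum of the fan-out pin capacitances plus the net capacitance, compared against the max capacitance the driver is characterized for.

def max_cap_check(pin_caps_ff, wire_cap_ff, max_cap_ff):
    # Total load = sum of fan-out pin capacitances + net wire capacitance.
    total = sum(pin_caps_ff) + wire_cap_ff
    return total, total > max_cap_ff

total, violated = max_cap_check(pin_caps_ff=[2.0] * 12,   # 12 fan-out pins, 2 fF each
                                wire_cap_ff=15.0,          # net capacitance
                                max_cap_ff=30.0)           # driver limit (assumed)
print(f"load = {total} fF, max_cap violation = {violated}")
# Upsizing the driver raises the limit; buffering part of the fan-out
# reduces the load this pin sees.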

Max Trans Violation

If a signal takes too long to transition from one logic level to another, a transition (max transition) violation is caused. The violation can be due to the resistance and capacitance of the node (a rough estimate is sketched after this list). It can be fixed:
  1. By upsizing the driver cell.
  2. By decreasing the net length, i.e. moving cells closer together or rerouting long nets.
  3. By adding buffers.
  4. By increasing the width of the route at the violating instance pin. This decreases the resistance of the route and fixes the transition violation.
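As a rough first-order estimate (not what a signoff tool actually uses, which relies on characterized cell and net delay models), the 10%-90% transition of an RC-dominated node is about ln(9)·R·C, so anything that reduces R or C reduces the transition time. The numbers below are hypothetical.

import math

def transition_10_90(r_ohm, c_farad):
    # 10%-90% time of a single-pole RC step response: ln(9) * R * C.
    return math.log(9) * r_ohm * c_farad

r = 2_000       # 2 kOhm of driver plus wire resistance (assumed)
c = 150e-15     # 150 fF of load (assumed)
print(f"transition ~ {transition_10_90(r, c) * 1e12:.0f} ps")   # roughly 660 ps here
# Upsizing the driver lowers R, shortening or widening the route lowers R and C,
# and buffering splits the net - all of which reduce the transition time.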

Thursday, 10 December 2015

Transition Time

Transition delay, or slew, is defined as the time taken by a signal to rise from 10% (or 20%) to 90% (or 80%) of its maximum value. This is known as the “rise time”.

Equations for Setup and Hold Time

Let’s first define the clock-to-Q delay (Tclock-to-Q). In a positive-edge-triggered flip-flop, the input signal is captured on the positive edge of the clock and the corresponding output is generated after a small delay called Tclock-to-Q. The flip-flop can only do its job correctly if the data at its input does not change for some time before the clock edge (Tsetup) and some time after the clock edge (Thold). Also, the clock signal, which is distributed via the clock tree throughout the design, has its own variability, termed skew.
From Figure 1 below, we derive the equations for setup time and hold time. Figure 1 shows two talking flops, the first being the launching flop and the second, obviously, the capturing flop. We shall derive the setup-time equation for the capturing flop and the hold-time equation for the launching flop. However, the derived equations hold for either of the flops, or for that matter any flop in the design.

Figure 1. Two Talking Flops Scenario
In the diagram above, at time zero FF1 is to process D2 and FF2 is to process D1. The time taken for the data D2 to propagate to FF2, counting from the clock edge at FF1, is Tc2q + Tcomb, and for FF2 to successfully latch it, D2 has to be held stable at the D pin of FF2 for Tsetup before the clock tree delivers the next positive edge of the clock to FF2. Hence, to fulfill the setup time requirement, the following must hold:
Tc2q + Tcomb + Tsetup ≤ Tclk + Tskew ------- (1)
Let’s have a look at the timing diagram below to have a better understanding of the setup and hold time.

Figure 2. Setup and Hold Timing Diagram
Now, to avoid the hold violation at the launching flop, the data should remain stable for some time (Thold) after the clock edge. The equation to be satisfied to avoid hold violation looks somewhat like below:
Tc2q + Tcomb ≥ Thold + Tskew ------- (2)
As seen from the above two equations, it can be easily judged that positive skew is good for setup but bad for hold. The only region where the input can vary is the ‘valid input window’ as shown in Figure 3.
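Plugging hypothetical numbers (in ns) into equations (1) and (2) makes the trade-off explicit; the helper below is just an illustration of the two inequalities, not a real STA calculation.

def min_period(t_c2q, t_comb, t_setup, t_skew):
    # Smallest Tclk that still satisfies equation (1); f_max = 1 / min_period.
    return t_c2q + t_comb + t_setup - t_skew

def hold_ok(t_c2q, t_comb, t_hold, t_skew):
    # Equation (2): Tc2q + Tcomb >= Thold + Tskew
    return t_c2q + t_comb >= t_hold + t_skew

t_c2q, t_comb, t_setup, t_hold, t_skew = 0.05, 0.05, 0.05, 0.05, 0.10
period = min_period(t_c2q, t_comb, t_setup, t_skew)
print(f"min period = {period:.2f} ns, f_max = {1.0 / period:.1f} GHz")
print("hold met:", hold_ok(t_c2q, t_comb, t_hold, t_skew))
# With this positive skew the setup limit improves (min period 0.05 ns) while
# the hold check fails (0.10 < 0.15) - the trade-off noted above.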

Figure 3. Valid Input Window 




Full Custom ASIC Design

Full-custom ASIC design defines all the photolithographic layers of the device. Full-custom design is used for both ASIC design and for standard product design.
The benefits of full-custom design usually include reduced area (and therefore recurring component cost), performance improvements, and also the ability to integrate analog components and other pre-designed — and thus fully verified — components, such as microprocessor cores that form a system-on-chip.
The disadvantages of full-custom design can include increased manufacturing and design time, increased non-recurring engineering costs, more complexity in the computer-aided design (CAD) system, and a much higher skill requirement on the part of the design team.
For digital-only designs, however, "standard-cell" cell libraries, together with modern CAD systems, can offer considerable performance/cost benefits with low risk. Automated layout tools are quick and easy to use and also offer the possibility to "hand-tweak" or manually optimize any performance-limiting aspect of the design.
In full-custom design, the basic logic gates, circuits, and layout are created specifically for the design.

Routing Grid

The routing grid is generally defined to match the pin positions of the standard cells.
Historically, the backend designer had to align this grid to the standard cells; now the tools are generally able to analyze the standard cell pin positions themselves.
The router works on a routing grid, plus a sub-grid that is a fourth of the routing grid pitch, to provide more routing possibilities.

Inputs to the RedHawk tool

Below are the basic inputs to the RedHawk tool:

1. Design Data (milkyway database)
2. Parasitic Information
3. Technology file
4. Timing and power Info (.lib files)

Threshold Voltage



The threshold voltage, commonly abbreviated as Vth or VGS(th), of a field-effect transistor (FET) is the minimum gate-to-source voltage that is needed to create a conducting path between the source and drain terminals.

For an nMOS device at gate-to-source voltages above the threshold voltage (VGS > Vth) but still below saturation (less than "fully on", i.e. VDS < VGS − Vth), the transistor is in its 'linear region', also known as ohmic mode, where it behaves like a voltage-controlled variable resistor.
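For illustration only, here is the textbook long-channel expression for the nMOS drain current in this linear/ohmic region, with hypothetical Vth and transconductance values (a real standard-cell library characterizes delays rather than using this simple model).

def nmos_linear_id(vgs, vds, vth=0.4, k=2e-3):
    # k ~ mu_n * Cox * W / L (A/V^2); both k and vth are made-up values.
    if vgs <= vth:
        return 0.0                      # below threshold: no channel
    assert vds < vgs - vth, "outside the linear region"
    return k * ((vgs - vth) * vds - vds ** 2 / 2)

print(f"Id = {nmos_linear_id(vgs=0.9, vds=0.1) * 1e6:.1f} uA")
# For small Vds the device behaves like a resistor of roughly 1 / (k * (Vgs - Vth)).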

How It Affects Timing

Low-Vt cells: these cells are faster but result in more (leakage) power consumption.
High-Vt cells: these cells are slower than low-Vt cells and consume less power than LVT cells.

Based on the requirement, we mix HVT and LVT cells, since there is always a trade-off between power and speed.

Cloning Vs Buffering

Cloning is where a clock gate (a special gate in the clock tree that switches off the clock signal to a number of flip-flops to save power when they are not needed) is duplicated, so that one clock gate driving, for example, 40 flip-flops can be "cloned" to become 2 clock gates driving 20 flip-flops each.

A buffer is a basic electronic gate that serves to strengthen a signal. It is needed when you wish to drive a signal along a long wire, or when you want to drive a signal to very many receiving pins. A single driving gate can only drive a short length of wire and only a small fanout. Buffering is the insertion of buffers to help drive the signal to bigger loads.

Tuesday, 24 November 2015

Project Challenges

Below are a few challenges you may face during project execution:


  • -It may be power planning- because you found more IR drop
  • -It may be low power target-because you had more dynamic and leakage power
  • -It may be macro placement-because it had more connection with standard cells or macros
  • -It may be CTS-because you needed to handle multiple clocks and clock domain crossings
  • -It may be timing-because sizing cells in ECO flow is not meeting timing
  • -It may be library preparation-because you found some inconsistency in libraries.
  • -It may be DRC-because you faced thousands of violations
Monday, 23 November 2015

    Frequently Asked Question Part 2


    * What is signal integrity? How it affects Timing?
    * What is IR drop? How to avoid IR drop .how it affects timing?
    * What is EM and it effects?
    * What is floor plan and power plan?
    * What are types of routing?
    * What is a grid .why we need and different types of grids?
    * What is core and how u will decide w/h ratio for core?
    * What is effective utilization and chip utilization?
    * What is latency? Give the types?
    * What is LEF?
    * What is DEF?
    * What are the steps involved in designing an optimal pad ring?
    * What are the steps that you have done in the design flow?
    * What are the issues in floor plan?
    * How can you estimate area of block?
    * How much aspect ratio should be kept (or have you kept) and what is the utilization?
    * How to calculate core ring and stripe widths?
    * What if hot spot found in some area of block? How you tackle this?
    * After adding stripes also if you have hot spot what to do?
    * What is threshold voltage? How it affect timing?
    * What is content of lib, lef, sdc?
    * What is meant by 9-track, 12-track standard cells?
    * What is scan chain? What if scan chain not detached and reordered? Is it compulsory?
    * What is setup and hold? Why do they exist? How are setup and hold violations fixed?
    * In a circuit, for reg to reg path ...Tclktoq is 50 ps, Tcombo 50ps, Tsetup 50ps, tskew is 100ps. Then what is the maximum operating frequency?
    * How R and C values are affecting time?
    * How ohm (R), fared (C) is related to second (T)?
    * What is transition? What if transition time is more?
    * What is difference between normal buffer and clock buffer?
    * What is antenna effect? How it is avoided?
    * What is ESD?
    * What is cross talk? How can you avoid?
    * How double spacing will avoid cross talk?
    * What is difference between HFN synthesis and CTS?
    * What is hold problem? How can you avoid hold time violations?
    * For an iteration we have 0.5ns of insertion delay and 0.1 skew and for other iteration 0.29ns insertion delay and 0.25 skew for the same circuit then which one you will select? Why?
    * What is partial floor plan?
    * What parameters (or aspects) differentiate Chip Design & Block level design??
    * How do you place macros in a full chip design?
    * Differentiate between a Hierarchical Design and flat design?
    * Which is more complicated when u have a 48 MHz and 500 MHz clock design?
    * Name few tools which you used for physical verification?
    * What are the input files will you give for primetime correlation?
    * What are the algorithms used while routing? Will it optimize wire length?
    * How will you decide the Pin location in block level design?
    * If the routing congestion exists between two macros, then what will you do?
    * How will you place the macros?
    * How will you decide the die size?
    * If lengthy metal layer is connected to diffusion and poly, then which one will affect by antenna problem?
    * If the full chip design is routed by 7 layer metal, why macros are designed using 5LM instead of using 7LM?
    * In your project what is die size, number of metal layers, technology, foundry, number of clocks?
    * How many macros in your design?
    * What is each macro size and no. of standard cell count?
    * How did u handle the Clock in your design?
    * What are the Input needs for your design?
    * What does the SDC constraint file contain?
    * How did you do power planning?
    * How to find total chip power?
    * How to calculate core ring width, macro ring width and strap or trunk width?
    * How to find number of power pad and IO power pads?
    * What are the problems faced related to timing?
    * How did u resolve the setup and hold problem?
    * If 10,000 or more violations come up in your design, what will you do?
    * In which layer do you prefer for clock routing and why?
    * If your design has a reset pin, does it affect the input pin, the output pin, or both?
    * During power analysis, if you are facing IR drop problem, then how did u avoid?
    * Define antenna problem and how did u resolve these problem?
    * How delays vary with different PVT conditions? Show the graph.
    * Explain the flow of physical design and inputs and outputs for each step in flow.
    * What is cell delay and net delay?
    * What are delay models and what is the difference between them?
    * What is wire load model?
    * What do SDC constraints contain?
    * Why higher metal layers are preferred for Vdd and Vss?
    * What is logic optimization and give some methods of logic optimization.
    * What is the significance of negative slack?
    * How the width of metal and number of straps calculated for power and ground?
    * What is negative slack ? How it affects timing?
    * What is track assignment?
    * What is grided and gridless routing?
    * What is a macro and standard cell?
    * What is congestion?
    * Whether congestion is related to placement or routing?
    * What are clock trees?
    * What are clock tree types?
    * Which layer is used for clock routing and why?
    * What is cloning and buffering?
    * What are placement blockages?
    * How slow and fast transition at inputs effect timing for gates?
    * What is antenna effect?
    * What are DFM issues?
    * What is .lib, LEF, DEF, .tf?
    * What is the difference between synthesis and simulation?
    * What is metal density, metal slotting rule?
    * What is OPC, PSM?
    * Why clock is not synthesized in DC?
    * What are high-Vt and low-Vt cells?
    * What do corner cells contain?
    * What is the difference between core filler cells and metal fillers?
    * How to decide number of pads in chip level design?
    * What are tie-high and tie-low cells and where are they used?

    Sunday, 22 November 2015

    High Fanout Synthesis



    As we all know, the fanout of the clock signal is high. Apart from that, a few other signals in the design, such as reset, clear, and scan enable, also have high fanout.

    Signal nets whose fanout exceeds a specified fanout limit are also known as high fanout nets (HFNs).

    We all know that setting
    set_max_fanout <some number> during synthesis tells the synthesis tool to treat any net whose fanout is more than the max_fanout number as a high fanout net.


    Why do we do this ?

    As we understand, an HFN has a lot of load and therefore a huge capacitance.

    If we try to report timing on such a net, the tool reports very large capacitance violations and huge delays in the timing path.

    So, to avoid these huge delays in the timing path, we set the same net as an HFN.



    Another way to flag an HFN to the synthesis tool: set_ideal_net <net name>


    This way, the synthesis tool treats the specified net as a high fanout net and does not buffer it.

    Unified Power Format

    Files written to this standard annotate an electronic design with the power and power control intent of that design. Elements of that annotation include:
    • Power Supplies: supply nets, supply sets, power states
    • Power Control: power switches
    • Additional Protection: level shifters and isolation
    • Memory retention during times of limited power: retention strategies and supply set power states
    • Refinable descriptions of the potential power applied to the electronic system: power states, transitions, a set of simstate, pg_type and function attributes of nets, and the -update argument to support the progressive refinement of the power intent.

    Saturday, 21 November 2015

    Interface Logic Models

    Interface Logic Model (ILM) is a technique to model blocks in a hierarchical VLSI implementation flow.
    It is a gate-level model of a physical block in which only the connections from the inputs to the first stage of flip-flops, and the connections from the last stage of flip-flops to the outputs, are kept, including those flip-flops and the clock tree driving them. All other internal flip-flop to flip-flop paths are stripped out of the ILM.
    The advantage of ILM is that the entire path (clock-to-clock path) is visible at the top level for interface nets, unlike in the traditional block-based hierarchical implementation flow. This gives better accuracy in the analysis of interface nets at negligible additional memory and runtime overhead.

    Wednesday, 11 November 2015

    NMOS

    N-type metal-oxide-semiconductor logic uses n-type field-effect transistors (MOSFETs) to implement logic gates and other digital circuits. These nMOS transistors operate by creating an inversion layer in a p-type transistor body. This inversion layer, called the n-channel, can conduct electrons between n-type "source" and "drain" terminals. The n-channel is created by applying voltage to the third terminal, called the gate. Like other MOSFETs, nMOS transistors have four modes of operation: cut-off (or subthreshold), triode, saturation (sometimes called active), and velocity saturation.
    The n-type MOSFETs are arranged in a so-called "pull-down network" (PDN) between the logic gate output and negative supply voltage, while a resistor is placed between the logic gate output and the positive supply voltage. The circuit is designed such that if the desired output is low, then the PDN will be active, creating a current path between the negative supply and the output.

    PMOS

    P-type metal-oxide-semiconductor logic uses p-channel metal-oxide-semiconductor field effect transistors (MOSFETs) to implement logic gates and other digital circuits. PMOS transistors operate by creating an inversion layer in an n-type transistor body. This inversion layer, called the p-channel, can conduct holes between p-type "source" and "drain" terminals.
    The p-channel is created by applying voltage to the third terminal, called the gate. Like other MOSFETs, PMOS transistors have four modes of operation: cut-off (or subthreshold), triode, saturation (sometimes called active), and velocity saturation.
    The p-type MOSFETs are arranged in a so-called "pull-up network" (PUN) between the logic gate output and positive supply voltage, while a resistor is placed between the logic gate output and the negative supply voltage. The circuit is designed such that if the desired output is high, then the PUN will be active, creating a current path between the positive supply and the output.
    While PMOS logic is easy to design and manufacture (a MOSFET can be made to operate as a resistor, so the whole circuit can be made with PMOS FETs), it has several shortcomings as well. The worst problem is that there is a direct current (DC) through a PMOS logic gate when the PUN is active, that is, whenever the output is high, which leads to static power dissipation even when the circuit sits idle.
    Also, PMOS circuits are slow to transition from high to low. When transitioning from low to high, the transistors provide low resistance, and the capacitive charge at the output accumulates very quickly (similar to charging a capacitor through a very low resistance). But the resistance between the output and the negative supply rail is much greater, so the high-to-low transition takes longer (similar to discharging a capacitor through a high resistance). Using a resistor of lower value will speed up the process but also increases static power dissipation.
    Additionally, the asymmetric input logic levels make PMOS circuits susceptible to noise.

    CMOS Inverter

    CMOS circuits are constructed in such a way that all PMOS transistors must have either an input from the voltage source or from another PMOS transistor. Similarly, all NMOS transistors must have either an input from ground or from another NMOS transistor. The composition of a PMOS transistor creates low resistance between its source and drain contacts when a low gate voltage is applied and high resistance when a high gate voltage is applied. On the other hand, the composition of an NMOS transistor creates high resistance between source and drain when a low gate voltage is applied and low resistance when a high gate voltage is applied. CMOS accomplishes current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates will cause the nMOSFET to conduct and the pMOSFET to not conduct, while a low voltage on the gates causes the reverse. This arrangement greatly reduces power consumption and heat generation. However, during the switching time, both MOSFETs conduct briefly as the gate voltage goes from one state to another. This induces a brief spike in power consumption and becomes a serious issue at high frequencies.
    The image below shows what happens when an input is connected to both a PMOS transistor (top of diagram) and an NMOS transistor (bottom of diagram). When the voltage of input A is low, the NMOS transistor's channel is in a high resistance state. This limits the current that can flow from Q to ground. The PMOS transistor's channel is in a low resistance state and much more current can flow from the supply to the output. Because the resistance between the supply voltage and Q is low, the voltage drop between the supply voltage and Q due to a current drawn from Q is small. The output therefore registers a high voltage.
    On the other hand, when the voltage of input A is high, the PMOS transistor is in an OFF (high resistance) state, so it limits the current flowing from the positive supply to the output, while the NMOS transistor is in an ON (low resistance) state, allowing the output to drain to ground. Because the resistance between Q and ground is low, the voltage drop due to a current drawn into Q placing Q above ground is small. This low drop results in the output registering a low voltage.
    In short, the outputs of the PMOS and NMOS transistors are complementary such that when the input is low, the output is high, and when the input is high, the output is low. Because of this behavior of input and output, the CMOS circuit's output is the inverse of the input.
    The power supplies for CMOS are called VDD and VSS, or VCC and Ground(GND) depending on the manufacturer. VDD and VSS are carryovers from conventional MOS circuits and stand for the drain and source supplies. These do not apply directly to CMOS, since both supplies are really source supplies. VCC and Ground are carryovers from TTL logic and that nomenclature has been retained with the introduction of the 54C/74C line of CMOS.

    Start Point and End Point of STA

    Any given timing path needs to begin at a valid startpoint and end at a valid endpoint.
    The following are valid startpoints:

    • A primary input port.
    • Launching pin of a sequential cell (CK pin of a flop or EN pin of a latch).
    The following are valid endpoints:
    • A primary output port.
    • Capturing pin of a sequential cell (D pin of a flop or latch, or the synchronous reset/set pin of a reset/set flop).
    As evident from the start and end point types given above, in any design with at least two sequential cells there can be 4 types of timing paths.
    • Input to Register - Starting on a primary input port and ending on a capturing pin of a register
    • Register to output - Starting on a launching pin of a flop and ending on an output port.
    • Register to Register - Starting on a launching pin of a flop and ending on a capturing pin of a register
    • Input to Output - Starting on a primary input port and ending on an output port.
    Any valid timing path should be one of the above types.
    Tools also report several other path types, such as default paths and paths with macro pins as startpoint or endpoint, which add one more startpoint type and one more endpoint type and the corresponding combinations of timing paths. But a macro can in general be considered as a register with many endpoints (input pins) and possibly many startpoints (clock pins).

    Tuesday, 10 November 2015

    False Path

    A false path, as its name denotes, is a timing path that is not required to meet its timing constraints for the design to function properly.

    In general, all timing paths that are launched from a valid startpoint and captured at a valid endpoint impose setup and hold constraints. If the functionality of the design is such that the timing constraints need not be met on a path, it is a false path.

    Every false path needs to be declared to the STA tool, because the STA tool considers every path that originates at a valid startpoint and ends at a valid endpoint as a valid timing path whose constraints must be met.

    A situation where the design results in false paths:
    There are two 2-input muxes in a design, both of which are selected by a common select line.

    If you consider the signal B to be launched by a flop and Y to be captured by another flop, in no situation will there be a valid functional path between B and Y.
    Reason: Assume each mux selects its top input when the select line is 0 and its bottom input when the select line is 1. When the select line is 0, B is selected by mux 1, but the mux 1 output (D) will not be selected by mux 2. The static timing analysis tool, however, will not consider this an invalid path; it just looks for valid timing arcs, and there is a timing arc from the B input to the output of the first mux and then to the Y signal through the second mux.
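    The following Python sketch (signal names B, D and Y and the select polarity follow the description above; the other mux inputs A and C are assumed here just for illustration) confirms functionally that toggling B can never change Y, which is exactly why the B-to-Y timing path is false.

def mux2(top, bottom, sel):
    # Selects the top input when sel is 0 and the bottom input when sel is 1
    # (the polarity assumed in the text above).
    return bottom if sel else top

def y_output(a, b, c, sel):
    d = mux2(top=b, bottom=a, sel=sel)     # mux 1: B is picked only when sel = 0
    return mux2(top=c, bottom=d, sel=sel)  # mux 2: D is picked only when sel = 1

# Toggling B never changes Y for either value of the shared select line,
# so the B -> Y arc is never functionally sensitised: it is a false path.
for sel in (0, 1):
    for a in (0, 1):
        for c in (0, 1):
            assert y_output(a, b=0, c=c, sel=sel) == y_output(a, b=1, c=c, sel=sel)
print("B never propagates to Y -> false path")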

    This is just a simple example of a false path. There can be many paths of this type in a large design, but we cannot find each and every path and declare it false. If this type of path is found in clock muxing logic, then declaring a false path through B and Y would be useful (set_disable_timing might be required for paths in clock logic). If this type of path is found on a critical path of the design, then adding a false path will be very useful.

    When functional ports are multiplexed with test logic, the input and output delays added on these ports can also show up as many false paths.

    If possible (when timing and area are not critical), optimizing a false path does no harm, whereas declaring a functionally necessary path as a false path by mistake is harmful.