Thursday 29 October 2015

Glue Logic

Glue logic means non-regular logic: typically a group of simple gates making combinational (static) decisions, often operating asynchronously. Regular logic such as memory cells is not called "glue".
Glue logic consists of simple gates forming logical decisions. The gates can also be combined into RS flip-flops or D flip-flops.

In other words, the term glue logic refers to the relatively small amount of simple logic that is used to connect ("glue") and interface between larger logic blocks, functions, or devices.

This sort of logic is typically implemented in CPLDs, FPGAs or gate arrays.
In earlier times, only designs of about 1000 gates were called "glue logic". It normally supports more highly integrated devices with additional functions (data transfer, buffering, reading sensors, etc.)

Generally it is recommended to avoid glue logic as much as possible, because it can cause problems and add design complexity later in the chip design flow. You can sometimes minimize this logic by merging it inside another module.

Thursday 22 October 2015

Physical Design Course for beginners

Detailed list of topics. Note: these topics will be updated from time to time as required; any feedback and suggestions are welcome (please write in the comment section or send us mail to


Floorplanning Topics


Sign Off Check


Tools Guide and References

Interview Questions

Physical Design

Career In Physical Design


Recovery and Removal Time

Recovery and Removal Time

These are timing checks for asynchronous signals similar to the setup and hold checks.
Recovery time is the minimum amount of time required between the release of an asynchronous signal from its active state and the next active clock edge.
Example: The time between the reset and clock transitions for a flip-flop. If the active edge occurs too soon after the release of the reset, the state of the flip-flop can be unknown.
Removal time specifies the minimum amount of time between an active clock edge and the release of an asynchronous control signal.
The following diagram illustrates recovery and removal times for an active-low reset signal (RESET_N) and a positive-edge-triggered clock (CLOCK).
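As a toy illustration (the time values and function names are assumptions, not output from any STA tool), the two checks above can be written as:

```python
# Hedged sketch of recovery/removal checks for an asynchronous reset
# against an active clock edge. Times are in ns; names are illustrative.

def recovery_slack(clk_edge, reset_release, t_recovery):
    """Recovery: reset must be released at least t_recovery before the next active clock edge."""
    return (clk_edge - reset_release) - t_recovery

def removal_slack(clk_edge, reset_release, t_removal):
    """Removal: reset must stay asserted at least t_removal after the active clock edge."""
    return (reset_release - clk_edge) - t_removal

# RESET_N released at t = 8.8 ns, next clock edge at t = 10 ns,
# recovery requirement 0.5 ns -> slack is positive, so the check is met
print(recovery_slack(10.0, 8.8, 0.5) >= 0)  # True
```

A negative slack from either function would correspond to the unknown-flip-flop-state hazard described in the example above.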

Clock Skew

Clock Skew: 

It is the difference in arrival times of the clock edge at two adjacent (launch and capture) flip-flops.

Positive skew

If the capture clock arrives later than the launch clock, it is called positive skew.

Negative skew

If the capture clock arrives earlier than the launch clock, it is called negative skew.

Local skew-

 It is the difference in arrival of clock at two consecutive pins of a sequential element. 

Global skew-

It is defined as the difference between the maximum and the minimum insertion delay over all flops.
Equivalently, it is the difference between the longest and shortest clock path delays reaching any two sequential elements.

Boundary skew-

It is defined as the difference between max insertion delay and the min insertion delay of boundary flops.

Useful skew-

If the clock is skewed intentionally to resolve timing violations, it is called useful skew.
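As an illustrative sketch (arrival times and insertion delays in ns are made-up values, not tool output), the skew definitions above can be expressed as:

```python
# Hedged sketch of the skew definitions above; all values are illustrative.

def skew(launch_arrival, capture_arrival):
    """Skew between a launch/capture flop pair (capture minus launch)."""
    return capture_arrival - launch_arrival

def classify(s):
    """Positive skew: capture clock arrives later than launch clock."""
    if s > 0:
        return "positive"
    if s < 0:
        return "negative"
    return "zero"

def global_skew(insertion_delays):
    """Global skew: max insertion delay minus min insertion delay."""
    return max(insertion_delays) - min(insertion_delays)

print(classify(skew(1.0, 1.3)))  # capture arrives 0.3 ns late -> "positive"
print(global_skew([0.8, 1.1, 0.95]))
```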

Clock Latencies

Source latency: 

The delay from the clock origin point to the clock definition point in the design.
It is the insertion delay external to the circuit being timed. It applies only to primary clocks.

Network Latency: 

The delay from the clock definition point to the clock pin of the register.
It is the internal delay for the circuit which we are timing (the delay of the clock tree from the source of the clock to all of the clock sinks).

I/O latency

If a flop of the block communicates with another flop outside the block, the network clock latency of that flop becomes the I/O latency of the block.



Slack

  • It is the difference between the desired arrival time and the actual arrival time of a signal. 
  • Slack determines, for a timing path, whether the design is working at the desired frequency. 
  • Positive slack indicates that the design is meeting timing and can still be improved. 
  • Zero slack means that the design is working exactly at the desired frequency, with no margin. 
  • Negative slack means the design has not achieved the required timing at the specified frequency; it indicates a timing violation, so slack should always be kept non-negative.

Required time: 

The time by which data is required to arrive at some internal node of the design. Designers specify this value by setting constraints.

Arrival Time: 

The time at which data actually arrives at the internal node. It incorporates all the net and logic delays between the reference input point and the destination node. 

 Setup Slack = Required time - Arrival time
 Hold slack = Arrival time - Required time 

 Setup Slack: 

Amount of margin by which setup requirements are met. 

TCL = Total combinational delay in a pipe-lined stage 

TRC = RC delay of interconnects 

TC-Q = Clock to output delay 

Tarrival = Arrival time (at node) 

Tcycle,min = Minimum Achievable clock cycle 

To meet the setup requirements the following equation must be satisfied. 

Tslack,setup = Tcycle - Tarrival - Tsetup (for all paths) 

Here Tarrival= TCL + TRC + TC-Q
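Plugging illustrative numbers into the setup-slack equation above (all ns values are made up, not from a real library):

```python
# Numeric sketch of: Tslack,setup = Tcycle - Tarrival - Tsetup,
# where Tarrival = TCL + TRC + TC-Q. All values (ns) are illustrative.

T_cq = 0.2      # clock-to-output delay (TC-Q)
T_cl = 1.5      # combinational logic delay (TCL)
T_rc = 0.3      # interconnect RC delay (TRC)
T_setup = 0.1   # setup requirement of the capture flop
T_cycle = 2.5   # clock period

T_arrival = T_cq + T_cl + T_rc
setup_slack = T_cycle - T_arrival - T_setup
print(round(setup_slack, 3))  # positive -> setup is met
```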

Hold Slack: 

Amount of margin by which hold time requirements are met. 

Tarrival >= Thold 

Tarrival – Thold = Thold,slack 

Thold,slack = TCL + TRC + TC-Q - Thold

  • A negative hold slack means the signal propagates from one register to the next so fast that it overwrites the old value before that value can be captured by the corresponding active clock edge. 
  • Clock frequency variation does not affect the hold time or the hold slack, so it is critical to fix hold violations before setup violations when both exist simultaneously (a hold violation cannot be fixed by slowing the clock).
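Similarly, a numeric sketch of the hold-slack equation above (all ns values are illustrative assumptions):

```python
# Numeric sketch of: Thold,slack = TCL + TRC + TC-Q - Thold.
# All values (ns) are illustrative, not from a real library.

T_cq = 0.2     # clock-to-output delay (TC-Q)
T_cl = 0.4     # shortest combinational path delay (TCL)
T_rc = 0.1     # interconnect RC delay (TRC)
T_hold = 0.15  # hold requirement of the capture flop

T_arrival = T_cq + T_cl + T_rc
hold_slack = T_arrival - T_hold
print(round(hold_slack, 3))  # positive -> hold is met
```

Note that the clock period does not appear anywhere in this expression, which is why changing the clock frequency cannot fix a hold violation.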

Hold Time

Hold time:

  • Hold time is the minimum amount of time the data signal should be held steady after the clock event so that the data are reliably sampled. This applies to synchronous circuits such as the flip-flop. 
  • Or in short I can say that the amount of time the synchronous input (D) must be stable after the active edge of clock. 
  • The Time after clock pulse where data input is held stable is called hold time.

You May Be Also Interested In :-

  • How To Fix Hold Time Violations

  • Setup Time

    Setup Time

    • Setup time is the minimum amount of time the data signal should be held steady before the clock event so that the data are reliably sampled by the clock. This applies to synchronous circuits such as the flip-flop. 
    •  In short I can say that the amount of time the Synchronous input (D) must be stable before the active edge of the Clock. 
    • The Time when input data is available and stable before the clock pulse is applied is called Setup time.

    You May Be Also Interested In :

    Latch Vs Flipflop (Advantages and Disadvantages)

    Advantages and Disadvantages

    1. In synthesis of HDL code, inappropriate coding can infer latches instead of flip-flops, e.g. incomplete "if" and "case" statements. This should be avoided, as latches are more prone to glitches.

    2. A latch takes less area; a flip-flop takes more area (as a flip-flop is made up of latches).

    3. Latch facilitate time borrowing or cycle stealing whereas flip flops allow synchronous logic.

    4. Latches are not friendly with DFT tools. Minimize inferring of latches if your design has to be made testable, since the enable signal to a latch is not a regular clock like the one fed to the rest of the logic. To ensure testability, you need to OR the "enable" and "scan_enable" signals together and feed the result to the enable port of the latch. 

    5. Most EDA software tools have difficulty with latches. Static timing analyzers typically make assumptions about latch transparency. If a tool assumes the latch is transparent (i.e. triggered by the active level of the clock, not just by the clock edge), it may find a false timing path through the input data pin. If it assumes the latch is not transparent, it may miss a critical path.

    6. If the target technology supports a latch cell, then race condition problems are minimized. If it does not, the synthesis tool will infer the latch from basic gates, which is prone to race conditions. You then need to add redundant logic to overcome this problem, but during optimization that redundant logic can be removed by the synthesis tool! This creates endless problems for the design team.

    7. Due to the transparency issue, latches are difficult to test. For scan testing, they are often replaced by a latch-flip-flop compatible with the scan-test shift-register. Under these conditions, a flip-flop would actually be less expensive than a latch.

    8. Flip flops are friendly with DFT tools. Scan insertion for synchronous logic is hassle free.

    Latch Up Effect

    A latch-up is a type of short circuit which can occur in an integrated circuit (IC). More specifically it is the inadvertent creation of a low-impedance path between the power supply rails of a MOSFET circuit, triggering a parasitic structure which disrupts proper functioning of the part, possibly even leading to its destruction due to overcurrent. A power cycle is required to correct this situation.

    A single event latch-up is a latch-up caused by a single event upset, typically heavy ions or protons from cosmic rays or solar flares.

    The parasitic structure is usually equivalent to a thyristor (or SCR), a PNPN structure which acts as a PNP and an NPN transistor stacked next to each other. During a latch-up when one of the transistors is conducting, the other one begins conducting too. They both keep each other in saturation for as long as the structure is forward-biased and some current flows through it - which usually means until a power-down. The SCR parasitic structure is formed as a part of the totem-pole PMOS and NMOS transistor pair on the output drivers of the gates.

    How to Prevent Latch Up Effect

    • By adding well taps: for example, in an inverter, for the PMOS add an N+ tap in the n-well and connect it to Vdd, and for the NMOS add a P+ tap in the p-substrate and connect it to Vss. 
    • By increasing substrate doping levels, with a consequent drop in the value of Rs.
    • By reducing Rp through control of fabrication parameters and by ensuring a low contact resistance to Vss.
    • By providing guard rings.

    Wednesday 21 October 2015

    Electrostatic Discharge

    ESD damage is usually caused by poor handling procedures, and is especially severe in low-humidity environments. Electrostatic discharge is a pervasive reliability concern in VLSI circuits. It is a short-duration (<200 ns), high-current (>1 A) event that causes irreparable damage. The most common manifestation is the human-body ESD event, where a charge of about 0.6 uC can be induced on a body capacitance of 100 pF, leading to electrostatic potentials of 4 kV or greater. Whenever the body comes in contact with plastic or other insulating material, static charge is generated. It can be a very small charge, as low as nanocoulombs, but it can cause damage to MOS devices, as the resulting voltages are quite high. 
    We know that Q = CV, so V = Q/C; for a charging current, V = It/C.
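A quick back-of-the-envelope check of the human-body numbers quoted above, using V = Q/C:

```python
# V = Q / C with the values from the text: Q = 0.6 uC on C = 100 pF.

Q = 0.6e-6   # induced charge, coulombs
C = 100e-12  # body capacitance, farads
V = Q / C
print(V)  # roughly 6000 V, i.e. "4 kV or greater" as stated
```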

    A typical solution to the ESD protection problem is to use clamping diodes implemented as MOS transistors with their gates tied to GND for nMOS transistors, or to VDD for pMOS transistors, as shown in Figure 1. For the normal range of input voltages these transistors are in the OFF state. If the input voltage builds up above (or below) a certain level, one of the transistors starts to conduct, clamping the input voltage at that level.

    Clamping Transistor

    These clamping transistors are very big structures consisting of a number of transistors connected in parallel, and are able to sustain significant current. The thick-field-oxide NMOS design is not suitable for deep submicron processes, and the thin-field-oxide NMOS presents oxide breakdown problems when interfacing between blocks with high power supply voltages. Scaling of VLSI devices has reduced the dimensions of all structures used in ICs, and this has increased their susceptibility to ESD damage. Hence ESD protection issues are becoming increasingly important for deep submicron technologies. Gate oxide thicknesses are approaching the tunneling regime of around 35 angstroms. From an ESD perspective, the important issue is whether oxide breakdown is reached before the protection devices are able to turn on and protect them! [21]


    NAND vs NOR

    NAND is a better gate for design than NOR because, at the transistor level, the mobility of electrons is normally about three times that of holes; since a NOR gate puts the slower PMOS devices in series, the NAND is the faster gate. Additionally, the gate leakage in NAND structures is much lower. If you consider the t_phl and t_plh delays, you will find that the delay profile is more symmetric for NAND, but for NOR one delay is much higher than the other (obviously t_plh is higher, since the higher-resistance PMOS devices are connected in series, which further increases the resistance).

    Temperature Inversion

    It is known that with an increase in temperature, the resistance of a metal wire (conductor) increases. The reason for this phenomenon is that with increasing temperature, thermal vibrations in the lattice increase, which gives rise to increased electron scattering. One can visualize this as electrons colliding with each other more often and hence contributing less to the streamlined flow needed for electric current.
    A similar effect happens in semiconductors: the mobility of the primary carrier decreases with increasing temperature. This applies equally to holes and electrons.
    But in semiconductors, when the supply voltage of a MOS transistor is reduced, an interesting effect is observed. At lower voltages the delay through the MOS device decreases with increasing temperature, rather than increasing. After all, common wisdom is that with increasing temperature the mobility decreases, and hence one would have expected reduced current and consequently increased delay. This effect is also referred to as low voltage Inverted Temperature Dependence.
    Let's first see what the delay of a MOS transistor depends upon, in a simplified model.
    Delay = ( Cout * Vdd )/ Id [ approx ]
    Cout = Drain Cap
    Vdd = Supply voltage
    Id = Drain current.
    Now let's see what the drain current depends upon.
    Id = µ(T) * (Vdd - Vth(T))^α
    µ = mobility
    Vth = threshold voltage
    α = positive constant (small number)
    One can see that Id depends upon both the mobility µ and the threshold voltage Vth. Let us examine the dependence of mobility and threshold voltage upon temperature.
    µ(T) = µ(300) * (300/T)^m
    Vth(T) = Vth(300) - κ(T - 300)
    Here '300' is room temperature in kelvin.
    Mobility and threshold voltage both decrease with temperature. But a decrease in mobility means less drain current and a slower device, whereas a decrease in threshold voltage means more drain current and a faster device.
    The final drain current is determined by which trend dominates at a given voltage and temperature pair. At high voltage, mobility determines the drain current, whereas at lower voltages the threshold voltage dominates it.
    This is the reason why at higher voltages device delay increases with temperature, but at lower voltages device delay decreases with temperature.[20]
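The inverted temperature dependence can be seen by evaluating the simplified model above with toy numbers. All constants here (µ(300), Vth(300), m, κ, α) are illustrative assumptions, not values for any real process:

```python
# Toy evaluation of Id = µ(T) * (Vdd - Vth(T))^α with
# µ(T) = µ(300)*(300/T)^m and Vth(T) = Vth(300) - κ*(T - 300).
# All constants are made up for illustration only.

def Id(Vdd, T, mu300=1.0, Vth300=0.4, m=1.5, kappa=0.001, alpha=1.3):
    mu = mu300 * (300.0 / T) ** m        # mobility falls with temperature
    Vth = Vth300 - kappa * (T - 300.0)   # threshold voltage falls too
    return mu * (Vdd - Vth) ** alpha

# High Vdd: mobility loss dominates -> less current (slower) as T rises
print(Id(1.2, 400) < Id(1.2, 300))  # True
# Low Vdd: Vth drop dominates -> more current (faster) as T rises
print(Id(0.5, 400) > Id(0.5, 300))  # True
```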

    Tuesday 20 October 2015

    Power Gating

    Power gating is effective for reducing leakage power. It is a technique wherein circuit blocks that are not in use are temporarily turned off to reduce the overall leakage power of the chip. This temporary shutdown time can also be called "low power mode" or "inactive mode". When the circuit blocks are required for operation once again, they are activated to "active mode". These two modes are switched at the appropriate times and in a suitable manner to maximize power savings while minimizing the impact on performance. Thus the goal of power gating is to minimize leakage power by temporarily cutting off power to selected blocks that are not required in that mode.

    Power gating affects design architecture more than clock gating does. It increases time delays, as power-gated modes have to be safely entered and exited. The achievable leakage power saving in such a low power mode, and the energy dissipated to enter and exit the mode, introduce architectural trade-offs. Shutting down the blocks can be accomplished either by software or hardware: driver software can schedule the power-down operations, hardware timers can be utilized, or a dedicated power management controller can be used.

    An externally switched power supply is a very basic form of power gating used to achieve long-term leakage power reduction. To shut off a block for short intervals of time, internal power gating is more suitable. CMOS switches that provide power to the circuitry are controlled by power gating controllers. The outputs of a power-gated block discharge slowly, so their voltage levels spend more time around the threshold voltage, which can lead to larger short-circuit current in the gates they drive.

    Power gating uses low-leakage PMOS transistors as header switches to shut off power supplies to parts of a design in standby or sleep mode. NMOS footer switches can also be used as sleep transistors. Inserting the sleep transistors splits the chip’s power network into a permanent power network connected to the power supply and a virtual power network that drives the cells and can be turned off.

    The quality of this complex power network is critical to the success of a power-gating design. Two of the most critical parameters are the IR-drop and the penalties in silicon area and routing resources. Power gating can be implemented using cell- or cluster-based (or fine grain) approaches or a distributed coarse-grained approach.

    Power-gating parameters

    Power gating implementation has additional considerations than the normal timing closure implementation. The following parameters need to be considered and their values carefully chosen for a successful implementation of this methodology.
    Power gate size: The power gate size must be selected to handle the amount of switching current at any given time. The gate must be big enough that there is no measurable voltage (IR) drop across it. As a rule of thumb, the gate is generally sized for about 3X the switching capacitance. Designers can also choose between header (PMOS) and footer (NMOS) gates; footer gates tend to be smaller in area for the same switching current. Dynamic power analysis tools can accurately measure the switching current and also predict the required size for the power gate.
    Gate control slew rate: In power gating, this is an important parameter that determines the power gating efficiency. When the slew rate is large, it takes more time to switch the circuit off and on, which can reduce the power gating efficiency. The slew rate is controlled by buffering the gate control signal.
    Simultaneous switching capacitance: This important constraint refers to the amount of circuit that can be switched simultaneously without affecting the power network integrity. If a large amount of the circuit is switched simultaneously, the resulting “rush current” can compromise the power network integrity. The circuit needs to be switched in stages in order to prevent this.
    Power gate leakage: Since power gates are made of active transistors, leakage is an important consideration to maximize power savings.

    Fine-grain power gating

    Adding a sleep transistor to every cell that is to be turned off imposes a large area penalty, and individually gating the power of every cluster of cells creates timing issues introduced by inter-cluster voltage variation that are difficult to resolve. Fine-grain power gating encapsulates the switching transistor as a part of the standard cell logic. Switching transistors are designed by either library IP vendor or standard cell designer. Usually these cell designs conform to the normal standard cell rules and can easily be handled by EDA tools for implementation.

    The size of the gate control is designed with the worst-case assumption that this circuit will switch during every clock cycle, resulting in a huge area impact. Some recent designs implement fine-grain power gating selectively, only for the low Vt cells. If the technology allows multiple Vt libraries, the use of low Vt devices is kept to a minimum in the design (around 20%), so that the area impact can be reduced. When using power gates on the low Vt cells, the output must be isolated if the next stage is a high Vt cell; otherwise the neighboring high Vt cell can leak when the output goes to an unknown state due to power gating.

    Gate control slew rate constraint is achieved by having a buffer distribution tree for the control signals. The buffers must be chosen from a set of always on buffers (buffers without the gate control signal) designed with high Vt cells. The inherent difference between when a cell switches off with respect to another, minimizes the rush current during switch-on and switch-off.

    Usually the gating transistor is designed as a high vt device. Coarse-grain power gating offers further flexibility by optimizing the power gating cells where there is low switching activity. Leakage optimization has to be done at the coarse grain level, swapping the low leakage cell for the high leakage one. Fine-grain power gating is an elegant methodology resulting in up to 10X leakage reduction. This type of power reduction makes it an appealing technique if the power reduction requirement is not satisfied by multiple Vt optimization alone.

    Coarse-grain power gating

    The coarse-grained approach implements grid-style sleep transistors which drive cells locally through shared virtual power networks. This approach is less sensitive to PVT variation, introduces less IR-drop variation, and imposes a smaller area overhead than the cell- or cluster-based implementations. In coarse-grain power gating, the power-gating transistor is part of the power distribution network rather than the standard cell.

    There are two ways of implementing a coarse-grain structure:

    1) Ring-based

    2) column-based

    Ring-based methodology: The power gates are placed around the perimeter of the module that is being switched-off as a ring. Special corner cells are used to turn the power signals around the corners.

    Column-based methodology: The power gates are inserted within the module with the cells abutted to each other in the form of columns. The global power is the higher layers of metal, while the switched power is in the lower layers.

    Gate sizing depends on the overall switching current of the module at any given time. Since only a fraction of the circuits switch at any point in time, power gate sizes are smaller compared to fine-grain switches. Dynamic power simulation using worst-case vectors can determine the worst-case switching for the module and hence the required size; IR drop can also be factored into the analysis. Simultaneous switching capacitance is a major consideration in coarse-grain power gating implementation. In order to limit simultaneous switching, the gate control buffers are daisy-chained, and special counters are used to selectively turn on blocks of switches.

    Isolation Cells

    Isolation cells are used to prevent short circuit current. As the name indicates these cells isolate power gated block from the normally on block. Isolation cells are specially designed for low short circuit current when input is at threshold voltage level. Isolation control signals are provided by power gating controller. Isolation of the signals of a switchable module is essential to preserve design integrity. Usually a simple OR or AND logic can function as an output isolation device. Multiple state retention schemes are available in practice to preserve the state before a module shuts down. The simplest technique is to scan out the register values into a memory before shutting down a module. When the module wakes up, the values are scanned back from the memory.
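A trivial sketch of the AND-style output isolation described above (the function name and the 0/1 signal model are illustrative assumptions; a real isolation cell also handles unknown/X states):

```python
# Sketch of AND-style output isolation: when isolation is enabled
# (isolation_enable_n = 0), the power-gated block's output is clamped
# to a known 0 for the always-on logic downstream.

def isolate_and(block_output, isolation_enable_n):
    """AND clamp: passes block_output when isolation_enable_n is 1, else forces 0."""
    return block_output & isolation_enable_n

print(isolate_and(1, 1))  # 1 - normal operation, value passes through
print(isolate_and(1, 0))  # 0 - block powered down, output clamped to 0
```

An OR-style isolation cell works the same way but clamps the output to a known 1 instead.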

    Retention Registers

    When power gating is used, the system needs some form of state retention, such as scanning out data to a RAM and then scanning it back in when the system is reawakened. For critical applications, the memory states must be maintained within the cell, a condition that requires a retention flop to store the bits. That makes it possible to restore the bits very quickly during wakeup. Retention registers are special low-leakage flip-flops used to hold the data of the main registers of the power-gated block. Thus the internal state of the block during power-down mode can be retained and loaded back when the block is reactivated. Retention registers are always powered up. The retention strategy is design dependent. The power gating controller controls the retention mechanism, deciding when to save the current contents of the power-gated block and when to restore them.