For complex system-on-chip (SoC) designs at advanced nodes, there is a growing tug-of-war between performance, power, and area (PPA) goals and turnaround time (TAT). With traditional place-and-route tools, designers must break their SoCs into many small blocks, yet doing so makes it difficult to achieve optimal PPA and TAT without tradeoffs.
The new Cadence® Innovus™ Implementation System meets designers’ needs by delivering a typical 10% to 20% PPA advantage along with up to a 10X TAT gain.
Figure 1: Innovus Implementation System typical PPA and TAT improvement vs. traditional physical implementation solutions
Providing the industry’s first massively parallel solution, the system can effectively handle blocks of 10 million instances or more.
Figure 2: Innovus Implementation System provides the industry’s first massively parallel solution
The system features several key capabilities that make these results possible:
- A massively parallel architecture that can handle huge designs, taking advantage of multi-threading on multi-core workstations as well as distributed processing over networks of computers
- A new GigaPlace solver-based placement technology, which is slack driven and topology, pin-access, and color aware, providing optimal pipeline placement, wirelength, utilization, and PPA
- An advanced, multi-threaded, layer-aware timing- and power-driven optimization engine, which reduces dynamic and leakage power
- A unique concurrent clock and datapath optimization engine, which reduces cross-corner variability and boosts performance at lower power
- Next-generation slack-driven routing with track-aware timing optimization, which addresses signal integrity early on and improves post-route correlation
- Full-flow multi-objective technology, which makes concurrent electrical and physical optimization possible
New Slack-Driven Placement Engine
The new GigaPlace engine changes the way placement is performed and enhances PPA. Traditionally, placement has been “timing aware” and only lightly integrated with other engines in the implementation system, such as timing analysis and optimization. With the GigaPlace engine, placement is slack driven and tightly integrated: the engine places cells in a timing-driven mode, building up a slack profile of the paths and adjusting the placement based on these timing slacks.
The GigaPlace engine accurately models electrical and physical constraints (floorplan, route-topology-based wirelength, congestion). It also integrates the mathematical model of Cadence’s timing- and power-driven optimization engine, which is likewise embedded in the Innovus Implementation System, enabling concurrent, convergent optimization of electrical and physical metrics. More importantly, the designer’s intent can be extracted automatically from the electrical constraints, which in turn helps achieve better optimization of physical metrics. A global optimization strategy and a novel numerical solver are employed to avoid the trap of local minima, yielding globally optimal PPA. This strategy avoids costly iterations between different steps of the flow and results in faster design closure with the best PPA.
Figure 6: High-performance design benchmarks on an embedded processor
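A minimal sketch of the slack-driven idea, assuming a toy one-dimensional placement and a linear wire-delay model; the cell names, the delay constant, and the attraction heuristic are all illustrative, not Innovus internals:

```python
# Hypothetical sketch of slack-driven placement on a 1-D row.
# The linear delay model and attraction heuristic are illustrative only.

def path_slack(path, pos, required, delay_per_unit=0.1):
    """Toy timing model: path delay = total wirelength * delay_per_unit."""
    length = sum(abs(pos[a] - pos[b]) for a, b in zip(path, path[1:]))
    return required - length * delay_per_unit

def slack_driven_step(paths, pos, rate=0.2):
    """Pull cells on negative-slack paths together, weighted by violation."""
    new_pos = dict(pos)
    for path, required in paths:
        s = path_slack(path, pos, required)
        if s >= 0:
            continue                      # path meets timing: leave it alone
        center = sum(pos[c] for c in path) / len(path)
        for c in path:
            # Larger violations (more negative slack) pull harder.
            new_pos[c] += rate * min(1.0, -s) * (center - pos[c])
    return new_pos

pos = {"ff1": 0.0, "buf": 6.0, "ff2": 12.0}
path, required = ["ff1", "buf", "ff2"], 1.0   # required arrival time (ns)

before = path_slack(path, pos, required)
for _ in range(20):
    pos = slack_driven_step([(path, required)], pos)
after = path_slack(path, pos, required)
print(after > before)   # → True: the failing path's slack improved
```

Each step moves only the cells on failing paths, and the pull strength scales with the size of the violation — the essence of letting timing slack, rather than wirelength alone, drive placement.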
Advanced Timing- and Power-Driven Optimization Engine
Through its route-aware optimization capability, the next-generation, multi-threaded advanced timing- and power-driven optimization engine in the Innovus Implementation System can identify long timing-critical nets, query a new congestion-tracking infrastructure to confirm that space is available on the upper layers, and then rebuffer these nets on the upper layers to improve timing. With this capability, designers can maintain critical layer assignments during the entire pre-route optimization flow. These assignments are passed on to the system’s next-generation massively parallel global routing engine so that the final routing preserves the correct layer assignment.
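As a rough sketch of that decision flow, the toy model below promotes a long, timing-critical net to an upper layer only if a congestion map reports free tracks there; the net data, layer names, and length threshold are hypothetical, not the Innovus engine’s actual data structures:

```python
# Illustrative sketch of route-aware rebuffering; all data is hypothetical.

CRITICAL_LENGTH = 500                            # um; rebuffer candidates
upper_layer_free_tracks = {"M7": 12, "M8": 0}    # toy congestion-map query

def try_promote(net):
    """Move a long timing-critical net to an upper layer if space remains."""
    if net["length"] <= CRITICAL_LENGTH or net["slack"] >= 0:
        return False
    for layer, free in upper_layer_free_tracks.items():
        if free > 0:
            upper_layer_free_tracks[layer] -= 1
            net["layer"] = layer     # assignment preserved through routing
            net["buffered"] = True   # rebuffer on the faster upper layer
            return True
    return False

nets = [
    {"name": "n1", "length": 800, "slack": -0.05, "layer": "M3", "buffered": False},
    {"name": "n2", "length": 120, "slack": 0.10, "layer": "M2", "buffered": False},
]
for net in nets:
    try_promote(net)
print([n["layer"] for n in nets])   # → ['M7', 'M2']
```

Only the violating long net consumes an upper-layer track; short or timing-clean nets keep their original assignment, which is what keeps the congestion budget available for the nets that need it.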
Reduced Cross-Corner Variability with Concurrent Clocking
The Innovus Implementation System features a next-generation clock concurrent optimization engine with true multi-threading, enhanced useful skew, and flow integration. It merges physical optimization with clock-tree synthesis (CTS), simultaneously building clocks and optimizing logic delays based directly on a propagated clocks model. All the optimization decisions are based on true propagated clocks, taking into account clock gates, inter-clock paths, and on-chip variation (OCV) derates.
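The benefit of building clocks and datapath together can be seen in a minimal useful-skew calculation; the clock period, stage delays, and two-stage pipeline below are assumed numbers for illustration, and a real flow would work on propagated clocks with clock gates and OCV derates as described above:

```python
# Toy illustration of useful skew: delaying the clock at a mid-pipeline
# register borrows margin from a fast stage to help a slow one.

T = 1.0                      # clock period (ns), assumed
stage1, stage2 = 0.9, 0.6    # combinational delays around the middle flop

# Without skew, the worst slack is limited by the slow stage.
worst_no_skew = min(T - stage1, T - stage2)

# Useful skew: delay the middle flop's clock so both stages see equal slack.
skew = (stage1 - stage2) / 2   # capture the middle flop 0.15 ns later
worst_with_skew = min(T - stage1 + skew, T - stage2 - skew)

print(round(worst_no_skew, 3), round(worst_with_skew, 3))  # → 0.1 0.25
```

Shifting 0.15 ns of margin from the fast stage to the slow one raises the worst slack from 0.10 ns to 0.25 ns without touching the logic — which is why optimizing clocks and datapath concurrently, rather than sequentially, can boost performance at lower power.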
Enabling Up to 10X Faster Turnaround Time
The Innovus Implementation System was designed to boost digital design TAT. First and foremost is its full-flow massively parallel architecture, which can run multi-threaded tasks simultaneously on multiple CPUs. The architecture is designed such that the system can produce best-in-class TAT with standard hardware, which is normally 8 to 16 CPUs per box. In addition, the flow can scale over a large number of CPUs for designs with a larger instance count. The architecture can also be described as “look ahead” in its approach, as it accounts for upstream and downstream steps and effects in the design flow, providing a runtime boost and minimizing design iterations between the placement, optimization, clocking, and routing engines.
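As a back-of-the-envelope illustration of why TAT scales with CPU count, Amdahl’s law bounds the speedup by the remaining serial fraction of the flow; the 95% parallel fraction below is an assumed number for illustration, not a measured Innovus figure:

```python
# Amdahl's-law sketch of multi-CPU scaling; the 0.95 parallel fraction
# is an assumption, not a measured figure for any tool.

def speedup(parallel_fraction, cpus):
    """Amdahl's law: serial work limits how far extra CPUs can help."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cpus)

for n in (8, 16, 64):
    print(n, round(speedup(0.95, n), 1))   # → 8 5.9 / 16 9.1 / 64 15.4
```

The curve flattens quickly past 16 CPUs, which is why a standard 8-to-16-CPU box captures most of the gain, while scaling to many more CPUs pays off mainly on designs with very large instance counts.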
Figure 7: Innovus Implementation System TAT vs. reference tool
Boosting Engineering Productivity with Familiar Flow
Since multiple production-proven signoff engines are integrated into the Innovus Implementation System, it was essential to have a simplified user and scripting interface. The system fosters usability by simplifying command naming and aligning common implementation methods across other Cadence digital and signoff tools. The processes of design initialization, database access, command consistency, and metric collection have all been streamlined and simplified. In addition, updated and shared methods have been added to run, define, and deploy reference flows. These updated interfaces and reference flows increase productivity by delivering a familiar interface across core implementation and signoff products.
Request product information here: http://www.cadence.com/cadence/contact_us/Pages/rpi.aspx