A Learning-Based Oracle for Automatic Logic Optimization
The growing complexity of semiconductor chips has revealed two critical limitations of contemporary EDA techniques. First, while contemporary EDA methodologies are mature and highly optimized, they usually target only specific types of logic networks. For instance, And-Inverter Graph (AIG)-based logic optimizations perform well on control-intensive logic networks, while the more recently developed Majority-Inverter Graph (MIG)-based logic optimizations are efficient for arithmetic-intensive logic networks. As contemporary semiconductor chips become more diverse in functionality, a unified logic optimization methodology may no longer guarantee the best Performance-Power-Area (PPA). To keep pace with this trend in circuit complexity, it is worthwhile to study how to partition a design efficiently into portions with specific attributes (arithmetic, control, etc.) and to apply dedicated logic optimization techniques to each portion. In contemporary EDA flows, such optimizations are achieved only through intensive manual effort and require strong expertise.

Second, logic synthesis is difficult to parallelize. As chip complexity has exploded in recent years, and since this trend is likely to continue, traditional logic optimization methodologies can push the runtime of EDA flows (even of logic synthesis alone) beyond 24 hours. To achieve parallelization, large semiconductor designs can be partitioned into equal blocks that are distributed over different processors for EDA optimization. However, such naive parallelism reduces runtime at the cost of PPA degradation: because the partitioning is performed regardless of logic attributes, each partition must be treated with generic optimization techniques, leaving the final Quality of Results (QoR) far from the optimum.

We therefore target a fast automatic tool capable of identifying efficient logic optimization techniques for the different portions of a circuit design. By partitioning SoC designs according to their logic attributes and applying dedicated logic optimization techniques to each partition, logic synthesis can achieve significant runtime reduction and performance improvement simultaneously.
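To make the envisioned flow concrete, the Python sketch below illustrates one possible shape of the partition-classify-dispatch loop: each partition is labeled by attribute and routed to an attribute-specific optimizer, with all partitions processed in parallel. Every name here (`optimize_aig`, `optimize_mig`, `classify`, the `adder_count` heuristic) is a hypothetical placeholder standing in for real logic optimizers and for the learned oracle; this is an illustrative sketch of the architecture, not an implementation from this work.

```python
from concurrent.futures import ProcessPoolExecutor

def optimize_aig(partition):
    """Placeholder for an AIG-based flow (suited to control-intensive logic)."""
    return {**partition, "optimized_with": "AIG"}

def optimize_mig(partition):
    """Placeholder for a MIG-based flow (suited to arithmetic-intensive logic)."""
    return {**partition, "optimized_with": "MIG"}

# Map each logic attribute to the optimizer that handles it best.
OPTIMIZERS = {"control": optimize_aig, "arithmetic": optimize_mig}

def classify(partition):
    """Stand-in for the learned oracle that predicts a partition's attribute.

    A trivial structural heuristic is used purely for illustration.
    """
    return "arithmetic" if partition["adder_count"] > 0 else "control"

def optimize_design(partitions):
    """Classify each partition, then optimize all partitions in parallel."""
    with ProcessPoolExecutor() as pool:
        jobs = [pool.submit(OPTIMIZERS[classify(p)], p) for p in partitions]
        return [job.result() for job in jobs]

if __name__ == "__main__":
    design = [{"name": "alu", "adder_count": 4},
              {"name": "fsm", "adder_count": 0}]
    print(optimize_design(design))
```

In a real flow, the partitioner and the oracle would operate on netlist structure rather than toy dictionaries, but the dispatch pattern, i.e., classify once, then fan out attribute-matched optimizations across processors, is what lets runtime reduction and QoR improvement coexist.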
Fig.: Logic optimization flow.
This research effort is funded by DARPA under grant #FA8650-18-2-7849.