By Lauro Rizzatti (Contributed Content) | Thursday, February 20, 2020
More and more new system-on-chip (SoC) designs at leading semiconductor companies embody software algorithms conceived and developed on host computers and in earlier embedded designs.
Migrating such algorithms to cutting-edge SoC projects that power AI, machine and deep learning, high-performance computing, automotive, 5G, and open-source ISA applications is a challenge. For one, no embedded CPU, nor even an array of CPUs, possesses the computing power to process in real time the volume of data that advanced algorithms must handle. The table below exemplifies the processing requirements in three state-of-the-art semiconductor vertical markets.
| | ML | 5G | HD video |
| --- | --- | --- | --- |
| Maximum processing performance | Trillions of floating-point operations per second | 20 Gbps | Billions of operations per second |
Current maximum compute processing performance in ML, 5G, and HD video
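A quick back-of-the-envelope comparison makes the gap concrete. The figures in the sketch below are illustrative assumptions, not numbers drawn from the table, but the conclusion holds for any workload in the trillions-of-operations range:

```python
# Back-of-the-envelope comparison of required vs. available compute throughput.
# All figures below are illustrative assumptions, not measured data.

required_tflops = 4.0          # e.g., an ML inference pipeline needing ~4 TFLOPS sustained
embedded_cpu_gflops = 20.0     # optimistic sustained throughput of one embedded CPU core (GFLOPS)
cores = 8                      # a typical embedded CPU cluster

available_tflops = embedded_cpu_gflops * cores / 1000.0
shortfall = required_tflops / available_tflops

print(f"Available: {available_tflops:.2f} TFLOPS, required: {required_tflops:.1f} TFLOPS")
print(f"The CPU cluster falls short by roughly {shortfall:.0f}x -> hardware acceleration needed")
```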
When it comes to data movement, the traditional memory architectures serving the venerable CPU, with their limited bandwidth, kill performance and drive up power consumption. The solution is to migrate software algorithms from CPU-based execution to hardware-accelerated implementations optimized for power, performance, and area.
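The bandwidth side of the argument is just as easy to sketch. A minimal roofline-style check, using assumed (not measured) figures for a CPU-plus-DRAM system, shows how little of the peak compute is reachable once memory traffic dominates:

```python
# Simple roofline-style check: is the workload compute-bound or memory-bound?
# All parameters are illustrative assumptions.

peak_compute_gflops = 160.0      # assumed peak of a small embedded CPU cluster (GFLOPS)
memory_bandwidth_gbps = 12.8     # assumed DRAM bandwidth (GB/s)
arithmetic_intensity = 2.0       # FLOPs performed per byte moved (workload-dependent)

# Attainable performance is capped by whichever resource saturates first.
attainable_gflops = min(peak_compute_gflops,
                        memory_bandwidth_gbps * arithmetic_intensity)

print(f"Attainable: {attainable_gflops:.1f} GFLOPS "
      f"({'memory-bound' if attainable_gflops < peak_compute_gflops else 'compute-bound'})")
```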
Easy to say, far harder to do. Migration is today’s nightmare for the chip design verification community. No surprise that many in the ranks of verification engineering at hot new startups or in cool chip project groups complain that their verification environments are strained to breaking by these demanding tasks in complex, billion-gate designs.
Stringent time-to-market requirements of under 12 months defeat the classical approach based on design optimization at the register transfer level (RTL). Bug detection has never been harder. It used to be the equivalent of finding a needle in a haystack. Now there are two huge haystacks: a software stack and a hardware stack.
Calls for improvements to the verification flow are louder than ever. What worked in the past is no longer viable. A verification overhaul is necessary to address new market opportunities, and a system-level flow for SoC architecture analysis is the way to go.
Specifically, what is needed is a new design and verification flow that starts by modeling the SoC design at a high level of abstraction, including the software algorithm, typically written in Python. Such a scalable flow continues with high-level synthesis followed by tightly coupled hybrid verification. Multiple iterations between the two can accelerate the creation of an architecture that is optimal in speed, power, and area for a specific application. Once an architecture is chosen, system-level performance and power analysis via hardware emulation of the RTL design can quickly confirm the choice. Finally, FPGA prototyping, performing system validation with real-world traffic, completes the verification task.
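As a minimal sketch of what the starting point of such a flow might look like, consider a Python-level algorithmic model paired with a bit-accurate fixed-point version of the kind a high-level synthesis tool would eventually implement. The FIR filter, word widths, and error threshold here are hypothetical placeholders, not a prescribed methodology:

```python
# Minimal sketch of a Python-level algorithmic model that could seed an HLS flow.
# The FIR filter, word widths, and accuracy threshold are illustrative assumptions.
import random

COEFFS = [0.05, 0.2, 0.5, 0.2, 0.05]   # hypothetical filter taps
FRAC_BITS = 10                          # fixed-point fractional bits to explore before RTL

def fir_float(samples):
    """Floating-point reference model of the algorithm."""
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for j, c in enumerate(COEFFS):
            if i - j >= 0:
                acc += c * samples[i - j]
        out.append(acc)
    return out

def fir_fixed(samples):
    """Bit-accurate fixed-point model approximating the intended hardware."""
    scale = 1 << FRAC_BITS
    coeffs_q = [round(c * scale) for c in COEFFS]
    samples_q = [round(s * scale) for s in samples]
    out = []
    for i in range(len(samples_q)):
        acc = 0
        for j, c in enumerate(coeffs_q):
            if i - j >= 0:
                acc += c * samples_q[i - j]
        out.append(acc / (scale * scale))
    return out

# Quick self-check: the fixed-point model should track the reference closely.
stimulus = [random.uniform(-1.0, 1.0) for _ in range(256)]
max_err = max(abs(a - b) for a, b in zip(fir_float(stimulus), fir_fixed(stimulus)))
assert max_err < 1e-2, f"quantization error too large: {max_err}"
print(f"max quantization error: {max_err:.5f}")
```

Iterating on a parameter such as FRAC_BITS costs seconds at this level of abstraction; the same trade-off takes far longer to evaluate once it is locked into RTL, which is precisely why the architectural exploration belongs at the top of the flow.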
It’s an exciting time to be part of an industry that is unveiling, at a rapid pace, more and more chips that solve thorny problems. None of it can be done without chip design verification engineers, who need new approaches, technologies, and methodologies to get their work done. Let’s make sure they have those tools.
Author’s note: DVCon U.S., March 2–5 in San Jose, California, will explore many of the challenges described in this blog post. A panel session on Wednesday, March 4, “Predicting the Verification Flow of the Future,” will poll five verification experts on what the verification environment of the future will look like.