Source: Tech Design Forum
It’s budget season. Annual reviews are well under way in many departments within semiconductor companies. It’s something most managers dread: a marathon of meetings spent justifying expenditure and enduring possible cuts.
Even though engineering departments do not budget entirely on a yearly cycle – most spending is determined by project cycles – it’s still a good time to look at what is spent on design and why.
I particularly want to discuss how emulation fits into budgeting as an essential line item.
Two critical parts of any design budget are specific EDA tool requests and detailed cost/benefit analyses of as-yet-unspecified tools that may be required for upcoming projects. Evaluating tools can be long and tedious, but it is essential to determining whether a tool is appropriate for the design and verification flow, and how much you should spend on it.
Hardware emulation is playing a greater role in this process because of the investment and complexity that accompany any SoC design. It’s widely seen as the ‘universal’ verification tool, one that can be used throughout development.
Emulation can map a design of any size (even one in excess of a billion gates), verify designs up to six orders of magnitude faster than an RTL simulator, and give users full design visibility for thorough debugging.
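To put that speed claim into rough numbers (the figures here are illustrative assumptions, not benchmarks): suppose a test needs around ten million design clock cycles, which is roughly 10ms of real time for a design targeting 1GHz. A large RTL simulation crawling along at about 10Hz would need the best part of two weeks to cover it; an emulator running at about 1MHz gets through it in ten seconds.

10^7 cycles / 10Hz = 10^6 seconds ≈ 11.6 days, versus 10^7 cycles / 1MHz = 10 seconds

Scale the workload up to an operating-system boot measured in billions of cycles and simulation stops being an option at all, which is where the upper end of that six-orders-of-magnitude range comes into play.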
Sounds great, but the return-on-investment (ROI) analysis has to impress both engineering managers and financial directors. Here, given ‘tis the season to encourage generosity, my argument applies to both groups but is really aimed at the latter. What is the dollar case for emulation an engineering manager can put to whoever controls the purse strings?
Well…
Emulation payback
Hardware emulation is versatile. It is already extensively and successfully used to debug automotive, graphics, multimedia, networking, processor, storage and practically any other digital design. Emulation has a lengthening track record.
Today’s design teams comprise hardware engineers and software developers because software and hardware have to be produced simultaneously and in conjunction – time-to-market rules.
Both groups have come to value hardware emulation’s ability to verify hardware and software at the same time. Because the emulated design is based on an actual implementation (albeit not a timing-accurate one), it offers a practical functional representation of the design before silicon itself is ready for testing. Emulation’s value here is measurable and defensible. It’s a bridge between two culturally different disciplines that greatly cuts delivery time.
Beyond that, we can cite the increasing deployment of hardware emulation as a datacenter resource shared across the enterprise. It can be accessed remotely by any number of users from anywhere, with its efficiency and effectiveness enabled by state-of-the-art resource management tools.
Global emulation enterprise servers support multiple large designs or a combination of large and small designs. This is a change from the traditional model, in which one emulator supported one engineering team at the facility where it was located, and in which capacity was also a constraint. The trend has obvious implications for how you assess an emulator’s returns: it can reach more people in more places, not just those with access to its ‘home’ lab. And, as we’ll see, it can reach them at just about any time.
Change has been accelerated by the need for a more accommodating test environment than traditional in-circuit emulation (ICE) mode. In ICE mode, still in use but waning in popularity, a physical target system (the hardware in which the design-under-test, or DUT, will sit after tape-out) provides the stimulus and processes the responses. Today, though, designers can use an emulator in conjunction with virtual target systems that drive the DUT via interfaces implemented by ‘transactors’.
Transactors are synthesizable models of protocol interfaces. They talk to the virtual target system in untimed packets of information and to the DUT in bit-level signals. Because transactors can be mapped inside the emulator, they execute at its maximum speed. In other words, Mr Financial Director, you get the biggest bang for your buck.
In transaction-based verification, users describe the virtual test environment, or testbench, at a higher level of abstraction, using at least 10X less code than in conventional hardware verification language testbenches. That is less work, done sooner, which equates to greater efficiency and faster project delivery.
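To make that division of labour concrete, here is a minimal sketch in C++. Nothing in it comes from a real emulator API: the toy protocol and the names (RegWrite, PinState, expand_to_cycles) are invented purely for illustration. The testbench author writes one untimed transaction; the transactor expands it into the many cycles of bit-level signalling the DUT actually sees, and in a real flow that expansion runs inside the emulator at full speed.

// Hypothetical sketch only: the names and the protocol are invented for
// illustration, not taken from any emulator vendor's API.
#include <cstdint>
#include <cstdio>
#include <vector>

// An untimed transaction: "write this value to this register".
// This is all the testbench author has to describe.
struct RegWrite {
    std::uint32_t addr;
    std::uint32_t data;
};

// The pin values the DUT would see on one clock cycle of a made-up
// byte-wide parallel write protocol (a valid flag plus one byte lane).
struct PinState {
    bool         valid;
    std::uint8_t byte;
};

// Stand-in for the transactor's synthesizable half: it turns one untimed
// transaction into the per-cycle, bit-level sequence the DUT needs.
// In a real flow this logic is compiled into the emulator and runs at
// emulator speed; here it simply returns the sequence so we can inspect it.
std::vector<PinState> expand_to_cycles(const RegWrite& tx) {
    std::vector<PinState> cycles;
    // Address phase: four bytes, most significant first.
    for (int shift = 24; shift >= 0; shift -= 8)
        cycles.push_back({true, static_cast<std::uint8_t>(tx.addr >> shift)});
    // Data phase: four bytes, most significant first.
    for (int shift = 24; shift >= 0; shift -= 8)
        cycles.push_back({true, static_cast<std::uint8_t>(tx.data >> shift)});
    // One idle cycle to separate transactions.
    cycles.push_back({false, 0});
    return cycles;
}

int main() {
    // Testbench view: one line per stimulus item, no clocks, no pins.
    const RegWrite tx{0x40000010u, 0xDEADBEEFu};

    // Emulator view: the same stimulus as bit-level activity over many cycles.
    const std::vector<PinState> cycles = expand_to_cycles(tx);
    for (std::size_t i = 0; i < cycles.size(); ++i)
        std::printf("cycle %zu: valid=%d byte=0x%02X\n", i,
                    static_cast<int>(cycles[i].valid),
                    static_cast<unsigned>(cycles[i].byte));
    return 0;
}

The 10X saving in testbench code follows from the same picture: every transaction the testbench expresses in a single line would otherwise have to be spelled out, clock edge by clock edge, in a signal-level testbench.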
Moreover, transaction-based emulation interests many engineering teams and semiconductor companies because it does not require that a technician be on call at all times. This is handy when a remote user logs in or another needs to swap designs: no manual intervention is needed to plug or unplug speed adapters. That’s an excellent example of emulation helping you to manage resources and potentially maintain truly continuous design flows.
Hardware emulation’s versatility also makes it useful for performance characterization and other tasks that I will describe another time. But what I have tried to summarize here are the headline values of emulation in terms of ROI.
Fewer engineers today see emulation as an expensive luxury limited to cutting-edge designs at the biggest companies. But chances are many accountants still do. Feel free to use these arguments – or even forward the article to your FD – as you look to secure more support for these increasingly valuable tools.
Emulation is no longer just an important line item in the design budget. It needs to be one of the tentpoles.