In the not-too-distant past, say 20 to 30 years ago, hardware emulators were huge, hulking machines with masses of cables and wires trailing all over the floor. I still remember my first visit to AMD in Austin, Texas, back in 1995, when I saw Quickturn's Enterprise Emulation System emulating a K5 processor. The K5 was AMD's first x86 processor developed entirely in-house. It was the archetypal example of "spaghetti syndrome," with thousands of cables running all over the room without a semblance of structure.
It took an entire engineering team of 10 people to get the system set up and running, and to keep it functioning. Setup times stretched to weeks or even months past the point when first silicon came back from the foundry, defeating the whole purpose of the exercise and often delivering pitifully little reward. The sad, often-repeated refrain was: "time to emulation."
Reliability was horrendous, with failures occurring every day or so. Designs of the day, while small by today's standards at well under one million gates, were the largest of their time and the very reason emulation was devised. Quickturn advertised a capacity of 330,000 gates for one Enterprise Emulation System box. Even then simulation was running out of steam, but with emulation it was caveat emptor. The machines were unfriendly to use and demanded highly specialized expertise, a rare and precious commodity, so they often came with an applications engineer (AE) team "in the box," which meant unlucky AEs were onsite for an unforeseen extended stay.
The cost of ownership, including the machines themselves, the infrastructure, and the personnel to operate and maintain them, was outrageous, limiting emulation's appeal and viability to the biggest designs developed at the largest companies with deep pockets. At the time, the price was about $3.50 per gate for a single Enterprise Emulation System box and somewhat less for multi-box configurations.
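For a sense of scale, here is the back-of-the-envelope arithmetic implied by those two figures, the 330,000-gate capacity and the $3.50-per-gate price. It is only an illustration derived from the numbers cited above, not a published list price:

    # Rough cost of a fully loaded Enterprise Emulation System box, circa 1995.
    # Illustrative only: derived from the capacity and per-gate price cited above.
    gates_per_box = 330_000           # advertised capacity of one box
    price_per_gate_usd = 3.50         # approximate per-gate price at the time
    box_cost_usd = gates_per_box * price_per_gate_usd
    print(f"~${box_cost_usd:,.0f} per box")   # roughly $1,155,000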
Even so, the attraction of hardware emulation was unmistakable, even though the machines were used only in in-circuit emulation (ICE) mode. What made them appealing was that the pre-silicon design under test (DUT) could be verified in the post-silicon target environment at real or quasi-real speed.
All of that has changed in recent years as EDA vendors and project teams have come to fully realize emulation's value and potential. Improvements were made with each new generation of machine, new deployment approaches have been devised, and new verification objectives and applications keep expanding.
The latest generations, then, are ushering in the golden age of hardware emulation.
Just consider: today's machines are smaller and more cost-effective, despite increased capacity that accommodates designs exceeding one billion gates. The industry has moved well beyond masses of cables and wiring to only a few.
Reliability has improved dramatically as well, with failures now occurring weeks or months apart. The number of engineers needed to support each machine has dwindled to one, or at most a small few.
The average cost of a machine has dropped to about one cent per gate, more than 100X less expensive than it was 20 years ago. In fact, emulation is the least expensive verification tool when measured in dollars per verification cycle.
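Putting the two per-gate figures side by side makes the comparison concrete; this is simply the arithmetic on the numbers quoted in this article:

    # Then-versus-now per-gate price, using the figures cited above.
    price_1995_usd = 3.50     # dollars per gate, circa 1995
    price_today_usd = 0.01    # about one cent per gate today
    print(f"{price_1995_usd / price_today_usd:.0f}X cheaper")   # 350X, i.e. "more than 100X"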
At last count, hardware emulation was used in four different modes, including ICE. The others are embedded software acceleration, simulation testbench acceleration, and transaction-based acceleration. Some project teams mix and match modes to suit their purposes.
That brings me to the next, and perhaps most important, reason hardware emulation is enjoying its moment: its varied applications and objectives. Hardware emulation can be used for everything from block- and IP-level testing to full-system validation, including embedded software validation. Verification objectives have expanded beyond traditional hardware debugging to include functional coverage checking, low-power verification, and power estimation. Performance characterization is the newest territory.
The emulation design data center is one more application getting serious attention from project teams worldwide. It is built on three technological advances: unmanned remote access, shared resources, and powerful software management.
Unmanned remote access has been made possible by transaction-based technology and the VirtuaLAB concept, which is functionally equivalent to ICE. Today, hardware emulation can be accessed remotely from anywhere in the world, just as simulation server farms have been for many years. Gone are the days of the AE team in a box. In this configuration, no human intervention is needed to connect and disconnect external peripherals.
The emulator can also be shared among several teams, combinations of hardware designers, firmware developers, and software engineers located in different time zones and geographies. Advanced job-queue management automates, simplifies, and optimizes the allocation of emulation resources, which is essential for a piece of capital equipment as expensive as an emulation platform.
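As a purely hypothetical sketch of what such job-queue management might look like in concept (the names, data structures, and policy below are invented for illustration and do not represent any vendor's actual software), a shared emulator pool can be modeled as a priority queue of jobs dispatched to whichever machines are free:

    # Hypothetical sketch: priority-based allocation of a shared emulator pool.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Job:
        priority: int                     # lower number = more urgent
        name: str = field(compare=False)  # e.g. "low-power regression", "firmware boot"

    free_emulators = ["emu-0", "emu-1"]   # pool shared across teams and time zones
    pending = []                          # jobs waiting for a free emulator

    def submit(job: Job) -> None:
        heapq.heappush(pending, job)      # queue the job by priority

    def dispatch() -> None:
        # Hand the highest-priority pending jobs to free emulators.
        while free_emulators and pending:
            job = heapq.heappop(pending)
            machine = free_emulators.pop()
            print(f"{job.name} -> {machine}")

    submit(Job(2, "block-level coverage run"))
    submit(Job(1, "full-chip firmware boot"))
    dispatch()   # the firmware boot is dispatched first, then the coverage run

A real scheduler would also track job completion, per-team priorities, and emulator configuration, but the queue-and-dispatch pattern is the essential idea.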
All of this adds up to a dramatic drop in the cost of ownership. Taken together, the machine price, infrastructure investment, and support personnel expenditures make the cost of ownership approximately 1,000X lower than in the old days.
Gone are the days when emulation was a verification tool few companies could afford and even fewer verification engineers wanted to use. These days, those engineers are basking in their debugging successes and welcoming the golden age of hardware emulation.
About Lauro Rizzatti
Lauro Rizzatti is a verification consultant. He was formerly general manager of EVE-USA and its vice president of marketing before Synopsys’ acquisition of EVE. Previously, he held positions in management, product marketing, technical marketing, and engineering. He can be reached at lauro@rizzatti.com.