Reading the transcriptions and watching the videos of these three talks delivers an exhilarating look at the semiconductor industry’s future
Source: EEWEB
By Lauro Rizzatti (Contributed Content)
Friday, September 21, 2018
Technical conferences and other industry events offer plenty of opportunities to network and make connections. Often forgotten is the rare chance to hear noted domain-specific experts describe trends and advancements in their area of focus.
This year, no other industry event seemed to capture the artificial intelligence (AI) and open-source architecture trends better than the Design Automation Conference (DAC) in June in San Francisco. I was in the audience for three keynote and pavilion talks as knowledgeable and articulate experts in AI, machine learning, and open-source architecture took the stage. Each had a different perspective, moving through the spectrum of physics, systems and architectures, and AI startups.
“A New Golden Age for Computer Architecture”
The keynote that opened a fascinating week was given by Dr. David A. Patterson, distinguished engineer at Google and professor emeritus at the University of California, Berkeley. The title of his presentation — "A New Golden Age for Computer Architecture" — just about sums up the moment the semiconductor industry finds itself in. Of course, he was referring to the open RISC-V architecture. Dr. Patterson is an integral part of the RISC-V movement; in addition to being well-known for coining the term RISC, he made significant contributions to the design of RISC processors and co-authored the book "Computer Architecture: A Quantitative Approach."
Dr. Patterson’s talk, based on the Turing Award lecture that he gave with John Hennessy, chairman of Alphabet Inc. and former Stanford University president, neatly anticipated the three later presentations while also offering a view of open-source architectures’ potential.
Dr. Patterson reviewed the history of computer architecture, identified a few big challenges long known to the industry, and outlined opportunities going forward. His remarkable and fascinating speech traveled through 50 years of computer architectures and described the inner workings of the RISC open architecture. The last part of his talk covered the RISC-V core and its open-ended architecture.
Open architectures in general, and RISC-V in particular, will be the exemplar, he believes, and where the industry will make progress. These architectures are technology-driven, which means they can be efficient at the low end, the same reason that RISC architectures became popular in mobile phones. Because the design is open, companies compete on the same instruction set, putting more companies in the marketplace and driving faster innovation. Dr. Patterson also believes that open architectures, which anyone can inspect and harden, will become the dominant vehicle for security.
Finally, he disclosed that the goal of RISC-V is to become the Linux of processors: as popular for hardware as Linux is for operating systems.
Dr. Patterson concluded his keynote on an inspiring note that well captured much of what the three other speakers said: “This next decade (will) be filled with breakthroughs like it was in the 1980s, and what a great time to be in hardware again.”
Fortunately, Dr. Patterson’s keynote was recorded, and it is well worth your time to set aside 45 minutes to watch this video.
“The Future of Computing: Pushing the Limits of Physics, Architectures, and Systems for AI”
Dr. Dario Gil, vice president of AI and IBM Q at IBM Research, started his keynote with a shout-out to the design automation community that has done so much to shape the past, the present, and the future of computing. His talk, titled “The Future of Computing: Pushing the Limits of Physics, Architectures, and Systems for AI,” offered a different take on AI. Dr. Gil inventoried the current state of AI and noted, “AI is the new IT.” While acknowledging that AI is in its infancy, he addressed where it needs to go, with an emphasis on AI’s computational substrate, different architectures, analog computing, and quantum computing as applied to AI.
The end of the AI story, he said, will be achieving artificial general intelligence, which is still quite far away. The ability to create systems that can learn and reason automatically, move across domains, and learn across arbitrary spaces remains a difficult problem to solve. The general estimate is 2050 or beyond, which is to say, as Dr. Gil pointed out, that scientists and engineers really have no idea.
In the meantime, he challenged his audience to think about how AI needs to evolve. He stated that, today, a neural network can be devised and trained to tackle a single task on a single domain with enough labeled data to achieve superhuman performance accuracy. However, he noted that the moment that a different task or domain needs to be addressed, another neural network needs to be created and trained. AI needs to evolve with an ability to create neural networks that can grow across tasks and domains with more modalities of input. Also, AI must be more distributed — centrally and at the edge — and more explainable.
Dr. Gil talked for about 45 minutes, and I have distilled here what I learned from his presentation, although there is much more to take away. Once again, I recommend viewing the video of this talk.
“How Deep-Learning Startups are Driving the Silicon World”
At the DAC Pavilion, Chris Rowen, a name familiar to the semiconductor industry as founder and CEO of Tensilica (now part of Cadence), gave a talk on “How Deep-Learning Startups are Driving the Silicon World.” In the opening minutes of his 30-minute Pavilion talk, he remarked, “It goes without saying that there is enormous interest, enthusiasm, even hype around the world of AI and deep learning.”
Rowen, currently founder and CEO of BabbleLabs and citing Google as his source, went on to observe that close to 12,000 startups say that they are doing AI for a broad range of different technologies. The level of activity by many measures — startups, popular interest, research around AI, and deep learning — is phenomenal. “In my career, I have never seen the rise of a specific category of techniques and ideas that has been so rapid,” he affirmed.
The real opportunity, and what is causing the most disruption in the silicon landscape, is the emergence of new designs that include significant deep-learning capabilities. Deep learning is a new computational model that sits alongside conventional algorithmic programming running on conventional microprocessors as an essential building block for these systems.
Deep learning is a fundamental shift in software development and in what it means to execute a piece of software, enabling new types of systems in the cloud and at the edge that trade off compute, memory, and arithmetic density. He used one intriguing development as an example: the neural network is the new circuit, creating a new category of highly technical, EDA-like tools for the optimization, training, and mapping of these neural networks onto whatever the underlying fabric may be.
This is a new computing model, claimed Rowen, adding that it is not a Turing machine but a mathematical function. It is an EDA-like opportunity, with new applications, new development tools, and the importance of dataset access, because the data is the program. He also drew the distinction between deep learning at the edge and in the cloud: in the cloud, it is about training and inference across aggregated data; at the edge, it is about high bandwidth, low power, low latency, and low cost. Neural networks are inherently diverse.
A final point from Rowen was the fundamental shift in software methods: deep learning coexists with the conventional microprocessor yet follows different dynamics of innovation, and it is having a greater impact on the silicon economy than new silicon structures themselves. Deep-learning startups are a window into the future of electronics.
Rowen closed his talk, echoing Dr. Patterson: “It’s an incredibly exciting time because of this radical transformation of at least one dimension of what computing is about.” Again, I would recommend that you take the time to watch the video of Rowen’s talk.
Reading the transcriptions and watching the videos of these three talks delivers an exhilarating look at the semiconductor industry’s future. The only conclusion to draw is that the future of the industry is very bright, indeed.
P.S. I should also note that many more videos from this year’s DAC can be found on the dacTV webpage.