AI Inferencing in Data Centers: Breaking the Efficiency-Cost Tradeoff

Training and inference are the two crucial aspects of AI processing in data centers. Learn the differences between the two and the cost-efficiency issues involved. The execution of artificial intelligence (AI) workloads in data centers (Figure 1) involves these two crucial processes. At first glance, they appear similar: both involve reading data, processing it, and generating …

A Deep Dive into SoC Performance Analysis: Optimizing SoC Design Performance Via Hardware-Assisted Verification Platforms

Part 2 of 2 – Performance Validation Across Hardware Blocks and Firmware in SoC Designs. Part 2 explores the performance validation process across hardware blocks and firmware in System-on-Chip (SoC) designs, emphasizing the critical role of Hardware-Assisted Verification (HAV) platforms. It outlines the validation workflow driven by real-world applications and best practices for leveraging HAV …

A closer look at LLMs’ hyper-growth and AI parameter explosion

The rapid evolution of artificial intelligence (AI) has been marked by the rise of large language models (LLMs) with ever-growing numbers of parameters. From early iterations with millions of parameters to today’s tech giants boasting hundreds of billions or even trillions, the sheer scale of these models is staggering. Table 1 outlines the number of parameters …

A Deep Dive into SoC Performance Analysis: What, Why, and How

Part 1 of 2 – Essential Performance Metrics to Validate SoC Performance. Part 1 provides an overview of the key performance metrics across three foundational blocks of System-on-Chip (SoC) designs that are vital for success in the rapidly evolving semiconductor industry, and presents a holistic approach to optimizing SoC performance, highlighting the need for …