
I634 - AMD Logic and Computation Lab.



The AMD Logic and Computation Lab (AMD邏輯及運算實驗室) focuses on logic design and advanced computing architectures. We explore how to design and optimize processors (CPUs) and the hardware that supports artificial intelligence (AI) computing. Students will learn to design basic digital logic circuits, understand CPU architecture, and explore ways to improve computational efficiency in support of developing AI technologies.

📌Logic Design

Logic design is the core of digital circuit design: it covers how logic gates (AND, OR, NOT, and so on) are combined to implement the basic arithmetic units in computers. These arithmetic units are the foundation of the various functions in processors and other electronic devices. Students will learn to design everything from basic digital circuits (such as adders and multipliers) to complex control units. Logic design is the starting point of any hardware design and is crucial for improving performance, efficiency, and functionality.
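As a concrete illustration, the sketch below builds a 1-bit full adder purely from AND, OR, and XOR primitives and chains it into a small ripple-carry adder. It is a minimal Python model rather than an HDL description, and the function names are illustrative only, not taken from any course material.

```python
# Minimal gate-level sketch in Python (illustrative, not HDL or lab code):
# a 1-bit full adder built only from AND/OR/XOR, chained into a 4-bit
# ripple-carry adder.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, cin):
    """1-bit full adder: returns (sum_bit, carry_out)."""
    s1 = XOR(a, b)
    s = XOR(s1, cin)
    cout = OR(AND(a, b), AND(s1, cin))
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (least significant bit first)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 6 (0110) + 3 (0011) = 9 (1001); bits are listed LSB first
print(ripple_carry_add([0, 1, 1, 0], [1, 1, 0, 0]))  # ([1, 0, 0, 1], 0)
```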

📌CPU Design

The CPU (Central Processing Unit) is the core of a computer system, responsible for executing program instructions and controlling the system's operations. CPU design includes:

  • Processor architecture: defines the basic operation of the CPU, including the design of the instruction set architecture (ISA), memory management, and the arithmetic units, among other components.

  • Pipelining design: pipelining allows multiple instructions to be processed concurrently, overlapping their execution to improve the CPU's computational efficiency and speed (a minimal pipeline sketch follows this list).

  • Multi-core processors: modern CPUs often use multi-core designs in which each core can process instructions independently, which is crucial for parallel computing.

Students will learn how to design and optimize these key components to make processors more powerful and efficient.
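To make the pipelining idea concrete, the following minimal Python sketch (an assumed example, not lab material) models an ideal classic 5-stage pipeline (IF, ID, EX, MEM, WB) with no hazards: because stages overlap, N instructions finish in N + 4 cycles instead of 5 * N.

```python
# Minimal sketch of ideal 5-stage pipelining (IF, ID, EX, MEM, WB), assuming
# no hazards or stalls: instruction i enters stage s at cycle i + s, so
# N instructions retire in N + 4 cycles rather than 5 * N.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def cycles(num_instructions: int, pipelined: bool) -> int:
    """Total cycles to retire num_instructions under this ideal model."""
    if pipelined:
        return num_instructions + len(STAGES) - 1   # stages overlap
    return num_instructions * len(STAGES)           # one instruction at a time

def print_schedule(num_instructions: int) -> None:
    """Print which stage each instruction occupies, cycle by cycle."""
    for i in range(num_instructions):
        timeline = ["  . "] * i + [f"{s:^4}" for s in STAGES]
        print(f"I{i}: " + "".join(timeline))

if __name__ == "__main__":
    print_schedule(4)
    print("non-pipelined cycles for 8 instructions:", cycles(8, pipelined=False))  # 40
    print("pipelined cycles for 8 instructions:   ", cycles(8, pipelined=True))    # 12
```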
     

📌AI Computing

With the rapid development of artificial intelligence technology, the demand for hardware capable of accelerating AI computations is growing significantly. AI computing often requires processing large amounts of data, which places extremely high demands on computational resources.

  • Dedicated AI processors: processors such as GPUs (Graphics Processing Units) or TPUs (Tensor Processing Units) are specifically designed for machine learning and deep learning tasks, enabling efficient large-scale parallel computation.

  • Hardware accelerators: beyond traditional CPUs, AI computing often relies on additional hardware accelerators to handle computation-intensive tasks such as matrix operations or deep neural network calculations (see the matrix-multiplication sketch after this list).

  • Computing architecture design: the lab researches how to design and optimize hardware architectures that support AI computing, improving the processing speed and performance of AI applications.
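To connect these points to code, the sketch below (an assumed NumPy example, not lab code) shows why accelerators center on matrix operations: a single dense neural-network layer is essentially one large matrix multiply plus a bias, and GPUs/TPUs are built to run the underlying multiply-accumulate operations in massive parallel.

```python
# Assumed example (not lab code): one dense layer with ReLU is a matrix
# multiply plus bias; the FLOP count shows why AI hardware optimizes
# large-scale parallel multiply-accumulate operations.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 64, 1024, 4096

x = rng.standard_normal((batch, d_in), dtype=np.float32)   # input activations
w = rng.standard_normal((d_in, d_out), dtype=np.float32)   # layer weights
b = np.zeros(d_out, dtype=np.float32)                      # bias

y = np.maximum(x @ w + b, 0.0)  # dense layer + ReLU

# Roughly 2 * batch * d_in * d_out multiply-accumulate FLOPs for one layer;
# a full model runs many such layers per input, which is the workload that
# dedicated accelerators parallelize.
print(y.shape, f"{2 * batch * d_in * d_out / 1e9:.2f} GFLOPs")
```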