AMD Radeon Instinct MI8 professional graphics card

This page provides detailed information about the graphics card: full specifications, photos, rating, and comparisons. Updated: 2017-06-21
Overview
Manufacturer: AMD
Series: Radeon Instinct
Release date: June 20th, 2017
PCB code: 109-C88237
Board model: AMD C882
Graphics Processor
GPU model: Fiji
Architecture: GCN 3.0
Process node: 28 nm
Die size: 596 mm²
Transistor count: 8.9 billion
Transistor density: 14.9M transistors/mm²
Stream processors: 4096
Texture mapping units (TMUs): 256
Render output units (ROPs): 64
Clock Speeds
Maximum GPU clock: 1000 MHz
Memory clock: 500 MHz
Effective memory speed: 1000 Mbps
Memory Configuration
Memory size: 4096 MB
Memory type: HBM
Memory bus: 4096-bit
Memory bandwidth: 512.0 GB/s
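
The 512.0 GB/s figure follows directly from the bus width and effective memory speed. A minimal back-of-the-envelope check in Python (variable names are illustrative only):

    # Peak memory bandwidth = bus width (bits) x effective data rate (Gbps) / 8 bits per byte
    bus_width_bits = 4096        # HBM1: four stacks, 1024-bit each
    data_rate_gbps = 1.0         # 500 MHz double data rate -> 1 Gbps per pin

    bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
    print(bandwidth_gb_s)        # 512.0 GB/s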

Physical Characteristics
Interface: PCI-Express 3.0 x16
Slot width: 2-slot
Power connectors: 1× 8-pin
Max power draw (TDP/TBP): 175 W
Recommended PSU: 500 W
API Support
DirectX
11.2
Vulkan
1.0
OpenGL
4.4
OpenCL
2.0
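
Because the card exposes OpenCL 2.0, the resources listed above (compute units, clock, memory size) can also be read back at runtime. A minimal sketch using the pyopencl package, assuming it is installed alongside a working OpenCL driver:

    import pyopencl as cl

    # Enumerate GPU devices and print the figures the spec sheet lists.
    for platform in cl.get_platforms():
        for dev in platform.get_devices(device_type=cl.device_type.GPU):
            print("Device:           ", dev.name)
            print("Compute units:    ", dev.max_compute_units)         # 64 expected on Fiji
            print("Max clock (MHz):  ", dev.max_clock_frequency)       # ~1000 MHz
            print("Global mem (MB):  ", dev.global_mem_size // 2**20)  # ~4096 MB
            print("OpenCL version:   ", dev.version)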

Performance
Pixel fill rate: 64 GPixel/s
Texture fill rate: 256 GTexel/s
Peak compute performance: 8.2 TFLOPS
Performance per watt: 46.8 GFLOPS/W
Performance per mm²: 13.7 GFLOPS/mm²
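
These derived figures follow from the GPU configuration above; a short Python sketch of the arithmetic (peak FLOPS assumes the usual 2 FMA operations per stream processor per clock):

    # Derived MI8 performance metrics from the raw configuration.
    rops, tmus     = 64, 256
    clock_ghz      = 1.0                    # 1000 MHz maximum clock
    peak_gflops    = 4096 * 2 * clock_ghz   # 4096 SPs x 2 ops/clock -> 8192 GFLOPS (~8.2 TFLOPS)
    tdp_w, die_mm2 = 175, 596

    print(rops * clock_ghz)        # 64 GPixel/s
    print(tmus * clock_ghz)        # 256 GTexel/s
    print(peak_gflops / tdp_w)     # ~46.8 GFLOPS/W
    print(peak_gflops / die_mm2)   # ~13.7 GFLOPS/mm²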




Related accelerators in the AMD Instinct family:

Model                         | Cores | Maximum clock | Memory speed | Memory configuration
AMD Instinct MI250X           | 14080 | 1700 MHz      | 3.2 Gbps     | 128 GB, 8192-bit
AMD Instinct MI250            | 13312 | 1700 MHz      | 3.2 Gbps     | 128 GB, 8192-bit
AMD Instinct MI100            | 7680  | 1504 MHz      | 2.4 Gbps     | 32 GB HBM2, 4096-bit
AMD Instinct MI210            | 6656  | 1700 MHz      | 3.2 Gbps     | 64 GB, 4096-bit
AMD Radeon Instinct MI60      | 4096  | 1800 MHz      | 2 Gbps       | 32 GB HBM2, 4096-bit
AMD Radeon Instinct MI25      | 4096  | 1501 MHz      | 1.9 Gbps     | 16 GB HBM2, 2048-bit
AMD Vega Cube                 | 4096  | 1501 MHz      | 1.9 Gbps     | 16 GB HBM2, 2048-bit
AMD Radeon Instinct MI8       | 4096  | 1000 MHz      | 1 Gbps       | 4 GB HBM1, 4096-bit
AMD Radeon Instinct MI50 32GB | TBC   | TBC           | TBC          | TBC
AMD Radeon Instinct MI50      | 3840  | 1746 MHz      | 2 Gbps       | 16 GB HBM2, 4096-bit
AMD Radeon Instinct MI6       | 2304  | 1237 MHz      | 7 Gbps       | 16 GB GDDR5, 256-bit
Other graphics cards based on the same Fiji GPU:

Model                   | Cores | Maximum clock | Memory speed | Memory configuration
AMD Project Quantum     | 16384 | -             | 1 Gbps       | 64 GB HBM1, 4096-bit
AMD Radeon Pro Duo      | 8192  | 1000 MHz      | 1 Gbps       | 16 GB HBM1, 4096-bit
AMD FirePro S9300 X2    | 8192  | -             | 1 Gbps       | 16 GB HBM1, 4096-bit
AMD Radeon Pro SSG      | 4096  | 1050 MHz      | 1 Gbps       | 4 GB HBM1, 4096-bit
AMD Radeon R9 Fury X    | 4096  | 1050 MHz      | 1 Gbps       | 4 GB HBM1, 4096-bit
AMD Radeon R9 Nano      | 4096  | -             | 1 Gbps       | 4 GB HBM1, 4096-bit
AMD Radeon Instinct MI8 | 4096  | 1000 MHz      | 1 Gbps       | 4 GB HBM1, 4096-bit
AMD Radeon R9 Fury      | 3584  | -             | 1 Gbps       | 4 GB HBM1, 4096-bit

PERFORMANCE

8.2 TFLOPS of Peak Half or Single Precision Performance with 4GB HBM1 1

  • 8.2 TFLOPS peak FP16 | FP32 GPU compute performance.

    With 8.2 TFLOPS peak compute performance on a single board, the Radeon Instinct MI8 server accelerator provides superior single-precision performance per dollar for machine and deep learning inference applications, along with providing a cost-effective solution for HPC development systems. 1

  • 4GB high-bandwidth HBM1 GPU memory on a 4096-bit memory interface.

    With 4GB of HBM1 GPU memory and up to 512GB/s of memory bandwidth, the Radeon Instinct MI8 server accelerator provides the perfect combination of single-precision performance and memory system performance to handle the most demanding machine intelligence and deep learning inference applications, abstracting meaningful results from new data applied to trained neural networks in a cost-effective, efficient manner.

  • 47 GFLOPS/watt peak FP16|FP32 GPU compute performance.

    With up to 47 GFLOPS/watt peak FP16|FP32 GPU compute performance, the Radeon Instinct MI8 server accelerator provides superior performance per watt for machine intelligence and deep learning inference applications. 2

  • 64 Compute Units (4,096 Stream Processors).

    The Radeon Instinct MI8 server accelerator has 64 Compute Units, each containing 64 Stream Processors, for a total of 4,096 Stream Processors that are available for running many smaller batches of data simultaneously against a trained neural network to get answers back quickly (see the arithmetic sketch after this list). Single-precision performance is crucial to these types of system installations, and the MI8 accelerator provides superior single-precision performance in a single GPU card.
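
A minimal sketch of how the compute-unit layout adds up to the headline FP16 | FP32 number (Fiji has no packed FP16 math, so peak FP16 throughput matches FP32; the Python below is illustrative arithmetic, not vendor code):

    # MI8 shader hierarchy: 64 compute units of 64 stream processors each.
    compute_units     = 64
    sp_per_cu         = 64
    stream_processors = compute_units * sp_per_cu      # 4096
    clock_ghz         = 1.0                            # 1000 MHz

    # One FMA (2 FLOPs) per stream processor per clock.
    peak_tflops_fp32 = stream_processors * 2 * clock_ghz / 1000   # ~8.2 TFLOPS
    peak_tflops_fp16 = peak_tflops_fp32                           # same rate on Fiji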

FEATURES

Passively Cooled Accelerator Using <175 Watts TDP for Scalable Server Deployments

  • Passively cooled server accelerator based on the “Fiji” architecture. The Radeon Instinct MI8 server accelerator is based on the “Fiji” architecture with a 28nm HPX process and is designed for highly-efficient, scalable server deployments for single-precision inference applications in machine intelligence and deep learning. This GPU server accelerator provides customers with great performance while consuming only 175W TDP board power.
  • 175W TDP board power, dual-slot, 6” GPU server card. The Radeon Instinct MI8 server PCIe® Gen 3 x16 GPU card is a full-height, dual-slot card designed to fit in most standard server designs, providing a highly-efficient server solution for heterogeneous machine intelligence and deep learning inference system deployments.
  • High Bandwidth Memory (HBM1) with up to 512GB/s memory bandwidth. The Radeon Instinct MI8 server accelerator is designed with 4GB of high-bandwidth HBM1 memory, allowing numerous batches of data to be handled quickly and simultaneously for the most demanding machine intelligence and deep learning inference applications and allowing meaningful results to be quickly abstracted from new data applied to trained neural networks.
  • MxGPU SR-IOV HW Virtualization. The Radeon Instinct MI8 server accelerator is designed with support of AMD’s MxGPU SR-IOV hardware virtualization technology designed to drive greater utilization and capacity in the data center.

USE CASES

Inference for Deep Learning

Today’s exponential data growth and the dynamic nature of that data have reshaped the requirements of data center system configurations. Data center designers need to build systems capable of running workloads that are more complex and parallel in nature, while continuing to improve system efficiencies. Improvements in the capabilities of discrete GPUs and other accelerators over the last decade are giving data center designers new options for building heterogeneous computing systems that help them meet these new challenges.

 

Datacenter deployments running inference applications, where lots of new, smaller data-set inputs are run at half precision (FP16) or single precision (FP32) against trained neural networks to discover new knowledge, require parallel-compute-capable systems that can quickly run data inputs across lots of smaller cores in a power-efficient manner.

 

The Radeon Instinct™ MI8 accelerator is an efficient, cost-sensitive solution for machine intelligence and deep learning inference deployments in the datacenter, delivering 8.2 TFLOPS of peak half or single precision (FP16|FP32) floating point performance in a single 175 watt TDP card. 1 The Radeon Instinct™ MI8 accelerator, based on AMD’s “Fiji” architecture with 4GB of high-bandwidth HBM1 memory and up to 512 GB/s of bandwidth, combined with Radeon Instinct’s open ecosystem approach built on the ROCm platform, provides data center designers with a highly-efficient, flexible solution for inference deployments.
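
As a rough illustration of how these headline numbers bound inference throughput, the Python sketch below divides peak compute and peak bandwidth by the per-image cost of a hypothetical network; the ~4 GFLOPs and ~100 MB per image figures are illustrative assumptions, not measured MI8 results:

    # Back-of-the-envelope throughput ceilings for a hypothetical FP32 inference workload.
    peak_tflops        = 8.2      # FP16/FP32 peak from the spec sheet
    peak_bandwidth_gbs = 512.0    # HBM1 bandwidth from the spec sheet

    flops_per_image_g  = 4.0      # assumed: ~4 GFLOPs per forward pass
    bytes_per_image_gb = 0.1      # assumed: ~100 MB of weights/activations streamed per pass

    compute_bound_ips   = peak_tflops * 1000 / flops_per_image_g   # ~2050 images/s
    bandwidth_bound_ips = peak_bandwidth_gbs / bytes_per_image_gb  # ~5120 images/s
    print(min(compute_bound_ips, bandwidth_bound_ips))             # compute-bound in this example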

Key Benefits for Inference:

  • 8.2 TFLOPS peak half or single precision compute performance 1
  • 47 GFLOPS/watt peak half or single precision compute performance 2
  • 4GB HBM1 on a 4096-bit memory interface provides high-bandwidth memory performance
  • Passively cooled accelerator using under 175 watts TDP for scalable server deployments
  • ROCm software platform provides open source Hyperscale platform
  • Open source Linux drivers, HCC compiler, tools and libraries for full control from the metal forward
  • Optimized MIOpen Deep Learning framework libraries 3
  • Large BAR Support for mGPU peer to peer
  • MxGPU SR-IOV hardware virtualization for optimized system utilizations
  • Open industry standard support of multiple architectures and open standard interconnect technologies 4

 

Heterogeneous Compute for HPC General Purpose and Development

The HPC industry is creating immense amounts of unstructured data each year, and a portion of HPC system configurations are being reshaped to enable the community to extract useful information from that data. Traditionally, these systems were predominantly CPU based, but with the explosive growth in the amount and different types of data being created, along with the evolution of more complex codes, these traditional systems don’t meet all the requirements of today’s data-intensive HPC workloads. As these types of codes have become more complex and parallel, there has been a growing use of heterogeneous computing systems with different mixes of accelerators, including discrete GPUs and FPGAs. Advances in GPU capabilities over the last decade have allowed them to be used for a growing number of these mixed-precision parallel codes, like the ones being used for training neural networks for deep learning. Scientists and researchers across the globe are now using accelerators to more efficiently process HPC parallel codes across several industries including life sciences, energy, financial, automotive and aerospace, academics, government and defense.

 

The Radeon Instinct™ MI8 accelerator, combined with AMD’s revolutionary ROCm open software platform, is an efficient entry-level heterogeneous computing solution delivering 8.2 TFLOPS of peak single precision compute performance in an efficient GPU card with 4GB of high-bandwidth HBM1 memory. 1 The MI8 accelerator is the perfect open solution for cost-effective general purpose and development systems being deployed in the Financial Services, Energy, Life Science, Automotive and Aerospace, Academic (Research & Teaching), Government Labs and other HPC industries.
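
To make the heterogeneous-compute claim concrete, the sketch below launches a minimal single-precision kernel through OpenCL, which the MI8 exposes at version 2.0. The use of the pyopencl package is an assumption about the host environment, not part of AMD’s stack:

    import numpy as np
    import pyopencl as cl

    # Create a context and queue on the first available OpenCL device (e.g. the MI8).
    ctx   = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)

    mf    = cl.mem_flags
    x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
    y_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=y)
    out   = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

    # Single-precision a*x + y, one work-item per element spread across the stream processors.
    prg = cl.Program(ctx, """
    __kernel void saxpy(__global const float *x, __global const float *y,
                        __global float *out, const float alpha) {
        int i = get_global_id(0);
        out[i] = alpha * x[i] + y[i];
    }
    """).build()

    prg.saxpy(queue, (n,), None, x_buf, y_buf, out, np.float32(2.0))

    result = np.empty_like(x)
    cl.enqueue_copy(queue, result, out)
    assert np.allclose(result, 2.0 * x + y)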