hpc [at] brin.go.id

Hardware – MAHAMERU 3

Gen 2 - Computing Science Nodes

The computing science nodes consist of CPU and GPU nodes: 1,224 CPU cores in total, about 6.3 TB of memory, and 8 GPUs. Every node is interconnected via the InfiniBand network.

CPU node

HPE Apollo r2200
(Intel(R) Xeon(R) CPU 2.10 GHz)

  • 28 nodes
  • 1008 cores
  • 3.5 TB total memory (128 GB/node) 

GPU node

HPE DL 380 Gen 9
(Intel(R) Xeon(R) CPU E5-2695)

  • 4 nodes
  • 144 CPU cores
  • Tesla P100 GPU
  • 2 TB total memory (512 GB/node) 

Dual GPU node

Dell PowerEdge R740XD
(Intel(R) Xeon Gold 6140 2.3 GHz)

  • 2 nodes
  • 72 CPU cores
  • Dual Tesla P100 GPUs
  • 768 GB total memory (384 GB/node) 
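As a quick sanity check, the per-node figures listed above can be totaled. The sketch below simply sums the node counts, cores, per-node memory, and GPUs copied from the three hardware entries; it is an illustration, not an official capacity statement.

```python
# Total the per-node figures for the Gen 2 computing science nodes.
# Tuples: (name, node_count, total_cores, memory_gb_per_node, gpus_per_node)
nodes = [
    ("HPE Apollo r2200 (CPU)",           28, 1008, 128, 0),
    ("HPE DL 380 Gen 9 (GPU)",            4,  144, 512, 1),
    ("Dell PowerEdge R740XD (dual GPU)",  2,   72, 384, 2),
]

total_cores  = sum(cores for _, _, cores, _, _ in nodes)
total_mem_tb = sum(n * mem for _, n, _, mem, _ in nodes) / 1024
total_gpus   = sum(n * g for _, n, _, _, g in nodes)

print(total_cores, round(total_mem_tb, 2), total_gpus)  # → 1224 6.25 8
```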

Gen 2 - Render Farm Nodes

Server hardware

  • HPE Apollo r2200 (10 CPU nodes)
  • Asus ESC4000 G4 (10 quad-GPU nodes)

Gen 2 - Storage Cluster

  • Dell PowerEdge R740XD
  • 2.6 PB in total, distributed across 20 nodes (132 TB/node)
  • Divided into a Hadoop cluster (10 nodes)* and a storage cluster (10 nodes)*.
*currently in development.

Gen 1

Server hardware

  • Fujitsu RX350 S7
  • Fujitsu RX200 S7
  • Fujitsu CX270 S1

Storage

  • Fujitsu Eternus DX1/200 S3

Network

  • Dual Gigabit Network
*Currently inactive; used for development only.