Supercomputer System

Those who wish to use our Large-Scale Computer Systems are required to obtain a user ID (account).

System Configurations

System A (Camphor 2), System B (Laurel 2), System C (Cinnamon 2), System E (Camellia), and the storage system are in operation.

System A (nickname: Camphor 2)

  Specifications
    Machine: Cray XC40
    Number of Nodes: 1,800
    Performance: 5.48 PFlops
    Total Memory Capacity: 196.9 TB
    Network Topology: Dragonfly
    Bisection Bandwidth: 13.5 TB/sec
  Node Specifications
    Processors (Cores): 1 (1 × 68 = 68)
    Performance: 3.05 TFlops
    Memory: 96 GB + 16 GB
    Injection Bandwidth: 15.75 GB/sec
    Interconnect: Aries
  Processor Specifications
    Processor: Intel Xeon Phi (KNL)
    Architecture: x86-64
    Clock: 1.4 GHz
    Number of Cores: 68
    Performance: 3.05 TFlops
  High-Speed Auxiliary Storage
    System Name: Cray DataWarp
    Total Capacity: 230 TB
    I/O Performance: 200 GB/sec
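
The peak figures above can be cross-checked from clock × cores × floating-point operations per cycle. Below is a minimal sketch in C; the 32 double-precision flops per cycle per core is an assumption based on the published Knights Landing architecture (two AVX-512 FMA units per core), not a value from the table.

    /* Cross-check of System A's peak figures.
       Assumption: 32 DP flops/cycle/core (AVX-512, two FMA units),
       the published figure for Xeon Phi "Knights Landing". */
    #include <stdio.h>

    int main(void) {
        const double ghz   = 1.4;    /* clock, from the table above   */
        const int    cores = 68;     /* cores per node                */
        const int    nodes = 1800;   /* nodes in the system           */
        const double fpc   = 32.0;   /* assumed DP flops/cycle/core   */

        double node_gflops = ghz * fpc * cores;          /* 3046.4, i.e. ~3.05 TFlops */
        double sys_pflops  = node_gflops * nodes / 1e6;  /* ~5.48 PFlops              */
        printf("node: %.1f GFlops, system: %.2f PFlops\n", node_gflops, sys_pflops);
        return 0;
    }

The same arithmetic reproduces both the 3.05 TFlops per-node and the 5.48 PFlops system figures in the table.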

System B (nickname: Laurel 2)

  Specifications
    Machine: Cray CS400 2820XT
    Number of Nodes: 850
    Performance: 1.03 PFlops
    Total Memory Capacity: 106.3 TB
    Network Topology: Fat tree
    Bisection Bandwidth: 5.1 TB/sec
  Node Specifications
    Processors (Cores): 2 (2 × 18 = 36)
    Performance: 1.21 TFlops
    Memory: 128 GB
    Injection Bandwidth: 12.1 GB/sec
    Interconnect: Omni-Path
  Processor Specifications
    Processor: Intel Xeon (Broadwell)
    Architecture: x86-64
    Clock: 2.1 GHz
    Number of Cores: 18
    Performance: 605 GFlops

System C (nickname: Cinnamon 2)

  Specifications
    Machine: Cray CS400 4840X
    Number of Nodes: 16
    Performance: 42.4 TFlops
    Total Memory Capacity: 48.0 TB
    Network Topology: Fat tree
    Bisection Bandwidth: 193.6 GB/sec
  Node Specifications
    Processors (Cores): 4 (4 × 18 = 72)
    Performance: 2.65 TFlops
    Memory: 3 TB
    Injection Bandwidth: 24.3 GB/sec
    Interconnect: Omni-Path
  Processor Specifications
    Processor: Intel Xeon (Haswell)
    Architecture: x86-64
    Clock: 2.3 GHz
    Number of Cores: 18
    Performance: 662.5 GFlops
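
The peak figures for Systems B and C can be cross-checked the same way. A small sketch, assuming 16 double-precision flops per cycle per core, the published figure for Haswell and Broadwell Xeons (AVX2 with two FMA units):

    /* Cross-check of the System B and C peak figures.
       Assumption: 16 DP flops/cycle/core (AVX2, two FMA units). */
    #include <stdio.h>

    struct sys { const char *name; double ghz; int cores, sockets, nodes; };

    int main(void) {
        const double fpc = 16.0;  /* assumed DP flops/cycle/core */
        struct sys systems[] = {
            { "System B (Laurel 2)",   2.1, 18, 2, 850 },  /* values from the tables */
            { "System C (Cinnamon 2)", 2.3, 18, 4,  16 },
        };
        for (int i = 0; i < 2; i++) {
            struct sys s = systems[i];
            double cpu_gf  = s.ghz * fpc * s.cores;   /* per processor */
            double node_tf = cpu_gf * s.sockets / 1000.0;
            double sys_tf  = node_tf * s.nodes;
            printf("%s: %.1f GFlops/CPU, %.2f TFlops/node, %.1f TFlops total\n",
                   s.name, cpu_gf, node_tf, sys_tf);
        }
        return 0;
    }

This reproduces 605 GFlops per Broadwell processor and 1.03 PFlops for System B, and 662.4 GFlops per Haswell processor (662.5 in the table, a rounding difference) and 42.4 TFlops for System C.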

System E (nickname: Camellia)

  Specifications
    Machine: Cray XC30
    Number of Nodes: 482
    Theoretical Peak Performance:
      Total: 583.6 TFlops
      Processor: 96.4 TFlops
      Coprocessor: 487.2 TFlops
    Total Memory Capacity:
      Processor: 15 TB
      Coprocessor: 3.8 TB
    Network Topology: Dragonfly
  Node Specifications
    Processors (Cores): 2 (processor: 10, coprocessor: 60)
    Theoretical Peak Performance:
      Total: 1,210.88 GFlops
      Processor: 200 GFlops
      Coprocessor: 1,010.88 GFlops
    Memory:
      Processor: DDR3-1600, 32 GB
      Coprocessor: GDDR5, 8 GB
    Interconnect: Aries
    Bisection Bandwidth: 15.7 GB/sec
  Processor Specifications
    Processor: Intel Xeon E
    Architecture: x86-64
    Clock: 2.5 GHz
    Number of Cores: 10
    Theoretical Peak Performance: 200 GFlops
  Coprocessor Specifications
    Coprocessor: Intel Xeon Phi 5120D
    Architecture: x86-64
    Clock: 1.053 GHz
    Number of Cores: 60
    Theoretical Peak Performance: 1,010.88 GFlops
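
System E's per-node peak is the sum of the host processor and the coprocessor. A sketch of the arithmetic; the flops-per-cycle values (8 for the host Xeon's AVX add and multiply units, 16 for the Xeon Phi's 512-bit FMA) are assumptions from the published architectures, not values from the table:

    /* Cross-check of System E's per-node and system peaks. */
    #include <stdio.h>

    int main(void) {
        double host_gf  = 10 * 2.5   *  8.0;   /* 200 GFlops, matches the table     */
        double copro_gf = 60 * 1.053 * 16.0;   /* 1010.88 GFlops, matches the table */
        double node_gf  = host_gf + copro_gf;  /* 1210.88 GFlops per node           */
        printf("node: %.2f GFlops\n", node_gf);
        printf("system: %.1f TFlops over 482 nodes\n", node_gf * 482 / 1000.0);
        return 0;
    }

Multiplying out over 482 nodes gives the 96.4 TFlops (processor), 487.2 TFlops (coprocessor), and 583.6 TFlops (total) system figures.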

New Storage System

The storage system is scheduled to be upgraded at the end of March 2018.

  Machine: DDN SFA14K (DDN ExaScaler)
  Physical Capacity: 24 PB ※16 PB (2016.10-2018.3)
  Effective Capacity: 18.8 PB ※12.6 PB (2016.10-2018.3)
  Data Transfer Rate: 150 GB/sec ※100 GB/sec (2016.10-2018.3)

High-speed auxiliary storage (in preparation) will be available for Systems A, B, and C.

  High-Speed Auxiliary Storage
    System A
      System Name: Cray DataWarp
      Total Capacity: 230 TB
      I/O Performance: 200 GB/sec
    Systems B and C
      System Name: DDN IME
      Total Capacity: 230 TB
      I/O Performance: 240 GB/sec
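
To give these bandwidth figures some intuition, the sketch below computes how long it would take to stream each tier's full capacity once at its quoted peak rate. This is a naive upper bound that ignores protocol overhead, contention, and mixed read/write patterns:

    /* Naive time to stream each storage tier's full capacity once. */
    #include <stdio.h>

    static void drain(const char *name, double tb, double gb_per_s) {
        double s = tb * 1000.0 / gb_per_s;   /* 1 TB = 1000 GB here */
        printf("%-26s %8.0f s (%6.1f min)\n", name, s, s / 60.0);
    }

    int main(void) {
        drain("Cray DataWarp (System A)",  230.0, 200.0);  /* ~19 min   */
        drain("DDN IME (Systems B, C)",    230.0, 240.0);  /* ~16 min   */
        drain("DDN ExaScaler (main)",    18800.0, 150.0);  /* ~35 hours */
        return 0;
    }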

Software Stack

The new software stack, shown in the accompanying figure, basically offers the same compilers and libraries as the previous one.
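
As a concrete example of what the stack supports, here is a minimal MPI program in C. The build and launch commands in the comment are an assumption based on the generic Cray programming environment (the cc compiler wrapper and the aprun launcher on XC systems), not this center's documented procedure; see the User's Guide below for what actually applies on each system.

    /* hello_mpi.c -- minimal MPI check. On a Cray XC system one would
       typically build and run it roughly like this (an assumption based
       on the standard Cray programming environment):
           cc -o hello_mpi hello_mpi.c
           aprun -n 4 ./hello_mpi
    */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }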

Service Courses

Refer to the Service Course Selection Guide and Usage Fees of ACCMS (*Japanese version only) for details of the computing resources available for each course and type, and complete the necessary application procedures to use our services.

Supercomputer System User's Guide

The Supercomputer System User's Guide explains how to log in to the systems and describes the compilers, libraries, and application software.

