The Armis2 service is the go-to cluster for sensitive data and is available to all U-M researchers.
Key features of Armis2
- 24 standard nodes with Intel Haswell processors, each with 24 cores; more capacity will be added soon
- Slurm is the resource manager and scheduler
- The scratch storage system provides high-performance temporary storage for compute jobs; see the User Guide for quotas and the file purge policy
- The EDR InfiniBand network provides 100 Gb/s to each node
- Large memory nodes have 1.5 TB of memory per node
- GPU nodes with NVIDIA K40 GPUs (4 GPUs per node); newer GPUs will be added soon
- For more information on the technical aspects, see the Armis2 configuration page.
A simplified accounting structure
On Armis2, you can use a single account to request the resources you need, including standard, GPU, and large memory nodes. If you already have an active account on Armis, it has been migrated to Armis2. See the User Guide for more information on accounts and partitions.
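As a rough sketch of what this looks like in practice, a Slurm job script can select the node type via the partition while charging a single account. The account and partition names below are placeholders, not confirmed Armis2 values; check the User Guide for the actual names.

```shell
#!/bin/bash
# Hypothetical example: the account and partition names are placeholders;
# substitute the values listed in the Armis2 User Guide.
#SBATCH --job-name=example
#SBATCH --account=example_project   # one account covers all node types
#SBATCH --partition=standard        # e.g. standard, GPU, or large-memory partition
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=01:00:00

srun hostname
```

To target GPU or large memory nodes, you would change only the partition (and add any GPU request the User Guide specifies), while the account stays the same.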
Rates: “Pay only for what you use”
Use of Armis2 is currently free. Beginning on December 2, 2019, all jobs that run on Armis2 will be subject to applicable Armis2 rates.
Data migration information for existing Armis users
All migrations from Armis to Armis2 must be completed by November 25, 2019, as Armis will not run any jobs beyond that date. View the Armis2 HPC timeline. The Armis2 User Guide has details about how the new resource manager works, updating the job submission scripts to work with Slurm, and best practices for migrating data and software to Armis2.
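As a rough illustration of the script changes involved, assuming jobs on Armis were submitted with PBS/Torque-style directives, common directives translate to Slurm roughly as follows; consult the Armis2 User Guide for the authoritative mapping.

```shell
# Sketch of common PBS/Torque -> Slurm equivalents (illustrative only).
#
# PBS/Torque                        Slurm
# #PBS -N myjob                     #SBATCH --job-name=myjob
# #PBS -A example_account           #SBATCH --account=example_account
# #PBS -l nodes=2:ppn=12            #SBATCH --nodes=2 --ntasks-per-node=12
# #PBS -l walltime=02:00:00         #SBATCH --time=02:00:00
#
# Submitting:    qsub job.sh    ->  sbatch job.sh
# Queue status:  qstat -u $USER ->  squeue -u $USER
```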
Learn how to use the new system
To learn about Armis2 and Slurm and review cluster policies, view the documentation on the Armis2 website. Additionally, ARC-TS and academic unit support teams will be offering training sessions around campus. As information becomes available, a schedule will be posted on the ARC-TS website as well as shared via Twitter and email.
Ask questions or get started by contacting email@example.com.