
High-Performance Computing

Frequently Asked Questions

  • There are no costs to use any of the research computing solutions, although additional storage may incur an annual assessment.
  • UNCW Cuttlefish: On-site cluster. Access is currently limited to the Center for Marine Science (CMS) research efforts as UNCW onboards the cluster.

  • NCSU Hazel: This cluster is open to faculty and staff, as well as students who have a faculty sponsor.

  • NCShare: This platform provides both container-based and traditional HPC solutions. It is available to faculty, staff, and students. NCShare does not allow restricted or sensitive data to be stored in its environment.

  • Faculty and staff can request a project on an HPC cluster.
  • Faculty may also request a temporary project for a course or research.

  • The request below can also be used to arrange a consultation to determine whether HPC is right for you.

Request Research Computing (HPC) Access

  • 14 nodes
  • 920 cores on Intel Sapphire Rapids processors
  • Two H100 GPUs
  • Nodes range from 256GB to 2TB of memory

Overview: 

  • All users get a 1TB home directory for storing scripts, small applications, environment files, etc. 
  • Lab/Project space is also available for each group. 
  • Additional space may be available for an annual cost. 
  • The cluster includes 107TB of scratch space used for temporary storage for running jobs.
    • Files unused for 30 days are automatically removed from this space. 

Home Directory:

  • 1TB home directories
  • Located at /storage/<department>/<lab_name>/<username>
  • For housing scripts, small applications, environment files, etc.

Scratch Space:

  • 107TB of fast NVMe storage
  • For housing data for actively running jobs and analysis
  • Scratch space is NOT backed up
  • Files not accessed within 30 days are AUTOMATICALLY DELETED
  • Completed results should be transferred off scratch space promptly to avoid losing work; a transfer sketch follows the storage listing below.

Available Storage:

  • 2.5PB of traditional storage
  • 1TB per shared lab/project
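
As an illustrative sketch of moving finished results off scratch before the 30-day purge, the commands below copy a job's output into lab/project storage. The scratch path shown is an assumption (the actual scratch location is not listed here), and the destination follows the placeholder pattern used above; substitute your real directories.

    # Copy results from scratch (path assumed) into lab/project storage (placeholders as above).
    rsync -av /scratch/<username>/myjob/results/ /storage/<department>/<lab_name>/myjob_results/

    # After verifying the copy, remove the scratch copy to free space before the 30-day purge.
    ls /storage/<department>/<lab_name>/myjob_results/
    rm -r /scratch/<username>/myjob/results/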

Slurm is the job scheduler used for submitting and queuing work.
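
A minimal sketch of the day-to-day Slurm workflow; the script name job.sh is just an example:

    sbatch job.sh        # submit a batch script; Slurm returns a job ID
    squeue -u $USER      # check the status of your queued and running jobs
    sinfo                # list the partitions (queues) and node states on the cluster
    scancel <jobid>      # cancel a job by its ID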

General Queue

  • Default queue jobs can run up to 14 days.
  • Access to 12 nodes with 256GB (x8) and 512GB (x4) of memory.
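
A sketch of a batch script targeting this queue. The partition name "general" is an assumption, not a confirmed cluster setting; run sinfo to see the actual partition names.

    #!/bin/bash
    #SBATCH --job-name=my_analysis
    #SBATCH --partition=general     # assumed name; confirm with sinfo
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=64G               # within the 256GB/512GB node limits
    #SBATCH --time=7-00:00:00       # up to 14 days allowed on this queue

    ./my_program input.dat          # replace with your actual workload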

Interactive Queue

  • 12-hour queue for when you need to interact with the job as it runs.
  • Access to 12 nodes with 256GB (x8) and 512GB (x4) of memory.
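
A sketch of launching an interactive session with srun; the partition name "interactive" is an assumption, so confirm it with sinfo.

    # Request a 2-hour interactive shell on one node (values are illustrative).
    srun --partition=interactive --time=02:00:00 --cpus-per-task=4 --mem=16G --pty bash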

Highmem Queue

  • Highmem queue jobs can run up to 14 days.
  • Access to 1 node with 2TB of memory.
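
A sketch of the directives that would target the 2TB node, again assuming the partition is named "highmem":

    #!/bin/bash
    #SBATCH --partition=highmem     # assumed name; confirm with sinfo
    #SBATCH --mem=1500G             # large-memory request; the node has 2TB total
    #SBATCH --time=10-00:00:00      # up to 14 days allowed on this queue

    ./large_memory_job              # replace with your actual workload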

GPU Queue

  • 3-day queue for when you need a GPU resource.
  • Single node with 256GB of CPU memory and 80GB of memory per GPU.
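
A sketch of a GPU batch script; the partition name "gpu" is an assumption, and --gres is the standard Slurm way to request GPUs (confirm both with sinfo and the cluster documentation).

    #!/bin/bash
    #SBATCH --partition=gpu         # assumed name; confirm with sinfo
    #SBATCH --gres=gpu:1            # request one H100 (80GB of GPU memory each)
    #SBATCH --mem=64G               # CPU memory; the node has 256GB total
    #SBATCH --time=2-00:00:00       # up to 3 days allowed on this queue

    nvidia-smi                      # confirm the GPU is visible inside the job
    python train.py                 # replace with your actual workload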

For information on getting started and requesting access, please visit our How do I request High-Performance Computing (HPC)? knowledge base article.

Please return to this area in the future for detailed cluster information that can be used in grant proposals.

  • User-friendly GUI to run JupyterLab
  • Environment Reproducibility
  • Teaching Various Technologies
  • Collaboration
  • Continuous Integration and Deployment (CI/CD)

For information on getting started and requesting access, please visit our How do I request High-Performance Computing (HPC)? knowledge base article.