Theta File Systems

Help Desk

Hours: 9:00am-5:00pm CT M-F


Theta has one discrete file system, theta-fs0, which is used for project data. Theta-fs0 is an Intel Enterprise Edition Lustre parallel file system mounted as /lus-projects or /projects.

Theta also shares a GPFS home file system with Mira, mira-home. The home file system is mounted as /home and should generally be used for small files and any binaries to be run on the XC40. The performance of this file system is reasonable, but intensive I/O from the compute nodes should instead go to the project data file systems, which are fast parallel systems with far more storage space and greater I/O performance than the home directory space.

The Mira home file system is regularly backed up to tape and has snapshot capability. The data file system is not backed up. It is the user’s responsibility to ensure that copies of any critical data on the data file system have either been archived to tape or stored elsewhere.

mira-home
  Accessible from: Theta
  Type: GPFS
  Path: /home
  Production: Yes
  Backed up: Yes
  Usage: general use

lus-projects
  Accessible from: Theta
  Type: Lustre
  Path: /projects, /lus-projects, or /lus/theta-fs0/projects
  Production: Yes
  Backed up: No
  Usage: intensive job output, large files

Node SSD
  Accessible from: Theta compute nodes only
  Type: ext3
  Path: /local/scratch
  Production: Yes (by request only)
  Backed up: No
  Usage: local node scratch during a run


Available Directories

Home Directories

  • Created when an allocation (INCITE, Discretionary, etc.) is granted.
  • Located under /home (mira-home).
  • Each home directory is subject to a quota based on user file ownership. The default quota is 100 GB. 
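Because the quota is based on file ownership, it helps to know where your space is going. A minimal sketch using standard coreutils (no site-specific quota tool is assumed here):

```shell
# Total size of your home directory as du sees it.
du -sh "$HOME"

# Largest top-level entries, to find what is eating the quota.
du -sh "$HOME"/.[!.]* "$HOME"/* 2>/dev/null | sort -rh | head -n 10
```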

Home Project Directories

  • Available to users by special request.
  • Located under /home/projects.
    • These directories can be requested to hold small amounts of code to be shared among a group of users.

Note: mira-home has user quotas in place, so any files written to a home project directory count against the file owner’s quota. For quota information, see the Disk Quota page.
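Since files in a home project directory count against the owner's quota but are meant to be shared, the usual Unix pattern is a setgid directory owned by a shared group. A sketch, with placeholder names that default to values which exist on any system; on Theta these would be the home project directory and the project's Unix group:

```shell
# Set up a directory for group sharing. SHARE_DIR and SHARE_GROUP are
# hypothetical placeholders, not actual Theta names.
SHARE_DIR=${SHARE_DIR:-$(mktemp -d)}
SHARE_GROUP=${SHARE_GROUP:-$(id -gn)}

chgrp "$SHARE_GROUP" "$SHARE_DIR"
chmod 2750 "$SHARE_DIR"   # setgid bit: new files inherit the shared group
```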

Project Directories

  • Created when an allocation (INCITE, discretionary, etc.) is granted.
  • Located under /projects or /lus-projects.

These project spaces do not have user quotas but a directory quota, meaning that ALL files contained within a project directory, regardless of the username, cannot exceed the disk space allocation granted to the project. For more information on quotas, see the Disk Quota page.
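To see how full a project directory is, Lustre's `lfs quota -g` reports usage against a group quota (this assumes the directory quota is enforced per Unix group, which is a common Lustre configuration; confirm on the Disk Quota page). A sketch with a portable `du` fallback, using placeholder defaults so it runs anywhere:

```shell
# Check project directory usage. Defaults are placeholders: run from
# inside the project directory, or set PROJ_DIR and PROJ_GROUP.
PROJ_DIR=${PROJ_DIR:-$PWD}
PROJ_GROUP=${PROJ_GROUP:-$(id -gn)}

if command -v lfs >/dev/null 2>&1; then
    # Lustre: report usage and limits for the project's Unix group.
    lfs quota -g "$PROJ_GROUP" "$PROJ_DIR"
else
    # Fallback: total size of everything under the directory.
    du -sh "$PROJ_DIR"
fi
```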

Local Node SSD

Access to the SSDs is disabled by default. Project PIs may request access by emailing support; a use case will need to be provided.

SSD Information

  • Local scratch SSD storage on compute nodes for running jobs
  • Completely local non-parallel filesystem
  • Located at /local/scratch
  • Wiped between Cobalt jobs
  • No automatic backups provided
  • Refer to this page for requesting SSDs in Cobalt.
  • Information on the current SSD drives in use is below:

Model SM961 drives - Specifications

  Capacity: 128 GB
  Sequential Read: 3100 MB/s
  Sequential Write: 700 MB/s

Model SM951 drives - Specifications

  Capacity: 128 GB
  Sequential Read: 2150 MB/s
  Sequential Write: 1550 MB/s
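Because /local/scratch is node-local, fast, and wiped between Cobalt jobs, the usual pattern is to stage inputs onto the SSD at the start of a run and copy results back to the parallel file system before the job ends. A sketch of that staging, with the application and project paths left as hypothetical placeholders (the run lines are commented out):

```shell
# Stage through the node-local SSD inside a job script.
SCRATCH=${SCRATCH:-/local/scratch}
# Fall back off-node so the sketch also runs where no SSD is mounted.
[ -d "$SCRATCH" ] || SCRATCH=${TMPDIR:-/tmp}
RUN_DIR="$SCRATCH/${USER:-$(id -un)}-run"

mkdir -p "$RUN_DIR"

# Stage inputs onto the fast local SSD (paths are placeholders):
# cp /projects/MyProject/input.dat "$RUN_DIR/"

# Run against the local copy:
# aprun -n 64 ./my_app "$RUN_DIR/input.dat" > "$RUN_DIR/output.dat"

# Copy results back before the job ends -- /local/scratch is wiped
# between Cobalt jobs:
# cp "$RUN_DIR/output.dat" /projects/MyProject/results/
```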