Documentation

Changes and Maintenance

See the EL6 migration document for a summary of what has changed.

Acceptable Use Policies

All users must abide by the Acceptable Use Policies when using the Brazos Cluster.


Accessing the Cluster

The cluster is accessible to authorized users via the SSH protocol.
Login information
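
For example, a typical connection from a Unix-like machine looks like this (the hostname below is only a placeholder; use the login node address given on the Login information page):

    ssh username@brazos.example.edu    # replace both the username and the hostname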

Graphical login to a GNOME desktop is available using X2Go. See the Graphical Login page for more information.

Data can be moved to and from Brazos using Globus Online.


Unix Command Reference

The Brazos Cluster runs Linux and uses Unix commands for transferring files, managing files, and starting batch jobs. We've put together a Unix command quick reference sheet for your convenience. A web search will also turn up many additional resources.
Unix Command Reference Sheet
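
A few of the commands you will use most often, as a quick illustration:

    ls -l                   # list the files in the current directory with details
    cp results.dat backup/  # copy a file into another directory
    mv old.txt new.txt      # rename (move) a file
    less output.log         # page through a text file
    man cp                  # read the manual page for a command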


Storage

Disk storage is available under /home and /fdata, both of which are accessible from every cluster node.
A small /tmp space is available on each node for transient use only.
Storage information
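
Standard Unix tools can be used to check how much space is in use, for example:

    df -h /home /fdata     # total, used, and available space on each file system
    du -sh ~               # space used by your home directory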


Compilers

The Brazos cluster supports both the free GNU compilers and the commercial Intel compilers.
Compiler information
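
As a quick sketch, a simple C program can be compiled with either tool chain as shown below (the Intel compiler normally requires its module to be loaded first; see the Modules section):

    gcc -O2 -o hello hello.c    # GNU C compiler
    icc -O2 -o hello hello.c    # Intel C compiler (after loading the Intel module)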


Modules

Many of the software packages are loaded via the Lmod environment modules system.
Please see our modules documentation for details.
For a list of installed modules see Modules Available on Brazos.
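
Typical Lmod usage looks like the following; the module names are examples only, so check the list of installed modules for the exact names and versions on Brazos:

    module avail              # list the modules that can be loaded
    module load gcc           # load a module (example name)
    module list               # show the modules currently loaded
    module spider openmpi     # search all modules, including dependent ones
    module purge              # unload all loaded modules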

Batch Processing

We use the SLURM open-source workload manager for batch scheduling. All processing on the cluster must run through the batch system. Do not run large-memory or long-running applications on the cluster's login nodes; such processes will be terminated without notice.
Using SLURM on the Brazos Cluster
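
A minimal batch script, submitted with sbatch, might look like the sketch below. The resource requests and file names are placeholders; see the SLURM page above for the partitions and limits that apply on Brazos.

    #!/bin/bash
    #SBATCH --job-name=example        # a name for the job
    #SBATCH --ntasks=1                # number of tasks (processes)
    #SBATCH --time=01:00:00           # wall-clock limit (hh:mm:ss)
    #SBATCH --mem=2G                  # memory request
    #SBATCH --output=example.%j.out   # standard output file (%j expands to the job ID)

    ./my_program                      # replace with the program you want to run

Submit the script with sbatch, check on it with squeue -u $USER, and cancel it with scancel followed by the job ID.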


Libraries and Applications

Local usage notes are available for the following packages:


MPI for Parallel Applications

Open MPI is a robust message passing library for parallel applications on the Brazos Cluster. We have compiled Open MPI in 64-bit mode using the GNU and Intel compilers. Please see our Open MPI page for usage information.
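
As a rough sketch, building and running a small MPI program with Open MPI usually follows the pattern below; the module name and process count are examples only, and the run step belongs inside a batch job, not on a login node:

    module load openmpi                   # example module name; check module avail
    mpicc -O2 -o hello_mpi hello_mpi.c    # compile with the Open MPI wrapper compiler
    mpirun -np 4 ./hello_mpi              # run 4 MPI processes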

MVAPICH2 is a high-performance implementation of MPI for use on our InfiniBand nodes. MVAPICH2 has been compiled with the GCC and Intel compilers. Documentation can be found on our MVAPICH2 page.
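
Usage with MVAPICH2 is similar; the module name is again illustrative, and the MVAPICH2 page describes the recommended way to launch jobs under the batch system:

    module load mvapich2                  # example module name
    mpicc -O2 -o hello_mpi hello_mpi.c    # MVAPICH2 also provides the mpicc wrapper
    mpiexec -np 4 ./hello_mpi             # launch inside a batch job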


E-mail Lists - Getting Help

E-mail List Info


Please direct questions and comments to our contact page.