EML Computing Clusters


The Econometrics Laboratory (EML) has two high-performance computing clusters. These systems allow users to work with massive data sets and easily manage long-running jobs. All software available on EML Linux machines is also available on the clusters. Users can also compile programs on any EML Linux machine and then run them on the clusters.


High Priority Cluster

This cluster has four nodes, each with two 14-core CPUs. Each core has two hyperthreads, for a total of 224 processing units. Each node has 132 GB of dedicated RAM. The cluster is managed by the SLURM queuing software, which provides a standard batch queuing system through which users submit jobs. Jobs are typically submitted to SLURM using a user-defined shell script that executes one's application code; interactive use is also an option. Users may also query the cluster to see job status. As currently set up, the cluster is designed for processing single-core and multi-core/threaded jobs (at most 24 cores per user, whether in a single job or spread across multiple jobs), as well as distributed memory jobs that use MPI.
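As a minimal sketch of the batch workflow described above, a SLURM job script might look like the following. The job name, core count, and program name (`myprog`) are illustrative placeholders, not EML-specific requirements; the core request simply needs to stay within the 24-core-per-user limit.

```shell
#!/bin/bash
# Hypothetical SLURM job script (job.sh); names and paths are illustrative.
#SBATCH --job-name=sim          # job name shown in the queue
#SBATCH --cpus-per-task=8       # threaded job using 8 cores (within the 24-core user limit)
#SBATCH --output=sim-%j.out     # log file; %j is replaced with the job ID

# Run the application; myprog stands in for your compiled program or script.
./myprog
```

The script would be submitted with `sbatch job.sh`, and job status checked with `squeue -u $USER`. For interactive use, `srun --pty bash` is the usual SLURM mechanism.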

Details on how to use the EML high priority compute cluster are available at https://eml.berkeley.edu/52


Production Cluster

This cluster has eight nodes, each with two 16-core CPUs available for compute jobs (i.e., 32 cores per node), for a total of 256 cores. Each node has 248 GB of dedicated RAM. The cluster is managed by the Sun Grid Engine (SGE) queuing software, which provides a standard batch queuing system through which users submit jobs. Jobs are typically submitted to SGE via a user-defined shell script that executes one's application code. Users may also query the cluster to see job status. As currently set up, the cluster is designed for processing single-core and multi-core/threaded jobs (at most 32 cores per job), including distributed memory computations.
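The SGE submission workflow is analogous. The sketch below is illustrative: the job name, core count, and program name (`myprog`) are placeholders, and the parallel environment name (`smp`) is an assumption — the actual environment names configured on the cluster may differ.

```shell
#!/bin/bash
# Hypothetical SGE job script (job.sh); names are illustrative.
#$ -N sim          # job name
#$ -pe smp 8       # request 8 slots in a shared-memory parallel environment (name assumed)
#$ -cwd            # run the job from the submission directory
#$ -j y            # merge stderr into stdout

# Run the application; myprog stands in for your compiled program or script.
./myprog
```

The script would be submitted with `qsub job.sh`, and job status checked with `qstat -u $USER`.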

Details on how to use the EML production cluster are available at https://eml.berkeley.edu/563