Hardware


Workstations

The EML supports ten public workstations, all Apple iMac machines running macOS, a multi-user, multi-tasking operating system. We do not limit the number of simultaneous logins: each machine supports one user at the console plus multiple concurrent remote logins and file transfers.
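For example, a remote login or file transfer to a workstation can be made with standard ssh tools (the hostname and username below are placeholders, not actual EML machine names):

```shell
# Log in remotely to an EML workstation (hostname is hypothetical).
ssh username@workstation.eml.berkeley.edu

# Copy a file to the same machine with scp.
scp results.csv username@workstation.eml.berkeley.edu:~/
```

Multiple users can hold such sessions on the same workstation at once, alongside the user at the console.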

Servers

The EML supports ten Linux compute servers running Ubuntu. The servers range from 8 to 56 processing cores each, with memory ranging from 64 GB to 193 GB. A complete list is available on the Computing Grid page, which is accessible only to EML users. Access to the compute servers klein, hicks, and nerlove is restricted to individuals who have received instructions on the use and limitations of running jobs on those machines. Please contact the EML staff to register as a restricted compute server user or to schedule access to additional resources on the EML compute servers.
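Because core counts and memory vary across the servers, it can be useful to check the resources of the machine you are logged in to before launching a job. A minimal sketch using standard Linux commands (output will differ by server):

```shell
# Number of processing cores available on this machine.
nproc

# Total and available memory, in human-readable units.
free -h
```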

Computing Cluster

The EML also has a high-performance computing cluster with four nodes, each containing two 16-core CPUs available for compute jobs and 248 GB of dedicated RAM. The cluster is managed by the SLURM queueing software, which provides a standard batch queueing system through which users submit jobs. Jobs are typically submitted to Slurm via a user's shell script, which executes one's application code; users may also query the cluster to see job status. As currently set up, the cluster is designed for processing single-core and multi-core/threaded jobs (at most 32 cores per job), but not for distributed-memory computations (MPI is not currently available). All software running on EML Linux machines and compute servers is also available on the cluster, and users can compile programs on any EML Linux machine and then run them on the cluster. Details on how to use the EML compute cluster are available at https://eml.berkeley.edu/52
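As a sketch, a minimal Slurm batch script might look like the following; the job name, output file, core count, and program name are illustrative assumptions, not EML defaults:

```shell
#!/bin/bash
#SBATCH --job-name=myjob        # illustrative job name
#SBATCH --cpus-per-task=4       # threaded job using 4 cores (at most 32 per job)
#SBATCH --output=myjob.out      # file to capture the job's output

# Run the application code; replace with your own program.
./my_analysis
```

Such a script would typically be submitted with `sbatch myjob.sh`, and the status of queued and running jobs checked with `squeue`.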

Condo Node at Savio

EML users may access the condo node at Savio, the high-performance computing cluster managed by the Berkeley Research Computing (BRC) program. This gives EML users priority access to 2 nodes; they are also entitled to use extra resources available on the Savio cluster through a low-priority QoS. More information is available at the Savio website.

Printers

Two network printers are available in the public lab (616 Evans).