Facilities - IT

IT Mission

To provide an innovative and robust compute environment which facilitates research, fosters collaboration among research groups, and assists in the technical training of students studying the Earth.

  • Maintain a secure and reliable environment while providing remote access and flexible solutions to diverse needs.
  • Leverage modern technologies to provide a scalable, economical and sustainable high-performance environment.

General Information

The Institute's computational facility reflects the character of the research unit itself: a unique, shared, communal resource that allows interdisciplinary and collaborative research and training to thrive. The community computing resource enables students and faculty researchers to share not only hardware and software but also the data sets and specialized computer programs at the core of individual research projects. This sharing of intellectual achievements helps Institute researchers make new and important Earth system science and integrated assessment discoveries, share results quickly with the wider community, and provides a truly interdisciplinary environment in which to train students. The Institute supports:

  • ≈200 UNIX systems (CentOS, Fedora, Ubuntu) - ≈100 hardware, ≈100+ virtual machines (VMs)
  • ≈100 Macs
  • ≈40 Windows PCs
  • 10 Windows Servers
  • 21 managed network switches
  • 13 networked printers - 1 color, 12 monochrome
  • 5 "Fat-Node" SMP HPC systems totaling 230 cores.
  • 2 GPU-oriented HPC systems.
  • 2 Windows Server HPC systems with 100 CPU cores, 500GB of RAM, and 30+TB of local storage between the two.
  • ≈7.0 PB of disk storage
  • ≈111 websites - including 47+ CMS-based sites (Drupal, MediaWiki, WordPress, etc.)
  • 118+ databases (MySQL, PostgreSQL, SQLite, NoSQL)
  • Conference facilities - 70" HDMI display, 90" HDMI display, teleconferencing phone, Owl conference camera, and additional accessories.


The research unit has a 40Gb/s connection from the UCSB campus backbone to server rooms in the North Hall Data Center and Ellison Hall. This provides shared access to a 622Mb/s CALREN-2 connection, which in turn provides access to Internet2. High-speed layer-2 switches and wireless access points (WAPs) provide Ethernet, Fast Ethernet, Gigabit Ethernet, and Wi-Fi connectivity. The Institute's network spans three class C subnets and extends to several campus locations via VLAN tagging and additional small subnet allocations across the campus backbone. Locations include Ellison Hall, Webb Hall, Girvetz Hall, Bren, and Harder Stadium.

ERI IT staff manages a wireless network on the 6th floor of Ellison with five WAPs and also helps manage a research VPN allowing communication with remote field observatories.


The computing environment is based on a network of primarily Linux-based (x86) hardware. 

The computing environment's architecture is designed to permit rapid deployment and easy integration of new hardware. Virtual systems based on the open-source KVM and Xen projects are also available to Institute researchers, providing rapid, inexpensive, flexible, and reliable resources. Vast data sets (MODIS, TM, and AVHRR, to name just a few) are readily available online to all researchers at the Institute, as are the tools and software for modeling and other modes of scientific analysis.

ERI Storage Services provides researchers with online disk storage and automated backups on a pay-for-usage basis, offering flexible and economical digital storage where users pay only for what they use.
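As an illustration of the pay-for-usage model, the sketch below computes a monthly charge from metered usage. The rate is an entirely made-up placeholder, not ERI's actual pricing; it only shows that cost scales with the storage actually consumed.

```python
# Toy sketch of metered storage billing. RATE_PER_TB_MONTH is a
# hypothetical placeholder, NOT ERI's actual rate.

RATE_PER_TB_MONTH = 5.00  # hypothetical dollars per TB per month


def monthly_charge(usage_tb: float) -> float:
    """Charge for one month of storage at the metered rate."""
    return round(usage_tb * RATE_PER_TB_MONTH, 2)


if __name__ == "__main__":
    for tb in (0.5, 2.0, 10.0):
        print(f"{tb:5.1f} TB -> ${monthly_charge(tb):.2f}/month")
```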

High Performance Computing

Various "fat node" Linux systems are available to all ERI researchers. They are symmetric multiprocessing (SMP) shared-memory systems, each configured with job queueing software, access to scientific computing software, and more than 4PB of NFS-networked storage on the various disk servers at ERI.

  • Hammer - 48 2.1GHz AMD CPU cores, 128GB of RAM, and 7TB of local scratch disk
  • Tong - 32 2.2GHz Intel CPU cores, 256GB of RAM, and 8TB of local scratch disk
  • Anvil - 32 2.3GHz Intel CPU cores, 512GB of RAM, and 11TB of local scratch disk
  • Forge - 28 2.6GHz Intel CPU cores, 512GB of RAM, and 7TB of local scratch space
  • Tana (GPU) - two blower-cooled GTX 1080 Ti GPUs, 4 3.5GHz Intel CPU cores, 128GB of RAM, 1TB of local SSD, and 4TB of local scratch space
  • Bellows - 96 Intel Xeon Platinum 9242 CPU cores @ 2.30GHz, 1536GB of RAM, and 14TB of local NVMe scratch storage
  • Striker - 96 Intel Xeon Platinum 9242 CPU cores @ 2.30GHz, 756GB of RAM
  • Biscuit & Parallel - Windows Server HPC systems with a combined 100 CPU cores, 500GB of RAM, and 30+TB of local storage.
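Because these are shared-memory nodes rather than distributed clusters, a job typically exploits the many cores within a single system. A minimal sketch of that pattern using Python's standard multiprocessing module follows; the per-cell workload is a hypothetical stand-in, and on the fat nodes a real job would be submitted through the local queueing software rather than run interactively.

```python
# Minimal sketch of using many cores on a single shared-memory (SMP)
# node. simulate_cell is a hypothetical stand-in for a real per-task
# computation; Pool distributes the tasks across worker processes.
from multiprocessing import Pool, cpu_count


def simulate_cell(cell_id: int) -> int:
    """Hypothetical per-grid-cell computation."""
    return cell_id * cell_id


if __name__ == "__main__":
    # Cap at 8 workers so the sketch behaves the same on small machines.
    with Pool(processes=min(8, cpu_count())) as pool:
        results = pool.map(simulate_cell, range(16))
    print(results[:4])  # first few results
```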


ERI IT staff are available to assist users in gaining access to these and other larger on- and off-campus computing resources, including supercomputing resources available via the CSC compute clusters at UCSB; please ask for more information.

Desktop Computing

Windows and Mac systems predominate on the desktop and integrate with the general compute environment. High-performance SATA-based RAID disk arrays allow participants to add disk storage to the environment inexpensively. Nightly backups to off-site RAID arrays minimize the risk of critical data loss. Networked printers, including color laser printers, are available. Finally, a full complement of computational, image-processing, statistical, database, graphical, scientific-visualization, and animation software is available for use by our researchers.

Desktop backups are provided using BackupPC, a free and open-source product that provides data compression and data deduplication to minimize storage costs. Management and restores are accomplished via a web interface. Duplicati provides a flexible backup solution for roaming systems (laptops).
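BackupPC's space savings come largely from pooling identical file contents so that each unique blob is stored only once. A minimal sketch of that content-hash deduplication idea (not BackupPC's actual implementation, which pools files on disk):

```python
# Minimal sketch of content-hash deduplication, the idea behind
# BackupPC's file pooling: identical file contents are stored once
# and referenced by hash. Not BackupPC's actual code.
import hashlib


def dedup_store(files: dict) -> tuple:
    """Map each path to a content hash; store each unique blob once."""
    index = {}  # path -> content hash
    pool = {}   # content hash -> stored blob
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        pool.setdefault(digest, data)  # keep only the first copy
        index[path] = digest
    return index, pool


if __name__ == "__main__":
    files = {
        "alice/report.txt": b"results",
        "bob/copy-of-report.txt": b"results",  # duplicate content
        "carol/notes.txt": b"draft",
    }
    index, pool = dedup_store(files)
    print(len(files), "files ->", len(pool), "stored blobs")  # 3 files -> 2 stored blobs
```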