Computing Resources
The Computational Sciences program at SDSU provides a variety of cluster and Linux SMP systems for use by its students and partner researchers.
The CSRC manages the SDSU Science DMZ network using infrastructure acquired through NSF CC-NIE award 12453212 (12/01/12-05/31/16). This grant implemented a Science DMZ at SDSU using Alcatel-Lucent and Brocade MLXe-4 switching infrastructure, two independent 10 Gbps uplinks to the CalREN-DC and CalREN-HPR (High-Performance Research) tiers of the California Research and Education Network (CalREN), and a 100 Gbps uplink to the HPR network acquired through SDSU’s participation in the NSF-funded Pacific Research Platform project. Science DMZ performance monitoring is accomplished using the perfSONAR Performance Toolkit. The perfSONAR host is publicly accessible at http://perfsonar.sdsu.edu/ (IPv6 and IPv4). Scientific datasets are transferred daily between our Science DMZ and XSEDE systems using Globus GridFTP.
Access to cyberinfrastructure residing within our Science DMZ is accomplished using the CILogon service and our university Shibboleth identity management system. CILogon is a research and scholarship service provider in the InCommon federation. Our Science DMZ identity management system supports the InCommon Research and Scholarship Category to provide collaborative services for researchers and scholars via their federated identities. CalREN is a multi-tiered, advanced network-services fabric that is managed by CENIC and serves research and education institutions in the state of California. The CalREN-DC (Digital California) network provides connectivity to the commercial Internet; most educational institutions (K-12, community colleges, and universities) obtain Internet connectivity through this tier. The CalREN-HPR (High-Performance Research) network provides 10 Gbps and 100 Gbps connectivity among research universities and institutions in California, as well as connectivity to Internet2, National LambdaRail, and ESnet, over a 100 Gbps backbone.
The CSRC has a Mellanox SX6036 36-port non-blocking managed 56 Gb/s InfiniBand/VPI switch, along with nine InfiniBand cables, that connects the BeeGFS cluster components and provides 56 Gb/s Fourteen Data Rate (FDR) InfiniBand connectivity to Linux stations located in the CSRC Data Center.
Cluster Specifications
For consultation on using CSRC HPC resources, please contact the technical support team.
CPU Clusters
- Anthill — 40-node Dual-8 Xeon CPU (E5-2650 2.6GHz). 640 total cores. 8GB RAM per core. SGE batch scheduler. InfiniBand interconnect.
- Dugong — 7-node Quad-8 AMD CPU Section (5110 1.60GHz) 2GB RAM per core. 13-node Dual-6 Xeon Section (X5680 3.33GHz) 4GB RAM per core. 380 total cores. OpenPBS/Torque batch scheduler. InfiniBand interconnect.
- Cinci — 16-node Dual-8 Xeon CPU Section 4GB RAM per core. 4-node Dual-12 Xeon Section 5.33GB RAM per core. Dual P100 Nvidia GPU server 98GB RAM. 368 total cores. OpenPBS/Torque batch scheduler. InfiniBand interconnect.
- COD — 6-node Dual-8 Xeon CPU Section (E5-2640v3 2.6GHz) 4GB RAM per core. 12-node Dual-10 Xeon Section (E5-2640v4 2.4GHz) 12.8GB RAM per core. Dual P100 Nvidia GPU server 128GB RAM. 352 total cores. OpenPBS/Torque batch scheduler. InfiniBand interconnect.
- Mesxuuyan — 16-node Dual-10 Xeon CPU (E5-2630v4 2.20GHz) 12.8GB RAM per core. 320 total cores. Torque batch scheduler. Omnipath interconnect.
CPU/GPU Cluster
- Notos — AI GPU cluster. 8x Tesla V100 GPUs. TensorFlow, Caffe, and Keras deep learning frameworks.
Student-Teaching Cluster
Tuckoo — A mix of eight multi-CPU and GPU servers supporting student research and HPC classes in the COMP academic program. Torque batch scheduler. InfiniBand interconnect.
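Most of the clusters above schedule work through Torque (OpenPBS). As a minimal sketch, a batch job script might look like the following; the queue name, node/core counts, and program name are illustrative placeholders, not actual CSRC settings — consult the technical support team for each cluster's real queues and limits.

```shell
#!/bin/bash
#PBS -N example_job          # job name (placeholder)
#PBS -l nodes=2:ppn=16       # request 2 nodes, 16 cores per node (placeholder values)
#PBS -l walltime=01:00:00    # one-hour wall-clock limit
#PBS -q batch                # queue name is a placeholder; check the cluster's documentation

# Torque starts jobs in the home directory; change to the directory qsub was run from.
cd "$PBS_O_WORKDIR"

# Launch a hypothetical MPI program across all 32 allocated cores.
mpirun -np 32 ./my_mpi_program
```

Such a script would typically be submitted with `qsub job.sh` and monitored with `qstat -u $USER`; Anthill uses SGE instead, whose directives differ.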
Technical Support
To schedule a consultation on selecting and using the appropriate resources for your computing needs, please contact our technical team:
Email : jotto@sdsu.edu
Phone : 619-594-3505 (v)
CSRC Shared Resources Policy
Our shared-resources policy optimizes the use of computing hardware among our faculty. Rather than each faculty member maintaining a separate cluster, we ask them to contribute hardware to a common pool. Contributing faculty are guaranteed 60% of the capacity of their contributed hardware; the remaining 40% goes to the shared resource pool. If a faculty member temporarily needs 100% of the contributed resources, or more than they contributed for certain projects, this can be arranged. Should the need exceed what we can provide, we will facilitate requests for supercomputer time outside SDSU. The CSRC maintains and administers the clusters, which is a cost savings for the departments. The policy thus benefits both contributing faculty and other potential users.
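The 60/40 split can be worked out mechanically; a small sketch follows, where the function name and the example core counts are hypothetical and only the 60/40 ratio comes from the policy:

```python
def contribution_split(cores: int) -> tuple[int, int]:
    """Split contributed cores into the guaranteed share (60%)
    and the shared-pool share (40%), per the CSRC policy."""
    guaranteed = int(cores * 0.60)
    pooled = cores - guaranteed
    return guaranteed, pooled

# E.g., for a hypothetical 640-core contribution:
# 384 cores guaranteed to the contributing faculty, 256 cores to the shared pool.
print(contribution_split(640))
```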