Campus Cyberinfrastructure Improvements under NSF CC-NIE
[ Watch on YouTube ]

Introduction

In his introduction to the session, moderator Tad Reynales of UCSD informed attendees that California received six of the 36 awards in the most recent round of the NSF's Campus Cyberinfrastructure - Network Infrastructure and Engineering (CC-NIE) program. This is a significant fraction of the total, and Reynales made a point of stating that CENIC's California Research & Education Network (CalREN) was a major factor in the overall success of the Golden State's research and education community in obtaining CC-NIE funds, particularly access to the High-Performance Research (HPR) network tier. Reynales thanked CENIC President and CEO Louis Fox for his letters of support on behalf of these institutions and pointed out the value of the CC-NIE program in promoting campus cyberinfrastructure that can take full advantage of 100G networking, as illustrated by the "100G and Beyond: Ultra High Performance Networking in California" workshop held on February 26, 2013 at Calit2 and sponsored by CENIC, Calit2, and ESnet. Campuses whose cyberinfrastructure improvements were presented include UC Davis, UCSD, San Diego State University, and Stanford University.
UC Davis: "Improved Infrastructure for Data Movement and Monitoring"

Matt Bishop outlined UC Davis's goals, concentrating on the need to move increasingly large amounts of data in disciplines such as astronomy and the life sciences. As examples he cited the Large Synoptic Survey Telescope in Chile, which anticipates moving up to 30 TB per day, and the UC Davis Genome Center, which must still mail hard drives between collaborators and use storage services such as Amazon's. Further, while "big science" disciplines are the heavy hitters in campus networking considerations, the social sciences are becoming more and more data-dependent and also have unique privacy concerns over the data they create, store, share, and access. As with Larry Smarr's keynote on the previous day, Bishop emphasized the importance of involving researchers in cyberinfrastructure planning and development from the ground up, as opposed to simply waiting for researchers to tell the network specialists what they need. This not only ensures that the cyberinfrastructure will be used more, it also enables it to move to production more quickly. Bishop was also careful to advise his audience to look to the future and plan for growth, stating that the flexibility afforded by software-defined networking is a key part of making the most of any campus cyberinfrastructure. He added that while growing a good cyberinfrastructure is vital, growing a team of specialists to support and promote it is just as important, including using student assistants to help researchers use the network optimally. As seen below in Fig. 1, the connection to CalREN-HPR is a significant part of the campus's planned cyberinfrastructure.
UCSD: "PRISM@UCSD: A Research Defined 10 and 40 Gbit/s Campus Scale Data Carrier"

Phil Papadopoulos touched on some of the same issues described in Smarr's keynote, such as the breadth of research and facilities that PRISM must support as a termination network for a 100G connection. Papadopoulos offered an illuminating way to approach the requirements of a campus cyberinfrastructure in a slide that listed a number of campus facilities and how quickly each would "drain" over a 100G connection. The Triton compute nodes, with 25 GB per node, would drain in about 2 seconds; 24 high-performance flash drives at 250 GB each would saturate a 100G connection for about 8 minutes; and the Data Oasis high-performance parallel file system at SDSC would drain in slightly over 4 days. Clearly, as Papadopoulos pointed out, 100G is more than some facilities might dream of at the moment, but other facilities are already planning ways in which they could render even this amount of bandwidth insufficient. Papadopoulos then gave attendees an overview of UCSD's history with leading-edge research networks, beginning with the OptIPuter in 2002 (designed around the question of how distributed program design might change if the network ceased to be a bottleneck) and Quartzite in 2004, a campus-wide, Terabit-class, field-programmable, hybrid switching instrument for comparative studies. A diagram of the 2005 OptIPuter network was also shown, which compared interestingly with the PRISM diagram. Papadopoulos's last slide outlined the basic rationale behind the PRISM design: the highly bursty use to which most campus facilities would put PRISM, the needs of students, faculty, and staff, the repeated importance of software-defined networking, and the need for PRISM to function as a bridge to identified and trusted networks.
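The "drain" figures above follow from simple arithmetic: data size converted to bits, divided by the link rate. A minimal sketch reproducing the slide's numbers (the Data Oasis capacity is back-computed here from the quoted drain time, roughly 4.5 PB, and the model ignores protocol overhead and parallelism):

```python
# Back-of-the-envelope "drain time" over a 100 Gb/s link, using the facility
# sizes quoted in the talk. Decimal units throughout (1 GB = 8 Gb).

LINK_GBPS = 100  # 100 Gb/s connection

def drain_seconds(size_gigabytes: float, link_gbps: float = LINK_GBPS) -> float:
    """Time to move size_gigabytes over a link of link_gbps."""
    return size_gigabytes * 8 / link_gbps  # GB -> Gb, then divide by Gb/s

# Triton compute node: 25 GB of memory per node
print(f"Triton node (25 GB): {drain_seconds(25):.0f} s")               # ~2 s

# 24 high-performance flash drives at 250 GB each
print(f"24 x 250 GB flash:  {drain_seconds(24 * 250) / 60:.0f} min")   # ~8 min

# Data Oasis: ~4.5 PB inferred from the quoted "slightly over 4 days"
print(f"Data Oasis (~4.5 PB): {drain_seconds(4_500_000) / 86400:.1f} days")
```

The same arithmetic run in reverse shows why Papadopoulos's point cuts both ways: a facility only two seconds from empty cannot keep a 100G link busy, while a petabyte-scale store can saturate it for days.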
San Diego State University: "Implementation of a Science DMZ at San Diego State University to Facilitate High-Performance Data Transfer for Scientific Applications"

Christopher Paolini introduced attendees to San Diego State University's planned cyberinfrastructure, beginning with the statement that such a network must support not only diverse missions but often conflicting ones, particularly where security and performance collide. Part of this challenge arises because campus network operations centers (NOCs) are often accountable not to researchers but to university business divisions which, unlike computation and "big data" researchers, value security over performance. ESnet's Science DMZ model is an excellent way to conceptualize this tension and to create a campus cyberinfrastructure that performs as researchers need while still addressing the business demands of security. Paolini invited attendees and other interested parties to contact him to discuss the details of SDSU's Science DMZ implementation, which uses the multiple tiers of CENIC's CalREN.
Stanford University: "Bringing SDN based Private Cloud to University Research"

Johan van Reijendam of Stanford University described that institution's focus on a particular aspect of campus cyberinfrastructure: a software-defined networking (SDN)-based private cloud that integrates compute clusters with a campus-wide SDN network. Commercial cloud providers such as Google and Amazon enable users to scale up quickly and without infrastructure costs, and yet their services do not always fit well with campus-based researchers' needs. The SDN-based private cloud implemented by Stanford University under this award will consist of three virtualized computing clusters using SDN, a campus-wide sliceable SDN backbone, sliceable SDN edge networks, and SDN-based control and management applications. Network operations, firewalling, and load balancing have all been incorporated into the design as well, demonstrating a specific and topical application of SDN of great interest to research and education. Questions from the audience included the specific switches employed by each campus and strategies for dealing with hacking. The latter question was put to van Reijendam, who replied that at the speeds at which campuses will operate in the future (100G and beyond), it simply isn't feasible to inspect every packet. Because of this, security at the borders of ultra-high-performance networks and campus cyberinfrastructures will be based on exception forwarding and trust between known endpoints.
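The security model van Reijendam described can be thought of as a flow-level allow-list: flows between known, trusted endpoints take the fast path with no per-packet inspection, and anything else is the exception, punted to conventional security. A hedged illustration (the addresses and policy below are invented for this sketch, not Stanford's actual design):

```python
# Sketch of trust-based border filtering: decide per flow, not per packet.
# Trusted endpoint pairs are forwarded at line rate; everything else is an
# exception handed off for inspection. Addresses are hypothetical.

TRUSTED_ENDPOINTS = {
    "10.10.1.5",   # hypothetical campus data transfer node
    "10.20.2.8",   # hypothetical collaborator endpoint on a trusted network
}

def border_action(src_ip: str, dst_ip: str) -> str:
    """Return the border device's action for a new flow."""
    if src_ip in TRUSTED_ENDPOINTS and dst_ip in TRUSTED_ENDPOINTS:
        return "forward"   # fast path: no deep packet inspection
    return "inspect"       # exception: route through conventional security

print(border_action("10.10.1.5", "10.20.2.8"))   # trusted pair
print(border_action("10.10.1.5", "192.0.2.99"))  # unknown endpoint
```

The design choice this illustrates is that at 100G the expensive operation (inspection) is reserved for the rare exception, while the common case between known science endpoints costs only a lookup.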
About CENIC and How to Change Your Subscription
California's education and research communities leverage their networking resources under CENIC, the Corporation for Education Network Initiatives in California, in order to obtain cost-effective, high-bandwidth networking to support their missions and answer the needs of their faculty, staff, and students. CENIC designs, implements, and operates CalREN, the California Research and Education Network, a high-bandwidth, high-capacity Internet network specially designed to meet the unique requirements of these communities, and to which the vast majority of the state's K-20 educational institutions are connected. In order to facilitate collaboration in education and research, CENIC also provides connectivity to non-California institutions and industry research organizations with which CENIC's Associate researchers and educators are engaged. CENIC is governed by its member institutions. Representatives from these institutions also donate expertise through their participation in various committees designed to ensure that CENIC is managed effectively and efficiently, and to support the continued evolution of the network as technology advances. For more information, visit www.cenic.org.

Subscription Information

You can subscribe and unsubscribe to CENIC Updates at http://lists.cenic.org/mailman/listinfo/cenic-announce.