NSF CSR/NeTS 2019 Joint PI Meeting

November 4th and 5th, 2019

Crystal Gateway Marriott,
1700 Jefferson Davis Highway
Arlington, VA, 22202

 

Reimbursement Info

 

PIs will be reimbursed for attending the meeting.  Please see the FAQ for further details.

2019 PI Meeting

CSR and NeTS are two Core Programs within the Division of Computer and Network Systems (CNS) at the NSF.

The CSR program within CNS broadly supports computer systems research, with a focus on seamless integration with humans. The charter ranges from small low-power embedded systems to large high-performance systems. Specifically:

The Computer Systems Research (CSR) program supports transformative scientific and engineering research leading to the development of the next generation of highly performant, heterogeneous, power-efficient, environmentally sustainable, and secure computer systems. The scope of the program includes embedded and multicore systems and accelerators; mobile and extensible distributed systems; cloud and data-intensive processing systems; and memory, storage, and file systems. The program seeks innovative research proposals that will advance the reliability, performance, power, security and privacy, scalability, and sustainability of computer systems.

The NeTS program supports networking research, specifically focusing on basic research for interconnecting and securing computer systems. The charter ranges from basic connectivity technologies that enable faster networking on wired and wireless systems and ensure availability under attack or failure, to the management of large and small networked systems. Specifically:

Computer and communication networks need to be available anytime and anywhere, and be accessible from any device. Networks need to evolve over time to incorporate new technologies, support new classes of applications and services, and meet new requirements and challenges; networks need to scale and adapt to unforeseen events and uncertainties across multiple dimensions, including types of applications, size and topology, mobility patterns, and heterogeneity of devices and networking technologies. Networks need to be easily controllable and manageable, resource and energy efficient, and secure and resilient to failures and attacks. The Networking Technology and Systems (NeTS) program supports transformative research on fundamental scientific and technological advances in networking as well as systems research leading to the development of future-generation, high-performance networks and future Internet architectures.
 

This PI meeting will bring together researchers from both the NeTS and CSR communities. Research in these areas is inter-related, and the goal is for PIs from the two programs to interact and exchange ideas that will benefit both, together producing a whole that is greater than the sum of the parts. The broad theme of the workshop is "Convergence", both within NeTS and CSR research and with the broader CS community. The objective of the workshop is to provide input to NSF and the broader community on two basic questions:

  1. Focusing on integrating CSR, NeTS and broader Computer Science research, what are the most important challenges facing the CSR and NeTS communities over the next decade?
     
  2. What are the areas where researchers from Computer Systems and from Networking best collaborate with each other and the broader research community to exploit synergies in tackling these challenges?

Schedule

Monday (Nov. 4, 2019)

7:45 am - 8:30 am

Continental Breakfast

(Salons 1-3)

8:30 am - 8:45 am

Meeting Logistics

Bobby Bhattacharjee, Songqing Chen

(Salons 1-3)

8:45 am - 9:00 am

Welcome, update from NSF

Dr. Erwin Gianchandani, Dr. Kenneth L. Calvert

(Salons 1-3)

9:00 am - 9:45 am

Dennis Moreau

(Salons 1-3)

Slides

The rapid growth in the adoption of modern application development and hosting technologies has brought with it unprecedented levels of complexity, in terms of stack decoupling, instance dynamics, and system distribution. The underlying hosting platforms readily span multiple on-premises, co-hosted, and cloud-hosted sites, easily extending across geographic and regulatory boundaries. Within individual platforms there is an accompanying convergence of computation, networking, and storage capabilities, realized over common resources and shared fabrics. The result is that services, applications, platforms, infrastructure, and even bare metal can all be consumed on demand at incredible scale.

9:45 am – 10:15 am

Break

(Arlington Foyer)

10:15 am – 12:15 pm

Breakout Session I

Lead: Mahadev Satyanarayanan, Wei Gao

(Salon 5)

Slides

The focus of this session is the intersection of Edge Computing and ML/Cognition, especially in latency-sensitive settings. Here are some relevant questions (not prioritized):

  1. What kind of hardware acceleration is needed for ML/Cognition at edge nodes (i.e., cloudlets)? How do we reconcile these power-hungry and heat-generating accelerators with the physical and electrical limitations of cloudlets? What does this imply for hardware design at the edge?
  2. In the battle between MIMD (cores) and SIMD (GPUs, FPGAs, etc.) at the edge, how do we strike a good balance, especially in multi-tenant environments?
  3. Is inferencing the primary focus, or is training at the edge also important? Why isn't training in the cloud good enough? When is training at the edge valuable/important?
  4. "Just-in-time learning" at the edge has been proposed as an approach to dynamically adapting classifiers to fresh incoming data (e.g., during a drone flight). How important/valuable is this? Does it apply only to a small set of use cases, or is it of broad applicability?
  5. There is an ongoing dialectic between Tier-3 (mobile and IoT devices) and Tier-2 (cloudlets/edge computing). Over time, each new generation of Tier-3 devices has more sophisticated SoC functionality, so the optimal split between Tier-2 and Tier-3 is constantly changing. In a multi-user setting, there may be diverse splits because of old versus new Tier-3 hardware. How do we create robust, easily-maintained applications in the face of so much variability?
  6. Splitting pipelines (e.g., DNN inferencing) across Tier-3 and Tier-2 is being actively explored by many researchers today. How successful are these efforts? What are the key challenges today? Why is it so hard?
  7. Edge computing lends itself naturally to distributed learning. For example, video cameras at different cloudlets collect different parts of an overall training set for DNN training. What is the current state of the art in this effort? What makes it hard? What are the challenges?

Lead: Yiran Chen and Yingyan Lin

(Alexandria)

Slides

This breakout session aims to explore the requirements of future computing architectures for edge computing (CAEC). The session participants will discuss questions including (but not limited to) the following:

  1. What are the critical requirements of computing architectures for future edge computing applications? In particular, are there any new requirements that have not been fully accommodated in current computing architecture design?
  2. What kinds of new computing units will be needed in future CAEC, and are any architectural changes necessary?
  3. Can current memory and storage systems satisfy the new requirements? If not, what new technologies are worth exploring?
  4. What new computing models and algorithms need to be supported?
  5. How should the workload be partitioned between different layers in the context of edge computing?
  6. Are new benchmarking methods required?
  7. What are the critical concerns, and possible solutions, for security and safety?

Lead: Ali Butt

(Salon 6)

Slides

The focus of this session is to discuss and identify research directions and challenges arising in modern and emerging data and storage systems research in the face of disruptive applications such as deep learning, edge systems, and smart infrastructure. We will also discuss comprehensive techniques that provide practical solutions to the storage issues facing the information technology community. Following are some relevant questions to consider:
- How to evolve storage systems to meet the challenges of scale, throughput, and sustainability arising from emerging applications such as deep learning?
- How to rearchitect and design storage systems to accommodate and leverage new types of memories such as NVRAM?
- How to address allocation, management, privacy, performance, and multi-tenancy to meet the demands of the intense migration of data from on-premises to cloud deployments?
- How to design file system APIs and higher-level yet simple-to-use interfaces for new storage systems?
- How to design programming models to efficiently support innovative storage and deep storage hierarchies?
- How to address pipeline issues and train the next generation of storage systems researchers?
- How to make it easy for systems researchers to design and realize new applications and storage hardware?

Lead: Tarek Abdelzaher, Ramesh Govindan

(Rosslyn 1)

Slides

IoT technologies will be pervasive in smart cities of the future and necessary for recovery from catastrophe and conflicts. In this session, we will explore research directions in sensing and actuation with emphasis on large-scale IoT deployments, both engineered and ad-hoc. Our session will explore emerging sensing and actuation technologies, programming models for IoT systems, data analytics systems targeted towards multimodal data streams, machine learning solutions for IoT, techniques to ensure reliability and survivability of IoT deployments in the face of extreme physical conditions, and security and privacy challenges arising from pervasive device deployments.

Lead: Rose Hu, Swarun Kumar

(McLean)

Slides

In wireless networks, radio resource and spectrum management have always been among the key research topics in delivering network capacity and meeting QoS requirements. When it comes to 5G, especially with requirements such as high node density, high heterogeneity, massive connectivity, low latency, and high capacity (with mmWave or even terahertz bandwidth), the complexity of resource management goes up tremendously. Traditional centralized resource management and computing may soon reach their limitations, due to concerns about complexity, latency, communications bandwidth, privacy, security, etc. Further motivated by the increasing computational capacity of local wireless devices, as well as ever-increasing concerns about sharing data due to privacy and security, next-generation communications/computing networks will undergo a paradigm shift from conventional cloud computing to edge computing, which largely deploys computational power to the network edges/fog nodes to meet the needs of applications that demand very high computation and low latency. As a result, 5G and beyond will fully exploit the combination of centralized and decentralized computing architectures for efficient resource allocation. Furthermore, 5G goes beyond meeting evolving consumer mobile demands by also serving various industry vertical sectors such as autonomous vehicles, e-health, etc. Spectrum needs to be sensed effectively and shared appropriately. Deployment paradigms such as licensed spectrum, shared spectrum, and unlicensed spectrum at different possible bands, such as sub-6 GHz and mmWave, can be supported. Complicating resource optimization further is the fact that 5G will see a heterogeneous mix of user devices, ranging from energy-starved low-power devices to highly capable high-bandwidth devices (e.g., augmented reality headsets) and high-mobility contexts (vehicles, UAVs, high-speed rail, etc.). This session will focus on resource and spectrum management for 5G and beyond, as well as how to leverage edge computing to enable distributed resource allocation and spectrum management. Specifically, machine learning has become a powerful tool for mining valuable information from big data and autonomously uncovering relationships that would be too complex to extract by traditional approaches. We would also like to discuss how cloud/edge computing platforms facilitate the research and development of distributed machine learning for radio resource and spectrum management in 5G and beyond.

Lead: Neil Spring

(Manassas)

Slides

Measurement support for performance and diagnostics: The goal of this session is to find opportunities for integrating modern kernel and systems diagnostics (eBPF, kprobes), network debugging techniques (traceroute, latency monitoring), and network management approaches (SNMP), using techniques from data science and machine learning to identify faults, vulnerabilities, and opportunities for upgrades. The domain includes management of cloud applications, cloud systems, ISP networks, IoT applications, and Internet applications overall. What could go wrong, what just went wrong, and how can we fix it?
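The session description above does not prescribe specific tooling. As a toy illustration of the data-science angle, the following is a minimal, hypothetical Python sketch that flags anomalous round-trip times in a latency-monitoring stream using a rolling z-score; the function name, window size, and threshold are invented for illustration and are not part of any tool named above.

    # Hypothetical sketch: flag RTT anomalies in a latency-monitoring
    # stream with a rolling z-score. Names and thresholds are illustrative.
    from collections import deque
    from statistics import mean, stdev

    def detect_latency_anomalies(rtt_ms, window=30, threshold=3.0):
        """Yield (index, rtt) pairs deviating more than `threshold`
        standard deviations from the trailing window of samples."""
        history = deque(maxlen=window)
        for i, rtt in enumerate(rtt_ms):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(rtt - mu) / sigma > threshold:
                    yield i, rtt
            history.append(rtt)

    # Example: a mostly stable path with one injected latency spike.
    samples = [20.0 + 0.5 * (i % 3) for i in range(60)]
    samples[45] = 180.0  # injected fault
    for idx, rtt in detect_latency_anomalies(samples):
        print(f"sample {idx}: RTT {rtt} ms looks anomalous")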

Lead: Lixia Zhang and Edmund Yeh

(Lee)

Slides

We are now in a prolific era of exploration, discovery, and realization of new knowledge in many fields of data-intensive science, health, and engineering. Domain experts in these fields have been generating, managing, and processing data sets that are growing dramatically in scale and complexity. At present, these respective domains face the daunting challenge of developing individual systems to handle big data and to address issues such as data storage, indexing, performance, security, and privacy. These individual solutions, while addressing similar problems, are typically developed using incremental approaches, and in isolation from each other. We speculate that the above phenomenon is likely a direct consequence of the following basic fact: today's computer systems/networks are generally centered around low-level primitives, such as addresses, processes, servers, and connections; so are the existing security solutions, which secure data containers and delivery pipes but not the data itself. Because big data applications focus on the data, this incongruity creates a gap between what the underlying systems provide and what the applications need, leaving individual users, who may not be computer experts, with the difficult task of mapping their problems from application space to the underlying computing systems. Frequently this leads big data researchers to rely on commercially available cloud services for storage and processing. Unfortunately, this does not address some of the most fundamental challenges facing big data scientists, including:
* the need to deal with lower-layer device issues to collect data;
* the difficulty in navigating unstructured data collections;
* the lack of systematic solutions for security and privacy when data is outside the cloud, and the lack of auditability when the data is inside the cloud; and
* the lack of systematic support for user authentication, which is needed for data access control.
We believe that harnessing the big data revolution will require a cross-disciplinary and domain-agnostic big data ecosystem that can support big data management through the whole data life cycle, from data production, naming, and securing data directly, to scalable retrieval and controlled data access that enables data sharing across the boundaries of different service providers, different applications, and different disciplines. Such a big data ecosystem has the potential to provide agile, integrated, interoperable, scalable, robust, and trustworthy solutions for heterogeneous data-intensive domains. One promising direction for developing this big data ecosystem is to take a data-centric design approach, as pointed out by a recent NSF DCL emphasizing the need to "Support Future Data-Intensive Science and Engineering Research." The goal of this session is to articulate the need for a principled design for a data-centric ecosystem, and to identify approaches towards its realization.

Lead: Shuvra Bhattacharyya and Marilyn Wolf

(Jefferson)

Slides

Heterogeneous computing offers the potential to streamline execution of key processing tasks using architectures that are better suited to those tasks than architectures composed of identical processing elements. This potential together with the increasing diversity of available processor architectures and domain-specific programming languages opens up complex new design spaces for computer systems. This breakout session will explore research opportunities and identify priority areas for research emphasis in heterogeneous computing systems. The workshop will have a specific focus on heterogeneous computing in the context of Internet of things (IoT) and cyber-physical systems (CPS), which are important, rapidly-developing areas that stand to benefit significantly from advances in heterogeneous computing.

Lead: Suman Banerjee and Kyle Jamieson

(Jackson)

Slides

Mobile Reality focuses on systems that support interactive multimedia in different forms, including virtual, augmented, and mixed reality.
- How to design the next generation of techniques to support interactive audio and video systems and virtual, augmented, and mixed reality systems?
- Challenges: extremely high data rates, low latency needs, high variability, and extensive computational needs.
- Opportunities: edge compute support; 5G networks with high speeds and low latency.

Lead: Monica Lam and Qun Li

(Rosslyn 2)

Slides

Machine learning has fueled advances in many disciplines, from medicine and science to social science. How can machine learning advance computer systems? On the front end, artificial intelligence can significantly improve our system user interfaces. For example, will we be communicating with computers in natural language? Will consumers be able to code in natural language? On the back end, how can machine learning improve the algorithms we use to manage systems? Many have started to worry about the potential negative impact of AI, from algorithmic biases, loss of jobs, and AI surveillance abuse, to centralization of private information. Are there system architectures that can support autonomy and privacy while letting users benefit from big data analysis? Last but not least, machine learning has an insatiable need for computing resources. What breakthroughs can we expect to support the computational demands of deep learning?

 

Lead: Manos Kapritsos, Carlos Varela

(Mount Vernon)

Slides

In the last few years, formal verification has made significant inroads in the systems community. As the tools, languages, and methodologies become more powerful, the promise of provably correct systems is becoming a reality. In this session, we will discuss some of the open challenges in applying formal methods to real-world systems. In particular, we will discuss classes of applications that could benefit from formal verification, such as machine learning, cyber-physical systems, cryptographic protocols, and data-driven and distributed systems. We will also talk about the various methods of formally verifying these systems, including symbolic/statistical model checking and interactive/automated theorem proving, and their relationship to open questions on knowledge representation, modal logics, probabilistic reasoning, and uncertainty modeling, among others. Finally, we will discuss several topics that apply broadly to all aspects of formal verification, such as:
- How to scale formal verification to large systems?
- How to tame the complexity of reasoning about concurrency and distribution, including actor-based, process-based, and multi-threaded programs?
- How to make formal methods modular, composable, reusable, and accessible to ordinary developers?

12:15 - 1:30 pm

Lunch

Inclusion 

Jennifer Rexford, Dave Levin

 (Salons 1-3)

1:30 pm - 3:30 pm

Breakout Session II

Lead: Misha Rabinovich, Aruna Balasubramanian

(Salon 5)

Slides

We are going through a burst of activity around edge computing. From industrial consortia to new academic conferences, there is a staggering amount of effort devoted to shaping this area and getting it off the ground. A number of diverse applications in areas such as IoT, sensor networks, pervasive computing, gaming, streaming, and virtual and augmented reality are being re-cast as edge applications. However, there is no commonly accepted system model that properly characterizes the edge computing environment, and this has become a significant handicap in placing the myriad of technical innovations in proper context. Where is the "edge" situated between the end-devices and the central cloud? Will edge-native or automatically partitioned applications drive edge computing deployments? Where would the edge computing occur: at impoverished ubiquitous servers at cell towers, light poles, etc., or at more powerful computing facilities in metro-area data centers? Do we foresee edge deployments as ubiquitous as WiFi routers? Will we see one in each home and coffee shop, or do we assume that the edge will be deployed at universities and enterprises but will not be ubiquitous? What should our assumptions be about edge capabilities -- are they compute, network, or memory bottlenecked? And ultimately, what is the killer app for edge computing, one that would not merely benefit from but be enabled by this environment? This session will discuss these and other questions to help provide a focus for edge computing research.

Lead: Xiaowei Yang

(Alexandria)

Slides

This breakout session aims to discuss the relationship between networking research and ML techniques and applications. Specifically, we hope to discuss the following questions: What are your experiences using ML techniques in your research? What have you found useful? What have you found challenging? What lessons can you share? What challenges in the field of networking are made easier by ML techniques? Traffic engineering, traffic anomaly detection, capacity planning, congestion control, …? How can networks better support ML applications? Looking forward, how do you envision networking research shaping ML applications, or vice versa?

Lead: Ang Chen

(Salon 6)

Slides

Computer networks and systems are foundational to modern applications. Despite significant progress in performance, reliability, and many other dimensions, we have not seen commensurate growth in security. Retrofitting security in these systems leads to temporary fixes at best, as the resulting solutions tend to be very brittle. We need a more fundamental approach where security is explicitly designed into the system as a "first-class" goal. This could be achieved, for instance, using principled approaches to system design and development, or using formal methods to provide strong guarantees. We will discuss recent trends in networking and systems, such as network programmability, resource disaggregation, etc., and examine their security implications. Can we leverage these trends to make the future more secure? Would these trends lead to new, unforeseen security problems? We hope to achieve, through community effort, a systematic approach to security in network and systems design.

Lead: Nick Feamster, Alex Snoeren

(Lee)

Slides

In this breakout, we plan to discuss the following questions:
  1. What are current best practices regarding (a) the reproducibility of published research in the CSR/NeTS community and (b) the release of source code and data to facilitate such reproducibility?
  2. What are the current impediments to fostering more reproducibility in published research in the CSR/NeTS community?
  3. How can the NSF encourage researchers to release artifacts that assist in the reproducibility of published research in the CSR/NeTS community?
  4. What types of activities do other research communities undertake to foster reproducibility? How do researchers engage in those processes? Are they successful?
  5. What is the appropriate way to address reproducibility for CSR/NeTS-funded research that is (completely or partially) based on proprietary datasets, systems, or code that might make the research difficult to reproduce?
  6. Should the NSF alter its framing of proposal review or reporting processes (e.g., articulation of broader impacts, additional information requested at reporting time) to help implement any of the recommendations discussed in the above questions?

Lead: Sonia Fahmy, K. K. Ramakrishnan

(McLean)

Slides

  1. What use cases are most important for NFV, e.g., 5G, IoT, VR/AR?
  2. What are the different challenges with different types of VNFs: middleboxes, services, micro-services with multiple components?
  3. How are stateful VNFs handled when scaling out or migration takes place? How can load be balanced among multiple instances of a VNF?
  4. What do different types of deployments on VMs, containers, or serverless instances that use external data sources entail?
  5. What challenges does deploying and orchestrating VNFs at the edge entail, versus deploying them in the core, or a joint deployment?
  6. What challenges do service function chains (SFCs) introduce in terms of performance bounds and scaling up or down, in or out? How do we guarantee various performance goals for an SFC with SDN stitching the VNFs together? Are there any VNF interoperability problems?
  7. Are new attack surfaces being introduced with NFV? How can we achieve built-in security and resilience? Does VNF placement have to consider security, privacy, and trust goals?

Lead: Ramesh Sitaraman

(Manassas)

Slides

This session pertains to challenges that arise in the context of large distributed networks, such as Content Delivery Networks and Cloud Systems (think half a million edge servers in hundreds of locations). Future research challenges pertain to novel edge services, web/video/application performance, security, sustainability, and operations. Example challenges:
  1. 360 Video Delivery: Imagine delivering the soccer World Cup in real time to a billion users wearing VR headsets for an immersive experience. The combination of high bitrate, scale, and small latency (20 ms of motion-to-photon latency to avoid cybersickness) is well beyond the realm of what video delivery networks can do today.
  2. Edge Security: The CDN edge is the perimeter behind which much of the world's major online assets are deployed (both commercial and government). How do we defend this perimeter in a distributed and real-time fashion to prevent DDoS and other attacks?
  3. Sustainability: Internet-scale networks consume large amounts of energy, more than some mid-sized countries. How do we decrease the energy consumption of these large networks without sacrificing performance, reliability, and security?
  4. Operations: The problems that arise in operating highly-distributed networks of this scale are very challenging and poorly researched. Key issues include provisioning, deployment, and dynamically adapting policies to changes induced by billions of users, thousands of ISPs, thousands of content providers, and constantly-evolving workload characteristics.

Lead: Prashant Shenoy, Saurabh Bagchi

(Rosslyn 1)

Slides

As the popularity of IoT devices and systems continues to grow, future IoT systems and applications, particularly for smart cities, will see deployments of large numbers of IoT devices, resulting in new challenges in systems and network design. This session will discuss the following research questions. For each, we will survey from the participants the state-of-the-art and state-of-the-practice, discuss challenging technical problems, and identify promising directions being explored today.
  1. What system challenges need to be addressed in designing IoT systems with very large numbers of devices? How can these systems scale from a performance standpoint when they are composed of millions or billions of devices?
  2. What reliability challenges emerge when systems are run at such large scales? Do the characteristics of IoT devices (homogeneity, dependence on the physical environment, resource constraints, thin or no OS) bring out new reliability challenges?
  3. What are the energy and power issues in designing large-scale IoT systems? How can techniques such as energy harvesting be used to make IoT devices self-powered? How can protocols be made more energy efficient? How can protocols be developed to account for intermittent execution?
  4. How should IoT devices leverage emerging wireless networking technologies such as LoRa and 5G?
  5. How do specific smart city application domains such as connected healthcare, renewable energy, buildings, and transportation impact system design? For example, how do their requirements impact whether IoT data processing is done on-device, at the edge, or in the cloud?

Lead: Felix Lin and Karthik Dandu

(Jefferson)

Slides

This session will focus on OS/networking challenges raised by augmented reality (AR), mixed reality (MR), and virtual reality (VR), e.g., those backed by 360-degree or volumetric videos. We will identify key research questions and discuss potential solutions. Sample topics include:
- New I/O technologies for spanning AR/MR/VR experiences over multiple devices
- New OS abstractions and facilities for storing AR/MR/VR content, e.g., for space efficiency and fast retrieval
- New frameworks for rendering AR/MR/VR with high quality and low delay comparable to human vision
- Domain-specific power management for AR/MR/VR apps
- New OS facilities for sharing resources and amortizing costs among co-running AR/MR/VR apps
- New network mechanisms for delivering large AR/MR/VR streams
- New facilities for trustworthy AR/MR/VR experiences

Lead: Christine Bassem

(Jackson)

Slides

In Mobile Crowd Sensing (MCS) platforms, the power of crowds is leveraged to assist in completing spatio-temporal sensory tasks, thus expanding the pool of resources found in typical sensor systems to include already-roaming devices and the humans controlling them. This new area opens up various research topics of interest to both the CSR and NeTS programs (though perhaps more to CSR). Topics of discussion may include:
- Building MCS and IoT platforms with humans in the loop
- MCS and edge computing
- Privacy concerns for participants in such platforms
- Incentive mechanisms for participants in such platforms
- Mobility analysis and coordination

Lead: Hadi Esmaeilzadeh, Don Porter

(Rosslyn 2)

Slides

We are at a point where traditional hardware scaling techniques have plateaued, both in CPUs and in storage technologies, and designers are exploring a range of more restricted or special-purpose designs. For instance, hard-drive vendors are scaling capacity with shingled and interlaced magnetic recording, which introduce considerable, non-backward-compatible restrictions on I/O patterns. Of course, this can be somewhat hidden behind a more complex layer of firmware, but at a significant hardware cost and with significant perturbation of application performance. This creates new opportunities and challenges for researchers:
  1. How to get the most from an increasingly constrained hardware budget?
  2. How to balance general-purpose versus special-purpose designs?
  3. How to redefine hardware/software abstractions so that specialized hardware can be updated without redoing its software stack?
  4. How to write future-proof software that will not only work, but perform well, on hardware that did not exist when the software was written?
  5. How to move from domain-specific hardware to a domain-specific computational stack?
  6. How to do performance analysis and tuning when performance-opaque firmware has a first-order impact on performance?
  7. How to help developers design their own special-purpose hardware?

Integrated sensing, computation and feedback in wearable devices, from e-tattoos to near-zero computation to digital therapy

Lead: Roozbeh Jafari, Hassan Ghasemzadeh, Tinoosh Mohsenin

(Fairfax)

Slides

 

Lead: Kirk Cameron, Xipeng Shen

(Mount Vernon)

Slides

Cloud Computing and High Performance Computing (HPC) have many differences, but share many common concerns, such as computing efficiency, scalability, throughput, cost, and energy efficiency. This session aims to examine the differences and potential connections between Cloud Computing and HPC, against the backdrop of recent advances in cloud execution models (e.g., serverless computing), workload evolution (e.g., machine learning-based applications), new trends in scientific computing, and the emergence of new hardware (memory, accelerators, etc.). Example questions include: What are the grand challenges in each? What common challenges do they share? What are the new opportunities? What synergy can be built between Cloud Computing and HPC, and what can be provided to cultivate that synergy?

3:30 pm - 4:30 pm

Break, Set up for Poster Session

(Arlington Foyer)

4:30 pm - 6:30 pm

Poster Session

(Salon 4)

6:30 pm - 9:00 pm

Dinner

NSF PD Event

(Salons 1-3)

Tuesday (Nov. 5, 2019)

7:45 am - 8:30 am

Continental Breakfast

(Salons 1-3)

8:30 am - 9:15 am

Ion Stoica

(Salons 1-3)

Slides

Research at the intersection of machine learning (ML) and systems has the potential to fuel innovation over the next decade. When used together, ML and systems can form a rapidly evolving positive feedback loop, in which systems accelerate ML algorithms, and ML algorithms make systems faster and more secure. In the context of the RISELab at UC Berkeley, we have done work in both directions of this feedback loop. On one hand, we have built new systems to better support ML workloads, such as Ray, a general distributed framework that supports highly scalable ML libraries, including a reinforcement learning library (RLlib) and a hyperparameter search library (Tune). On the other hand, we have used Ray, RLlib, and Tune to perform systems optimizations (e.g., database query optimizations, compiler optimizations), algorithm optimizations (e.g., building decision trees for packet classification), and program synthesis. When applying ML to systems, we have developed new techniques that, instead of applying ML (e.g., deep learning) to solve the problem end to end, apply ML to synthesize "classic" solutions for the problems at hand. For example, instead of using a deep learning (DL) network to perform packet classification, we use DL to synthesize a decision tree that performs packet classification. By doing so we develop solutions that are explainable (thus addressing a significant challenge of DL) and easier to deploy in existing systems.
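The abstract above does not detail the RISELab algorithms. As a loose, hypothetical illustration of the "synthesize a classic, explainable solution" idea, the Python sketch below fits an ordinary decision tree over fabricated packet-header features, producing an inspectable rule set rather than an opaque end-to-end model; the features, labels, and policy are all invented for illustration and are far simpler than the actual research systems.

    # Hypothetical sketch: learn an inspectable decision tree over
    # packet-header features. Synthetic data; not the RISELab method.
    import random
    from sklearn.tree import DecisionTreeClassifier, export_text

    random.seed(0)

    def synth_packet():
        """Fabricate a (dst_port, proto, pkt_len) feature vector and label."""
        dst_port = random.choice([22, 53, 80, 443, 8080, 31337])
        proto = random.choice([6, 17])          # TCP / UDP
        pkt_len = random.randint(40, 1500)
        label = "allow" if dst_port in (53, 80, 443) else "deny"
        return [dst_port, proto, pkt_len], label

    X, y = zip(*(synth_packet() for _ in range(2000)))
    tree = DecisionTreeClassifier(max_depth=4).fit(list(X), list(y))

    # The learned classifier is a readable rule set, not a black box:
    print(export_text(tree, feature_names=["dst_port", "proto", "pkt_len"]))
    print(tree.predict([[443, 6, 600]]))        # expected: ['allow']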

9:15 am - 10:15 am

Report on Breakout Session I

(Salons 1-3)

10:15 am - 10:45 am

Break

(Arlington Foyer)

10:45 am - 11:45 am

Report on Breakout Session II

(Salons 1-3)

11:45 am - 12:30 pm

CSR/NeTS Panel

NSF personnel

12:30 pm

Box Lunch

(Salons 1-3)

 

Meeting Room Locations:
Lobby Level: Rosslyn 1-2, Lee, Jefferson, Jackson, Madison
First Level: Salons 1-6, Salons A-K
Second Level: Alexandria, Mt. Vernon, Manassas, McLean, Fairfax

 

 


 

Accommodations

Meeting Hotel: Crystal Gateway Marriott, Arlington, Virginia

The 2019 Joint NSF CSR and NeTS PI Meeting will be held at the Crystal
Gateway Marriott, 1700 Jefferson Davis Highway, Arlington, Virginia
22202.

We have negotiated and reserved a block of hotel rooms at the special
rate of $199/night with the hotel. Please reserve using the following link:

https://book.passkey.com/e/49981153

 

Rooms are reimbursed at the lesser of your actual cost or $199/night plus tax, so please take advantage of the negotiated rate while it is available.

 

 

 

Nearby Hotels:

There are many other hotels nearby as well (though we do not have
special arrangements with any of them):

1. The Westin Crystal City

Address: 1800 Jefferson Davis Hwy, Arlington, VA 22202

Phone: (703) 486-1111

Website: https://www.marriott.com/hotels/travel/waswi-the-westin-crystal-city-rea...

 

 

2.  Crystal City Marriott (NOT the meeting hotel)

Address: 1999 Jefferson Davis Highway, Arlington, Virginia 22202
Phone: 703-413-5500

Website: https://www.marriott.com/hotels/travel/wascc-crystal-city-marriott-at-re...

 

 

3. Hampton Inn & Suites Reagan National Airport

Address: 2000 Richmond Hwy, Arlington, VA 22202

Phone: (703) 418-8181

Website: https://hamptoninn3.hilton.com/en/hotels/virginia/hampton-inn-and-suites...

 

 

4. Hilton Crystal City

Address: 2399 Richmond Hwy, Arlington, VA 22202

Phone: (703) 418-6800

Website: https://www3.hilton.com/en/hotels/virginia/hilton-crystal-city-at-washin...

 

 

5. Hilton Garden Inn Reagan National Airport

Address: 2020 VA-110, Arlington, VA 22202

Phone: (703) 892-1050

Website: https://hiltongardeninn3.hilton.com/en/hotels/virginia/hilton-garden-inn...

Reimbursements

Please send your receipts by email to Regis Boykin, rboykin [-at-] cs [dot] umd [dot] edu.

The FAQ has more information about reimbursement requests.

Please use the subject line "NSF 2019 Reimbursement" in your reimbursement request. Ms. Boykin will be in contact with you individually should more information be required beyond what you have provided.

FAQ

How long is the program?

  • 1.5 days.  A box lunch will be provided on the 5th, and the official program concludes around noon.

Is it OK to attend only one of the two days?

  • PIs are expected to attend both days.

Can a student/co-PI attend instead of the PI?

  • The meeting space is limited, and currently reserved for PIs only.  Everyone who has received an invitation by email is welcome and encouraged to register, though the number of registration slots is limited.  We understand the benefit students accrue from attending these types of meetings, and if slots are left unused, we will open registration to a wider audience.

I received an invitation, but my grant expires prior to the PI meeting. Do I have to attend the PI meeting or can I skip?

  • You are not obliged to attend. However, the PI meeting is a service to the community, enabling the sharing of ideas and providing a platform for PIs to collaborate and look into the future, and you are welcome to attend.

Can NSF funds be used for the registration?

  • Yes, NSF funds from CSR or NeTS may be used to cover registration costs.

Will the cost for attending the meeting be covered?

  • Yes.  Costs for attendance and accommodations will be reimbursed.

Is it possible to cancel the registration?

  • No, registration is non-refundable.

Must I bring a poster?

  • No.  The venue can accommodate up to 100 posters.  If you would like to present a poster, please indicate so when you register.

Is the Crystal Gateway Marriott (meeting place) the same as Crystal City Marriott?

  • No, they are different hotels. But they are close to each other, within walking distance.

How does one get to the hotel?

  • The hotel is close (~1.3 miles) to the Ronald Reagan Washington National Airport (DCA). You may take the DC subway (Metro) to the Crystal City station (Blue or Yellow Lines) and then walk to the hotel.  Or you may use the Marriott’s Complimentary Airport Shuttle. Please refer to Marriott’s web site for more options: https://www.marriott.com/hotels/maps/travel/wasgw-crystal-gateway-marriott/

Will international airfares be reimbursed?

  • Reimbursement per PI is limited to $1200.  If this covers the international fare, it will be fully reimbursed.  The usual NSF guidelines on using US carriers and fare classes still apply.

What is the reimbursement process?

  • Reimbursements will be processed after the event by the University of Maryland.  Per Univ. of Maryland and NSF guidelines, all reimbursement is receipt-based, and covers the cost of travel (US airlines only, non-premium seats) and hotel stay for November 3rd and 4th.  The reimbursement for the hotel is capped at $199 plus taxes per night of stay. The designated PI meeting hotel has rooms at that rate.
    In order to be reimbursed, the Univ. of Maryland requires each traveler to provide their US Social Security Number and a mailing address. Reimbursements will be in the form of a paper check sent by US mail. The reimbursement process takes about 6 weeks.

 

Location

Crystal Gateway Marriott
1700 Jefferson Davis Highway
Arlington, VA, 22202

 

Driving to the hotel:

The hotel suggests 1700 South Eads Street, Arlington VA 22202 as the GPS destination.

 

 

 

Contact

Please direct all questions about the meeting to

csrnets2019 [-at-] cs [dot] umd [dot] edu