FCRC 2015 assembles a spectrum of affiliated research conferences and workshops into a week-long coordinated meeting held at a common time in a common place. This model retains the advantages of the smaller conferences while also facilitating communication among researchers in different fields of computer science and engineering. Each morning, FCRC features a joint plenary talk on a topic of broad appeal to the computing research community.
The technical program for each affiliated conference is independently administered, with each responsible for its own meeting's structure, content, and proceedings. To the extent facilities allow, attendees are free to attend technical sessions of other affiliated conferences being held at the same time as their "home" conference.
Tutorials and one-day workshops do not permit cross-attendance unless otherwise noted.
One of the major advantages of the FCRC model is the opportunity to network with researchers in other fields. Individual conferences will have access to common breaks on the following schedule:
Detailed schedules for individual conferences and workshops are available by clicking on their names on the left side of this web page.
Sunday, June 14, 18:00 - 19:15
Monday, June 15, 11:20 - 12:30
Title: Interdisciplinarity: A View from Theory of Computation
Abstract: Increasingly, the concepts and methods of computer science are being recognized as a source of great intellectual interest, injecting fresh ideas into other scientific disciplines. Through discourses and collaborations, exciting multidisciplinary areas are blossoming. We illustrate this phenomenon from the viewpoint of Theory of Computation.
Tuesday, June 16, 11:20 - 12:30
Title: Hardware Neural Networks: From Inflated Expectations to Plateau of Productivity
Abstract: Time and again, a longstanding fascination with the brain has attracted computer architects to hardware neural networks. However, the fascinating nature of the topic is also its greatest pitfall: it sometimes drives researchers to forgo the pragmatic, application-driven nature of computer architecture, and the results can fall disappointingly short of the lofty goal of emulating the brain in hardware; as a result, most computer architects have stayed away from the topic.
In this presentation, we argue that it might be more sustainable to focus on brain functionality than on brain structure. Architectures designed to efficiently implement some important functionality are more likely to find support than architectures designed to emulate a biological structure. And in the past decade, machine-learning researchers, who largely share this longstanding fascination with the brain, have made significant progress towards emulating some elementary, but already important, brain functionalities (e.g., image and speech recognition) using so-called deep neural networks. These successes come at a time when transistor technology constraints are nudging architectures towards custom accelerators. This remarkable conjunction of algorithm, application, and technology evolutions can pave the way for the development of competitive hardware neural network accelerators, and help cement the adoption of the topic within the computer architecture community.
Wednesday, June 17, 11:20 - 12:30
Title: The F# Path to Relaxation
Abstract: Born in a lab, matured in the enterprise, and now fully baked as an open-source, cross-platform, professionally-supported language - the F# journey has always been about reconciling the apparently irreconcilable: Functional and Objects, Types and Dynamism, Company and Openness, Programming and Data, Patterns and Abstraction, GPU and CPU, Async and Sync, Server and Client. Take two irreconcilable ideas, and F# finds a way. Come along and take a journey with me through the modern programming landscape and the F# approach to research, language design, interoperability, tooling and community.
Thursday, June 18, 11:20 - 12:30
Title: The Endgame for Moore's Law: Architecture, Algorithm, and Application Challenges
Abstract: Single-processor clock speed scaling ended a decade ago, and transistor sizes will approach atomic scales in the next decade. With no abatement in ideas for how to use more computing in science, engineering, and business applications, and with new performance drivers coming from the increased density, speed, and ubiquity of data collection devices, how will the demands for computing be met? Future computing system designs will be constrained by power density and total system energy, and the two likely performance approaches are parallelism and specialization. Data movement already dominates running time and energy costs, making communication cost reduction the primary optimization criterion for compilers and programmers.
The endgame for Moore's Law will require rethinking our models of computation to minimize communication, expose fine-grained parallelism, and manage new hardware features. Specialization of languages can simplify analysis, and specialization of hardware may reduce energy use, but both will disrupt the ecosystems for hardware and software development. These changes will affect the theoretical models of computing, the analysis of performance, the design of algorithms, and the practice of programming in fundamental ways.
Friday, June 19, 11:20 - 12:30
Title: A Big Data System for the Internet of Moving Things
Abstract: The world consists of many interesting things that move: people go to work, home, school, and shops in public transit buses and trains or in cars and taxis; goods move on these networks and by truck or by air each day; and food items travel great distances to reach those who eat them. Thus, massive movement processes are underway in the world every day, and it is critical to ensure their safe, timely, and efficient operation. Towards this end, low-cost sensing and acquisition of movement data is being achieved: from GPS devices, RFID and barcode scanners, to smart commuter cards and smartphones, snapshots of the movement process are becoming available.
In this talk, I will present a system for stitching together these snapshots and reconstructing urban mobility at a very fine-grained level. The system, which we call the Space-Time Engine (STE), provides an interactive dashboard and a querying engine for answering questions such as: How crowded is a train station? Where are packages held up, and how can their delivery be sped up? How can the available supply of transport capacity be better used to meet daily demand as well as the demand on exceptional days (such as rallies and severe weather events)? I will describe the STE's capabilities for operational and planning purposes, and as a learning system.
Bio: Balaji Prabhakar is Professor of Electrical Engineering and Computer Science at Stanford University, and Chief Scientist and Co-Founder of Urban Engines. His work is centered on the design and management of large, complex networks---the Internet, Cloud Computing, and, recently, Smart Transportation. He has developed and deployed "nudge engines": systems for influencing the behavior of large populations. He is interested in data systems for "things that move"---commuters, cars, metro systems, food, etc. He has been a Terman Fellow at Stanford University and a Fellow of the Alfred P. Sloan Foundation. He has received the NSF CAREER Award, the Erlang Prize from the INFORMS Applied Probability Society, and the Rollo Davidson Prize, and has delivered the Lunteren Lectures. He is the recipient of the inaugural IEEE Innovation in Societal Infrastructure Award, which recognizes "significant technological achievements and contributions to the establishment, development and proliferation of innovative societal infrastructure systems". He serves on the Advisory Board of the Future Urban Mobility Initiative of the World Economic Forum.