Senior software engineer at Google
Research with Kubernetes.
With the increasing popularity of Docker, containers are used more and more often in development, test, and production environments. However, managing many Docker containers running on multiple machines can be a complicated and time-consuming task.
This presentation will talk about Kubernetes - an open source orchestration system for Docker containers. It will describe how Kubernetes handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.
The presentation will also show how its modular architecture makes Kubernetes a great tool for research and for experiments related to scheduling algorithms, resource management, etc.
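The "declared intentions" idea above is the heart of Kubernetes: the user declares a desired state, and the system continuously drives the observed state toward it. A minimal sketch of that reconciliation loop, in illustrative Python (hypothetical function and data shapes, not the actual Kubernetes code):

```python
# Illustrative sketch of declarative reconciliation: compare the
# user's desired state with the actual state and emit the actions
# needed to close the gap. Hypothetical shapes, not Kubernetes code.

def reconcile(desired, actual):
    """Return actions that make `actual` match `desired`.

    Both arguments map a workload name to its replica count.
    """
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("start", name, want - have))
        elif have > want:
            actions.append(("stop", name, have - want))
    for name, have in actual.items():
        if name not in desired:  # workload no longer wanted at all
            actions.append(("stop", name, have))
    return actions

# The user declares 3 replicas of "web"; only 1 is running.
print(reconcile({"web": 3}, {"web": 1}))  # [('start', 'web', 2)]
```

Running such a loop repeatedly, rather than executing one-shot commands, is what lets the system recover automatically when machines or containers fail.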
Filip is a senior software engineer at Google and has worked in technical infrastructure for the last four years. His main area of expertise is cluster management: scheduling, optimizing utilization, high-level architecture, and systems for automatic management. He is currently involved in Kubernetes, an open source cluster management stack from Google.
Computer scientist at Argonne National Laboratory
Chameleon: Building a Large-scale Experimental Testbed for Cloud Research.
Cloud services have become essential to all major 21st-century economic activities. The new capabilities they enable gave rise to many open questions; some of the most important and contentious issues are the relationship between cloud computing and high performance computing, the suitability of cloud computing for data-intensive applications, and its position with respect to emergent trends such as Software Defined Networking. A persistent barrier to further understanding of these issues has been the lack of a large-scale, open cloud research platform.
With funding from the National Science Foundation, the Chameleon project is providing such a platform to the research community. The testbed, deployed at the University of Chicago and the Texas Advanced Computing Center, will ultimately consist of ~15,000 cores and 5 PB of total disk space, leverage a 100 Gbps connection between the sites, and comprise a mix of large-scale homogeneous hardware and a smaller investment in heterogeneous components: high-memory, large-disk, low-power, GPU, and co-processor units. The majority of the testbed is now deployed and available to Early Users, with general availability planned for July this year.
Kate Keahey is a computer scientist and a Computation Institute fellow at the University of Chicago. Her research interests focus on virtualization, resource management and cloud computing. She created and leads the Nimbus project.
Head of Research, Algorithms, Parallel Computing and Big Data at Huawei France Research Centre
New Directions in BSP Computing at Google, Facebook, and Huawei.
Bill McColl (Oxford) and Leslie Valiant (Harvard) developed BSP as a new way to design architecture-independent massively parallel algorithms and software. Today, BSP is revolutionizing how parallel computing, big data analytics, and machine learning are done at massive scale at Google, Facebook, Yahoo, Twitter, LinkedIn, Microsoft, Tencent, and many other leading-edge companies. Companies such as Teradata are also now looking at BSP to reinvent database and data warehouse architectures to handle major new types of massive data sets, such as huge graphs and networks with billions of nodes and trillions of edges. At Huawei CSI Paris we are building the world's leading BSP R&D team, developing powerful new BSP algorithms and software for super-fast and super-scalable realtime parallel computing and big data analytics. These new algorithms and software systems will drive the design of next-generation systems and platforms for Carriers, IT and Cloud Services.
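The BSP model mentioned above structures a parallel computation as a sequence of supersteps: each process computes locally and sends messages, then all processes synchronize at a barrier, at which point the messages become visible to their recipients. A minimal illustrative sketch in Python (a hypothetical simulator, not any vendor's API):

```python
# Toy single-machine simulation of the BSP model (hypothetical
# helper, not a production API). Messages sent in one superstep are
# delivered only after the barrier, at the start of the next one.

def bsp_run(num_procs, num_supersteps, step):
    """Run `step(pid, superstep, inbox)` on every process.

    `step` returns a list of (dest_pid, message) pairs.
    """
    inboxes = [[] for _ in range(num_procs)]
    for s in range(num_supersteps):
        outboxes = [[] for _ in range(num_procs)]
        for pid in range(num_procs):
            for dest, msg in step(pid, s, inboxes[pid]):
                outboxes[dest].append(msg)
        inboxes = outboxes  # the barrier: all messages delivered at once

# Example: a two-superstep global sum over four processes.
results = {}

def step(pid, s, inbox):
    if s == 0:
        return [(0, pid)]              # everyone sends its id to process 0
    if pid == 0:
        results["total"] = sum(inbox)  # process 0 reduces at superstep 1
    return []

bsp_run(num_procs=4, num_supersteps=2, step=step)
print(results["total"])  # 0 + 1 + 2 + 3 = 6
```

Pregel-style graph systems follow the same pattern, with one vertex playing the role of each process and the barrier separating rounds of message exchange.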
Bill McColl was Professor of Computer Science, Head of the Parallel Computing Research Center, and Chairman of the Computer Science Faculty at Oxford University. He left Oxford to found Cloudscale. Along with Les Valiant of Harvard, he developed the BSP approach to parallel programming. He has led research, product, and business teams in a number of areas: massively parallel algorithms and architectures, parallel programming languages and tools, datacenter virtualization, realtime stream processing, big data analytics, and cloud computing.
Solutions Architect and Technical Manager at Microsoft Research
Hyper-scale cloud computing for research and innovation.
Cloud computing is pervasive in everyday life, from email and social networks to smartphones and gaming consoles. It enables an unlimited number of services to connect people, devices, and data in a way that is becoming increasingly seamless. The complexity of achieving this at global scale, from both the hardware and software perspectives, can be tremendous, raising many research challenges. We describe how Microsoft Research is helping to make Azure a scalable, robust, trustworthy, high-performance, hyper-scale cloud platform. We show how scientists, startups, and organisations of all sizes are using the cloud to accelerate and scale out their research and innovation at an increasingly rapid pace.
Dr Kenji Takeda's current focus is helping researchers take best advantage of cloud computing, including big data and data science approaches, through the Azure for Research programme. He has extensive experience in cloud computing, high performance and high productivity computing, data-intensive science, scientific workflows, scholarly communication, engineering, and educational outreach.