CLARISSE project
Cross-Layer Abstractions and Run-time for I/O Software Stack of Extreme-scale systems
Currently, the I/O software stack of high-performance computing platforms consists of independently developed layers (scientific libraries, middleware, I/O forwarding, parallel file systems) that lack global coordination mechanisms. This uncoordinated development model degrades performance both for individual applications and for ensembles of applications that rely on the I/O stack for data access. The CLARISSE project is designing cross-layer mechanisms for the I/O software stack that facilitate performance optimization, programmability, and extensibility.
Research Objectives:
To investigate, design and implement control mechanisms for cross-layer dissemination of application hints, run-time feedback, notifications, and shipping of I/O functionality throughout the I/O software stack.
To explore algorithms and to design and implement mechanisms and policies for the adaptive control of the storage I/O data path in order to improve the I/O software stack scalability and resilience.
To study and develop techniques for exposing and exploiting data locality throughout the I/O software stack in order to reduce the storage I/O traffic and improve the performance.
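The first objective above, cross-layer dissemination of hints and run-time feedback, can be pictured as a control backplane that routes messages between otherwise independent stack layers. The sketch below is purely illustrative and is not the CLARISSE API; all class, topic, and field names are hypothetical, chosen only to show the publish/subscribe pattern such a mechanism could use.

```python
# Illustrative sketch (not the CLARISSE API): a minimal publish/subscribe
# control backplane through which I/O stack layers could exchange hints,
# feedback, and notifications. All names here are hypothetical.
from collections import defaultdict
from typing import Callable, Dict, List


class ControlBackplane:
    """Routes control messages between layers subscribed to named topics."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        # A layer (e.g. a parallel file system) registers interest in a topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # A layer (e.g. an I/O library) disseminates a hint to all subscribers.
        for handler in self._subscribers[topic]:
            handler(message)


# Example: a lower layer subscribes to application hints, and an upper
# layer publishes an access-pattern hint downward through the stack.
bus = ControlBackplane()
received = []
bus.subscribe("hint/access-pattern", received.append)
bus.publish("hint/access-pattern", {"layer": "MPI-IO", "pattern": "collective-write"})
```

In a real system the backplane would of course be distributed and asynchronous; the point of the sketch is only the decoupling it provides, since publishing layers need no compile-time knowledge of which layers consume their hints.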
Software
The CLARISSE software is freely available under the BSD license in a Bitbucket repository.
Publications and Presentations:
Publications
- F. Isaila, J. Garcia, J. Carretero, R. Ross, and D. Kimpe. Making the Case for Reforming the I/O Software Stack of Extreme-Scale Systems. Advances in Engineering Software, 2017.
- F. Isaila, J. Carretero, and R. Ross. CLARISSE: a middleware for data-staging coordination and control on large-scale HPC platforms. In Proceedings of IEEE/ACM CCGrid 2016 (Best Paper Award).
- Francisco Rodrigo Duro, Javier Garcia Blas, Florin Isaila, Jesus Carretero, Justin M. Wozniak, and Robert Ross. Flexible Data-Aware Scheduling for Workflows over an In-Memory Object Store. In Proceedings of IEEE/ACM CCGrid 2016.
- Francisco Rodrigo Duro, Javier Garcia Blas, Florin Isaila, Jesus Carretero, Justin M. Wozniak, and Robert Ross. Experimental evaluation of a flexible I/O architecture for accelerating workflow engines in ultrascale environments. Accepted for publication in Elsevier’s Parallel Computing Journal, 2016.
- Francois Tessier, Venkatram Vishwanath, Preeti Malakar, Emmanuel Jeannot, and Florin Isaila. Topology-Aware Data Aggregation for Intensive I/O on Large-Scale Supercomputers. In Proceedings of the Workshop on Communication Optimizations in HPC, held in conjunction with SC 2016, Salt Lake City.
- Florin Isaila, Prasanna Balaprakash, Stefan M. Wild, Dries Kimpe, Rob Latham, Rob Ross, Paul Hovland. Collective I/O tuning using analytical and machine learning models. In IEEE Cluster Computing Conference, 2015.
- Francisco Rodrigo Duro, Javier Garcia Blas, Florin Isaila, and Jesus Carretero. Experimental evaluation of a flexible I/O architecture for accelerating workflow engines in cloud environments. In Proceedings of the 2015 International Workshop on Data-Intensive Scalable Computing Systems, DISCS '15.
- Francisco Rodrigo Duro, Javier Garcia Blas, Florin Isaila, Jesus Carretero, Justin M. Wozniak, and Robert Ross. Exploiting data locality in Swift/T workflows using Hercules. In Proceedings of NESUS Workshop 2014.
Posters:
- Florin Isaila. CLARISSE: Cross-layer abstractions and run-time for I/O stack of extreme scale systems. In USENIX File and Storage Technologies (FAST), Santa Clara 2014.
Presentations:
- Florin Isaila. CLARISSE: A run-time middleware for coordinating data staging on large scale supercomputers. In Illinois Institute of Technology, Chicago 2015.
- Florin Isaila. Collective I/O tuning using analytical and machine learning models. In IEEE Cluster, Chicago 2015.
- Florin Isaila. CLARISSE: A run-time middleware for coordinating data staging on large scale supercomputers. In CluStor: Workshop on Cluster Storage Technology, Hamburg 2015.
- Florin Isaila. I/O research at Argonne National Laboratory. In HPC-IODC: HPC I/O in the Data Center Workshop, Frankfurt 2015.
- Florin Isaila. Optimizing data staging based on autotuning, coordination, and locality exploitation on large scale supercomputers. In the 3rd Joint Laboratory for Extreme Scale Computing (JLESC) workshop, Barcelona 2015.
- Florin Isaila. CLARISSE: reforming the I/O stack of HPC platforms. In the 2nd Joint Laboratory for Extreme Scale Computing (JLESC) workshop, Chicago 2014.
- Florin Isaila. CLARISSE: Cross-layer abstractions and run-time for I/O stack of extreme scale systems. In Greater Chicago Area Systems Research Workshop, Chicago 2014.
- Florin Isaila. CLARISSE: Cross-layer abstractions and run-time for I/O stack of extreme scale systems. In USENIX File and Storage Technologies (FAST), Santa Clara 2014.
- Florin Isaila. CLARISSE: Cross-layer abstractions and run-time for I/O stack of extreme scale systems. In IEEE/ACM Supercomputing, Emerging Technologies, Denver 2013.
Collaborators
Prasanna Balaprakash (ANL)
Phil Carns (ANL)
Jesus Carretero (UC3M)
Francisco Duro (UC3M)
Javier Garcia (UC3M)
Kevin Harms (ANL)
Paul Hovland (ANL)
Dries Kimpe (ANL)
Emmanuel Jeannot (INRIA)
Rob Latham (ANL)
Tom Peterka (ANL)
Rob Ross (ANL)
Stefan Wild (ANL)
Contact
Florin Isaila
Email: fisaila at inf.uc3m.es