
GAP computer algebra system mailing list: 17th EuroMPI conference



*

CALL FOR PARTICIPATION
(early bird deadline: June 30, 2010)

*
17th EuroMPI conference
Stuttgart, Germany
12-15 September 2010
http://www.eurompi2010.org
*

ABOUT THE CONFERENCE

MPI (Message Passing Interface) has evolved into the standard
interface for high-performance parallel programming in the
message-passing paradigm. EuroMPI is the most prominent meeting
dedicated to the latest developments in MPI: its use, support
tools, and implementations, as well as applications built on these interfaces.

The 17th European MPI Users' Group Meeting will be a forum for users and
developers of MPI and other message-passing programming environments.
Through the presentation of contributed papers, poster presentations and
invited talks, attendees will have the opportunity to share ideas and
experiences to contribute to the improvement and furthering of
message-passing and related parallel programming paradigms.
Topics of interest for the meeting include, but are not limited to:
- MPI implementation issues and improvements
- Latest extensions to MPI
- MPI for high-performance computing, clusters and grid environments
- New message-passing and hybrid parallel programming paradigms
- Interaction between message-passing software and hardware
- Fault tolerance in message-passing programs
- Performance evaluation of MPI applications
- Tools and environments for MPI
- Algorithms using the message-passing paradigm
- Applications in science and engineering based on message-passing
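For readers new to the paradigm the conference is built around, the following is a minimal sketch of message passing between two processes. It uses Python's standard-library multiprocessing pipes purely as an illustrative stand-in for MPI's point-to-point send/receive; it is an analogy, not MPI code.

```python
# Illustrative sketch of the message-passing paradigm (NOT MPI itself):
# two processes with no shared state exchange explicit messages.
from multiprocessing import Process, Pipe

def worker(conn):
    msg = conn.recv()              # blocking receive, analogous to MPI_Recv
    conn.send(msg.upper())         # send a reply, analogous to MPI_Send
    conn.close()

def ping_pong(payload="ping"):
    parent_end, child_end = Pipe() # a two-ended channel between processes
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(payload)       # send to the peer process
    reply = parent_end.recv()      # receive its reply
    p.join()
    return reply

if __name__ == "__main__":
    print(ping_pong())  # prints "PING"
```

In real MPI programs, the same pattern is expressed with ranks inside a communicator and calls such as MPI_Send and MPI_Recv, typically launched across many nodes with mpirun.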

EuroMPI 2010 will include 'Outstanding Papers' sessions, in which the best
papers selected by the program committee will be presented.


REGISTRATION

Early bird registration ends June 30th, 2010.
http://www.eurompi2010.org/registration/


INVITED SPEAKERS

- Jack Dongarra (UTK and ORNL, USA)
"The Challenges of Extreme Scale Computing"

- Jan Westerholm (Åbo Akademi University, Finland)
"Observations on MPI usage in large scale simulation programs"

- Rolf Hempel (Deutsches Zentrum für Luft- und Raumfahrt e.V., Germany)
"Interactive Visualization of Large Simulation Datasets"

- Jesus Labarta (Barcelona Supercomputing Center, Spain)
"Detail at scale in performance analysis"

- William Gropp (University of Illinois Urbana-Champaign, USA)
"Does MPI Make Sense For Exascale Systems?"



LIST OF ACCEPTED PAPERS
(listed alphabetically)

Mohamed Abouelhoda and Hisham Mohamed
. Parallel Chaining Algorithms

Rakhi Anand, Edgar Gabriel and Jaspal Subhlok
. Communication Target Selection for Replicated MPI Processes

Pavan Balaji, Darius Buntinas, David Goodell, William Gropp, Jayesh Krishna, Ewing Lusk and Rajeev Thakur
. PMI: A Scalable Parallel Process Management Interface for Extreme-Scale Systems

Aurelien Bouteiller, George Bosilca, Pierre Lemarinier, Thomas Herault and Jack Dongarra
. Dodging the Cost of Unavoidable Memory Copies in Message Logging Protocols

Ron Brightwell, Kurt Ferreira and Rolf Riesen
. Transparent Redundant Computing with MPI

Gabor Dozsa, Sameer Kumar, Pavan Balaji, Darius Buntinas, David Goodell, William Gropp, Joseph Ratterman and Rajeev Thakur
. Enabling Concurrent Multithreaded MPI Communication on Multicore Petascale Systems

Richard Graham, Ishai Rabinovitz, Pavel Shamis, Noam Bloch and Gilad Shainer
. Network Offloaded Hierarchical Collectives Using ConnectX-2's CORE-Direct Capabilities

Timo Heister, Martin Kronbichler and Wolfgang Bangerth
. Massively Parallel Finite Element Programming

Scott Hemmert, Brian Barrett and Keith Underwood
. Using Triggered Operations to Offload Collective Communication Operations

Torsten Hoefler, Greg Bronevetsky, Brian Barrett, Bronis de Supinski and Andrew Lumsdaine
. Efficient MPI Support for Advanced Hybrid Programming Models

Torsten Hoefler and Steven Gottlieb
. Parallel Zero-Copy Algorithms for Fast Fourier Transform and Conjugate Gradient using MPI Datatypes

Torsten Hoefler, William Gropp, Rajeev Thakur and Jesper Larsson Träff
. Toward Performance Models of MPI Implementations for Understanding Application Scaling Issues

Michael Hofmann and Gudula Ruenger
. An In-place Algorithm for Irregular All-to-All Communication with Limited Memory

Joshua Hursey, Chris January, Mark O'Connor, Paul Hargrove, David Lecomber, Jeffrey Squyres and Andrew Lumsdaine
. Checkpoint/Restart-Enabled Parallel Debugging

Vivek Kale and William Gropp
. Load Balancing Regular Meshes on SMPs with MPI

Rainer Keller and Richard L. Graham
. Characteristics of the Unexpected Message Queue of MPI applications

Seong Jo Kim, Yuanrui Zhang, Seung Woo Son, Ramya Prabhakar and Mahmut Kandemir
. Automated Tracing of I/O Stack

Dries Kimpe, David Goodell and Robert Ross
. MPI Datatype Marshalling: A Case Study in Datatype Equivalence

Jayesh Krishna, Pavan Balaji, Ewing Lusk, Rajeev Thakur and Fabian Tillier
. Implementing MPI on Windows: Comparison with Common Approaches on Unix

Jesper Larsson Träff
. Compact and Efficient Implementation of the MPI Group Operations

Jesper Larsson Träff
. Transparent neutral element elimination in MPI reduction operations

Teng Ma, George Bosilca, Aurelien Bouteiller and Jack Dongarra
. Locality and Topology aware Intra-node Communication Among Multicore CPUs

Stephanie Moreaud, Brice Goglin and Raymond Namyst
. Adaptive MPI Multirail Tuning for Non-Uniform Input/Output Access

Thorvald Natvig and Anne C. Elster
. Run-Time Analysis and Instrumentation for Communication Overlap Potential

Akihiro Nomura and Yutaka Ishikawa
. Design of Kernel-level Asynchronous Collective Communication

Paul Sack and William Gropp
. A Scalable MPI_Comm_split Algorithm for Exascale Computing

Michel Schanen, Uwe Naumann and Michael Förster
. Second-Order Adjoint Algorithmic Differentiation by Source Transformation of MPI Code

Jerome Soumagne, John Biddiscombe and Jerry Clarke
. An HDF5 MPI virtual file driver for parallel in-situ post-processing

Sarvani Vakkalanka, Anh Vo, Ganesh Gopalakrishnan and Robert Kirby
. Precise Dynamic Analysis for Slack Elasticity: Adding Buffer Without Adding Bugs

MORE INFORMATION

Additional information about the conference can be found at:
http://www.eurompi2010.org
