A Hands-on Introduction to HPC for Women in HPC in collaboration with PRACEDays15 @ EPCC

Posted on 2015-05-05 13:13:37 by Clair Barrass

25-26 May 2015

Dublin


Ballsbridge Hotel (Day 1) and Aviva Stadium Conference Centre (Day 2), Dublin, Ireland.

ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK. As part of our training service, in collaboration with the Women in HPC network (www.womeninhpc.org.uk) and the PRACE Scientific and Industrial Conference, we will be running a 1.5-day ‘Hands-on Introduction to HPC’ training session.


Register

This course provides a general introduction to High Performance Computing (HPC), using the UK national HPC service, ARCHER, as the platform for exercises. Familiarity with desktop computers is presumed, but no programming or HPC experience is required. Programmers can, however, gain extra benefit from the course, as source code for all the practicals will be provided.

This event is open to everyone interested in using HPC, but all our training staff will be women. We hope that this provides an opportunity for women to network and build collaborations, as well as to learn new skills for a challenging and rewarding career in HPC.

Please note that although the venue is different on each of the two days, this is to provide attendees with the opportunity to attend the PRACEDays opening session immediately after the close of the training session on the Tuesday. On day one the training will be run at the Ballsbridge Hotel, where many conference attendees will be staying; on day two the training will continue at the Aviva Stadium Conference Centre, less than 10 minutes' walk from the Ballsbridge Hotel.

PRACE Scientific and Industrial Conference (PRACEDays15) 
26 - 28 May 2015, Aviva Stadium Conference Centre, Dublin, Ireland
Registration cost: €60
PRACEDays15 will open immediately after the close of this workshop and will bring together experts from academia and industry who will present their advancements in HPC-supported science and engineering. The conference venue is also hosting day two of this training, and the conference is open to all attendees. If you wish to attend the conference, please note that registration needs to be completed separately at http://www.prace-ri.eu/pracedays15/




Single-sided PGAS Communications Libraries @ EPCC

Posted on 2015-05-05 13:10:57 by Clair Barrass


20-21 May 2015

University of Bristol

ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK. As part of our training service we will be running a two-day ‘Single-sided PGAS Communications Libraries’ training session.

Register

In some applications, the overheads associated with the fundamentally two-sided (send and receive) nature of MPI message-passing can adversely affect performance. This is of particular concern for scaling up to extremely large systems. There is a renewed interest in simpler single-sided communications models where data can be written/read directly to/from remote processes.

This two-day course covers two single-sided PGAS libraries: the OpenSHMEM standard (http://www.openshmem.org/) on day 1, and the new open-source GASPI library (http://www.gaspi.de/) on day 2. Hands-on practical sessions will play a central part in the course, illustrating key issues such as the need for appropriate synchronisation to ensure program correctness. All the exercises can be undertaken using C or Fortran. The OpenSHMEM material will be delivered by the ARCHER CSE team; the GASPI material will be delivered by Christian Simmendinger (T-Systems Solutions for Research). Further details of the GASPI training are available below.
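To give a flavour of this single-sided style, here is a minimal, illustrative C sketch using OpenSHMEM, in which every processing element (PE) writes a value directly into a neighbour's memory with no matching receive. It assumes an OpenSHMEM 1.2-style implementation (shmem_init/shmem_finalize); older implementations use start_pes instead.

/* Each PE puts its rank directly into the memory of the next PE.
 * 'static' makes src and dest symmetric objects, i.e. they exist
 * at the same address on every PE, as one-sided puts require. */
#include <stdio.h>
#include <shmem.h>

int main(void)
{
    static long src, dest;

    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    src = me;

    /* One-sided put: no receive is posted on the target PE. */
    shmem_long_put(&dest, &src, 1, (me + 1) % npes);

    /* Synchronisation is essential for correctness: the barrier
     * completes outstanding puts before anyone reads dest. */
    shmem_barrier_all();

    printf("PE %d received %ld\n", me, dest);

    shmem_finalize();
    return 0;
}

Note how correctness hinges on the explicit barrier: without it, a PE could read dest before the remote put has landed, exactly the kind of issue the practical sessions explore.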


Efficient Parallel Programming with GASPI

The HPC programmers of tomorrow will have to write codes that are able to deal with systems hundreds of times larger than the top supercomputers of today. In this Tutorial we present an asynchronous dataflow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI.

GASPI, which stands for Global Address Space Programming Interface, is a partitioned global address space (PGAS) API. The GASPI API is designed as a C/C++/Fortran library and focused on three key objectives: scalability, flexibility and fault tolerance. In order to achieve its significantly improved scaling behaviour, GASPI aims at asynchronous dataflow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small, yet powerful API (also see http://www.gaspi.de). Today GASPI is used in academic and industrial simulation applications.

The Tutorial gives an overview of the key concepts of GASPI, such as synchronization primitives, synchronous and asynchronous collectives, fine-grained control over one-sided read and write communication primitives, global atomics, passive receives, communication groups and communication queues. GASPI aims at multi-threaded execution, offers a thread-safe API and can be used in combination with all current threading models (OpenMP, Pthreads, MCTP, and others).

GASPI provides its partitioned global address space in the form of configurable memory segments and features support for heterogeneous memory architectures. All GASPI segments can directly read and write from/to each other. By spanning one GASPI segment across, for example, the main memory of a distributed Xeon Phi system and another across the memory of the corresponding x86 hosts, the GASPI API can provide a consistent and cohesive view of this hybrid distributed memory architecture. The flexibility of the configurable GASPI segments also allows developers to leverage multiple memory models within a single application and/or to tightly couple different applications globally (e.g. multi-physics solvers). GASPI is fault tolerant and allows for a dynamic (shrinking or growing) node set: all non-local procedures feature timeout parameters and provide a well-defined exit status.

The Tutorial will provide a hands-on introduction (in C and Fortran) featuring examples and use cases for all of GASPI's key concepts (segment creation, synchronization primitives, read and write communication primitives, global atomics, passive receives, collectives, communication groups and communication queues). Case studies demonstrating the various aspects of the GASPI API are presented, and application categories that can take advantage of these aspects are identified. The Tutorial also includes a discussion of the current GPI-2 release, the first release to support the GASPI standard, together with benchmark results on different platforms. Tools for programming with GASPI/GPI, such as profiling tools, are presented in a "howto" section.
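As a rough illustration of these concepts, the following C sketch (written against the GPI-2 implementation of the GASPI standard) creates a segment and performs a one-sided write with a notification, so the target learns of remote completion without posting a receive. It is a sketch only; exact constants and signatures should be checked against the GPI-2 headers.

/* Each rank writes a value into the segment of the next rank and
 * flags its arrival with a notification (remote completion). */
#include <GASPI.h>

int main(int argc, char *argv[])
{
    gaspi_rank_t rank, num;

    gaspi_proc_init(GASPI_BLOCK);
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&num);

    /* One configurable 1 MiB segment on every rank. */
    const gaspi_segment_id_t seg = 0;
    gaspi_segment_create(seg, 1 << 20, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    gaspi_pointer_t ptr;
    gaspi_segment_ptr(seg, &ptr);
    ((long *)ptr)[0] = rank;                 /* payload at offset 0 */

    /* Asynchronous one-sided write plus notification id 0 (value 1)
     * on queue 0: local offset 0 -> remote offset sizeof(long). */
    gaspi_write_notify(seg, 0, (rank + 1) % num, seg, sizeof(long),
                       sizeof(long), 0, 1, 0, GASPI_BLOCK);

    /* Remote completion: block until our own notification arrives. */
    gaspi_notification_id_t first;
    gaspi_notification_t    val;
    gaspi_notify_waitsome(seg, 0, 1, &first, GASPI_BLOCK);
    gaspi_notify_reset(seg, first, &val);

    gaspi_wait(0, GASPI_BLOCK);              /* flush queue 0 */
    gaspi_proc_term(GASPI_BLOCK);
    return 0;
}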





Starts: 20 May 2015, 09:30
Ends: 21 May 2015, 17:00
Timezone: Europe/London

Venue: University of Bristol, Education Support Centre, 8-10 Berkeley Square, Bristol BS8 1HH (Room G10/11)


No material yet




Efficient Parallel IO on ARCHER @ EPCC

Posted on 2014-06-22 09:34:08 by luke tweddle

One of the greatest challenges to running parallel applications on large numbers of processors is how to handle file IO: standard IO routines are not designed with parallelism in mind. Parallel file systems such as Lustre are optimised for large data transfers, and performance can be far from optimal if many files are opened at once.

The IO part of the MPI standard gives programmers access to efficient parallel IO in a portable fashion. However, there are a large number of different routines available and some can be difficult to use in practice. Despite its apparent complexity, MPI-IO adopts a very straightforward high-level model. If used correctly, almost all the complexities of aggregating data from multiple processes can be dealt with automatically by the library.

The first day of the course will cover the MPI-IO standard, developing IO routines for a regular domain decomposition example. It will also briefly cover higher-level standards such as HDF5 and NetCDF.
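As a sketch of what the first day builds towards, the example below writes a regular 2D domain decomposition to a single shared file with MPI-IO. The grid sizes and the simple 1D decomposition are illustrative choices, not the course's actual exercise.

/* Each process owns an NX x NY patch of a global grid; a subarray
 * filetype and a collective write produce one shared file. */
#include <mpi.h>

#define NX 4
#define NY 4

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local[NX][NY];
    for (int i = 0; i < NX; i++)
        for (int j = 0; j < NY; j++)
            local[i][j] = rank;

    /* Describe this process's patch within the global array
     * (processes stacked along the first dimension). */
    int gsizes[2] = { size * NX, NY };
    int lsizes[2] = { NX, NY };
    int starts[2] = { rank * NX, 0 };
    MPI_Datatype filetype;
    MPI_Type_create_subarray(2, gsizes, lsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "grid.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* The file view maps each process onto its own patch; the
     * collective write lets the library aggregate the data. */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native",
                      MPI_INFO_NULL);
    MPI_File_write_all(fh, local, NX * NY, MPI_DOUBLE,
                       MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}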

The second day will concentrate on ARCHER, covering how to configure the Lustre file system for best performance and how to tune the Cray MPI-IO library. Case studies from real codes will also be presented.

Prerequisites: The course assumes a good understanding of basic MPI programming in Fortran, C or C++. Knowledge of MPI derived datatypes would be useful but not essential.

Date: 02-03 September 2014

Location: EPCC

Full course details and registration available on the PRACE website.


Message-Passing Programming with MPI @ EPCC

Posted on 2014-06-21 15:08:10 by luke tweddle

The world’s largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.

Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together.

This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
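For readers new to the model, here is a minimal illustrative C example of explicit message passing with MPI: one process sends a message, and the other posts a matching receive.

/* Rank 0 sends one integer to rank 1; run with at least 2 processes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}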

The course is normally delivered in an intensive three-day format using EPCC’s dedicated training facilities. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts.

Intended Learning Outcomes

On completion of this course students should be able to:

Understand the message-passing model in detail.
Implement standard message-passing algorithms in MPI.
Debug simple MPI codes.
Measure and comment on the performance of MPI codes.
Design and implement efficient parallel programs to solve regular-grid problems.

Pre-requisite Programming Languages:

Fortran, C or C++. It is not possible to do the exercises in Java.

Date: 02-03 October 2014

Location: SURFsara, Amsterdam, Netherlands.

Full course details and registration available on the PRACE website.


ARCHER Software Carpentry boot camp and Introduction to Scientific Programming in Python @ EPCC

Posted on 2014-06-07 11:02:36 by luke tweddle

ARCHER, the UK's new national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK. As part of our training service we are running a three-day Software Carpentry boot camp and Introduction to Scientific Programming in Python.

Software Carpentry boot camps help researchers become more productive by teaching software development skills that enable more to be done, in less time, and with less pain. We will cover skills including version control, task automation, good programming practice and automated testing. These are skills that, in an ideal world, researchers would master before tackling anything with "cloud" or "peta" or "HPC" in their name, skills that enable researchers to optimise their time and provide them with a secure basis to optimise and parallelise their code.

Our Introduction to Scientific Programming in Python will provide an introduction to Python on ARCHER. We will introduce Python's capabilities for scientific computing, in particular the Cython, mpi4py, NumPy, SciPy and matplotlib Python libraries. We will also introduce how to interface Python with C and Fortran codes.

To attend, you must have some experience of writing code or scripts and be familiar with programming concepts including conditionals, loops, arrays and functions. You should also be comfortable with using the bash shell. For an introduction to the shell, please see, for example, Software Carpentry's lessons on the Unix Shell.

The course will be hands-on, and you are encouraged to bring your own laptop (you'll be asked to install some software before you arrive). Alternatively, PCs will be provided for use.

Date: 21-23 July 2014

Location: Cranfield University, College Road, Cranfield, Bedfordshire, MK43 0AL

Full course details and registration available on the PRACE website.
