Universe-HPC Training Pilot Explores Hybrid Delivery


Author(s)

Steve Crouch, Software Team Lead

Philippa Broadbent

Posted on 25 October 2023

Estimated read time: 8 min

Training is always evolving, both in terms of the skills researchers need to accomplish their work and in how that training is provided. The Universe-HPC project aims to define a training curriculum framework – spanning from undergraduate to continuing professional development level – for Research Software Engineers (RSEs) specialising in high performance computing (HPC), and in August we piloted a small online training workshop to not only road-test new training materials, but also to explore new ways to deliver that training.

This pilot was a little different from ones we've run before. Given the nature and level of the materials being piloted, the training targeted researchers whose research software skills sat somewhere between novice and intermediate. To reach this audience, many of our learners were recruited from a pool of postgraduate researcher (PGR) helpers who had previously signed up to assist with novice-level training courses. We also experimented with a hybrid model of delivery, using live coding for the novice lessons and supported self-learning for the subsequent intermediate topics, and continued trialling a new training infrastructure developed by the Oxford Research Software Engineering group, which combines a way to host training materials with some useful features to support learners and instructors.

Dual Delivery Styles

From as far back as 1998, the Carpentries (then known as Software Carpentry) pioneered hands-on methods of delivering software engineering and data science skills training to researchers using live coding. This has proven to be very effective - between 2012 and 2022 the Carpentries taught over 4,000 workshops in 65 countries and trained about 100,000 novice learners. The COVID pandemic meant that in-person training was no longer possible, and online options needed to be explored. Due to the increased accessibility offered by online training, many training courses have continued online even after the pandemic; indeed, delivering Carpentry courses online can be effective with the right preparation. Online training has also given rise to further innovation in delivery, for example supported self-learning - a tutorial-based approach in which topics are introduced lecture-style and learners then work through training materials in small online breakout groups, with instructors on hand in each room to help with questions and issues.

Teaching novice learners often requires a different approach from teaching intermediates. The Carpentries' pedagogically evidence-based instructor training, for example, teaches the importance of helping novices build a working mental model of a topic, so proceeding carefully and not overloading learners is critical. Competent practitioners, on the other hand, typically have a solid mental model of the fundamentals of how something works, and respond well to more flexible methods of teaching intermediate skills. Accordingly, novice learners may respond better to a code-along delivery method, whereas intermediate learners may prefer supported self-learning.

The August pilot was targeted at researchers above the novice level but not yet intermediate - learners with some experience of established tools and development techniques. However, the pre-workshop survey told us the cohort sat more towards the novice end of the spectrum - perhaps not entirely surprising given the very high demand for novice-level training - which indicated that, to reach our target audience, we need to be more specific in how we advertise our workshops in future!

The Importance of Feedback

Collecting feedback in ways that inform future improvements to the materials - and that help build an overall picture of how things are going - is valuable for any training workshop, but central to a pilot. We collected information and feedback at a number of key points.

Firstly, we conducted a pre-workshop survey, which established a baseline of prior knowledge across the cohort by asking representative ability-based questions for each of the key lessons. For example, for the version control lesson, we asked whether learners could check out a repository, add a new file and commit the change (with responses of "No", "Yes, with documentation", or "Yes, without documentation"). The responses to this survey informed how we taught each workshop lesson.

We also employed surveys at the end of each session, asking about the pace, difficulty, level of engagement, effectiveness as a learning experience, level of instructor support, and usefulness of the material to the learners' own research. These surveys also gave learners the opportunity to help us steer the following days of training: for example, if a number of learners indicated that the pace was too fast, the instructor could slow down in the next session.

Our final post-workshop survey asked what worked well and what needs improvement for the pilot overall, and also asked the same ability-based questions as the pre-workshop survey a second time. By comparing the responses between the pre- and post-workshop surveys, we can work out the extent to which learning has effectively taken place for each topic.
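As a rough illustration of this comparison - a hypothetical sketch in Python rather than the project's actual analysis, with made-up sample data - the ordinal survey responses can be coded numerically and the shift in the cohort's mean score computed per topic:

```python
# Hypothetical sketch of the pre/post survey comparison described above.
# The response labels come from the surveys; the sample data and function
# names are invented for illustration.

RESPONSE_SCORE = {
    "No": 0,
    "Yes, with documentation": 1,
    "Yes, without documentation": 2,
}

def mean_score(responses):
    """Mean ability score for one topic across a cohort."""
    return sum(RESPONSE_SCORE[r] for r in responses) / len(responses)

def learning_gain(pre, post):
    """Shift in mean ability score between the two surveys."""
    return mean_score(post) - mean_score(pre)

# Example: responses to the version control question.
pre = ["No", "Yes, with documentation", "No"]
post = ["Yes, with documentation", "Yes, without documentation", "Yes, with documentation"]
print(f"Mean gain: {learning_gain(pre, post):+.2f}")  # prints: Mean gain: +1.00
```

A positive gain on this 0-2 scale suggests the cohort moved towards being able to perform a task unaided, though with small cohorts such figures are indicative rather than conclusive.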

Trainer notes were also used, in which instructors recorded key issues encountered and any solutions found. These are naturally useful for knowing what and how to improve in the future, but they also serve as a central tool for instructors in different breakout rooms to exchange information and learn from each other's experiences as the pilot progresses.

The Pilot

The online pilot was conducted using Microsoft Teams over three afternoons for 19 learners from across the University of Southampton. The first two afternoons - delivered using live coding - covered short introductions to the Bash shell and Python, and version control using Git. The final afternoon covered unit testing and continuous integration, topics we typically aim at intermediate learners.
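To give a flavour of that final afternoon's content - this is an illustrative sketch rather than an excerpt from the pilot's actual materials - a typical unit testing exercise pairs a small function with tests written for a framework such as pytest:

```python
# Illustrative example in the style of intermediate unit testing material;
# the function and tests are hypothetical, not taken from the pilot's lessons.
import pytest

def fahr_to_celsius(fahr):
    """Convert a temperature from Fahrenheit to Celsius."""
    return (fahr - 32) * 5 / 9

def test_freezing_point():
    assert fahr_to_celsius(32) == 0

def test_boiling_point():
    # pytest.approx guards against floating-point rounding error
    assert fahr_to_celsius(212) == pytest.approx(100)
```

Run locally with the pytest command, the same tests can then be executed automatically on every change by a continuous integration service - which is the link between the two topics taught that afternoon.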

The hybrid model of delivery was well received. The first two afternoons of live coding were generally very successful, with the vast majority of learners responding well to the amount of content, pacing and level of difficulty, although on reflection we needed more time for Python given the cohort was at a more novice level than anticipated. The smooth running of the novice sessions is perhaps not surprising, given that longer versions of these lessons have been delivered many times by the instructors.

The final afternoon of supported self-learning for unit testing and continuous integration was also well received, and respondents felt that help was available to them when they needed it. However, whilst the more intermediate learners were satisfied with the self-learning approach, there was a notable reduction in the level of engagement with the material, particularly among the more novice learners (in the session surveys, engagement fell from averages of 6.9/10 (n=11) and 8.4/10 (n=9) for the first two days to 5.5/10 (n=4) for the final day). It was suggested that more time to digest the material would be useful, and that more points of scheduled engagement between instructors and learners would help maintain momentum, which we'll take forward to future workshops. When asked about the effectiveness of the hybrid method of using both instructor-led and self-taught delivery together, respondents were very positive (in the post-workshop survey, an average of 7.8/10, n=6, on a scale from "not at all effective" to "very effective"). This gives us some validation that a mixed mode of delivery is a good way to structure such workshops for cohorts above novice level, albeit with some improvements.

The infrastructure we used to host the materials was also very well received by learners (an average of 9.3/10, n=6, across all respondents for the ease of following materials hosted on it). A key aim of the pilot was to deploy and use this infrastructure outside Oxford for the first time, and it performed very well. It will prove even more useful as a tool for piloting materials in the future as further planned features are implemented by the Oxford team, such as ways for both instructors and learners to highlight and annotate problems and potential enhancements in training materials in real time.

Overall, the course was rated very highly as a learning experience: respondents indicated an average of 8.3/10 (on a scale from "not at all effective" to "very effective"), and felt that what they learnt during the course will be very useful in their research (an average of 7.9/10 across all topics, on a scale from "not at all useful" to "very useful").

We also gathered a wealth of useful qualitative feedback to improve the course, which has already resulted in many refinements to the training material and hosting infrastructure that will benefit future pilots and workshops. We also plan to write up the pilot's experiences and feedback as a publicly available report, and use them to build guidance for others who aim to organise and deliver training pilots of their own.
 
