
Teaching the Next Generation of Software Engineers


Author(s)

Jacalyn Laird

Posted on 6 May 2021

Estimated read time: 7 min


Photo by James Harrison on Unsplash

By Peter Crowther (Research Software Engineer, IT Services, University of Manchester) and Suzanne Embury (Reader in the Department of Computer Science, University of Manchester).

Originally posted on the University of Manchester's Research IT news.

Find out how University of Manchester Research Software Engineers have been working with researchers from Computer Science to bring coding to the next generation.

The Institute of Coding (IoC) is a UK-wide consortium of educators, employers and outreach organisations which aims to address the UK digital skills gap. In 2018, Suzanne Embury and Caroline Jay from the Department of Computer Science and Robert Haines (Head of Research IT) were awarded IoC funding to develop improved tools and techniques for teaching software engineering at university undergraduate level. The researchers sought help from the Research IT group (RIT), which was able to supply Research Software Engineers to develop the required new systems.

The IoC project at Manchester was organised into several themes, with RIT members contributing to two of them:

  • RoboTA: A Continuous Marking and Feedback System for Software Engineering.
  • Empirical Analysis of Learning Challenges in Software Engineering.

In this blog post we cover the first theme – an automated marking and feedback system.

RoboTA – A Robot Teaching Assistant

A significant challenge of teaching is providing prompt feedback to learners. At modern universities in particular, the ratio of students to staff tends to be high, and providing feedback takes a great deal of time and resources. As a result, feedback is typically only provided at the final assessment stage. It would be valuable to provide continuous feedback while students work: errors could be identified and corrected as they are made, and students would be able to progress their understanding more quickly.

A coursework-based introductory software engineering course is a good target for an automated assessment system as it has several favourable qualities:

  • The subject is technical, which means students make simple errors that can prevent them from making progress – this makes a rapid feedback system valuable.
  • The subject requires that students regularly commit new versions of their work as the project progresses – this provides opportunities for feedback as part of the regular student workflow.
  • A proportion of the assessment is based on whether students have correctly followed a defined process rather than on the end result – this makes automated assessment easier.

Our contribution to this goal is RoboTA – an automated system that provides students with feedback as they do their coursework. RoboTA has been designed and deployed for two undergraduate software engineering units in the Department of Computer Science at UoM, and it has been made as flexible as possible so that it can be extended to other courses in the future.

RoboTA is implemented in the Python programming language and is made up of several modular components:

robota-core

This is the central component of RoboTA which collects information about student work from several data sources. This component is designed to be general and reusable and is used by the other components.
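
To give a flavour of the kind of data robota-core gathers, the sketch below (our own illustration, not robota-core's actual API) uses GitPython to collect one record per commit from a student repository:

```python
# Illustrative only: robota-core's real API may differ. This sketch shows the
# kind of repository data an assessment tool needs to collect.
from dataclasses import dataclass
from datetime import datetime

import git  # GitPython


@dataclass
class CommitRecord:
    author: str
    timestamp: datetime
    message: str
    files_changed: int


def collect_commits(repo_path: str, branch: str = "main") -> list[CommitRecord]:
    """Gather one record per commit on the given branch of a student repository."""
    repo = git.Repo(repo_path)
    records = []
    for commit in repo.iter_commits(branch):
        records.append(
            CommitRecord(
                author=commit.author.name,
                timestamp=commit.committed_datetime,
                message=commit.message.strip(),
                files_changed=len(commit.stats.files),
            )
        )
    return records
```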

robota-marking

The marking component provides the framework and the assessment logic for the COMP23311 teaching unit at UoM. Much of this assessment code is specific to that unit, though the structure could be reused to develop assessment for other teaching units. This part of RoboTA has not yet been released publicly, but we hope to do so soon.
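
As a purely hypothetical illustration of what such a framework might look like (the real robota-marking structure may differ), each criterion could pair a weight with a check over the collected student data – here, simply a list of commit messages:

```python
# Hypothetical sketch, not robota-marking's actual structure: each assessment
# criterion pairs a weight with a check over the collected student data.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Criterion:
    name: str
    weight: float
    check: Callable[[list[str]], bool]


def mark(commit_messages: list[str], criteria: list[Criterion]) -> dict:
    """Return per-criterion pass/fail results and a weighted total mark."""
    results = {c.name: c.check(commit_messages) for c in criteria}
    total = sum(c.weight for c in criteria if results[c.name])
    return {"results": results, "total": total}


criteria = [
    Criterion("at least five commits", 0.2, lambda msgs: len(msgs) >= 5),
    Criterion("no empty commit messages", 0.3, lambda msgs: all(m.strip() for m in msgs)),
]
```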

robota-common-errors

This component is designed to be more general than robota-marking, identifying common errors in software engineering practices. This component can be applied to any software repository.
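
As an illustration (the function name and signature are ours, not the package's API), a common-error check of this kind might flag commits whose messages are too short to describe the change:

```python
# Illustrative example of a "common errors" style check, applicable to any
# Git repository via GitPython: flag commits with uninformative messages.
import git


def find_uninformative_commits(repo_path: str, min_words: int = 3) -> list[str]:
    """Return the messages of commits with fewer than min_words words."""
    repo = git.Repo(repo_path)
    return [
        commit.message.strip()
        for commit in repo.iter_commits()
        if len(commit.message.split()) < min_words
    ]
```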

robota-progress

robota-progress is a progress dashboard for monitoring the contributions of different team members to a software repository. It was used as part of a team coursework exercise on the COMP23412 teaching unit at UoM. The learner analytics team collected feedback about student perceptions of the progress dashboard, and these results have been published in the Journal of Systems and Software.
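
As a rough illustration of the kind of summary such a dashboard might show (not the actual robota-progress implementation), commits per author can serve as a simple proxy for individual contribution:

```python
# Sketch of one summary a progress dashboard might display: commits per team
# member, counted from the repository history. Illustrative only.
from collections import Counter

import git


def contributions_by_author(repo_path: str) -> Counter:
    """Count commits per author in the team's repository."""
    repo = git.Repo(repo_path)
    return Counter(commit.author.name for commit in repo.iter_commits())
```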

Lessons learned from RoboTA

robota-progress was used for the 2019 and 2020 runs of the COMP23412 unit, and robota-marking was deployed to a cohort of over 200 students for the 2020 run of the COMP23311 unit, providing real-time feedback. Though some early bugs were encountered in the assessment of some components, these were quickly resolved and student feedback was very positive. Finding out what works and what doesn't is part of the outcome of the IoC project, so here we report some key findings.

The most important consideration is that the course content, and how it is assessed, must be prepared with automated assessment in mind. This means finding areas where students have problems and creating tasks which have a chance of exposing those problems. The assessment must also be sufficiently simple to automate. Tasks based on assessment of natural language, tasks where the input provided by the student varies significantly, or tasks with qualitative aspects that cannot be objectively evaluated are poor targets for automation. Good targets for automated assessment are those where there is a clear “correct” answer or where assessment is based on the student following a well-defined process.
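
For example, one objectively checkable criterion is whether the student's test suite passes. A minimal sketch (the paths and options are illustrative) might simply run pytest in the student's repository and report the result:

```python
# Minimal sketch of a clear-cut, automatable check: does the student's test
# suite pass? Runs pytest in the student's repository via subprocess.
import subprocess


def tests_pass(repo_path: str) -> bool:
    """Run pytest in the student's repository and report whether it succeeded."""
    result = subprocess.run(
        ["python", "-m", "pytest", "--quiet"],
        cwd=repo_path,
        capture_output=True,
        text=True,
    )
    return result.returncode == 0
```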

We found writing assessment code to be time-consuming. We have developed RoboTA to be an adaptable framework for accessing student data, but each course or piece of work being assessed requires specific code to check whether students have met the assessment criteria. This links back to the previous point: assessment criteria need to be considered carefully to make them easier to assess automatically.

Since this project started as a prototype, we did not develop a rigorous testing framework for the assessment code. This came back to bite us when we deployed the code and many bugs were encountered. While the students were good at identifying problems, there is a concern that if marks are not accurate then students will not trust the system. Comparison with data from previous years found that the automated assessment was, in many cases, more accurate and consistent than marks given by teaching assistants; however, students are more likely to spot any errors when marks and feedback are broken down in a detailed way. There is some comparison to be made here with autonomous vehicle technology – sometimes automated systems are held to higher standards than their human counterparts!
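
In hindsight, even lightweight regression tests for the assessment code would have caught many of these bugs early. A minimal sketch (the criterion shown is illustrative) is to run each marking check against hand-built fixture data with a known expected result:

```python
# Minimal sketch of a regression test for assessment code: exercise a marking
# criterion against fixture data with a known expected outcome (run with pytest).
def all_messages_non_empty(messages: list[str]) -> bool:
    """Criterion: every commit message must be non-empty."""
    return all(m.strip() for m in messages)


def test_empty_message_fails_criterion():
    assert all_messages_non_empty(["Add login form", "Fix failing test"])
    assert not all_messages_non_empty(["Add login form", ""])
```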

RoboTA is already proving useful at UoM, and we also hope that we have developed some useful reusable components which can continue to be developed in the future. If you have any questions about RoboTA please contact Suzanne Embury.
