
How will advances in generative AI and large language models aid software sustainability and change the role of RSEs?


Author(s)

Abhishek Dasgupta, SSI fellow
Alex Clarke, SSI fellow
Aleksandra Nenadic, Training Team Lead
Gaurav Bhalerao, SSI fellow
Aisha Aldosery
Adam Ward
Simon Rolph
Matt Alexandrakis

Posted on 17 August 2023

Estimated read time: 7 min


[Image: a laptop showing an AI-generated image of a female profile with leaves for hair]

By Aisha Aldosery, Abhishek Dasgupta, Adam Ward, Simon Rolph, Alex Clarke, Aleksandra Nenadic, Matt Alexandrakis, and Gaurav Bhalerao.

This blog post is part of our Collaborations Workshop 2023 speed blog series. 

As technology advances, so does the field of research software engineering. With the emergence of large language models such as OpenAI's GPT-4, the way we approach research and development has changed drastically. These models can process vast amounts of text and produce human-like responses, and they are already reshaping sectors including healthcare, finance, and education. In this blog post, we explore the impact that large language models have on the work of research software engineers. We discuss the potential benefits and drawbacks of using these models in software development, as well as the new challenges that arise from integrating them into existing workflows. We also consider how research software engineers can leverage these models to improve their work, and what skills they need to develop to stay ahead of the curve. Whether you are a seasoned software engineer or a newcomer to the field, we hope this post offers useful insights into large language models and their impact on research software engineering.

Opportunities

Large language models (LLMs) offer several opportunities for research software engineers to improve their work and make it more efficient.

Technical documentation

Writing technical documentation for research software can be time-consuming, but LLMs make it easier to produce clear, concise descriptions of software functionality. Research software engineers can use them to draft technical blogs and software descriptions, making it easier for end users to understand how the software works, and to generate comments for their code, reducing the time spent writing them and improving readability. This frees research software engineers to focus on more creative and strategic tasks, such as designing new algorithms and features. LLMs can also convert complex scientific language into plain English, making research findings accessible to a wider audience.
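
As a minimal sketch of what this can look like in practice, the snippet below asks a hosted model to draft a docstring for a given function. It assumes the OpenAI Python SDK (version 1 or later) with an API key set in the environment; the model name, prompt wording, and the draft_docstring helper are illustrative choices rather than a recommendation, and any generated text should be reviewed by a human before it lands in the documentation.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_docstring(source_code: str) -> str:
        """Ask the model for a first-draft docstring; a human should review it."""
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You write concise NumPy-style docstrings for Python functions."},
                {"role": "user",
                 "content": f"Write a docstring for this function:\n\n{source_code}"},
            ],
        )
        return response.choices[0].message.content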

Programming and Coding

Generative models can also make code more accessible. For example, they can generate code from natural-language descriptions, or produce code that non-technical users can more easily understand. This can open research software and programming up to a wider range of people and facilitate collaboration between research software engineers and non-technical users. Research software engineers can use LLMs to generate unit tests for their code, reducing the amount of manual test-writing required. LLMs can also help translate code from one programming language to another, optimise code, and generate first drafts of code that can then be maintained manually.
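
To give a concrete, if contrived, illustration of the unit-test use case, below is a small hypothetical function and the kind of first-draft pytest tests an LLM typically suggests for it. Both the function and the tests are invented for this post; generated tests still need a human check for coverage and for subtly wrong expectations.

    import pytest

    # A small hypothetical function we might ask an LLM to write tests for.
    def mean(values: list[float]) -> float:
        """Return the arithmetic mean of a non-empty list of numbers."""
        if not values:
            raise ValueError("mean() requires at least one value")
        return sum(values) / len(values)

    # The kind of first-draft tests an LLM tends to propose: the happy path and
    # the error case. A human still decides whether that coverage is enough.
    def test_mean_of_positive_numbers():
        assert mean([1.0, 2.0, 3.0]) == pytest.approx(2.0)

    def test_mean_raises_on_empty_input():
        with pytest.raises(ValueError):
            mean([])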

Beginners in research software

LLMs can be used to build low-code solutions with natural-language interfaces for non-experts or new users. They can also help research software engineers learn new programming languages; ChatGPT, for example, can act as a tutor and guide. Research software engineers can also discuss new features from a user's perspective with an LLM to generate initial ideas and test their feasibility.

Overall, generative models have the potential to make a significant contribution to the sustainability of research software development. They can help research software engineers write code more quickly and easily, and improve code quality, readability, and accessibility. Used effectively, they can bridge the gap and increase collaboration between research software engineers and non-technical users.

Challenges

While recent large language models show enormous promise in accelerating the field of research software engineering, we should be cognizant of challenges that arise from the use of such a nascent technology.

Attribution and licensing

Most LLMs in broad use today are not open source. The data such models have been trained on has typically been obtained by scraping, and it is unclear what license restrictions apply to a work created by an algorithm trained on copyrighted data. One example is the verbatim reproduction of code released under licenses such as the GNU General Public License, whose terms would ordinarily preclude combining it with code under more permissive licenses. For tasks such as text generation, current LLMs do not offer a way to identify the source or inspiration of a quote, which could lead to unintentional plagiarism.

Correctness

While LLMs such as ChatGPT show remarkable general proficiency across a range of tasks, at their core these models are next-token predictors, and it is unclear at this stage of their development whether they have any real notion of understanding. LLMs have been shown to produce subtly incorrect code, and the concern is greater still for generated text, where models can fabricate links to articles and books that do not exist.
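
To make the "next-token predictor" point concrete, here is a deliberately toy sketch, nothing like how a production model is built, of greedy next-token generation over an invented probability table. The loop only ever picks a plausible-looking continuation; nothing in it checks whether the resulting statement is true, which is exactly why fabricated references can come out looking fluent.

    # Toy illustration of next-token prediction; the probabilities are invented.
    toy_model = {
        ("the", "paper", "was"): {"published": 0.6, "rejected": 0.3, "retracted": 0.1},
        ("paper", "was", "published"): {"in": 0.8, "by": 0.2},
        ("was", "published", "in"): {"Nature": 0.5, "2019": 0.5},
    }

    def generate(prompt: list[str], steps: int = 3) -> list[str]:
        tokens = list(prompt)
        for _ in range(steps):
            context = tuple(tokens[-3:])
            next_probs = toy_model.get(context)
            if next_probs is None:
                break
            # Greedy decoding: append the single most probable next token.
            tokens.append(max(next_probs, key=next_probs.get))
        return tokens

    print(" ".join(generate(["the", "paper", "was"])))
    # -> "the paper was published in Nature": fluent, but never checked against reality.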

Sensitive data and privacy

It is extremely important for individuals and organisations to be aware of the risks involved in using such models, and to take great care over the data they choose to submit in prompts. For sensitive data such as clinical datasets, one should also be aware of the guidelines that govern sharing and using such data with these models. Using AI models in sensitive fields such as health, medicine, and genetics can be risky, as the chances of false positives are high. Clear regulation and policy on how, by whom, and in which situations AI models may be used are therefore badly needed.
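
As one small illustrative safeguard, and no substitute for proper data governance or approvals, a prompt can at least be screened for obvious identifiers before it is sent to a third-party model. The patterns and placeholder text below are assumptions made up for this sketch:

    import re

    # Illustrative-only patterns for obvious identifiers; real clinical or personal
    # data needs proper governance and approval, not just regex redaction.
    REDACTION_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # assumed 10-digit format
    }

    def redact(prompt: str) -> str:
        """Replace obvious identifiers with placeholders before sending a prompt."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
        return prompt

    print(redact("Patient jane.doe@example.org, NHS number 943 476 5919, was seen on..."))
    # -> "Patient [EMAIL REDACTED], NHS number [NHS_NUMBER REDACTED], was seen on..."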

Impact on the software engineer community

Will software developers feel fulfilled using LLMs? On one hand, they can potentially work more efficiently; on the other, they may feel they are not developing their skills much. For example, do you gain more by solving a coding problem yourself than by asking an LLM for a solution? From a company's point of view, is it relying too much on LLMs rather than having employees develop their skills? The impact of this emergent technology is still unknown: it could affect the software engineering community either by putting more pressure on engineers to increase their productivity, or by reducing opportunities as such tools shrink the resource required.

Conclusion

In conclusion, large language models offer a wide range of opportunities for research software engineers, from automating tasks such as documentation writing and code commenting to, in the case of related generative models, producing media such as images and speech. However, their use also poses several challenges that need to be addressed to ensure software sustainability and integrity. To ensure the sustainability of software developed with LLMs, it is essential to develop safeguards and best practices that mitigate the associated risks, including the authenticity, ownership, and validity of the output they generate, as well as the environmental impact of training such models.

Moreover, research software engineers should strike a balance between using LLMs to automate tasks and developing their own problem-solving and critical-thinking skills. LLMs should be seen as a tool to assist and enhance software development, not a replacement for human intelligence and ingenuity. By addressing these challenges and adopting sustainable practices, research software engineers can continue to leverage the power of LLMs to develop high-quality, innovative, and sustainable software that meets the evolving needs of society.

