
Conference on Science and the Internet





Düsseldorf, Germany, 1–3 August 2012.

Event website.

By Aleksandra Pawlik, Agent and PhD student, the Open University.


  1. What defines e-research, and what will be the next paradigm after Big Data?
  2. The Scientific Study of Public Communication on Twitter in Australia – developing metrics for research on Twitter
  3. The discussion about developing clearer policies for Open Science

Conference report

How are the digital humanities defined? What are the best approaches to researching social media? What metrics can be applied to studying social behaviours in online environments? What is the purpose and importance of scientific blogs? How can new media be employed to promote science and establish a dialogue with a wider audience? And how can we make sure that large national online data repositories retain context-sensitive information?

These were just some of the questions asked and discussed during the Conference on Science and the Internet, which took place in Düsseldorf, Germany, from 1 to 3 August 2012. The conference had a multidisciplinary character: participants discussed how new technologies and media are used in, and may influence, contemporary academia, and how the rich online world is studied by scholars from different disciplines.

Ralph Schroeder (http://www.oii.ox.ac.uk/people/?id=26) from the Oxford Internet Institute talked about the different definitions of e-research. He pointed out that one of the main criteria underlying such definitions is the “use of tools and data for the distributed and collaborative production of knowledge”. However, as the paradigms of e-research have shifted, from supercomputers through grids and Web 2.0 to clouds and Big Data, the definitions and perceptions of e-research may continue to evolve. New technologies change research practices. Whilst this process can bring many benefits, scholars should also be aware of some of its shortcomings, notably an effect of ‘automatisation’: the further separation of researchers from their objects of study by the research technologies they employ. In addition, the expansion of e-research makes it easy for researchers to share their work artifacts and results. Hence, academics need to work further on developing clearer policies with regard to Open Science.
During the conference there were more presentations and discussions about Open Science. They focused on definitions of ‘openness’ in science, the relevant legal matters, reuse and sharing of research artifacts, and reproducible science. One interesting issue raised was the capability of current technologies for discovering plagiarism: the most sophisticated systems are only able to detect ‘blunt’ plagiarism, that is, cases in which a piece of text was copied and slightly edited. Plagiarism of ideas can still be discovered only by human readers. Another issue was the practice, common in science, of sharing data or software. Scientists are sometimes keen to do this (for example, by sharing their source code); from the legal point of view, however, sharing the outcomes of scholarly work may not be entirely straightforward.

The issues related to setting up large national online data repositories for toxicology were discussed by Deborah Keil (http://www.path.utah.edu/education/mls/keil/) and Kenning Arlitsch (https://faculty.utah.edu/u0028451-KENNING_ARLITSCH/research/index.hml) from the University of Utah. Building national-scale repositories and making large datasets available to researchers brings many advantages to the institutions that do so. However, there are several challenges which may outweigh the advantages. Firstly, the repositories may be built on dispersed databases, some of which may not always be maintained and available. Secondly, web search engines (including Google Scholar, which is increasingly used by academics) do not always pick up the information stored in the repositories (for example, preprints). Thirdly, the lack of consistent ontologies creates many problems and much confusion. Finally, the repositories may sometimes use software whose provenance may not allow free use.

Axel Bruns (http://staff.qut.edu.au/staff/bruns/) and Jean Burgess (http://cci.edu.au/profile/jean-burgess) from Queensland University of Technology, Brisbane, Australia, talked about their recent study of public communication on Twitter. One of the main issues they raised was the challenge of developing comprehensive and sustainable tools and methodologies for studying Twitter. Although tools for gathering data from Twitter already existed, they did not meet the needs of the project, so a piece of software had to be developed using Twitter’s API. The presenters pointed out that big datasets containing information about social media users do exist; however, they are often commercial and not available for researchers to use. Dealing with ethical issues also poses problems in gathering and handling data from Web 2.0. The presenters concluded that collecting and analysing large amounts of data from social media would require well-established metrics, which scholars from various institutions would be likely to use and reuse in their studies.

Many of the discussions during the Conference on Science and the Internet focused on methodologies and tools for researching new media. Other topics covered various aspects of scholars’ use of the internet. All these talks and presentations led towards more reflective questions and discussions on how new technologies change research and what researchers should be aware of when it comes to e-Science.
