

The times listed approximate the maximum time spent on each section; sections may run shorter depending on the session.

  1. Introduction to Open Science Practices (Lecture) - 80 minutes
    1. Why should we do Open Science?
    2. A Brief Disclaimer
    3. Preregistration
      1. Side Note: Registered Reports
    4. Open Data
    5. Open Materials
    6. Publishing Papers
      1. Open Access
      2. Preprints
    7. Break and Q&A
  2. Working with the Open Science Framework (Interactive) - 40 minutes
    1. OSF Project
    2. OSF Preregistration
    3. Break and Q&A
  3. Debugging and Fixing Reproducibility (Interactive) - 30 minutes
    1. What software do you use?
    2. Reproducibility as relevant to the participants
    3. Break and Q&A
  4. Audience Choice (Interactive) - 30 minutes
    1. Examples include:
      • Applications in Existing Papers
      • Understanding Git
      • Understanding Docker
      • etc.

Tutorial Structure

The tutorial will occur over half a day and focuses on three goals: introducing common open science practices and their use within education technology, walking through an example of using the Open Science Framework to create a project, post content, and preregister studies, and applying the learned practices and additional reproducibility mitigation strategies to previous papers.

  1. First, we will present an overview of a few common problems that arise when conducting research. Using this as a baseline, we will introduce open science and its practices and show how they can eliminate some of these issues and mitigate others. In addition, we will attempt to dispel some misconceptions about these practices.
  2. Second, we will provide a live example of using the Open Science Framework (OSF) website to make an account, create a project, add contributors, add content and licensing, and publicize the project for all to see. Afterwards, we will walk through creating a preregistration, explain best practices, and show how to create an embargo. Additional features and concerns, such as anonymizing projects for review and the steps required to do so properly, will also be demonstrated.
  3. Third, we will discuss reproducibility concerns that arise when providing datasets and materials. This will review commonly used software and languages (e.g., Python, R and RStudio, Visual Studio Code) and how, without deliberate steps, most work tends to be extremely tedious to reproduce or not reproducible at all. Afterwards, we will provide mitigation strategies to address these concerns.
  4. Finally, we will take a few papers, each containing different issues, and apply the steps needed to reproduce the results within each paper.
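
One of the simplest mitigation strategies discussed in the third session is recording the exact interpreter and package versions an analysis depends on, so that others can rebuild the same environment later. The sketch below is a minimal, illustrative example of this idea in Python; the helper name and the choice of packages are our own, not part of the tutorial materials.

```python
# Minimal reproducibility aid: report the Python interpreter version and
# the installed versions of the packages an analysis script relies on.
import sys
from importlib import metadata


def environment_report(packages):
    """Map each package name to its installed version (or a placeholder)."""
    report = {"python": sys.version.split()[0]}
    for name in packages:
        try:
            report[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            report[name] = "not installed"
    return report


if __name__ == "__main__":
    # "pip" stands in for whatever libraries the analysis actually imports.
    for name, version in environment_report(["pip"]).items():
        print(f"{name}=={version}")
```

Saving this output alongside a dataset (or converting it into a `requirements.txt`) lets a reader reconstruct the environment instead of guessing which versions produced the original results.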