How to Conduct QM Research

Have an Idea for a Research Topic?

Get help with selecting measurable outcomes and framing your study in the Designing Quality Online Learning Research workshop.

The QM Research toolkit is designed to guide anyone interested in doing research on the impact of Quality Matters. The steps move through the process of conducting a research project, giving an explanation for each step along with examples. The toolkit begins with information on developing a QM-focused research question that will be the basis of the project.

Who should be conducting research related to QM?

This research toolkit is designed for new researchers, as well as experienced researchers who are unfamiliar with the Quality Matters working principles, processes, and procedures.

Set yourself up for success

Designing a research study on the impact of Quality Matters requires awareness, comprehension, analysis, synthesis, and evaluation of the scholarly online/distance education knowledge base, as well as a background in the principles and application of the QM process within a system of quality assurance. These reflective processes will better prepare the researcher to make sound judgments in developing a study that can inform and continuously improve the QM knowledge base, which, in turn, will improve online/distance education through research and practice.

Interested in learning more about QM-focused research?

Contact Barbra Burch, Manager of Research and Development, or Bethany Simunich, Director of Research and Innovation.

Step One: [Develop your research question] What do you want to know?

Your research question will come from a broader topic. What really interests you about online learning? What interests you about participating with Quality Matters? After exploring further, you will begin to narrowly define a doable research question related to what you want to learn more about or to what you predict might be an answer to a specific set of conditions. In education, a research question can be focused on what, how, who, and why (qualitative research), or it can be focused on the projected relationships of specific variables (quantitative research). Or, it can be a combination of the two approaches (mixed research). Jumping to a research question too early - before discovering, analyzing, and evaluating what research has already been done on the topic/s - is tempting, but the temptation must be resisted for the research to be significant.


  • Part 1

    Find out what's already known, based on previous research findings, relating to your topic. Review the research literature to become better informed about what other researchers have already written about the topic, especially as such pieces might relate to the aspect of QM and online learning that you want to explore. Becoming familiar with what research has been done on a particular topic is the hallmark of a well-designed study and will enable you to design a study built on existing knowledge rather than only personal assumptions/theories. (This review also saves a lot of re-inventing-the-wheel types of "research.")

    1. Explore what has been written on your general topic in the peer-reviewed journals.
      • Do not be surprised if you find a lot of information, sometimes conflicting, on a general topic (for example, interaction). It is important to become familiar with different perspectives and information on your topic of interest.
      • Make sure you are gathering information from research, not opinion pieces. Be alert when reading what might be called a "case study" in a blog or "how-to" professional development piece. Often these are actually descriptive write-ups without the rigor of actual case study methodology. (Example 1)
    2. Use the QM Research Library as a first step in identifying possible leads on your general topic as it might relate to the QM Standards.
    3. Become familiar with approaches and findings from previous QM-focused studies and with the "what we're learning" papers on QM-focused research.
  • Part 2

    Determine what you want to learn from your study. Can you answer the Who, What, Where, Why, How, and When of your research question? Is your question too broad to be doable with the resources/time you're likely to have available for the study? Does your question for this particular study need to be narrowed?

    For more about the process of narrowing down from topic to question, see the following videos:

    1. Narrowing Your Topic
    2. Choosing and Narrowing Research Topic

    Can you use visualization to map relationships of interacting and/or inter-related components of your basic research question?

    Since your research question will guide the design of a study on the impact of QM, some decisions must be made to narrow your broad interest into a project that is both valid and doable.

    How are you going to define the impact of QM? What kind of impact? Impact on what? How? By whom? Why?

    1. Example 1 of a research question: "What is the impact of QM on student learning?"
      • While that might describe a general research topic, it is too broad to be a doable research question. We don't know what (how, when) aspect of participation in (which component of) QM is to be explored (by whom and why), and we don't know how (what, who, why, when) student learning is defined or how/when it will be measured.
    2. Example 2 of a research question: "What is the impact of QM on student learning as measured by final course grades of students enrolled in an online undergrad nursing program at a dual-mode institution in the three-year period before and after all courses have met QM Standards and faculty have completed QM's APPQMR course?"  
      • This question is more specific and doable.
      • Examples of hypotheses for this question might be (see the sketch after this list):
      • H1: Student grades will be significantly higher in courses that have met QM Standards and are taught by QM-informed instructors.
      • Ho: There is no significant difference in student grades in courses that have met QM Standards and are taught by QM-informed instructors. (This would be a null hypothesis.)
    3. Example 3 of a research question: "What is the impact of QM on retention?"
      • This question describes a broad interest area, one that is almost too broad without a definition of "retention." We don't know what (how, when) aspect of participation in QM will be explored (by whom and why), and we don't know how (what, who, why, when) "retention" is defined or how/when it will be measured.
    4. Example 4 of a research question: "What is the impact over a five-year period of online courses meeting QM Standards and faculty completion of QM professional development courses at a Historically Black College or University on student withdrawals and passing and failing grades?"
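
    To make the contrast between H1 and Ho in Example 2 concrete, here is a minimal sketch of how such a hypothesis might be tested. The grades, group sizes, and choice of an independent-samples t-test are invented for illustration only:

      # Hypothetical final course grades (percentage scale) for students in courses
      # that met QM Standards (taught by QM-informed instructors) vs. comparison courses.
      from scipy.stats import ttest_ind

      grades_qm     = [78, 85, 91, 73, 88, 84, 79, 90, 86, 82]  # QM-reviewed courses
      grades_non_qm = [71, 80, 75, 68, 83, 77, 70, 79, 74, 72]  # comparison courses

      # Independent-samples t-test; a small p-value would lead us to reject Ho.
      t_stat, p_value = ttest_ind(grades_qm, grades_non_qm)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")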
  • Part 3

    Define the key terms necessary to study your question. Terms such as interaction, engagement, and retention must be defined. Your review of the literature will greatly assist you in defining key terms in online learning. If it hasn't, you'll need to go back into the literature to explore definitions and dimensions of a term.

  • Part 4

    Determine why and to whom answering this particular question is important. Who will benefit from the findings of your study, and why will they benefit? Who are the stakeholders for the information that you want to discover from this particular study?

    Examples of linking research topics to existing research and building a specific QM-focused research question.

Step Two: [Identify a theoretical or conceptual framework] What perspective will guide your research?

At this point, you'll want to step back a bit from the project so that you can reflect on the conceptual framework you will be using as a guide in developing the actual study. This process begins, to a degree, in Step One: deciding on the viewpoint from which you will organize the interrelated concepts (as informed by your review of the literature) to form a research question.

Your theoretical or conceptual framework will be the basis for asking your research question and processing the data. This is not unlike the alignment feature in a QM-certified course! Your research question contains assumptions (theories/concepts) that will shape the study's activities and how they are evaluated.

What is the Meaning of Theoretical Framework?

Theoretical Framework

Theoretical and Conceptual Framework


  • Identifying Theoretical or Conceptual Frameworks

    Acknowledge your overall research aim:

    Do you want your research project to be a subjective (in the true research sense; see Subjectivity in Research), reflective study aimed at improving a specific educational situation? If so, then you will be doing action research. This will likely be Qualitative Research. Action research focuses on the participant experience/perspective instead of being prescriptive or starting with a hypothesis. (See diagrams of Action Research and ActionResearch.net.) Action research is the rigorous, reflective study of an aspect of education undertaken while the researcher is actively engaged in the process. The aim is improvement of the educational process and outcomes.

    Do you plan on using a specific theoretical perspective to "test it out" within an authentic educational situation (for example, an active online course or program)? If so, then you will be doing Design-Based Research. Design-based research focuses on "specific theoretical claims about teaching and learning and helps us understand the relationships among educational theory, designed artifact, and practice." This type of research will likely be qualitative. "The aim is for the advance of educational theories of teaching and learning in complex settings" (The Design-Based Research Collective, 2003, p. 1; see What is Design-Based Research?), thereby improving educational practice, processes, and outcomes.

    1. Your theoretical framework acknowledges the belief system that will guide and support your study. Your orientation to learning will impact your approach to using QM (see Learning Theory: Models, Product, and Process).

    2. Do you want to provide numbers that are analyzed statistically to support a predictive statement you make to answer your research question? If so, then you will have specific hypotheses and be doing quantitative research (Quantitative Methods). The aim of quantitative research is to explain the relationship among specific variables by using mathematically based methods of data collection and analysis, which can then be used to make future predictions of those relationships.

  • Learning Theory Examples
  • The Importance of Re-Reviewing the Literature

    Now, and this is important, make sure you don't latch on to a theory or construct/framework/model without going back into the review of the research literature to see if/how it has been applied to online learning already! For example, you might discover the community of inquiry framework (which comes from a constructivist theoretical perspective) and think, bingo, that's the one that I can use to guide my research study.

    Without becoming familiar with how that framework/model has already been applied to online learning, even to QM, you will be setting yourself up for a "re-inventing the wheel" study. While strategic replication is important in research, we need to make sure we're building on/refining/advancing what is known for continuous improvement in online learning.

    A few more resources. These are instructional design models you might find useful when designing your study.

  • Relationship Between Theoretical and Conceptual Model

    An example of the relationship between the theoretical and conceptual framework would be the application of the community of inquiry framework/model to guide the methodological design/analysis of a study. The theoretical framework would be a social constructivist one from which the CoI framework was developed.

  • Example 1

    Analyzing Predictors of Faculty Behavior to Engage in Peer Review

    2013 QM Research Grant supported study done by Altman, Schwegler, & Bunkowski at Texas A & M University/Central TX

    The proposed research directly ties the implementation of QM Standards to improve online course design to a widely accepted behavior prediction model, the "Theory of Planned Behavior" (Ajzen, 1991). Linking participation in the peer review process to the larger theory of behavior prediction will enable us to predict who will successfully complete the peer review process and provide an indication of the attitudes, norms, perceived behavioral control, and intentions regarding the process. We will subsequently link these measured constructs to actual behavior through completion of the internal peer review and ultimately an external QM review in future research. We know of no research that has attempted to examine these constructs, and we expect this aspect of our research agenda to make a substantial contribution to the QM research literature.

    Through this proposed research we can

    (a) identify which factors, measured as indirect indicators of attitudes, norms, and perceived behavioral control, are most closely associated with successful revision of courses and completion of peer reviews based on QM Standards,

    (b) identify which factors are most likely to serve as obstacles to course improvement consistent with QM Standards, and

    (c) serve as a guide to identifying factors to target and alter to make faculty more likely to successfully implement QM Standards in their courses. Empirically examining our attempts to overcome the obstacles identified in the present research is a future avenue of research.

  • Example 2

    Does Findability Matter?: Findability, Student Motivation and Self-Efficacy in Online Courses

    2012 QM Research Grant supported study done by Simunich, Robins, & Kelly at Kent State University

    There are several course components that it is imperative for students to locate early in the course, such as instructions for getting started, a self-introduction by the instructor, and a place for student introductions. All of these components may be present and written in a clear manner, but are they easily findable? This project is a first step in determining whether this "search time," or ease of findability, impacts student learning. For example, if students need to search for course essentials, how does their frustration level impact their motivation? At what point do they stop searching? Further, if "essential items" that students need early on in a course, such as the syllabus, are hard to find, how does that influence student perception of course or instructor quality? It is important to investigate the potential barrier posed to students if they have to spend time interpreting the learning environment. Logically, if students need to spend time finding essential course components, this may result in spending less time learning the course content or engaging in course participation. Perhaps more notably, low findability and the frustration that accompanies it may impact not only student learning, but also course attrition.

    Unfortunately, as noted by Fisher and Wright (2010), "...there is little research regarding the implementation of usability testing in academia, especially in online course development" (p. 1). While past research has shown a direct effect of "system usability" (i.e., LMS software usability) on student performance (Tselios, Avouris, Dimitracopoulou, & Daskalaki, 2001), there is a paucity of research on the effect of usability in the e-learning environment, and apparently no research on findability specifically. This study attempts to address that gap, and to investigate findability and its relation to student perception of course quality and overall experience. The opportunity to improve online learning with such a study is substantial, as it could discern whether best practices in user-centered design, such as findability, are specifically correlated with increased student learning.

  • Example 3

    The Development of Technological Pedagogical Content Knowledge (TPACK) in Instructors Using Quality Matters Training, Rubric, and Peer Collaboration

    2011 QM Research Grant supported study done by Ward at University of Akron

    TPACK (Technological, Pedagogical, Content Knowledge) is a conceptual framework that explains the complex, multifaceted, and contextual nature of teacher knowledge of the relationship between technology, content, and pedagogy (Mishra & Koehler, 2006). Shulman (1986) wrote of specialize[d] knowledge that teachers develop at the intersection of content and pedagogy (PCK), and Mishra and Koehler expanded on this with the introduction of technological pedagogical content knowledge (TPCK). The TPACK framework outlines overlap[ping] areas that explain complementary knowledge that has to be developed in order to integrate technology in a way that supports content and pedagogical decisions. In the figure below and in their seminal work, Mishra and Koehler illustrate the discrete domains of Technological Knowledge (TK), Pedagogical Knowledge (PK), and Content Knowledge (CK) and the overlapping areas representing [the] area of combined new knowledge: Pedagogical/Content Knowledge (PCK), Technological/Content Knowledge (TCK), Technological/Pedagogical Knowledge (TPK), and the middle overlap of all three circles called TPCK-Technological, Pedagogical, Content Knowledge.

    TPACK has emerged as a useful conceptual framework for understanding the teacher knowledge base needed for effectively teaching with technology (Voogt et al., 2011). In the development of online learning, the technology typically becomes the first topic of discussion, but TPACK discussions have clearly helped instructors think about pedagogical and content organizational changes that are relevant in this work.

    Technological Pedagogical Content Knowledge Venn Diagram

    Reproduced by permission of the publisher, © 2012 by tpack.org, per http://www.tpack.org/

    Figure 1. Technological, Pedagogical, Content Knowledge (Mishra & Koehler, 2011)

    Archambault and Oh-Young (2009) examined how teachers prepare to teach in online environments. They found that the three major components (technology, content, and pedagogy) needed to ensure quality instruction were well expressed in the TPACK conceptual framework. In their investigation of key issues specific to online teaching, they found the TPACK framework particularly useful.

    The TPACK framework has allowed instructors of online learning to view each component in an intentional way reflecting on their own knowledge of technology, content and pedagogy, but for some instructors a new look at these areas in an overlapping context can be challenging. Mishra and Koehler (2006) write about this new awareness for instructors working in online environments:

    For instance, consider faculty members developing online courses for the first time. The relative newness of the online technologies forces these faculty members to deal with all three factors, and the relationships between them, often leading them to ask questions of their pedagogy, something that they may not have done in a long time (p. 1030).

  • Example 4

    Linking Online Course Design and Implementation to Learning Outcomes

    2011 QM Research Grant supported study done by Swan, Bogle, Matthews, & Day at University of Illinois/Springfield

    The Community of Inquiry (CoI) framework (Garrison, Anderson & Archer, 2000), on the other hand, does address learning processes. It addresses them, moreover, from a collaborative constructivist point of view. Building from the notion of social presence, the CoI framework represents online learning experiences as a function of relationships among three presences: social, teaching, and cognitive. The CoI framework views all three as working together to support deep and meaningful learning processes. Indeed, research findings have linked social presence (Swan & Shih, 2006), teaching presence (Shea, Li, Swan, & Pickett, 2005) and cognitive presence (Garrison & Cleveland-Innes, 2005) to each other and to such outcomes as course satisfaction, community and perceived learning.

    In 2008, researchers working with the CoI framework developed a survey designed to measure student perceptions of each of these presences. The survey consists of 34 items (13 teaching presence, 9 social presence, and 12 cognitive presence items) that ask students to rate their agreement on a 5-point Likert scale (1 = strongly disagree; 5 = strongly agree) with statements related to the CoI framework (see Appendix B). The survey has been validated through factor analysis (Arbaugh et al., 2009; Swan et al., 2008) and used to further explore the CoI framework and the interactive effects of all three presences (Garrison, Cleveland-Innes & Fung, 2010; Shea & Bidjerano, 2009) with some meaningful results. For example, researchers have linked 21% of the variance in program retention to two social presence survey items (Boston et al., 2010). It should be noted, however, that perceptions are a subjective measure, and while that is very appropriate in the constructivist frame, it may not be appropriate everywhere.

    Accordingly, CoI researchers have recently begun exploring ways to link it to course outcomes (Arbaugh, Bangert & Cleveland-Innes, 2010; Boston et al., 2010). Quality Matters (QM) researchers have begun likewise investigating the relationship between course redesign and course outcomes. The research reported in this paper explores links between course design (as measured by the QM Rubric), learning processes (as measured by the CoI survey), and course outcomes.
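
    As an illustration of working with the 34-item CoI survey described in this example, here is a minimal sketch of computing per-student subscale scores for the three presences. The responses and the mean-item scoring approach are hypothetical illustrations, not the instrument's prescribed procedure:

      # Score a hypothetical batch of 34-item CoI survey responses: 13 teaching
      # presence, 9 social presence, and 12 cognitive presence items, each rated
      # on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
      import numpy as np

      rng = np.random.default_rng(1)
      responses = rng.integers(1, 6, size=(50, 34))  # 50 students x 34 items

      subscales = {
          "teaching presence":  responses[:, 0:13],
          "social presence":    responses[:, 13:22],
          "cognitive presence": responses[:, 22:34],
      }

      # Mean item rating per presence, per student, then summarized across students.
      for name, items in subscales.items():
          per_student = items.mean(axis=1)
          print(f"{name}: M = {per_student.mean():.2f}, SD = {per_student.std(ddof=1):.2f}")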


Step Three: [Design your study] What, why, when, how, and where will you do a study that will provide you with the data you need to answer your specific research question?

What's your plan for actually being able to conduct the research necessary to answer your question, to get some outcomes from your research project? Your conceptual or theoretical framework and your question will lead you to a research design. You will already have thought about this when writing your research question and when acknowledging your theoretical framework.

  • Quantitative (positivist/empiricist)

    See Research and Methodology.

    Example 1: Analyzing Predictors of Faculty Behavior to Engage in Peer Review

    2013 QM Research Grant supported study done by Altman, Schwegler, & Bunkowski at Texas A & M University/Central TX

    This quantitative program of research has the individual faculty member who created an online course as its unit of analysis.

    The faculty will provide the data for this research, collected using four instruments:

    1. "Intentions Survey," designed using the "Theory of Planned Behavior," consists of 79 items divided into categories of Direct Measures (Past Behavior, Attitudes, Norms, Behavior, Intentions) and Indirect Measures (Behavioral Beliefs, Outcomes Evaluations, Normative Beliefs, Motivations to Comply, Control Beliefs, Power of Control Factors).
    2. "TAMUCT" [the institution] "Online Course Self-Review Worksheet" using the 2011-2013 QM Rubric
    3. "Online Coordinators Review Worksheet" using the 2011-2013 QM Rubric
    4. Peer Review Community combined QM Rubric Worksheet utilizing QM's web-based "Course Review Management System" (CRMS)

    Example 2: Measuring Online Course Design: A Comparative Analysis

    2013 QM Research Grant supported study done by You, Xiao, Ballard, Hochberg, & Walters at University of Toledo

    This study attempts to achieve three objectives. First, it attempts to validate the instrument, designed based on QM Standards, to measure online course design. Second, it attempts to analyze the data and understand to what degree the selected courses meet QM Standards from a student's perspective. Third, it attempts to identify whether there is a gap between students' and QM-certified reviewers' perspectives on QM essential Standards. The results of this study can be used by QM reviewers to identify that gap and better understand students' perspectives.

    Example 3: Effect of Student Readiness on Student Success in Online Courses

    2013 QM Research Grant supported study done by Geiger, Morris, & Subocz at College of Southern Maryland

    The research team hypothesizes that student success in well-designed courses (those that meet the Quality Matters Standards) that are taught by experienced, engaged faculty is most influenced by student readiness factors, including individual attributes (such as motivation), life factors, comprehension, general knowledge, reading rate and recall, and typing speed and accuracy. A goal of the study is to determine which of these factors correlate most closely to student success. Student readiness will be assessed using the nationally normed SmarterMeasure instrument and correlated to student course retention and course grade.

  • Qualitative (constructivist/phenomenological)

    See Qualitative Research.

    Example 1: The Development of Technological Pedagogical Content Knowledge (TPACK) in Instructors Using Quality Matters Training, Rubric, and Peer Collaboration

    2011 QM Research Grant supported study done by Ward at University of Akron

    The project will collect multiple data sources to study the process and impact of the QM training, Rubric, and peer collaboration model on helping instructors construct new knowledge in the areas of TPACK. The research team hopes to develop a process theory of how the QM Rubric is implemented and integrated as a catalyst to inform and guide online instructors for quality design and instruction. The team believes that the situated process theory will provide contextually rich, situationally specific, and practically applicable descriptions of the key elements of a model for online transformative teaching and learning. Quality design and instruction in this study will be measured by comparing baseline and exit observation data regarding course design, delivery process, instructional interaction, and participants' feedback, using the QM Rubric through the lens of the TPACK framework.

    Both baseline and outcome data will be collected by observing volunteers who participate in the QM training. Data sources will include observation of the content, structure, and organization of the course design; observation journals on student-instructor interaction; and student engagement (the amount of time navigating the course, discussion transcriptions, and training activity participation) to identify themes and monitor instructional change.

  • Pragmatist (mixed research)

    See Mixed Research and Online Learning: Strategies for Improvement.

    Example 1: The Impact of Findability on Student Motivation, Self-Efficacy, and Perceptions of Online Course Quality

    2012 QM Research Grant supported study done by Simunich, Robins, & Kelly at Kent State University

    This will be a pre-test/post-test mixed study. The unit of analysis is the individual, as we will be evaluating students' responses (both conscious and unconscious) to findability in specific components of an online course, one version that meets and another that does not meet QM Standard 6.3 (Navigation throughout the online components of the course is logical, consistent, and efficient.).

    Prior to the findability testing, participants will complete a pre-assessment measure composed of demographic questions, experience with technology/online courses, and feelings of self-efficacy in online education. The findability testing will have each student go through the provided course to find 3-5 course components (gleaned from QM Rubric essential Standards, such as the getting-started statement, learning objectives, grading policy, etc.). Findability will be triangulated using three measures: time-on-task, eye-tracking (including search patterns and pupil dilation), and number of clicks (one heuristic for web design is that nothing should be more than three clicks away from the starting point). Post-assessment measures will evaluate participants' feelings of self-efficacy in the specific course, as well as measures of motivation taken from the Intrinsic Motivation Inventory (such as those related to perceived confidence and pressure/tension). The participants will also give retrospective think-alouds about their findability experience, providing qualitative data for the study.

  • QM Expertise

    Since your study will focus on a narrow question about the impact of QM, you'll need to take on, to some degree, the role of a QM content expert to develop a study that can provide meaningful information. An example: Setting up a study in which student final course grades from the semester prior to a formal QM course review will be compared to final course grades in the following semester is of little value without detail on exactly what was updated in the course as a result of the QM review. Without that detail, there is no information to indicate that the course had not already been one of quality, essentially meeting QM Standards even before the official course review. Only when we understand the "before" condition of the course can we make statements on the impact of the QM review and revision process. Without that information, we cannot tell whether any difference in grades is likely to be the result of different students in the course or a different instructor.

    Which aspect of QM do you need to focus on to answer your question? For example, do you want to investigate the "impact" (how will this be defined?) on whom or what of participation (how will this be defined?) in QM faculty development (which? who?)? Of participation in the QM review process (formal? informal? as reviewer or course developer?)? Of courses designed using QM Standards successfully meeting formal course reviews? Of having departmental or institutional implementation (as you define that) of the QM process?

  • Unit of Analysis

    From whom or what will you be gathering and analyzing data in your study? What will be the unit of analysis?

    Example 1: Does Findability Matter?: Findability, Student Motivation and Self-Efficacy in Online Courses. 2012 QM Research Grant supported study done by Simunich, Robins, & Kelly at Kent State University

    The unit of analysis is the individual, as we will be evaluating students' responses (both conscious and unconscious) to findability in specific components of an online course, one version that meets and another that does not meet QM Standard 6.3 (Navigation throughout the online components of the course is logical, consistent, and efficient.).

    Example 2: Measuring Online Course Design: A Comparative Analysis. 2013 QM Research Grant supported study done by You, Xiao, Ballard, Hochberg, & Walters at University of Toledo

    Stage one of the project involves content analysis, coding, and data input (SPSS statistical software) of student Discussion Board submissions (unit of analysis: student post) according to the Rubric/item description proposed by Garrison et al. (2000) and Arbaugh et al. (2008).

  • Study Participants

    Will you have study participants, or will you use existing institutional data? Keep in mind that if your research involves human subjects, you will need to submit an application to your institution's Institutional Review Board (IRB).

    If you will have study participants, who will they be?

    If faculty, what other factors, such as online teaching experience, have been suggested during your review of the research literature that might have a relationship to answering your question about the impact of QM? If students, what other factors, such as prior success in online courses, have been suggested? (By now, you would have already defined what you mean by "impact of QM" in your narrowed study question.) How will you get participants? From where? When? Why? If you will be investigating existing data sources, such as the LMS or data gathered independently by the institution's Institutional Research (IR) department, you will need to establish a close working relationship with that department for the study.

  • Sample Size

    How will you determine the appropriate sample size? Of course, identification of your sample size is influenced by your research design approach: quantitative, qualitative, or mixed.

    Some helpful resources are available for determining sample size, especially if you are using quantitative or mixed methodologies.
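
    One common approach for quantitative designs is a statistical power analysis. The sketch below uses statsmodels; the effect size, alpha, and power values are conventional illustrations, not QM recommendations:

      # Estimate the sample size needed per group to detect a medium effect
      # (Cohen's d = 0.5) at alpha = .05 with power = .80 in a two-group t-test.
      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
      print(f"Approximately {n_per_group:.0f} participants needed per group.")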

  • Data Needs and Analysis

    What data would you need to answer your question? Why? How would you gain access to the data? When?

    Example 1: A Study of the Relationship between Quality Matters Standard 5 Level of Interaction and Academic Performance in Online Learning. 2014 QM Research Grant supported study done by de la Rosa at University of Texas/Pan American

    Sampling Plan

    Researchers will obtain University of Texas -- Pan American IRB approval prior to the start of the study. Participants recruited to take part in the study are students enrolled in online courses at the University of Texas -- Pan American.

    Example 2: Quality Matters Rubric as "Teaching Presence": Application of CoI Framework to Analysis of the QM Rubric's Effects on Student Learning. 2010 QM Research Grant supported study done by Hall at Delgado Community College

    Much of the methodology used to date in assessing the effectiveness of online learning can best be described as an exploratory qualitative approach, resting on an interpretation-based approach to text analysis. Arguably, a transition to a phase that utilizes both qualitative and quantitative approaches to studying online learning communities would prove beneficial towards studying larger inter-disciplinary and inter-institutional samples over time (Garrison and Archer 2003; Arbaugh et al. 2008).

    This project will examine the online learning dynamics over a substantial period of time (between the Fall 2006 academic semester and the Fall 2009 academic semester) across 15 sections of the same course (Introductory Sociology) taught by the same instructor. The total sample size included in the proposed analysis is 335 students (with 160 students included in the pre-QM Rubric subsample and 175 students in the post-QM Rubric implementation subsample).

    The size of the sample makes it possible to combine both the text-interpretation-based content analysis (used in coding student postings) and inferential statistics (Multiple Regression Models reflecting temporal changes in student presences attributable to the QM Rubric implementation, and Structural Equation Models estimating latent structures of the two forms of student presence and their effects on student performance).

    Data Collection Methods

    Archived sections of the Introductory Sociology course taught by the principal investigator in this project during the period of Fall 2006 - Fall 2009 semesters will be retrieved.

    Stage one of the project involves content analysis, coding, and data input (SPSS statistical software) of student Discussion Board submissions (unit of analysis: student post) according to the Rubric/item description proposed by Garrison et al. (2000) and Arbaugh et al. (2008). Students' submissions to Regular Discussion Board Forums as well as submissions to Mid-Term and Final Research Papers Discussion Forums will be included in the data set.

    • The following concepts/indicators of the three presences will be used in this project (and will be further operationalized at this stage of project design): (i) Teaching Presence (estimated based on the instructor's self-evaluation): Design & Organization: Setting Curriculum & Methods (implementation of the QM Rubric will be used as a dichotomous proxy measure of Design and Organization); Facilitating Discourse: Sharing Personal Meaning; Direct Instruction: Focusing Discussion; (ii) Social Presence: Affective Expression: Emoticons; Open Communication: Risk-free Expression; Group Cohesion: Encourage Collaboration; (iii) Cognitive Presence: Exploration: Information Exchange; Integration: Connecting Ideas; Resolution: Apply New Ideas
    • The complete data set will include 8 sections taught prior to QM Rubric implementation (Fall 2006 - Spring 2008) as well as 7 sections of the course taught after the QM Rubric implementation (Summer 2008 - Fall 2009 semesters).

    The second stage of the analysis will consist of multivariate analysis of changes in the indicators of the three presences over the period of 3.5 years, for each section of the class, for the duration of each semester, including pre-QM and post-QM Rubric implementation.

    The third stage of the analysis will consist of Structural Equation Modeling (EQS 6.1 statistical software) measuring the effects of social, cognitive, and teaching presences on individual student performance/grades in each section of the course, both pre- and post-QM Rubric implementation. The purpose of this analysis is to demonstrate that increased "teaching presence" (arguably resulting from QM Rubric implementation) is causally related to social and cognitive student presences as well as individual student performance in the course.

Step Four: How/when/by whom will the data be analyzed?

Developing an analysis plan

Work with a statistician or methodologist at your institution to ensure selection of, and access to, the analytical tools needed for your QM-focused study.

  • Example 1 : Measuring Online Course Design: A Comparative Analysis

    2013 QM Research Grant supported study done by You, Xiao, Ballard, Hochberg, & Walters at University of Toledo

    Method

    Instrument

    The online course design evaluation, a questionnaire with 27 Likert-scale questions (1 = to little or no extent; 5 = to a very great extent) and three open-ended questions, will be designed by the instructional design team based on Quality Matters Standards. Feedback will also be obtained from a professor in the field of research and measurement. The instrument focuses on the design aspects of online courses.

    Data Collection

    Student Data

    The instrument has been used at the university to collect feedback from students about the design aspects of online courses since Fall 2011. The project team identified three online courses for this project. One course was offered in Fall 2011, and 35 students completed the course design evaluation survey. Two courses were offered in Spring 2012; 18 students completed the evaluation in the first course, and 20 students completed it in the second.

    Reviewer Data

    Three reviewer reports have been collected on each of the three courses. All three reviewers are Quality Matters-certified reviewers, and they were trained to review online courses from a student's point of view. However, this review is not an official review, none of the reviewers are subject matter experts in the fields of study of these courses, and there are no Master Reviewers on the review team.

    Data Coding and Analysis

    To achieve the first two objectives of the project, data collected from the three online courses were analyzed separately with Winsteps. Winsteps is Windows-based software that assists with many applications of the Rasch model, particularly in the areas of educational testing, attitude surveys, and rating scale analysis (Linacre, 2009).

    To achieve the third objective of this project, the students' results are converted into a measure that is comparable to the reviewers' ratings. Student responses of To a Great Extent ("4") or To a Very Great Extent ("5") are treated as at or above the 85% level and coded as "1." Student responses of To a Moderate Extent ("3"), To Some Extent ("2"), and To Little or No Extent ("1") are treated as below the 85% level and coded as "0." Following a majority-rule principle, if 2/3 of the students select To a Great Extent ("4") or To a Very Great Extent ("5") for an item in the survey, then the course meets that Specific Standard from a student's perspective. See Tables 1, 2, and 3.

    Three QM-certified Peer Reviewers reviewed the three courses according to QM Standards and recorded their scores in a spreadsheet. If a Standard is met, "1" is recorded for the Standard; if a Standard is not met, "0" is recorded. If two of the three (2/3) Peer Reviewers marked a Specific Standard as met, then the course meets that Standard from a Peer Reviewer's perspective. The data were assembled in a spreadsheet and analyzed with SPSS. A nonparametric Mann-Whitney U test (two independent samples) was used to evaluate the difference in medians between the two groups (students and Peer Reviewers). The two groups are independent of each other, even though Peer Reviewers are asked to take a student's view when completing course reviews. (A sketch of this recoding and test follows.)
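
    Here is a minimal sketch, with hypothetical data, of the recoding and Mann-Whitney U test described above; the group sizes, responses, and use of scipy in place of SPSS are illustrative assumptions:

      # Recode hypothetical student Likert responses and reviewer met/not-met scores
      # per Specific Standard, then compare the two independent groups.
      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(0)
      # Hypothetical Likert responses (1-5) from 20 students on 6 Specific Standards.
      student_likert = rng.integers(1, 6, size=(20, 6))
      # Responses of 4 ("To a Great Extent") or 5 ("To a Very Great Extent") -> 1, else 0.
      student_binary = (student_likert >= 4).astype(int)
      # A Standard is met from the students' perspective when at least 2/3 choose 4 or 5.
      met_students = (student_binary.mean(axis=0) >= 2 / 3).astype(int)

      # Hypothetical met (1) / not met (0) ratings from 3 Peer Reviewers on the same Standards.
      reviewer_binary = rng.integers(0, 2, size=(3, 6))
      # A Standard is met from the reviewers' perspective when 2 of 3 reviewers mark it met.
      met_reviewers = (reviewer_binary.sum(axis=0) >= 2).astype(int)

      # Nonparametric Mann-Whitney U test comparing the two independent groups.
      stat, p = mannwhitneyu(met_students, met_reviewers, alternative="two-sided")
      print(f"U = {stat:.1f}, p = {p:.3f}")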

  • Example 2: Effect of Student Readiness on Student Success in Online Courses

    2013 QM Research Grant supported study done by Geiger, Morris, & Subocz at College of Southern Maryland

    Students will be required to take the SmarterMeasure™ learning readiness indicator (SmarterMeasure) before beginning the substantive course work. This web-based tool assesses a learner's likelihood of succeeding in an online and/or technology-rich learning program. SmarterMeasure indicates the degree to which an individual student possesses attributes, skills, and knowledge that contribute to academic success.

    SmarterMeasure data for six indicators will be aggregated based on a percentage scale of 0% to 100%. The six indicators include On-screen Reading Rate and Recall; Typing Speed and Accuracy; Life Factors; Technical Knowledge; Reading Comprehension; and Individual Attributes (including motivation, procrastination, and willingness to ask for help). The final grades earned for the selected CSM courses will be aggregated and rated by academic success. The study is considered quantitative, as the findings will be analyzed through chi-square tests for statistical significance. At the end of the semesters, a statistical analysis will be conducted to measure the relationships between SmarterMeasure scores and CSM measures of retention, grade distribution, and academic success. The study will cover two semesters.

    For measures of control, faculty teaching the study courses will be Quality Matters Master Reviewer trained, have had two or more classes that they designed meet Quality Matters Standards, and be active participants in CSM's student learning outcomes and assessment processes. Additionally, the courses employed in this research will have met quality standards as defined by Quality Matters certification.

    Statistical Analysis

    A chi-square analysis will be conducted to test for a statistically significant relationship between scores on the SmarterMeasure assessment and the final course grades the students earned in the selected course sections. The six SmarterMeasure indicator scores will be aggregated and compared to the final grade the individual student earned in the course. SmarterMeasure scores rely on student answers, some subjective (life factors and individual attributes) and some objective.

    The final grades for the class will be measured as "successful" at the rate of 70% or higher, equating to a letter grade of C, B, or A. CSM policy supports this valuation, as 70% is the cut-off score for earning credit for the course and for transferring it to another school. In addition, the majority of student learning outcomes assessments at CSM use the benchmark of 70% or higher.
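
    As a minimal sketch of the chi-square analysis described above, assuming invented counts of successful and unsuccessful students grouped into hypothetical SmarterMeasure readiness bands:

      # Does course success (grade of C or better, i.e., 70%+) vary across
      # hypothetical SmarterMeasure readiness bands? All counts are invented.
      from scipy.stats import chi2_contingency

      # Rows: successful vs. not successful; columns: low / medium / high
      # aggregate SmarterMeasure score bands.
      observed = [
          [14, 22, 41],   # successful (C or better)
          [19, 12, 7],    # not successful
      ]

      chi2, p, dof, expected = chi2_contingency(observed)
      print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")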

Step Five: Writing the Report

See examples of final QM grant reports:

Effect of Student Readiness on Student Success in Online Courses

2013 QM Research Grant supported study done by Geiger, Morris, & Subocz at College of Southern Maryland

Geiger, L. A. (2013, September 25). The effect of student readiness on student success in online courses. 2013 QM research grant presentation at the 5th Annual Quality Matters Conference, Nashville, TN.

Analyzing Predictors of Faculty Behavior to Engage in Peer Review

2013 QM Research Grant supported study done by Altman, Schwegler, & Bunkowski at Texas A & M University/Central TX


Does Findability Matter?: Findability, Student Motivation and Self-Efficacy in Online Courses

2012 QM Research Grant supported study done by Simunich, Robins, & Kelly at Kent State University

Measuring Online Course Design: A Comparative Analysis

2013 QM Research Grant supported study done by You, Xiao, Ballard, Hochberg, & Walters at University of Toledo

The Development of Technological Pedagogical Content Knowledge (TPACK) in Instructors Using Quality Matters Training, Rubric, and Peer Collaboration

2011 QM Research Grant supported study done by Ward at University of Akron

Linking Online Course Design and Implementation to Learning Outcomes: A Design Experiment

2010 QM Research Grant supported study done by Swan, Matthews, Bogle, Boles, & Day at University of Illinois Springfield

This paper reports on preliminary findings from ongoing design-based research being conducted in the fully online Masters of Teacher Leadership program at the University of Illinois Springfield. Researchers are using the Quality Matters (QM) and Community of Inquiry frameworks to guide the iterative redesign of core courses in the program. Preliminary results from the redesign of one course suggest that such an approach can improve student learning outcomes. Results also support the efficacy of the QM and CoI theoretical frames and the usefulness of design-based approaches in online learning.

Resources for Writing Scholarly Papers

How to Write an Empirical Journal Article, by Daryl J. Bem, Cornell University

Purdue's Online Writing Lab (OWL): A great online writing lab with many useful and helpful articles and resources.

Frequently Asked Questions

Helpful Articles