Research Methods and Investigating Psychology
QUALIFI Level 4 Diploma in Psychology
Session 1
Session 1: Introduction to the Scientific Method in Psychology
Teacher's Guidance
- Time Allocation: Introduction (10 mins), Core Concepts (30 mins), Visual Aid/Diagram (15 mins), Activities (45 mins), Q&A/Wrap-up (20 mins).
- Depth of Detail: Focus deeply on the cyclical nature of theory and research. The steps of the scientific method should be presented as an idealized model, with emphasis on its real-world messiness. Falsifiability is a key concept to stress.
- Online Facilitation Tips: Use the initial poll to gauge understanding and correct misconceptions early. For the breakout rooms, give a clear time limit (e.g., 10 minutes) and a specific deliverable (e.g., "Post your testable hypothesis in the chat when you return"). Actively monitor breakout rooms to ensure groups are on track.
Educational Content
This foundational session introduces psychology as an empirical science, grounded in the principles of the scientific method. We will move beyond the common perception of psychology as mere intuition or "common sense" to understand it as a rigorous discipline that uses systematic processes to acquire knowledge. The core of this session is understanding that the scientific method is not just a collection of techniques but a mindset—an approach to inquiry that values objectivity, evidence, and critical thinking.
We will deconstruct the key steps of the scientific method:
- Observation: The process begins with observing a phenomenon in the world and asking questions about it. For example, a teacher might observe that students who eat breakfast seem to perform better on morning quizzes.
- Formulating a Hypothesis: Based on observation, a researcher forms a hypothesis—a specific, testable prediction about the relationship between variables. The teacher's hypothesis might be: "Students who consume a balanced breakfast will have significantly higher scores on a 9 a.m. math quiz than students who do not."
- Testing the Hypothesis (Experimentation): The researcher designs a study to test the hypothesis and collect data. This involves carefully controlling conditions to isolate the variables of interest.
- Analyzing Data and Drawing Conclusions: The collected data is analyzed using statistical methods to determine whether the results support or refute the hypothesis.
- Reporting and Replicating Results: Findings are shared with the scientific community through publication. Replication—where other researchers conduct the same study to see if they get the same results—is crucial for verifying the reliability of findings.
A critical distinction will be made between scientific knowledge and other ways of knowing, such as intuition, anecdote, or authority. We will emphasize core scientific principles like objectivity (minimizing bias), skepticism (questioning claims and demanding evidence), and falsifiability (the idea that a scientific theory must be testable in a way that it could be proven wrong). For instance, Freud's concept of the "id" is often criticized for being unfalsifiable, as it is difficult to design any test whose outcome could show the concept to be false.
Visual Aid: The Theory-Research Cycle
Theory ➔ Leads to Hypotheses ➔ Leads to Research & Observation ➔ Generates or Refines ➔ Theory
This diagram illustrates the dynamic, cyclical relationship between theory and research. A theory is a broad, integrated framework that explains and predicts phenomena (e.g., Attachment Theory). Research is conducted to test specific hypotheses derived from that theory. The results of the research then feed back to either support, refine, or challenge the theory, leading to a continuous advancement of knowledge.
Session 1: Activities & Resources
Mini-Activity (5 mins): In pairs, students identify one "psychological fact" they believe to be true (e.g., "opposites attract"). They then discuss how they could turn this belief into a testable, falsifiable hypothesis.
Interactive Exercise (15 mins): Present a short, simplified research scenario (e.g., a study testing a new memory-enhancing drug). In small groups (breakout rooms), students must identify each step of the scientific method within the scenario: the initial observation, the hypothesis, the testing method, the type of data collected, and a potential conclusion. Each group reports one step back to the main session.
Classroom Application (Online): Use a poll to ask students: "Which of the following is the most important principle of the scientific method: A) Proving theories true, B) Falsifiability, C) Relying on expert opinion?" Use the results to launch a discussion about why falsifiability (B) is considered a cornerstone of science.
Collaborative Brainstorm (15 mins): Using a collaborative whiteboard (e.g., Miro, Padlet), pose the question: "What are the challenges of applying the 'ideal' scientific method to studying human behavior?" Students post their ideas as virtual sticky notes. The instructor then groups related ideas to summarize the discussion.
Distinction-Level Thinking: "While the scientific method is presented as a linear process, in reality, it is often messy and iterative. Discuss the limitations of this linear model and provide an example of how a real-world scientific discovery might deviate from these neat steps."
- Video: Psychological Research: Crash Course Psychology #2 - An engaging and fast-paced overview of the scientific method in psychology.
- Video: The Scientific Method: Steps, Examples, Tips, and Exercise - A clear, animated explanation of the six steps of the scientific method.
- Video: Veritasium - The Scientific Method - A video that teaches about the scientific method and how preconceived notions can affect the discovery of new information.
- Article: Research Problems and Hypotheses in Empirical Research - An article discussing the role of research problems and hypotheses in quantitative studies.
Covered Learning Outcomes
- LO 1.2: Analyse the way in which scientific method, experimental and descriptive research are interlinked.
Teacher's Checklist
- Explain the concept of psychology as an empirical science. (LO 1.2)
- Detail the steps of the scientific method (observation, hypothesis, experiment, analysis, conclusion). (LO 1.2)
- Distinguish between scientific and non-scientific knowledge, emphasizing the principles of objectivity and falsifiability. (LO 1.2)
- Analyze the dynamic relationship between theory and research in psychology. (LO 1.2)
Session 2
Session 2: Principles of Research Design (1): Variables, Hypotheses, and Sampling
Teacher's Guidance
- Time Allocation: Core Concepts (40 mins), Visual Aid/Diagram (15 mins), Activities (45 mins), Q&A/Wrap-up (20 mins).
- Depth of Detail: Spend significant time on operational definitions, as this is a common point of confusion. Use multiple examples. For sampling, focus on the core difference between probability (generalizable) and non-probability (practical) methods rather than memorizing every subtype.
- Online Facilitation Tips: The collaborative whiteboard activity is excellent for engagement. Create the columns ahead of time. When students post notes, verbally read them out and drag them into place to create a dynamic, interactive feel. Use the "raise hand" feature to have groups share their findings from the interactive exercise one by one.
Educational Content
This session delves into the fundamental building blocks of any research study. We begin with variables, which are any characteristics or factors that can vary or change. Understanding how to identify and define variables is the first step in designing a valid experiment.
- Independent Variable (IV): The variable that the researcher manipulates or changes to observe its effect. It is the "cause" in a cause-and-effect relationship.
Example 1: In a study on the effects of caffeine on reaction time, the IV is the amount of caffeine administered (e.g., 0 mg, 100 mg, 200 mg).
Example 2: In an experiment testing a new teaching method, the IV is the type of teaching method used (new vs. traditional).
- Dependent Variable (DV): The variable that is measured by the researcher to see if the manipulation of the IV had an effect. It is the "effect."
Example 1: In the caffeine study, the DV is the participants' reaction time, measured in milliseconds.
Example 2: In the teaching method study, the DV is the students' scores on a final exam.
- Extraneous/Confounding Variables: Other variables that could potentially influence the DV, creating a false relationship between the IV and DV. Researchers must control these.
Example: In the caffeine study, confounding variables could include the time of day, participants' natural tolerance to caffeine, or how much sleep they got the night before.
Next, we focus on formulating clear and testable hypotheses. A key concept here is the operational definition, which means defining a variable in terms of the specific procedures used to measure or manipulate it. For example, "anxiety" could be operationally defined as a score on the Beck Anxiety Inventory or as a physiological measure like heart rate. We will distinguish between:
- Research Hypothesis (H1): A statement that predicts a relationship or difference between variables (e.g., "Caffeine consumption will decrease reaction time"). This can be directional (specifying the direction of the effect) or non-directional (simply stating a difference will exist).
- Null Hypothesis (H0): A statement that there is no relationship or difference between variables (e.g., "Caffeine consumption will have no effect on reaction time"). The goal of statistical testing is to see if we have enough evidence to reject the null hypothesis.
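The logic of testing H0 against H1 can be sketched in code. The following is a minimal illustration using hypothetical reaction-time data and a simple permutation test (not a procedure prescribed by this unit): it asks how often a difference as large as the one observed would arise by chance if the group labels were arbitrary, i.e., if H0 were true.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_perm=5000, seed=42):
    """Estimate a two-tailed p-value for H0: no difference in means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-deal the scores to two arbitrary "groups"
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical reaction times in milliseconds (lower = faster)
caffeine = [220, 215, 230, 205, 210, 225, 208, 218]
placebo  = [245, 250, 238, 242, 255, 248, 240, 252]

p = permutation_test(caffeine, placebo)
# A small p-value means the data would be very surprising under H0,
# so we reject H0 in favour of H1.
```

The point for students is that statistics never "prove" H1; they quantify how implausible the data are under H0.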
Visual Aid: Sampling Techniques
Probability Sampling
Every member of the population has a known chance of selection. Allows for generalization.
- Simple Random: Drawing names from a hat.
- Stratified: Selecting proportionally from population subgroups (e.g., if the population is 60% female, then 60% of the sample is drawn at random from the female subgroup).
Non-Probability Sampling
Selection is not random. Practical but limits generalizability.
- Convenience: Using whoever is available (e.g., students in the hallway).
- Snowball: Asking participants to refer others.
This diagram contrasts the two main families of sampling—the process of selecting a subset (sample) from a larger group (population). The goal is to obtain a representative sample that accurately reflects the population's characteristics.
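The two probability methods in the diagram can be sketched with Python's standard library (the population data here are hypothetical): simple random sampling gives every member an equal chance of selection, while stratified sampling draws the same fraction from each subgroup so the sample mirrors the population's composition.

```python
import random

def simple_random_sample(population, n, seed=0):
    """Every member has an equal chance of selection."""
    return random.Random(seed).sample(population, n)

def stratified_sample(population, key, frac, seed=0):
    """Draw the same fraction from each subgroup (stratum)."""
    rng = random.Random(seed)
    strata = {}
    for member in population:
        strata.setdefault(key(member), []).append(member)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * frac))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population of 100 people: (id, gender), 50 "F" and 50 "M"
population = [(i, "F" if i % 2 == 0 else "M") for i in range(100)]

srs = simple_random_sample(population, 10)
strat = stratified_sample(population, key=lambda p: p[1], frac=0.1)
```

With a 50/50 population, the stratified sample here contains exactly five from each subgroup, whereas the simple random sample may, by chance, be unbalanced.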
Session 2: Activities & Resources
Mini-Activity (5 mins): Provide students with a simple research question: "Does listening to classical music while studying improve exam performance?" Ask them to identify the IV, the DV, and at least two potential confounding variables.
Interactive Exercise (15 mins): In groups (breakout rooms), students are given a research topic (e.g., "The effect of social media use on self-esteem"). They must: 1) Formulate a directional research hypothesis and a null hypothesis. 2) Create operational definitions for their IV ("social media use") and DV ("self-esteem"). 3) Propose one probability and one non-probability sampling method they could use.
Classroom Application (Online): Use a collaborative whiteboard (like Miro or Jamboard). Create columns for "IV," "DV," "H1," and "H0." Give students a research scenario and have them post virtual sticky notes in the correct columns to identify the key components of the design.
Live Poll (10 mins): Present several research variables (e.g., "depression," "memory," "intelligence"). For each, provide multiple-choice options for operational definitions. Students vote on the best operational definition, followed by a brief discussion of why it is the most specific and measurable.
Distinction-Level Thinking: "A researcher uses a convenience sample of psychology undergraduate students for a study on memory. Critically evaluate the implications of this sampling choice. What specific biases might be introduced, and how do these biases affect the ability to generalize the findings to the wider population?"
- Video: Experimental Design: Variables, Groups, and Random Assignment - A video that outlines how to conduct a psychology experiment, focusing on variables and groups.
- Video: Psychological Research: Crash Course Psychology #2 - This video covers different kinds of bias in experimentation and how research practices help avoid them.
- Article: Developing Research Questions: Hypotheses and Variables - A chapter that details how to define and select variables for psychological research.
- Article: Purposeful sampling for qualitative data collection and analysis in mixed method implementation research - A paper reviewing the principles and practice of purposeful sampling.
Covered Learning Outcomes
- LO 1.1: Analyse the principles of research design.
- LO 3.2: Analyse the interrelationship between statistics and research hypotheses in psychology.
Teacher's Checklist
- Define and explain the types of variables (independent, dependent, extraneous). (LO 1.1)
- Explain how to formulate hypotheses (research and null) and the importance of operational definitions. (LO 1.1, LO 3.2)
- Discuss the concepts of population and sample, and the importance of representativeness. (LO 1.1)
- Analyze probability and non-probability sampling methods and their respective advantages and disadvantages. (LO 1.1)
Session 3
Session 3: Principles of Research Design (2): Validity, Reliability, and Control
Teacher's Guidance
- Time Allocation: Core Concepts (35 mins), Visual Aid/Diagram (15 mins), Activities (50 mins), Q&A/Wrap-up (20 mins).
- Depth of Detail: The trade-off between internal and external validity is a crucial, high-level concept. Use the lab vs. field study contrast to make it concrete. Reliability vs. Validity can be confusing; the target analogy is very effective for this.
- Online Facilitation Tips: The Kahoot/Mentimeter quiz is excellent for a mid-session energy boost and formative assessment. For the flawed study exercise, ensure the flaws are obvious enough to be identified but subtle enough to generate discussion. Encourage groups to use the "annotate" feature on a shared screen to highlight the flaws directly on the text.
Educational Content
This session builds on our understanding of research design by focusing on the crucial concepts of quality and rigor. For research findings to be meaningful, the methods used must be both reliable and valid.
Reliability refers to the consistency or stability of a measurement. If you use a measure multiple times under the same conditions, will you get the same results? We will discuss several types:
- Test-Retest Reliability: The consistency of results over time. If you give a personality test to a group of people today and again next month, their scores should be similar (assuming the personality trait is stable).
- Inter-Rater Reliability: The degree of agreement between two or more independent observers. This is vital in observational research. If two researchers are observing children's aggressive behavior on a playground, their tallies of aggressive acts should be highly correlated.
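Both forms of reliability are typically quantified with a correlation coefficient between the two sets of scores. A minimal sketch, using hypothetical rater tallies and only the standard library:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation: a common index of test-retest
    or inter-rater reliability (values near 1 = high agreement)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical tallies of aggressive acts recorded by two independent
# observers watching the same seven children
rater_1 = [3, 7, 2, 9, 5, 6, 4]
rater_2 = [4, 7, 1, 8, 5, 7, 3]

r = pearson_r(rater_1, rater_2)
```

A correlation above roughly .80 is conventionally taken as acceptable inter-rater reliability; the exact threshold varies by field and measure.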
Validity refers to the accuracy of a measure or a study. Does it truly measure what it claims to measure, and are the conclusions drawn from it sound? It's a more complex concept than reliability.
- Internal Validity: The degree of confidence that the causal relationship being tested is trustworthy and not influenced by other factors or variables. High internal validity means we are sure that the change in the DV was caused by the manipulation of the IV. Threats to internal validity include confounding variables.
- External Validity: The extent to which the results of a study can be generalized to other settings (ecological validity), other people (population validity), and over time (temporal validity). A lab experiment might have high internal validity but low external validity because the controlled environment is very different from the real world.
- Construct Validity: The extent to which a measurement tool (e.g., a questionnaire) accurately measures the theoretical concept it is designed to measure. For example, does an IQ test truly measure "intelligence," or does it just measure test-taking ability?
Visual Aid: The Trade-Off in Research Design
Lab Experiment
High Internal Validity: Excellent control over extraneous variables.
Low External Validity: Artificial setting may not reflect real life.
Field / Naturalistic Study
Low Internal Validity: Difficult to control confounding variables.
High External Validity: Findings are more likely to apply to the real world.
This diagram illustrates the common trade-off researchers face. The final part of the session focuses on control—the strategies researchers use to minimize the influence of extraneous variables and thus increase internal validity, such as using a Control Group, Random Assignment, and Standardization.
Session 3: Activities & Resources
Mini-Activity (5 mins): Present the analogy of a target. Ask students to draw four targets representing: 1) Reliable but not valid, 2) Valid but not reliable (this is tricky/impossible), 3) Neither reliable nor valid, 4) Both reliable and valid. This helps visualize the concepts.
Interactive Exercise (20 mins): Describe a flawed research study (e.g., a study on a new therapy where the therapist knows which patients are in the treatment group, and the control group gets no contact at all). In groups (breakout rooms), students must identify the specific threats to internal validity (e.g., experimenter bias, placebo effects) and suggest ways to improve the study's design using control techniques (e.g., a double-blind procedure, an active control group).
Classroom Application (Online): Create a short quiz using a tool like Kahoot or Mentimeter with scenarios. For each scenario, students must identify whether the primary issue is with reliability, internal validity, or external validity. This provides immediate feedback and gamifies the learning process.
Debate (15 mins): Divide the class into two teams: "Team Internal Validity" and "Team External Validity." Pose the question: "For a study to be considered 'good science,' which type of validity is more important?" Each team prepares a brief argument and presents it. This encourages deeper thinking about the purpose of different types of research.
Distinction-Level Thinking: "There is often a trade-off between internal and external validity. A highly controlled lab experiment may have excellent internal validity but poor external validity, while a naturalistic observation may be the opposite. Discuss this trade-off. Is one type of validity inherently more important than the other in psychological research? Justify your answer with examples."
- Video: Research Methods: Experimental Design - An episode explaining the basic process of experimental design, its purpose, and its applications in psychology.
- Article: Reliability and Validity of Measurement - An article that defines reliability and validity and discusses the different types and how they are assessed.
- Article: The importance of establishing the reliability and validity of measurement instruments - A study that sought to evaluate the psychometric properties of two measures to assess mental health problems.
Covered Learning Outcomes
- LO 1.1: Analyse the principles of research design.
Teacher's Checklist
- Explain the concept of Reliability and its types. (LO 1.1)
- Explain the concept of Validity and distinguish between its types (internal, external, construct). (LO 1.1)
- Analyze the importance of control in experimental research to ensure internal validity. (LO 1.1)
- Discuss control techniques such as control groups and random assignment. (LO 1.1)
Session 4
Session 4: Ethical Principles in Psychological Research
Teacher's Guidance
- Time Allocation: Historical Context (20 mins), Core Principles (30 mins), Visual Aid/Diagram (10 mins), Activities (40 mins), Q&A/Wrap-up (20 mins).
- Depth of Detail: The historical studies are powerful but can be sensitive. Focus on the ethical *lessons* learned rather than gratuitous detail. For the core principles, use clear, unambiguous language. Informed consent and deception/debriefing are the most complex and warrant the most time.
- Online Facilitation Tips: The IRB exercise in breakout rooms is highly effective. Assign one student in each group to be the "chair" to ensure the discussion stays focused. Use the polling feature for the "Is this ethical?" activity to create a safe way for students to voice opinions before a more open discussion. Be prepared to manage sensitive discussions with empathy.
Educational Content
This session addresses the moral and ethical obligations that researchers have towards their participants. Ethics are not an afterthought; they are a fundamental component of the research design process. We will begin by examining historical studies that, while influential, raised profound ethical questions and led to the development of modern ethical codes. These include:
- The Milgram Obedience Study: Participants were deceived and subjected to extreme psychological distress by being ordered to deliver what they believed were painful electric shocks to another person.
- The Stanford Prison Experiment: The study was terminated early due to the psychological harm experienced by participants who were randomly assigned to be "prisoners" or "guards."
- The "Little Albert" Experiment: A young child was conditioned to fear a white rat, a fear that was never de-conditioned, raising questions about causing lasting harm.
These and other controversial studies prompted organizations like the American Psychological Association (APA) and the British Psychological Society (BPS) to establish strict ethical guidelines. We will explore the core principles of these codes:
Visual Aid: The Ethical Approval Process
Researcher has an Idea ➔ Designs Study & Prepares Proposal ➔ Submits to Institutional Review Board (IRB) ➔ IRB Review (Approve / Request Changes / Reject) ➔ Research Begins (Only if Approved)
This flowchart shows that ethical review is not an optional step but a mandatory gateway. Institutional Review Boards (IRBs) or Ethics Committees are panels that review all research proposals involving human participants to ensure they comply with ethical standards before the research can begin.
- Informed Consent: Participants must be given comprehensive information about the purpose, procedures, potential risks, and benefits of the study before they agree to take part. Consent must be voluntary and documented.
- Right to Withdraw: Participants must be informed that they can leave the study at any time, for any reason, without penalty.
- Confidentiality and Anonymity: Researchers must protect the privacy of their participants. Confidentiality means that data is kept secure and not linked to individuals' names, while anonymity means that the researcher cannot link the data to the individual at all.
- Minimizing Harm: Researchers have a duty to protect participants from physical and psychological harm. The potential risks of the study should not outweigh the potential benefits.
- Deception and Debriefing: Deception (misleading participants about the true purpose of the study) should only be used when it is absolutely necessary and justified by the study's potential value. When deception is used, a full debriefing is mandatory. This involves explaining the true nature of the study, correcting any misconceptions, and offering support if participants are distressed.
Session 4: Activities & Resources
Mini-Activity (5 mins): Ask students to reflect on a time they participated in a survey or study. Did they feel their rights were protected? Did they receive enough information to give informed consent? A brief think-pair-share activity.
Interactive Exercise (20 mins): Present a modern, ethically ambiguous research proposal (e.g., "A study using a fake social media profile to investigate online jealousy by friending participants' partners"). In breakout groups, students act as an IRB. They must identify all the potential ethical issues based on the core principles (consent, deception, harm, etc.) and decide whether to: a) approve the study as is, b) approve it with specific modifications, or c) reject it. Each group must justify its decision.
Classroom Application (Online): Use a poll with several short research scenarios. For each, ask "Is this ethical? Yes/No/Maybe." Use the "Maybe" responses as a springboard for a nuanced discussion about ethical gray areas and the balancing of risks and benefits.
Role-Play (15 mins): Assign students roles: one is the researcher from the "jealousy study" above, and two are IRB members. The researcher must try to justify their study to the IRB, who will challenge them on ethical grounds. This creates a dynamic and memorable learning experience.
Distinction-Level Thinking: "The BPS and APA ethical codes are based on Western cultural values. Discuss how these principles (e.g., individual consent, confidentiality) might conflict with the cultural norms of non-Western societies. How should a researcher navigate these challenges when conducting cross-cultural research?"
- Video: AS Level Psychology: The Ethics of Psychology Research - This video dives into the "Top 5 Ethical Issues in Psychology Research" that every high school psychology student should know.
- Video: Research Ethics in Psychology | Tuskegee, Milgram, and Zimbardo - This video discusses research ethics, specifically in psychology, with a focus on historical studies.
- Article: Psychology Research Ethics - An article that discusses the duty of psychologists to respect the rights and dignity of research participants.
- Article: Ethics in Psychology | Guidelines, Issues & Importance - An article that discusses ethical standards in psychological research and how they have grown over time.
Covered Learning Outcomes
- LO 4.2: Apply and justify the choice of method to a research scenario. (Justifying ethical aspects is part of justifying the methodology).
Teacher's Checklist
- Review historical studies that led to the development of research ethics. (LO 4.2)
- Explain core ethical principles: informed consent, right to withdraw, confidentiality. (LO 4.2)
- Discuss issues of deception, minimizing harm, and the importance of debriefing. (LO 4.2)
- Clarify the role of Institutional Review Boards (IRBs) in protecting participants. (LO 4.2)
Session 5
Session 5: Experimental Research Methods
Teacher's Guidance
- Time Allocation: Core Concepts (30 mins), Visual Aid/Diagram (15 mins), Activities (55 mins), Q&A/Wrap-up (20 mins).
- Depth of Detail: The distinction between the three designs is the core of this session. Use the same research question (e.g., "Does a new study technique work?") and explain how you would test it using each of the three designs to make the comparison clear. Counterbalancing is a key term for Repeated Measures.
- Online Facilitation Tips: The breakout room activity where groups design a study is very effective. Give them a clear template to fill out (e.g., Research Question, IV, DV, Design Choice, Justification). When they present back, encourage other groups to ask clarifying questions. The visual aid is a good anchor to return to throughout the explanation.
Educational Content
This session provides an in-depth look at the experimental method, the gold standard for determining cause-and-effect relationships in psychology. The power of the experiment lies in its two core features: the manipulation of an independent variable (IV) and the strict control of extraneous variables. By systematically changing the IV and observing the effect on the DV while keeping everything else constant, researchers can make strong causal claims.
Visual Aid: Comparison of Experimental Designs
| Design Type | How it Works | Key Advantage | Key Disadvantage |
|---|---|---|---|
| Independent Groups (Between-Subjects) | Different participants in each condition (e.g., Group A gets drug, Group B gets placebo). | No order effects (practice, fatigue). | Individual differences can be a confound. |
| Repeated Measures (Within-Subjects) | Same participants do all conditions (e.g., Test memory with music, then without music). | Controls for individual differences; more powerful. | Order effects can occur. Needs counterbalancing. |
| Matched Pairs | Different but matched participants in each condition (e.g., Pair people by IQ, one does A, one does B). | Controls for key individual differences without order effects. | Matching can be difficult and time-consuming. |
We will explore these three primary types of experimental designs in detail:
- Independent Groups Design (Between-Subjects): In this design, different participants are used in each condition of the experiment. The key control technique is random assignment to try and make the groups equivalent.
- Repeated Measures Design (Within-Subjects): The same participants take part in all conditions of the experiment. This is a powerful design but is susceptible to order effects (e.g., practice or fatigue). This is managed using counterbalancing, where half the participants do condition A then B, and the other half do B then A.
- Matched Pairs Design: This design attempts to get the best of both worlds. Participants are matched into pairs based on a key variable (e.g., IQ, age). Then, one member of each pair is randomly assigned to one condition, and the other member is assigned to the other condition.
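The counterbalancing described for the repeated measures design can be sketched as follows (participant IDs and condition names are hypothetical): half the participants run the conditions in one order, half in the reverse order, so any practice or fatigue effect is spread equally across conditions.

```python
import random

def counterbalance(participants, conditions=("A", "B"), seed=0):
    """Assign each participant a condition order so that each
    order (A-then-B, B-then-A) is used equally often."""
    rng = random.Random(seed)
    orders = [list(conditions), list(reversed(conditions))]
    shuffled = participants[:]
    rng.shuffle(shuffled)  # randomize who gets which order
    return {p: orders[i % 2] for i, p in enumerate(shuffled)}

participants = [f"P{i:02d}" for i in range(1, 9)]  # eight hypothetical IDs
plan = counterbalance(participants)
```

With eight participants, exactly four complete condition A first and four complete condition B first.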
Finally, we will distinguish between true experiments, which use random assignment to create groups, and quasi-experiments. In a quasi-experiment, the IV is a pre-existing characteristic of the participants (e.g., gender, age, nationality), so random assignment is not possible. For example, comparing the math skills of 10-year-olds and 12-year-olds is a quasi-experiment because you cannot randomly assign a child to be 10 or 12. This lack of random assignment means we cannot make the same strong causal conclusions as we can with a true experiment.
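Random assignment, the defining control technique of the true experiment and exactly what a quasi-experiment lacks, can be sketched as follows (participant IDs and group labels are hypothetical):

```python
import random

def random_assignment(participants, groups=("treatment", "control"), seed=0):
    """Randomly split participants into groups of equal size, so that
    individual differences are (on average) balanced across groups."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, p in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(p)
    return assignment

participants = [f"P{i:02d}" for i in range(1, 21)]  # twenty hypothetical IDs
assigned = random_assignment(participants)
```

Because chance, not the participants' pre-existing characteristics, determines group membership, any systematic difference on the DV can be attributed to the IV. In a quasi-experiment this step is impossible, which is precisely why its causal conclusions are weaker.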
Session 5: Activities & Resources
Mini-Activity (5 mins): Ask students to quickly list one advantage and one disadvantage for both the Independent Groups and Repeated Measures designs. This reinforces the core trade-offs.
Interactive Exercise (20 mins): Provide three research questions: 1. Does a new "super-learning" technique improve vocabulary retention compared to a standard method? 2. Do people rate a movie as funnier when watching it in a group versus watching it alone? 3. Does a person's gender affect their spatial reasoning ability? For each question, groups (in breakout rooms) must decide which design (Independent Groups, Repeated Measures, Matched Pairs, or Quasi-experimental) is most appropriate and justify their choice, explaining how they would implement it.
Classroom Application (Online): Use breakout rooms. Assign each room a different experimental design. Their task is to create a simple, novel research study that perfectly fits their assigned design. They then present their study idea to the main group, which has to guess the design being used.
Rapid-Fire Polls (10 mins): Present a series of quick scenarios. For each, students vote on the best design. Example: "Testing the same people before and after a treatment." (Poll: A=Independent, B=Repeated Measures). This is a fast-paced check for understanding.
Distinction-Level Thinking: "A pharmaceutical company wants to test a new drug for anxiety. They argue that a Repeated Measures design (testing participants' anxiety before and after the drug) is the most powerful. Critically evaluate this choice. What are the potential confounds (e.g., placebo effect, spontaneous remission) in this simple pre-test/post-test design, and how could you design a more rigorous study (e.g., using a control group) to address these issues?"
- Video: Research Methods: Experimental Design - A video that explains the basic process of experimental design, its purpose, and its applications in the field of psychology.
- Video: A Crash Course on How to Design a Research Study in Psychology - An overview of some of the major types of research design that can be used in psychology.
- Article: Experimental Design – Research Methods in Psychology - An article that explains the difference between between-subjects and within-subjects experiments and lists some of the pros and cons of each approach.
- Article: Experimental Design and Statistics for Psychology - A concise, straightforward, and accessible introduction to the design of psychology experiments and the statistical tests used to make sense of their results.
Covered Learning Outcomes
- LO 1.2: Analyse the way in which scientific method, experimental and descriptive research are interlinked.
- LO 2.1: Analyse the features of research methods used in psychology.
Teacher's Checklist
- Explain the basic characteristics of the experimental method and its ability to determine causality. (LO 1.2, LO 2.1)
- Analyze the independent groups design, its advantages, and disadvantages. (LO 2.1)
- Analyze the repeated measures design, its advantages, disadvantages, and the concept of counterbalancing. (LO 2.1)
- Distinguish between true experiments and quasi-experiments. (LO 1.2, LO 2.1)
Session 6
Session 6: Descriptive Research Methods: Surveys, Observations, and Case Studies
Educational Content
While experiments are excellent for determining cause and effect, much of psychology is focused on describing behavior and mental processes as they naturally occur. This is the domain of descriptive research. These methods do not manipulate variables; instead, they aim to provide a snapshot of what is happening. They answer questions of "what," "where," and "when," but not "why."
We will explore three major types of descriptive research:
- Surveys: This method involves collecting self-reported data from a sample of individuals through questionnaires or interviews.
- Strengths: Can gather a large amount of data from many people relatively quickly and inexpensively. Useful for understanding attitudes, beliefs, and reported behaviors.
- Weaknesses: Prone to biases. Social desirability bias occurs when people respond in a way that makes them look good, rather than truthfully. The wording of questions can also heavily influence the answers. For example, asking "Do you support a woman's right to choose?" will get different responses than "Do you support the killing of unborn fetuses?".
- Observation: This method involves systematically watching and recording behavior.
- Naturalistic Observation: Observing behavior in its natural setting without any intervention from the researcher (e.g., Jane Goodall's work with chimpanzees). Its strength is high external validity.
- Structured Observation: Observing behavior in a more controlled, often laboratory, setting. This gives the researcher more control but may reduce the naturalness of the behavior.
- Challenges: The observer effect occurs when participants change their behavior because they know they are being watched (the Hawthorne effect is a classic demonstration of this reactivity). Observer bias occurs when the researcher's own expectations influence what they see and record.
- Case Studies: An in-depth, intensive investigation of a single individual, a small group, or a specific event.
- Strengths: Provides a rich, detailed source of information and can be a powerful tool for generating new hypotheses. They are particularly useful for studying rare phenomena. Classic examples include the case of Phineas Gage, whose personality changed after a brain injury, and Freud's analyses of his patients.
- Weaknesses: The findings are highly subjective and cannot be generalized to a wider population. The researcher's interpretation can be biased.
Session 6: Activities & Resources
Mini-Activity (5 mins): Ask students to write two versions of a survey question about environmental attitudes: one that is neutral and one that is clearly biased or leading. This highlights the importance of question wording.
Interactive Exercise (20 mins): Divide the class into three "expert" groups: Surveys, Observations, and Case Studies. Give all groups the same broad research topic: "Understanding the study habits of first-year university students." Each expert group must design a study using their assigned method. They should outline the procedure, identify the key strengths of their approach, and acknowledge its biggest limitation. They then present their design to the class.
Classroom Application (Online): Use a collaborative document (e.g., Google Docs). Create a table with three columns (Surveys, Observations, Case Studies) and two rows (Strengths, Weaknesses). Have students collaboratively populate the table with as many points as they can in a few minutes. This creates a shared study resource.
Distinction-Level Thinking: "A researcher wants to study the private, and potentially illegal, online behaviors of a specific community. A covert participant observation (e.g., joining the community under a false identity) might yield the most valid data, but it raises significant ethical concerns. A survey would be more ethical but prone to social desirability bias. Critically evaluate the methodological and ethical trade-offs in this scenario. Which method would you recommend, and how would you justify your choice to an ethics committee?"
- Video: Psychological Research: Crash Course Psychology #2 - This video discusses case studies, naturalistic observation, and surveys.
- Article: Overview of Nonexperimental Research - An article that explains non-experimental research, which lacks the manipulation of an independent variable and the random assignment of participants to conditions or orders of conditions.
- Article: Observational Research – Research Methods in Psychology - An article that discusses observational research, which is used to refer to several different types of non-experimental studies in which behavior is systematically observed and recorded.
Covered Learning Outcomes
- LO 1.2: Analyse the way in which scientific method, experimental and descriptive research are interlinked.
- LO 2.1: Analyse the features of research methods used in psychology.
Teacher's Checklist
- Define descriptive research and its objectives. (LO 1.2)
- Analyze the survey method, including its pros, cons, and question-wording challenges. (LO 2.1)
- Analyze the observation method in its various forms (naturalistic, structured, participant) and its challenges. (LO 2.1)
- Explain the case study method, highlighting its strengths and weaknesses through examples. (LO 2.1)
Session 7
Session 7: Correlational Research
Educational Content
Correlational research is a type of non-experimental research in which the researcher measures two or more variables and assesses the statistical relationship (i.e., the correlation) between them with little or no effort to control extraneous variables. The goal is to determine if a relationship exists and to describe its strength and direction.
The key statistical tool in this research is the correlation coefficient, typically represented by the letter 'r'. This value ranges from -1.0 to +1.0.
- Strength of the Relationship: The absolute value of the number (how far it is from 0) indicates the strength. A value close to 1.0 (e.g., 0.8 or -0.8) indicates a strong relationship. A value close to 0 (e.g., 0.1 or -0.1) indicates a weak relationship.
- Direction of the Relationship: The sign (+ or -) indicates the direction.
- A positive correlation (+r) means that as one variable increases, the other variable also tends to increase. Example: The number of hours spent studying is positively correlated with exam scores.
- A negative correlation (-r) means that as one variable increases, the other variable tends to decrease. Example: The number of hours spent watching TV is negatively correlated with physical fitness levels.
We will use scatterplots to visualize these relationships. A scatterplot graphs pairs of values for the two variables. The pattern of the dots reveals the strength and direction of the correlation.
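For instructors who want a concrete demonstration, the coefficient can be computed directly from paired scores. The sketch below uses only the Python standard library; the study-hours data are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    dx = [xi - mean_x for xi in x]
    dy = [yi - mean_y for yi in y]
    covariance = sum(a * b for a, b in zip(dx, dy))
    return covariance / math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))

# Hypothetical data: hours studied and exam score for five students
hours = [1, 2, 3, 4, 5]
scores = [52, 60, 58, 71, 80]

r = pearson_r(hours, scores)
print(f"r = {r:.2f}")  # a strong positive correlation, close to +1
```

Changing the scores so they fall as hours rise flips the sign of r to negative, which students can verify for themselves.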
The most critical takeaway from this session is the mantra: "Correlation does not imply causation." Just because two variables are related does not mean that one causes the other. There are two main reasons for this:
- The Third-Variable Problem: An unmeasured third variable may be causing the observed relationship between the two measured variables.
Classic Example: There is a strong positive correlation between ice cream sales and crime rates. Does eating ice cream cause crime? No. A third variable, hot weather, causes both to increase.
- The Directionality Problem: Even if there is a causal relationship, a correlation alone does not tell us which variable is the cause and which is the effect.
Example: There is a correlation between low self-esteem and depression. Does low self-esteem cause depression, or does being depressed cause a person to have low self-esteem? It could be either, or both.
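The third-variable problem can also be demonstrated numerically. In the sketch below (invented numbers, Python standard library only), ice cream sales and crime each depend solely on temperature, yet they still correlate strongly with each other:

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)  # fixed seed so the demonstration is reproducible
temperature = [random.uniform(0, 30) for _ in range(200)]  # 200 simulated days

# Both variables depend only on temperature (plus random noise) --
# there is no direct causal link between ice cream sales and crime.
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temperature]
crime = [1.5 * t + random.gauss(0, 5) for t in temperature]

r = pearson_r(ice_cream, crime)
print(f"r = {r:.2f}")  # strongly positive, despite no causal link
```

Because the simulation explicitly contains no causal path between the two measured variables, the strong correlation it produces makes the "hot weather" explanation concrete rather than hypothetical.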
Session 7: Activities & Resources
Mini-Activity (5 mins): Show students three scatterplots (one strong positive, one weak negative, one with no correlation). Ask them to estimate the correlation coefficient (e.g., "+0.8", "-0.3", "0") for each one.
Interactive Exercise (15 mins): Present groups with several documented correlations (e.g., "The number of firefighters at a fire is positively correlated with the amount of damage done," or "Children with bigger feet have better reading ability"). For each correlation, the group's task is to come up with a plausible "third variable" that explains the relationship and to explain the directionality problem where applicable.
Classroom Application (Online): Use a polling tool. Present a correlational finding from a (fictional) news headline, e.g., "Study Finds Coffee Drinkers Live Longer!" Then ask students to vote on the most likely explanation: A) Coffee causes longer life, B) Healthier people are more likely to drink coffee, C) A third factor (like social activity) is involved. This reinforces the "correlation is not causation" principle.
Distinction-Level Thinking: "While correlational studies cannot prove causation, they are still incredibly valuable in psychology. Discuss the specific circumstances under which a correlational design is not only appropriate but may even be superior to an experimental design. Consider ethical and practical constraints in your answer."
- Video: Psychology Crash Course #6: The Correlational Method - A video that explains how correlational studies are used to look for relationships between variables.
- Video: HOW TO CONDUCT CORRELATIONAL RESEARCH IN PSYCHOLOGY - A 4-minute animated video that presents how to conduct correlational research in psychology.
- Article: Correlational Study Examples: AP® Psychology Crash Course - An article that provides three correlational study examples that have contributed to the history of psychology.
- Article: Correlational Research | Research Methodology Group - A presentation that defines correlational research and discusses its types, design, and analysis methods.
Covered Learning Outcomes
- LO 2.1: Analyse the features of research methods used in psychology.
- LO 3.2: Analyse the interrelationship between statistics and research hypotheses in psychology.
Teacher's Checklist
- Define correlational research and its objectives. (LO 2.1)
- Explain the correlation coefficient (strength and direction) and how to interpret it. (LO 3.2)
- Use scatterplots to illustrate types of correlations (positive, negative, zero). (LO 2.1)
- Emphasize the core principle "correlation does not imply causation" and explain the third-variable and directionality problems. (LO 2.1, LO 3.2)
Session 8
Session 8: Quantitative vs. Qualitative Research Methods
Educational Content
This session introduces the two major philosophical and methodological paradigms in psychological research: quantitative and qualitative. The choice between these approaches is not about which is "better," but about which is more appropriate for the research question being asked. They represent different ways of seeing and understanding the world.
Quantitative Research is concerned with numbers and measurement. It is a deductive approach that typically starts with a hypothesis and seeks to test it by collecting numerical data and analyzing it statistically. The goal is often to identify general laws of behavior, test causal relationships, and generalize findings from a sample to a population.
- Data Type: Numerical (e.g., reaction times, test scores, ratings on a scale).
- Methods: Experiments, quasi-experiments, correlational studies, and structured surveys with closed-ended questions.
- Analysis: Statistical analysis (e.g., t-tests, ANOVA, correlation).
- Strengths: High objectivity, allows for precise measurement, results can be generalized (if sampling is appropriate), and it's good for testing hypotheses.
- Example: An experiment measuring whether a new drug (IV) reduces the number of panic attacks (DV) per week compared to a placebo.
Qualitative Research is concerned with words, meanings, and experiences. It is an inductive approach that often starts with a broad question and seeks to explore it in depth, generating rich, detailed data from which themes and theories can emerge. The goal is to gain a deep, holistic understanding of a phenomenon in its natural context.
- Data Type: Non-numerical (e.g., interview transcripts, field notes, open-ended survey responses, videos).
- Methods: In-depth interviews, focus groups, case studies, and naturalistic observation.
- Analysis: Thematic analysis, content analysis, discourse analysis (identifying patterns and themes in the text).
- Strengths: Provides rich, in-depth data, explores complex phenomena, gives a "voice" to participants, and is excellent for generating new theories.
- Example: A series of in-depth interviews with individuals to explore their lived experience of recovering from addiction.
Finally, we will introduce the concept of Mixed-Methods Research. This approach involves collecting and analyzing both quantitative and qualitative data in a single study. By integrating the two, researchers can gain a more complete and nuanced understanding of a research problem, leveraging the strengths of both paradigms. For example, a study could use a quantitative survey to identify trends in student well-being and then conduct qualitative interviews to explore the reasons behind those trends in more depth.
Session 8: Activities & Resources
Mini-Activity (5 mins): Ask students to convert a quantitative research question into a qualitative one, and vice versa. For example, "What is the correlation between hours of sleep and GPA?" (Quantitative) becomes "What is the lived experience of high-achieving students with demanding sleep schedules?" (Qualitative).
Interactive Exercise (20 mins): Present a complex research topic, such as "The impact of remote work on employee well-being." Divide the class into two teams: "Team Quant" and "Team Qual." Each team must design a study to investigate the topic using only their assigned methodology. They should specify their research question, method, and the type of data they would collect. This highlights how the two approaches tackle the same problem from different angles.
Classroom Application (Online): Create a two-column table on a collaborative whiteboard labeled "Quantitative" and "Qualitative." Post a series of research methods and data types (e.g., "Experiment," "Interview Transcript," "Likert Scale Score," "Focus Group") as virtual sticky notes. Students drag and drop each item into the correct column.
Distinction-Level Thinking: "A researcher proposes a mixed-methods study. They plan to conduct a quantitative survey first and then use the results to select participants for qualitative interviews. Another researcher suggests doing the qualitative interviews first to help develop the survey questions. Critically evaluate these two different mixed-methods designs (known as explanatory sequential and exploratory sequential). What are the advantages and disadvantages of each sequence?"
- Video: Quantitative vs. Qualitative Research: The Differences Explained - This video explains the differences between the two research methods, as well as the mixed-methods approach.
- Article: Quantitative and qualitative data - An article that discusses how quantitative data most often involves qualitative judgements.
- Article: What are qualitative and quantitative research methods in psychology? - A Quora page that discusses the differences between qualitative and quantitative research methods in psychology.
Covered Learning Outcomes
- LO 3.1: Analyse types of data analysis used in research.
- LO 2.1: Analyse the features of research methods used in psychology.
Teacher's Checklist
- Explain the goals and basic characteristics of quantitative research. (LO 2.1, LO 3.1)
- Explain the goals and basic characteristics of qualitative research. (LO 2.1, LO 3.1)
- Compare the two approaches in terms of data types, collection methods, analysis techniques, and objectives. (LO 2.1, LO 3.1)
- Introduce the concept of Mixed-Methods research as an integrative approach. (LO 2.1)
Session 9
Session 9: Formative Assessment Review and Analysis of Classic Studies
Educational Content
This session is designed to bridge theory and practice by preparing students for their Formative Assessment. The assessment requires a 600-word analysis of the methodology used in at least three early social psychological experiments. This task requires students to apply the concepts we've learned—such as research methods, variables, data types, and ethics—to deconstruct and evaluate real research.
To model this process, we will conduct a guided, in-depth analysis of one classic study, such as Asch's Conformity Experiment, as a template. We will break down the study using the exact framework required for the assignment:
- Identify and Defend the Research Method: We will identify Asch's study as a true experiment. Why? Because he manipulated an IV (the unanimity of the confederates' incorrect answers) and measured a DV (whether the participant conformed). We will discuss the use of a control group (participants who judged the lines alone).
- Identify Qualitative and/or Quantitative Data: The primary data was quantitative: the number of trials on which the participant conformed (e.g., 37% of trials). However, Asch also collected qualitative data through post-experiment interviews, where he asked participants why they conformed. This provided rich insights into their motivations (e.g., not wanting to stand out, genuinely believing they were wrong).
- Discuss Significant Findings: The key finding was that people will conform to a group's incorrect judgment a surprisingly high percentage of the time, even when the correct answer is obvious. This demonstrated the power of normative social influence.
- Identify Ethical Dilemmas: The primary ethical issue was deception, as participants were lied to about the purpose of the study and the role of the other "participants" (confederates). This deception also led to psychological distress, as participants felt anxious and self-conscious. We will discuss the importance of the debriefing Asch conducted to mitigate this harm.
This interactive analysis will serve as a scaffold, providing students with the analytical tools and confidence to approach their chosen studies (e.g., Milgram, Harlow, Pavlov) for the formative assessment. The session will conclude with a Q&A to address any questions about the assignment's requirements, formatting, and referencing.
Session 9: Activities & Resources
Mini-Activity (5 mins): Before analyzing the Asch study, ask students to predict what percentage of people would conform in that situation. This creates engagement and often highlights the counter-intuitive nature of psychological findings.
Interactive Exercise (20 mins): After modeling the Asch analysis, provide a brief summary of another classic study (e.g., Bandura's Bobo Doll experiment). In small groups, students must use the four-point assessment framework (Method, Data, Findings, Ethics) to create a bullet-point outline of how they would analyze this new study. This gives them direct practice with the required skill.
Classroom Application (Online): Use a collaborative platform where the four assessment criteria are headings. As a class, collaboratively write a brief analysis of a chosen study under these headings. The instructor can guide the process, and students can contribute ideas in real-time, creating a shared set of notes.
Distinction-Level Thinking: "Many classic studies from the mid-20th century would not be approved by a modern IRB. Select one of the studies for the formative assessment and propose a modified research design that could investigate the same core research question while adhering to today's strict ethical standards. What methodological compromises might you have to make, and how would these affect the validity of your findings?"
- Video: The psychology of evil | Philip Zimbardo - A TED Talk by Philip Zimbardo that discusses the psychological factors that can cause ordinary, good people to engage in evil behavior.
- Video: Research Ethics in Psychology | Tuskeegee, Milgram, and Zimbardo - A video that discusses research ethics in psychology, with a focus on historical studies like the Milgram experiment.
- Article: How the Classics Changed Research Ethics - An article that discusses how classic studies like Milgram's work can inject clarity into pressing societal issues such as political polarization and police brutality.
- Article: Milgram study ethical issues and participant trauma - A Facebook post that discusses the main objective of the Milgram study and the ethical issues surrounding it.
Covered Learning Outcomes
- LO 1.1: Analyse the principles of research design.
- LO 1.2: Analyse the way in which scientific method, experimental and descriptive research are interlinked.
- LO 2.1: Analyse the features of research methods used in psychology.
- LO 3.1: Analyse types of data analysis used in research.
Teacher's Checklist
- Explain the requirements of the formative assessment in detail. (N/A)
- Apply an analytical framework to a classic study to identify methodology and variables. (LO 1.1, LO 1.2, LO 2.1)
- Analyze the types of data (quantitative/qualitative) used in the study. (LO 3.1)
- Discuss the main findings and ethical dilemmas of the study. (Implicitly LO 4.2)
Session 10
Session 10: Data Collection Techniques
Educational Content
This session focuses on the practical "how-to" of data collection, exploring the specific tools researchers use to gather information. The choice of tool must align with the research question and the overall design (qualitative or quantitative).
- Questionnaires: A set of written questions used to obtain information from respondents. We will delve into effective design principles:
- Types of Questions: Closed-ended questions provide a fixed set of response options (e.g., multiple-choice, yes/no). Likert scales are a common type, asking respondents to rate their agreement with a statement (e.g., from "Strongly Disagree" to "Strongly Agree"). Open-ended questions allow respondents to answer in their own words, providing rich qualitative data.
- Good Practice: Questions should be clear, unambiguous, and neutral. Avoid double-barreled questions (asking two things at once) and leading questions.
- Interviews: A conversation between a researcher and a participant to collect information.
- Structured Interviews: The researcher asks a pre-determined set of questions in a fixed order, like a verbal questionnaire. This ensures consistency across participants.
- Unstructured Interviews: The interview is more like a guided conversation, with the researcher having a topic in mind but allowing the conversation to flow freely. This allows for deep exploration of topics.
- Semi-structured Interviews: A blend of the two, where the researcher has a list of questions but is free to ask follow-up questions and deviate from the script to explore interesting points. This is a very common and flexible method in qualitative research.
- Observation Schedules: When conducting structured observations, researchers need a systematic way to record behavior. An observation schedule (or checklist) is a pre-defined template for this.
- Creating a Schedule: This involves clearly operationalizing the target behaviors (e.g., defining "aggressive act" as "hitting, kicking, or pushing") and deciding on a recording system. Event sampling involves counting the number of times a behavior occurs. Time sampling involves recording what behavior is occurring at pre-determined time intervals (e.g., every 30 seconds).
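As an optional demonstration, an event-sampling tally can be simulated in a few lines of Python. The behaviour codes and session log below are invented for illustration:

```python
from collections import Counter

# Operationalized behaviour codes from a hypothetical observation schedule:
# "H" = hitting, "K" = kicking, "P" = pushing, "O" = other/neutral
AGGRESSIVE_CODES = {"H", "K", "P"}

# One coded record per observed event during a session (event sampling)
session_log = ["O", "H", "O", "P", "K", "O", "H", "O", "O", "P"]

tally = Counter(session_log)
aggressive_acts = sum(tally[code] for code in AGGRESSIVE_CODES)

print(dict(tally))       # frequency of each behaviour code
print(aggressive_acts)   # total aggressive acts recorded: 5
```

The same structure works for time sampling: instead of one record per event, the log holds one code per pre-determined interval (e.g., one entry every 30 seconds).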
A crucial step before launching a full-scale study is conducting a pilot study. This is a small-scale trial run of the research with a few participants. It helps the researcher to check that the instructions are clear, the questions are understood, the observation schedule is practical, and the overall procedure runs smoothly. It is an essential step for identifying and fixing problems before investing significant time and resources.
Session 10: Activities & Resources
Mini-Activity (5 mins): Provide a poorly designed, double-barreled survey question (e.g., "Do you think that the university should increase tuition fees and improve library services?"). Ask students to identify the flaw and rewrite it as two separate, clear questions.
Interactive Exercise (20 mins): In groups, students are tasked with designing a data collection tool to measure "stress levels in students during exam season."
- Group A must design a short, 5-item quantitative questionnaire using Likert scales.
- Group B must create a 5-question semi-structured interview schedule.
- Group C must develop a simple observation schedule to record stress-related behaviors in the library.
Each group presents their tool and justifies their choices.
Classroom Application (Online): Use a simple online survey tool (like Google Forms). As a class, collaboratively build a short questionnaire on a fun topic (e.g., "Smartphone Usage Habits"). The instructor can guide the process, discussing question types and wording in real-time. Then, students can take the survey themselves to experience it as a participant.
Distinction-Level Thinking: "A researcher is conducting a semi-structured interview. What are the key skills the interviewer must possess to elicit rich, valid data? Discuss the potential for interviewer bias to influence the participant's responses and suggest specific techniques the interviewer can use to minimize this bias and build rapport."
- Video: Quantitative vs. Qualitative Research: The Differences Explained - This video explains the differences between the two research methods, as well as the mixed-methods approach.
- Article: Qualitative Data Analysis: Step-by-Step Guide (Manual vs. Automated) - A guide that walks through the five key steps of qualitative data analysis, breaking down both manual and automated approaches.
- Article: Purposeful sampling for qualitative data collection and analysis in mixed method implementation research - A paper that reviews the principles and practice of purposeful sampling in implementation research.
Covered Learning Outcomes
- LO 4.2: Apply and justify the choice of method to a research scenario.
Teacher's Checklist
- Explain how to design effective questionnaires and the different types of questions. (LO 4.2)
- Analyze types of interviews (structured, semi-structured, unstructured) and the skills for conducting them. (LO 4.2)
- Explain how to develop and use observation schedules for recording behavior. (LO 4.2)
- Emphasize the importance of Pilot Studies in developing data collection tools. (LO 4.2)
Session 11
Session 11: Introduction to Statistics in Psychology: Descriptive Statistics
Educational Content
Once data has been collected, it is often just a raw, meaningless jumble of numbers. The first step in making sense of this data is to use descriptive statistics. These are tools that summarize and describe the main features of a dataset. They don't allow us to make conclusions beyond the data we have, but they provide a vital, organized overview.
We will focus on two main types of descriptive statistics:
- Measures of Central Tendency: These statistics provide a single value that represents the "center" or typical score in a distribution.
- Mean: The arithmetic average (sum of all scores divided by the number of scores). It is the most common measure but is very sensitive to extreme scores (outliers).
- Median: The middle score when all scores are arranged in order. It is less affected by outliers, making it a better measure for skewed data.
- Mode: The most frequently occurring score in a dataset. It is the only measure that can be used for categorical data (e.g., the most common eye color).
Example: For the dataset [2, 3, 3, 5, 7, 10], the Mean is 5, the Median is 4 (the average of 3 and 5), and the Mode is 3.
- Measures of Dispersion (Variability): These statistics describe how spread out the scores are in a distribution.
- Range: The simplest measure, calculated as the highest score minus the lowest score. It is highly affected by outliers.
- Standard Deviation (SD): The most important measure of variability. It represents, roughly, the typical amount by which scores differ from the mean. A small SD means the scores are clustered tightly around the mean, while a large SD means they are more spread out.
Example: Two classes might have the same mean exam score of 75%. However, Class A has an SD of 5, meaning most students scored between 70-80%. Class B has an SD of 20, meaning scores were much more spread out, with many students scoring very high and very low.
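The worked example from the central tendency section can be verified with Python's standard statistics module (stdev here is the sample standard deviation):

```python
import statistics

data = [2, 3, 3, 5, 7, 10]

print(statistics.mean(data))    # 5
print(statistics.median(data))  # 4.0 (average of the two middle scores, 3 and 5)
print(statistics.mode(data))    # 3
print(max(data) - min(data))    # range: 8
print(round(statistics.stdev(data), 2))  # sample standard deviation: 3.03
```

Appending an outlier such as 100 to the list and re-running the calculations shows the mean and range shifting dramatically while the median barely moves, which previews the mini-activity below.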
Finally, we will explore how to visually represent data using graphs. A well-designed graph can make complex data much easier to understand. We will cover histograms and bar charts, discussing when each is appropriate (histograms for continuous data, bar charts for categorical data) and how to interpret them.
Session 11: Activities & Resources
Mini-Activity (5 mins): Give students a small, simple dataset (e.g., [1, 2, 2, 3, 4, 100]). Ask them to calculate the mean and the median. This will immediately demonstrate how a single outlier can dramatically affect the mean but not the median.
Interactive Exercise (15 mins): Provide groups with a slightly larger dataset (e.g., the scores of 15 students on a quiz). Their task is to calculate the mean, median, mode, range, and (conceptually) describe what the standard deviation would tell them. They should then decide which measure of central tendency is the most appropriate for describing this data and justify their choice.
Classroom Application (Online): Use a simple online tool or a shared spreadsheet (like Google Sheets). Input a dataset and use the built-in functions (=AVERAGE, =MEDIAN, =STDEV) to calculate the descriptive statistics in real-time. This demonstrates the practical application and demystifies the calculations.
Distinction-Level Thinking: "A news report states that 'the average salary at Company X is £100,000.' This sounds impressive, but you suspect the data might be skewed. What measure of central tendency are they likely reporting? What other statistic (a different measure of central tendency or a measure of dispersion) would you want to see to get a more accurate picture of the typical employee's salary? Explain why."
- Video: Mode, Median, Mean, Range, and Standard Deviation (1.3) - A video that explains how to calculate the mode, median, mean, range, and standard deviation.
- Video: Statistics intro: Mean, median, & mode - A video that introduces the concepts of mean, median, and mode and how to calculate them.
- Article: Mean, Mode and Median - Measures of Central Tendency - A guide to the mean, median, and mode and which of these measures of central tendency you should use for different types of variable.
- Article: Mean, median, mode, variance & standard deviation - A document that explains how to calculate the mean, median, mode, variance, and standard deviation.
Covered Learning Outcomes
- LO 2.2: Analyse how to conduct statistical tests commonly used in psychology.
- LO 3.1: Analyse types of data analysis used in research.
Teacher's Checklist
- Distinguish between descriptive and inferential statistics. (LO 3.1)
- Explain and calculate measures of central tendency (mean, median, mode). (LO 2.2, LO 3.1)
- Explain and calculate measures of dispersion (range, standard deviation). (LO 2.2, LO 3.1)
- Review methods for the graphical representation of data. (LO 3.1)
Session 12
Session 12: Inferential Statistics (1): Probability and Hypothesis Testing
Session 12: Inferential Statistics (1): Probability and Hypothesis Testing
Educational Content
While descriptive statistics describe our sample, inferential statistics allow us to make inferences or generalizations about the wider population from which the sample was drawn. They help us answer the question: Is the effect or relationship we found in our sample a real effect, or could it simply be due to random chance?
The foundation of inferential statistics is probability. We use probability to determine the likelihood of our results occurring by chance. This leads to the core logic of Null Hypothesis Significance Testing (NHST). This process works as follows:
- Assume the Null Hypothesis (H0) is True: We start with a skeptical stance, assuming there is no real effect or relationship in the population.
- Collect Sample Data: We conduct our study and collect data from our sample.
- Calculate the Probability (p-value): We perform a statistical test to calculate the p-value. The p-value is the probability of obtaining our sample results (or results even more extreme) if the null hypothesis were actually true.
A small p-value means our results are very unlikely to have occurred by chance alone. A large p-value means our results are quite likely to have occurred by chance. But how small is "small enough"? We use a pre-determined cut-off point called the significance level, or alpha (α). In psychology, alpha is almost always set at 0.05.
- If p < 0.05, we conclude that our result is statistically significant. We reject the null hypothesis in favor of our research (alternative) hypothesis. This means there is less than a 5% probability that we would get our result if there were no real effect.
- If p ≥ 0.05, our result is not statistically significant. We fail to reject the null hypothesis. This does not mean the null hypothesis is true, only that we don't have enough evidence to reject it.
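The logic of the p-value can be made concrete with a coin-flipping example (a hypothetical result, not one from this course's studies): suppose a supposedly fair coin lands heads 60 times in 100 flips. The one-tailed p-value is the probability of 60 or more heads if the null hypothesis (the coin is fair) were true.

```python
from math import comb

n, observed_heads = 100, 60  # hypothetical experiment

# P(X >= 60) under H0: coin is fair, so X ~ Binomial(100, 0.5)
p_value = sum(comb(n, k) for k in range(observed_heads, n + 1)) / 2**n

print(f"p = {p_value:.4f}")  # roughly 0.028
print("significant" if p_value < 0.05 else "not significant")
```

Because p < .05, we would reject the null hypothesis of a fair coin, exactly the decision rule described above.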
This decision-making process is not perfect. We can make two types of errors:
- Type I Error (False Positive): Rejecting the null hypothesis when it is actually true. This is like a "false alarm"—we conclude there is an effect when there isn't one. The probability of making a Type I error is equal to alpha (α).
- Type II Error (False Negative): Failing to reject the null hypothesis when it is actually false. This is a "miss"—we fail to detect an effect that is really there.
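The claim that the Type I error rate equals alpha can be demonstrated by simulation. The sketch below (using the hypothetical fair-coin test, not a real study) runs thousands of experiments in which the null hypothesis is true by construction; the fraction of "significant" results sits close to, and because the test is discrete slightly under, .05.

```python
import random
from math import comb

random.seed(1)
n = 100  # coin flips per simulated experiment

# Pre-compute the one-tailed p-value for every possible head count.
tail_p = [sum(comb(n, j) for j in range(k, n + 1)) / 2**n
          for k in range(n + 1)]

trials, false_positives = 20_000, 0
for _ in range(trials):
    heads = bin(random.getrandbits(n)).count("1")  # fair coin: H0 is true
    if tail_p[heads] < 0.05:                       # yet we "reject" H0
        false_positives += 1

rate = false_positives / trials
print(f"Type I error rate = {rate:.3f}")  # close to (just under) alpha = .05
```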
Session 12: Activities & Resources
Mini-Activity (5 mins): Use the analogy of a courtroom trial. The defendant is "innocent until proven guilty" (Null Hypothesis). The prosecution presents evidence (data). The jury must decide if there is enough evidence "beyond a reasonable doubt" (p < 0.05) to convict (Reject H0). Discuss what a Type I error (convicting an innocent person) and Type II error (acquitting a guilty person) would be in this context.
Interactive Exercise (15 mins): Present groups with several research findings, each with a p-value: 1) "The p-value for the difference in memory scores between the caffeine and no-caffeine groups was p = .02." 2) "The correlation between screen time and anxiety was tested, yielding p = .34." 3) "An experiment on a new therapy found a result with p = .05." For each, groups must decide: Is the result statistically significant? Should they reject or fail to reject the null hypothesis? What conclusion can they draw?
Classroom Application (Online): Use a poll to ask: "A researcher finds a p-value of .04. What is the probability that they have made a Type I error?" The correct answer is 5% (alpha), not 4%. This is a common misconception and a great teaching point about what the p-value does and does not mean.
Distinction-Level Thinking: "The reliance on a strict p < .05 cut-off has been heavily criticized in recent years for encouraging 'p-hacking' and contributing to a 'replication crisis' in psychology. Discuss the limitations of Null Hypothesis Significance Testing. What are some alternative approaches (e.g., focusing on effect sizes, confidence intervals, or Bayesian statistics) that could provide a more nuanced understanding of research findings?"
- Video: Statistics in 10 minutes. Hypothesis testing, the p value, t-test... - A video that explains how to apply the principles of hypothesis testing and interpret the p-value for various statistical tests.
- Video: Inferential Statistics FULL Tutorial: T-Test, ANOVA, Chi-Square, Correlation & Regression - A tutorial that examines the most common inferential tests, such as t-tests, ANOVA, chi-square, correlation, and regression.
- Article: Understanding T-Statistic, F-Statistic, and P-Value in Statistical Analysis - An article that explains the concepts of t-statistic, f-statistic, and p-value in statistical analysis.
- Article: Basics of statistics for primary care research - An article that discusses inferential statistics, including comparing groups with t-tests and ANOVA.
Covered Learning Outcomes
- LO 2.2: Analyse how to conduct statistical tests commonly used in psychology.
- LO 3.2: Analyse the interrelationship between statistics and research hypotheses in psychology.
Teacher's Checklist
- Explain the purpose of inferential statistics. (LO 2.2)
- Explain the logic of hypothesis testing (null and alternative hypotheses). (LO 3.2)
- Define and explain the concept of the p-value and the significance level (α). (LO 2.2, LO 3.2)
- Discuss Type I and Type II errors in statistical decision-making. (LO 3.2)
Session 13
Session 13: Inferential Statistics (2): t-tests and Analysis of Variance (ANOVA)
Session 13: Inferential Statistics (2): t-tests and Analysis of Variance (ANOVA)
Educational Content
This session introduces specific statistical tests used to compare the mean scores of different groups. The choice of test depends on the research design and the number of groups being compared.
t-tests are used to determine if there is a statistically significant difference between the means of two groups. The t-test produces a 't' statistic, which is essentially a ratio of the difference between the group means to the variability within the groups. A large 't' value suggests a meaningful difference. There are two main types:
- Independent-samples t-test: Used when the two groups are made up of different, unrelated participants. This test corresponds to an independent groups design.
Example: A researcher wants to compare the exam scores of a group of students who used a new study app with a control group who studied normally. Since the groups contain different students, an independent-samples t-test is used.
- Paired-samples t-test (or dependent-samples t-test): Used when the two sets of scores come from the same participants or from matched pairs. This test corresponds to a repeated measures or matched pairs design.
Example: A researcher measures the anxiety levels of a group of participants before they undergo a mindfulness intervention and again after the intervention. Since the same people are measured twice, a paired-samples t-test is used.
What if you want to compare the means of more than two groups? You might be tempted to run multiple t-tests, but this is a bad idea because it inflates the Type I error rate: each extra test is another chance of a false positive, so with three groups and three pairwise t-tests at α = .05, the familywise error rate rises to roughly 1 − 0.95³ ≈ 14%. The correct tool is the Analysis of Variance (ANOVA).
ANOVA works by comparing the variability between the groups to the variability within the groups. If the variation between the groups is significantly larger than the variation within the groups, it suggests that the independent variable has had a significant effect. ANOVA produces an 'F' statistic. A significant F-test tells us that there is a difference somewhere among the groups, but it doesn't tell us which specific groups differ. For that, we need to conduct post-hoc tests.
Example: A researcher wants to test the effectiveness of three different therapies for depression (CBT, Psychodynamic, and a control group). They would use a one-way ANOVA to compare the mean depression scores of the three groups at the end of the study.
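ANOVA's comparison of between-groups to within-groups variability can be sketched directly. The depression scores below are invented for the three-therapy example; in a real analysis, the p-value for F would come from an F distribution.

```python
import statistics

# Invented end-of-study depression scores (lower = less depressed)
groups = {
    "CBT":           [10, 12, 9, 11],
    "Psychodynamic": [14, 13, 15, 14],
    "Control":       [18, 17, 19, 18],
}

all_scores = [s for g in groups.values() for s in g]
grand_mean = statistics.mean(all_scores)
k, n = len(groups), len(all_scores)

# Between-groups: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups.values())
# Within-groups: spread of scores around their own group mean
ss_within = sum((s - statistics.mean(g)) ** 2
                for g in groups.values() for s in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_stat:.2f}")  # large F: group means differ
```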
Session 13: Activities & Resources
Mini-Activity (5 mins): Present three research scenarios. For each one, students must simply decide which of the three tests (independent t-test, paired t-test, or ANOVA) is the correct one to use. This focuses on the decision-making process rather than the calculation.
Interactive Exercise (15 mins): Give groups a simple research question: "Does the type of music (Classical, Pop, or Silence) affect performance on a spatial reasoning task?" They must: 1. Identify the IV and DV. 2. State the null and research hypotheses. 3. Choose the correct statistical test (ANOVA) and justify why a t-test is inappropriate. 4. Describe what a "statistically significant" result would mean in this context.
Classroom Application (Online): Create a flowchart on a collaborative whiteboard. The chart should guide students through the decision of which test to use based on questions like "How many groups are you comparing?" and "Are the participants in each group the same or different?". Students can then use this flowchart to solve practice problems.
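The flowchart's decision logic can also be captured in a few lines of Python, which some instructors may find useful as a second representation. This is a simplified sketch covering only the three tests in this session:

```python
def choose_test(num_groups: int, same_participants: bool) -> str:
    """Pick a test for comparing group means (simplified decision flowchart)."""
    if num_groups > 2:
        # (A repeated-measures ANOVA would apply if participants are the same.)
        return "one-way ANOVA"
    if same_participants:
        return "paired-samples t-test"
    return "independent-samples t-test"

print(choose_test(2, same_participants=False))  # independent-samples t-test
print(choose_test(2, same_participants=True))   # paired-samples t-test
print(choose_test(3, same_participants=False))  # one-way ANOVA
```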
Distinction-Level Thinking: "A one-way ANOVA on three groups yields a significant result (p = .03). A student concludes that all three groups are different from each other. Explain why this conclusion is premature and incorrect. What is the next step the researcher must take (i.e., post-hoc tests), and why is this step necessary?"
- Video: T-test, ANOVA and Chi Squared test made easy. - A video that explains the t-test, chi-square test, p-value, and more.
- Video: Inferential Statistics FULL Tutorial: T-Test, ANOVA, Chi-Square, Correlation & Regression - A tutorial that examines the most common inferential tests, such as t-tests, ANOVA, chi-square, correlation, and regression.
- Article: ANOVA, Regression, and Chi-Square - An article that explains how to calculate a t-test and the information needed to do so.
- Article: Descriptive vs. Inferential Statistics: Understanding and Applying - An article that explains inferential statistics and hypothesis testing, including the t-test.
Covered Learning Outcomes
- LO 2.2: Analyse how to conduct statistical tests commonly used in psychology.
Teacher's Checklist
- Explain when and how to use an independent-samples t-test. (LO 2.2)
- Explain when and how to use a paired-samples t-test. (LO 2.2)
- Explain the purpose of ANOVA and when it is used instead of t-tests. (LO 2.2)
- Introduce the basic concept of a one-way ANOVA. (LO 2.2)
Session 14
Session 14: Inferential Statistics (3): Correlation and Regression
Session 14: Inferential Statistics (3): Correlation and Regression
Educational Content
This session revisits correlation from a statistical testing perspective and introduces regression as a powerful predictive tool. These methods are used when we want to analyze the relationship between continuous variables, rather than comparing group means.
In Session 7, we introduced the correlation coefficient (r) as a descriptive statistic. Now, we use inferential statistics to determine if the correlation we found in our sample is statistically significant. The null hypothesis (H0) for a correlation is that there is no relationship between the two variables in the population (the population correlation, ρ, is 0). A significance test for correlation produces a p-value. If p < .05, we reject the null hypothesis and conclude that there is a significant relationship between the variables in the population.
Example: A researcher finds a correlation of r = +0.45 between study hours and exam scores in a sample of 50 students. A significance test yields p = .008. Since p < .05, the researcher can conclude that there is a statistically significant positive relationship between study hours and exam scores.
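Pearson's r, and the test statistic used to assess its significance, can be computed by hand. The study-hours data below are invented for illustration; the p-value itself would come from a t distribution with n − 2 degrees of freedom (table or software).

```python
from math import sqrt

# Invented data: weekly study hours and exam scores
hours  = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 65, 72]
n = len(hours)

mean_h, mean_s = sum(hours) / n, sum(scores) / n
cov = sum((h - mean_h) * (s - mean_s) for h, s in zip(hours, scores))
ss_h = sum((h - mean_h) ** 2 for h in hours)
ss_s = sum((s - mean_s) ** 2 for s in scores)

r = cov / sqrt(ss_h * ss_s)            # Pearson correlation coefficient
t = r * sqrt(n - 2) / sqrt(1 - r**2)   # test statistic, df = n - 2

print(f"r = {r:.2f}, t({n - 2}) = {t:.2f}")
```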
Regression analysis takes correlation a step further. While correlation describes the relationship, regression allows us to use that relationship to make predictions. Simple linear regression finds the "line of best fit" for the data points on a scatterplot, the line that minimizes the overall (squared vertical) distance between itself and the points. This line is represented by a mathematical equation: Y = a + bX.
- Y is the criterion variable (the variable we want to predict, plotted on the y-axis).
- X is the predictor variable (the variable we are using to make the prediction, plotted on the x-axis).
- b is the slope of the line, indicating how much Y changes for a one-unit change in X.
- a is the y-intercept, the predicted value of Y when X is 0.
Once this equation is established, we can plug in a value for X to predict the most likely value of Y.
Example: A university uses regression to predict student success. They find the equation for predicting first-year GPA (Y) from A-level scores (X) is: GPA = 0.5 + (0.08 * A-level score). They can now use an applicant's A-level score to predict their likely GPA, helping them make admissions decisions.
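The least-squares slope and intercept behind Y = a + bX can be computed by hand. The (A-level score, GPA) pairs below are invented so that they lie exactly on the line GPA = 0.5 + 0.08 × score from the example above:

```python
# Invented (A-level score, first-year GPA) pairs on GPA = 0.5 + 0.08 * X
x = [20, 25, 30, 35, 40]
y = [2.1, 2.5, 2.9, 3.3, 3.7]
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n
b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
     / sum((xi - mean_x) ** 2 for xi in x))   # slope
a = mean_y - b * mean_x                        # y-intercept

predicted_gpa = a + b * 32  # predict GPA for an applicant scoring 32
print(f"GPA = {a:.2f} + {b:.2f} * X; prediction for 32: {predicted_gpa:.2f}")
```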
Session 14: Activities & Resources
Mini-Activity (5 mins): Show a scatterplot with a clear positive correlation and its line of best fit. Ask students to visually estimate the predicted Y value for a given X value by following the graph. This builds an intuitive understanding of what regression does.
Interactive Exercise (15 mins): In groups, students are given a research scenario: "A sports psychologist wants to predict an athlete's performance level (rated 1-100) based on their self-reported motivation level (rated 1-10)." They must: 1. Identify the predictor (X) and criterion (Y) variables. 2. State the null hypothesis for the correlation. 3. Explain what a significant positive correlation would mean. 4. Describe in plain English what a regression equation would allow the psychologist to do in this context.
Classroom Application (Online): Use an interactive online scatterplot generator. The instructor can input data points, and the tool will automatically calculate the correlation coefficient and draw the regression line. This allows for a dynamic demonstration of how adding or moving data points changes the relationship and the predictive line.
Distinction-Level Thinking: "Simple linear regression uses one predictor variable. However, human behavior is complex and rarely predicted by a single factor. Research the concept of 'multiple regression.' How does it differ from simple regression, and why is it a more powerful and commonly used tool in psychological research? Provide a hypothetical research question that could only be answered using multiple regression."
- Video: Inferential Statistics FULL Tutorial: T-Test, ANOVA, Chi-Square, Correlation & Regression - A tutorial that examines the most common inferential tests, including correlation and regression.
- Article: Correlational Research – Research Methods in Psychology - An article that defines correlational research as a type of non-experimental research in which the researcher measures two variables and assesses the statistical relationship between them.
- Article: (PDF) CORRELATIONAL RESEARCH DESIGN - A document that defines correlational research as a methodological approach that aims to identify and analyze the relationship between two or more variables without manipulation.
Covered Learning Outcomes
- LO 2.2: Analyse how to conduct statistical tests commonly used in psychology.
- LO 3.2: Analyse the interrelationship between statistics and research hypotheses in psychology.
Teacher's Checklist
- Explain how to test the statistical significance of a correlation coefficient. (LO 2.2, LO 3.2)
- Explain the purpose of regression analysis and how it differs from correlation. (LO 2.2)
- Introduce the basic principles of simple linear regression and its equation. (LO 2.2)
- Define the terms predictor and criterion variable. (LO 2.2)
Session 15
Session 15: Qualitative Data Analysis
Session 15: Qualitative Data Analysis
Educational Content
This session shifts our focus from numbers to words, exploring how researchers make sense of rich, non-numerical data like interview transcripts or open-ended survey responses. Qualitative data analysis is an interpretive and inductive process. Rather than testing a pre-defined hypothesis, the goal is to explore the data to identify patterns, themes, and meanings that emerge from the participants' own words.
We will explore two common approaches:
- Thematic Analysis: This is one of the most flexible and widely used methods. It is a process for identifying, analyzing, and reporting patterns (themes) within data. The process, as outlined by Braun and Clarke (2006), involves several phases:
- Familiarization: The researcher immerses themselves in the data by reading and re-reading it to become deeply familiar with its content.
- Generating Initial Codes: The researcher identifies interesting features of the data and assigns 'codes'—short labels that act as tags.
- Searching for Themes: The researcher groups related codes together to form potential overarching themes.
- Reviewing Themes: The researcher checks if the themes work in relation to both the coded extracts and the entire dataset. Themes may be refined, split, or merged.
- Defining and Naming Themes: The researcher writes a detailed analysis of each theme, explaining its essence and what it captures about the data.
- Producing the Report: Finally, the researcher weaves the analytic narrative together with vivid data extracts into a coherent written account of the analysis.
Example: In interviews about work-life balance, a researcher might code extracts as "long hours," "emailing at night," and "pressure to be available." These codes could be grouped into a broader theme called "Inescapable Work Culture."
- Content Analysis: This method can be either qualitative or quantitative. In its quantitative form, it involves creating categories and counting how often they appear. In its qualitative form, it is more interpretive and similar to thematic analysis, focusing on the meaning and context of the categories.
Example: A researcher might perform a content analysis of children's cartoons to see how male and female characters are portrayed, categorizing and analyzing the types of roles and behaviors they exhibit.
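The quantitative form of content analysis (category counting) is straightforward to sketch. The coded categories below are hypothetical labels a researcher might have assigned to characters while watching a sample of cartoons:

```python
from collections import Counter

# Hypothetical coding of character roles in a sample of cartoon episodes
coded_roles = [
    ("male", "leader"), ("female", "caregiver"), ("male", "leader"),
    ("male", "comic relief"), ("female", "leader"), ("female", "caregiver"),
]

counts = Counter(coded_roles)
for (gender, role), freq in counts.most_common():
    print(f"{gender} / {role}: {freq}")
```

The resulting frequency table is the raw material for claims such as "male characters were coded as leaders twice as often as female characters"; the qualitative form would instead interpret what those portrayals mean in context.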
Unlike quantitative research, which values objectivity, qualitative research acknowledges the researcher's role in interpretation. Therefore, instead of "validity," we talk about trustworthiness. To enhance trustworthiness, researchers use techniques like keeping a reflective journal to acknowledge their biases and using direct quotes from participants to support their interpretations, ensuring transparency in the analytical process.
Session 15: Activities & Resources
Mini-Activity (5 mins): Provide students with a single, rich quote from a fictional interview (e.g., "I love my job, but the constant expectation to be online after hours means I never truly switch off. It feels like I'm letting the team down if I don't reply instantly."). Ask them to generate as many initial codes as they can for this quote (e.g., "job satisfaction," "after-hours work," "guilt," "team pressure").
Interactive Exercise (20 mins): Give groups a short transcript from a fictional interview about students' experiences with online learning. The transcript should contain several clear patterns. The group's task is to go through the first three phases of thematic analysis: 1) Read and familiarize, 2) Generate at least 10-15 codes, 3) Group their codes into 2-3 potential themes. Each group then presents one of their themes and the codes that support it.
Classroom Application (Online): Use a collaborative tool like a word cloud generator (e.g., Mentimeter). Ask students to read a short text and submit the single word they think is most important or representative. The resulting word cloud can visually highlight key concepts and serve as a starting point for a discussion about themes.
Distinction-Level Thinking: "Qualitative research is sometimes criticized by quantitative researchers for being 'subjective' and 'unscientific.' Construct a robust defense of qualitative research. In your answer, address the concept of trustworthiness and explain how qualitative researchers establish rigor in their work through specific techniques (e.g., triangulation, reflexivity, member checking) that are different from, but parallel to, the concepts of reliability and validity in quantitative research."
- Video: Quantitative vs. Qualitative Research: The Differences Explained - This video explains the differences between the two research methods, as well as the mixed-methods approach.
- Article: Thematic Analysis: A Step by Step Guide - An article that defines thematic analysis as a qualitative research method used to identify, analyze, and interpret patterns of shared meaning within a given data set.
- Article: Using thematic analysis in psychology - A document that explains how thematic analysis differs from other analytic methods that seek to describe patterns across qualitative data.
Covered Learning Outcomes
- LO 3.1: Analyse types of data analysis used in research.
Teacher's Checklist
- Explain the nature and purpose of qualitative data analysis. (LO 3.1)
- Detail the steps of the Thematic Analysis process. (LO 3.1)
- Explain the concept of Content Analysis and distinguish between its quantitative and qualitative forms. (LO 3.1)
- Discuss the importance of transparency and trustworthiness in qualitative research. (LO 3.1)
Session 16
Session 16: Reading and Critically Evaluating Research Papers
Session 16: Reading and Critically Evaluating Research Papers
Educational Content
A core academic skill is the ability to read and critically evaluate primary research literature. This session demystifies the structure of a typical psychology journal article (often following APA style) and provides a framework for active, critical reading, which is essential for Part 1 of the final assessment.
We will break down the standard sections of an empirical paper:
- Abstract: A concise summary (usually ~150-250 words) of the entire study, including the research question, methods, key findings, and implications. It's a vital tool for quickly determining if an article is relevant to your interests.
- Introduction: This section sets the stage. It starts broadly by introducing the research topic, then reviews relevant previous literature to establish what is already known. It identifies a "gap" in that literature and explains why the current study is needed to fill that gap. It ends with a clear statement of the research question and the specific hypotheses.
- Method: This is the "recipe" of the study. It describes exactly how the research was conducted and should be detailed enough for another researcher to replicate it. It is typically divided into subsections:
- Participants: Who took part in the study (number, demographics, how they were recruited).
- Materials/Apparatus: Any questionnaires, scales, software, or equipment used.
- Procedure: A step-by-step account of what participants experienced from start to finish.
- Results: This section presents the findings of the data analysis in a direct, factual manner, without interpretation. It reports the outcomes of the statistical tests (e.g., "An independent-samples t-test revealed a significant difference between the groups, t(48) = 2.54, p = .014."). It often includes tables and figures.
- Discussion: This is where the author interprets the results. They will state whether the hypotheses were supported, connect the findings back to the literature mentioned in the introduction, discuss the theoretical and practical implications of the study, and, crucially, acknowledge the study's limitations. It often concludes with suggestions for future research.
To critically evaluate a paper, students should ask questions as they read: Is the introduction's argument logical? Is the method sound (good validity and reliability)? Are the conclusions in the discussion justified by the data in the results section? What are the key weaknesses or limitations?
Session 16: Activities & Resources
Mini-Activity (5 mins): Give students just the abstract of a research paper. Ask them to write down the study's main research question, the key finding, and the number of participants, based only on the abstract. This demonstrates the power of the abstract for quick comprehension.
Interactive Exercise (20 mins): Provide a one-page, simplified "mock" research paper. In groups, students act as peer reviewers. They must read the paper and identify one major strength and one major weakness in each of the main sections (Introduction, Method, Results, Discussion). This provides structured practice in critical evaluation.
Classroom Application (Online): Find a real (but accessible) open-access psychology article. Share the screen and do a "live reading" of the introduction and discussion sections. Use a highlighter tool to mark the key components in real-time: the literature review, the "gap," the hypothesis, the interpretation of findings, and the limitations.
Distinction-Level Thinking: "The 'Discussion' section of a paper is not just a summary of the results; it is an argument constructed by the author. Critically discuss the potential for author bias to influence the 'spin' placed on the findings. How might an author downplay non-significant results or overstate the importance of their findings? What specific parts of the paper should a critical reader look at to form their own, independent conclusion?"
- Video: A Crash Course on How to Design a Research Study in Psychology - An overview of some of the major types of research design that can be used in psychology.
- Article: Research in Psychology: Evaluating Articles - A guide that provides questions to help determine if an article is appropriate to use for an assignment.
- Article: How to Critique an Article (Psychology) - Research Guides - A tool for psychology students who are required to critically evaluate the research literature.
Covered Learning Outcomes
- LO 4.1: Draw on the findings of psychological papers to inform research design.
Teacher's Checklist
- Explain the standard structure of a psychological research paper (APA style). (LO 4.1)
- Clarify the purpose of each section (Introduction, Method, Results, Discussion). (LO 4.1)
- Provide students with critical questions to evaluate the quality of research (validity, reliability, conclusions). (LO 4.1)
- Train students to identify the strengths and weaknesses (limitations) of a published study. (LO 4.1)
Session 17
Session 17: Developing a Research Question and Hypothesis
Session 17: Developing a Research Question and Hypothesis
Educational Content
This session marks the transition from evaluating others' research to conceptualizing one's own. This is the first and most creative step in the research process and is essential for Part 2 of the final assessment. A good research project is built on the foundation of a good research question.
What makes a research question "good"? We will discuss several criteria:
- Interesting: The question should be interesting to the researcher and, ideally, to others in the field.
- Feasible: It must be possible to answer the question with the resources available (time, money, participants, equipment).
- Specific: A broad topic like "social media and mental health" is not a research question. It needs to be narrowed down. A more specific question would be: "What is the relationship between the number of hours spent on image-based social media (e.g., Instagram) and body dissatisfaction in adolescent girls?"
- Researchable: The question must be one that can be answered through empirical data collection. Philosophical or ethical questions (e.g., "Is it wrong to use social media?") are not empirical research questions.
Where do research ideas come from? We will explore several sources:
- Everyday Observation: Noticing patterns or asking questions about behavior in your own life.
- Psychological Theories: Testing a specific prediction derived from an existing theory (e.g., using Attachment Theory to predict relationship behavior).
- Previous Research: This is one of the most important sources. Reading the "limitations" and "future research" sections of published papers can provide a wealth of ideas for studies that build upon existing work.
Once a research question is formulated, the next step is to conduct a literature review. This involves systematically searching for and reading what has already been published on the topic. The literature review serves two key purposes: 1) It helps to further refine the research question, and 2) It provides the theoretical and empirical basis for formulating a specific, testable hypothesis. A good hypothesis is not a random guess; it is an educated prediction based on the existing body of knowledge.
Session 17: Activities & Resources
Mini-Activity (5 mins): Give students a broad, unspecific research topic (e.g., "Memory" or "Stress"). In pairs, they have two minutes to brainstorm as many specific, researchable questions as they can related to that topic.
Interactive Exercise (20 mins): Assign each group one of the research topics from the final assessment (e.g., "sleep deprivation and short-term memory"). Their task is to go through the narrowing process. They should start with the broad topic and develop at least two different, specific research questions. For one of those questions, they must then formulate a clear, directional research hypothesis.
Classroom Application (Online): Use a "mind mapping" tool (like Miro or Coggle). Start with a central research topic. As a class, brainstorm and add branches for different sub-topics, populations, and variables. This visually demonstrates the process of narrowing down a broad idea into a manageable research question.
Distinction-Level Thinking: "A student proposes the research question: 'Does therapy work?' Critically evaluate this question using the criteria of a good research question (specificity, researchability). Revise and refine this question into a high-quality, testable research question that would be appropriate for a small-scale undergraduate project. Justify the changes you made."
- Video: A Crash Course on How to Design a Research Study in Psychology - An overview of some of the major types of research design that can be used in psychology.
- Article: Chapter 4 Developing Research Questions: Hypotheses and Variables - A chapter that introduces some broad themes in behavioral research, including the purpose of research, types of research, ethical issues, and the nature of science.
- Article: Psychology Notes: Sampling Techniques Overview for Research - A document that discusses preregistration, where researchers post their study's method, hypotheses, or statistical analysis online before collecting any data.
Covered Learning Outcomes
- LO 3.2: Analyse the interrelationship between statistics and research hypotheses in psychology.
- LO 4.2: Apply and justify the choice of method to a research scenario.
Teacher's Checklist
- Explain the characteristics of a good research question. (LO 4.2)
- Discuss sources of research ideas and how to narrow them down. (LO 4.2)
- Explain the importance of a literature review in developing a research question. (Implicitly LO 4.1)
- Train students to formulate clear, testable hypotheses based on previous research. (LO 3.2, LO 4.2)
Session 18
Session 18: Writing a Research Proposal: Structure and Justification
Educational Content
This session is a practical guide to constructing a research proposal, the blueprint for a research study. This is the central task for Part 2 of the final assessment. A research proposal is a persuasive document designed to convince the reader that the proposed study is well-founded, important, and methodologically sound.
We will walk through the key sections of a research proposal, aligning them with the assignment requirements:
- Introduction and Literature Review: This section sets the context. It should start with a broad introduction to the topic, then critically review the findings of at least three relevant research studies. The review should be a synthesis, not just a list, and it must build a logical argument that leads to the identification of a gap in knowledge that the proposed study will address.
- Hypothesis: A single, clear sentence stating the specific, testable prediction. This should flow logically from the literature review.
- Proposed Method: This is the most detailed section.
- Research Design: State clearly whether the design is experimental, correlational, descriptive, etc.
- Participants: Describe the target population and the specific sampling method that will be used to recruit the sample. Include the proposed sample size.
- Materials: Describe any questionnaires, scales, or other tools that will be used to measure the variables. If using an existing scale, it should be cited.
- Procedure: Provide a step-by-step description of what participants will experience.
- Data Analysis Plan: State exactly which statistical test (e.g., t-test, correlation) will be used to analyze the data and test the hypothesis.
- Justification: This is a critical component. For every choice made in the method section, the student must explain why that choice was made. Why was a correlational design chosen over an experimental one? Why was this specific questionnaire selected? This demonstrates critical thinking and understanding of research design principles.
- Ethical Considerations and Limitations:
- Ethics: Identify any potential ethical issues (e.g., sensitive topics, deception) and explain the specific steps that will be taken to address them (e.g., informed consent, debriefing, ensuring confidentiality).
- Limitations: Acknowledge the potential weaknesses of the proposed study (e.g., limitations of the sample, reliance on self-report measures). This shows foresight and a critical perspective.
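The data analysis plan asks students to name a specific statistical test in advance. As a purely illustrative sketch of what such a planned test computes, the example below runs an independent-samples t-test on invented quiz scores (echoing the breakfast example from Session 1). The data are hypothetical and SciPy is assumed to be installed; a real proposal would state the test in prose, not code.

```python
# Illustrative only: an independent-samples t-test as named in a data
# analysis plan. Scores are invented for demonstration; SciPy is assumed
# to be installed (pip install scipy).
from scipy import stats

breakfast = [78, 85, 90, 74, 88, 82, 91, 79]     # hypothetical quiz scores
no_breakfast = [70, 66, 75, 80, 68, 72, 77, 65]  # hypothetical quiz scores

# ttest_ind compares the means of two independent groups.
t_stat, p_value = stats.ttest_ind(breakfast, no_breakfast)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Conventional decision rule: reject the null hypothesis if p < .05.
if p_value < 0.05:
    print("Reject the null hypothesis: the group means differ significantly.")
else:
    print("Fail to reject the null hypothesis.")
```

Writing the plan at this level of specificity ("an independent-samples t-test will compare quiz scores between the two groups") is exactly what the proposal's Data Analysis Plan section requires.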
Session 18: Activities & Resources
Mini-Activity (5 mins): Ask students to focus on the "Justification" aspect. For a hypothetical study using a survey, ask them to list three reasons why a survey is an appropriate method for the topic, and one limitation of using it.
Interactive Exercise (20 mins): Provide a brief, flawed "Method" section of a research proposal. In groups, students must critique it. They should identify missing information (e.g., no mention of sampling method, no data analysis plan) and weaknesses in the design. They should then rewrite it to be more complete and rigorous.
Classroom Application (Online): Create a template for a research proposal in a collaborative document (e.g., Google Docs) with all the required headings. As a class, choose one of the assessment topics and collaboratively fill in bullet points for each section. This provides a scaffold and a shared example for students to follow.
Distinction-Level Thinking: "The 'Limitations' section of a proposal is not just about listing weaknesses; it's about showing critical awareness. For a proposed study on the link between social media and adolescent mental health, identify potential limitations related to (a) sampling, (b) measurement, and (c) causality. For each limitation, suggest a specific way a future, more advanced study could address it."
- Video: Psychological Research: Crash Course Psychology #2 - This video discusses how to apply the scientific method to psychological research, including case studies, naturalistic observation, and surveys.
- Article: Research Proposal Format Example - A document that provides a general outline of the material that should be included in a project proposal.
- Article: How to Write a Research Proposal - An article that explains that a research proposal should contain all the key elements involved in the research process and include sufficient information for the readers to evaluate the proposed study.
Covered Learning Outcomes
- LO 4.1: Draw on the findings of psychological papers to inform research design.
- LO 4.2: Apply and justify the choice of method to a research scenario.
Teacher's Checklist
- Explain the purpose and general structure of a research proposal. (LO 4.2)
- Detail how to write the method section (design, participants, procedure, analysis). (LO 4.2)
- Emphasize the importance of justifying every decision in the research design. (LO 4.2)
- Explain how to integrate a literature review to form the basis of the proposal. (LO 4.1)
- Clarify how to discuss ethical considerations and potential limitations. (LO 4.2)
Session 19
Session 19: Final Assessment Workshop (1): Analyzing a Research Article
Detailed Explanation of the Final Assignment - Part 1
This session is a dedicated workshop focused entirely on preparing for Part 1 of the final assessment. The objective of this task is to demonstrate your ability to critically deconstruct and evaluate a piece of published quantitative research. This requires you to apply all the analytical skills you have developed throughout this course.
Assignment Requirements (Part 1)
Task: Locate a peer-reviewed research article published in a scholarly journal related to psychology. The research must include quantitative methods. Write a 1000-word analysis of the article.
Your analysis must include:
- Thesis Statement: This should be a single, assertive sentence at the very beginning of your analysis that encapsulates your overall critical judgment of the paper. It is not just a summary.
Example: "While Smith's (2021) experimental study provides compelling evidence for the link between mindfulness and attention, its conclusions are weakened by a homogenous sample and a failure to control for placebo effects."
- Original Hypothesis: Clearly state the hypothesis or hypotheses the researchers were testing. This is usually found at the end of the Introduction section.
- Description of the Research Design: This is a detailed breakdown of the "Method" section.
- Research Model and Methods: Was it a true experiment, a quasi-experiment, or a correlational design? Be specific (e.g., "a pre-test/post-test repeated measures design").
- Variables: Clearly identify the Independent and Dependent Variables (for experiments) or the Predictor and Criterion Variables (for correlational studies). Provide the operational definitions used in the paper.
- Participants: Describe the sample. Who were they, how many were there, and how were they selected (e.g., convenience sample of university students)?
- Data Collection and Analysis Methods: What tools were used (e.g., "the Beck Depression Inventory")? What specific statistical tests were performed (e.g., "an independent-samples t-test," "a Pearson correlation")?
- Summary of Key Findings: Summarize the main results reported in the "Results" section. Crucially, you must compare these findings to the original hypothesis. Was the hypothesis supported or not? Be precise.
- Limitations and Ethical Concerns: This is your critical evaluation.
- Limitations: Go beyond what the authors state in their discussion. Identify weaknesses in internal validity (e.g., potential confounds) and external validity (e.g., can the results be generalized?). Critique the sample, the measures used, or the design itself.
- Ethical Concerns: Evaluate the study from an ethical standpoint. Was informed consent adequate? Was deception used? How was participant confidentiality protected?
Covered Learning Outcomes & Assessment Criteria (Part 1)
| Learning Outcomes | Assessment Criteria |
|---|---|
| 1. Understand the experimental methods applied in psychology. | 1.1 Analyse the principles of research design. 1.2 Analyse the way in which scientific method, experimental and descriptive research are interlinked. |
| 2. Understand research methods in a psychological context. | 2.1 Analyse the features of research methods used in psychology. |
| 3. Understand types of data analysis and evaluation in a psychological context. | 3.1 Analyse types of data analysis used in research. |
| 4. Be able to carry out research design and review in a psychological context. | 4.1 Draw on the findings of psychological papers to inform research design. |
Session 19: Activities & Resources
Interactive Exercise: Provide students with a sample quantitative research article. In groups, they will work through a checklist that mirrors the assignment requirements, identifying the hypothesis, design, variables, findings, and limitations. Each group will then present their analysis of one section of the paper.
Classroom Application (Online): Use a collaborative document where the assignment's required sections are laid out. The instructor will share a pre-selected article, and the class will collectively "fill in the blanks" for each section, building a model analysis together in real-time.
Distinction-Level Thinking: "Beyond the explicit limitations mentioned by the authors, what is a more subtle or deeper critique of the study's methodology? For instance, does the way they operationalized a key variable truly capture the complexity of the psychological construct? Justify your critique."
- Video: A Crash Course on How to Design a Research Study in Psychology - An overview of some of the major types of research design that can be used in psychology.
- Article: Critical evaluation - Psychology Guide - A guide that provides criteria for critical evaluation of sources before using them for an assignment or research project.
- Article: Evaluate your sources - Psychological Sciences - A guide with tips for evaluating research articles for use.
Teacher's Checklist
- Explain the requirements of Part 1 of the final assignment in detail. (N/A)
- Clarify how to find a suitable research article (peer-reviewed and quantitative). (LO 4.1)
- Guide students on how to identify and analyze the research design and variables in the article. (LO 1.1, LO 1.2, LO 2.1)
- Explain how to identify the data analysis methods used in the article. (LO 3.1)
- Train students on how to identify and evaluate the study's limitations and ethical concerns. (LO 4.1)
Session 20
Session 20: Final Assessment Workshop (2): Formulating a Research Proposal
Detailed Explanation of the Final Assignment - Part 2
This final session is a workshop dedicated to Part 2 of the final assessment: writing a research proposal. The goal of this task is to synthesize all the knowledge from the course to design a coherent, justified, and ethically sound research study from scratch.
Assignment Requirements (Part 2)
Task: Select one of the topics below and write an 800- to 1000-word research proposal based on the scientific method.
Proposed Topics:
- Determine whether men or women are more likely to be diagnosed with depression.
- Compare attachment styles among children of divorced parents to those raised by married parents.
- Determine if people who score higher on measures of intelligence also score higher on measures of overall well-being.
- Determine if sleep deprivation has an impact on short-term memory.
- Determine if the number of therapy sessions has an impact on patient-reported outcomes.
- Determine whether the number of hours spent on social media impacts overall mental health in adolescents.
The proposal must include:
- A Hypothesis: A clear, specific, and testable prediction based on a review of existing research.
- A Summary of Related Research: You must find, summarize, and properly cite at least three relevant peer-reviewed studies that inform your hypothesis and research question.
- A Proposed Research Design:
- Specify the type of design (e.g., correlational, quasi-experimental, true experiment) and provide a justification for this choice.
- Methods to be employed: Describe the step-by-step procedure.
- Description of the target population: Define your population and explain your proposed sampling strategy.
- Data collection and analysis methods: Describe the tools you will use (e.g., a specific scale, a memory test) and the statistical test you will use to analyze the data.
- A Justification: This is a critical section where you must defend the key choices you made in your design. Why is your chosen method the most appropriate way to answer your research question?
- A Discussion of Potential Ethical Concerns and Other Possible Limitations: Describe any ethical issues and how you will address them. Acknowledge the limitations of your proposed study.
Covered Learning Outcomes & Assessment Criteria (Part 2)
| Learning Outcomes | Assessment Criteria |
|---|---|
| 3. Understand types of data analysis and evaluation in a psychological context. | 3.2 Analyse the interrelationship between statistics and research hypotheses in psychology. |
| 4. Be able to carry out research design and review in a psychological context. | 4.1 Draw on the findings of psychological papers to inform research design. 4.2 Apply and justify the choice of method to a research scenario. |
Important Additional Information for the Final Assignment (Both Parts)
- Formatting: Justified alignment, single-spaced, 12 pt Times New Roman font.
- Referencing: Use an appropriate referencing system (Harvard Style is recommended) for all sources.
- Word Count: You must comply with the required word count, within a margin of ±10%.
- Authenticity: The work must be original and free from plagiarism.
Session 20: Activities & Resources
Interactive Exercise: Students will choose one of the provided research topics. In breakout groups, they will create a basic outline for their proposal, including a draft hypothesis, a choice of research design, a proposed sampling method, and a plan for data analysis. This serves as a structured brainstorming session.
Classroom Application (Online): The instructor will lead a "Proposal Clinic." Students can share their draft ideas or specific questions (e.g., "What's the best way to measure 'well-being'?") and receive live feedback from the instructor and peers.
Distinction-Level Thinking: "For your chosen topic, justify why a mixed-methods approach, while not required for this assignment, could provide a richer and more comprehensive understanding than a purely quantitative design. What specific qualitative component would you add, and what unique insights would it provide?"
- Video: Psychological Research: Crash Course Psychology #2 - This video discusses how to apply the scientific method to psychological research, including case studies, naturalistic observation, and surveys.
- Article: How do you write a research proposal? - A Reddit thread that provides tips on how to write a research proposal, including creating a research question and then making it into a statement.
- Article: Research Proposal Format Example - A document that provides a general outline of the material that should be included in a project proposal.
Teacher's Checklist
- Explain the requirements of Part 2 of the final assignment in detail. (N/A)
- Guide students on how to choose a topic and formulate a hypothesis based on a literature review. (LO 3.2, LO 4.1)
- Help students select and justify an appropriate research design for their topic. (LO 4.2)
- Clarify how to properly describe data collection and analysis methods. (LO 4.2)
- Emphasize the importance of the justification section and the discussion of ethics and limitations. (LO 4.2)
- Remind students of the formatting, referencing, and word count policies. (N/A)