QUALIFI Level 4 Diploma in Psychology
Level 4 — 20 sessions
This foundational session introduces psychology as an empirical science, grounded in the principles of the scientific method. We will move beyond the common perception of psychology as mere intuition or "common sense" to understand it as a rigorous discipline that uses systematic processes to acquire knowledge. The core of this session is understanding that the scientific method is not just a collection of techniques but a mindset—an approach to inquiry that values objectivity, evidence, and critical thinking.
We will deconstruct the key steps of the scientific method: making an observation, formulating a testable hypothesis, testing it through systematic research, analyzing the data collected, and drawing a conclusion.
A critical distinction will be made between scientific knowledge and other ways of knowing, such as intuition, anecdote, or authority. We will emphasize core scientific principles like objectivity (minimizing bias), skepticism (questioning claims and demanding evidence), and falsifiability (the idea that a scientific theory must be testable in a way that it could be proven wrong). For instance, Freud's concept of the "id" is often criticized for being unfalsifiable, as it's difficult to design an experiment that could definitively prove it doesn't exist.
Theory ➔ Leads to Hypotheses ➔ Leads to Research & Observation ➔ Generates or Refines ➔ Theory
This diagram illustrates the dynamic, cyclical relationship between theory and research. A theory is a broad, integrated framework that explains and predicts phenomena (e.g., Attachment Theory). Research is conducted to test specific hypotheses derived from that theory. The results of the research then feed back to either support, refine, or challenge the theory, leading to a continuous advancement of knowledge.
Mini-Activity (5 mins): In pairs, students identify one "psychological fact" they believe to be true (e.g., "opposites attract"). They then discuss how they could turn this belief into a testable, falsifiable hypothesis.
Interactive Exercise (15 mins): Present a short, simplified research scenario (e.g., a study testing a new memory-enhancing drug). In small groups (breakout rooms), students must identify each step of the scientific method within the scenario: the initial observation, the hypothesis, the testing method, the type of data collected, and a potential conclusion. Each group reports one step back to the main session.
Classroom Application (Online): Use a poll to ask students: "Which of the following is the most important principle of the scientific method: A) Proving theories true, B) Falsifiability, C) Relying on expert opinion?" Use the results to launch a discussion about why falsifiability (B) is considered a cornerstone of science.
Collaborative Brainstorm (15 mins): Using a collaborative whiteboard (e.g., Miro, Padlet), pose the question: "What are the challenges of applying the 'ideal' scientific method to studying human behavior?" Students post their ideas as virtual sticky notes. The instructor then groups related ideas to summarize the discussion.
Distinction-Level Thinking: "While the scientific method is presented as a linear process, in reality, it is often messy and iterative. Discuss the limitations of this linear model and provide an example of how a real-world scientific discovery might deviate from these neat steps."
This session delves into the fundamental building blocks of any research study. We begin with variables, which are any characteristics or factors that can vary or change. Understanding how to identify and define variables is the first step in designing a valid experiment.
Example 1: In a study on the effects of caffeine on reaction time, the IV is the amount of caffeine administered (e.g., 0mg, 100mg, 200mg).
Example 2: In an experiment testing a new teaching method, the IV is the type of teaching method used (new vs. traditional).
Example 1: In the caffeine study, the DV is the participants' reaction time, measured in milliseconds.
Example 2: In the teaching method study, the DV is the students' scores on a final exam.
Example: In the caffeine study, confounding variables could include the time of day, participants' natural tolerance to caffeine, or how much sleep they got the night before.
Next, we focus on formulating clear and testable hypotheses. A key concept here is the operational definition, which means defining a variable in terms of the specific procedures used to measure or manipulate it. For example, "anxiety" could be operationally defined as a score on the Beck Anxiety Inventory or as a physiological measure like heart rate. We will distinguish between:
Probability Sampling: Every member of the population has a known chance of selection. Allows for generalization.
Non-Probability Sampling: Selection is not random. Practical but limits generalizability.
This diagram contrasts the two main families of sampling—the process of selecting a subset (sample) from a larger group (population). The goal is to obtain a representative sample that accurately reflects the population's characteristics.
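For instructors who want a concrete illustration, the contrast between the two families can be sketched in a few lines of Python. The population of student ID numbers here is hypothetical, invented purely for demonstration:

```python
import random

# Hypothetical population of 1,000 student ID numbers.
population = list(range(1, 1001))

# Probability sampling (simple random sample): every member has a
# known, equal chance of selection, which supports generalization.
random.seed(42)  # fixed seed so the example is reproducible
random_sample = random.sample(population, k=50)

# Non-probability sampling (convenience sample): take whoever is
# easiest to reach, e.g. the first 50 students on the register.
convenience_sample = population[:50]

print(len(random_sample), len(convenience_sample))  # 50 50
```

Note how the convenience sample systematically excludes most of the population, which is exactly why it limits generalizability.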
Mini-Activity (5 mins): Provide students with a simple research question: "Does listening to classical music while studying improve exam performance?" Ask them to identify the IV, the DV, and at least two potential confounding variables.
Interactive Exercise (15 mins): In groups (breakout rooms), students are given a research topic (e.g., "The effect of social media use on self-esteem"). They must: 1) Formulate a directional research hypothesis and a null hypothesis. 2) Create operational definitions for their IV ("social media use") and DV ("self-esteem"). 3) Propose one probability and one non-probability sampling method they could use.
Classroom Application (Online): Use a collaborative whiteboard (like Miro or Jamboard). Create columns for "IV," "DV," "H1," and "H0." Give students a research scenario and have them post virtual sticky notes in the correct columns to identify the key components of the design.
Live Poll (10 mins): Present several research variables (e.g., "depression," "memory," "intelligence"). For each, provide multiple-choice options for operational definitions. Students vote on the best operational definition, followed by a brief discussion of why it is the most specific and measurable.
Distinction-Level Thinking: "A researcher uses a convenience sample of psychology undergraduate students for a study on memory. Critically evaluate the implications of this sampling choice. What specific biases might be introduced, and how do these biases affect the ability to generalize the findings to the wider population?"
This session builds on our understanding of research design by focusing on the crucial concepts of quality and rigor. For research findings to be meaningful, the methods used must be both reliable and valid.
Reliability refers to the consistency or stability of a measurement. If you use a measure multiple times under the same conditions, will you get the same results? We will discuss several types, including test-retest reliability, inter-rater reliability, and internal consistency.
Validity refers to the accuracy of a measure or a study. Does it truly measure what it claims to measure, and are the conclusions drawn from it sound? It's a more complex concept than reliability.
Lab Experiment: High internal validity (excellent control over extraneous variables) but low external validity (the artificial setting may not reflect real life).
Field/Naturalistic Study: Low internal validity (difficult to control confounding variables) but high external validity (findings are more likely to apply to the real world).
This diagram illustrates the common trade-off researchers face. The final part of the session focuses on control—the strategies researchers use to minimize the influence of extraneous variables and thus increase internal validity, such as using a Control Group, Random Assignment, and Standardization.
Mini-Activity (5 mins): Present the analogy of a target. Ask students to draw four targets representing: 1) Reliable but not valid, 2) Valid but not reliable (a trick case, since a measure cannot be consistently accurate without first being consistent), 3) Neither reliable nor valid, 4) Both reliable and valid. This helps visualize the concepts.
Interactive Exercise (20 mins): Describe a flawed research study (e.g., a study on a new therapy where the therapist knows which patients are in the treatment group, and the control group gets no contact at all). In groups (breakout rooms), students must identify the specific threats to internal validity (e.g., experimenter bias, placebo effects) and suggest ways to improve the study's design using control techniques (e.g., a double-blind procedure, an active control group).
Classroom Application (Online): Create a short quiz using a tool like Kahoot or Mentimeter with scenarios. For each scenario, students must identify whether the primary issue is with reliability, internal validity, or external validity. This provides immediate feedback and gamifies the learning process.
Debate (15 mins): Divide the class into two teams: "Team Internal Validity" and "Team External Validity." Pose the question: "For a study to be considered 'good science,' which type of validity is more important?" Each team prepares a brief argument and presents it. This encourages deeper thinking about the purpose of different types of research.
Distinction-Level Thinking: "There is often a trade-off between internal and external validity. A highly controlled lab experiment may have excellent internal validity but poor external validity, while a naturalistic observation may be the opposite. Discuss this trade-off. Is one type of validity inherently more important than the other in psychological research? Justify your answer with examples."
This session addresses the moral and ethical obligations that researchers have towards their participants. Ethics are not an afterthought; they are a fundamental component of the research design process. We will begin by examining historical studies that, while influential, raised profound ethical questions and led to the development of modern ethical codes. These include:
These and other controversial studies prompted organizations like the American Psychological Association (APA) and the British Psychological Society (BPS) to establish strict ethical guidelines. We will explore the core principles of these codes:
Researcher has an Idea ➔ Designs Study & Prepares Proposal ➔ Submits to Institutional Review Board (IRB) ➔ IRB Review (Approve / Request Changes / Reject) ➔ Research Begins (Only if Approved)
This flowchart shows that ethical review is not an optional step but a mandatory gateway. Institutional Review Boards (IRBs) or Ethics Committees are panels that review all research proposals involving human participants to ensure they comply with ethical standards before the research can begin.
Mini-Activity (5 mins): Ask students to reflect on a time they participated in a survey or study. Did they feel their rights were protected? Did they receive enough information to give informed consent? A brief think-pair-share activity.
Interactive Exercise (20 mins): Present a modern, ethically ambiguous research proposal (e.g., "A study using a fake social media profile to investigate online jealousy by friending participants' partners"). In breakout groups, students act as an IRB. They must identify all the potential ethical issues based on the core principles (consent, deception, harm, etc.) and decide whether to: a) approve the study as is, b) approve it with specific modifications, or c) reject it. Each group must justify its decision.
Classroom Application (Online): Use a poll with several short research scenarios. For each, ask "Is this ethical? Yes/No/Maybe." Use the "Maybe" responses as a springboard for a nuanced discussion about ethical gray areas and the balancing of risks and benefits.
Role-Play (15 mins): Assign students roles: one is the researcher from the "jealousy study" above, and two are IRB members. The researcher must try to justify their study to the IRB, who will challenge them on ethical grounds. This creates a dynamic and memorable learning experience.
Distinction-Level Thinking: "The BPS and APA ethical codes are based on Western cultural values. Discuss how these principles (e.g., individual consent, confidentiality) might conflict with the cultural norms of non-Western societies. How should a researcher navigate these challenges when conducting cross-cultural research?"
This session provides an in-depth look at the experimental method, the gold standard for determining cause-and-effect relationships in psychology. The power of the experiment lies in its two core features: the manipulation of an independent variable (IV) and the strict control of extraneous variables. By systematically changing the IV and observing the effect on the DV while keeping everything else constant, researchers can make strong causal claims.
| Design Type | How it Works | Key Advantage | Key Disadvantage |
|---|---|---|---|
| Independent Groups (Between-Subjects) | Different participants in each condition (e.g., Group A gets drug, Group B gets placebo). | No order effects (practice, fatigue). | Individual differences can be a confound. |
| Repeated Measures (Within-Subjects) | Same participants do all conditions (e.g., Test memory with music, then without music). | Controls for individual differences; more powerful. | Order effects can occur. Needs counterbalancing. |
| Matched Pairs | Different but matched participants in each condition (e.g., Pair people by IQ, one does A, one does B). | Controls for key individual differences without order effects. | Matching can be difficult and time-consuming. |
The table above summarizes the three primary types of experimental designs, which we will explore in detail.
Finally, we will distinguish between true experiments, which use random assignment to create groups, and quasi-experiments. In a quasi-experiment, the IV is a pre-existing characteristic of the participants (e.g., gender, age, nationality), so random assignment is not possible. For example, comparing the math skills of 10-year-olds and 12-year-olds is a quasi-experiment because you cannot randomly assign a child to be 10 or 12. This lack of random assignment means we cannot make the same strong causal conclusions as we can with a true experiment.
Mini-Activity (5 mins): Ask students to quickly list one advantage and one disadvantage for both the Independent Groups and Repeated Measures designs. This reinforces the core trade-offs.
Interactive Exercise (20 mins): Provide three research questions:
1. Does a new "super-learning" technique improve vocabulary retention compared to a standard method?
2. Do people rate a movie as funnier when watching it in a group versus watching it alone?
3. Does a person's gender affect their spatial reasoning ability?
For each question, groups (in breakout rooms) must decide which design (Independent Groups, Repeated Measures, Matched Pairs, or Quasi-experimental) is most appropriate and justify their choice, explaining how they would implement it.
Classroom Application (Online): Use breakout rooms. Assign each room a different experimental design. Their task is to create a simple, novel research study that perfectly fits their assigned design. They then present their study idea to the main group, which has to guess the design being used.
Rapid-Fire Polls (10 mins): Present a series of quick scenarios. For each, students vote on the best design. Example: "Testing the same people before and after a treatment." (Poll: A=Independent, B=Repeated Measures). This is a fast-paced check for understanding.
Distinction-Level Thinking: "A pharmaceutical company wants to test a new drug for anxiety. They argue that a Repeated Measures design (testing participants' anxiety before and after the drug) is the most powerful. Critically evaluate this choice. What are the potential confounds (e.g., placebo effect, spontaneous remission) in this simple pre-test/post-test design, and how could you design a more rigorous study (e.g., using a control group) to address these issues?"
While experiments are excellent for determining cause and effect, much of psychology is focused on describing behavior and mental processes as they naturally occur. This is the domain of descriptive research. These methods do not manipulate variables; instead, they aim to provide a snapshot of what is happening. They answer questions of "what," "where," and "when," but not "why."
We will explore three major types of descriptive research: surveys, observations, and case studies.
Mini-Activity (5 mins): Ask students to write two versions of a survey question about environmental attitudes: one that is neutral and one that is clearly biased or leading. This highlights the importance of question wording.
Interactive Exercise (20 mins): Divide the class into three "expert" groups: Surveys, Observations, and Case Studies. Give all groups the same broad research topic: "Understanding the study habits of first-year university students." Each expert group must design a study using their assigned method. They should outline the procedure, identify the key strengths of their approach, and acknowledge its biggest limitation. They then present their design to the class.
Classroom Application (Online): Use a collaborative document (e.g., Google Docs). Create a table with three columns (Surveys, Observations, Case Studies) and two rows (Strengths, Weaknesses). Have students collaboratively populate the table with as many points as they can in a few minutes. This creates a shared study resource.
Distinction-Level Thinking: "A researcher wants to study the private, and potentially illegal, online behaviors of a specific community. A naturalistic observation (e.g., joining the community under a false identity) might yield the most valid data, but it raises significant ethical concerns. A survey would be more ethical but prone to social desirability bias. Critically evaluate the methodological and ethical trade-offs in this scenario. Which method would you recommend, and how would you justify your choice to an ethics committee?"
Correlational research is a type of non-experimental research in which the researcher measures two or more variables and assesses the statistical relationship (i.e., the correlation) between them with little or no effort to control extraneous variables. The goal is to determine if a relationship exists and to describe its strength and direction.
The key statistical tool in this research is the correlation coefficient, typically represented by the letter 'r'. This value ranges from -1.0 to +1.0.
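The calculation behind 'r' can be demonstrated with a short Python sketch. The study-hours data below are invented purely for illustration; because the points fall on a perfect straight line, the coefficient comes out at exactly +1.0:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term: do the variables move together?
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Scale by each variable's spread so r always lies in [-1, +1].
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours of study and exam score for five students.
hours = [1, 2, 3, 4, 5]
scores = [50, 60, 70, 80, 90]

print(round(pearson_r(hours, scores), 2))  # 1.0
```

Real psychological data never line up this neatly; values like +0.3 or -0.5 are far more typical.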
We will use scatterplots to visualize these relationships. A scatterplot graphs pairs of values for the two variables. The pattern of the dots reveals the strength and direction of the correlation.
The most critical takeaway from this session is the mantra: "Correlation does not imply causation." Just because two variables are related does not mean that one causes the other. There are two main reasons for this: the third-variable problem and the directionality problem.
Classic Example: There is a strong positive correlation between ice cream sales and crime rates. Does eating ice cream cause crime? No. A third variable, hot weather, causes both to increase.
Directionality Example: There is a correlation between low self-esteem and depression. Does low self-esteem cause depression, or does being depressed cause a person to have low self-esteem? It could be either, or both.
Mini-Activity (5 mins): Show students three scatterplots (one strong positive, one weak negative, one with no correlation). Ask them to estimate the correlation coefficient (e.g., "+0.8", "-0.3", "0") for each one.
Interactive Exercise (15 mins): Present groups with several documented correlations (e.g., "The number of firefighters at a fire is positively correlated with the amount of damage done," or "Children with bigger feet have better reading ability"). For each correlation, the group's task is to come up with a plausible "third variable" that explains the relationship and to explain the directionality problem where applicable.
Classroom Application (Online): Use a polling tool. Present a correlational finding from a (fictional) news headline, e.g., "Study Finds Coffee Drinkers Live Longer!" Then ask students to vote on the most likely explanation: A) Coffee causes longer life, B) Healthier people are more likely to drink coffee, C) A third factor (like social activity) is involved. This reinforces the "correlation is not causation" principle.
Distinction-Level Thinking: "While correlational studies cannot prove causation, they are still incredibly valuable in psychology. Discuss the specific circumstances under which a correlational design is not only appropriate but may even be superior to an experimental design. Consider ethical and practical constraints in your answer."
This session introduces the two major philosophical and methodological paradigms in psychological research: quantitative and qualitative. The choice between these approaches is not about which is "better," but about which is more appropriate for the research question being asked. They represent different ways of seeing and understanding the world.
Quantitative Research is concerned with numbers and measurement. It is a deductive approach that typically starts with a hypothesis and seeks to test it by collecting numerical data and analyzing it statistically. The goal is often to identify general laws of behavior, test causal relationships, and generalize findings from a sample to a population.
Qualitative Research is concerned with words, meanings, and experiences. It is an inductive approach that often starts with a broad question and seeks to explore it in depth, generating rich, detailed data from which themes and theories can emerge. The goal is to gain a deep, holistic understanding of a phenomenon in its natural context.
Finally, we will introduce the concept of Mixed-Methods Research. This approach involves collecting and analyzing both quantitative and qualitative data in a single study. By integrating the two, researchers can gain a more complete and nuanced understanding of a research problem, leveraging the strengths of both paradigms. For example, a study could use a quantitative survey to identify trends in student well-being and then conduct qualitative interviews to explore the reasons behind those trends in more depth.
Mini-Activity (5 mins): Ask students to convert a quantitative research question into a qualitative one, and vice versa. For example, "What is the correlation between hours of sleep and GPA?" (Quantitative) becomes "What is the lived experience of high-achieving students with demanding sleep schedules?" (Qualitative).
Interactive Exercise (20 mins): Present a complex research topic, such as "The impact of remote work on employee well-being." Divide the class into two teams: "Team Quant" and "Team Qual." Each team must design a study to investigate the topic using only their assigned methodology. They should specify their research question, method, and the type of data they would collect. This highlights how the two approaches tackle the same problem from different angles.
Classroom Application (Online): Create a two-column table on a collaborative whiteboard labeled "Quantitative" and "Qualitative." Post a series of research methods and data types (e.g., "Experiment," "Interview Transcript," "Likert Scale Score," "Focus Group") as virtual sticky notes. Students drag and drop each item into the correct column.
Distinction-Level Thinking: "A researcher proposes a mixed-methods study. They plan to conduct a quantitative survey first and then use the results to select participants for qualitative interviews. Another researcher suggests doing the qualitative interviews first to help develop the survey questions. Critically evaluate these two different mixed-methods designs (known as sequential explanatory and sequential exploratory). What are the advantages and disadvantages of each sequence?"
This session is designed to bridge theory and practice by preparing students for their Formative Assessment. The assessment requires a 600-word analysis of the methodology used in at least three early social psychological experiments. This task requires students to apply the concepts we've learned—such as research methods, variables, data types, and ethics—to deconstruct and evaluate real research.
To model this process, we will conduct a guided, in-depth analysis of one classic study, such as Asch's Conformity Experiment, as a template. We will break down the study using the exact framework required for the assignment: Method, Data, Findings, and Ethics.
This interactive analysis will serve as a scaffold, providing students with the analytical tools and confidence to approach their chosen studies (e.g., Milgram, Harlow, Pavlov) for the formative assessment. The session will conclude with a Q&A to address any questions about the assignment's requirements, formatting, and referencing.
Mini-Activity (5 mins): Before analyzing the Asch study, ask students to predict what percentage of people would conform in that situation. This creates engagement and often highlights the counter-intuitive nature of psychological findings.
Interactive Exercise (20 mins): After modeling the Asch analysis, provide a brief summary of another classic study (e.g., Bandura's Bobo Doll experiment). In small groups, students must use the four-point assessment framework (Method, Data, Findings, Ethics) to create a bullet-point outline of how they would analyze this new study. This gives them direct practice with the required skill.
Classroom Application (Online): Use a collaborative platform where the four assessment criteria are headings. As a class, collaboratively write a brief analysis of a chosen study under these headings. The instructor can guide the process, and students can contribute ideas in real-time, creating a shared set of notes.
Distinction-Level Thinking: "Many classic studies from the mid-20th century would not be approved by a modern IRB. Select one of the studies for the formative assessment and propose a modified research design that could investigate the same core research question while adhering to today's strict ethical standards. What methodological compromises might you have to make, and how would these affect the validity of your findings?"
This session focuses on the practical "how-to" of data collection, exploring the specific tools researchers use to gather information. The choice of tool must align with the research question and the overall design (qualitative or quantitative).
A crucial step before launching a full-scale study is conducting a pilot study. This is a small-scale trial run of the research with a few participants. It helps the researcher to check that the instructions are clear, the questions are understood, the observation schedule is practical, and the overall procedure runs smoothly. It is an essential step for identifying and fixing problems before investing significant time and resources.
Mini-Activity (5 mins): Provide a poorly designed, double-barreled survey question (e.g., "Do you think that the university should increase tuition fees and improve library services?"). Ask students to identify the flaw and rewrite it as two separate, clear questions.
Interactive Exercise (20 mins): In groups, students are tasked with designing a data collection tool to measure "stress levels in students during exam season."
- Group A must design a short, 5-item quantitative questionnaire using Likert scales.
- Group B must create a 5-question semi-structured interview schedule.
- Group C must develop a simple observation schedule to record stress-related behaviors in the library.
Each group presents their tool and justifies their choices.
Classroom Application (Online): Use a simple online survey tool (like Google Forms). As a class, collaboratively build a short questionnaire on a fun topic (e.g., "Smartphone Usage Habits"). The instructor can guide the process, discussing question types and wording in real-time. Then, students can take the survey themselves to experience it as a participant.
Distinction-Level Thinking: "A researcher is conducting a semi-structured interview. What are the key skills the interviewer must possess to elicit rich, valid data? Discuss the potential for interviewer bias to influence the participant's responses and suggest specific techniques the interviewer can use to minimize this bias and build rapport."
Once data has been collected, it is often just a raw, meaningless jumble of numbers. The first step in making sense of this data is to use descriptive statistics. These are tools that summarize and describe the main features of a dataset. They don't allow us to make conclusions beyond the data we have, but they provide a vital, organized overview.
We will focus on two main types of descriptive statistics: measures of central tendency (the mean, median, and mode) and measures of dispersion (such as the range and standard deviation).
Example: For the dataset [2, 3, 3, 5, 7, 10], the Mean is 5, the Median is 4 (the average of 3 and 5), and the Mode is 3.
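This worked example can be checked directly with Python's standard `statistics` module, which instructors may find handy for generating further practice datasets:

```python
import statistics

data = [2, 3, 3, 5, 7, 10]

print(statistics.mean(data))    # 5
print(statistics.median(data))  # 4.0 (average of the middle pair, 3 and 5)
print(statistics.mode(data))    # 3
```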
Example: Two classes might have the same mean exam score of 75%. However, Class A has an SD of 5, meaning most students scored between 70-80%. Class B has an SD of 20, meaning scores were much more spread out, with many students scoring very high and very low.
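The two-class scenario can be reproduced with hypothetical score lists chosen so that both groups average exactly 75% while their spreads differ:

```python
import statistics

# Hypothetical exam scores: both classes have a mean of 75%.
class_a = [70, 75, 80]  # tightly clustered around the mean
class_b = [55, 75, 95]  # widely spread out

print(statistics.mean(class_a), statistics.stdev(class_a))  # 75 5.0
print(statistics.mean(class_b), statistics.stdev(class_b))  # 75 20.0
```

The identical means with very different standard deviations show why reporting central tendency alone can be misleading.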
Finally, we will explore how to visually represent data using graphs. A well-designed graph can make complex data much easier to understand. We will cover histograms and bar charts, discussing when each is appropriate (histograms for continuous data, bar charts for categorical data) and how to interpret them.
Mini-Activity (5 mins): Give students a small, simple dataset (e.g., [1, 2, 2, 3, 4, 100]). Ask them to calculate the mean and the median. This will immediately demonstrate how a single outlier can dramatically affect the mean but not the median.
Interactive Exercise (15 mins): Provide groups with a slightly larger dataset (e.g., the scores of 15 students on a quiz). Their task is to calculate the mean, median, mode, range, and (conceptually) describe what the standard deviation would tell them. They should then decide which measure of central tendency is the most appropriate for describing this data and justify their choice.
Classroom Application (Online): Use a simple online tool or a shared spreadsheet (like Google Sheets). Input a dataset and use the built-in functions (=AVERAGE, =MEDIAN, =STDEV) to calculate the descriptive statistics in real-time. This demonstrates the practical application and demystifies the calculations.
Distinction-Level Thinking: "A news report states that 'the average salary at Company X is £100,000.' This sounds impressive, but you suspect the data might be skewed. What measure of central tendency are they likely reporting? What other statistic (a different measure of central tendency or a measure of dispersion) would you want to see to get a more accurate picture of the typical employee's salary? Explain why."
While descriptive statistics describe our sample, inferential statistics allow us to make inferences or generalizations about the wider population from which the sample was drawn. They help us answer the question: Is the effect or relationship we found in our sample a real effect, or could it simply be due to random chance?
The foundation of inferential statistics is probability. We use probability to determine the likelihood of our results occurring by chance. This leads to the core logic of Null Hypothesis Significance Testing (NHST). This process works as follows: we begin by assuming the null hypothesis (H0, that there is no real effect) is true, collect data, and then calculate the probability (the p-value) of obtaining results at least as extreme as ours if H0 were true.
A small p-value means our results are very unlikely to have occurred by chance alone. A large p-value means our results are quite likely to have occurred by chance. But how small is "small enough"? We use a pre-determined cut-off point called the significance level, or alpha (α). In psychology, alpha is almost always set at 0.05.
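The decision rule itself is simple enough to express in a few lines of Python. This is just a sketch of the convention described above, not a statistical procedure in its own right:

```python
ALPHA = 0.05  # the conventional significance level in psychology

def nhst_decision(p_value, alpha=ALPHA):
    """Apply the conventional NHST decision rule to a given p-value."""
    if p_value < alpha:
        return "reject H0 (statistically significant)"
    return "fail to reject H0 (not statistically significant)"

print(nhst_decision(0.02))  # reject H0 (statistically significant)
print(nhst_decision(0.34))  # fail to reject H0 (not statistically significant)
```

Note that the code deliberately says "fail to reject H0" rather than "accept H0": a non-significant result is an absence of evidence, not evidence of absence.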
This decision-making process is not perfect. We can make two types of errors: a Type I error (rejecting a null hypothesis that is actually true, a "false positive") and a Type II error (failing to reject a null hypothesis that is actually false, a "false negative").
Mini-Activity (5 mins): Use the analogy of a courtroom trial. The defendant is "innocent until proven guilty" (Null Hypothesis). The prosecution presents evidence (data). The jury must decide if there is enough evidence "beyond a reasonable doubt" (p < 0.05) to convict (Reject H0). Discuss what a Type I error (convicting an innocent person) and Type II error (acquitting a guilty person) would be in this context.
Interactive Exercise (15 mins): Present groups with several research findings, each with a p-value:
- "The p-value for the difference in memory scores between the caffeine and no-caffeine groups was p = .02."
- "The correlation between screen time and anxiety was tested, yielding p = .34."
- "An experiment on a new therapy found a result with p = .05."
For each, groups must decide: Is the result statistically significant? Should they reject or fail to reject the null hypothesis? What conclusion can they draw?
Classroom Application (Online): Use a poll to ask: "A researcher finds a p-value of .04. What is the probability that they have made a Type I error?" The answer is not 4%: if the null hypothesis is true, the probability of a Type I error is alpha (5%), regardless of the observed p-value. This is a common misconception and a great teaching point about what the p-value does and does not mean.
Distinction-Level Thinking: "The reliance on a strict p < .05 cut-off has been heavily criticized in recent years for encouraging 'p-hacking' and contributing to a 'replication crisis' in psychology. Discuss the limitations of Null Hypothesis Significance Testing. What are some alternative approaches (e.g., focusing on effect sizes, confidence intervals, or Bayesian statistics) that could provide a more nuanced understanding of research findings?"
This session introduces specific statistical tests used to compare the mean scores of different groups. The choice of test depends on the research design and the number of groups being compared.
t-tests are used to determine if there is a statistically significant difference between the means of two groups. The t-test produces a 't' statistic, which is essentially a ratio of the difference between the group means to the variability within the groups. A large 't' value suggests a meaningful difference. There are two main types:
Example: A researcher wants to compare the exam scores of a group of students who used a new study app with a control group who studied normally. Since the groups contain different students, an independent-samples t-test is used.
Example: A researcher measures the anxiety levels of a group of participants before they undergo a mindfulness intervention and again after the intervention. Since the same people are measured twice, a paired-samples t-test is used.
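Both worked examples above can be run in a few lines, assuming SciPy (`scipy.stats`) is available; the datasets here are hypothetical numbers invented to mirror the two scenarios.

```python
from scipy import stats

# Independent-samples t-test: different students in each group
# (study-app group vs. control group exam scores).
app_group = [72, 78, 75, 80, 74, 79, 77, 81]
control_group = [68, 70, 66, 72, 69, 71, 67, 73]
t_ind, p_ind = stats.ttest_ind(app_group, control_group)

# Paired-samples t-test: the same participants measured twice
# (anxiety before vs. after a mindfulness intervention).
anxiety_before = [60, 55, 70, 65, 58, 62]
anxiety_after = [52, 50, 63, 60, 51, 55]
t_rel, p_rel = stats.ttest_rel(anxiety_before, anxiety_after)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.4f}")
print(f"paired:      t = {t_rel:.2f}, p = {p_rel:.4f}")
```

In both invented datasets the group difference is large relative to the within-group variability, so both tests return t values well above zero and p-values below .05.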
What if you want to compare the means of more than two groups? You might be tempted to run multiple t-tests, but this is a bad idea: each additional test carries its own 5% chance of a false positive, so the overall (familywise) Type I error rate inflates well beyond .05. The correct tool is the Analysis of Variance (ANOVA).
ANOVA works by comparing the variability between the groups to the variability within the groups. If the variation between the groups is significantly larger than the variation within the groups, it suggests that the independent variable has had a significant effect. ANOVA produces an 'F' statistic. A significant F-test tells us that there is a difference somewhere among the groups, but it doesn't tell us which specific groups differ. For that, we need to conduct post-hoc tests.
Example: A researcher wants to test the effectiveness of three different therapies for depression (CBT, Psychodynamic, and a control group). They would use a one-way ANOVA to compare the mean depression scores of the three groups at the end of the study.
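The three-group therapy example can be sketched with SciPy's one-way ANOVA; the end-of-study depression scores below are hypothetical (lower = less depressed).

```python
from scipy import stats

# Hypothetical end-of-study depression scores for the three groups.
cbt = [10, 12, 9, 11, 13, 10]
psychodynamic = [14, 15, 13, 16, 14, 15]
control = [20, 18, 22, 19, 21, 20]

f_stat, p_value = stats.f_oneway(cbt, psychodynamic, control)
print(f"F = {f_stat:.2f}, p = {p_value:.5f}")

# A significant F only tells us the groups differ *somewhere*;
# post-hoc pairwise comparisons (with a correction for multiple
# testing) are needed to identify which specific groups differ.
```

Because the between-group variation in this invented dataset dwarfs the within-group variation, the F statistic is large and the p-value is well below .05.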
Mini-Activity (5 mins): Present three research scenarios. For each one, students must simply decide which of the three tests (independent t-test, paired t-test, or ANOVA) is the correct one to use. This focuses on the decision-making process rather than the calculation.
Interactive Exercise (15 mins): Give groups a simple research question: "Does the type of music (Classical, Pop, or Silence) affect performance on a spatial reasoning task?" They must: 1. Identify the IV and DV. 2. State the null and research hypotheses. 3. Choose the correct statistical test (ANOVA) and justify why a t-test is inappropriate. 4. Describe what a "statistically significant" result would mean in this context.
Classroom Application (Online): Create a flowchart on a collaborative whiteboard. The chart should guide students through the decision of which test to use based on questions like "How many groups are you comparing?" and "Are the participants in each group the same or different?". Students can then use this flowchart to solve practice problems.
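The same decision flowchart can be expressed as a small function — a simplified sketch covering only the three tests introduced in this session:

```python
def choose_test(n_groups: int, same_participants: bool) -> str:
    """Pick a test for comparing group means (simplified decision rules)."""
    if n_groups > 2:
        return "one-way ANOVA"
    if same_participants:
        return "paired-samples t-test"
    return "independent-samples t-test"

print(choose_test(2, same_participants=False))  # independent-samples t-test
print(choose_test(2, same_participants=True))   # paired-samples t-test
print(choose_test(3, same_participants=False))  # one-way ANOVA
```

Students can trace practice scenarios through the function exactly as they would through the whiteboard flowchart: first count the groups, then ask whether the participants are the same in each.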
Distinction-Level Thinking: "A one-way ANOVA on three groups yields a significant result (p = .03). A student concludes that all three groups are different from each other. Explain why this conclusion is premature and incorrect. What is the next step the researcher must take (i.e., post-hoc tests), and why is this step necessary?"
This session revisits correlation from a statistical testing perspective and introduces regression as a powerful predictive tool. These methods are used when we want to analyze the relationship between continuous variables, rather than comparing group means.
In Session 7, we introduced the correlation coefficient (r) as a descriptive statistic. Now, we use inferential statistics to determine whether the correlation found in our sample is statistically significant. The null hypothesis (H0) for a correlation is that there is no relationship between the two variables in the population (the population correlation, ρ, is 0). A significance test for a correlation produces a p-value. If p < .05, we reject the null hypothesis and conclude that there is a significant relationship between the variables in the population.
Example: A researcher finds a correlation of r = +0.45 between study hours and exam scores in a sample of 50 students. A significance test yields p = .008. Since p < .05, the researcher can conclude that there is a statistically significant positive relationship between study hours and exam scores.
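A correlation and its significance test can be computed in one call, assuming SciPy is available; the study-hours/exam-scores data below are hypothetical numbers invented to echo the example above.

```python
from scipy import stats

# Hypothetical data: weekly study hours and exam scores for 10 students.
study_hours = [2, 4, 5, 6, 8, 9, 10, 12, 13, 15]
exam_scores = [50, 55, 52, 60, 64, 62, 70, 72, 75, 80]

r, p_value = stats.pearsonr(study_hours, exam_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# If p < .05, reject H0 (no relationship in the population) and
# conclude the sample correlation reflects a real association.
```

With this strongly linear invented dataset the correlation is large and positive and the p-value falls well below .05.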
Regression analysis takes correlation a step further. While correlation describes the strength and direction of a relationship, regression allows us to use that relationship to make predictions. Simple linear regression finds the "line of best fit" — the straight line that comes closest, overall, to the data points on a scatterplot. This line is represented by a mathematical equation: Y = a + bX, where a is the intercept (the predicted value of Y when X is 0) and b is the slope (the change in Y for each one-unit increase in X).
Once this equation is established, we can plug in a value for X to predict the most likely value of Y.
Example: A university uses regression to predict student success. They find the equation for predicting first-year GPA (Y) from A-level scores (X) is: GPA = 0.5 + (0.08 * A-level score). They can now use an applicant's A-level score to predict their likely GPA, helping them make admissions decisions.
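Plugging values into the regression equation from the example is simple arithmetic; the coefficients (0.5 and 0.08) come from the hypothetical scenario above.

```python
def predict_gpa(a_level_score: float) -> float:
    """Predicted first-year GPA from the example equation Y = a + bX."""
    intercept, slope = 0.5, 0.08  # a and b from the worked example
    return intercept + slope * a_level_score

# An applicant with an A-level score of 30 has a predicted GPA of about 2.9.
print(f"predicted GPA: {predict_gpa(30):.2f}")
```

The function is just the equation Y = a + bX made executable, which makes it easy for students to see how a one-point change in X shifts the prediction by exactly b.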
Mini-Activity (5 mins): Show a scatterplot with a clear positive correlation and its line of best fit. Ask students to visually estimate the predicted Y value for a given X value by following the graph. This builds an intuitive understanding of what regression does.
Interactive Exercise (15 mins): In groups, students are given a research scenario: "A sports psychologist wants to predict an athlete's performance level (rated 1-100) based on their self-reported motivation level (rated 1-10)." They must: 1. Identify the predictor (X) and criterion (Y) variables. 2. State the null hypothesis for the correlation. 3. Explain what a significant positive correlation would mean. 4. Describe in plain English what a regression equation would allow the psychologist to do in this context.
Classroom Application (Online): Use an interactive online scatterplot generator. The instructor can input data points, and the tool will automatically calculate the correlation coefficient and draw the regression line. This allows for a dynamic demonstration of how adding or moving data points changes the relationship and the predictive line.
Distinction-Level Thinking: "Simple linear regression uses one predictor variable. However, human behavior is complex and rarely predicted by a single factor. Research the concept of 'multiple regression.' How does it differ from simple regression, and why is it a more powerful and commonly used tool in psychological research? Provide a hypothetical research question that could only be answered using multiple regression."
This session shifts our focus from numbers to words, exploring how researchers make sense of rich, non-numerical data like interview transcripts or open-ended survey responses. Qualitative data analysis is an interpretive and inductive process. Rather than testing a pre-defined hypothesis, the goal is to explore the data to identify patterns, themes, and meanings that emerge from the participants' own words.
We will explore two common approaches:
Example: In interviews about work-life balance, a researcher might code extracts as "long hours," "emailing at night," and "pressure to be available." These codes could be grouped into a broader theme called "Inescapable Work Culture."
Example: A researcher might perform a content analysis of children's cartoons to see how male and female characters are portrayed, categorizing and analyzing the types of roles and behaviors they exhibit.
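The counting step of a content analysis — tallying how often each coded category appears — can be sketched with Python's `collections.Counter`; the codes below are hypothetical labels invented for the cartoon example.

```python
from collections import Counter

# Hypothetical codes assigned to character portrayals across episodes.
coded_roles = [
    "leader", "caregiver", "leader", "rescuer",
    "caregiver", "caregiver", "leader", "comic relief",
]

frequencies = Counter(coded_roles)
for role, count in frequencies.most_common():
    print(f"{role}: {count}")
```

This is the point where qualitative material becomes quantifiable: once extracts are coded into categories, their frequencies can be compared across groups (e.g., male vs. female characters).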
Unlike quantitative research, which values objectivity, qualitative research acknowledges the researcher's role in interpretation. Therefore, instead of "validity," we talk about trustworthiness. To enhance trustworthiness, researchers use techniques like keeping a reflective journal to acknowledge their biases and using direct quotes from participants to support their interpretations, ensuring transparency in the analytical process.
Mini-Activity (5 mins): Provide students with a single, rich quote from a fictional interview (e.g., "I love my job, but the constant expectation to be online after hours means I never truly switch off. It feels like I'm letting the team down if I don't reply instantly."). Ask them to generate as many initial codes as they can for this quote (e.g., "job satisfaction," "after-hours work," "guilt," "team pressure").
Interactive Exercise (20 mins): Give groups a short transcript from a fictional interview about students' experiences with online learning. The transcript should contain several clear patterns. The group's task is to go through the first three phases of thematic analysis: 1) Read and familiarize, 2) Generate at least 10-15 codes, 3) Group their codes into 2-3 potential themes. Each group then presents one of their themes and the codes that support it.
Classroom Application (Online): Use a collaborative tool like a word cloud generator (e.g., Mentimeter). Ask students to read a short text and submit the single word they think is most important or representative. The resulting word cloud can visually highlight key concepts and serve as a starting point for a discussion about themes.
Distinction-Level Thinking: "Qualitative research is sometimes criticized by quantitative researchers for being 'subjective' and 'unscientific.' Construct a robust defense of qualitative research. In your answer, address the concept of trustworthiness and explain how qualitative researchers establish rigor in their work through specific techniques (e.g., triangulation, reflexivity, member checking) that are different from, but parallel to, the concepts of reliability and validity in quantitative research."
A core academic skill is the ability to read and critically evaluate primary research literature. This session demystifies the structure of a typical psychology journal article (often following APA style) and provides a framework for active, critical reading, which is essential for Part 1 of the final assessment.
We will break down the standard sections of an empirical paper:
To critically evaluate a paper, students should ask questions as they read: Is the introduction's argument logical? Is the method sound (good validity and reliability)? Are the conclusions in the discussion justified by the data in the results section? What are the key weaknesses or limitations?
Mini-Activity (5 mins): Give students just the abstract of a research paper. Ask them to write down the study's main research question, the key finding, and the number of participants, based only on the abstract. This demonstrates the power of the abstract for quick comprehension.
Interactive Exercise (20 mins): Provide a one-page, simplified "mock" research paper. In groups, students act as peer reviewers. They must read the paper and identify one major strength and one major weakness in each of the main sections (Introduction, Method, Results, Discussion). This provides structured practice in critical evaluation.
Classroom Application (Online): Find a real (but accessible) open-access psychology article. Share the screen and do a "live reading" of the introduction and discussion sections. Use a highlighter tool to mark the key components in real-time: the literature review, the "gap," the hypothesis, the interpretation of findings, and the limitations.
Distinction-Level Thinking: "The 'Discussion' section of a paper is not just a summary of the results; it is an argument constructed by the author. Critically discuss the potential for author bias to influence the 'spin' placed on the findings. How might an author downplay non-significant results or overstate the importance of their findings? What specific parts of the paper should a critical reader look at to form their own, independent conclusion?"
This session marks the transition from evaluating others' research to conceptualizing one's own. This is the first and most creative step in the research process and is essential for Part 2 of the final assessment. A good research project is built on the foundation of a good research question.
What makes a research question "good"? We will discuss several criteria:
Where do research ideas come from? We will explore several sources:
Once a research question is formulated, the next step is to conduct a literature review. This involves systematically searching for and reading what has already been published on the topic. The literature review serves two key purposes: 1) It helps to further refine the research question, and 2) It provides the theoretical and empirical basis for formulating a specific, testable hypothesis. A good hypothesis is not a random guess; it is an educated prediction based on the existing body of knowledge.
Mini-Activity (5 mins): Give students a broad, unspecific research topic (e.g., "Memory" or "Stress"). In pairs, they have two minutes to brainstorm as many specific, researchable questions as they can related to that topic.
Interactive Exercise (20 mins): Assign each group one of the research topics from the final assessment (e.g., "sleep deprivation and short-term memory"). Their task is to go through the narrowing process. They should start with the broad topic and develop at least two different, specific research questions. For one of those questions, they must then formulate a clear, directional research hypothesis.
Classroom Application (Online): Use a "mind mapping" tool (like Miro or Coggle). Start with a central research topic. As a class, brainstorm and add branches for different sub-topics, populations, and variables. This visually demonstrates the process of narrowing down a broad idea into a manageable research question.
Distinction-Level Thinking: "A student proposes the research question: 'Does therapy work?' Critically evaluate this question using the criteria of a good research question (specificity, researchability). Revise and refine this question into a high-quality, testable research question that would be appropriate for a small-scale undergraduate project. Justify the changes you made."
This session is a practical guide to constructing a research proposal, the blueprint for a research study. This is the central task for Part 2 of the final assessment. A research proposal is a persuasive document designed to convince the reader that the proposed study is well-founded, important, and methodologically sound.
We will walk through the key sections of a research proposal, aligning them with the assignment requirements:
Mini-Activity (5 mins): Ask students to focus on the "Justification" aspect. For a hypothetical study using a survey, ask them to list three reasons WHY a survey is an appropriate method for the topic, and one reason why it might be a limitation.
Interactive Exercise (20 mins): Provide a brief, flawed "Method" section of a research proposal. In groups, students must critique it. They should identify missing information (e.g., no mention of sampling method, no data analysis plan) and weaknesses in the design. They should then rewrite it to be more complete and rigorous.
Classroom Application (Online): Create a template for a research proposal in a collaborative document (e.g., Google Docs) with all the required headings. As a class, choose one of the assessment topics and collaboratively fill in bullet points for each section. This provides a scaffold and a shared example for students to follow.
Distinction-Level Thinking: "The 'Limitations' section of a proposal is not just about listing weaknesses; it's about showing critical awareness. For a proposed study on the link between social media and adolescent mental health, identify potential limitations related to (a) sampling, (b) measurement, and (c) causality. For each limitation, suggest a specific way a future, more advanced study could address it."
This session is a dedicated workshop focused entirely on preparing for Part 1 of the final assessment. The objective of this task is to demonstrate your ability to critically deconstruct and evaluate a piece of published quantitative research. This requires you to apply all the analytical skills you have developed throughout this course.
Task: Locate a peer-reviewed research article published in a scholarly journal related to psychology. The research must include quantitative methods. Write a 1000-word analysis of the article.
Your analysis must include:
Example: "While Smith's (2021) experimental study provides compelling evidence for the link between mindfulness and attention, its conclusions are weakened by a homogenous sample and a failure to control for placebo effects."
| Learning Outcomes | Assessment Criteria |
|---|---|
| 1. Understand the experimental methods applied in psychology. | 1.1 Analyse the principles of research design. 1.2 Analyse the way in which scientific method, experimental and descriptive research are interlinked. |
| 2. Understand research methods in a psychological context. | 2.1 Analyse the features of research methods used in psychology. |
| 3. Understand types of data analysis and evaluation in a psychological context. | 3.1 Analyse types of data analysis used in research. |
| 4. Be able to carry out research design and review in a psychological context. | 4.1 Draw on the findings of psychological papers to inform research design. |
Interactive Exercise: Provide students with a sample quantitative research article. In groups, they will work through a checklist that mirrors the assignment requirements, identifying the hypothesis, design, variables, findings, and limitations. Each group will then present their analysis of one section of the paper.
Classroom Application (Online): Use a collaborative document where the assignment's required sections are laid out. The instructor will share a pre-selected article, and the class will collectively "fill in the blanks" for each section, building a model analysis together in real-time.
Distinction-Level Thinking: "Beyond the explicit limitations mentioned by the authors, what is a more subtle or deeper critique of the study's methodology? For instance, does the way they operationalized a key variable truly capture the complexity of the psychological construct? Justify your critique."
This final session is a workshop dedicated to Part 2 of the final assessment: writing a research proposal. The goal of this task is to synthesize all the knowledge from the course to design a coherent, justified, and ethically sound research study from scratch.
Task: Select one of the topics below and write an 800-1000-word research proposal based on the scientific method.
Proposed Topics:
The proposal must include:
| Learning Outcomes | Assessment Criteria |
|---|---|
| 3. Understand types of data analysis and evaluation in a psychological context. | 3.2 Analyse the interrelationship between statistics and research hypotheses in psychology. |
| 4. Be able to carry out research design and review in a psychological context. | 4.1 Draw on the findings of psychological papers to inform research design. 4.2 Apply and justify the choice of method to a research scenario. |
Interactive Exercise: Students will choose one of the provided research topics. In breakout groups, they will create a basic outline for their proposal, including a draft hypothesis, a choice of research design, a proposed sampling method, and a plan for data analysis. This serves as a structured brainstorming session.
Classroom Application (Online): The instructor will lead a "Proposal Clinic." Students can share their draft ideas or specific questions (e.g., "What's the best way to measure 'well-being'?") and receive live feedback from the instructor and peers.
Distinction-Level Thinking: "For your chosen topic, justify why a mixed-methods approach, while not required for this assignment, could provide a richer and more comprehensive understanding than a purely quantitative design. What specific qualitative component would you add, and what unique insights would it provide?"