ASSESS CLAIMS THAT COGNITIVE PERSPECTIVE LACKS ECOLOGICAL VALIDITY.
CONSIDER ALTERNATIVE RESEARCH METHODS.
- Ecological validity – the validity that a principle observed in a laboratory setting has outside that setting, in the field, in the real world. OR …
Ecological validity refers to whether a method measures behaviour that is representative of naturally occurring behaviour. Since it is difficult to say what conditions are natural or normal (some field experiments may be conducted under very unusual circumstances, while laboratories are human social situations too), ecological validity is perhaps best measured by the extent to which research findings can be generalised to other research settings (it is a form of criterion-related validity). Norms established using university students might have low ecological validity when applied to blue-collar workers of the same age.
- Cognitive perspective studies information processing. Information, as such, is a theoretical construct – there is no “palpable” information in real life. Therefore, psychologists working within the perspective must form models, which by their nature are only an abstract representation of real life. Examples of models in cognitive psychology are:
- 3-Box-Memory model (Atkinson & Shiffrin)
- levels of processing model (Craik & Lockhart)
- General Problem Solver GPS (Newell & Simon)
- LAD (Chomsky)
- Single filter attention model (Broadbent)
- Attenuation attention model (Treisman)
- Cognitive psychology uses the laboratory experimental method.
(+) This method ensures a scientific and rigorous way of researching, however … (-) demand characteristics and experimenter expectancy can occur; deception / single-blind and double-blind methods / inter-observer reliability, respectively, can be used to counteract these;
(-) the sample can be unrepresentative (e.g. due to an opportunity sample – psychology students!, or a self-selecting sample – “good subjects”), which can be counteracted by e.g. randomization or stratified sampling.
(-) laboratory experimentation seems culture-biased and gender-biased.
3a. Laboratory experiment – apparatus – artificiality and technical limitations, for example:
- tachistoscope – very complex structure (“pipes” + mirrors + light) – e.g. Sperling’s study on sensory memory;
- computer – poor screen refresh rates (a technological weakness), and not everybody is equally familiar with it (e.g. in decision-making testing – Damasio – “play cards to win as much as possible”) – esp. a cross-cultural consideration, but obviously not the only one. Order effects (tiredness) can also occur here;
- Headphones – e.g. dichotic listening tasks (Cherry, Treisman).
3b. Laboratory experiment – creating an artificial situation (context), which does not happen in real life, for example:
- Murphy & Zajonc study (’93) on affective priming – priming Chinese ideograms with happy and sad faces – in real life, you normally don’t see anything for a subliminally short time;
- Ebbinghaus (1885) – learning lists of nonsense syllables;
- Decision-making computer simulation – you are not a manager of a multinational company, usually; moreover, you might care less about gains and losses when simulating…
- Treisman – shadowing or non-shadowing a text.
- Measurement – for example, the IAT (Implicit Association Test) by Greenwald and Banaji – they used the IAT to study implicit attitudes (e.g. to other races, sexes, etc.), which they measured via reaction time (the time taken to press a button when deciding whether a word is positive or negative). Were they right to directly “translate” reaction times into attitudes?
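The reaction-time logic can be sketched in code. This is a hedged illustration, not Greenwald and Banaji’s actual IAT scoring procedure: the function name, the latency values and the pairing labels are all hypothetical, and simply show how a difference in mean reaction times between two sorting conditions can be turned into a single “attitude” score.

```python
from statistics import mean, stdev

def rt_effect(compatible_ms, incompatible_ms):
    """Illustrative IAT-style effect: difference in mean reaction
    times between two pairing conditions, divided by the pooled
    standard deviation (a sketch, not the published algorithm)."""
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical single-participant latencies in milliseconds
compatible = [620, 650, 600, 640]     # e.g. congruent word/category pairing
incompatible = [780, 820, 760, 800]   # e.g. incongruent pairing
score = rt_effect(compatible, incompatible)
print(round(score, 2))
```

The open question in the notes stays visible here: the code mechanically converts milliseconds into a number, but nothing in the computation itself guarantees that the number measures an attitude.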
- Ecological validity also seems poor when applying cognitive psychology studies to other cultures, for example:
- Cross-cultural studies on memory (possibly a poor primacy effect, which is used as evidence for the existence of the 3-Box-Model);
- … on perception (differences in perceiving illusions);
- … on problem solving.
- On the other hand, subjects in cognitive research are human animals (: no issue of generalisability from non-human to human animals!).
- Alternative research methods:
- brain scanning techniques for memory, attention, problem solving (: the field of cognitive neuropsychology); (+) the result is not a model, but a ‘real life’ representation of brain activity when memorising something / solving a problem, etc.;
- case studies on damaged brain patients, e.g. H.M. (+) human, and not a model, is a starting point for the study, (-) poor representativeness, poor experimental control;
- field experiment – e.g. research on how students memorise / pay attention at school (+) natural conditions, (-) poor control of extraneous variables, not everything can be studied like that (e.g. sensory memory);
- verbal protocol – people reporting on how they solve a problem, (-) problem of introspection – subjectivity;
- verbal report – e.g. Loftus (but it was also an experiment)
- observation (e.g. at school)
- questionnaire (e.g. for attitudes – ask if people are prejudiced!)
ASSESS THE EXTENT TO WHICH CONCEPTS AND MODELS OF INFORMATION PROCESSING HAVE HELPED IN UNDERSTANDING COGNITION.
The model | What it explains (strengths of the model) | What it doesn’t explain (weaknesses of the model)
3-Box-Model (Atkinson & Shiffrin, ’68); the stores (sensory memory, STM, LTM) are structural components of the model, but a number of control processes exist too, such as attention, coding or rehearsal, which operate in conjunction with the stores.
Sensory memory – Sperling’s tachistoscope studies – very short duration;
STM – duration – Peterson & Peterson – trigram experiment (nonsense-syllable recall after 3, 6, …, 18 seconds – 80% recall, …, 10% recall, respectively);
STM – capacity – Miller (’56) – the magical number seven, plus or minus two – chunking information, esp. to give it meaning from LTM (e.g. 1939 – 1498 – 1945);
STM – encoding – esp. acoustic – Conrad (’64) – rhyming letters are more confusable (e.g. man–map);
LTM – duration – Ebbinghaus – nonsense syllables – delay 20 minutes to 31 days;
LTM – capacity – enormous, impossible to measure;
LTM – encoding – Baddeley (’66) – semantic storing – e.g. great, big, huge and wide were easily confused after 20 min.
Serial position effect (primacy effect: info retrieved from LTM – asymptote: info lost, e.g. displaced – recency effect: info still in STM) – tested in free recall experiments;
Further evidence for the serial position effect:
- Slower rates of presentation can improve the primacy effect (more rehearsal time);
- The recency effect disappears if the words are not recalled straight away, or when there is e.g. interference.
Primacy effect does not seem universal across cultures (effect of schooling?; correlates with literacy);
Brain-damaged patients – e.g. anterograde amnesia – the inability to keep memories for longer than the fleeting moment – the model explains this as an inability to transfer new factual info from STM to LTM – biologically: brain damage to the hippocampus;
If these people are given free recall experiments, they show good recency effects, but extremely poor primacy effects.
The model explains e.g. why we have to repeat a telephone number so that it does not escape our memory – so that new info does not interfere with / displace the number; or why we remember a 7-digit phone number, but usually not a 14-digit one – unless chunked (Miller).
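Miller’s chunking point can be illustrated with a short sketch (the digit string and the fixed chunk size of 4 are assumptions chosen to mirror the year example above):

```python
def chunk(digits, size=4):
    """Group a digit string into fixed-size chunks. If the chunks map
    onto meaningful LTM knowledge (e.g. famous years), STM only has to
    hold a few items instead of a long digit sequence."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

number = "193914981945"    # 12 digits: over Miller's 7 +/- 2 span
chunks = chunk(number)     # 3 chunks: comfortably within the span
print(chunks)
```

Twelve unrelated digits exceed the 7 ± 2 span, but recoded as three familiar years the same sequence is only three items.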
It does not explain in what forms (codes) knowledge is represented in our brains (e.g. ’85 – Tulving’s subsequent research into procedural and declarative memory, the latter including semantic and episodic memory);
It does not explain whether or not we are aware of what we have in our memory (e.g. ’86 – Graf’s subsequent research on explicit and implicit memory);
It does not explain how information is transferred from STM to LTM (e.g. ’72 – Craik and Lockhart’s subsequent levels of processing model – it includes the strategies we use when learning – e.g. maintenance rehearsal / rote learning leads to shallow processing, while elaborative rehearsal leads to semantic / deep processing);
It does not explain the role of emotional and situational factors in remembering information, e.g. personal relevance – e.g. flashbulb memory, nicer memories stay more easily (adaptive?); distinctiveness of info – optimally average: it then fits into the existing schemas, but produces some dissonance, which enhances accommodation.
BOTTOM-UP (data-driven) processing – emphasises the richness of the information entering the eye and the way that perception can occur using all the info available; perception occurs directly from sensation.
e.g. Gibson – direct perception theory – we perceive depth using monocular depth cues (e.g. texture gradients, overlap, linear perspective) and binocular depth cues (retinal disparity and ocular convergence);
Explains some universalities among people in what they perceive, and that vision is generally very accurate even in novel situations;
Explains why the system reacts fast – why we perceive so fast – because we don’t have to search through a store of cognitive schemas;
Explains animal perception;
Explains some aspects of human perception, esp. when data is unambiguous (e.g. good lighting conditions);
Can explain “seeing”, but not “seeing as” – attaching meaning to what you see;
Gibson attempts to explain how we perceive the functions of objects – by affordances: one look at a chair or a post-box and you can see its meaning (what it serves for).
Can’t explain some constancies – e.g. colour constancy (pink snow);
Why we perceive illusions (e.g. the Ponzo or Mueller-Lyer illusion); TOP-DOWN models explain illusions as mistaken hypotheses;
(-) however, illusions are artificial and do not represent perceptual behaviour in the real world;
Why illusions are culturally specific (e.g. the carpentered world hypothesis – Western cultures are more prone to Mueller-Lyer Illusion);
Can’t explain the lack of recovery of cataract patients;
Can’t explain perceptual set – the effect of expectations and context on what we perceive (e.g. ABC – 11, 12, 13), as well as motivation (you see food as brighter than other pictures when hungry) and emotion (taboo words are recognized less quickly than non-emotionally arousing words – perceptual “defence”) –
PERCEPTUAL SET THEORY – perception as an active process involving selection, inference and interpretation. Perceptual set is a bias or readiness to perceive certain aspects of available sensory data and to ignore others. Set is influenced by expectations, context, motivation, emotion, etc.
Can’t explain TOP-DOWN (concept-driven) processing or internal representations – the construction of reality that goes beyond info received from the senses; perception is an active process here, and a perceived object is a hypothesis to be tested by our senses.