Levels of Processing Theory
- Craik & Lockhart (1972) proposed that stimulus inputs undergo successive processing operations. The early stages of processing are “shallow” and involve coding the stimulus in terms of its physical characteristics (e.g. the visual characteristics of the letters and typeface in which a word is printed, or the acoustic features of a sound). “Deep” processing involves coding the stimulus more abstractly in terms of its meaning. So, visual and acoustic coding are shallow, but semantic coding is deep.
- Rehearsing material by simple rote repetition is called maintenance rehearsal and is classified as shallow. Rehearsing material by exploring its meaning and linking it to semantically associated words is called elaborative rehearsal and is classified as deep.
- The crucial assumption of levels of processing theory is that retention of an item depends on the depth or level of processing carried out on the to-be-remembered material. Superficial processing leads only to shallow, short-term retention; deep processing leads to efficient, durable retention.
- Elias & Perfetti (1973) gave subjects a number of different tasks to perform on each word in a list, such as finding another word that rhymes with it, or finding a word with the same or similar meaning (a synonym). The rhyming task involved only acoustic coding and hence a shallow level of processing; the synonym task involved semantic coding and hence a deep level of processing. The subjects were not told that they would be asked to recall the words, but nevertheless remembered some of them when subsequently tested. This is called incidental learning, as opposed to intentional or deliberate learning. Subjects recalled significantly more words following the synonym task than following the rhyming task, suggesting that deeper levels of processing lead to better recall.
Hyde & Jenkins (1973) carried out a typical experiment using the incidental learning technique. Different groups of subjects performed one of the following five tasks on a list of words:
- rating the words for pleasantness (e.g. is “donkey” a pleasant word?)
- estimating the frequency with which each word is used in the English language (e.g. how often does “donkey” appear in the English language?)
- detecting the occurrence of the letters “e” & “g” in the list words (e.g. is there an “e” or a “g” in the word “donkey”?)
- deciding the part of speech appropriate to each word (e.g. is “donkey” a verb, noun or an adjective?)
- deciding whether the words fitted into particular sentences (e.g. does the word “donkey” fit into the following sentence: “I went to the doctor and showed him my …………”)
Five groups of subjects performed one of these tasks without knowing that they would be asked to recall the words. A further five groups of subjects performed the same tasks but were told that they should learn the words. Finally, a control group was instructed to learn the words without performing any of the tasks. All groups were given a test of free recall shortly after completing the orienting task.

Hyde & Jenkins found that the pleasantness-rating and frequency-estimation tasks produced the best recall. They claimed that this was because these tasks involved semantic processing whereas the other tasks did not. Whether or not you agree with this claim (see evaluation below), one interesting finding was that incidental learners performed just as well as intentional learners in all tasks – this suggests that it is the nature of the processing, rather than the intention to learn, that determines how much you will remember. Bear this in mind when you are revising: the more processing you perform on the information (e.g. quizzes, essays, spider diagrams etc.), the more likely you are to remember it.
- It is usually the case that deeper levels of processing do lead to better recall. However, there is debate about whether it is the depth of processing or the amount of processing effort that produces the result. Tyler et al. (1979) gave subjects two sets of anagrams to solve: easy ones, such as DOCTRO, and difficult ones, such as TREBUT. Afterwards, subjects were given an unexpected test of recall for the anagram words. Although the processing level was the same in both conditions – subjects were processing on the basis of meaning – they remembered more of the difficult anagram words than the easy ones. Tyler et al. therefore concluded that retention is a function of processing effort, not processing depth.
- Another problem is that subjects typically spend a longer time processing the deeper or more difficult tasks. So, it could be that the results are partly due to more time being spent on the material. The type of processing, the amount of effort & the length of time spent on processing tend to be confounded. Deeper processing goes with more effort and more time, so it is difficult to know which factor influences the results.
- Related to the previous point, it is often difficult with many of the tasks used in levels of processing studies to be sure what the level of processing actually is. For example, Hyde & Jenkins (described above) assumed that judging a word’s frequency involves thinking of its meaning, but it is not altogether clear why this should be so. They also argued that deciding the part of speech to which a word belongs is a shallow processing task – but other researchers claim that this task involves deep, semantic processing. So a major problem is the lack of any independent measure of processing depth.
- Eysenck (1978) claims “In view of the vagueness with which depth is defined, there is danger of using retention-test performance to provide information about the depth of processing and then using the … depth of processing to ‘explain’ the retention-test performance, a self-defeating exercise in circularity”. What he means is that if a person performs well on a test of recall after performing a particular task then some researchers will claim that they must have performed a deep level of processing on the information in order to remember it – a circular argument.
Another objection is that levels of processing theory does not really explain why deeper processing is more effective. Eysenck (1990) claims that it describes rather than explains what is happening. However, recent studies have clarified this point: it appears that deeper coding produces better retention because it is more elaborate. Elaborative encoding enriches the memory representation of an item by activating many aspects of its meaning and linking it into the pre-existing network of semantic associations. Deep semantic coding tends to be more elaborated than shallow physical coding, and this is probably why it works better. This view fits in well with the constructivist approach discussed on your previous handout – it emphasises the importance of integrating new information with existing knowledge.
The levels of processing theory suggests that shallow processing occurs at the early stages, and information is processed more deeply as it passes on through the system: the more deeply information has been processed the more likely you are to remember it.