Several theories of memory are based on the assumption that there are three kinds of memory:
sensory memory, short-term memory and long-term memory.
Most models of memory include these three components and resemble the Atkinson and Shiffrin model below. The details will be discussed later.
Sensory memory is a storage system that holds information in a relatively unprocessed form for fractions of a second after the physical stimulus is no longer available. It has been suggested (e.g. Baddeley 1988) that one function of this kind of storage is to allow information from successive eye-fixations to last for a long enough time to be integrated and so to give continuity to our visual environment. For example, if you move a lighted sparkler rapidly round in a sweeping arc, you will ‘see’ a circle of sparkling light. This is because the trace from the point of the sparkler is momentarily left behind. However, if you move the sparkler slowly, only a partial circle will be seen because the first part of the circumference will have faded by the time the sparkler gets back to its starting point. Similarly, if you watch a film, your conscious experience is of a continuous visual scene in which all of the action appears to be moving smoothly. In fact, the film is actually being presented as a rapid series of frozen images interspersed by fleeting moments of darkness. In order to make sense of it, your sensory store has to hold the information from one frame of film until the next is presented. These everyday examples suggest that we are capable of storing visual images for very brief periods. It is assumed that we have separate sensory stores for all the senses.
Short-term memory (STM) is a system for storing information for brief periods of time. Some researchers (e.g. Atkinson and Shiffrin 1968) see STM simply as a temporary storage depot for incoming information, whereas others (e.g. Baddeley 1986, 1990; Gathercole 1992) prefer to use the term ‘working memory’ to indicate its dynamic, flexible aspects.
There are three important areas to consider when looking at STM: capacity, duration and encoding.
Try the activity below before reading any further.
Try to work out the following problems using mental arithmetic. Do not write anything down.
- (a) 5 x 7 =
- (b) 53 x 7 =
- (c) 53 x 78 =
You probably found problem (a) extremely easy and problem (b) difficult, but possible. Problem (c), however, posed much more of a challenge because it stretched the limits of your STM by requiring you to carry too much information at once. It can feel quite frustrating as you struggle to hold on to relevant bits of information while manipulating others. This kind of exercise indicates that STM has a limited capacity, i.e. we can only hold a small number of items at any one time. One way of assessing STM capacity is by measuring immediate digit span. This technique usually involves reading out a list of random digits and requiring the participant to repeat them back in the correct order. The sequences usually begin with about three digits and steadily increase in length until it becomes impossible to recall them in serial order. Over a number of trials, the sequence length at which the participant is correct 50 per cent of the time is defined as their digit span. Most people have a digit span of ‘seven, plus or minus two’ (Miller 1956). Miller claimed that this finding holds good for lists of digits, letters, words or larger ‘chunks’ of information. According to Miller, chunking occurs when we combine individual letters or numbers into a larger, meaningful unit, for example your bank PIN.
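The effect of chunking on apparent capacity can be sketched in a few lines of Python. This is purely illustrative: the eleven-digit string and the three-digit grouping are assumptions chosen for the example, not data from Miller's studies.

```python
def chunk(digits, size):
    """Group a digit string into larger meaningful units ('chunks')."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

SPAN_LIMIT = 7  # Miller's 'seven, plus or minus two'

number = "07700900123"  # an illustrative 11-digit string

as_digits = list(number)      # 11 separate items: beyond most people's span
as_chunks = chunk(number, 3)  # ['077', '009', '001', '23']: only 4 items

print(len(as_digits) <= SPAN_LIMIT)  # False: too many individual digits
print(len(as_chunks) <= SPAN_LIMIT)  # True: chunking brings it within span
```

The same string exceeds span as individual digits but fits comfortably once recoded into larger units, which is the point of Miller's chunking idea.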
Some more recent researchers have found that pronunciation time may be a more important indicator of STM capacity than digit span. Schweikert and Boruff (1986) tested immediate span for a number of different types of stimulus, e.g. letters, colours, shapes and nonsense words. They found that people consistently remembered as many items as they were able to pronounce in approximately 1.5 seconds. Baddeley et al. (1975) found that participants in a serial recall test could remember more one-syllable words than five-syllable words. They concluded that long words were harder to recall because participants said the words to themselves under their breath and longer words take longer to articulate. Naveh-Benjamin and Ayres (1986) tested immediate memory span for speakers of various world languages. They found, for example, that the digit span for native English speakers is considerably greater than for Arabic speakers. The most plausible explanation for this finding is that Arabic numbers have more syllables and take longer to pronounce than English numbers.
However capacity is measured, it seems clear that STM is only able to hold a few items at any one time. It is also the case that, by its very nature, STM has a brief duration. The first attempts to measure the duration of STM were made independently by Brown (1958) and Peterson and Peterson (1959). They used a similar experimental method, which is now known as the Brown-Peterson technique. The technique involves presenting participants with consonant trigrams, which are sets of three unrelated consonants, e.g. CPW, NGV. Note that such a sequence should be well within the normal memory span. Participants are then asked to count backwards in threes from a specified number in order to stop them thinking about the letters. After an interval ranging from 3 to 18 seconds, the participants are asked to recall the original trigram. This procedure is then repeated several times. Typical results of Brown-Peterson experiments show rapid forgetting over a short interval and, after 18 seconds, the percentage of correctly recalled trigrams falls to 10 per cent. Sebrechts et al. (1989) briefly presented participants with lists of three common English nouns and then gave them an unexpected, serial recall test. Correct recall of the items fell to 1 per cent after only four seconds. Studies such as these demonstrate that information can vanish from STM in a matter of a few seconds if rehearsal is prevented, or if people are not making a conscious effort to retain it.
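The rapid forgetting seen in Brown-Peterson experiments can be approximated with a simple exponential decay model. The sketch below is an illustration, not a model from the original papers: the decay rate is simply chosen so that recall falls to the reported 10 per cent at 18 seconds.

```python
import math

# Decay rate fitted to one reported data point: 10% correct at 18 s
K = math.log(10) / 18

def recall_percent(seconds):
    """Illustrative exponential model of trigram recall without rehearsal."""
    return 100 * math.exp(-K * seconds)

for t in (0, 3, 6, 18):
    print(f"{t:2d} s: {recall_percent(t):5.1f}% correct")
```

The curve starts at 100 per cent with no retention interval and drops steeply within the first few seconds, mirroring the shape of the published forgetting curves.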
When we receive information into short-term memory, how do we encode it? Is it coded visually, acoustically or semantically? There has been considerable research into this question. Much of the evidence about coding comes from studies of so-called substitution errors, which occur when people confuse one item for another. If, for example, they confuse letters which sound alike, it indicates that acoustic coding is being used. If, however, letters that look similar are confused, it indicates that visual coding is being used. Conrad (1964) showed participants random sequences of six letters taken from the consonants B, C, F, M, N, P, S, T, V and X. Six letters were shown in rapid succession on a screen and participants were required to write them down as they appeared. The rate of presentation was too fast for the participants to keep up, so they had to rely on memory. Conrad carefully noted the errors and found that the significant majority involved the substitution of a similar-sounding letter (e.g. ‘B’ for ‘V’ and ‘S’ for ‘X’). In a similar study, Conrad demonstrated that participants found it more difficult to recall strings of acoustically similar letters (e.g. P, C, V, T, G, B, D) than strings of acoustically dissimilar letters (e.g. L, Z, K, F, X, H, W), even though they were presented visually. He concluded that such acoustic confusion provided evidence for acoustic coding in STM. Baddeley (1986) explored the effects of acoustic similarity using words rather than letters. He presented participants with sequences of five short words taken from a pool of words which were acoustically similar (man, mad, mat, map, can, cat, cap). He compared their serial recall performance with that on sequences of short, acoustically dissimilar words (pen, day, few, sup, cow, pit, bar, hot) and sequences of short, semantically similar words (big, large, wide, high, great, tall, long, broad).
Like Conrad, Baddeley found that words with similar sounds were much harder to recall than words that did not sound alike. Similarity of meaning had only a very slight detrimental effect on performance. Baddeley concluded that STM relies heavily on acoustic coding. Interestingly, he found that the effects of sound similarity disappeared when he tested participants’ long-term memory. He extended the length of the word lists from five to ten and prevented participants from repeating the words by interrupting them after each presentation. The lists were presented four times and recall was tested after 20 minutes. Under these conditions, participants found recall of the semantically similar words much more difficult than recall of the acoustically similar words. Baddeley concluded that long-term memory makes use of semantic rather than acoustic coding.
Long-term memory (LTM) holds a vast quantity of information which can be stored for long periods of time. The information kept here is diverse and wide-ranging and includes all of our personal memories, our general knowledge and our beliefs about the world.
It also includes plans for the future and is the depository for all our knowledge about skills and expertise. LTM is not a passive store of information, but a dynamic system which constantly revises and modifies stored knowledge in the light of new information. LTM is a much larger, more complex memory system than STM and it is not so easy to characterize in terms of factors like capacity, duration and simple encoding:
♦It is not possible to quantify the exact capacity of LTM, but most psychologists would agree that there is no upper limit – we are always capable of more learning.
♦Similarly, the duration of the memory trace in LTM is considerably longer than in STM and can last anything from a few minutes to a lifetime.
♦As far as encoding is concerned, there is some evidence (e.g. the Baddeley studies described above) that the meaning of the stimulus is often the important factor here; in other words, semantic coding is important. However, it is clear from our own experience that material can be represented in other ways as well. Our ability to recognize sounds such as police sirens and telephones ringing shows that we can store material in an acoustic form. We can also easily bring to mind pictorial images of people or places, which suggests some visual coding in LTM.
Models of Memory:
A number of memory theorists have proposed that the memory system is divided into three stores, as outlined in the previous section. A typical theory of this type was proposed by Atkinson and Shiffrin (1968) and, because it quickly became the standard explanation of the memory system, it is often called the modal model. In this theory, they attempt to encompass all of memory and, in particular, focus on the distinction between short- and long-term memory. Their model arose from the information-processing approach which, in turn, derives from communication and computer science. According to this approach, memory is characterized as a flow of information through a system. The system is divided into a set of stages and information passes through each stage in a fixed sequence. There are capacity and duration limitations at each stage and transfer between stages may require recoding. See earlier diagram.
Atkinson and Shiffrin proposed that external stimuli from the environment first enter sensory memory, where they can be registered for very brief periods of time before decaying or being passed on to the short-term store. STM contains only the small amount of information that is actually in active use at any one time. Verbal information is encoded at this stage in terms of its sounds. Atkinson and Shiffrin believed that memory traces in STM are fragile and can be lost within about 30 seconds unless they are repeated (rehearsed). Material that is rehearsed is passed on to the long-term store where it can remain for a lifetime, although loss is possible from this store through decay or interference.
Coding in LTM is assumed to be in terms of meaning, i.e. semantic. In addition to describing the structural features of the memory system, Atkinson and Shiffrin also proposed various control processes, which are strategies used by individuals to manipulate the information flowing through the system. One of the most important of these is rehearsal, which allows information to be recycled within STM and passed on into LTM.
Evaluation of the multistore model
A crucial aspect of the multistore model is that there are distinct short-term and long-term stores. We have already looked at some of the experimental evidence which suggests that LTM and STM operate differently in terms of capacity and duration.
Other evidence in support of the distinction between STM and LTM comes from case studies of people with brain damage which gives rise to memory impairment. Milner (1966) reported on a young man, referred to as HM, who was left with severe memory impairment after brain surgery. He was able to talk normally and to recall accurately events and people from his life before surgery, and his immediate digit span was within normal limits. He was, however, unable to retain any new information and could not lay down new memories in LTM. When told of the death of his favourite uncle, he reacted with considerable distress. Later, he frequently asked about his uncle and, on each occasion, reacted again with the level of grief appropriate to hearing the news for the first time. KF, a motorcycle accident victim investigated by Shallice and Warrington (1970), suffered from the reverse of this memory impairment. He had no difficulty in transferring new items into LTM but had a grossly impaired digit span. Cases such as these lend support to the Atkinson and Shiffrin model, in that they seem to point to a clear distinction between LTM and STM.
There does seem to be fairly strong support for a difference between LTM and STM in terms of duration, capacity and effects of brain damage. However, there are problems with the model of Atkinson and Shiffrin. The model is too simple and inflexible and fails to take account of factors such as the strategies people employ to remember things. It also places emphasis on the amount of information that can be processed rather than its nature. Some things are simply easier to remember than others, perhaps because they are more interesting, more distinctive, funnier, or whatever. The multistore model cannot account for this. It is also criticized for focusing on the structure of the memory system at the expense of adequately explaining the processes involved. For example, visual stimuli registering in sensory memory are thought to be changed to an acoustic code for access to STM. In order to translate the pattern of the letter ‘M’ into the sound ’em’, the individual needs to access knowledge about letter shapes and sounds which is stored in LTM. This means that information from LTM must flow backwards through the system to the recoding stage prior to STM. This suggests that the flow of information through the system is interactive rather than strictly sequential as Atkinson and Shiffrin suggested. Their suggestion that rote rehearsal is the only means of transfer from STM into LTM has also been criticized. This criticism will be considered in more detail in the discussion of levels of processing approach. Similarly, another model – the working memory model of Baddeley and Hitch (1974) – casts doubt on the assumption of Atkinson and Shiffrin that STM is a unitary store with a severely limited capacity.
The Working Memory Model
One of the criticisms of the multistore model is that it is too simplistic and assumes that STM and LTM act as unitary stores. It seems much more likely that both memory systems are divided into separate components which have different functions. The first people to explore the notion of a multicomponent, short-term store were Baddeley and Hitch (1974). They conducted a study in which participants were given digit strings to rehearse while, at the same time, carrying out verbal reasoning tasks similar to those in Activity 2. Try Activity 2 before reading any further.
Verbal reasoning task
Read the following set of statements and then decide for each one, as quickly and accurately as you can, whether it is true or false.
1. B is followed by A (BA)
2. A does not follow B (BA)
3. A is not preceded by B (BA)
4. A is not followed by B (BA)
5. B follows A (AB)
6. B is preceded by A (BA)
7. A does not precede B (BA)
8. B is not preceded by A (BA)
9. B is followed by A (AB)
10. A follows B (AB)
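The task above can be checked programmatically. The small evaluator below is a sketch (the function name and parsing rules are mine, not from Baddeley and Hitch): it encodes the four sentence forms used in the activity and tests each statement against its letter pair.

```python
def check(statement, pair):
    """Evaluate a statement such as 'B is followed by A' against a pair like 'BA'.

    'X is followed by Y' / 'Y follows X'   claim pair == X + Y.
    'X is preceded by Y' / 'Y precedes X'  claim pair == Y + X.
    The word 'not' negates the claim.
    """
    words = statement.split()
    x, y = words[0], words[-1]        # subject and object letters
    negated = "not" in words
    if "precede" in statement:        # matches 'precede' and 'preceded'
        if "preceded" in statement:   # 'X is preceded by Y': Y comes first
            claim = pair == y + x
        else:                         # 'X precedes Y': X comes first
            claim = pair == x + y
    else:                             # 'followed' or 'follows'
        if "followed" in statement:   # 'X is followed by Y': X comes first
            claim = pair == x + y
        else:                         # 'X follows Y': Y comes first
            claim = pair == y + x
    return claim != negated           # apply the negation, if any

items = [
    ("B is followed by A", "BA"), ("A does not follow B", "BA"),
    ("A is not preceded by B", "BA"), ("A is not followed by B", "BA"),
    ("B follows A", "AB"), ("B is preceded by A", "BA"),
    ("A does not precede B", "BA"), ("B is not preceded by A", "BA"),
    ("B is followed by A", "AB"), ("A follows B", "AB"),
]
for s, p in items:
    print(f"{s!r} with {p}: {check(s, p)}")
```

Working through the items by hand and comparing against this output is a good way to appreciate how much load the negated and passive forms place on working memory.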
Imagine trying to do these reasoning tasks and simultaneously rehearsing a string of digits – you probably think that this would be very difficult, if not impossible. Baddeley and Hitch reported that their participants were rather alarmed at the prospect of trying to do both tasks at once. In order not to overload the participants, the investigators first gave only two digits to recall. However, they found no detrimental effects on performance at either task and so increased the number of digits to six. Even with six digits to recall (and note that this is very close to normal digit span), there was no effect on accuracy of performance on the two tasks, although there was a very slight slowing on the reasoning task. This finding is not compatible with the Atkinson and Shiffrin view of a unitary short-term store. Instead, it suggests that STM, or working memory as Baddeley and Hitch prefer to call it, consists of several different components which can work independently of one another.
Baddeley and Hitch concluded, on the basis of this and other studies, that STM is a flexible and complex system which consists of a central control mechanism assisted by a number of slave systems. The model has been modified slightly in the light of experimental studies (e.g. Baddeley 1986) and is shown in simple form below.
[Diagram: the central executive, an attentional control system (modality free, limited capacity), with its slave systems]
The central executive is the most important component in the model and is responsible for monitoring and coordinating the operation of the slave systems. It is flexible in that it can process information from any modality and also has some storage capacity, although this is very limited. It seems to play a major role in attention, planning and in synthesizing information, not only from the slave systems but also from LTM.
The phonological loop stores a limited number of sounds for brief periods and can be thought of as an inner ear. It is now thought to be made up of two components (Gathercole and Baddeley 1993). One component is the phonological store, which allows acoustically coded items to be stored for a brief period. The other component is the articulatory control system, which allows subvocal repetition of the items held in the phonological store.
The visuo-spatial scratch pad stores visual and spatial information and can be thought of as an inner eye. Like the phonological loop, it has limited capacity, but the limits of the two systems are independent. In other words, it is possible, for example, to rehearse a set of digits in the phonological loop while simultaneously making decisions about the spatial layout of a set of letters in the visuo-spatial scratchpad. A good example of the operation of working memory is given in Activity 3, below.
Activity 3. Operating your working memory
Baddeley (1997) has suggested that you can get a good feel for the operation of working memory by the following task. Try to work out how many windows there are in your home.
If you are like most people, you will have formed a mental image of your home and counted the windows either by imagining the outside of the house or by walking through the house room by room. The image will be set up and manipulated in your visuo-spatial scratch pad and the tally of windows will be held in the phonological loop as you count them subvocally. The whole operation will be supervised by the central executive, which will allocate the tasks and recognize when the final total has been reached.
Evaluation of the working memory model
The working memory model appears to have a number of advantages over the simplistic formulation of the Atkinson and Shiffrin concept of STM. It effectively accounts for our ability to store information briefly, while, at the same time, actively processing the material. There is a considerable body of empirical research which seems to support the existence of the two slave systems. For example, Baddeley et al. (1975) conducted a series of studies which investigated the word-length effect. They found that memory span for visually presented one-syllable words was significantly greater than for polysyllabic words. This suggested that the phonological loop was only able to hold a limited number of syllables. However, subsequent studies demonstrated that articulation time, rather than number of syllables, was the limiting factor. They compared span performance on two-syllable words such as ‘cricket’ and ‘bishop’, which are spoken quickly, with performance on two-syllable words such as ‘harpoon’ and ‘Friday’, which take longer to say. Recall was consistently better for the words which can be articulated more quickly. If participants are prevented from rehearsing the words subvocally by having to repeat an irrelevant sound such as ‘la-la-la…’ (articulatory suppression), the word-length effect disappears. It is assumed that the articulatory suppression task fills the phonological loop and, therefore, takes away the advantage of rehearsal. Based on the results of these studies, Baddeley and colleagues concluded that memory span is dependent on time rather than the number of items, and that people can remember as much as they are able to say in approximately 1.5 seconds.
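The time-based account of span can be sketched numerically. The per-item articulation times below are illustrative assumptions for the purpose of the example, not measured values from the Baddeley studies.

```python
SPAN_WINDOW = 1.5  # seconds of subvocal speech the phonological loop is assumed to hold

def estimated_span(articulation_time):
    """Span = how many items fit into ~1.5 s of subvocal speech."""
    return int(SPAN_WINDOW / articulation_time)

# Hypothetical per-item articulation times, in seconds
quick_word = 0.25  # e.g. a fast two-syllable word like 'cricket'
slow_word = 0.50   # e.g. a slower two-syllable word like 'harpoon'

print(estimated_span(quick_word))  # 6 items fit in the window
print(estimated_span(slow_word))   # only 3 items fit
```

With the same number of syllables per word, halving the speaking rate halves the estimated span, which is the word-length effect expressed as articulation time rather than item count.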
The visuo-spatial store has not been investigated in the same depth as the phonological store, but there is experimental evidence which supports its existence. For example, Baddeley et al. (1973) gave participants a simple tracking task which involved holding a pointer in contact with a moving spot of light. At the same time, participants were asked to perform an imagery task. Participants were required to imagine the block capital letter ‘F’ and then, starting at the bottom left-hand corner, were to classify each angle as a ‘yes’ if it included the bottom or top line of the letter and as a ‘no’ if it did not. Participants found it very difficult to track the spot of light and accurately classify the angles in the letter imagery task. However, they were perfectly capable of carrying out the tracking task in conjunction with a verbal task. This suggests that the tracking and letter imagery tasks were competing for the limited resources of the visuo-spatial scratchpad, whereas the tracking task and the verbal task were making use of the separate components of the visuo-spatial scratchpad and the phonological loop respectively.
This model has proved influential and is still being developed and expanded. The main weakness, however, is that the component we know least about (the central executive) is the most important. It has a limited capacity, but no one has been able to quantify it experimentally. Richardson (1984) argues that there are problems in specifying the precise functioning of the central executive. He believes that the terminology is vague and can be used to explain any kind of results. In other words, it can give rise to a circular argument, i.e. if we give participants an articulatory suppression task and this affects performance, we assume the phonological loop is normally utilized in the task, but if performance is not affected, we assume the central executive is normally utilized in the task. Hence, it is difficult to falsify the model.
Levels of Processing
The working memory model has been much more effective than the multistore model in explaining the active nature of short-term memory processing. It allows for different types of processing depending on the nature of incoming information, but it does not consider the effects of differential processing on long-term retention of information. An important approach which looked specifically at this aspect was put forward by Craik and Lockhart (1972). They rejected the idea of separate memory structures put forward by Atkinson and Shiffrin and believed, instead, that stimulus inputs go through a variety of processing operations. According to them, processing varies in terms of depth: ‘Trace persistence is a function of depth of analysis, with deeper levels of analysis associated with more elaborate, longer lasting, and stronger traces’. The first stages of processing are shallow and involve recognizing the stimulus in terms of its physical appearance, e.g. the shape of the letters a word is written in. The deepest level of processing involves coding the input in terms of its meaning. Rehearsing material simply by rote repetition, as in the Atkinson and Shiffrin model, is called maintenance rehearsal and is regarded as shallow processing. It is distinguished from elaborative rehearsal, in which links are made to semantic associations. The assumption of the model is that shallow processing will give rise to weak, short-term retention, whereas deep processing will ensure strong, lasting retention. This central assumption has been tested in numerous studies. For example, Hyde and Jenkins (1973) presented lists of 24 words auditorily and asked different groups of participants to perform one of the following so-called orienting tasks:
♦rating the words for pleasantness
♦estimating the frequency with which each word is used in the English language
♦detecting the occurrence of the letters ‘e’ and ‘g’ in any of the words
♦deciding the part of speech appropriate to each word (e.g. noun, adjective)
♦deciding whether the words fitted into a particular sentence frame.
Half the participants were told in advance that they would be expected to recall the words (intentional learning group) and the other half were not (incidental learning group). After testing all the participants for recall of the original word list, Hyde and Jenkins found that there were minimal differences in the number of items correctly recalled between the intentional learning groups and the incidental learning groups. This finding is predicted by Craik and Lockhart because they believe that retention is simply a byproduct of processing and so intention to learn is unnecessary for learning to occur. In addition, it was found that recall was significantly better for words which had been analysed semantically (i.e. rated for pleasantness or for frequency) than words which had been rated more superficially (i.e. detecting ‘e’ and ‘g’). This is also in line with the theory because semantic analysis is assumed to be a deeper level of processing than structural analysis.
Evaluation of levels of processing
The levels of processing approach was influential when it was first formulated, and researchers in the field welcomed its emphasis on mental processes rather than on rigid structures. However, it soon became clear that the model was too simplistic and that it was descriptive rather than explanatory. A major problem is circularity, i.e. there is no independent definition of depth. The model predicts that deep processing will lead to better retention – researchers then conclude that, because retention is better after certain orienting tasks, they must, by definition, involve deep processing. Think back to the Hyde and Jenkins study, for example. The orienting task that gave rise to the lowest level of recall was the sentence frame task. Hyde and Jenkins assumed that the poor recall reflected shallow processing and yet, on the face of it, judgements about sentence frames would appear to require semantic analysis and, thus, deep processing.
Other researchers have questioned the idea that depth of processing alone is responsible for retention. Tyler et al. (1979), for example, gave participants two sets of anagrams to solve. Some were easy like DOCTRO and others were more difficult such as OCDRTO. In a subsequent, unexpected, recall task, participants remembered more of the difficult than the easy anagrams, in spite of processing levels being the same. Tyler and colleagues suggested that retention was influenced by the amount of processing effort rather than depth.
Craik and Lockhart themselves (1986) have since suggested that factors such as elaboration and distinctiveness are also important in determining the rate of retention; this idea has been supported by research. For example, Hunt and Elliott (1980) found that people recalled words with distinctive sequences of tall and short letters better than words with less distinctive arrangements of letters. Palmere et al. (1983) made up a 32-paragraph description of a fictitious African nation. Eight paragraphs consisted of a sentence containing a main idea, followed by three sentences each providing an example of the main theme; eight paragraphs consisted of one main sentence followed by two supplementary sentences; eight paragraphs consisted of one main sentence followed by a single supplementary sentence; and the remaining eight paragraphs consisted of a single main sentence with no supplementary information. Recall of the main ideas varied as a function of the amount of elaboration. Significantly more main ideas were recalled from the elaborated paragraphs than from the single-sentence paragraphs. This kind of evidence suggests that the effects of processing on retention are not as simple as first proposed by the levels of processing model.
Procedural memories are memories for the performance of actions or skills, e.g. how to swim, use a pencil or play the piano.
Declarative Memories (Tulving 1985 proposed two types)
(1) Semantic Memories
Semantic memories are internal representations of the world, independent of any particular context. They include all our general knowledge about the world and language. No specific link to a time or place is stored with this information. Examples are facts, rules, concepts, etc.
For example, on the basis of your semantic knowledge of the concept ‘CAT’ you can describe a cat (what it looks like, how it behaves, what it likes to eat etc) even when a cat is not present. You probably do not know how or where you first learned this information.
Information in semantic memory is thought to be hierarchically organised. This is where information is systematically linked to related or relevant information.
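Hierarchical organisation of this kind can be sketched as a small network in which a concept inherits properties from its parent categories. The fragment below is illustrative (the concepts, properties and dictionary layout are my assumptions, in the spirit of hierarchical network models of semantic memory such as Collins and Quillian's):

```python
# Each concept stores its own properties plus a link to its parent category.
semantic_network = {
    "animal": {"parent": None, "properties": {"breathes", "eats"}},
    "mammal": {"parent": "animal", "properties": {"has fur", "feeds milk"}},
    "cat": {"parent": "mammal", "properties": {"purrs", "chases mice"}},
}

def knows(concept, prop):
    """Check a property by walking up the hierarchy (property inheritance)."""
    while concept is not None:
        node = semantic_network[concept]
        if prop in node["properties"]:
            return True
        concept = node["parent"]
    return False

print(knows("cat", "purrs"))     # True: stored directly with 'cat'
print(knows("cat", "breathes"))  # True: inherited from 'animal'
print(knows("cat", "barks"))     # False: not stored anywhere on the path
```

Storing ‘breathes’ once, at the ‘animal’ level, and letting every subordinate concept inherit it captures the economy that hierarchical models attribute to semantic memory.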
(2) Episodic memories
Episodic memories include all the personal and autobiographical information that is affected by time, context, organisation, and place of occurrence. The context of the information, the character of the information, and the purpose for attending to the information all influence what information is encoded and how it is encoded.
The retrieval of information from episodic memory is usually GENERATIVE or CONSTRUCTIVE. We rely on schemas to be able to recall episodic information.
EVIDENCE FOR A DISTINCTION BETWEEN PROCEDURAL AND DECLARATIVE MEMORIES comes from brain-damaged patients. For example, in 1953 H.M. was 27 years old when surgeons removed most of his hippocampus to relieve severe and life-threatening epilepsy. His memory was affected dramatically. H.M. recalled most events from before the operation; however, he could not remember new experiences for longer than about 15 minutes. These DECLARATIVE memories vanished.
He could not learn new words, new songs, stories, faces; he could not recall his last meal; doctors had to reintroduce themselves on each occasion. He is now about 75 years old but thinks he is much younger because he is stuck in a time warp, no longer recognising a photograph of his own face.
He can, however, acquire new PROCEDURAL memories; for example he has learnt how to play tennis. Apparently the parts of his brain involved in procedural memories have remained intact.
Another case study, of Clive Wearing, also indicates a distinction between procedural and declarative memories. Clive was a famous musician prior to suffering a rare brain infection in 1985. The virus destroyed a large part of his brain and he only retained a ‘moment to moment’ memory. However, some of the procedural memories that he had previously stored were intact. For example, if you were to ask him if he could play the piano (declarative knowledge) he would answer ‘no’, but when he sits down at the piano he can in fact play: his procedural memory is not affected.