Make It Stick: The Science of Successful Learning

7 chapters

Chapter 05

Brief overview

  • Level: very hard

English text of the chapter

Belief in the learning styles credo is pervasive. Assessing students’ learning styles has been recommended at all levels of education, and teachers are urged to offer classroom material in many different ways so that each student can take it in the way he or she is best equipped to learn it. Learning styles theory has taken root in management development, as well as in vocational and professional settings, including the training of military pilots, health care workers, municipal police, and beyond. A report on a 2004 survey conducted for Britain’s Learning and Skills Research Centre compares more than seventy distinct learning styles theories currently being offered in the marketplace, each with its companion assessment instruments to diagnose a person’s particular style. The report’s authors characterize the purveyors of these instruments as an industry bedeviled by vested interests that tout “a bedlam of contradictory claims” and express concerns about the temptation to classify, label, and stereotype individuals. The authors relate an incident at a conference where a student who had completed an assessment instrument reported back: “I learned that I was a low auditory, kinesthetic learner. So there’s no point in me reading a book or listening to anyone for more than a few minutes.”5 The wrongheadedness of this conclusion is manifold. It’s not supported by science, and it instills a corrosive, misguided sense of diminished potential.

Notwithstanding the sheer number and variety of learning styles models, if you narrow the field to those that are most widely accepted you still fail to find a consistent theoretical pattern. An approach called VARK, advocated by Neil Fleming, differentiates people according to whether they prefer to learn through experiences that are primarily visual, auditory, reading, or kinesthetic (i.e., moving, touching, and active exploration). According to Fleming, VARK describes only one aspect of a person’s learning style, which in its entirety consists of eighteen different dimensions, including preferences in temperature, light, food intake, biorhythms, and working with others versus working alone.

Other learning styles theories and materials are based on rather different dimensions. One commonly used inventory, based on the work of Kenneth Dunn and Rita Dunn, assesses six different aspects of an individual’s learning style: environmental, emotional, sociological, perceptual, physiological, and psychological. Still other models assess styles along such dimensions as these:

• Concrete versus abstract styles of perceiving

• Active experimentation versus reflective observation modes of processing

• Random versus sequential styles of organizing

The Honey and Mumford Learning Styles Questionnaire, which is popular in managerial settings, helps employees determine whether their styles are predominantly “activist,” “reflector,” “theorist,” or “pragmatist” and to improve in the areas where they score low so as to become more versatile learners.

The simple fact that different theories embrace such wildly discrepant dimensions gives cause for concern about their scientific underpinnings. While it’s true that most of us have a decided preference for how we like to learn new material, the premise behind learning styles is that we learn better when the mode of presentation matches the particular style in which an individual is best able to learn. That is the critical claim.

In 2008 the cognitive psychologists Harold Pashler, Mark McDaniel, Doug Rohrer, and Bob Bjork were commissioned to conduct a review to determine whether this critical claim is supported by scientific evidence. The team set out to answer two questions. First, what forms of evidence are needed for institutions to justify basing their instructional styles on assessments of students’ or employees’ learning styles? For the results to be credible, the team determined that a study would need to have several attributes. Initially, students must be divided into groups according to their learning styles. Then they must be randomly assigned to different classrooms teaching the same material but offering it through different instructional methods. Afterward, all the students must take the same test. The test must show that students with a particular learning style (e.g., visual learners) did the best when they received instruction in their own learning style (visual) relative to instruction in a different style (auditory); in addition, the other types of learners must be shown to profit more from their style of instruction than another style (auditory learners learning better from auditory than from visual presentation).
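To make the required pattern of results concrete, here is a minimal sketch, in Python, of the “crossover” check the reviewers describe. The group means below are invented purely for illustration; they do not come from any actual study.

```python
# Hypothetical mean test scores for a 2 x 2 learning-styles experiment:
# (learner type, instruction mode) -> mean score. All numbers are made up.
scores = {
    ("visual_learner", "visual_instruction"): 78,
    ("visual_learner", "auditory_instruction"): 71,
    ("auditory_learner", "visual_instruction"): 69,
    ("auditory_learner", "auditory_instruction"): 80,
}

def supports_meshing_claim(scores):
    """True only if each learner type scores best under its matching instruction mode."""
    visual_best_with_visual = (
        scores[("visual_learner", "visual_instruction")]
        > scores[("visual_learner", "auditory_instruction")]
    )
    auditory_best_with_auditory = (
        scores[("auditory_learner", "auditory_instruction")]
        > scores[("auditory_learner", "visual_instruction")]
    )
    return visual_best_with_visual and auditory_best_with_auditory

print(supports_meshing_claim(scores))  # True for these invented means
```

Only this crossover pattern, in which each group profits most from instruction in its own style, would count as evidence for the claim; anything less does not.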

The second question the team asked was whether this kind of evidence existed. The answer was no. They found very few studies designed to be capable of testing the validity of learning styles theory in education, and of those, they found that virtually none validate it and several flatly contradict it. Moreover, their review showed that it is more important that the mode of instruction match the nature of the subject being taught: visual instruction for geometry and geography, verbal instruction for poetry, and so on. When instructional style matches the nature of the content, all learners learn better, regardless of their differing preferences for how the material is taught.

The fact that the evidence is not there to validate learning styles theory doesn’t mean that all theories are wrong. Learning styles theories take many forms. Some may be valid. But if so, we can’t know which: because the number of rigorous studies is extremely small, the research base does not exist to answer the question. On the basis of their findings, Pashler and his colleagues argued that the evidence currently available does not justify the huge investment of time and money that would be needed to assess students and restructure instruction around learning styles. Until such evidence is produced, it makes more sense to emphasize the instructional techniques, like those outlined in this book, that have been validated by research as benefiting learners regardless of their style preferences.6

Successful Intelligence

Intelligence is a learning difference that we do know matters, but what exactly is it? Every human society has a concept that corresponds to the idea of intelligence in our culture. The problem of how to define and measure intelligence in a way that accounts for people’s intellectual horsepower and provides a fair indicator of their potential has been with us for over a hundred years, with psychologists trying to measure this construct since early in the twentieth century. Psychologists today generally accept that individuals possess at least two kinds of intelligence. Fluid intelligence is the ability to reason, see relationships, think abstractly, and hold information in mind while working on a problem; crystallized intelligence is one’s accumulated knowledge of the world and the procedures or mental models one has developed from past learning and experience. Together, these two kinds of intelligence enable us to learn, reason, and solve problems.7 Traditionally, IQ tests have been used to measure individuals’ logical and verbal potential. These tests assign an Intelligence Quotient, which denotes the ratio of mental age to physical age, times 100. That is, an eight-year-old who can solve problems on a test that most ten-year-olds can solve has an IQ of 125 (10 divided by 8, times 100). It used to be thought that IQ was fixed from birth, but traditional notions of intellectual capacity are being challenged.
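Written out as a formula (nothing here beyond the arithmetic already given above), the ratio IQ is:

```latex
\mathrm{IQ} \;=\; \frac{\text{mental age}}{\text{physical age}} \times 100
\qquad\text{e.g.}\qquad \frac{10}{8}\times 100 \;=\; 125
```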

One countervailing idea, put forward by the psychologist Howard Gardner to account for the broad variety in people’s abilities, is the hypothesis that humans have as many as eight different kinds of intelligence:

Logical-mathematical intelligence: ability to think critically, work with numbers and abstractions, and the like;

Spatial intelligence: three-dimensional judgment and the ability to visualize with the mind’s eye;

Linguistic intelligence: ability to work with words and languages;

Kinesthetic intelligence: physical dexterity and control of one’s body;

Musical intelligence: sensitivity to sounds, rhythms, tones, and music;

Interpersonal intelligence: ability to “read” other people and work with them effectively;

Intrapersonal intelligence: ability to understand one’s self and make accurate judgments of one’s knowledge, abilities, and effectiveness;

Naturalistic intelligence: the ability to discriminate and relate to one’s natural surroundings (for example, the kinds of intelligence invoked by a gardener, hunter, or chef).

Gardner’s ideas are attractive for many reasons, not the least because they attempt to explain human differences that we can observe but cannot account for with modern, Western definitions of intelligence with their focus on language and logic abilities. As with learning styles theory, the multiple intelligences model has helped educators to diversify the kinds of learning experiences they offer. Unlike learning styles, which can have the perverse effect of causing individuals to perceive their learning abilities as limited, multiple intelligences theory elevates the sheer variety of tools in our native toolkit. What both theories lack is an underpinning of empirical validation, a problem Gardner himself recognizes, acknowledging that determining one’s particular mix of intelligences is more an art than a science.

While Gardner helpfully expands our notion of intelligence, the psychologist Robert J. Sternberg helpfully distills it again. Rather than eight intelligences, Sternberg’s model proposes three: analytical, creative, and practical. Further, unlike Gardner’s theory, Sternberg’s is supported by empirical research.9

One of Sternberg’s studies of particular interest to the question of how we measure intelligence was carried out in rural Kenya, where he and his associates looked at children’s informal knowledge of herbal medicines. Regular use of these medicines is an important part of Kenyans’ daily lives. This knowledge is not taught in schools or assessed by tests, but children who can identify the herbs and who know their appropriate uses and dosages are better adapted to succeed in their environment than children without that knowledge. The children who performed best on tests of this indigenous informal knowledge did worst relative to their peers on tests of the formal academic subjects taught in school and, in Sternberg’s words, appeared to be “stupid” by the metric of the formal tests. How to reconcile the discrepancy? Sternberg suggests that the children who excelled at indigenous knowledge came from families who valued such practical knowledge more highly than the families of the children who excelled at the academics taught in school. Children whose environments prized one kind of learning over another (practical over academic, in the case of the families who taught their children about herbs) were at a lower level of knowledge in the academic areas not emphasized by their environment. Other families placed more value on the analytic (school-based) information and less on the practical herbal knowledge.

There are two important ideas here. First, traditional measures of intelligence failed to account for environmental differences; there is no reason to suspect that kids who excelled at informal, indigenous knowledge can’t catch up to or even surpass their peers in academic learning when given the appropriate opportunities. Second, for the kids whose environments emphasized indigenous knowledge, the mastery of academics is still developing. In Sternberg’s view, we’re all in a state of developing expertise, and any test that measures only what we know at any given moment is a static measure that tells us nothing about our potential in the realm the test measures.

Two other quick stories Sternberg cites are useful here. One is a series of studies of orphaned children in Brazil who must learn to start and run street businesses if they are to survive. Motivation is high; if they turn to theft as a means to sustain themselves, they risk running afoul of the death squads. These children, who are doing the math required in order to run successful businesses, cannot do the same math when the problems are presented in an abstract, paper-and-pencil format. Sternberg argues that this result makes sense when viewed from the standpoint of developing expertise: the children live in an environment that emphasizes practical skills, not academic, and it’s the practical exigencies that determine the substance and form of the learning.10

The other story is about seasoned, expert handicappers at horse tracks who devise highly complex mental models for betting on horses but who measure only average on standard IQ tests. Their handicapping models were tested against those devised by less expert handicappers with equivalent IQs. Handicapping requires comparing horses against a long list of variables for each horse, such as its lifetime earnings, its lifetime speed, the races where it came in the money, the ability of its jockey in the current race, and a dozen characteristics of each of its prior races. Just to predict the speed with which a horse would run the final quarter mile, the experts relied on a complex mental model involving as many as seven variables. The study found that IQ is unrelated to handicapping ability, and “whatever it is that an IQ test measures, it is not the ability to engage in cognitively complex forms of multivariate reasoning.”11

Into this void Robert Sternberg has introduced his three-part theory of successful intelligence. Analytical intelligence is our ability to complete problem-solving tasks such as those typically contained in tests; creative intelligence is our ability to synthesize and apply existing knowledge and skills to deal with new and unusual situations; practical intelligence is our ability to adapt to everyday life—to understand what needs to be done in a specific setting and then do it; what we call street smarts. Different cultures and learning situations draw on these intelligences differently, and much of what’s required to succeed in a particular situation is not measured by standard IQ or aptitude tests, which can miss critical competencies.

Dynamic Testing

Robert Sternberg and Elena Grigorenko have proposed the idea of using testing to assess ability in a dynamic manner. Sternberg’s concept of developing expertise holds that with continued experience in a field we are always moving from a lower state of competence to a higher one. His concept also holds that standardized tests can’t accurately rate our potential because what they reveal is limited to a static report of where we are on the learning continuum at the time the test is given. In tandem with Sternberg’s three-part model of intelligence, he and Grigorenko have proposed a shift away from static tests and replacing them with what they call dynamic testing: determining the state of one’s expertise; refocusing learning on areas of low performance; follow-up testing to measure the improvement and to refocus learning so as to keep raising expertise. Thus, a test may assess a weakness, but rather than assuming that the weakness indicates a fixed inability, you interpret it as a lack of skill or knowledge that can be remedied. Dynamic testing has two advantages over standard testing. It focuses the learner and teacher on areas that need to be brought up rather than on areas of accomplishment, and the ability to measure a learner’s progress from one test to the next provides a truer gauge of his or her learning potential.

Dynamic testing does not assume one must adapt to some kind of fixed learning limitation but offers an assessment of where one’s knowledge or performance stands on some dimension and how one needs to move forward to succeed: what do I need to learn in order to improve? That is, where aptitude tests and much of learning styles theory tend to emphasize our strengths and encourage us to focus on them, dynamic testing helps us to discover our weaknesses and correct them. In the school of life experience, setbacks show us where we need to do better. We can steer clear of similar challenges in the future, or we can redouble our efforts to master them, broadening our capacities and expertise. Bruce Hendry’s experiences investing in rental property and in the stock market dealt him setbacks, and the lessons he took away were essential elements of his education: to be skeptical when somebody’s trying to sell him something, to figure out the right questions, and to learn how to go dig out the answers. That’s developing expertise.

Dynamic testing has three steps.

Step 1: a test of some kind—perhaps an experience or a paper exam—shows me where I come up short in knowledge or a skill.

Step 2: I dedicate myself to becoming more competent, using reflection, practice, spacing, and the other techniques of effective learning.

Step 3: I test myself again, paying attention to what works better now but also, and especially, to where I still need more work.
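As a loose illustration only, the three steps above can be read as a loop: test, practice the weak spots, retest. The sketch below uses made-up skill names and scores; none of these details come from Sternberg and Grigorenko.

```python
# A hypothetical sketch of the dynamic-testing cycle: test -> focused practice -> retest.
# Skill names and proficiency scores are invented for illustration.
skills = {"retrieval": 0.9, "spacing": 0.5, "interleaving": 0.6}

def assess(skill):
    """A test of some kind shows where I stand on this skill (Steps 1 and 3)."""
    return skills[skill]

def practice(skill):
    """Dedicate effort to the weak area with reflection, practice, spacing (Step 2)."""
    skills[skill] = min(1.0, skills[skill] + 0.2)

for round_number in range(3):
    # Each pass re-tests everything and narrows attention to what still needs work.
    weak_areas = [s for s in skills if assess(s) < 0.8]
    for skill in weak_areas:
        practice(skill)

print(skills)  # later rounds show improvement and the remaining weak spots
```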

When we take our first steps as toddlers, we are engaging in dynamic testing. When you write your first short story, put it in front of your writers’ group for feedback, and then revise and bring it back, you’re engaging in dynamic testing, learning the writer’s craft and getting a sense of your potential. The upper limits of your performance in any cognitive or manual skill may be set by factors beyond your control, such as your intelligence and the natural limits of your ability, but most of us can learn to perform nearer to our full potential in most areas by discovering our weaknesses and working to bring them up.12

Structure Building

There do appear to be cognitive differences in how we learn, though not the ones recommended by advocates of learning styles. One of these differences is the idea mentioned earlier that psychologists call structure building: the act, as we encounter new material, of extracting the salient ideas and constructing a coherent mental framework out of them. These frameworks are sometimes called mental models or mental maps. High structure-builders learn new material better than low structure-builders. The latter have difficulty setting aside irrelevant or competing information, and as a result they tend to hang on to too many concepts to be condensed into a workable model (or overall structure) that can serve as a foundation for further learning.

The theory of structure building bears some resemblance to a village built of Lego blocks. Suppose you’re taking a survey course in a new subject. You start with a textbook full of ideas, and you set out to build a coherent mental model of the knowledge they contain. In our Lego analogy, you start with a box full of Lego pieces, and you set out to build the town that’s pictured on the box cover. You dump out the pieces and sort them into a handful of piles. First you lay out the streets and sidewalks that define the perimeter of the city and the distinct places within it. Then you sort the remaining pieces according to the elements they compose: apartment complex, school, hospital, stadium, mall, fire station. Each of these elements is like a central idea in the textbook, and each takes more shape and nuance as added pieces snap into place. Together, these central ideas form the larger structure of the village.

Now suppose that your brother has used this Lego set before and dumped some pieces into the box from another set. As you find pieces, some might not fit with your building blocks, and you can put them aside as extraneous. Or you may discover that some of the new pieces can be used to form a substructure of an existing building block, giving it more depth and definition (porches, patios, and back decks as substructures of apartments; streetlights, hydrants, and boulevard trees as substructures of streets). You happily add these pieces to your village, even though the original designers of the set had not planned on this sort of thing. High structure-builders develop the skill to identify foundational concepts and their key building blocks and to sort new information based on whether it adds to the larger structure and one’s knowledge or is extraneous and can be put aside. By contrast, low structure-builders struggle in figuring out and sticking with an overarching structure and knowing what information needs to fit into it and what ought to be discarded. Structure building is a form of conscious and subconscious discipline: stuff fits or it doesn’t; it adds nuance, capacity and meaning, or it obscures and overfreights.

A simpler analogy might be a friend who wants to tell you a rare story about this four-year-old boy she knows: she mentions who the mother is, how they became friends in their book club, finally mentioning that the mother, by coincidence, had a large load of manure delivered for her garden on the morning of the boy’s birthday—the mother’s an incredible gardener, her eggplants took a ribbon at the county fair and got her an interview on morning radio, and she gets her manure from that widowed guy in your church who raises the Clydesdale horses and whose son is married to—and so on and so on. Your friend cannot winnow the main ideas from the blizzard of irrelevant associations, and the story is lost on the listener. Story, too, is structure.

Our understanding of structure building as a cognitive difference in learning is still in the early stages: is low structure-building the result of a faulty cognitive mechanism, or is structure-building a skill that some pick up naturally and others must be taught? We know that when questions are embedded in texts to help focus readers on the main ideas, the learning performance of low structure-builders improves to a level commensurate with high structure-builders. The embedded questions promote a more coherent representation of the text than low-structure readers can build on their own, thus bringing them up toward the level achieved by the high structure-builders.

What’s happening in this situation remains an open question for now, but the implication for learners seems to reinforce a notion offered earlier by the neurosurgeon Mike Ebersold and the pediatric neurologist Doug Larsen: that cultivating the habit of reflecting on one’s experiences, of making them into a story, strengthens learning. The theory of structure building may provide a clue as to why: that reflecting on what went right, what went wrong, and how might I do it differently next time helps me isolate key ideas, organize them into mental models, and apply them again in the future with an eye to improving and building on what I’ve learned.13

Rule versus Example Learning

Another cognitive difference that appears to matter is whether you are a “rule learner” or “example learner,” and the distinction is somewhat akin to the one we just discussed. When studying different kinds of problems in a chemistry class, or specimens in a course on birds and how to identify them, rule learners tend to abstract the underlying principles or “rules” that differentiate the examples being studied. Later, when they encounter a new chemistry problem or bird specimen, they apply the rules as a means to classify it and select the appropriate solution or specimen box. Example learners tend to memorize the examples rather than the underlying principles. When they encounter an unfamiliar case, they lack a grasp of the rules needed to classify or solve it, so they generalize from the nearest example they can remember, even if it is not particularly relevant to the new case. However, example learners may improve at extracting underlying rules when they are asked to compare two different examples rather than focus on studying one example at a time. Likewise, they are more likely to discover the common solution to disparate problems if they first have to compare the problems and try to figure out the underlying similarities.

By way of an illustration, consider two different hypothetical problems faced by a learner. These are taken from research into rule learning. In one problem, a general’s forces are set to attack a castle that is protected by a moat. Spies have learned that the bridges over the moat have been mined by the castle’s commander. The mines are set to allow small groups to cross the bridges, so that the occupants of the castle can retrieve food and fuel. How can the general get a large force over the bridges to attack the castle without tripping the mines?

The other problem involves an inoperable tumor, which can be destroyed by focused radiation. However, the radiation must also pass through healthy tissue. A beam of sufficient intensity to destroy the tumor will damage the healthy tissue through which it passes. How can the tumor be destroyed without damaging healthy tissue?

In the studies, students have difficulty finding the solution to either of these problems unless they are instructed to look for similarities between them. When seeking similarities, many students notice that (1) both problems require a large force to be directed at a target, (2) the full force cannot be massed and delivered through a single route without an adverse outcome, and (3) smaller forces can be delivered to the target, but a small force is insufficient to solve the problem. By identifying these similarities, students often arrive at a strategy of dividing the larger force into smaller forces and sending these in through different routes to converge on the target and destroy it without setting off mines or damaging healthy tissue. Here’s the payoff: after figuring out this common, underlying solution, students are then able to go on to solve a variety of different convergence problems.

As with high and low structure-builders, our understanding of rule versus example learners is very preliminary. However, we know that high structure-builders and rule learners are more successful in transferring their learning to unfamiliar situations than are low structure-builders and example learners. You might wonder if the tendency to be a high structure-builder is correlated with the tendency to be a rule learner. Unfortunately, research is not yet available to answer this question.

You can see the development of structure-building and rule-learning skills in a child’s ability to tell a joke. A three-year-old probably cannot deliver a knock-knock joke, because he lacks an understanding of structure. You reply “Who’s there?” and he jumps to the punch line: “Door is locked, I can’t get in!” He doesn’t understand the importance, after “Who’s there?”, of replying “Doris” to set up the joke. But by the time he’s five, he has become a knock-knock virtuoso: he has memorized the structure. Nonetheless, at five he’s not yet adept at other kinds of jokes because he hasn’t yet learned the essential element that makes jokes work, which, of course, is the “rule” that a punch line of any kind needs a setup, explicit or implied.15

If you consider Bruce Hendry’s early lesson in the high value of a suitcase full of scarce fireworks, you can see how, when he looks at boxcars many years later, he’s working with the same supply-and-demand building block, but within a much more complex model that employs other blocks of knowledge that he has constructed over the years to address concepts of credit risk, business cycles, and the processes of bankruptcy. Why are boxcars in surplus? Because tax incentives to investors had encouraged too much money to flow into their production. What’s a boxcar worth? They cost $42,000 each to build and were in like-new condition, as they had been some of the last ones built. He researched the lifespan of a boxcar and its scrap value and looked at the lease contracts. Even if all his cars stood idle, the lease payments would pay a pretty yield on his investment while the glut worked through the system and the market turned around.

Had we been there, we would have bought boxcars, too. Or so we’d like to think. But it’s not like filling a satchel with fireworks, even if the underlying principle of supply and demand is the same. You had to buy the boxcars right, and understand the way to go about it. What in lay terms we call knowhow. Knowledge is not knowhow until you understand the underlying principles at work and can fit them together into a structure larger than the sum of its parts. Knowhow is learning that enables you to go do.

The Takeaway

Given what we know about learning differences, what’s the takeaway?

Be the one in charge. There’s an old truism from sales school that says you can’t shoot a deer from the lodge. The same goes for learning: you have to suit up, get out the door, and find what you’re after. Mastery, especially of complex ideas, skills, and processes, is a quest. It is not a grade on a test, something bestowed by a coach, or a quality that simply seeps into your being with old age and gray hair.

Embrace the notion of successful intelligence. Go wide: don’t roost in a pigeonhole of your preferred learning style but take command of your resources and tap all of your “intelligences” to master the knowledge or skill you want to possess. Describe what you want to know, do, or accomplish. Then list the competencies required, what you need to learn, and where you can find the knowledge or skill. Then go get it.

Consider your expertise to be in a state of continuing development, practice dynamic testing as a learning strategy to discover your weaknesses, and focus on improving yourself in those areas. It’s smart to build on your strengths, but you will become ever more competent and versatile if you also use testing and trial and error to continue to improve in the areas where your knowledge or performance is not pulling its weight.

Adopt active learning strategies like retrieval practice, spacing, and interleaving. Be aggressive. Like those with dyslexia who have become high achievers, develop workarounds or compensating skills for impediments or holes in your aptitudes.

Don’t rely on what feels best: like a good pilot checking his instruments, use quizzing, peer review, and the other tools described in Chapter 5 to make sure your judgment of what you know and can do is accurate, and that your strategies are moving you toward your goals.

Don’t assume that you’re doing something wrong if the learning feels hard. Remember that difficulties you can overcome with greater cognitive effort will more than repay you in the depth and durability of your learning.

Distill the underlying principles; build the structure. If you’re an example learner, study examples two at a time or more, rather than one by one, asking yourself in what ways they are alike and different. Are the differences such that they require different solutions, or are the similarities such that they respond to a common solution?

Break your idea or desired competency down into its component parts. If you think you are a low structure-builder or an example learner trying to learn new material, pause periodically and ask what the central ideas are, what the rules are. Describe each idea and recall the related points. Which are the big ideas, and which are supporting concepts or nuances? If you were to test yourself on the main ideas, how would you describe them?

What kind of scaffold or framework can you imagine that holds these central ideas together? If we borrowed the winding stair metaphor as a structure for Bruce Hendry’s investment model, it might work something like this. Spiral stairs have three parts: a center post, treads, and risers. Let’s say the center post is the thing that connects us from where we are (down here) to where we want to be (up there): it’s the investment opportunity. Each tread is an element of the deal that protects us from losing money and dropping back, and each riser is an element that lifts us up a notch. Treads and risers must both be present for the stairs to function and for a deal to be attractive. Knowing the scrap value of boxcars is a tread—Bruce knows he won’t get less than that for his investment. Another tread is the guaranteed lease income while his capital is tied up. What are some risers? Impending scarcity, which will raise values. The like-new condition of the cars, which is latent value. A deal that doesn’t have treads and risers will not protect the downside or reliably deliver the upside.

Structure is all around us and available to us through the poet’s medium of metaphor. A tree, with its roots, trunk, and branches. A river. A village, encompassing streets and blocks, houses and stores and offices. The structure of the village explains how these elements are interconnected so that the village has a life and a significance that would not exist if these elements were scattered randomly across an empty landscape.

By abstracting the underlying rules and piecing them into a structure, you go for more than knowledge. You go for knowhow. And that kind of mastery will put you ahead.

7 Increase Your Abilities

IN A FAMOUS study from the 1970s, a researcher showed nursery school children one at a time into a room with no distractions except for a marshmallow resting on a tray on a desk. As the researcher left the room, the child was told he could eat the marshmallow now, or, if he waited for fifteen minutes, he would be rewarded with a second marshmallow.

Walter Mischel and his graduate students observed through a mirror as the children faced their dilemma. Some popped the marshmallow into their mouths the moment the researcher left, but others were able to wait. To help themselves hold back, these kids tried anything they could think of. They were observed to “cover their eyes with their hands or turn around so that they can’t see the tray, start kicking the desk, or tug on their pigtails, or stroke the marshmallow as if it were a tiny stuffed animal,” the researchers wrote.

Of more than six hundred children who took part in the experiment, only one-third succeeded in resisting temptation long enough to get the second marshmallow.

A series of follow-up studies, the most recent in 2011, found that the nursery school children who had been more successful in delaying gratification in this exercise grew up to be more successful in school and in their careers.

The marshmallow study is sublime in its simplicity and as a metaphor for life. We are born with the gift of our genes, but to a surprising degree our success is also determined by focus and self-discipline, which are the offspring of motivation and one’s sense of personal empowerment.1

Consider James Paterson, a spirited, thirty-something Welshman, and his unwitting seduction by the power of mnemonic devices and the world of memory competitions. The word “mnemonic” is from the Greek word for memory. Mnemonic devices are mental tools that can take many forms but generally are used to help hold a large volume of new material in memory, cued for ready recall.

James first learned of mnemonics when one of his university instructors fleetingly mentioned their utility during a lecture. He went straight home, searched the web, bought a book. If he could learn these techniques, he figured, he could memorize his classwork in short order and have a lot more time to hang out with friends. He started practicing memorizing things: names and dates for his psychology classes and the textbook page numbers where they were cited. He also practiced parlor tricks, like memorizing the sequence of playing cards in a shuffled deck or strings of random numbers read from lists made up by friends. He spent long hours honing his techniques, becoming adept and the life of the party among his social set. The year was 2006, and when he learned of a memory competition to be held in Cambridge, England, he decided on a lark to enter it. There he surprised himself by taking first place in the beginner category, a performance for which he pocketed a cool 1,000 euros. He was hooked. Figuring he had nothing to lose by taking a flyer, he went on to compete in his first World Memory Championships, in London, that same year.

With mnemonics James had figured to pocket some easy facts to ace his exams without spending the time and effort to fully master the material, but he discovered something entirely different, as we will recount shortly.

Memory athletes, as these competitors call themselves, all get their start in different ways. Nelson Dellis, the 2012 US Memory Champion, began after his grandmother died of Alzheimer’s disease. Nelson watched her decline over time, with her ability to remember being the first cognitive faculty to go. Although only in his twenties, Nelson wondered if he were destined for the same fate and what he could do about it. He discovered mind sports, hoping that if he could develop his memory to great capacity, then he might have reserves if the disease did strike him later in life. Nelson is another memory athlete on his way up, and he has started a foundation, Climb for Memory, to raise awareness of and funds for research into this terrible disease. Nelson also climbs mountains (twice reaching near the summit of Mt. Everest), hence the name. We meet others in this chapter who, like Paterson and Dellis, have sought successfully to raise their cognitive abilities in one way or another.

The brain is remarkably plastic, to use the term applied in neuroscience, even into old age for most people. In this chapter’s discussion of raising intellectual abilities, we review some of the questions science is trying to answer about the brain’s ability to change itself throughout life and people’s ability to influence those changes and to raise their IQs. We then describe three known cognitive strategies for getting more out of the mental horsepower you’ve already got.

In a sense the infant brain is like the infant nation. When John Fremont arrived with his expeditionary force at Pueblo de Los Angeles in 1846 in the US campaign to take western territory from Mexico, he had no way to report his progress to President James Polk in Washington except to send his scout, Kit Carson, across the continent on his mule—a round-trip of nearly six thousand miles over mountains, deserts, wilderness and prairies. Fremont pressed Carson to whip himself into a lather, not even to stop to shoot game along the way but to sustain himself by eating the mules as they broke down and needed replacing. That such a journey would be required reveals the undeveloped state of the country. The five-foot-four-inch, 140-pound Carson was the best we had for getting word from one coast to the other. Despite the continent’s boundless natural assets, the fledgling nation had little in the way of capability. To become mighty, it would need cities, universities, factories, farms and seaports, and the roads, trains, and telegraph lines to connect them.2

It’s the same with a brain. We come into the world endowed with the raw material of our genes, but we become capable through the learning and development of mental models and neural pathways that enable us to reason, solve, and create. We have been raised to think that the brain is hardwired and our intellectual potential is more or less set from birth. We now know otherwise. Average IQs have risen over the past century with changes in living conditions. When people suffer brain damage from strokes or accidents, scientists have seen the brain somehow reassign duties so that adjacent networks of neurons take over the work of damaged areas, enabling people to regain lost capacities. Competitions between “memory athletes” like James Paterson and Nelson Dellis have emerged as an international sport among people who have trained themselves to perform astonishing acts of recall. Expert performance in medicine, science, music, chess, or sports has been shown to be the product not just of innate gifts, as had long been thought, but of skills laid down layer by layer, through thousands of hours of dedicated practice. In short, research and the modern record have shown that we and our brains are capable of much greater feats than scientists would have thought possible even a few decades ago.

Neuroplasticity

All knowledge and memory are physiological phenomena, held in our neurons and neural pathways. The idea that the brain is not hardwired but plastic, mutable, something that reorganizes itself with each new task, is a recent revelation, and we are just at the frontiers of understanding what it means and how it works.

In a helpful review of the neuroscience, John T. Bruer took on this question as it relates to the initial development and stabilization of the brain’s circuitry and our ability to bolster the intellectual ability of our children through early stimulation. We’re born with about 100 billion nerve cells, called neurons. A synapse is a connection between neurons, enabling them to pass signals. For a period shortly before and after birth, we undergo “an exuberant burst of synapse formation,” in which the brain wires itself: the neurons sprout microscopic branches, called axons, that reach out in search of tiny nubs on other neurons, called dendrites. When axon meets dendrite, a synapse is formed. In order for some axons to find their target dendrites they must travel vast distances to complete the connections that make up our neural circuitry (a journey of such daunting scale and precision that Bruer likens it to finding one’s way clear across the United States to a waiting partner on the opposite coast, not unlike Kit Carson’s mission to President Polk for General Fremont). It’s this circuitry that enables our senses, cognition, and motor skills, including learning and memory, and it is this circuitry that forms the possibilities and the limits of one’s intellectual capacity.

The number of synapses peaks at the age of one or two, at about 50 percent higher than the average number we possess as adults. A plateau period follows that lasts until around puberty, whereupon this overabundance begins to decline as the brain goes through a period of synaptic pruning. We arrive at our adult complement at around age sixteen with a staggering number, thought to total about 150 trillion connections.

We don’t know why the infant brain produces an overabundance of connections or how it subsequently determines which ones to prune. Some neuroscientists believe that the connections we don’t use are the ones that fade and die away, a notion that would seem to manifest the “use it or lose it” principle and argue for the early stimulation of as many connections as possible in hopes of retaining them for life. Another theory suggests the burgeoning and winnowing is determined by genetics and we have little or no influence over which synapses survive and which do not.

“While children’s brains acquire a tremendous amount of information during the early years,” the neuroscientist Patricia Goldman-Rakic told the Education Commission of the States, most learning is acquired after synaptic formation stabilizes. “From the time a child enters first grade, through high school, college, and beyond, there is little change in the number of synapses. It is during the time when no, or little, synapse formation occurs that most learning takes place” and we develop adult-level skills in language, mathematics, and logic.3 And it is likely during this period more than during infancy, in the view of the neuroscientist Harry T. Chugani, that experience and environmental stimulation fine-tune one’s circuits and make one’s neuronal architecture unique.4 In a 2011 article, a team of British academics in the fields of psychology and sociology reviewed the evidence from neuroscience and concluded that the architecture and gross structure of the brain appear to be substantially determined by genes but that the fine structure of neural networks appears to be shaped by experience and to be capable of substantial modification.5

That the brain is mutable has become evident on many fronts. Norman Doidge, in his book The Brain That Changes Itself, looks at compelling cases of patients who have overcome severe impairments with the assistance of neurologists whose research and practice are advancing the frontiers of our understanding of neuroplasticity.

One of these was Paul Bach-y-Rita, who pioneered a device to help patients who have suffered damage to sensory organs. Bach-y-Rita’s device enables them to regain lost skills by teaching the brain to respond to stimulation of other parts of their bodies, substituting one sensory system for another, much as a blind person can learn to navigate through echolocation, learning to “see” her surroundings by interpreting the differing sounds from the tap of a cane, or can learn to read through the sense of touch using Braille.6 One of Bach-y-Rita’s patients had suffered damage to her vestibular system (how the inner ear senses balance and spatial orientation) that had left her so unbalanced that she was unable to stand, walk, or maintain her independence. Bach-y-Rita rigged a helmet with carpenters’ levels attached to it and wired them to send impulses to a postage-stamp-sized strip of tape containing 144 microelectrodes placed on the woman’s tongue. As she tilted her head, the electrodes sparkled on her tongue like effervescence, but in distinctive patterns reflecting the direction and angle of her head movements. Through practice wearing the device, the woman was gradually able to retrain her brain and vestibular system, recovering her sense of balance for longer and longer periods following the training sessions.

Another patient, a thirty-five-year-old man who had lost his sight at age thirteen, was outfitted with a small video camera mounted on a helmet and enabled to send pulses to the tongue. As Bach-y-Rita explained, the eyes are not what sees, the brain is. The eyes sense, and the brain interprets. The success of this device relies on the brain learning to interpret signals from the tongue as sight. The remarkable results were reported in the New York Times: The patient “found doorways, caught balls rolling toward him, and with his small daughter played a game of rock, paper and scissors for the first time in twenty years. [He] said that, with practice, the substituted sense gets better, ‘as if the brain were rewiring itself.’ ”7 In yet another application, interesting in light of our earlier discussions of metacognition, stimulators are being attached to the chests of pilots to transmit cockpit instrument readings, helping the brain to sense changes in pitch and altitude that the pilot’s vestibular system is unable to detect under certain flight conditions.

Neural cell bodies make up most of the part of our brains that scientists call the gray matter. What they call the white matter is made up of the wiring: the axons that connect to dendrites of other neural cell bodies, and the waxy myelin sheaths in which some axons are wrapped, like the plastic coating on a lamp cord. Both gray matter and white matter are the subject of intense scientific study, as we try to understand how the components that shape cognition and motor skills work and how they change through our lives, research that has been greatly advanced by recent leaps in brain imaging technology.

One ambitious effort is the Human Connectome Project, funded by the National Institutes of Health, to map the connections in the human brain. (The word “connectome” refers to the architecture of the human neurocircuitry in the same spirit that “genome” was coined for the map of the human genetic code.) The websites of participating research institutions show striking images of the fiber architecture of the brain, masses of wire-like human axons presented in neon colors to denote signal directions and bearing an uncanny resemblance to the massive wiring harnesses inside 1970s supercomputers. Early research findings are intriguing. One study, at the University of California, Los Angeles, compared the synaptic architecture of identical twins, whose genes are alike, and fraternal twins, who share only some genes. This study showed what others have suggested, that the speed of our mental abilities is determined by the robustness of our neural connections; that this robustness, at the initial stages, is largely determined by our genes, but that our neural circuitry does not mature as early as our physical development and instead continues to change and grow through our forties, fifties, and sixties. Part of the maturation of these connections is the gradual thickening of the myelin coating of the axons. Myelination generally starts at the backs of our brains and moves toward the front, reaching the frontal lobes as we grow into adulthood. The frontal lobes perform the executive functions of the brain and are the location of the processes of high-level reasoning and judgment, skills that are developed through experience.

The thickness of the myelin coating correlates with ability, and research strongly suggests that increased practice builds greater myelin along the related pathways, improving the strength and speed of the electrical signals and, as a result, performance. Increases in piano practice, for example, have shown correlated increases in the myelination of nerve fibers associated with finger movements and the cognitive processes that are involved in making music, changes that do not appear in nonmusicians.8

The study of habit formation provides an interesting view into neuroplasticity. The neural circuits we use when we take conscious action toward a goal are not the same ones we use when our actions have become automatic, the result of habit. The actions we take by habit are directed from a region located deeper in the brain, the basal ganglia. When we engage in extended training and repetition of some kinds of learning, notably motor skills and sequential tasks, our learning is thought to be recoded in this deeper region, the same area that controls subconscious actions such as eye movements. As a part of this process of recoding, the brain is thought to chunk motor and cognitive action sequences together so that they can be performed as a single unit, that is, without requiring a series of conscious decisions, which would substantially slow our responses. These sequences become reflexive. That is, they may start as actions we teach ourselves to take in pursuit of a goal, but they become automatic responses to stimuli. Some researchers have used the word “macro” (a computing term for a stored sequence of commands executed as one) to describe how this chunking functions as a form of highly efficient, consolidated learning. These theories about chunking as integral to the process of habit formation help explain the way in sports we develop the ability to respond to the rapid-fire unfolding of events faster than we’re able to think them through, the way a musician’s finger movements can outpace his conscious thoughts, or the way a chess player can learn to foresee the countless possible moves and implications presented by different configurations of the board. Most of us display the same talent when we type.

Another fundamental sign of the brain’s enduring mutability is the discovery that the hippocampus, where we consolidate learning and memory, is able to generate new neurons throughout life. This phenomenon, called neurogenesis, is thought to play a central role in the brain’s ability to recover from physical injury and in humans’ lifelong ability to learn. The relationship of neurogenesis to learning and memory is a new field of inquiry, but already scientists have shown that the activity of associative learning (that is, of learning and remembering the relationship between unrelated items, such as names and faces) stimulates an increase in the creation of new neurons in the hippocampus. This rise in neurogenesis starts before the new learning activity is undertaken, suggesting the brain’s intention to learn, and continues for a period after the learning activity, suggesting that neurogenesis plays a role in the consolidation of memory and the beneficial effects that spaced and effortful retrieval practice have on long-term retention.9

Of course, learning and memory are neural processes. The fact that retrieval practice, spacing, rehearsal, rule learning, and the construction of mental models improve learning and memory is evidence of neuroplasticity and is consistent with scientists’ understanding of memory consolidation as an agent for increasing and strengthening the neural pathways by which one is later able to retrieve and apply learning. In the words of Ann and Richard Barnet, human intellectual development is “a lifelong dialogue between inherited tendencies and our life history.”10 The nature of that dialogue is the central question we explore in the rest of this chapter.

Is IQ Mutable?

IQ is a product of genes and environment. Compare it to height: it’s mostly inherited, but over the decades as nutrition has improved, subsequent generations have grown taller. Likewise, IQs in every industrialized part of the world have shown a sustained rise since the start of standardized sampling in 1932, a phenomenon called the Flynn effect after the political scientist who first brought it to wide attention.11 In the United States, the average IQ has risen eighteen points in the last sixty years. For any given age group, an IQ of 100 is the mean score of those taking the IQ tests, so the increase means that an IQ of 100 today is the equivalent of an IQ of 118 sixty years ago. It’s the mean that has risen, and there are several theories why this is so, the principal one being that schools, culture (e.g., television), and nutrition have changed substantially in ways that affect people’s verbal and math abilities as measured by the subtests that make up the IQ test.

Richard Nisbett, in his book Intelligence and How to Get It, discusses the pervasiveness of stimuli in modern society that didn’t exist years ago, offering as one simple example a puzzle maze McDonald’s included in its Happy Meals a few years ago that was more difficult than the mazes included in an IQ test for gifted children.12 Nisbett also writes about “environmental multipliers,” suggesting that a tall kid who goes out for basketball develops a proficiency in the sport that a shorter kid with the same aptitudes won’t develop, just as a curious kid who goes for learning gets smarter than the equally bright but incurious kid who doesn’t. The options for learning have expanded exponentially. It may be a very small genetic difference that makes one kid more curious than another, but the effect is multiplied in an environment where curiosity is easily piqued and readily satisfied.

Another environmental factor that shapes IQ is socioeconomic status and the increased stimulation and nurturing that are more generally available in families who have more resources and education. On average, children from affluent families test higher for IQ than children from impoverished families, and children from impoverished families who are adopted into affluent families score higher on IQ tests than those who are not, regardless of whether the birth parents were of high or low socioeconomic status.

The question of whether IQ can be raised is fraught with controversy and has been the subject of countless studies reflecting wide disparities of scientific rigor. A comprehensive review published in 2013 of the extant research into raising intelligence in young children sheds helpful light on the issue, in part because of the strict criteria the authors established for determining which studies would qualify for consideration. The eligible studies had to draw from a general, nonclinical population; have a randomized, experimental design; consist of sustained interventions, not of one-shot treatments or simply of manipulations during the testing experience; and use a widely accepted, standardized measure of intelligence. The authors focused on experiments involving children from the prenatal period through age five, and the studies meeting their requirements involved over 37,000 participants.

What did they find? Nutrition affects IQ. Providing dietary supplements of fatty acids to pregnant women, breast-feeding women, and infants had the effect of increasing IQ by anywhere from 3.5 to 6.5 points. Certain fatty acids provide building blocks for nerve cell development that the body cannot produce by itself, and the theory behind the results is that these supplements support the creation of new synapses. Studies of other supplements, such as iron and B complex vitamins, strongly suggested benefits, but these need validation through further research before they can be considered definitive.

In the realm of environmental effects, the authors found that enrolling poor children in early education raises IQ by more than four points, and by more than seven if the intervention is based in a center instead of in the home, where stimulation is less consistently sustained. (Early education was defined as environmental enrichment and structured learning prior to enrollment in preschool.) More affluent children, who are presumed to have many of these benefits at home, might not show similar gains from enrolling in early education programs. In addition, no evidence supports the widely held notion that the younger children are when first enrolled in these programs the better the results. Rather, the evidence suggests, as John Bruer argues, that the earliest few years of life are not narrow windows for development that soon close.

Gains in IQ were found in several areas of cognitive training. When mothers in low-income homes were given the means to provide their children with educational tools, books, and puzzles and trained how to help their children learn to speak and identify objects in the home, the children showed IQ gains. When mothers of three-year-olds in low-income families were trained to talk to their children frequently and at length and to draw out the children with many open-ended questions, the children’s IQs rose. Reading to a child age four or younger raises the child’s IQ, especially if the child is an active participant in the reading, encouraged by the parent to elaborate. After age four, reading to the child does not raise IQ but continues to accelerate the child’s language development. Preschool boosts a child’s IQ by more than four points, and if the school includes language training, by more than seven points. Again, there is no body of evidence supporting the conclusion that early education, preschool, or language training would show IQ gains in children from better-off families, where they already benefit from the advantages of a richer environment.13

Brain Training?

What about “brain training” games? We’ve seen a new kind of business emerge, pitching online games and videos promising to exercise your brain like a muscle, building your cognitive ability. These products are largely founded on the findings of one Swiss study, reported in 2008, which was very limited in scope and has not been replicated.14 The study focused on improving “fluid intelligence”: the facility for abstract reasoning, grasping unfamiliar relationships, and solving new kinds of problems. Fluid intelligence is one of two kinds of intelligence that make up IQ. The other is crystallized intelligence, the storehouse of knowledge we have accumulated through the years. It’s clear that we can increase our crystallized intelligence through effective learning and memory strategies, but what about our fluid intelligence?

A key determinant of fluid intelligence is the capacity of a person’s working memory: the number of new ideas and relationships that a person can hold in mind while working through a problem, especially in the face of distraction. The focus of the Swiss study was to give participants tasks posing increasingly difficult working memory challenges, holding two different stimuli in mind across progressively longer periods of distraction. One stimulus was a sequence of numerals. The other was a small square of light that appeared in varying locations on a screen. Both the numerals and the locations of the square changed every three seconds. The task was to decide, while watching the sequence of changing numerals and repositioning squares, whether each new combination of numeral and square matched the combination that had been presented n items back in the series. The number n increased during the trials, making the challenge to working memory progressively more arduous.
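As a concrete illustration of that matching rule, the sketch below simulates the decision participants had to make on each presentation. It is a minimal sketch based only on the description given here: the function and parameter names (generate_trials, n_back_matches, the nine screen positions, the ten numerals) are illustrative assumptions, not details taken from the Swiss study, and it leaves out the three-second pacing and the adaptive raising of n.

```python
import random

# Minimal sketch of the dual-stimulus n-back matching rule described above.
# Names, the number of screen positions, and the numeral range are illustrative
# assumptions, not details from the Swiss study itself.

def generate_trials(num_trials, num_positions=9, num_numerals=10, seed=0):
    """Produce a random sequence of (numeral, square_position) pairs,
    one pair per presentation."""
    rng = random.Random(seed)
    return [(rng.randrange(num_numerals), rng.randrange(num_positions))
            for _ in range(num_trials)]

def n_back_matches(trials, n):
    """For every presentation after the first n, report whether the current
    numeral and square position match those shown n items earlier."""
    results = []
    for i in range(n, len(trials)):
        numeral_now, pos_now = trials[i]
        numeral_back, pos_back = trials[i - n]
        results.append({
            "trial": i,
            "numeral_match": numeral_now == numeral_back,
            "position_match": pos_now == pos_back,
            "both_match": numeral_now == numeral_back and pos_now == pos_back,
        })
    return results

if __name__ == "__main__":
    trials = generate_trials(20)
    for outcome in n_back_matches(trials, n=2):  # n rises as training progresses
        print(outcome)
```

Raising n by one forces the participant to keep one more numeral-and-position pair in mind at all times, which is what makes the task progressively harder as training continues.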

All the participants were tested on fluid intelligence tasks at the outset of the study. Then they were given these increasingly difficult exercises of their working memory over periods ranging up to nineteen days. At the end of the training, they were retested for fluid intelligence. They all performed better than they had before the training, and those who had engaged in the training for the longest period showed the greatest improvement. These results showed for the first time that fluid intelligence can be increased through training.

What’s the criticism?

The participants were few (only thirty-five) and were all recruited from a similar, highly intelligent population. Moreover, the study focused on only one training task, so it is unclear to what extent the results would extend to other working-memory training tasks, or whether they reflect working memory at all rather than some peculiarity of the particular training. Finally, the durability of the improved performance is unknown, and the results, as noted, have not been replicated by other studies. The ability to replicate empirical results is a bedrock of science. The website PsychFileDrawer.org keeps a list of the top twenty psychological research studies that the site’s users would most like to see replicated, and the Swiss study is at the top of the list. A replication attempt published in 2013 failed to find any improvement in fluid intelligence from the same exercises. Interestingly, participants in that study believed their mental capacities had been enhanced, a phenomenon the authors describe as illusory. However, the authors also acknowledge that an increased sense of self-efficacy can lead to greater persistence in solving difficult problems, encouraged by the belief that training has improved one’s abilities.15

The brain is not a muscle, so strengthening one skill does not automatically strengthen others. Learning and memory strategies such as retrieval practice and the building of mental models are effective for enhancing intellectual abilities in the material or skills practiced, but the benefits don’t extend to mastery of other material or skills. Studies of the brains of experts show enhanced myelination of the axons related to the area of expertise but not elsewhere in the brain; the myelination changes observed in piano virtuosos are specific to piano virtuosity. The ability to make practice a habit, however, is generalizable. To the extent that “brain training” improves one’s efficacy and self-confidence, as the purveyors claim, the benefits are more likely the fruits of better habits, such as learning how to focus attention and persist at practice.
