PART III. Improving Motor Performance
Mental Practice

A Revised Meta-analysis of the Mental Practice Literature on Motor Skill Learning

Deborah L. Feltz
Michigan State University

Daniel M. Landers
Arizona State University

Betsy J. Becker
Michigan State University

Running head: MENTAL PRACTICE

Send correspondence to Deborah L. Feltz, Rm. 210, I.M. Sports Circle, School of Health Education, Counseling Psychology and Human Performance, Michigan State University, East Lansing, MI 48824
Abstract

The effect of mental practice on subsequent performance of a motor skill has been the subject of many reviews. The present review of mental practice effects differed from previous reviews by examining: (1) learning effects by means of effect sizes for pretest-to-posttest differences, (2) mental practice effects compared to no practice, physical practice, and mental and physical practice, and (3) effect sizes using more contemporary meta-analytic procedures recommended by Hedges and Olkin (1985). An overview of meta-analytic procedures is also presented. From the 48 studies identified as having pretest/posttest comparisons, the overall average effect size for all practice conditions was 0.43 (p < .05). Analysis of categorical comparisons among practice conditions revealed that physical practice had the largest effect size, followed by combined practice, mental practice, and no practice (control) conditions. This ordering of effect sizes was also found for moderating variables of task type (motor tasks) and dependent measures (accuracy tasks). None of the other moderating variables were statistically significant. These findings are discussed in relation to: (a) the conclusions advanced by previous reviewers of the mental practice literature, and (b) varying ratios of physical to mental practice for enhancing learning of motor skills.
A Revised Meta-analysis of the Mental Practice Literature on Motor Skill Learning

Concomitant with the cognitive revolution in psychology has been the resurgence of research on mental practice. As a specific form of practice, mental practice has also been referred to as symbolic rehearsal (Sackett, 1935), imaginary practice (Perry, 1939), covert rehearsal (Corbin, 1967), implicit practice (Morrisett, 1956), mental rehearsal (Whiteley, 1962), conceptualizing practice (Egstrom, 1964), mental preparation (Weinberg, 1982), and visualization (Seiderman & Schneider, 1983). According to Richardson (1967, p. 95), "mental practice refers to the symbolic rehearsal of a physical activity in the absence of any gross muscular movements." Such covert activity is commonly observed among musicians and athletes prior to their performances. For example, when a gymnast imagines going through the motions of performing a still ring routine, he is engaged in mental practice.

Since the 1930s there have been over 100 studies on mental practice. The specific research question addressed in these studies has been whether a given amount of mental practice prior to performing a motor skill will enhance one's subsequent motor performance. Unfortunately, definitive answers to this question have not been readily forthcoming. Although there are existing narrative (Corbin, 1972; Richardson, 1967a, b; Weinberg, 1982) and meta-analytic (Feltz & Landers, 1983) reviews of the mental practice literature, the conclusions have been contradictory. There is a need, therefore, to conduct a comprehensive review of the mental practice literature using more sophisticated meta-analytic procedures and examining more study features than used in previous studies (e.g., Feltz & Landers, 1983).

MENTAL PRACTICE PARADIGMS

Most experiments on skill acquisition have been variants on a research design which employs four groups of subjects randomly selected from a homogeneous parent population or equated on initial levels of performance. These groups have been (a) mental practice, (b) physical practice, (c) combined physical and mental practice, and (d) no physical or mental practice (i.e., control). Most studies compared the performances (pre-post) of subjects who had previous mental practice to a control group that had not received mental instructions. In the mental practice group the time intervening between pre and posttest was usually occupied in sitting or standing and rehearsing the skill in imagination for a set amount of time. The members of the no practice group were simply instructed not to practice the skill physically or mentally during the interval. A more appropriate control has required members of the no practice group to participate in the same number of practice sessions as the mental and physical practice groups, but with activity that has been irrelevant to the task. Quite often, these groups were also contrasted to a physical practice group and a group receiving combined mental and physical practice. A practice period was then instituted which varied considerably in the number of trials in each practice session and in total number and spacing of trials. In combined mental-physical practice groups, practice periods involved either
alternating mental and physical practice trials, mentally practicing a number of trials followed by physical practice, or physically practicing a number of trials followed by mental practice. Following this practice period, the subjects' skills were tested under standard conditions to determine whether their performance scores differed as a result of the practice condition administered.

The scope of the present meta-analytic review is considerably broader than in previous reviews. Whereas Feltz and Landers (1983) limited their review to only comparisons between mental practice and no practice, all four groups are compared in the present review. The previous meta-analytic study included only studies that had pretest scores or a control group with which to be compared. By contrast, the present review included only single or multiple group studies having pre and posttest scores. The use of pre-post designs permitted a determination of a change-score effect size for each group examined in this set of mental practice studies.

PREVIOUS REVIEWS

Research studies examining the effects of mental practice on motor learning and skilled performance have been reviewed on a selective basis. The reviews by Richardson (1967a) and Corbin (1972) included from 22 to 56 studies and provided contradictory conclusions. Richardson (1967a) reviewed studies of three types: (a) those that focused on how mental practice could facilitate the initial acquisition of a perceptual motor skill, (b) those that focused on aiding the continued retention of a motor skill,
and (c) those that focused on improving the immediate performance of a skill. He concluded that in a majority of the studies reviewed, mental practice facilitates the acquisition of a motor skill. There were not enough studies to draw any conclusions regarding the effect of mental practice on retention or immediate performance of a task. Five years later, Corbin (1972), who reviewed many other factors that could affect mental practice, was much more cautious in his interpretation of the effects of mental practice on acquisition and retention of skilled motor behavior. In fact, he maintained that the studies were inconclusive and that a host of individual, task, and methodological factors used with mental practice produced different mental practice results.

In a 1982 review of "mental preparation," Weinberg reviewed 27 studies dealing with mental practice. Although Weinberg noted the equivocal nature of this literature, he maintained that the following consistencies were apparent: (a) physical practice is better than mental practice; and (b) mental practice combined and alternated with physical practice is more effective than either physical practice or mental practice alone. The latter conclusion is similar to Richardson's (1967a) cautious inference that the combined practice group is as good as or better than physical practice trials only. Another conclusion reached by Weinberg (1982) was that for mental practice to be effective individuals had to achieve a minimum skill proficiency. However, in their meta-analysis, Feltz and Landers (1983) found no significant differences between
the effect sizes determined for novice and experienced performers.

It is not surprising that with all of the significant and nonsignificant findings in the numerous mental practice studies, it is exceedingly difficult in these narrative reviews (Corbin, 1972; Richardson, 1967; Weinberg, 1982) to obtain any clear patterns. The insights about directions for future research that were provided in previous reviews by Richardson (1967), Corbin (1972), and Weinberg (1982) were helpful. In the above reviews, however, the conclusions about mental practice effects may have been distorted for one or more of the following reasons: (a) too few studies have been included to accurately portray the overall empirical findings in the area; (b) only a subset of possible studies was included, leaving open the possibility that bias on the reviewers' part may have influenced them to include studies that supported their position, while excluding those that may have contradicted their beliefs; (c) although the reviewers speculated about a range of variables that may influence the effectiveness of mental practice, the style used in these reviews was more narrative and rhetorical than technical and statistical, thus making it difficult to systematically identify the variables; and (d) the reviews have ignored the issue of relationship strength, which may have allowed weak disconfirmation, or the equal weighting of conclusions based on few studies with conclusions based on several studies (see Cooper, 1979). In other words, they had a smaller pool of studies, and at that time, more sophisticated tools for research
integration were not widely available. Thus, some of their conclusions may no longer be tenable.

Given the current confusion that may have resulted from the basic limitations of previous reviews, there is a need for a more comprehensive review of existing research, using a more powerful method of combining results than summary impression. The methodology recommended for such a purpose is meta-analysis, which examines the magnitude of differences between conditions as well as the probability of finding such differences.

AN OVERVIEW OF META-ANALYSIS TECHNIQUES

This section provides an overview of the concept and practice of meta-analysis, the quantitative synthesis of research findings. A brief introduction is followed by a discussion of Cooper's (1984) formulation of the process of integrative research reviewing. The effect size, as popularized by Glass (1976), is next introduced; this measure serves as an index of the effectiveness of mental practice training in our review. An overview of hypotheses tested by statistical methods designed specifically for analyzing effect-size data (e.g., Hedges & Olkin, 1985) concludes the section.

Introduction

"Meta-analysis" (Glass, 1976), or the analysis of analyses, is an approach to research reviewing that is based upon the quantitative synthesis of results of related research studies. Although the idea of statistically combining measures of study outcomes is not new in the agricultural or physical sciences
(e.g., Birge, 1932; Fisher, 1932), this approach was not often used to summarize research results in the social sciences until Glass (1976) proposed the idea of meta-analysis. Glass described meta-analysis as "a rigorous alternative to the casual, narrative discussions of research studies which typify our attempts to make sense of the rapidly expanding research literature" (1976, p. 3). The book by Glass, McGaw, and Smith (1981) presents an overview of the process as it was first conceptualized. In Glass's view, the task of the meta-analyst is to explore the variation in the findings of studies in much the same way that one might analyze data in primary research. Questions of the effects of differences in study design or treatment implementation on study results are addressed empirically. Thus we avoid the practice of eliminating all but a few studies not believed to be deficient in design or analysis, and basing the conclusions of the review on the remaining results.

Some critics (e.g., Eysenck, 1978; Slavin, 1984) have claimed that meta-analysis (as it is generally applied) is little more than the thoughtless application of statistical summaries to the results of studies of questionable quality. In fact, as is true for some published primary research, some published meta-analyses are flawed because of problems in data collection, data analysis, or other important aspects. However, when thoughtfully conducted, a meta-analysis can provide a more rigorous and objective alternative to the traditional narrative review. Additionally, the development of statistical analyses designed especially for effect sizes makes the thoughtful meta-analysis a necessity rather than an option.

The Integrative Review

Both Jackson (1980) and Cooper (1982, 1984) have conceived of the steps involved in an integrative research review as parallel to those familiar in the conduct of primary research. Cooper (1984) outlines and details five steps in a research review and the "functions, sources of variance, and potential threats to validity associated with each stage of the review process" (1984, p. 12). These five stages are outlined below.

Problem Formulation

At this first stage of the review, the researcher must outline the research questions for the review and the kinds of evidence that should be sought in order to address those questions. Here the reviewer deals with the conceptualization and operationalization of constructs, the specificity versus generality of conclusions to be drawn, and the question of whether to conduct a review which tests hypotheses on the basis of "study-generated evidence" or a review which proposes hypotheses on the basis of "review-generated evidence." Study-generated evidence comprises information about effects examined within studies, such as treatment effects or the relationships of critical subject characteristics to treatment effects. Review-generated evidence concerns effects that cannot be, or usually are not, tested within single studies. For example, evidence about the relationship to study results of features of research design or methodology would be review-generated evidence.
Data Collection

At this stage of the review, the issue is the identification and collection of studies. Cooper details many literature-search procedures, and discusses ways to evaluate their adequacy.

Data Evaluation

This stage of the research involves the accumulation of study results and the "coding" of study features which may later serve as explanations for patterns of study outcomes. During this step, the meta-analyst computes quantitative indices of study outcomes (representing treatment effects, degrees of relationships between variables, or other outcomes) which will later be analyzed. Also at this stage the issues of subject and treatment characteristics and study quality become crucial. Features of the subjects (both experimental and control subjects), the treatments, and the context of the study may be related either purposely or accidentally to study outcomes. Some guidance about which features should be important will come from the problem formulation stage of the review. Important treatment features and subject characteristics that have theoretical importance must be noted for each study in order to examine plausible explanations for differences (or similarities) in study results.

Cooper describes two approaches for evaluating study quality, the "threats-to-validity" approach and the "methods-description" approach. The threats-to-validity approach involves determining whether each study in the review is subject to any of a number of threats to validity (such as those listed by Campbell
and Stanley, 1963), and the methods-description approach involves the description of the features of study design via coding of the primary researchers' descriptions of the methodology of the studies. Clearly, either approach has the weakness that different reviewers may choose to list different threats to validity or methodological features, but the methods-description approach has the advantages of requiring fewer judgments and being more detailed (because finer details of study methods are noted).

Data Analysis and Interpretation

At this stage the reviewer selects and applies procedures in order to draw inferences about the questions formulated at the first stage of the review procedure. Different procedures are available for analyzing measures of effect magnitude such as correlations and standardized mean differences, and for analyzing probability values from independent studies. Different inferences can be based on these two kinds of analyses.

Public Presentation of Results

Finally, the reviewer must prepare the results of the integrative review for public consumption. Here issues of the amount of detail that should be reported about the conduct of the four previous stages are critical. Clearly the inclusion of every detail, regardless of its eventual importance in the findings of the review, is unwise. However, Cooper argues that the omission of details about the conduct of the review constitutes a primary threat to the validity of the review.

Summary

The clarification alone of the process of conducting an integrative review has done much to enable researchers to take a
more rigorous and systematic approach to research reviewing. Even so, in each review there will be special considerations suggested by the nature of the research topic or the data available that do not allow the conduct of such a review to be an automatic, thoughtless process.

Glass's Effect Size

For many years the quantitative summarization of measures of effect magnitude was not possible for much of the research in the social sciences. Glass's popularization of the effect size, or standardized mean difference, as a measure of treatment effect that could be compared across studies using nonidentical instruments or measures, was the breakthrough that allowed the broad application of quantitative research synthesis techniques in the social and behavioral sciences. The effect size for a comparison between the experimental and control groups in a study is the standardized mean difference

g = (Ȳ_E - Ȳ_C) / S,     (1)

where Ȳ_E and Ȳ_C are the experimental and control group means, respectively, and S is the pooled within-groups estimate of σ, the common population standard deviation of the scores. (Though Glass proposed using the control group standard deviation as S, Hedges (1981) noted that the pooled standard deviation is a more precise estimate of σ when the assumption of equal population variances is satisfied.)
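As an illustrative sketch (not part of the original chapter), Equation 1 can be computed directly from raw scores. The function names and data below are our own; the pooled standard deviation follows the Hedges (1981) recommendation described above.

```python
import statistics

def pooled_sd(exp_scores, ctrl_scores):
    """Pooled within-groups standard deviation, the estimate of sigma
    recommended by Hedges (1981) under equal population variances."""
    n_e, n_c = len(exp_scores), len(ctrl_scores)
    var_e = statistics.variance(exp_scores)   # unbiased sample variance (n - 1)
    var_c = statistics.variance(ctrl_scores)
    return (((n_e - 1) * var_e + (n_c - 1) * var_c) / (n_e + n_c - 2)) ** 0.5

def effect_size_g(exp_scores, ctrl_scores):
    """Standardized mean difference g = (mean_E - mean_C) / S (Equation 1)."""
    diff = statistics.mean(exp_scores) - statistics.mean(ctrl_scores)
    return diff / pooled_sd(exp_scores, ctrl_scores)
```

With these definitions, two groups whose means differ by exactly one pooled standard deviation yield g = 1.0.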
The effect size represents the difference between the means of the experimental and control groups relative to the amount of random variation within those groups. Many reviewers discuss values of the effect size in terms of "standard-deviation units," in much the same way that a z score or standard score would be discussed. Thus, an effect size of 0.75 indicates that the means of the experimental and control groups differ by three fourths of one standard deviation. Another way to interpret the effect size is in terms of the performance of an average subject in the control group. An effect size of 0.75 indicates that the treatment implemented raises the score of the average subject three fourths of one standard deviation.

Statistical Analyses for Effect-size Data

Glass's Analyses

When Glass proposed using quantitative methods to summarize effect sizes, he argued that the effect sizes could be treated as "typical" data and analyzed using familiar procedures (e.g., ANOVA, regression). The rationale for using such analyses was that the reviewer wanted to examine variation in the results of studies, in much the same way that a researcher might examine differences between subjects in a primary data analysis. Thus, analysis of variance was used to compare results of classes or categorizations of studies, and regression was used to examine the relationships of continuous predictor variables to the study results. Though many meta-analyses have been based on this approach to summarizing data from series of studies, including our
original review of the mental practice literature (Feltz & Landers, 1983), the approach is problematic because the effect-size data (or the correlations or proportions) do not usually satisfy the homoscedasticity assumption required of standard statistical analyses. The variance of the effect-size estimate is inversely related to the size of the sample for which it is calculated (Hedges, 1981), and sample sizes of studies in research reviews often differ by several orders of magnitude. Furthermore, though the influence on decisions of violations of this sort has not been well studied, it seems likely to be associated with serious errors in the significance levels of tests (e.g., t and F tests) based on the analyses (Hedges, 1984). Thus, analyses designed specifically for the examination of effect-size data are to be preferred over the seemingly sensible ad hoc methods used initially.

Analyses for Effect Sizes

Analyses based on sample effect sizes allow inferences about corresponding population parameters. Hedges (1981) noted that g estimates a population effect size, δ, which may be written as

δ = (μ_E - μ_C) / σ.     (2)

The parameters μ_E and μ_C are the population means on Y for the experimental and control groups, respectively, and σ is the population standard deviation of the Y scores within the groups of the study. When the reviewer considers a set of k studies, the parameters δ_1, ..., δ_k are the population values about which inferences are made when sample effect sizes are analyzed.
Though there are many similarities between the familiar analyses first employed in meta-analysis (like ANOVA and regression) and the analyses designed specifically for effect sizes, there are also differences. Statistical analyses designed specifically for effect sizes not only avoid the statistical problems of traditional analysis methods, but also provide tests of the adequacy of proposed models for the effect sizes which are not available from traditional methods. Rather than detail the statistical theory for the effect-size analyses, which is presented clearly by Hedges (e.g., Hedges, 1982a, b; Hedges & Olkin, 1985), we outline here the hypotheses that are addressed by the analyses.

Hypotheses for Effect-size Analysis

The hypotheses appropriate for effect-size data are discussed here for the context of studies comparing one experimental group to a control group on a simple posttest measure. The simplest null hypothesis for effect-size data is that all of the studies are of populations in which there are essentially no treatment effects. This is typically tested in two steps using statistical analyses for effect sizes. First, the hypothesis that all of the studies provide similar or consistent results is tested. This is the model that

δ_1 = δ_2 = ... = δ_k = δ,     (3)

where k is the number of studies being summarized. Hedges (1982a) and Rosenthal and Rubin (1982) showed how to test this model using a chi-square statistic with k - 1 degrees of freedom, which provides a test similar to the goodness-of-fit test from a log-linear model. Because this homogeneity test informs the
reviewer about differences in the size of the treatment effect across studies, in a sense it provides a test for a "study by treatment" interaction.

If the results from all of the studies are consistent with the model of a single underlying population effect size (i.e., one treatment effect), the meta-analyst can test whether the value of that single effect (δ) differs from zero. The formal hypothesis to be tested is that δ = 0. A z score is calculated by dividing the weighted mean effect size by its standard error. The test is done by comparing that sample z to a table of standard normal values.

Further Hypotheses

If the test of homogeneity for the effect sizes (model 3) is rejected and the reviewer concludes that the results are "not consistent," many alternative methods of analyzing the effect sizes are available as a next step. Many of these methods are covered in detail by Hedges (1982b, c; 1983) and others (Raudenbush & Bryk, 1985; Rosenthal & Rubin, 1982). The logic behind these alternatives is described briefly.

The goal of the alternative statistical analyses designed for effect sizes is to either "explain," estimate, or identify the sources of variability in study results. Tests for the significance of specific explanatory models are accompanied by tests for the adequacy of those models. Similarly, methods for identifying outliers provide ways to assess the impact of the omission of the outliers on the data analysis.
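The two-step procedure just described can be sketched as follows, assuming each of the k studies supplies an effect-size estimate and its sampling variance, with studies weighted by the inverse of those variances. The function names and numbers here are illustrative, not taken from the chapter; critical values would come from chi-square and standard normal tables.

```python
def homogeneity_q(effects, variances):
    """Step 1: homogeneity statistic for the model in Equation 3.
    Under delta_1 = ... = delta_k, Q is approximately chi-square
    distributed with k - 1 degrees of freedom."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return sum(w * (d - mean) ** 2 for w, d in zip(weights, effects))

def mean_effect_z(effects, variances):
    """Step 2: weighted mean effect size and its z statistic for
    testing delta = 0, once homogeneity is not rejected."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return mean, mean / se
```

If Q falls below the chi-square critical value for k - 1 degrees of freedom, the studies are treated as sharing one effect δ, and |z| > 1.96 would reject δ = 0 at the .05 level.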
Fixed-effects analyses assume that all effect-size parameters are functions of known concomitant variables (study or sample characteristics), and thus can be "explained" in terms of an appropriate statistical model. The model may be a regression-like linear model relating predictors (e.g., sample or study features) to the effect-size outcomes (Hedges, 1982c) or a categorical model (conceptually similar to ANOVA) which posits different population effect sizes for qualitatively different sets of studies.

Other analyses assume that "random-effects" or mixed models are more appropriate for describing effect-size outcomes. The underlying assumption of these methods is that the effect-size parameters vary in much the same way as their sample realizations. The goal of random-effects analyses is to estimate the amount of random parameter variability in a set of outcomes. Mixed models do not obviate the possibility of between-study differences due to fixed factors. Such models simply do not presume that such fixed differences can explain all variability in outcomes. Thus, a reviewer using mixed-model methods might seek to reduce outcome variation via explanatory models but would not expect to eliminate that variation. In this approach tests of model "adequacy" are often accompanied or replaced by estimates of residual variation in effects.

Another approach to the analysis of effect sizes, which is often combined with those mentioned above, involves the identification of outliers, or unusual effect-size estimates. Methods described by Hedges and Olkin (1985) allow the reviewer to locate studies that contribute heavily to the misspecification
of proposed models for differences in effect sizes. The studies from which these estimates arise sometimes differ from other studies in ways that were not coded or thought important during preliminary data evaluation. Sometimes the features of such unusual studies can be included in a model which then explains adequately the pattern of results. Occasionally outliers are eliminated if they result from incommensurate outcome measures or because of problems in effect-size computation. The methods for identifying unusual studies can be used not only to identify problem studies, but also to identify exemplary studies.

THE NEED FOR THE PRESENT STUDY

This reanalysis of the mental practice literature will be valuable for several reasons. First, the analysis will improve upon the earlier review by expanding the set of studies investigated to include those examining a treatment featuring combinations of mental and physical practice. The Feltz and Landers (1983) meta-analysis examined only the comparison of mental practice to no practice at all.

Second, our present study will improve upon the earlier review by Feltz and Landers (1983) by using modern statistical analyses for effect sizes. Feltz and Landers employed the meta-analysis strategy initially proposed by Glass, which is problematic both because of the violation of the assumption of homogeneity of variances discussed previously, and because of the inability of this strategy to assess the adequacy of the models
for differences in effect sizes. We will use the methods described by Hedges and Olkin to avoid these problems. Furthermore, we will use the methods described by Hedges and Olkin for identifying outliers or unusual studies to pinpoint very large effect sizes. Thus, we will be able to select studies that show particularly strong mental-practice or combined mental and physical practice effects, which might serve to identify problem studies or exemplars for the design of mental-practice interventions.

Our reanalysis will also use a slightly modified version of Glass's effect size as a measure of the effectiveness of mental practice training. In their previous review, Feltz and Landers (1983) used the typical experimental versus control effect size, contrasting motor-skill performance between mental-practice and control groups. In our reanalysis, we will use separate effect sizes for the mental practice and control groups (as well as for combined mental and physical practice groups) to represent change in motor skill performance. The use of this "difference-score effect size" (discussed in more detail below) will enable us to estimate not only the difference in performance due to the mental-practice intervention, but also the amount of change that would be expected for groups receiving no training or a combination of mental and physical practice. Thus, our overall null hypothesis will be that all studies show on average the same degree of change in motor skill for the mental practice, physical practice, combined, and control groups.
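One way to picture this overall null hypothesis is as a categorical (ANOVA-like) comparison of class mean effect sizes, in the spirit of the fixed-effects categorical models cited above. The sketch below is ours, not the chapter's: each practice condition contributes a set of effect-size estimates with sampling variances, and the between-class statistic is referred to a chi-square distribution with (number of classes - 1) degrees of freedom.

```python
def weighted_mean(effects, variances):
    """Inverse-variance weighted mean effect size and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, effects)) / total, 1.0 / total

def q_between(groups):
    """Between-class homogeneity statistic for a categorical model.

    `groups` maps a condition label (e.g., "mental", "physical",
    "combined", "control") to (effects, variances) for the studies
    in that class."""
    class_means, class_vars, all_effects, all_vars = [], [], [], []
    for effects, variances in groups.values():
        m, v = weighted_mean(effects, variances)
        class_means.append(m)
        class_vars.append(v)
        all_effects.extend(effects)
        all_vars.extend(variances)
    grand_mean, _ = weighted_mean(all_effects, all_vars)
    return sum((m - grand_mean) ** 2 / v
               for m, v in zip(class_means, class_vars))
```

A large between-class statistic would reject the hypothesis that the mental practice, physical practice, combined, and control classes share a common mean change.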
METHOD

In this section, we detail the methodology for our meta-analysis. The first section details the literature search procedures used to identify our collection of studies. Next, the definition and computation of effect-size measures, and the coding of study features, are discussed. The remaining section deals with the analysis of our effect-size data. We discuss the comparisons of practice paradigms to be made, as well as other discrete (grouping) variables that may be related to the amount of change in motor skills. Then we present a rationale for the investigation of several continuous predictor variables for the amount of change in motor skill performance. Finally, we discuss our rationale and methodology for the examination of outliers.

The Collection of Studies

Study sources were obtained from the Feltz and Landers (1983) review and from a manual search of the literature subsequent to 1982. From this search we identified 60 unpublished sources, 48 of which were obtainable, and 48 published sources, all of which were obtainable. This resulted in a total of 96 distinct sources that were retrieved and identified as having examined the effects of some form of mental practice on motor performance. Each article was then read, effect-size measures were extracted where sufficient data were provided, and relevant study features were coded. This procedure produced 55 studies from which effect sizes could be obtained. Of the 41 studies that could not be used, 37 did not report enough
information on which to calculate effect sizes and four were not relevant to the purpose of this review.

Definition and Computation of Effect-Size Measures

Notation for a Series of Studies

Consider a series of k studies, each examining the treatment effect in one or several samples. Let X_ijl and Y_ijl be the pretest (X) and posttest (Y) scores, respectively, for the lth person in the jth sample of the ith study. If a study examines the pretest and posttest motor-skill performance of subjects in mental-practice and control groups, it has two independent samples. Denote by J_i the number of independent samples in the ith study and by n_ij the sample size in the jth sample of study i, and assume that in sample j of study i, X_ijl and Y_ijl are independently normally distributed with means mu_ij and nu_ij and with variances sigma_ij^2 and tau_ij^2, respectively. Thus

     X_ijl ~ N(mu_ij, sigma_ij^2),  for l = 1, ..., n_ij,  j = 1, ..., J_i,

and

     Y_ijl ~ N(nu_ij, tau_ij^2),  for l = 1, ..., n_ij,  j = 1, ..., J_i.

The Difference-Score Effect Size

We define the difference-score effect size as the difference between the posttest and pretest means for a single sample, divided by the pretest standard deviation.
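This verbal definition translates into one line of arithmetic; a minimal sketch in Python (the function name and numbers are ours, for illustration only):

```python
def diff_score_effect_size(pretest_mean, posttest_mean, pretest_sd):
    """Pre-to-post change expressed in pretest standard deviation units."""
    return (posttest_mean - pretest_mean) / pretest_sd

# A group that improves from a mean of 50 to 54, against a pretest SD
# of 8, shows a difference-score effect size of 0.5.
g = diff_score_effect_size(50.0, 54.0, 8.0)
print(g)  # 0.5
```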
We write

     g_ij = (Ybar_ij - Xbar_ij) / S_ij,                    (1)

where Ybar_ij and Xbar_ij are the posttest and pretest means, respectively, and S_ij is the pretest standard deviation in the jth sample from the ith study.

We define the difference-score effect size in the metric of the pretest scores for two reasons. The primary reason is interpretability. By dividing by a standard deviation of scores (rather than of change scores) we obtain an effect size in score units. Thus a difference-score effect size of 0.75 for a mental practice group indicates that the average subject in that group increased his or her performance by three-fourths of one standard deviation. If the skill in question were basketball jump shots, and the standard deviation of the number of pretreatment shots made was 10, then the average change is easily seen to be 7.5 additional shots made.

The second reason is that the pretest standard deviations would not be influenced by the treatments. They should be roughly equivalent across groups within studies, assuming that subjects were randomly assigned to groups; thus large difference-score effects should not result from decreased variation in scores in groups where the treatment may have affected score variability (note the influence of sigma^2 and tau^2 on the variance of g).

The sample change-score effect size, g_ij, estimates a population effect size, delta_ij, which may be written as
     delta_ij = (nu_ij - mu_ij) / sigma_ij.                 (2)

As above, mu_ij and nu_ij are the population means on X and Y for the jth sample in the ith study, and sigma_ij is the population standard deviation of the X scores within the jth sample of study i. Below we will see that the sampling distribution of the effect size is greatly simplified if we assume that sigma_ij^2 = tau_ij^2. Inferences are made about the parameters delta_ij when sample effect sizes or their significance values are analyzed.

Computation of Effect Sizes

Most studies provided the pretest and posttest means and the pretest standard deviation needed to compute the effect size directly, as shown in equation 1. Effect sizes were computed for as many distinct control, mental practice, physical practice, or combined mental/physical practice groups as were examined. Thus, a single source could provide any number of difference-score effect sizes. In the present review, the maximum number of effect sizes from any one source was 96 (Wills, 1966). The Wills (1966) study measured 8 outcomes for 12 independent samples of subjects. When several outcomes were studied, when single outcomes were scored in more than one way (e.g., in terms of both speed and accuracy), and when multiple test trials were reported, we computed several (dependent) effect sizes for each group. (No interdependent data are combined in our analyses, however.)

When raw means and pretest standard deviations were not reported, effect sizes were computed in other ways. In two studies, the posttest standard deviation replaced the pretest
standard deviation in the computation of g. In some cases, gain-score standard deviations (S_g) or t tests for change in performance were reported. In these cases g was computed via a simple algebraic identity:

     g = (Ybar - Xbar) sqrt(2(1 - r)) / S_g

using the gain-score standard deviation, and via

     g = t sqrt(2(1 - r) / n)

when n is the sample size of the group and t is the t test of change for only that group. (Note that the square root of the corresponding change-score F could also be used in place of t here.) The correlation r represents the correlation between the pretest and posttest measures. Values of r were not generally reported, and thus had to be obtained from a subset of studies which reported either the pre-post correlation or raw data (which allowed computation of r). The values of r used for the four treatment groups were r = +.69 for the control groups, r = +.64 for the mental practice groups, r = +.20 for the physical practice groups, and r = +.16 for the combined mental/physical groups. These values are the median correlations retrieved from the subset of studies which reported r or raw data. Each median r is based on a set of between seven and 10 correlations. The values of the median correlations suggest that the pretest-posttest correlation is quite strong for
control and mental-practice groups. Where some intervening physical practice has taken place, the relationship is weaker; the correlations for physical and combined groups are less than one-third the size of the control and mental practice correlations.

We also computed some effect sizes by approximating the value of S_g with the pooled within-groups mean square from a gain-score analysis of variance. Thus, with this method, we used the same standard deviation for all groups resulting from one article or study. Our formula for g was

     g = (Ybar - Xbar) sqrt(2(1 - r)) / sqrt(MS_W).

Preliminary analyses indicated, however, that effect sizes computed using this approach were systematically larger than effect sizes from studies similar in other respects. This may have resulted because of between-group differences in variation, or pretest versus posttest differences in variation, which could not be detected (because the necessary variances were not reported). Six studies with effect sizes computed via this method were eliminated from further statistical analysis.

Variance of the Effect Size

Hedges (1981) presented asymptotic distribution theory for Glass's estimate of effect size. The gain-score effect size has a similar distribution. The gain-score effect size is biased, but an unbiased estimate of the population value is computed as d = c(n - 1) g, where c(m) = 1 - 3/(4m - 1), and the variance of d is approximately
     V = 2(1 - r)/n + d^2 / (2(n - 1)).

Again, r is the estimated pre-post correlation and n is the sample size. The estimate d is asymptotically normal with an expected value of delta, the population difference-score effect size, and a variance given by V. Analyses of our difference-score effect sizes are based on those described in detail by Hedges (e.g., Hedges, 1982; Hedges & Olkin, 1985).

Coding of Study Features

Numerous study characteristics were coded for the 55 studies in the final collection. Table 1 presents a list of the study features used in our analyses. These study features are the same as those used by Feltz and Landers (1983) with the exception of subject's sex and design characteristics, as well as categories of open/closed skills. Subject's sex was not found to be important in moderating the effect of mental practice and was, therefore, not coded in our review. Because difference-score effect sizes were computed in our analysis, the design characteristics used by Feltz and Landers were not appropriate.

Types of Comparisons

Our primary comparison of interest was among the treatment groups or different types of practice. It has been theorized
that combined mental and physical practice is better than either physical practice or mental practice alone (Corbin, 1972). However, this comparison has not yet been made within a meta-analysis. In addition, as was done in the Feltz and Landers (1983) review, comparisons were made by task type, publication status, subject experience, and time of posttest. Comparisons that had not been made previously were between studies using different types of dependent measures and between studies using subjects with different levels of imagery ability.

The continuous predictor variables that were investigated were number of practice sessions and number of practice trials per session or length of each practice session in seconds. Some researchers have suggested that the greater the number of mental rehearsals, the greater the effect on performance (Sackett, 1935; Smyth, 1975), whereas others have suggested that there may be an optimal number of practice sessions and length of practice in which mental practice is most effective (Corbin, 1972; Twining, 1949). Feltz and Landers (1983) found no linear or curvilinear relationship between number of practice sessions and effect size; however, they did find curvilinear relationships between length of practice and effect size. Unfortunately, they were not able to determine, statistically, whether other variables (e.g., task type) moderated these relationships.

Rationale and Methodology for Outliers

Outliers were examined in the first step of the data analysis to identify unusual studies that could bias subsequent results. Confidence intervals were computed and plotted for each effect size. Unusual results were identified by examining the
confidence interval plots for the separate treatment groups. The studies identified were then re-read to determine any unusual features. On the basis of this preliminary analysis, six studies that had effect sizes computed by approximating the value of S_g with the pooled within-groups mean square were eliminated from further analysis. One study (Corbin, 1966) was eliminated because the pretest task was different than the posttest task. In addition, the Kelsey (1961) study was eliminated because it was the only study that measured muscular endurance. Consequently, the physical practice sample in this study had extremely high effect sizes.

RESULTS

Overall Test of Homogeneity

From the 55 studies in which effect sizes were computed, 48 were used in our meta-analysis. These 48 studies had examined change in motor skills for 223 separate samples. A summary of the characteristics for these studies is presented in Table 2. Included in this table is an indication of random assignment of subjects to groups, whether pretreatment group differences existed, and how effect sizes were computed.(1)

We first tested the consistency of change in motor skill across the 223 samples. The overall homogeneity test H_T value was 788.32, which, as a chi-square variable with k - 1 = 221 degrees of

----------
(1) The effect sizes for these studies can be obtained by writing the first author.
freedom, is quite large (p < .001). All the change-score effect sizes cannot be represented with one population parameter. This does not seem surprising, since the biased (uncorrected) effect sizes range from -0.38 to 13.91.

The weighted average effect size for all studies is estimated to be 0.43 standard deviations, which differs from zero (p < .05). This value represents the average change effect from pretest to posttest across all types of practice treatments. The value is just slightly lower than the unweighted average effect size (0.48) reported by Feltz and Landers (1983), which was computed using the mental practice versus control means rather than by computing difference-score effect sizes.

Categorical Comparisons

We next grouped the effects according to treatment group or type of practice. Table 3 shows the homogeneity statistics obtained for this categorical analysis and the overall homogeneity test (Hedges, 1982b). An overall test of the within-groups homogeneity, H_W, is the sum of the homogeneity values for each subgroup. Its value, 668.69, is significant at the .001 level (df = 218). Thus, there is still considerable variation in the sizes of change over practice within the treatment groups. The results within the four treatment categories are also not homogeneous.

The test for differences among mean effect sizes for the treatment groups is given by H_B, which is also a chi-square variable, with 3 degrees of freedom. We conclude that the four
sets of pre-post differences have different population effect sizes, since H_B = 119.63 is significant. Mean change differences for all of the treatment groups were significantly greater than zero, with physical practice showing the greatest change effects (0.79) and, as we would expect, the control groups showing the smallest change effects (0.22). The average weighted change-score effect size for mental practice groups (0.47) is very close to the unweighted effect size reported by Feltz and Landers (1983). Contrary to what has been previously theorized in the literature (Corbin, 1972), combined mental and physical practice does not appear to be more effective than either mental or physical practice alone.

We next subdivided the different treatment groups according to task type, since this was the categorical variable that Feltz and Landers (1983) found to be most significant in differentiating effect sizes. The task-type categories were motor tasks, cognitive tasks, and strength tasks. The homogeneity statistics for task type divided by treatment group are shown in Table 4. An inspection of Table 4 indicates that most of the variation in effect sizes occurs with the motor tasks. The overall test of within-groups homogeneity is significant (H_W (df = 155) = 547.74), as are the tests within the four treatment categories.

Since grouping the studies by task type for four treatment groups did not fully explain the variations in pre/post differences, we explored the use of another study feature, type of dependent measure used, as a grouping variable for motor type tasks. The dependent measure categories were accuracy, speed, form, distance, and time on target or in balance. The
homogeneity statistics for measure type by treatment group are shown in Table 5. It appears that most of the variation in effect sizes for motor tasks is from studies using measures of accuracy or time on target/in balance.

Analyses Using Continuous Predictors

In order to determine the influence of number of practice sessions and length of practice per session, we conducted separate regression analyses for each predictor variable for each of the four treatment groups. In each regression analysis, we tested for (a) overall significance of the regression model using four polynomial predictors (linear, quadratic, cubic, and quartic), (b) the fit of the regression model (analogous to the H_W homogeneity tests), and (c) Z tests for significance of individual predictors. Table 6 contains the summary statistics for these analyses.

For the number of practice sessions variable, the overall models were significant for the mental practice, physical practice, and combined practice groups, but the chi-squares for model fit were also significant, indicating a large amount of error in the models. For the length of practice per session variable, which was measured in terms of number of practice trials, the overall models were significant for the control, mental practice, and physical practice groups, with the control group having the only nonsignificant chi-square for model fit. Although the control group regression analysis was significant and showed good fit, none of the individual polynomial predictors were significant using a Z test. This may be due to the multicollinearity among
the predictors. Thus, unlike Feltz and Landers (1983), who found a curvilinear relationship between length of practice and effect size, we found no linear or curvilinear relationships between the continuous variables measured and effect size.

DISCUSSION

Comparing across all types of tasks and practice conditions used in the 48 studies reviewed, the results of the meta-analysis showed that the average difference in effect size from pretest to posttest was 0.43 standard deviations (p < .05). Likewise, the average effect size for mental practice was 0.47 (p < .05). The overall learning, as indicated by the magnitude of the difference in pretest to posttest effect sizes, is of similar magnitude to the overall mental practice effect size (0.48) reported by Feltz and Landers (1983). Regardless of whether the effect size was computed using mental practice versus control (Feltz & Landers, 1983) or computed using change-score effect sizes, the resulting effect sizes represent approximately one-half a standard deviation. Considering the marked differences in types of tasks, ages, backgrounds of subjects, and research designs/methodologies employed in the studies subjected to meta-analysis, it is clear that: (a) mental practice does facilitate learning, (b) these results are replicable, and (c) they have surprisingly good generality.

When the overall effect sizes were broken down to examine moderating variables of task type and type of dependent measure, most of the variation was found in tasks that predominantly involved accuracy or tasks that were primarily "motor" in nature
(versus cognitive and strength). The failure to find variation for strength and cognitive tasks, as well as for speed, distance, time-on-target/in-balance, and form dependent measures, was most likely due to the insufficient number of samples in some practice conditions (N < 5).

Examination of the categorical comparisons of practice conditions for the motor and accuracy tasks showed that the learning associated with mental practice was twice as great as the minimal (but significant) learning demonstrated by the subjects in the no practice (control) condition. Compared to physical practice, however, mental practice was 41-45% less effective. These results support the general findings in the literature that physical practice is a more effective learning strategy than mental practice (Weinberg, 1982). Although some learning was achieved by the control subjects, it was 71-73% less than that achieved through physical practice.

Of particular interest in the present meta-analytic review were the categorical comparisons for the combined practice condition. Previous reviewers (Richardson, 1967; Weinberg, 1982) have maintained that a combination of mental and physical practice "is more effective than either physical practice or mental practice alone" (Weinberg, 1982, p. 203). Richardson (1967a) is much more cautious, suggesting only a trend for the motor performance of combined practice to be "as good or better than physical practice trials only" (p. 103). These conclusions were not supported by the findings of the meta-analysis. Where
the number of effect sizes was sufficient for legitimate statistical comparisons to be made,(2) the results showed that the effect sizes for combined practice were always less than those for physical practice. For the effect size summed across types of tasks, as well as the effect sizes for motor and accuracy tasks, the combined practice was respectively 22%, 8%, and 27% less than that achieved by the exclusive employment of physical practice. It appears that overall there is a reduction in performance efficiency when physical practice is replaced by mental practice.

However, there are times when such a loss may be acceptable or even desirable. For example, for some motor or accuracy tasks for which actual physical practice may be either expensive, time-consuming, physically or mentally fatiguing, or potentially dangerous, the small decrements in performance resulting from combined practice may make it an effective teaching-learning strategy, since its effects are nearly as good as physical practice with only half the number of physical practice trials.

With only one exception (Oxendine, 1969), the combined practice consisted of a 50:50 ratio of physical practice to mental practice trials. In Oxendine's (1969) study, only one of the three tasks examined showed differences among the following ratios of physical practice to mental practice trials: 8:0, 6:2, 4:4, and 2:6. The 8:0 and 6:2 ratios had the greatest improvement in time-on-target scores, with means of 4.37 and 4.43,

----------
(2) For task measures of time-on-target/in balance, combined practice actually had a larger difference-score effect size than either physical or mental practice. However, this finding is of questionable significance due to the relatively small number of samples and a much larger standard error of measurement.
respectively. With fewer physical practice trials, the scores were considerably less (i.e., 3.98 for the 4:4 ratio and 2.94 for the 2:6 ratio). Although much more research is needed to confirm these findings, it appears that the conclusions of Richardson (1967a) and Weinberg (1982) may be valid, but only if the ratio of physical to mental practice trials is at least 75:25.
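The percentage comparisons quoted in this discussion follow directly from the weighted mean effect sizes in Tables 3-5; the arithmetic can be checked in a few lines (the helper function is ours; the effect-size values are transcribed from the tables):

```python
def pct_less(smaller, larger):
    """Percent by which `smaller` falls short of `larger`."""
    return 100.0 * (larger - smaller) / larger

# Combined vs. physical practice mean effect sizes:
# overall (Table 3), motor tasks (Table 4), accuracy measures (Table 5).
print(round(pct_less(0.62, 0.79)))  # 22
print(round(pct_less(0.81, 0.88)))  # 8
print(round(pct_less(0.61, 0.84)))  # 27
```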
Table 1

Features of Studies

Study Feature                           Categories

Treatment (Type of Practice)            Control
                                        Mental Practice
                                        Physical Practice
                                        Mental/Physical Practice

Task Type                               Motor
                                        Cognitive
                                        Strength
                                        Endurance

Type of Dependent Measure               Accuracy
                                        Speed
                                        Strength
                                        Power
                                        Form
                                        Distance
                                        Time on Target
                                        Time in Balance
                                        Endurance

Time of Posttest                        Immediately after practice
                                        Delayed

Subject Age Groups                      Elementary
                                        High School
                                        College
                                        Adult

Subject Experience                      Novice
                                        Experienced

Subject Imagery Ability                 Low
                                        High

Effect Size Computation                 Pretest SD used
                                        Posttest SD used
                                        Gain Score SD used
                                        t test used
                                        MS within used

Study Date
Number of Practice Sessions
Length of Practice (Trials or Secs.)
Number of Test Trials
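The alternative computation routes listed under "Effect Size Computation" correspond to the algebraic identities and the bias correction given in the Method section. A sketch of those conversions in Python (our own function names, assuming equal pretest and posttest variances as in the text):

```python
import math

def g_from_gain_sd(pre_mean, post_mean, gain_sd, r):
    """Effect size from a gain-score SD, since Sg is approx. S*sqrt(2(1-r))."""
    return (post_mean - pre_mean) * math.sqrt(2.0 * (1.0 - r)) / gain_sd

def g_from_t(t, n, r):
    """Effect size from a one-group t test of pre-to-post change."""
    return t * math.sqrt(2.0 * (1.0 - r) / n)

def unbiased_d(g, n):
    """Small-sample correction d = c(n-1) * g, with c(m) = 1 - 3/(4m - 1)."""
    return (1.0 - 3.0 / (4.0 * (n - 1) - 1.0)) * g

def var_d(d, n, r):
    """Approximate variance of d: 2(1-r)/n + d^2 / (2(n-1))."""
    return 2.0 * (1.0 - r) / n + d ** 2 / (2.0 * (n - 1))

# The two routes agree: with pre = 10, post = 12, Sg = 4, r = .5, n = 16,
# the one-group t is (12 - 10) / (4 / 4) = 2, and both formulas give g = 0.5.
g1 = g_from_gain_sd(10.0, 12.0, 4.0, 0.5)
g2 = g_from_t(2.0, 16, 0.5)
```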
Table 2

Summary of Study Characteristics

(For each of the 48 studies used in the meta-analysis, the table lists the study authors and date, whether subjects were randomly assigned to groups, whether pretreatment group differences existed, and the method used to compute effect sizes.)
Table 3

Treatment Group Differences Among Effect Sizes

                            Test of                 Mean effect-size
Source               df     Homogeneity      p      estimate (s.e.)

Total                221    788.32          .001    0.43 (0.02)*
Between groups         3    119.63          .001
Within groups        218    668.69          .001
  Control             47    116.71          .001    0.22 (0.03)*
  Mental practice     68    236.54          .001    0.47 (0.03)*
  Physical practice   53    148.28          .001    0.79 (0.05)*
  Combined practice   50    167.17          .001    0.62 (0.05)*
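The pooled estimates and homogeneity statistics in Table 3 follow the fixed-effects formulas of Hedges and Olkin (1985): each effect size is weighted by the inverse of its variance, and the homogeneity statistic is the weighted sum of squared deviations from the pooled mean (a chi-square with k - 1 df under homogeneity). A minimal sketch with made-up inputs, not the study data:

```python
def pooled_and_homogeneity(ds, vs):
    """Inverse-variance weighted mean effect size and homogeneity statistic H."""
    ws = [1.0 / v for v in vs]
    mean = sum(w * d for w, d in zip(ws, ds)) / sum(ws)
    H = sum(w * (d - mean) ** 2 for w, d in zip(ws, ds))
    return mean, H

# Five hypothetical sample effect sizes with their variances.
mean, H = pooled_and_homogeneity([0.2, 0.5, 0.8, 0.4, 0.6],
                                 [0.04, 0.05, 0.06, 0.04, 0.05])
```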
Mental Practice 50

Table 4

Analysis of Change in Type of Task by Treatment Group

                               Test of                Mean effect-size
Source               df      Homogeneity      p       estimate (s.e.)
MOTOR               156        681.69       .001        0.47 (0.02)*
Between groups        3        133.96       .001
Within groups       155        547.74       .001
  Control            32         96.57       .001        0.24 (0.03)*
  Mental practice    53        208.93       .001        0.49 (0.03)*
  Physical practice  38        112.13       .001        0.88 (0.06)*
  Combined practice  32        130.10       .001        0.81 (0.07)*

COGNITIVE             2         13.72       .01         0.95 (0.20)*
Between groups        2         13.72       .01
Within groups         0          0.00
  Control             0          0.00                   0.47 (0.24)
  Mental practice     0          0.00                   2.09 (0.49)
  Physical practice   0          0.00                   2.08 (0.49)
  Combined practice   -           -                        -

STRENGTH             59         54.47       ns          0.21 (0.04)
Between groups        3          5.33       ns
Within groups        56         49.15       ns
  Control            13         16.26       ns          0.11 (0.07)
  Mental practice    13          9.31       ns          0.27 (0.08)
  Physical practice  13         14.36       ns          0.38 (0.11)
  Combined practice  17          9.22       ns          0.18 (0.10)

* p < .05
Mental Practice 51

Table 5

Analysis of Change in Type of Measure by Treatment Group for Motor Type Tasks

                               Test of                Mean effect-size
Source               df      Homogeneity      p       estimate (s.e.)
ACCURACY MEASURE    112        438.55       .001        0.43 (0.02)*
Between groups        3         74.94       .001
Within groups       109        363.61       .001
  Control            26         76.57       .001        0.24 (0.03)*
  Mental practice    43        154.08       .001        0.46 (0.03)*
  Physical practice  22         74.68       .001        0.84 (0.07)*
  Combined practice  18         58.27       .001        0.61 (0.08)*

SPEED MEASURE         8         27.48       .001        0.62 (0.11)*
Between groups        2         14.72       .001
Within groups         6         12.76       .05
  Control             -           -                        -
  Mental practice     2          1.67       ns          0.31 (0.14)
  Physical practice   2         10.67       .01         0.70 (0.22)
  Combined practice   2          0.42       ns          1.40 (0.23)

FORM MEASURE          2          1.55       ns          0.41 (0.21)
Between groups        2          1.55       ns
Within groups         0          0.00       ns
  Control             -           -                        -
  Mental practice     0          0.00                   0.25 (0.27)
  Physical practice   0          0.00                   0.94 (0.48)
  Combined practice   0          0.00                   0.40 (0.42)

* p < .05
Mental Practice 52

Table 5

Analysis of Change in Type of Measure by Treatment Group for Motor Type Tasks (continued)

                               Test of                Mean effect-size
Source               df      Homogeneity      p       estimate (s.e.)
DISTANCE MEASURE      3         11.25       .02         0.56 (0.13)
Between groups        3         11.25       .02
Within groups         0          0.00       ns
  Control             0          0.00                   0.17 (0.18)
  Mental practice     0          0.00                   1.06 (0.26)
  Physical practice   0          0.00                   0.68 (0.30)
  Combined practice   0          0.00                   1.32 (0.47)

TIME ON TARGET/
IN BALANCE           29        163.91       .001        0.89 (0.07)
Between groups        3         67.77       .001
Within groups        25         96.14       .001
  Control             4         19.44       .001        0.16 (0.11)
  Mental practice     4         19.69       .001        1.34 (0.17)
  Physical practice  10         20.21       .05         1.21 (0.14)
  Combined practice   7         36.81       .001        1.57 (0.17)
Mental Practice 53

Table 6

Summary of Regression Analyses for Continuous Predictors

                                    Chi Square          Chi Square
Predictor Variable            df      Model       df      Error
No. of Practice Sessions
  Control                      4       2.13       23      47.79*
  Mental Practice              4      20.59*      36      84.72*
  Physical Practice            4      18.74*      36      98.32*
  Mental/Physical              4      15.12*      39     146.39*
No. of Practice Trials
  Control                      4      17.75*      23      32.17
  Mental Practice              4      37.86*      36      67.45*
  Physical Practice            4      29.82*      36      87.23*
  Mental/Physical              4       3.62       39     157.89*

* p < .05
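The regression analyses summarized in Table 6 fit weighted linear models to the effect sizes, after the continuous-model procedure of Hedges (1982c): the weighted sum of squares accounted for by the model is tested as a chi-square, and the residual sum of squares as a chi-square for model misspecification (error). The sketch below uses a single linear predictor for clarity (Table 6's models carry 4 model df, evidently from additional terms); the data are hypothetical illustrations, not the review's coded values:

```python
import numpy as np

def wls_effect_model(d, v, x):
    """Weighted least-squares fit of d = b0 + b1 * x (weights = 1/v),
    partitioning the total homogeneity statistic into a model chi-square
    (df = 1 here, one predictor) and an error chi-square (df = k - 2),
    after the continuous-model procedure of Hedges (1982c)."""
    d = np.asarray(d, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)
    x = np.asarray(x, dtype=float)
    X = np.column_stack([np.ones_like(x), x])       # design matrix
    W = np.diag(w)
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)   # weighted normal equations
    d_bar = (w * d).sum() / w.sum()
    Q_total = (w * (d - d_bar) ** 2).sum()
    Q_error = (w * (d - X @ b) ** 2).sum()          # residual heterogeneity
    return b, Q_total - Q_error, Q_error

# Hypothetical effect sizes against number of practice trials --
# illustrative values only.
d = [0.22, 0.35, 0.48, 0.67]
v = [0.04, 0.04, 0.04, 0.04]
x = [10, 20, 30, 40]
b, Q_model, Q_error = wls_effect_model(d, v, x)
```

A significant model chi-square with a nonsignificant error chi-square indicates that the predictor (e.g., number of practice trials) accounts for the variation among effect sizes and the fitted model is adequate; a significant error chi-square, as in most rows of Table 6, signals heterogeneity beyond what the predictor explains.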
Mental Practice 54

References

Arnold, E.L.
1965  The relationship between physical and mental practice and initial ability in learning a simple motor skill. Unpublished doctoral dissertation, Indiana University.

Bagg, E.J.K.
1966  Effect of mental and physical practice on baseball batting. Unpublished master's thesis, University of California at Los Angeles.

Beckow, P.A.
1967  A comparison of the effectiveness of mental practice upon the learning of two gross motor skills. Unpublished master's thesis, University of Oregon.

Birge, R.T.
1932  The calculation of errors by the method of least squares. Physical Review, _, 207-227.

Bissonette, R.
1965  The relative effects of mental practice upon the learning of two gross motor skills. Unpublished master's thesis, Springfield College.

Burns, P.L.
1962  The effect of physical practice, mental practice, and mental-physical practice on the development of a motor skill. Unpublished master's thesis, The Pennsylvania State University.

Campbell, D.T. and Stanley, J.C.
1963  Experimental and Quasi-experimental Designs for Research. Chicago: Rand-McNally.
Mental Practice 55

Clark, L.V.
1960  Effect of mental practice on the development of a certain motor skill. Research Quarterly, 31, 560-569.

Cooper, H.M.
1979  Statistically combining independent studies: A meta-analysis of sex differences in conformity research. Journal of Personality and Social Psychology, 37, 131-146.
1982  Scientific guidelines for conducting integrative research reviews. Review of Educational Research, 52, 291-302.
1984  The Integrative Review. Beverly Hills: Sage.

Corbin, C.B.
1967  The effect of covert rehearsal on development of a complex motor skill. Journal of General Psychology, 76, 143-150.
1972  Mental practice. In W.P. Morgan (Ed.), Ergogenic Aids and Muscular Performance. New York: Academic Press.

Crank, J.M.
1967  The effect of physical practice, mental practice, and physical-mental practice on the development of arm strength. Unpublished doctoral dissertation, Florida State University.

Dunbar, D.W.
1970  The effect of four designs of physical-mental practice upon the learning of the front crawl. Unpublished master's thesis, University of Maryland.
Mental Practice 56

Egstrom, G.H.
1964  Effect of an emphasis on conceptualizing techniques during early learning of a gross motor skill. Research Quarterly, 35, 472-481.

Eideness, C.L.
1965  The effect of physical, mental-physical, and mental practice on the learning of a motor skill. Unpublished master's thesis, South Dakota State University.

Epstein, M.L.
1980  The relationship of mental imagery and mental rehearsal to performance of a motor task. Journal of Sport Psychology, 2, 211-220.

Eysenck, H.
1978  An exercise in mega-silliness. American Psychologist, 33, 517.

Feltz, D.L. and Landers, D.M.
1983  The effects of mental practice on motor skill learning and performance: A meta-analysis. Journal of Sport Psychology, _, 25-57.

Fisher, R.A.
1932  Statistical Methods for Research Workers (4th ed.). London: Oliver & Boyd.

Glass, G.V.
1976  Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 3-8.

Glass, G.V., McGaw, B., and Smith, M.L.
1981  Meta-analysis in Social Research. Beverly Hills, CA: Sage.
Mental Practice 57

Gondola, J.C.
1966  A comparison of the effectiveness of programs of physical practice, mental practice, and a combined physical and mental practice on the performance of a selected test of balance. Unpublished master's thesis, Purdue University.

Hall, E.G.
1981  The effect of positive visual imagery on free throw accuracy of intercollegiate women basketball players. Unpublished manuscript. (Available from E.G. Hall, School of Health, Physical Education and Recreation, Louisiana State University, Baton Rouge, LA 70803).

Harby, S.F.
1952  Comparisons of mental and physical practice in the learning of a physical skill. U.S.N. Spec. Dev. Cen. Tech. Rep. S.D.C. 269, 7-25.

Hedges, L.V.
1981  Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, _, 107-128.
1982a Estimation of effect size from a series of independent experiments. Psychological Bulletin, 92, 490-499.
1982b Fitting categorical models to effect sizes from a series of experiments. Journal of Educational Statistics, 7, 119-137.
1982c Fitting continuous models to effect size data. Journal of Educational Statistics, 7, 245-270.
Mental Practice 58

1983  A random effects model for effect sizes. Psychological Bulletin, _, 388-395.
1984  Advances in statistical methods for meta-analysis. In W.H. Yeaton and P.M. Wortman (Eds.), Issues in Data Synthesis. New Directions for Program Evaluation, no. 24. San Francisco: Jossey-Bass.

Hedges, L.V. and Olkin, I.
1985  Statistical Methods for Meta-analysis. New York: Academic Press.

Howe, D.P.
1967  The influence of five schedules of mental practice upon the physical performance of a novel gross motor skill after a criterion measure of skill has been attained. Unpublished doctoral dissertation, Texas Woman's University.

Jackson, G.B.
1980  Methods for integrative reviews. Review of Educational Research, 50, 438-460.

Johnson, B.L.
1967  An examination of some factors which might be related to effective utilization of mental practice in learning a gross motor skill. Unpublished master's thesis, University of Oregon.

Kelsey, I.B.
1961  Effects of mental practice and physical practice upon muscular endurance. Research Quarterly, 32, 47-54.

Kovar, S.V.
1969  The relative effects of physical, mental, and combined
Mental Practice 59

mental-physical practice in the acquisition of a motor skill. Unpublished master's thesis, University of Illinois.

LaLance, R.C., Jr.
1974  A comparison of traditional instruction, mental practice, and combined physical-mental practice upon the learning of selected motor skills. Unpublished doctoral dissertation, Middle Tennessee State University.

Luebke, L.L.
1967  A comparison of the effects of varying schedules of mental and physical practice trials on the performance of the overarm softball throw. Unpublished master's thesis, University of Wisconsin-Madison.

Maxwell, J.W.
1968  The effect of mental practice on the learning of the overhand volleyball serve. Unpublished master's thesis, Central Missouri State College.

McKeown, B.C.
1967  The effect of physical, mental-physical, and mental practice on the learning of the modified triple jump. Unpublished master's thesis, South Dakota State University. (Unobtainable)

Mendoza, D., and Wickman, H.
1978  "Inner" darts: Effects of mental practice on performance of dart throwing. Perceptual and Motor Skills, 47, 1195-1199.
Mental Practice 60

Murphy, T.J.
1977  The effects of mental warm-up on jump shooting accuracy among selected boys' high school basketball players. Unpublished master's thesis, South Dakota State University.

Noel, R.C.
1980  The effect of visuo-motor behavior rehearsal on tennis performance. Journal of Sport Psychology, 2, 221-226.

Oxendine, J.B.
1969  Effect of mental and physical practice on the learning of three motor skills. Research Quarterly, 40, 755-763.

Perry, H.M.
1939  The relative efficiency of actual and imaginary practice in five selected tasks. Archives of Psychology, _, 5-75.

Powell, G.E.
1973  Negative and positive mental practice in motor skill acquisition. Perceptual and Motor Skills, 37, 312.

Pruner, S.W.
1971  The effects of three methods of practice on improving the performance of a modified free-throw by sixth grade girls. Unpublished master's thesis, North Texas State University.

Raudenbush, S.W. and Bryk, A.S.
1985  Empirical Bayes meta-analysis. Journal of Educational Statistics, 10, 75-98.
Mental Practice 61

Rawlings, E.I., and Rawlings, I.L.
1974  Rotary pursuit tracking following mental rehearsal as a function of voluntary control of visual imagery. Perceptual and Motor Skills, 38, 302.

Rawlings, E.I., Rawlings, I.L., Chen, S.S., and Yilk, M.D.
1972  The facilitating effects of mental rehearsal in the acquisition of rotary pursuit tracking. Psychonomic Science, _, 71-73.

Richardson, A.
1967a Mental practice: A review and discussion. Part I. Research Quarterly, _, 95-107.
1967b Mental practice: A review and discussion. Part II. Research Quarterly, 38, 264-273.

Rodriguez, G.J.
1967  A comparison of the effects of mental and physical practice upon abdominal strength in high school girls. Unpublished master's thesis, University of North Carolina at Greensboro.

Rosenthal, R. and Rubin, D.B.
1982  Comparing effect sizes of independent studies. Psychological Bulletin, 92, 500-504.

Ryan, D.E., and Simons, J.
1981  Cognitive demand, imagery, and frequency of mental rehearsal as factors influencing acquisition of motor skills. Journal of Sport Psychology, 3, 35-45.
1982  Efficacy of mental imagery in enhancing mental rehearsal of motor skills. Journal of Sport Psychology, 4, 41-51.
Mental Practice 62

1983  What is learned in mental practice of motor skills: A test of the cognitive-motor hypothesis. Journal of Sport Psychology, 5, 419-426.

Sackett, R.S.
1935  The relationship between amount of symbolic rehearsal and retention of a maze habit. Journal of General Psychology, _, 113-128.

Seiderman, A. and Schneider, S.
1983  The Athletic Eye. New York: Hearst Books.

Sheldon, M.F.
1963  An investigation of the relative effects of mental practice and physical practice in improving the efficiency of the breast stroke. Unpublished master's thesis, University of Oregon.

Slavin, R.E.
1984  Meta-analysis in education: How has it been used? Educational Researcher, 13, 6-15.

Smith, L.E., and Harrison, J.S.
1962  Comparison of the effects of visual, motor, mental and guided practice upon speed and accuracy of performance of a simple eye-hand coordination task. Research Quarterly, 33, 299-307.

Smyth, M.M.
1975  The role of mental practice in skill acquisition. Journal of Motor Behavior, 7, 199-206.

Spears (Alexander), C.L.
1966  The effect of mental practice and physical practice in
Mental Practice 63

learning the running high jump for college women. Unpublished master's thesis, Arkansas State College.

Standridge, J.O.
1971  The effect of mental, physical, and mental-physical practice in learning the whip kick. Unpublished master's thesis, University of Tennessee.

Start, K.B.
1962  The influence of subjectively assessed games ability on gain in motor performance after mental practice. Journal of General Psychology, 67, 169-172.

Stebbins, J.
1968  A comparison of the effects of physical and mental practice in learning a motor skill. Research Quarterly, 39, 714-720.

Stephens, M.
1966  The relative effectiveness of combinations of mental and physical practice on performance scores and level of aspiration scores for an accuracy task. Unpublished master's thesis, University of North Carolina at Greensboro.

Surburg, P.R.
1968  Audio, visual, and audio-visual instruction with mental practice in developing the forehand tennis drive. Research Quarterly, 39, 728-734.
1976  Aging and effect of physical-mental practice upon acquisition and retention of a motor skill. Journal of Gerontology, 31, 64-67.
Mental Practice 64

Trussell, E.M.
1952  Mental practice as a factor in the learning of a complex motor skill. Unpublished master's thesis, University of California at Berkeley.

Tufts, S.A.
1963  The effects of mental practice and physical practice on the scores of intermediate bowlers. Unpublished master's thesis, University of North Carolina at Greensboro.

Twining, W.E.
1949  Mental practice and physical practice in learning a motor skill. Research Quarterly, 20, 432-435.

Weinberg, R.S.
1982  The relationship between mental preparation strategies and motor performance: A review and critique. Quest, 33, 195-213.

White, K.D., Ashton, R., and Lewis, S.
1978  Learning a complex skill: Effects of mental practice, physical practice, and imagery ability. International Journal of Sport Psychology, 10, 71-78.

Whitehill, H.P.
1964  The effects of variations of mental practice on learning a motor skill. Unpublished master's thesis, University of Oregon.
1965  The effects of mental practice on children's learning and retention of gross-motor skills. Unpublished doctoral dissertation, University of Oregon.
Mental Practice 65

Whiteley, G.
1962  The effect of mental rehearsal on the acquisition of motor skill. Unpublished diploma in education dissertation, University of Manchester.

Whitworth, P.
1986  Effects of internal imagery and experiential state on the performance of intercollegiate smallbore rifle shooters. Unpublished master's thesis, Department of Physical Education, Western Kentucky University.

Wills, B.J.
1966  Mental practice as a factor in the performance of two motor tasks. Unpublished doctoral dissertation, University of Wisconsin, Madison.

Wills, K.C.
1965  The effect of mental practice and physical practice on learning a motor skill. Unpublished master's thesis, Arkansas State College.

Wilson, M.
1960  The relative effect of mental practice and physical practice in learning the tennis forehand and backhand drives. Unpublished doctoral dissertation, State University of Iowa.

Woolfolk, R.L., Murphy, S.M., Gottesfeld, D., and Aitken, D.
1985  Effects of mental rehearsal of task motor activity and mental depiction of task outcome on motor skill performance. Journal of Sport Psychology, 7, 191-197.