
**Review of Related Literature**

//Mega Math Blaster// is educational math software designed by Davidson to help develop students' skills and confidence in math. Designed for students ages 6-12, the software contains five different games involving addition, subtraction, multiplication, division, number patterns, estimation, fractions, decimals, and percents, with six levels of difficulty through which students can progress. The game is set in a new universe controlled by robots and machines, and the goal is for the "heroes" to save the humans from them. Students build speed and accuracy in math skills while engaging in a fun game (http://www.tcnj.edu/~technj/spr97/blaster.html).

Harcourt included //Mega Math Blaster// as a supplemental tool with the textbooks and materials recently purchased by my school system. Most of the teachers in my school have yet to use the software, mostly due to time constraints. To determine whether use of this software is worth the time and effort required, I plan to conduct research on the effect //Mega Math Blaster// has on students' performance in math.

Computer-assisted instruction (CAI) is a widely studied and supported method of teaching. Quyang (1993) defines CAI as "any program that augments, teaches, or simulates the learning environment used in the traditional classroom." Many studies have reported significant improvements in student performance with CAI. For my research, I reviewed articles describing previously conducted studies in order to gauge the effectiveness of CAI on student achievement in math. While some of the studies showed a positive correlation between the use of CAI and student performance, others yielded no significant results, and one actually showed a negative correlation between the use of CAI and student performance in math.
When examining the effects of a researcher-designed CAI program known as FLASH on number combination skill in at-risk first graders, Fuchs et al. (2006) analyzed data collected via pre- and post-tests. Using a one-way analysis of variance (ANOVA), the researchers found a significant effect of CAI on students' addition skills. However, they found no significant effect on subtraction skills or on transfer to arithmetic story problems. The researchers acknowledged that they did not account for participants' keyboarding skills, which could have clouded the measured effectiveness of the CAI. This is something to keep in mind when conducting this type of research at the elementary level.

Schoppek and Tulis (2010) conducted similar research to determine how much a moderate amount of CAI could contribute to improvement in students' achievement in arithmetic and word-problem solving. The researchers conducted a study using //Merlin's Math Mill//, software they developed themselves, to determine its effectiveness in improving students' math performance even when only minimal amounts of CAI were administered. Via pre- and post-testing and ANOVA, they found that students' performance improved significantly with even a small amount of CAI.

Tienken and Maher (2008) used a quasi-experimental pretest/posttest control-group design to determine the effect of CAI on eighth-grade students' math performance. Rather than a particular software program, the researchers used various math drill-and-practice websites. They found that the CAI intervention did not improve student achievement significantly. In fact, students who received CAI performed significantly lower than students who did not in two of the categories the researchers examined in the post-test data.
The researchers noted that the post-test used, the New Jersey Grade Eight Proficiency Assessment (GEPA), required problem-solving skills that the drill-and-practice CAI did not address.

For their research, Vogel, Greenwood-Ericksen, Cannon-Bowers, and Bowers (2006) compared CAI using a gaming format with simulations to CAI using a traditional format to determine the effects of each on student motivation and performance. They found a significant improvement in math skills in the gaming-format CAI condition but not in the traditional CAI condition. They believe students are more motivated to learn material presented in a game-like format and therefore recommend integrating a simulation-based approach into CAI. They also state that "the connection between the material to be learned and the game play experience must be seamless" (Vogel, Greenwood-Ericksen, Cannon-Bowers, & Bowers, 2006).

Martindale, Pearson, Curda, and Pilcher (2005) examined the effect of FCAT Explorer CAI software on elementary and high school students' performance on the Florida Comprehensive Assessment Test (FCAT). The researchers used hierarchical analysis of variance and analysis of covariance to compare scores for schools that used the CAI and those that did not. At the elementary level, they found a significant difference in scores.
However, they did not find a significant difference in students' scores at the high school level. One possible explanation for this lack of effect is that high school teachers may not feel the program is necessary, or they may not have time to implement it. Elementary teachers may perceive more pressure to prepare students for future rounds of high-stakes testing and thus may be more likely to use all resources available to them. Based upon their findings, the researchers concluded that CAI is effective in promoting student achievement in the elementary grades.

Researchers have found that providing challenging material at the right instructional level leads to improved outcomes in reading and math (Ysseldyke & Bolt, 2007). CAI is one means of providing instruction at the appropriate level for each student. Too often, teachers "teach to the middle," so students who need acceleration or remediation do not always receive it.

According to Ke (2008), there are three potential uses of computer games in a school environment: to promote general cognitive abilities and skills, affective and motivational aspects, and knowledge- and content-related learning. Though research has shown mixed results regarding the use of computer games for content-related learning, much empirical evidence supports the claim that computer games increase student motivation. Students find computer games engaging and fun; if they are more motivated, they will put forth greater effort and complete more problems than they would otherwise, which should have a positive impact on their learning.
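Several of the studies reviewed above (e.g., Fuchs et al., 2006; Schoppek & Tulis, 2010) compare groups using a one-way ANOVA on pre-/post-test data. As a rough illustration of what that computation involves, the sketch below computes the ANOVA F statistic for two groups of gain scores; the function is generic, and the sample numbers are invented, not data from any of the studies reviewed.

```python
def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    all_scores = [x for g in groups for x in g]
    n_total, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n_total
    # Between-group sum of squares: distance of each group mean
    # from the grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variability inside each group.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)        # between-group mean square
    ms_within = ss_within / (n_total - k)    # within-group mean square
    return ms_between / ms_within

# Invented gain scores (post-test minus pre-test) for two hypothetical groups:
cai_gains = [8, 10, 7, 9, 11, 6]
control_gains = [4, 5, 3, 6, 4, 5]
print(round(one_way_anova_f(cai_gains, control_gains), 2))  # F ≈ 20.87
```

An F statistic large relative to the critical value for (k - 1, n - k) degrees of freedom is what allows researchers to call a between-group difference "significant."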

= ARTICLE SUMMARIES =

Butler, M., Pyzdrowski, L., Goodykoontz, A., & Walker, V. (2008). The effects of feedback on online quizzes. //The International Journal for Technology in Mathematics Education//, 15(4), 131-136. Retrieved August 22, 2010, from Research Library. (Document ID: 1624435951).

This article outlines a study conducted at West Virginia University to determine the effects of online feedback on student performance. Participants were 373 students from five sections of a Pre-Calculus course at the university during the fall 2006 semester. Students in three of the five sections were randomly assigned to receive feedback on online homework quizzes taken prior to the unit tests. Through this feedback, students learned their quiz scores and which questions were incorrect; the correct answers were not revealed. The students in the remaining two sections received no feedback on their online homework quizzes. In addition to the homework quizzes, participants took a retired version of the ACT, and math data collected from this test were also used to determine the overall effectiveness of the online feedback. Overall, the findings supported the notion that the immediate feedback of online quizzes is valuable and may lead to improved student achievement.

Eid, G.K. (2005). An investigation into the effects and factors influencing computer-based online math problem-solving in primary schools. //Journal of Educational Technology Systems//, 33(3), 223-240. Retrieved from ERIC database.

The purpose of this study was to determine how students taking online computer-based math problem-solving tests compare in performance with students taking paper-and-pencil math problem-solving tests. The researchers also wanted to determine whether computer anxiety and experience affect students' online test scores. Experimental research was conducted with two classes at an all-girls private K-12 school in Kuwait. The computer-based tests were identical to the paper-and-pencil tests except for some changes in format. Class A, with 14 subjects, completed the online tests during the first week and the paper-and-pencil test the second week; Class B, with 17 subjects, completed the paper-and-pencil test first and the online tests second. A 5-point rating scale measured students' attitudes toward using the computer, and a 7-point rating scale measured their computer anxiety. Statistical analyses revealed that students achieved similar scores on computer-based and paper-and-pencil tests and that computer anxiety and experience did not affect students' online test scores.

Franklin, T. & Peng, L. (2008). Mobile math: math educators and students engage in mobile learning. //Journal of Computing in Higher Education//, 20(2), 69-80. Retrieved September 4, 2010, from ERIC database.

This article outlines a case study conducted to determine the effectiveness of using the //iPod Touch// to help middle school students learn about algebraic equations. The researcher is a tenured faculty member at a Midwestern university with 20 years of experience teaching math and math-science. Participants included 39 eighth-grade students from two classes in rural Southeastern Ohio. Students were provided //iPod Touch// devices and were asked to create videos explaining mathematical concepts selected by their teachers; they also had access to textbooks, //iTunes//, and computers for creating and downloading their movies. The two teachers participating in the project received professional development to learn how to use the //iPod//. The researcher collected qualitative data on students' and teachers' perceptions of using the technology in the math curriculum through student journals, surveys, and interviews. The feedback was largely positive, and the researcher concluded that "the use of the //iPod Touch// to build math videos was viable in the middle school studied" (Franklin & Peng, 2008).

Fuchs, L.S., Fuchs, D., Hamlet, C.L., Powell, S.R., et al. (2006). The effects of computer-assisted instruction on number combination skill in at-risk first graders. //Journal of Learning Disabilities//, 39(5), 467-475. Retrieved September 19, 2010, from ProQuest Education Journals. (Document ID: 1134779611).

This article describes a pilot study conducted to assess the potential for computer-assisted instruction (CAI) to enhance number combination skill. Participants were 33 students from nine first-grade classrooms whom teachers had rated as having low reading and mathematics competencies. The students were randomly assigned in blocks within classrooms to receive either math CAI or spelling CAI. (The spelling CAI was a secondary focus, incorporated as a control measure to ensure that both groups received the same classroom instruction.) Students in the two groups were comparable on demographic variables and received 50 ten-minute sessions of CAI over 18 weeks. Gathering data with a pre- and posttest model, the researchers found that the math CAI was effective in promoting addition but not subtraction skills and that transfer to addition story problems did not occur. They also found that the spelling CAI proved more effective than the math CAI. The researchers feel that further study is needed, involving more participants over a longer time period.

Houssart, J. & Sams, C. (2008). Developing mathematical reasoning through games of strategy played against the computer. //The International Journal for Technology in Mathematics Education//, 15(2), 59-71. Retrieved September 24, 2010, from ProQuest Education Journals. (Document ID: 1508949131).

This article details a study of the attempts of 9-11 year old students in England to beat the computer at //Lines//, a program from the //Co-ordinates// software package. //Lines// is a game of strategy similar to //Connect Four//. The study was part of a larger project known as the "Thinking together with SMILE project," directed by Rupert Wegerif at The Open University. Researchers wanted to answer two questions: how far, and under what circumstances, do children develop mathematical reasoning when using computer games of the strategic type, and how significant are rules of discourse in facilitating any such development (Houssart & Sams, 2008)? They began by allowing the teachers to select appropriate software, feeling it was important to choose software that went beyond drill and practice. Researchers then introduced the nine participating teachers to the "talk rules" and suggested lesson outlines to be used with students. These "talk rules" required that all ideas be shared and respected, that reasons be given for students' ideas, and that students working together try to agree in the end. Lessons consisted of whole-class introductions, children working in groups of two or three to try to beat the computer at the game, and whole-class sessions during which students evaluated the games and the quality of the group talk. Sessions were videotaped and analyzed. Through these observations, the researchers found that students used mathematical language and predictions and were motivated to try to beat the computer. Because the computer used "expert" strategies, the students had to develop their own expert strategies in order to win. The researchers felt that the teachers played key roles in modeling appropriate behavior and facilitating the discussions. All of these components led students to use mathematical reasoning skills.
Thus, it was concluded that computers can play a valuable role in the primary mathematics classroom.

Ke, F. (2008). Alternative goal structures for computer game-based learning. //International Journal of Computer-Supported Collaborative Learning//, 3(4), 429-445. Retrieved from ProQuest Education Journals. (Document ID: 1897191811).

This article describes research conducted to examine the effects of game-based learning under alternative classroom goal structures. A pretest-posttest control-group design was employed. Participants included 160 fifth-grade students from eight public school classes in Pennsylvania, varying in gender and socioeconomic status. They were randomly assigned by class to one of four groups: Teams-Games-Tournament cooperative game-based learning, competitive game-based learning, individualistic game-based learning, or a control group that did not participate in gaming at all. Four ASTRA EAGLE games were used for this research. A web-based, 30-item multiple-choice test known as the Games Skills Arithmetic Test (GSAT) and a modified version of Tapia's "Attitudes Toward Math Inventory" (ATMI) were used for pre- and post-testing. A MANCOVA revealed no significant difference among the three gaming groups in math performance; however, there was a significant difference between the math test performance of the three game groups and that of the control group.

Martindale, T., Pearson, C., Curda, L.K., & Pilcher, J. (2005). Effects of an online instructional application on reading and mathematics standardized test scores. //Journal of Research on Technology in Education//, 37(4), 349-360. Retrieved from ProQuest Education Journals. (Document ID: 856778061).

This article outlines research conducted to determine the impact of students' use of FCAT Explorer software on reading and math performance on the Florida Comprehensive Assessment Test (FCAT). Fourth-, fifth-, eighth-, and tenth-grade students from 24 schools participated in the study: twelve schools formed the experimental group and used the software, while the remaining twelve formed the control group and did not. The researchers analyzed FCAT reading and math scores from 2001 and 2002, using a hierarchical analysis of variance for each grade level. They concluded that there were significant gains in scores for students who used the software at the elementary level but not at the high school level. Possible reasons for the lack of significant gains among high school students are given in the article, and the researchers feel that more research is needed.

Mendicino, M., Razzaq, L., & Heffernan, N.T. (2009). A comparison of traditional homework to computer-supported homework. //Journal of Research on Technology in Education//, 41(3), 331-359. Retrieved September 4, 2010, from ERIC database.

This article details a study conducted to determine whether students learn more in math by doing their homework on computers than by completing traditional paper-and-pencil homework. The researchers studied 92 students in four fifth-grade classrooms in a small, rural town. Two classes were assigned paper-and-pencil math homework problems, while the other two were assigned the same problems on computers via the ASSISTment system. Through the ASSISTment system, students received immediate feedback as well as scaffolded tutoring when they struggled with problems. Teachers had access to reports detailing which concepts students did and did not understand, and they also benefited from having student homework graded for them. Students who completed the same 10 problems using paper and pencil received a review the next class day and were allowed to ask questions about the problems. All students were pre- and post-tested on the material, and t-tests were run. Though learning occurred in both groups, the mean gain for students in the Web-based homework group was more than double that of the students in the paper-and-pencil homework group. Thus, the researchers concluded that by using a Web-based system such as the ASSISTment system, fifth-grade math students can learn more than they would by doing traditional paper-and-pencil homework. More research is planned to determine whether the results will be similar in other grades and other areas of the curriculum.
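The t-tests mentioned above compare each group's pre- and post-test scores. As a minimal sketch of one common form of that analysis, the code below computes a paired t statistic from per-student gains; the scores are invented for illustration, not data from the study, and the study's exact analysis may differ.

```python
import math

def paired_t(pre, post):
    """t statistic for paired pre-/post-test scores."""
    diffs = [b - a for a, b in zip(pre, post)]   # per-student gains
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the gains (n - 1 in the denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)         # mean gain / standard error

pre_scores = [60, 55, 70, 65, 50]    # invented pre-test scores
post_scores = [68, 60, 75, 74, 58]   # invented post-test scores
print(round(paired_t(pre_scores, post_scores), 2))
```

A large t statistic relative to the critical value for n - 1 degrees of freedom indicates that the mean gain is unlikely to be due to chance.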

Merrett, S. & Edwards, J. (2005). Enhancing mathematical thinking with an interactive whiteboard. //Micromath//, 21(3), 9-12. Retrieved September 11, 2010 from ProQuest Education Journals.

This article provides details of research conducted by a recipient of a Best Practice Research Scholarship to examine aspects of the use of interactive whiteboards (IWBs) to teach mathematics. The author focused her research on the visual impact of IWBs. Recognizing the importance of selecting the proper software to use in conjunction with her IWB, she opted to use //Crocodile Math// software to deliver shape, space, and measurement lessons. Her participants were "more able" Year 7 students, 11-12 year olds (Merrett & Edwards, 2005). She utilized mixed methods of data collection, using quantitative data in the form of KS2 levels and CAT scores as baseline data and as a means of comparison after instruction. She also used qualitative data in the form of student learning logs and interviews to gather evidence of students' reactions to the IWB lessons; students were asked to share what they liked and disliked about using IWBs to learn the math concepts. Examining her results, the researcher concluded that there was a slight improvement in National Curriculum levels from the previous year, though she admitted that these were "non-identical cohorts" (Merrett & Edwards, 2005). From the student interviews and learning logs, she concluded that students enjoyed the lessons as presented with the IWBs. By examining test score data and students' learning logs, she also determined that the use of IWBs and the selected software was effective in teaching the math concepts.

Rosen, D., & Hoffman, J. (2009). Integrating concrete and virtual manipulatives in early childhood mathematics. //YC Young Children//, 64(3), 26-29, 31-33. Retrieved August 22, 2010, from ProQuest Education Journals.

In this article, the authors explore two kinds of virtual manipulatives: virtual pattern and attribute blocks, and virtual geoboards. They provide useful information for teachers, including free ways to access the materials and ideas for how kindergarten and first-grade teachers could use them in instruction. The virtual manipulatives work much like three-dimensional manipulatives: students can use the mouse to slide, flip, rotate, and turn them. The authors do not advocate replacing three-dimensional manipulatives with virtual ones; they simply offer virtual manipulatives as a way to use technology appropriately to enhance young children's mathematical development. It is important that teachers give students time to explore the technology before asking them to work on specific activities. "Through early exposure to computer-based mathematics activities, children become better equipped to successfully navigate our digital world" (Rosen & Hoffman, 2009).

Schoppek, W. & Tulis, M. (2010). Enhancing arithmetic and word-problem solving skills efficiently by individualized computer-assisted practice. //The Journal of Educational Research//, 103(4), 239-252. doi: 10.1080/00220670903382962.

This article summarizes a study performed in Bayreuth, Germany, to "investigate how much a moderate amount of individualized practice contributes to the improvement of pupils' achievements in arithmetic and mathematical problem solving" (Schoppek & Tulis, 2010). The authors developed a software program, //Merlin's Math Mill// (MMM), and used it as the basis of their research. Participants included students in nine third-grade classrooms, and two experiments were conducted. In the first experiment, student participants used the software for one hour each week for seven weeks; the researchers were unable to establish a control group for this experiment. In the second experiment, the researchers were able to assign students to groups themselves: participants from two classes used the software in the regular classroom for one 45-minute lesson each week, while participants from the other two classes did not use the MMM software and served as the control group. The data collected from both experiments were quantitative, consisting of students' scores on two tests administered at the end of the experiment (one designed by the researchers and one the DMAT 3+). The researchers conducted a MANCOVA to determine the effects of individualized practice on computation and word problems. While methodological limitations of the field research did not allow the researchers to draw definite conclusions, the study supported the notion that a moderate amount of computer-assisted individualized practice can improve students' mathematics performance significantly (Schoppek & Tulis, 2010).

Sturdivant, R., Dunham, P., & Jardine, R. (2009). Preparing mathematics teachers for technology-rich environments. //Primus: Problems, Resources, and Issues in Mathematics Undergraduate Studies//, 19(2), 161-173. Retrieved August 22, 2010, from ProQuest Education Journals.

The authors of this article describe key elements of faculty development programs that prepare mathematics teachers for technology-rich environments. They recommend that faculty development leaders: focus on one course at a time; provide concrete examples and classroom-ready materials; use interesting problems to introduce functionalities and syntax in context; involve participants in hands-on activities; show technology in a variety of roles; show examples of good and bad uses of technology; provide resources to support implementation; suggest ways to monitor student progress and manage group work; engage participants in dialogue about classroom practice and research; and promote gradual change (Sturdivant, Dunham, & Jardine, 2009). The hope is that by following these guidelines, faculty development leaders can promote the use of technology to teach mathematics and create exciting learning environments.

Tienken, C., & Maher, J. (2008). The influence of computer-assisted instruction on eighth grade mathematics achievement. //RMLE Online//, 32(3), 1-13. Retrieved September 19, 2010, from ProQuest Education Journals. (Document ID: 1805427831).

This article describes a study conducted in a New Jersey middle school to determine whether there was a measurable difference in achievement on the New Jersey Grade Eight Proficiency Assessment (GEPA) between eighth-grade students who received computer-assisted instruction (CAI) in mathematics and those who did not. This was a quantitative, quasi-experimental study that used a pre- and posttest model to assess student achievement. The data showed that students in the experimental group (who received drill-and-practice CAI) did not outperform students in the control group (who did not receive CAI); in some instances, the data suggested that CAI may have had a negative influence on student achievement. The researchers believe that middle school teachers should consider replacing drill-and-practice CAI with problem-based instruction or other types of active learning in order to improve students' problem-solving skills in mathematics.

Vogel, J.J., Greenwood-Ericksen, A., Cannon-Bowers, J., & Bowers, C.A. (2006). Using virtual reality with and without gaming attributes for academic achievement. //Journal of Research on Technology in Education//, 39(1), 105-118. Retrieved September 24, 2010, from ProQuest Education Journals. (Document ID: 1145391471).

This article examines the results of a study conducted in a public elementary school in Florida to determine whether students learn material better from computer programs with gaming attributes in a three-dimensional format or from CAI presented in a traditional two-dimensional format. The researchers incorporated traditional educational approaches in both formats, opting to forgo the simulation-related approaches used in other studies; they wanted to see whether such programs, which are easier and less expensive to create, were as effective as those using simulation approaches. Forty-two students from second through fifth grade participated; some students were deaf, and some could hear normally. A quasi-experimental design was used, with students randomly split into two groups: the control group used the traditional CAI program, while the experimental group used the program with gaming attributes. Each group worked on its respective program 10 minutes a day over a two-week period, and a pre- and posttest model was used. Analyzing the quantitative data with a one-tailed exact probability test, the researchers found no significant improvement for either group in language arts; however, the same analysis indicated a significant increase in math scores under both conditions. When a two-tailed exact probability test was applied to the math section, there was a significant difference in the pre- and posttest scores of the two groups, suggesting that students using the traditional CAI program learned more than those using the experimental program with gaming attributes. The researchers had anticipated that three-dimensional images would make the concepts more concrete and improve performance, and they provided several possible reasons for the surprising results.
In summary, they stated that learning games must be engaging and motivating in order to promote improved student performance. This study suggests that software using a traditional linear approach to teach concepts may be less effective for student achievement than software using simulation formats.

Ysseldyke, J., & Bolt, D.M. (2007). Effect of technology-enhanced continuous progress monitoring on math achievement. //School Psychology Review//, 36(3), 453-467. Retrieved August 22, 2010, from ProQuest Psychology Journals. (Document ID: 1347553591).

This article details a study conducted by the authors to determine whether the implementation of a progress monitoring system would improve students' scores on standardized math tests. The participants included teachers and students in 8 schools across 7 districts in 7 states, with students ranging from the elementary to middle school levels. AM software from Renaissance Learning was provided to each teacher in the experimental groups; the teachers also had access to training and technical support enabling them to use the progress monitoring software to gather the data needed to differentiate instruction to meet the needs of all students. The researchers used two independent measures for data collection: STAR Math from Renaissance Learning and the math subtests of the TerraNova. The data showed that students whose teachers used the AM system for continuous progress monitoring significantly outperformed students whose teachers used only the math curriculum provided by their school districts.