"Upgrading Education with Technology: Insights from Experimental Research"
23 Jan 2021 14:35 - 23 Jan 2021 14:41 #23146
by Loys
In the 2020 Journal of Economic Literature:
"Upgrading Education with Technology: Insights from Experimental Research"
by Maya Escueta, Andre Joshua Nickow, Philip Oreopoulos, and Vincent Quan.
7. Conclusion
Technology has transformed large segments of society in ways that were once considered unimaginable. Education is no exception. Around the world, there is tremendous interest in leveraging technology to transform how students learn. In the coming years, new uses of edtech will continue to flood the market, providing students, parents, and educators with a seemingly limitless array of options. And experimental literatures are beginning to emerge in new domains, including in-class technology like iClickers (Lantz and Stawiski 2014) and adult education offered through text messages and other new platforms (Ksoll et al. 2014). Amid the buzz and sizeable investment in edtech, we aim to step back, take stock of what we currently know from the experimental evidence in this nascent field, and shed light on how technology may impact the education production function. This review hopes to advance the knowledge base by identifying and discussing the most promising uses of edtech to date that have been explored with experimental research and by highlighting areas that merit further exploration. We contextualize our discussion within a framework for how education technology may or may not address traditional education constraints and increase the efficiency of investments in cognitive and noncognitive skill formation. Our review groups the existing literature into four categories: (i) access to technology, (ii) CAL, (iii) online courses, and (iv) technology-enabled behavioral interventions. Taken together, these studies and their potential mechanisms of impact suggest a few areas of promise for edtech and potential reasons why edtech in some circumstances successfully contributes to the development of cognitive and noncognitive skills.

We found that simply providing students with access to technology yields largely mixed results.
At the K–12 level, much of the experimental evidence suggests that giving a child a computer may have limited impacts on learning outcomes, but generally improves computer proficiency and other cognitive outcomes. While access to technology likely improves the learning environment by expanding opportunities for learning, by and large, increased access alone does not seem to advance cognitive skill formation. One bright spot that warrants further study is the provision of technology to students at the postsecondary level, an area where the limited RCT evidence has reported positive impacts. This may be due to the increased necessity of technology—and perhaps technology’s more critical role in skill formation—during later stages of the education life cycle.

From our review, CAL and technology-enabled behavioral interventions emerge as two areas that show considerable promise. CAL has the potential to advance cognitive skill formation by mitigating standard teacher and classroom constraints, thereby increasing the efficiency of investments. Especially when equipped with personalization features, CAL has been shown to be quite effective in helping students, particularly with math. Two interventions in the United States stand out as particularly promising—a fairly low-intensity online program that provides students with immediate feedback on math homework was found to have an effect size of 0.18 standard deviations, and a more intensive software-based math curriculum intervention improved seventh and eighth grade math scores by a remarkable 0.63 and 0.56 standard deviations, respectively. These results mirror those from promising interventions examined in the developing-country literature, such as the adaptive Indian learning software Mindspark, which was found to have large, positive impacts on math and Hindi.
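The effect sizes quoted above are standardized mean differences: test-score gains expressed in standard-deviation units, so that results from different tests can be compared. A minimal sketch of how such a figure is computed from treatment and control scores (all numbers below are illustrative, not data from the studies cited):

```python
import statistics

def effect_size(treatment, control):
    """Standardized mean difference using a pooled standard deviation:
    how many SDs the treatment group's mean score sits above the
    control group's mean (the metric behind figures like 0.18 SD)."""
    n_t, n_c = len(treatment), len(control)
    mean_diff = statistics.fmean(treatment) - statistics.fmean(control)
    pooled_var = ((n_t - 1) * statistics.variance(treatment)
                  + (n_c - 1) * statistics.variance(control)) / (n_t + n_c - 2)
    return mean_diff / pooled_var ** 0.5

# Illustrative post-test scores, not data from any study in the review
treated = [45, 58, 52, 61, 49, 55, 47, 53]
controls = [44, 56, 50, 57, 46, 52, 45, 50]
d = effect_size(treated, controls)
```

With real RCT data the same computation is run on the full treatment and control samples, often after adjusting for baseline scores.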
The consistent results of personalized software across different contexts suggest that this feature may be an effective mechanism for overcoming common educational challenges in classrooms with heterogeneous learning levels. However, quality of implementation is an important feature of CAL, and the degree to which software successfully helps teachers engage students in dedicated academic learning time that appropriately matches students’ actual learning levels may be an important determinant of a CAL product’s ability to drastically improve student learning. Far more research is needed to isolate the mechanisms for when and why certain CAL programs improve cognitive skill formation and where these findings may generalize.

As with CAL, evaluations of behavioral interventions generally find positive effects across all stages of the education life cycle, although the effects are generally smaller than those found with the most effective CAL models. Given that technology-enabled behavioral interventions generally target noncognitive skill formation, it is perhaps unsurprising that the positive impacts found in this category were, for the most part, smaller in magnitude than those of CAL models that successfully augmented the development of cognitive skills crucial to understanding specific concepts. At the same time, technology-enabled behavioral interventions, such as large-scale text-message campaigns, are often remarkably cheap to carry out and hold great promise as a cost-effective solution to many challenges associated with behavioral barriers in education.
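Cost effectiveness here is usually summarized as cost per unit of learning gain. A back-of-the-envelope sketch with entirely hypothetical numbers (none of these figures come from the studies reviewed) shows why a cheap nudge with a small effect can still beat an expensive program with a large one:

```python
def cost_per_effect(total_cost, n_students, effect_sd):
    """Dollars spent to move one student's outcome by 0.1 SD,
    a common yardstick for comparing interventions.
    All inputs are hypothetical illustrations."""
    per_student = total_cost / n_students
    return per_student / (effect_sd / 0.1)

# Hypothetical text-message campaign: $1 per student, 0.05 SD effect
sms = cost_per_effect(10_000, 10_000, 0.05)    # $2 per 0.1 SD
# Hypothetical intensive CAL curriculum: $200 per student, 0.5 SD effect
cal = cost_per_effect(2_000_000, 10_000, 0.5)  # $40 per 0.1 SD
```

On these made-up numbers the nudge delivers a given gain at a twentieth of the cost, which is the sense in which text-message campaigns "hold great promise as a cost-effective solution" despite their smaller effect sizes.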
Given the promising impacts reported across all stages of the education life cycle, it would be valuable to build on and test technology-enabled models that can further facilitate the development of noncognitive skills in a cost-effective, scalable manner.

Though online courses have exploded in popularity over the last decade, rigorous research on their effectiveness remains limited. From our review, we found that online courses do, in some cases, increase access to education, particularly when easy alternatives are not available. Yet relative to courses with some degree of face-to-face teaching, students taking online-only courses may experience worse learning outcomes. This may be due to the difficulty online courses have in replicating the pedagogical strengths of in-person instruction, including noncognitive characteristics such as the rapport between teacher and student. On the other hand, the effects of blended learning are generally on par with those of fully in-person courses. This suggests that the appropriate combination of online and in-person learning may be cost effective, but further research on the cost effectiveness of online learning is needed to better understand its potential.

Nevertheless, we also note the potential unintended consequences of edtech, and encourage more work to identify these mechanisms for both failure and success with more clarity through further research. The evidence on access to technology and online learning seems to caution against merely providing technological devices or transferring content to online platforms for technology’s sake; merely improving the learning environment with technology is insufficient if the aim is to bolster learning more broadly. On the other hand, a body of positive evidence exists for using technology to overcome binding constraints to skill formation, whether through noncognitive or cognitive processes.
However, technology without a mechanism for facilitating skill formation may serve as a distraction with negative consequences. There is empirical evidence that concerns about “brain drain”—the hypothesis that the mere presence of a phone may undercut cognitive performance—may be substantiated. For example, one experimental study found that even when people successfully refrain from checking their phones, the mere presence of a smartphone reduces cognitive capacity, and that these costs are highest for those with the greatest smartphone dependence (Ward et al. 2017).

Given that our review specifically focuses on edtech products and strategies that have been evaluated using experimental methodologies, it is important to note that many popular edtech products have not been evaluated with experimental methods to date, and that the ones that have been tested may be limited in their generalizability given recruitment and study-sample characteristics. This, along with the limited scope of experimental research even on products and strategies that show promise, suggests that there is still a substantial need for more rigorous research on edtech. We recommend the following areas for future experimental research in edtech.

First, replications of interventions showing promise, such as CAL and technology-enabled behavioral nudges, are needed to better understand both the generalizability and the credibility of the current findings. We recognize that the large effect sizes in some of these studies may be cause for skepticism, but the fact that such effects emerge in more than one study by different authors suggests that these models may truly be effective and worth exploring in greater depth. Such results warrant replication to see whether the same effect is found in different contexts, with different populations of students, and over longer periods of time.
For CAL, additional research is needed to understand to what extent the observed impacts are tied to specific implementation models. Open questions also remain regarding the underlying mechanisms of effective CAL programs, specifically how the software interacts with teachers and the existing curriculum. Additionally, many CAL interventions currently being used by schools and families have not been rigorously tested, so the impacts of CAL products that do not retain the characteristics we identify as promising mechanisms of impact—personalization, adaptivity, or fast feedback of data—remain unknown. We caution policy makers and educational researchers not to take labels of CAL software at face value, but to discern the potential mechanisms of impact before judging the promise of a particular product. For example, to what extent does software that claims personalization truly adapt lessons to a student’s learning level without requiring the teacher to recognize and redirect struggling students? Additionally, as mentioned previously, the personalized and adaptive component of CAL software seems to be more consistently effective for math, yet a handful of studies show that CAL can be effective for language as well. More research will be needed to test the mechanisms and impacts of newly emerging CAL products across subjects.

Given the promising evidence on technology-enabled behavioral interventions, researchers should prioritize understanding when these nudges most effectively support noncognitive skill formation by varying the timing and content of messages and testing how these models interact with other educational supports.
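The question raised above about CAL personalization—does the software itself adapt, or does it quietly rely on the teacher to redirect struggling students?—can be made concrete with a deliberately minimal sketch of adaptive item selection. Everything here (the staircase rule, the level range, the answer-probability model) is a hypothetical illustration, not a description of any product named in the review:

```python
import random

def next_difficulty(level, correct, step=1, max_level=10):
    """Simplest possible adaptivity rule (a staircase): move the
    student up one level after a correct answer and down one after
    an incorrect one, keeping practice near their actual level."""
    if correct:
        return min(level + step, max_level)
    return max(level - step, 1)

def practice_session(student_skill, n_items=20, start_level=5):
    """Simulate a session: this toy student answers correctly with
    high probability when the item is at or below their skill."""
    level, history = start_level, []
    for _ in range(n_items):
        p_correct = 0.9 if level <= student_skill else 0.2
        correct = random.random() < p_correct
        history.append((level, correct))  # fast feedback: every item logged
        level = next_difficulty(level, correct)
    return level, history

random.seed(0)
final_level, log = practice_session(student_skill=7)
```

Even this toy loop exhibits the three promising mechanisms named above—personalization, adaptivity, and fast feedback of data—without any teacher intervention; software that lacks an equivalent loop is "adaptive" in label only.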
New experimental studies should test natural extensions of existing behavioral models, for example messaging campaigns with a high degree of personalization, or parent-information flow systems that draw on predictive analytics to trigger timely alerts to parents based on students’ performance in school. As artificial intelligence enters the realm of education technology, more research is needed to understand the extent to which AI systems can substitute for human guidance in navigating complex processes, such as helping students fill out the FAFSA or providing parents with actionable guidance on how to help a student who is at risk of dropping out of school. Meanwhile, subsequent studies may need to evaluate different methods of message delivery, such as social media. With many parents, teachers, and students inundated with text messages on a daily basis, engagement with these messages may decrease over time. Taken together, the experimental research on technology-enabled behavioral interventions is quite promising, and additional research should focus on building upon this already exciting body of evidence.

More work to understand the effects of online learning is also needed. We recognize that the rigorous evidence is sparse, but given online learning’s growing popularity, this is all the more reason to recognize what studies do exist and pursue further work. Future research can help us better understand the costs and cost effectiveness of online learning, and test different types of online and blended courses relative to face-to-face instruction. In particular, there is a need for studies disentangling why online learning on its own does worse than face-to-face instruction in some circumstances. Is it the loss of interaction with instructors and peers, or the lack of a structured environment for time management and study skills, or both?
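At their simplest, the predictive-analytics alerts mentioned above reduce to a rule that flags a student when a risk indicator crosses a threshold. A minimal, hypothetical sketch (the indicators and thresholds are made up for illustration; a deployed system would estimate them from historical outcome data rather than hard-code them):

```python
def should_alert(attendance_rate, gpa, missing_assignments,
                 attendance_floor=0.90, gpa_floor=2.0, missing_cap=3):
    """Flag a student for a parent alert when any simple risk
    indicator crosses its threshold. All thresholds here are
    illustrative defaults, not values from any real system."""
    return (attendance_rate < attendance_floor
            or gpa < gpa_floor
            or missing_assignments > missing_cap)

at_risk = should_alert(0.85, 3.1, 1)   # slipping attendance trips the rule
on_track = should_alert(0.97, 3.1, 1)  # all indicators within bounds
```

The experimental questions the review raises sit on top of such a rule: when to send the alert, how to word it, and whether the message changes parent behavior.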
Better understanding these issues will show whether it is possible to make online learning work better when a blended component is not feasible, and if so, how. Additionally, as the online learning field is constantly evolving, new research is needed to understand how new models—such as MicroMasters programs and nanocredentials—may impact or democratize learning.

Fourth, more experiments that identify the key mechanisms of impact at different stages in the process of skill formation, and that measure longer-term outcomes, are also needed. We need better theories to model how and when technology is likely to help processes of skill formation so that we can design studies that specifically test these hypotheses. We hope that the discussion in this paper provides some roadmaps for testing potential mechanisms of impact, so that findings can be more generalizable and designing rigorous studies that measure long-term outcomes becomes more feasible. We recognize that there is a shortage of RCTs and encourage more research that follows samples over time to determine whether effects fade or persist in the long term. Given the framework described in this paper, there is also a need to better understand how edtech impacts children of younger ages. The role of technology in facilitating skill formation at these younger ages is likely to be different, so as more and more technologies are developed for children of all ages, these potential mechanisms also need to be tested.

Finally, more rigorous evaluations are needed of popular edtech products that have not yet been studied experimentally, such as flipped classrooms, smart boards, or even virtual reality technologies used to explore other parts of the world or to foster empathy.
We recognize that some types of products are less amenable to an experimental design than others, but this challenge should itself be part of the conversation about which edtech products and strategies are working and the extent to which we are certain about their impacts. Other popular products, such as Khan Academy, have begun engaging in rigorous evaluations over the last few years, but the studies are not yet complete or the results are not yet published. Researchers should keep an eye on this emerging work and continue to build on it.

The edtech field is rapidly changing, and innovative tools and programs are frequently considered out of date after only a few years. When faced with purchasing decisions, education administrators often demand research that is timely, relevant, and actionable. The direction and form of the research may need to change to integrate more seamlessly into decision making. New tools have emerged to address some of these challenges, including Mathematica’s EdTech Rapid-Cycle Evaluation Coach and the EduStar RCT platform. While rapid-cycle product testing is of course valuable, more research is needed to evaluate how underlying mechanisms—rather than a specific product—can advance learning. In the end, it should not be about the most popular product or even necessarily the technology itself, but about the best way to help students of all ages and levels learn.
Last edited: 23 Jan 2021 14:41 by Loys.