At the Institute of Education Sciences (IES), Exploration-type projects (formerly called Goal 1 projects) seek to: “identify relationships between individual-, educator-, school-, and policy-level characteristics and education outcomes, and factors outside of education settings that may influence or guide those relationships” (IES RFP, 2020). I wanted to collect my thoughts on what I think makes an excellent, high-quality Exploration proposal. This is not a recipe blog, so without further gilding the lily, and with no more ado:
1. Clearly defined research questions that are well justified by the problem outlined in the literature review. To be fair, this one is kind of a given, and is the first thing you should do on any proposal regardless of the type or agency. I couldn’t not mention it.

2. Good proposals identify a potentially malleable factor as the outcome. The onus is on the authors to demonstrate in the significance section that the factor has the potential to be malleable. There should be at least one pilot study or other intervention study in your review that demonstrates the thing you want to study is, in fact, something that can be changed. Think about it like this: IES doesn’t want to spend its money helping to understand the potential mechanisms that are correlated with children’s height, because once kids get to school, nothing the school does will change children’s height.

3. Go back and read the second one, but this time think about your predictors. You also have to demonstrate that your predictors are potentially malleable. Some of these are easy to argue: for example, teacher qualifications are potentially malleable because the school system can change its requirements.

4. A good proposal will have sufficient variance in the predictors to actually examine what you want to examine. In short, if your key predictor is a feature of a school, then you need to make sure you have plenty of schools. If you only have three schools involved, you can’t really run any statistical tests for your key question. For example, maybe you’re interested in principal leadership style. You need to make sure you have lots of principals; if you only have three principals, there’s a chance that all three of them will have the same leadership style, and then you won’t be able to compare differences between them. There are some exceptions to this rule: maybe you know that the three schools in question are relatively similar except for these particular (and very disparate) leadership qualities, though this research question may be better addressed with a qualitative interview. Maybe you have multiple years of data from each school both before and after they implemented a particular leadership practice (though then this is starting to sound like a quasi-experimental efficacy project). Finally, you can also potentially justify it if it’s not your primary research question.

5. You need a description of a statistical analysis for every single quantitative research question you propose to ask. That may mean you have multiple analyses described for one research question if it has multiple parts (e.g., 1a, 1b, 1c). If 1a and 1b can be answered with the same analysis, you need to say so. Don’t make the reviewer do the mental work to figure out that the two questions will probably be answered with the same analysis. Tell them.

6. Just as with the previous tip, your power analyses must be directly linked to each specific analysis. Any question you plan to answer using quantitative data should also have a power analysis.

7. You have to power your analyses for a particular effect size. It is the researcher’s responsibility to demonstrate that the effect sizes you are powered to detect are both plausible and educationally relevant. By plausible, I mean that the effects you are powered to detect are in the realm of possible effects you might find.
If you are powered to detect an effect of d = 1.5 (1.5 standard deviations on your outcome), you have to demonstrate, using either prior literature or a pilot study of some sort, that you actually have a chance of finding a relation that big. For example, if your pilot study shows a latent pathway between your key predictor and your targeted outcome with an effect size of d = 1.0, you either have to power for an effect of 1.0, or you need to do a lot of explaining to convince me that you could possibly find something bigger. By educationally relevant, I mean whether the effect is large enough that it means anything for the children, teachers, or schools. The effect size may be d = .50, but how big is that? For teacher or program implementation fidelity, you can translate an effect size into the number of lessons delivered. For elementary through high school children’s reading and math scores, Hill, Bloom, Black, and Lipsey (2007) provide benchmarks for how much children are expected to learn developmentally in any given grade. In some work with my colleagues, we replicated that work with language assessments in Schmitt et al. (2017), which provides effect size benchmarks for language development from age three to age nine. These benchmarks make demonstrating educational relevance much simpler: powering to detect a relation that is the same size as two months of schooling is educationally relevant, whereas an argument of “d = .30” alone is not.
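To make the power-analysis tips concrete, here is a minimal sketch of how you might run the numbers, assuming Python with the statsmodels package. The effect sizes and the annual-growth benchmark in it are illustrative placeholders, not figures from Hill et al. (2007) or Schmitt et al. (2017); you would substitute values from your pilot data or a benchmark source.

```python
# A minimal sketch of the kind of power calculation described in tips 6-7.
# The effect sizes and annual-growth benchmark below are illustrative
# placeholders, not values from any cited study.
from statsmodels.stats.power import TTestIndPower

power_solver = TTestIndPower()

# Sample size per group needed to detect each effect at 80% power, alpha = .05
for d in (0.30, 0.50, 1.0):
    n_per_group = power_solver.solve_power(effect_size=d, alpha=0.05,
                                           power=0.80, alternative='two-sided')
    print(f"d = {d:.2f}: about {n_per_group:.0f} participants per group")

# Translating an effect size into "months of schooling" for educational
# relevance: divide by the expected one-year gain (taken from a benchmark
# source) and multiply by roughly nine instructional months.
annual_gain_d = 0.40   # hypothetical one-year developmental gain in SD units
target_d = 0.30
months_equivalent = target_d / annual_gain_d * 9
print(f"d = {target_d} is roughly {months_equivalent:.1f} months of expected growth")
```

The same logic applies whatever your actual analysis is; the point is that each proposed analysis gets its own calculation, tied to an effect size you can defend as both plausible and educationally meaningful.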