I have the following (simple?) question about statistics: I have a dataset in which I look for correlations between variables and would like to control for differences between factor levels. For illustration, consider this example: N = 100 people perform 20 tasks, and the time taken for each task serves as the performance measure. I now wish to correlate performance in these tasks with the people's IQ, which is also known. Here it seems reasonable to somehow account for differences between the tasks, and two ways of doing this come to mind:
- I first z-standardize the performance measures within each task, then compute a simple correlation with IQ (possibly after aggregating the data across tasks)
- I fit a linear mixed model predicting IQ from performance, allowing for a random intercept for each task
Are these approaches essentially identical, or do they differ in some meaningful way? Below is a rough sketch of what I mean by each.
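To make the two approaches concrete, here is a minimal Python sketch on simulated data. The column names (`person`, `task`, `time`, `iq`) and the simulated effect sizes are purely hypothetical, and the second model follows my wording above ("predicting IQ"); one could just as well model performance as the outcome with IQ as predictor and the same random structure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per (person, task) observation.
rng = np.random.default_rng(0)
n_people, n_tasks = 100, 20
df = pd.DataFrame({
    "person": np.repeat(np.arange(n_people), n_tasks),
    "task": np.tile(np.arange(n_tasks), n_people),
})
df["iq"] = np.repeat(rng.normal(100, 15, n_people), n_tasks)      # one IQ per person
task_shift = rng.normal(0, 5, n_tasks)[df["task"]]                # task-level difficulty
df["time"] = 60 - 0.2 * df["iq"] + task_shift + rng.normal(0, 5, len(df))

# Approach 1: z-standardize time within each task, aggregate per person,
# then compute a simple correlation with IQ.
df["time_z"] = df.groupby("task")["time"].transform(lambda x: (x - x.mean()) / x.std())
per_person = df.groupby("person").agg(mean_z=("time_z", "mean"), iq=("iq", "first"))
print("Approach 1, Pearson r:", per_person["mean_z"].corr(per_person["iq"]))

# Approach 2: linear mixed model with a random intercept for each task
# (IQ as outcome, as described above; swap the formula to "time ~ iq"
# if performance is meant to be the outcome instead).
mixed = smf.mixedlm("iq ~ time", data=df, groups=df["task"]).fit()
print(mixed.summary())
```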