
How/should we use task participants to "localize" a task in the task space? #30

Open
JamesPHoughton opened this issue Feb 25, 2021 · 2 comments


JamesPHoughton commented Feb 25, 2021

Some of the people who will be most knowledgeable about a task are those who actually do it. We will probably want to survey them with the questions we use to localize a task in the space. The point at which they know the most about the task, and are therefore likely to give the most "precise" answers, is after they have done the task.

However, some individuals will be placed on teams that lead them to success, while (if we are to get any variance in the DV) others must be placed on teams that promote less success. Individuals who perform well are likely to assess the task differently from those who don't, especially on dimensions such as difficulty or complexity, so we should expect their responses to be "biased". Because we are adaptively sampling the population in response to predicted drivers of performance, we cannot naively expect the biases introduced in one team's assessments to be canceled out by those of other teams.

We have several options:

  1. We could try to correct for this bias, possibly by comparing the ratings of different tasks performed by the same types of teams.
  2. We could survey participants after they receive instructions but before they perform the task. This necessarily leaves out some of the information we are trying to measure, as at that stage they can only answer questions about the task prototype, not the particular task instance itself.
  3. We could localize tasks using only the responses of participants who performed the task on their own. This would require each task to be performed by a set of individuals, which we may or may not have been planning to do otherwise. (Although there are arguments for doing so, e.g., to get a baseline performance score that we can use to compare across task instances and task prototypes.)
  4. We could use a hierarchical model of task location: first localize the general region of the task prototype, then model the deviations of individual task instances around it (see the sketch after this list). We could then use information from team performers collected prior to task completion to learn about the task prototype, and information from individuals to get more detailed/relative information about task instances.
  5. Something else.
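
To make option 4 concrete, here is a minimal sketch of the two-level idea as a variance-components model. It assumes a long-format table of survey responses with hypothetical columns `rating` (a response on one task dimension), `prototype` (task prototype id), and `instance` (task instance id nested within prototype); the file name and column names are placeholders, not anything we've actually built:

```python
# Minimal sketch of option 4: a random intercept per prototype locates the
# general region of the task prototype, and a variance component per
# instance captures deviations of task instances around that location.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per survey response.
responses = pd.read_csv("task_dimension_ratings.csv")

model = smf.mixedlm(
    "rating ~ 1",                                # grand mean of the dimension
    responses,
    groups="prototype",                          # random intercept per prototype
    vc_formula={"instance": "0 + C(instance)"},  # instance deviations within prototype
)
result = model.fit()
print(result.summary())
```

Fit per dimension, the estimated group variance would indicate how spread out prototypes are along that dimension, and the instance variance component how far instances wander from their prototype.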

Some open questions:

  • Can participants do all the tasks as individuals?
  • Do all the task dimensions make sense when the task is performed by individuals?
@JamesPHoughton

Asking questions after teams have performed the task may be a way to measure the "halo effect", even if we don't use it for localization in this experiment.

@markwhiting

Thinking about how to add this in.

We could do these right after the task is done, as part of our experiment, or we could follow up a few days later.

I'd lean toward following up, because the task design is already long, but I'm open to other perspectives.

Considering the options you outline, I think how performance influences the perception of dimensions like these is an interesting empirical question, so I'd certainly want to collect all the data and fit something like a multilevel model. Of course, through that process we might find that some types of participant responses are not useful.
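
As a strawman for that model, a minimal sketch under the same hypothetical column names as above, plus a `performance` column for the respondent's (team's) score on the task:

```python
# Minimal sketch: does performance predict perceived task dimensions?
# Regress the rating on performance with a random intercept per prototype.
import pandas as pd
import statsmodels.formula.api as smf

responses = pd.read_csv("task_dimension_ratings.csv")  # hypothetical file

model = smf.mixedlm(
    "rating ~ performance",   # fixed effect of the respondent's outcome
    responses,
    groups="prototype",       # random intercept per task prototype
)
result = model.fit()
# A reliably nonzero coefficient on performance would suggest that
# post-task responses are biased by how well the respondent did.
print(result.summary())
```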

Another consideration here is what a budget for this looks like. I will check on that to see whether this is actually feasible.
