Experts for MrP modelled weighting & data fusion

We are looking for expertise in using MrP on household questionnaire surveys to emulate population-representative findings.

We have previously used indicators from an existing complex-sample, nationally representative survey as the benchmark target for our less costly, multi-mode MrP-modelled weighting.

We have an upcoming pilot. There are three key objectives:
• Repeat this MrP-based modelled-weighting optimisation exercise in a different domain – gendered content
• Fuse data from the previous round with the upcoming round
• Impute data missing from data collection modes (e.g. SMS) with likely values obtained in other data collection modes (e.g. face-to-face surveys).
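To make the third objective concrete, here is only an illustrative sketch of the simplest possible cross-mode imputation in Python: fill a sensitive indicator that SMS respondents are missing with the cell mean from face-to-face donors in the same demographic cell. All field names, cells, and values are invented; a real implementation would use a proper model-based or multiple-imputation approach.

```python
from collections import defaultdict

# Hypothetical micro-data: each record has demographics plus a sensitive
# indicator y that SMS respondents were never asked (None = missing).
records = [
    {"mode": "f2f", "age_grp": "18-34", "urban": True,  "y": 1},
    {"mode": "f2f", "age_grp": "18-34", "urban": True,  "y": 0},
    {"mode": "f2f", "age_grp": "35+",   "urban": False, "y": 1},
    {"mode": "f2f", "age_grp": "35+",   "urban": False, "y": 1},
    {"mode": "sms", "age_grp": "18-34", "urban": True,  "y": None},
    {"mode": "sms", "age_grp": "35+",   "urban": False, "y": None},
]

def impute_from_donor_mode(records, donor_mode="f2f"):
    """Fill missing y with the donor-mode cell mean for the same
    demographic cell (a minimal cell-mean imputation sketch)."""
    cell_sums = defaultdict(lambda: [0.0, 0])
    for r in records:
        if r["mode"] == donor_mode and r["y"] is not None:
            key = (r["age_grp"], r["urban"])
            cell_sums[key][0] += r["y"]
            cell_sums[key][1] += 1
    for r in records:
        if r["y"] is None:
            s, n = cell_sums[(r["age_grp"], r["urban"])]
            r["y"] = s / n if n else None
    return records
```

In practice one would draw imputed values from a fitted model rather than plug in cell means, so that the imputation uncertainty carries through to the final estimates.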

The overall objective is to test the multi-mode MrP-modelled weighting on a women-specific indicator set collected in four countries in East Africa and Asia.

As far as possible, we would like to share our RFP with people and organisations that have experience with:
• MrP
• Household survey data fusion
• Missing data imputation

Please do indicate interest, or recommend people or organisations likely to be able to assist with this request.

Cool. This sounds right up @lauren’s alley.

Thanks @jonah for linking me with @lauren.

Lauren, would you like me to send you our terms of reference / request for proposal?

Sounds like a cool project! I don’t think I’m currently in a position to assist on a request for proposal, though. Best of luck with it all! :)

Great, thanks @lauren and @jonah. I hope you can give an opinion - or direct me to someone who can - on a specific issue I’m struggling with, which I think is key to capture in the RFP and in evaluating the eventual proposals…

The issue is whether MrP is the appropriate technique to use throughout the upcoming study.
We will have around 5-10 indicators we want to model values for.

  1. For some indicators, we will have benchmark/reference surveys to use as the basis for the newly collected surveys to emulate. I assume that MrP is the correct approach for these?
  2. For some indicators, we won’t have a reference survey, so I’m not sure how to emulate those.
    Could relationships learned from the reference survey (via the modelled weights in 1.) be used to inform those in 2.?
  3. Finally, we may have some indicators that are in the reference survey, but where we expect the modelled weights should account for differences in how the data were collected. Is this something that can be accounted for? How? And how should such an adjustment be treated?

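For context on 1., my rough understanding of the MrP mechanics is: fit a multilevel model of the indicator on demographic cells, then poststratify the model's cell predictions against population counts. Here is a minimal sketch of just the poststratification step; the cells, predicted prevalences, and census counts are all made up.

```python
# Minimal MrP poststratification step, assuming the multilevel model has
# already produced a predicted indicator prevalence per demographic cell.
cell_preds = {            # P(indicator = yes | cell), from the fitted model
    ("female", "18-34", "urban"): 0.62,
    ("female", "35+",   "urban"): 0.48,
    ("female", "18-34", "rural"): 0.55,
    ("female", "35+",   "rural"): 0.40,
}
census_counts = {         # population counts per cell (the poststrat frame)
    ("female", "18-34", "urban"): 1200,
    ("female", "35+",   "urban"):  900,
    ("female", "18-34", "rural"): 1500,
    ("female", "35+",   "rural"): 1400,
}

def poststratify(preds, counts):
    """Population estimate = census-count-weighted mean of cell predictions."""
    total = sum(counts.values())
    return sum(preds[c] * counts[c] for c in counts) / total
```

The multilevel ("Mr") part - partial pooling across sparse cells - is the piece this sketch omits, and is where most of the modelling effort would go.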
As background, we are using respondent surveys to collect data, with three data collection modes skewed towards the cheapest mode: SMS. SMS is cheapest because there is no interviewer, and this creates an anonymity that may be beneficial for capturing answers to sensitive indicators, as in 3. above.
The most expensive mode is a complex sample design administered face-to-face, so it is the basis for the reference/benchmark surveys. But face-to-face data collection may induce a bias in how sensitive questions are answered…
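One way I imagine the adjustment in 3. working - purely a sketch, with invented coefficients - is to include a mode term in the outcome model and then predict every cell at a single reference mode, so the poststratified estimate is purged of the mode-of-collection effect on sensitive answers.

```python
import math

# Hypothetical logistic-model coefficients; the negative f2f term encodes the
# idea that an interviewer's presence suppresses sensitive "yes" answers.
coef = {"intercept": -0.4, "age_35plus": 0.3, "mode_f2f": -0.5}

def predict_at_reference_mode(age_35plus, reference_mode="sms"):
    """Predict a cell's prevalence as if every interview had been conducted
    under `reference_mode`, regardless of the mode actually used."""
    eta = coef["intercept"] + coef["age_35plus"] * age_35plus
    if reference_mode == "f2f":
        eta += coef["mode_f2f"]
    return 1 / (1 + math.exp(-eta))
```

Whether SMS or face-to-face should be treated as the "truth" mode for a given sensitive indicator is exactly the kind of judgement I would want the eventual proposals to address.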

Please do let me know what you think or suggest good people to approach with these questions.

Thanks again!