Does Paying or Compensating Survey Respondents Negatively Affect Response Quality or Reliability?

At We All Count, we think a lot about how to increase the equity of the data gathering process. We make a living off of the data science ecosystem and so do many of our project members and the people who read these posts. We all know that data is valuable, bringing us to an interesting question: should we be paying for it when we collect it?

The opinions about this vary widely between sectors and industries that use data. We’ve worked with a social-sector program evaluation firm that would immediately discount any survey data where the respondents were paid because it was felt that it would be hopelessly skewed by misplaced incentives. We’ve worked with data-oriented marketing firms that would consider any unpaid responses to be junk, as the respondents weren’t properly incentivized to provide accurate, careful answers.

We all like simple yes or no answers, but We All Count isn’t about yes or no. The truth is that sometimes it may be the equitable thing to do and sometimes it may not be.

We All Count is going to continue to explore this question, and the first step we’ve taken is to start assembling a research brief of the wide range of answers to this question that already exist. The information gathered below is only a starting point and it’s not even close to comprehensive. Please send in any information on this topic you have, whether it’s a study or a personal anecdote, and we’ll add it to the list.

We also recognize that this list of research skews heavily towards academic and corporate perspectives and ways of knowing, which We All Count does not consider inherently superior or more valid than other avenues and types of expertise. We do, however, have many project members interested in answering this question with support from the kinds of sources trusted by their bosses, funders, etc. We encourage you to look beyond these traditional gatekeepers if you haven’t already, and if you have any information on this subject from other types of sources, please share it so we can round out this list!

Before we give you the list of the research we’ve found on the subject so far, here are some useful conversation starters for your team/organization/project to think about when making this decision for yourselves:


  • Would I take this survey for free?

  • If I am “trading” my data for program delivery, who is setting the value of my data? Who gets to suggest the monetary value of the program to me?

  • How much are we being paid to process this data?

  • Who will own this valuable data at the end?

  • How much is my data worth to me? More than a gift card?

  • How can I know how much my data is worth?

  • Do I have the option of benefitting from this data project without contributing my data? If not, is that a form of coercion?

  • Is there a free market where I can choose who gets my data or how much they would pay?

  • How would paying me to take this survey make me feel about its results?

  • Would a richer person charge more or less to offer this data and take this time?

Notes from our research team:


Monetary and non-monetary incentives are widely used across several sectors to increase survey response rates.

In our first pass, we found more studies supporting the idea that paying or compensating survey respondents does not affect results: the quality and content of responses are maintained, and non-response rates decline.

The most commonly cited concerns of the works arguing against offering incentives are the possibility of coercion and significant changes in the sample population.

It is important to note that there are few sources available on this topic. In most of the published works, the authors mention the scarcity of research on the subject, particularly experimental research. This scarcity makes it difficult to determine whether, in fact, incentives have any effect on survey results. Another pertinent observation about the sources is that the vast majority come from English-speaking countries, though we also surveyed studies in French, Spanish and Portuguese.

“Paying or Compensating Survey Respondents Positively Impacts the Data Work”


United States of America

World Trade Center Health Registry is a longitudinal health study¹ that periodically surveys a cohort of ~71,000 people exposed to the 9/11 terrorist attacks in New York City. Since Wave 1, the Registry has conducted three follow-up surveys every 3–4 years and utilized various strategies to increase survey participation. A promised monetary incentive was offered for the first time to survey non-respondents in the recent Wave 4 survey, conducted 13–14 years after 9/11.

The 2017 study¹ assessed the effectiveness of a monetary incentive in improving the response rate five months after the survey was launched and assessed whether or not the integrity of the response was compromised due to the use of the incentive.

A $10 monetary incentive offer was found to be effective in increasing Wave 4 response rates. Specifically, the $10 incentive offer was useful in encouraging participants who were initially reluctant to respond to the survey. The probability of returning a survey increased by 30% for those who received an incentive offer, and the incentive increased the number of returned surveys by 18%. Also, the results revealed no significant differences in the completeness of responses between those who received an incentive offer and those who did not.


In an article published in 2008², based on experimental evidence, the authors seek to understand whether, and under what conditions, monetary incentives could be a coercive means of inducing participation in research. The vignette-based study addressed this issue, and the evidence suggests that monetary incentives (or other, larger forms of incentive) do not induce research participants to accept risks greater than those they would be willing to take if they were not paid.

To develop the experiment², the researchers chose a diverse sample group to answer a web questionnaire. The questions aimed to understand whether larger monetary incentives would lead respondents to take risks they would not take for a smaller monetary incentive.

None of the experiments reviewed during the research found evidence of this effect, and the experiment developed by the authors is no exception. Larger incentives induce greater participation than smaller ones, at both higher and lower risks, and higher risks induce less participation than lower ones. But there is no statistically significant interaction between the size of the risk and the size of the incentive: participants do not appear to trade greater incentives for greater risks.


In 2016, members of Institutional Review Boards (IRBs)³ conducted the first national study in the United States to explore attitudes about whether, and why, paying survey participants constitutes undue coercion or influence. In general, the article demonstrates that the predominant concerns about payment are largely mistaken. Regarding the possibility of coercion, the authors say that “since payment is an offer, not a threat, payment is never coercive.”

Although some respondents expressed concern about the possibility of monetary incentives unduly influencing survey results, the authors of the article claim there is no reason for concern.

“Given that there is a tradition of research ethics³ that claims that participation should always be altruistic and, at its best, that participants should identify with the purposes of the study, it is not surprising that some IRB members will find it unseemly to introduce payment into the research equation. That said, such justifiable caution about payment does not warrant misconceiving and misapplying the concepts of coercion and undue influence. Even if offering payment to research subjects is unseemly – and we do not agree that it is – it does not follow that such offers compromise the validity of a subject’s consent. If, as in other contexts of life, people can reasonably regard the value of payment as greater than the risks of engaging in some activity, be it ordinary employment or participation in research, then we do not protect subjects when we mistakenly preclude their activity on grounds of coercion or undue influence.”


The book “The Silent Minority: Non-respondents in Sample Surveys”⁴, written by John Goyder and published in 1989, is still cited in research debating incentives for respondents. The book describes the characteristics of non-respondents in sample surveys and reports on various empirical studies conducted to test theories of survey response and non-response behavior.

Goyder’s research shows that offering incentives to research participants does not lead to differences in results. On the contrary, the author points out that basing participation on incentive exchanges proves to be the most effective way to secure the participation of the silent minority described in the book.


The company InfoSurvey Research cites research conducted by the University of Nebraska-Lincoln to answer this question for its potential customers:

“But doesn’t offering incentives have a negative impact on data quality?5

Again, many researchers have looked into this and found no support that data quality decreases with incentive. Indeed, some of these studies show that data quality might even improve with incentives. For example, in a customer satisfaction study, a sample of respondents without incentives may be skewed to dissatisfied customers who perceive that they have a stake in the results. By offering incentives, you may get a more balanced sample of customers, including those without an ax to grind.”


There is a continuing need to conduct surveys of U.S. veterans in order to examine important health questions. In surveys of veterans and of society in general, achieving high response rates has become increasingly challenging. The study “The Effectiveness of a Monetary Incentive on Response Rates in a Survey of Recent U.S. Veterans”⁶ was designed in this context to assess the effects that offering payment to respondents may have.

In general, there was no change in the content or quality of the responses – they did not improve or worsen. Only a considerable increase in the participation rate was observed.


The article “Could providing financial incentives to research participants be ultimately self-defeating?”⁷ (2016) creates several hypothetical scenarios to try to demonstrate that offering a reward for participation can be detrimental to research results. After the tests were carried out, the work failed to demonstrate that the practice has any effect on the content of responses; it only increases the participation rate.

“In the absence of harm to the individual, encouraging more people to participate in the research appears to be a good thing, as it will lead to statistically more robust research results, which can then be translated into better health care and other practices.”


The article “The ethics and implications of paying participants in qualitative research”⁸ (2008) aims to demonstrate that offering an incentive to respondents does not compromise results. On the contrary, the author presents as a case study a survey of single women in situations of social vulnerability that was made possible by the payment offered to participants; otherwise, the study would have suffered from non-response bias. The article also makes clear that there are not enough experimental studies on the effects this practice can have.


“The Use of Monetary Incentives in Census Bureau Longitudinal Surveys”⁹ is a report by the US Census Bureau based on a case study from 1996. Aiming to increase the participation of vulnerable families in research and to decrease non-response rates, the Bureau experimented with offering incentives to survey respondents. The increase in participation was satisfactory, and there was no evidence of any change in results.

“Effects of incentives on data quality. An incentive resulted in lower rates of missing data for wage amounts in the SIPP Wave 1 experiment. A $20 incentive targeted to nonrespondents was differentially effective in the poverty stratum, resulting in a significantly greater representation of poverty stratum households in the $20 treatment group compared to either the control group or the $40 group. Additional research is needed on the possible effects of incentives on completeness and quality of data, on response distributions, and on sample composition.”


This study¹⁰ analyzed monetary incentives and questionnaire shortening as means of increasing response rates in a mailed follow-up survey one year after inpatient psychotherapeutic treatment. Effects on partial non-response and on the assessment of treatment outcome were also examined.

Incentives and a shorter questionnaire led to higher return rates but did not affect partial nonresponse and self-report of treatment outcome in a randomized postal survey.



Canada

A Canadian study published in 2006¹¹ used large-scale data from a UK government panel survey of youth to assess some effects of incentives given to respondents. According to the article, “Respondent incentives are increasingly used as a measure of combating falling response rates and resulting risks of nonresponse bias. Nonresponse in panel surveys is particularly problematic since even low wave-on-wave nonresponse rates can lead to substantial cumulative losses; if nonresponse is differential, this may lead to increasing bias across waves.”

The study shows that the benefits of incentives are substantial: participation rates, quality, and the practical usefulness of the research all increase. Any risk of the incentive influencing the results is small or nonexistent. The research also points out that monetary incentives elicit the best response, being better received than prizes or gifts.


According to the University of Toronto’s guide “Research Participant Compensation and Reimbursement”¹², the practice of providing financial incentives to survey respondents is fair. The document takes the view that the reward (monetary, raffle, or other) is the price paid for the time the respondent spends contributing to the survey.

There is no evidence that this practice influences the final research result, although there is an addendum: “While it is understandable that incentives may be needed to recruit and retain study participants, these incentives cannot be set at levels that would unduly influence a participant to participate or remain in a study.” This condition could result in altered responses.


The “Survey Methods for Health Services Research: Theory & Application” guide, also published by the University of Toronto, presents no data on whether incentives interfere with research results. However, the document notes that offering a payment (monetary incentive) ranks first on its list of “factors increasing response”¹³.


The “Survey Guidelines and Best Practices”¹⁴ were produced by the British Columbia Institute of Technology to support its researchers. In this guide, monetary incentives are encouraged as a way to obtain better questionnaire participation rates. Two types of incentive are cited: direct payment for the survey (a lower amount) and a prize offered by lottery among survey participants (a higher amount).

This document does not cite specific studies, but it asserts that research results are not corrupted when monetary incentives are involved.


The study carried out by the Higher Education Quality Council of Ontario (HEQCO)¹⁵ in 2017 also endorses this finding. The Council points out that offering a financial incentive is the best way to increase participation rates for questionnaires aimed at young university students.


A study conducted with socially vulnerable populations¹⁶ in Canada concluded that offering incentives (and thereby increasing participation rates) is worth the research budget, as it is an effective way of reaching these populations. Furthermore, the study found no difference in results.


An article published in 2001 tested different incentive amounts to determine changes in participation rates and response quality. According to the study, “The level of incentive did not influence the quality of the data.”¹⁷

New Zealand

An experiment was conducted in New Zealand¹⁸ to determine the effectiveness of a contingent, non-monetary incentive in inducing college students to participate in self-administered research. A contingent, non-monetary incentive is a gift offered at the time of the research request and delivered only if the potential respondent agrees to complete the questionnaire.

The experiment was conducted outside a large university library, where an interviewer approached students for an interview. Half of those approached were offered a contingent, non-monetary incentive; the other half were not. The incentive increased the survey response rate by 40%.

The same study observed no changes¹⁸ in the content of responses between respondents who received the incentive and those who did not. On the contrary, the study points out many positives. The article also notes another advantage of the contingent incentive: it can generate a response from a person who is not interested in the incentive itself. Since the incentive is offered together with the survey request, the potential respondent can refuse the incentive while agreeing to complete the survey, which can improve the quality of the study.



Australia

The research company Forethought¹⁹, located in Melbourne, uses academic research to affirm that incentives (monetary or not) do not interfere with the results or quality of research, despite increasing participation rates. The document offers an interesting supporting argument: very long questionnaires usually produce poor-quality responses. In experiments where respondents are offered an incentive, the response rate increases, but the responses remain poor or the questionnaire incomplete.

That is, incentives cannot guarantee that you will increase the quality of the data you expect to obtain.


The Australian article “Incentives in Surveys”²⁰ (2019) reports:

“Surveys typically use hypothetical questions to measure subjective and unverifiable concepts like happiness and quality of life. We test whether this is problematic using a large survey experiment on health and subjective well-being. We use Prelec’s Bayesian truth serum to encourage the experiment and defaults to introduce biases in responses. Without defaults, the data quality was good, and incentives had no impact. With defaults, incentives reduced biases in the subjective wellbeing questions by inducing participants to spend more effort. Incentives had no impact on the health questions regardless of whether defaults were used.”



Brazil

A master’s thesis published in Brazil in 2010²¹ examined the effects and ethical conflicts involved in research that pays respondents for participation. The author creates hypothetical payment situations in four scenarios: i) a Phase I clinical trial, ii) sociological research, iii) a behavioral study, and iv) a study with an Indigenous population.

In general, the thesis indicates that offering a monetary incentive for participation does not interfere with results. In all the hypothetical scenarios, it seemed natural to offer a reward for the time dedicated to answering the questionnaire and assisting the research.

Only in the clinical-trial scenario does the possibility of coercion arise, when the practice is applied in countries where a considerable portion of the population lives on the edge of poverty, as is the case in Brazil. The study proposes that, in those cases, the financial link can drive research participants to make risky decisions out of necessity.


A literature review carried out in 2019²² traced the history of testing and validating the data collection strategies used in the applied social sciences. Among the results analyzed, the work concludes that there is no change in the results of surveys that offer any type of reward to respondents.

However, the review offers another interesting finding, which differs from most of the works compiled here. According to the authors, some previously published research shows that offering incentives may not significantly increase participation rates.

“Cash monetary incentives have already produced a significantly higher response rate than non-incentive (control) and charitable-incentive conditions (Bosnjak, Tuten, & Wittmann, 2005). However, several studies have obtained divergent results. In Cycyota and Harrison (2002), the response rate with an incentive was 19%, versus 18% without one. Similar results are found in Summers and Price (1997) – 51% versus 49% – and in Schneider and Johnson (1995) – 44% versus 41% – indicating that even after several studies, there is still a gap in our understanding of the effects of monetary incentives on response rates in field surveys.”



Spain

A 2010 Spanish study²³ aimed to determine the practices that improve mail-based questionnaire surveys. The work carried out a theoretical review of the factors that influence respondents’ participation as well as their responses. According to its conclusion, offering incentives does not interfere with the content of responses or the results of the survey; monetary incentives, however, especially prepaid ones, do increase participation rates.

“The combined effect of reciprocity and the principle of redistributive justice will have a tremendous impact on whether prepayment increases response rates. Experiments with applications of the cognitive dissonance theory to the study of response in postal surveys were carried out: from their point of view, response rates can increase, creating a feeling of dissonance among respondents (receiving an incentive would create this situation), a dissonance that can be solved by answering the questionnaire and sending it to the researcher.”


A 2009 study²⁴ on improving the quality of online surveys found that prepaid incentives are a good approach and that there is no evidence they alter results.



Germany

German researcher Anja Göritz has developed several studies on the effects that monetary incentives can have on research, mainly web-based research. In this article²⁵, she deals with theoretical and methodological issues involved in using incentives in online access panels. An online access panel is a group of people who have agreed to participate repeatedly in web surveys; online panels are an important form of web-based research.

The results of all the experiments she conducted indicate that offering an incentive (monetary or not, prepaid or postpaid) does not interfere with questionnaire responses or the final result.

“The experiment confirmed that the denomination and the value of the money drawn did not influence the quantity of the response, the quality of the response, the composition of the sample, and the result of the study. Self and non-self-selected panel members do not differ in their susceptibility to incentive conditions.”


A more recent study (2017)²⁶ published by the same author maintains the same conclusion, based on further experiments.




Argentina

An Argentine study on research methodology via web questionnaires emphasizes that material incentives are simply one way to encourage good-quality responses and good data²⁷. It presents a set of bibliographic references supporting this statement, demonstrating that research results are not changed by the incentives offered to respondents.


When the subject of a survey may embarrass respondents, offering payment or a gift to participants may be the only way to ensure the survey can proceed. The author uses the example of Argentine research²⁸ on alcoholism rates among young people, which would not have been possible without the incentives offered. The article argues that, even if paying respondents resulted in some degree of change in responses (and there is no evidence of that), incentives would be a valid practice.


United Kingdom

“Differential Incentives: Beliefs About Practices, Perceptions of Equity, and Effects on Survey Participation”²⁹, from 1999, provides a thorough overview of the effect incentives can have on research results. According to the study, there is no evidence that offering any type of reward to respondents could affect survey results. However, the authors stress that the terms must be defined within ethical standards and the value (or reward) must be fair.

“Paying or Compensating Survey Respondents Negatively Impacts the Data Work”


The SurveyMonkey online questionnaire platform³⁰ offers a caveat about monetary incentives, despite not opposing the practice. According to the site, offering payment for completing a questionnaire does increase participation rates. However, it can cause problems with the results if the study is not sufficiently controlled. Online, where control may be weaker, a financial incentive may attract many people from the same group, or people who do not match the intended target population. This can bias the research, decreasing its quality or even invalidating its results.


United Kingdom

A study conducted at the University of Oxford³¹, based on a literature review and case study, sought to identify whether the incentives offered in research are unethical or lead to a corruption of judgment. The study revealed that, for the most part, the use of incentives to recruit and retain research subjects is harmless. However, some situations were identified in which it is not:

“Specifically, incentives become problematic when conjoined with the following factors, singly or in combination with one another: where the subject is in a dependency relationship with the researcher, where the risks are particularly high, where the research is degrading, where the participant will only consent if the incentive is relatively large because the participant’s aversion to the study is strong, and where the aversion is a principled one.”


United States of America

A study published in 2012³² aimed to determine the effects of incentives on research results. To this end, the authors worked with the main bibliographic reviews published on the subject since 2002 and supplemented them with more recent information.

In general, the review suggests that the impact of monetary incentives (payments, gifts, or sweepstakes) on the quality and content of questionnaire results is neutral. Many experimental studies (1, 2, 3, 4) have tried to show interference, but the finding has held. However, the article³² makes clear that, in a few cases, incentives can cause problems in the composition of the sample (which does change the results of the research).

This study is extremely thorough, addressing incentives for web questionnaires, cell-phone questionnaires, panel surveys, interviewer-mediated surveys, and mail surveys.

“The foregoing review enables us to draw six basic conclusions about the influence of incentives on survey response.

  1. Incentives increase response rates to surveys in all modes, including the Web, and in cross-sectional and panel studies.
  2. Monetary incentives increase response rates more than gifts, and prepaid incentives increase them more than promised incentives or lotteries, although they are difficult to implement in Web surveys.
  3. There is no good evidence for how large an incentive should be. In general, though response rates increase as the size of the incentive increases, they do so at a declining rate.
  4. Relatively few studies have evaluated the effect of incentives on the quality of response. Most studies that have done so have found no effects, although the variables used to assess quality have generally been limited to item nonresponse and length of responses to open-ended questions. Research is needed on what effect if any, incentives have on reliability and validity.
  5. Relatively few studies have examined the effect of incentives on sample composition and response distributions, and most studies that have done so have found no significant effects. However, such effects have been demonstrated in some studies in which the use of incentives has brought into the sample larger (or smaller) than expected demographic categories or interest groups. Incentives, thus, have clear potential for both increasing and reducing nonresponse bias.”

The article also presents an interesting note:

“Rosenberg (2004)³² report that patients who received a $5 incentive in advance letters sent to a random half of colorectal cancer patients scheduled to be interviewed by telephone participated in the survey at a significantly lower rate compared with patients who did not receive an incentive, whereas an incentive increased participation in the control group (nonpatients). They speculate that when an adequate motive for responding exists, sending an incentive may actually reduce that motive.”


This article³³ deals with the effects that offering incentives for research participation can have on young people (adolescents and children). The author reviewed the main works on the subject and concluded that, with this audience, incentives may have an undue effect on results and end up invalidating the research.

“Fisher’s (2003)³³ survey of adolescents and parents about research participation is the only empirical study identified for this review in which respondents were asked about the potential of financial incentives to influence the validity of research findings. Parents and youth were both split about whether other teenagers may be more likely to provide honest survey responses if paid. African American youth were more likely than their parents to report a belief that incentives would encourage more honesty. White parents were less likely than Asian, African American, or Hispanic parents to believe adolescents would provide responses they thought researchers wanted if offered an incentive; however, responses to the question were divided overall. A majority of adolescents and parents in every ethnic category reported a belief that some youth would be willing to lie to enter a paid research study, and low socioeconomic status was associated with an increased likelihood of agreement to the question among Hispanic respondents. Grady (2005)³³ makes a similar caution. She notes that some prior research has indicated potential research participants are willing to misrepresent themselves to investigators, but whether money would increase the willingness is unknown. In a conceptual article in which ethical recommendations are provided, Kendall and Suveg (2008)³³ also write that potential participants (both youth and their parents) may lie to enter research studies to receive payments if they are experiencing economic hardship, which is supported by the evidence from Fisher’s (2003)³³ study. Grady (2005)³³ suggests sufficient attention be provided to appropriate eligibility criteria to counteract this concern. She also argues, however, that the prospect of misrepresentation may be even more likely by research participants who see therapeutic value in being a part of the study.”


According to the National Business Research Institute (NBRI)34, offering payment or another type of incentive changes research results, even if the change is a positive one.

“Several studies have indicated that the use of incentives reduces to some extent item non-response and “bad answers”, such as “don’t know” or “no answer”. It was also noted in a study published by Public Opinion Quarterly that respondents who received incentives give lengthier answers to open-ended questions. There are logical reasons for these findings. When you offer someone an incentive, they will view completing the survey as returning a favor and will feel obligated to do a good job. This is known as the “norm of reciprocity”. Respondents who receive an incentive are also more likely to say the survey subject matter was interesting, and this causes them to place a greater value on their task. Though it seems likely that offering an incentive would bring apathetic participants to the study, research has shown otherwise. The data quality with an incentive, therefore, can actually be considered higher than if the incentive was not offered, as respondents have put more thought into answering the survey questions. There is also evidence that providing incentives will increase respondents’ willingness to participate in future studies because they complete the customer survey feeling positive about the overall experience.”


Researchers sought to answer35 whether payment may be associated with participant deception about study eligibility. In this randomized experiment with a nationally representative sample of 2,275 US adults, offers of payment to participate in an online survey were associated with substantial deception by participants about their eligibility compared with the control condition, with the estimated proportion of ineligible individuals engaging in deception ranging from 10.5% to 22.8%. The researchers concluded that payment may be associated with deception about eligibility to participate in a study, which can distort research results through sample population error.



The Swiss Center of Expertise in the Social Sciences (FORS) published an incentive guide for questionnaire surveys in 2019. The document is based on a thorough review of the literature on the topic and aims to lay out the advantages and consequences of using incentives to increase survey response rates. According to the guide36:

“Incentives may impact response behavior because the attention dedicated to the task may increase with an incentive. This increase in attention may reduce errors or the number of missed items. Furthermore, incentives could impact the mood of respondents, which, according to the feelings-as-information hypothesis, influences decision outcomes. (…) In addition, providing an incentive could encourage respondents to provide favorable responses in a desire to please.”

However, the document makes clear to the reader that the sources consulted do not uniformly support these hypotheses. Some studies present experimental evidence but “find only weak indications of these possible effects.”



When it comes to biomedical research, there are many voices opposed to the practice of paying participants. This debate is especially prominent when the sampled population is in a situation of vulnerability. According to an article published by Fiocruz37, the practice can alter research results, in addition to raising ethical problems. Critics of the practice use the expression “undue inducement”:

“Undue inducements could be called ‘coercive offers.’ They are offers because they propose to make the person better off relative to his condition: they offer the subject a good or an option that did not exist before. But they are coercive because, given the subject’s lack of options, the proposal is likely to be the only eligible choice (all victims of coercion have a choice; nevertheless, the consequences of refusing the proposal are the greater evil). For extremely poor people with no medical alternatives, the offer of any medical treatment, even in trials in which they have a 50% chance of receiving no treatment, is better than their current alternative of no medical treatment at all, making agreement to the trial their only real choice. They are coerced into accepting the offer by their miserable conditions. Offers of money or other resources to poor people with little or no alternative can lead them to see only the promised reward, regardless of the conditions for achieving it.”




  1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5406995/
  2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2600442/
  3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4943210/
  4. https://books.google.com.br/books?hl=pt-BR&lr=&id=Y6yhDwAAQBAJ&oi=fnd&pg=PT5&dq=related:TmaqdNikWhcJ:scholar.google.com/&ots=zPsLQu3lTF&sig=Sgotc5Q1r720qE3P0_c9xFkrmvE&redir_esc=y#v=onepage&q=incentives&f=false
  5. https://www.infosurv.com/survey-incentives-to-use-or-not-to-use/
  6. https://www.researchgate.net/profile/Samar_Debakey2/publication/323563954_The_Effectiveness_of_a_Monetary_Incentive_on_Response_Rates_in_a_Survey_of_Recent_US_Veterans/links/5ab41952458515ecebf0f8b6/The-Effectiveness-of-a-Monetary-Incentive-on-Response-Rates-in-a-Survey-of-Recent-US-Veterans.pdf
  7. https://journals.sagepub.com/doi/full/10.1177/1747016115626756
  8. https://www.tandfonline.com/doi/abs/10.1080/13645570802246724
  9. https://www.census.gov/srd/papers/pdf/rsm2007-02.pdf
  10. https://www.sciencedirect.com/science/article/abs/pii/S0895435607001424
  11. https://www150.statcan.gc.ca/n1/pub/12-001-x/2008001/article/10607-eng.pdf
  12. https://research.utoronto.ca/compensation-reimbursement-research-participants
  13. https://www.canadiancentreforhealtheconomics.ca/wp-content/uploads/2017/10/Intro20to20HSRM_CCHE20seminar20Oct202017.pdf
  14. https://www.bcit.ca/files/ir/pdf/survey-guidelines-best-practices.pdf
  15. https://heqco.ca/pub/the-impact-of-incentives-communications-and-task-demand-on-postsecondary-student-participation-in-online-research/?utm_source=Academica+Top+Ten&utm_campaign=1df5919f2a-EMAIL_CAMPAIGN_2017_12_05&utm_medium=email&utm_term=0_b4928536cf-1df5919f2a-51492553
  16. https://link.springer.com/article/10.1186/1756-0500-5-572
  17. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1089193/
  18. http://marketing-bulletin.massey.ac.nz/V21/MB_V21_N1_Dommeyer.pdf
  19. https://www.forethought.com.au/wp-content/uploads/2018/09/Forethought-White-Paper-Questionnaire-Design-and-Implications-on-Data-Quality.pdf
  20. https://www.rse.anu.edu.au/media/3136129/Bleichrodt-Paper-2019.pdf
  21. https://repositorio.unb.br/bitstream/10482/8817/1/2010_ThiagoRochadaCunha.pdf
  22. http://webcache.googleusercontent.com/search?q=cache:dCtKTyAIE-sJ:revistas.unisinos.br/index.php/base/article/download/base.2019.163.04/60747344+&cd=1&hl=pt-BR&ct=clnk&gl=br
  23. https://webcache.googleusercontent.com/search?q=cache:JBeaB-VYQY8J:https://academica-e.unavarra.es/xmlui/handle/2454/26569+&cd=1&hl=pt-BR&ct=clnk&gl=br
  24. https://www.researchgate.net/publication/28319450_Como_mejorar_la_tasa_de_respuesta_en_encuesta_on_line
  25. https://journals.sagepub.com/doi/pdf/10.1177/147078530404600307
  26. https://www.goeritz.net/SSCR4.pdf
  27. https://cdsa.aacademica.org/000-106/392.pdf
  28. https://www.scielo.br/pdf/csp/v31n1/0102-311X-csp-31-01-00039.pdf
  29. https://www.jstor.org/stable/2991257?seq=1
  30. https://www.surveymonkey.com/curiosity/offer-survey-incentives-without-sacrificing-good-data/#:~:text=Monetary%20incentives%20include%20cash%2C%20checks,for%20our%20SurveyMonkey%20Contribute%20panelists.
  31. https://academic.oup.com/jmp/article-abstract/29/6/717/857653?redirectedFrom=fulltext
  32. https://journals.sagepub.com/doi/10.1177/0002716212458082
  33. https://journals.sagepub.com/doi/full/10.1177/1556264619892707
  34. https://www.nbrii.com/customer-survey-white-papers/survey-incentives-response-rates-and-data-quality/
  35. https://jamanetwork.com/journals/jamanetworkopen/article-abstract/2722570
  36. https://forscenter.ch/fors-guides/fg-2019-00008/
  37. https://www.arca.fiocruz.br/bitstream/icict/17574/2/6.pdf