Discussion Questions

Read the attached article, browse the website www.mturk.com (you do not have to create an account), and answer the following questions:

1. What are MTurk and other online participant recruitment sites? How are they different from the typical college sample?

2. What advantages do these recruitment sites offer? What are some disadvantages?

Each answer should be 250–300 words. No direct quotes or plagiarism. Scholarly sources only.

Amazon’s Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data?

Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling Department of Psychology, University of Texas at Austin

Abstract Amazon’s Mechanical Turk (MTurk) is a relatively new website that contains the major elements required to conduct research: an integrated participant compensation system; a large participant pool; and a streamlined process of study design, participant recruitment, and data collection. In this article, we describe and evaluate the potential contributions of MTurk to psychology and other social sciences. Findings indicate that (a) MTurk participants are slightly more demographically diverse than are standard Internet samples and are significantly more diverse than typical American college samples; (b) participation is affected by compensation rate and task length, but participants can still be recruited rapidly and inexpensively; (c) realistic compensation rates do not affect data quality; and (d) the data obtained are at least as reliable as those obtained via traditional methods. Overall, MTurk can be used to obtain high-quality data inexpensively and rapidly.

Keywords Amazon Mechanical Turk, Internet, online, web, data collection, research methods

Amazon's Mechanical Turk (www.MTurk.com) is a novel, open online marketplace for getting work done by others. Here, we describe and evaluate the potential contributions that MTurk might make in psychology and other social sciences as a site for Web-based data collection.

Introduction to MTurk

How Does MTurk Work?

MTurk functions as a one-stop shop for getting work done, bringing together the people and tools that enable task creation, labor recruitment, compensation, and data collection. The site boasts a large, diverse workforce consisting of over 100,000 users from over 100 countries who complete tens of thousands of tasks daily (Pontin, 2007). Individuals register as "requesters" (task creators) or "workers" (paid task completers). Requesters can create and post virtually any task that can be done at a computer (e.g., surveys, experiments, writing) using simple templates or technical scripts, or by linking workers to external online survey tools (e.g., SurveyMonkey). Workers can browse available tasks and are paid upon successful completion of each task. Requesters can refuse payment for subpar work. Being refused payment has negative consequences for workers because requesters can limit their tasks to workers with low refusal rates.

How Are Workers Compensated?

Requesters deposit money into an account using a credit card. Requesters set the compensation amount prior to posting a task; payments can be awarded automatically or manually based on the quality of each worker submission. Amazon charges a 10% commission.
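As a rough budgeting illustration of the fee structure just described, total requester cost is simply the number of participants times the payment, plus the 10% commission. The sample size and pay rate below are hypothetical, not taken from the article:

```python
def study_cost(n_participants, payment_per_task, commission=0.10):
    """Total requester cost: worker payments plus Amazon's commission."""
    return n_participants * payment_per_task * (1 + commission)

# Hypothetical study: 60 workers paid 50 cents each
print(round(study_cost(60, 0.50), 2))  # → 33.0
```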

Why Do Workers Participate?

Compensation in MTurk is monetary, but the amount awarded is typically small (e.g., nickels and dimes for 5–10 minute tasks). Our analyses (see online supporting materials at http://pps.sagepub.com/supplemental) of worker motivation suggest that workers are internally motivated (e.g., by enjoyment).

Corresponding Author: Michael Buhrmester, Department of Psychology, University of Texas at Austin, 1 University Station A8000, Austin, TX 78712. E-mail: buhrmester@gmail.com

Perspectives on Psychological Science 6(1) 3–5. © The Author(s) 2011. Reprints and permission: sagepub.com/journalsPermissions.nav. DOI: 10.1177/1745691610393980. http://pps.sagepub.com

Evaluating the Quality of MTurk Data

How Do MTurk Samples Compare With Other Samples?

Commentators have long lamented the heavy reliance on American college samples in the field of psychology (Sears, 1986) and, more generally, on samples drawn from a small sector of humanity (Henrich, Heine, & Norenzayan, 2010). Recent evidence suggests that collecting data via the Internet, although far from perfect, can reduce the biases found in traditional samples (Gosling, Vazire, Srivastava, & John, 2004).

To examine how MTurk samples compare with the diversity of standard Internet samples, we compared the demographics of 3,006 MTurk participants with those in a large Internet sample (Gosling et al., 2004). MTurk participants came from over 50 different countries and all 50 U.S. states. Gender splits were similar in the standard Internet (57% female) and MTurk (55% female) samples. A greater percentage of MTurk participants were non-White (36% vs. 23%), and the percentage of non-Americans was almost equal (31% vs. 30%). MTurk participants were also older (M = 32.8 years, SD = 11.5) than the Internet participants (M = 24.3 years, SD = 10.0). In short, MTurk participants were more demographically diverse than standard Internet samples and significantly more diverse than typical American college samples.

How Do Compensation Amount and Task Length Affect Participation Rates?

MTurk's major appeal is its potential for collecting data inexpensively and rapidly. To investigate participant response rates at various compensation levels and task lengths, and to explore the tradeoffs between these parameters, we administered personality questionnaires via MTurk in a 3 × 3 design, crossing compensation level (2, 10, or 50 cents) with estimated task-completion time (5, 10, or 30 minutes).

There was a main effect of compensation level, F(2, 6) = 20.67, p < .01, with participation rates lowest in the 2-cent condition (see Table 1). With the exception of the 2-cent condition (due to a possible floor effect), there was a main effect of survey length such that response rates were lowest for the 30-minute survey, F(1, 6) = 7.05, p < .05. Note that although participation rates decreased as a function of both payment amount and survey length, we were still able to recruit participants for all conditions.

To explore the lower limits of compensation for task completion, we tested whether MTurk workers would complete a task for the lowest allowable payment rate: a penny. We posted a task that paid workers 1 cent for providing two pieces of information: age and gender. In 33 hours, we collected 500 responses, or about 15 participants per hour. These results demonstrate that workers are willing to complete simple tasks for virtually no compensation, again suggesting that workers are not driven primarily by financial incentives.

These analyses suggest that participants can be recruited rapidly and inexpensively. Participation rates are sensitive to compensation amounts and time commitments, but our findings demonstrate that it is possible to collect decent-sized samples via MTurk for mere dollars. Even when offering just 2 cents for a 30-minute task, we accumulated 25 participants, albeit at a slower rate (i.e., in about 5 hours of posting time). Moreover, by increasing the compensation just slightly (e.g., to 50 cents), we were able to obtain the same number of participants in less than 2 hours of posting time.
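The tradeoff just described can be turned into a simple planning estimate using the per-hour submission rates reported in Table 1. The helper function here is our own illustrative sketch, not anything MTurk provides:

```python
# Submitted surveys per hour of posting time for the 30-minute survey,
# taken from Table 1 of the article
RATE_30MIN = {"2 cents": 5.3, "50 cents": 16.7}

def hours_to_recruit(n, rate_per_hour):
    """Estimated posting time (in hours) needed to collect n submissions."""
    return n / rate_per_hour

print(round(hours_to_recruit(25, RATE_30MIN["2 cents"]), 1))   # → 4.7 (about 5 hours)
print(round(hours_to_recruit(25, RATE_30MIN["50 cents"]), 1))  # → 1.5 (under 2 hours)
```

The two estimates match the article's report of roughly 5 hours at 2 cents versus under 2 hours at 50 cents for 25 participants.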

How Does Compensation Amount Affect Data Quality?

To examine compensation-level effects on data quality, we computed alpha reliabilities for data collected at three levels of compensation (2, 10, and 50 cents) in a set of six personality questionnaires administered to MTurk participants. The mean alphas were within one hundredth of a point across the three compensation levels (see online supporting materials), suggesting that even at low compensation rates, payment levels do not appear to affect data quality; the only drawback appears to be data-collection speed (as shown in the previous section), a finding consistent with previous research on nonsurvey tasks (Mason & Watts, 2009).
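The alpha reliabilities discussed here follow the standard Cronbach's alpha formula. Below is a minimal sketch applied to made-up scores, not the authors' actual data or analysis code:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items matrix of scale scores."""
    k = len(rows[0])                                  # number of items
    item_vars = [variance(col) for col in zip(*rows)]  # per-item sample variances
    total_var = variance([sum(row) for row in rows])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents answering 3 perfectly consistent items
scores = [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]
print(round(cronbach_alpha(scores), 3))  # → 1.0
```

Because the toy items are perfectly consistent, alpha comes out at its maximum of 1.0; real scales in the .73–.93 range the authors report indicate good to excellent internal consistency.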

Do MTurk Data Meet Acceptable Psychometric Standards?

The absolute levels of the mean alphas were in the good to excellent range (α = .73–.93; mean α = .87 across all scales and compensation levels). Moreover, with three exceptions, the MTurk alphas were within two hundredths of a point of the traditional-sample alphas (see online supporting materials).

Table 1. Effects of Compensation Amount and Task Length on Participation Rates (Submitted Surveys per Hour of Posting Time)

Compensation amount   Short survey (5 min)   Medium survey (10 min)   Long survey (30 min)
2 cents                      5.6                    5.6                     5.3
10 cents                    25.0                   14.3                     6.3
50 cents                    40.5                   31.6                    16.7

Note. Surveys consisted of a series of demographic questions and personality scales. For the medium-length survey, 60 participants were recruited per compensation amount. For the short and long surveys, 25 participants were recruited per compensation amount.

To provide another index of data quality, we estimated test–retest reliabilities in a set of individual difference measures administered 3 weeks apart via MTurk. Participants were paid 20 cents for completing Wave 1 and 50 cents for Wave 2 (60% completed both waves). Test–retest reliabilities were very high (r = .80–.94; mean r = .88) and compared favorably with test–retest correlations obtained via traditional methods (see online supporting materials).
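A test–retest reliability of this kind is simply the Pearson correlation between Wave 1 and Wave 2 scores. The sketch below uses made-up scores for five hypothetical participants, not the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Wave 1 and Wave 2 scale scores for five participants
wave1 = [3.0, 4.2, 2.8, 5.0, 3.6]
wave2 = [3.1, 4.0, 3.0, 4.8, 3.9]
print(round(pearson_r(wave1, wave2), 2))  # → 0.98
```

A value this close to 1 indicates that participants' relative standing on the measure was highly stable across the two waves, which is what the authors' .80–.94 range conveys.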

Summary and Conclusions

Our investigation into MTurk as a potential mechanism for conducting research in psychology and other social sciences yielded generally promising findings. The site has the necessary elements to successfully complete a research project from start to finish. Our analyses of demographic characteristics suggest that MTurk participants are at least as diverse as, and more representative of noncollege populations than, typical Internet and traditional samples. Most important, we found that the quality of data provided by MTurk met or exceeded the psychometric standards associated with published research.

Still, the process of validating MTurk for use by researchers has only just begun. Some of MTurk's current strengths, namely the open market design and the large, diverse participant pool, may change in the future (see online supporting materials for further discussion). That said, if future data continue to be as promising as they have proven here and elsewhere (e.g., Mason & Watts, 2009), we anticipate that MTurk will soon become a major tool for research in psychology and elsewhere in the social sciences.


Acknowledgments

We thank Matthew Brooks and William B. Swann, Jr., for feedback on an earlier version of this article.

Declaration of Conflicting Interests

The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article.


References

Gosling, S.D., Vazire, S., Srivastava, S., & John, O.P. (2004). Should we trust Web-based studies? A comparative analysis of six preconceptions about Internet questionnaires. American Psychologist, 59.

Henrich, J., Heine, S.J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 62–135.

Mason, W.A., & Watts, D.J. (2009). Financial incentives and the "performance of crowds." Association for Computing Machinery Explorations Newsletter, 11(2), 100–108.

Pontin, J. (2007, March 25). Artificial intelligence: With help from the humans. The New York Times. Retrieved from http://www.nytimes.

Sears, D.O. (1986). College sophomores in the lab: Influences of a narrow data base on social psychology's view of human nature. Journal of Personality and Social Psychology, 51, 515–530.

