This study, written with Nathaniel Rivers, examines how the complex concept of readability emerges on Amazon’s global online crowdsourcing marketplace, Mechanical Turk (MTurk), through interactions among task requesters, respondents (turkers), and non-human elements, including algorithms, human-computer interfaces, and platform conventions. It does so by collating 2,200 surveys, posted to MTurk in multiple permutations, that asked respondents to produce and evaluate readable summaries of four types of media content and to report their perceptions of these tasks. Findings suggest that readability manifests variably on MTurk through alignments of rhetorical elements, including metadiscourse, enthymeme, and exigence. The study interprets these findings through Timothy Ingold’s (1993) anthropological concept of the taskscape, informing the ongoing theorizing of situation in rhetoric studies and helping identify a productive role for the discipline within scholarly conversations about Natural Language Processing (NLP) systems and automated text-generation applications.