My first episode probably ends up asking more questions than it answers. While considering whether technology has the agency to be a Sophist (and adapt itself to our needs), I return to Brooke and Lanham, momentarily touch on Miller, and then primarily use White as a guide to understand how kairos, opportunity, seduction, and relational rhetoric might look when deployed by technology as a free agent outside of human utilization.
Also, here is a late news story that didn't make it into the mix (I couldn't fit it into the seven minutes of talk time), but one that is provocative for the content/style distinction in relation to Google rather than Amazon ('dirty' secrets about gaming search results and essentially manipulating chronos [long-term search algorithm history] for kairos [the results Google displays]):
http://www.nytimes.com/2011/02/13/business/13search.html?_r=1&smid=fb-nytimes&WT.mc_id=BU-SM-E-FB-SM-LIN-SOI-021311-NYT-NA&WT.mc_ev=click
Introductory Clip from Sealab 2021 "Red Dawn"
Music is "A Fistfull of Rupees" - Fatboy Roberts, Geek Remixed 3
Comments
Extending the ethical question
I really enjoyed the discussion that broserf_uprising offers in this week’s podcast. The part I would like to emphasize here is where broserf_uprising points to Google Instant and compares it with Amazon. I feel that this discussion works well with OrganizedChao’s and Moorpark’s talk about the audience while also opening up the discussion to incorporate steve.urkel’s focus on the ethical practices and theory behind technology. The way that broserf_uprising compares Amazon to Google Instant makes it possible to see that Google Instant is much more rhetorically driven than Amazon.com and seeks to orient the user in a number of ways.

Although, as broserf_uprising notes, this sort of instant adaptation can certainly be useful to the user in that it responds to the interests the user appears to have, there can also be some downsides to such a responsive site. As steve.urkel points out, the sort of power that goes along with these technological forces presents some serious ethical questions. If the site has the ability to recognize and record users’ use of the site and adapt accordingly, it also has the ability to alter itself to better persuade and manipulate the user. Broserf_uprising also points out that users’ interests can change at any time, which presents something of a problem for Google Instant, since it relies on a degree of predictability from the user. Still, this adaptability allows the site to direct the user to certain sites and products. I think it is important to consider the factors that go into the options Google Instant offers and/or presents to the user, as well as what Google Instant is not offering and why. Even though search engines like Google Instant can appear helpful, and in some cases most definitely are, they also limit the user in many ways, and this is where the question of ethics becomes significant.
Overall, broserf_uprising’s content was helpful not only in clarifying this week’s readings but also in extending the conversation.
Thank you!
Kairotic Search Engines = Wave of the Future?
Broserf_Uprising, interesting points here (and introductory score). You begin to draw a productive distinction between Amazon and Google as sites of sophistic practice by distinguishing between various instances of chronos and kairos. Drawing from White, a sophistic practice is one where “truth is relative to the speaker and the immediate context” (White 15). Sophistic practices are therefore contextually contingent, and your podcast specifically mentions Gorgias as an exemplar, from whom White draws to situate kairos as both opportunity and unpredictability (20).

I think this limited prescriptive aspect of kairos threatens the database-driven model of prediction embedded within Amazon’s “You May Also Like” feature, which operates, as you suggest, on records of your past browsing, purchasing, and personal preference history. These databases are essentially predictions based on past performance, which a kairotic approach would marginalize. Google’s “Instant Search” feature, while deceptively appearing spontaneously generative, is also derived from database functionality, albeit one compiled from millions of users. These results, derived from past similar searches, may be modified by your own preferences and history (if you are signed in), but I would argue that they are no more kairotic than Amazon’s, in that they are still attempting to be predictive based only on previous occurrences. If we narrow our definition of kairos to merely the opportune moment, one that continuously adapts to user input, then perhaps Google Instant is more adroitly kairotic than Amazon, but otherwise I find the two to differ only in degree.

I don’t know what a truly kairotic search engine would look like, but I think it would have to be popularly decided, perhaps a compendium of user-added values democratically assembled and ranked based on your specific needs, more like the community aggregator sites mentioned in the Rheingold reading. Note: should anyone want to get some angel investors on this and start a Kairotic Search Engine, I’m game :)
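To make the “difference of degree” point concrete, here is a deliberately toy sketch, with hypothetical function names and made-up data (not how Amazon or Google actually implement these features), showing that both a “You May Also Like” recommendation and an Instant-style suggestion can be reduced to ranking candidates by counts of past occurrences:

```python
from collections import Counter

# Hypothetical, simplified sketches -- not Amazon's or Google's actual systems --
# illustrating that both models rank candidates from records of past behavior.

def also_like(purchase_history, co_purchase_counts, top_n=3):
    """Amazon-style 'You May Also Like': rank items most often co-purchased
    with things already in this user's history."""
    scores = Counter()
    for item in purchase_history:
        scores.update(co_purchase_counts.get(item, {}))
    return [item for item, _ in scores.most_common(top_n)]

def instant_suggest(prefix, query_log_counts, top_n=3):
    """Google-Instant-style suggestion: rank past queries (aggregated across
    many users) that begin with the prefix typed so far."""
    matches = {q: n for q, n in query_log_counts.items() if q.startswith(prefix)}
    return sorted(matches, key=matches.get, reverse=True)[:top_n]

# Toy data: both functions consult nothing but accumulated past occurrences.
co_purchases = {"gorgias": {"phaedrus": 5, "rhetoric": 3}}
query_log = {"kairos definition": 120, "kairos rhetoric": 80, "kafka": 300}

print(also_like(["gorgias"], co_purchases))   # ['phaedrus', 'rhetoric']
print(instant_suggest("kai", query_log))      # ['kairos definition', 'kairos rhetoric']
```

In both sketches the “responsiveness” is just a lookup over stored history; only the granularity of the lookup (whole purchase records versus per-keystroke prefixes) differs, which is the sense in which I read the two as differing in degree rather than kind.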
The basic premises of your podcast bring up a number of related questions, though, especially in regard to what it might mean if technology WERE found to be capable of acting sophistically. What would that mean for education, science, industry, or…religion? I’ll drill this down a bit further. In attempting to locate technological agency in the aforementioned web examples, you start by hypothesizing that such agency can exist, while continuously acknowledging how problematic this assumption is (or, as sci-fi genres put it, the “Terminator” problem). Your entire podcast hinges upon the premise that technology, in being able to enact kairotic action, possesses intrinsic agency. Still, I was struck by how often you returned to hedging this central claim, not out of any structural critique, but because I think it nicely illustrates the philosophical difficulties that this assumption raises.

The ability to create, use, and manipulate tools in an evolving (or kairotic/opportunistic) manner has long been central to a social definition of humanity. As our previous readings suggest, many of our tools have the capacity to change the way we think and act; for instance, Ong makes this claim about writing, while Johnson describes the naturalization of a wide variety of technological instantiations within our social and cultural structures. Perhaps it isn’t a big step to extrapolate further and suggest that the naturalization of technology use (for instance, language itself) constitutes identity itself, and that social order is determined in part by the skillful use of tools in kairotic contexts, comprising metis or arete. Any assumption that technology may be(come) capable of performing these same tasks therefore necessarily produces social anxiety. It is in many respects the threat of competition and the fear of losing our biological exceptionalism. This line of thought is very visible in a recent NYT article discussing the potential for economic competition among various types of learning machines, found here: http://www.nytimes.com/2011/02/15/science/15essay.html?_r=1&src=dayp
We can easily complicate these assumptions, beyond merely pointing out that other species also use tools, by suggesting that tool use and kairos are not equivalent, but I wonder… is this also a matter of degree?
"Legen...wait for it, and I hope you're not lactose-intolerant because the last part is...dary!"