
Against Lynch: An Apology for the Internet

This week’s discussion is a review and criticism of Clifford Lynch’s article “Searching the Internet.” The article contains a number of fallacies, or more precisely anachronisms, which undermine its central claim that the internet is not similar to a library.

Upon first reading the article I was struck by its anachronistic websites and technologies. Yes, AltaVista and Lycos, the two internet search engines it mentions, do in fact still exist, but since the article’s presumed writing they have been absorbed by larger companies and have fallen into relative obscurity. Google, the most advanced (in terms of discrimination, disambiguation, aggregation, etc.) and prolific engine, is not given even lip service, presumably because it did not exist when the article was written, and Netscape Navigator is mentioned as the most common HTML client.

While this article is an interesting look back on the fearful, Terminator 2-inspired, Skynet-fearing days of the early internet, I believe it is counterproductive to base our opinions of the internet today on such outdated commentary. Why? Since the late 1990s there has been a fundamental revolution in the nature of the internet: while in those former years the internet was un-policed, today the largest and most commonly accessed bastions of knowledge are policed. They are policed in the sense that the community can evaluate content, helping to discriminate false and unreliable material from good, quality material. See: Wikipedia, WebMD, Amazon.com, and Twitter (to a lesser degree). This capacity for interaction did not exist in the 1990s, and it is understandable that Mr. Lynch feared the democratic chaos of the internet would misinform those using it. Today, if someone wants to learn about something and puts keywords into Google, the first links will most likely be high quality and trustworthy, thanks to the reliable technology behind our modern search engines.

Google discriminates flimflam from gold by using a number of principles. Sites which have been in existence longer take precedence over newer sites, and sites which have other quality sites linking to them take precedence over those which do not. You do not get a random dump of links when you search with Google; you are returned an examined list. Notice that if you search for “cancer treatment,” nothing on faith healing comes up. Why? Because faith healing does not work, and Google knows this algorithmically and quantifiably. So, to summarize: the software we use to discriminate data today also qualifies what is returned to us, and it is buttressed by the collaboration and oversight of an interactive community.
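To make the link-based principle concrete, here is a minimal sketch in the spirit of PageRank, the published algorithm these principles most resemble. Google’s actual ranking system is proprietary and far more elaborate, and every site name below is invented for illustration.

# A toy link-analysis ranker in the spirit of PageRank: a page gains score
# from being linked to by other high-scoring pages. Google's real ranking
# algorithm is proprietary; this only illustrates the principle.

def rank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    score = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_score = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, targets in links.items():
            if targets:  # distribute this page's score among its outlinks
                share = damping * score[page] / len(targets)
                for target in targets:
                    new_score[target] += share
        score = new_score
    return score

# Hypothetical sites: "quackery" links out but earns no inbound links,
# so it keeps only the baseline score and ranks last.
web = {
    "mayoclinic": ["nci"],
    "nci": ["mayoclinic", "webmd"],
    "webmd": ["mayoclinic"],
    "quackery": ["mayoclinic"],
}
for page, s in sorted(rank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(s, 3))

Run against this tiny web, the well-linked sites come out on top while the unlinked one stays at the bottom: the same mechanism, in miniature, that keeps faith-healing pages off the first page of results for “cancer treatment.”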

And who can discount the amazing abilities of Google Scholar and Google Books? I know that I would have been unable to envision, write, and complete my BA thesis without using both of those engines on a daily basis. A traditional library does not allow me to “ctrl-f,” and even the most wisely structured library databases do not allow me to search for an obscure single word or sentence inside the text of a document. I suppose this only becomes apparent if you research truly arcane topics, such as those contained within my thesis. Truth be told, at least 75% of the sources for my thesis derive from the scholarly engines provided by Google.
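What makes that “ctrl-f” across millions of scanned pages possible is, at bottom, an inverted index: a map from every word to the documents that contain it. Here is a minimal sketch under that assumption; the corpus and document names are invented for illustration.

from collections import defaultdict

# A minimal inverted index: look up any word and get back the set of
# documents containing it, instead of scanning every text in turn.

def build_index(documents):
    """documents: dict mapping a document id to its full text."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word.strip(".,;:!?")].add(doc_id)
    return index

# Invented corpus standing in for scanned books.
corpus = {
    "book_a": "The phlogiston theory dominated early chemistry.",
    "book_b": "Lavoisier's experiments refuted phlogiston entirely.",
    "book_c": "A treatise on the migration of storks.",
}
index = build_index(corpus)
print(sorted(index["phlogiston"]))  # ['book_a', 'book_b']

Once the index is built, finding a single obscure term across an entire digitized library is one dictionary lookup, which is faster than a reader could check even one shelf.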

While one could still posit that the internet as a whole is not evolving toward “organized publication and retrieval of information,” and in that sense is not becoming more like a library, one would be hard pressed to make that argument about Wikipedia. While many of the old guard of academia are still suspicious of Wikipedia, as a longtime contributor myself I can say without any doubt that similar collaborative systems will one day replace “curriculum” learning and lectures as we know them. The nature of knowledge is such that one expert is never sufficient to explain the case, regardless of how honed their skills of deduction and research are. No, in order to truly understand a topic, we must have the fluid and dynamic input of all of humanity’s experts. This, in a philosophical nutshell, is what Wikipedia aims to do: to offer all knowledge freely, and to present it in such a fashion that it cannot be censored or made static by the old technologies of print. One cannot demand a citation upon reading a traditional book, or amend the text to make more sense, but on Wikipedia, and by extension the other Wiki projects including Wikibooks, this is possible.

While we might fear the ignorance of the mob and its inability to discriminate data as experts do, it is not the casual viewer who typically edits articles, but professionals. If a random, ill-cited edit occurs, it is usually reverted within a matter of minutes to hours. The slanderous and fallacious claim that “you can put anything on Wikipedia” is easily discounted by a practical example: attempt to vandalize any article and you will be amazed at how fast it is reverted. Wikipedia is the future, and it is the library we have all been hoping for: one which is manned 24/7, for free, by professionals who edit it for love of knowledge rather than for love of money.
