Parker v. Search Engines, Part II: Challenge to Search Engine Caching Dismissed on Most (But Not All) Grounds

22 02 2009

The practice of search engine crawling and caching of Web site content has infrequently been litigated. (The Perfect 10 case is a significant exception.) This may be because most Web site operators want their content to be indexed and available on search engines. Those Web site operators that do not want their content copied and indexed can stop crawling and caching by deploying a robots.txt file. By generally accepted convention, most search engine crawlers consult a site's robots.txt file for instructions from the Web site operator as to whether, and to what extent, Web site content may be copied and indexed. A "Disallow" directive in robots.txt tells a crawler not to fetch the listed paths; the related "noindex" instruction is, strictly speaking, a robots meta tag placed in a page's HTML rather than a robots.txt directive, and it tells a crawler that a page it has fetched should not be included in the search index.
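To make the convention concrete, here is a minimal sketch of how a well-behaved crawler checks robots.txt before copying a page, using Python's standard-library parser. The bot name, site, and rules are hypothetical examples, not anything from the case.

```python
# Sketch: a polite crawler consulting robots.txt before fetching pages.
# The rules below are a hypothetical example of a site that permits
# crawling generally but excludes one directory.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler asks before fetching; the public page is allowed,
# the excluded directory is not.
print(parser.can_fetch("ExampleBot", "https://example.com/articles/post.html"))  # True
print(parser.can_fetch("ExampleBot", "https://example.com/private/draft.html"))  # False
```

A site that deploys no robots.txt at all leaves a crawler with no stated restrictions, which is the factual posture that led the Parker court to find an implied license.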

In Parker v. Yahoo!, Inc., 2008 U.S. Dist. LEXIS 74512 (E.D. Pa. Sep. 26, 2008), the district court held that a Web site operator’s failure to deploy a robots.txt file containing instructions not to copy and cache Web site content gave rise to an implied license to index that site. The court did say, however, that the implied license in the case may have, at some point, been terminated by the operator.


The content in this post was found at the linked source and was not authored by the moderators of this blog. Clicking the title link will take you to the source of the post.


