[With due thanks to Jason Wilson's brilliant post, "Secondary Materials are Like Cheeseburgers," I propose below a concept of how law librarians, law review editors, scholars and bloggers can cooperate to build a better (well, new!) cheeseburger. These are random thoughts. I welcome feedback. RL]
Take Web 2.0 + Digital Commons + Durham Statement; combine them, process until well done, and place between slices of WWW, Web 2.0 and app technology.
The Next Generation of Secondary Materials
It is generally understood that secondary materials serve two very important purposes (beyond earning money for publishers and money and prestige for authors): First, a secondary resource, such as a treatise, practice material, looseleaf or scholarly article, provides users with clear statements of the meaning and application of the legal principles or concepts reflected in court opinions, statutes and administrative materials. They are essentially syntheses of rules and ideas expressed in these disparate resources, which are created and published by necessarily disparate entities for necessarily disparate audiences with necessarily disparate interests.
Second, they provide important indexing of these disparate resources through citation and analysis of the various materials. For example, if you are interested in finding the most significant cases explaining the difference between civil and criminal contempt, you need only read the relevant chapter of Wright and Miller’s Federal Practice and Procedure, because it is there that recognized experts in the field not only express their opinions as to those differences, but also provide citations to the authorities that support their conclusions and analysis. Indeed, there may well be more cases available on the topic, but we trust that the ones cited by the authors of this treatise are the most important and most significant.
As publishers grapple with a variety of pressures from shareholders and corporate boards, as well as with changes in the technology and practical aspects of publishing, they have tended to respond with practices and policies that run contrary to their underlying function, which is to offer research tools to lawyers, students, practitioners and lay people who come to them for answers to pressing legal questions. Instead of serving the legal community, as they once did, by offering helpful tools and distributing them as widely as possible, they are narrowing their distribution to customers who can and must pay.
It is my opinion that several recent advances in “technology” generally can provide us with a new mode of secondary materials that may be as useful as traditional secondary materials, but freely available to all.
The New Mode of Secondary Legal Materials
OK, here's the idea. What we're seeking is modern indexing to help the researcher focus on the most important cases - and, if possible, clear commentary about what the cases mean.
Law review articles and blogs can give us a glimpse of which cases are important by examining which cases are written about and mentioned in articles and blogs. It is possible that wire services, too, can help identify which cases are important by analyzing the frequency with which cases are reported and commented upon. (There are significant problems with using data on cases reported in commercial media sources, but the problems can be accounted for in various ways.)
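The frequency idea above is simple to sketch. The snippet below counts how often each case citation appears across a set of article or blog texts; the citation pattern is a rough illustration (nothing like a complete Bluebook parser), and the sample posts are invented for demonstration.

```python
import re
from collections import Counter

# Rough pattern for a few common reporters (U.S., F.2d/F.3d, S. Ct.);
# a real system would need a much fuller citation grammar.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\d[a-z]*|S\. Ct\.)\s+\d{1,4}\b")

def citation_frequencies(texts):
    """Count case citations across a collection of article/blog texts."""
    counts = Counter()
    for text in texts:
        counts.update(CITATION_RE.findall(text))
    return counts

# Invented sample posts, for illustration only.
posts = [
    "The Court revisited 384 U.S. 436 in light of 530 U.S. 428 today.",
    "Commentators disagree about 384 U.S. 436 and its scope.",
]
print(citation_frequencies(posts).most_common(2))
# → [('384 U.S. 436', 2), ('530 U.S. 428', 1)]
```

Even this crude tally surfaces the most-discussed case first, which is exactly the signal a researcher wants as a starting point.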
The proliferation of digital law reviews in digital commons and services like SSRN, as well as articles and commentary on blogs can provide the substance for building a free database that consists of analysis of primary materials and commentary on policies and procedures.
Given the disparate forms of materials that are readily available on the web already, it is my opinion that new technology can be developed to efficiently mine them and give researchers valuable information as they conduct research on any topic. Essentially, this new form of research tool would aggregate material from many sources, index it, and offer searching and sorting in the forms most beneficial to researchers.
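To make the aggregate-then-index idea concrete, here is a minimal sketch: documents from different source types (a law review, a blog) go into one shared inverted index, and searches can optionally be filtered by source. The field names and sample documents are assumptions for illustration, not a proposed schema.

```python
from collections import defaultdict

def build_index(docs):
    """Build a simple inverted index: word -> set of document positions."""
    index = defaultdict(set)
    for i, doc in enumerate(docs):
        for word in doc["text"].lower().split():
            index[word].add(i)
    return index

def search(index, docs, term, source=None):
    """Return matching documents, optionally restricted to one source type."""
    hits = [docs[i] for i in sorted(index.get(term.lower(), []))]
    if source:
        hits = [d for d in hits if d["source"] == source]
    return hits

# Invented sample documents from two different source types.
docs = [
    {"source": "law_review", "title": "Contempt Reconsidered",
     "text": "civil and criminal contempt distinguished"},
    {"source": "blog", "title": "Today in Contempt",
     "text": "a new contempt ruling this morning"},
]
idx = build_index(docs)
print([d["title"] for d in search(idx, docs, "contempt")])
# → ['Contempt Reconsidered', 'Today in Contempt']
```

The point of the sketch is that once disparate materials share one index, the researcher queries a single tool rather than a dozen silos; everything else (ranking, metadata, deduplication) is refinement on top of this core.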
In order for such a project to be successful, several foundational things should happen:
1. Law reviews should adopt the practice of asking authors not only to supply abstracts of articles, but also to tag them with headings from an approved list of subject headings. They should also agree to tag digital articles with metadata that accurately reflects author and copyright information.
2. An approved list of metadata tags could also be circulated among bloggers and periodicals that produce digital editions.
3. Articles should be mined for citation data, including references to cases, courts, judges, scholars, etc.
4. Search results should be able to be ranked based on a variety of factors, including reputation and productivity of the authors and citation frequency.
5. Full text of cases should be indexed by computer and archived in a secure location. Search results should be available either as full text or as citation lists.
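Point 4's ranking idea can be sketched briefly. The snippet below scores results as a weighted blend of author reputation, author productivity, and citation frequency; the weights, field names, and sample values are all assumptions for illustration, not a proposed standard, and real data would need normalization before blending.

```python
# Illustrative weights; a production system would tune these empirically.
WEIGHTS = {"reputation": 0.5, "productivity": 0.2, "cited_by": 0.3}

def score(item):
    """Weighted sum of ranking factors (values assumed pre-scaled to 0..1)."""
    return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

# Invented sample results with pre-scaled factor values.
results = [
    {"title": "A", "reputation": 0.9, "productivity": 0.4, "cited_by": 0.7},
    {"title": "B", "reputation": 0.6, "productivity": 0.9, "cited_by": 0.2},
]
ranked = sorted(results, key=score, reverse=True)
print([r["title"] for r in ranked])
# → ['A', 'B']
```

The design choice worth noting: keeping the factors separate and blending them at query time means the community could debate and adjust the weights openly, something closed commercial relevance algorithms do not allow.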
Fantasy or Possibility?
Is it possible to build a research tool that aggregates and searches information from such disparate sources? What’s more, what will such a service look like, and will it actually be valuable to researchers?
It is clear that collectively, blogs, law reviews, digital commons and various websites that report and comment upon legal matters cover a substantial portion of the most important cases of the day. A systematic method of crawling and indexing this content should provide researchers with a viable starting point for researching any current legal topic.
With Google Scholar and efforts on the part of law reviews and digital commons to build retrospective collections of secondary materials, the potential exists to build a rich, publicly accessible free resource.
Looking forward, legal scholars and other experts may endeavor to regularly comment upon breaking cases and other developments, thus providing a continuing source of present commentary about modern legal matters.
Modern developments in AJAX and HTML5, as well as new app functionality on iOS and Android devices, have the potential to support entirely new research tools. Unlike the present generation of online databases, which are essentially flat text files built from print treatises, these new tools can give people access to a new concept of secondary materials in ways that would have been hard to imagine just a few years ago.
We’ve often heard of the difficulty of building a better mouse trap. Is it possible to build a better cheeseburger?