Public library ratings

{{Short description|Evaluation systems for public libraries}}

There are several national systems for assessing, evaluating, or otherwise rating the quality of public libraries.

==United States==

Basic library statistics (not rankings) were initially maintained by the National Center for Education Statistics (NCES); that body continues to collect data for academic libraries, but administration of the Public Libraries Survey and the State Library Agencies Survey was transferred to the Institute of Museum and Library Services (IMLS) in October 2007.[https://nces.ed.gov/surveys/libraries/ NCES: Library Statistics Program] IMLS continues to conduct the Public Libraries Survey and distributes historical survey data dating back to 1988.[https://www.imls.gov/research-evaluation/data-collection/public-libraries-survey IMLS: Public Libraries Survey]

The Library Data Archives, maintained by Robert E. Molyneux, includes longitudinal data sets.Molyneux, Robert E. [https://drdata.lrs.org/ Library Data Archives], continuously updated.

===HAPLR and subsequent debate===

The system that would become Hennen's American Public Library Ratings (HAPLR) was first published in the January 1999 issue of American Libraries by Thomas J. Hennen Jr., director of the Waukesha County Federated Library System in Wisconsin.[https://www.jstor.org/stable/25635283 Go Ahead, Name Them: America's Best Public Libraries], American Libraries Vol. 30, No. 1 (January 1999), pp. 72–76. Libraries were ranked on 15 measures, with comparisons made within broad population categories. HAPLR was updated annually through 2010 and was the focus of widespread professional debate in the field of librarianship.
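HAPLR's actual measures and weights are not reproduced here; the sketch below only illustrates the general form of such an index, a weighted sum of rank positions computed within a single population category, using hypothetical measures, weights, and data rather than Hennen's published values.

<syntaxhighlight lang="python">
# Illustrative sketch only (not Hennen's actual formula or weights): a
# weighted, rank-based index of the general kind HAPLR used, computed
# within one population category. All measure names, weights, and data
# below are hypothetical.

def percentile_ranks(values):
    """Return each value's percentile rank (0-100) within the list."""
    n = len(values)
    return [100.0 * sum(v <= x for v in values) / n for x in values]

def rating_index(libraries, weights):
    """libraries: {name: {measure: value}}; weights: {measure: weight}.
    Returns {name: weighted sum of percentile ranks}."""
    names = list(libraries)
    scores = {name: 0.0 for name in names}
    for measure, weight in weights.items():
        ranks = percentile_ranks([libraries[n][measure] for n in names])
        for name, rank in zip(names, ranks):
            scores[name] += weight * rank
    return scores

# Hypothetical per-capita figures for three libraries in one population category.
libraries = {
    "A": {"circulation": 12.0, "visits": 6.1, "expenditures": 45.0},
    "B": {"circulation": 9.5,  "visits": 7.3, "expenditures": 38.0},
    "C": {"circulation": 15.2, "visits": 5.0, "expenditures": 52.0},
}
weights = {"circulation": 3, "visits": 2, "expenditures": 1}

print(rating_index(libraries, weights))
</syntaxhighlight>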

Oregon State Librarian Jim Scheppke argued that the statistics HAPLR relies on are misleading because they lean too heavily on output measures such as circulation and funding, rather than on input measures such as open hours and patron satisfaction. "To give HAPLR some credit, collectively, the libraries in the top half of the list are definitely better than the libraries in the bottom half, but when it gets down to individual cases, which is what HAPLR claims to be able to do, it doesn't work."Scheppke, Jim. (1999-11-15) “The Trouble with Hennen”, Library Journal 124 (19): p. 36, {{ISSN|0363-0277}}

In contrast, Library Journal editor John N. Berry noted: "Unfortunately, when you or your library receives any kind of honor, it stimulates the flow of competitive hormones in your professional colleagues. This jealousy rears its ugly head in many ways. We've suffered endless tutorials on the defects in Hennen's rankings. So what? They work!"{{Cite web |url=http://www.libraryjournal.com/article/CA158847.html?q=hennen+1999 |title=On the Uses of Recognition |access-date=2007-09-18 |archive-url=https://web.archive.org/web/20110607121337/http://www.libraryjournal.com/article/CA158847.html?q=hennen+1999 |archive-date=2011-06-07 |url-status=dead }} Library Journal April 15, 1999.

Keith Curry Lance and Marti Cox, both of the Library Research Service, took issue with HAPLR's reasoning backwards from statistics to conclusions, pointed out the redundancy of HAPLR's statistical categories, and questioned its arbitrary system of weighting criteria.Lance, Keith Curry and Marti A. Cox. (June/July 2000), “Lies, Damn Lies, and Indexes”, American Libraries 31 (6): p. 82, {{ISSN|0002-9769}}

Hennen responded, saying Lance and Cox seem to suggest "that the job of comparing libraries cannot be done, so I am at fault for having tried. Somehow, unique among American public or private institutions, libraries are just too varied and too local to be compared. Yet despite these assertions, the authors urge individuals to use the NCES Public Library Peer Comparison tool (nces.ed.gov/surveys/libraries/publicpeer/) to do this impossible task."American Libraries (June/July 2000), 31 (6): p. 87.

A 2006 Library School Student Writing Award article questioned HAPLR's weighting of factors, its failure to account for local factors (such as a library's mission) in measuring a library's success, its failure to measure computer and Internet usage, and its lack of focus on newer methods of evaluation, such as customer satisfaction or return on investment.Nelson, Elizabeth. (Winter 2007) “Library Statistics and the HAPLR Index”, Library Administration & Management 21 (1): p. 9, {{ISSN|0888-4463}}

Ray Lyons and Neal Kaske later argued for greater recognition of the strengths and limitations of ratings.Lyons, Ray and Neal Kaske. (Nov/Dec 2008) "Honorable Mention: What Public Library National Ratings Say", Public Libraries: pp. 36–41. They point out that, among other factors, imprecision in library statistics makes rating scores quite approximate, a fact rarely acknowledged by libraries receiving high ratings. The authors also note that HAPLR calculations perform invalid mathematical operations on ordinal rankings, making comparisons of scores between libraries and between years meaningless.
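A minimal example, with hypothetical figures, of the kind of distortion this criticism describes: once raw measures are converted to ordinal ranks, equal rank differences no longer correspond to equal differences in the underlying service levels.

<syntaxhighlight lang="python">
# Hypothetical data: libraries A and B are nearly tied on the underlying
# measure while C lags far behind, yet rank arithmetic shows equal gaps.

circulation_per_capita = {"A": 12.00, "B": 11.95, "C": 3.10}

# Rank 1 = highest value.
ordered = sorted(circulation_per_capita, key=circulation_per_capita.get, reverse=True)
ranks = {name: position + 1 for position, name in enumerate(ordered)}

print(ranks)                    # {'A': 1, 'B': 2, 'C': 3}
print(ranks["B"] - ranks["A"])  # 1 -- same rank gap as below...
print(ranks["C"] - ranks["B"])  # 1 -- ...despite a far larger real gap
</syntaxhighlight>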

===Star Libraries system===

America's Star Libraries and Index of Public Library Service, an alternative system developed by Keith Curry Lance and Ray Lyons, was first introduced in the June 2008 issue of Library Journal.{{Cite web |url=http://www.libraryjournal.com/article/CA6566452.html |title=The New LJ Index |access-date=2019-01-31 |archive-url=https://web.archive.org/web/20090414071628/http://www.libraryjournal.com/article/CA6566452.html |archive-date=2009-04-14 |url-status=bot: unknown }} Library Journal June 15, 2008. The method rates libraries on four equally weighted per-capita statistics, with comparison groups based on total operating expenditures: library visits, circulation, program attendance, and public internet computer use.{{Cite web |url=http://www.libraryjournal.com/article/CA6634704.html |title=Better Than Hennen: LJ introduces "America's Star Libraries," rating public library service |access-date=2019-01-31 |archive-url=https://web.archive.org/web/20090221072451/http://www.libraryjournal.com/article/CA6634704.html |archive-date=2009-02-21 |url-status=bot: unknown }} Library Journal February 15, 2009. The system awards 5-star, 4-star, and 3-star designations rather than numerical rankings. The creators of the LJ Index stress that it does not measure service quality, operational excellence, library effectiveness, or the degree to which a library meets existing community information needs.
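The sketch below is a simplified illustration, not the published LJ Index formula: it assumes each of the four per-capita measures is compared to the mean for the library's operating-expenditure peer group and that the four resulting ratios are averaged with equal weight. The peer-group assignment and all data are hypothetical.

<syntaxhighlight lang="python">
# Simplified, assumption-based sketch of an LJ Index-style score: per-capita
# measures divided by the peer-group mean, then averaged with equal weight.
# Not the published formula; peer group and figures below are hypothetical.

from statistics import mean

MEASURES = ["visits", "circulation", "program_attendance", "internet_use"]

def lj_style_scores(peer_group):
    """peer_group: {library: {measure: per-capita value}} for libraries in
    the same operating-expenditure comparison group."""
    group_means = {m: mean(lib[m] for lib in peer_group.values()) for m in MEASURES}
    return {
        name: sum(lib[m] / group_means[m] for m in MEASURES) / len(MEASURES)
        for name, lib in peer_group.items()
    }

peer_group = {
    "Library A": {"visits": 5.8, "circulation": 11.2, "program_attendance": 0.4, "internet_use": 1.1},
    "Library B": {"visits": 7.1, "circulation": 9.0,  "program_attendance": 0.7, "internet_use": 0.9},
    "Library C": {"visits": 4.2, "circulation": 6.5,  "program_attendance": 0.2, "internet_use": 0.6},
}

print(lj_style_scores(peer_group))
</syntaxhighlight>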

==Australia and New Zealand==

There is some interest in developing a similar index in Australia and New Zealand.{{cite web |url=http://conferences.alia.org.au/alia2000/proceedings/alan.bundy.html |title=Best value: Libraries |last=Bundy |first=Alan |date=2000 |website=ALIA |access-date=July 29, 2022 |archive-url=https://web.archive.org/web/20070907041145/http://conferences.alia.org.au/alia2000/proceedings/alan.bundy.html |archive-date=September 7, 2007}}

==Great Britain==

Great Britain adopted national standards, and in 2000 the Audit Commission began publishing both an annual summary report of library conditions and individualized ratings of libraries. Audit Commission personnel base the reports on statistical data, long-range plans, local government commitment to the library, and a site visit. The Audit Commission is an independent body, and every library is assigned a score.[http://www.bestvalueinspections.gov.uk/ Audit Commission – homepage] {{webarchive|url=https://web.archive.org/web/20051123124210/http://www.bestvalueinspections.gov.uk/ |date=2005-11-23 }}

==Germany==

The Bertelsmann Foundation partnered with the German Library Association to produce BIX, a library index quite similar to HAPLR. The main difference between BIX and HAPLR is that BIX was designed to provide comparisons of one library to another both within a given year and over time, whereas HAPLR compares all libraries to one another only within a given year.{{cite web |url=http://bix-bibliotheksindex.de/en/about-us.html |title=About Us |website=BIX |access-date=July 29, 2022}}

==See also==

==References==

{{Reflist}}