Over on TechEssence, Roy Tennant has posted a manifesto about library software.
The part of the manifesto that hit me hardest was the list of consumer responsibilities… and though the manifesto is good and deserves discussion in its own right, I’m going to drift in another direction.
I just spent two days among very smart, dedicated people who for five years or more have been struggling with a standard that, like the ball of yeast dough in one of Leo Rosten’s stories, has been bulging out of its container in a most scary way. NCIP started as a “circulation” protocol, but as happens with most library standards, it has been larded up with additional stuff to the point where it’s almost unrecognizable (and it’s going to be tough to fit the dough into a pan to go in the oven). This good group of folks has been kneading this ball of dough… with excellent effect… but they’ve inherited an old-school standards process that is shoving them around the kitchen in a most frustrating way.
I only have 20 minutes to blurb this out before a very crazy day begins, so here goes.
- Standards should never get in the way of interoperability… nor should they be confused with interoperability. A lot of times librarians think they want a product that is X-compliant, where X is a standard. But the standard is a means to an end; you really want interoperability and a bit of insurance against future change. We should know enough about standards to realize what we’re really asking for.
- Standards can’t take five years to come out of the oven. It is better to have a leaner standard that gets everyone moving (see SUSHI) than a “perfect” standard that is irrelevant because everyone has already left the restaurant.
- For another thing, there is no oven. Trying to keep standards in sync with technology is like changing a tire on a moving vehicle. It’s a given that standards development has some lag time, and a reality check has to be built into the process. Standards have to be flexible enough to be extended as needed, or they are useless. (NCIP’s new proposed “any” tag is a huge step forward in that direction; there’s a small sketch of the idea after this list.)
- Some of our standards impose near-ridiculous requirements. Look at SRU and CQL: achtung, the target must conform to the source! Come on, who’s zooming who? (Look at Z39.50. Look at MARC. I know, I’m making your eyes hurt.)
- We can’t keep designing standards around long, long lists of possible scenarios.
- We need to decide when a standard absolutely must be available (e.g., we need it in a year) and let that be a primary requirements constraint.
- We can’t keep insisting standards comply with one-offs. If you have the most amazingly unique circulation system in the world, then don’t lobby for that scenario in the standard; instead, lobby for a flexible standard that allows your chefs to whip up extensions. Eyes on the prize.
- Uncooked thought: we need to rethink the balloting process.
- De facto standards (such as Coins, and don’t make me spell out those stupid Studley Caps) need more study to see where and how they have been successful, so their practices can inform the standards process. What helps? Transparency? Broader discussion within librarianship? Iterative design? Put a standard in a Google Doc and let smart people whale away at it for a while to see what happens? (For a taste of how lightweight a de facto standard can be, see the second snippet after this list.)
- Standards avoidance needs more study to see why it happens. When someone tells me they’ve been told to wait two years for a standard to address what seems like a central problem, and that meanwhile they can handle the issue with web services, SOAP, or a RESTful approach, we need to ask whether they are just doing the responsible thing (taking a unique situation and addressing it locally) or whether they are being forced to avoid the standard to fulfill a crucial task (or avoid a fatal weakness).
- Vendors aren’t evil. Librarians aren’t saintly. Finger-pointing is pointless. The user comes first.
- Plain English is a blessing.
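Since I waved at that “any” tag above, here’s a rough sketch of the idea. This is not real NCIP; the message and element names are invented for illustration. The point is that a parser reads the core fields it knows and skips (or optionally inspects) an extension slot it doesn’t, so one-off needs don’t have to be baked into the standard itself.

```python
# A minimal sketch of an "any"-style extension point. This is NOT
# real NCIP; the message and element names are invented for illustration.
import xml.etree.ElementTree as ET

message = """
<CheckOutItem>
  <UserId>patron-123</UserId>
  <ItemId>item-456</ItemId>
  <Ext>
    <AcmeHoldQueuePosition>2</AcmeHoldQueuePosition>
  </Ext>
</CheckOutItem>
"""

root = ET.fromstring(message)

# Core fields: every conforming implementation understands these.
user_id = root.findtext("UserId")
item_id = root.findtext("ItemId")
print(f"check out {item_id} to {user_id}")

# Extension slot: read what you recognize, ignore the rest. A vendor
# with a one-off scenario puts its data here instead of lobbying to
# enlarge the core standard.
ext = root.find("Ext")
if ext is not None:
    for child in ext:
        print(f"(extension) {child.tag} = {child.text}")
```

The design choice is the whole point: the extension travels with the message, and conformance doesn’t break just because somebody’s circulation system is weirder than yours.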
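And speaking of de facto standards: part of why something like Coins spread is that the entire “standard” fits into a single HTML span with class “Z3988”, whose title attribute carries an OpenURL ContextObject in key-encoded-value form. Here’s a rough illustration; the citation below is just sample data, and the key set is abridged.

```python
# A rough illustration of a Coins-style span: an OpenURL ContextObject
# (Z39.88-2004) in key-encoded-value form, tucked into the title
# attribute of a span with class "Z3988". The citation is sample data.
from html import escape
from urllib.parse import urlencode

citation = {
    "ctx_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",
    "rft.btitle": "The Joys of Yiddish",
    "rft.au": "Rosten, Leo",
    "rft.date": "1968",
}

# Encode the key/value pairs, then escape them for use in an HTML attribute.
kev = urlencode(citation)
coins_span = f'<span class="Z3988" title="{escape(kev)}"></span>'
print(coins_span)
```

That’s it: no giant conformance document, no five-year bake. A machine that cares can parse the title attribute, and everything else ignores it. That low cost of adoption is exactly the kind of thing worth studying.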
The user comes first? Heresy.
Great post! I just want to add that the standardization process is highly political and the best technical standard doesn’t always win. This leaves us with competing standards for the same thing in some situations. In web services, we’ve had OASIS vs W3C. Just when we thought we had OpenDocument sorted out, Microsoft pushes for its OpenXML to be ratified as an ISO standard. It never ends, and users will be the ones losing out.
The best standards come _out_ of practice. You have a bunch of people trying different things and experimenting. After a bit of this, you look at what happened, abstract and generalize the best of it, and make that into a standard. That way your standard is actually based on practice, on a certain kind of evidence.
Compare this to what we do in the library world all too often. “No, we can’t try that, it violates the standards!” “No, there’s no reason to do that, it’s not standard.” “Once the standard comes out, THEN we’ll have a way to do that.” So standards can’t be based on best practices, because there are no practices to decide what’s best; we’re all waiting for the standards people to tell us. And waiting. And waiting. And then we expect the standards people to give us a gigantic document covering all possible cases that will never need to be revised, which is NOT based on what worked in the field, because nobody was working in the field, but is instead invented out of whole cloth by the standards people!
No wonder it doesn’t work.
Very good, Jonathan!