Thanks to Ian Jones, I found a very interesting article on Ars Technica entitled "Using crowdsourced librarians to outsmart Google." It's about a project that aims to "out-Google Google" by crowdsourcing the collective brainpower of librarians. It's based on what the project leaders say is sound science. There is a growing body of research on the credibility of online material, so their claims may be valid, but I haven't had time to vet them yet, so I am making no claims of my own here.
Still, the idea is very interesting. I've been hyperfocused on the problematic findability of quality consumer health information via the commercial search engines. If there is a better way of building an online search engine in terms of returning credible results, I'm all for it, and who knows? This could be it.
The Reference Extract project hopes to turn all of this credibility research into something practical. The team includes professors such as Lankes, along with the OCLC, an academic technology organization that counts more than 60,000 (!) libraries among its members. The initial work is funded by the MacArthur Foundation. Their first goal is certainly ambitious: use the web sites that librarians suggest most often as the basis of a more credible search engine that can return reliable results.
The Reference Extract team already believes it can best Google, even before work has begun in earnest. That's because it has already used librarian-recommended sites to populate a custom Google search engine. Simply doing this produced search results that testers ranked as "more credible" than searches run on the main Google index, even though the testers had no idea that one of the searches was based only on librarian-approved sites.
I don't have time to think about this much at the moment, much less write intelligently about it at any length, since I'm in crunch mode on a couple of projects at once. Still, I wanted to pass it along. Others with a similar passion may find this very useful.