Talk: The Future of Science Publishing

I think about the fusion of mobile and Web all the time, and I’ve been thinking and talking about designing services and software for years. I was also an academic researcher for about 10 years, with 18 co-authored papers.

All this converged about a year and a half ago when I met Matt Cockerill from BioMedCentral, an Open Access publisher of scientific papers.

He had a sort of embarrassment of riches – servers full of papers, videos, and data. The problem was how to take all that information and make it useful: derive relevance from it, and give value back to the scientists.

That got me thinking. I framed it as a problem: how do we make it easy to find, navigate, recombine, and share? Suddenly, I saw this as one of the big challenges for the Web.

Now, I see it everywhere in other areas, but science publishing catches my attention, mostly due to my recent focus back on science.

The Rise of the Scientific Paper
Scientific papers arose about 350 years ago as a way for scientists to distribute public letters and correspondence on findings and reports. The natural scarcity of printing and distribution made this arrangement a necessity.

From this arose the leading publishers (for example, Nature and Science) and all that science publishing entails – star editors, reputation, authority, impact factors, and so on.

But that’s so Web 1.0.

Waves of the Web
OK, so I try really hard not to use the Web 1.0, Web 2.0, etc. terminology. I view the Web more in waves than labels. Each of these waves takes the cycle of create, consume, connect to another level.

For me, Wave 1 was the Age of the Hyperlinked Document. The first wave was characterized by a rush to digitize traditional publishing assets, such as databases, newspapers, and encyclopedias. This wave also saw the rise of Web indexes (Yahoo), search (AltaVista), email, and the browser wars. But in the end, the creators were traditional publishers and indexers. Regular folk just “browsed” stuff, without contributing anything.

Wave 2 was the Age of the Fragmentation of the Web. This wave saw the coming of micropublishing (blogs, wikis), emergent (crowd-sourced) indexes (wikis, delicious), social networks, and new ways to search (Google, Technorati). Expectations of interactions with people and content were heavily influenced by IM (rapid morsels of conversational text) and rich interfaces (through Flash, video, and AJAX). But the biggest change (at least in this story) was that everyone became a publisher.

Publishing, therefore, had gone from static monoliths to morsels of info free to socialize. This has caused the collapse of traditional publishing (witness the record and newspaper industries). Furthermore, there has been an explosion of morsels of data on the Web. Everything has become search-able, comment-able, link-able, embed-able, feed-able. Data and people mix in a social, living Web.

In short, Wave 1 weakened traditional publishing, which used to be based on scarcity. Wave 2 made everyone a source of info, everyone an annotator of data, everyone a publisher; it took hyperlinked documents and morselized the Web.

How have scientific publishers fared in Wave 2?
They’ve basically kept the status quo. Online. Stuck in Wave 1.

Like many other traditional publishers, science publishers replicated their closed, subscription-based model on the Web, simply republishing their content online.

Open Access publishers have been battling the status quo for 10 years (at least in terms of access). Only now are they getting strong recognition, impact factors, authority, and a little respect. But they are still predicated mostly on, and restricted heavily by, the traditional model of science publishing (for example, the reliance on impact factors).

Recently, science publishers have been experimenting with comments and annotations, but with little traction (and I have a few ideas as to why). And, granted, the non-paper publishing side of traditional publishers has embraced the Web, but I am speaking of the core product here.

So many similarities…
The irony is that Tim Berners-Lee actually envisioned the Web as a way to share science information and publications. Openness and sharing are at the heart of science. And the core cultural structures replicate well online. Wave 2 behaviors are the same as in research: find, navigate, recombine, share.

And the Web also has structures found in traditional publishing, such as ways to deal with authority and primacy.

In short, science publishing, as it should be, mirrors the Web.

If there is a Wave 1 and Wave 2, is there a Wave 3?
My view is that we are entering the true era of (and need for) the Semantic Web. Context is about relevance is about meaning is about semantics. I claim that the Semantic Web has not advanced in the past several years because the focus has been on what I call “librarian” tasks: formatting data, manually building ontologies, and so on.

What we know now (from Wave 2 behavior) is that emergent semantics, created through data-mining, but especially via people just using the Web, will be key in helping us navigate the sea of data. In short, the next wave of the Web will require a mix of data mining, librarian tasks, and people to make sense of it all.
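To make “emergent semantics” a little more concrete, here is a minimal sketch (in Python, with made-up paper tags; none of this comes from the talk itself). Simply counting which reader-applied tags co-occur on the same papers already yields a rough map of related concepts, with no hand-built ontology in sight.

```python
from collections import Counter
from itertools import combinations

# Hypothetical example data: tags that readers attached to individual papers,
# the kind of signal Wave 2 sites (delicious-style bookmarking) collect for free.
tagged_papers = [
    {"proteomics", "mass-spectrometry", "open-access"},
    {"proteomics", "mass-spectrometry", "bioinformatics"},
    {"bioinformatics", "open-access", "data-mining"},
    {"data-mining", "semantic-web", "open-access"},
]

# Count how often each pair of tags shows up on the same paper.
co_occurrence = Counter()
for tags in tagged_papers:
    for pair in combinations(sorted(tags), 2):
        co_occurrence[pair] += 1

# The strongest pairs form an emergent map of related concepts:
# no librarian wrote an ontology, the structure fell out of usage.
for (tag_a, tag_b), count in co_occurrence.most_common(5):
    print(f"{tag_a} <-> {tag_b}: {count}")
```

The point is not the code but where the structure comes from: it emerges from people just using the Web, which is exactly the Wave 2 behavior that librarian-style ontology building misses.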

How do I see science publishing taking advantage of the Web?
I mapped out these behaviors and how they could work on the Web.

 

[Figure: Traditional vs. social publishing]

Culture vs tech
At the risk of sounding dramatic, I think more changes are inevitable, despite publishers’ wishes to hold on to traditional structures. But the sad irony is that the future of science publishing depends on culture, not tech. All the tech is here, and it’s evolving, mixing Web, mobile, context, semantics, and other wonders, whether scientific publishers want it or not.

But will scientists lead the way?

This post was written from the notes of my talk at the 3rd WLE Symposium in London back in March (presentation below).