Molecular Playground: Architectural Scale Interactive Molecules

I just found out that my thesis advisor is working on a cool project.

There's a new science building going up at UMass Amherst (where I got my PhD), and Craig T. Martin (my thesis advisor) thought it would be cool to do an art installation where molecules are projected on the walls. He also realized it would be even better if folks could interact with these molecules.

As he says on the project's site, molecules and chemicals are sort of "inaccessible and uninteresting" to the general public. His vision is to develop a large-scale "molecular playground" where folks can actually go and manipulate the molecular projections.

Craig received a grant from the Camille & Henry Dreyfus Foundation to "develop and install in a prominent public space a system for displaying large scale interactive molecules." The molecules will be animated and artistic, so that they can be appreciated even without direct manipulation.

Craig is collaborating with Allen Hanson from the UMass Amherst Computer Science Department.

Cool. I'm looking forward to seeing it.

And check out the video of a demo of the concept (below). The protein is HU, found in the bacterial nucleoid and involved in chromosome compaction. It makes a dramatic kink in the DNA, and does some funky things. Check out the tongues going down the grooves on the opposite surface of the DNA. That's a tight grip. There's more to it, though. You can manipulate the molecule yourself (with some explanations) on Craig's site.

What can we learn from Asilomar?

There was a flurry of indignation recently on the DIYbio discussion group over an article in the Wall Street Journal on the safety of biohackers (with added aggravation from Fox News' dramatic title for the exact same article).

Interestingly, this kind of alarm is not new, especially to biology. In the early days of molecular biology, there was a sudden panic that recombinant DNA was inherently unsafe. There was no basis to understand what was possible, what was ethically permissible, and what was unsafe.

In a landmark event that went on to change the nature of science policy and public outreach, Maxine Singer and Paul Berg, pioneers in molecular biology, assembled about 140 scientists, lawyers, and politicians to discuss the future of recombinant DNA.

The Asilomar Conference on Recombinant DNA, named for the conference grounds where it was held, addressed the principles for safely conducting recombinant DNA experiments, listing potential risks and outlining containment principles. The discussions also covered the assessment of organisms, principles for choosing bacterial hosts, and what constituted good microbial practice. Finally, the conference explored the need for proper education and training of research personnel to carry out the recommendations that came out of the discussions.

There were a few interesting, non-science aspects to this conference as well. There was a desire to be transparent in the discussion and to involve the public, to allay any fears non-scientists might have. Also, the Asilomar scientists drew up a series of voluntary guidelines rather than calling for a regulatory body.

What can we learn?
Asilomar is part of the culture and history of any molecular biologist (at least it was for me; I learned about it early in my career). Therefore, the precautionary thinking, the openness and public discourse, and the self-organizing regulation are part of molecular biology.

DIY biology is part of all this, and the same culture is part of a community that already is as cautious as it is curious and open. I am not sure there's a need for an Asilomar for DIYbio, but with calls for licensing and calls from the FBI, clearly something definitive needs to be established.

It's been great to see the discussions around this by the DIYbio enthusiasts. They clearly understand the situation; now it's a matter of getting the message across.

Image from MIT archives.

Talk: The Future of Science Publishing

I think about the fusion of mobile and Web all the time. And I've been thinking and talking about designing services and software for years. But I was also an academic researcher for about 10 years, with 18 co-authored papers.

All this converged about a year and a half ago when I met Matt Cockerill from BioMed Central, an Open Access publisher of scientific papers.

He had a sort of embarrassment of riches – servers full of papers, videos, info. The problem was how to take all that info and make it work, derive relevance, give value back to the scientists.

That got me thinking. I framed it as a problem – how to make it easy to find-navigate-recombine-share? Suddenly, I saw this as one of the big challenges for the Web.

Now, I see it everywhere in other areas, but science publishing catches my attention, mostly due to my recent focus back on science.

The Rise of the Scientific Paper
Scientific papers arose some 350 years ago as a way for scientists to distribute public letters and correspondence on findings and reports. The natural scarcity of publication and distribution made this a necessity.

From this arose the leading publishers (for example, Nature and Science) and all that science publishing entails – star editors, reputation, authority, impact factors, and so on.

But that’s so Web 1.0.

Waves of the Web
OK, so I try really hard not to use the Web 1.0, Web 2.0, etc. terminology. I view the Web more in waves than labels. Each of these waves takes the cycle of create, consume, connect to another level.

For me, Wave 1 was the Age of the Hyperlinked Document. The first wave was characterized by a rush to digitize traditional publishing assets, such as databases, newspapers, and encyclopedias. This wave also saw the rise of Web indexes (Yahoo), search (AltaVista), email, and the browser wars. But in the end, the creators were traditional publishers and indexers. Regular folk just “browsed” stuff, without any contribution.

Wave 2 was the Age of the Fragmentation of the Web. This wave saw the coming of micropublishing (blogs, wikis), emergent (crowd-sourced) indexes (wikis, delicious), social networks, and new ways to search (Google, Technorati). Expectations of interactions with people and content were heavily influenced by IM (rapid morsels of conversational text) and rich interfaces (through Flash, video, and AJAX). But the biggest change (at least in this story) was that everyone became a publisher.

Publishing, therefore, had gone from static monoliths to morsels of info free to socialize. This has caused the collapse of traditional publishing (witness the record and newspaper industries). Furthermore, there has been an explosion of morsels of data on the Web. Everything has become search-able, comment-able, link-able, embed-able, feed-able. Data and people mix in a social, living, Web.

In short, Wave 1 weakened traditional publishing that used to be based on scarcity. Wave 2 made everyone a source of info, everyone an annotator of data, everyone a publisher; it took hyperlinked documents and morselized the web.

How have scientific publishers fared in this Wave 2?
They’ve basically kept the status quo. Online. Stuck in Wave 1.

As with many other traditional publishers, science publishers replicated their closed subscription-based model on the Web, republishing their content online.

Open Access publishers have been battling the status quo for 10 years (at least in terms of access). Only now are they getting strong recognition, impact factors, authority, and a little respect. But they are still predicated mostly on, and restricted heavily by, the traditional model of science publishing (for example, their reliance on impact factors).

Recently, science publishers have been experimenting with comments and annotations, but with little traction (and I have a few ideas as to why). And, granted, the non-paper parts of traditional publishers have embraced the Web, but I am speaking of the core product here.

So many similarities…
The irony is that Tim Berners-Lee actually envisioned the Web as a way to share science information and publications. Openness and sharing are at the heart of science. And the core cultural structures replicate well online. Wave 2 behaviors are the same as in research: find, navigate, recombine, share.

And the Web also has structures found in traditional publishing, such as ways to deal with authority and primacy.

In short, science publishing as it should be mirrors the Web.

If there is a Wave 1 and Wave 2, is there a Wave 3?
My view is that we are entering the true era of (and need for) the Semantic Web. Context is about relevancy is about meaning is about semantics. I claim that the Semantic Web has not advanced in the past several years because the focus has been on what I call “librarian” tasks: formatting data, manually building ontologies, and so on.

What we know now (from Wave 2 behavior) is that emergent semantics, created through data-mining, but especially via people just using the Web, will be key in helping us navigate the sea of data. In short, the next wave of the Web will require a mix of data mining, librarian tasks, and people to make sense of it all.

How do I see science publishing taking advantage of the Web?
I mapped out these behaviors and how they could work on the Web.

 

[Image: Traditional vs. social publishing]

Culture vs tech
At the risk of sounding dramatic, I think more changes are inevitable, despite publishers' wishes to hold on to traditional structures. But the sad irony is that the future of science publishing depends on culture, not tech. All the tech is here, and it's evolving, mixing Web, mobile, context, semantics, and other wonders, whether the scientific publishers want it or not.

But will scientists lead the way?

This post was written from the notes of my talk at the 3rd WLE Symposium in London back in March (presentation below).

Changing the journal impact factor through real-time transparent statistics

I've mentioned Mendeley before. They refer to themselves as a Last.fm for science papers, but I think it'll be much more.

One thing they realize they are changing, as a side effect, is the impact factor (sort of like a PageRank for science papers, based on incoming links (citations) to the paper and the journal).
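To make that analogy concrete, here's a toy sketch of a PageRank-style score over a citation graph. The papers and numbers are made up for illustration; this is my own simplified example, not Mendeley's actual algorithm.

```python
# Toy PageRank-style ranking over a citation graph.
# Hypothetical papers; a simplified sketch, not Mendeley's algorithm.

def citation_rank(citations, damping=0.85, iterations=50):
    """citations maps each paper to the list of papers it cites."""
    papers = set(citations) | {p for refs in citations.values() for p in refs}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        # Each paper keeps a base amount of rank and passes the rest
        # along to the papers it cites.
        new_rank = {p: (1.0 - damping) / n for p in papers}
        for paper, refs in citations.items():
            for cited in refs:
                new_rank[cited] += damping * rank[paper] / len(refs)
        rank = new_rank
    return rank

# C is cited by both A and B, so it ends up with the highest score.
print(citation_rank({"A": ["C"], "B": ["C"], "C": []}))
```

The idea carries over directly: a paper cited by many (well-ranked) papers accumulates authority, just as a page linked from many pages does.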

Link: Changing the journal impact factor | Mendeley Blog:

At a higher level then, Mendeley’s significance isn’t just about real-time impact factors and article-level metrics. It’s about using technology for the first time to crowd source data and forever change how research is done. That is why I’m crazy enough to move half-way around the world. Mendeley literally isn’t just another “Silicon Valley” start-up.

Spot on. When I heard Victor (one of the founders) talk about this at Next09 I practically jumped out of my seat.

Thomson's impact factor was set up in an age when you needed someone to manually go through references and such and report back to the community. That's probably part of the reason it takes three years to establish an impact factor. [I pointed this out already a while back.]
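For reference, the classic impact factor is just a two-year citation ratio, which is where the lag comes from. A quick sketch, with made-up numbers:

```python
# Classic two-year journal impact factor (illustrative numbers only).

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Citations received this year to articles from the previous two
    years, divided by the number of citable items in those two years."""
    return citations_this_year / citable_items_prev_two_years

# A journal that published 40 + 50 articles over 2007-2008 and drew
# 210 citations to them during 2009 gets a 2009 impact factor of ~2.33:
print(impact_factor(210, 40 + 50))
```

A new journal thus needs two full publication years plus a citation census year before its first number can even be computed.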

PLoS and BMC, which imported the broken authority model from the print world, missed an opportunity in the past 10 years to upturn Thomson's world. So it's good to hear that PLoS is starting to be transparent about their traffic and links, providing the start of a new way to look at authority.

One thing: being a bit publisher-minded, I myself missed the other side effect of opening up stats that could show authority – such transparency might highlight a high-impact paper from an obscure journal. In the traditional world, that paper would have been buried by the journal's own impact factor.

Yeah, we need to open up these stats on a real-time paper level. There's no reason not to do it.

(and go read the rest of the article on Mendeley's site)

Image from Wikipedia, on PageRank.

When the Central Dogma is not enough – microbial small RNAs

One thing that has always bugged me is the sort of pedantic repetition of what's called the Central Dogma of molecular biology: that DNA goes to RNA goes to protein.
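As a cartoon, the Dogma fits in a few lines of code. Here's a toy sketch (my own illustration, with a deliberately tiny codon table covering only the example sequence; a real table has 64 codons):

```python
# Toy sketch of the textbook DNA -> RNA -> protein flow.
# The codon table is deliberately tiny, just enough for the example.

CODONS = {"AUG": "Met", "UUC": "Phe", "GGU": "Gly", "UAA": "STOP"}

def transcribe(dna):
    """Coding-strand DNA to mRNA: T becomes U."""
    return dna.replace("T", "U")

def translate(mrna):
    """Read codons three bases at a time until a stop codon."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODONS[mrna[i:i + 3]]
        if residue == "STOP":
            break
        chain.append(residue)
    return "-".join(chain)

mrna = transcribe("ATGTTCGGTTAA")
print(mrna)             # AUGUUCGGUUAA
print(translate(mrna))  # Met-Phe-Gly
```

It's exactly this tidiness that hides the interesting parts.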

What bothered me was that it way oversimplified the complexity of information transfer and control in organisms. And for me, the excitement has been in all the exceptions to this Dogma.

For example, I had a sort of Rip van Winkle gap between when I left science and when I re-engaged years later (I missed the deep stuff while keeping up lightly with the superficial stuff). Back in 1999, we were talking about some weird things going on in nematodes, where you could control gene expression simply by adding some small RNAs to cells. Fast forward to 2006, and I find out that these small RNAs have been found everywhere as a control mechanism.

Now mix that with the resurgence of microbiology (or at least it looks like a resurgence to me), and folks are starting to use small RNAs as a way to read gene expression patterns in microorganisms. The idea is that they offer a quick readout before the organism starts responding to the effects of collection and removal from its native environment.

"If we think of marine bacteria and their proteins as tiny factories performing essential biogeochemical activities — such as harvesting sunlight to create oxygen and synthesize sugar from carbon dioxide — then the sRNAs are the internal switches that turn on and off the factories' production line. Their discovery in the ocean samples opens the way to learning even more detailed information in the lab: the researchers can now conduct lab experiments to look at the effects of environmental perturbation on microbial communities. These new sRNAs also expand our general knowledge of the nature and diversity of these recently recognized regulatory switches." [apologies to the person I got the link of, as I have forgotten who it was] 

Cool.

Image from Wikipedia.