-
Images of the amazing contraption.
-
It's the Atomic Kindle!
"It's not elegant and it's not sexy – it looks like a large photocopier – but the Espresso Book Machine is being billed as the biggest change for the literary world since Gutenberg invented the printing press more than 500 years ago and made the mass production of books possible. Launching today at Blackwell's Charing Cross Road branch in London, the machine prints and binds books on demand in five minutes, while customers wait."
-
Holy moly. So want to check this out.
-
"An amateur is full of wonder and speculation, tinkering towards the truth but suffering from a lack of knowledge and idleness; he's not even sure if someone else has already made these discoveries. "Is this a worthwhile pursuit?"
"A scientist performs experiments to confirm or disprove a hypothesis, and in that way he grinds out the truth.
"A genius has three abilities, which are actually the union of amateur and scientist: 1. to know the state of the art, what is known and what is not known. 2. To be able to think "out of the box". 3. To be disciplined enough to concentrate on the tedium of a formal investigation of his wondrous speculations."
[via @mrgunn]
Molecular Playground: Architectural Scale Interactive Molecules
I just found out that my thesis advisor is working on a cool project.
There's a new science building going up at UMass Amherst (where I got my PhD), and Craig T Martin (my thesis advisor) thought it would be cool to do an art installation where molecules are projected on the walls. Better yet, he realized, folks could interact with these molecules.
As he says on the project's site, molecules and chemicals are sort of "inaccessible and uninteresting" to the general public. His vision is to develop a large scale "molecular playground" where folks can actually go and manipulate the molecular projections.
Craig received a grant from the Camille & Henry Dreyfus Foundation to "develop and install in a prominent public space a
system for displaying large scale interactive molecules." The molecules will be animated and artistic, so that they can be appreciated even without direct manipulation.
Craig is collaborating with Allen Hanson from the UMass Amherst Computer Science Department.
Cool. I'm looking forward to seeing it.
And check out the video of a demo of the concept (below). The protein is HU, found in the bacterial nucleoid and involved in chromosome compaction. It makes a dramatic kink in DNA, and does some funky things. Check out the tongues going down the grooves on the opposite surface of the DNA. That's a tight grip. There's more to it, though. You can manipulate the molecule yourself (with some explanations) on Craig's site.
What can we learn from Asilomar?
There was a flurry of indignation recently on the DIYbio discussion group over an article in the Wall Street Journal about the safety of biohackers (with added aggravation from Fox News' dramatic title for the exact same article).
Interestingly, this kind of alarm is not new, especially to biology. In the early days of molecular biology, there was a sudden panic that recombinant DNA was inherently unsafe. There was no basis to understand what was possible, what was ethically permissible, and what was unsafe.
Asilomar
In a landmark event that went on to change the nature of science policy and public outreach, Maxine Singer and Paul Berg, pioneers in molecular biology, assembled about 140 scientists, lawyers, and politicians to discuss the future of recombinant DNA.
The Asilomar Conference on Recombinant DNA, named after the venue where it was held, addressed the principles for safely conducting recombinant DNA experiments, listing potential risks and outlining containment principles. The discussions also covered the assessment of organisms, principles for choosing bacterial hosts, and what constituted good microbial practice. Finally, the conference explored the need for proper education and training of research personnel to carry out the recommendations that came out of the discussions.
There were a few interesting non-science aspects to this conference as well. There was a desire to be transparent in the discussion and involve the public, to allay any fears non-scientists might have. Also, the Asilomar scientists drew up a series of voluntary guidelines rather than calling for a regulatory body.
What can we learn?
Asilomar is part of the culture and history of any molecular biologist (at least it was for me; I learned about it early in my career). Therefore, the precautionary thinking, the openness and public discourse, and the self-organizing regulation are part of molecular biology.
DIY biology inherits all of this, and the same culture runs through a community that is already as cautious as it is curious and open. I am not sure if there's a need for an Asilomar for DIYbio, but with calls for licensing and calls from the FBI, clearly something definitive needs to be established.
It's been great to see the discussions around this by the DIYbio enthusiasts. They clearly understand the situation; now it's a matter of getting the message across.
Image from MIT archives.
Video: The Future of Science Publishing
In February, in a Barcelona restaurant, Mark Kramer caught up with me and asked me what I would be speaking about at the 3rd WLE Symposium (notes from the talk are in a preceding post).
He was kind enough to give me the video, so check it out below.
(and, no, I don't lisp like that – it's the audio quality)
links for 2009-05-21
-
"A report launched by the Academy today highlights an emerging but critical new field of innovation and technology that has potential for major societal benefit and wealth creation in such areas as healthcare, energy and the environment. Synthetic biology – the insertion of carefully engineered DNA into bacteria cells to make them behave in new ways – is an emerging technology that could bring great benefits. Synthetic Biology: scope, applications and implications identifies the next steps to build on the UK's position in the field, create a regulatory framework and to explore, with the public, the ethical and societal issues involved."
via http://2020science.org/
-
"Synthetic biology is a new field, but it's targeting an old question: How did life begin?"
-
(same WSJ article, different title)
-
"These hobbyists represent a growing strain of geekdom known as biohacking, in which do-it-yourselfers tinker with the building blocks of life in the comfort of their own homes. Some of them buy DNA online, then fiddle with it in hopes of curing diseases or finding new biofuels. But are biohackers a threat to national security?"
links for 2009-05-20
-
Wicked video of a microchip controlling bacteria via electromagnetism, making the chip move through a solution up a pH gradient.
via @genegeek via @edyong209
-
"I’ve started looking through the metagenomics literature(links to fulltext library) for simple protocols that could be adapted to a basic garage lab. I’m planning on outsourcing the actual sequencing ($50-$100?), but doing the rest of the sample preparation myself: isolating and purifying genomic DNA and doing PCR to amplify the species-specific DNA barcode, probably a 16s or 18s ribosomal subunit gene."
Talk: The Future of Science Publishing
I think about the fusion of mobile and Web all the time. And I’ve been thinking and talking about designing services and software for years. But I also was an academic researcher for about 10 years, with 18 co-authored papers.
All this converged about a year and a half ago when I met Matt Cockerill from BioMed Central, an Open Access publisher of scientific papers.
He had a sort of embarrassment of riches – servers full of papers, videos, info. The problem was how to take all that info and make it work, derive relevance, give value back to the scientists.
That got me thinking. I framed it as a problem – how to make it easy to find-navigate-recombine-share? Suddenly, I saw this as one of the big challenges for the Web.
Now, I see it everywhere in other areas, but science publishing catches my attention, mostly due to my recent focus back on science.
The Rise of the Scientific Paper
Scientific papers arose about 350 years ago as a way for scientists to distribute public letters and correspondence on findings and reports among themselves. The natural scarcity of publication and distribution made this a necessity.
From this arose leading publishers (for example, Nature and Science) and all that science publishing entails: star editors, reputation, authority, impact factors, and so on.
But that’s so Web 1.0.
Waves of the Web
OK, so I try really hard not to use the Web 1.0, Web 2.0, etc. terminology. I view the Web more in waves than labels. Each of these waves takes the cycle of create, consume, connect to another level.
For me, Wave 1 was the Age of the Hyperlinked Document. The first wave was characterized by a rush to digitize traditional publishing assets, such as databases, newspapers, encyclopedias. This wave also saw the rise of Web indexes (Yahoo), search (AltaVista), email, and the browser wars. But in the end, the creators were traditional publishers and indexers. Regular folk just “browsed” stuff, without any contribution.
Wave 2 was the Age of the Fragmentation of the Web. This wave saw the coming of micropublishing (blogs, wikis), emergent (crowd-sourced) indexes (wikis, delicious), social networks, and new ways to search (Google, Technorati). Expectations of interactions with people and content were heavily influenced by IM (rapid morsels of conversational text) and rich interfaces (through Flash, video, and AJAX). But the biggest change (at least in this story) was that everyone became a publisher.
Publishing, therefore, has gone from static monoliths to morsels of info free to socialize. This has caused the collapse of traditional publishing (witness the record and newspaper industries). Furthermore, there has been an explosion of morsels of data on the Web. Everything has become search-able, comment-able, link-able, embed-able, feed-able. Data and people mix in a social, living, Web.
In short, Wave 1 weakened traditional publishing that used to be based on scarcity. Wave 2 made everyone a source of info, everyone an annotator of data, everyone a publisher; it took hyperlinked documents and morselized the Web.
How have scientific publishers fared in this Wave 2?
They’ve basically kept the status quo. Online. Stuck in Wave 1.
As with many other traditional publishers, science publishers replicated their closed subscription-based model on the Web, republishing their content online.
Open Access has been battling the status quo for 10 years (at least in terms of access). Only now are Open Access journals getting strong recognition, impact factors, authority, and a little respect. But they are predicated mostly on, and restricted heavily by, the traditional model of science publishing (for example, still stuck with impact factors).
Recently, science publishers have been experimenting with comments and annotations, but with little traction (and I have a few ideas as to why). And, granted, the non-paper publishing arms of traditional publishers have embraced the Web, but I am speaking of the core product here.
So many similarities…
The irony is that Tim Berners-Lee actually envisioned the Web as a way to share science information and publications. Openness and sharing are at the heart of science. And the core cultural structures replicate well online. Wave 2 behaviors are the same as in research: find, navigate, recombine, share.
And the Web also has structures found in traditional publishing, such as ways to deal with authority and primacy.
In short, science publishing as it should be mirrors the Web.
If there is a Wave 1 and Wave 2, is there a Wave 3?
My view is that we are entering the true era of (and need for) the Semantic Web. Context is about relevancy is about meaning is about semantics. I claim that the semantic Web has not advanced much in recent years because the focus has been on what I call “librarian” tasks: formatting data, manually building ontologies, and so on.
What we know now (from Wave 2 behavior) is that emergent semantics, created through data-mining, but especially via people just using the Web, will be key in helping us navigate the sea of data. In short, the next wave of the Web will require a mix of data mining, librarian tasks, and people to make sense of it all.
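To make "emergent semantics" concrete: even counting which tags people apply together starts to surface relationships nobody sat down to encode. A toy sketch in Python (the bookmarks and tags are invented):

```python
# Toy sketch of emergent semantics: relationships between terms
# fall out of how people tag things, with no hand-built ontology.
# The bookmarks and tags below are invented for illustration.
from collections import Counter
from itertools import combinations

bookmarks = [
    {"tags": {"genomics", "sequencing", "open-access"}},
    {"tags": {"genomics", "metagenomics", "sequencing"}},
    {"tags": {"open-access", "publishing", "impact-factor"}},
    {"tags": {"publishing", "open-access", "web"}},
]

# Count how often each pair of tags appears on the same bookmark.
cooccurrence = Counter()
for b in bookmarks:
    for pair in combinations(sorted(b["tags"]), 2):
        cooccurrence[pair] += 1

# Tags that frequently co-occur are (emergently) related.
for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```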
How do I see science publishing taking advantage of the Web?
I mapped out these behaviors and how they could work on the Web.
Culture vs tech
At the risk of sounding dramatic, I think more changes are inevitable, despite publishers' wishes to hold on to traditional structures. But the sad irony is that the future of science publishing depends on culture, not tech. All the tech is here, and it's evolving, mixing Web, mobile, context, semantics, and other wonders, whether the scientific publishers want it or not.
But will scientists lead the way?
This post was written from the notes of my talk at the 3rd WLE Symposium in London back in March (presentation below).
The Future Of Scientific Publishing
Changing the journal impact factor through real-time transparent statistics
I've mentioned Mendeley before. They refer to themselves as a Last.fm for science papers, but I think it'll be much more.
One thing they realize they are changing, as a side effect, is the impact factor (sort of like a PageRank for science papers, based on incoming links, i.e. citations, to the paper and the journal).
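For reference, the arithmetic behind that journal-level number is dead simple; here's a minimal sketch of the classic two-year impact factor, with invented figures:

```python
# Minimal sketch of the classic two-year journal impact factor.
# The numbers are invented, not real journal data.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Impact factor for year Y: citations received in year Y to
    items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 420 citations in 2009 to its 2007-2008
# papers, of which there were 150 citable items.
print(impact_factor(420, 150))  # -> 2.8
```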
Link: Changing the journal impact factor | Mendeley Blog:
At a higher level then, Mendeley’s significance isn’t just about real-time impact factors and article-level metrics. It’s about using technology for the first time to crowd source data and forever change how research is done. That is why I’m crazy enough to move half-way around the world. Mendeley literally isn’t just another “Silicon Valley” start-up.
Spot on. When I heard Victor (one of the founders) talk about this at Next09 I practically jumped out of my seat.
Thomson was set up in an age when you needed someone to manually go through references and such and report to the community. That's probably part of the reason it takes three years to establish an impact factor: the standard measure for year Y counts citations to the two preceding years, so a new journal has to wait out that window before its first number appears. [I pointed this out already a while back.]
PLoS and BMC, which imported the broken authority model from the print world, missed an opportunity over the past 10 years to upend Thomson's world. So it's good to hear that PLoS is starting to be transparent about its traffic and links, providing the start of a new way to look at authority.
One thing: being a bit publisher-minded myself, I missed the other side effect of opening up stats that could show authority: such transparency might be able to highlight a high-impact paper from an obscure journal. In the traditional world, that paper would have been buried by the journal's own impact factor.
Yeah, we need to open up these stats on a real-time paper level. There's no reason not to do it.
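As a sketch of what opening up per-paper stats could buy (all figures invented): rank by the article's own signals and the obscure-journal gem floats to the top.

```python
# Sketch: article-level metrics instead of journal-level ones.
# With per-paper stats in the open, a strong paper surfaces on
# its own merits, whatever its journal's impact factor.
# All figures below are invented for illustration.

papers = [
    {"title": "Gem from an obscure journal",
     "journal_if": 0.9, "citations": 310, "downloads": 12000},
    {"title": "Filler from a glamour journal",
     "journal_if": 30.0, "citations": 4, "downloads": 150},
]

def article_score(paper):
    # Toy weighting of per-article signals; a real metric would
    # fold in readership, links, comments, recency, and more.
    return paper["citations"] + 0.01 * paper["downloads"]

for p in sorted(papers, key=article_score, reverse=True):
    print(f"{p['title']}: score {article_score(p):.0f} "
          f"(journal IF {p['journal_if']})")
```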
(and go read the rest of the article on Mendeley's site)
Image from Wikipedia, on PageRank.
links for 2009-05-17
-
Hmmm…
"The Omics Gateway provides life scientists a convenient portal into publications relevant to large-scale biology from journals throughout NPG. By organizing our papers and web focus projects on large-scale biology into this comprehensive, regularly updated, one-stop web portal, we hope to help you quickly reach the resources you need to study the -ome of your choice and to keep you up-to-date with the most significant research in that area."
via @genegeek
-
via @igenomics
-
Ok. Not enthused by the "2.0" moniker (though this was published in 2008). But I am happy to see others thinking the same things as I have been. Just wish I'd seen this back then.
"Scientific American calls it science2.0. They write: A small but growing number of researchers (and not just the younger ones) have begun to carry out their work via the wide-open tools of Web 2.0. And although their efforts are still too scattered to be called a movement—yet—their experiences to date suggest that this kind of Web-based “Science 2.0” is not only more collegial than traditional science but considerably more productive."
links for 2009-05-16
-
"For each group of parts, we’ve added text and pictures to better explain why and how you’d want to use those parts in larger systems."
This is a distinct improvement over the previous version. Though I would like to see even more visual guides to exploring the catalog.
-
"The sheer size and diversity of the DNA samples collected allowed the researchers to construct a human family tree based on their analyses. Not unexpectedly, the tree they constructed fits well with current theories on the genetic relationship between Africans and non-Africans; namely that all non-Africans are descended from a particular group or groups of people who were the first humans to migrate out of Africa tens of thousands of years ago."
-
"People actually read, and many of them are willing to pay to support their habit, as the multi-billion dollar publishing industry would attest (and this is just in U.S., and quite frankly, we aren’t the most reading-intensive country in the world)."
-
Pretty nifty. A computational command line on the web with a really nice graphical summary. Information rich without being cluttered.
"Wolfram|Alpha just went live for the very first time, running all clusters. This first run at testing Wolfram|Alpha in the real world is off to an auspicious start, although not surprisingly, we’re still working on some kinks, especially around logging."
