Current Exchange - Spring 2013



[ Editorial Board ] Robert Aboukhalil, Dr. Charla Lambert

[ Contributors ] Yevgeniy Plavskin, Josh Sanders, Irene Liao, Kristen Delevich, Dana M. King, Will Donovan, John Sheppard, Manosij Majumdar, Seth Baum

[ Images ] flickr.com/geneticdesigns/5074840249, flickr.com/hermida/249513096, flickr.com/ethanhein/3448961386, flickr.com/braubach/1120571090, flickr.com/ideonexus/2955592485, flickr.com/tueksta/839920747

CONTENTS

How common is common knowledge? ... 3
Which way to Europe? ... 4
Open source instruments for brain research ... 6
The Neanderthal in us ... 8
Valuable brain cells generated from unlikely source ... 9
Q&A — Genspace: A DIY biology lab ... 10
Biology of Genomes Twitter trends ... 12
Q&A — Gustavo Stolovitzky on Computational Biology ... 14
Transformative research and the future of science ... 16
Genome envy ... 17
Object recognition: The hard problem ... 18
Why yes, your child should learn chemistry ... 20
Making the universe a better place ... 22

Current Exchange is a joint collaboration between the Meetings & Courses program (meetings.cshl.edu) and the Watson School of Biological Sciences (cshl.edu/gradschool) at Cold Spring Harbor Laboratory.

Current Exchange is published by Technophilic Magazine Inc. For more information, visit our website at technophilicmag.com or contact us at info@technophilicmag.com. The opinions expressed in the articles herein reflect only the opinions of their respective authors. All articles are licensed under the Creative Commons License.


EDITOR’S DESK

How common is common knowledge?
Robert Aboukhalil / PhD Student, Watson School of Biological Sciences, CSHL

“Proteins are the workhorses of the cell”. According to Google, some 42,800 articles and web pages have used this sentence verbatim to preface their work. What’s more, they did so without citing whoever first thought it clever to compare proteins to workhorses. Does this constitute plagiarism? And what about those who claim that proteins are instead the machinery of the cell? Examples such as these illustrate that, except in rare cases of blatant copying, plagiarism has many shades of grey.

On the issue of plagiarism, Fang and Casadevall argue that good scientists are those who “strike out on their own paths, using their own words” [1]. Indeed, shamelessly copy-pasting without proper citation is one of the most common definitions of plagiarism. But under such a definition, would it be acceptable to re-publish Fang and Casadevall’s article in a different journal, replacing each word with a synonym? The National Academy of Sciences (NAS) – and the author of this article – would not think so. In its handbook on responsible conduct in research, the NAS goes further: it insists that plagiarism goes beyond using the same turns of phrase; it is about stealing ideas, and it is an infraction committed “intentionally, or knowingly, or recklessly” [2]. In that same handbook, the NAS presents a case study in which a certain Professor Lee is writing a research grant. In the background section, he includes short sentences copied from a review paper he did not write. These sentences are not novel ideas,

but summarize what is known in the field. He ends the section with a one-sentence summary of that review paper and cites it.

« “Proteins are the workhorses of the cell”. According to Google, some 42,800 articles and web pages have used this sentence verbatim to preface their work. »

Whether this is plagiarism is debatable. The case study suggests that the ‘borrowed’ sentences are common knowledge, much like the introductory sentence of this essay. If so, I would argue this isn’t plagiarism, on purely practical grounds: whom would he cite if a dozen other papers also used a similar sentence? That said, one would do better to choose different words in any case, if only to avoid clichés. Otherwise, if Professor Lee uses sentences that constitute novel ideas synthesized by someone else, he may want to heed the NAS’ warning that, in a stroke of misfortune, the author of the review paper may be sitting on the committee that evaluates his grant [2] ■

References
1. Fang, F. C. & Casadevall, A. Retracted Science and the Retraction Index. Infection and Immunity 79, 3855 (2011).
2. Committee on Science, Engineering, and Public Policy. On Being a Scientist: A Guide to Responsible Conduct in Research (National Academies Press, 2009).



FEATURE ARTICLE

Which way to Europe?

Biological tools reveal the roots of a language family tree. Yevgeniy Plavskin / PhD Student, Watson School of Biological Sciences, CSHL

The steppes of southern Russia and Ukraine are dotted with lone hills, each one sticking out like a sore thumb against the flat, grassy landscape. Their presence is not some quirk of local geology: these hills conceal giant burial mounds, built by an ancient horse-riding tribe for its fallen chieftains. Little else remains of this people, but when archaeologists began looking for the origins of the Indo-European culture – whose languages are spoken by some three billion people around the globe today – their suspicions quickly fell on these mound builders. Five thousand years ago, researchers surmised, the horsemen left their ancestral home, carried by a powerful invention: the wheel. As they swept across all of Europe and much of what is now Iran and northern India, the horse tribes spread their own culture and language throughout these lands, where they persist today. Perhaps the most amazing thing about this popular hypothesis is how drastically different the alternative is. Many linguists and archaeologists have argued that the birthplace of the Indo-European mother tongue was not southern Russia, but Anatolia, in what is now Turkey; that its first speakers were farmers rather than herders; and that they began their spread through the

flickr.com/hamed/266139768/

« A team of researchers headed by Dr. Quentin D. Atkinson from the University of Auckland began to wonder whether the powerful tools developed by biologists to study evolution could help solve the mystery of the origin of Indo-European languages. »

world some nine millennia ago, propelled not by the chariot, but by the plough. It may seem uncanny that we know so little about the cultural ancestors of much of Western civilization, Hinduism, and the ancient Persian empires. But the one factor uniting all the descendants of these mysterious ancient people – language – is not preserved in archaeological artifacts. And that makes the study of Indo-European origins incredibly tricky.

Last year, however, a team of researchers headed by Dr. Quentin D. Atkinson from the University of Auckland began to wonder whether the powerful tools developed by biologists to study evolution could help solve the mystery of the origin of Indo-European languages. As the ancient Indo-Europeans – whoever they were – settled into their new lands, their language began to split apart and change, much like living things evolve over time. Just like in biological evolution, such changes in language take time to accumulate. British and American English, which split apart a couple of centuries ago, are almost identical; German and English, separated for roughly 1,500 years, are strikingly different. We know German and English are descended from one ancestral language because of the many cognates, or words of common origin, that they share (for example, German phrases like “das Bier ist gut” and “das Wasser ist warm”). So if it seems surprising that Russian, Hindi, and English are all descended from the same Proto-Indo-European language, that is because each of them has had a very long time to evolve; nevertheless, their cognates give away their ancient common roots.

For biologists, the problem of studying the long-gone ancestors of today’s living things is akin to trying to figure out your family tree at a reunion. You might start by grouping relatives that have certain unique characteristics in common: for example, the only two redheads are likely to be siblings. From there, you can begin to reconstruct ancestors: one of the parents of those redheads had red hair too, you may guess. Sophisticated versions of this analysis allow biologists to determine the relationships between living things. In recent decades, the wealth of DNA sequence data from a huge variety of organisms has provided millions of characteristics that can be used to build family trees for everything from primates to viruses.

Dr. Atkinson and his team decided to apply the same methods to construct a family tree for 103 Indo-European languages, using cognates as the shared characteristics. For example, English shares twice as many words of Indo-European origin with German as it does with Russian; hence, English is more closely related to German. The resulting tree of languages itself held few surprises – linguists had determined these relationships

using different methods a long time ago. But for Atkinson’s team, the tree was just the first step. Biologists often use fossils – which come with a chemical “timestamp” – to determine how long ago an ancient creature lived, and apply this information to calibrate the time scale on an evolutionary tree. (Imagine you had to guess the ages of a friend’s grandparents; you would do a much better job if you knew how old his parents were.) In place of fossils, Dr. Atkinson’s team decided to use ancient languages. They knew, for example, that Old English was spoken in Britain around 1000 years ago. Placing Old English on the tree – along with Sanskrit, Latin, and many other ancient Indo-European languages – allowed Atkinson’s team to approximate when many groups of Indo-European languages first began splitting up. Extrapolating to the rest of the tree resulted in a predicted date for the first split in Indo-European languages – the time when Proto-Indo-European’s ancient speakers would have parted ways – of around 8500 years ago. This is also the time when the farmers of Anatolia are thought to have begun to spread out of their homeland.
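To make the tree-building step described above concrete, here is a minimal Python sketch. It is illustrative only: the toy cognate table, the four languages, and the simple average-linkage clustering are stand-ins chosen for this example, not the Bayesian phylogenetic analysis Atkinson's team actually used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Toy cognate table: 1 = the language retains a word of that cognate class, 0 = it does not.
languages = ["English", "German", "Russian", "Hindi"]
cognates = np.array([
    [1, 1, 1, 0, 1, 0],   # English
    [1, 1, 1, 0, 0, 1],   # German
    [1, 0, 0, 1, 0, 1],   # Russian
    [1, 0, 0, 1, 1, 0],   # Hindi
])

# Distance between two languages = fraction of cognate classes they do not share.
n = len(languages)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = np.mean(cognates[i] != cognates[j])

# Average-linkage (UPGMA-style) clustering turns the distance matrix into a tree.
tree = linkage(squareform(dist), method="average")
dendrogram(tree, labels=languages, no_plot=True)  # set no_plot=False to draw it
```

In this toy example, English pairs with German and Russian with Hindi, mirroring the cognate counts mentioned in the article; real analyses use hundreds of cognate classes and explicitly model how they are gained and lost over time.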

This date already argued against the hypothesis that the first Indo-European speakers were the mound-builders of Russia, who lived only 5000 years ago. However, the researchers also wanted to directly test the competing claims about where Indo-European languages come from. Scientists studying the evolution of viruses have recently been able to use DNA-based evolutionary trees of virus samples, together with information about the locations at which these samples were collected, to predict where each ancestral strain of virus originated. This provided precious information about where outbreaks began and how infections spread. Atkinson’s team again decided to borrow from the biologists, using the locations where each language on their tree is spoken today to pinpoint ancestral languages on a map. This approach has a number of caveats, and to test it, the group decided to look at

language.cs.auckland.ac.nz/media-material


where their analysis would place Latin, the ancestor of Romance languages such as Spanish, French, and Romanian. As expected, the statistical analysis placed their common ancestor smack on top of Rome, where Latin originated. Satisfied, the researchers let the algorithm run some more steps back in time, to the origin of all Indo-European languages. This language was placed in southern Turkey. The limitations of Atkinson’s analysis notwithstanding, it appears that the languages of Europe, Iran, and northern India don’t come from a tribe of chariot-riding raiders, but from a group of ancient farmers.

While this study weighs in on an important controversy, perhaps even more exciting than the answers it provides are the ways in which it arrived at them. Over the course of 150 years of studying evolution, biologists have developed a set of incredibly complex family tree-based statistical tools to investigate the long-extinct ancestors of today’s organisms. They have managed to learn where and when these ancient creatures lived, and even to reconstruct pieces of their DNA. By tapping into this amazing tool set, Dr. Atkinson’s research helps usher in a new age for linguistics. So far, their analysis and family

tree were based on relatively simple characteristics – a set of shared words. More complex future studies may include elements of grammar and pronunciation. Along with more tools borrowed from biology, these analyses can be applied to language families from all over the world. Recently, they have even allowed linguists to more accurately reconstruct ancient languages – much like biologists have been doing with ancient genes – lending a voice to the hitherto silent prehistoric remains of our ancestors ■

OPEN HARDWARE

Open source instruments for brain research Josh Sanders / PhD Student, Watson School of Biological Sciences, CSHL

As a rule of thumb, useful tools will set you back a few bucks. Yet a surprising fraction of the sophisticated software systems that quietly subserve our daily lives are open source – free of charge, and licensed to the public on the condition that derivative works properly acknowledge the original author. Household examples are Android, Firefox and Linux. Slightly more obscure examples are Apache (a common web

server that effectively runs the majority of the Internet), MySQL (a database platform that powers commercial and consumer information systems), and Python (a powerful, intuitive programming language). Open source software is found everywhere from the phone in your pocket to the critical computers that power nuclear submarines for the US Department of Defense. It is ubiquitous because it seems too good to be true – it is powerful, transparent, often brilliantly engineered, and free.

openpcr.org

While almost fifteen years have proven that open source software can actually be sensible for business, a three-year-old legal definition applying the same principles to entire electronic devices remains largely untested. Open Source Hardware (OSH) refers, in general terms, to any electronic device whose design files and firmware are free and openly licensed. A paper definition contributes a sense of identity, but what has really pro-


pelled open source hardware into being is the advent of open source microcontroller and computing platforms like Arduino, Beagleboard and LeafLabs. For someone not educated in the intricacies of assembly language (very few people on Earth are), these technologies make it possible, with only rudimentary programming skills, to use a microcontroller – a programmable computer on a chip – to intelligently control something they design for as little as $3. These game-changing platforms have found their initial niche powering commercial hobby products – for example, 3D printers (MakerBot), thermocyclers (Open PCR) and submarines (OpenROV) – but have been largely absent from business and academic research. What is missing is a precedent to generate trust.

« Behavioral Systems Neuroscience is a natural candidate for testing the reach of Open Source Hardware. It is a tinkerer’s science by necessity. »

Behavioral Systems Neuroscience, the field of research seeking to understand how a live brain functions at the circuit level to produce behavior, is a natural candidate for testing the reach of Open Source Hardware. It is a tinkerer’s science by necessity, a Cell Biology waiting for its electron microscope. Three key applications in this field require rapidly evolving electronic devices which are often proprietary and very expensive: 1. acquiring weak electrical or optical signals from the brain and storing them to disk, 2. electrically or optically manipulating brain activity, and 3. precisely controlling and capturing aspects of the subject’s environment. Signals acquired from the brain are aligned to the record of the environment, and decrypted to determine what information they contain about manipulations and behavioral events. Using this strategy, the codes used by the brain to represent faces, places, sounds, judgement errors, movements and visual scenes, to name a few, have been at least partially solved.

An open hardware toolset in Behavioral Systems Neuroscience would provide flexibility in experimental design, since the process of changing how the equipment functions under the hood (or knowing this information at all) begins as simply as looking up the design files in a public repository. Our technologically sophisticated era has heavily delegated the task of reimagining bioscience methods that depend on electronics to commercial engineers who generally keep their innovations secret, and enormous potential for benchside innovation by bioscience researchers goes unrealized. Though it is arguable that this division is necessary for either professional to achieve expertise, perhaps it has gone too far. With some well-funded and

forward-thinking laboratories being the exception, it is generally the case that when acting on an idea means hiring a consulting engineer, good ideas end up going unexplored. Thankfully, development of an open source toolset for Systems Neuroscience is already well underway, and the initial results are quite promising. The Open Ephys project (available at www.open-ephys.org) was spearheaded by graduate student co-founders Josh Siegle and Jakob Voigts at the Picower Institute for Learning and Memory at MIT. The Open Ephys team has engineered a full-featured electrophysiology acquisition system that has been validated on awake, behaving mice.

« Regardless of whether they are actually produced, the availability of transparent, quality instrument designs in the public domain has the potential to transform what possibilities researchers think to entertain, and how well they understand the black boxes that empower their research. »

In plain terms, their instrument amplifies up to 128 weak electrical signals captured from brain probes (often only tens of microvolts in amplitude), converts them to a digital format that a computer can read, filters the signals to isolate the parts that are useful for analysis, and stores the processed data to disk – and it does this for each of its 128 channels 30,000 times per second. A comparable instrument from leading commercial vendors (e.g. Neuralynx, Tucker Davis Technologies, Blackrock Microsystems) typically costs well over $80,000, while Open Ephys weighs in at less than a twentieth of that price if assembled in-house. The development of Open Ephys was greatly accelerated by the decision to use bioamplifier chips from Intan Technologies, a company that has been very supportive of the team’s goal of keeping the design completely open. In the past month, the team has upgraded their original design to include an embedded accelerometer for recording head motion – a sensible innovation that simply isn’t available in commercial alternatives.

Cost and hardware advantages aside, where Open Ephys really outshines commercial instruments is in its software. The Open Ephys application takes stylistic cues from pro audio processing suites like Ableton Live and Reason, providing the ability to define a sequence of digital processing steps for each channel by dragging and dropping configurable filters onto a visual pipeline. Unlike commercial alternatives, which sometimes still rely on cryptic text files for configuration, the experience is rife with elegant visualizations, feels intuitive and makes the behavior of the instrument explicit. These categorical improvements upon existing methods illustrate the power of a tool that is actively curated in the same setting where it is used for research. Though its inception required the design

of an open source instrument from scratch, since the design files and software are in the public domain, the barrier for researchers to add similarly minded improvements as their needs arise is now much, much lower. While Open Ephys has done a spectacular job open-sourcing the process of acquiring data from the brain, a complementary project to develop open-source hardware for orchestrating brain manipulation and


stimulus control has taken root here at Cold Spring Harbor Laboratory. The instrument is called Pulse Pal, and it presently powers almost a dozen ongoing research projects by controlling lasers and generating simple psychoacoustic waveforms. Pulse Pal can be assembled at a soldering bench in under one hour, costs less than $200 in common electronic parts, and improves upon the functionality of commercial instruments costing thousands. The project is in the final preparation stages for its public debut, and a more sophisticated derivative work based on newer surface mount technology is slated to converge with the Open Ephys project. So far, both initiatives have been graduate student side projects – but graduate students eventually graduate. These projects are different from open source software tools, which can often still do useful

work out of the box if they are perfected and released without further active support. Even with thorough documentation, the assembly process for low-quantity orders (as is almost always the case in an academic setting) requires expertise and people willing to dedicate their time to learning the ropes. Finding these technically minded researchers is less difficult at institutions that foster an engineering culture, like MIT, where Open Ephys got its start – and a much taller order for institutions that specialize in non-engineering disciplines. The instruments could be produced commercially, but it remains an open question whether a business model based on open source scientific instruments can survive on its profits alone. Regardless of whether they are actually produced, the availability of transparent, quality instrument designs in the public domain has the potential to transform what

possibilities researchers think to entertain, and how well they understand the black boxes that empower their research. The public availability of these designs will also encourage commercial instrument designers to innovate more quickly to stay above the bar. Behavioral systems neuroscience research is critical to understanding how the brain functions in health and in disease; yet as a field, it is especially throttled by the sophistication of its instruments. Perhaps a family of rapidly evolving open source tools can leverage good ideas contributed by the research community at large, allow more ambitious research to proceed in settings with tight funding, and generally bridge disciplines to un-throttle our rate of progress along several fronts in the quest to understand the brain ■
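To make the per-channel processing chain described above (amplify, digitize, filter, store) concrete, here is a minimal Python sketch. It is illustrative only: the sampling rate matches the 30,000 samples per second mentioned in the article, but the filter settings, the synthetic data and the output filename are assumptions for this example, not part of Open Ephys itself.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30_000  # samples per second per channel, as described above

def spike_band(trace, low_hz=300.0, high_hz=6000.0, order=3):
    """Band-pass one digitized channel to keep the frequency band where spikes live."""
    b, a = butter(order, [low_hz / (FS / 2), high_hz / (FS / 2)], btype="band")
    return filtfilt(b, a, trace)  # zero-phase filtering of the whole trace

# One second of synthetic data standing in for a single amplified channel
# (tens of microvolts of noise, as in the description above).
rng = np.random.default_rng(0)
raw = rng.standard_normal(FS) * 20e-6

filtered = spike_band(raw)
np.save("channel_000_filtered.npy", filtered)  # the "store to disk" step
```

A real acquisition system runs this kind of step continuously on all 128 channels, with the filter chain assembled per channel rather than hard-coded.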

EVOLUTION

The Neanderthal in us

Irene Liao / PhD Student, Watson School of Biological Sciences, CSHL

If you have European or Asian ancestry, 1 to 4% of your DNA sequence may come from Neanderthals, the result of interbreeding between modern humans and Neanderthals. This finding comes from analyses of the Neanderthal genome – the DNA sequence of the closest evolutionary relative of modern humans. In May of 2010, a group of researchers led by Svante Pääbo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany published 60% of the Neanderthal genome. The genome was reconstructed from three female Neanderthal bone samples found in the Vindija Cave in Croatia.

For many years, the question of whether modern humans and Neanderthals mated has been intensely debated. Coming as a surprise to the researchers, portions of the human genome resemble parts of the Neanderthal genome. In particular, sequences from humans of non-African descent (European, Asian, Papuan) were found to be more similar to the Neanderthal genome than sequences from Africans are.

Recently, Sriram Sankararaman and David Reich of the Department of Genetics at Harvard Medical School in Boston, Massachusetts, together with co-author Pääbo, published a study estimating that the last interbreeding events between Europeans and Neanderthals most likely occurred 47,000–65,000 years ago. This recent time frame counters the hypothesis that the sequences shared between non-Africans and Neanderthals may have come from a more ancient common ancestor.

Researchers compared the human and Neanderthal genomes to identify genes unique to modern humans. Some of the genes found only in humans are important for skin development, metabolism, and cognitive abilities. Specifically, differences in the RUNX2 gene may explain morphological differences in the brain and the upper body between humans and Neanderthals.

With advances in sequencing and analyzing ancient DNA, more studies will address the question of interbreeding between modern humans and Neanderthals and elucidate the genes that make humans unique ■


FEATURE ARTICLE

Valuable brain cells generated from an unlikely source
Kristen Delevich / PhD Student, Watson School of Biological Sciences, CSHL

At this moment, millions of toilets are flushing. Whirling down the drains is something that might one day be used to treat Parkinson’s disease and ALS – urine. That is, the cells in urine. It turns out that living cells found in human urine can be programmed to generate a steady stream of brain cells. Researchers at the Guangzhou Institute in China found a quick and relatively efficient method that converts urine cells into a parental cell type that gives birth to a variety of nerve cells. The hope is that one day, a patient could submit a simple urine sample and get back the neurons he or she has lost to neurodegenerative disease.

It might be surprising that urine contains living cells in the first place. They come from the kidneys, where they line the tubules, until they detach, get excreted and become so-called urine cells. The key to transforming these simple epithelial cells into bona fide neurons is to turn back their developmental clocks. Through a process called reprogramming, scientists push adult cells to revert to an earlier, less specialized form. This process essentially takes cells through the plotline of The Curious Case of Benjamin Button. Their fixed identities unravel until a small fraction of cells ultimately resemble what they were in their infancy: embryonic stem cells. Embryonic stem cells have limitless potential for self-renewal and can give rise to any cell type. Similarly, we begin life with myriad possibilities of who we will become, but as we go through life we make decisions that set us on a more defined path.

The same goes for cells as they develop: embryonic stem cells commit to lineages that give rise to mature cell types, and normally, after fate decisions are made, there is no going back. This is what made the breakthrough discovery of reprogramming factors by 2012 Nobel Prize winner Shinya Yamanaka so surprising. He found that merely four gene products, called transcription factors, were sufficient to turn back the clock and reprogram mature cells into stem cells. Transcription factors are proteins that bind DNA and recruit the machinery that transcribes the DNA code into mRNA, which then serves as the template for protein production. The idea is that the Yamanaka factors must turn on gene products that give stem cells their unique abilities. Only recently have scientists begun to carry out detailed molecular studies that track how cells change during reprogramming. It appears that cells go through distinct intermediate steps, so that, like the Benjamin Button story, reprogramming really does look like development in reverse.

Reprogramming efficiency is very low in general: only a tiny fraction of the cells end up resembling embryonic stem cells. These fully reprogrammed cells are called induced pluripotent stem cells, or iPSCs. Apart from the amniotic sac and placenta (which only embryonic stem cells can make), they can become any type of cell in the body – from the bone cells that form your elbow to the nerve cells that respond when you whack it. Reprogramming adult cells – be it from a cheek swab or a urine sample – into iPSCs

skirts the ethical issue of harvesting stem cells from human embryos. In addition, they don’t cause immune rejection like embryonic stem cells do, because they come from the patient’s own tissue. What makes urine a great cell source for reprogramming experiments is that it’s plentiful and easy to collect. More commonly used cell sources require invasive procedures such as skin biopsies and blood draws.

The Chinese researchers were studying methods to increase the efficiency of reprogramming when they made a surprising observation. The gee-whiz moment came when Wang and colleagues noticed that urine cells grown in a specially defined media clumped together in a flower-like shape. This rosette pattern was reminiscent of the way that neuronal progenitors, the parental cell type that gives rise to all types of nerve cells, grow. They had stumbled on a shortcut for producing brain cells: the cells picked from the rosette expressed genetic markers characteristic of neuronal progenitors, not iPSCs. The urine cells had acquired the ability to grow and self-renew without ever becoming iPSCs. By repeating the same steps, the scientists made the same progenitor cells from three men. While reprogramming efficiency was only a fraction of a percent (0.2%), they could produce enough progenitors to generate many types of mature cells found in the brain.

Besides being faster, this shortcut method is also safer than previous methods used to make brain cells. Before, scientists made brain cells by forcing mouse connec-


tive tissue cells to make proteins and RNAs normally found in neurons. This method required that viruses insert foreign DNA into the mouse cell’s genome. Unfortunately, viral integration can lead to the unwanted side effects of genomic instability and cancer. Instead, Wang and colleagues introduced a small circular piece of bacterial DNA that served as a platform for reprogramming-factor delivery. Bacterial platforms multiply in the cytoplasm, and reprogramming factor expression is driven directly off of them – no integration into the host cell’s genome required. It’s all well and good that urine cells can produce brain cells in a dish, but in order for replace-

ment therapy to work, they would have to convert and survive in a living brain. To test this, researchers transplanted neuronal progenitors made from human urine into the brains of newborn mice. They saw that the urine-derived neuronal progenitors produced mature brain cells that survived for at least a month in the mouse brains. Follow-up studies are needed to test whether these neurons actually participate in brain function. Importantly, there was no evidence of tumor formation, indicating that the virus-free system could be safe for therapeutic applications.

Overall, the ability to convert excreted urine cells directly into brain cells is faster and safer than older methods that first require reprogramming to iPSCs. The higher efficiency of reprogramming mature cells directly to neuronal progenitors, plus the abundance of source material, makes pee-made brain cells an exciting tool for the study and treatment of nervous system disorders ■

Q&A

Genspace: A DIY Biology Lab

Oliver Medvedik is a co-founder and the Director of Scientific Programs at Genspace, a not-for-profit organization that aims to create a do-it-yourself biology hackerspace, where science enthusiasts can work on interesting side projects and where the general public can attend classes to learn about the fundamentals of biology.

What first got you interested in do-it-yourself (DIY) biology? In 2008, I started a lab with my colleague Mitchell Joachim, called the Bioworks Institute. It was a biodesign laboratory, where we worked at the intersection of biotech and design. While I wanted to work on more biology-heavy projects, Mitch was more interested in design projects. But there was some overlap, and I thought it would be interesting to have a lab where we could work on mutual projects. We found a place in Brooklyn, NY and we opened the lab. My initial goals were more focused on working on projects that may not fit in a traditional lab.

What was the inspiration behind Genspace? Later, I met Nurit Bar-Shai [Editor’s note: Nurit is now the Director of Cultural Programming at Genspace] and she introduced me to other DIY biology enthusiasts

interested in working on biotech-related projects, who later became the other co-founders. They wanted to start a lab that was even broader and more open than Bioworks. Since I had this lab here, we decided to co-found Genspace.

How would you describe Genspace? It’s essentially a community lab for anybody interested in using biotech for purposes of art, entrepreneurship and citizen science. It’s also like a gym: once you become a member, you can go there anytime, 24/7. Since this was new, at the beginning, we wondered: “if we build it, will they come?”

Did they? They did! That’s because there’s a great need for cheap space to do science. And they came in great quantity, which was impressive.

Can members work on any projects? The project being worked on just needs to be a safe project (biosafety level 1). Other than that, there are no requirements for the project to generate a revenue stream or to have immense scientific merit.

How does Genspace compare to academic labs? The difference is that there is no one overarching scientific project, and there is no principal investigator who guides the direction of the research. The main similarity is that we’re always scrambling for funding!

What is the most exciting project you have seen come out of Genspace? There is one exciting bio-art project done by artist Heather Dewey-Hagborg. For her project, she collected hair samples and cigarette butts in New York City, from which she extracted and sequenced the DNA. She used Genspace to isolate the DNA from her samples for later analysis. When she got the sequence data, she looked at known indicators of eye color, hair color and ethnicity. She would then use that information to make sculptures of what she thought the people looked like. I thought it raised some really important issues about privacy. Although some areas have laws regarding how others can use that data, a lot of other places don’t have such laws.

genspace.org

« Genspace is a community lab for anybody interested in using biotech for purposes of art, entrepreneurship and citizen science. »

How do we protect genetic privacy? When it comes to information, I think it’s much more critical to put laws in place to protect citizens from that data being misused. It’s almost impossible to have your medical and genetic data be completely locked down, especially given the fact that we’re shedding cells constantly. But we should make that data useless to somebody who wants to profit from it. Rather than having penalties if you find out someone’s information, it should be illegal for you to rate someone’s insurance based on that data, or to discriminate against candidates for a job, for example. If an insurance company knows that they can have the data but cannot use it, they will not go for it. Of course, medical data sharing is very important for the advancement of medical research, but it cannot be used to hurt someone’s career or livelihood.

You also teach courses at Genspace. What is the main audience of the courses? Everybody. Although interestingly, most of the people taking our courses have a professional degree in something else (e.g. software, finance, art, etc.), but are interested in learning about biotech.

At this stage, most people are probably early adopters, but when will it come to a point where the general public can take courses here on biotech? Well, they can take the courses now! I think what’s holding us back is a broader acceptance of DIY biology, more funding, and having more spaces like these available. It’s not even a matter of technology at this point, it’s a matter of access ■

+

Check out Genspace at genspace.org


Biology of Genomes 2013

The Biology of Genomes is a yearly conference at Cold Spring Harbor Laboratory with attendees who are known to be Twitter aficionados. Conference speakers can choose whether they want their talks live-tweeted; many speakers opt in and also live-tweet other talks. Shortly after the meeting concluded, we compiled all tweets containing the hashtag #bog13 and looked at their distribution across the five days of the conference. Analysis: Robert Aboukhalil and Charla Lambert

[Chart: number of #bog13 tweets across the conference sessions, May 7–11. Conference Sessions: 1. High Throughput Genomics and Genetics; 2. Genetics of Complex Traits; 3. Poster Session I; 4. ENCODE Tutorial; 5. Functional and Cancer Genomics; 6. Computational Genomics; 7. Poster Session II; 8. ELSI Panel and Discussion; 9. Evolutionary Genomics; 10. Genetics and Genomics of Non-Human Species; 11. Poster Session III; 12. Keynote Speakers: Andrew Fire and Eric Lander; 13. Population Genomic Variation]
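A minimal sketch of the kind of tally behind a chart like this, assuming a hypothetical CSV export of the #bog13 tweets with an ISO-formatted timestamp column (the filename and column name are illustrative, not the actual data used for this analysis):

```python
import csv
from collections import Counter

# Hypothetical export: one row per #bog13 tweet, with an ISO timestamp column.
per_day = Counter()
with open("bog13_tweets.csv", newline="") as f:
    for row in csv.DictReader(f):
        day = row["created_at"][:10]   # e.g. "2013-05-08"
        per_day[day] += 1

for day in sorted(per_day):            # tweets per conference day
    print(day, per_day[day])
```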



Q&A

Gustavo Stolovitzky on Computational Biology

Gustavo Stolovitzky is the Manager of the Functional Genomics and Systems Biology Group at IBM Research. With over 100 papers and a dozen patents under his belt, his work has been featured in the New York Times, The Economist and Scientific American. He sat down recently to discuss his career path, research interests, and life in industry.

Tell us about your career path. I grew up in Argentina, where high schools generally offer specialized programs such as commerce, industry or humanities. I chose the ‘industrial’ specialization and was particularly interested in electronics. There, some of my teachers awoke my curiosity in physics, and it was then that I realized I wanted to be a physicist. I did my Masters in Physics at the University of Buenos Aires, where I studied chaos theory, a theory that explains how complexity arises from simple dynamical systems.

« The DNA transistor project is a new concept of doing DNA sequencing by controlling the passage of DNA through a nanopore that only allows a single molecule to thread through. »

For my PhD, I went to Yale because I wanted to work with Prof. K. R. Sreenivasan, a mechanical engineering and physics professor who studied chaos theory and, in particular, fluid turbulence. When I completed my PhD in 1994, the Human Genome Project was already underway. And although fluid turbulence is an interesting and still unsolved problem of physics, it was not at the frontier of intellectual breakthrough. Just as I had been trying to understand turbulence from a statistical point of view, I thought we could understand genomes using statistics. This drove me to Rockefeller University, where I did a 3-year postdoctoral fellowship in computational biology. Then I did another fellowship for 6 months at the NEC Research Institute and, towards the end, I attended a bioinformatics conference, where I saw a listing for IBM Research looking for computational biology researchers. And here I am.

What was it like working at IBM? It felt like touching the sky with my hands. I started working there in 1998. Ten years before, IBM had led the way with breakthroughs in superconductivity and the scanning tunneling microscope. Soon after I joined IBM Research, I met some of the researchers I had admired for many years, such as Charles H. Bennett, Benoit Mandelbrot and Gregory Chaitin, just to name a few. I could see them in the cafeteria and go talk to them. Previously, I could only read about them, but now I felt so close to the action. Back when I was doing my PhD at Yale, one of my good friends from my University of Buenos Aires years had completed a postdoctoral fellowship at IBM Research. He told me: “I’ve met the most interesting and intelligent people I know at IBM Research”. And that’s exactly how it feels working here.

What is the most interesting project you worked on at IBM? The DNA transistor project! Basically, it’s a new concept of doing DNA sequencing by controlling the passage of DNA through a nanopore that only allows a single molecule to thread through. We have tiny electrodes set in place that create an electric field to trap the DNA at each base and interrogate it. Of course there are other interesting projects too, but the DNA transistor merges basic technologies that IBM is very good at – semiconductors and manipulating materials at small scales (even atoms!) – with an application to biology. It’s really at the frontier between many disciplines.

When did the DNA transistor project begin? It was around 2006, when Sanger DNA sequencing was still very much alive but the 454 sequencing machine had appeared and changed the landscape. It’s easy to forget that sequencing is so recent! This is the incredible thing: we are living a revolution even though we may not realize it. We are a generation that will make history for the rest of humankind: we are the first generation that can “read”, study and interpret the human genome in its totality. Think about that. It’s like being blind to something and suddenly, you see it in its 3-billion-part majesty. We are also living the important revolution that is the Internet and the democratization of information, which intertwines informatics with biology in a synergistic way.

You are in industry but you also do academia-like research. Do you feel like you are in industry? On a day-to-day basis, when you read or write papers, write code, or collaborate with other labs, it feels the same. In industry, however, when it comes to interacting with a customer, your research takes on a different kind of urgency. In academia, professors must write and procure grants. I don’t have this problem, but I have to satisfy IBM’s customers and protect IBM’s assets and intellectual property. In industry, there is a sense that everyone in the company is rowing the same boat, and if one of its parts fails, we all suffer the consequences. I think in an academic research lab the feeling is more of an independent effort: the lab succeeds or fails alone.

When I told people we were interviewing a computational biologist from IBM, the first question was: “IBM does biology?”


When did this start at IBM? It was around 1994 that IBM Research first toyed with the idea of doing some work on genomes. It wasn’t deep biology; it was more the mindset that here we have a set of letters (the genomes), and we know how to analyze zeros and ones, so it can’t be all that different. And they were already doing other work in computer science and image processing that they could apply to studying genomes and proteins, so they did it, and that was a successful first step. Eventually those earlier efforts prompted the creation of the IBM Computational Biology Center in 1998. Since then we have become a very active research center.

Questions from our readers: To get a feel for working in industry, are internships during the PhD a distraction? Not at all, internships are great if you are curious about how research is conducted in industry. And let’s be clear: every industry is different. For example, pharmaceutical companies, biotech companies, and IBM Research all have different cultures, business needs and pressures. Doing an internship not only gives you an informative view of how research is conducted in an industrial context, but it also gives you a feel for what it takes to be successful in industry.

What do you think is the most interesting, unsolved problem in computational biology? One of the most necessary ingredients for understanding biological systems is a mechanistic understanding of biological processes. For some specific pathways, we do understand some things; I am familiar, for example, with some details of the pathways for apoptosis, p53 signaling and B-cell differentiation. Those are areas that people have studied a lot, but there is a lot more happening that we don’t know about. Specifically, we need to understand causal interactions between cells in a dynamic way (i.e. predict what happens when you change one component). And we must do that in a way that can be biologically validated ■

+

Check out Gustavo’s research at researcher.ibm.com/person/usgustavo



Transformative Research and the Future of Science

Will funding high-risk projects reinvigorate our current research approach? Dana M. King / PhD Student, Washington University in St. Louis

flickr.com/horiavarlan/4273968248

In current scientific research, the pressure to secure grants and renew funding might be limiting the risks that researchers are willing to take in their research projects, and subsequently stifling innovation. As the United States continues to lag behind in science (as highlighted by the OECD’s 2011 education rankings and Senator Bob Casey’s 2012 report on STEM education), funding agencies have begun to invest in projects that foster innovation and collaboration, focusing less on the results of the projects and more on the tools that they will develop. A pioneer, the Howard Hughes Medical Institute (HHMI) implemented a Collaborative Innovation Award, awarding eight projects in 2008 and funding another six projects in 2012. The goal of the initial award was to provide support for longer-term collaborations that “...could yield important results, but may never directly further the lab’s own mission,” according to Philip Perlman, a senior HHMI scientific officer overseeing the program. Additionally, the National Science Foundation (NSF), a lead source of research capital in the United States, followed suit in 2011, offering $1 million in support over five years to small teams through its Creative Research Awards for Transformative Interdisciplinary Ventures (CREATIV), expanding the funds available for new lines of research.

The biggest caveat of this new award system is how “transformative research” is defined. Robert Frodeman, philosophy professor and director of the Center for the Study of Interdisciplinarity at the University of North Texas, cites Watson and Crick’s presentation of a model for DNA structure as a prime example of a project whose impact reached far beyond the tomes of scientific journals and instead transformed the way humanity views itself. This description does little to define what criteria a “transformative” project would need to meet in an initial proposal. The NSF outline for CREATIV submissions is more direct in its requirements; it states that these grants are NOT for “projects that continue along well-established lines of research” and that proposals are expected to “integrate across multiple disciplines, as opposed to incorporating disciplinary contributions additively.”

Has this infusion of funds revolutionized research methods or shifted the public perception of scientific pursuits? A Google News search for “HHMI Collaborative Innovation Awards” yielded one result prior to press (for scale: querying “Human Genome” yields about 7,000 news results and querying “puppies” yields over 19,500). So, while awareness within the research community might have increased, the public impact is not clear at the level of mainstream media. It may just be too early to see the effects of transformative research grants. It could also be that while there is a shift in funding requirements, the groups that receive the awards remain relatively unchanged (see HHMI Award lists, 2008 and 2012). Well-established senior researchers are a safe bet, but is funding established groups the best way to break down barriers for researchers or stimulate fresh ideas? While these funding mechanisms might not be targeting early-career scientists, they do emphasize that fostering cross-discipline

communication should be given credit as transformative in that sense. Research areas are often characterized by a high degree of specialization, creating a language barrier between researchers and reducing the tools and resources available to solve the complex problems faced in current research areas. However, creating diverse networks of experts and emphasizing public outreach as part of grant requirements could stimulate innovation more than investments in the aims of any singular research project. The cultivation of challenging intellectual environments at research institutions has contributed to some of science’s greatest discoveries as much as the power of the individual minds credited. In this rapidly developing age of technology, it could be possible to cultivate an innovative environment independently of the confines of an institution or specialization. To truly transform the way science is done, the way science as a discipline is approached must also change ■

Further Reading
• HHMI Collaborative Innovation Award press release: hhmi.org/news/20081120.html
• Collaborative Innovation Award, Team Leaders: hhmi.org/news/20081120_list.html
• Science Careers, Innovation: sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2012_08_10/caredit.a1200091#box1
• NSF CREATIV press release: nsf.gov/pubs/2012/nsf12011/nsf12011.jsp
• Great Science article on transformative research: sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2012_08_10/caredit.a1200091


Genome Envy

Will Donovan / PhD Student, Watson School of Biological Sciences, CSHL

Humans have an inferiority complex when it comes to our genetic material. Back in the early 1970s, when scientists started characterizing the amount of DNA in different organisms, they were perplexed to realize that we don’t have as much DNA as we should. Or, at least, we don’t have as much as we thought we should. If DNA is the blueprint of life for all organisms, then why don’t humans, clearly the greatest and most complex organisms, have the most DNA? And it’s not like it’s a photo finish between us and close relatives like chimpanzees. Some salamanders have as much as 40 times the amount of DNA as humans, and most flowering plants have more DNA than us too. This was a big problem for biologists, but after learning a bit more about the genome, they came up with what seemed like a plausible explanation.

« Rice may have up to 50,000 genes. Rice. The kind we eat. How could rice be more genetically exciting than humans? It just... sits there. »

Most of our DNA, scientists said, is junk. The real value of a genome is not the total amount of DNA, but the number of genes contained therein. Genes are the real centerpiece of biological complexity. These jewels of the genome are like little recipes encoded in the DNA to make the main cellular components and effector molecules. We weren’t finished finding all of the genes quite yet, but it seemed clear that flowers and salamanders must just have a whole lot more junk than we do, obscuring their true simplicity. In other words, size doesn’t matter if you don’t have the genes to fill the space. However, as scientists started sequencing and analyzing the genomes of many different organisms, they found that this just wasn’t true either. Humans only have around 25,000 genes, while rice, for example, may have up to 50,000 genes. Rice. The kind we eat. How could rice be more genetically exciting than humans? It just... sits there. This time there was no one obvious higher level of information to appeal to – there were a lot of them. Scientists explained our genomic inferiority away with factors like isoforms (a single gene can make several gene products), pseudogenes (duplicate copies of genes in the DNA which are no longer functional), and post-translational modifications (little add-ons to gene products that make them act differently).

Looming over all these explanations is the idea of regulation. The genetic program in any one cell type is usually tightly controlled. Countless genes and pieces of DNA exist only to regulate other genes. There is even a level of regulation scientists refer to as “epigenetics”: gene regulatory changes which persist and can be passed on through cell division, absent the signal that set up the original change. Epigenetic regulation states differ through development and across cell types, and change how the same genetic information is interpreted in each context. So which of these reasons can fully explain the complexity of humans? All of them. Combining all of these different means of regulation gives such an enormous number of potential outcomes that it is easy to see how complex, multicellular organisms can arise. Humans have 25,000 genes, and if you only consider how all of those gene

products interact with each other (and most gene products interact with many others), then simply turning them on and off in different combinations already gives an exponential number of possibilities. Add in all the other levels of regulation and modification, and it starts to seem like we’re almost too simple for our genomes. Organisms, and their genomes, have evolved over billions of years. We may not think of rice as being as complex as us. After all, it can’t even move. But maybe rice had to evolve more complexity because of this. Plants have strategies for warding off predators, fighting parasites, and getting nutrition, each one of them hard-wired into their DNA. They have developmental phases, many different cell types, and a way weirder reproduction strategy than humans do. Who is really to say that we are the more complex organisms? We know so little about most species on the genetic and molecular level, and there is no objective method to determine which are the “higher order” organisms anyway. Now, humans will always focus on humans. And we should, in order to further understand human health and cure disease. But sometimes, our ego gets in the way of our understanding. If we had accepted long ago that maybe we are not the undisputed kings of complexity, we might, for example, not have written off huge parts of the genome as “junk” DNA (it turns out, it’s not junk). Or we might have made groundbreaking discoveries earlier, such as RNAi, first discovered in worms and thought to be limited to “simple” animals. If we, as biologists, want to study life, we learn the most by studying all the different ways life has succeeded, evolved and thrived - not just how we did ■
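As a back-of-the-envelope illustration of the combinatorial point above, here is a tiny Python calculation. The gene count is the article's round figure of 25,000, and treating each gene as simply "on" or "off" is of course a drastic simplification:

```python
import math

n_genes = 25_000                  # approximate human gene count cited above
on_off_patterns = 2 ** n_genes    # every gene independently on or off (exact integer)

# The number itself is astronomically large, so count its decimal digits instead.
digits = math.floor(n_genes * math.log10(2)) + 1
print(digits)  # 7526 digits, versus roughly 80 digits for the atoms in the observable universe
```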


COMPUTATIONAL NEUROSCIENCE

Object recognition: The hard problem John Sheppard / PhD Student, Watson School of Biological Sciences, CSHL

MIT in the 1960s stood at the forefront of a digital revolution. Led by many of the field’s early giants – among them Robert Fano, John McCarthy and Marvin Minsky – advances in computing were already ushering in a paradigm shift in technology and communication. John McCarthy had recently pioneered the Lisp programming language, Marvin Minsky and Seymour Papert were developing the “computational geometry” that would herald decades of research in artificial neural networks, and for the first time, students and researchers enjoyed access to a central computer system with over a hundred access points across the university. The rapid development of more powerful computers, operating systems, and programming environments culminated in 1963 at MIT with the launch of Project MAC, a DARPA-funded venture with the ultimate goal of developing a full artificial intelligence rivaling even human cognition. As a stepping-stone, one of the first projects was meant to be completed within a single summer: the creation of a functioning computer vision system (Fig. 1). In spite of Project MAC’s great success, encompassing several decades of breakthroughs in AI, progress in computer vision languished due to the seemingly insurmountable challenge of object detection. Though dissecting the spatial frequencies and simple features of images posed little challenge, the AI Lab’s vision system failed at detecting the boundaries and categories of objects in the visual scene, which

continues to be a central challenge in computer vision today. Only a decade earlier, the physiologist Stephen Kuffler had begun experiments on the mammalian visual system that would lead neuroscientists to start tackling the same computational problems in reverse: How did our brains allow us to see? Using microelectrodes to record action potentials of single ganglion cells in the retinas of anesthetized cats, Kuffler quantitatively mapped the responses of ganglion cells to precisely controlled visual stimuli. As early as the 1930s, Haldan Hartline had recorded the output of single ganglion nerve fibers and discovered that ganglion cells could exhibit various response patterns to light flashed on a particular part of the visual field: so-called “ON” units spiked signifi-

cantly more in the presence of light shone at the center of the visual field, while “OFF” units were suppressed by light and had transient responses to a change from light to darkness (Hartline, 1938). Yet like the early endeavors in computer vision, this pioneering work on the vertebrate retina hardly accounted for the complexity of human vision, and the neural mechanisms subserving our ability to identify objects in the world remain poorly understood to this day. Why have the computations underlying object recognition posed such a challenge for computer and neural scientists alike? At least two explanations come to mind. First, unlike the three dimensions needed to describe a simple visual stimulus in space, the categories of mental abstraction occupy an extremely

Figure 1. The 1966 report outlining the MIT Artificial Intelligence Group’s planned summer project to develop computer vision.


high-dimensional space. From the neuroscientist's standpoint, this makes it difficult to even identify stimuli that reliably excite a given cell. Second, the operations needed to achieve properties such as stimulus invariance from raw visual input (e.g., detecting a face as a face whether viewing it head-on or in profile) are highly nonlinear. By and large, the greatest advances in computational neuroscience and applied mathematics in general have relied upon linear systems theory. Thus, object recognition confronts us with computations that are both difficult to conceptualize and computationally intensive to implement.

Figure 2. Diagram of the macaque monkey visual system ("dorsal stream" and "ventral stream" panels). From the retina, visual information travels through the lateral geniculate nucleus of the thalamus and, after passing through cortical areas V1 and V2, is diverted into the "dorsal" and "ventral" processing streams. The large extent of the primate neocortex devoted to ventral stream processing (> 150 million neurons, over 10% of all cortical neurons) reflects the complexity of nonlinear transformations required for object recognition and other functions performed by ventral stream areas. Figure obtained from (DiCarlo et al., 2012).

Despite these challenges, both our understanding of object recognition in the brain and our ability to carry it out in silico are growing. Moreover, it is increasingly clear that our progress in either endeavor will require a fusion of the two seemingly disparate fields of artificial intelligence and computational neuroscience. This is evident when considering the current state-of-the-art in computer object recognition. The current record is held by a team at Google Research, who recently used a 9-layered artificial neural network to identify cats, human faces, and 20,000 other object categories via unsupervised learning implemented on a cluster of 16,000 computing cores (Le et al., 2012). Their network could be used to detect cat faces in random images from YouTube with 75% accuracy, and achieved 16% accuracy when identifying objects from a wider set of 20,000 object categories. Though these results may seem modest, they represent a 70% improvement in accuracy over the previous state-of-the-art.

The most interesting aspect of this work, however, is the biological inspiration behind their computational algorithm: their network consisted of 9 layers of artificial neural populations trained to represent various features of the image (Fig. 2). Using a stacked network architecture featuring 9 identically organized layers, the team adjusted weights on the 1 billion connections between neurons so that image features were represented as efficiently as possible. After training, object recognition was performed by identifying neurons that attained the most selective tuning for specific objects, such as cats and human bodies. Notably, object classifications were read out from the most selective units using the same basic methods neurophysiologists employ to characterize coding in real neurons. Indeed, their computational algorithm is based on four fundamental concepts of neural computation: local receptive fields associated with individual neurons, pooling of inputs across many neurons, divisive normalization or "gain control" to provide invariance to changes in stimulus contrast, and serial processing in successive layers to encode increasingly complex, higher-level features (Fig. 2).
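The network of Le et al. is, of course, far larger than anything reproduced here. The toy NumPy sketch below simply strings together the four ingredients just listed: local receptive fields, pooling, divisive normalization, and stacked layers. All function names, sizes and parameters are illustrative assumptions, not the published architecture:

```python
import numpy as np

def layer(image, patch=8, stride=4, n_features=16, seed=0):
    """One toy layer: local receptive fields -> 2x2 pooling -> divisive normalization."""
    rng = np.random.default_rng(seed)
    filters = rng.standard_normal((n_features, patch, patch))  # local receptive fields
    h, w = image.shape
    rows = range(0, h - patch + 1, stride)
    cols = range(0, w - patch + 1, stride)
    # Each unit sees only a small patch of the input (its receptive field).
    responses = np.array([[[np.sum(f * image[r:r + patch, c:c + patch]) for c in cols]
                           for r in rows] for f in filters])
    # Pooling: each output unit takes the max over a 2x2 block of neighbors (local invariance).
    n = responses.shape[1] // 2 * 2
    trimmed = responses[:, :n, :n]
    pooled = trimmed.reshape(n_features, n // 2, 2, n // 2, 2).max(axis=(2, 4))
    # Divisive normalization ("gain control"): scale by overall activity at each location.
    norm = np.sqrt(np.sum(pooled ** 2, axis=0, keepdims=True)) + 1e-8
    return pooled / norm

image = np.random.default_rng(1).standard_normal((128, 128))
layer1 = layer(image)               # shape (16, 15, 15): first-layer feature maps
layer2 = layer(layer1.sum(axis=0))  # stacking a second layer (summing maps is a simplification)
print(layer1.shape, layer2.shape)   # (16, 15, 15) (16, 1, 1)
```

Real stacked networks feed every feature map forward rather than summing them, but even this sketch shows how successive layers come to encode features of features.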


Considering the development of object recognition from the dual perspectives of biology and artificial intelligence lends insight into our evolutionary history as a species and our technological future. The ability to categorize abstract objects in the world or to distinguish among individual members of our species is central to our lives as humans, and was clearly critical to our ancestors' adaptation to the cognitive niche. Yet these capabilities appear limited to a few select branches of the evolutionary tree. Physiological evidence underscores the costs of these computations: in the macaque monkey (a close relative of ours), the brain circuits believed to carry out object recognition and related processes account for over 150 million neurons, spanning over 10% of an already metabolically expensive brain. But neither mathematical complexity nor metabolic constraints prevented natural selection from giving rise to the incredible power of our visual systems, and it remains an open question when our computational algorithms will attain the performance achieved by evolution. A slump in neural network research ensued after Marvin Minsky and Seymour Papert noted in their book Perceptrons that single-layer networks failed at fundamental nonlinear operations, but the field was reborn decades later when theoreticians began solving far more complex problems simply by stacking many network layers on top of one another. Training unsupervised networks to recognize images of cats on YouTube thus marks another humble yet pivotal step forward. These advances hint at the developments to come in both computational neuroscience and artificial intelligence, two emerging fields that can no longer proceed in isolation ■

References
• DiCarlo JJ, Zoccolan D, Rust NC (2012) How does the brain solve visual object recognition? Neuron 73:415-434.
• Hartline HK (1938) The response of single optic nerve fibers of the vertebrate eye to illumination of the retina. Am J Physiol 121:400-415.
• Hubel DH, Wiesel TN (1959) Receptive fields of single neurones in the cat's striate cortex. J Physiol 148:574-591.
• Hubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J Physiol 160:106-154.
• Kuffler SW (1953) Discharge patterns and functional organization of mammalian retina. J Neurophysiol 16:37-68.
• Le Q, Ranzato M, Monga R, Devin M, Chen K, Corrado G, Dean J, Ng A (2012) Building high-level features using large scale unsupervised learning. In: Proceedings of the 29th International Conference on Machine Learning. Edinburgh, Scotland, UK.

GUEST ARTICLE

Why yes, your child should learn chemistry Manosij Majumdar / Chemical Engineer An article in the Washington Post by David Bernstein asks why his son, fifteen, is being taught chemistry in high school when it is not mandated by the state and is not likely to lead to a career as a scientist, as his son shows little interest or faculty in chemistry.

The question is a valid one. The conclusions Mr. Bernstein drives himself to are less so. Mr. Bernstein is an executive at a non-profit organization and should be familiar with the folly of assaying the worth of an activity solely by its ability to earn a profit. My first defence of chemistry would also serve just as well as a defence of literature, mathematics, economics or even Mr. Bernstein's own subject, philosophy.



« I would have jettisoned some parts of my own syllabi quite gleefully, but in retrospect, I am better off for not having been able to do so. »

Each field of knowledge ultimately aims to explain and explore either humanity or the context in which human affairs occur. Chemistry is part of that. To not know chemistry is to not know the universe one inhabits. That is not a place any educated person should be comfortable with. While in university, I was frustrated and annoyed by students who insisted that subjects such as poetry, politics and philosophy alone constituted a true education, and what scientists and engineers were engaged in was mere training for a trade, a bourgeois affair pretending to be an intellectual pursuit.

May I now retort that an education that omits chemistry, of all things, may best be called inadequate if one is being generous, and not much of an education at all if one is being frank. The early philosophers spent their lives trying to unravel the workings of the natural world around them; we have the good fortune of being born in a time when we can know most of those answers from simply opening a book. Seen in the context of history, this is a staggering privilege.

Mr. Bernstein says that his son is unlikely to become a chemist or a chemical engineer and would be better served by learning oratory or music (delivered with an idiotic remark suggesting that those of us who were busy studying chemistry would not understand the economic concept of 'opportunity cost').

Let us assume that Mr. Bernstein is right, and his son will not engage with chemistry in a professional context. Would he, as a citizen and a consumer, still be well equipped for life without a working knowledge of chemistry?

Sooner or later he will face an issue where chemistry will come into play: chemical and radioactive contaminants, nutrition, toxins, climate change, water and air quality. All of these are issues which a voter or a buyer might need to grapple with at some point in their lives, and where more than a vague familiarity with chemistry would be helpful.

A scientifically-illiterate constituency leads to misconceptions that range from the amusing ("contains no chemicals" – so what is it made of, then?), to the frustrating (the insistence that 'natural' anything is better than 'artificial'), to the seriously consequential (public opinion about energy policy, climate change, genetically-modified foods, or even vaccination). And what if he were to find himself in a position of influence and as utterly lost as Yes, Minister's Jim Hacker?

This argument applies to any science one can think of. Would an electorate that did not panic and stampede at phrases like 'Frankenstein food' be better at recognizing the merits of genetically modified organisms? Would that same electorate recognize the differences between various designs and generations of atomic power plants instead of running scared at the very sound of the word 'nuclear'? Would it be less willing to accept pseudoscientific bases for justifying racism, sexism and homophobia?

I should imagine so.

« Difficulty alone does not prove a subject's unfitness for study; nor does being interesting earn it a place in the curriculum. »

Yes, chemistry is a challenging subject, in that it does not yield without some sincere effort. They all are. Physics, biology, geology, mathematics, computer science, you name it. However, Mr. Bernstein is amiss to think that this alone constitutes some sort of justification as to why his son should not be required to educate himself about the way in which matter interacts in the universe around him. Difficulty alone does not prove a subject's unfitness for study; nor does being interesting earn it a place in the curriculum.

As someone whose own high school experience isn't too distant, I put it to him that adolescents are not always the best judge of what a complete education is. I would have jettisoned some parts of my own syllabi quite gleefully, but in retrospect, I am better off for not having been able to do so.

In arguing otherwise, that teenagers ought to be allowed to self-specialize at an age when they should be acquiring a holistic view of the world and all that's in it, Mr. Bernstein comes off as little more than a parent disgruntled at his own somewhat pitiable inability to help his son with grade school homework without the aid of a tutor, and wishing the world to mould to his little snowflake's needs and allow him to pick easy, immediate pickings rather than challenge him to push his limits and strive for something difficult yet richly rewarding ■

References • Why are you forcing my son to take chemistry?, Washington Post, October 16, 2012. http://www.washingtonpost.com/blogs/answer-sheet/wp/2012/10/16/why-are-you-forcing-my-son-to-take-chemistry/



GUEST ARTICLE

Making The Universe A Better Place Seth Baum / Executive Director, Global Catastrophic Risk Institute (gcrinstitute.org) I think it’s important to dream big, to be ambitious, to want to make the world a better place. The question is how we can best go about doing this. To answer this, we need both science and ethics. In the process, we get to see an amazing journey from our own lives to the very end of the universe – and a major plot twist.

This story's personal for me. As an engineering student in college and grad school, I often wondered about which technologies I should be trying to design. My training left me good at design, but not at making these decisions about what to design. So I started looking elsewhere, and found ethics, which is the study of what is good/bad, right/wrong, and what we should/shouldn't do. It's quite a different field of study from science and engineering, but critically important if we are to make the right decisions about our lives, our work, and the world that we live in.

« The Wright brothers' first plane flight was just 110 years ago. Since then we've visited the Moon and most recently started landing rovers on Mars. If we can accomplish all this in a century, just imagine how much we can accomplish in the billions of years we have left on Earth. »

With ethics, we can answer questions like 'What does it mean to make the world a better place?' There are many ways of answering this question, corresponding with different views about ethics. My own view (which is a fairly common view) is that we make the world a better place by improving quality of life for people around the world, as well as for sentient nonhuman animals. (Don't kick puppies!) In formal terms, the goal is to maximize total quality of life for everyone out there. Those of you with some calculus can imagine maximizing quality of life integrated across space and time.

Here's where the science comes in. If we care about everyone across space and time, then where and when can people live? Well, for starters, in a few billion years, the Sun will become too large and too hot for life on Earth to continue. But maybe our civilization can colonize space. If it can, it could probably survive for much, much longer. The physics here is not well understood. Maybe life would end when the stars stop shining, or as protons decay. We probably can't live without protons! Either way, it's clear that colonizing space creates enormous opportunities for our civilization, and for the quality of life it can enjoy.

To summarize, we may now be at the beginning of a grand journey across the galaxy – and maybe even beyond. Our lives can contribute to something special, something good on literally astronomical scales. It all depends on whether space colonization is possible. Fortunately, we are already making great strides.

The Wright brothers' first plane flight was just 110 years ago. Since then we've visited the Moon and most recently started landing rovers on Mars. If we can accomplish all this in a century, just imagine how much we can accomplish in the billions of years we have left on Earth. Likewise, if space colonization is possible, then I'd really like to think we'll figure out how.

Now here's the plot twist. A few billion years should be plenty of time to colonize space – as long as nothing really bad happens first. Should some major global catastrophe come along and knock our civilization out, then we'll never have the chance to realize our full potential across the universe. We can't colonize space if we no longer exist. That would, to put it in basic ethics terms, be really, really bad. Here it is in terms of the quality of life integral mentioned above:
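The figure that followed this sentence in the print layout did not survive extraction. As a stand-in, here is one plausible way to write that quality-of-life integral; the symbols (Q for quality of life, x for location, t for time, T for how long civilization lasts) are my notation, not the author's:

```latex
% Total value W: quality of life Q, integrated over inhabited space and over time,
% from now (t = 0) until civilization ends at time T.
\[
  W = \int_{0}^{T} \int_{\text{space}} Q(x, t)\, \mathrm{d}x\, \mathrm{d}t
\]
% A global catastrophe cuts T from billions of years down to something tiny,
% which is why, on this view, reducing catastrophic risk matters so much.
```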

Unfortunately, humanity today does face several global threats with the potential to knock us out. You may have heard of some of them: climate change, nuclear war, pandemics, even disruptive new technologies like nanotechnology and artificial intelligence. These threats all have the potential to end global civilization. And they could do it within our lifetime, or soon after. These are the impediments we face to achieving astronomically great things.

Because of these threats, our journey into the universe is facing some great turbulence. The turbulence may even prove fatal. And so, as important as it is, space colonization can wait. We still have a few billion years left for that. Unless we survive these imminent threats, we'll never have the chance to try. Our role as people alive today is to confront these threats, so that future eras can go on to colonize space.

And with that, we now can begin answering with some specificity just how we can best go about making the world – and indeed the universe – a better place. The question becomes how we can most effectively help avoid civilization-ending global catastrophe. Answering this question requires the best science we can muster to understand the threats, the best engineering to design solutions, plus whatever else is needed to make it all happen. It is a great challenge.

I believe that all fields of study have important contributions to make to this challenge of preventing global catastrophe. For my part, I actually switched from engineering to a PhD program in geography to be more able to synthesize contributions from across different fields. Stepping away from engineering was difficult for me, but once I adjusted things went well. But while having interdisciplinary abilities is very helpful, you probably don't need such a dramatic shift. I invite you to consider how your own training and abilities best fit in. For example, if you're in microbiology, you could study infectious diseases. If you're in computer science, you could study artificial intelligence. These are some simple possibilities. If you take a closer look at the various global threats, you'll find much more. Some good references are below; many more can be found in the bibliography of my organization, the Global Catastrophic Risk Institute: gcrinstitute.org/bibliography. (Please feel free to get in touch with us if you'd like to learn more; my email is seth@gcrinstitute.org.)

These great threats demand great response. No less than the fate of the universe is at stake. Are you up for it? ■

Bibliography
• Bostrom, Nick and Milan Ćirković, 2008. Global Catastrophic Risks. Oxford: Oxford University Press. http://www.global-catastrophic-risks.com/book.html
• Rees, Martin, 2003. Our Final Century: Will the Human Race Survive the Twenty-first Century? Oxford: William Heinemann.


Cold Spring Harbor Laboratory 2013 Meetings & Courses

Lunch on Blackford lawn

2013 Fall Meetings

Wiring the Brain July 18 - 22 Catalina Betancur, Ed Bullmore, Z. Josh Huang, Helen Mayberg, Kevin Mitchell

Metabolic Signaling & Disease: From Cell to Organism August 13 - 17 Daniel Kelly, Mitchell Lazar, Susanne Mandrup

Eukaryotic mRNA Processing August 20 - 24 Tom Blumenthal, Kristen Lynch, Karla Neugebauer

Mechanisms of Eukaryotic Transcription August 27 - 31 Stephen Buratowski, Katherine Jones, John Lis

Behavior & Neurogenetics of Nonhuman Primates September 6 - 9 Jeffrey Rogers, Nelson Freimer

Eukaryotic DNA Replication & Genome Maintenance September 9 - 13 Anindya Dutta, Joachim Li, Johannes Walter

Microbial Pathogenesis & Host Response September 17 - 21 Andrew Camilli, Lalita Ramakrishnan, Malcolm Whiteway

Stem Cell Biology September 24 - 28 Konrad Hochedlinger, Fiona Watt, Ting Xie

Neurobiology of Drosophila October 1 - 5 Thomas Clandinin, Linda Restifo

Cell Death October 8 - 12 Douglas Green, Sally Kornbluth, Scott Lowe

Genome Informatics October 30 - November 2 Jennifer Harrow, Michael Schatz, James Taylor

Cell Biology of Yeasts November 5 - 9 Martha Cyert, Daniel Lew, Kenneth Sawin

Precision Medicine: Personal Genomes & Pharmacogenomics November 13 - 16 Ann Daly, Nicholas Katsanis, Deanna Kroetz, Jim Lupski

Harnessing Immunity to Prevent & Treat Disease November 20 - 23 Susan Kaech, Robert Seder, Susan Swain

Plant Genomes & Biotechnology December 4 - 7 Mary Lou Guerinot, Todd Mockler, Detlef Weigel

Rat Genomics & Models December 11 - 14 Edwin Cuppen, Aron Geurts, Michael Gould, Bina Joe

2013 Fall Courses

Programming for Biology October 14 - 29 Simon Prochnik

X-Ray Methods in Structural Biology October 14 - 29 William Furey, Gary Gilliland, Alexander McPherson, James Pflugrath

Antibody Engineering & Phage Display November 6 - 19 Carlos Barbas, Don Siegel, Gregg Silverman

Computational & Comparative Genomics November 6 - 12 William Pearson, Lisa Stubbs

Advanced Sequencing Technologies & Applications November 12 - 24 Elaine Mardis, Gabor Marth, W. Richard McCombie, Aaron Quinlan, Michael Zody

The Genome Access Course July 18 - 20, November 17 - 19 Ben King, Jeremy Ward, Charla Lambert

www.cshl.edu/meetings

