20W Print Journal


DUJS

Dartmouth Undergraduate Journal of Science

WINTER 2020 | VOL. XXII | NO. 1

VISION

LOOKING TO THE FUTURE

Green Architecture: Drawing on Biomimicry, p. 13

Hypnosis: Myth or Medicine, p. 56

Origins of Electron Crystallography, p. 80


DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE



Note from the Editorial Board

Dear Readers,

Science is a field of the future. It hinges on envisioning a better world and having the dedication and courage to pursue the unknown. Vision is a scientist’s greatest weapon: it underlies the ability to hypothesize, innovate, and imagine new solutions to seemingly insurmountable problems. Putting a man on the moon, editing genes, and building the internet were all once impossible visions of a lofty future; they are now landmark moments of the past.

The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community and beyond by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

This print issue of the Dartmouth Undergraduate Journal of Science represents our writers’ and editors’ “vision” of how we can contribute to the continuously improving and changing scientific landscape. As part of the next generation of scientists, fifteen writers discuss the latest developments in a variety of fields, including but not limited to immunology, technology, chemistry, and environmental studies. These articles represent the wide range of interests within our community, mirroring how research today is not narrowly focused but rather often interdisciplinary. While Audrey Herrald scrutinizes the health impacts of 5G wireless technology, Dev Kapadia questions the current state of bioprinting. Additionally, Anahita Kodali evaluates the role of tertiary lymphoid organs in chronically rejected transplants, and Sahaj Shah examines artificial neural network approaches to echocardiography. In biological research, Allan Rubio explores microfluidics and soft-lithography, Bryn Williams investigates the “Anti-CRISPR” protein as new biological insurance, and Dina Rabadi describes the VISTA immune checkpoint. Meanwhile, Teddy Press focuses on the methods and thermodynamic applications of isothermal titration calorimetry, and Kamren Khan analyzes perception and cognition through hypnotic analgesia and its clinical applications. Furthermore, our senior staff looks to both past and future as Sam Neff dives into the history of medical education and current reform in “Making the Medical Profession” and Anna Brinks considers green architecture and how architects can use biomimicry to be more sustainable. Senior staff members Liam Locke and Nishi Jain probe the genetic susceptibility and neural mechanisms of alcoholism and the origins of electron crystallography and the merits of data merging techniques, respectively.
We would like to thank our writers, editors, faculty advisors, and readers for supporting this issue of the Dartmouth Undergraduate Journal of Science. Vision is the first step in creating change, but it is not the last. Every vision is backed by its history: the past failures and successes of the generations of scientists who preceded us. In scientific discovery, it is crucial to use these past pursuits as building blocks while working with peers in all fields. We can only achieve our visionary goals through collaboration, which depends fundamentally on scientific communication—the goal of this Journal. We hope that these articles may inspire and inform your own visions and hopes for the future.

Warmly,

Anna Brinks '21
Megan Zhou '21

DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs.dartmouth.science@gmail.com
Copyright © 2020 The Trustees of Dartmouth College

EXECUTIVE BOARD
President: Sam Neff '21
Editor-in-Chief: Nishi Jain '21
Chief Copy Editors: Anna Brinks '21, Liam Locke '21, Megan Zhou '21

EDITORIAL BOARD
Managing Editors: Anahita Kodali '23, Dev Kapadia '23, Dina Rabadi '22, Kristal Wong '22, Maddie Brown '22
Assistant Editors: Aditi Gupta '23, Alex Gavitt '23, Daniel Cho '22, Eric Youth '23, Sophia Koval '21

STAFF WRITERS
Allan Rubio '23, Anahita Kodali '23, Anna Brinks '21, Audrey Herrald '23, Bryn Williams '23, Dev Kapadia '23, Dina Rabadi '22, Georgia Dawahare '23, Kamren Khan '23, Liam Locke '21, Melanie Prakash '21, Nishi Jain '21, Sahaj Shah '21, Sam Neff '21, Teddy Press '23

SPECIAL THANKS
Dean of Faculty, Associate Dean of Sciences, Thayer School of Engineering, Office of the Provost, Office of the President, Undergraduate Admissions, R.C. Brayshaw & Company


Table of Contents

Smaller Than Ever: Microfluidics and Soft-Lithography in Biological Research. Allan Rubio '23, pg. 3


The Role of Tertiary Lymphoid Organs (TLOs) in Chronically Rejected Transplants. Anahita Kodali '23, pg. 8

Green Architecture: Drawing on Biomimicry. Anna Brinks '21, pg. 13

The Health-Related Implications of Fifth Generation (5G) Wireless Technology: Science and Policy. Audrey Herrald '23, pg. 19

The "Anti-CRISPR" Protein as New Biological Insurance Bryn Williams '23, pg. 34 The Current State of Bioprinting Dev Kapadia '23, pg. 38


VISTA: A Molecule of Conundrums. Dina Rabadi '22, pg. 44

Depression: The Journey to Happiness. Georgia Dawahare '23, pg. 50


Hypnosis: Myth or Medicine? Kamren Khan '23, pg. 56

Alcoholism: Genetic Susceptibility and Neural Mechanisms. Liam Locke '21, pg. 60

A Branch of Precision Medicine and a Glimpse of the Future: Gene Therapy. Melanie Prakash '21, pg. 72


The Origins of Electron Crystallography and the Merits of Data Merging Techniques. Nishi Jain '21, pg. 80

Artificial Neural Network Approaches to Echocardiography. Sahaj Shah '21, pg. 86

Making the Modern Medical Profession: the 19th Century Standardization of Medical Education. Sam Neff '21, pg. 92


Not Your Average Coffee Cup: Methods and Thermodynamic Applications of Isothermal Titration Calorimetry. Teddy Press '23, pg. 96


Smaller Than Ever: Microfluidics and Soft-Lithography in Biological Research

BY ALLAN RUBIO '23

Introduction

One of the most distinct hallmarks of biology is its small scale. The cell, the central subject in most areas of this study, is microscopic: about a fifth the size of a regular grain of salt, and typically <100 µm in diameter within humans. The macromolecules that compose its membrane and organelles are orders of magnitude smaller still. While many research and laboratory procedures developed in the past century allow us to probe cells and the living organisms they compose at a level much deeper than before, very few have the capability to interact with such systems at the scale at which they are naturally found. Microfluidics, a relatively new and innovative technology, has set itself apart from other methods of research because it allows researchers to manipulate fluids and other similar matter at the micro-/nano- scale. At minuscule volumes, ranging down to 10⁻¹⁸ liters, researchers are able to utilize small fluidic channels (often <100 µm) to analyze solutions and reactions at much smaller yet still controllable quantities9. In other fields, microfluidics has been used to study chemical detection, chemical reactions, and microelectronics, leading to a rise in popularity and an introduction into other fields, namely biology9. Because of this manipulability, major milestones have been reached in accurately studying proteins, nucleotides, and other biological compounds at the scale of a cell. Practically, working at such microscopic sizes means that fewer reagents are used and that more samples can be tested independently. This is highly beneficial in medicine, where the precise detection of costly samples and reagents is ever-important7. Genomic sequencing, “pollution monitoring, clinical diagnostics, drug discovery and biohazard detection” have also all benefitted greatly from microfluidics, and there are still many more ideas currently being tested1.

Cover Photo: Microfluidic Palette (Source: Wikimedia Commons, Photo Credit: G. Cooksey, National Institute of Standards and Technology)

Figure 1: Cleanroom – photolithography lab (Source: Wikimedia Commons, Photo credit: O. Usher, UCL Mathematical and Physical Sciences from London, UK)
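To make the scale concrete, here is a rough back-of-the-envelope calculation. The channel dimensions are hypothetical illustrative values, not taken from any specific device discussed in this article:

```python
# Illustrative arithmetic: how little reagent a microfluidic channel holds.
# The channel dimensions below are hypothetical examples for a sense of scale.

MICRON = 1e-6  # meters

def channel_volume_liters(width_m, height_m, length_m):
    """Volume of a rectangular microchannel in liters (1 m^3 = 1000 L)."""
    return width_m * height_m * length_m * 1000

# A 50 µm x 50 µm channel, 1 cm long:
v = channel_volume_liters(50 * MICRON, 50 * MICRON, 0.01)
print(f"channel volume: {v:.1e} L")  # 2.5e-08 L, i.e. about 25 nanoliters

# Compared with a conventional ~100 µL microplate well:
print(f"reagent reduction: {100e-6 / v:,.0f}x")
```

Even a channel at the coarse end of the micro-scale holds thousands of times less fluid than a standard well, which is where the reagent savings described above come from.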


Figure 2: PDMS layer on patterned silicon (Source: Wikimedia commons)

A few decades ago, it would have been extremely difficult for researchers to access this technology and use it for their own research due to the specialized machinery and technical skill required to fabricate microfluidic devices7. Nowadays, however, the addition of a cleanroom (Figure 1), a facility with very few airborne particulates, to many laboratories has allowed for the development of techniques that make microfluidic device production much more accessible, one of the most prominent being soft-lithography. In the late 90s, scientists discovered how to utilize various elastomeric polymers (flexible materials, hence the name ‘soft’ lithography) with the same fabrication techniques used in larger-scale industry to create incredibly intricate designs on stamps and molds7. Soft-lithography grew out of photolithography, a popular form of microfabrication predominantly used with silicon and glass to produce detailed microelectronics and microelectromechanical systems8. When it was realized that such materials were unnecessary for, or even incompatible with, biological research, researchers turned to biomaterials, materials that can interface with the biological environment, and the advent of soft-lithography followed soon after6.

Currently, due to its low cost and favorable physical properties, soft-lithography is the most popular method of fabricating microfluidic devices. Products of soft-lithography are also versatile, allowing researchers not only to manipulate the size but also to add electrodes, magnetic fields, and other factors that can simulate both natural and synthetic environments within the microfluidic channels7. A key factor that makes soft-lithography ideal for biological research is poly(dimethyl siloxane) (PDMS), a transparent, flexible, and hydrophobic material that is able to attach itself to many surfaces5. It is also commercially available and fairly affordable at around $80/kg, making it an all-around great tool for microfluidics production5. Further, because it does not promote microbial growth and allows oxygen and CO2 to diffuse through it, PDMS is also attractive for biomedical and medical applications3. Through these many factors, soft-lithography offers researchers great freedom to efficiently execute a variety of methods and procedures at low cost.
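Since the text quotes roughly $80/kg for PDMS, the material cost per device can be sketched with simple arithmetic; the chip mass below is a hypothetical example, not a figure from the article:

```python
# Back-of-the-envelope PDMS material cost per chip, using the ~$80/kg
# figure quoted in the text. The chip mass is a hypothetical example.

PDMS_USD_PER_KG = 80.0

def chip_cost_usd(chip_mass_g):
    """Material cost (USD) of a PDMS chip of the given mass in grams."""
    return PDMS_USD_PER_KG * chip_mass_g / 1000.0

print(f"${chip_cost_usd(5):.2f} per 5 g chip")  # $0.40 per 5 g chip
```

At a few tens of cents of elastomer per device, the "low cost" claim is easy to see.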

Protocol for Soft-Lithography

Performing soft-lithography is a fairly simple procedure, yet it requires extensive training: the procedure itself has only a few steps, but it demands knowledge of the machinery and chemicals involved. In some ways it is similar to printing, because a design is first selected and then embedded onto the surface of choice. In other ways, it resembles metal casting due to the use of molds in the procedure. There are many protocols for performing soft-lithography, each with its own unique tweaks. However, according to Harvard microfabrication and nanotechnology pioneer George M. Whitesides, most procedures follow four major steps: pattern design, mask fabrication, PDMS stamp fabrication, and fabrication of the nano-/micro- structures5. As with any form of fabrication, design comes first. Typically, researchers use design software such as Adobe FreeHand, Illustrator, or AutoCAD to create an intricate pattern and specify dimensions to use in the development of a mask4. This mask will be printed on a clear, transparent material which will form the pattern later. This can be done via an in-house printer with the right capabilities and resolution, but the design can also be sent out to a commercial service with better equipment. To form the PDMS stamp, a master, or template (usually on a circular silicon wafer), must first be formed through photolithography techniques commonly done in a cleanroom5.




Another vital component for this step is the photoresist, a light-sensitive material that can form the shape of the specified pattern. To apply a photoresist to the silicon wafer, researchers use spin coating (Figure 3) to evenly distribute the liquid photoresist across the substrate to a desired thickness5. This step is crucial in allowing the fabricated device to be thinned down to the micrometer scale. After the photoresist has been deposited, the mask is used in the presence of UV light to create the pattern on the light-sensitive photoresist4. Depending on the type of photoresist utilized, exposed or non-exposed areas become chemically modified, allowing for their removal in the following stage, development (Figure 4). At this stage, a pattern appears on the silicon wafer; the soluble regions are then washed off with various chemicals depending on the photoresist, leaving the wafer visibly embossed (Figure 4). A microscope can then be used to ensure the quality and resolution of the lithography before it is used to create the PDMS stamp4. The silicon wafer is then used as a mold for the flexible PDMS to conform to. Curing agent is mixed with elastomer, poured on top of the master in a petri dish, and placed into a desiccator for degassing4. After some time in the vacuum, the PDMS will have degassed, and after curing in the oven it hardens in the shape of the master and can be peeled off gently (Figure 5). It can be placed on a glass slide or any other desired substrate. Finally, on the side with the channels exposed, fluid inlets/outlets can be punched out to allow for connection to the outside world (Figure 6)4.
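The spin-coating step can be made quantitative: empirically, final resist thickness falls off roughly as the inverse square root of spin speed. A minimal sketch, assuming a made-up calibration constant k (real values come from a photoresist's datasheet, not from this article):

```python
# Empirical spin-coating relation: thickness h ~ k / sqrt(spin speed).
# k is a hypothetical calibration constant; real photoresists publish
# measured spin curves in their datasheets.

import math

def film_thickness_um(spin_speed_rpm, k=110.0):
    """Approximate resist thickness in micrometers for a spin speed in rpm."""
    return k / math.sqrt(spin_speed_rpm)

for rpm in (1000, 2000, 4000):
    print(f"{rpm} rpm -> {film_thickness_um(rpm):.1f} um")
```

Spinning four times faster halves the thickness (since sqrt(4) = 2), which is why spin speed is the main knob for hitting a target micrometer-scale layer.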

Modern Innovations in Soft-Lithography

In the past couple of decades, soft-lithography and microfluidics have gone from nascent technologies to ones with many new and innovative developments. New assays and imaging techniques allow a myriad of discoveries to be made and devices to be created with micro-scale channels, significantly impacting research and the medical field.

Figure 3: Photoresist during spin coating on silicon wafer (Source: Wikimedia Commons)

Figure 4: Photoresist scheme depicting the two types of photoresists and their response to UV light exposure. Positive photoresists dissolve after exposure and negative photoresists cure (harden) after exposure. (Source: Wikimedia Commons)

Figure 5: Creating the PDMS device from the master. PDMS is placed on top of the cured photoresist after exposure to UV light. It molds to the shape of the design, creating the PDMS microfluidic device. (Source: Wikimedia Commons)

Figure 6: Microfluidic devices formed from PDMS and glass, including micrographs of one of the channels. The drilled inlets and outlets are visible on the PDMS. (Source: Wikimedia Commons, Photo credit: Richard Wheeler)

Detection methods using microfluidic technology, for one, have been improving sensitivity to a point that far surpasses conventional fluorescence. By using magnetic nanoparticles conjugated with the protein of interest, Wissberg et al. were able to use a scanning superconducting quantum interference device (SQUID) microscope to detect protein-protein interactions (PPIs) with much higher sensitivity than fluorescent antibody detection. The latter technique, fluorescent labeling, is currently the most popular method in biophysical screening platforms but is fairly weak in sensitivity for very small and limited sample sizes. These minute interactions are no less important to researchers, however, “motivat[ing] the development of new sensitive detection methods that are compatible with high-throughput screening”10. While techniques have been developed in the past where magnetic immunoassays were used for screening and sensing, the sensitivity has never been sufficient to detect a single nanoparticle. By combining the highly sensitive magnetic SQUID sensor with the microfluidic device, there was a significant improvement in performance that allowed researchers to read measurements at up to 1:10⁶ levels of dilution10. This improved scanning method could be extrapolated to a new microscope altogether and could be a key advance for discovering weaker PPIs in biological systems everywhere.

In other areas of research, the flexible nature of PDMS allows researchers and engineers to develop devices that perform more efficiently and in more practical settings for medical purposes. In most areas of medicine, analysis is limited to laboratory environments with sophisticated devices and existing technologies that are too large to be portable2. This poses challenges to doctors and researchers who need to collect samples outside of such a setting or run tests in a limited time frame. By combining microfluidics with color-responsive materials and a near-field communication (NFC) magnetic loop antenna, Dr. Koh and his team in the biomedical engineering department at Binghamton University developed a wearable microfluidic device that can collect sweat, analyze fluid samples, and transfer information to one’s smartphone. With a 3 cm diameter and 700 µm thickness, the device is incredibly small and light, barely felt by test subjects even during strenuous physical movement2.

With the device patched onto a limb, analysis begins when sweat starts to form on the skin. Sweat is captured through openings on the bottom of the device (which is attached to the skin), where it flows into the microfluidic channels and into four reservoirs, each lined with a unique colorimetric chemical assay2. While the larger channel forming a serpentine circle around the device simply measures the total amount of sweat lost, the reservoirs in the middle allow for “simple, rapid quantitative assessment of the … pH, as well as the concentration of chloride, lactate, and glucose in the sweat”2. Finally, to visualize the data conveniently, a smartphone with a camera and analysis software can receive information over NFC and report exact concentrations based on color2. Tests determined that the device demonstrated excellent accuracy in assaying the sweat. This presents a new path for science to extend beyond the conventional laboratory, performing experiments more quickly and precisely. While the device has not been used with any other biological fluid, it opens up new possibilities for others, namely blood, to be collected without having to rely on hospital settings. The scale of microfluidics, as aforementioned, can range all the way down to the cell. The most basic unit of life, as much as we know about it, is usually studied in populations rather than individually. As such, areas such as cell-to-cell interactions and precision medicine are usually confounded by the presence of large cultures11. Although soft-lithography wasn’t utilized to create this microfluidic device, researchers at the University of Macau manipulated microdroplets using digital microfluidics (combining microfluidic channels with electrodes) to separate individual cells into wells11. By lowering the voltage running through the device with oil, they moved a droplet containing cells through the device and separated the cells into microscopic wells in the patterned array11.
With a lowered voltage, the cells were able to remain viable as drug tests were carried out further along in the study. With this advancement, single-cell research would be more feasible, shedding light on the effects of isolating different types of cells and the effects of various drugs.

Discussion

Microfluidic technologies have opened up many new paths for the field of biology, allowing researchers to reach smaller scales with more control. Soft-lithography in combination with



PDMS has also made microfabrication easier and more accessible while offering useful properties such as flexibility and portability. New methods of assaying and collecting data will drive the invention of devices that challenge and improve all kinds of existing technologies. Assays and arrays could be performed in the field and data collected immediately, giving researchers important and valuable information for both biology and medicine. The world of the future might not have to rely on bulky equipment permanently stationed in laboratories. Instead, the focus will be on small, cheap devices that require only tiny amounts of sample to give back the same essential information.

References

[1] Holmes, D., & Gawad, S. (2010). The Application of Microfluidics in Biology. In M. P. Hughes & K. F. Hoettges (Eds.), Microengineering in Biotechnology (pp. 55–80). Humana Press. https://doi.org/10.1007/978-1-60327-106-6_2
[2] Koh, A., Kang, D., Xue, Y., Lee, S., Pielak, R. M., Kim, J., Hwang, T., Min, S., Banks, A., Bastien, P., Manco, M. C., Wang, L., Ammann, K. R., Jang, K.-I., Won, P., Han, S., Ghaffari, R., Paik, U., Slepian, M. J., … Rogers, J. A. (2016). A soft, wearable microfluidic device for the capture, storage, and colorimetric sensing of sweat. Science Translational Medicine, 8(366), 366ra165. https://doi.org/10.1126/scitranslmed.aaf2593
[3] Kumbar, S. G., Laurencin, C. T., & Deng, M. (2014). 18.2.8 Poly(sulfone). In Natural and Synthetic Biomedical Polymers (p. 304). Elsevier. https://app.knovel.com/hotlink/pdf/id:kt00U9G331/natural-synthetic-biomedical/poly-sulfone
[4] Mazutis, L., Gilbert, J., Ung, W. L., Weitz, D. A., Griffiths, A. D., & Heyman, J. A. (2013). Single-cell analysis and sorting using droplet-based microfluidics. Nature Protocols, 8(5), 870–891. https://doi.org/10.1038/nprot.2013.046
[5] Qin, D., Xia, Y., & Whitesides, G. M. (2010). Soft lithography for micro- and nanoscale patterning. Nature Protocols, 5(3), 491–502. https://doi.org/10.1038/nprot.2009.234
[6] Tran, K. T. M., & Nguyen, T. D. (2017). Lithography-based methods to manufacture biomaterials at small scales. Journal of Science: Advanced Materials and Devices, 2(1), 1–14. https://doi.org/10.1016/j.jsamd.2016.12.001
[7] Velve-Casquillas, G., Le Berre, M., Piel, M., & Tran, P. T. (2010). Microfluidic tools for cell biological research. Nano Today, 5(1), 28–47. https://doi.org/10.1016/j.nantod.2009.12.001
[8] Whitesides, G. M. (2006). The origins and the future of microfluidics. Nature, 442(7101), 368–373. https://doi.org/10.1038/nature05058
[9] Whitesides, G. M., Ostuni, E., Takayama, S., Jiang, X., & Ingber, D. E. (2001). Soft Lithography in Biology and Biochemistry. Annual Review of Biomedical Engineering, 3(1), 335–373. https://doi.org/10.1146/annurev.bioeng.3.1.335
[10] Wissberg, S., Ronen, M., Oren, Z., Gerber, D., & Kalisky, B. (2020). Sensitive Readout for Microfluidic High-Throughput Applications using Scanning SQUID Microscopy. Scientific Reports, 10(1), 1–8. https://doi.org/10.1038/s41598-020-58307-w
[11] Zhai, J., Li, H., Wong, A. H.-H., Dong, C., Yi, S., Jia, Y., Mak, P.-I., Deng, C.-X., & Martins, R. P. (2020). A digital microfluidic system with 3D microstructures for single-cell culture. Microsystems & Nanoengineering, 6(1), 1–10. https://doi.org/10.1038/s41378-019-0109-7



The Role of Tertiary Lymphoid Organs (TLOs) in Chronically Rejected Transplants

BY ANAHITA KODALI '23

Cover Image: SLO anatomy has been studied in depth. The cover image shows a mouse SLO imaged using multiphoton microscopy. B-cells (shown in red) are found in the B-cell follicle of the SLO. Myeloid cells (shown in green) surround the B-cells. Collagen fibres (shown in blue) are found in the lymph conduits. Capillary networks (shown in pink) contain blood, which is sent directly into the B-cell follicle. These images can be compared to multiphoton images of TLOs; by making this comparison, researchers were able to confirm the presence of several different anatomical structures in TLOs and therefore confirm the similarity of TLOs to SLOs. (Source: Wikimedia Commons)


A Brief History of Modern Transplantation

Transplantation is the medical process of taking a section of tissue or an organ from its natural site and transferring it to a different site on the same person or placing it in the body of a different individual.1 The process dates back hundreds of years; early accounts from the 6th century BCE say that Hindu surgeons were able to develop techniques for taking flaps of skin from a patient’s arm and reconstructing noses.1, 2 The first major technical advancement in transplant procedure came in 1869, when Jacques-Louis Reverdin, a Swiss surgeon, found that taking extremely thin grafts and placing them over burns, open wounds, and ulcers would heal them.2 During the early 1900s, more progress was made in understanding why some transplants failed and some succeeded. In 1903, Danish scientist Carl Jensen found that transplant failure was caused by immune reactions. In 1912, Georg Schöne (often credited as the first transplant immunologist) studied allografts, transplants where the donor is of the same species but does not have the same genetic makeup as the recipient. Schöne found that these differences on a genomic level are what caused allografts to fail so often.2, 3 By the end of the 1920s, other tenets of transplant immunology had been established. James Murphy of the Rockefeller Institute determined that the lymphatic system, which produces immune cells, played a key role in resistance to allografts. And, while he could not prove it at the time, he was convinced that lymphoid cells infiltrating allografts were the key cells responsible for graft failure.2, 4 His belief was later proven correct. In addition to advancing scholarly understanding of graft failure, the early 1900s also saw the first documented successful organ transplants. In 1902, Emerich Ullmann, an Austrian surgeon, successfully performed a dog autotransplant (a transplant in which an organ from the patient is taken and placed somewhere else in the same body) and a dog-to-goat xenograft (a transplant in which tissue is taken from a different species).2, 5, 6 Furthermore, extensive animal transplantation studies done by Alexis Carrel, a French biologist and surgeon, found that autografts (grafts taken from a patient’s own body) could be consistently successful.2, 7 Over the next half of the 20th century, several more advances were made, and surgeons tried desperately to successfully transplant human organs. Finally, in 1954, Joseph Murray, an


American plastic surgeon, performed the first successful human kidney transplant.2 His work marked a transition into the modern era of transplantation. Key distinctions between the three types of transplant rejection were then defined. The first kind of rejection is hyperacute rejection, occurring minutes to several hours after the procedure is performed. It is primarily caused by a patient’s innate alloimmunity, which is the body’s natural response to non-self cells.8, 9 The second kind is acute rejection, the most common form of transplant rejection, which takes place anywhere from days to three months after transplant. It is primarily driven by inflammation and damage to the organ caused by the patient’s innate immune system fighting the organ (see Figure 1). T and B-cells play a key role in acute rejection.8 The third kind of rejection is chronic rejection, which is categorized as any rejection that happens more than three months after the operation and is usually caused by ischemia (restriction of blood flow to organs) and fibrosis.8 Historically, hyperacute and acute rejection were the significant barriers to graft survival, as most transplants did not succeed in the long term. However, in the modern day, advancements have made both types of rejection less problematic. Due to the advancement of both surgical techniques and immunosuppressive conditions during surgery, hyperacute rejection is an uncommon occurrence.10 While acute rejection is still the most common form of rejection, it is relatively well-understood, and all current immunosuppressive medications target it.11 Unfortunately, despite these advances, immunosuppressants cannot ensure infinite graft survival. The vast majority of transplants

that make it past the three-month mark are chronically rejected; very little is understood about the chronic rejection process, and no current drugs are available to treat it.11 Thus, understanding the immune system’s role in chronic rejection of transplanted organs is of critical importance to the search for treatment options that increase the longevity of transplants.

Function and Structure of the Natural Immune System

One of the main roles of the immune system is to distinguish self from non-self, i.e. the body’s own cells from foreign invaders. Once this is done, the immune system can defend the body against pathogens, microbes, and other external threats. This defense has two phases of attack. The first is the innate response, which is the immediate response to microbial exposure and is the body’s first line of defense; it is relatively broad and unspecific.12 It involves physical barriers like skin and cell membranes; signaling molecules, such as cytokines, that are released when immune cells are activated against invaders; and certain proteins that bind to invaders’ cell surfaces and mark them for attack.13 The second phase is the adaptive response, which is acquired through exposure to different foreign invaders; this response is slower than the innate response, but it is more targeted to the specific invader that the immune system has identified.12 The responses are mainly mediated by T-cells and B-cells, as they are the cells that can recognize and subsequently respond to the antigens found on the foreign cells the immune system is trying to remove.13, 14 T-cells especially are key in the self versus non-self recognition process. Before T-cells can mark a cell for immune attack, they find structures on the targeted cell to confirm that it is indeed non-self.14 B-cells also play an important, but more accessory, role; when they encounter foreign invaders, they act as antigen-presenting cells for T-cells, which in turn allows T-cells to more quickly find and kill the invader (see Figure 2).15 T-cells and B-cells are not produced with the ability to immediately target one specific antigen. Rather, the immune system has developed two “parts” of immune cell production. The first is the production of naive cells, which are T-cells and B-cells that have not yet been primed to find and attack one specific antigen; priming is done in the primary lymphoid organs, which include the thymus,



Figure 1: H&E stained grafts of rejected organs help scientists understand the role of immune cells in acute and chronic rejection. The figure shows neutrophil infiltration into an acutely rejected renal graft (neutrophils shown in pink). By identifying the presence of neutrophils, as well as a host of other immune cells, researchers were able to determine markers of rejection caused by the body’s immune response, which helped them when they were first trying to understand why organs were rejected. (Source: Wikimedia Commons)



Figure 2: An excellent everyday example of the interplay of the innate and the adaptive immune system is the nose. In the nose, the body’s innate immune response is constantly fighting pathogens – skin, hair, and mucus block several foreign invaders, and cells like mast cells, NK cells, and phagocytes constantly monitor for pathogens that break through the baseline skin and hair defenses. Further down the nasal pathways, groups of T and B-cells constantly fight antigen-presenting pathogens. (Source: Wikimedia Commons)

“The immune system’s recognition of self and non-self is vital to its ability to defend the body against attack. However, in the case of transplantation, it is clear why the self versus non-self discrimination is problematic.”

bone marrow, and (for fetuses only) the liver.16 They then mature and activate in the secondary lymphoid organs (SLOs), which are found throughout the body and include lymph nodes, the spleen, Peyer’s patches, nasal-associated lymphoid tissue, adenoids, and tonsils; in these SLOs, naive T and B-cells are presented antigen by dendritic cells, allowing them to form a memory of the antigen. When they next encounter the antigen in the body, they are primed to attack quickly.17 The immune system’s recognition of self and non-self is vital to its ability to defend the body against attack. However, in the case of transplantation, it is clear why the self versus non-self discrimination is problematic – even in the case of allografts and with the use of immunosuppressants, the body will eventually recognize the transplanted tissue as foreign and attack it, resulting in the loss of the tissue to chronic rejection. Previously, it was thought that immune cell activation could only occur in the aforementioned SLOs, which are natural parts of the immune system that humans are born with. However, over the past few years, researchers have been studying the role of a third type of immune structure that could be a significant contributor to chronic rejection: tertiary lymphoid organs (TLOs).

TLOs and Their Role in Chronic Rejection

TLOs are ectopic lymphoid structures found in non-lymphoid tissues. Structurally, they are very similar to SLOs, but their formation and impacts on the body are very different.16 SLOs are formed before a person is born. TLOs, on the other hand, form through a process called lymphoid neogenesis, which occurs when organs have chronic inflammation. Transplantation is one scenario that can cause the long-term inflammation that triggers lymphoid neogenesis.16, 18 Once made, TLOs cause a local immune response against the source of the chronic inflammation, which, in the case of transplantation, is the donated organ.16 TLOs are not exactly SLOs, as they are not as tightly organized and regulated as true organs. This is why several authors refer to them as tertiary lymphoid structures rather than organs. Regardless, there are still several key similarities between the two. While much is still unknown about the specifics of TLO structure, both TLOs and SLOs are identified through the use of microscopy techniques including immunofluorescence and immunohistochemistry.19

Figure 3: T-cells are arguably the most critical cell in the adaptive immune system because of their ability to mark and kill foreign invaders. Though the primary focus of this study is on TLOs, a major area of study regarding T-cells is their role in fighting cancer. Pictured here are three killer T-cells surrounding and preparing to attack a cancerous cell. When they come into contact with a target cell (in this case, a cancer cell), they wrap themselves around the cell and release chemicals to kill the cell (the chemicals are pictured in red); they then move on to the next target cell. This process has been termed the “kiss of death.” (Source: Wikimedia Commons)

DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE


Immunofluorescence is a process in which scientists target molecules with fluorescent tags; while the organism being studied is still alive, scientists hit the molecules with lasers, causing the tags to produce visible light. Researchers can then track the movement and behavior of the targeted cells in vivo.20 Immunohistochemistry also uses immunological staining that targets specific molecules, but it is done on tissue samples rather than on living subjects; while it does not allow scientists to study the behavior of living cells, immunohistochemistry does let them examine tissue architecture in high detail.21

In addition to being visualized through the same procedures, TLOs share some physical characteristics with SLOs. Researchers characterize TLOs through the presence of four markers.19 The first marker is stromal cells, which make up connective tissues.22 The second is B-cell follicles, the areas where follicular B-cells cluster (see Cover Image).23 The third is T-cell zones, the areas of the structure that contain T-cells.24 The fourth is a specialized vessel known as the high endothelial venule, which helps the structure communicate with the rest of the body.25 These four markers are also found in SLOs and help show how TLOs are structured and organized, giving later researchers hints about the role TLOs play in transplant rejection.

TLOs first caught the interest of transplant immunologists when they were discovered in several rejected organs. However, it was initially unclear whether they hindered the rejection process or accelerated it, especially since TLOs have been beneficial in some situations (for example, several studies have shown that TLO formation helps the body’s immune system fight cancer – see Figure 3).
In order to determine whether TLOs were helpful or harmful to transplants, researchers conducted an immunohistochemical study of several chronically rejected human renal allografts to search for the presence of TLOs and examine their distribution across the rejected organs. Ultimately, they determined that TLOs were a consequence of the chronic inflammation associated with transplants and accelerated the chronic rejection process.26 With this finding, transplant immunologists wanted to understand exactly how TLOs contributed to chronic rejection. Researchers hypothesized that because TLOs are so similar to SLOs, they may play a role in the activation of naive immune cells. To test this hypothesis, researchers used mouse models

and immunofluorescent imaging to view T-cell kinetics in allograft renal tissue. They were able to confirm that TLOs are sites of naive T-cell activation.27 This was a critical finding – if TLOs are able to activate T-cells directly at the site of transplant, then the immune response will be significantly stronger because the T-cells can rapidly reach and target the transplanted organ, resulting in accelerated rejection.

Conclusions and Further Questions

It is clear from previous research that TLOs play a significant role in the chronic transplant rejection process by acting as a site for local activation of naive T-cells. However, much is still unknown about both the structure and function of TLOs. By employing more imaging studies, researchers will be able to better understand the stages of TLO development, as well as the key differences between mature TLOs and SLOs. Perhaps more significantly, researchers must work towards understanding how TLOs propagate the body’s natural immune response at the local level by studying B-cell behavior so that scientists have a more holistic understanding of the local immune response. There are researchers currently working towards determining if TLOs can activate B-cells, though results are yet to be published. As a result of these findings, clinicians and researchers have a better understanding of the chronic rejection of transplanted organs. As knowledge regarding TLOs continues to broaden, the hope is that new treatment options that slow or even stop chronic rejection will be developed.

References

[1] Calne, R. Y. (2019, September 10). Transplant. Britannica. Retrieved from https://www.britannica.com/science/transplant-surgery

“It is clear from previous research that TLOs play a significant role in the chronic transplant rejection process by acting as a site for local activation of naive T-cells.”

[2] Barker, C. F., & Markmann, J. F. (2013). Historical Overview of Transplantation. Cold Spring Harbor Perspectives in Medicine, 3(4). doi: 10.1101/cshperspect.a014977
[3] Allotransplant Medical Definition. (n.d.). Merriam-Webster. Retrieved from https://www.merriam-webster.com/medical/allotransplant
[4] Cueni, L. N., & Detmar, M. (2013). The Lymphatic System in Health and Disease. Lymphatic Research and Biology, 6(3-4), 109–122. doi: 10.1089/lrb.2008.1008
[5] Schiel, W. C. (2018, December 21). Definition of Autotransplantation. MedicineNet. Retrieved from https://www.medicinenet.com/script/main/art.asp?articlekey=40487
[6] NCI Dictionary of Cancer Terms. (n.d.). National Cancer Institute. Retrieved from https://www.cancer.gov/publications/dictionaries/cancer-terms/def/xenograft


[7] Allograft vs. Autograft. (n.d.). Hartford Hospital. Retrieved from https://hartfordhospital.org/health-professionals/tissuebank/human-tissue-graft-information/allograft-vs-autograft
[8] Nasr, M., Sigdel, T., & Sarwal, M. (2016). Advances in diagnostics for transplant rejection. Expert Review of Molecular Diagnostics, 16(10), 1121–1132. doi: 10.1080/14737159.2016.1239530
[9] Treleaven, J., & Barrett, J. (2009). Hematopoietic Stem Cell Transplantation in Clinical Practice. Elsevier Limited.
[10] Platt, J. L. (2010). Antibodies in Transplantation. Discovery Medicine, 10(51), 125–133. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3056494/
[11] Kloc, M., & Ghobrial, R. M. (2014). Chronic allograft rejection: A significant hurdle to transplant success. Burns & Trauma, 2(1), 3–10. doi: 10.4103/2321-3868.121646
[12] Hoebe, K., Janssen, E. M., & Beutler, B. (2004). The interface between innate and adaptive immunity. Nature Immunology, 5(10), 971–974. doi: 10.1038/ni1004-971
[13] Chaplin, D. D. (2010). Overview of the Immune Response. Journal of Allergy and Clinical Immunology, 125(2), S3–23. doi: 10.1016/j.jaci.2009.12.980
[14] Lopera, D., & Cano, L. E. E. (n.d.). Autoimmunity: From Bench to Bedside. https://www.ncbi.nlm.nih.gov/books/NBK459471/
[15] Häusser-Kinzel, S., & Weber, M. S. (2019). The Role of B Cells and Antibodies in Multiple Sclerosis, Neuromyelitis Optica, and Related Disorders. Frontiers in Immunology. doi: 10.3389/fimmu.2019.00201
[16] Thompson, E. C. (2012). Focus issue: Structure and function of lymphoid tissues. Trends in Immunology, 33(6). doi: 10.1016/j.it.2012.05.001
[17] Ruddle, N. H., & Akirav, E. M. (2009). Secondary Lymphoid Organs: Responding to Genetic and Environmental Cues in Ontogeny and the Immune Response. The Journal of Immunology, 183(4), 2205–2212. doi: 10.4049/jimmunol.0804324
[18] Haschek, W. M., Rousseaux, C. G., & Wallig, M. A. (n.d.). Haschek and Rousseaux's Handbook of Toxicologic Pathology. https://www.sciencedirect.com/topics/medicine-and-dentistry/tertiary-lymphoid-structure
[19] Lin, L., Hu, X., Zhang, H., & Hu, H. (2019). Tertiary Lymphoid Organs in Cancer Immunology: Mechanisms and the New Strategy for Immunotherapy. Frontiers in Immunology, 10. doi: 10.3389/fimmu.2019.01398
[20] Donaldson, J. G. (2015). Immunofluorescence Staining. Current Protocols in Cell Biology, 1–4. doi: 10.1002/0471143030.cb0403s69
[21] Ramos-Vara, J. A. (n.d.). Principles and Methods of Immunohistochemistry. Drug Safety Evaluation, 83–96. Retrieved from https://link.springer.com/protocol/10.1007/978-1-60761-849-2_5
[22] Stromal cell: NCI Dictionary of Cancer Terms. (n.d.). National Cancer Institute. Retrieved from https://www.cancer.gov/publications/dictionaries/cancer-terms/def/stromal-cell
[23] Wallace, D. J., Hahn, B., & Dubois, E. L. (2019). Dubois' Lupus Erythematosus and Related Syndromes. Edinburgh: Elsevier.
[24] Mondino, A., Khoruts, A., & Jenkins, M. K. (1996). The anatomy of T-cell activation and tolerance. Proceedings of the National Academy of Sciences of the United States of America, 93, 2245–2252. Retrieved from https://www.pnas.org/content/pnas/93/6/2245.full.pdf
[25] Ruddle, N. H. (2016). High Endothelial Venules and Lymphatic Vessels in Tertiary Lymphoid Organs: Characteristics, Functions, and Regulation. Frontiers in Immunology. doi: 10.3389/fimmu.2016.00491
[26] Xu, X., Han, Y., Wang, Q., Cai, M., Qian, Y., Wang, X., Huang, H., Xu, L., Xiao, L., & Shi, B. (2016). Characterisation of Tertiary Lymphoid Organs in Explanted Rejected Donor Kidneys. Immunological Investigations, 45(1), 38–51. doi: 10.3109/08820139.2015.1085394
[27] Nasr, I. W., Reel, M., Oberbarnscheidt, M. H., Mounzer, R. H., Baddoura, F. K., Ruddle, N. H., & Lakkis, F. G. (2007). Tertiary Lymphoid Tissues Generate Effector and Memory T Cells That Lead to Allograft Rejection. American Journal of Transplantation. doi: 10.1111/j.1600-6143.2007.01756.x


Green Architecture: Drawing on Biomimicry BY ANNA BRINKS '21

Introduction

From hunting practices to energy use, prehistoric humans exerted a nearly negligible influence on nature compared to the behemoth impact of modern man. Early habitats, a crucial aspect of human communities, also conformed to natural limitations. From caves to simple structures, humans used natural resources together with local site characteristics to protect themselves from the elements. However, these habitats have changed dramatically as living conditions evolved in tandem with the explosion of modern energy use and multiplying global populations. The development of heating, ventilation, and air conditioning (HVAC) technology has freed building sites, structures, and materials from many natural restrictions.15 Livable conditions can be artificially created with HVAC technology, allowing people to live anywhere from the 100°F (38°C) summer temperatures of Dubai to the -58°F (-50°C) winter temperatures of Russia. Urbanization and the development of sprawling suburbs have transformed building scale both vertically and horizontally, putting considerable strain on non-renewable global resources. It is estimated that the energy consumed in the building of cities and suburbs accounts for 20 to 40% of total energy consumption in developed nations, and that the construction industry alone produces 40% of the total volume of carbon dioxide emitted globally.15 As developing nations continue to urbanize and

implement HVAC technology, this excessive energy consumption will very likely increase in the coming years. Consequently, the innovation and improvement of green architecture has come to be one of the main considerations driving modern architectural design. The concept of green architecture encompasses the entire life cycle of a building, from construction to operation to demolition.8 It requires careful selection of materials and implementation of designs that complement the unique challenges of local landscapes and climates. Green architecture has considerable benefits in a number of areas. Environmentally, it reduces pollution, conserves natural resources, and protects against environmental degradation. Economically, green buildings use energy and water more efficiently, reducing operating costs. Green architecture even provides social benefits, as buildings are often designed to incorporate living plants and green spaces, providing beautiful homes for their residents and visitors. Increasingly, green architects are turning directly to nature for innovative designs and ways to reduce waste.12

Cover Image: The roof of the nave in the Sagrada Familia church in Barcelona, Spain. The church represents one of the most famous examples of biomimicry in architecture. Architect Antoni Gaudí designed the towering columns to mirror trees and branches. (Source: Wikimedia Commons)

“The concept of green architecture encompasses the entire life cycle of a building, from construction to operation to demolition.”

Green Building Rating Systems

Green building rating systems provide an effective framework for integrating sustainability into development projects.2 As defined by the United Nations Commission on Environment and Development, sustainable


“In the effort to protect nature, nature itself can be an invaluable resource for engineers and innovators.”

development “is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.”13 Green building programs with specific guidelines and benchmarks are an important way to incentivize and evaluate sustainable buildings. The most common program used in North America is called Leadership in Energy and Environmental Design, or LEED. This program was launched in 1998 and is now administered by the United States and Canada Green Building Councils.10 The LEED system encourages an integrated design approach by awarding credits for features that improve sustainability, which may include reductions in energy, water, or resource use.10 Despite its popularity, there are some limitations to the LEED qualification system. Energy performance credits are allotted based on simulated performance predicted by designs, rather than actual performance after the building is constructed and inhabited.10 Fortunately, when a large number of buildings are taken into account, the modeled and actual results are remarkably consistent, with an average ratio of 0.92 between measured and designed energy use intensity (a building's annual energy consumption relative to its gross square footage).10,14 The problem lies with a certain subset of green buildings. While the overall average energy savings of LEED buildings range from 18-39%, 28-35% of LEED buildings actually use more energy than their conventional counterparts. Therefore, even though the program has contributed substantial energy savings at a societal level, further refinement is needed to minimize variability in energy expenditure at the individual building level.10 Investigating post-occupancy performance to determine how well buildings operate once they are in use will be an important step towards ensuring consistency.
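The energy use intensity comparison described above is simple arithmetic. As an illustrative sketch only, the following computes the measured-to-design EUI ratio for a hypothetical building whose figures were chosen to reproduce the reported 0.92 average; the numbers are not from any real LEED project:

```python
def energy_use_intensity(annual_energy_kbtu, gross_area_sqft):
    """EUI: a building's annual energy consumption relative to its gross floor area."""
    return annual_energy_kbtu / gross_area_sqft

# Hypothetical building: design-model prediction vs. metered first-year use
design_eui = energy_use_intensity(3_000_000, 50_000)    # 60.0 kBtu per sq ft per year (simulated)
measured_eui = energy_use_intensity(2_760_000, 50_000)  # 55.2 kBtu per sq ft per year (measured)

# Ratio of measured to designed EUI; 0.92 is the average reported across many buildings
ratio = measured_eui / design_eui
print(round(ratio, 2))  # 0.92
```

A ratio below 1.0 means the building used less energy than its design model predicted; the concern raised in the text is the wide spread of individual ratios around that average.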

Figure 1: A sunflower demonstrating phototaxis as it opens for the day (Source: Wikimedia Commons).

In addition to LEED, there are several other programs that are used to rate the sustainability of buildings. Launched in 1990, the Building Research Establishment Energy and Environmental Assessment Model (BREEAM) is the predominant program in the UK. While it has many similarities to LEED, including its credit-awarding system, its guidelines are more tailored to the UK’s legislative environment. It has also been credited with a high level of adaptability, with specific programs that cater to an area’s unique needs.5 The LEED program is also linked to the US dollar, which causes problems internationally if exchange rates are unfavorable or change drastically. It is likely that both programs will continue to coexist and thrive in their specific regions, perhaps even learning from and evolving with each other.

Biomimicry and Green Architecture

In the effort to protect nature, nature itself can be an invaluable resource for engineers and innovators. Nature is a vital source of inspiration – its adaptations and structures have been honed by eons of evolution into highly efficient and ingenious designs. In the 3.8 billion years since bacteria first emerged on Earth, living organisms have evolved the capacity to transform sunshine into physical forms, fly, migrate across the globe, live in the deepest ocean trenches and on the highest mountains, glow in the dark, manufacture miraculous materials, and ponder philosophy and art. Biomimicry draws from the inspiration that is life on Earth: it is “the study of the structure, characteristics, principles, and behavior of biological systems to provide novel design ideas, working principles, and system compositions as well as an interdisciplinary subject that provides new ideas, principles, and theories for scientific and technological innovation”.15 Biomimetic architecture treats the building like an integrated living organism striving for a symbiotic relationship with the environment. Just as evolution has molded the forms and functions of every organism on Earth to the constraints of local ecosystems, biomimetic architecture can be used to better adapt modern construction to the environment.

Solar Energy

The sun is a vital source of energy for the hundreds of thousands of plants and photosynthetic bacteria that transform light into storable chemical energy. Since the first solar cell was invented in the 1880s, humans have also harnessed solar power.6 Some species of plants have evolved to optimize their sunlight intake in especially noteworthy ways, providing sources of inspiration for architectural biomimicry in green buildings. Using photo-sensory organs, the leaves of sunflower, peanut, and cotton plants can harness the energy of sunlight during the day and shut together at night – an ability called phototaxis (figure 1).15 The Japanese sunflower fiber-optic light guide system mimics natural phototaxis, using proactive solar tracking to automatically follow the sun. This technique increases sunlight absorption by 20-36% over passive, fixed tracking systems.15 Taking phototaxis to another level, the Heliotrope building in Freiburg, Germany can rotate to the south in the cold winter to maximize sunlight exposure in bedrooms and turn north in the hot summer to minimize sunlight (figure 2). Combined with triple-pane windows and a rooftop heat absorber capable of rotating in four directions to follow the sun, the Heliotrope became the first energy-autonomous house (producing as much energy as it consumes) when it was constructed in 1994.

Figure 2: The Heliotrope building in Freiburg, Germany (Source: Wikimedia Commons).

Wind and Ventilation

Termite mounds are an incredible feat of natural architecture. In human proportions, a 3 to 8-meter high termite mound scales up to a 1,500-meter tall skyscraper, nearly twice as tall as the 828-meter Burj Khalifa in Dubai (currently the tallest building in the world)! Inside the mound, the average temperature remains steady at 28 degrees Celsius, even when the temperature outside fluctuates by as much as 50 degrees Celsius.15 The termites achieve this by adding cool, wet mud as needed to the bottom of the mound. In addition, they construct an ingenious vent system. Fresh air enters through vents near the bottom of the mound and absorbs heat, eventually rising and escaping from the top. At night, most vents are plugged, and the termites rely on the soil’s thermal storage capacity to provide heat.

“In human proportions, a 3 to 8-meter high termite mound scales up to a 1,500-meter high skyscraper...”

Architect Mike Pearce modeled the Eastgate Center building in Zimbabwe’s capital Harare after the termites’ mechanism of ventilation. The structure is a multifunctional business and office building that consists of two slab-type apartments connected by an atrium. The center of the building is equipped with double-deck air shafts, with an inner deck that releases heated air and an outer deck that delivers cold air. The intake for the ventilation system is situated over a footbridge filled with green plants, which have a shading and cooling effect and allow cool air to be absorbed by the system and transferred to the offices. Once the air is hot, it travels from the inner deck ventilation shaft to the roof where it is finally discharged.15

Figure 3: The tenebrionidae Namibian fog basking beetle and its vapor-catching technique. (Source: Wikimedia Commons).


Figure 4: The Eden Project with the Core education center in front and the Biomes in the background. (Source: Wikimedia Commons)

Like the termite mounds, the building admits new air at the bottom and displaces old air upwards, efficiently cycling between hot and cold temperatures.
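The “human proportions” comparison in the termite-mound passage is a simple scaling argument. A sketch of the arithmetic follows; the termite body length of roughly 6 mm is an assumption introduced here for illustration (the article gives only the mound and skyscraper heights):

```python
# Scale a termite mound to human proportions (illustrative assumptions)
termite_length_m = 0.006   # ASSUMED worker termite body length (~6 mm), not from the article
mound_height_m = 5.0       # mid-range of the 3 to 8 m mounds described in the text
human_height_m = 1.75      # typical adult human

# How many body lengths tall is the mound relative to its builders?
scale_factor = mound_height_m / termite_length_m

# The same ratio applied to a human builder gives the equivalent skyscraper height
equivalent_skyscraper_m = human_height_m * scale_factor
print(round(equivalent_skyscraper_m))  # roughly 1,500 m, matching the article's comparison
```

Under these assumptions the mound is about 830 body lengths tall, which is why a human-scale equivalent lands near 1,500 meters rather than the 828-meter Burj Khalifa.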

Water Collection and Conservation

“The Eden Project... This enormously successful project transformed an old china clay pit into a tourist destination that attracted two million visitors and an estimated 430 million euros of additional income to the Cornish economy during its first year.”

Figure 5: The working china clay pit that was near the end of its economic life before it became the site of the Eden Project. (Source: Wikimedia Commons)

The Namib Desert in Southern Africa is a vast coastal desert that stretches inland from the Atlantic Ocean. Rainfall is sporadic, with some regions only receiving 2 millimeters annually. Remarkably, some life forms still thrive in this harsh environment. The tenebrionidae Namibian fog basking beetle can catch water droplets 1-40 μm in diameter when high winds from the Atlantic blow vapor into the region (figure 3).15 Slanting their backs in the air, the beetles condense the moisture and allow the droplets to roll into their mouths. Nanoscale hydrophilic bumps alternated with waxy, hydrophobic patches allow the drops to collect and travel down their backs. Forming the droplets into spheres allows the water to collect far more efficiently than if it simply accumulated as a thin film.11 The proposed structure for the Las Palmas Water Theatre on the Canary Islands (a Spanish archipelago off the coast of northwestern Africa) involves a similar technique to produce large quantities of fresh water. As a backdrop to an outdoor amphitheater, the structure would catch the constant sea wind and use sunlight to make the air warm and humid. The air would then condense when it runs into cold pipelines and travel to recycling facilities for fresh water. Adjustable panes capable of changing angles based on the direction of the wind would enhance the efficiency of the condensation process. This would allow fresh water to be sustainably collected and distributed to nearby buildings and landscapes.15

Nature’s Shapes: Soap Bubbles and Shells

The Eden Project in Cornwall, England, conceived by businessman Tim Smit and designed by architect Nicholas Grimshaw, is an educational charity that aims to better connect people with the living world and explore opportunities to create a better, greener future. The popular attraction consists of a “Core” education center and two enormous Biomes, as well as outdoor gardens and an arena (figure 4). One biome contains plants from humid, tropic regions and the other focuses on plants that grow in warm, temperate zones.4 Visitors are able to walk through the biomes and directly experience these different environments. This enormously successful project transformed an old china clay pit into a tourist destination that attracted two million visitors and an estimated 430 million euros of additional income to the Cornish economy during its first year (figure 5).4 The unique construction of the domes relies on a geodesic concept, which minimizes weight while maximizing surface area and



strength of the structure. Architect Nicholas Grimshaw was inspired by the idea of soap bubbles, which adapt to the surfaces they land on and form perpendicular lines of joining when they fuse together. This was well adapted to the uneven and changing sands of the clay pits. The structures consist of two steel work frames made of interlocking hexagons and triangles with three layers of ethylene tetrafluoroethylene copolymer (ETFE) inflated between the steel. The steel framework weighs only slightly more than the air contained by the Biomes, and the ETFE windows weigh less than 1 percent of the equivalent area of glass.1 Additionally, ETFE is self-cleaning, can last for more than 25 years, and can transmit UV light. The Core education center contains the Invisible Worlds exhibition, which explores the interconnectedness of life and the environment at every scale.1 The building’s design draws on the natural shapes of pinecones, sunflowers, shells, and pineapples. These plants contain opposing spirals that exhibit the Fibonacci sequence, where the next number is found by adding the two numbers before it (figure 6). Solar panels as well as sustainable materials were incorporated, such as the recycled wood flooring and concrete made from the china clay sand the structure was built upon. Even the building’s green tiles were made of recycled Heineken bottles.1 During every step of construction, care was taken to minimize waste and utilize resources in the most efficient and innovative ways possible.
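The Fibonacci rule mentioned above – each number is the sum of the two before it – can be sketched in a few lines of Python. Note that the pinecone spiral counts in Figure 6, 8 and 13, are consecutive terms of the sequence:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, each term being the sum of the previous two."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fibonacci(8))  # [1, 1, 2, 3, 5, 8, 13, 21] – note that 8 and 13 appear consecutively
```

The ratio of consecutive terms (13/8 = 1.625) approximates the golden ratio (about 1.618), the proportion associated with the efficient spiral packing seen in pinecones and sunflower heads.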

Conclusion

People are now urban animals, with the majority of the human race living in cities.7 Air pollution and waste disposal have plagued cities since their conception, and city planners currently rely on resources from far beyond their borders to solve these problems. The rapid expansion of the urban space over the last century has been accompanied by health and environmental problems as governments have struggled to provide resources

and infrastructure for erupting slums and booming populations. Furthermore, the growth of consumer economies has increased the appeal and attainability of high-energy and materials-intensive lifestyles.7 Ecological economics and sustainable development theories pose serious critiques of the global economy, warning that a finite ecological system cannot support exponentially growing consumer demand.9 Green architecture is an important step towards addressing these concerns and mitigating human impact on the environment. While this article explores only a few select case studies that illustrate the benefits of biomimicry in green architecture, the potential for future innovation is substantial.

Figure 6: Mathematical model of the Fibonacci sequence and an example in nature. Spirals of two consecutive Fibonacci numbers in a pinecone: 8 clockwise spirals and 13 counterclockwise spirals. (Source: Wikimedia Commons; wallpaperflare image labeled for reuse)

References

[1] Architecture of the Eden Project, Cornwall. (n.d.). Retrieved March 10, 2019, from https://www.edenproject.com/edenstory/behind-the-scenes/architecture-at-eden
[2] Awadh, O. (2017). Sustainability and green building rating systems: LEED, BREEAM, GSAS and Estidama critical analysis. Journal of Building Engineering, 11, 25–29. https://doi.org/10.1016/j.jobe.2017.03.010
[3] Benyus, J. M. (1997). Biomimicry: Innovation inspired by nature. New York, NY: Perennial.
[4] Blewitt, J. (2015). The Eden Project – making a connection. Museum and Society, 2(3), 175-189. Retrieved from https://journals.le.ac.uk/ojs1/index.php/mas/article/view/48/70
[5] BREEAM or LEED - Strengths and Weaknesses of the Two Main Environmental Assessment Methods. (2009, February). BSRIA. www.bsria.co.uk/news/article/breeam-or-leed-strengths-and-weaknesses-of-the-two-mainenvironmental-assessment-methods/
[6] Dumke, K. (2011, January 21). The Power of the Sun. National Geographic. Retrieved from https://www.nationalgeographic.org/news/power-sun/
[7] McNeill, J. R., & Engelke, P. (2014). The Great Acceleration: An Environmental History of the Anthropocene Since 1945. The Belknap Press of Harvard University Press.

“The building's design draws on the natural shapes of pinecones, sunflowers, shells, and pineapples. These plants contain opposing spirals that exhibit Fibonacci's sequence...”

[8] Mahdavinejad, M., Zia, A., Larki, A. N., Ghanavati, S., & Elmi, N. (2014). Dilemma of green and pseudo green architecture based on LEED norms in case of developing countries. International Journal of Sustainable Built Environment, 3(2), 235–246. https://doi.org/10.1016/j.ijsbe.2014.06.003
[9] Martinez-Alier, J. (2015). Ecological Economics. In International Encyclopedia of the Social & Behavioral Sciences (pp. 851–864). Elsevier. https://doi.org/10.1016/B978-0-08-097086-8.91008-0
[10] Newsham, G. R., Mancini, S., & Birt, B. J. (2009). Do LEED-certified buildings save energy? Yes, but…. Energy and Buildings, 41(8), 897–905. https://doi.org/10.1016/j.enbuild.2009.03.014
[11] Pawlyn, Michael. (2011, November). Using Nature’s Genius in Architecture. Retrieved from https://www.ted.com/talks/michael_pawlyn_using_nature_s_genius_in_architecture


[12] Ragheb, A., El-Shimy, H., & Ragheb, G. (2016). Green Architecture: A Concept of Sustainability. Procedia - Social and Behavioral Sciences, 216, 778–787. https://doi.org/10.1016/j. sbspro.2015.12.075 [13] United Nations. (1987). Our Common Future; Brundtland Report. Commission on Environment and Development (WCED). Oxford University Press [14] Yang, C., & Choi, J.-H. (2015). Energy Use Intensity Estimation Method Based on Façade Features. Procedia Engineering, 118, 842–852. https://doi.org/10.1016/j. proeng.2015.08.522 [15] Yuan, Y., Yu, X., Yang, X., Xiao, Y., Xiang, B., & Wang, Y. (2017). Bionic building energy efficiency and bionic green architecture: A review. Renewable and Sustainable Energy Reviews, 74, 771–787. https://doi.org/10.1016/j. rser.2017.03.004

20



The Health-Related Implications of Fifth Generation (5G) Wireless Technology: Science and Policy

BY AUDREY HERRALD '23

Introduction

If you use a mobile phone, you depend on mobile broadband to call, text, stream, and access the internet. Mobile broadband forms the basis of wireless cellular networks, and telecom companies are constantly working to improve these networks. 5G technology, the fifth generation of wireless cellular networks, is the most recent product of these improvements; it will eventually augment or replace existing 4G and 3G networks. The new technology will bring mobile data speeds as much as 100 times faster than the fastest home broadband network currently available to consumers. Just as the advent of 4G set the stage for new services like Uber and Airbnb, enabled the expansion of video streaming services (Netflix), and helped grow social media platforms (Instagram and Facebook Live, for example), 5G technology will bring unprecedented advancements. Faster connections could give rise to predictive analytics in medical equipment, self-driving vehicles, and advances in virtual reality, to name a few. In addition to faster speeds, 5G technology offers more reliable connections, so it also has the potential to expand cell service access in rural and underserved areas. However, the new technology comes with its own set of challenges. Fifth-generation wireless technology will operate within three bands of wireless frequency: low, middle, and high-band, or

"mmWave." The mmWave bands are of a higher frequency than existing cellular networks, and the novelty of these frequencies has raised some concern. As 5G networks expand, humans will be increasingly exposed to the radio frequency radiation (RFR) emitted by new 5G cell stations. Governing bodies like the Federal Communications Commission (FCC) have given telecommunications companies the green light to roll out 5G services, but many remain hesitant about the safety of the new technology. The World Health Organization advises a "precautionary approach" to 5G rollout, and the scientific literature is divided as to whether the high frequency radiation poses health risks. As scientists, doctors, legislators, and grassroots community organizations work to understand the potential adverse health effects of 5G, state and local governing bodies grapple with their role in the process of 5G development. Some states hope to curtail the rollout until further research becomes available. Others have begun delegating authority to local governments, giving cities and counties the choice to impose fees that would temporarily discourage 5G rollout in their area. Scientists continue to investigate the health-related implications of 5G technology. The body of research is still developing, and it remains inconclusive. However, initial findings are far from insignificant and have already influenced governing bodies, regulatory agencies, and local legislators. Science rarely exists in a vacuum, and the research on 5G radio frequency exposure is no exception. This

Cover Image: The Biological Implications of 5G Wireless Technology. Source: Health Effects of 5G. 2019, https://www.adweek.com/wp-content/uploads/2019/12/5g-causecancer-CONTENT-2019.jpg



paper aims to explore the current scientific landscape surrounding 5G and public health, and then present these findings within social and political contexts.


Section 1 of this paper reviews the most recent scientific literature pertaining to the biological effects of non-ionizing radiation. The review reveals that exposure to radio frequency radiation (RFR) can result in adverse health effects related to the thermal (external) effect of radiation, but that exposure limits set by the FCC adequately protect everyday consumers against these thermal health risks. However, recent preliminary studies point to the existence of non-thermal health effects, and the FCC regulations account only for thermal effects of RFR exposure. In light of this evidence, many scientists stress the importance of adhering during the establishment of 5G networks to a guiding legislative theory, developed and supported by the World Health Organization (WHO), known as the "precautionary principle." Essentially, these scientists suggest that when an emerging health risk has the potential to become substantially adverse, "acknowledged scientific uncertainty" should not stop governing bodies from taking

precautionary legislative action.3 Section 2 outlines and analyzes health-related 5G legislation in U.S. states, finding that more than a quarter of states have passed legislation that responds in some form to health-related concerns about 5G technology. Most of this legislation involves the commission of a study or the formation of an investigative task force. Many states have also passed legislation that widens the regulatory ability of municipalities, which might limit construction of 5G cell towers in areas where constituents harbor health concerns. A small proportion of states have written declaratory letters to the FCC or taken direct action on wireless companies, such as requiring that small cells be routinely monitored and registered for public access. Finally, Section 3 summarizes recent health-related opposition to 5G. In the U.S., grassroots organizations are growing; some contingent of 5G opposition has a base in every state. These groups have filed lawsuits against the FCC, prepared sample legislation for states and municipalities, and worked with state legislators to develop informational fact sheets complete with all known effects of exposure to cellular radiation and best practices for limiting one's own exposure. Municipal opposition has inhibited small cell deployment in a few instances, but health-related opposition to 5G has posed a much more significant barrier to deployment in a number of European countries. In the U.S., the FCC (not state or municipal governments) holds most of the authority over the telecom companies that are working to establish 5G networks.

Section 1: A Review of Scientific Literature

Figure 1: 5G Licensing Status by Frequency.5


Thousands of studies related to the biological effects of RFR from wireless technology have been published in peer-reviewed scientific journals over the past decade. In general, these studies investigate one of two types of potential health effects from exposure to non-ionizing radiation: non-thermal (internal) effects and thermal (external) effects, related to the heating of skin. This divide comes from a lack of scientific consensus regarding the penetrative capabilities of non-ionizing radiation. On the electromagnetic spectrum, non-ionizing radiation is a type of low-energy radiation that does not have enough energy to remove an electron from an atom or molecule.



Most prominent governing bodies, including the FCC and WHO, maintain that there is no convincing scientific evidence for any internal effects of this non-ionizing radiation. All cellular networks, including the higher frequency bands that will be utilized in some 5G networks, are classified as non-ionizing forms of radiation. Thus, if the effects of non-ionizing radiation are indeed limited solely to thermal (external) bodily changes, then the new technology should pose no greater health risk than any preceding cellular network.4 Figure 1 shows the current allocation of wavelengths for 5G networks. These wavelengths each occupy a different "lane" in the wireless world, and telecommunications companies cannot utilize a band of operation until that band is allocated or licensed to them by the FCC. While all 5G networks will operate using frequencies that have been used in preceding network generations, some networks, particularly those in densely populated regions, will also utilize especially high frequencies known as "mmWave." These frequencies, shown in the top third of the figure, are defined as any frequency above 24 GHz. Importantly, these higher-frequency mmWaves still fall far below the 100 GHz threshold for ionizing radiation.5 Thus, despite concern regarding the novelty of mmWaves, even these higher-frequency bands lack the ionizing power that makes other types of radiation (like x-rays, for example) harmful to human health. The distribution of 5G frequencies displayed in Figure 1 offers important context for a literature review. The distribution demonstrates that all cellular networks, including the novel "mmWave" bands upon which some 5G networks will operate, are well below the threshold for ionizing radiation.
Governing bodies like the FCC can point to this "non-ionizing" classification as a sort of safety label, since external effects (the heating of skin) are generally considered to be the only adverse biological effects of non-ionizing radiation, and these effects can be easily measured and then limited in cellular devices used by consumers. The issue, supported by a growing body of scientific literature, is that this "non-ionizing" designation might not mean all that governing bodies presume; some studies suggest that non-ionizing radiation might have internal bodily effects. Sections 1.1 and 1.2 will address the distinctions between thermal (external) effects and non-thermal (internal)


effects of non-ionizing radiation. Briefly, however, it must also be noted that the extensive distribution of wavelengths in Figure 1 explains the necessity of gathering health-related data on all non-ionizing waves, even when 5G technology is the primary concern; 5G technology will operate on a wide range of frequencies. Though the new mmWave bands have become the face of 5G, less than one-third of 5G technology (at least within the next few years) is expected to utilize the mmWave range, and such utilization will take place primarily in large urban centers. In less densely populated regions, lower-frequency health effects are especially relevant. For this reason, the following literature review includes studies pertaining to electromagnetic radiation at all non-ionizing frequencies, and not simply those utilized by the new mmWave 5G bands.

Thermal Effects

As can be observed in Figure 1, the high frequency radio waves upon which 5G and preceding network generations function are all below a frequency of 100 GHz. This means that all cellular radio frequency radiation is classified as "non-ionizing." These radio waves are incapable of separating intracellular ions from other particles, which is why other low-frequency, non-ionizing waves such as microwaves and visible light are markedly less harmful than ionizing x-rays and gamma rays. However, non-ionizing radiation (like sunlight) is known to have biological effects including heating, burns, and shocks. The regulatory activity of the FCC addresses only these thermal (heat-related) effects of non-ionizing radiation, despite a growing body of evidence suggesting that these non-ionizing wavelengths could also have deeper, non-thermal effects.
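The ionizing/non-ionizing distinction above comes down to energy per photon (E = hf): whether a wave can ionize depends on the energy carried by a single photon, not on signal strength. The back-of-the-envelope sketch below is not from the article; the roughly 10 eV ionization threshold is a commonly cited approximation for biological molecules, used here only for scale.

```python
# Illustrative comparison (not from the article) of per-photon energies in the
# cellular bands discussed above against an assumed ~10 eV ionization threshold.
PLANCK_H = 6.62607015e-34        # Planck constant, J*s
EV_PER_JOULE = 1 / 1.602176634e-19

def photon_energy_ev(frequency_hz):
    """Energy carried by a single photon at the given frequency, in eV."""
    return PLANCK_H * frequency_hz * EV_PER_JOULE

IONIZATION_THRESHOLD_EV = 10.0   # assumed approximate molecular ionization energy

for label, freq in [("4G band (2 GHz)", 2e9),
                    ("5G mmWave floor (24 GHz)", 24e9),
                    ("upper mmWave (100 GHz)", 100e9)]:
    e = photon_energy_ev(freq)
    print(f"{label}: {e:.2e} eV per photon "
          f"({IONIZATION_THRESHOLD_EV / e:.0f}x below the assumed threshold)")
```

Even at 100 GHz, a photon carries well under a thousandth of an electron-volt, which is why intensity-driven heating, rather than ionization, is the effect the FCC's limits target.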


The degree of temperature change in skin varies proportionally to the intensity of the radiation given off by a wireless device. This relationship has enabled scientists and industry regulators to develop a standardized “safe limit” for the amounts of radiation that smartphones marketed in the U.S. can emit. The limit requires that no wireless technology causes any harmful increase in the temperature of human skin during regular use. Compliance with this limit is mandatory for all cellular devices sold in the U.S.7 Steven Liu is the Vice President of Engineering, Regulatory Compliance and RF Safety at PCTEST, the FCC’s primary wireless testing facility. Liu



explains that the degree of radiation exposure is described in terms of Specific Absorption Rate, or SAR. Different cellular devices emit radiation with slightly different SARs, but the FCC requires that all cellular devices marketed in the U.S. emit radiation with an SAR below 1.6 watts of energy per kilogram of mass. During compliance testing, cellular devices are activated at maximum power for each of their possible frequency bands. The device is placed in a number of common positions around the head and body of a human-simulating model, where a robotic probe then records data from the model's electric fields. The highest SAR values for each frequency band are incorporated into a final determination of SAR value for the device. In most circumstances, cellular devices operate far below the maximum power levels used in testing. Even still, cellular devices that exhibit maximum SARs at or below the threshold of 1.6 W/kg do not emit enough energy to have any thermal effect on the human body.8

Data from two of the world's largest scientific databases, PubMed and ScienceDirect (with collective coverage of over 46 million articles), show that approximately 45,000 peer-reviewed research studies on the health effects of non-ionizing radiation have been published in the past five years. (Recall that this non-ionizing radiation is the type of radio frequency radiation emitted by all cellular networks, including 5G.) The literature includes no definitive accounts of adverse health effects, but multiple peer-reviewed, large-sample, carefully replicated studies are generally required before such definitive conclusions are reached. The Federal Communications Commission (FCC) maintains that there is no convincing scientific evidence for any internal bodily effects of non-ionizing radiation. However, many scientists have published research that offers preliminary evidence of biological damage, and the International Agency for Research on Cancer (IARC) classifies 5G-related radiation as "possibly carcinogenic."9

Figure 2: Predicted Skin Permittivity by Radio Frequency Radiation Across Studies.11

Figure 3: Penetration Depth of Different Radio Frequencies Across Studies.12
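The compliance procedure described in this section (each band tested at maximum power, with the worst-case SAR compared against the 1.6 W/kg limit) reduces to a simple worst-case check. Below is a minimal sketch with invented measurement values; the function names are illustrative and not part of any real FCC or PCTEST tooling.

```python
# Hypothetical sketch of the FCC-style SAR determination described above:
# a device is tested at maximum power on each frequency band, and the highest
# measured SAR must fall at or below the 1.6 W/kg limit. All values invented.
FCC_SAR_LIMIT_W_PER_KG = 1.6

def device_sar_rating(band_measurements):
    """The device's reported SAR is the worst case across all tested bands."""
    return max(band_measurements.values())

def passes_compliance(band_measurements):
    """True if the worst-case SAR falls at or below the FCC limit."""
    return device_sar_rating(band_measurements) <= FCC_SAR_LIMIT_W_PER_KG

# Invented example: per-band SAR measurements (W/kg) for a hypothetical phone.
measurements = {"low-band": 0.74, "mid-band": 1.12, "mmWave": 0.38}
print(device_sar_rating(measurements), passes_compliance(measurements))
```

The design point is simply that the reported figure is a maximum over bands and positions, so everyday use (far below maximum transmit power) sits well under the rated value.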

Wu and colleagues (2015) reviewed the initial unveiling of 5G mmWave technology and presented a wide body of research on the thermal effects of RFR. The review focused on the new mmWave technology that would be utilized in 5G networks. While some argue that the higher mmWave frequencies could have worse thermal effects than lower frequencies, Wu's review, as well as reports by the International Electrotechnical Commission and a research-backed database from the Foundation for Research on Information Technologies in Society, suggest that a definite conclusion cannot be drawn; the studies found that the thermal effects of mmWave frequencies do not differ significantly enough from lower bands to warrant different thermal safety standards from the FCC.10 Figure 2 shows a graph from Wu's review of studies regarding the penetrative effects of mmWaves. Each of the six models (shown in the top right) was developed in a separate study, yet they appear to corroborate one another in their predictions of an inverse relationship between wave frequency and the relative permittivity of human skin. Figure 3 displays similar results pertaining to the heating of tissue. In short, recent scientific literature suggests that the FCC's contracted testing facilities screen cellular technology quite adequately for any potential thermal health effects. Claims regarding adverse health effects that arise as a result of heating, such as temporary decreases in sperm count or heat-initiated cataract development, are scientifically unfounded. However, justifiable uncertainty does remain on the question of whether FCC regulations are sufficient in protecting consumers from any potential non-thermal effects of 5G
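The inverse relationship between frequency and penetration summarized in Figures 2 and 3 is consistent with standard plane-wave attenuation in a lossy dielectric, where penetration depth is the reciprocal of the attenuation constant. The sketch below is not taken from Wu's review; the skin-like permittivity and conductivity values are illustrative stand-ins, not measured data.

```python
import math

# Rough illustration (not from the reviewed studies) of why higher frequencies
# penetrate tissue less deeply: depth is the inverse of the textbook plane-wave
# attenuation constant in a lossy dielectric. Tissue parameters are invented
# skin-like stand-ins, not measurements.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
MU0 = 4e-7 * math.pi    # vacuum permeability, H/m

def penetration_depth_mm(freq_hz, eps_r, sigma):
    """Field 1/e penetration depth (mm) of a plane wave in a lossy dielectric."""
    omega = 2 * math.pi * freq_hz
    eps = eps_r * EPS0
    loss_tangent = sigma / (omega * eps)
    alpha = (omega * math.sqrt(MU0 * eps / 2)
             * math.sqrt(math.sqrt(1 + loss_tangent**2) - 1))
    return 1000.0 / alpha  # convert meters to millimeters

# Illustrative skin-like parameters (relative permittivity, conductivity in S/m):
for label, f, eps_r, sigma in [("6 GHz", 6e9, 35.0, 4.0),
                               ("24 GHz", 24e9, 18.0, 22.0),
                               ("60 GHz", 60e9, 8.0, 36.0)]:
    print(f"{label}: ~{penetration_depth_mm(f, eps_r, sigma):.2f} mm")
```

With these assumed parameters, the computed depth falls from several millimeters at 6 GHz to a fraction of a millimeter at 60 GHz, mirroring the trend the reviewed models predict: mmWave energy deposits almost entirely at the skin surface.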



radiation. The potential non-thermal effects under investigation include carcinogenesis, neurological damage, and harm to the immune system, among other concerns.

Non-Thermal Effects

Since the advent of preliminary 5G technology in 2015, tens of thousands of peer-reviewed studies have been published regarding the biological implications of 5G in particular and non-ionizing RFR in general. Many of these studies present contradictory or inconclusive results, and further research is necessary to determine with certainty how RFR affects the human body. More specifically, the existing evidence suggests that RFR exposure could have adverse effects on reproductive, immune, neurological, and cognitive health, as well as possible carcinogenic effects. The available findings seem sufficient to demonstrate the possibility of non-thermal biological effects and to justify a cautionary approach in the development of new cellular networks. The following review examines results from the most widely cited studies investigating the biological effects of 5G and non-ionizing radiation. Studies published prior to the year 2000 are not included, given the recency of developments in cellular technology. Articles have been selected using the PubMed bibliographic database with keywords "electromagnetic fields"; "cellular phone"; "mobile phone"; "base station"; "RF-EMF"; "radiofrequency"; "millimeter waves"; "wi-fi"; "MMW"; "5G"; and "cellular networks," coupled with keywords from each area of health effect (i.e., "carcinogenic" or "reproductive").

Carcinogenic Effects: Cancer has been one of the most widely studied health concerns with regard to cellular and wireless radiation. One of the largest RFR studies to date, the 2018 National Toxicology Program report commissioned by the FDA, found "clear evidence" for an association between radiation exposure and the development of tumors in the hearts of male rats.
The study also demonstrated an increase in DNA damage associated with RF radiation exposure. However, the researchers stress that heart cancer was "only seen in 5–6% of rats exposed to a higher power level—four times higher than the maximum human exposure [and] should not be directly extrapolated to human cell phone usage." Still, the researchers urge adherence to the FDA guidelines on cautionary reduction of cell phone exposure.13,14


In 2018, the Ramazzini Institute in Italy conducted the largest long-term study ever performed in rats on the health effects of RFR. The results aligned with those of the NTP study; both studies found evidence for increased tumors in exposed rats, and the tumors were similar to those observed in some human epidemiological studies. While these data cannot yet be applied to humans, the researchers did suggest that the results merit re-evaluation of IARC conclusions regarding the carcinogenic potential of RFR in humans.15 The NTP and Ramazzini Institute studies are two of the most recent publications on carcinogenicity, and they are prominently referred to in current policy decisions. However, many earlier studies found modest associations between cancers and RFR exposure.16,17,18,19 Some scientists maintain that small sample sizes, biases, or experimental errors inhibit a causal interpretation of these data. A number of case-control studies, especially those investigating the association between prior cellular telephone use and malignant tumors in cancer patients, found no significant increase in the risk of cancer for individuals who reported higher-than-average use of their cellular telephone.20,21,22,23


Nearly every carcinogenicity publication reviewed here concludes with some degree of ambiguity. Many scientists cite the extended path of carcinogenicity research on tobacco to support the statement that carcinogens often take many years to be confirmed, even when preliminary evidence gives reason for caution.24 Accordingly, many scientists urge legislators to temporarily suspend or limit the development of 5G networks until the body of literature grows.

Reproductive Effects: As with carcinogenicity, recent literature regarding effects of RFR on the reproductive system varies in its conclusions. However, the proportion of studies reporting adverse health effects is significant enough to warrant caution. Out of thirteen peer-reviewed reproductive studies from 2015 and later, all report adverse reproductive effects of RFR. Among these adverse effects are decreases in sperm count, sperm motility, and levels of testosterone, all of which are related to reduced reproductive ability. However, multiple studies also found no significant effects on other indicators of reproductive ability, including sperm vitality,



DNA structure, or testosterone levels.26


The methods of exposure, including intensity, duration, and RFR source, vary between studies. Some reproductive studies investigate consenting men in fertility clinics, while others draw data from rats or mice. Understandably, variation in conclusions is common, but a lack of consensus does not necessarily mean a lack of harm; nor, of course, do the studies prove the definitive existence of adverse health effects. What the scientific literature does show is that there are biological mechanisms for RFR interaction with the reproductive system beyond mere heating of the tissue.27 Tissue heating is the only health effect considered and curtailed by the FCC maximum exposure limits and compliance testing. A common argument against the potential for adverse reproductive effects of RFR is that the observed decreases in sperm count are temporary and simply a consequence of short-term temperature elevation. However, the studies reviewed here involve tissue exposure durations and intensities that model real-world scenarios. In a world where RFR exposure is increasing, even "temporary" effects of exposure warrant discussion.

Cognitive Effects: The effects of RFR exposure on cognitive function are particularly ambiguous. A broad 2017 meta-analysis of radio frequency exposure and cognitive function found no significant association between the two.28 Recent animal model studies tend to reflect this lack of association.29,30 However, animal model RFR exposure in a prenatal environment is linked to cognitive deficits later in life.31,32,33 Such a result

makes up part of a wider call for legislation that protects particularly vulnerable populations from 5G and RFR radiation through setback requirements near daycares and residences. A number of peer-reviewed studies published within the last three years suggest adverse effects of RFR exposure on long-term memory and spatial reasoning, as well as increased hyperactivity, headaches, and fatigue.34,35,36,37 However, when it comes to the assessment of subjective symptoms like hyperactivity, headaches, and fatigue, confounding variables (such as cell phone usage among adolescent study participants) tend to lessen the validity of these associations.38 Though these cognition-related studies do prove particularly vulnerable to confounding evidence, some scientists deem the results consistent enough to warrant caution and further investigation.

Millimeter-Wave Technology: The preceding sections investigated exposure to all types of non-ionizing (cellular) radiation because the new 5G networks will build substantially upon these existing wireless bands. An initially small and urban-centric proportion of 5G service will utilize higher frequency millimeter waves. Though high frequency mmWaves do not penetrate human skin, a wide variety of systemic effects arising from wave-skin interaction have been reported. Five high-profile studies since 2008 have reported altered gene expression as a result of mmWave exposure,39,40,41,42,43 while five more report mmWave-influenced changes in the function of neuromuscular systems and the endoplasmic reticulum.44,45,46,47,48 A 2016 study presents evidence to suggest that even in the absence of deep penetration, surface-level

Figure 4: Health-Related Small Cell and 5G Legislation From NCSL Data (generated using mapcharts.net).55




effects on glycolysis lead to changes in gene expression.49 In a 2014 study that found 665 altered genes as a result of mmWave exposure, the researchers concluded that current exposure limits (informed by the International Commission on Non-Ionizing Radiation Protection, or ICNIRP) are likely "too permissive to prevent biological response."50 In general, the researchers do not deny the existence of non-thermal biological effects associated with mmWave exposure, even at levels defined as "safe" by FCC guidelines. When presented with a summary of the relevant literature, the Chairman of the FCC released a statement affirming that the present safety standards as recommended by the ICNIRP and other international governing bodies would stay in effect.51 The current ICNIRP guidelines were initially established in 1998 but were reaffirmed in 2009 on the basis of research from reviews published by the World Health Organization (2007), the UK Health Protection Agency (2006, 2008), and the ICNIRP itself (2003).52 These studies offer evidence for a lack of adverse biological effects in association with "safe" levels of non-ionizing RFR exposure. However, the FCC has not updated its maximum safe exposure threshold since the 2015 advent of mmWaves and 5G technology. There is not sufficient scientific evidence to prove that exposure to RFR is entirely safe, nor is there sufficient evidence to prove that exposure comes with adverse health effects. Nevertheless, because data indicate the possibility of non-thermal health effects, many members of the scientific community urge legislators to adopt a cautionary approach in the development of new cellular networks like 5G.

Section 2: Health-Related 5G Legislation

When it comes to direct legislation on the placement of 5G small cells and the construction of new cell towers, state governments are largely bound by national preemption and FCC regulations. However, some regulatory actions remain possible, and this section reviews those that pertain specifically to the potential health effects of 5G. Since 2017, 44 U.S. states have passed


legislation pertaining to the development of 5G cellular networks (Figure 4).53 In most of the remaining six, legislation is currently being drafted, and many municipalities have passed their own legislation. Of the states where legislation has been passed, 12 have enacted legislation pertaining specifically to the potential health effects of 5G technology.54 In general, health-related legislation involves some combination of task-force establishment and the commission of a study. In Louisiana, Massachusetts, New Hampshire, New Jersey, and Oregon, reports on the implications of 5G technology have been commissioned.56 Each of these reports will include some reference to the potential health effects of 5G technology. Three of these reports had due dates prior to March 1st, 2020, while the remaining two are still to be published. A similar bill has been drafted, but not yet passed, in New York.57


Perhaps the most in-depth report has been commissioned by the New Hampshire state legislature.58 There, Bill 522 establishes a commission to study the environmental and health effects of 5G technology. The bill language includes research questions such as, "why have 1,000s of peer-reviewed studies, including the recently published U.S. Toxicology Program 16-year $30 million study, that are showing a wide-range of statistically significant DNA damage, brain and heart tumors, infertility, and so many other ailments, being ignored by the Federal Communication Commission (FCC)?" and, "why are the FCC-sanctioned guidelines for public exposure to wireless radiation based only on the thermal effect on the temperature of the skin and do not account for the non-thermal, non-ionizing, biological effects of wireless radiation?"59 The commission has released one interim report and is expected to release a final report in the coming months.

Of note, Massachusetts, Montana, and New York are three states where proposed and enacted 5G legislation involves actions outside the commissioning of a study or establishment of a task force. In Massachusetts, HB 1272 requires a registry of wireless facilities to allow for small cell monitoring and to ease access to contact information.60 Another bill, HB 1273, would ban "especially dangerous wireless facilities, emissions, and products."61 Both bills were presented by Senator Donald Humason as the result of a constituent petition, although



Figure 5: Recent 5G Legislation and Municipality Jurisdiction


Sen. Humason did not sponsor the measures. On February 22nd, 2020, the legislation accompanied a study order authorizing the joint Committee on Public Health to investigate current health-related senate documents.62 In Montana, House Joint Resolution 13 would urge Congress to amend the federal Telecommunications Act to account for the health effects of siting small cell network equipment in residential areas. After passing in the House, HJR 13 was denied in a standing Senate committee.63 Another Montana bill, HB 469, would restrict the siting of small cell network equipment near schools on account of health effects, but the bill died in process in April 2019.64 In New York, SB 3046 would require notification of nearby residents and municipalities prior to the siting of wireless facilities. On February 1st, 2020, this bill was referred to the Committee on Energy and Telecommunications, where it awaits further action.65

Health-related legislation on 5G technology is clearly still fairly new. Massachusetts is perhaps the current leader in the introduction of such legislation, and committee referral has been the most prominent bill destination. Proposed legislation that targets the legitimacy of federal rulings or that directly restricts the construction of wireless facilities appears to have had little success. However, health-related bills that invite municipalities to the regulatory scene appear to harbor the best chances of passing into law. Even modest municipality-engaging interventions, such as the requirement of resident notification or the registration of small cells for public reference, have been well received by health advocacy groups.

Of the 44 U.S. states that have passed 5G legislation, five include language associated with increased municipal regulatory control. In these cases, state laws do not directly limit 5G implementation. Instead, local authorities are able to account directly for any health-related (or aesthetic, environmental, etc.) concerns. Figure 5 is a map showing the distribution of this municipality-engaging legislation. The five states highlighted in Figure 5 (California, Kansas, Arkansas, Wisconsin, and Maine) are simply those states where 5G legislation explicitly grants increased power to municipal governments; local-level legislation is concentrated in, but not limited to, these states. California municipalities have been some of the most active. In Berkeley, the 2018 "Right to Know" ordinance requires cellular retailers to inform consumers that cell phones emit radiation and that "if you carry or use your phone in a pants or shirt pocket or tucked into a bra when the phone is ON and connected to a wireless network, you may exceed the federal guidelines for exposure to RF radiation."66 The ordinance was challenged and upheld in a July court case, with the panel explaining that the public health issues at hand were "substantial," that the "text of the Berkeley notice was literally true," and that it was "uncontroversial."67 In Los Altos, California, siting guidelines for the construction of small cells assuage some health concerns by mandating that cells be constructed at least 500 feet from schools and at least 1,500 feet from one another. Installation of small cells on public utility easements in residential neighborhoods is prohibited, and a 500-foot setback is imposed near multifamily residences in commercial districts.68 In San Diego and Marin County, CA, recent draft


ordinances require that small cells are not located within 1,000 feet of schools, child care centers, hospitals, or churches.69 Petaluma, CA has passed an ordinance with similar setback requirements, and it also requires that an encroachment permit is obtained for any work in the public right-of-way.70 In each of these ordinances, “protect residents against adverse health effects” is cited as a primary aim. A Warren, CT ordinance distinguishes between rural and urban levels of adequate coverage and exposure levels, as well as limits the total number of towers in the area.71 While the Warren ordinance has been successful, a Burlington, MA attempt to charge cellular companies annual recertification fees simply deterred the cell provider in question from pursuing small cell development in Burlington.72 A Little Silver, NJ ordinance requires notification of residents within 500 feet of construction sites and mandates that telecommunications companies prove existing wireless infrastructure does not accommodate regional needs.73 Local-level ordinances that mandate fees or extend cell tower application processes for telecom companies have found relative success in limiting the construction of 5G small cells, especially since these companies, eager to establish the new technology, tend to simply switch targets rather than comply with fees or work through cumbersome application processes. In general, local-level ordinances that impose setback, notification, “right to know,” and/or equipment-burial requirements tend to allow for some small cell construction while also beginning to mitigate potential adverse health effects. State-level legislation pertaining to an increase

in municipal control is also one of the most widely called-for legislative movements among health and safety advocates.74 Further legislative requests and recommendations from concerned constituents, both in the U.S. and internationally, are addressed in the following section.
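Setback ordinances of the kind described above reduce to simple distance constraints, which makes their effect easy to see concretely. The sketch below is purely illustrative: the 500-foot and 1,500-foot thresholds come from the Los Altos guidelines discussed in this section, but the function names, coordinate inputs, and flat-plane distance model are assumptions for demonstration, not any municipality's actual permitting tooling.

```python
import math

# Los Altos-style setback thresholds, in feet (from the guidelines described above)
MIN_FT_FROM_SCHOOL = 500
MIN_FT_BETWEEN_CELLS = 1500

def distance_ft(a, b):
    """Straight-line distance between two (x, y) points, in feet.
    (A real review would use geodesic distances on parcel maps.)"""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def site_complies(candidate, schools, existing_cells):
    """Check a proposed small-cell location against the two setback rules."""
    if any(distance_ft(candidate, s) < MIN_FT_FROM_SCHOOL for s in schools):
        return False  # too close to a school
    if any(distance_ft(candidate, c) < MIN_FT_BETWEEN_CELLS for c in existing_cells):
        return False  # too close to another small cell
    return True

# Illustrative checks with made-up coordinates:
print(site_complies((0, 0), schools=[(400, 0)], existing_cells=[]))           # False: school 400 ft away
print(site_complies((0, 0), schools=[(600, 0)], existing_cells=[(2000, 0)]))  # True: both setbacks satisfied
```

Even this toy version shows why such rules "allow for some small cell construction" rather than barring it outright: they constrain placement geometry without capping the total number of sites.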

Section 3: Domestic and International 5G Opposition and Legislative Responses

Constituent concerns surrounding 5G technology are growing across the nation. While a small number of these groups encourage caution on the basis of sound science, many 5G "education" campaigns are riddled with misinformation and led by groups with questionable advocacy track records (including support of the scientifically disproven anti-vaccination movement). Claire Edwards, a retired editor on the United Nations staff, has put together a chronology of prominent 5G push-back around the world and in the U.S.75 Her list, along with an interview with RF-EMF health advocate, grassroots organizer, and former tech writer Cecelia Ducaine of Wireless Education Massachusetts, is among the sources that inform this section.

“Online, social networking pages dedicated to the issue of 5G and its health effects have also grown immensely in the past year.”

At a local level, constituent phone calls and attendance at town meetings regarding 5G establishment are increasingly regular occurrences. Online, social networking pages dedicated to the issue of 5G and its health effects have also grown immensely in the past year.76 It is on these social networking pages that the thin line between science-backed caution and outright disinformation is consistently crossed. The substantial proliferation of anti-5G Facebook pages and groups is shown in Figure 6. The graph tracks 5G Facebook activity from December 2018 through May 2019; the blue line shows Facebook pages, while the green shows Facebook groups.

Figure 6: Increases in Online Popularity of Anti-5G Campaigns.77

“Some grassroots organizations, such as the Americans for Responsible Technology, have even published online templates for legislation that would mitigate the potential health effects of 5G technology.”

These Facebook groups, mirrored by a rise in related Twitter hashtags and Instagram pages, mark the existence of numerous grassroots movements in opposition to 5G.78 In the legal realm, the groups tend to call for three courses of action from state governments: a moratorium on 5G development, a transfer of regulatory control to municipalities, and a public information campaign. No state government has declared a moratorium on 5G rollout, as recent FCC regulations preempt such state-level action. Since 2018, a number of states have considered or enacted legislation that widens local jurisdiction on 5G development. However, more states have enacted legislation to centralize and streamline 5G regulation at the state level. Finally, municipalities have passed “Right to Know” ordinances in an effort to provide the public with pertinent information. The Massachusetts legislature published an RF-EMF safety factsheet in 2016 for public reference, but the sheet has since been removed from their website.79 Some grassroots organizations, such as Americans for Responsible Technology, have even published online templates for legislation that would mitigate the potential health effects of 5G technology.80 The Massachusetts nonprofit Wireless Education has also built a half-hour Schools & Families educational course, as well as a Corporate course, both of which are intended to spread awareness of the health effects that may accompany 5G technology.81

Online campaigns, however, are only the beginning of public pushback to 5G technology. In March of 2019, 62 entities and municipalities across the U.S. filed a class action lawsuit against the FCC, aiming to dismiss the FCC ruling that would streamline the deployment of wireless facilities.82 In September of 2019, a class action lawsuit was brought against Apple and Samsung on the basis of overexposure to RF radiation.83 Numerous citizens’ lawsuits against the FCC and ADA are underway in the U.S., including lawsuits filed as recently as February of 2020 by the Children's Defense Fund and by Dr. Devra Davis, Nobel co-laureate on climate change and founder of the Environmental Health Trust.84 As of yet, no prominent citizen lawsuit on 5G technology has been filed against a state government. These lawsuits mainly target the FCC in an effort to mediate the overwhelming control that chairman Ajit Pai and the agency exercise in 5G regulation. However, the citizens involved in these lawsuits are often the same people who push for state-level legislation. Many groups, like the one led by Dr. Davis, are composed of scientists and doctors from organizations like the National Institute of Environmental Health Sciences (NIEHS), the National Toxicology Program (NTP), and the International Agency for Research on Cancer, to name a few.85

On an international level, one of the broadest appeals against the establishment of 5G technology takes the form of a letter sent to the EU in 2017 and signed by more than 260 scientists and medical doctors. The signees request a moratorium on the deployment of 5G until the associated health risks are fully investigated by industry-independent scientists. The appeal and four rebuttals to the EU over a period of more than two years have garnered no legislative action. Some suggest, without evidence, that the reason for the inaction is the reliance of international governments on regulatory bodies like the ICNIRP and the Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR), both of which employ senior members with personal ties to the cellular technology industry.86 Recent work by grassroots opposition groups has investigated ties between regulatory agencies and the industry. The results of such investigation, as well as a record of lobbying action and evidence in support of monetary industry-agency relations, are detailed in a 2014 report by Harvard professor Norm Alster.87 Internationally, the UN has also been approached by anti-5G lobbyists.
However, it maintains a pro-development position on the grounds of ICNIRP recommendations and states that “the primary responsibility for protecting the public from potential harmful effects of electromagnetic fields remains with the Member States.”88 Importantly, when U.S. regulatory agencies refer to EU actions in defense of limited regulation on 5G, it must be taken into account that the EU does not regard it as its role to account for any potentially harmful effects of 5G technology. In many European nations, legislators are responding to constituent concerns by adopting legislation that slows or suspends the rollout of 5G technology until more definitive scientific conclusions are reached. In March of 2019, the Belgian Environment Minister announced the halt of a planned 5G pilot project, stating that “the people of Brussels are not guinea pigs whose health I can sell at a profit.”89 In April, the Geneva government adopted a motion for a moratorium on 5G and called on the WHO to monitor independent scientific studies to determine the harmful effects of 5G.90 Many European municipalities, including the city of Glastonbury in England, the Swiss canton of Vaud, and the cities of Cuneo and Caserta in Italy, have adopted motions opposing 5G on the basis of health effects.91 Unlike the U.S., where health-based moratoriums are strongly opposed by the FCC, public opposition to 5G development in Europe has disrupted the activity of multiple large telecom companies. Europe’s biggest carrier, Deutsche Telekom AG, as well as mobile operator Orange Belgium and Swiss carrier Sunrise Communications AG, report having to modify their 5G development in response to health concerns that have resulted in public protests and strict municipal-level emissions standards.92

Often, parties in opposition to 5G reference prominent WHO and EU guidelines for legislation on ambiguous health issues. In 1999, the European Union adopted the Precautionary Principle (PP). The PP states that governing bodies should “take prudent action when there is sufficient scientific evidence (but not necessarily absolute proof) that inaction could lead to harm.”93 Another widely referenced guide is the ALARA principle. ALARA is an acronym for “as low as (is) reasonably achievable,” and the principle urges governing bodies to make every reasonable effort to maintain exposures to ionizing radiation as far below the dose limits as is practical.94 Many concerned parties support these principles, maintaining that regulatory and governing bodies should adhere to them more closely. A graphic (Figure 7) from Dr. Leeka Kheifets of the WHO presents a number of guiding regulatory principles as they relate to the levels of knowledge and potential for harm surrounding RF-EMF exposure.95 The WHO, in conjunction with the Centers for Disease Control and Prevention (CDC) and other international bodies, maintains that the Precautionary Principle and related guidelines (pictured in Figure 7) should be adhered to in the development of new drugs, technologies, or clinical practices that have the potential to generate adverse health effects. The existing body of scientific evidence does not definitively confirm the existence of 5G-related adverse health effects, but it does provide evidence for the possibility of such harm at levels as severe as the development of cancer. Accordingly, scientists and others who advocate for the precautionary treatment of 5G development are advocating for regulation in line with the WHO and CDC.

Figure 7: WHO Guiding Regulatory Principles.96

“Unlike the U.S., where health-based moratoriums are strongly opposed by the FCC, public opposition to 5G development in Europe has disrupted the activity of multiple large telecom companies.”

The precautionary measures that anti-5G organizations advocate for include, but are not limited to: a moratorium on the rollout of 5G until conclusive research emerges (during which resources could be devoted towards the upgrade of 2G and slower networks in rural areas), a public information campaign to educate consumers on the potential health effects of 5G, setback requirements for small cells from schools and residential areas, mandates for recurring demonstration of small cell safety and compliance when installed in the public right-of-way, considerations of wired fiber optic networks (especially in rural areas), a legislative petition to the FCC with a request for their response to recent scientific publications, and/or a temporary halt on any current legislation enabling expedited establishment of 5G small cells.

Section 4: Conclusion

Without a doubt, the advent of 5G technology promises exciting new innovations in many industries and will result in faster internet speeds and more reliable broadband access for many individuals. However, the new technology is not without its complications. Extensive increases in speed and reliability are made possible through use of a wider range of broadband channels, operating at a wider range of frequencies, than in previous network generations. The novelty of the technology raises questions about its safety; consumers and scientists alike have requested that the Federal Communications Commission update its current 25-year-old safety guidelines for cellular networks. The FCC maintains that the new 5G technology poses no health risks and warrants no changes to safety guidelines.

“As more and more research becomes available showing that non-ionizing radiation might interact differently with the human body than was once thought, many urge legislators and governing bodies to slow the rollout of 5G technology until additional research is completed.”

While the novelty of 5G technology means that much relevant research is still in progress, an emerging body of scientific evidence suggests that exposure to the type of radiation utilized by 5G and some earlier network generations might affect humans differently than previously thought. In particular, some scientists suggest that the discovery of mechanisms for internal damage counters the commonly held belief that radiation from cellular networks (non-ionizing radiation) only affects the body by heating the skin. As of yet, studies that suggest additional health risks are preliminary. Many of them are conducted on animals, which means that their results may not be applicable to humans, but these studies do provide a baseline for future in-vivo research. As more and more research becomes available showing that non-ionizing radiation might interact differently with the human body than was once thought, many urge legislators and governing bodies to slow the rollout of 5G technology until additional research is completed.

States across the country have taken various positions on the rollout of 5G technology. The FCC retains almost all authority when it comes to the regulation of telecom companies; state governments have a limited menu of options to choose from, including siting fees and setback requirements, if they attempt to temporarily slow 5G rollout. Nevertheless, a number of states have commissioned studies on the health implications of 5G technology. Some have shifted their limited regulatory control to municipalities, allowing local governments to decide for themselves whether or not they will attempt to discourage 5G rollout in their area. And as some governments approach 5G with caution, many others pass laws to expedite the development of 5G networks in their communities. The newest generation of cellular service brings with it a host of complicated scientific, social, and legislative issues. The research continues to develop, and policy discussions are in progress. Despite a wide range of opinions, I believe it is the hope of all involved that these and future technological developments will help create a more advanced and better-connected world, without sacrificing human health or safety.

References

1. Health Effects of 5G. 2019. https://www.adweek.com/wp-content/uploads/2019/12/5g-cause-cancer-CONTENT-2019.jpg.

2. “Synopsis of IEEE Std C95.1™-2019 ‘IEEE Standard for Safety Levels With Respect to Human Exposure to Electric, Magnetic, and Electromagnetic Fields, 0 Hz to 300 GHz’” 7 (December 11, 2019): 171346–55. https://doi.org/10.1109.

3. “World Health Organization Health Impact Assessment – Precautionary Principle,” 2020. https://www.who.int/hia/examples/overview/whohia076/en/.

4. FCC Radio Frequency Safety.

5. “Centers for Disease Control and Prevention – Radiation and Your Health, Ionizing Radiation,” 2020. https://www.cdc.gov/nceh/radiation/ionizing_radiation.html.

6. “Global Mobile Supplier Alliance: 5G Licensing Developments Worldwide,” 2019. https://gsacom.com/paper/spectrum-for-5g-jan-2019/.

7. “FCC Wireless Devices and Health Concerns,” 2020. https://www.fcc.gov/consumers/guides/wireless-devices-and-health-concerns.

8. Wu, T, T. S. Rappaport, and C. M. Collins. “The Human Body and Millimeter-Wave Wireless Communication Systems: Interactions and Implications.” 2015 IEEE International Conference on Communications (ICC), June 2015. https://arxiv.org/pdf/1503.05944.pdf.

9. FCC Radio Frequency Safety. “FCC Radio Frequency Safety,” 2020. https://www.fcc.gov/general/radio-frequency-safety-0.

10. Hasgall PA, Di Gennaro F, Baumgartner C, Neufeld E, Lloyd B, Gosselin MC, Payne D, Klingenböck A, Kuster N, “IT’IS Database for thermal and electromagnetic parameters of biological tissues,”Version 4.0, May 15, 2018, DOI: 10.13099. itis.swiss/database

11. Wu, et al.

12. Ibid.

13. “U.S. Food and Drug Administration: Reducing Radio Frequency Exposure from Cell Phones,” 2020. https://www.fda.gov/radiation-emitting-products/cell-phones/reducing-radio-frequency-exposure-cell-phones.

14. “NTP Technical Report on the Toxicology and Carcinogenesis Studies in Sprague Dawley SD Rats Exposed to Whole-Body Radio Frequency Radiation.” TR 595. Research Triangle Park, NC 27709: U.S. National Toxicology Program, November 2018. https://ntp.niehs.nih.gov/ntp/htdocs/lt_rpts/tr595_508.pdf?utm_source=direct&utm_medium=prod&utm_campaign=ntpgolinks&utm_term=tr595.

15. Falcioni, L, I Belpoggi, and V Strollo. “Report of Final Results Regarding Brain and Heart Tumors in Sprague-Dawley Rats Exposed from Prenatal Life until Natural Death to Mobile Phone Radiofrequency Field Representative of a 1.8 GHz GSM Base Station Environmental Emission.” Environmental Research 165 (2018): 469–503. https://doi.org/10.1016.

16. Momoli, F, L McBride, and M Parent. “Probabilistic MultipleBias Modeling Applied to the Canadian Data From the Interphone Study of Mobile Phone Use and Risk of Glioma, Meningioma, Acoustic Neuroma, and Parotid Gland Tumors.” American Journal of Epidemiology 186, no. 7 (October 1, 2017): 885–93. https://doi.org/10.1093/aje/kwx157.

17. “Mobile Phone Use and Risk of Brain Neoplasms and Other Cancers: Prospective Study.” International Journal of Epidemiology 42, no. 3 (May 8, 2013): 792–802. https://doi.org/10.1093/ije/dyt072.

18. Lerchl, Alexander, Melanie Klose, and Karen Grote. “Tumor Promotion by Exposure to Radiofrequency Electromagnetic Fields below Exposure Limits for Humans.” Biochemical and Biophysical Research Communications 459, no. 4 (April 17, 2015): 585–90. https://doi.org/10.1016/j.bbrc.2015.02.151.

19. "Brain Tumour Risk in Relation to Mobile Telephone Use: Results of the INTERPHONE International Case–Control Study.” International Journal of Epidemiology 39, no. 3 (May 5, 2010): 675–94. https://doi.org/10.1093/ije/dyq079.

20. Hardell, L, A Hallquist, and Mild Hansson. “Cellular and Cordless Telephones and the Risk for Brain Tumours.” European Journal of Cancer Prevention 11, no. 4 (August 2011): 377–86.

21. Hauri, Dimitri, Ben Spycher, and Anke Huss. “Exposure to Radio-Frequency Electromagnetic Fields From Broadcast Transmitters and Risk of Childhood Cancer: A Census-Based Cohort Study.” American Journal of Epidemiology 179, no. 7 (April 1, 2014): 843–51. https://doi.org/10.1093/aje/kwt442.

22. Yoon, Songyi, Jae-Wook Choi, and Nam Kim. “Mobile Phone Use and Risk of Glioma: A Case-Control Study in Korea for 2002-2007.” Environmental Health and Toxicology 30 (2015). https://doi.org/10.5620/eht.e2015015.

23. Coureau, Gaelle, Ghislaine Bouvier, and Pierre Lebailly. “Mobile Phone Use and Brain Tumours in the CERENAT Case-Control Study.” BMJ Occupational and Environmental Medicine 2014, no. 71 (October 9, 2014): 514–22.

24. Carlberg, Michael, and Lennart Hardell. “Evaluation of Mobile Phone and Cordless Phone Use and Glioma Risk Using the Bradford Hill Viewpoints from 1965 on Association or Causation.” Edited by Steven Vleeschouwer. BioMed Research International 2017 (March 16, 2017). https://doi.org/10.1155/2017/9218486.

25. Jafar, Farah, Khairul Osman, and Nur Ismail. “Adverse Effects of Wi-Fi Radiation on Male Reproductive System: A Systematic Review.” The Tohoku Journal of Experimental Medicine 248, no. 3 (July 26, 2019): 169–79. https://doi.org/10.1620/tjem.248.169.

26. Jafar, Farah, Khairul Osman, and Nur Ismail. “Adverse Effects of Wi-Fi Radiation on Male Reproductive System: A Systematic Review.” The Tohoku Journal of Experimental Medicine 248, no. 3 (July 26, 2019): 169–79. https://doi.org/10.1620/tjem.248.169.

27. Merhi, Zaher. “Challenging Cell Phone Impact on Reproduction: A Review.” Journal of Assisted Reproduction and Genetics 29, no. 4 (January 4, 2012): 293–97.

28. Zubko, O, R Gould, and H Gay. “Effects of Electromagnetic Fields Emitted by GSM Phones on Working Memory: A Meta-analysis.” International Journal of Geriatric Psychiatry 32, no. 2 (September 20, 2016). https://doi.org/10.1002/gps.4581.

29. Stasinopoulou, M, A Fragopoulou, and A Stamatakis. “Effects of Pre- and Postnatal Exposure to 1880–1900 MHz DECT Base Radiation on Development in the Rat.” Reproductive Toxicology 65 (October 2016): 248–62.

30. Klose, Melanie, Karen Grote, and Oliver Spathmann. “Effects of Early-Onset Radiofrequency Electromagnetic Field Exposure (GSM 900 MHz) on Behavior and Memory in Rats.” Radiation Research 182, no. 4 (September 24, 2014): 435–47. https://doi.org/10.1667/RR13695.1.

31. Ibid.

32. Di Ciaula, Agostino. “Towards 5G Communication Systems: Are There Health Implications?” International Journal of Hygiene and Environmental Health 221, no. 3 (February 2, 2018): 367–75. https://doi.org/10.1016/j.ijheh.2018.01.011.

33. Chen, Chuanhiu, Qinlong Ma, and Chuan Liu. “Exposure to 1800 MHz Radiofrequency Radiation Impairs Neurite Outgrowth of Embryonic Neural Stem Cells.” Scientific Reports, no. 5103 (May 29, 2014). https://www.nature.com/articles/srep05103.

34. Barthelemy, Amelie, Amandine Mouchard, and Marc Bouji. “Glial Markers and Emotional Memory in Rats Following Acute Cerebral Radiofrequency Exposures.” Environmental Science and Pollution Research 23 (September 30, 2016). https://link.springer.com/article/10.1007/s11356-016-7758-y.

35. Tang, Jun, Yuan Zhang, and Qianwei Chen. “Exposure to 900 MHz Electromagnetic Fields Activates the Mkp-1/ERK Pathway and Causes Blood-Brain Barrier Damage and Cognitive Impairment in Rats.” Brain Research 1601 (March 19, 2015): 92–101.

36. Kim, Hwan, Da-Hyeon Yu, and Yang Hoon Huh. “Long-Term Exposure to 835 MHz RF-EMF Induces Hyperactivity, Autophagy and Demyelination in the Cortical Neurons of Mice.” Scientific Reports, January 20, 2017. https://www.nature.com/articles/srep41129.

37. Ibid.

38. Birks, Laura, Monica Guxens, and Eleni Papadopoulou. “Maternal Cell Phone Use during Pregnancy and Child Behavioral Problems in Five Birth Cohorts.” Environment International 104 (April 7, 2017): 122–31. https://doi.org/10.1016/j.envint.2017.03.024.

39. Habauzit, Dennis, Catharine Quement, and Yves Drean. “Transcriptome Analysis Reveals the Contribution of Thermal and the Specific Effects in Cellular Response to Millimeter Wave Exposure.” Public Library of Science, October 10, 2014. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193780/.

40. Le Quement, Catharine, Christophe Nicholas, and Maxim Zhadobov. “Whole-genome Expression Analysis in Primary Human Keratinocyte Cell Cultures Exposed to 60 GHz Radiation.” Bioelectromagnetics 33, no. 2 (August 3, 2011). https://doi.org/10.1002/bem.20693.

41. Millenbaugh, Nancy, Caleb Roth, and Roza Sypniewska. “Gene Expression Changes in the Skin of Rats Induced by Prolonged 35 GHz Millimeter-Wave Exposure.” Radiation Research 169, no. 3 (March 3, 2008): 288–300.

42. Mahamoud, Yonis, Mexaine Aiete, and Denis Habazuit. “Additive Effects of Millimeter Waves and 2-Deoxyglucose Co-Exposure on the Human Keratinocyte Transcriptome.” Public Library of Science 11, no. 8 (August 16, 2016). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986955/.

43. Le Quement, Catharine, Christophe Nicholas, and Maxim Zhadobov. “Whole-genome Expression Analysis in Primary Human Keratinocyte Cell Cultures Exposed to 60 GHz Radiation.” Bioelectromagnetics 33, no. 2 (August 3, 2011). https://doi.org/10.1002/bem.20693.

44. Cosentino, Katia, Amerigo Beneduci, and Alfonsia Ramundo-Orlando. “The Influence of Millimeter Waves on the Physical Properties of Large and Giant Unilamellar Vesicles.” Journal of Biological Physics 39 (February 2, 2013): 395–410.

45. Alekseev, Stanislav, Oleg Gordiienko, and Alexander Radzivesky. “Millimeter Wave Effects on Electrical Responses of the Sural Nerve in Vivo.” Bioelectromagnetics 31, no. 3 (September 21, 2009). https://doi.org/10.1002/bem.20547.

46. Donato, Loreto, Maria Cataldo, and Pasquale Stano. “Permeability Changes of Cationic Liposomes Loaded with Carbonic Anhydrase Induced by Millimeter Waves Radiation.” Radiation Research 178, no. 5 (November 1, 2012): 437–46. https://doi.org/10.1667/RR2949.1.

47. Pikov, Victor, Xianghong Arakaki, and Michael Harrington. “Modulation of Neuronal Activity and Plasma Membrane Properties with Low-Power Millimeter Waves in Organotypic Cortical Slices.” Journal of Neural Engineering 7, no. 4 (July 19, 2010). https://iopscience.iop.org/article/10.1088/1741-2560/7/4/045003/meta.

48. Shapiro, Mikhail, Michael Priest, and Peter Siegel. “Thermal Mechanisms of Millimeter Wave Stimulation of Excitable Cells.” Biophysical Journal 104, no. 12 (June 18, 2013): 2622–28.

49. Mahamoud, Yonis, Mexaine Aiete, and Denis Habazuit. “Additive Effects of Millimeter Waves and 2-Deoxyglucose Co-Exposure on the Human Keratinocyte Transcriptome.” Public Library of Science 11, no. 8 (August 16, 2016). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986955/.

50. Habauzit, Dennis, Catharine Quement, and Yves Drean. “Transcriptome Analysis Reveals the Contribution of Thermal and the Specific Effects in Cellular Response to Millimeter Wave Exposure.” Public Library of Science, October 10, 2014. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193780/.

51. “Proposed Changes in the Commission’s Rules Regarding Human Exposure to Radiofrequency Electromagnetic Fields; Reassessment of Federal Communications Commission Radiofrequency Exposure Limits and Policies.” Federal Communications Commission, December 4, 2019. https://www.fcc.gov/document/fcc-maintains-current-rf-exposure-safety-standards.

52. “ICNIRP Statement on the Guidelines for Limiting Exposure to Time-Varying Electric, Magnetic, and Electromagnetic Fields.” Health Physics 97, no. 3 (2009): 257–58.

53. National Conference of State Legislatures 5G and Small Cell Legislation. “National Conference of State Legislatures 5G and Small Cell Legislation.” Accessed February 12, 2020. https://www.ncsl.org/research/telecommunications-andinformation-technology/mobile-5g-and-small-cell-legislation. aspx


WINTER 2020


DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE




MATERIALS SCIENCE

The "Anti-CRISPR" Protein as New Biological Insurance

BY BRYN WILLIAMS '23

Cover: CRISPR gene editing is making its way from mice to humans as newly discovered "Anti-CRISPR" proteins potentially make CRISPR safer. (Source: Wikimedia Commons)


Introduction

The discovery of the gene-editing CRISPR-Cas system significantly changed the way we approach genetics and medical treatment. It plays a crucial role in efforts to stop certain diseases, especially those with genetic origins such as certain cancers, cystic fibrosis, and sickle-cell anemia. The system uses an enzyme that cuts targeted DNA sequences and renders target genes ineffective. By manipulating the genetic sequence, CRISPR can remove harmful genes and/or add novel ones. Gene editing with CRISPR holds the key to many scientific advancements, particularly in the medical field. However, the system is not perfect. CRISPR's use is limited by a variety of safety concerns spanning from unintended gene editing to biological warfare, where diseases could be genetically altered for higher mortality and morbidity rates. For example, CRISPR is intended to target specific gene sequences, but it can sometimes veer off course and cause significant off-target deletions or lesions.13 Such off-target changes can have significant pathogenic consequences.5 Recently, scientists discovered the "Anti-CRISPR" protein, which could mitigate these limitations and make gene editing significantly safer.2 The ability to turn CRISPR off using the "Anti-CRISPR," or Acr, protein gives scientists the power to control CRISPR and use it more precisely.2 The possibilities for Acrs are extensive and expanding as research remains ongoing.

The discovery of Acrs also provides insight into the evolutionary arms race between bacteria and viruses.6 The two are constantly attacking each other and, as a result, continuously develop new defenses against each other's weapons. With CRISPR-Cas systems found in almost 50% of bacteria and 90% of archaea, it is logical that viruses would


develop a way to fight against such a prevalent weapon that threatens phage survival.6, 10

Figure 1: There are two DNA repair mechanisms: Non-Homologous End-Joining and Homology-Directed Repair. The NHEJ pathway is predominantly used in human cells and results in insertions and deletions.

What is CRISPR?

To understand the various ways Acrs dismantle CRISPR, we first have to understand how CRISPR works. CRISPR was first identified in bacteria as a defense mechanism that chops up the genetic material of invading phages.2 To do this, CRISPR first identifies the invading viral DNA using two short strands of RNA.7 One of these strands, the sgRNA, contains a short sequence that matches part of the viral DNA adjacent to a motif called the PAM sequence.6,7,11 Each of these segments is separated by a short palindromic repeat that causes the DNA to fold in on itself, creating the CRISPR array construction.6 These two RNAs join with the nuclease Cas9 to form the CRISPR complex.6 When the complex binds to the invading viral DNA at the matching sequence, it makes a double-stranded DNA break.6,7 When this break is made, the cell's natural DNA repair mechanisms step in to fill in the broken segment with nucleotides.6 There are two main DNA repair mechanisms in the human cell: homology-directed repair and the non-homologous end-joining pathway.11 Homology-directed repair (HDR) uses homologous sequences from another strand of DNA to replace the missing segment, resulting in precise insertions of genetic material.11 In human cells, however, double-stranded DNA breaks are typically repaired by the non-homologous end-joining (NHEJ) pathway, which is error-prone and produces insertions and deletions.8,11 These errors disrupt typical gene function and allow desired changes to be made in the genome.8 This is how genes can be changed or dismantled. The system has been effective in various treatments of diseases like cancer, but the technology still has limitations.
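The targeting step described above can be sketched in a few lines of code. The sketch below, a toy illustration rather than anything from the article's sources, scans a DNA string for the NGG motif that SpCas9 conventionally recognizes as a PAM and reports the 20-nucleotide protospacer immediately upstream of each hit; the sequence and function name are invented for the example.

```python
# Toy sketch of how a Cas9 target site is located: scan a sequence for NGG
# PAM motifs (the SpCas9 convention) and report the 20-nt protospacer
# immediately upstream of each PAM. The sequence below is invented.

def find_target_sites(dna, protospacer_len=20):
    """Return (position, protospacer, pam) for every NGG PAM that has a
    full-length protospacer upstream of it."""
    sites = []
    for i in range(protospacer_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":  # NGG: any base followed by two guanines
            sites.append((i, dna[i - protospacer_len:i], pam))
    return sites

if __name__ == "__main__":
    dna = "ATGCGTACCTGACTGACTGAATTCCGGTTACGGATCCAGGA"
    for pos, proto, pam in find_target_sites(dna):
        print(pos, proto, pam)
```

A real genome-wide search would also have to scan the reverse strand and rank candidates, but the core matching logic is this simple.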
Off-target deletions often occur when CRISPR-Cas9 recognizes the wrong sequence.1,6 Cleavage by CRISPR-Cas can occur in the PAM-distal region of the sgRNA even when as many as 3 to 5 base pairs are mismatched.13 Furthermore, the choice of sgRNA structure affects the accuracy of CRISPR binding.13 The genotoxicity of CRISPR is especially evident in experiments showing large deletions, up to 9.5 kb, next to the target sequence.9 The frequency of off-target deletions can be as great as 50%.13 CRISPR is an extremely useful tool for removing malfunctioning genes and treating some of the biggest diseases that threaten us today, so the discovery and implementation of a protein that can control CRISPR editing would allow us to expand our use of CRISPR even further.
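The mismatch tolerance described above can be illustrated with a toy screen. The sketch below flags a candidate site as a possible off-target when all of its mismatches against the guide fall in the PAM-distal half and the total count stays at or below a small threshold; the 5-mismatch threshold, the split at position 10, and the sequences are invented simplifications, not a validated off-target model.

```python
# Illustrative off-target screen: a site may still be cleaved when a few
# bases mismatch the guide, especially in the PAM-distal region. All
# parameters here are invented for illustration.

def mismatch_profile(guide, site):
    """Positions (0 = PAM-distal end) where guide and site disagree."""
    assert len(guide) == len(site)
    return [i for i, (g, s) in enumerate(zip(guide, site)) if g != s]

def possible_off_target(guide, site, max_mismatches=5, seed_start=10):
    """Flag a site when every mismatch lies in the PAM-distal half and
    the total number of mismatches is small."""
    mm = mismatch_profile(guide, site)
    return len(mm) <= max_mismatches and all(i < seed_start for i in mm)

guide = "GTACCTGACTGACTGAATTC"
print(possible_off_target(guide, "CTACCTGACTGACTGAATTC"))  # distal mismatch
print(possible_off_target(guide, "GTACCTGACTCACTGAATTC"))  # seed mismatch
```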

(Source: Wikimedia Commons)

Discovery of the “Anti-CRISPR”

Before the discovery of Acrs, the only known way for a virus to escape destruction by the CRISPR-Cas9 system was through point mutations, a random and slow way to change the genome.6 The “Anti-CRISPR” protein was discovered by Bondy-Denomy and his team in 2012, when Bondy-Denomy was infecting bacteria with viruses that should have been destroyed by the bacteria's CRISPR system.1 Yet some of the viruses survived.1 At first, he thought this was a mistake, but he then hypothesized that the viruses might have a way to fight back against the bacteria's CRISPR-Cas9 system.1 This led Bondy-Denomy to sequence the surviving viruses' genomes, and he quickly found a sequence of genes coding for a protein that could inactivate the CRISPR-Cas9 system: the “Anti-CRISPR.”1 To further test the anti-CRISPR protein, the team measured the plaquing efficiency of three “CRISPR-sensitive” phages on a collection of lysogens, bacteria carrying various phage genomes (prophages).1 Higher plaquing efficiency corresponded to increased Acr activity because the phages were able to survive an attack from the hosts' CRISPR systems.1 Through this approach they identified the specific proteins that served as Acrs, eventually discovering over 50, each of which uses a unique strategy to dismantle CRISPR.2
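The logic of the plaquing comparison lends itself to a simple calculation. This sketch computes an efficiency of plaquing (EOP), the ratio of plaques a phage forms on a CRISPR-active host to those it forms on a control host; the counts and the 0.1 cutoff are invented for illustration, not taken from the study.

```python
# Sketch of the plaquing-assay logic: an EOP near 1 suggests the phage
# escapes CRISPR targeting (consistent with Acr activity); a very low EOP
# suggests it is destroyed. All counts below are invented.

def efficiency_of_plaquing(plaques_on_crispr_host, plaques_on_control_host):
    return plaques_on_crispr_host / plaques_on_control_host

phages = {
    "phage_A (no acr gene)": efficiency_of_plaquing(3, 1_000),
    "phage_B (carries acr)": efficiency_of_plaquing(870, 1_000),
}
for name, eop in phages.items():
    verdict = "escapes CRISPR" if eop > 0.1 else "restricted by CRISPR"
    print(f"{name}: EOP = {eop:.3f} -> {verdict}")
```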


Function of “Anti-CRISPR”

Acrs function either by halting gene expression or by stopping gene editing by CRISPR. They can inhibit CRISPR-based gene regulation in mammalian cells, limit CRISPR's gene-editing ability, and provide pulsatile gene expression.7 Each Acr protein family attacks CRISPR from a different angle.7 As an example, one group of Acr proteins inhibits dCas9 from binding to DNA.7 CRISPR-based gene regulation depends on dCas9 binding to a segment of DNA identified by a sgRNA.7 By inhibiting


Figure 2: The presence of an Acr gene in the bacterial genome will inhibit gene editing through CRISPR-Cas9.

(Source: Markolin, Philipp)

the binding of dCas9 to DNA, the Acrs can prevent the bacteria from regulating specific genes.7 The researchers tested the efficacy of various Acr proteins against the CRISPR system, specifically against gene regulation by CRISPRa and CRISPRi.7 These two variations of CRISPR were used to demonstrate the effectiveness of the Acrs across multiple types of CRISPR systems.7 Efficacy was tested by using dCas9 to activate a GFP gene placed in a bacterium through a plasmid.7 The higher the rate of GFP expression, the less effective the Acrs were at stopping dCas9 from activating gene expression.6 The results showed that one Acr, AcrIIA4, was proficient at stopping gene expression by blocking dCas9.6 In addition, another study identified a different mechanism for CRISPR-Cas9 inhibition.3 It found that AcrIIA6 functions as an allosteric inhibitor, reducing the DNA-binding affinity of St1Cas9 (another nuclease) within living cells.3 Because the PAM and AcrIIA6 recognition sites are close together and allosterically linked, AcrIIA6 can inhibit both St1Cas9 activity and PAM binding.3

Mechanics of “Anti-CRISPR”

Researchers initially discovered nine genetically unique main families of Acr proteins, all located in the same type of phages that infect P. aeruginosa.5,6 Bondy-Denomy et al. therefore hypothesized that Acrs might be located only in a very specific type of phage and protect only genomic sequences unique to that phage.1 But the more they looked, the more types of Acrs they found across multiple types of phages, and these did not protect specific viral sequences.1 When the viral DNA regions coding for Acrs were compared, the sequences among the nine originally discovered families were vastly different, with only 15 significantly similar nucleotides identified.1 The great variety of Acrs could make it difficult for bacteria to quickly develop a defense mechanism against them, making Acrs a long-term tool for controlling CRISPR.

Many hypothesized that a potential limitation of Acrs would be a lack of Cas enzymes in the cell; these proteins are necessary for the production of CRISPR RNA (crRNA).1 If crRNA is lacking, the CRISPR system shuts down in the cell entirely.1,4 The goal of Acrs, however, is to control CRISPR and make it more precise, not to stop its function altogether.1,4 If there is not enough crRNA, CRISPR cannot edit genes and the Acrs cannot serve their intended purpose. Bondy-Denomy et al. found that the concentrations of Cas proteins and crRNA were not decreased by Acrs, meaning that the Acrs shut down CRISPR after the CRISPR proteins are produced.1 This result is promising for the future use of Acrs to control CRISPR, and scientists are using this information to determine how Acrs can work alongside CRISPR to make it more precise. Acrs are a newly discovered genetic tool, so the research is just beginning. Given the diversity and prevalence of Acrs already discovered, it is likely that many more types are hiding in viral genomes yet to be found, especially in archaeal viruses, because almost all archaea contain a CRISPR-Cas system.6 Many bacterial and archaeal viruses have gene clusters that encode small proteins and are surrounded by HTH-containing protein genes, which can signal that an Acr is present in that cluster.4 These small clusters make up a large portion of bacterial and archaeal virus genomes and serve as strong candidates for more Acr genes.4 Ongoing research continues to explore these small gene clusters in order to find new Acrs that can inhibit even more varieties of CRISPR systems.
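The genome-mining heuristic just described, flagging small genes that sit next to an HTH-domain gene as candidate acr loci, can be sketched in a few lines. The gene table, length cutoff, and annotations below are all invented for illustration; real pipelines work from annotated genome files and domain searches.

```python
# Hedged sketch of Acr candidate mining: small ORFs adjacent to an
# HTH-domain gene are flagged. All data and thresholds are invented.

def candidate_acr_genes(genes, max_len_aa=150):
    """genes: ordered list of (name, length_in_amino_acids, has_hth_domain).
    Returns names of small genes with an HTH-domain neighbor."""
    candidates = []
    for i, (name, length, has_hth) in enumerate(genes):
        if has_hth or length > max_len_aa:
            continue
        neighbors = genes[max(0, i - 1):i] + genes[i + 1:i + 2]
        if any(n_hth for _, _, n_hth in neighbors):
            candidates.append(name)
    return candidates

toy_genome = [
    ("gene01", 410, False),
    ("gene02",  92, False),   # small ORF next to an HTH gene -> candidate
    ("gene03", 130, True),    # HTH-containing regulator
    ("gene04", 350, False),
    ("gene05",  88, False),   # small but isolated -> not flagged
]
print(candidate_acr_genes(toy_genome))  # prints ['gene02']
```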

Conclusion

Looking toward the future, more research into the application of “Anti-CRISPR” proteins is needed to determine their potential for controlling CRISPR gene editing. Ample evidence already supports their ability to shut down CRISPR, but the goal is to use this ability to control CRISPR, not simply to shut it down. If we could control or even reverse the effects of CRISPR, the possibilities would be vast: we could develop effective, tailored treatments for fatal diseases like cancer and dispel the rising fears of biological warfare.2,12 These small proteins, created through an evolutionary need for survival, could be the answer to many of the medical problems we face today.



References

1. Bondy-Denomy, J., Pawluk, A., Maxwell, K. L., & Davidson, A. R. (2012, December 16). Bacteriophage genes that inactivate the CRISPR/Cas bacterial immune system. Retrieved January 16, 2020, from https://www.nature.com/articles/nature11723

2. Dolgin, E. (2020, January 15). The kill-switch for CRISPR that could make gene-editing safer. Retrieved January 16, 2020, from https://www.nature.com/articles/d41586-020-00053-0

3. Fuchsbauer, O., Swuec, P., Zimberger, C., Amigues, B., Levesque, S., Agudelo, D., … Goulet, A. (2019, October 8). Cas9 Allosteric Inhibition by the Anti-CRISPR Protein AcrIIA6. Retrieved February 25, 2020, from https://www.sciencedirect.com/science/article/pii/S1097276519306975

4. Hynes, A. P., Rousseau, G. M., Agudelo, D., Goulet, A., Amigues, B., Loehr, J., … Moineau, S. (2018, July 25). Widespread anti-CRISPR proteins in virulent bacteriophages inhibit a range of Cas9 proteins. Retrieved February 27, 2020, from https://www.nature.com/articles/s41467-018-05092-w

5. Kosicki, M., Tomberg, K., & Bradley, A. (2018, July 16). Repair of double-strand breaks induced by CRISPR–Cas9 leads to large deletions and complex rearrangements. Retrieved February 27, 2020, from https://www.nature.com/articles/nbt.4192

6. Maxwell, K. L. (2017, October 5). The Anti-CRISPR Story: A Battle for Survival. Retrieved January 20, 2020, from https://www.sciencedirect.com/science/article/pii/S1097276517306500?via=ihub

7. Nakamura, M., Srinivasan, P., Chavez, M., Carter, M. A., Dominguez, A. A., Russa, M. L., … Qi, L. S. (2019, January 14). Anti-CRISPR-mediated control of gene editing and synthetic circuits in eukaryotic cells. Retrieved February 27, 2020, from https://www.nature.com/articles/s41467-018-08158-x

8. Silva, J. F. da, Salic, S., Wiedner, M., Datlinger, P., Essletzbichler, P., Hanzl, A., … Loizou, J. I. (2019, October 31). Genome-scale CRISPR screens are efficient in non-homologous end-joining deficient cells. Retrieved from https://www.nature.com/articles/s41598-019-52078-9

9. What is CRISPR? (2020). Retrieved February 27, 2020, from https://www.jax.org/personalized-medicine/precision-medicine-and-you/what-is-crispr

10. Wright, A. V., Nuñez, J. K., & Doudna, J. A. (2016, January 14). Biology and Applications of CRISPR Systems: Harnessing Nature's Toolbox for Genome Engineering. Retrieved February 27, 2020, from https://www.sciencedirect.com/science/article/pii/S0092867415016992?via=ihub

11. You, L., Tong, R., Li, M., Liu, Y., Xue, J., & Lu, Y. (2019, March 15). Advancements and Obstacles of CRISPR-Cas9 Technology in Translational Research. Retrieved January 20, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6447755/

12. Zhan, T., Rindtorff, N., Betge, J., Ebert, M. P., & Boutros, M. (2018, April 16). CRISPR/Cas9 for cancer research and therapy. Retrieved from https://www.sciencedirect.com/science/article/pii/S1044579X17302742

13. Zhang, X.-H., Tee, L. Y., Wang, X.-G., Huang, Q.-S., & Yang, S.-H. (2016, December 14). Off-target Effects in CRISPR/Cas9-mediated Genome Engineering. Retrieved February 27, 2020, from https://www.sciencedirect.com/



CHEMISTRY

The Current State of Bioprinting

BY DEV KAPADIA '23

Cover Image: One example of a 3D bioprinter developed by 3D Bioprinting Solutions, a Russian biotech company. (Source: Wikimedia Commons)


Introduction

One of the biggest problems in the United States healthcare system is the shortage of organs. Over 100,000 people are currently waitlisted for an organ transplant, and twenty people die each day waiting for one.1 Organ shortages occur because many organs, like hearts or livers, must be collected from recently deceased donors; thus, there is no reliable, steady supply that can be expanded when more organs are needed for transplants. This problem could be solved if we could simply make organs, but right now there is no way to synthetically manufacture reliable, functional organs.

This problem is one of the main focuses of researchers in the field of tissue engineering, in which scaffolds made of metals, plastics, biological materials, and more are used to attach and grow cells that will eventually mimic original organ function. There are many techniques for designing tissue scaffolds, but researchers need to be able to specifically engineer each step of the design process given the complexity of an organ. In particular, these scaffolds need to grow cells that replace organ function as well as possible while being degradable after integration. If scaffolds do not match regular organ geometry, or if the cells growing on them do not function properly, host cells may exhibit adverse responses, primarily in the form of immune responses from the body. To manage these complexities, researchers proposed using additive methods to control the resulting scaffold layer by layer, and one promising additive method is bioprinting.2

The idea of using 3D printing to print synthetic organic material, called “bioprinting,” was introduced in 1999. It is based on the notion that, just as traditional 3D printers use plastics, ceramics, or metals, similar printers can be designed to use “bioink,” which consists of these traditional materials along with tissues and stem cells. As in traditional 3D printing, the bioink must be


collected and loaded, along with the information for the design being printed, into the bioprinter. Once the design is uploaded, the printer follows a layering process similar to that of traditional 3D printing, forming the scaffold and implanting the cells that will mimic the function of the target organ. Scientists using bioprinting, however, must also ensure that the materials are chosen and sculpted in a way that lets the cells not only survive but also thrive and grow within the body.3

However, the challenges that these scientists must overcome are still large. Many scholars consider engineering completely authentic organs an impossibility, given the scientific and economic limitations of bioprinting that will be explored in this article.4 These issues must be kept in mind when attempting to understand the potential for this technology to develop and become a norm in the organ transplantation industry. This article explores current beliefs about the development of bioprinting research, the barriers that scientists will have to overcome, and, lastly, the economic feasibility of the industry, along with recommendations to ensure scalability of the product in the future.

Pre-Bioprinting

While there are many steps to consider during the bioprinting process, there are also several factors to consider when preparing for printing in order to ensure that the printed organ will function and grow in the host body. The first factor is the body's future acceptance of the printed organ. Because the organ is synthetic, there is a chance that its foreign cells could trigger an immune response that rejects and kills the bioprinted organ. The simplest way of ensuring that the host does not label the bioprinted organ as “foreign” is to use the host body's own cells in the printing process. To acquire such cells, researchers have turned to minimally invasive biopsy procedures in which doctors take small segments of tissue from the body to harvest cells. Minimally invasive treatments are smaller-scale, precise procedures that expose less of the body to potential infection and require less screening, which also lowers the overall cost of the procedure.5 Several biopsy methods are currently being explored, one highly promising option being the needle biopsy, in which only 15 to 20 mg of tissue is collected through a microneedle. This method has been shown to extract cells for regenerative purposes without inflicting damage on the target cells.6

Once the cells are harvested from the body, they must be replicated and differentiated. Researchers have identified several cell types that can be replicated and differentiated efficiently and accurately. The most popular choice currently is embryonic stem cells (ESCs), which are derived from early-stage embryos. ESCs can differentiate into any cell type and organize themselves into their own cell-type groups, making them the most popular and most researched source of stem cells. However, there are several concerns with the use of ESCs, including the ethical questionability of destroying a human embryo to produce the stem cells and researchers' limited ability to control the differentiation process to maturity.7 Instead of ESCs, some researchers use induced pluripotent stem cells (iPSCs). Unfortunately, iPSCs have their own limitations: they are time-consuming and expensive to produce while also lacking the reliability that ESCs provide.6


To match the biological specificity required for organs to be accepted by the body, researchers must model an extracellular matrix (ECM) specific to the body. The ECM is a network of macromolecules that not only influences a cell's structure and position relative to the rest of the body but also allows the cell to send and receive molecules from its environment in order to survive. Consequently, medical imaging, 3D imaging in particular, is extremely important for understanding the geometry of organs and the transport

Figure 1: Image A shows a CT scan of the brain. Images B and C are T1-weighted and T2-weighted MRI scans of a brain, respectively. As is apparent, the MRI scans show more detail than the CT scan, but the CT scan can be more time-efficient and cost-effective.

(Source: Wikimedia Commons)



Figure 2: A 3D image of a bioprinted skin tissue model showing the extracellular matrix in red and the cytoskeleton of the cell in various shades of blue. Models such as these are uploaded to the bioprinter and created using additive manufacturing. (Source: Liu, L., Boutin, M., & Song, M. J. (n.d.). 3-D Image of Bioprinted Skin Tissue. Photograph.)


of nutrients between them.8 There are two popular imaging techniques in use today: MRI (magnetic resonance imaging) and CT (computed tomography). MRI is most commonly used to visualize soft tissue structures in the body, including those that surround and support other organs. MRI detects magnetic signals from the protons of water molecules bound to tissue, using magnetic coils to measure how different tissues absorb and release magnetic energy. The technique is fairly safe, though it can leave some patients with discomfort, and its efficacy depends on scan time and patient movement. CT scans, on the other hand, use X-rays to create cross-sections of the body, which exposes the patient to the higher radiation doses needed for imaging, often together with contrast agents that help distinguish tissues. CT scans also provide less detailed images than MRI scans, but they are far quicker and less expensive [Figure 1]. The choice of imaging method therefore usually depends on the required resolution and on the time and radiation limits for the tissue sample.6

The next step in the pre-printing process is to generate the design of the tissue to be printed. Researchers use medical image processing software to perform image pre-processing, image segmentation, feature extraction, and data mining. Pre-processing allows researchers to scale and enhance the image produced by the medical imaging device through manipulation of brightness and contrast [Figure 2]. Once the image has been pre-processed, it is segmented to identify the areas of particular interest to the researchers. The features of this region are then extracted, along with spatial patterns of the image, to better capture the geometry of the imaged region of the body. Lastly, design software is used to produce the blueprint model for the bioprinting process.6
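The brightness/contrast step of pre-processing can be illustrated with a minimal windowing routine: intensities inside a chosen window are stretched to fill the output range, and everything outside the window is clipped. The window bounds and pixel values below are arbitrary example numbers, not clinical settings.

```python
# Minimal sketch of intensity windowing, a basic brightness/contrast
# adjustment used in medical image pre-processing. Values are invented.

def window_level(image, lo, hi):
    """Clip pixel intensities to [lo, hi], then rescale that window to [0, 1]."""
    span = float(hi - lo)
    return [[(min(max(px, lo), hi) - lo) / span for px in row] for row in image]

# A tiny 2x3 "scan" with intensities from 0 to 250.
scan = [[0, 50, 100],
        [150, 200, 250]]
print(window_level(scan, lo=50, hi=200))
```

Real pipelines apply the same idea to full 3D voxel arrays, typically with array libraries rather than nested lists.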


Blueprint modeling can vary depending on the software researchers use. The main requirement is that the software rapidly analyze the processed image to account for the various geometric complexities of the organ; models are then produced using constructive solid geometry features. Currently, CAD software is the most commonly used option. While CAD fulfills the major requirements of blueprint modeling, it has a couple of shortcomings specific to bioprinting. First, a CAD-based system is usually unable to model the irregular, complex geometric patterns that characterize organ designs. Second, many popular design systems are incompatible with the bioprinting devices most commonly used by researchers. To stimulate development in this vital intermediary step of the pre-printing process, many scholars have proposed making bioprinting software open source, allowing researchers to view and change the source code so that anyone can alter and augment its functionality to improve bioprinting compatibility.6

In the last step of the pre-printing process, the design is segmented into smaller parts to model the ECM of the target host. At this stage, the cells (initially stem cells taken in the biopsy) have undergone replication and differentiation. The ultimate goal for the cells is to achieve biomimicry of the target organ: the ability to mimic the geometry and function of the actual organs in the body. One common approach to biomimicry is to first produce a small functional element; once these parts have been made, researchers can arrange them into a larger product that will eventually become the bioprinted organ.8 When these steps are complete, the printing process is ready to start.

Methods of Bioprinting There are two methods of bioprinting that are commonly researched and used: drop-based bioprinting and extrusion bioprinting. Drop-based bioprinting, commonly known as inkjet, is characterized by the output of bioink in individual drops onto the surface of the printer, forming the desired structure. Because the substrate surface, the surface that the scaffold is being built on, is filled with calcium ions (Ca2+), the Ca2+ diffuses directly into the droplets of bioink once they come into contact with the surface. This causes the drop to polymerize and solidify with other droplets on the surface. The main benefit


of this method is that it is relatively quick. However, along with the bioink limitations discussed in the next section, its speed can also be a disadvantage if the polymerization of the bioink is slower than the speed of the printing. In this case, the droplets could spread before they are polymerized and deviate from the design originally outlined by the blueprint. This droplet spreading can occur to a smaller, but still noticeable, degree even if the droplet polymerization speed is fast enough to keep up with the printing process. For this reason, new designs of drop-based bioprinting are being explored that utilize lipid bilayers to form networks that can better capture and organize the drops in the intended design, but this technique is still in the early stages of research.9

The second existing method of bioprinting is extrusion bioprinting. In this method, the bioink exits from a nozzle while the surface, which is in direct contact with the nozzle, moves to manipulate the organization of the bioink into the organ design. The bioink is exposed to a UV illumination source that immediately follows the nozzle to photo-polymerize the ink into a solidified structure, much like the role of the Ca2+ in the drop-based method. This process is far more controlled than the drop-based method because the nozzle is in direct contact with the surface it is printing on and the UV light solidifies the ink before it can start to spread. However, it is not nearly as fast as the drop-based method and carries many of the same bioink limitations discussed in the following section. As a result, extrusion bioprinting is often used for more complex and specific designs, while drop-based bioprinting is used for designs that require less control.10
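The trade-off between deposition speed and polymerization speed can be made concrete with a toy model; all names and rate constants below are hypothetical illustrations, not measured values:

```python
def droplet_spread(drop_radius_um, deposition_interval_s, polymerization_time_s,
                   spread_rate_um_per_s=5.0):
    """Estimate how far a bioink droplet spreads before it solidifies.

    A droplet keeps spreading until it polymerizes; if polymerization takes
    longer than the interval between deposited drops, later drops land on
    still-liquid neighbors and the feature grows beyond the blueprint.
    """
    final_radius = drop_radius_um + polymerization_time_s * spread_rate_um_per_s
    fidelity_ok = polymerization_time_s <= deposition_interval_s
    return final_radius, fidelity_ok

# Fast polymerization: negligible spreading, design preserved.
print(droplet_spread(25, 0.02, 0.01))
# Slow polymerization: the drop nearly doubles in radius before solidifying.
print(droplet_spread(25, 0.02, 4.0))  # (45.0, False)
```

The model captures only the qualitative point of the paragraph: print fidelity depends on polymerization keeping pace with deposition.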

The Materials of Bioprinting As stated previously, bioink can consist of plastics, ceramics, metals, cells, and more to produce the resulting scaffold and cell structure. One of the main ingredients in bioink is the cells that will grow to mimic organ function. Researchers need to use differentiated cells that were originally taken as stem cells from the host; these cells are most likely to be accepted by the host because they are host cells and therefore will not be labeled as “foreign” and subjected to an immune response from the body. However, these cells cannot assimilate into the host’s body without the assistance of carriers called biopolymers, which form the 3D scaffold these cells need to proliferate. Biopolymers are classified as hydrogels because of their ability to retain water and subsequently shift between

more liquid and more solid properties. This shift in properties is key to deciding how the gel is best printed, but other factors also determine the efficacy of bioink. Bioink must strike a balance between rigidity and fluidity once the compounds polymerize with each other. If the bioink and the resulting final structure are too rigid, they will not allow the nutrient flow and cell proliferation necessary to grow the organs. On the other hand, if the bioink flows too easily, it may not hold the position the printer intended and may flow into other spaces, lowering printing accuracy relative to the intended design.

Temperature is another factor that needs to be controlled when using bioink. The gel needs to be heated enough to partially melt some of the polymers so that the bioink can flow through the printing device and onto the surface; if the temperature is too low, the diffusion rates of the Ca2+ in drop-based methods will slow, and the polymers will not partially melt enough to let the bioink flow easily out of the nozzle and onto the surface. But if the temperature is too high, the polymers start to fully melt, which kills the cells and renders the bioink useless.

Another key factor to consider is the degradation of the material once inside the body. Degradation of these materials is necessary for the body to make room for new tissue growth as well as for the integration of any delivered cells. Without degradation, the metal, plastic, and other noncellular material that makes up the originally printed organ can take up unnecessary space in the body and even elicit an immune response that destroys the organ.
To combat this, researchers expedite the breakdown of the bioink in the body, eliminating the potential for these non-living substances to cause problems for the cells. For instance, using gamma-irradiation to eliminate high-molecular-weight polymers makes the polymers degrade faster once inside the body, which can lead to improved bone formation and cell integration once the polymers are degraded.
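The competing constraints on bioink described above can be summarized as a simple parameter-window check; the numeric windows below are illustrative placeholders, not published process limits:

```python
def bioink_printable(temp_c, stiffness_kpa,
                     temp_window=(20.0, 37.0), stiffness_window=(1.0, 100.0)):
    """Check a bioink formulation against illustrative process windows.

    Too cold: polymers don't partially melt and Ca2+ diffusion slows,
    so the ink won't flow. Too hot: polymers fully melt and cells die.
    Too stiff: nutrients can't diffuse; too fluid: drops migrate off-design.
    """
    issues = []
    if temp_c < temp_window[0]:
        issues.append("too cold: ink will not flow")
    elif temp_c > temp_window[1]:
        issues.append("too hot: cell death")
    if stiffness_kpa < stiffness_window[0]:
        issues.append("too fluid: low printing accuracy")
    elif stiffness_kpa > stiffness_window[1]:
        issues.append("too rigid: blocks nutrient flow")
    return issues

print(bioink_printable(30.0, 10.0))  # [] (within both windows)
print(bioink_printable(45.0, 0.5))   # ['too hot: cell death', 'too fluid: low printing accuracy']
```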

“bioink can consist of plastics, ceramics, metals, cells, and more to produce the resulting scaffold and cell structure.”

The last consideration is ensuring that the administered cells have a food source to take in and metabolize. If supplied with food sources, these cells will be able to take in the substances and recycle them into the environment;


consequently, they will produce an ECM that allows them to receive compounds from the body and increase their chance of survival.11

There are a few polymers that are commonly found in bioink along with cells. Agarose is a favorite among researchers: it is extremely biocompatible (not harmful to living tissue), and it retains water extremely well. However, it is not the sturdiest of polymers and therefore does not provide the anchoring that cells need to stay in their formation. Alginate is another favorite because of its ability to surround and isolate water and nutrients from the medium in the body, which allows for great nutrient diffusion to the cells. Collagen, commonly found in animal connective tissue, is a polymer known for its stability, elasticity, and ability to stick to cells while allowing for nutrient flow. Lastly, the polymer hyaluronic acid lacks the sturdiness of other polymers, but it binds to cells very well; many synthetic variants of the acid have been produced to maintain its cell affinity while augmenting its mechanical sturdiness.12
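One way to make these polymer trade-offs concrete is a small lookup table; the 1-5 scores below are qualitative placeholders reflecting the descriptions above, not measured material properties:

```python
# Qualitative trade-offs of common bioink polymers (illustrative scores, 1-5).
POLYMERS = {
    "agarose":         {"biocompatibility": 5, "water_retention": 5, "sturdiness": 2, "cell_adhesion": 2},
    "alginate":        {"biocompatibility": 4, "water_retention": 4, "sturdiness": 3, "cell_adhesion": 2},
    "collagen":        {"biocompatibility": 4, "water_retention": 3, "sturdiness": 4, "cell_adhesion": 5},
    "hyaluronic acid": {"biocompatibility": 4, "water_retention": 4, "sturdiness": 1, "cell_adhesion": 5},
}

def best_polymer(priority):
    """Pick the polymer that scores highest on one property (ties broken by dict order)."""
    return max(POLYMERS, key=lambda p: POLYMERS[p][priority])

print(best_polymer("water_retention"))  # agarose
print(best_polymer("cell_adhesion"))    # collagen
```

Real formulations blend polymers rather than picking one, but the table mirrors the single-axis strengths the paragraph lists.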

Economic Analysis of Bioprinting “While the discussion of bioprinting from pre-printing to consideration of the materials used to the actual printing process is interesting, it is irrelevant unless companies that are developing the product are financially sustainable.”

While the discussion of bioprinting, from pre-printing to the materials used to the actual printing process, is interesting, it is irrelevant unless the companies developing the product are financially sustainable. Fortunately, several market factors could make this technology mainstream soon. Researchers at the Tepper School of Business at Carnegie Mellon University have identified several key growth drivers in the bioprinting industry: the shortage of organ donations, ethical considerations that dissuade researchers from animal testing, and the need for wound care and for repair and replacement surgeries (specifically those involving joint repair and replacement). Another factor that signals possible future growth is the popularity of the industry and the number of investors drawn to bioprinting companies. These potential financiers include universities, biotech and pharmaceutical companies, internal venture funds in healthcare companies, and traditional venture capital firms. Many financiers within these categories have an ample supply of capital, business strategies, and tools for technological innovation, signaling a bright future for the bioprinting industry.13 While many growth factors are favorable for


the industry, there is fear that the excitement is causing the public and acquirers to overpay for stock in these companies, thereby artificially inflating the value of companies in the industry. Researchers at Carnegie Mellon have two main recommendations to stimulate growth and ensure that companies are not overvalued: making bioprinting devices open source and ensuring that different versions of printers specifically suit the needs of different customer segments.13

To stimulate the technological innovation necessary to make bioprinting a common process, bioprinting companies need to make their products open source. Open-source products have their original source code openly available for redistribution and modification by the public. Making products open source has proven beneficial for many companies, including Android. Android, which achieved an 86.6% share of the worldwide smartphone market in 2019, made its products open source so that developers could augment the Android suite with apps and software upgrades that could then be introduced to the general public.14 Not only will open-source products encourage more users to buy and tinker with bioprinting products, but open sourcing will also allow users to practice home research and development that signals to the company which product designs consumers want. Information on consumer wants will help companies better develop and position their products in the market, leading to the success of the company and, if widely adopted, the eventual success of the bioprinting industry in general.13

The second recommendation is to make products adaptable to different segments of the market. Different consumers will want different functionalities and care about different features of the product (such as accuracy, price, and time).
Therefore, the industry needs to be attuned to different demographics within the healthcare industry to best suit the consumers. This will make the products more attractive to consumers, drive up profits, and assist bioprinting in its path to widespread acceptance.13

Conclusion Bioprinting can be a truly revolutionary technology, but there are steps that need to be taken before it can become a widespread practice. The current pre-printing and printing


processes are extremely robust, but there are areas for improvement, from the polymers used in the bioink to the method of printing. Furthermore, startups and companies that develop bioprinters should ensure that these printers are both open source and tailored to the preferences of the target market, thus expanding bioprinters’ potential acceptance and prevalence in the healthcare system while ensuring that the companies producing them can stay profitable and in business.

References
[1] Organ Donation Statistics. (2019, December 18). Retrieved March 1, 2020, from https://www.organdonor.gov/statisticsstories/statistics.html
[2] Norotte, C., Marga, F. S., Niklason, L. E., & Forgacs, G. (2009). Scaffold-free vascular tissue engineering using bioprinting. Biomaterials, 30(30), 5910–5917. doi: 10.1016/j.biomaterials.2009.06.034
[3] Malyala, S. K., Kumar, Y. R., & Rao, C. (2017). Organ Printing With Life Cells: A Review. Materials Today: Proceedings, 4(2), 1074–1083. doi: 10.1016/j.matpr.2017.01.122
[4] Mironov, V., Visconti, R. P., Kasyanov, V., Forgacs, G., Drake, C. J., & Markwald, R. R. (2009). Organ printing: Tissue spheroids as building blocks. Biomaterials, 30(12), 2164–2174. doi: 10.1016/j.biomaterials.2008.12.084
[5] Kettritz, U. (2011). Minimally Invasive Biopsy Methods – Diagnostics or Therapy? Personal Opinion and Review of the Literature. Breast Care, 6(2), 94–97. doi: 10.1159/000327889
[6] Datta, P., Barui, A., Wu, Y., Ozbolat, V., Moncal, K. K., & Ozbolat, I. T. (2018). Essential steps in bioprinting: From pre- to post-bioprinting. Biotechnology Advances, 36(5), 1481–1504. doi: 10.1016/j.biotechadv.2018.06.003
[7] Lo, B., & Parham, L. (2009). Ethical Issues in Stem Cell Research. Endocrine Reviews, 30(3), 204–213. doi: 10.1210/er.2008-0031
[8] Chokshi, A. (2016, March 26). 3D Bioprinting. Retrieved March 1, 2020, from http://princetoninnovation.org/magazine/2016/03/21/3d-bioprinting/
[9] Auger, F. A., Gibot, L., & Lacroix, D. (2013). The Pivotal Role of Vascularization in Tissue Engineering. Annual Review of Biomedical Engineering, 15(1), 177–200. doi: 10.1146/annurev-bioeng-071812-152428
[10] Bajaj, P., Schweller, R. M., Khademhosseini, A., West, J. L., & Bashir, R. (2014). 3D Biofabrication Strategies for Tissue Engineering and Regenerative Medicine. Annual Review of Biomedical Engineering, 16(1), 247–276. doi: 10.1146/annurev-bioeng-071813-105155
[11] Augst, A. D., Kong, H. J., & Mooney, D. J. (2006). Alginate Hydrogels as Biomaterials. Macromolecular Bioscience, 6(8), 623–633. doi: 10.1002/mabi.200600069
[12] Pires, R. (2018, November 26). What Exactly is Bioink? – Simply Explained. Retrieved March 1, 2020, from https://all3dp.com/2/for-ricardo-what-is-bioink-simply-explained/
[13] Thakur, P. C., Cabrera, D. D., DeCarolis, N., & Boni, A. A. (2018). Innovation and commercialization strategies for three-dimensional bioprinting technology: A lean business model perspective: Research and regulation. Journal of Commercial Biotechnology, 24(1). doi: 10.5912/jcb856
[14] Smartphone Market Share. (2020, January 20). Retrieved March 1, 2020, from https://www.idc.com/promo/smartphone-market-share/os



PSYCHOLOGY

VISTA: A Molecule of Conundrums BY DINA RABADI '22

Introduction

The immune system must be tightly controlled so that it can carry out two major functions: recognition of and response to pathogens. Recognition is a critical aspect of the immune system and involves distinguishing self from non-self, preventing the immune system from attacking the body. One way in which immune responses are regulated is by the action of immune checkpoints. As shown in Figure 1, some immune checkpoints can activate T-cells (these are known as co-stimulatory immune checkpoints), while others suppress T-cell activation (co-inhibitory immune checkpoints), thus preventing exaggerated auto-immune responses1. One such co-inhibitory checkpoint is the V-domain Ig suppressor of T-cell activation (VISTA), a negative immune checkpoint that plays a major role in the immune system and its response to various stimuli.

VISTA is encoded by the Vsir gene3. The molecule is part of the B7 family of immune-checkpoint receptors, despite sharing only 24% homology with the nearest B7 family member, programmed cell death protein 1 (PD-1)4. VISTA is located on either the surface or endosomal cell membranes of immune cells such as dendritic cells (DCs, a type of antigen-presenting cell) and macrophages, which are also APCs. Binding of VISTA to receptors on APCs elicits co-inhibitory signals that inhibit T-cells5. VISTA is also expressed on naïve T-cells, which is unusual because other immune checkpoints are induced only after T-cell activation. This molecule’s widespread impact on the immune response, and its many unanswered questions, will guide future research on VISTA.

In mouse models, there are three main tools for studying this molecule. The first involves knocking out the Vsir gene, allowing for the study of the impacts of VISTA deficiency on the immune response, proinflammatory mediators, and other immune factors. The second method is research through blockade, which prevents binding to the receptor. The third studies VISTA’s effects through agonism, a process that uses antibodies to activate VISTA. Researchers now need to develop a


better understanding of the differences that VISTA deficiency, blockade, and agonism may have on numerous immune cell types in order to gain a more holistic understanding of VISTA’s impact on the immune system.

VISTA in Innate and Adaptive Immunity Innate immunity is the immune system's first line of defense and involves immediate, and generally eliminatory, responses towards foreign substances or pathogens. Several subsets of myeloid cells involved in the innate immune response, such as macrophages, DCs, and neutrophils, are also phagocytes (cells that engulf and absorb small particles, such as pathogens). VISTA deficiency decreases chemokine consumption by macrophages, which in turn can help manipulate macrophage migration in disease6. In addition, the deletion of Vsir causes high expression of inflammatory cytokine genes, including the neutrophil chemotactic gene S100a9, as well as chemokine genes in a mouse model of psoriasis8.

Adaptive immunity is the second arm of the immune response. Generally, adaptive immunity is slower than innate immunity yet is highly specific against pathogens. After initial exposure to a pathogen, immunological ‘memory’ forms: because the immune system has already encountered the pathogen, B and T-cells can produce a more efficient immune response and prevent re-infection. The ability of T-cells to tolerate the self is lacking in patients with autoimmune diseases, causing their T-cells to attack the

patient’s own tissues10. These changes occurring in both types of immunity re-emphasize the importance of studying VISTA in both innate and adaptive immunity, so that therapies can be optimized to target cells in each type.

Structure and Expression of VISTA VISTA is expressed in myeloid cells and T-cells, and is therefore detected in lymphoid tissues and tissues with high immune cell infiltration, such as the spleen, thymus, lung, and bone marrow4. Non-lymphoid tissues, such as the brain and muscle, do not express high levels of VISTA. Even where VISTA can be detected in the aforementioned tissues, the expression is due to the infiltration of hematopoietic cells rather than to the tissue itself expressing VISTA11. This infiltration is a hallmark of inflammation, which is defined by enhanced activity of the immune system and increased immune cell migration to areas of the body where inflammatory signals are present. These signals include high levels of chemokines and cytokines, which are inflammatory signaling molecules. Understanding VISTA’s impact on inflammatory signals and migration is important when evaluating this molecule’s effects on disease because such signaling permits cell recruitment, and therefore the immune response, to occur.

Figure 1: Immune Checkpoint Signaling. This figure displays the mechanistic differences between T-cell activation and T-cell inactivation. Antigen-presenting cells (APCs) are immune cells that present antigen to the surface of T-cells, acting as mediators between the innate and adaptive immune systems and causing the stimulation of T-cells. Co-stimulatory immune checkpoints are necessary for immune activation. ‘Signal 1’ involves the interaction between antigen and receptors on lymphocytes (white blood cells that help control the immune response). ‘Signal 2’ occurs via a co-stimulatory immune checkpoint and allows for the immune response and T-cell activation. ‘Signal 2’ occurs with, or after, ‘signal 1,’ and lymphocyte activation cannot occur until ‘signal 2’ occurs1,2. *The author created the image using information gathered while writing this article.
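The two-signal logic in the caption can be summarized as a toy boolean model (a deliberate oversimplification of real checkpoint signaling, with hypothetical argument names):

```python
def t_cell_activated(signal1_antigen, costimulatory, coinhibitory):
    """Toy two-signal model of T-cell activation.

    'Signal 1' (antigen recognition) is necessary but not sufficient;
    'signal 2' must come from a co-stimulatory checkpoint. A co-inhibitory
    checkpoint such as VISTA vetoes activation even when both signals fire.
    """
    return signal1_antigen and costimulatory and not coinhibitory

print(t_cell_activated(True, True, False))   # True: both signals, no inhibition
print(t_cell_activated(True, False, False))  # False: signal 2 missing
print(t_cell_activated(True, True, True))    # False: VISTA-like inhibition dominates
```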

“Innate immunity is the immune system's first line of defense and involves immediate, and generally eliminatory, responses towards foreign substances or pathogens."

VISTA’s expression varies across cell types with regard to each type of immunity, as well as between different diseases, such as cancer and autoimmunity. VISTA is heavily studied in tumor models; expression of VISTA in a mouse tumor model is shown in Figure 2, where VISTA expression in the tumor is high in DCs, macrophages, and myeloid-derived suppressor cells. This finding also points towards the use of VISTA blockade therapy in cancer treatment7. It is


Figure 2: VISTA Expression in the CT26 Tumor Microenvironment. DAPI is a fluorescent blue DNA stain, and VISTA expression is shown in red. Ly6G in A is a marker for a type of myeloid-derived suppressor cell (a suppressive immune cell produced by inflammation), and CD11b in B is a marker for tumor-associated macrophages (a type of macrophage that attacks cells that attempt to infiltrate the tumor). The orange that appears in both images results from overlap of the red and green, indicating an overlap in expression. A shows the overlap in expression between VISTA and MDSCs, whereas the orange in B displays the overlap between VISTA and TAMs. (Images generously provided by the Noelle lab at Dartmouth-Hitchcock Medical Center.)

“There are two main challenges that VISTA targeting could address; the first involves increasing the response rate to other clinically used checkpoints, while the second involves improving the longevity of response rates in order to prevent relapse.”


known that T-cells highly express VISTA, which is important for autoimmunity. VISTA blockade decreases the suppressive function of Tregs (a type of T-cell that regulates effector, or killer, T-cells) and reduces Treg formation in tumor-bearing mice, showing that effector T-cells are also critical to understanding the impact of VISTA on the immune system9. In addition, VISTA-deficient T-cells highly resemble the malfunctioning T-cells of patients with autoimmune diseases such as lupus and rheumatoid arthritis (RA), demonstrating variations in T-cell expression of VISTA between those with and without autoimmune disorders.

VISTA Targeting

There are several challenges present in cancer checkpoint immunotherapy treatments that VISTA targeting may help overcome11. There are two main challenges that VISTA targeting could address; the first involves increasing the response rate to other clinically used checkpoints, while the second involves improving the longevity of response rates in order to prevent relapse3. Some partially successful immune-checkpoint blockade therapies shown to slow tumor growth include anti-PD-1 and anti-CTLA-4 (cytotoxic T-lymphocyte-associated protein 4), both of which are clinically used. Despite their success, patient response rates are still very low, hovering anywhere from 10–40%12,13. Blocking VISTA could decrease the immunosuppressive activities of the tumor, and VISTA blockade monotherapy has been shown to slow growth in numerous tumor models14. Each therapy works by a different mechanism, so combination therapy that blocks all three pathways is likely to be more effective. The approach of using multiple checkpoint blockades has a bright future in the realm of

cancer treatment. VISTA also has promising therapeutic benefits for patients suffering from autoimmune diseases like rheumatoid arthritis (RA) and lupus15,16. In RA, studies have shown that mice with VISTA deficiency have decreased joint inflammation, tissue damage, and degradation of connective tissue compared to mice with regular VISTA expression17. On the other hand, VISTA blockade worsens RA symptoms, showing a mismatch between the effects of VISTA blockade and deficiency. Anti-VISTA agonists have anti-inflammatory effects on RA, as well as on lupus, similar to those of VISTA deficiency15. This contradiction is still not completely understood and is the topic of study of several researchers.

Another major contradiction concerns VISTA's impacts in RA versus lupus. While VISTA deficiency can help halt RA progression, VISTA deficiency in lupus causes the disease to progress more rapidly and severely than in mice with regular VISTA expression. A recent study showing defective migration of VISTA-deficient macrophages suggests that, in addition to negative regulation of T-cells, VISTA may have roles in macrophage function6. Therefore, VISTA deficiency may have a different result if the disease is primarily macrophage-dependent versus primarily T-cell driven. Thus, it would be beneficial to study this molecule in depth in other models, such as those of HIV/AIDS or cardiovascular disease, to increase understanding of VISTA in autoimmunity and disease.

The Cell-extrinsic and Cell-intrinsic Roles of VISTA Interestingly, VISTA is a molecule that may have


both cell-extrinsic and cell-intrinsic mechanisms that regulate T-cells and other immune cells9. An extrinsic mechanism involves a molecule that causes a change in one cell type while being expressed on a different cell type. An example of VISTA’s extrinsic mechanism is the overexpression of VISTA on APCs in a form of lymphoma, which causes decreased proliferation and cytokine production of T-cells4. VISTA’s extrinsic role in inhibiting T-cells has been demonstrated in vivo, where VISTA deficiency rapidly accelerates the progression of autoimmune disease in mice18.

An intrinsic mechanism involves a molecule that causes a change within the cell type in which it is expressed. Intrinsically, VISTA deficiency on T-cells and APCs results in excessive T-cell proliferation following T-cell activation5. Additionally, deletion of the cytoplasmic domain of the VISTA molecule triggers cytokine production in monocytes, the same cells in which VISTA was deleted, indicating that an intrinsic VISTA mechanism may be at play. On the other hand, research also shows that VISTA agonists enhance cytokine production, which is surprising considering that deletion of the molecule results in the same effect. VISTA’s intrinsic mechanisms may involve a transmembrane inhibitory receptor, or even self-regulation9,19.

There are many unanswered questions about VISTA’s extrinsic and intrinsic mechanisms due to the complexity of the molecule and its multifaceted impacts on the immune system, especially since these mechanisms likely impact each other. One way to study the effects of VISTA’s intrinsic mechanisms is to generate conditional knockout mutations in mouse models20. For example, a mouse could be bred with VISTA knocked out only in a specific cell type, such as T-cells. The properties of VISTA could then be studied to see how VISTA deficiency in this subpopulation of cells affects the entire immune system.
To take this one step further, one could use multiple disease models, other than tumor and autoimmunity, to show VISTA deficiency in cellular subpopulations and how this impacts disease outcome and treatment.

Ligand versus Receptor Mechanisms of VISTA Some studies of VISTA’s biochemical makeup suggest that VISTA is a ligand, rather than a receptor, while other groups show that VISTA may be a receptor. While this may appear

Figure 3: Extrinsic and Intrinsic Functions of VISTA. This figure shows possible mechanisms of how VISTA may interact in different cell-to-cell interactions2. *The author created the image using information gathered while writing this article.

contradictory, it is possible that VISTA acts as both a ligand and a receptor. Currently, evidence more strongly supports VISTA’s ligand activity. First, VISTA’s biochemical structure is quite similar to that of other receptors, such as PD-121. Second, it has been shown that VISTA deficiency in both T-cells and APCs causes optimal cell activation, whereas VISTA expression on both T-cells and APCs elicits the lowest cell activation5. According to Wang et al., VISTA may act as a ligand because isolated parts of the molecule alone can activate DCs7. These data resulted from the removal of the entire VISTA domain, rather than just the extracellular domain, which weakens the claim for VISTA’s ligand activity. However, the receptor to which VISTA binds as a ligand is mostly unknown7. One group found that VISTA binds to P-selectin glycoprotein ligand-1 (PSGL-1) and that antibodies blocking this interaction counter VISTA’s immunosuppressive effects in acidic environments, such as tumors22. This finding is applicable in settings in which the pH is at or below 6.5; some groups have found that the tumor environment is quite acidic, while others maintain that this effect of anti-VISTA is restricted because physiological pH is greater than 6.5. Clearly, the data supporting VISTA as a ligand vary significantly among groups and are often based on interpretation, making it challenging to definitively describe VISTA’s mechanisms.
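The pH-dependence of the reported VISTA/PSGL-1 interaction can be expressed as a simple check; the 6.5 threshold comes from the discussion above, while the function itself is purely illustrative:

```python
def psgl1_binding_relevant(ph, threshold=6.5):
    """Reported VISTA-PSGL-1 binding is pH-selective: it applies at or below
    ~pH 6.5 (acidic tumor microenvironments), not at the ~7.4 pH typical of
    healthy tissue. Threshold taken from the surrounding discussion."""
    return ph <= threshold

print(psgl1_binding_relevant(6.2))  # True: acidic, tumor-like environment
print(psgl1_binding_relevant(7.4))  # False: physiological pH
```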

“There are many unanswered questions about VISTA's extrinsic and intrinsic mechanisms due to the complexity of the molecule and its multifaceted impacts on the immune system.”

VISTA may also act as a receptor that moderates immunosuppression. Some relationships display the potential for this, but the specific mechanism by which VISTA functions as a receptor is still relatively unknown. One group showed that V-set and immunoglobulin domain containing 3 (VSIG-3—a large protein primarily expressed in the brain and testes) is a ligand for VISTA that inhibits T-cell activity and cytokine production, though the specific


“Much of the ongoing research on VISTA occurs in a translational setting, which is important because of the clear implications VISTA may have for treating disease. However, it is important to understand the true nature of this molecule in order to more effectively discover therapeutic methods.”

functionality of VSIG-3 remains unknown in the setting of immunity, as this molecule is not expressed by immune cells or organs23. Because VISTA functions in the immune system, it is challenging to argue that VSIG-3 is VISTA’s ligand. Furthermore, this relationship has only been shown in vitro thus far, so VSIG-3 must be studied in vivo to understand the mechanisms of the VSIG-3/VISTA interaction, as well as whether it is biologically relevant. Understanding VISTA’s functions as a ligand as well as a receptor is critical in countering disease because, if both mechanisms are indeed present, VISTA may have greater strength as an immune regulator in disease when compared to other immunotherapies like anti-PD-1 and anti-CTLA-4.

Conclusion

Much of the ongoing research on VISTA occurs in a translational setting, which is important because of the clear implications VISTA may have for treating disease. However, it is important to understand the true nature of this molecule in order to more effectively discover therapeutic methods of countering VISTA. If the basic mechanisms of VISTA itself are more holistically understood, this will likely allow for greater understanding of why anti-VISTA treatment has the effects that it does. Therefore, basic science research that addresses VISTA's mechanisms, its holistic role in the immune response, the ligand/receptor conundrum, and its cell-extrinsic versus cell-intrinsic mechanisms would be immensely beneficial alongside translational research. VISTA is a promising immunotherapeutic target for cancer and autoimmune disease. This B7-family negative immune checkpoint regulator controls migratory signals of immune cells, such as cytokines and chemokines, and it negatively regulates T-cell function. Though much about this molecule remains unknown, further study of VISTA is likely to lead to dramatic improvements in immunotherapy in the future.

References

(1) Davis, R. J.; Ferris, R. L.; Schmitt, N. C. Costimulatory and Coinhibitory Immune Checkpoint Receptors in Head and Neck Cancer: Unleashing Immune Responses through Therapeutic Combinations. Cancers of the Head & Neck 2016, 1 (1), 12. https://doi.org/10.1186/s41199-016-0013-x.
(2) Bretscher, P. A. A Two-Step, Two-Signal Model for the Primary Activation of Precursor Helper T-cells. Proc Natl Acad Sci U S A 1999, 96 (1), 185–190.
(3) ElTanbouly, M. A.; Croteau, W.; Noelle, R. J.; Lines, J. L. VISTA: A Novel Immunotherapy Target for Normalizing Innate and Adaptive Immunity. Seminars in Immunology 2019, 42, 101308. https://doi.org/10.1016/j.smim.2019.101308.
(4) Wang, L.; Rubinstein, R.; Lines, J. L.; Wasiuk, A.; Ahonen, C.; Guo, Y.; Lu, L.-F.; Gondek, D.; Wang, Y.; Fava, R. A.; et al. VISTA, a Novel Mouse Ig Superfamily Ligand That Negatively Regulates T-cell Responses. J Exp Med 2011, 208 (3), 577–592. https://doi.org/10.1084/jem.20100619.
(5) Flies, D. B.; Han, X.; Higuchi, T.; Zheng, L.; Sun, J.; Ye, J. J.; Chen, L. Coinhibitory Receptor PD-1H Preferentially Suppresses CD4+ T-cell–Mediated Immunity. J Clin Invest 2014, 124 (5), 1966–1975. https://doi.org/10.1172/JCI74589.
(6) Broughton, T. W. K.; ElTanbouly, M. A.; Schaafsma, E.; Deng, J.; Sarde, A.; Croteau, W.; Li, J.; Nowak, E. C.; Mabaera, R.; Smits, N. C.; et al. Defining the Signature of VISTA on Myeloid Cell Chemokine Responsiveness. Front Immunol 2019, 10. https://doi.org/10.3389/fimmu.2019.02641.
(7) Wang, G.; Tai, R.; Wu, Y.; Yang, S.; Wang, J.; Yu, X.; Lei, L.; Shan, Z.; Li, N. The Expression and Immunoregulation of Immune Checkpoint Molecule VISTA in Autoimmune Diseases and Cancers. Cytokine & Growth Factor Reviews 2020. https://doi.org/10.1016/j.cytogfr.2020.02.002.
(8) Li, N.; Xu, W.; Yuan, Y.; Ayithan, N.; Imai, Y.; Wu, X.; Miller, H.; Olson, M.; Feng, Y.; Huang, Y. H.; et al. Immune-Checkpoint Protein VISTA Critically Regulates the IL-23/IL-17 Inflammatory Axis. Scientific Reports 2017, 7 (1), 1–11. https://doi.org/10.1038/s41598-017-01411-1.
(9) Xu, W.; Hiếu, T.; Malarkannan, S.; Wang, L. The Structure, Expression, and Multifaceted Role of Immune-Checkpoint Protein VISTA as a Critical Regulator of Anti-Tumor Immunity, Autoimmunity, and Inflammation. Cellular & Molecular Immunology 2018, 15 (5), 438–446. https://doi.org/10.1038/cmi.2017.148.
(10) ElTanbouly, M. A.; Zhao, Y.; Nowak, E.; Li, J.; Schaafsma, E.; Mercier, I. L.; Ceeraz, S.; Lines, J. L.; Peng, C.; Carriere, C.; et al. VISTA Is a Checkpoint Regulator for Naïve T-cell Quiescence and Peripheral Tolerance. Science 2020, 367 (6475). https://doi.org/10.1126/science.aay0524.
(11) Nowak, E. C.; Lines, J. L.; Varn, F. S.; Deng, J.; Sarde, A.; Mabaera, R.; Kuta, A.; Le Mercier, I.; Cheng, C.; Noelle, R. J. Immunoregulatory Functions of VISTA. Immunol Rev 2017, 276 (1), 66–79. https://doi.org/10.1111/imr.12525.
(12) ElTanbouly, M. A.; Schaafsma, E.; Noelle, R. J.; Lines, J. L. VISTA: Coming of Age as a Multi-Lineage Immune Checkpoint. Clinical & Experimental Immunology n/a (n/a). https://doi.org/10.1111/cei.13415.
(13) Xu, W.; Dong, J.; Zheng, Y.; Zhou, J.; Yuan, Y.; Ta, H. M.; Miller, H. E.; Olson, M.; Rajasekaran, K.; Ernstoff, M. S.; et al. Immune-Checkpoint Protein VISTA Regulates Antitumor Immunity by Controlling Myeloid Cell–Mediated Inflammation and Immunosuppression. Cancer Immunol Res 2019, 7 (9), 1497–1510. https://doi.org/10.1158/2326-6066.CIR-18-0489.
(14) Mercier, I. L.; Chen, W.; Lines, J. L.; Day, M.; Li, J.; Sergent, P.; Noelle, R. J.; Wang, L. VISTA Regulates the Development of Protective Antitumor Immunity. Cancer Res 2014, 74 (7), 1933–1944. https://doi.org/10.1158/0008-5472.CAN-13-1506.
(15) Han, X.; Vesely, M. D.; Yang, W.; Sanmamed, M. F.; Badri, T.; Alawa, J.; López-Giráldez, F.; Gaule, P.; Lee, S. W.; Zhang, J.-P.; et al. PD-1H (VISTA)–Mediated Suppression of Autoimmunity in Systemic and Cutaneous Lupus Erythematosus. Science Translational Medicine 2019, 11 (522). https://doi.org/10.1126/scitranslmed.aax1159.
(16) Ceeraz, S.; Sergent, P. A.; Plummer, S. F.; Schned, A. R.; Pechenick, D.; Burns, C. M.; Noelle, R. J. VISTA Deficiency Accelerates the Development of Fatal Murine Lupus Nephritis. Arthritis Rheumatol 2017, 69 (4), 814–825. https://doi.org/10.1002/art.40020.
(17) Ceeraz, S.; Eszterhas, S. K.; Sergent, P. A.; Armstrong, D. A.; Ashare, A.; Broughton, T.; Wang, L.; Pechenick, D.; Burns, C. M.; Noelle, R. J.; et al. VISTA Deficiency Attenuates Antibody-Induced Arthritis and Alters Macrophage Gene Expression in Response to Simulated Immune Complexes. Arthritis Res Ther 2017, 19. https://doi.org/10.1186/s13075-017-1474-y.
(18) Liu, J.; Yuan, Y.; Chen, W.; Putra, J.; Suriawinata, A. A.; Schenk, A. D.; Miller, H. E.; Guleria, I.; Barth, R. J.; Huang, Y. H.; et al. Immune-Checkpoint Proteins VISTA and PD-1 Nonredundantly Regulate Murine T-Cell Responses. PNAS 2015, 112 (21), 6682–6687. https://doi.org/10.1073/pnas.1420370112.
(19) Bharaj, P.; Chahar, H. S.; Alozie, O. K.; Rodarte, L.; Bansal, A.; Goepfert, P. A.; Dwivedi, A.; Manjunath, N.; Shankar, P. Characterization of Programmed Death-1 Homologue-1 (PD-1H) Expression and Function in Normal and HIV Infected Individuals. PLoS One 2014, 9 (10). https://doi.org/10.1371/journal.pone.0109103.
(20) Yoon, K. W.; Byun, S.; Kwon, E.; Hwang, S.-Y.; Chu, K.; Hiraki, M.; Jo, S.-H.; Weins, A.; Hakroush, S.; Cebulla, A.; et al. Control of Signaling-Mediated Clearance of Apoptotic Cells by the Tumor Suppressor P53. Science 2015, 349 (6247), 1261669. https://doi.org/10.1126/science.1261669.
(21) Mehta, N.; Maddineni, S.; Mathews, I. I.; Sperberg, R. A. P.; Huang, P.-S.; Cochran, J. R. Structure and Functional Binding Epitope of V-Domain Ig Suppressor of T-cell Activation. Cell Reports 2019, 28 (10), 2509–2516.e5. https://doi.org/10.1016/j.celrep.2019.07.073.
(22) Johnston, R. J.; Su, L. J.; Pinckney, J.; Critton, D.; Boyer, E.; Krishnakumar, A.; Corbett, M.; Rankin, A. L.; Dibella, R.; Campbell, L.; et al. VISTA Is an Acidic pH-Selective Ligand for PSGL-1. Nature 2019, 574 (7779), 565–570. https://doi.org/10.1038/s41586-019-1674-5.
(23) Wang, J.; Wu, G.; Manick, B.; Hernandez, V.; Renelt, M.; Erickson, C.; Guan, J.; Singh, R.; Rollins, S.; Solorz, A.; et al. VSIG-3 as a Ligand of VISTA Inhibits Human T-cell Function. Immunology 2019, 156 (1), 74–85. https://doi.org/10.1111/imm.13001.


Depression: The Journey to Happiness
BY GEORGIA DAWAHARE '23

Cover: Tractographic reconstruction of neural connections in a human brain using diffusion tensor imaging (Source: Wikimedia Commons, Creator: Thomas Schultz)

“An objective study of the underlying mechanisms of happiness is difficult because happiness is an intangible idea and a subjective state.”


Introduction

An objective study of the underlying mechanisms of happiness is difficult because happiness is an intangible idea and a subjective state. At the opposite end of the spectrum, depression is a complex disorder characterized (in part) by a lack of happiness and is similarly challenging to study. Currently, there are multiple theories of depression, all derived from an understanding of known chemical relationships in the brain. Professor B.L. Jacobs, former director of the Neuroscience program at Princeton University, suggests that depression is caused by the suppression of neurogenesis [3]. In contrast, R.H. Sprengelmeyer, a clinical neuropsychologist, and several colleagues at the University of St. Andrews propose that depression is the result of an impaired insula [8] [Figure 1]. Additionally, Michelle Chandley and Gregory Ordway, professors at East Tennessee State University, connect depression to aberrant levels of norepinephrine [10].

Following from these theories, recent studies have revealed the different areas of the brain that become active or inactive when a person is happy or depressed. Suardi et al. at the University of Bergamo and the University of Turin in Italy examined patients' PET and MRI scans and found that happy memories are associated with activation of the anterior cingulate cortex [Figure 2], prefrontal cortex, and insula [11]. Conversely, George et al. at the Medical University of South Carolina found that depression is associated with inactivity of the prefrontal cortex [25] [Figure 3]. These findings have prompted further studies of the genetics, drugs, and lifestyle choices that make people more or less likely to be happy. Twin studies in particular are used to examine the heritability of happiness and depression, indicating that the individual experience of these emotions can be associated with specific genes. Many of these studies have also led to the creation and modification of different methods of treatment which have been used to alleviate the symptoms of depression. Fregni et al. at Harvard Medical School found that transcranial direct current stimulation (tDCS) is an effective treatment for depression [12]. Additionally, various forms of therapy combined with increasingly functional antidepressants have demonstrated substantial success in the treatment of depression. A better understanding of the relationship between genetics and lifestyle choices may eventually make it possible to influence the brain in ways that promote consistent happiness.

The Genetics of Happiness (and Depression)

Both monozygotic and dizygotic twin studies have demonstrated that happiness is, to a large extent, influenced by genetic factors. Monozygotic (or identical) twins are made when one egg is fertilized by a single sperm; two weeks after conception, the embryo then splits, and two genetically identical babies develop. On the other hand, dizygotic (or fraternal) twins are formed when two eggs are released at a single ovulation and are fertilized by two different sperm. These two fertilized eggs then implant independently in the uterus. Unlike identical twins, dizygotic twins share the same genetic relationship as non-twin siblings [20]. Since monozygotic twins are genetically identical, any similarities in their behavior can be correlated with their genetic profile. This relationship between genetics and behavior can be strengthened and further validated by similar study of fraternal twin behavior. In studies, fraternal twin data can act as a control, or point of comparison, for identical twin data, confirming whether or not the studied behavior has any genetic basis.

In her article “How Happy Are You and Why?,” Professor Lyubomirsky discusses a study which found that identical twins who were separated at birth had very similar happiness scores. In contrast, the happiness levels of fraternal twins were completely uncorrelated [1]. Twin studies like this one have led to a consensus among researchers that the heritability of happiness is approximately 50% [1]. For example, Sullivan et al. collected diagnostic data from family studies and twin studies (both fraternal and identical) that met their inclusion criteria by using direct interviews or, in one instance, questionnaires [14].

Figure 1: The insular cortex, or “Island of Reil,” lies deep within the lateral sulcus of the brain. The insula serves a wide variety of functions in humans ranging from sensory and affective processing to high-level cognition [17]. This image divides the insula into its anterior, mid, and posterior regions, each denoted by a different color.

The extensive inclusion criteria included (1) an explicit distinction between unipolar major depression and bipolar disorder, (2) systematic recruitment of the individuals to be studied as well as confirmation of relatives, (3) direct collection of diagnostic data from all or nearly all subjects, (4) use of defined diagnostic criteria, and (5) diagnostic determination by assessors who were blind to the confirmation source and the diagnoses of other relatives [14]. For family studies, they required a comparison group that was studied in a similar manner. For twin studies, they required that zygosity was estimated with reasonable accuracy [14]. On the basis of prior data about the biology of twinning (i.e., that identical twins are genetically identical and that fraternal twins are, on average, half genetically identical), Sullivan et al. generated a set of expectations [14]. By comparing the experimental data to these expectations, they found that the heritability of major depression ranges from 31% to 42% [14]. Sullivan et al. acknowledge, however, that this range is probably a lower bound and that the heritability is likely to be substantially higher for reliably diagnosed major depression or for subtypes such as recurrent major depression [14].
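Heritability estimates like these are classically derived from the gap between identical- and fraternal-twin correlations. A minimal sketch of Falconer's formula is below; the correlation values are illustrative placeholders, not the actual numbers from the studies discussed here.

```python
# Falconer's formula: a classical estimate of broad-sense heritability
# from twin data. H^2 ~= 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the
# within-pair phenotype correlations for identical (monozygotic) and
# fraternal (dizygotic) twins, respectively.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

# Hypothetical correlations chosen only to illustrate the arithmetic:
h2 = falconer_heritability(r_mz=0.50, r_dz=0.25)
print(f"Estimated heritability: {h2:.0%}")  # -> Estimated heritability: 50%
```

The logic is that identical twins share all their genes while fraternal twins share on average half, so doubling the difference in their correlations isolates the genetic contribution under simple additive assumptions.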

“Twin studies like this one have led to a consensus among researchers that the heritability of happiness is approximately 50%.”

(Source: Wikimedia Commons, Creator: Schappelle)

Professor Avshalom Caspi and his associates at King's College London discovered one of the genes that are indicative of a person’s

Figure 2 - The anterior cingulate cortex is in a unique position in the brain which allows it to make connections with both the “emotional” limbic system and the “cognitive” prefrontal cortex [18]. (Source: Wikimedia Commons, Creator: Mysid Brodmann)


predisposition to experience (or to not experience) depression. In this study of how depression is impacted by life stresses, Caspi determined that a specific gene, 5-HTT, moderates the influence of stressful life events on depression [2]. Individuals with one or two copies of the short allele of the 5-HTT gene exhibited more depressive symptoms, diagnosable depression, and suicidality after reported stressful life events than individuals with two copies of the long allele [2]. This epidemiological study provides evidence of a gene-by-environment interaction, in which an individual's response to environmental stresses is moderated by their genetic makeup [2]. Although genetic predisposition does play a large role in our experience of happiness, the activation of this “depression gene” is largely determined by external factors that can, to an extent, be controlled, such as highly stressful situations, familial support, and professional therapeutic help [1].

The Brain Chemistry of Depression

“The neural mechanisms underlying depression must be better understood to improve the treatment of depression and show what steps can be taken to foster happiness.”

The neural mechanisms underlying depression must be better understood to improve the treatment of depression and show what steps can be taken to foster happiness. The explicit circuitry of depression in the brain is not known, but several scientists have proposed theories using known correlative and causal factors. Professor B.L. Jacobs proposed a theory of depression based on the process of neurogenesis, the birth of new neurons. This process, which continues postnatally and into adulthood, is prominent in the dentate gyrus of the hippocampal formation [3] [Figure 4]. Jacobs hypothesized that the waning and waxing of neurogenesis in the hippocampal formation are important causal factors, respectively, in the precipitation of and recovery from depressive episodes [3]. In support of this theory, Jacobs found that increasing levels of serotonin, a neurotransmitter associated with happiness, enhances the rate of neurogenesis

Figure 3 - The prefrontal cortex is located at the front of the frontal lobe. It is involved in a variety of complex behaviors such as planning and focusing, and it contributes significantly to personality development [19]. (Source: Wikimedia Commons, Creators: Natalie M. Zahr, Ph.D. and Edith V. Sullivan, Ph.D.)

in the dentate gyrus, while stress suppresses neurogenesis [3]. Furthermore, a stress-induced decrease in dentate gyrus neurogenesis is an important causal factor that precipitates episodes of depression. Therefore, therapeutic interventions for depression that increase serotonergic neurotransmission work in part by augmenting dentate gyrus neurogenesis and thereby promoting recovery from depression [3].

Another theory of depression involves the insula, a small region of the cerebral cortex. Sprengelmeyer et al. tested two cohorts of participants with Major Depressive Disorder (MDD). In the first MDD cohort, the researchers used standardized facial expression recognition tasks; for the second cohort, they focused on facial disgust recognition, a function associated with the insular cortex. In the first study, they found that participants with MDD were particularly impaired in recognizing facial expressions of disgust [4]. Since the recognition of disgust is linked to the insular cortex, this suggests that the insula might be dysfunctional in MDD [4,5,6,7,8,9]. In a second study aimed at testing this hypothesis, Sprengelmeyer et al. used discrimination accuracy for disgust as a dependent variable [4]. Discrimination accuracy measures the ability to distinguish targets (facial expressions of disgust) from distracters (other facial expressions, including faces with no emotional expression). Again, they found impaired processing of facial expressions of disgust, suggesting involvement of the insula. In addition, voxel-based morphometry analyses revealed a strong gray matter reduction in the insular cortex of MDD participants. From these data, Sprengelmeyer et al. concluded that discrimination accuracy for disgust was significantly correlated with volumetric reduction within the anterior insula. Thus, cognitive and emotional functions assumed to be associated with the insula are adversely affected in patients with MDD [4].
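Discrimination accuracy in target/distracter tasks of this kind is commonly summarized with the signal-detection index d′, which compares how often targets are correctly flagged with how often distracters are falsely flagged. The sketch below is a generic illustration of that computation, with made-up hit and false-alarm rates; it is not the exact metric or data reported by Sprengelmeyer et al.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical participant: correctly labels disgust 80% of the time,
# mislabels other expressions as disgust 20% of the time.
print(round(d_prime(0.80, 0.20), 2))  # -> 1.68
```

Higher d′ means better discrimination; a participant responding at chance (equal hit and false-alarm rates) scores d′ = 0, which is one way impairment like that reported in the MDD cohorts can be quantified.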
Scientists Chandley and Ordway proposed their own theory of depression in “The Neurobiological Basis of Suicide,” which implicates the actions of the neurotransmitter norepinephrine (NE). Their postmortem findings demonstrate pathology of NE neurons and their surrounding glia in the brains of depressed victims of suicide, strongly implicating a role for NE in the development of depression [10]. NE is produced primarily by neurons in the locus coeruleus (LC) and is described as a participant in the modulation of several behaviors, including “the stress response, attention, memory, the sleep–wake cycle, decision making, and regulation of sympathetic states” [10]. Chandley and Ordway describe how an increase in NE can result in “insomnia, anxiety, irritability, and hyperactivity,” while reduced NE activity leads to “lethargy and loss of alertness and focus” [10].

Figure 4 - The dentate gyrus is a region of the hippocampus that processes incoming information and prepares it for further processing [21]. (Source: Wikimedia Commons, Creator: Hmstradecki)

Interestingly enough, the depletion of NE precipitates depression in individuals with a history of depression, but not in individuals with no history of psychiatric illness [10]. This suggests that depression occurs only in individuals that have “a particular susceptibility to NE depletion-induced mood changes” [10]. This theory is supported by the successful use of NE reuptake inhibitors as antidepressants and attention enhancers for mood and attention disorders [10].

The Treatment of Depression

Transcranial Direct Current Stimulation (tDCS): The options for the treatment of depression have significantly increased as new information about depression and happiness has been discovered. Different types of brain stimulation, for example, have been shown to be effective in the treatment of depression.

Electroconvulsive therapy is currently the most effective treatment available for depression, but it is associated with anesthetic risks, adverse cognitive effects, and social burden. Repetitive transcranial magnetic stimulation, on the other hand, offers a less invasive option for depression treatment, but it is still expensive, and the results are inconsistent. Deep brain stimulation and vagal nerve stimulation are also being studied as potential depression treatments, but both are invasive [12]. Recently, a dated brain stimulation technique has received renewed attention: transcranial stimulation with weak direct currents [12]. Fregni et al. found that tDCS is effective, inexpensive, non-invasive, and painless. In a randomized, controlled, double-blind trial, Fregni et al. investigated the effects of 5 days of tDCS on the brains of 10 patients with major depression by randomly assigning them to one of two groups: active or sham tDCS. Electrodes were placed on each subject, and tDCS was then applied at a constant current for 20 minutes per day. At the end of treatment, there were four treatment responders in the active group and no responders in the sham group. The patients who received active stimulation had a significant decrease in Hamilton Depression Rating Scale and Beck Depression Inventory scores compared to baseline. This change was not observed in the patients who received sham stimulation, which further legitimizes tDCS as an effective treatment of depression [12].

“Electroconvulsive therapy is currently the most effective treatment available for depression, but it is associated with anesthetic risks, adverse cognitive effects, and social burden.”

Cognitive Therapy: The growing understanding of the connection between genetics and happiness has in turn increased awareness of one's propensity to be happy or sad. Although one's genetics are determined at birth, one can take certain steps to minimize the unfortunate effects of genetics on depression. New methods of therapy have been found to help individuals suffering from depression and teach avoidance of environmental triggers. Cognitive behavioral therapy (CBT) is a common type of talk therapy (psychotherapy) that can “[help one] become aware of inaccurate or negative thinking so [one] can view challenging situations more clearly and respond to them in a more effective way” [22]. It is often the preferred method of psychotherapy because it can help someone quickly identify and cope with specific challenges [22].

Furthermore, a meta-analysis by R. Dobson identified twenty-eight studies that used a common outcome measure of depression and compared other therapeutic modalities with cognitive therapy [16]. Dobson investigated behavioral therapy, which is based on the theory that all behavior is learned; behavioral therapists believe faulty learning causes abnormal behavior, which means the individual has to learn the correct or acceptable behavior [23]. Dobson also studied pharmacotherapy, which is “the safe, appropriate and economical use of medications as part of interprofessional treatment” [24]. When comparing these methods of therapy, Dobson found that cognitive therapy is more effective than both behavioral therapy and pharmacotherapy in the treatment of clinical depression [16]. Dobson suggests that these findings confirm the superiority of cognitive therapy over other forms of psychotherapy.


Antidepressants: Although existing antidepressants such as serotonin reuptake inhibitors and NE reuptake inhibitors produce subtle changes that require weeks or months to take effect, recent studies show that treatment with new agents results in improvements in mood within hours of dosing in patients who are resistant to typical antidepressants [15]. Professor R. Duman at the Yale University School of Medicine studies how a single dose of ketamine, a medication traditionally used for anesthesia, produces “a rapid onset of antidepressant response that can last several days in the majority of individuals with treatment-resistant unipolar and bipolar depression.” Other recent studies report that ketamine also reduces suicidal ideation, which Duman claims is “a major advance over typical antidepressants that have low efficacy and delayed onset of action.” In addition to ketamine, there is evidence that low doses of scopolamine produce similar actions in subjects with depression. The rapid antidepressant and anti-suicide actions of ketamine and scopolamine represent a significant discovery for the treatment of mood disorders [15].

The Proactive Pursuit of Happiness

“...by being aware of one's environmental triggers, a person can minimize the effect of the genetic variables that make one less likely to be happy and gain a certain degree of control over their happiness.”

Recent research about happiness and depression has enabled scientists to find several new ways to combat depression and experience greater levels of happiness. Twin studies show that, to a large extent, one's predisposition to be more or less happy depends on genetics. Caspi's study on stress concluded that while the 5-HTT gene moderates the influence of stressful life events on depression, activation of this gene is required before depression is experienced. Professor Lyubomirsky argues that this part of Caspi's study demonstrates that genetics do not entirely control one's capacity to be happy. She suggests that by being aware of one's environmental triggers, a person can minimize the effect of the genetic variables that make one less likely to be happy and gain a certain degree of control over their happiness. Today, there are several treatment options for depression; cognitive therapy, antidepressants, and tDCS may enable individuals to overcome a genetic predisposition to depression.

References

[1] Lyubomirsky, Sonja. (2015). How Happy Are You and Why?. Pursuing Happiness. Ed. Matthew Parfitt and Dawn Skorczewski. Boston: Bedford/St. Martin's. pp. 192–196. Print.
[2] Caspi, A., Sugden, K., Moffitt, T. E., Taylor, A., Craig, I. W., Harrington, H. L., Poulton, R. (2003, July 18). Influence of Life Stress on Depression: Moderation by a Polymorphism in the 5-HTT Gene. https://science.sciencemag.org/content/301/5631/386
[3] Jacobs, B., van Praag, H., & Gage, F. (2000). Adult brain neurogenesis and psychiatry: a novel theory of depression. Mol Psychiatry, 5, pp. 262–269.
[4] Sprengelmeyer, R., Steele, J., Mwangi, B., Kumar, P., Christmas, D., Milders, M., & Matthews, K. (2011). The insular cortex and the neuroanatomy of major depression. Journal of Affective Disorders, 133(1-2), pp. 120–127.
[5] Hennenlotter, A., Schroeder, U., Erhard, P., Haslinger, B., Stahl, R., Weindl, A., von Einsiedel, H.G., Lange, K.W., Ceballos-Baumann, A.O. (2005). Neural correlates associated with impaired disgust processing in pre-symptomatic Huntington's disease. Brain, pp. 1446–1453.
[6] Kipps, C.M., Duggins, A.J., McCusker, E.A., Calder, A.J. (2007). Disgust and happiness recognition correlate with anteroventral insula and amygdala volume respectively in preclinical Huntington's disease. J. Cogn. Neurosci., 19, pp. 1206–1217.
[7] Phillips, M.L., Young, A.W., Senior, C., Brammer, M., Andrew, C., Calder, A.J., Bullmore, E.T., Perrett, D.I., Rowland, D., Williams, S.C., Gray, J.A., David, A.S. (1997). A specific neural substrate for perceiving facial expressions of disgust. Nature, 389, pp. 495–498.
[8] Sprengelmeyer, R., Rausch, M., Eysel, U.T., Przuntek, H. (1998). Neural structures associated with recognition of facial expressions of basic emotions. Proc. Biol. Sci., 265, pp. 1927–1931.
[9] Wicker, B., Keysers, C., Plailly, J., Royet, J.P., Gallese, V., Rizzolatti, G. (2003). Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust. Neuron, 40, pp. 655–664.
[10] Chandley, M.J., Ordway, G.A. (2012). Noradrenergic Dysfunction in Depression and Suicide. In: Dwivedi, Y., editor. The Neurobiological Basis of Suicide. Boca Raton, FL: CRC Press/Taylor & Francis; Chapter 3.
[11] Suardi, A., Sotgiu, I., Costa, T., Cauda, F., & Rusconi, M. (2016). The neural correlates of happiness: A review of PET and fMRI studies using autobiographical recall methods. Cognitive Affective & Behavioral Neuroscience, 16(3), pp. 383–392.
[12] Fregni, F., Boggio, P., Nitsche, M., Marcolin, M., Rigonatti, S., & Pascual-Leone, A. (2006). Treatment of major depression with transcranial direct current stimulation. Bipolar Disorders, 8(2), pp. 203–204. https://doi.org/10.1111/j.1399-5618.2006.00291.x
[13] Holden, C. (2003). Future brightening for depression treatments. Science, 302(5646), pp. 810–813.
[14] Sullivan, P., Neale, M., Kendler, K. (2000). Genetic Epidemiology of Major Depression: Review and Meta-Analysis. Am J Psychiatry, 157, pp. 1552–1562.
[15] Duman, R., Aghajanian, G., Sanacora, G., et al. (2016). Synaptic plasticity and depression: new insights from stress and rapid-acting antidepressants. Nat Med, 22, pp. 238–249.
[16] Dobson, K. (1989). A Meta-Analysis of the Efficacy of Cognitive Therapy for Depression. Journal of Consulting and Clinical Psychology, 57(3), pp. 414–419.
[17] Uddin, L.Q., Nomi, J.S., Hébert-Seropian, B., Ghaziri, J., Boucher, O. (2017). Structure and Function of the Human Insula. Journal of Clinical Neurophysiology, 34(4), pp. 300–306.
[18] Stevens, F.L., Hurley, R.A., Taber, K.H., Hayman, L.A. (2011, April 1). Anterior Cingulate Cortex: Unique Role in Cognition and Emotion. Journal of Neuropsychiatry and Clinical Neurosciences, 23(2), pp. 121–125.
[19] Prefrontal Cortex. (2019, September 4). Retrieved from https://www.goodtherapy.org/blog/psychpedia/prefrontal-cortex
[20] Types of Twins. (n.d.). Retrieved from https://www.twins.org.au/research/twin-and-data-resource/76-types-of-twins
[21] Jonas, P., & Lisman, J. (2014, September 10). Structure, function, and plasticity of hippocampal dentate gyrus microcircuits. Front Neural Circuits, 8, pp. 107.
[22] Cognitive Behavioral Therapy. (2019, March 16). Mayo Clinic. Retrieved from https://www.mayoclinic.org/tests-procedures/cognitive-behavioral-therapy/about/pac-20384610
[23] Mcleod, S. (n.d.). Behavioral Therapy. Retrieved from https://www.simplypsychology.org/behavioral-therapy.html
[24] Pharmacotherapy. (n.d.). Retrieved from https://www.bpsweb.org/bps-specialties/pharmacotherapy/
[25] George, M.S., Ketter, T.A., Post, R.M. (1994). Prefrontal cortex dysfunction in clinical depression. Depression, 2(2), pp. 59–72.


Hypnosis: Myth or Medicine?
BY KAMREN KHAN '23

Cover: Hypnosis demonstrates the complex relationship between perception and reality (Source: www.shutterstock.com/image-illustration/brain-thinking-concept-3d-illustration-629171159)

“Though still in the early stages of development, hypnotic analgesia (pain relief through hypnosis) clearly shows promise.”

Introduction

For many, the term “hypnosis” likely brings up images of a psychic behind a hazy crystal ball, but despite this skeptical interpretation, hypnosis has become increasingly recognized as a viable technique of clinical pain management. Though still in the early stages of development, hypnotic analgesia (pain relief through hypnosis) clearly shows promise. On a basic level, hypnosis results in an altered level of awareness [12]. More specifically, hypnosis refers to the induction of a receptive state of consciousness from which perception, and in particular the perception of pain, can be manipulated, as in the case of hypnotic analgesia.

General Perception and Study of Hypnosis
The mystical 'pop-culture' perception of hypnosis was around from the beginning. Hypnosis first appeared during the 18th century in the clinical practices of Franz Mesmer, who defined it as a product of supernatural forces.12 It then grew into a somewhat widely accepted, though not fully understood, practice among psychologists and psychiatrists. The clinical use of hypnosis peaked with the widespread treatment of World War I and II veterans suffering from combat neuroses. Ultimately, the history of clinical hypnosis is largely characterized by psychological applications and a vague understanding of the underlying mechanisms. Modern science has yielded a far greater understanding of the effects of hypnosis on the brain. Hypnosis was specifically tied to neural processing in a study of visual illusion. Through positron emission tomography (PET), a method of functional imaging using radioactive tracers, a group of experimenters monitored activity in a subregion of the fusiform gyrus sensitive to color perception.


Figure 1: The physical process by which an external stimulus (‘pain’) is recognized and the path by which a response to said stimulus occurs. (Source: Shutterstock)

The experimenters found that hypnotized subjects showed brain activation consistent with color perception when told to perceive color, regardless of whether they were shown a colored or gray-scale image.10 Similarly, when experimenters told hypnotized subjects to see gray-scale, the subjects demonstrated decreased activation of the fusiform region regardless of whether or not the image was colored. These findings, though relevant only to subjects deemed "highly hypnotizable," contradict minimizing explanations of hypnosis such as the response expectancy theory, which seeks to define hypnosis as merely a consequence of social expectation.15 Instead, this study suggests that hypnosis has concrete effects on neural processing that occur beyond the consciousness of individuals.

Analgesic Functions of Placebo
Placebos alter typical patterns of perception and responsiveness through 'top-down' neurological processes rather than through any physiological 'bottom-up' effects, but they can generate therapeutic benefits nonetheless. Hypnosis operates through a similar mechanism, taking advantage of the brain's executive network to inhibit pain perception. Placebos therefore constitute a valuable entry point for considering the effect of altered consciousness on pain perception. In a study on the analgesic effects of placebos, patients who received thoracotomies were split into three post-operative treatment groups. Each group was given the same treatment: narcotics by request and a saline solution (with no analgesic effect). The experimenters gave the first group no information about the saline solution. The second group was informed that the saline solution was either a highly effective painkiller or a placebo, and the third group was led to believe that the saline solution was a highly effective painkiller. The experimenters then studied the relative perceived pain of each group by comparing the average quantity of narcotics requested. The second and third groups demonstrated a significant reduction in opioid use as compared to the first group.13 This trend is consistent with the assertion that verbal instructions about potential analgesia lead to an altered experience of pain. More simply, placebos can reduce the perceived intensity of pain. While it is clear that placebos have analgesic effects, the mechanism by which this occurs remains unclear. Additionally, one is left to wonder whether the analgesic effects constitute an alteration in the immediate reception and processing of pain or simply in the later conscious valuation of said pain. In an effort to specify the biological processes underlying the analgesic effects of placebos, a group of researchers used positron emission tomography (PET) to study brain activity in response to a sustained pain stimulus. The researchers found that the presence of a placebo was associated with increased activation of the endogenous opioid system, a neurological pain-relieving system.16 While this finding does not yield a complete understanding of the analgesic function of placebos, it indicates the presence of a biological component that intercedes prior to the conscious valuation of pain. The analgesic effects of placebos, and by extension hypnosis, can therefore be better understood as the integrated alteration of conscious and unconscious processing.


Successes and Shortcomings of Clinical Applications of Hypnosis

Figure 2: Pharmaceutical drugs, the most commonly used form of pain reliever, often have troubling side effects and economically burden their users. (Source: Shutterstock)

The analgesic role of hypnosis translates logically into a clinical context. One such application is the management of chronic pain. The term chronic pain refers to pain that cannot be exhaustively explained by biological or neuropathic processes and instead is attributed to a complex psychobiological mechanism.4 In 2016, researchers estimated that chronic pain affects 20.4% of the United States population.5 Given the wide variety of causes, doctors address it with an array of treatments and therapies, including hypnosis.9 A recent literature review on hypnotic analgesia as a treatment for chronic pain concluded that clinical trials consistently yielded decreases in self-reported pain for subjects treated with hypnosis. The main biological component of this effect is the modulation of brain and spinal cord function.9 In the studies reviewed, subjects also reported a variety of improvements unrelated to pain, including "improved positive affect, relaxation, and increased energy." Ultimately, hypnotic treatment of chronic pain yielded significant improvement in self-reported pain as well as in other largely unrelated aspects of general health. One shortcoming of the use of hypnosis to treat chronic pain is the high degree of variability in responsiveness. The success rates of the treatment were found to vary by type of injury and type of pain, more specifically between neuropathic ("chronic pain that is initiated by nervous system lesions or dysfunction"11) and non-neuropathic pain.11 The success rate also likely varied on the basis of hypnotic susceptibility, defined by the American Psychological Association as an individual's ability to enter a state of hypnosis.3 Hypnotic susceptibility has been found to be stable on

an individual level and therefore excludes a subset of the population with low hypnotic susceptibility from fully benefiting from hypnotic analgesia.

Application of Hypnosis to Chronic Pain and the Biophysical Healing Process
Given the psychological component of chronic pain, one might expect it to be affected by hypnotic treatment. However, hypnosis has also been shown to have an influence on acute pain. In 1990, researchers compared the childbirth experience of hypnotized women to that of control subjects. They found that, in addition to reduced pain, hypnotic treatment led to shorter Stage 1 labors, less medication, and higher Apgar scores.8 Just as in the study of chronic pain, researchers determined that hypnosis not only treats pain but also yields benefits beyond the scope of analgesia. As in the case of chronic pain, the efficacy of hypnotic treatment of acute pain has been found to vary on the basis of hypnotic susceptibility.14 However, studies have shown that even those in the "low susceptibility" category benefit from hypnotic treatment.2 In addition to the treatment of pain, hypnosis has been shown to influence the healing process. In a randomized controlled pilot study, subjects suffering from bone fractures in the ankle were divided into a hypnosis group and a control group. The subjects were periodically administered regular clinical assessments and radiographs in order to track bodily tissue healing.6 The researchers noted faster healing and greater ankle mobility after nine weeks in the group treated with hypnosis when compared with the control group. A similar study was conducted on post-surgery wound healing. The researchers divided subjects who received mammaplasties into three treatment groups: usual care, adjunctive supportive attention sessions, and adjunctive hypnosis sessions.7 As in the previous study, hypnotic intervention led to accelerated healing in comparison to the control groups. Both studies, however, were of limited population (twelve and eighteen subjects, respectively) and therefore may fail to represent the general public.
Additionally, the hypnosis was not directed towards pain but instead designed specifically to influence the healing process. The benefits of hypnosis are therefore closely related to administrative intent and its resulting psychological effects. Regardless of the meager sample sizes or the nature of the hypnosis, the experimental conclusions still provide valuable insight into the possible clinical applications of hypnosis, not only in pain management but also in healing. The clinical promise of hypnosis cannot be denied. Its analgesic function applies to both chronic and acute pain and even leads to reduced pain in those classified as having low susceptibility to hypnosis. Moreover, hypnosis provides benefits beyond simply reducing pain that can accelerate the healing process and improve quality of life. Aside from the apparent health benefits of hypnosis, it also presents a variety of socioeconomic benefits. Firstly, hypnosis is extremely accessible and inexpensive, as it requires little training and can even be self-administered or digitally administered. Additionally, unlike other forms of pain management such as opioids, hypnosis has no addictive properties or known side effects. Hypnosis could therefore be seen as a valuable alternative to traditional mechanisms of pain management.

What Can We Learn from Hypnotic Analgesia?
The efficacy of hypnosis may also further our understanding of the connection between body, mind, and their underlying biological and neurological substrates. The fact that altering one's psychological state can lead to concrete physiological changes suggests a high degree of cognitive control over sensory perception. This raises the question of whether one's perspective may influence bodily function on a variety of levels, as the mechanisms by which hypnotic analgesia operates almost certainly extend beyond the scope of hypnosis. While the clinical benefit of hypnosis is clear, the subtleties of the mind-body connection continue to evade current scientific understanding and demand further investigation.

References
[1] Stoelb, B. L., Molton, I. R., Jensen, M. P., & Patterson, D. R. (2009). The efficacy of hypnotic analgesia in adults: a review of the literature. Contemporary Hypnosis, 26(1), 24–39. https://doi.org/10.1002/ch.370
[2] Andreychuk, T., & Skriver, C. (1975). Hypnosis and biofeedback in the treatment of migraine headache. International Journal of Clinical and Experimental Hypnosis, 23(3), 172–183. https://doi.org/10.1080/00207147508415942
[3] APA Dictionary of Psychology. (n.d.). Hypnotic susceptibility. Retrieved from https://dictionary.apa.org/hypnotic-susceptibility
[4] Crofford, L. J. (2015). Chronic pain: where the body meets the brain. Transactions of the American Clinical and Climatological Association, 126, 167–183.
[5] Dahlhamer, J., Lucas, J., Zelaya, C., et al. (2018). Prevalence of chronic pain and high-impact chronic pain among adults — United States, 2016. MMWR Morbidity and Mortality Weekly Report, 67, 1001–1006. https://doi.org/10.15585/mmwr.mm6736a2
[6] Ginandes, C. S., & Rosenthal, D. I. (1999). Using hypnosis to accelerate the healing of bone fractures: a randomized controlled pilot study. Alternative Therapies in Health and Medicine, 5(2), 67–75.
[7] Ginandes, C., Brooks, P., Sando, W., Jones, C., & Aker, J. (2003). Can medical hypnosis accelerate post-surgical wound healing? Results of a clinical trial. The American Journal of Clinical Hypnosis, 45(4), 333–351. https://doi.org/10.1080/00029157.2003.10403546
[8] Harmon, T. M., Hynan, M. T., & Tyre, T. E. (1990). Improved obstetric outcomes using hypnotic analgesia and skill mastery combined with childbirth education. Journal of Consulting and Clinical Psychology.
[9] Jensen, M. P., & Patterson, D. R. (2014). Hypnotic approaches for chronic pain management: clinical implications of recent research findings. The American Psychologist, 69(2), 167–177. https://doi.org/10.1037/a0035644
[10] Kosslyn, S. M., Thompson, W. L., Costantini-Ferrando, M. F., Alpert, N. M., & Spiegel, D. (2000). Hypnotic visual illusion alters color processing in the brain. The American Journal of Psychiatry, 157(8), 1279–1284. https://doi.org/10.1176/appi.ajp.157.8.1279
[11] Nicholson, B. (2006). Differential diagnosis: nociceptive and neuropathic pain. The American Journal of Managed Care, 12(9 Suppl), S256–S262.
[12] Orne, T., & Hammer, A. (2020, February 27). Hypnosis. Retrieved from https://www.britannica.com
[13] Pollo, A., Amanzio, M., Arslanian, A., Casadio, C., Maggi, G., & Benedetti, F. (2001). Response expectancies in placebo analgesia and their clinical relevance. Pain, 93(1), 77–84. https://doi.org/10.1016/s0304-3959(01)00296-2
[14] Stoelb, B. L., Molton, I. R., Jensen, M. P., & Patterson, D. R. (2009). The efficacy of hypnotic analgesia in adults: a review of the literature. Contemporary Hypnosis, 26(1), 24–39. https://doi.org/10.1002/ch.370
[15] Whalley, M. (n.d.). Scientific theories of hypnosis. Retrieved from https://hypnosisandsuggestion.org
[16] Zubieta, J. K., Bueller, J. A., Jackson, L. R., Scott, D. J., Xu, Y., Koeppe, R. A., Nichols, T. E., & Stohler, C. S. (2005). Placebo effects mediated by endogenous opioid activity on mu-opioid receptors. The Journal of Neuroscience, 25(34), 7754–7762. https://doi.org/10.1523/JNEUROSCI.0439-05.2005



Alcoholism: Genetic Susceptibility and Neural Mechanisms
BY LIAM LOCKE '21

1. Introduction

Cover: Fluorescent image of neurons and their synaptic connections (Source: Wikimedia Commons)


Dr. Robert Smith, Dartmouth Class of 1902, was the co-founder of one of the most effective programs for addiction rehabilitation known to date: Alcoholics Anonymous (AA). One of the tenets of AA is that an individual who has at one point shown uncontrollable drinking and physical dependence is permanently unable to moderate their alcohol consumption: "one drink will elicit another."1 The program has helped many individuals cope with their addiction and has had a success rate similar to medical intervention in keeping individuals sober over an 8-year study.1 Alcohol dependence, or alcoholism, is a disease characterized by compulsive alcohol consumption, impaired judgement, and a high likelihood of relapse. The transition from intentional to compulsive alcohol drinking is caused by molecular changes in

the brain reward circuit which are not seen in every individual who consumes alcohol. It is estimated that 15-20% of individuals who regularly consume alcohol develop a dependence, and that these individuals have a genetic predisposition towards alcoholism.2,3,4 An early study of familial alcoholism conducted in 1929 showed that among the families of 39 German alcoholics, 53% of fathers, 6% of mothers, 30% of brothers, and 3% of sisters were themselves physically dependent on alcohol.5 Twin studies, in which identical twins adopted into different foster families are tested for behavioral differences, are often used to estimate the contribution of genetics to a particular behavior. A recent meta-analysis of twin studies estimated a 50% heritability of alcoholism, meaning that genetic factors explain 50% of the variance in alcohol dependence. Furthermore, having a first-degree relative who is an alcoholic increases an individual's risk of developing alcoholism fivefold.6 Two different types of alcoholism are outlined in the DSM-V, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders of the American Psychiatric Association: Type-I (late-onset alcoholism) appears after age 25 and is often comorbid with other psychiatric disorders, while Type-II (early-onset alcoholism) appears before 25 and is characterized by socially destructive (often violent) behavior.7,8 Type-II alcoholism is found only in men and is seen frequently in individuals with antisocial personality disorder.9 Although the DSM has provided clinicians with a method to diagnose and treat various forms of alcohol dependence, alcoholics exhibit immense behavioral variation and the disease is highly polygenic (relying on the combined contribution of many genetic variants). Some researchers have even proposed that there are "as many types of alcoholism as alcohol abusers."10

The development of alcoholism is highly dependent on how an individual reacts to acute ethanol intoxication. Ethanol intoxication is biphasic; the primary phase is stimulatory and is observed during the ascending limb of the blood alcohol curve. It is accompanied by feelings of euphoria as well as increased confidence and sociability. The secondary phase is inhibitory and observed during the descending limb of the blood alcohol curve. Individuals report depressive effects during this phase including drowsiness, withdrawal symptoms, and negative mood.11 The potency and duration of each of the two phases are predictive of an individual's propensity to abuse alcohol, but not necessarily their susceptibility to alcoholism. An individual with a longer and more potent primary phase and a shorter and less potent secondary phase will derive more pleasure from consuming alcohol, and this behavior will be perpetuated by positive reinforcement. In a subset of these individuals, chronic exposure to ethanol will cause neuroadaptations, or changes in the weighting of synaptic connections within a particular circuit, that lead to compulsive alcohol consumption and a neglect of the adverse outcomes of their drinking. Conversely, individuals with a relatively short and weak primary phase and a longer and more potent secondary phase will have an aversion to ethanol intoxication and will tend to avoid drinking alcoholic beverages, even if they are genetically predisposed to alcoholism.
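The biphasic time course described above can be caricatured with a toy blood-alcohol model: first-order absorption from the gut superimposed on roughly zero-order (Widmark-style) hepatic elimination produces an ascending and then a descending limb. All parameter values below are illustrative placeholders, not physiological constants.

```python
# Toy blood-alcohol curve: exponential absorption toward a peak combined
# with constant-rate elimination. Parameters are illustrative only.
import math

def bac(t_hours, peak=0.08, k_abs=3.0, elim_rate=0.015):
    """Approximate blood alcohol concentration (g/dL) at time t.
    Absorption is an exponential approach to `peak`; elimination removes
    `elim_rate` g/dL per hour; the result is floored at zero."""
    absorbed = peak * (1 - math.exp(-k_abs * t_hours))
    return max(absorbed - elim_rate * t_hours, 0.0)

# The ascending limb (stimulatory primary phase) is where BAC is still
# rising; the descending limb (inhibitory secondary phase) begins once
# elimination outpaces absorption.
times = [i * 0.25 for i in range(25)]  # 0 to 6 hours in 15-minute steps
curve = [bac(t) for t in times]
peak_index = curve.index(max(curve))
print(f"Peak BAC ~{curve[peak_index]:.3f} g/dL at t = {times[peak_index]:.2f} h")
```

Varying `k_abs` and `elim_rate` stretches or shrinks the two limbs, which is the sketch's point: the relative weight of the two phases, not the curve itself, is what the text links to abuse propensity.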

Figure 1: Robert 'Bob' Smith, Dartmouth Class of 1902 and co-founder of Alcoholics Anonymous. (Source: Wikimedia Commons)

Ethanol has a wide range of physiological effects that make it difficult to elucidate the cellular and molecular processes leading to alcoholism. Linkage and association studies have helped identify alcoholism-related polymorphisms and several studies have investigated the transcriptional and epigenetic status of alcoholic brain tissue post-mortem.12,13,14,15,16,17 However, even with modern techniques, many of the neuroadaptations caused by chronic ethanol exposure remain poorly understood. Consequently, there is currently no highly



Figure 2: Quote on genetic predisposition for addiction from the NIDA director Dr. Nora Volkow (Source: Wikimedia Commons, Creator: Tyler ser Noche)



Figure 3: Metabolism of ethanol in the liver. Having one or more copies of ADH1B, ADH1C, and/or ALDH2*2 makes ethanol intoxication aversive and protects against alcoholism. (Created by the writer in ChemDraw)

successful treatment for alcoholism, and relapse occurs in more than two-thirds of individuals.18 The purpose of this review is to discuss why some individuals are more susceptible to alcoholism than others, outline the structural and molecular changes that occur in the brains of alcoholics, and identify particular gene variants that increase the risk of developing alcoholism.
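The 50% heritability figure cited in the introduction is the kind of estimate classically obtained from twin comparisons via Falconer's formula, h² = 2(rMZ − rDZ). A minimal sketch follows; the correlation values are hypothetical, chosen only so the estimate lands near 50%.

```python
# Falconer's estimate of heritability from twin-pair trait correlations:
#   h^2 = 2 * (r_MZ - r_DZ)
# where r_MZ and r_DZ are the trait correlations for monozygotic and
# dizygotic twin pairs. Correlation values below are hypothetical.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: doubled difference of MZ and DZ correlations."""
    return 2.0 * (r_mz - r_dz)

h2 = falconer_heritability(r_mz=0.55, r_dz=0.30)  # hypothetical inputs
print(f"Estimated heritability: {h2:.0%}")
```

The intuition: MZ twins share ~100% of segregating genes and DZ twins ~50%, so the excess MZ similarity, doubled, approximates the genetic share of trait variance.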

2. Biological Activity of Ethanol Metabolism of Ethanol


Figure 4: Ion exchange in the NMDA receptor. Acute ethanol exposure inhibits channel conductance, reducing membrane permeability to Na+ and Ca2+. Chronic exposure upregulates GluN2B subunit concentrations. (Source: Wikimedia Commons, Creator: 5-HT2AR)


In addition to having psychoactive properties within the central nervous system (CNS), ethanol is metabolized in the liver as a source of calories and can also be considered a food source.19 Metabolism of ethanol is a two-step process. The first step is the conversion of ethanol to acetaldehyde by the enzyme alcohol dehydrogenase. Acetaldehyde is a toxic intermediate which causes nausea and flushed skin. In the second step, acetaldehyde is converted to acetic acid (vinegar) by the enzyme acetaldehyde dehydrogenase.12 Because acetaldehyde is toxic, gene variants that cause a greater buildup of acetaldehyde will make alcohol consumption aversive and will protect the individual from developing compulsive alcohol drinking. Two different gain-of-function mutations in the gene for alcohol dehydrogenase (ADH1B and ADH1C) and one loss-of-function mutation in the gene for acetaldehyde dehydrogenase (ALDH2*2)

have been shown to reduce the incidence of alcoholism.19 The ALDH2*2 allele has the strongest effect and is observed primarily in Asian populations. Individuals homozygous for ALDH2*2 have nearly zero risk of developing alcoholism.20 One medication currently used in the treatment of alcoholism is the acetaldehyde dehydrogenase inhibitor Antabuse (disulfiram). The drug causes an unpleasant and dangerous physiological reaction to alcohol due to excessive buildup of acetaldehyde in the blood, but does not reduce alcohol cravings or motivation to drink.21 A meta-analysis of patients prescribed Antabuse demonstrated no significant difference in return-to-drinking outcomes between patients prescribed Antabuse and placebo.22 The duration of abstinence, however, is longer for this medication than for other prevalent alcoholism medications (naltrexone, acamprosate), but this is likely a reflection of the 14 days the drug continues to be active following cessation.

Ethanol's Activity at Glutamate Receptors
The primary effects of ethanol in the CNS are inhibition of the N-methyl-D-aspartate (NMDA) glutamate receptor and indirect activation of the γ-aminobutyric acid (GABA) receptor. Ethanol is a non-competitive inhibitor at NMDA receptors.23 In the absence of ethanol, activation of an NMDA receptor by glutamate leads to an influx of sodium (Na+) and calcium (Ca2+) ions which generate excitatory postsynaptic potentials (EPSPs). The influx of Ca2+ is also important for the activation of calcium/calmodulin-dependent protein kinase II (CaMKII). CaMKII, along with protein kinase A (PKA), facilitates NMDA activity by adding phosphate groups to the receptor. The inhibitory effects of acute ethanol intoxication at glutamatergic synapses are therefore



twofold: direct inhibition of the NMDA receptor and downregulation of CaMKII activity.24 Such inhibition of NMDA receptors in the hippocampus may be responsible for alcohol-induced blackouts.25 The neurotoxicity of alcohol is also due to ethanol's activity at the NMDA receptor; excessive NMDA inhibition causes an increased release of glutamate from the presynapse, leading to glutamate excitotoxicity and cell death.2 The NMDA receptor is composed of two subunits: GluN1 and GluN2 (types A-D).26 The GluN1 subunit is required for NMDA functionality, so the properties of a particular NMDA receptor depend on which GluN2 subunit dimerizes with GluN1. In mature cells, most NMDA receptors are GluN1/GluN2A heterodimers, a stable form of the receptor with mild excitability. During early development, the brain has a higher concentration of GluN2B subunits, which are phosphorylated more frequently by CaMKII. Neurons rich in GluN2B have also been shown to have higher dendritic spine density and greater levels and duration of long-term potentiation (LTP), the process by which synapses are strengthened.27 While acute ethanol intoxication inhibits NMDA receptors, chronic exposure to ethanol increases the concentration of NMDA receptors in the brain, leading to increased excitability of glutamatergic synapses. Chronic ethanol exposure also increases the concentration of GluN2B-containing receptors, which may generate maladaptive habits through the reopening of developmental plasticity.28 Different gene variants of NMDA receptor subunits may affect an individual's susceptibility to alcoholism. A mutation in the gene for GluN2B (Grin2B) has been found at increased prevalence among groups of alcoholic individuals. This mutation causes hypomethylation of the Grin2B allele and results in higher levels of transcription and more NMDA receptors with GluN2B subunits.29

Ethanol's Activity at GABA Receptors
GABA is the major inhibitory neurotransmitter in the brain and exerts its effects through GABA receptors, which can be either ionotropic chloride channels (allowing chloride ions into the cell) or metabotropic (releasing G-proteins when activated) depending on their subunit composition. In 1954, the Polish chemist Leo Sternbach accidentally discovered benzodiazepines while trying to synthesize alternatives to barbiturates.30 Benzodiazepines


Figure 5: The delta (δ) subunit of the GABAA chloride channel is thought to be the site of action for alcohol's inhibitory effects. High concentrations of δ-containing GABA receptors are expressed in the nucleus accumbens (NAc), an important region in the mesolimbic dopamine pathway. (Source: Wikimedia Commons, Creator: Law3liu)

(e.g., Valium) are GABA receptor agonists used in the treatment of anxiety, seizures, and muscle spasms. Due to similarities in the behavioral effects of alcohol and benzodiazepines, GABA receptors became a strong candidate in alcoholism research. In 1986, Peter Suzdak and his colleagues designed an experimental preparation called a synaptosome to characterize the effects of ethanol on the GABAA receptor. The synaptosomes were created by pinching off the pre- and post-synaptic membranes of a GABAergic synapse to form a vesicle, and the influx of radioactive chloride (36Cl−) was measured in the presence of ethanol. The experiment showed that concentrations of ethanol ranging from 10 to 50 mM were sufficient to activate GABAA receptors—a concentration comparable to the blood alcohol content of an intoxicated individual.31 It is well established that ethanol activates the GABAA receptor and promotes hyperpolarization of the post-synaptic membrane, although the exact mechanism remains somewhat elusive.32 There is increasing evidence that the presence of a delta (δ) subunit may be required for ethanol to activate the GABAA receptor. GABAA receptors with δ-subunits are differentially expressed in the brain, with high concentrations in the nucleus accumbens (NAc), a region crucial for reward learning.19 GABA receptors are also found outside the synapse. Whereas synaptically localized GABA receptors are important for the transient inhibitory post-synaptic potentials (IPSPs) generated by GABAergic synaptic transmission, these extra-synaptic receptors are important for the tonic inhibition of particular brain areas.
Extra-synaptic GABAA receptors containing a δ subunit are thought to be ethanol’s primary site of action and likely contribute to alcohol’s effects on anxiety, motor control, sociality, and judgement.33 While acute exposure to ethanol increases the inhibitory effects of GABA neurotransmission, chronic exposure reduces the net concentration of GABA receptors in the CNS.34
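Why chloride influx hyperpolarizes the membrane follows directly from the chloride reversal potential, given by the Nernst equation. The concentrations below are typical textbook values, not figures from the studies cited above.

```python
# Nernst (reversal) potential for chloride, the ion GABA-A channels conduct.
# Concentration values are typical textbook numbers, used for illustration.
import math

R = 8.314       # gas constant, J/(mol*K)
T = 310.0       # body temperature, K
F = 96485.0     # Faraday constant, C/mol
z = -1          # valence of Cl-

cl_out, cl_in = 110.0, 10.0   # extracellular / intracellular Cl- (mM)

# E_ion = (RT / zF) * ln([out]/[in]); converted from volts to millivolts.
e_cl_mv = (R * T / (z * F)) * math.log(cl_out / cl_in) * 1000.0
print(f"E_Cl ~ {e_cl_mv:.1f} mV")
```

Because E_Cl (about −64 mV with these values) sits below a typical resting potential, opening chloride channels pulls the membrane toward a more negative voltage, i.e. hyperpolarization, which is the inhibitory effect described in the text.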




Figure 5: Co-localization of dopamine (blue) and glutamate (red) in the ventral striatum. VTA = ventral tegmental area; AMY = amygdala; HC = hippocampus; PFC = prefrontal cortex; NAc = nucleus accumbens (i.e., ventral striatum). (Source: Wikimedia Commons, Creator: OldakQuill)


Figure 6: Neuroanatomy of the brain's reward circuit. The closed loop between the ventral tegmental area (VTA) and the nucleus accumbens (NAc) is modified by chronic alcohol exposure in individuals with alcoholism susceptibility. (Source: Wikimedia Commons, Creator: Was a Bee)

Several mutations have been identified in genes coding for the GABAA receptor that increase susceptibility to alcoholism. A proline-serine substitution in the α6 subunit, originally implicated in a study of benzodiazepine sensitivity, has also been found to occur at high rates in alcoholic individuals, especially those with comorbid antisocial personality disorder (ASPD).42 Both positive and negative associations have been identified for a mutation in the β2 subunit. Polymorphisms of the γ2 subunit have been shown to predict the severity of alcohol withdrawal symptoms in mice, which may determine whether an individual's reaction to acute ethanol intoxication is positive or negative.2 In summary, alcohol has a wide variety of physiological effects that differentially modulate an individual's subjective experience of acute ethanol intoxication. Gene variants that cause a buildup of acetaldehyde provide the individual with an alcoholism-resistance phenotype. Antabuse targets this biochemical pathway to discourage alcohol consumption, but does not effectively treat alcohol cravings and is ineffective in ensuring abstinence. Alcohol inhibits NMDA receptors and potentiates GABAA receptors. Activity at the GABAA receptor is important for the neurological effects of acute ethanol intoxication, and variants of the subunits that comprise this receptor may modify an individual's subjective experience of intoxication. A mutation in the GluN2B subunit of the NMDA receptor has been shown to increase the risk of alcoholism, presumably through its effects on dendritic branching and LTP. From ethanol's biological activity, an understanding emerges of the heritable factors that may predispose an individual to enjoy alcohol consumption. However, only 15-20% of individuals who regularly consume alcohol go on to develop alcoholism.2,3,4 Understanding the neurobiological adaptations that occur in the

CNS following chronic ethanol exposure in alcoholics is an important area of research to assist the development of treatment.

3. Natural Reward Learning and Motivation
Natural reward learning is an important evolutionary mechanism that reinforces behaviors suited for survival. Sex, food, and social interaction elicit positive emotions which motivate an organism to engage in these behaviors. The coupling of positive emotions with behaviors suited to survival ensures that these behaviors will be passed down through generations and eventually become the dominant behavior of a species.46 In addition to more obvious behaviors like sex, food, and social interaction, the positive emotions elicited by exercise and caring for children may also be viewed as natural rewards that improve the inclusive fitness of the individual. Our ancestors encountered a wide variety of situations that required diverse behavioral responses. The mammalian brain has evolved a system by which the details of an event are coupled with the event's emotional effects to either reinforce or inhibit a particular response in the future. A famous phrase from B.F. Skinner rings true in the process of reward learning: "behavior is controlled by its consequences."57 Simply stated, positive emotions will promote a behavior and negative emotions will discourage it.

The Brain's Reward Circuit
The emotional processing of an event and the learning of a behavioral response is dependent


on two neurotransmitter systems: dopamine and glutamate. The dopaminergic mesolimbic pathway, often referred to as the brain’s reward circuit, originates in the ventral tegmental area (VTA) of the midbrain. VTA cells project to a variety of limbic structures including the striatum, hippocampus, and amygdala. The striatum also receives glutamatergic inputs from the prefrontal cortex, hippocampus, amygdala, and thalamus. The striatum is able to facilitate reward learning by integrating signals from the dopaminergic and glutamatergic systems.52 The striatum has historically been divided along a ventral-dorsal boundary between the nucleus accumbens (NAc) and the caudate-putamen complex, although there is some controversy about this distinction. Some researchers have favored a 45° shift in the boundary, dividing instead the ventromedial and dorsolateral striatum.56 There is no clear distinction that can be made between the ventral and dorsal striatum based on cytoarchitecture, myeloarchitecture, or chemoarchitecture, so the distinction is one of functionality. The resulting debate about the functional divisions of the striatum is due in part to the generality of its function. The striatum is thought to mediate reward, emotion, habit formation, motor planning, action selection, decision making, and sometimes even executive function.37 The ventral-dorsal distinction is, however, pertinent to the topic of alcoholism because each region has a specialized function in the maintenance of addictive behavior: the ventral striatum facilitates the pleasurable effects of alcohol, while the dorsal striatum establishes habitual and semi-automatic responses in individuals with alcoholism.19,34,35,36,37 The striatum is composed of 95% GABAergic medium spiny neurons (MSNs).35 Dopaminergic projections from the VTA and glutamatergic

projections from the cortex, hippocampus, amygdala, and thalamus synapse on the same dendritic branches of striatal MSNs. This colocalization of dopamine and glutamate is a key feature of the molecular mechanism of reward reinforcement. MSNs express one of two classes of G-protein-coupled dopamine receptors: D1-type dopamine receptors (D1-type MSNs) or D2-type dopamine receptors (D2-type MSNs).38 These two classes of neurons respond differently to dopamine. Activation of D1-type receptors increases the cell’s sensitivity to glutamate, while activation of D2-type receptors decreases the cell’s sensitivity to glutamate.39 The opposing effects of dopamine in D1- and D2-type MSNs are due to the different G-proteins coupled to these receptors, which have opposite effects on the enzyme adenylate cyclase.
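The opposing G-protein coupling described above can be caricatured numerically. The sketch below is a toy model (all constants and function names are invented for illustration, not measured values or an established model): dopamine raises cAMP in D1-type cells (Gαs) and lowers it in D2-type cells (Gαi), shifting glutamate sensitivity in opposite directions.

```python
# Toy model of dopamine's opposing effects on D1- and D2-type MSNs.
# All constants are illustrative, not measured values.

def camp_level(dopamine, receptor, baseline=1.0, gain=0.8):
    """Gαs (D1) stimulates adenylate cyclase; Gαi (D2) inhibits it."""
    if receptor == "D1":          # stimulatory G-protein (Gαs)
        return baseline + gain * dopamine
    elif receptor == "D2":        # inhibitory G-protein (Gαi)
        return max(0.0, baseline - gain * dopamine)
    raise ValueError("receptor must be 'D1' or 'D2'")

def glutamate_sensitivity(camp):
    """PKA activity tracks cAMP and phosphorylates AMPA/NMDA receptors,
    so higher cAMP -> stronger response to the same glutamate input."""
    return camp  # identity mapping keeps the toy model transparent

dopamine = 0.5  # arbitrary units of dopamine release
d1 = glutamate_sensitivity(camp_level(dopamine, "D1"))
d2 = glutamate_sensitivity(camp_level(dopamine, "D2"))
print(f"D1-type sensitivity: {d1:.2f}, D2-type sensitivity: {d2:.2f}")
```

The single dopamine input pushing the two cell classes in opposite directions is the point: the same transmitter strengthens the direct pathway and weakens the indirect one.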

Figure 7: Morphology of a GABAergic medium spiny neuron labeled with green fluorescent protein. (Source: Wikimedia Commons, Creator: Sergb95)

D1-type and D2-type MSNs also separate two important pathways through the striatum known as the direct and indirect pathways. The direct pathway uses D1-type MSNs, which project to the basal ganglia output nuclei (i.e., the substantia nigra reticularis (SNr) and the globus pallidus internal (GPi)). The indirect pathway uses D2-type MSNs, which project indirectly to the basal ganglia output nuclei through the subthalamic nucleus (STN) and globus pallidus external (GPe). The direct and indirect striatal pathways have opposing effects on basal ganglia output and behavior. The direct pathway increases basal ganglia activity through a positive feedback loop and serves as a ‘go’ signal for a behavior. The indirect pathway decreases basal ganglia activity through negative feedback and is thought to act as a

Figure 8: Mechanism of G-protein coupled receptor (GPCR) regulation of cAMP and CREB activity. D1-type receptors are coupled to stimulatory G-proteins (Gαs), which upregulate cAMP and traffic AMPA receptors into the post-synaptic density, increasing these cells’ sensitivity to glutamatergic signals from areas like the prefrontal cortex. D2-type receptors are coupled to inhibitory G-proteins (Gαi), which downregulate cAMP and AMPA receptor concentrations. The end result is a strengthening of the direct pathway and attenuation of the indirect pathway. (Source: Wikimedia Commons, Creator: Evrae8)



‘stop’ signal to inhibit a behavior.37 Rewarding events release dopamine in the striatum; due to the opposing effects of dopamine at D1- and D2-type receptors, rewarding events increase direct pathway activity and suppress indirect pathway activity. The effect is a greater likelihood of behavioral initiation in response to rewarding stimuli and a motivation to engage in these behaviors.

The evolutionary development of the neocortex in humans placed behavior under executive control. In 1993, Robinson and Berridge published their incentive-sensitization theory of addiction, proposing that mesotelencephalic dopamine neurotransmission increases the incentive salience of rewarding stimuli.40 Incentive salience is the psychological process by which a stimulus becomes more attractive and desirable to an individual. Cortical activity is the most important determinant of behavioral output, but excitation of the basal ganglia is also dependent on reward history. Activation of the basal ganglia will require significantly less cortical drive for behaviors that have a high reward association than for those that are not rewarding.36 This “resistance” of the basal ganglia results in a greater excitation of cortex in response to behaviors that are highly rewarding, endowing them with incentive salience. In summary, neural circuitry for the processing of rewards has evolved to motivate organisms to engage in behaviors that improve their survival fitness. The rewarding effects of a behavior are dependent on the release of dopamine in the nucleus accumbens. Repeated exposure to dopamine in the striatum will eventually increase the activity of the direct pathway (the “go” pathway) and inhibit the activity of the indirect pathway (the “stop” pathway) when the event is encountered again. The output of the basal ganglia influences cortical activity and makes rewarding behaviors more salient to the individual.

Figure 9: Neuroanatomy of the brain’s reward circuit. The closed loop between the ventral tegmental area (VTA) and the nucleus accumbens (NAc) is modified by chronic alcohol exposure in individuals with alcoholism susceptibility. (Source: Wikimedia Commons, Creator: Was a Bee)


The process of reward learning is well suited for ensuring the evolutionary success of a species, but is often maladaptive for the wellbeing of humans in modern society. Our primitive reward circuitry is challenged by excessively rewarding stimuli. Processed foods that are extremely high in sugars and fats increase striatal dopamine to a much greater extent than most foods found in nature. This leads to a greater salience of unhealthy foods and the acquisition of behavior that is detrimental to the health of the individual. Similar unnaturally high increases in striatal dopamine have been observed in individuals with other addictions including gambling, kleptomania, compulsive sexual behavior, compulsive shopping, and internet use.41 However, the greatest increases in mesolimbic dopamine are observed during the use of addictive drugs.
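The dynamic described above — a stronger dopamine signal drives a stronger learned association — can be illustrated with a minimal Rescorla–Wagner-style prediction-error update. This is a standard textbook learning model, not one proposed by the sources cited here, and every parameter value is arbitrary.

```python
# Minimal Rescorla-Wagner-style associative learning sketch.
# A larger "reward" (standing in for the size of the striatal dopamine
# signal a stimulus evokes) drives the association toward a higher value.

def learn(reward, trials=20, alpha=0.3):
    """Return associative strength after repeated stimulus-reward pairings.

    alpha  -- learning rate (arbitrary)
    reward -- asymptotic associative value of the stimulus
    """
    v = 0.0
    for _ in range(trials):
        v += alpha * (reward - v)   # prediction-error update
    return v

natural_food = learn(reward=1.0)    # modest dopamine response
processed_food = learn(reward=3.0)  # exaggerated dopamine response
print(f"natural: {natural_food:.3f}, processed: {processed_food:.3f}")
```

The stimulus paired with the exaggerated dopamine signal ends up with roughly three times the associative strength, which is the toy-model analogue of unhealthy foods (or drugs) acquiring outsized salience.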

4. The Development of Alcoholism: Effects of Chronic Ethanol Exposure on the CNS A Skinnerian perspective of alcoholism would predict that alcohol consumption is controlled by its affective consequences. The learned association between alcohol consumption and the euphoric effects of ethanol intoxication reinforces drinking behavior. Although an individual’s subjective reactions to ethanol intoxication are a reasonable predictor of their risk for addiction, liking alcohol does not cause alcohol dependence. The transition between liking alcohol and needing alcohol is contingent on circuit dysfunctions in the mesolimbic pathway. It has also been proposed that chronic ethanol exposure changes the allostatic set-point of this pathway to cause reward deficiency syndrome: the incentive salience of alcohol diminishes the rewarding effects of other activities by inhibiting their relative contributions to cortical activation.34 Variable susceptibility to alcoholism must



therefore reflect genetic differences in the ability of chronic ethanol exposure to cause mesolimbic neuroadaptations. The effects of addictive drugs on striatal dopamine concentrations are generated both through the positive emotions they elicit and (to a much greater extent) their psychoactive properties in the CNS. Psychostimulants such as cocaine and amphetamines act directly at the synapse to increase dopamine concentrations, while alcohol acts through an indirect mechanism involving GABAA receptor activation. The NAc is rich in extra-synaptic GABAA receptors with δ subunits.42 Ethanol potentiates these δ-subunit-containing receptors on NAc MSNs, causing chloride influx and hyperpolarization of these cells. There is a closed loop between the NAc and the VTA. The VTA is normally inhibited by GABAergic projections from the striatum, but in the presence of ethanol, the VTA is released from this tonic inhibition, causing these neurons to secrete dopamine in the NAc. Ethanol’s indirect mechanism of action may explain why alcohol is not initially as addictive as other drugs such as cocaine or amphetamines. However, the prevalence and availability of alcohol, as well as the casual manner in which alcohol is consumed, allows a comparable reward association to develop in alcoholics.19 A simplified view of alcoholism can be understood via the effects of dopamine on the direct and indirect pathways of the basal ganglia. Alcohol increases the concentration of dopamine in the NAc. This increase in dopamine differentially modulates the sensitivity of D1-type and D2-type MSNs to glutamate. Repeated exposure to ethanol will eventually reduce the quantity of D2-type receptors through changes in gene transcription (discussed later), and this reduced concentration of D2 receptors results in lower activity in the indirect pathway of the striatum.
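The NAc–VTA disinhibition chain (ethanol potentiates extra-synaptic GABAA receptors on NAc MSNs, MSN output falls, the VTA is released from tonic inhibition, dopamine release rises) can be sketched as a sign-following toy computation. Only the signs of the connections come from the text; every magnitude and function name here is invented for illustration.

```python
# Toy sketch of the NAc-VTA disinhibition loop under ethanol.
# Connection signs follow the text; magnitudes are invented.

def nac_msn_output(ethanol, baseline=1.0, gaba_gain=0.7):
    """Ethanol potentiates extra-synaptic GABA-A receptors on NAc MSNs,
    hyperpolarizing them and reducing their GABAergic output."""
    return max(0.0, baseline - gaba_gain * ethanol)

def vta_dopamine(msn_output, max_rate=2.0, inhibition=1.0):
    """VTA dopamine neurons are tonically inhibited by NAc GABAergic
    projections; less MSN output means more dopamine released in the NAc."""
    return max(0.0, max_rate - inhibition * msn_output)

for ethanol in (0.0, 0.5, 1.0):
    da = vta_dopamine(nac_msn_output(ethanol))
    print(f"ethanol={ethanol:.1f} -> NAc dopamine={da:.2f}")
```

Two inhibitory links in series yield a net excitatory effect, which is why dopamine rises monotonically with the ethanol input in this sketch.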
Impaired inhibition of the basal ganglia by the indirect pathway shifts the control of behavior to the direct pathway, giving alcohol consumption greater salience. A mutation in the gene for monoamine oxidase A (MAOA) has been shown to occur at higher rates in alcoholic individuals. MAOA is the enzyme responsible for the breakdown of the monoamines (dopamine, norepinephrine, and serotonin). A functional polymorphism in this gene (MAOA-LPR) reduces the functionality of the MAOA protein. A possible mechanism


Figure 10: Genes related to dopamine metabolism like MAOA and COMT commonly appear in genetic studies comparing populations of alcoholics to healthy controls. (Source: Wikimedia Commons, Creator: Was a Bee)

by which the MAOA-LPR allele could cause a greater susceptibility to alcoholism is by prolonging and exacerbating elevated levels of synaptic dopamine in the NAc following alcohol consumption. The MAOA-LPR allele has also been associated with an increase in violent tendencies and impulsivity.15,43 Another gene variant commonly seen at higher frequencies in alcoholics is a functional polymorphism of the D2 receptor, DRD2 TaqI A. The DRD2 TaqI A allele is transcribed at a lower rate than the wild-type DRD2 allele. This reduced transcription was initially thought to originate with a promoter mutation that reduced RNA polymerase recruitment; however, the advent of better sequencing technology has traced the mutation to the nearby gene ANKK1.44 Low functionality of the D2 receptor has also been shown to be a predictor of impulsivity, like the aforementioned MAOA-LPR mutation.45 Delay-discounting is a measurement of impulsivity in which an individual is given the choice between an immediate reward and a larger reward at a later point in time. Reduced functionality of the D2 receptor is predictive of an individual choosing the immediate reward. Alcoholics also tend to choose the immediate reward.
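Delay-discounting choices are commonly modeled with a hyperbolic discount function, V = A / (1 + kD), where a larger discount rate k means steeper devaluation of delayed rewards (i.e., a more impulsive chooser). The sketch below uses that standard model; the particular amounts, delays, and k values are invented for illustration.

```python
# Hyperbolic delay discounting: V = A / (1 + k * D).
# A higher discount rate k models a more impulsive chooser.

def discounted_value(amount, delay, k):
    """Present value of `amount` received after `delay` time units."""
    return amount / (1 + k * delay)

def choose(k, immediate=50, delayed=100, delay=30):
    """Return which option a chooser with discount rate k prefers."""
    return "immediate" if immediate > discounted_value(delayed, delay, k) else "delayed"

print(choose(k=0.01))  # shallow discounting -> waits for the larger reward
print(choose(k=0.10))  # steep discounting  -> takes the immediate reward
```

With k = 0.10, the delayed $100 is worth only 100 / (1 + 3) = 25 now, so the $50 in hand wins — the pattern the text attributes to low D2-receptor functionality.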


5. Molecular Changes in Alcoholism

Dopamine Modulation of Glutamate Sensitivity during Acute Ethanol Exposure

It was previously mentioned that ethanol inhibits NMDA receptors, but the effects of dopamine at glutamatergic synapses in the striatum are much more pronounced than the inhibitory effects of ethanol. Dopamine



Figure 10: Glutamatergic synapses are highly plastic and AMPA receptors are constantly being recycled. PKA increases trafficking of AMPA receptors into the post-synaptic density and increases the strength of the synapse. PKA is activated by cAMP, so the opposing effects of D1- and D2-type receptors on adenylate cyclase result in PKA being turned on in direct pathway neurons and turned off in indirect pathway neurons. The result is a strengthening of direct pathway glutamatergic synapses in response to mesolimbic dopamine. (Source: Wikimedia Commons, Creator: Psy165s2011)


receptors are coupled to G-proteins that interact with adenylate cyclase, the enzyme responsible for cyclic AMP (cAMP) production. Understanding the molecular cascades that promote reward learning is an active area of research for pharmaceutical interventions. Activation of D1-type receptors increases the excitability of glutamatergic synapses by activating protein kinases. Binding of dopamine to D1-type receptors releases a stimulatory G-protein (Gαs) that travels along the inner membrane of the post-synaptic cell, activates adenylate cyclase, and increases levels of cAMP in D1-type MSNs. Increasing the level of cAMP results in a higher activity of protein kinase A (PKA), which phosphorylates specific residues on NMDA receptors and AMPA receptors (another glutamate receptor), increasing channel conductance. The end result of D1-dependent PKA activity is a greater output of the direct pathway. Activation of D2-type receptors in the NAc has the opposite effect. The D2-type receptor is coupled with an inhibitory G-protein (Gαi). The activation of D2-type MSNs reduces the activity of adenylate cyclase and results in a lower concentration of cAMP, reduced activity of PKA, reduced phosphorylation of NMDA and AMPA receptors, and an ultimate decrease in the output of the indirect pathway.47 The above cellular mechanism evolved to support natural reward learning, and it is the same mechanism over-activated by alcohol and other addictive substances.

The temporary phosphorylation of NMDA and AMPA receptors is important for the associative learning of a behavior and a reward, but it is insufficient to generate reward memory. Alcoholism is a chronic relapsing disorder, so the molecular changes that support maladaptive drinking must remain relatively stable over time. AMPA and NMDA receptors are constantly being added to and removed from the synaptic membrane. In addition, the proteins themselves are not stable, meaning that changes in the concentrations of AMPA and NMDA receptors in striatal neurons cannot support persistent cravings and chronic relapsing in alcoholics. Molecular research has identified two transcription factors that are differentially expressed in addicts even following decades of abstinence: the cAMP Response Element Binding protein (CREB) and ∆FosB.48,49,50

CREB Hypofunctionality Increases Alcoholism Risk
The cAMP response element binding protein (CREB) is a transcription factor that can be induced by a number of extracellular signals. It has a variety of actions, including the setting of circadian rhythms, growth control, pituitary proliferation, long-term potentiation, learning, and memory.51 Active CREB requires phosphorylation of the protein at a particular residue, serine 133, which takes place at the end of several molecular cascades.52 One such cascade is the NMDA-dependent activation of CaMKII. CaMKII is a serine/threonine-specific protein kinase and therefore has the correct functionality to activate CREB. After CaMKII phosphorylates CREB, the active transcription factor localizes to the nucleus, where it facilitates transcription of its targets. Activity of CaMKII is crucial for LTP and contributes to the trafficking of NMDA and AMPA receptors into the postsynaptic density.53 Repeated exposure to drugs of abuse chronically increases the activity of CREB in both D1-type and D2-type MSNs. It has been shown that increased CREB activity decreases the pleasurable effects of alcohol.35 This result agrees with the incentive-sensitization theory of addiction, which proposes that the development of an addiction increases the ‘wanting’ of a drug but decreases the ‘liking’ of a drug, and that these behaviors are separable.40 CREB acts through a negative feedback loop to diminish drug responsiveness and is thought to serve as a homeostatic and natural satiety mechanism in the NAc. In addition, acute modulation of CREB activity is thought to cause the withdrawal symptoms of many drugs of abuse. The opioid peptide dynorphin is expressed in D1-type MSNs and is induced by CREB in the NAc. Release of dynorphin causes dysphoria-associated withdrawal by inhibiting dopamine release in the NAc.52 Individuals with particular CREB gene variants



exhibit different responses to repeated alcohol exposure. The dominant-negative variant of the protein, which lacks the serine residue at position 133 (and is therefore functionally inactive), does not display the same desensitizing effects on the pleasurable sensations associated with the drug. Gene variants resulting in CREB insufficiency have been shown to reduce alcohol dependence in a mouse model.54 Loss-of-function mutations in CREB significantly lower the aversive effects of withdrawal through a dynorphin-dependent mechanism.49 Genetic differences in the CREB protein and CREB target genes are therefore possible causes of the differential and familial inheritance of addictive behavior.

∆FosB: A Control Module for Structural Plasticity

The transcription factor ∆FosB, obtained through alternative splicing of the gene FosB, is a control module that organizes changes in structural plasticity.55 Unlike CREB, which shows increased expression in both D1- and D2-type MSNs, repeated exposure to alcohol increases ∆FosB only in D1-type MSNs, suggesting its importance in modulating differences between the striatal direct and indirect pathways. The alternative splicing that produces ∆FosB removes two degron domains that are normally present in the full-length FosB protein. The loss of these degron domains causes a four-fold increase in protein stability. Furthermore, phosphorylation of ∆FosB at serine 27 by any number of serine kinases (such as CaMKII) increases protein stability by an additional 10-fold.50 This increased stability explains why chronic, but not acute, exposure to alcohol causes induction of ∆FosB. As Fos family proteins are transcribed, they are degraded, but ∆FosB is degraded the slowest.
Repeated exposure to a drug will soon allow ∆FosB to become the dominant Fos family protein, which is a possible mechanism for why alcohol may continue to exert its effects several weeks after use has stopped.52 However, consistent transcription of the FosB gene is still necessary for these chronic changes. There is considerable evidence that CREB and ∆FosB transcription levels are modulated by epigenetic factors that are temporally stable.53 ∆FosB exerts its effects on alcoholic behavior by orchestrating the structural reorganization of the NAc and dorsal striatum. Targets of the ∆FosB transcription factor include synaptotagmin, microtubule-associated proteins, actin proteins, and genes related to dendritic spine architecture. In addition to


Figure 11: Biochemical pathways of CREB and ∆FosB induction. Chronic alcohol exposure upregulates CREB in D1- and D2-type MSNs and ∆FosB in D1-type MSNs. These molecular pathways are thought to underlie the shift towards direct pathway control of behavior in response to alcohol cues. (Source: Wikimedia Commons, Creator: Seppi333)

chronic drug exposure, ∆FosB induction has also been shown for natural rewards like food, sex, and exercise.28
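The stability argument above — splicing removes degron domains (~4-fold more stable) and serine-27 phosphorylation adds another ~10-fold — implies that only a slowly degraded protein can accumulate across repeated exposures. A toy exponential-decay simulation makes the point; the relative-stability factors are the source's claim, but the absolute half-lives and dosing schedule below are invented for illustration.

```python
import math

# Toy accumulation model: each "exposure" adds one unit of protein, which
# then decays exponentially with the protein's half-life (hours).
# Half-lives are illustrative; only the ~40x relative stability of
# phosphorylated ∆FosB (4x from lost degrons, 10x from Ser-27
# phosphorylation) comes from the text.

def remaining_after(doses, interval_h, half_life_h):
    """Protein level right after the last of `doses` equally spaced doses."""
    decay = math.exp(-math.log(2) * interval_h / half_life_h)
    level = 0.0
    for _ in range(doses):
        level = level * decay + 1.0
    return level

short_lived = remaining_after(doses=14, interval_h=24, half_life_h=2)
long_lived = remaining_after(doses=14, interval_h=24, half_life_h=2 * 40)
print(f"full-length Fos-family protein: {short_lived:.2f}")
print(f"stabilized phospho-∆FosB:       {long_lived:.2f}")
```

The short-lived protein resets to roughly one unit after every dose, while the stabilized form builds up several-fold over two weeks of daily exposures — the toy-model analogue of ∆FosB induction by chronic, but not acute, alcohol use.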

6. Conclusion

Alcoholism is a highly complex behavioral disorder with many contributing factors. The ultimate effect of chronic ethanol exposure in an alcoholic is an imbalance of the brain’s reward circuitry. The gene variants and systems outlined in this review provide only a small window into the neural and molecular basis of alcoholism. Other sites of action that have been implicated in alcoholism include serotonin synapses, the opiate system, and the hypothalamic-pituitary-adrenal axis. Elucidating the molecular changes that occur in the brain of an alcoholic will help researchers to design and implement better treatments in the future.


References

[1] Moos, R. H., & Moos, B. S. (2004). Long-term influence of duration and frequency of participation in Alcoholics Anonymous on individuals with alcohol use disorders. Journal of Consulting and Clinical Psychology, 72(1), 81.

[2] Enoch, M. A., & Goldman, D. (2001). The genetics of alcoholism and alcohol abuse. Current Psychiatry Reports, 3(2), 144-151.

[3] Everitt, B. J., Belin, D., Economidou, D., Pelloux, Y., Dalley, J. W., & Robbins, T. W. (2008). Neural mechanisms underlying the vulnerability to develop compulsive drug-seeking habits and addiction. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 363(1507), 3125-3135.

[4] Spanagel, R. (2018). Aberrant choice behavior in alcoholism. Science, 360(6395), 1298-1299.

[5] Boss, M. (1929). On the question of the hereditary significance of alcohol. Mdr. Psychiatry, Neurology. 72: 264.



[6] Cotton, N. S. (1979). The familial incidence of alcoholism: a review. Journal of Studies on Alcohol, 40(1), 89-116.

[7] Cloninger, C. R., Bohman, M., & Sivardsson, S. (1981). Inheritance of alcohol abuse: cross-fostering analysis of adopted men. Archives of General Psychiatry, 38, 861-868.

[8] Johnson, B. A. (2010). Medication treatment of different types of alcoholism. American Journal of Psychiatry, 167(6), 630-639.

[9] Mulder, R. T. (2002). Alcoholism and personality. Australian and New Zealand Journal of Psychiatry, 36(1), 46-51.

[10] Vaillant, G. E. (1995). The natural history of alcoholism revisited. Cambridge: Harvard University Press.

[11] Addicott, M. A., Marsh-Richard, D. M., Mathias, C. W., & Dougherty, D. M. (2007). The biphasic effects of alcohol: comparisons of subjective and objective measures of stimulation, sedation, and physical activity. Alcoholism: Clinical and Experimental Research, 31(11), 1883-1890.

[12] Hines, L. M., Ray, L., Hutchison, K., & Tabakoff, B. (2005). Alcoholism: the dissection for endophenotypes. Dialogues in Clinical Neuroscience, 7(2), 153.

[13] Ray, L. A., Mackillop, J., & Monti, P. M. (2010). Subjective responses to alcohol consumption as endophenotypes: advancing behavioral genetics in etiological and treatment models of alcoholism. Substance Use & Misuse, 45(11), 1742-1765.

[14] Salvatore, C., Cerasa, A., Battista, P., Gilardi, M. C., Quattrone, A., & Castiglioni, I. (2015). Magnetic resonance imaging biomarkers for the early diagnosis of Alzheimer's disease: a machine learning approach. Frontiers in Neuroscience, 9. doi: 10.3389/fnins.2015.00307

[15] Cervera-Juanes, R., Wilhem, L. J., Park, B., Lee, R., Locke, J., Helms, C., ... & Ferguson, B. (2016). MAOA expression predicts vulnerability for alcohol use. Molecular Psychiatry, 21(4), 472.

[23] Zhu, W., Bie, B., & Pan, Z. Z. (2007). Involvement of non-NMDA glutamate receptors in central amygdala in synaptic actions of ethanol and ethanol-induced reward behavior. Journal of Neuroscience, 27(2), 289-298.

[24] Halt, A. R., Dallapiazza, R. F., Zhou, Y., Stein, I. S., Qian, H., Juntti, S., ... & Hell, J. W. (2012). CaMKII binding to GluN2B is critical during memory consolidation. The EMBO Journal, 31(5), 1203-1216.

[25] White, A. M. (2003). What happened? Alcohol, memory blackouts, and the brain. Alcohol Research & Health, 27(2), 186-197.

[26] Traynelis, S. F., Wollmuth, L. P., McBain, C. J., Menniti, F. S., Vance, K. M., Ogden, K. K., ... & Dingledine, R. (2010). Glutamate receptor ion channels: structure, regulation, and function. Pharmacological Reviews, 62(3), 405-496.

[27] Gambrill, A. C., & Barria, A. (2011). NMDA receptor subunit composition controls synaptogenesis and synapse stabilization. Proceedings of the National Academy of Sciences, 108(14), 5855-5860.

[28] Kyzar, E. J., & Pandey, S. C. (2015). Molecular mechanisms of synaptic remodeling in alcoholism. Neuroscience Letters, 601, 11-19.

[29] Mahnke, A. H., Miranda, R. C., & Homanics, G. E. (2017). Epigenetic mediators and consequences of excessive alcohol consumption. Alcohol (Fayetteville, NY), 60, 1.

[30] Hanson, David. “Librium.” ACS Publications, pubs.acs.org/cen/coverstory/83/8325/8325librium.html.

[16] Silvia Alfonso-Loeches & Consuelo Guerri (2011) Molecular and behavioral aspects of the actions of alcohol on the adult and developing brain, Critical Reviews in Clinical Laboratory Sciences, 48:1, 19-47, DOI: 10.3109/10408363.2011.580567

[31] Suzdak, P. D., Schwartz, R. D., Skolnick, P., & Paul, S. M. (1986). Ethanol stimulates gamma-aminobutyric acid receptor-mediated chloride transport in rat brain synaptoneurosomes. Proceedings of the National Academy of Sciences, 83(11), 4071-4075.

[17] Kapoor, M., Wang, J. C., Farris, S. P., Liu, Y., McClintick, J., Gupta, I., ... & Tischfield, J. (2019). Analysis of whole genometranscriptomic organization in brain to identify genes associated with alcoholism. Translational Psychiatry, 9(1), 89.

[32] Förstera, B., Castro, P. A., Moraga-Cid, G., & Aguayo, L. G. (2016). Potentiation of gamma aminobutyric acid receptors (GABAAR) by ethanol: how are inhibitory receptors affected?. Frontiers in cellular neuroscience, 10, 114.

[18] Sandra A. Springer, Marwan M. Azar & Frederick L. Altice (2011) HIV, alcohol dependence, and the criminal justice system: a review and call for evidence-based treatment for released prisoners, The American Journal of Drug and Alcohol Abuse, 37:1, 12-21, DOI: 10.3109/00952990.2010.540280

[33] Hanchar, H. J., Dodson, P. D., Olsen, R. W., Otis, T. S., & Wallner, M. (2005). Alcohol-induced motor impairment caused by increased extrasynaptic GABA A receptor activity. Nature neuroscience, 8(3), 339.

[19] Tabakoff, B., & Hoffman, P. L. (2013). The neurobiology of alcohol consumption and alcoholism: an integrative history. Pharmacology Biochemistry and Behavior, 113, 20-37. [20] Wall, T. L., Shea, S. H., Luczak, S. E., Cook, T. A., & Carr, L. G. (2005). Genetic associations of alcohol dehydrogenase with alcohol use disorders and endophenotypes in white college students. Journal of abnormal psychology, 114(3), 456. [21] Johnson, Bankole A. “Pharmacotherapy for Alcohol Use Disorder.” UpToDate, 14 Nov. 2018, www.uptodate.com/ contents/pharmacotherapy-for-alcohol-use-disorder.


[22] Jonas, D. E., Amick, H. R., Feltner, C., Bobashev, G., Thomas, K., Wines, R., ... & Garbutt, J. C. (2014). Pharmacotherapy for adults with alcohol use disorders in outpatient settings: a systematic review and meta-analysis. JAMA, 311(18), 1889-1900.

[34] Bowirrat, A., & Oscar-Berman, M. (2005). Relationship between dopaminergic neurotransmission, alcoholism, and reward deficiency syndrome. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 132(1), 29-37.

[35] Everitt, B. J., & Robbins, T. W. (2013). From the ventral to the dorsal striatum: devolving views of their roles in drug addiction. Neuroscience & Biobehavioral Reviews, 37(9), 1946-1954.

[36] Keeler, J. F., Pretsell, D. O., & Robbins, T. W. (2014). Functional implications of dopamine D1 vs. D2 receptors: A ‘prepare and select’ model of the striatal direct vs. indirect pathways. Neuroscience, 282, 156-175.



[37] Yager, L. M., Garcia, A. F., Wunsch, A. M., & Ferguson, S. M. (2015). The ins and outs of the striatum: role in drug addiction. Neuroscience, 301, 529-541.

[38] Nelson, A. C., Williams, S. B., Pistorius, S. S., Park, H. J., Woodward, T. J., Payne, A. J., ... & Steffensen, S. C. (2018). Ventral tegmental area GABA neurons are resistant to GABA(A) receptor-mediated inhibition during ethanol withdrawal. Frontiers in Neuroscience, 12, 131.

[39] Yin, H. H., & Knowlton, B. J. (2006). The role of the basal ganglia in habit formation. Nature Reviews Neuroscience, 7(6), 464.

[40] Robinson, T. E., & Berridge, K. C. (1993). The neural basis of drug craving: an incentive-sensitization theory of addiction. Brain Research Reviews, 18(3), 247-291.

[41] Grant, J. E., & Potenza, M. N. (2010). Introduction to behavioral addictions. The American Journal of Drug and Alcohol Abuse, 36(5), 233–241. doi: 10.3109/00952990.2010.491884

[42] Davies, M. (2003). The role of GABAA receptors in mediating the effects of alcohol in the central nervous system. Journal of Psychiatry & Neuroscience.

[43] Tikkanen, R., Sjöberg, R. L., Ducci, F., Goldman, D., Holi, M., Tiihonen, J., & Virkkunen, M. (2009). Effects of MAOA-genotype, alcohol consumption, and aging on violent behavior. Alcoholism: Clinical and Experimental Research, 33(3), 428-434.

[44] Eisenberg, D. T., MacKillop, J., Modi, M., Beauchemin, J., Dang, D., Lisman, S. A., ... & Wilson, D. S. (2007). Examining impulsivity as an endophenotype using a behavioral approach: a DRD2 TaqI A and DRD4 48-bp VNTR association study. Behavioral and Brain Functions, 3(1), 2.

[45] Trifilieff, P., & Martinez, D. (2014). Imaging addiction: D2 receptors and dopamine signaling in the striatum as biomarkers for impulsivity. Neuropharmacology, 76, 498-509.

[46] Kelley, A. E. (2004). Memory and addiction: shared neural circuitry and molecular mechanisms. Neuron, 44(1), 161-179.

[47] Surmeier, D. J., Ding, J., Day, M., Wang, Z., & Shen, W. (2007). D1 and D2 dopamine-receptor modulation of striatal glutamatergic signaling in striatal medium spiny neurons. Trends in Neurosciences, 30(5), 228-235.

[48] Blendy, J. A., & Maldonado, R. (1998). Genetic analysis of drug addiction: the role of cAMP response element binding protein. Journal of Molecular Medicine, 76(2), 104-110.

[49] Shaywitz, A. J., & Greenberg, M. E. (1999). CREB: a stimulus-induced transcription factor activated by a diverse array of extracellular signals. Annual Review of Biochemistry, 68(1), 821–861. doi: 10.1146/annurev.biochem.68.1.821

[50] Kolb, B., Mychasiuk, R., Muhammad, A., & Gibb, R. (2013). Brain plasticity in the developing brain. In Changing Brains: Applying Brain Plasticity to Advance and Recover Human Ability (Progress in Brain Research), 35–64. doi: 10.1016/b978-0-444-63327-9.00005-9

[51] Martel, M.-A., Wyllie, D., & Hardingham, G. (2009). In developing hippocampal neurons, NR2B-containing N-methyl-d-aspartate receptors (NMDARs) can mediate signaling to neuronal survival and synaptic potentiation, as well as neuronal death. Neuroscience, 158(1), 334–343. doi: 10.1016/j.neuroscience.2008.01.080

[52] Blanpied, T. A., & Ehlers, M. D. (2004). Microanatomy of dendritic spines: emerging principles of synaptic pathology in psychiatric and neurological disease. Biological Psychiatry, 55(12), 1121–1127. doi: 10.1016/j.biopsych.2003.10.006

[53] Feng, J. (2017). Epigenetics and drug addiction: translational aspects. In Neuropsychiatric Disorders and Epigenetics (pp. 335-360).

[54] Palmisano, M., & Pandey, S. C. (2017). Epigenetic mechanisms of alcoholism and stress-related disorders. Alcohol, 60, 7-18.

[55] Olson, V. G. (2005). Regulation of drug reward by cAMP response element-binding protein: evidence for two functionally distinct subregions of the ventral tegmental area. Journal of Neuroscience, 25(23), 5553–5562. doi: 10.1523/jneurosci.0345-05.2005

[56] Voorn, P., Vanderschuren, L. J., Groenewegen, H. J., Robbins, T. W., & Pennartz, C. M. (2004). Putting a spin on the dorsal–ventral divide of the striatum. Trends in Neurosciences, 27(8), 468-474.

[57] Delprato, D. J., & Midgley, B. D. (1992). Some fundamentals of B. F. Skinner's behaviorism. American Psychologist, 47(11), 1507–1520. doi: 10.1037/0003-066X.47.11.1507



A Branch of Precision Medicine and a Glimpse of the Future: Gene Therapy BY MELANIE PRAKASH '21 Cover Image: Precision Medicine (Source: Needpix, labeled for reuse)

“The term 'precision medicine' arises often in conversation about the medicine of the future. It implicitly argues that the future of medicine will be more tailored to the patient than ever before.”


Introduction

The term “precision medicine” arises often in conversation about the medicine of the future. It implicitly argues that the future of medicine will be more tailored to the patient than ever before. Generally, this means data science-driven analyses of patients based on their genetics (Dias et al., 2018). The NIH groups precision and personalized medicine together as approaches based on “genetic, environmental, and lifestyle factors” (What is the difference between precision medicine and personalized medicine? What about pharmacogenomics?, n.d.). The primary focus of this paper will be the remedy of an individual’s detrimental genetic factors. Gene therapy promises a new, precise era of medicine: rather than merely slowing the spread of a disease, it targets the disease’s genetic basis to resolve the symptoms completely.

Theory of Gene Therapy

Gene therapy is not as young as one might expect; it has only recently garnered wider attention. As long ago as 1971, a form of gene therapy was devised that involved the construction of viral gene delivery vectors. The European Medicines Agency uses two characteristics to identify a gene therapy: first, the presence of an “active substance which contains or consists of a recombinant nucleic acid used in or administered to human beings with a view to regulating, repairing, replacing, adding, or deleting a genetic sequence,” and second, its “therapeutic, prophylactic, or diagnostic effect” (Wirth et al., 2013). Essentially, a gene therapy must contain genetic material that can correct an error in a gene to the benefit of the patient.

Definitions of precision medicine focus on the eventual result as well as on the process and the data utilized. One paper suggests that precision medicine differs from “traditional” forms of medicine because it is based on the philosophy of a process. There is no single “end goal,” because the end is always changing to make the treatment more precise. In the process of refining an appropriate treatment, patients become more stratified by the specificity of their disease. In the end, disease is less of a diagnosis-prognosis-cure scenario and more of a habitual practice (König et al., 2017). In other words, medicine becomes a science of perfecting a treatment. Precision medicine is often considered in terms of big data collection; in order to truly ‘see’ a disease, one must understand as much of its phenotype as possible. From there, subgroups and models can be analyzed and devised. We will not go too far into this methodology, but broadening the application of precision medicine stands as a long-term, logistical goal in the field of medicine. A recurring theme in clinical approaches to gene therapy is the variability involved in developing treatments; in the long term, the hope is that by taking the sum of several individual treatments, a broader picture of each genetic disorder can be assembled (König et al., 2017). The key to an effective gene therapy, however, is its ability to deliver the gene via an “efficient non-toxic gene delivery system” (Young et al., 2006). At present, viruses are deemed the most effective vectors because of their ability to infect human cells. The mutability of the viral genome makes a virus difficult to target and kill, but the same property is being harnessed by scientists to carry genes of interest that can then be inserted into the genome of a cell (Young et al., 2006). Most genetic therapies have been considered with regard to cancer, though gene therapy can also affect other diseases, such as Parkinson’s. Indeed, the number of Mendelian genetic disorders against which gene therapy could produce a functional result suggests that broader applications cannot be long in coming.

History

Frederick Griffith’s experiment established that a non-virulent bacterium could incorporate DNA from other strains. Almost 30 years later, Joshua Lederberg proposed the concept of transduction, the transfer of genetic material between strains of bacteria by phages (viruses that infect bacteria). A few years later, Waclaw Szybalski proved that genetic material could be transferred in such a way as to “rescue” a damaged gene and make it effective again. But only in 1966, when Edward Tatum published a paper suggesting the use of viruses in gene therapy, did a paradigm begin to develop (Wirth et al., 2013). In 1980, Martin Cline tried to use gene therapy with recombinant DNA. Cline was able to insert foreign genes into mouse bone marrow stem cells, and the modified cells were able to grow and reproduce in the bone marrow of other mice as well. Having had such success with mice, Cline wanted to move to human testing, seeking to apply his theory to two patients with beta-thalassemia. The Cline study became infamous, however, because he performed the experiments without ethical approval from the institutional review board at UCLA, where he was a professor (Wirth et al., 2013). Not long after, Michael Blaese and French Anderson (affiliates of the NIH) used a therapeutic gene to treat two children with adenosine deaminase deficiency (ADA-SCID). Neither response was particularly significant, but the treatment began to catch on in other parts of the world. Gene therapy was then set back for years when Jesse Gelsinger died in 1999 after his immune system reacted strongly to a high dose of an adenoviral vector (Wirth et al., 2013). Following his death, gene therapy trials in humans met much greater scrutiny (Verma, 2000).

“Gelsinger's death provided a cautionary tale, but the research didn't stop in its aftermath. Scientists have gone on to develop potential treatments for diseases from cancer to fat metabolism disorder, addressing inherited genetic defects and acquired illnesses alike.”

Gelsinger’s death provided a cautionary tale, but the research didn’t stop in its aftermath. Scientists have gone on to develop potential treatments for diseases from cancer to fat metabolism disorder, addressing inherited genetic defects and acquired illnesses alike. Gendicine, approved in China in 2003, was the first gene therapy drug to reach the market. By introducing a normally functioning p53 gene, it aims to treat patients with cancer. A dose is given once a week for eight weeks. The drug is given by infusion or by injection directly into the tumor (Plasmid / plasmids | Learn Science at Scitable, n.d.). Engineered T-cell transfusions are another application of viral gene therapy. Kymriah is a therapy that reprograms a patient’s T cells. The T cells are removed from the individual and engineered with the necessary intracellular signaling components and a specifically crafted chimeric antigen receptor (CAR) that recognizes CD19. A virus is used to infect the T cells so that they present the engineered receptor, and the T cells are then injected back into the body (Bernardes de Jesus et al., 2012).



In other areas, successful therapies for Parkinson’s, beta-thalassemia, and SCID have emerged in recent years. Many such therapies are based on biomolecules, like the CAR therapy, or on chemically synthesized drugs. However, protein- and peptide-based therapies have short half-lives and limited biodistribution, and are often toxic. Chemically synthesized drugs can likewise provoke immune responses while being complex and expensive to develop, and their effects are short-lived, whereas gene therapy holds the potential for a permanent solution (Bernardes de Jesus et al., 2012).

“Gene therapy can even be used to improve health during aging. One study tested the impact of increasing telomere length.”

Gene therapy can even be used to improve health during aging. One study tested the impact of increasing telomere length. The group was able to insert a gene that increased the production of telomerase reverse transcriptase (TERT) in mice. They found increased insulin sensitivity, less osteoporosis, and better neuromuscular coordination. Lifespan increased significantly in both trials, which used mice of two different ages (Bernardes de Jesus et al., 2012).

Gene therapy can also be used to change the impact of lifestyle. In our original definition of precision and personalized medicine, genetic, environmental, and lifestyle factors were all taken into account. The reality, however, is that a person’s genetics mediate the effect of environmental and lifestyle choices. For instance, obesity may arise due to a genetic predisposition towards greater food intake. In a 1996 study, obese mice were found to have a deficiency in the protein leptin. The gene was corrected such that more leptin was produced, and the mice were able to achieve more normal weights. Eventually, however, the mice returned to more robust eating habits and weight gain (Muzzin et al., 1996). This raises the question of how uncertain the effects of gene therapy can be in the long run.

Basic Biology

Gene therapy is most commonly performed using viral vectors, tools that carry and insert DNA into the genome of interest (Wirth et al., 2013). Non-viral vectors also exist, though there are mixed opinions concerning their effectiveness (Cotrim & Baum, 2008). Viruses are naturally adapted to this task because it is how they survive: by infecting cells and using the cell’s machinery to replicate their own DNA. Genetic therapies may involve either direct viral injection into the body or editing stem cells (from bone marrow, for instance) in a lab and then re-introducing them to the human body.

Because they can integrate their own genome into those of human cells, viruses can insert potentially any gene (including non-viral genes) written into their own genome (Ali et al., 1994). There is an incredible variety of viruses on Earth, and different viruses are best suited to different gene transfers. For instance,

Figure 1: Basic overview of Gene Therapy using Adenovirus Vectors. Notice how the basic biology of virus infection can be used to mass produce cells with new protein characteristics. (Source: How does gene therapy work, n.d.)



Adenoviruses (which have a DNA genome) insert well into respiratory epithelium; by comparison, the herpes simplex virus can be used to transfect neurons. The very first human gene therapy trial used the Shope papilloma virus, which had supposedly been engineered to contain the gene encoding arginase, to treat two patients with a urea cycle disorder. Unfortunately, there was no change in arginine levels, which was later found to be because the gene for arginase was not actually in the virus introduced to the patients (Wirth et al., 2013). There are many ways to classify types of viral gene therapy. For instance, gene therapy can be categorized by the type of cell targeted: somatic gene therapy (affecting somatic, non-reproductive cells) or germline gene therapy (affecting reproductive germ cells). Changes made by germline therapies pass into successive generations, whereas somatic gene therapies are inserted into target cells (any cell except sperm and egg) and benefit only the treated patient (Wirth et al., 2013). Gene therapy can also be categorized by how it is implemented: whether by immune cell engineering, antibody gene expression, or editing to prevent protein production (Collins & Thrasher, 2015).

Current Logistics

Gene therapy clinical trials are in full effect all over the world. However, the rush to see the great potential benefits of gene therapy sometimes leads to undertested theories being applied. Gendicine, in China, replaces a mutated gene that causes cancer; it does not replicate and has been used against head and neck squamous cell carcinomas, though the efficacy of the drug has again been called into question (Wirth et al., 2013). Most gene therapy trials, in fact over 50%, are conducted in the US. Many of the studies look at gene therapy as an application for treating cancer; immune, digestive, and skin diseases are understudied. In 2012, the European Union recommended its first gene therapy product, Glybera. Glybera enhances the expression of lipoprotein lipase in muscle tissue for patients who express it at abnormally low levels (Wirth et al., 2013). The largest benefit of Glybera is that its adeno-associated virus vector is able to transfer genes without integrating into the host chromosome; the risk of cancer is avoided in this way (Miller, 2012). The drug

Figure 2: Overview of the CRISPR-Cas9 system, one of the many directions gene therapy can take; it is predicted that CRISPR-Cas9 will be a popular avenue along which gene therapy expands. Cas9 is a protein that cuts the DNA strand so that new DNA can be inserted into the genome (Source: CRISPR: Implications for materials science, n.d.)

has been shown to reduce the development of pancreatitis attacks in treated patients. Glybera had to undergo a very long approval process. It was developed by a private biotech company called Amsterdam Molecular Therapeutics. The European Union’s Committee for Medicinal Products for Human Use (CHMP) evaluated the therapy and submitted its review to the European Commission. This extreme level of caution towards the application of gene therapy slowed its development. The drug was eventually approved and sold in Europe, but the process necessary for its approval demonstrates a common deterrent to bringing gene therapy to the biomedical market (Ylä-Herttuala, 2012).

“Gene therapy trials are in full effect all over the world. However, the rush to see the great potential benefits of gene therapy sometimes leads to undertested theories being applied.”

As the CRISPR-Cas9 system has evolved, it has been used to solve the problem of switching off a gene as well as repairing non-functional ones. CRISPR-Cas9 is a bacterial immune mechanism which catalogs viruses so that the bacteria can easily recognize them and mount a defense in the future. The CRISPR-Cas9 system is sometimes used to stop the activity of a mutant gene – a cancer oncogene, for example (What are genome editing and CRISPR-Cas9?, n.d.). It may also be used in some gene editing strategies to correct the gene and restore it to its wild type form (What are genome editing and CRISPR-Cas9?, n.d.). Researchers have used



the CRISPR-Cas9 system to genetically edit stem cells such that when they mature into red blood cells they are able to produce fetal hemoglobin (Humbert et al., 2019).

Clinical Applications

“The dystrophy is directly caused by a mutation in this gene; when the gene is repaired, the dystrophy ceases.”

So far we have established the basic biology and theory behind genetic-based precision medicine. Not only are the types of therapy diverse, but the conditions to which they can be applied are diverse as well. Genetic disorders range from a single mutation in one gene to multiple mutations of different types occurring in different cell types. The potential of gene therapy lies in its (theoretical) capacity to resolve any genetic disorder. To demonstrate this, we will discuss key clinical trials and real applications of these technologies.

Eye – Voretigene Neparvovec (Luxturna)

One of the first gene therapies to be sold on the market treats inherited retinal dystrophy (specifically as a result of RPE65 deficiency). Retinal dystrophy leads to the degeneration of rods and cones, the photoreceptors that are important for processing light (Sciences, 2008). These photoreceptors rely on the retinoid cycle, which renews their light-absorbing properties. The cycle itself is carried out by an enzymatic pathway; if any protein in the pathway is disrupted, the photoreceptors will not regenerate (CDMNY, n.d.). This is exactly what happens when there is a mutation in RPE65, a mutation which causes Leber’s congenital amaurosis. The RPE65 mutation usually occurs in the same way across cases of the disease (Wirth et al., 2013). The dystrophy is directly caused by a mutation in this gene; when the gene is repaired, the dystrophy ceases (Ameri, 2018). Because it is a recessive disorder, the deficiency must occur in both copies of the gene (CDMNY, n.d.). The RPE65 deficiency causes a lack of retinal pigment production, which can lead to Leber congenital amaurosis (Sciences, 2008). The disease is incredibly rare; however, because the type of mutation is so consistent, a therapy has been developed against it (CDMNY, n.d.). Artur Cideciyan and his group at the University of Pennsylvania School of Medicine found that only cells exposed to the virus showed any change; that is, this specific gene therapy did not affect any untreated cells. Both rod and cone capability improved (Sciences, 2008). According to the pharmaceutical company Novartis’s website, Luxturna is administered by injection into the eye. The site notes that the retina must still have enough viable cells; again, an instance in which the effect of a therapy depends upon the condition of the patient. Precision medicine has its limits. The duration of the therapy’s effect is still relatively unknown; more data can be collected now that the FDA has approved it for the drug market as of December 2017 (Patel et al., 2016). Gene therapy is limited by the state of disease progression; if the therapy is given too late, the effect may be null. Or, as some studies in canines suggest, the effect may last only a short time before degeneration continues (Ameri, 2018).

Cystic Fibrosis

Cystic fibrosis (CF) is caused by a mutation in a protein called the cystic fibrosis transmembrane conductance regulator (CFTR). The protein encoded by this gene functions as an ion channel, carrying (mainly) chloride ions across the cell membrane and out of the cell. As a result of mutations in CFTR, the secretion

Figure 3: Importance of RPE65. Luxturna repairs the cyclical renewal of photoreceptor cells. The image above shows the basic functioning of the visual system and the role of the photoreceptor cells (Source: CDMNY, n.d.)



of chloride ions into the lungs is disrupted, and adverse effects such as the buildup of thicker mucus and immune system dysfunction ensue (Davies et al., 2007). The excess mucus that results from the lack of chloride secretion can affect many other physiological systems, for instance by blocking the airways (Van Goor et al., 2014). CF is relatively common (for a genetic disorder) within the Caucasian population. Unlike LCA, CF is extremely well reported on and funded because it affects relatively more people (Lindee & Mueller, 2011). While the lifespan of CF patients has extended further into adulthood since the 21st century began, a more permanent solution is still missing.

CF is an autosomal recessive genetic disorder. While there are many mutations, deletion of the amino acid phenylalanine at codon 508 is the most prevalent (Davies et al., 2007). CF is a strong candidate for gene therapy because of this. However, over 1,600 other mutations exist, which can cause the disease to manifest on a spectrum of severity, so gene therapy would need to be adapted to these different cases (Davies et al., 2007).

“A major difficulty in treating Cystic Fibrosis is the systemic nature of the disease. Unlike retinal dystrophy, cystic fibrosis affects many organs of the body: the lungs, the GI tract, the pancreas, and even the reproductive system."

A major difficulty in treating cystic fibrosis is the systemic nature of the disease. Unlike retinal dystrophy, cystic fibrosis affects many organs of the body: the lungs, the GI tract, the pancreas, and even the reproductive system (Davies et al., 2007). A gene therapy that managed to replace the mutated CFTR gene with a functional copy in the lungs would only solve part of the problem. Depending on how far the disease has progressed, measures would still need to be taken to fight it in other parts of the body. The CFTR gene was identified in 1989; the idea behind gene therapy was that a treatment could be inhaled directly into the lungs, enter cells, and allow for production of functional CFTR. To develop a viable therapy, the correct copy of the CFTR gene would need to be inserted into the genome of a cell. Clinical trials were performed on patients as early as 1993; while effective on cells directly, achieving clinical efficacy has proven quite challenging. One distinguishing challenge that CF poses is the need for administration throughout life because of epithelium turnover (Burney & Davies, 2012). This returns us to the methodological philosophy of precision medicine: that it is a science of perfecting a treatment rather than producing a single cure.

While viral vectors are generally more effective, it is difficult for a virus to bypass the lung’s immune defenses. It may even be preferable to use non-viral vectors, although they are less effective at integrating into the genome; the choice will vary with the severity of the CF. Alternatively, the lentiviral system has been proposed: lentiviruses integrate into the host genome but are safer than retroviruses (Srivastava et al., 1999). The pursuit of these gene therapies also slowed as patients began to protest that the glamorization of popular science was overshadowing their own pain and suffering (Lindee & Mueller, 2011).

Sickle Cell Anemia

Sickle cell anemia arises due to a mutation in the beta subunit of hemoglobin that leads to malformation of red blood cells, causing them to take on a sickled appearance. Sickle cell disease (SCD) is a unique condition because it bears a genetic advantage: it is an autosomal recessive disorder, but heterozygous carriers gain a degree of immunity from malaria (Orkin & Bauer, 2019). The symptoms of sickle cell anemia can include stroke and chronic pain due to a lack of functioning red blood cells transporting oxygen throughout the body; anemia is a common outcome (Mangla et al., 2020). Furthermore, the sickled red blood cells can build up in the vasculature and cause clots (Nowogrodzki, 2018). The typical treatment is a bone marrow transplant or a blood transfusion (Mangla et al., 2020). Bone marrow transplants, however, are risky because they require the patient’s immune system to be suppressed so that it does not attack the transplant, leaving the patient temporarily susceptible to infection (Nowogrodzki, 2018). Vaccines and treatments against bacterial infections are helpful here.

Hematopoietic stem cells (HSCs) are another potential solution. The transfusion process takes about a month, during which stem cells are isolated, transfected with the gene of interest, and allowed to differentiate (Nowogrodzki, 2018). The basic approach is to transfer the globin gene into HSCs, which can then develop into red blood cells (RBCs). A lentivirus has been used to accomplish this, but with limited success (Orkin & Bauer, 2019). This is because the lentivirus tends to insert into inappropriate regions of the genome (Hoban et al., 2016).

Challenges and the Future



“The social problems then follow - who would be able to afford gene therapy? Would insurance companies be willing to pay for a solution that could resolve a disease entirely...”


Gene therapy is a difficult avenue to walk down. The specific details of which vector to use are complicated, and viral insertions could potentially lead to “insertional mutagenesis” – the chance that off-target mutations are introduced into the genome. And, remembering the death of Jesse Gelsinger, the immune response to viral vectors must be avoided. The social problems then follow – who would be able to afford gene therapy? Would insurance companies be willing to pay up front for a solution that could resolve a disease entirely, rather than paying for years of conventional treatment? (Wirth et al., 2013). There are ethical considerations too: the side effects of some of these clinical trials have proven debilitating. Cultural and religious values concerning adjusting the biological framework of an individual may limit the range of application, and even the general fear of what allowing gene editing for “good” purposes might lead to causes hesitation in its development (Cotrim & Baum, 2008). The future of gene therapy is uncertain. But while there are many obstacles, the promise of its benefits, should it succeed, lends it infinite possibility.

References

[1] Bernardes de Jesus, B., Vera, E., Schneeberger, K., Tejera, A. M., Ayuso, E., Bosch, F., & Blasco, M. A. (2012). Telomerase gene therapy in adult and old mice delays aging and increases longevity without increasing cancer. EMBO Molecular Medicine, 4(8), 691–704. https://doi.org/10.1002/emmm.201200245

[2] Burney, T. J., & Davies, J. C. (2012, May 29). Gene therapy for the treatment of cystic fibrosis. The Application of Clinical Genetics. https://www.dovepress.com/gene-therapy-for-the-treatment-of-cystic-fibrosis-peer-reviewed-article-TACG

CDMNY. (n.d.). Luxturna. Retrieved April 24, 2020, from http://luxturna.com/image

[3] Collins, M., & Thrasher, A. (2015). Gene therapy: Progress and predictions. Proceedings of the Royal Society B: Biological Sciences, 282(1821), 20143003. https://doi.org/10.1098/rspb.2014.3003

[4] Cooney, A. L., McCray, P. B., & Sinn, P. L. (2018). Cystic Fibrosis Gene Therapy: Looking Back, Looking Forward. Genes, 9(11), 538. https://doi.org/10.3390/genes9110538

[5] Cotrim, A. P., & Baum, B. J. (2008). Gene Therapy: Some History, Applications, Problems, and Prospects. Toxicologic Pathology, 36(1), 97–103. https://doi.org/10.1177/0192623307309925

[6] CRISPR: Implications for materials science. (n.d.). Cambridge Core. Retrieved April 25, 2020, from /core/journals/mrs-bulletin/news/crispr-implications-for-materials-science

[7] Davies, J. C., Alton, E. W. F. W., & Bush, A. (2007). Cystic fibrosis. BMJ: British Medical Journal, 335(7632), 1255–1259. https://doi.org/10.1136/bmj.39391.713229.AD

[8] Dias, M. F., Joo, K., Kemp, J. A., Fialho, S. L., da Silva Cunha, A., Woo, S. J., & Kwon, Y. J. (2018). Molecular genetics and emerging therapies for retinitis pigmentosa: Basic research and clinical perspectives. Progress in Retinal and Eye Research, 63, 107–131. https://doi.org/10.1016/j.preteyeres.2017.10.004

[9] Herman, J. R., Adler, H. L., Aguilar-Cordova, E., Rojas-Martinez, A., Woo, S., Timme, T. L., Wheeler, T. M., Thompson, T. C., & Scardino, P. T. (1999). In Situ Gene Therapy for Adenocarcinoma of the Prostate: A Phase I Clinical Trial. Human Gene Therapy, 10(7), 1239–1250. https://doi.org/10.1089/10430349950018229

[10] Hoban, M. D., Orkin, S. H., & Bauer, D. E. (2016). Genetic treatment of a molecular disorder: Gene therapy approaches to sickle cell disease. Blood, 127(7), 839–848. https://doi.org/10.1182/blood-2015-09-618587

[11] Humbert, O., Radtke, S., Samuelson, C., Carrillo, R. R., Perez, A. M., Reddy, S. S., Lux, C., Pattabhi, S., Schefter, L. E., Negre, O., Lee, C. M., Bao, G., Adair, J. E., Peterson, C. W., Rawlings, D. J., Scharenberg, A. M., & Kiem, H.-P. (2019). Therapeutically relevant engraftment of a CRISPR-Cas9–edited HSC-enriched population with HbF reactivation in nonhuman primates. Science Translational Medicine, 11(503). https://doi.org/10.1126/scitranslmed.aaw3768

[12] Keeler, A. M., & Flotte, T. R. (2019). Recombinant Adeno-Associated Virus Gene Therapy in Light of Luxturna (and Zolgensma and Glybera): Where Are We, and How Did We Get Here? Annual Review of Virology, 6(1), 601–621. https://doi.org/10.1146/annurev-virology-092818-015530

[13] Kitson, C., & Alton, E. (2000). Gene therapy for cystic fibrosis. Expert Opinion on Investigational Drugs, 9(7), 1523–1535. https://doi.org/10.1517/13543784.9.7.1523

König, I. R., Fuchs, O., Hansen, G., von Mutius, E., & Kopp, M. V. (2017). What is precision medicine? European Respiratory Journal, 50(4), 1700391. https://doi.org/10.1183/13993003.00391-2017

[14] Lindee, S., & Mueller, R. (2011). Is Cystic Fibrosis Genetic Medicine’s Canary? Perspectives in Biology and Medicine, 54(3), 316–331. https://doi.org/10.1353/pbm.2011.0035

Mangla, A., Ehsan, M., & Maruvada, S. (2020). Sickle Cell Anemia. In StatPearls. StatPearls Publishing. http://www.ncbi.nlm.nih.gov/books/NBK482164/

[15] Miller, N. (2012). Glybera and the future of gene therapy in the European Union. Nature Reviews Drug Discovery, 11(5), 419. https://doi.org/10.1038/nrd3572-c1

[16] Muzzin, P., Eisensmith, R. C., Copeland, K. C., & Woo, S. L. C. (1996). Correction of obesity and diabetes in genetically obese mice by leptin gene therapy. Proceedings of the National Academy of Sciences, 93(25), 14804–14808. https://doi.org/10.1073/pnas.93.25.14804

[17] Nowogrodzki, A. (2018). Gene therapy targets sickle-cell disease. Nature, 564(7735), S12–S13. https://doi.org/10.1038/d41586-018-07646-w

[18] Orkin, S. H., & Bauer, D. E. (2019). Emerging Genetic Therapy for Sickle Cell Disease. Annual Review of Medicine, 70(1), 257–271. https://doi.org/10.1146/annurev-med-041817-125507

[19] Patel, U., Boucher, M., de Léséleuc, L., & Visintini, S. (2016). Voretigene Neparvovec: An Emerging Gene Therapy for the Treatment of Inherited Blindness. In CADTH Issues in Emerging Health Technologies. Canadian Agency for Drugs and Technologies in Health. http://www.ncbi.nlm.nih.gov/books/NBK538375/

[20] Plasmid | Learn Science at Scitable. (n.d.). Retrieved April 24, 2020, from https://www.nature.com/scitable/content/plasmid-6623303/

[21] Reference, G. H. (n.d.). How does gene therapy work? Genetics Home Reference. Retrieved April 24, 2020, from https://ghr.nlm.nih.gov/primer/therapy/procedures

Reference, G. H. (n.d.-a). What are genome editing and CRISPR-Cas9? Genetics Home Reference. Retrieved April 24, 2020, from https://ghr.nlm.nih.gov/primer/genomicresearch/genomeediting

[22] Reference, G. H. (n.d.-b). What is the difference between precision medicine and personalized medicine? What about pharmacogenomics? Genetics Home Reference. Retrieved April 24, 2020, from https://ghr.nlm.nih.gov/primer/precisionmedicine/precisionvspersonalized

[23] Sciences, N. A. of. (2008). In This Issue. Proceedings of the National Academy of Sciences, 105(39), 14745–14746. https://doi.org/10.1073/iti3908105

[24] Srivastava, M., Eidelman, O., & Pollard, H. B. (1999). Pharmacogenomics of the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) and the Cystic Fibrosis Drug CPX Using Genome Microarray Analysis. Molecular Medicine, 5(11), 753–767. https://doi.org/10.1007/BF03402099

[25] Van Goor, F., Yu, H., Burton, B., & Hoffman, B. J. (2014). Effect of ivacaftor on CFTR forms with missense mutations associated with defects in protein processing or function. Journal of Cystic Fibrosis, 13(1), 29–36. https://doi.org/10.1016/j.jcf.2013.06.008

[26] Verma, I. M. (2000). A Tumultuous Year for Gene Therapy. Molecular Therapy, 2(5), 415–416. https://doi.org/10.1006/mthe.2000.0213

[27] Wirth, T., Parker, N., & Ylä-Herttuala, S. (2013). History of gene therapy. Gene, 525(2), 162–169. https://doi.org/10.1016/j.gene.2013.03.137

[28] Ylä-Herttuala, S. (2012). Endgame: Glybera Finally Recommended for Approval as the First Gene Therapy Drug in the European Union. Molecular Therapy, 20(10), 1831–1832. https://doi.org/10.1038/mt.2012.194

[29] Young, L. S., Searle, P. F., Onion, D., & Mautner, V. (2006). Viral gene therapy strategies: From basic science to clinical application. The Journal of Pathology, 208(2), 299–318. https://doi.org/10.1002/path.1896

WINTER 2020

83


The Origins of Electron Crystallography and the Merits of Data Merging Techniques

BY NISHI JAIN '21

The first electron crystallographic protein structure determined at atomic resolution, bacteriorhodopsin, was completed by Richard Henderson at the Medical Research Council Laboratory of Molecular Biology in 1990 (Source: Wikimedia Commons).

“The higher resolving potential of the TEM is due to the inherent physical characteristics of electrons.”


Introduction to Electron Crystallography

Electron crystallography has its roots in the methodologies that have historically defined x-ray crystallography, in which an x-ray beam is shone on a macromolecular crystal and the resulting diffraction patterns allow the molecular shape to be determined1,2. Because electrons interact strongly with atoms arranged in a crystalline lattice, even very small crystals produce an effective diffraction pattern from an electron beam, which can then be used to determine molecular shape3. Modern high-throughput electron crystallography relies on the transmission electron microscope, or TEM. With a TEM, a beam of electrons is transmitted through a material, most frequently a sample from a living specimen, to form a diffraction image from which the structure can be determined.

The raw diffraction image is then magnified and focused onto an imaging device so that it can be visualized by the scientist4. Owing to its resolving capabilities, the TEM is often superior to a traditional light microscope. The higher resolving potential of the TEM is due to the inherent physical characteristics of electrons. Like their photon counterparts, electrons have wavelike properties in addition to particulate characteristics, as established by the seminal work of French quantum physicist Louis de Broglie5. Beyond this postulate, his contribution to electron crystallography is best represented by the de Broglie equation, where λ is the de Broglie wavelength, h is Planck's constant, m is mass, and v is velocity:

λ = h/mv


When considering traditional microscopes, the resolution that a light microscope can obtain is limited by the wavelength of the photons applied to the sample. The de Broglie wavelength of an electron is smaller than that of a visible-light photon, and this smaller wavelength allows for higher-resolution images6. Abbe's equation further explores this relationship; here n is the index of refraction, α is the half-aperture angle, and λ is the wavelength used:

Maximum resolution (nm) = λ/(2n sin α)

Maximum resolution in Abbe's equation is inverted from the conventional sense: the smaller its value in nm, the more precise the overall image7. Applying de Broglie's insight to Abbe's equation, a conventional light microscope yields a larger (coarser) resolution because the photon's wavelength is comparatively large. In TEM, the electron's smaller wavelength shrinks the numerator, giving a smaller resolution in nm and making TEM microscopy more precise than conventional light microscopy. The wave behavior of tiny particles ultimately assists image resolution: electrons behave like electromagnetic radiation, but with wavelengths substantially shorter than those of photons, and hence exceptional accuracy.
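To make this comparison concrete, the two equations can be evaluated numerically. The sketch below uses standard physical constants; the 100 kV accelerating voltage and the objective's n sin α ≈ 0.95 are typical assumed values, not figures from this article:

```python
import math

# Physical constants (SI units)
h = 6.626e-34        # Planck's constant (J s)
m_e = 9.109e-31      # electron rest mass (kg)
q_e = 1.602e-19      # elementary charge (C)

# de Broglie wavelength: lambda = h / (m v), with m v = sqrt(2 m E)
# for an electron accelerated through 100 kV (non-relativistic sketch).
E = q_e * 100e3                              # kinetic energy (J)
lam_electron = h / math.sqrt(2 * m_e * E)    # on the order of 0.004 nm

# Abbe's limit d = lambda / (2 n sin(alpha)) for green light in a good
# objective, taking n sin(alpha) ~ 0.95 (an assumed, typical value).
lam_light = 550e-9
d_light = lam_light / (2 * 0.95)             # on the order of 300 nm

print(lam_electron * 1e9, d_light * 1e9)     # both printed in nm
```

The electron wavelength comes out roughly five orders of magnitude shorter than visible light, which is why the TEM's Abbe limit is correspondingly finer.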

y, z planes. This equation represents the Fourier transform of the transmitted wave that exits the bottom of the specimen, breaking the complex diffraction wave pattern down into a sum of harmonic frequencies11. The focal length of the intermediate aperture, or opening, in the middle of the TEM can then be further adjusted so that the user can easily switch back and forth between the diffraction pattern and the direct image12.
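This Fourier-transform relationship can be sketched numerically (assuming NumPy is available; the grid size and aperture dimensions below are chosen purely for illustration):

```python
import numpy as np

# In the Fraunhofer (far-field) limit, the diffracted amplitude is the
# 2-D Fourier transform of the wave leaving the specimen or aperture.
# Sketch: the diffraction pattern of a small rectangular aperture.
N = 512
aperture = np.zeros((N, N))
aperture[N//2 - 8 : N//2 + 8, N//2 - 16 : N//2 + 16] = 1.0  # 16x32 slit

amplitude = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(amplitude) ** 2   # what a detector would record

# The central (zero-frequency) peak dominates, and the intensity falls
# off in a sinc^2 envelope along each axis -- the classic rectangular
# diffraction pattern.
peak = intensity[N//2, N//2]
```

Plotting `intensity` reproduces the cross-shaped pattern of bright lobes characteristic of a rectangular aperture.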

Origins of Electron Crystallography

The very first images taken with the TEM were those of the tail of a bacteriophage. The T4 bacteriophage has an intricate tail composed of helical fibers, antiparallel beta-strand needle domains, and receptors that facilitate viral infection13. This structure was first resolved using TEM, and it was specifically sought out because the phage had been used extensively in genetics and heredity experiments, including the famous 'Waring Blender' experiment of Hershey and Chase, which confirmed that DNA, and not

“The very first images that were taken with the TEM were those of the tail of a bacteriophage.”

Exploring the Transmission Electron Microscope (TEM)

In TEM, the source of electrons is a heated tungsten filament. The electrons are accelerated by an electric potential and then focused by electrostatic and electromagnetic lenses, with the electrostatic lens assisting the movement of the charged electrons while the electromagnetic lens focuses them8,9. Since the magnetic strength of both lenses can be adjusted simply by varying the current that runs through the electromagnet's coil, there is considerable flexibility in lens focal length, beam intensity, and image magnification10. Magnetic deflection coils help align the beam transmitted from the lenses, which carries the information used to formulate an image10. The Fraunhofer diffraction pattern that forms in the lens' focal plane can be modeled by Fraunhofer's equation in Cartesian coordinates, where the diffracted wave is observed in the x, y, z planes.

Figure 1: The basic structure of a transmission electron microscope (TEM), in which electrons are generated by a heated tungsten filament in the electron gun, focused by electrostatic and electromagnetic “condenser lenses” located in the condenser aperture, and subsequently diffracted in the diffractor lens of the apparatus (Source: Wikimedia Commons).


Figure 2: A computer-simulated representation of the Fraunhofer diffraction pattern produced by imaging through a rectangular aperture (Source: Wikimedia Commons).

“Alongside biological samples, work was being done to apply electron microscopy to other scientific disciplines.”

protein, was the genetic material14. The imaging of the T4 bacteriophage paved the way for many subsequent experiments that uncovered more about the microscopic world. TEM helped determine the structure of icosahedral viruses, whose twenty equilateral-triangle faces are arranged symmetrically, in contrast to the structurally simpler helical viruses discovered earlier15,16. TEM also allowed microscopy to move into the imaging of inorganic molecules, aiding the structural determination of two-dimensional crystals17. The field then shifted toward biological molecules, beginning with the structures of the mammalian fatty acid synthase complex and the ribosome18,19,20,21. Alongside biological samples, work was being done to apply electron microscopy to other scientific disciplines. For instance, the TEM optimization that produced the structure of bacteriorhodopsin ultimately allowed scientists to investigate its proposed atomic model22. LHC II, the photosynthetic light-harvesting protein, was imaged a few years later using the same optimization techniques23,27. A significant limitation of TEM was that specimens became radiation-damaged after prolonged exposure to the electron beam24. To combat this problem, several scientists in the late 20th century introduced a method for merging data from multiple identical samples into one aggregated image or data set. This method allowed scientists to expose each specimen to only small doses of radiation (to avoid degradation) and then


combine the images to form a coherent visual of the structure. This data-merging method was further supplemented by the discovery that thin aqueous films could be converted, by rapid freezing, into a glasslike solid that keeps specimens viable for a longer period of time25,26,27. After these two methods were implemented, the structure of tubulin, a protein that plays a central role in many eukaryotic cells, was solved at the turn of the 21st century28. Around the same time, the structure of the ribosome was determined to a resolution of better than 2 nm, also using data merging and sample preservation29. By merging data from many individual particles and preserving the original samples well, it was soon thought that atomic resolution via data-merging techniques was imminent30.
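The core of the merging idea can be sketched with a toy calculation (illustrative NumPy code; the "particle," noise level, and number of images are invented for the example):

```python
import numpy as np

# Data-merging sketch: many low-dose (hence noisy) images of identical
# particles are averaged. The random noise partially cancels while the
# common structure reinforces, giving roughly a sqrt(N) gain in
# signal-to-noise for N merged images.
rng = np.random.default_rng(0)

true_image = np.zeros((32, 32))
true_image[12:20, 12:20] = 1.0          # stand-in for a particle

n_copies = 100
noisy = true_image + rng.normal(0.0, 0.5, size=(n_copies, 32, 32))

merged = noisy.mean(axis=0)             # the "merged" reconstruction

err_single = np.abs(noisy[0] - true_image).mean()
err_merged = np.abs(merged - true_image).mean()
# err_merged is roughly 1/sqrt(100) = 10x smaller than err_single
```

The same principle underlies the averaging of many individual particle images described above, though real workflows must also align the images and reject degraded ones first.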

Data Merging Further Explored

Data merging evidently played a significant role in the development of better imaging, and it continues to be used today. As discussed earlier, data merging was warranted by the radiation damage that resulted when a structure was imaged for too long with the TEM31. Although the technique has been in use since its inception in the late 20th century, it has reached a new


level of effectiveness in recent years. Modern computing power and image processing have made images sharper and more defined32. Computer programs can now accurately distinguish the noise associated with each image, determine commonalities between the multiple structures being compared, and collect images of the structures prior to their radiation-induced degradation33. Computer processing also allows for the mathematical modeling of electron diffraction and fading at multiple intensities as programs elucidate the structures under investigation. Most importantly, however, these programs are instrumental in identifying structures that, though appearing normal at the beginning of the experiment, may have been partially degraded along the way and should not be included in the final results. Although many organisms that are imaged share similar basic structures, there are inherent differences between structures that are often so minute that not even the TEM can detect them34. (These are often biochemical differences that cause variable reactions to radiation and variation in the onset of degradation.) Thanks to modern computer processing, scientists can easily isolate images that were exposed to too much radiation and have become minutely fragmented35. Despite the power of computers, they too are susceptible to error. For this reason, additional techniques have been developed to reduce reliance on computer processing, with some scientists arguing that mistakes in the computer processing would leave the foundational understanding of the structure flawed32,34. In addition to rapid freezing and glass crystallization, other temperature-dependent methods of cryofixation and operation at liquid nitrogen or liquid helium temperatures have been developed36.
With all these tools at their disposal, scientists have used concurrent imaging techniques in which many different particles are imaged and the radiation is distributed to minimize degrading effects37. In preliminary experiments, scientists have also determined the radiation exposure necessary to produce an image of the three-dimensional position of a single molecule, the exposure at which the sample degrades, and the slim window between the two in which imaging is optimal. This optimal radiation exposure is calculated by determining Np, the number of particles that must be used for the image. With this, scientists can further


Figure 3: An example of an electron micrograph of bacteriophages attached to a bacterial cell. DeRosier and Klug were the first scientists to accurately image the tails of the bacteriophages, which allow them to access the host cell. Those tails are depicted here as the thin white lines extending from the head of the bacteriophage to the cell body (Source: Wikimedia Commons).

optimize their methods to deliver a survivable amount of radiation that is still enough to produce an image37. Although these methods have significantly reduced the number of samples suffering radiation damage, computer processing has not been discarded, because computer-aided statistical analysis remains extremely valuable in data merging. Quantitative estimation is required even after the non-degraded images are isolated, as there is still noise that must be normalized away to reveal the true structure. The ease of assembling these images, coupled with modern computer processing, brings in the namesake of the procedure: crystals. Crystals are useful because they can be used to obtain different structure factor amplitudes from the diffraction patterns38. Because thousands of particle images are often required, the computational challenge of eliminating noise and aligning all the images of a highly complex molecule would stump even the most advanced computer. The use of crystals solves this problem; by confining the specimen to a small, ordered region, they remove some of the potential inaccuracies that would otherwise cause computational processing requirements to grow excessively39.

“... there have been additional techniques developed to reduce reliance on computer processing, with some scientists claiming that mistakes in the computer processing would cause the foundational understanding of the structure to be flawed.”

Conclusion

Crystallography techniques, though essential



to biological and biochemical imaging, draw from methods developed decades ago. Data merging across many different imaged specimens continues to be employed, and although small changes have been made to prime the system for the more modern specimens now being modeled, it remains a cornerstone methodology with room to grow.

References

[1] Scott, A. (1921). Crystallography. Science Progress in the Twentieth Century (1919-1933), 15(60), 547-550. [2] Gallat, F., Matsugaki, N., Coussens, N., Yagi, K., Boudes, M., Higashi, T., . . . Chavas, L. (2014). In vivo crystallography at X-ray free-electron lasers: The next generation of structural biology? Philosophical Transactions: Biological Sciences, 369(1647), 1-4. [3] Yonekura, K., Kato, K., Ogasawara, M. (2015). Electron crystallography of ultrathin 3D protein crystals: Atomic model with charges. Proceedings of the National Academy of Sciences of the United States of America, 112(11), 3368-3373. [4] Binder, B. (1983). The Inner Life of the Electron Microscope. The Science Teacher, 50(9), 18-22. [5] Kunkle, G. (1995). Technology in the Seamless Web: "Success" and "Failure" in the History of the Electron Microscope. Technology and Culture, 36(1), 80-103. [6] Hanle, P. (1977). Erwin Schrödinger's Reaction to Louis de Broglie's Thesis on the Quantum Theory. Isis, 68(4), 606-609. [7] Kendall, M. (1971). Studies in the History of Probability and Statistics. XXVI: The Work of Ernst Abbe. Biometrika, 58(2), 369-373. [8] Cockayne, D., Kirkland, A., Nellist, P., & Bleloch, A. (2009). Preface: New Possibilities with Aberration-Corrected Electron Microscopy. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 367(1903), 3633-3635. [9] Herring, R. (2011). A New Twist for Electron Beams. Science, 331(6014), 155-156. [10] Batson, P. (2011). Unlocking the time resolved nature of electron microscopy.
Proceedings of the National Academy of Sciences of the United States of America, 108(8), 3099-3100. [11] Fitzgerald, A., & Mannami, M. (1966). Electron Diffraction from Crystal Defects: Fraunhofer Effects From Plane Faults. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 293(1433), 169-180. [12] Lichte, H., Geiger, D., & Linck, M. (2009). Off-Axis Electron Holography in an Aberration-Corrected Transmission Electron Microscope. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 367(1903), 3773-3793. [13] Liu, L., Quillin, M., & Matthews, B. (2008). Use of Experimental Crystallographic Phases to Examine the Hydration of Polar and Nonpolar Cavities in T4 Lysozyme. Proceedings of the National Academy of Sciences of the United States of America, 105(38), 14406-14411. [14] Spence, J., & Chapman, H. (2014). Introduction: The birth of a new field. Philosophical Transactions: Biological Sciences,


369(1647), 1-3. [15] Rosenthal, P. (2015). From high symmetry to high resolution in biological electron microscopy: A commentary on Crowther (1971) 'Procedures for three-dimensional reconstruction of spherical viruses by Fourier synthesis from electron micrographs'. Philosophical Transactions: Biological Sciences, 370(1666), 1-12. [16] Basta, T., Wu, H., Morphew, M., Lee, J., Ghosh, N., Lai, J., . . . Stowell, M. (2014). Self-assembled lipid and membrane protein polyhedral nanoparticles. Proceedings of the National Academy of Sciences of the United States of America, 111(2), 670-674. [17] Arndt, U. (2001). Instrumentation in X-Ray Crystallography: Past, Present and Future. Notes and Records of the Royal Society of London, 55(3), 457-472. [18] Moffat, K. (2014). Time-resolved crystallography and protein design: Signalling photoreceptors and optogenetics. Philosophical Transactions: Biological Sciences, 369(1647), 1-6. [19] Sawaya, M., Cascio, D., Gingery, M., Rodriguez, J., Goldschmidt, L., Colletier, J., . . . Eisenberg, D. (2014). Protein crystal structure obtained at 2.9 Å resolution from injecting bacterial cells into an X-ray free-electron laser beam. Proceedings of the National Academy of Sciences of the United States of America, 111(35), 12769-12774. [20] Neutze, R. (2014). Opportunities and challenges for time-resolved studies of protein structural dynamics at X-ray free-electron lasers. Philosophical Transactions: Biological Sciences, 369(1647), 1-9. [21] Ramachandran, G. N. “Protein Structure and Crystallography.” Science, vol. 141, no. 3577, 1963, pp. 288–291. [22] Bartesaghi, A., Matthies, D., Banerjee, S., Merk, A., & Subramaniam, S. (2014). Structure of β-galactosidase at 3.2-Å resolution obtained by cryo-electron microscopy. Proceedings of the National Academy of Sciences of the United States of America, 111(32), 11709-11714. [23] Aidelsburger, M., Kirchner, F., Krausz, F., Baum, P., & Zewail, A. (2010). 
Single-electron pulses for ultrafast diffraction. Proceedings of the National Academy of Sciences of the United States of America, 107(46), 19714-19719. [24] Hubert, S., Uiterwaal, C., Barwick, B., Batelaan, H., & Zewail, A. (2009). Temporal Lenses for Attosecond and Femtosecond Electron Pulses. Proceedings of the National Academy of Sciences of the United States of America, 106(26), 1055810563. [25] Vijayan, M. (2002). The story of insulin crystallography. Current Science, 83(12), 1598-1606. [26] Baum, P., & Zewail, A. (2006). Breaking Resolution Limits in Ultrafast Electron Diffraction and Microscopy. Proceedings of the National Academy of Sciences of the United States of America, 103(44), 16105-16110. [27] Kupitz, C., Grotjohann, I., Conrad, C., Roy-Chowdhury, S., Fromme, R., & Fromme, P. (2014). Microcrystallization techniques for serial femtosecond crystallography using photosystem II from Thermosynechococcus elongatus as a model system. Philosophical Transactions: Biological Sciences, 369(1647), 1-8.



[28] Van Hove, M. (1993). Surface Crystallography with Low-Energy Electron Diffraction. Proceedings: Mathematical and Physical Sciences, 442(1914), 61-72. [29] Pendry, J., & Stoneham, A. (1992). Electronic Structure of Surfaces and of Adsorbed Species [and Discussion]. Philosophical Transactions: Physical Sciences and Engineering, 341(1661), 293-300. [30] Gibbons, M. (2012). Reassessing Discovery: Rosalind Franklin, Scientific Visualization, and the Structure of DNA*. Philosophy of Science, 79(1), 63-80. [31] Yefanov, Oleksandr, et al. “Mapping the Continuous Reciprocal Space Intensity Distribution of X-Ray Serial Crystallography.” Philosophical Transactions: Biological Sciences, vol. 369, no. 1647, 2014, pp. 1–7. [32] White, T. (2014). Post-refinement method for snapshot serial crystallography. Philosophical Transactions: Biological Sciences, 369(1647), 1-6. [33] Petsko, G. (1992). Art is Long and Time is Fleeting: The Current Problems and Future Prospects for Time-Resolved Enzyme Crystallography. Philosophical Transactions: Physical Sciences and Engineering, 340(1657), 323-334. [34] Cassetta, A., Deacon, A., Emmerich, C., Habash, J., Helliwell, J., McSweeney, S., . . . Weisgerber, S. (1993). The Emergence of the Synchrotron Laue Method for Rapid Data Collection from Protein Crystals. Proceedings: Mathematical and Physical Sciences, 442(1914), 177-192. [35] T., Hanashima, S., Suzuki, M., Saiki, H., Hayashi, T., Kakinouchi, (2016). Membrane protein structure determination by SAD, SIR, or SIRAS phasing in serial femtosecond crystallography using an iododetergent. Proceedings of the National Academy of Sciences of the United States of America, 113(46), 13039-13044. [36] Cassetta, A., Deacon, A., Emmerich, C., Habash, J., Helliwell, J., McSweeney, S., . . . Weisgerber, S. (1993). The Emergence of the Synchrotron Laue Method for Rapid Data Collection from Protein Crystals. Proceedings: Mathematical and Physical Sciences, 442(1914), 177-192.
[37] Chen, Julian C.-H., et al. “Direct Observation of Hydrogen Atom Dynamics and Interactions by Ultrahigh Resolution Neutron Protein Crystallography.” Proceedings of the National Academy of Sciences of the United States of America, vol. 109, no. 38, 2012, pp. 15301–15306. [38] Silvestre, H., Blundell, T., Abell, C., & Ciulli, A. (2013). Integrated biophysical approach to fragment screening and validation for fragment-based lead discovery. Proceedings of the National Academy of Sciences of the United States of America, 110(32), 12984-12989. [39] Tran, R., Kern, J., Hattne, J., Koroidev, S., Hellmich, J., Alonso-Mori, R., . . . Yachandra, V. (2014). The Mn 4 Ca photosynthetic wateroxidation catalyst studied by simultaneous X-ray spectroscopy and crystallography using an X-ray free-electron laser. Philosophical Transactions: Biological Sciences, 369(1647), 1-6.




Artificial Neural Network Approaches to Echocardiography BY SAHAJ SHAH '21

Cover Image: Artificial Neural Networks (Source: Wikimedia Commons)

“Recent advances in cardiovascular imaging techniques, especially echocardiography, have allowed clinicians and researchers to store and collect large quantities of medical data related to heart health.”

Abstract Recent advances in cardiovascular imaging techniques, especially echocardiography, have allowed clinicians and researchers to store and collect large quantities of medical data related to heart health. Artificial Neural Networks (ANNs) can be used as an effective model to analyze these vast quantities of data and predict outcomes, improve clinical care, reduce the time taken by medical professionals to perform analytic tasks, and yield new knowledge for precision medicine phenotyping. This paper provides an overview of the advances in machine learning in the field of echocardiography.

Introduction

Echocardiography is a crucial tool for making medical diagnoses related to the heart. An echocardiogram is a non-invasive approach

that uses sound waves, allowing physicians to observe the heart through a monitor, measure its state, make diagnoses, and report abnormalities. Several cardiac conditions, such as ventricular tachycardia, myocardial infarction, mitral valve stenosis, and acute coronary syndrome, can be diagnosed using echocardiograms1. However, assessing an echocardiogram is a time-consuming process that requires extensive medical training. Recent research advances have allowed scientists to use artificial neural networks (ANNs) to automate the analysis of echocardiograms, giving machines the ability to learn from images and predict outcomes2.

Artificial Neural Networks

Artificial Neural Networks (ANNs) are systems of nodes and edges loosely modeled after the neurons in the human brain. They are


comprised of a collection of nodes connected to one another via edges. A simple ANN cell typically takes in an input, performs a desired computation, and produces an output. This model is similar to a neuron that receives molecular inputs at its dendrites and generates an electrical signal that traverses the length of the axon, stimulating the release of neurotransmitters at the axon terminal. Also similar to the changes in neural responses that reflect learning, the cells in an ANN have synaptic weight properties, in vector form, that change to produce better outcomes. ANNs come in a variety of models, including perceptrons, backpropagation networks, and convolutional neural networks. The extent to which these models differ depends on the assembly of the network and its underlying machinery. However, the overarching goal of all ANNs is to observe patterns in data, make predictions, and allow the system to “learn” on its own by minimizing the error between the predicted and actual outputs of the cells. With each iteration, the network improves its machinery, reduces error, and makes more accurate predictions. In an ideal world, the error goes to zero, meaning that the network predicts the outcome exactly, but this is rarely observed in highly complex datasets. In recent years, artificial neural networks have found a variety of applications, especially in Natural Language Processing (NLP), a subfield in which computers learn and analyze human languages, powering, for example, personal chat assistants. But the most compelling recent application of these neural networks lies in the realm of image-based diagnosis, where machine learning is being used to classify images, flag suspicious activity, and reveal new medical knowledge. In order to understand how these neural networks extract information and classify images, a few properties of neural networks must be outlined.
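A minimal sketch of the single ANN cell just described (illustrative Python/NumPy; the inputs, weights, and threshold activation are invented for the example, not drawn from any particular network):

```python
import numpy as np

# One artificial "cell": inputs arrive along weighted edges (like
# dendrites), are summed, and a threshold activation decides whether
# the cell "fires" (like the axon's output).
def ann_cell(x, w, b):
    total = np.dot(w, x) + b           # weighted sum of the inputs
    return 1.0 if total > 0 else 0.0   # simple threshold activation

x = np.array([0.5, -1.0, 2.0])   # input vector
w = np.array([0.8, 0.2, 0.5])    # synaptic weights

print(ann_cell(x, w, b=-0.5))    # fires (1.0): 0.4 - 0.2 + 1.0 - 0.5 = 0.7 > 0
```

Learning, discussed below, consists of adjusting `w` so that the cell's output moves closer to the desired outcome.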

Supervised vs. Unsupervised Networks

Learning in ANNs can take two forms: supervised and unsupervised. In supervised networks, the networks are presented with labelled data containing an input vector x and a training vector t, where t corresponds to the desired outcome for input vector x. These labels are used to train the networks, which learn from patterns in the data by adjusting their synaptic weights based on the difference between the

output vector (y) and t. With each iteration, the goal of the network is to predict an outcome y that is increasingly close to t. After the network is thoroughly trained, it is presented with testing data to make predictions. Unsupervised networks, on the other hand, do not need a “teacher” or a label to learn. Instead, they acquire patterns without external labelling of data. An unsupervised network identifies clusters and patterns within the data and adjusts its weights toward a cluster. Unsupervised networks more closely resemble human neural networks.

Feedforward vs. Feedback In a simple feedforward network, an output

from one cell becomes the input to the next cell. In feedback networks, the output of the cell becomes the input to the same cell. Hence, the output equals the input; this arrangement is commonly seen in unsupervised networks.

Convolutional Neural Networks

Convolutional Neural Networks (CNNs), a subclass of ANNs, are used to identify patterns in images and videos. They are widely used in cardiology because of their ability to successfully extract and classify features from images and videos. The origins of CNNs trace back to the early work of Hubel and Wiesel on the visual cortex of the cat3. Motivated to understand the process of visual recognition, the researchers projected patterns of light and dark on a screen in front of a cat and recorded neural activity. Some neurons responded to changes in light patterns whereas others responded to changes in dark. They called these neurons simple cells. Further experiments revealed a complex “receptive field” arrangement of these simple cells that gave insights into how complex inputs are processed by the visual cortex4. Each optic nerve fiber has a receptive field comprising a central disk and a surrounding concentric ring, each region responding differently to illumination. While stimulation of the center field strengthened firing of the simple cell, illumination of the surrounding region decreased firing. Different patterns of firing were observed when the shape or motion of the object was changed, making it possible to predict the nature of the stimulus from the pattern of cell firing. This change in the pattern of cell stimulation upon different spatial arrangements of the input

“Convolutional Neural Networks (CNNs), a subclass of ANN, are used to identify patterns from images and videos. They are widely used in cardiology because of their abilities to successfully extract and classify features from images and videos.”



Figure 2: Convolution. The convolution filter activates a collection of vectors from the source (receptive field), and the summation of the two vectors is stored in the destination. The destination is also called a feature map due to the delineation of features that results from convolution. These features can be higher order or lower order depending upon their placement in the network. Source: Wikimedia Commons

became the basis for CNNs.

“Images contain a collection of features. For example, an image of a number can be broken down into horizontal, vertical, and curved lines.”


CNNs are multi-layered perceptrons and use an algorithm to find patterns and recognize stimulus activity based on geometrical similarity6. CNNs are designed to complete two activities: feature detection and image classification.

Feature Extraction

Images contain a collection of features. For example, an image of a number can be broken down into horizontal, vertical, and curved lines5. The first step in classifying an image is the detection and extraction of features from the input image or video using a hidden-layer network. A technique called convolution is used to extract patterns from the input vector. Convolution refers to the process of combining two vectors to produce an output vector. This is done with a filter (an array of constant dimensions) that “slides” across the input, convolving the values within the input vector with the values in the filter to produce a vector with fewer dimensions. The resultant output vector is called a feature map. The filter can be thought of as an illumination, and the area it covers as a receptive field. The filter captures sections of the input sequentially, akin to the activation of the receptive field in the Hubel and Wiesel study, which concluded that the image falling on the retina undergoes a stepwise analysis in which each cell has a specific function. The mapping compresses the “activated” input vectors onto the resultant feature map, allowing the layer to better identify redundancies within the input. This process

occurs multiple times depending on the number of convolutional layers.
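As a concrete sketch of this sliding-filter operation, the toy Python function below convolves a 2x2 filter across a 4x4 input to produce a 3x3 feature map. The image, filter, and dimensions here are invented for illustration; real CNN libraries implement the same idea far more efficiently.

```python
def convolve2d(image, kernel):
    """Slide `kernel` over `image` (stride 1, no padding) and sum the
    elementwise products under each position, producing a smaller
    feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # the receptive field: the patch of input the filter "illuminates"
            total = sum(image[i + di][j + dj] * kernel[di][dj]
                        for di in range(kh) for dj in range(kw))
            row.append(total)
        feature_map.append(row)
    return feature_map

# A vertical-edge filter applied to a toy 4x4 image
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
print(convolve2d(image, kernel))
# → [[0, -2, 0], [0, -2, 0], [0, -2, 0]] – the filter responds only along the edge
```

Stacking such layers, each convolving the previous feature map, is what lets later layers respond to higher-order features.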

Image Classification

After convolution, the vectors from the feature map pass through the hidden layers and proceed towards the output. Hidden layers consist of intermediate cells that do not have direct access to the training data but receive the output of the input layer and further modify it. A nonlinear function is introduced, such that an output value beyond a certain threshold gives a particular value (in this case the classification of the object), whereas a value below the threshold gives zero. The learning rule for convolutional networks measures the difference between the training and output data, then propagates the error backwards through the preceding cells, shifting synaptic weights to reduce the error using gradient descent – a process that adjusts synaptic weights so that the network identifies objects with optimal accuracy. During learning, the connections between the cells that produce the desired output are strengthened, similar to the neurons in our brain. Once trained, the network is presented with testing data to make predictions and classifications.
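This learning rule can be sketched for a single sigmoid cell. The weights, inputs, and learning rate below are invented for demonstration; a real CNN applies the same principle to millions of weights at once.

```python
import math

def sigmoid(z):
    # nonlinear activation: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, inputs, target, lr=0.5):
    """One gradient-descent update for a single sigmoid cell: compute the
    output, measure the error against the training target, and shift each
    synaptic weight to reduce that error."""
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    error = output - target
    # gradient of the squared error w.r.t. each weight (sigmoid derivative included)
    grad = error * output * (1 - output)
    new_weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    return new_weights, error ** 2

weights = [0.2, -0.4]          # invented starting synaptic weights
inputs, target = [1.0, 0.5], 1.0
for _ in range(50):
    weights, loss = train_step(weights, inputs, target)
```

Each pass strengthens the connections that push the output toward the target, so the squared error shrinks over the iterations.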

Constraints

A Convolutional Neural Network makes use of three constraints in its architecture to make predictions:

1) Shared Weights: Each cell contains weight properties. However, all the cells in a hidden layer are constrained to share the same weights. This allows all the cells in the hidden layer to respond similarly to a particular feature. The shared weights greatly reduce the complexity of computation, as the weights are now constant parameters. Sharing also removes the spatial component associated with these vectors: the network will respond in the same way when a similar feature is detected elsewhere in the picture.

2) Local Receptive Fields: In a typical artificial neural network, every cell connects to every cell in the hidden layer. This is not the case in CNNs. Only subgroups of activated cells connect and pass information to a given hidden-layer cell, and different subgroups connect and transmit information to different hidden layers. The strength of these connections changes according to a learning rule.

3) Activation/Pooling: In activation or pooling, a cell takes the outputs of a group of neurons and keeps only the highest positive value. This reduces dimensionality, cutting the number of parameters and the computation in the layer. The size of the region detected increases with the number of hidden layers, allowing for the extraction of higher-order features. For example, the initial hidden layer may detect edges in the image, while the last hidden layer is capable of detecting complex shapes. The reduction of dimensionality also allows the network to identify redundancies among the pixels of the image.

Case Study: Echocardiograms

Machine learning has been utilized for multiple applications in the medical field. Here we look at two cases where machine learning is currently being used as a tool for echocardiography. Two-dimensional echocardiography has proven a valuable, non-invasive imaging technique for early diagnosis and evaluation of patients at high risk of cardiac events. Echocardiography uses sound waves to pinpoint areas of the heart with abnormal contractions due to poor blood flow or injury. One type of echo, Doppler, is also used to observe blood flow in the heart and lungs. The use of echocardiography devices in hospitals and emergency care units has increased the capacity for early intervention in patients who otherwise showed normal echocardiogram signals, significantly reducing the prevalence of false negatives7.

However, there are important issues for doctors to consider. Many patients admitted with chest pain do not have an acute coronary syndrome, and 5–10% of those who do are discharged. This places doctors in difficult situations, where they are forced to make quick interpretations. The interpretation of echocardiograms also requires great expertise in visual estimation rather than precise calculation; this judgement is often deferred to expert technicians and cardiologists. Machine learning approaches in recent years have been used to increase the efficiency of the process by automating tasks performed by cardiologists. While physicians are well-trained to perform such tasks, small deviations cannot be detected by eye1. The careful, data-driven approach of AI can lead to early intervention, decrease the number of false negatives, minimize misinterpretation via the precise calculations of machine learning algorithms, and reveal new knowledge in the field of cardiology.

Acute Coronary Syndrome

In the past few years, a great selection of studies has been published, contributing to the rapid progress of the field and bolstering human endeavors to improve healthcare. A group of researchers at MIT described the use of artificial

Figure 3: An example of a convolutional layer network containing input of size 28x28 pixels that convolves into multiple layers of feature maps to result in 26 single vector outputs of size 1x1. (Source: Wikimedia Commons)

WINTER 2020



Figure 4: An Artificial Neural Network (ANN) is used to extract segments of an echocardiogram and stratify high-risk patients for cardiac events. The ECG signal is segmented and converted to a vector format, which serves as the input to the recurrent networks. (Source: Wikimedia Commons)

neural networks to improve risk stratification among patients with acute coronary syndrome. Data from medical records and ECG waveforms were combined to classify patients by risk of a cardiac event. Seven baseline characteristics were chosen to predict risk: age, gender, history of hypertension, history of diabetes, previous myocardial infarction (MI), history of a previous angiography, and whether or not the patient currently smoked8. A recurrent artificial neural network was used to extract clinically significant morphological features from the ECG signals. First, the researchers segmented the ECG signal into beats. They then extracted the ST segments (see Figure 4) that form at the baseline of the peak and mathematically transformed them into vectors containing information on slope and level. These vectors served as the input to the recurrent network, while the seven baseline characteristics were fed into a logistic regression model. The outputs of the two models were then passed to a sigmoidal activation function, which generated a predicted outcome y. This study demonstrated the ability of ANN machine learning algorithms to discriminate among clinical metrics and identify patients at greater risk of death. The combination of the two models provided the best performance, compared to either model used individually. The ANN model was also able to discriminate high-risk patients in two unseen testing datasets, conveying the model’s ability to generalize from learned information and produce reliable outcomes8.

Arrhythmia Detection

Convolutional Neural Networks can be used to detect various classes of heart arrhythmia. A group of researchers at the Stanford Machine Learning Group developed a convolutional neural network model that detects and classifies arrhythmias from ECG signals, exceeding the ability of the average cardiologist in terms of sensitivity and precision9.

The network is a 33-convolutional-layer supervised network that takes in a raw ECG signal and outputs a sequence of predicted classifications. It was trained with segments of ECG signal tagged with the appropriate arrhythmia classifications. Over each iteration, the network learned to recognize ECG signal patterns, segment them, and classify them into one of the 12 arrhythmia classifications. The network contains 15 hidden layers, each associated with two convolutional layers. With a filter size of 16, the input was subsampled and convolved with the filter to form a feature map that served as the input to the next hidden layer, allowing for higher-order feature extraction. The final output is then compared to the training vector, and the difference is propagated back through the hidden layers, shifting the synaptic weights of the cells to reduce error. The model was further tested with unseen ECG signals, and its performance was compared to the assessments of six cardiologists presented with the same signals. Notably, the model was shown to outperform the average cardiologist; it received a precision score of 0.800 compared to 0.723 for the cardiologists9.
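Precision, the metric behind that comparison, is straightforward to compute from predicted and true labels. The toy labels below are invented for illustration; the study's 0.800 vs. 0.723 figures come from a far larger evaluation set.

```python
def precision(predicted, actual, positive_class):
    """Fraction of the model's positive calls that are correct:
    true positives / (true positives + false positives)."""
    true_pos = sum(1 for p, a in zip(predicted, actual)
                   if p == positive_class and a == positive_class)
    called_pos = sum(1 for p in predicted if p == positive_class)
    return true_pos / called_pos if called_pos else 0.0

# toy beat-level labels: "AF" = atrial fibrillation, "N" = normal sinus rhythm
predicted = ["AF", "AF", "N", "AF", "N"]
actual    = ["AF", "N",  "N", "AF", "AF"]
print(precision(predicted, actual, "AF"))  # 2 of 3 "AF" calls correct → 0.666...
```

High precision means the model rarely raises a false alarm, which is exactly the property one wants before trusting automated arrhythmia flags in a clinical workflow.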

Implementation

Several programming packages can be used to implement artificial neural networks. Some examples include the TensorFlow, Theano, and Keras packages available in Python.

Limitations

Despite their high levels of predictive accuracy, ANNs are relatively inexpensive. However, several factors limit the potential application of artificial neural networks to echocardiography, namely inadequate data and overfitting – a modeling error in which a model mimics a limited set of data points too closely – which is costly in a medical setting.

DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE


While echocardiograms are performed on a daily basis, medical imaging data is rarely accessible to researchers due to various restrictions, including patient privacy. The complexity of convolutional neural networks requires huge datasets to make accurate predictions. Furthermore, for supervised neural networks, the process of data labelling is extensive and time-consuming, requiring the expertise of cardiologists and technicians10.

In addition, artificial neural networks often come with the risk of overfitting. While data-driven, the network models often inherit biases from the training data, making them unable to generalize predictions to a diverse set of testing datasets. And while a model may seem to outperform cardiologists, the logic behind a prediction is lost within the abstraction of the algorithm, leaving the cardiologist in a difficult position when charged to make a call11. Given rapid advances in machine learning in the past decade, the invention of new and improved methods will be necessary to further advance the prospects of machine learning in echocardiography.

References

[1] Zhang, J., Gajjala, S., Agrawal, P., Tison, G. H., Hallock, L. A., Beussink-Nelson, L., … Deo, R. C. (2018). Fully Automated Echocardiogram Interpretation in Clinical Practice. Circulation, 138(16), 1623–1635. doi: 10.1161/circulationaha.118.034338
[2] Greaves, S. C. (2002). Role of echocardiography in acute coronary syndromes. Heart, 88(4), 419–425. doi: 10.1136/heart.88.4.419
[3] Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology, 148(3), 574–591. doi: 10.1113/jphysiol.1959.sp006308
[4] Hubel, D. H., & Wiesel, T. N. (1963). Shape and arrangement of columns in cat's striate cortex. The Journal of Physiology, 165. doi: 10.1113/jphysiol.1963.sp007079
[5] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324. doi: 10.1109/5.726791
[6] Kim, Y. (2014). Convolutional Neural Networks for Sentence Classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). doi: 10.3115/v1/d14-1181
[7] Electrocardiogram (ECG or EKG). (2019, February 27). Retrieved from https://www.mayoclinic.org/tests-procedures/ekg/about/pac-20384983
[8] Myers, P. D., Scirica, B. M., & Stultz, C. M. (2017). Machine Learning Improves Risk Stratification After Acute Coronary Syndrome. Sci Rep, 7, 12692. doi: 10.1038/s41598-017-12951-x
[9] Rajpurkar, P., Hannun, A., Haghpanahi, M., Bourn, C., & Ng, A. (2017). Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks. arXiv:1707.01836
[10] Shameer, K., Johnson, K. W., Glicksberg, B. S., et al. (2018). Machine learning in cardiovascular medicine: are we there yet? Heart, 104, 1156–1164.
[11] Krittanawong, C., Johnson, K. W., Hershman, S. G., & Tang, W. W. (2018). Big data, artificial intelligence, and cardiovascular precision medicine. Expert Review of Precision Medicine and Drug Development, 3(5), 305–317. doi: 10.1080/23808993.2018.1528871




Making the Modern Medical Profession: 19th Century Standardization of Medical Education

BY SAM NEFF '21

Cover Image: ‘The Quack Doctor’, oil painting by Pietro Longhi (1702-1785). Medical quackery once plagued the world; the formation of a national association for the American medical profession, and its mid-19th-century push to raise the professionalism of doctors, is the focus of this paper. (Source: Wikimedia Commons)


American medicine underwent a tremendous change in the mid-19th century. In the century’s early decades, doctors armed themselves with new innovations in medical practice including vaccinations – which had slowly gained acceptance since the end of the previous century – and the advent of chloroform as an anesthetic in the 1840s.2,4 [Figure 1] But despite these advances in medical treatment, the status of the medical profession was not concurrently elevated. Having the choice, patients seeking medicine were just as likely to turn to ‘quack doctors’ as licensed physicians for remedies. And who could’ve blamed them? Medical schools did not provide a suitably rigorous education to distinguish the skill and intelligence of real doctors from ‘quacks.’ They did not require pre-medical education as a condition for admittance, nor was there a standard curriculum or length of schooling.7 There wasn’t a distinct licensing body to accredit doctors either – that task fell to medical school professors, who each held their own standards

of qualification (and a candidate’s qualifications may have been augmented by their personal relationship to the professor).1 In this historical moment, medical reformers felt that only a national association could effectively solve these problems by raising the status of licensed doctors and improving public perception of the medical profession. Standardization of medical education required the collaboration of medical professionals in different states, who held diverse views on desirable medical practice that had to be reconciled. Throughout the late 18th and early 19th centuries, state and county medical societies sprang up across the country. For example, The Medical Society of the State of Pennsylvania (est. 1848) was preceded by the Philadelphia County Medical Society (est. 1796) and numerous other county organizations. By the time the Pennsylvania State Medical Society was founded, there were fifteen other state medical societies both in


the North and South.5 Prior to the late 1840s, these associations existed as independent units, yet were actively striving for national association.1 As one member of the New Jersey Medical Society (and delegate to the American Medical Society) noted in his early history of the AMA, members of state medical societies and faculty of medical colleges debated internally the prospect of national association. In the mid-1830s, dissatisfaction with the short duration of medical school terms prompted colleges in both New England and the South to contemplate a concerted reform effort. In 1839, Dr. John McCall of the Medical Society of the State of New York (MSSNY) proposed a resolution that would bring about a National Medical Convention [Figure 2]. This convention did not occur until 1846 (in New York), but it was followed promptly by a second in Philadelphia in 1847, and in the same year by the formation of the American Medical Association (AMA).1 The spirit of medical reform carried out by the AMA was driven by the belief that the existing medical profession was unprofessional. Expressing this sentiment, Dr. Samuel Jackson gave a speech before the Philadelphia County Medical Society in 1852 on the ‘Organization of the American Medical Association,’ which was printed later that same year in The Western Journal of Surgery and Medicine.6 In his speech, Jackson described the Medical Profession of the United States as a body of “unworthy and ignorant men” who were “mere adventurers in great numbers [who] enter the halls of medical science, choosing medicine as a trade, and not as an honorable profession.” Jackson felt that doctors should be men of science; graduates of the best schools, who were educated in Ancient languages and skilled in scientific and mathematical reasoning. Yet students of the most renowned schools (he mentioned Harvard, Yale, Dartmouth, Brown, Princeton, among others) turned in far greater numbers to theology or the law as ‘respectable professions.’ And even doctors from these high-caliber institutions were not the most distinguished members of their class.6 Jackson viewed medical professors too as inadequately suited to the spirit of medical reform. Many of them were appointed by corporations, which were not beholden to the AMA or any local body of doctors. Jackson advocated strongly that professors be kept “under the watch and care of the profession. They should be its property.”6

Speeches such as Dr. Jackson’s helped define the contours of a reformed medical profession. For one, they served to align the goals of national reform (driven by the AMA) with the work of local associations. Medical journals, such as The Western Journal and others published by individual state and county medical societies (the New Jersey Medical Reporter, for example) aimed to shape the opinions of doctors. Although they would have little direct effect on public opinion, they nonetheless served a powerful purpose. Jackson’s and the anonymous AMA member’s accounts in the New Jersey Medical Reporter provided a shared history for reformers to rally around in their efforts to change the face of medicine. That being said, reformers’ invocation of the ‘medical profession’ is rather ambiguous. Who comprises the medical profession, and how is it defined in the public mind? It seems reasonable to assume that a patient’s view of the medical profession would be shaped most strongly by interactions with their own doctor. It should be acknowledged that the practice of doctors was shaped by their professors. And their schools were suffused with a spirit of reform fostered by the AMA [Figure 3]. The body’s members did much to define the profession, including voting together for key reforms and writing journal articles that explained the spirit and goals of reform. Any successful effort to make medicine

Figure 1: Edward Jenner’s initial publication on the ‘discovery’ of the Smallpox Vaccine. The practice of inoculation was introduced to the English upper class from the Ottoman Empire almost a century earlier by the aristocratic writer (famous for her ‘Turkish Letters’) Lady Mary Wortley Montagu. Vaccination was also a topic of debate in colonial America – with the Puritan Minister Cotton Mather advocating the practice and meeting stiff resistance – during the Boston Smallpox epidemic of 1721. This is not to downplay Jenner’s achievements – his testing of the vaccine was relatively rigorous and scientific, and the practice spread relatively quickly after his inquiry was published. (Source: Wikimedia Commons)


Figure 2: A photomechanical print of the Medical Society of the State of New York in 1880-81. Its members were key figures in the establishment of the AMA a few decades earlier. (Source: Wikimedia Commons)


more professional had to work at all three levels of the medical profession – individual doctors, medical colleges, and local and national medical associations.


Economically, the fact that licensed practitioners and ‘quack’ physicians differed little in skill was problematic for the medical profession. Individuals receiving the same (or better) quality of service from the quack as from the licensed doctor would invariably turn to the quack for cheaper medicine. This rendered the medical license worthless and devalued the years of study undertaken by aspiring doctors at medical colleges. Making the medical profession respectable and raising the socioeconomic status of doctors would require reshaping public opinion. As Dr. Jackson noted in his address to the Philadelphia Medical Society: “[The AMA] may legislate much, but its legislation is not binding, further than as it embodies public opinion. The public opinion of the profession is omnipotent.”6 Raising the social status of the doctor was at once an uplifting and an exclusionary process. Its end was to raise demand for doctors over ‘quacks,’ so doctors would be paid more and educated individuals would be encouraged to enter the profession (further elevating its respectability). But doing so meant narrowing the body of individuals who sought to enter the profession. Doctors equipped with the tools of scientific analysis and well-versed in Greek and Latin were desired; the profession wanted Ivy League graduates to attend medical schools, not aspiring country folk without any significant pre-medical education. Dr. William Sutton, co-founder of the Kentucky State Medical Society, argued as much: “a student of medicine ought to have such knowledge of Latin and Greek as would enable him to appreciate the technical language of his profession, and read and write prescriptions.” Sutton wrote this article in solidarity with the aims of the National Convention of 1847, which had come under attack from conservative critics. Within the article, he further underscored the importance of individual doctors to the broader prestige of medicine, naming it their “duty to uphold the respectability of the profession… by keeping the great body of its members individually respectable.”7 The ability to grant individual doctors greater respectability rested with the medical schools, which reformers saw as woefully ineffective. Dr. Sutton again concurred with the National Convention that raising the reputation of individual doctors and the profession at large required standardizing medical education on the national scale.7 Dr. Jackson of Philadelphia also shared the sentiment that “there are deficiencies in the present mode of medical education that ought to be remedied,” agreeing on the necessity of a required pre-medical curriculum and advocating longer terms of study.6 Another doctor and member of the AMA (author of the ‘History of the American Medical Association’) further proclaimed that “the business of teaching should be separated as far as possible from the privilege of granting diplomas.”1 A separate licensing body alone would ensure that the standards of medical licensing were adhered to rigidly – no amount of favoritism felt by a teacher for their pupil could soften the requirements for earning a degree. The key point is that many aspects of the platform for medical reform thus outlined either required the formation of a national association or would be strongly facilitated by it. Only a national body could set rigorous, universal standards and produce an impartial licensing body to enforce them.

Figure 3: Cover of the first issue of the Journal of the American Medical Association, published in 1847. (Source: Wikimedia Commons)
It alone could conduct extensive data collection, measuring the general welfare of different regions of the United States – rates of birth, marriage, and death, and the incidence of particular diseases by region, for example – and attaining results that could inform reformers as to which regions of the United States needed the greatest attention.7 And a national body could claim to represent the voices of individual doctors, the needs of patients, and the spirit of science



Figure 4: The American Medical Association Headquarters in Chicago. Founded in 1847, the AMA is still a national authority for the medical profession. (Source: Wikimedia Commons)

all at once – more so than any one school or local medical association could. As Dr. Jackson noted, a national organization had immense persuasive power, to “restrain the schools when established and … gradually raise the standards of qualification in their graduates… [such that] no institution, it may be safely said, can long stand ground against the deliberate judgement of the profession.”6 The principles of medical reform trickled down from the AMA to the schools, and from the schools to their graduates. It should now be clear that the movement for reform in medicine was a thoroughly national effort, not a product of isolated changes at various hospitals and universities. The emergence of local medical societies in the first decades of the 19th century paved the way for national association; their members served as delegates first to two national conventions, and then to the American Medical Association. And the bonds of this association were strengthened by writing in medical journals – in The Western Journal of Medicine, the journal of the New Jersey Medical Society, and many others – which helped rally doctors around the cause of reform. Elevating the status of the medical profession required nothing less than changing public opinion. As Dr. Jackson remarked: “So far as these grievances are susceptible of remedy, they are not to be reached by those of a direct or radical character; they must be corrected by nothing short of an enlightened public sentiment.”6 A powerful national organization was necessary to project the spirit of reform from the top down, but perception of the


industry was uplifted most directly by the doctors on the ground.

References

1. Anonymous (member of the American Medical Association)*, “History of the American Medical Association.” New Jersey Medical Reporter and Transactions of the New Jersey Medical Society 7, No. 1 (1854): 26-34.
2. Frances Burney, “A Mastectomy,” in Frances Burney: Journals and Letters, ed. Peter Sabor and Lars Troide (London: Penguin Books, 2001).
3. Lindsey Fitzharris, The Butchering Art: Joseph Lister's Quest to Transform the Grisly World of Victorian Medicine (New York: Scientific American/Farrar, Straus and Giroux, 2017).
4. Matthew Niederhuber, “The Fight Over Inoculation During the 1721 Boston Smallpox Epidemic.” Harvard University Science in the News: Special Edition on Infectious Disease, December 31, 2014. http://sitn.hms.harvard.edu/flash/special-edition-on-infectious-disease/2014/the-fight-over-inoculation-during-the-1721-boston-smallpox-epidemic/
5. “Pennsylvania Medical Society.” Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Pennsylvania_Medical_Society#cite_note-1
6. Samuel Jackson, M.D., “The Organizing of the American Medical Association.” The Western Journal of Surgery and Medicine 9, No. 6 (1852): 495-507.
7. W.L. Sutton, M.D., “Medical Reforms – Reflections growing out of the Action of the National Convention of 1847.” The Western Journal of Surgery and Medicine 8, No. 5 (1847): 403-418.
8. William L. Sutton and A. L. Fisher papers, 1810-1967, University of Kentucky Special Collections Research Center, https://exploreuk.uky.edu/fa/findingaid/?id=xt74j09w1f7b#fa-heading-abstract [Biography/History]

*Articles in bold were gathered from the American



Not Your Average Coffee Cup: Methods and Thermodynamic Applications of Isothermal Titration Calorimetry

BY TEDDY PRESS '23

Cover: Diagram showing the overall setup and plots of an ITC experiment. (Source: Flickr - Creative Commons)


Introduction

Chemical reactions are important to all life processes. While chemical reactions can be described in many ways, one of the most useful methods is to examine energy transfer, otherwise known as thermodynamics. Nearly all chemical reactions require an exchange of energy and can be either exothermic (releasing energy) or endothermic (absorbing energy). Calorimetry, a collection of techniques used to measure reaction thermodynamics, can give important information about a reaction. One use of calorimetry is as a “universal detector” for chemical reactions – measuring the amount of heat exchanged can show how much of a substance was consumed or produced – and quantifying reaction thermodynamics is of great use in biochemistry and drug design (Leavitt & Freire, 2001). Isothermal titration calorimetry (ITC), a special

case of calorimetry in which temperature is held constant, can measure very small changes in energy. While ITC can measure both thermodynamics (enthalpy, entropy, free energy) and kinetics (reaction rates), this article’s scope will be limited to thermodynamic measurements. ITC can be used to measure many thermodynamic quantities, including reaction stoichiometry, enthalpy (ΔH), entropy (ΔS), heat capacity (C), and the binding affinity of a ligand (KA) (Freyer & Lewis, 2008). ITC has some distinct advantages over other calorimetric tools that have led to its widespread use across chemistry and interrelated fields. This article will provide a brief overview of ITC through a discussion of the experiment design, implementation, and recent applications.

Theory

Titration is the process by which a small amount of a reactive solution (an aliquot) is slowly dropped into a sample liquid over time. In


Figure 1: (left) A regular binding isotherm; (right) plots where the c-value lies outside the desired range. (Source: Wikimedia Commons)

the case of ITC, titration allows for an accurate measurement of reaction thermodynamics. A typical ITC machine is composed of three key elements: a syringe, a sample cell, and a reference cell (Freire, Mayorga, & Straume, 1990). An ITC syringe is a needle that releases aliquots into the sample cell. These “cells” are insulated spaces where reactions occur in solution. The researcher positions the syringe of reactant (often a small molecule or peptide that will bind to a protein in the sample cell) such that the system is insulated and the aliquot will fall directly into solution. Using a computer program, the volume per injection and the amount of time between each injection can be controlled. The experimenter then runs a set of titrations over time and measures the heats of reaction. In order to keep the reaction isothermal (constant temperature), the machine consistently cools or heats the reference and sample cells to maintain a set temperature (Freyer & Lewis, 2008). Maintaining a constant temperature prevents denaturing of biological macromolecules (e.g. proteins) and is essential for constructing a binding isotherm – a plot showing how reaction enthalpy varies with ligand concentration, as shown in Figure 2. The amount of electrical energy necessary to return the reaction to the desired temperature, measured with a voltmeter, is proportional to the change in heat energy of the reaction (Freyer & Lewis, 2008). Data software tailored for ITC, such as Origin, will mathematically convert the electrical energy measured to the change in energy of the system caused by the reaction (Grosshoeme, Spuches, & Wilcox,

2010). The computer program then plots the calculated change in energy against the ratio of ligand to cell concentration to find the thermodynamic values of the reaction. Plot values are determined through the equation:

ΔH = q / (V · Δ[LB])

“A typical experiment consists of a chosen cell solution and a chosen ligand.”

where ΔH is the enthalpy of reaction, q is the heat exchanged, V is the volume of the aliquot, and Δ[LB] is the change in bound ligand concentration (Leavitt & Freire, 2001). A typical experiment consists of a chosen cell solution and a chosen ligand. If the ligand concentration in the syringe is at least an order of magnitude (ten times) higher than the cell concentration, the plot will form the desired S-shaped curve. Figure 1 shows the shape of this ‘ideal’ binding isotherm, along with plots obtained when the heats of reaction do not align well with the cell and syringe concentrations. Although one can make educated predictions about how the two solutions interact through computational methods, physical experiments can more accurately confirm, deny, or modify existing models (Wilcox, Quinn, Carpenter, & Croteau, 2016). Preexisting data from the National


Institute of Standards and Technology (NIST) database provides a useful starting point for making predictions about the experimental outcome (Wilcox, Quinn, Carpenter, & Croteau, 2016).
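The per-injection arithmetic, ΔH = q / (V · Δ[LB]), can be sketched in a few lines of Python. This is a minimal illustration only; the function name and numbers are hypothetical, not taken from any ITC analysis package.

```python
# Minimal sketch of the per-injection enthalpy calculation:
# dH = q / (V * d[LB]). All names and numbers are illustrative.

def injection_enthalpy(q_joules, v_liters, delta_bound_molar):
    """Molar enthalpy (J/mol) from measured heat q, aliquot volume V,
    and the change in bound-ligand concentration d[LB]."""
    return q_joules / (v_liters * delta_bound_molar)

# Hypothetical injection: 50 uJ of heat released, 10 uL aliquot,
# and a 1 mM increase in bound ligand concentration.
dH = injection_enthalpy(q_joules=-50e-6, v_liters=10e-6,
                        delta_bound_molar=1e-3)
print(f"{dH / 1000:.1f} kJ/mol")  # prints -5.0 kJ/mol
```

In a real experiment this calculation is repeated for every injection in the series, and the resulting points trace out the binding isotherm.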

“Although there are other calorimetry methods, ITC proves the superior choice for a large range of experiments.”

The binding isotherm is the main source of thermodynamic data. Figure 1 outlines three values that can be derived from graphical analysis of the plot: the change in enthalpy of the reaction is equal to the height of the curve, the molar ratio at the point of inflection is equal to the binding stoichiometry, and the slope of the tangent at the point of inflection is proportional to the binding affinity of the reaction. Binding isotherms can also be analyzed to inform future experiments: changing time intervals, concentrations, or aliquot volumes can improve the isotherm's fit to the desired S-shaped curve (Grosshoeme, Spuches, & Wilcox, 2010). Values extracted from the plot can then be used to find other variables that are valuable in characterizing chemical reactions. For instance, the Gibbs free energy of a reaction, which measures the change in energy available to do work, can be determined from the obtained value for the association constant (Ka), the fixed temperature (in Kelvin), and the universal gas constant (R):

ΔG = −RT ln(Ka)
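The Gibbs free energy follows from the association constant as ΔG = −RT ln(Ka). A minimal Python sketch makes the arithmetic concrete; the example Ka is hypothetical.

```python
import math

# Sketch of the Gibbs free energy relation: dG = -R * T * ln(Ka).
# The example Ka value below is hypothetical.
R = 8.314  # universal gas constant, J/(mol*K)

def gibbs_free_energy(ka, temp_kelvin):
    """Binding free energy (J/mol) from the association constant Ka."""
    return -R * temp_kelvin * math.log(ka)

# A Ka of 1e7 M^-1 at 298 K (25 C), a typical high-affinity
# interaction, gives roughly -40 kJ/mol.
dG = gibbs_free_energy(1e7, 298.0)
print(f"{dG / 1000:.1f} kJ/mol")
```

The more negative ΔG is, the more favorable the binding event, which is why high-affinity ligands (large Ka) correspond to large negative free energies.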

Experimental Design: Advantages and Disadvantages of ITC

Advantages

Although there are other calorimetry methods, ITC proves the superior choice for a large range of experiments. Since the majority of biological reactions occur at a constant temperature (i.e., body temperature), isothermal experiments are a fitting choice for examining biological processes (Freire, Mayorga, & Straume, 1990). Another popular calorimetry method is differential scanning calorimetry (DSC). This method involves varying the temperature to measure the heat capacity of the system, and in turn analyzing the structural stability of proteins. But ITC is more cost-effective than DSC because it generally uses less biological material. ITC also has a greater potential range of applications, since DSC does not measure all of the thermodynamic values that ITC does (Freire, Mayorga, & Straume, 1990).


Disadvantages and Difficulties

At the time of its development, ITC had a fair amount of error and uncertainty due to a lack of measurement sensitivity, but technological improvements have since greatly refined the method. A current limitation, however, is that ITC yields credible results only within a certain range of values, called the c window, set by its measurement sensitivity. As a result, ITC requires very careful planning and design to keep experiments within the accurate range of values (Leavitt & Freire, 2001). The c value is obtained from the equation:

c = Ka[S]

where [S] is the sample cell concentration and Ka is the binding affinity of the reaction. The acceptable window for a single-binding-site c value is between 1 and 1,000. For instance, if a reaction is conducted with a binding affinity around 1×10⁷ M⁻¹, the cell concentration should be no greater than 1×10⁻⁴ molar in order to stay at the upper limit of the acceptable range. Acceptable ranges and curve fitting are both far more complicated for two or more binding events, as they require more detailed analysis of competing reactions (Wilcox, 2008). In addition, ITC experiments often require many iterations to troubleshoot graphical inaccuracies derived from the error associated with measurement tools. As with other experimental structures, it is necessary to demonstrate the reproducibility of results. Often, the binding affinity between two reactants is too high to measure through direct titration methods. Therefore, experimental design plays a large role in extracting credible data from an ITC experiment. For example, one can use known data of a competing ligand (a potential binding partner) and measure the properties of a reaction where the ligands are exchanged. The desired value is then the affinity measured from the reaction plus the known ligand binding affinity from the NIST database (Wilcox, Quinn, Carpenter, & Croteau, 2016). Further difficulties associated with ITC are more specific to its application. For example, when using ITC for proteins and metal binding, other competing equilibria that also contribute to the measured enthalpy, such as metal-buffer interactions and protonation
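This planning check can be sketched in a few lines of Python, assuming the single-site relation c = Ka · [S] and the 1 to 1,000 window described above. Function and variable names are illustrative only.

```python
# Sketch of the c-value planning check: c = Ka * [S], with the
# commonly cited single-site window of 1 <= c <= 1000.

def c_value(ka_per_molar, cell_conc_molar):
    """Dimensionless c value from Ka (M^-1) and cell concentration (M)."""
    return ka_per_molar * cell_conc_molar

def in_window(c, low=1.0, high=1000.0):
    """True if the experiment falls in the credible measurement range."""
    return low <= c <= high

# The worked example from the text: a Ka near 1e7 M^-1 caps the
# usable cell concentration at about 1e-4 M (c at the upper limit).
c = c_value(ka_per_molar=1e7, cell_conc_molar=1e-4)
print(c, in_window(c))
```

Running a check like this before committing sample material is the cheapest way to avoid an experiment whose isotherm cannot be fit.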



Figure 2: DNA sequence and structure. (Source: flickr - Creative Commons)

reactions (Grosshoeme, Spuches, & Wilcox, 2010). However, as measurement technology continues to improve, many of the difficulties that introduce noticeable inaccuracies into ITC are being overcome. Because of the current research and development devoted to it, ITC is seeing a variety of new and exciting uses across many disciplines.

Current Experiments and Research in ITC

Biological Applications: Drug Design

In the field of biochemistry, scientists are using ITC thermodynamic data to aid in drug design. Researchers use ITC to measure thermodynamic data for reactions between proteins and ligands. These measurements can be used to optimize thermodynamic contributions to ensure binding at the target site (Leavitt & Freire, 2001). ITC is a simple way for researchers to optimize drug design and can therefore be adopted on a large scale by the scientific community.

Bioinorganic Chemistry

In the field of physical bioinorganic chemistry, ITC is being used to measure the interactions between metals and proteins that occur in living cells. At Dartmouth College, for example, the Wilcox lab studies proteins used in metal ion transport. The lab uses ITC to quantify the strength of binding and the stoichiometric ratios of metal to protein. Because metals in buffer can have complex interactions with ligands and proteins, post-experiment

analysis is often complicated and unique to each reaction. There are also applications in studying the oxidation (loss of electrons) and reduction (gain of electrons) of metals through ITC in a process called redox coordination thermodynamics (RCT). In RCT, differences in reaction potential between two different states of a metal can be determined (Wilcox, 2008). Future applications of ITC are focused on providing a deeper understanding of how the body processes and removes metals during heavy metal poisoning.

DNA Binding

DNA sequences have long been studied to elucidate gene expression. However, factors other than the DNA sequence itself also affect expression: understanding how DNA folds is paramount to attaining a complete understanding of how cells operate (Figure 2). In order to understand these mechanisms, known collectively as “DNA condensation,” ITC is being used to study how certain molecules bind to DNA and cause it to form orderly structures (Matulis, Rouzina, & Bloomfield, 2000). Knowing the thermodynamics of these reactions serves as a complementary tool for studying how cations (positively charged ions) interact with the negatively charged sugar-phosphate backbone of DNA. These cations can play roles in rearranging DNA structure and therefore affect how DNA is expressed.

“In order to understand these mechanisms, known collectively as 'DNA condensation', ITC is being used to study how certain molecules bind to DNA and cause it to form orderly structures.”

Nanoparticle Interactions

With advancing technologies in medicine, researchers are hoping to utilize nanoparticles



“Thermodynamic data from ITC experiments will provide complementary research to other methods in completely understanding how nanoparticles interact in the human body.”

in “various fields of medicine and biology including cancer therapy, drug delivery, tissue engineering, regenerative medicine, biomolecule detection, and also as antimicrobial agents” (Rudramurthy & Swamy, 2018, p. 1185). However, the way that nanoparticles function and interact with macromolecules in the body, such as proteins, is complex. In order to ensure safety and control over nanoparticles for future uses, it is necessary to know exactly how they interact with macromolecules in the human body. Prior to the incorporation of ITC as a methodology in nanoscience, most nanoparticle-protein interactions were described only by proposed mechanisms (cause-and-effect scenarios). When nanoparticles enter the body, a common set of reactions occurs in which numerous proteins gather around the nanoparticle, known as the “Vroman effect” (Prozeller, Morsbach, & Landfester, 2019). The proteins form a “corona,” or crown, that changes over time depending on the strength of interactions, including electrostatic, hydrodynamic, electrodynamic, steric, and solvent effects (Prozeller, Morsbach, & Landfester, 2019). ITC allows scientists to track the thermodynamics of these interactions in situ (in place), which provides the advantage of accounting for all important reactions, including adsorption and restructuring of the protein corona. Thermodynamic data from ITC experiments will complement other methods in building a complete understanding of how nanoparticles interact in the human body (Rudramurthy & Swamy, 2018).

References
1. Freire, E., Mayorga, O. L., & Straume, M. (1990). Isothermal titration calorimetry. Analytical Chemistry, 62(18).
2. Freyer, M. W., & Lewis, E. A. (2008). Isothermal titration calorimetry: Experimental design, data analysis, and probing macromolecule-ligand binding and kinetic interactions. Methods in Cell Biology, 84.
3. Grosshoeme, N. E., Spuches, A. M., & Wilcox, D. E. (2010). Application of isothermal titration calorimetry in bioinorganic chemistry. Journal of Biological Inorganic Chemistry.
4. Leavitt, S., & Freire, E. (2001). Direct measurement of protein binding energetics by isothermal titration calorimetry. Current Opinion in Structural Biology, 560-566.
5. Matulis, D., Rouzina, I., & Bloomfield, V. A. (2000). Thermodynamics of DNA binding and condensation: isothermal titration calorimetry and electrostatic mechanism. Journal of Molecular Biology, 296(4), 1053-1063.
6. Paketuryte, V., Zubriene, A., Ladbury, J. E., & Matulis, D. (2019). Intrinsic thermodynamics of protein-ligand binding by isothermal titration calorimetry as aid to drug design. In Microcalorimetry of Biological Molecules (pp. 61-74). New York: Humana Press.
7. Prozeller, D., Morsbach, S., & Landfester, K. (2019). Isothermal titration calorimetry as a complementary method for investigating nanoparticle-protein interactions. The Royal Society of Chemistry.
8. Rudramurthy, G. R., & Swamy, M. K. (2018). Potential applications of engineered nanoparticles in medicine and biology: an update. Journal of Biological Inorganic Chemistry, 1185-1204.
9. Velazquez-Campoy, A., & Freire, E. (2006). Isothermal titration calorimetry to determine association constants for high-affinity ligands. Nature Protocols, 186-191.
10. Wilcox, D. E. (2008). Isothermal titration calorimetry of metal ions binding to proteins: An overview of recent studies. Inorganica Chimica Acta, 361, 857-867.
11. Wilcox, D. E., Quinn, C. F., Carpenter, M. C., & Croteau, M. L. (2016). Isothermal titration calorimetry measurements of metal ions binding to proteins. Methods in Enzymology, 567, 3-21.




Dartmouth Undergraduate Journal of Science Hinman Box 6225 Dartmouth College Hanover, NH 03755 dujs@dartmouth.edu

ARTICLE SUBMISSION FORM* Please scan and email this form with your research article to dujs@dartmouth.edu

Undergraduate Student: Name:_______________________________ School _______________________________

Graduation Year: _________________

Department _____________________

Research Article Title: ______________________________________________________________________________ ______________________________________________________________________________ Program which funded/supported the research ______________________________ I agree to give the Dartmouth Undergraduate Journal of Science the exclusive right to print this article: Signature: ____________________________________

Faculty Advisor: Name: ___________________________

Department _________________________

Please email dujs@dartmouth.edu with comments on the quality of the research presented and the quality of the written product, as well as whether you endorse the student’s article for publication. I permit this article to be published in the Dartmouth Undergraduate Journal of Science: Signature: ___________________________________

*The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.

Visit our website at dujs.dartmouth.edu for more information




DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE Hinman Box 6225 Dartmouth College Hanover, NH 03755 USA http://dujs.dartmouth.edu dujs@dartmouth.edu



