Innovation Journal (2017)


INNOVATION A Student Research Journal

Published by STEMY


Contents

Innovation Team 2
About Us 3
Letter From The Editors 4
Beta-glucans Stimulate the Immune System in the Fight Against Cancer 5
History and Hurdles in Organ Transplantation 6
Development of a Low Cost Ex Vivo Perfusion System for the Elucidation of Flow Modulation Induced Vascular Remodeling 8
Collecting Kinetic Energy as an Alternative Tolling System 17
Effects of Glycerin on the Flash Point and Vapor Pressure of Propylene Glycol 22
Effects of Nanostructured Oxides and Conducting Polymers on Pseudocapacitance 27
Blood Type and Cholesterol: Is There a Correlation? 33
Advillin Expression Defines Subtypes of Taste Neurons and Receptor Cells 39
TRPA1 Mediates Cardiac Dysfunction Through Sympathetic Dominance in Mice Exposed to Concentrated Ambient Particulates 43
Perfectionism and Body Image Dissatisfaction in Adolescents as a Predictor of Eating Disorders 47
The Effects of the Pacifier Activated Lullaby on the Sleep Scores, Feeding Scores, Feeding Volumes, and Finnegan Scores of Newborns with Neonatal Abstinence Syndrome 51



Innovation Team

Editor in Chief: Edward Zhong
Managing Editor: Harsha Paladugu
Peer Reviewers: Agharnan Gandhi, Jesse Wang
Feature Article Authors: Edward Zhong, Allison Tu
Research Paper Authors: Agharnan Gandhi, Emma Heironimus, Felicia Zhong, Mark Raj and Ruchira Sumanasekera, Lilly Gonzalez, Jennifer Xu, Minh Tran, Betty Ngo, Sanya and Aakash Mehta



ABOUT US

WHO WE ARE
STEM + Youth, or STEMY, is a local, student-run nonprofit organization on a mission to foster curiosity, learning, and innovation by engaging students of all backgrounds, genders, and means in STEM (science, technology, engineering, and mathematics) education. We aim to close gender, racial, and socioeconomic gaps in STEM fields by exposing students of all ages to exciting and educational STEM programming.

EXISTING PROGRAMS
Only a few months after we founded this organization, we've already impacted hundreds of students from all backgrounds. Our elementary school workshops and educational seminars have taught students across a range of ages how to conduct science research. This journal shares outstanding high school research with the community.

Elementary School Workshops, Educational Seminars, Innovation Journal (infographic: each icon = 10 students)

UPCOMING PROGRAMS
The following are three of our new initiatives for the 2017-2018 school year. Learn more about these and our other programs on our website: stemy.org

STEMY ACADEMY
STEMY Academy is a two-pronged educational initiative that aims to cultivate scientific curiosity in local middle school students, particularly those from underrepresented backgrounds. The program will engage between 90 and 150 underserved middle schoolers at three Louisville schools.

PEER MENTORSHIP PROGRAM
The peer mentorship program pairs high school freshmen who are new to scientific research with experienced upperclassmen to improve mentees' scientific inquiry skills and build mentors' leadership skills. The program will involve 30 mentor-mentee pairs.

STEMY SCIENCE BOXES

STEMY science boxes will contain all the materials necessary to conduct fascinating science experiments (this year, kits will focus on diagnostic tools created through paper microfluidics) paired with a comic book instruction manual. For each kit sold, we will donate one to a student in need.


LETTER FROM THE EDITORS

We are STEM + Youth (STEMY), a local, entirely student-run nonprofit organization on a mission to foster curiosity, learning, and innovation by engaging students of all backgrounds, genders, and means in STEM education. Through our programs, we aim to close pervasive gender, racial, and socioeconomic gaps in STEM by sparking passions for STEM among disadvantaged and underrepresented students.

STEMY began as an after-school club called the Manual Science Review (MSR), which aimed specifically to help duPont Manual High School students with their science fair projects. After about a year, we saw the tangible impacts that the MSR's programs had on the school - program participants came to enjoy the science research process and grew more interested in STEM - and we wanted to expand beyond just our high school. Officially founded three months ago, this organization is the result of our drive to change lives by engaging all students, regardless of gender or background, in STEM. We've already directly impacted over 300 local students from all backgrounds and plan to engage thousands more in new programs this year.

Effective STEM education isn't just valuable for improving academic performance - the right kind of learning influences students' career options, worldview, and mindset. High-quality STEM experiences have been shown to improve problem-solving skills, build confidence, and motivate students to enter high-paying jobs in the future. STEM programs like ours lead participants to view the world through an observational, curiosity-driven lens and have the power to open minds that have been closed by years of learning through rote memorization. Critically, sparking passions for STEM among underrepresented and disadvantaged students can break down established social barriers in STEM fields.

To expose as many local students as possible to the numerous benefits of STEM, we are launching several new programs during the 2017-18 school year. Our initiatives will engage students in exciting, creative STEM activities to build new skills and spark new passions among participants. Innovation is one of these many programs. All across this city, there are talented, dedicated high school students who pursue incredible research projects and demonstrate the amazing things students with strong STEM backgrounds can do. We believe that these students deserve the opportunity to showcase their efforts, which is why we created Innovation.

Innovation, just like our organization, is created and produced by students and students only. We cultivate strong partnerships among editors and authors and give writers the unique opportunity to publish high-quality, peer-reviewed work as high school students. Through this journal, we hope to spread the innovative spirits of our authors and their amazing research throughout the community. We believe that Innovation is a unique way to expose Louisville to STEM, and we will continue to produce and publish this journal for years to come.



Beta-glucans Stimulate the Immune System in the Fight Against Cancer
Edward Zhong

Traditionally, cancer has been treated with chemotherapy, usually an aggressive cocktail of drugs that kills both cancer cells and healthy cells. As a result, patients undergoing chemotherapy suffer unpleasant side effects such as hair loss, nausea, pain, and a low white blood cell count, which can lead to fatal infections. Recently, however, there has been an increase in interest in alternative treatment methods, including immunotherapy. The idea of immunotherapy is to boost the patient's immune system to fight cancer on its own, thereby circumventing the side effects associated with chemotherapy or radiation. Another important advantage is that immunotherapy can be applied to multiple, perhaps even all, cancers, because it depends not on the type of cancer being treated but on the host's immunity.

One possibility with immunotherapy involves the use of β-glucans, which are polysaccharides of glucose with β-1,3 and β-1,6 linkages (Figure 1). β-glucans are commonly found in mushrooms and other edible fungi (Friedman, 2016) and have been shown to have antioxidative and immunostimulating effects. A study by Lee and Hong showed that, in a mouse model, a polysaccharide isolated from Cordyceps militaris mushrooms seemed to inhibit the in vivo growth of melanoma cells (Lee & Hong, 2011). In a study by Wang et al., it was shown that "a purified polysaccharide from Pleurotus nebrodensis improved immunity and coordinated innate immunity and inflammatory responses by activating macrophages." According to Chan et al. (2016), "Imprime PGG (Imprime), an intravenously administered, soluble β-glucan, has shown compelling efficacy in multiple phase 2 clinical trials with tumor targeting or anti-angiogenic antibodies." In that study, human blood was used to show that Imprime-anti-β-glucan antibody complexes activated the complement system via the classical complement pathway (Chan et al., 2016). The complement system is part of the innate immune response that opsonizes pathogens (marks them for destruction). Currently, β-glucans are an extremely active topic of research, with many possibilities for the future. It is very possible that in the near future, cancer will commonly be treated with immunotherapy involving β-glucans.

Figure 1: Possible linkages in beta-glucans

WORKS CITED
Chan, A. S., Jonas, A., Qiu, X., Ottoson, N. R., Walsh, R. M., Gorden, K. B., Harrison, B., Maimonis, P. J., Leonardo, S. M., Ertelt, K. E., Danielson, M. E., Michel, K. S., Nelson, M., Graff, J. R., Patchen, M. L., & Bose, N. (2016). Imprime PGG-mediated anti-cancer immune activation requires immune complex formation. PLoS One, 11. doi: 10.1371/journal.pone.0165909
Friedman, M. (2016). Mushroom polysaccharides: Chemistry and antiobesity, antidiabetes, anticancer, and antibiotic properties in cells, rodents, and humans. Foods, 5(4), 80. doi: 10.3390/foods5040080
Lee, J. S., & Hong, E. K. (2011). Immunostimulating activity of the polysaccharides isolated from Cordyceps militaris. International Immunopharmacology, 11, 1226-1233. doi: 10.1016/j.intimp.2011.04.001
Wang, X. M., Zhang, J., Wu, L. H., Zhao, Y. L., Li, T., Li, J. Q., Wang, Y. Z., & Liu, H. G. (2014). A mini-review of chemical composition and nutritional value of edible wild-grown mushroom from China. Food Chemistry, 151, 279-285. doi: 10.1016/j.foodchem.2013.11.062

Edward Zhong is a current sophomore at Kentucky Country Day High School. He has done research on beta-glucans and macrophages for the past two years.



History and Hurdles in Organ Transplantation
Allison Tu

On December 23, 1954, Dr. Joseph Murray and Dr. David Hume conducted the first successful transplant of a major body organ, a kidney transplant between identical twins. This marked the beginning of an age of rapid innovation and development in organ transplantation - over the next two decades, lungs, pancreata, livers, and hearts would all be relocated from living or deceased donors to chronically sick recipients. Organ transplantation has come far from the fruitless efforts of the early 1800s to graft flaps of skin from person to person, which ended in rejection time and time again. As new technology, procedures, and drugs were developed, success rates for increasingly complex transplants skyrocketed. The invention and spread of immunosuppressants, which prevent the recipient's immune system from attacking the transplanted organ, were particularly catalytic. But even with recent developments in transplantation, there remain three key unmet needs in the field: rejection, shortage, and infection.

Organ rejection was once the most significant initial hurdle to successful organ transplants. It occurs when the recipient's immune system, trained to recognize and attack unfamiliar potential pathogens, recognizes the implant as a foreign object. Modern medicine has used two steps to minimize rejection. First, antigenic markers are matched as closely as possible between donor and receiver. This often includes examination of both blood type and the human leukocyte antigen, a key marker that tells the immune system whether a certain cell is self (to be left alone) or non-self (to be attacked). Second, patients take immunosuppressants before and long after transplantation, largely preventing the immune system from attacking the grafted organ. However, these drugs, while instrumental to successful transplants, are also the cause of the second key need in organ transplantation: infection.

Immunosuppressants leave transplant recipients particularly vulnerable to infection because the drugs decrease the body's entire natural immune response. As one of the most common complications following a transplant procedure, infections in new organ recipients are both particularly widespread and particularly dangerous. Since infection-associated symptoms such as fever may develop in recipients for noninfectious reasons (including, ironically, organ rejection), accurate and timely diagnosis of the infection is often challenging. Infections also progress rapidly in transplantees due to their suppressed immune systems. Currently, however, the risks associated with immunosuppressants are a necessary evil, for forgoing them would prevent successful transplants altogether.

Several innovative strategies are being developed to curb the risk of infection. For example, researchers are investigating nanoparticles made of various polymers as an immunosuppressant delivery system. Because of the unique targetability of nanoparticles, the dose of the drug would be restricted to only the area of the transplanted organ. Research shows that restricting drug delivery specifically to the site of

the graft as opposed to the entire body would likely suppress immunity against the grafted organ while having little impact on overall immunity, preventing both rejection and infection. Another strategy, by researchers at UC San Francisco, sidesteps infections by eliminating the need for long-term immunosuppressants. First, cells from the organ donor were intentionally injected into the recipient to activate the immune cells that typically react to the grafted organ. Then, the researchers used a drug called cyclophosphamide to preferentially kill the activated cells. This alone, however, would not permanently stop immune cells from attacking the transplant, so the second key step of this approach was to double the population of T regulatory cells, which calm the immune response. This strategy has the potential to eliminate the need for immunosuppressants after surgery, letting the immune system remain at full strength and preventing infections from taking hold.

The final hurdle when it comes to organ transplantation is organ shortage. Very few people pass away in a way that allows their organs to be harvested, and organs are incredibly difficult to match well, causing transplant waiting lists to grow longer and longer each day. To stop the growth of this list and combat organ rejection issues that waste usable organs, Massachusetts General Hospital and Harvard Medical School researchers aimed to grow organs from scratch instead. Their strategy involved removing the cells from human donor hearts that were deemed unusable, leaving behind a bare scaffold on which new cells could grow. Then, the scientists repopulated the empty structure with cardiac cells derived from induced pluripotent stem cells (normal human cells that are altered to gain properties that allow them to develop into any other type of cell). After two weeks, the scientists applied an electrical shock to the hearts, and they began to beat. Though these hearts aren't quite ready to implant into humans, the study shows that this fascinating technique may soon be applicable to hearts and other organs. Eventually, all organs may be custom-grown, eliminating rejection and infection risks and reducing the length of the organ waitlist.

Organ transplantation is a groundbreaking technique that has come a long way from the failed transplants of the 19th century, and as technology progresses, the tools and strategies to address its shortcomings will become more and more advanced and innovative. Eventually, we may live in a world without rejection, post-op infections, or organ waitlists.




Development of a Low Cost Ex Vivo Perfusion System for the Elucidation of Flow Modulation Induced Vascular Remodeling
Agharnan Gandhi1
1 duPont Manual High School, Louisville, Kentucky
Mentor: Dr. Kevin Soucy

ABSTRACT
Heart Failure (HF) is the progressive weakening of the heart's ability to pump blood and is a rapidly growing problem, affecting 5.7 million Americans. A small subset of HF patients have Advanced Heart Failure (AHF), a form that does not respond to traditional treatment options such as optimal medical management and lifestyle changes. Heart transplantation remains the gold standard for AHF treatment, but the limited supply of donor hearts has led to the development of mechanical circulatory support devices, most notably the Left Ventricular Assist Device (LVAD), which is attached to the apex of the Left Ventricle (LV) and assists the LV in pumping. Over time, LVAD design has shifted from pulsatile flow (PVAD) to continuous flow (CVAD) due to improved quality of life. However, this decrease in pulsatility has coincided with an increased occurrence of certain symptoms, notably gastrointestinal bleeding and aortic insufficiency. Flow modulation patterns have been devised to mimic pulsatility within CVADs and minimize the occurrence of these symptoms. In this study, an Ex Vivo Perfusion System (EXVP) was assembled and optimized to recreate LVAD flow conditions. All six flow patterns were successfully achieved, with four running simultaneously in the incubator. These results validate the EXVP as a viable small scale model for the replication of the proposed LVAD flow modulation patterns. In the future, vessels will be implanted within the EXVP to understand how the vasculature changes in response to flow conditions so that flow modulation patterns can be perfected and implemented clinically.

BACKGROUND
The heart, at its core, is a pump that is tasked with circulating blood throughout the body. The cyclic nature of heart contractions results in a distinct pumping motion that provides a physiologic pulse and, consequently, pulsatile blood flow to the vast network of arteries that span the human body. The cardiac muscle is called the myocardium, and it is able to pump continuously because it has a continuous blood and nutrient supply. There are four chambers in the heart - two atria and two ventricles - and they pump blood to the next chamber by contracting, or squeezing. In addition to the myocardium, the heart valves play an integral role in pumping by opening and closing appropriately. During systole, the tricuspid and mitral valves open while the pulmonary and aortic valves close, and vice versa during diastole. The valves are triggered by pressure gradients and open when the neighboring chamber has less pressure. By shutting, the heart valves effectively inhibit retrograde flow. By contracting, a chamber increases its internal pressure, thereby causing blood to flow to the neighboring chamber with less pressure.

Figure 1: From anatranik.org

The first stage in circulation within the heart is the arrival of deoxygenated blood into the right atrium via the superior and inferior vena cavae. Blood flows through the heart in this order: superior/inferior vena cavae, right atrium, tricuspid valve, right ventricle, pulmonary valve, pulmonary artery, lungs, pulmonary vein, left atrium, mitral valve, left ventricle (LV), aortic valve, and aorta. From the aorta, blood continues into sub-branches called arteries, which eventually branch into even smaller vessels called arterioles. The arterioles further divide into the even narrower capillaries, which are found in the capillary bed. There, oxygen diffuses into the cells while carbon dioxide diffuses into the blood. Thus, from the time that blood exits the aorta, it makes its way into even narrower branches before becoming deoxygenated in the capillary bed. After becoming deoxygenated, the blood moves through progressively larger vessels, from venules to veins and finally the vena cavae, which lead back into the heart to start the process again.

Figure 2: Diagram of Cardiac Blood Flow

Today, heart failure (HF) poses a major problem to the world despite the massive improvement in medical care and technological advancement that society has undergone in the past several decades. According to the Centers for Disease Control and Prevention, an estimated 5.7 million adults in the US and over twenty million adults globally have HF. Heart Failure's five-year mortality rate is 48%, which is significantly higher than that of breast cancer, 11%, and its ten-year mortality rate is close to 75% (CDC, 2016). Consequently, it is no surprise that HF is one of the leading causes of death in the United States, with one in nine deaths listing HF as a contributing factor. Heart Failure occurs when the heart is unable to pump enough blood to satisfy the demands of the body. One of the main causes of Heart Failure is coronary artery disease, which occurs when plaque accrues within the coronary arteries. Since the coronary arteries directly supply the myocardium, any impedance will detrimentally affect the myocardium and the heart's ability to pump. As plaque accumulates over time within the coronary arteries, there is significantly less flow, which causes the myocardium to weaken over time. Despite these daunting statistics, numerous techniques and implantable devices have been developed to help alleviate the growing burden of HF. The gold standard in HF treatment remains heart transplantation, which entails completely removing the heart from a recently deceased donor and then transplanting it into a compatible recipient. However, the demand for viable hearts far exceeds the supply, and as a result, a vast number of people are left without an organ, which necessitates the development of other, more accessible treatment options. Endothelial dysfunction is a significant and often potent repercussion associated with HF. A healthy endothelium is able to constrict and dilate the vessels appropriately based on the shear stress that it detects. When the endothelium detects increased shear stress on the luminal surface, it produces the enzyme endothelial
Nitric Oxide synthase (eNOS), which results in the production of Nitric Oxide (NO). From there, NO diffuses to the smooth muscle cells of the blood vessel and causes them to relax and expand, thereby increasing the lumen diameter. The ability of the endothelium to sense shear stress and dilate or constrict appropriately ensures that tissue blood demand is consistently met. Furthermore, neurohormonal activation as a result of LV deterioration results in the production of angiotensin II, which in turn increases the production of certain reactive oxygen species (ROS) such as O2-. The ROS react rapidly with and degrade NO, thereby reducing the latter's bioavailability, which in turn suppresses the endothelium's ability to dilate in response to physiological stimuli. This reduced vasodilation therefore increases the vascular resistance, which further reduces myocardial perfusion. Due to the aforementioned problem associated with donor heart availability, other alternatives such as the left ventricular assist device (LVAD) have been developed. An LVAD is essentially a pump that is attached to the apex of the LV and suctions blood from the LV to the aorta, from which it is dispensed to the body. The first LVAD was implanted in a thirty-seven-year-old woman by Dr. Michael E. DeBakey. Although the device was successfully implanted, it only lasted a short period of time. The first successful long-term implantation occurred in 1988, when an LVAD was implanted by Dr. William F. Bernhard (Kirklin & Naftel, 2008). The LVADs that were developed after this implantation were largely pulsatile, meaning that they removed blood from the LV through volume displacement and ejected it through the aorta, thereby mimicking the pulsatile nature of the heartbeat. These pulsatile flow left ventricular assist devices (PF LVADs) were considered the first generation of LVADs and were eventually replaced by the second-generation continuous flow left ventricular assist devices (CF LVADs). Compared to PF LVADs, CF LVADs pump blood at a constant speed through a spinning impeller. There are two main CF LVAD designs: axial flow and centrifugal flow. In terms of implantation, both types of CF LVADs are patched onto the apex of the LV. In axial flow CF LVADs, blood from the LV runs horizontally and perpendicular to the surface of the impeller and continues to the aorta. In centrifugal flow CF LVADs, by contrast, blood from the LV runs vertically and perpendicular to the surface of the impeller before continuing to the aorta. The smaller size of CF LVADs made them much easier to surgically implant, which expedited the transition from PF LVADs to CF LVADs. However, one of the downsides of the rotary pump within CF LVADs is the increased chance of pump thrombosis, or clot formation


within the device. Since the CF LVAD is completely enclosed within the body, there is no easy method to remove such a clot. Consequently, the risk of coagulation and subsequent complications was very high in the second-generation LVADs. To diminish the risk of pump thrombosis inherent in the second-generation CF LVADs, the third generation of LVADs was introduced a decade ago. These newer LVADs are still continuous flow, but they consist of impellers that are suspended within the device itself, thereby minimizing contact with the device. The impellers are suspended through technologies such as magnetic and hydrodynamic levitation. Most of the third-generation LVADs are undergoing clinical trials in both Europe and the United States and will soon be readily adopted by all major hospitals within a few years. Notable third-generation LVADs include the HeartMate III and the HVAD, which are manufactured by Thoratec and HeartWare, respectively. One review by Soucy et al. evaluates the overall importance of the pulse with respect to LVAD implantation (Soucy et al., 2013). It specifically explains the process of mechanotransduction in the cardiovascular system, which is the mechanism by which mechanically induced forces are transmitted into biological signals that affect the vasculature. Continuous flow results in blood flow at a constant speed, which reduces cyclic stretch and shear stress oscillation. This in turn results in a relatively constant vessel diameter, which bespeaks very low compliance and distensibility. Through mechanotransduction, the attenuation of these two mechanical properties of blood vessels results in biological signaling, as evidenced by augmented oxidative stress and myocardial remodeling. In pulsatile flow, by contrast, the oscillating flow results in cyclic stretch and oscillating shear stresses. Soucy et al. note that cyclic stretch and shear stress work synergistically, which results in greater endothelial responses for cells subject to pulsatile flow (Soucy et al., 2013). The continuous refinement of LVADs offers significant hope for HF patients, especially compared to fifteen years ago. Compared to PF LVADs, CF LVADs have a portable power supply and are more reliable and durable. These advantages have significantly augmented quality of life for patients. However, the widespread adoption of CF LVADs in the past decade has coincided with the onset of numerous symptoms in patients, such as gastrointestinal bleeding and thrombosis, that have often led to earlier deaths (Soucy et al., 2013). The increased frequency of complications in patients with CF LVADs raises questions about the true importance of pulsatility. Nonetheless, these complications make it clear

that diminished pulsatility detrimentally affects the vasculature. A proposed method to avoid the aforementioned symptoms is to restore pulsatility to a certain degree while still using a CF LVAD. In 2011, a group of researchers at the University of Louisville theorized that by modulating the speed of the impeller within the CF LVAD through flow modulation algorithms, the symptoms associated with diminished pulsatility could be avoided (Ising et al., 2011). Initially, the researchers conducted their study through a computer simulation model, but in 2015, they performed the experiment within a bovine model and discovered that the flow modulation algorithms were able to restore pulsatility to a certain degree in a CF LVAD (Soucy et al., 2015). In terms of operation with respect to the native cardiac cycle, there are two discrete types of LVAD flow modulation: asynchronous and synchronous. Asynchronous flow modulation occurs when the impeller flow rate is modulated independently of the native heart rate, whereas synchronous flow modulation occurs when the impeller flow rate is synchronized with the native heart rate. Two discrete types of synchronous flow modulation are copulsation and counterpulsation. Copulsation occurs when the maximum CF LVAD output is synchronized with the heart's systole phase, and counterpulsation occurs when the maximum CF LVAD output is synchronized with the heart's diastole phase. These flow modulation algorithms modulate the RPM of the rotary pump such that the mean flow rate remains constant, but the periodic transition from maximum to minimum flow rate and vice versa generates a pulsatile movement that is similar to the physiologic pulse. The results from the bovine flow modulation algorithm study proved that flow modulation algorithms have the ability to restore pulsatility within CF LVADs. In terms of LVAD experimental studies, there are two main types: animal and ex vivo. An animal LVAD study entails implanting an LVAD within an animal and observing its long-term health and physiology. An ex vivo LVAD study, on the other hand, entails replicating LVAD conditions outside of a living animal. The main benefit of conducting an animal study over an ex vivo study is that it is able to replicate human-like physiologic conditions without the manufactured tubing and artificial pumps of ex vivo studies. On the other hand, an ex vivo study is extremely flexible and simpler to conduct. One of the downsides of the bovine flow modulation algorithm study was the numerous challenges associated with it, such as procurement and maintenance of the animal itself. This study design prevents the monitoring of vessel structure in real time, since the vessels can only be analyzed at a certain point



in time and cannot easily be reattached to analyze the vessel at a later point in time. The flexibility of an ex vivo study significantly reduces the cost associated with the study while increasing its controllability. In order to make the process more replicable, a system must be built such that it can replicate in vivo circulation and evaluate the effects that different flow conditions inflict on the vasculature. The best ex vivo study design as it pertains to this study is the Ex Vivo Perfusion System (EXVP), which is essentially a mock flow circulation loop that replicates in vivo circulation and allows for a vessel to be inserted in order to subject it to the flow conditions. Through its various pumps and tubing, the EXVP is designed to mirror the pulmonary and systemic circuits. The veins are mirrored in the EXVP by the reservoir, since both contain large quantities of fluid at any given moment. Next is the pressure gradient, which allows fluid to keep flowing and is mirrored in the EXVP by the peristaltic pump. The heart valves are mirrored by the EXVP one-way valves, since both inhibit retrograde flow and ensure that blood continues to move. The mock ventricle on the EXVP is connected to a pneumatic driver which can be modified to change the intensity of the pumping in order to simulate the LV. The perfusion chamber contains a section of a blood vessel that is explanted from a living organism. After the vessel has been subjected to various flow conditions, it can be removed and analyzed. Furthermore, throughout the EXVP, there are compliance elements and resistance elements that can be adjusted to change the pressure and satisfy the parameters associated with the flow conditions. By optimizing the EXVP such that it simulates the circulation under the various flow modulations, the induced vascular changes can be observed and ascertained without actually implanting an LVAD within a living organism. The main goal of the present study is to optimize an Ex Vivo Perfusion System and modify it to simulate the flow modulation patterns. The next goal is to elucidate and understand the effects inflicted on the vasculature and its remodeling process by the flow modulation patterns.

MATERIALS AND METHODS

Ex Vivo Perfusion System (EXVP)
In order to prepare the EXVPs so that they could be run under physiologic conditions in the incubator, the EXVPs were set up according to the diagrams below. Figure 3 represents the setup for the Normal, Heart Failure, Continuous Flow, Copulsation, and Counterpulsation Conditions, Figure 5 represents the setup for the Static Flow Condition, and Figure 6 represents the setup for

the Asynchronous Flow Condition. In all four setups, fluid flows counterclockwise, and the diameter is a consistent 5 mm across all four setups. All of the EXVP setups share a reservoir at the top left, with tubing coming out of the reservoir. The tubing then runs down to a peristaltic pump, which ensures that fluid keeps moving forward. After the peristaltic pump, a fifteen-centimeter length of Penrose tubing was attached to serve as a compliance element, which can be clamped accordingly to adjust the pressure. The vessel was kept in a perfusion chamber, which is basically a chamber with holes and fittings designed to hold the vessel in place.

Figure 3: Ex Vivo Perfusion Setup for Normal, Heart Failure, Copulsation, Continuous, and Counterpulsation Flow Conditions

For the Normal, Heart Failure, Copulsation, and Counterpulsation Conditions (depicted in Figure 3), a mock ventricle follows the compliance element. A one-way valve was added after the mock ventricle to prevent retrograde flow in the EXVP, after which another compliance element was added. Next, a perfusion chamber was added, which is basically a chamber containing the vessel suspended in media but still attached to the loop. Another one-way valve was added after the perfusion chamber, after which the pressure Millar and flow probes were added. Next, another compliance element was added, after which came the resistance element. The resistance element consists of a clamp that is positioned on top of the tubing in the EXVP and can be shifted up or down to decrease or increase resistance, respectively. The EXVP continues back to the reservoir, from which the process starts again. Relative to the other conditions, the EXVP setup for the Continuous Flow Condition is much shorter and simpler. After the reservoir, peristaltic pump, and compliance element, which were all shared by all the conditions, a one-way valve was



added to ensure continuous flow. After this valve came the perfusion chamber (Figure 4) and the pressure Millar and flow probes. Next, another one-way valve was added before the EXVP goes back to the reservoir, from which the process starts all over again.

Out of all the conditions, the Asynchronous Flow Condition is the most complex, since it consists of another peristaltic pump that must be operated independently. After the reservoir, peristaltic pump, and compliance element, which are shared by all the flow conditions, a flow probe was attached. After this, another set of tubing was attached which contained the second peristaltic pump. This pump was operated at a rate independent of the peristaltic pump situated on the original tubing. After the second peristaltic pump, a flow probe was attached to monitor the flow rate so that the peristaltic pump could be adjusted accordingly. Meanwhile, on the original tubing, a one-way valve was added to ensure that flow is continuous through the mock ventricle. Next came the empty syringe tube, which is connected to a pneumatic driver and serves as a mock ventricle. After this, another one-way valve was added, after which the concurrent set of tubing was reattached to the original set of tubing. Immediately after the reattachment point, a compliance element was added. The pressure Millar and flow probe were then added, after which the perfusion chamber (Figure 4) was added. Next, a pressure Millar was added, after which came a compliance element and resistance element. The tubing then led back to the reservoir, from which the process starts all over again.

Figure 4: Perfusion Chamber Design

Figure 5: Ex Vivo Perfusion Setup for Static Flow Condition

Figure 6: Ex Vivo Perfusion Setup for Asynchronous Flow Condition

Data Acquisition
After the EXVPs for all the flow conditions were set up, they had to be adjusted to concur with the target parameters associated with each flow condition. Those parameters are listed in Table 1. The pressure and flow probes on each EXVP setup were connected to an external computer, where a data acquisition program (LabChart) allowed for the monitoring and recording of pressure and flow continuously in real time. From there, the compliance elements, resistance element, peristaltic pump, and pneumatic driver were adjusted accordingly in order to satisfy the pressure and flow requirements for the particular flow condition. After these conditions were met, the measurements (position of the clamp on the compliance element, resistance pressure, pneumatic driver settings, peristaltic pump flow) from the modified setup were recorded.
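A small helper along the following lines could be used to check whether a recorded sample satisfies a condition's targets before the settings are logged; this is an illustrative Python sketch, and the target ranges shown are placeholders rather than the values in Table 1.

```python
from statistics import mean

# Placeholder target ranges per flow condition: mean pressure (mmHg) and mean flow (L/min).
# The study's actual targets are listed in Table 1.
TARGETS = {
    "Normal":     {"pressure": (80.0, 100.0), "flow": (4.0, 5.0)},
    "Continuous": {"pressure": (80.0, 100.0), "flow": (4.0, 5.0)},
}

def within_targets(condition, pressure_samples, flow_samples):
    """Return True if the mean pressure and mean flow fall inside the target ranges."""
    p_lo, p_hi = TARGETS[condition]["pressure"]
    q_lo, q_hi = TARGETS[condition]["flow"]
    return p_lo <= mean(pressure_samples) <= p_hi and q_lo <= mean(flow_samples) <= q_hi

# Example check on a few hypothetical readings.
print(within_targets("Normal", [88, 92, 95, 90], [4.4, 4.6, 4.5, 4.3]))  # True
```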

Table 1: Target Parameters for Flow Conditions

After the target parameters were achieved for each flow condition in the EXVP, they were run for thirty minute intervals over a ten hour period with data being taken from ten second intervals at the end of each thirty minute interval. By obtaining data from thirty minute intervals over the course of a ten-hour period, drifts in pressure can be discerned in order to evaluate



the overall integrity of the system. During each ten-second interval, pressure and flow data were taken every 0.0025 seconds, resulting in 4000 data points. These data points were then plotted onto graphs for each flow condition. In order to quantify the degree of pulsatility for each condition, the Energy Equivalent Pressure (EEP) and Surplus Hemodynamic Energy (SHE) values were calculated with the formulas below, where Q is blood flow, P is pressure, and t is time:

EEP = ∫(Q·P) dt / ∫Q dt

The units for this value are mmHg. The integral values were obtained by calculating the area under the Flow·Pressure graph and the Flow graph using MATLAB software.

SHE = 1332 · (EEP - MAP)

where MAP is Mean Arterial Pressure. The units for this value are ergs/cm3, and the factor of 1332 converts mmHg (the units for EEP and MAP) to ergs/cm3 (the unit for SHE). The MAP was calculated through MATLAB software by taking the integral of the Pressure vs. time graph and then dividing by the time interval.
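As an illustration of how these pulsatility metrics can be computed from sampled data, the short Python sketch below applies trapezoidal integration to a synthetic pressure and flow waveform sampled every 0.0025 s; the waveform and variable names are illustrative assumptions, not the study's MATLAB code or measured data.

```python
import numpy as np

def pulsatility_metrics(pressure_mmHg, flow, dt=0.0025):
    """Return (MAP in mmHg, EEP in mmHg, SHE in ergs/cm^3) from sampled waveforms."""
    t = np.arange(len(pressure_mmHg)) * dt
    map_mmHg = np.trapz(pressure_mmHg, t) / (t[-1] - t[0])            # mean arterial pressure
    eep_mmHg = np.trapz(flow * pressure_mmHg, t) / np.trapz(flow, t)  # energy equivalent pressure
    she = 1332.0 * (eep_mmHg - map_mmHg)                              # 1332 converts mmHg to ergs/cm^3
    return map_mmHg, eep_mmHg, she

# Illustrative ten-second waveforms at 60 beats per minute (placeholders, not measured data).
t = np.arange(0, 10, 0.0025)
pressure = 90 + 20 * np.sin(2 * np.pi * t)   # mmHg
flow = 4.5 + 1.5 * np.sin(2 * np.pi * t)     # flow in any consistent unit; it cancels in EEP
print(pulsatility_metrics(pressure, flow))
```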

Furthermore, in order to calibrate the EXVP against the native circulation, the equivalent shear stress values were calculated using the Womersley Shear and Hagen-Poiseuille Shear formulas. The Hagen-Poiseuille Shear formula is as follows:

τ = 4μQ / (πr³)

However, to account for the units in the EXVP, this formula was manipulated to yield:

τ = 4μQ / (60,000,000 · πr³)

where μ is dynamic viscosity (Pa·s), Q is volumetric flow rate (mL/min), and r is radius (m). The factor of 60,000,000 that is introduced to simplify unit cancellation converts the flow rate from mL/min to m³/s. The Womersley Shear formula, on the other hand, requires the calculation of an intermediary, the Womersley number:

α = r·√(2πf / ν)

which is then inputted into another formula to calculate the shear value. To make the calculation more straightforward, the Womersley number formula was substituted directly into the Womersley Shear formula and then manipulated to account for the units on the EXVP, where r is radius (m), f is frequency (Hz), ν is kinematic viscosity (m²/s), Q is volumetric flow rate (mL/min), and μ is dynamic viscosity (Pa·s); a factor of 3.6 × 10^14 was introduced within the square root to simplify unit cancellation.
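The unit bookkeeping in these shear calculations can be captured in a few lines of Python; the sketch below re-implements only the Hagen-Poiseuille shear and the Womersley number (not the combined Womersley shear expression), and the example inputs are assumptions rather than the study's parameters.

```python
import math

def hagen_poiseuille_shear(mu_Pa_s, Q_mL_per_min, r_m):
    """Wall shear stress (Pa) for steady laminar flow: tau = 4*mu*Q / (pi*r^3).
    Q is converted from mL/min to m^3/s by dividing by 6.0e7."""
    Q_m3_per_s = Q_mL_per_min / 6.0e7
    return 4.0 * mu_Pa_s * Q_m3_per_s / (math.pi * r_m ** 3)

def womersley_number(r_m, f_Hz, nu_m2_per_s):
    """Dimensionless Womersley number: alpha = r * sqrt(2*pi*f / nu)."""
    return r_m * math.sqrt(2.0 * math.pi * f_Hz / nu_m2_per_s)

# Example inputs (assumed): a blood-analog viscosity of 3.5 mPa*s and a 2.5 mm radius lumen.
print(hagen_poiseuille_shear(mu_Pa_s=3.5e-3, Q_mL_per_min=300.0, r_m=0.0025))  # ~1.4 Pa
print(womersley_number(r_m=0.0025, f_Hz=1.0, nu_m2_per_s=3.3e-6))              # ~3.4
```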

An Excel spreadsheet was created that outputted both the Womersley Shear and Hagen-Poiseuille Shear after the variables were entered. The only variables in this study are mean flow (Q) and vessel radius (r); thus, they can be modified in order to achieve the target shear stress.

Vessel Measurements
One of the aims of this study is to understand the effects that are inflicted on the vasculature as a result of these proposed flow conditions. Compliance and distensibility are two mechanical properties of blood vessels that are influenced by hemodynamic forces and pathways that in turn influence the process of mechanotransduction. Compliance is the "ability to stretch and hold volume," while distensibility is the ease with which a blood vessel distends or expands (Soucy et al., 2010). By calculating the initial compliance and distensibility for a blood vessel and then comparing those with the compliance and distensibility for the blood vessel after it has been subject to the various flow conditions for 24 and 48 hours, the flow modulation induced



vascular remodeling can be discerned. Compliance was calculated as the change in cross-sectional area over the change in pressure (Δa/Δp), and distensibility as the change in normalized area over the change in pressure (Δ(a/a0)/Δp), where a is the cross-sectional area (computed from the measured inner diameter), a0 is the initial area, and p is pressure. The calculation of compliance and distensibility requires the measurement of the blood vessel diameter over various pressures. This was done by using a VEVO Ultrasound System to take continuous pictures of the blood vessel while slowly increasing the internal pressure with a manometer to six different pressures (50, 80, 90, 110, 120, and 150 mmHg). Using ImageJ, the inner diameter of the blood vessel was calculated. For compliance, the area (πr²) was plotted against pressure on a graph. The slope of the resulting graph was then the compliance for the vessel, since the slope relayed the change in area over the change in pressure (rise over run). For distensibility, the area/area0 (πr²/πr0²) was plotted against pressure on a graph. The slope of the resulting graph was then the distensibility for the vessel, since the slope relayed the change in area/area0 over the change in pressure (rise over run). This same process occurred for vessels subject to 24 and 48 hours in the various flow conditions.

Graph 1: Pressure Graph for Normal Condition
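A minimal sketch of the slope calculations described above, assuming the inner diameters have already been measured in ImageJ at each pressure; the diameter values are placeholders, not measurements from the study.

```python
import numpy as np

pressures_mmHg = np.array([50, 80, 90, 110, 120, 150], dtype=float)
# Hypothetical inner diameters (mm) read from the ultrasound images at each pressure.
diameters_mm = np.array([4.60, 4.80, 4.88, 5.00, 5.04, 5.15])

area = np.pi * (diameters_mm / 2.0) ** 2                 # cross-sectional area, mm^2

# Compliance: slope of area vs. pressure (mm^2 per mmHg).
compliance = np.polyfit(pressures_mmHg, area, 1)[0]

# Distensibility: slope of normalized area (area / area0) vs. pressure (1 per mmHg).
distensibility = np.polyfit(pressures_mmHg, area / area[0], 1)[0]

print(f"compliance = {compliance:.4f} mm^2/mmHg, distensibility = {distensibility:.5f} 1/mmHg")
```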

Graph 2: Flow Graph for Normal Condition

Graph 3: Pressure Graph for Continuous Flow Condition

RESULTS

Graph 4: Flow Graph for Continuous Flow Condition

Table 2: Setup Parameters for Flow Conditions

In order to achieve the target parameters for each of the aforementioned flow conditions, the EXVPs were modified on a trial-and-error basis. Table 2 contains the settings used on the EXVPs in order to achieve the target parameters listed in Table 1. Graphs 1-12 illustrate the pressure and flow profiles for each of the flow conditions; Graphs 3 and 4 are the pressure and flow waveforms for the Continuous Flow Condition as reproduced within the EXVP.

Graph 5: Pressure Graph for Copulsation Flow Condition



Graphs 9 and 10 show the pressure and flow waveforms for the Heart Failure Condition as reproduced within the EXVP.

Graph 6: Flow Graph for Copulsation Flow Condition

Graph 11: Pressure Graph for Asynchronous Condition

Graph 7: Pressure Graph for Counterpulsation Flow Condition

Graph 12: Flow Graph for Asynchronous Condition

Graphs 11 and 12 are the pressure and flow waveforms for the Asynchronous Condition (20 BPM, 500-1000 RPM).

Graph 8: Flow Graph for Counterpulsation Flow Condition

Graph 9: Pressure Graph for Heart Failure Flow Condition

Graph 10: Flow Graph for Heart Failure Flow Condition

CONCLUSION

In conclusion, the increasing adoption of CF LVADs for Heart Failure has markedly improved quality of life for patients. However, various symptoms that have been reported clinically, such as gastrointestinal bleeding and thrombosis, raise the question of how important pulsatility truly is and whether or not it should be abandoned in CF LVAD design and operation. Certain flow modulation algorithms, including asynchronous and synchronous flow modulation, have been proposed to help add a pulsatile aspect to CF LVADs. In this study, an Ex Vivo Perfusion System (EXVP) was optimized and calibrated to the aforementioned flow waveforms. The demonstrated maneuverability and flexibility of the EXVP setups allow for the extrapolation of the EXVP to other shear stresses, pressures, and flows. By adjusting the EXVP to increase these values and then integrating a vessel, the induced vascular effects can be observed over time. An added benefit of using EXVPs to perform these studies is that they can all be performed without the costs and challenges associated with an animal model. Comparing these waveforms with the target parameters identified at the beginning of the study, the main goal of this study, which was



to optimize an EXVP to the LVAD flow modulation waveforms, has clearly been achieved. Currently, the EXVPs are being set up within the incubator, and vessels will soon be integrated into the EXVP in order to understand how the compliance and distensibility of the vessel change in response to the flow conditions. Furthermore, an angiogenic study is slated to start in April in order to fully identify the changes in the vasculature as a result of blood flow.

ACKNOWLEDGMENTS
First and foremost, I would like to thank Dr. Kevin Soucy at the University of Louisville for the unwavering support and guidance that he provided, without which I would not have been able to complete this study. I would also like to thank Mr. Abhinav Kanukunta for his support and assistance over the course of this study.

WORKS CITED
Alba, A. C., & Delgado, D. H. (2009). The future is here: Ventricular assist devices for the failing heart. Expert Review of Cardiovascular Therapy, 7(9), 1067-1077. Retrieved from http://www.medscape.com/viewarticle/709956_2
Bauersachs, J., & Schäfer, A. (2004). Endothelial dysfunction in heart failure: Mechanisms and therapeutic approaches. Current Vascular Pharmacology, 2(2), 115-124.
Centers for Disease Control and Prevention. (2016, June 16). Heart failure fact sheet. Retrieved from http://www.cdc.gov/dhdsp/data_statistics/fact_sheets/fs_heart_failure.htm
Ising, M., Warren, S., Sobieski, M. A., Slaughter, M. S., Koenig, S. C., & Giridharan, G. A. (2011). Flow modulation algorithms for continuous flow left ventricular assist devices to increase vascular pulsatility: A computer simulation study. Cardiovascular Engineering and Technology, 2(2), 90-100.
Kanukunta, A., Gandhi, A., & Soucy, K. (2016). AHA progress report.
Kirklin, J. K., & Naftel, D. C. (2008, June 5). Mechanical circulatory support: Registering a therapy in evolution. Circulation: Heart Failure, 1, 200-205.
Mancini, D., & Colombo, P. (2015). Left ventricular assist devices: A rapidly evolving alternative to transplant. Journal of the American College of Cardiology, 65(23), 2542-2555.
Soucy, K. G., Giridharan, G. A., Choi, Y., Sobieski, M. A., Monreal, G., Cheng, A., . . . Koenig, S. C. (2015). Rotary pump speed modulation for generating pulsatile flow and phasic left ventricular volume unloading in a bovine model of chronic ischemic heart failure. The Journal of Heart and Lung Transplantation, 34(1), 122-131.
Soucy, K. G., Koenig, S. C., Giridharan, G. A., Sobieski, M. A., & Slaughter, M. S. (2013, June). Rotary pumps and diminished pulsatility: Do we need a pulse? ASAIO Journal, 355-366.
Soucy, K. G., Lim, H. K., Attarzadeh, D. O., Santhanam, L., Kim, J. H., Bhunia, A. K., . . . Berkowitz, D. E. (2010). Dietary inhibition of xanthine oxidase attenuates radiation-induced endothelial dysfunction in rat aorta. Journal of Applied Physiology, 108, 1250-1258.
The heart as a double pump: Pulmonary and systemic circuits [Online image]. (2014). Retrieved January 13, 2017, from http://www.newhealthadvisor.com/blood-flow-through-the-heart.html



Collecting Kinetic Energy as an Alternative Tolling System
Emma Heironimus1
1 duPont Manual High School, Louisville, Kentucky

ABSTRACT

There have been 201 highway accidents in Illinois this year. Approximately 49% of highway accidents in Illinois occur within 0.1 miles of toll booths; that is about 98 accidents this year. Tolls also have a negative impact on the environment, as commute times for trips with tolls are 139% as long as times without tolls. This is due to standing, as tolls are often high-traffic areas. This standing releases harmful gases and air pollutants into the air, damaging the environment. Tolls themselves don't even produce a high revenue because, even though they collect about $13 billion annually, administrative costs can be as high as 92.5%. The goal of the project was to use Faraday's laws and ideas from previous projects, combined with innovative ideas, to create an alternative tolling option which collects kinetic energy as opposed to physical money and which is cost-efficient, stable, and electrically efficient. The hypothesis for the experiment was that more coils and smaller magnets would provide the most efficient experimental design. As of now, 9 different experiments have been performed with multiple trials each. The hypothesis is partially supported, as multiple coils have collected the most energy. However, the largest string of magnets has collected the most energy, as opposed to the smallest string. In the immediate future, different magnet and coil formations as well as working models will be tested.

INTRODUCTION

49% of highway accidents in Illinois, 38% of highway accidents in New Jersey, and 30% of highway accidents in Pennsylvania occur within 0.1 miles of toll booths. Motorists are three times more likely to die at a toll than at any other place on the highway. There have been 135 highway accidents in Illinois already this year. Assuming approximately 49% occurred at toll booths, 66 occurred within 0.1 miles of toll booths. Many claim the new electronic tolling systems being implemented in some areas are a safer option, as they do not require people to abruptly slow down. However, these systems often require lane changes, which are even more dangerous as people are trying to slow, change lanes, and accelerate near each other (U.S. Department of Transportation).

Tolls also have a negative impact on the environment. Urban commute times are 39% higher because of tolls. This is not only bothersome to people passing, but also detrimental to the environment. As people sit in traffic caused by toll systems, their vehicles exhaust unnecessary greenhouse gases which pollute the air and lead to many environmental issues. The new electronic systems also often mail the bill. This uses unnecessary paper, which has its own set of environmental issues. Tolls, despite collecting about $13 billion annually, often underperform, as administrative costs can be as high as 92.5%. This brings into question whether tolls are worthwhile, at least in the current design. The design which this project aims to create is an alternative tolling option which collects kinetic energy as opposed to physical money.

When a car slows down, it loses energy. Kinetic energy is equal to one half the mass of the vehicle times the velocity of the vehicle squared (KE = ½mv²). The energy remaining after a car slows is described as: KE before - lost KE = KE after. Therefore, the lost KE = the KE before - the KE after, which is equal to ½ × mass × velocity squared (before) - ½ × mass × velocity squared (after). Essentially, this lost kinetic energy is the toll the project aims to collect.
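As a rough worked example of the "toll" of lost kinetic energy described above, using the standard KE = ½mv² with assumed vehicle numbers rather than figures from this paper:

```python
def lost_kinetic_energy_J(mass_kg, v_before_m_s, v_after_m_s):
    """Kinetic energy given up as the vehicle slows: 1/2 * m * (v_before^2 - v_after^2)."""
    return 0.5 * mass_kg * (v_before_m_s ** 2 - v_after_m_s ** 2)

# Example: a 1500 kg car slowed by roughly one foot per second (~0.3 m/s) from 30 m/s.
print(lost_kinetic_energy_J(1500.0, 30.0, 29.7))  # about 13,400 J per passing vehicle
```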

17


This was attempted using Faraday's laws of induction. In 1831, an English physicist named Michael Faraday conducted an experiment. Faraday took a coil hooked up to a galvanometer and placed a magnet near it. As anticipated, no change occurred on the galvanometer. However, when Faraday moved the magnet towards the coil rather quickly, there was a noticeable deflection in the galvanometer. Once he halted the magnet, there was a slight deflection in the opposite direction. Moving the magnet away from the coil also yielded an opposite deflection. He also noted that the faster the magnet was moved, the greater the deflection. Faraday concluded that "...whenever there is a relative motion between conductor and a magnetic field, the flux linkage with a coil changes and this change in flux induces a voltage across a coil" (Encyclopedia Britannica, 2015). From this experiment, Faraday derived his two laws, known as Faraday's laws of electromagnetic induction. The first law states that any change in the magnetic field of a coil will cause an electromotive force (emf) to be induced in the coil. His second law states that the magnitude of the emf is equal to the rate of change of the flux linkage with the coil. This flux linkage is equal to the product of the number of turns in the coil and the flux through the coil. In 1893, Nikola Tesla created the steam-powered reciprocating electricity generator based on Faraday's laws. In this design, steam would rush through a series of ports, pushing a piston up and down. The piston was attached to an armature of coils. The armature would vibrate up and down at a high speed, producing an alternating magnetic field. This induced an electromagnetic force in the wire coils adjacent to the piston. This design worked well; however, it was rather expensive to make and was not entirely sustainable, as it took a large amount of energy to make the steam. It did, however, provide a design which was easier and cheaper to make than the steam engine which was common at the time (Encyclopedia Britannica, 2017).
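A back-of-the-envelope illustration of Faraday's second law, where the average induced emf equals the number of turns times the rate of change of flux; the coil and flux values below are made up for illustration, not taken from the device.

```python
def induced_emf_V(turns, delta_flux_Wb, delta_t_s):
    """Average induced emf (volts): N * dPhi/dt."""
    return turns * delta_flux_Wb / delta_t_s

# Example: a 200-turn coil whose linked flux changes by 0.002 Wb over 0.05 s.
print(induced_emf_V(200, 0.002, 0.05))  # 8.0 V
```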

A similar design was seen in a device created by designer Fang-Chu Tsai, who attempted to harness kinetic energy from vehicles. His design involved attaching a magnetic device to the bottom of the car and adding magnetic strips to highways so that, as the car passed over them, the magnetic field would change and energy could be collected. Although his tests appeared efficient on a small scale, it was costly to produce the required materials and devices (Sing, 2011). The basic principle of these designs is moving a magnet through a series of coils to induce an electromagnetic force in the coils. A similar design was used in this project. Unlike the design by Tesla, this device uses air to push the piston. Unlike the design by Tsai, this device captures energy from the road and draws it to an exterior source rather than capturing the energy directly from the car and immediately returning it to the car. In this design, a car is slowed by a very minuscule amount (approximately one foot per second) by a tube, bag, or pump on a major highway where a toll system, which also would have required the car to slow down, would have been. As the car is slowed, kinetic energy from the car is used to push a magnet through a tube surrounded by induction coils, changing the magnetic field of the coils and generating an electromagnetic force in the coils. This is the energy captured as a toll. In dollars, this would be about $20,000,000 per every low-cost device (see Figure A for an in-depth calculation of this value).

METHODOLOGY

The materials used to create the device were extra strong magnets of many shapes and sizes, a copper pipe, magnetic copper wire induction coils, a bellows pump, a copper connector, and a copper tube strap. A ring clamp with one neodymium ring magnet inside was placed at the top of the pipe. This was done to create a somewhat frictionless environment: by preventing the magnets within the pipe from completely reaching the top with an opposing magnetic force, it also aided in propelling the magnets back down the pipe. The bottom of the pipe was connected to the bellows pump by placing a copper fitting in the hose attached to the pump and attaching the copper fitting to the bottom of the pipe. This was necessary as the hose was about 2.5 cm in diameter whereas the pipe was about 1.75 cm in diameter. Varying numbers



of coils were placed in various places on the outside of the pipe. Figure 1 shows a mockup of this device.

Figure 1: Device Mockup

Methods of reducing friction by preventing the magnets inside the tube from completely reaching the bottom were then tried. First, a single 3 cm diameter ring was placed around the tube. Trials, however, could not be conducted, as the ring was unstable and magnets were able to travel easily through the opposing magnetic field to the other side, where they would be stuck. As an alternate method, two 1.2 cm by 1.3 cm neodymium cylinder magnets were placed on opposite sides of the copper tube strap at 45 degree angles pointing in the same direction. This was stable, and magnets were unable to break through the opposing magnetic field. Trials were conducted using this method. Inside the pipe, strings of varying sizes and amounts of magnets were placed. Experiments were conducted using each of the strings by exerting approximately the same force on the pump multiple times. The force would drive the magnets upward through the coils. Galvanometers attached to the coils were used to measure the mA collected. Currently, 9 experiments have been conducted, with ten trials each. Six were done with a single coil resting at the bottom. One used a connected string of 20 1.2 cm magnets, one used a string of 15, and one used a string of 10. The three other trials had a mix of 1.2 and 0.7 cm magnets, one of which contained three large magnets on each end with 14 small magnets in between. One alternated sets of 2 large magnets and 7 small magnets (6 large and 14 small total). The last one had one large, three small, one large, three small, one large, two small, one large, two small, one large, three small, and one large. Three experiments kept the string of magnets constant (20 large ones) but altered the number and positioning of the surrounding coils. In one, two coils were placed at the bottom. In another, one coil was placed at the bottom and another was placed 12.5 cm above it; the two were connected by a wire. In the third, the coils were placed the same as in the second; however, they were unconnected. Figure 1 shows a mockup of this device.

RESULTS

The following tables summarize the current results of the experiments. For each trial, the mA collected going up and down through each coil on the pipe was recorded, totaled, and averaged to the nearest 5 mA. Tables 1, 2, and 3 show the results for the ten trials using strings of 20, 15, and 10 1.2 cm diameter magnets, respectively.

Table 1: Experiments With Only 1 Coil

Table 2: Experiments With 2 Coils *Note: "U", "D", and "S" stand for the mA collected when the magnet traveled up through the coil, the mA collected when it traveled down, and their sum, respectively. "U1", "D1", and "S1" stand for the mA collected when the magnet traveled up, down, and in total through the first coil (the one closest to the bottom), respectively. "U2", "D2", and "S2" stand for the same quantities for the second coil (the one farthest from the bottom).
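As a minimal sketch of the aggregation described in the note above, the snippet below uses hypothetical galvanometer readings (the actual trial values appear in the tables), sums the up and down readings for each trial, and rounds the averaged total to the nearest 5 mA.

```python
# Minimal sketch (hypothetical readings) of how the table values are aggregated:
# for each trial, the current measured while the magnets travel up ("U") and
# down ("D") through a coil is summed ("S"), and the trial totals are then
# averaged to the nearest 5 mA.

def round_to_nearest_5(value: float) -> int:
    """Round a value to the nearest multiple of 5."""
    return 5 * round(value / 5)

# Hypothetical galvanometer readings (mA) for ten trials with one coil.
trials = [
    {"U": 82, "D": 71}, {"U": 79, "D": 74}, {"U": 85, "D": 70},
    {"U": 80, "D": 72}, {"U": 78, "D": 69}, {"U": 83, "D": 75},
    {"U": 81, "D": 73}, {"U": 77, "D": 70}, {"U": 84, "D": 72},
    {"U": 80, "D": 71},
]

sums = [t["U"] + t["D"] for t in trials]          # the "S" column, one value per trial
average_total = round_to_nearest_5(sum(sums) / len(sums))
print(f"Average total collected: {average_total} mA")
```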

The average total mA collected for the string of 20 was approximately 154 mA. The average total mA collected for the string of 15 was approximately 127 mA. The average total mA

19


collected for the string of 10 was approximately 99 mA. Tables 4, 5, and 6 show the results for the ten trials using the 3:14:3 alternating string, the 2:7:2:7:2 alternating string, and the 3:1:2:1:2:1:3 alternating string, respectively. The average total mA for Alternating 1 was approximately 76 mA. The average total mA for Alternating 2 was approximately 76 mA. The average total mA for Alternating 3 was approximately 85 mA. Tables 7, 8, and 9 display the results for the ten trials using two unattached coils at the bottom, two unattached coils 12.5 cm apart, and two attached coils 12.5 cm apart, respectively. The average total mA collected for the two unattached coils at the bottom was approximately 94 mA. The average total mA collected for the two unattached coils 12.5 cm apart was approximately 143 mA. The average total mA collected for the two attached coils 12.5 cm apart was approximately 152 mA. The following graphs summarize the current comparative results of the experiments. Graphs 1, 2, and 3 display how the strings of 1.2 cm magnets compared to each other in average mA collected going up, average mA collected going down, and average total mA for each trial, respectively. In all cases, the string of 20 magnets collected the most mA of energy. Graphs 4, 5, and 6 display how the strings of alternating 1.2 cm and 0.7 cm magnets compared to each other in average mA collected going up, average mA collected going down, and average total mA for each trial, respectively. Average mA going up did not vary much among the three strings, with a tie between Alternating 1 and Alternating 2 for the most mA collected. On average, Alternating 3 collected the most mA going down and in total. Graph 7 compares the total mA collected for each trial of the 20 1.2 cm magnet string and Alternating 3. In all scenarios, the 20 1.2 cm magnet string collected the most mA. Graphs 8, 9, and 10 display how each variation in coil positioning compared to the others in average mA going up through each of the coils, average mA going down through each of the coils, and average total mA, respectively. In most cases, the two coils at the bottom collected the most mA; however, in some trials there was little or opposite variation.

20


CONCLUSION The goal has currently been somewhat achieved. The current design is temporary, but it is cost-efficient and produces rather large amounts of energy. The hypothesis was partially supported by the data collected. The data showed that more unconnected coils produced greater amounts of energy; this is likely due to the design resembling multiple smaller versions, each collecting energy. The hypothesis, however, was partially unsupported, as adding smaller magnets appeared to lessen the amount of energy collected. This could also be due to the inconsistent size of the magnet strings, as large magnets were necessary to prevent the string from falling too far into the pipe. Further experimentation is required to determine the true cause of these results. Continuing in this project, more experiments and changes must be made. The shape of the magnets, as well as the size of single-size strings, must be tested in controlled environments. Further research must be done to determine whether there is repetition in the recorded amounts of energy collected, that is, whether a certain amount of energy is being recorded unintentionally multiple times. Also, more trials will be added to the experiments already conducted. In addition, trials determining the amount of mA collected from two attached coils at the bottom will be completed. Other factors, such as pipe shape, material, and size, coil size, and weight placed on the pump, will be experimented upon. REAL LIFE APPLICATION The following is a table analyzing how much money the device would save or collect in a year. The data is based on the estimation that kinetic energy from about 111 mid-sized cars would be collected per minute. This is a slight underestimate, as it assumes only average traffic and only mid-sized cars; it does not consider larger vehicles or higher-traffic areas, which would produce more energy. The calculations assume the car will be slowed by one foot per second; therefore, the rate of energy capture is taken to be equal to the car's weight in foot-pounds per second. The change in kinetic energy (what is being captured) is the mass times the square of the change in velocity. There is about 1 watt collected per

every 0.74 ft·lb/second. Therefore, for an average mid-sized car with a weight of 3800 lbs, the device would collect about 2.8 kW. Through the calculations, it was determined that the device would collect the equivalent of about $19,602,777.60 per year.
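A short sketch of the arithmetic behind this estimate is shown below. The traffic figure and per-car energy figure are taken at face value from the paper (the ~2.8 kW value is treated as 2.8 kWh captured per passing car, as the annual total implies); the electricity price of $0.12 per kWh is an assumption, inferred only because it reproduces the reported yearly figure.

```python
# Reconstruction of the real-life application estimate, using the figures as
# reported in the paper. The $0.12/kWh price is an inference, not stated in
# the paper; the per-car energy value is the paper's 2.8 figure taken as kWh.

CARS_PER_MINUTE = 111        # average mid-sized cars passing per minute (from the paper)
ENERGY_PER_CAR_KWH = 2.8     # the paper's ~2.8 kW figure, treated as kWh per passing car
PRICE_PER_KWH = 0.12         # assumed electricity price in $/kWh (inferred)

cars_per_year = CARS_PER_MINUTE * 60 * 24 * 365          # 58,341,600 cars
energy_per_year_kwh = cars_per_year * ENERGY_PER_CAR_KWH
value_per_year = energy_per_year_kwh * PRICE_PER_KWH

print(f"Cars per year:   {cars_per_year:,}")
print(f"Energy captured: {energy_per_year_kwh:,.0f} kWh")
print(f"Value per year:  ${value_per_year:,.2f}")        # ~= $19,602,777.60
```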

REFERENCES

Patent Citations:
Cefo, Nevres. Electrical Power Generating Tire System. Cefo Nevres, assignee. Patent 6291901. 13 June 2000. Print.
Lin, Gloria, Pareet Rahul, Michael Rosenblatt, Taido Nakajima, Bruno Germansderfer, and Saumitro Dasgupta. Harnessing Power Through Electromagnetic Induction Utilizing Printed Coils. Apple Inc., assignee. Patent 20110057629. 04 Sept. 2009. Print.
Mazzotta, Giovanni. Device for Recovering Part of the Kinetic Energy of Moving Motor Vehicles. GM Oil & Gas Machinery, assignee. Patent 2527652. 22 May 2012. Print.
Moriarty, Donald E., and Stephen Toner. Partially Self-Refueling Low Emissions Vehicle and Stationary Power System. Donald E. Moriarty, assignee. Patent 8459213. 22 Oct. 2009. Print.
Ricketts, Todd. Apparatus for Generating Power From Passing Vehicular Traffic. Tod Ricketts, assignee. Patent 6756694. 15 Jan. 2002. Print.

Information Citations:
Brikela, Kristian. Electromagnetic Gun/Coil Gun. Kristian Brikela, assignee. Patent 754637. 02 Jan. 1902. Print.
Hunt, Inez Whitaker. "Nikola Tesla." Encyclopædia Britannica. Encyclopædia Britannica, Inc., 20 Mar. 2017. Web.
Singh, Timon. "Magneter: Magnetic Highway Harvests Kinetic Energy From Cars To Generate Electricity." Inhabitat, 01 Sept. 2011. Web.
US Department of Transportation. "Facts & Statistics - Safety | Federal Highway Administration." Federal Highway Administration. Web.
Williams, L. Pearce. "Michael Faraday." Encyclopædia Britannica. Encyclopædia Britannica, Inc., 31 Dec. 2015. Web.
Yoder, Jean. "Fatality Analysis Reporting System (FARS)." NHTSA. NHTSA, 15 Dec. 2016. Web.

21


Effects of Glycerin on the Flash Point and Vapor Pressure of Propylene Glycol Felicia Zhong1 1 duPont Manual High School Mentor: Zhikai Zhong Louisville, Kentucky ABSTRACT The purpose of this experiment is to investigate the relationship between safety-related properties (flash point and vapor pressure) and the composition of glycerin/propylene glycol blends. Glycerin and propylene glycol are both commonly used bases for e-cigarettes and account for about 95% of the e-liquid composition. Both glycerin and propylene glycol have their own benefits and drawbacks. Flash point is the lowest temperature at which a liquid gives off enough flammable vapor to ignite in air, and vapor pressure is defined as the pressure of a vapor in contact with its solid or liquid form. The hypothesis was that the more glycerin added to the propylene glycol, the higher the flash point and the lower the vapor pressure. Flash and fire points were measured with the Cleveland Open Cup method, and vapor pressure was measured with the Differential Scanning Calorimetry method. Five glycerin/propylene glycol compositions were studied. With the increase of glycerin content, the flash and fire points were found to increase while the vapor pressure was found to decrease. The vapor pressures were also found to follow the Antoine relationship. Antoine models were set up for each composition and were then used to predict the boiling point and vapor pressure under conditions that were not tested in this project. INTRODUCTION In 1492, Christopher Columbus was offered a gift of tobacco from the Native Americans he encountered in the New World, which he brought back to Europe. Within a few years, its popularity had exploded. Nicolas Monardes claimed that tobacco could cure 36 different health problems

(Boston University Medical Center, n.d.). Later, Thomas Harriot started promoting the smoking of tobacco as a way to benefit from the curing properties of tobacco. Ironically, Harriot later died of nose cancer, most likely because of his heavy tobacco use (Galileo Project, n.d.). Throughout the 1600s, tobacco's popularity continued to flourish, although individuals were beginning to realize some of the negative effects of smoking tobacco. As time progressed, an increasing percentage of the population began to understand the dangers of smoking tobacco. According to the United States National Library of Medicine (NLM), nicotine (C10H14N2) was first extracted from tobacco by the German physicians Wilhelm Heinrich Posselt and Karl Ludwig Reimann. They found that in its pure form, nicotine is a clear liquid with a distinguishable odor. A study conducted by Mishra et al. found that upon direct application of nicotine, a burning sensation and irritation in the mouth and throat, nausea, vomiting, and diarrhea can occur. Over time, or in larger doses, nicotine can increase blood viscosity and cause convulsions, paralysis, lung cancer, and a host of other issues.

Figure 1: Molecular Structure of Nicotine

Even with this new information, cigarettes and other tobacco products continued to flourish throughout the centuries. Dr. Robert N. Proctor of Stanford University stated, "Lung cancer was once a very rare disease, so rare that doctors took special notice when confronted with a case, thinking it a once-in-a-lifetime oddity." However,

22


today lung cancer is quite common, affecting hundreds of thousands of people in the U.S. alone. The American Cancer Society estimates that in 2016, approximately 224,000 individuals will be affected by lung cancer in the U.S. alone, and out of those 224,000 cases, approximately 70% will be fatal. Faced with the abundance of health issues caused by the usage of cigarettes, it should come as no surprise that finding a healthier alternative to cigarette smoking has become a pressing issue. In 2003, the first commercially successful electronic cigarette, often shortened to simply "e-cigarette," was introduced. This e-cigarette was invented by Hon Lik, a Chinese pharmacist. An e-cigarette is a type of Electronic Nicotine Delivery System (ENDS). E-cigarettes have many benefits over traditional cigarettes, including the fact that they have no smell, don't stain teeth, can be smoked in non-smoking areas in some circumstances, and are less costly than traditional cigarettes. Most importantly, e-cigarettes contain varied amounts of nicotine, and some e-cigarettes don't contain nicotine at all (Mishra et al., n.d.). E-cigarettes use liquid bases containing nicotine with flavorings and other ingredients. These ingredients are heated into the vapor that the user inhales. When inhaled, the vapor causes a pleasant tickling or tingling sensation at the back of the throat known as a "throat hit," which makes smoking pleasurable. The throat hit is actually the body's reaction to the vapor irritating the back of the throat. Two of the liquid bases that cause these sensations at the back of the throat, propylene glycol and glycerin, are extremely controversial. E-cigarette bases give the liquid consistency. One of the most popular bases is vegetable glycerin (C3H8O3), which is a thick, sweet base. Vegetable glycerin results in thicker smoke, but a less pleasing throat hit than propylene glycol (Quit Smoking Community, n.d.). The main drawbacks of vegetable glycerin are that it offers a weaker, less pleasurable throat hit, and that its sweetness may interfere with the flavoring. Propylene glycol (C3H8O2) is the most widely used base because it offers a strong hit to the back of the throat. Arguably, propylene glycol is the e-cigarette base that best mimics the throat hit of a traditional cigarette. However, propylene glycol does have its drawbacks. Some people can be allergic to propylene glycol. Needless to

say, many have tried to solve this controversy by mixing the two bases. This research's aim was to determine the safest and also the most pleasing combination of glycerin and propylene glycol. Although many factors play into determining the ideal combination of glycerin and propylene glycol to create the safest and healthiest smoke, this experiment focused on two factors: flash point and vapor pressure. Flash point is defined as "the lowest temperature at which a liquid can form an ignitable mixture in air near the surface of the liquid." Vapor pressure (also known as equilibrium vapor pressure) is the amount of pressure a vapor in thermodynamic equilibrium exerts with its condensed phases at a given temperature within a closed system. A higher flash point means a safer smoke, because the liquid will ignite less easily. A lower vapor pressure results in less smoke going to the lungs, which is better for one's health. A lower vapor pressure also means that the e-cigarette is less likely to explode. This is important because between 2009 and 2014, there were 25 separate incidents of e-cigarettes exploding or catching fire. Glycerin and propylene glycol have varying levels of vapor pressure and flash points, and the purpose of this experiment was to determine the effect of glycerin on the flash point and vapor pressure of propylene glycol, with the overall goal of determining the combination of glycerin and propylene glycol that creates the safest and most pleasurable smoke. It was hypothesized that adding glycerin to propylene glycol would raise the flash point of the solution, because glycerin has a much higher flash point than propylene glycol. In contrast, the vapor pressure of the solution would be decreased by the addition of glycerin, because glycerin has a lower vapor pressure than propylene glycol. Although e-cigarettes are generally regarded as safer than traditional cigarettes, there is insufficient information at the moment to make a definite statement, because of how recent e-cigarettes are. Certainly, there are hopes that e-cigarettes are a less addictive, less toxic alternative to traditional cigarettes. There are even suggestions that e-cigarettes could be a way to slowly wean people off of nicotine. Potentially, the results of this project could help those seeking to find the right e-cigarette to help with their

23


nicotine addictions. METHODOLOGY Sample Preparation First, the test samples (2 pure and 3 blended) were prepared. The samples were prepared according to Chart A, using 99+% glycerin purchased from ACROS and 99% 1,2-propanediol (propylene glycol) purchased from ALDRICH. A semi-micro balance and a pipette were used to attain maximum accuracy in the creation of each sample. Next, the blended samples were mixed. This was done using the SpeedMixer DAC 150 FVZ. The machine was first set to 3000 RPM (rotations per minute) for 1 minute. Then a sample was loaded onto the machine, and the "start" button was pressed to begin the mixing. When the machine completely stopped, the jar was unloaded, checked to ensure that it was mixed properly, and then set aside. This process was repeated for the remaining 2 mixed samples.

Chart A: Sample Preparation
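As a minimal sketch of the sample-preparation arithmetic, the snippet below computes component masses for a set of glycerin/propylene glycol blends. The exact compositions in Chart A are not reproduced in the text, so the total sample mass and the 20% propylene glycol blend are assumptions; the 100%, 80%, and 50% propylene glycol compositions are mentioned later in the results.

```python
# Minimal sketch (assumed blend ratios and total mass) of computing the mass of
# each component for glycerin / propylene glycol (PG) blends. The actual values
# used in this experiment come from Chart A, which is not reproduced here.

TOTAL_MASS_G = 20.0  # hypothetical total mass per sample jar, in grams

# (propylene glycol weight fraction, glycerin weight fraction)
blends = {
    "100% PG": (1.00, 0.00),
    "80% PG / 20% glycerin": (0.80, 0.20),
    "50% PG / 50% glycerin": (0.50, 0.50),
    "20% PG / 80% glycerin": (0.20, 0.80),  # assumed composition of the third blend
    "100% glycerin": (0.00, 1.00),
}

for name, (pg_frac, gly_frac) in blends.items():
    pg_mass = TOTAL_MASS_G * pg_frac
    gly_mass = TOTAL_MASS_G * gly_frac
    print(f"{name}: {pg_mass:.2f} g propylene glycol + {gly_mass:.2f} g glycerin")
```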

Vapor Pressure Tests Next, the vapor pressure tests were conducted. First, a hermetic pan was placed on a semi-micro balance and filled with 3-5 mg of a sample. Then the hermetic pan was sealed to a hermetic lid with a laser-cut pin-hole using a pan sealer. After the sample was sealed, it was loaded onto a TA Instruments Q1000 Differential Scanning Calorimeter (DSC) with a high-pressure cell. The machine was then set to the desired pressure, and the test was started. After the desired pressure was reached, the high-pressure cell was sealed, and the machine was run at 5°C per minute from 100-400°C to determine the boiling point of the sample. The boiling point was then used to determine the vapor pressure of

the sample, as when a liquid is at its boiling point, its vapor pressure is equal to the pressure of its surrounding environment. Each sample was tested at 6 significantly different pressures. To obtain pressures below atmospheric pressure, a vacuum pump was used. To obtain pressures above atmospheric pressure, a high-pressure cylinder was used. Flash Point Tests To conduct the flash point tests, a Cleveland Open Cup Flash Tester was used. First, the copper cup of the flash tester was filled to the line with a sample. Then a thermometer was secured to the machine and lowered into the cup so that the bulb of the thermometer was submerged in the sample. After that, a flame was made with a cigarette lighter on the test flame applicator of the machine, and the applicator was moved across the large frame to create a flame on the large frame. The applicator with the flame at its end was moved back and forth across the cup occasionally, to test for a flash. As the temperature shown by the thermometer increased, the applicator was moved across the cup more frequently. Eventually, the applicator was kept moving continually across the cup. When a flash appeared, the temperature (the flash point) was quickly noted on the thermometer and recorded. When the flash lasted at least 5 seconds, that temperature (the fire point) was noted as well. The flash point and fire point were the same temperature on some occasions. When both the flash point and fire point had been recorded, the flames were blown out and the cup was cleaned. This process was repeated twice for each of the 5 samples. RESULTS Flash Point As seen in Chart 1 and Graph 1, the more glycerol, the higher the flash point of the sample. Oftentimes, the flash point and the fire point were reported to be the same. There wasn't much variance seen in the data, especially when looking at the samples with mostly propylene glycol. In the 100%, 80%, and 50% propylene glycol samples, the standard deviation was 2 or lower. However, more variance can be seen in the

24


flash point measurements. In the 100% glycerin sample, there was a standard deviation of 6 in the flash point measurements.
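The vapor-pressure results reported later in this section are fit to the Antoine equation, log10(P) = A - B/(C + T), and the fit is then used to predict pressures at unmeasured temperatures. The snippet below is a minimal sketch of such a fit using hypothetical boiling-point/pressure pairs, not the measured data.

```python
# Minimal sketch of fitting boiling-point / pressure pairs to the Antoine
# equation and predicting vapor pressure at a temperature that was not
# measured. The (T, P) pairs below are hypothetical placeholders; the paper
# fits the DSC data for each glycerin/propylene glycol composition similarly.

import numpy as np
from scipy.optimize import curve_fit

def antoine_log10_p(T, A, B, C):
    """Antoine equation: returns log10 of pressure at temperature T."""
    return A - B / (C + T)

# Hypothetical boiling temperatures (degrees C) and corresponding pressures (kPa).
T_data = np.array([120.0, 150.0, 180.0, 210.0, 240.0, 270.0])
P_data = np.array([1.2, 6.0, 22.0, 65.0, 160.0, 350.0])

# Fit A, B, C; a rough initial guess helps the nonlinear solver converge.
params, _ = curve_fit(antoine_log10_p, T_data, np.log10(P_data), p0=(7.0, 2000.0, 200.0))
A, B, C = params

T_query = 200.0  # degrees C, a temperature not in the measured set
P_pred = 10 ** antoine_log10_p(T_query, A, B, C)
print(f"Fitted Antoine constants: A={A:.3f}, B={B:.1f}, C={C:.1f}")
print(f"Predicted vapor pressure at {T_query:.0f} C: {P_pred:.2f} kPa")
```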

Graph 4 Chart 1

Chart 2 and Graph 5 Graph 1

Chart 3 Graph 2

Chart 4

It can be seen in the vapor pressure data that the more glycerol, the lower the vapor pressure of the sample. The boiling point of the sample correlates with the temperature the peaks are at. The pure propylene glycol data shows that the peaks are at the lower end of the temperature spectrum. Meanwhile, the pure glycerin data

Graph 3

25


shows the peaks being on the opposite, higher end of the temperature spectrum. The data for the 50/50 propylene glycol/glycerin blend showed peaks somewhat in the middle. The dotted lines show that as the boiling point increases, the pressure increases as well. The data was fit to the Antoine Equation to predict vapor pressures at temperatures and pressures not included in this experiment. The predicted values for vapor pressure can be seen in Chart 4. These values are different from the ones reported in the literature because in this experiment, the peak data was used in the equation instead of the onset, as the peak is less broad and easier to work with. For example, the reported vapor pressure at 200°C for the 100% glycerol sample is 4.12 kPa, while the vapor pressure found in this experiment at the same temperature is 3.45 kPa. CONCLUSION The hypothesis was supported by the data obtained from this experiment. Higher amounts of glycerin led to higher flash points and lower vapor pressures. As such, the pure glycerin mixture had the highest flash point, and the pure propylene glycol mixture had the highest vapor pressure. The higher the flash point and the lower the vapor pressure, the safer the e-cigarette, as the risk of the e-cigarette exploding and/or catching fire decreases as well. However, since it is generally agreed that propylene glycol is the base that results in the most pleasurable throat hit, some individuals may elect to use pure propylene glycol regardless of any safety concerns. This information is potentially helpful to individuals searching for e-cigarettes that satisfy their demands and, additionally, are safe. In the future, if this research were to be continued, the effects of flavoring and other components of e-liquid on flash point and vapor pressure could be explored.

WORKS CITED
A Historical Timeline of Electronic Cigarettes. (2016, November 21). Retrieved from http://casaa.org/historical-timeline-of-electronic cigarettes/
E-cigarettes and Lung Health. (2016, November 2). Retrieved from http://www.lung.org/stop-smoking/smoking-facts/e-cigarettes-and lung-health.html?referrer=http://www.google.com/
E-Cig vs Tobacco Cigarette FAQ. Retrieved from http://www.vapertrain.com/page/ecvstc

Flash Point. (2016, August 22). Retrieved from http://www.ilpi.com/msds/ref/flashpoint.html
Glycerol. (2016, November 19). Retrieved from https://pubchem.ncbi.nlm.nih.gov/compound/glycerol#section=Top
Hahn et al. (2014, December 9). Electronic cigarettes: overview of chemical composition and exposure estimation. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4304610/
Health Effects of Cigarette Smoking. (2016, December 1). Retrieved from https://www.cdc.gov/tobacco/data_statistics/fact_sheets/health_effects/effects_cig_smoking/
University of Dayton. History of Tobacco. Retrieved from http://academic.udayton.edu/health/syllabi/tobacco/history.htm
Johnson, L. (2016, February 5). Propylene Glycol in E-Cigarettes – Is PG Dangerous to Inhale? Retrieved from https://www.ecigarettedirect.co.uk/ashtray-blog/2016/02/propylene-glycol-e cigarettes.html
Just Getting Started Series: The Throat Hit. (27 October 2015). Retrieved from http://vaporcade.com/just-getting-started-series-the throat-hit/#
Key statistics for lung cancer. (2016, May 16). Retrieved from http://www.cancer.org/cancer/lungcancer-non-smallcell/detailedguide/non small-cell-lung-cancer-key-statistics
Liu, S., Tran, C. (2016, March 3). Vapor Pressure1. Retrieved from http://chem.libretexts.org/Core/Physical_and_Theoretical_Chemistry/Physical_Propertiesof_Matter/States_of_Matter/Liquids/Vapor_Pressure1
Mishra et al. Harmful effects of nicotine. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4363846/
Proctor, R. N. (2011, November 22). The history of the discovery of the cigarette–lung cancer link: evidentiary traditions, corporate denial, global toll. Retrieved from http://tobaccocontrol.bmj.com/content/21/2/87.ful
Propylene glycol-(OD)2. (2016, November 19). Retrieved from https://pubchem.ncbi.nlm.nih.gov/compound/12203549#section=Top
Propylene Glycol vs. Vegetable Glycerin in E-Liquid. Retrieved from https://quitsmokingcommunity.org/e-liquid/propylene-glycol-vs vegetable-glycerin-in-e-liquid/
Raloff, J. (2016, February 12). Vaping linked to host of new health risks. Retrieved from https://www.sciencenews.org/article/vaping linked-host-new-health-risks
Thomas Harriot (1560-1621). Retrieved from http://galileo.rice.edu/sci/harriot.html
Vaporizers, E-Cigarettes, and other Electronic Nicotine Delivery Systems (ENDS). (2016, November 8). Retrieved from http://www.fda.gov/TobaccoProducts/Labeling/ProductsIngredientsComponents/ucm456610.htm
Electronic Cigarette Fires and Explosions. U.S. Fire Administration, October 2014.
Thomson, GW. The Antoine Equation for Vapor-Pressure Data. Chemical Reviews, 38(1), 1-39, 1946.
ASTM Standard E1782-14, Standard Test Method for Determining Vapor Pressure by Thermal Analysis. ASTM International, West Conshohocken, PA.
ASTM Standard D92-05, Standard Test Method for Flash and Fire Points by Cleveland Open Cup Tester. ASTM International, West Conshohocken, PA.
Cassol, B. Determining Vapor Pressure by Pressure DSC. PerkinElmer Application Note.
Jones, K. and Seyler, R. Differential Scanning Calorimetry for Boiling Points and Vapor Pressure. TA Instruments Application Note, TA201.
CRC Handbook of Chemistry and Physics, 67th Edition, Weast R.C. Ed., CRC Press, Boca Raton, FL, 1986.
Yaws C. The Yaws Handbook of Vapor Pressure, 2nd Ed. Gulf Professional Publishing, 2015.

ACKNOWLEDGEMENTS Special thanks to Zhikai Zhong for mentoring me during this project, and to Matthew Raj for his advice as a Peer Advisor.

26


Effects of Nanostructured Oxides and Conducting Polymers on Pseudocapacitance Mark Raj1 Ruchira Sumanasekera1 1 duPont Manual High School Louisville, Kentucky ABSTRACT The following experiment investigated the effects of manganese dioxide and conducting polymers on graphene-based supercapacitors. Graphene has been known to produce a large capacitance due to its honeycomb structure and very high surface area. Manganese dioxide was hypothesized to greatly increase the capacitance due to redox reactions being facilitated by oxygen vacancies in the manganese dioxide crystal, allowing for the beneficial effects of pseudocapacitance. In the conducting polymer polyaniline, nitrogen species in the polymer chains were expected to provide similar effects. First, the graphene was combined with a controlled amount of MnO2 or conducting polymer. The resulting paste was then placed on aluminum meshes and tested in a potassium hydroxide electrolyte. Using a source-measure device, the specific capacitances for each sample were determined: MnO2/graphene averaged 210 F/g while the conductive polymer/graphene averaged 183 F/g. The data supports the hypothesis that manganese dioxide can greatly increase the capacitance of graphene-based supercapacitors, which were established to be around 130 F/g without any additions. Furthermore, the capacitor was calculated to have an energy density of 57.2 Wh/kg, nearly four times the high-end market standard of 15 Wh/kg. The power density was calculated to be 10.3 kW/kg, placing these capacitors in the high-end supercapacitor market at seven times the power density of high-end lithium-ion batteries, providing the benefits of both devices. These capacitors have a variety of high-power applications, such as electric cars and regenerative braking, and the potential to revolutionize the market for graphene-based supercapacitors with their simple production method.
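For readers unfamiliar with the quantities quoted in the abstract (F/g, Wh/kg, kW/kg), the sketch below illustrates the textbook relations typically used to derive them from galvanostatic charge/discharge data. The numbers are hypothetical, and reporting conventions (electrode vs. full-cell mass, prefactors) vary, so this is not a reproduction of the authors' calculation.

```python
# Minimal sketch (hypothetical numbers) of the standard relations behind
# specific capacitance, gravimetric energy density, and power density.

def specific_capacitance(current_a: float, dv_dt_v_per_s: float, mass_g: float) -> float:
    """C_sp = I / (m * |dV/dt|), in F/g, from a constant-current charge/discharge slope."""
    return current_a / (mass_g * abs(dv_dt_v_per_s))

def energy_density_wh_per_kg(c_sp_f_per_g: float, voltage_v: float) -> float:
    """E = 1/2 * C * V^2, converted from J/g to Wh/kg (1 Wh = 3600 J, 1 kg = 1000 g)."""
    joules_per_gram = 0.5 * c_sp_f_per_g * voltage_v ** 2
    return joules_per_gram * 1000.0 / 3600.0

def power_density_kw_per_kg(energy_wh_per_kg: float, discharge_time_s: float) -> float:
    """P = E / t_discharge, converted to kW/kg."""
    return energy_wh_per_kg * 3600.0 / discharge_time_s / 1000.0

# Hypothetical example values:
c_sp = specific_capacitance(current_a=0.002, dv_dt_v_per_s=0.0005, mass_g=0.020)  # 200 F/g
e_density = energy_density_wh_per_kg(c_sp, voltage_v=1.0)                         # ~27.8 Wh/kg
p_density = power_density_kw_per_kg(e_density, discharge_time_s=10.0)             # ~10 kW/kg
print(f"{c_sp:.0f} F/g, {e_density:.1f} Wh/kg, {p_density:.1f} kW/kg")
```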

INTRODUCTION In an increasingly technologically advanced world, the devices that are used to power these improving technologies must also improve drastically. Currently, many hybrid electric vehicles, among other devices, are reliant upon batteries, and consequently have been unable to achieve the quick release times offered by supercapacitors. Improvements in supercapacitor long-term storage and release would raise many industries and devices to never-before-seen heights. Improving the implementation and power of graphene-based supercapacitors would make graphene much more appealing to those considering it as a capacitor material. Furthermore, advancements made in graphene technology would make large-scale implementation much more reasonable, given graphene's relatively high cost per unit. Fortunately, graphene's cost is expected to decline in the near future as a method of production is optimized and scaled up (Deloitte Global, 2016). There were three main cornerstones to this experimentation: graphene, supercapacitors, and pseudocapacitance. Graphene is a single-atom-thick layer of graphite, the material commonly known for its use in pencil lead. Graphene was discovered very recently, in 2004, by Andre Geim of the University of Manchester, earning him the 2010 Nobel Prize in Physics. It was the first two-dimensional material to ever be isolated and was thus rejected by many scientists. Graphene's structure is a "honeycomb" pattern of carbon atoms bonded together, which gives it many unique properties. Since it is only one atom thick, electrons flowing across it do not face much resistance, giving it very high conductivity. Some estimates state that it can carry nearly one thousand times more electricity than copper. It is also extremely strong; in fact, it is the strongest material ever to be measured. It is

27


nearly one hundred and fifty times stronger than steel by weight (Lee et al., 2008). Graphene is also extremely elastic, making it a highly sought-after material in many scientific fields. Finally, it has a huge surface area, even larger than what is seen in carbon nanotubes and activated carbons. Leading estimates put the surface area at nearly 2630 m2/g, due to its two-dimensional structure (Bonaccorso et al., 2015). The first material added to the graphene was manganese dioxide (MnO2). Manganese dioxide, like all metal oxides, has a net negative electric charge due to the presence of oxygen vacancies. When bonded with certain metals, such as manganese or zinc, not enough oxygen atoms enter the crystal structure of the material, causing oxygen vacancies. These oxygen vacancies provide extra electrons (2 electrons per oxygen atom) to the oxide material (Cheng, 2013). Thus, manganese dioxide can facilitate a redox reaction for electrical storage applications. The final material is polyaniline (PANi), a conducting polymer. Although polymers are typically insulators, recent research has revealed various polymers that can conduct electricity. The polymer PANi can be created by linking individual monomers using an acid. This creates a conjugated bonding structure where single and double bonds alternately link the monomers. However, even in this state PANi still contains many different oxidation states, requiring the addition of acid. Finally, the presence of nitrogen in PANi (Figure 1) provides the conditions for a redox reaction to occur.

Figure 1. Structure of polyaniline (Song & Choi 2013)

The selection of graphene was heavily reliant upon the properties of supercapacitors and their nature, the next cornerstone. Supercapacitors and capacitors are fundamentally different from batteries in how each releases its stored energy. Capacitors are meant to release high amounts of energy in a very short amount of time, giving them high power. On the other hand, batteries release huge amounts of energy over a very long timeframe, giving them high energy. There is also a fundamental difference between supercapacitors and capacitors in how the energy is stored. Capacitors store energy using a temporary electrostatic field that occurs between the plates of the capacitor. This results in capacitances measured in the micro- or picofarads. Supercapacitors store their energy using electric double-layer capacitance (EDLC) and are reliant upon two main factors: the surface area of the electrodes and the distance between the charged ions of the double layer (De La Fuente, n.d.). This results in capacitances typically measured in farads, many times larger than what is seen in normal capacitors. These two factors are also the reason graphene was chosen. Due to graphene's aforementioned extraordinary surface area, it facilitates double layers occurring on its surface, and thus a higher capacitance. Secondly, the distance between the charged ions for graphene is only angstroms wide. This also contributes to a high capacitance due to increased conductivity. The very large surface area of graphene coupled with the very small distance between the double layer results in a very large capacitance. The final cornerstone is that of pseudocapacitance. Pseudocapacitance is the boost in capacitance supplied by a redox reaction occurring within the capacitor. This is coupled with EDLC to further increase the capacitance of supercapacitors. On a cyclic voltammetry graph, the graph becomes elongated and curved, as opposed to the rectangular shape of a typical capacitor (Dunn, 2014).

Figure 2. The typical curves and elongation that occur with pseudocapacitance (Brousse et al., 2015)

The purpose of this experiment was to find the effect MnO2 and PANi have on graphene-based supercapacitors. It was hypothesized that

28


MnO2 would best enhance the properties of graphene-based supercapacitors. The reasoning behind this is that the oxygen vacancies that naturally occur in MnO2 would facilitate a redox reaction with the carbon in the graphene, allowing for the beneficial effects of pseudocapacitance. METHODOLOGY To create the graphene, nickel acetate (Ni(CH3CO2)2) and citric acid were combined in equal proportions and placed into an 80 °C oven to dry. The subsequent powder was placed into a tubular furnace and heated in nitrogen gas (N2) at 500 °C. Once heated, all that remained were nickel nanoparticles surrounded by bonded carbon atoms. To remove the nickel, the mixture was washed with a nitric acid solution, resulting in a sphere or "cage" of carbon atoms bonded together. These resulting cages were the first of their kind, measuring only 3 nanometers in width. However, despite having a very large surface area, carbon nanocages are not conductive enough by themselves. For the first experiment, a conductive polymer was added. For these capacitors, a total of 50 mg of electrode material was created. 7 mg of a binding PVDF (polyvinylidene fluoride) solution, or 14% of the total electrode, was made by measuring and combining 0.25 mL, or 5 mg, of PVDF with 2 mg of acetylene black. Next, the conductive polymer was made by creating a 1:2 ratio of polyaniline to camphor sulfonic acid. Thus, 2.4 mg of polyaniline was combined with 4.6 mg of acid, resulting in 7 mg, or 14% of the total mass, added to the electrode. The final 36 mg, or 72%, consisted of carbon nanocages. All these substances were measured and placed into a mortar and pestle. They were then mixed with ethanol into a black and highly malleable paste. This paste was then spread, using a scoopula, on circular, similarly weighted aluminum meshes. These meshes were then wrapped in aluminum foil and placed in a hydraulic press to keep the electrode itself securely attached. The final two meshes, with their electrodes attached, were then placed into the capacitor chamber. Between the two electrodes was a cellulose separator dipped in a potassium hydroxide (KOH) solution: the electrolyte. The capacitors were then left in the source-measure

unit overnight to collect the required data, which was then analyzed. This was repeated for three separate trials. The second experiment involved manganese dioxide. These capacitors totaled 20 mg and were made of 70% carbon nanocages (14 mg), 10% manganese dioxide (2 mg), and 20% (4 mg) of a binding agent known as TAB-2. The resulting mixture was then combined in a mortar and pestle

Figure 3. Charge and discharge cycles for the MnO2 based capacitor

Figure 4. Charge and discharge cycles for the PANi based capacitor

until a relatively malleable paste was formed, with ethanol added as needed. The resulting paste was then placed onto a circular stainless steel mesh and formed using scoopulas and tweezers to cover the mesh. The mesh was then pressed

29

Figure 5. First charge and discharge cycle current dependency for MnO2 (top) and PANi (bottom)


at approximately 2 metric tons. After 10 mL of a 6 M potassium hydroxide solution was prepared, a cellulose-based separator was dipped into the solution and placed into the chamber. The measurements were taken using a source-measure unit and left overnight. This procedure was repeated three times. The capacitors were then attached to a source-measure device; the current was held constant while the voltage was maintained between 0 V and 1 V. After the capacitor went through its maximum charges and discharges, it was removed from the computer, the data was collected, and the capacitances were calculated. RESULTS Figures 3 and 4 illustrate the various charge and

Figure 6. Cyclic voltammetry results from the MnO2 sample (top) and the PANi sample (bottom)

discharge cycles for the two samples. Over the course of thousands of cycles, the capacitors maintained steady rates of charge and discharge. The figures indicate they continued to reach their maximum voltage of 1 V, allowing the capacitors to continue to be useful many cycles longer than batteries. Using the first charge/discharge cycle shown in Figure 5, the slope and subsequently the capacitances were

Table 1. The calculated capacitances for the first 28 charge/discharge cycles for both capacitor types

calculated. The first charge/discharge cycle was used since it is the purest and an unaltered version of the capacitor, thus allowing the most accurate data. The MnO2 sample initially had a capacitance of 220 F/g and the PANi sample had a capacitance of 188 F/g. As the slope decreases, the specific

Table 2. Regression analysis of the MnO2 and PANi- based supercapacitors

capacitance increases. Thus, the graphs for the MnO2 samples showed a slight shift to the left. Also represented in Figure 5 is current dependence, an essential component of any supercapacitor. The relationship shows that as the current increases, the slope increases. This subsequently causes the capacitance to decrease, since the capacitors cannot handle as much current. Figure 6 plots current against voltage for both capacitors. At lower sweep rates, both samples maintain a parallelogram shape with a slight curve on the edges, indicating little pseudocapacitance. However, as the sweep rate increases to 100 mV/s, it becomes clear that

30


there was a significant pseudocapacitance that occurred. This is indicated by the large curvature and elongation, which was very similar to the ideal pseudocapacitance in Figure 2. Table 1 shows the calculated slopes for each of the first cycles. A total of 400 cycles were measured to ensure real-world applicability. As shown, each subsequent cycle causes the capacitances to show a slight decrease. To quantify this decrease, a regression analysis was used. As shown in Table 2, the initial point for the MnO2 regression was 210 F/g while PANi's was 184 F/g. This indicates that MnO2 is initially a better capacitor. However, for each additional cycle, there is a 0.09 F/g decrease in the capacitance for the MnO2 capacitor, while the PANi saw a 0.04 F/g decrease. This eventually resulted in a 22% decrease in capacitance over 400 cycles for the MnO2 capacitor and an 11% decrease for the PANi capacitor.

Figure 7. The modified graphene-based supercapacitors' specific capacitance over 400 cycles compared to a pristine graphene supercapacitor

DISCUSSION The results of the experiment are summed up in Figure 7. The MnO2 sample is a significantly better capacitor initially, but it sees rapid decay and eventually levels off around the same capacitance as the PANi capacitor. The PANi starts lower but maintains its capacitance over the course of 400 cycles, signifying longevity and allowing for more reliability. Both samples end up in the range of 180 F/g, significantly higher than the pristine graphene sample. The PANi capacitor achieves the definition of a supercapacitor in its reliability while pushing the limit on how much these supercapacitors can store. Currently, the energy density for high-end supercapacitors fluctuates around 10-15 Wh/kg. The MnO2 sample reached 57.2 Wh/kg at 1 V, showing a four-fold increase over the market standard. In fact, the low-end market for lithium-ion batteries, famed for their large energy densities, is just 100 Wh/kg. The power density, a measure of how much energy can be released per unit of time, for the MnO2 sample was calculated to be 10.3 kW/kg, placing it among the higher-end market supercapacitors. However, the power density was nearly seven times larger than that of lithium-ion batteries, with the high-end market currently measuring 1.5 kW/kg. These capacitors boast a very high power density, which most supercapacitors achieve easily, along with a large energy density similar to that of batteries, which is rarely seen in supercapacitors. Moreover, the supercapacitors created have a very small volume and can sustain many charge and discharge cycles, as shown in Figures 3 and 4. This is an essential component of any supercapacitor: they are specifically designed to sustain a high number of charge and discharge cycles while maintaining their high capacitance. Furthermore, this research offers a solution to this issue by optimizing the mass production of these capacitors. Previous research into MnO2-graphene based capacitors requires the fusing of manganese dioxide into the graphene structure, which is very resource heavy and would be limited in scope (Yu et al., 2011). However, the highly effective capacitors measured here were made simply by forming a paste from the graphene powder and manganese dioxide. This is much less resource heavy, while still garnering the many benefits of manganese dioxide. The PANi could successfully introduce nitrogen into the graphene structure through simple mixing and still gained a significant increase in capacitance. Previous studies have been able to create a similar pseudocapacitance but required a large energy input to do so (Tan et al., 2013). Finally, there was a very strong pseudocapacitance obtained, as shown in Figure 6. This much curvature and elongation are very rare. The possibilities for these types of capacitors are endless. The purpose of the experiment was to see the effect of manganese dioxide and polyaniline on graphene-based supercapacitors. The results

31


suggest a substantial boost is provided by both. The MnO2 capacitors were measured at nearly 220 F/g and the polyaniline showed 175 F/g, both well above the 130 F/g benchmark. This data supports the original hypothesis that the manganese dioxide would produce the highest increase in capacitance due to the oxygen vacancies. The oxygen vacancies result in fewer oxygen atoms being present in the MnO2, which would allow for redox reactions to occur as electrons would have to be transferred between species. A similar situation likely occurred with the PANi capacitor, as the nitrogen in the conducting polymer has exactly one more electron than the carbon located in the graphene structure, likely facilitating a redox reaction as well. This would also explain the difference between the MnO2 and PANi samples, as the PANi sample would only be able to transfer one electron while MnO2 could transfer many more, allowing for a higher capacitance. The cyclic voltammetry data (Figure 6) also supports the hypothesis that the increase in capacitance would be due to a pseudocapacitance occurring, due to the large curvature and elongation of both samples. This experiment has many real-world applications, considering supercapacitors are all around us. Devices like trains, electric cars, forklifts, and buses would all benefit from increases in the capacitance of supercapacitors. However, the most important real-world application would be in regenerative braking. This would be significant because not only would it allow for more energy to be recovered from braking, but better brakes would also be essential in preventing car accidents around the globe. Specifically, the very large energy density of 57.2 Wh/kg and high power density of 10.3 kW/kg allow these capacitors to fill a niche in small power devices and could revolutionize the supercapacitor market. Future research possibilities involving this experiment are numerous. Although an initial boost was seen from adding both materials, a possible improvement would be to vary the amounts of each that are doped into the graphene structure. This could be used to determine which increases the energy or power density the most per gram added, which would be useful for large-scale implementation. Another continuation would be to create an asymmetrical capacitor with

a combination of the two capacitors researched, to see if the benefits of both could be combined. Finally, the potassium hydroxide electrolyte is unable to handle very high voltages, but organic electrolytes are currently being tested and have promising preliminary results. The impact of changing the electrolyte could be investigated. WORKS CITED

Augustyn, V., Simon, P., & Dunn, B. (2014). Pseudocapacitive oxide materials for high-rate electrochemical energy storage. Energy Environ. Sci., 7, 1597-1614. doi:10.1039/C3EE44164D
Bonaccorso, F.; Colombo, L.; Yu, G.; Stoller, M.; Tozzini, V.; Ferrari, A. C.; Ruoff, R. S.; Pellegrini, V. (2015). "Graphene, related two-dimensional crystals, and hybrid systems for energy conversion and storage". Science 347 (6217): 1246501. doi:10.1126/science.1246501
Brousse, T., Bélanger, D., & Long, J. W. (2015). To be or not to be pseudocapacitive? Journal of The Electrochemical Society, 162(5), A5185-A5189.
BU-209: How does a Supercapacitor Work? (2015, August 24). Retrieved September 15, 2015, from http://batteryuniversity.com/learn article/whats_the_role_of_the_supercapacitor
Cheng, F., Zhang, T., Zhang, Y., Du, J., Han, X. and Chen, J. (2013). Enhancing Electrocatalytic Oxygen Reduction on MnO2 with Vacancies. Angew. Chem. Int. Ed., 52: 2474–2477. doi:10.1002/anie.201208582
De La Fuente, J. (n.d.). Capacitors and supercapacitors explained. Retrieved September 14, 2015, from http://www.graphenea.com/pages/graphene supercapacitors#.VfeaWBFVikp
Deloitte Global. (2016, March 07). Predictions 2016: Graphene: research now, reap next decade. Retrieved from https://www2.deloitte.com/global/en/pages/technology-media-and telecommunications/articles/tmt-pred16-tech graphene-research-now-reap-next-decade.html
Lee, C., Wei, X., Kysar, J. W., & Hone, J. (2008). Measurement of the elastic properties and intrinsic strength of monolayer graphene. Science, 321(5887), 385-388.
Periasamy, Athinarayanan, Alfawaz, Alshatwi. (2015). Carbon nanoparticle induced cytotoxicity in human mesenchymal stem cells through upregulation of TNF3, NFKBIA and BCL2L1 genes. Pubmed, 275-284. doi:10.1016
Song, Edward, and Jin-Woo Choi. "Conducting Polyaniline Nanowire and Its Applications in Chemiresistive Sensing." Nanomaterials, vol. 3, no. 3, 7 Aug. 2013, pp. 498–523. doi:10.3390/nano3030498
Tan, Y., Xu, C., Chen, G., Liu, Z., Ma, M., Xie, Q., . . . Yao, S. (2013). Synthesis of Ultrathin Nitrogen-Doped Graphitic Carbon Nanocages as Advanced Electrode Materials for Supercapacitor. ACS Applied Materials & Interfaces, 5(6), 2241-2248. doi:10.1021/am400001g
Wei, Q., Tong, X., Zhang, G., Qiao, J., Gong, Q., & Sun, S. (2015). Nitrogen-Doped Carbon Nanotube and Graphene Materials for Oxygen Reduction Reactions. Catalysts, 5(3), 1574-1602. doi:10.3390/catal5031574
Yu, G., Hu, L., Liu, N., Wang, H., Vosgueritchian, M., Yang, Y., . . . Bao, Z. (2011). Enhancing the Supercapacitor Performance of Graphene/MnO2 Nanostructured Electrodes by Conductive Wrapping. Nano Letters, 11(10), 4438-4442. doi:10.1021/nl2026635

ACKNOWLEDGMENTS We would like to thank our mentor, Dr. Gamini Sumanasekera, for providing us with the tools necessary to succeed and the motivation to continue working every single day, and the Conn Center for Renewable Energy Research and its staff for providing us with the equipment and facilities to embark on our research.

32


Blood Type and Cholesterol: Is There A Correlation? Lilly Gonzalez duPont Manual High School

ABSTRACT Heart disease is the leading cause of death in the United States. Cholesterol contributes to plaque formation in coronary arteries. While studies have shown that O-type blood is associated with a 23% lower chance of developing heart disease (AHA, 2012), no published research has shown any correlation between blood type and cholesterol values. The questions were "Is there a correlation between blood type and cholesterol?" and "Is there any difference in cholesterol levels among types O, A, B, and AB?" My hypothesis was that people with O-type blood may have lower cholesterol, which leads to lower cardiovascular risk. I began my experiment with blood samples from 29 patients in a clinical laboratory. Their blood types were analyzed using Eldon blood typing kits, which used the forward antibody method. Once blood types (A, B, O, or AB) were recorded, cholesterol analysis was performed using the Alere Cholestech LDX Analyzer. Lipid values (total cholesterol, LDL, HDL, and triglycerides) were measured. The data showed that 48.4% of patients had blood type O and 51.6% had non-O blood types. After comparing the lipid data and performing standard t-tests, it was found that O-type people had significantly lower total cholesterol and lower LDL. Triglycerides and HDL, however, showed no statistical difference. Additionally, type B had slightly higher total cholesterol and triglycerides, while type A had higher LDL. The hypothesis was supported. It was concluded that there is a correlation between blood type and cholesterol. O-type people have lower total and LDL cholesterol, which explains the lower cardiovascular risk. While people cannot change their blood type, these findings may help them better understand their risk for heart

disease. A healthy lifestyle may help protect those with type A, B, and AB blood. BACKGROUND Heart disease is the leading cause of death in the United States, accounting for nearly 610,000 deaths every year (Pirillo et al., 2013). That is one in four deaths (CDC, 2015). Atherosclerosis, the most common form of heart disease, is caused by the blockage of heart vessels through plaque formation and the narrowing of the arteries (Figure 1) (Backes et al., 2014). Studies have shown that these plaque formations are made with cholesterol and triglycerides, which are waxy, fat-like substances (Bekerman, 2016). Low-density lipoprotein (LDL) is the "bad cholesterol," which is the main cause of plaque build-up. HDL,

Figure 1

however, is referred to as "good cholesterol" because it can transport fat molecules out of artery walls, thus helping to prevent or even regress atherosclerosis (CDC, 2015). Most methods to counter or prevent heart disease are focused on diets with low cholesterol and low fat (CDC, 2015). However, some people are naturally born with higher or lower levels of cholesterol. These are often attributed to genetics or family history. It was reported that some African tribes had noticeably lower rates of heart disease. It is extremely interesting that the large majority of these tribes had O-type blood.

33


Human blood types are inherited. They are commonly divided into one of four main blood types (A, B, AB, and O) and are based on the presence or absence of specific antigens on the surface of red blood cells (Figure 2). Type O is the most common blood type in the United States. About 37% of whites, 47% of African-Americans, 53% of Hispanics, and 39% of Asians in the U.S. have blood type O, according to the American Red Cross. It is also very interesting that blood type may be related to some health conditions. Recent research from the American Heart Association suggests that people with blood type A, B, or AB had a higher risk for coronary heart disease when compared to those with blood type O (AHA, 2012). In a study done by Harvard University, it was found that people with O blood type had a 37%

Figure 2

Figure 3

lower chance of developing heart disease. In another large cohort study recently published in BMC Medicine, it was reported that nearly 6% of total deaths and as many as 9% of cardiovascular deaths could be attributed to having non-O blood groups. Although some explanations have been suggested, it remains unclear why having blood type O conveys some protection against heart attack and stroke, while having the far less common AB

blood type appears to increase risk. While epidemiology studies have shown the correlation between blood type and heart disease (AHA, 2012), no biochemical research or published studies in the United States have shown any correlation between blood type and cholesterol/lipid values. Because there have been no publications investigating a correlation between blood type and cholesterol levels, I was very interested, and this raised some questions for me. Why do O-type people have lower rates of heart disease? Is this because of cholesterol levels? Does blood type affect cholesterol levels? Is there any difference in cholesterol levels among the blood types O, A, B, and AB? After the questions were developed, more research was done, and medical journals were read. The hypothesis was that if someone has O-type blood, then he or she will have lower lipid values, while if a patient has a non-O blood type, then they will have higher lipid values. This experiment may help to explain why O-type people have a lower risk for heart disease. MATERIALS AND METHODS The study subjects were 29 adults of ages 30-75. Patients who were smokers, were on cholesterol medication, or had a BMI over 40 or below 20 were excluded from the study. Before and after the experiment, all safety precautions were checked. Standard lab practices were reviewed, and gloves and a lab coat were worn at all times. Patients gave consent before their routine blood test. They had their blood drawn in the clinic by a professional phlebotomist. Small vials of blood samples were labeled with each patient's initials and date of birth. Samples were stored in sterilized green-top phlebotomy tubes at 4°C. Blood types were analyzed with Eldon Blood Type Test kits. 10 microliters of sterilized water were pipetted into each well with a micropipette, followed by 25 microliters of each patient's blood. Eldonsticks were used for mixing for 10 seconds. The card's wells were rotated right 90 degrees and held vertically. This was repeated four times. The cards were then kept at room temperature for 15 min before they were sealed in a biological safety container. Each individual card was then read by using the

34


Figure 4

following standard reference card. The first well detects the presence of the A antigen on blood cells. The second well detects the B antigen similarly, while the third well detects the Rh protein. The card was then read. The fourth well, the last one, is meant as a control: if it shows any spots, dotting, or fine bumpy texture, indicating agglutination, the test is NOT valid, and the procedure should be repeated. The first three wells are read as shown in Figure 4 (above). Being marked as "present" meant that the antigens were present for that corresponding blood type, and agglutination occurred as the blood cells stuck together and reacted with the reagents. Given these three wells, the corresponding blood type was derived: if an antigen was present, the sample has that blood type; if none were present, it is described as type O. Second, the Rh factor was read. If the well was marked, that indicates that the sample was Rh positive; showing no change indicates Rh negative, as the Rh glycoprotein is not present. Data was recorded to a software matching system for later comparison and statistical analysis. Next, lipid values were measured for each patient sample. The Alere Cholestech LDX Analyzer System, which combines enzymatic methodology and solid-phase technology, was used to measure total cholesterol, triglyceride,

Next, lipid values were measured for each patient sample. The Alere Cholestech LDX Analyzer System, which combines enzymatic methodology and solid-phase technology, was used to measure total cholesterol, triglyceride, LDL and HDL levels. 5 microliters of each sample was applied to an Alere Cholestech LDX® cassette. The cassette was then placed into the Analyzer, where a unique system on the cassette separates the plasma from the blood cells. A portion of the plasma flowed to the right side and was transferred to both the total cholesterol and triglyceride reaction pads. Simultaneously, plasma flowed to the left side, where the LDL is precipitated. A magnetic stripe on each cassette contains the calibration information required for the analyzer to convert the reflectance readings to total cholesterol, HDL cholesterol, triglyceride and LDL values. Data was transferred to Excel and the corresponding results were separated into categories by blood type (O, A, B, AB). Statistical values such as the mean, median, and standard deviation were calculated, a standard t-test was performed, and p-values were calculated accordingly.
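The descriptive statistics and t-test described above could be reproduced with standard tools; the sketch below uses made-up cholesterol values rather than the study's data and assumes an ordinary two-sample t-test, since the text does not specify a paired or unequal-variance variant:

```python
import statistics
from scipy import stats

# Hypothetical total cholesterol values (mg/dL); not the study's data.
type_o = [165, 172, 158, 181, 169, 175, 160, 170]
non_o  = [198, 185, 210, 192, 205, 188, 199, 215]

for label, values in (("O", type_o), ("non-O", non_o)):
    print(label,
          "mean:", round(statistics.mean(values), 1),
          "median:", statistics.median(values),
          "sd:", round(statistics.stdev(values), 1))

# Standard two-sample t-test comparing the O and non-O groups.
t_stat, p_value = stats.ttest_ind(type_o, non_o)
print("t =", round(t_stat, 2), "p =", round(p_value, 4))
```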

Figure 5: Example of a blood typing report for a patient.

Figure 6

Table 1: Patient Blood Types

Original data and derived values were placed in graphs and charts accordingly, then reviewed and analyzed. DATA AND RESULTS Table 1 summarizes the blood types in this experiment: 48.4% of subjects were type O and 51.6% were non-O. Among the non-O subjects, 29% were type A, 19.4% were type B, and only 3.2% were type AB.

35


Figure 6 shows the distribution of blood types O, A, B and AB in this experiment, which is similar to the average distribution in the country. Table 2 displays the cholesterol data for blood type O: total cholesterol is shown, along with the detailed lipid profile, including LDL (low density lipoprotein), HDL (high density lipoprotein) and triglycerides, all measured for each patient.

Table 2

Table 4

Table 3 displays the cholesterol data for the non-O blood types; total cholesterol, LDL, HDL and triglycerides were all measured. Table 4 summarizes the mean and median lipid values for the non-O and O type blood samples. As seen in Table 4, mean total cholesterol is significantly lower in blood type O compared with non-O. LDL is also lower.

Graph 1: Comparison of Mean Lipid Values for Blood Type O vs. Non-O Groups

Table 3

Triglyceride levels showed only a minor difference, and there was no difference for HDL. Medians were also calculated in order to account for possible outliers or larger deviations from the mean. They followed a similar pattern to the means: total cholesterol and LDL were significantly lower in blood type O, while triglycerides and HDL did not show a significant difference. Graph 1 compares the mean of each lipid value in each blood group. Mean total cholesterol, LDL and triglycerides are all lower in the O group than in the non-O group, with HDL being the exception.

36


Graph 2 - Median Lipid Values for Blood Type O vs Non-O Groups

Graph 2 compares the medians of each lipid value in each blood group. There is a similar pattern, and values are noticeably lower in the O-type group; the difference in triglycerides and LDL is more pronounced in the medians. Again, HDL did not show a significant difference. A further table summarizes the specific lipid values for each blood type. Table 5 provides the p-values for each lipid value. Using a significance level of ≤0.05, total cholesterol showed a significant difference, with LDL close to the margin. Triglycerides and HDL do not appear significant.

Table 5: P values


Graph 3 compares total cholesterol and triglyceride levels among the different blood types. There were not enough type AB samples, so type AB is not included in the graph. Graph 4 displays the mean LDL and HDL values for types O, A, and B; due to the insufficient number of AB patients, AB was not included.

Graph 3

CONCLUSION This experiment's purpose was to examine whether there is a correlation between blood types and lipid values. Twenty-nine people were included in this study. Blood typing showed that 48.4% were type O and 51.6% were non-O; among the non-O subjects, 29% were type A, 19.4% were type B, and only 3.2% were type AB. This matched the average distribution of blood types in the U.S. Cholesterol profile analysis of these samples showed that the patients with O blood type had significantly lower total cholesterol values than those with non-O blood types. Although not significant, O-type samples also had lower LDL levels than non-O-type samples. HDL and triglyceride values, however, showed no statistical difference. Among the specific blood types, it was interesting to see that type B had higher total cholesterol and triglyceride levels, while type A had higher LDL levels. Due to the limited number

37


of subjects, type AB was not included in the statistical calculations. The hypothesis was that O-type people would have lower cholesterol than non-O-type people. This experiment supports the hypothesis: the difference in total cholesterol levels is significant. Because cholesterol contributes to plaque formation in the process of atherosclerosis, this experiment helps to explain why blood type O carries a lower risk for heart disease. While people cannot change their blood type, these findings may help them better understand their risk for heart disease, and a healthy lifestyle may help protect those with types A, B and AB. DISCUSSION It is extremely interesting to learn that blood type is correlated with health conditions (AHA, 2012). This experiment helps to explain why blood type O carries a lower risk for cardiovascular disease. Knowing one's blood type can be an important part of staying healthy and avoiding heart disease. For people who know they are at a higher risk, it is likely more important to reduce risk by adopting a healthier lifestyle. Further experimental studies are needed to unravel the molecular mechanisms linking ABO blood type, cardiovascular disease and cancer development. Although the mechanism is unknown at this time, the presence of more protein groups on the surface of the red blood cells may contribute to more lipid production (Franchini et al., 2015). As shown in the figure below, the A and B antigens have N-acetylgalactosamine and D-galactose, respectively, added to a common H structure, while the O type has an unaltered H structure. This may lead to different lipoprotein production and needs further study at the molecular level. From the journal reading, another clinical finding of interest is the association between ABO blood group types and cancer (Anstee et al., 2010). Researchers have reported an association between blood type and pancreatic/gastric cancers (Liumbruno et al., 2010). People with blood group A, AB, or B were reported to be more likely to develop pancreatic cancer compared with those with blood group

O (Liumbruno et al., 2010). Blood group A was reported to be more prevalent in patients with gastric cancer (Wolpin et al., 2014). The cause also needs further study. This project had a limited number of patients who could be tested, so it could be expanded in the future by increasing the sample size and further analyzing the presence of protein groups on blood cells. It would also be very interesting to explore more associations between blood type and different health conditions. In this experiment, the presence of the Rh (rhesus) protein group, and of many other common protein groups, was not considered. If this experiment were furthered, I would conduct more experiments to analyze the presence versus the absence of these proteins and compare them to the lipid values. WORKS CITED

Avent, N. D., & Reid, M. E. (1999, August 31). The Rh blood group system: A review.
Blood type may influence heart disease risk. (AHA, 2012, August 14).
Dean, L. (1970, January 01). The Rh blood group.
Bekerman, J. (Ed.). (2016, June 19). Cholesterol and Heart Disease.
CDC. (2015, August 10). Preventing Heart Disease: Healthy Living Habits. Retrieved September 28, 2016.
CDC. (2015, August 10). Heart Disease Facts. Retrieved September 28, 2016.
O'Neil, D. (2013). Human Blood: Rh Blood Types.
Thompson, E. G., & Colby, W. D. (2015, August 21). Blood Type Test.
Arteriosclerosis, Thrombosis and Vascular Biology, American Heart Association journal. Oct 16, 2012.
Storry JR, Olsson ML. The ABO blood group system revisited: a review and update. Immunohematology. 2009;25:48–59.
Anstee DJ. The relationship between blood groups and disease. Blood. 2010;115:4635–43.
Dentali F, Sironi AP, Ageno W, Crestani S, Franchini M. ABO blood group and vascular disease: an update. Semin Thromb Hemost. 2014;40:49–59.
Lowe J. The blood group-specific human glycosyltransferases. Baillieres Clin Haematol. 1993;6:465–90.
Franchini M, Liumbruno GM. ABO blood group: old dogma, new perspectives. Clin Chem Lab Med. 2013;51:1545–53.
Liumbruno GM, Franchini M. Beyond immunohaematology: the role of the ABO blood group in human diseases. Blood Transfus. 2013;11:491–9.
Garratty G. Blood groups and disease: a historical perspective. Transfus Med Rev. 2000;14:291–301.
Franchini M, Favaloro EJ, Targher G, Lippi G. ABO blood group, hypercoagulability, and cardiovascular and cancer risk. Crit Rev Clin Lab Sci. 2012;49:137–49.
Liumbruno GM, Franchini M. Hemostasis, cancer, and ABO blood group: the most recent evidence of association. J Thromb Thrombolysis. 2014;38:160–6.
Dentali F, Sironi AP, Ageno W, Turato S, Bonfanti C, Frattini F, Crestani S, Franchini M. Non-O blood type is the commonest genetic risk factor for VTE: results from a meta-analysis of the literature. Semin Thromb Hemost. 2012;38:535–48.
Franchini M, Mannucci PM. ABO blood group and thrombotic vascular disease. Thromb Haemost. 2014;112(6):1103–9.
Franchini M, Capra F, Targher G, Montagnana M, Lippi G. Relationship between ABO blood group and von Willebrand factor levels: from biology to clinical implications. Thromb J. 2007;5:14.
Jenkins PV, O'Donnell JS. ABO blood group determines plasma von Willebrand factor levels: a biologic function after all? Transfusion. 2006;46:1836–44.
Wolpin BM, Chan AT, Hartge P, Chanock SJ, Kraft P, Hunter DJ, Giovannucci EL, Fuchs CS. ABO blood group and the risk of pancreatic cancer. J Natl Cancer Inst. 2009;101:424–31.
Mengoli C, Bonfanti C, Rossi C, Franchini M. Blood group distribution and life expectancy: a single-centre experience. Blood Transfus. 2014. doi:10.2450/2014.0159-14
Edgren G, Hjalgrim H, Rostgaard K, Norda R, Wikman A, Melbye M, Nyrén O. Risk of gastric cancer and peptic ulcers in relation to ABO blood type: a cohort study. Am J Epidemiol. 2010;172:1280–5.
Daniels G, Reid ME. Blood groups: The past 50 years. Transfusion. 2010;50:281–9.
Franchini M, Frattini F, Crestani S, Bonfanti C, Lippi G. von Willebrand factor and cancer: a renewed interest. Thromb Res. 2013;131:290–2.
Etemadi A, Kamangar F, Islami F, Poustchi H, Poursham A, Brennan P, Boffetta P, Malekzadeh R, Dawsey SM, Abnet CC, Emadi A. Mortality and cancer in relation to ABO blood group phenotypes in the Golestan Cohort Study. BMC Medicine. 2014. doi:10.1186/s12916-014-0237-8

38


Advillin Expression Defines Subtypes of Taste Neurons and Receptor Cells Jennifer Xu duPont Manual High School

ABSTRACT The sense of taste is an important bodily function that is often taken for granted. The ability to taste allows humans to distinguish between toxins and non-toxins, and to enjoy the fruits of life in general. However, there are still questions about the links between the brain and the taste system. Advillin, a protein that has been found to be highly expressed in the brain and on the tongue, could potentially be used to characterize subpopulations of taste cells. The purpose of this experiment is to trace the path of neurons expressing Advillin and see if they innervate taste buds. If a majority (greater than 50%) of the taste buds tested are innervated, then neurons expressing Advillin do innervate taste buds. In this experiment, the taste buds of lab mice (Mus musculus) were tested. Two antibodies were used, DsRed and Troma-1. DsRed labeled all taste cells and nerves expressing Advillin red, and Troma-1 labeled the parts of the taste bud expressing Keratin 8 green. This made it easier to visualize the taste buds and taste cells under a confocal microscope. It was found that a majority of the taste buds tested were innervated. A one-sample z-test and a confidence interval analysis found that it is reasonable to conclude that a majority of taste buds are innervated by Advillin-positive fibers. Ultimately, the data supported the hypothesis. The next step of this experiment is to assess which specific taste cells are innervated by Advillin-positive fibers, which would link these neurons to specific tastes or sensations.

INTRODUCTION One of the senses most often taken for granted is the sense of taste. Without the ability to taste, the ability to enjoy food is lost, significantly decreasing quality of life. Additionally, the sense of taste has helped humankind differentiate between non-toxic and toxic substances, protecting us from consuming toxins or poisons. For example, humankind has evolved to prefer things that taste sweet, salty or savory. Sweet tastes signal energy-rich nutrients, savory tastes signal the presence of amino acids, and salty tastes ensure that we maintain proper electrolyte balance (Wauson et al., 2012). Conversely, most organisms have developed an aversion to bitter and sour tastes, as they signal toxic or dangerous substances. In the modern day, the sense of taste has not been as heavily investigated as the sense of touch or the sense of sight. However, loss of taste is an important symptom of certain neurological diseases. In a recent study, it was found that people with Multiple Sclerosis (MS) have significantly more difficulty identifying tastes compared with people without MS. Additionally, participants of the study affected by MS rated the intensity of certain tastes, namely bitter, lower than participants without MS (Doty, 2016). Other diseases in which a change in taste is a symptom range from Bell's Palsy to strep throat (Roth, 2016). The sensation of taste itself arises from a variety of neurological processes. On the tongue, there are papillae that contain onion-shaped structures called taste buds. Within taste buds there are multiple taste receptor cells (TRCs), which are responsible for responding to taste stimuli and releasing neurotransmitters to the brain. The glossopharyngeal, facial, and vagus nerves, which innervate taste buds, send input to the solitary nucleus, a portion of the brain that is made up purely of sensory nuclei. Branches of the facial cranial nerve innervate taste buds in the anterior ⅔ of the tongue, while the glossopharyngeal nerve innervates the posterior ⅓ (Patestas, 2016). Axons of these nerves terminate on 2nd order sensory neurons in the solitary nucleus. In humans and other mammals, 2nd order

39


fiber neurons travel through the ipsilateral central tegmental tract to 3rd order sensory neurons in the ventroposterior medial (VPM) nucleus, located in the thalamus. From this point, the VPM projects to the gustatory cortex, which is primarily responsible for the perception of taste (Hutchins & Byrne, 1997). There are five currently recognized taste modalities, which invoke different reactions in taste receptor cells: salts, acids, sweet, bitter, and umami. Historically, it was believed that certain regions of the tongue were responsible for certain tastes. However, recent studies have shown that no single fiber responds to only one taste, although it may be predisposed to one taste quality and less receptive to another (Hutchins & Byrne, 1997). In actuality, TRCs express a family of receptors called G-protein-coupled receptors (GPCRs). Previous research shows that taste receptor cells that detect attractive taste modalities express a sub-group of GPCRs called T1Rs (T1R1, T1R2, and T1R3). For example, TRCs that co-express T1R2 and T1R3 act as the receptor for a variety of sweet tastants, implying that T1R2+T1R3 expression is an appropriate way to define sweet TRCs. Additionally, TRCs that detect umami (savory) have been found to express T1R1 and T1R3. TRCs that co-express T1R1 and T1R3 (T1R1+T1R3) respond exclusively to monosodium glutamate (MSG) and aspartate, two compounds primarily responsible for the savory sensation in humans. As for bitter tastes, a different subgroup of GPCRs, called T2Rs, is expressed in bitter-detecting TRCs (Chandrashekar, 2006). In this project, Advillin-Cre was used to track taste bud innervation. Advillin is a recently discovered actin-binding protein that shares similarities with the protein Gelsolin and related proteins such as Villin and Adseverin. The expression of Adseverin and Villin is more limited than that of Gelsolin, appearing primarily in neural and endocrine tissue and in intestinal and renal tissue, respectively. Advillin has been found to have an amino acid sequence highly similar to those of Adseverin and Villin. While Advillin is expressed in some of the same locations as Villin, it has been found to be expressed uniquely on the tongue, as well as in the dorsal root and trigeminal ganglia. Genetic expression of Advillin may play a role in neural development, specifically in the

morphology of ganglia, of which the geniculate ganglion is partly responsible for taste (Marks et al., 1998). Further research indicates that Advillin is exclusively expressed in peripheral sensory neurons and axons, making it an effective tool for studying sensory neuron development (Hasegawa, 2007). The Cre-Lox recombination system is a popular tool used to carry out deletions, insertions, and translocations at specific sites in DNA (Mueller, Hassel, & Grealy, 2012), and an Advillin-Cre driver line has been proven to be an effective tool for targeting peripheral neurons (Zurborg et al., 2011). Keratin-8 is a protein expressed in epithelial cells and commonly used for staining in immunohistochemistry procedures, similar to its purpose in this experiment (Martens et al., 1999). The goal of this experiment was to see if the expression of Advillin could be used as a marker for defining specific subsets of TRCs and gustatory neurons. To do so, it is first important to see whether Advillin neurons innervate taste buds at all. This experiment traces Advillin-Cre expression in taste buds of transgenic Advillin-Cre mice. If the majority of taste buds tested (defined as greater than 50%) indicate Advillin innervation, then it is reasonable to conclude that mouse taste buds, in general, are innervated by Advillin neurons. This is because Advillin is found in both the brain and the tongue, potentially indicating some sort of linkage. It has already been established that some geniculate neurons express Advillin. However, the Advillin neurons expressed on the tongue could be somatosensory neurons, which do not necessarily innervate taste buds. PROCEDURES First, a Cre mouse was cross-bred with a floxed mouse. The Cre mouse expressed the gene for an enzyme called Cre recombinase, which allowed the target gene to be removed in the offspring and the offspring to express the td-Tomato gene. The td-Tomato gene causes red fluorescence to show up under a confocal microscope. Using a cryostat-microtome, the tissue harvested from the offspring was sectioned at a thickness of 70 microns. The tissue was rinsed by being placed in 0.1 M phosphate buffer on a shaker for 10 minutes. This

40


process was repeated 5 times. Afterwards, the tissue was placed in a blocking solution. The blocking solution was made from 50 μl of 10% Triton-X100, 30 μl of donkey serum, 500 μl of 0.2 M phosphate buffer, and 420 μl of deionized water. The tissue was placed on a shaker at 4 degrees Celsius for 2 days. After 2 days, 2 μl of the primary antibody DsRed and 20 μl of the primary antibody Troma-1 were added, along with an antibody solution of 500 μl of 0.2 M phosphate buffer, 50 μl of 10% Triton-X100, and 450 μl of deionized water. The DsRed antibody binds to the td-Tomato protein, while the Troma-1 antibody binds to a protein called Keratin-8, which is found in taste buds. Troma-1 labels the taste bud green, while td-Tomato labels anything expressing Advillin bright red. The tissue was returned to the shaker for another 5 days. After another 5 days of incubation at 4 degrees Celsius, the secondary antibodies were added: 2 μl of donkey anti-rabbit and donkey anti-rat. A new antibody solution, with the same composition as the first antibody solution, was also added. After two days, the tissue was rinsed with 0.1 M phosphate buffer 5 times, for 10 minutes each. Then, the tissue was mounted with aqueous mounting media. The tissue was examined under a standard microscope to see if the antibody staining worked. Further analysis and imaging of the taste buds were done using an Olympus Fluoview confocal microscope. The same procedure was repeated for the second set of tissue from the second mouse. Innervation was determined by whether or not red nerve fibers penetrated the green outline of the taste bud. This can be seen in Figures 1.1-1.4, where innervation is circled.

Fig 1.1 (TL) and 1.2 (TR), Fig 1.3 (BL) and 1.4 (BR)

Based on the data, a confidence interval was calculated and used to predict the proportion of innervated taste buds in a population of mice taste buds. A success was defined as having innervation, while a failure was defined as having no innervation. Another statistical test that was used was a one-proportion z-test, with a null hypothesis that the population proportion was equal to 0.5 (P = 0.5) and an alternative hypothesis that the population proportion was greater than 0.5 (P > 0.5). Using the one-proportion z-test, a p-value was calculated for both sets of data separately, at a significance level of 0.05.

RESULTS The analyses and summarizations referred to in this section are based on the data in Tables 1.1 and 2.1, which display the raw data for each tissue sample. Both tables can be found in the appendix. Table 1.2 displays the proportion of taste buds that were innervated as well as the proportion that were not. Out of a sample of 49 taste buds, about 73% of the taste buds were innervated and about 27% were not. Table 2.2 displays the proportion of innervated and uninnervated taste buds in tissue 10168. Based on a sample of 46 taste buds, approximately 85% of taste buds were innervated and 15% were not.

Graph 1.1 and 2.1

Graphs 1.1 and 2.1 display the distribution of the number of taste receptor cells per taste bud for tissues 9173 and 10168, respectively. There were no statistically significant outliers in either set of data. For tissue 9173, the distribution is positively skewed and centered about the median at around 4 taste cells, with 50% of the sample falling between 2 and 5 taste cells. The data for tissue 10168 is negatively skewed, meaning the center is best measured by the median; the center is 5 taste cells, with 50% of the data falling between 3 and 6 taste cells. In both sets of taste buds, the number of visible taste cells ranged from 0 to 9. Based on the data of sample 9173, the confidence interval is 0.611 < p < 0.858, at a 95% confidence level. The variable p refers to the population proportion. The confidence interval indicates that, based on an infinite population of mice taste buds, it is 95% certain that the population proportion of innervated taste buds

41


falls between about 61% and 86%. As for sample 10168, the confidence interval is 0.744 < p < 0.952, with 95% confidence. This means that, based on an infinite population of mice taste buds, it is 95% certain that the population proportion of innervated taste buds falls between about 74% and 95%. Both samples' calculated p-values were low enough to indicate statistical significance.
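For readers who want to reproduce this kind of analysis, a minimal sketch of the one-proportion z-test and normal-approximation confidence interval is shown below. It is not the authors' analysis script, and the counts used (36 of 49 innervated taste buds for tissue 9173, implied by the reported 73%) are an approximation:

```python
import math
from scipy.stats import norm

def proportion_test(successes: int, n: int, p0: float = 0.5):
    """One-proportion z-test (H1: p > p0) and 95% CI via the normal approximation."""
    p_hat = successes / n

    # Test statistic uses the null proportion's standard error.
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    p_value = 1 - norm.cdf(z)          # one-sided (greater-than) alternative

    # 95% confidence interval uses the sample proportion's standard error.
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, z, p_value, (p_hat - margin, p_hat + margin)

# Tissue 9173: roughly 36 of 49 taste buds innervated (approx. 73%).
print(proportion_test(36, 49))   # CI close to the reported 0.611-0.858
```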

Table 1.2

Table 2.2

CONCLUSION Based on the data in this experiment, it can be said with relative certainty that a majority of the taste buds tested were innervated by Advillin-Cre expressing nerve fibers. Both sets of data generated a p-value low enough to reject the null hypothesis, which means it can be concluded that greater than 50% of mouse taste buds are innervated by Advillin neurons. Additionally, in both tissue samples, both the upper and lower limits of the confidence intervals are above 50%, indicating a high chance that more than half of all mouse taste buds are innervated by Advillin-Cre expressing nerve fibers. This supports the hypothesis, which stated that if taste buds were tested for Advillin innervation, the majority would be innervated. Thus, it can be concluded that there is some kind of relationship between Advillin-expressing neurons and taste buds. There could have been some experimental errors that may have affected the results, but not significantly. The thickness of the tested tissue varied by a few microns, which could have affected antibody binding and made the confocal image dimmer, potentially obscuring innervation. Additionally, other parts of the taste buds were sometimes tagged red, which could be mistaken

for taste cells or innervation. However, multiple trials indicate that the conclusion is still valid. Since this experiment shows that Advillin neurons do innervate taste buds, it is reasonable to move on to the next step, which is to assess whether Advillin neurons innervate specific subpopulations of taste receptor cells. This would indicate that the neurons are either responsible for, or play a role in, specific taste sensations. Recent studies show that a major signaling effector of GPCRs, called PLC-β2, is co-expressed with T1Rs and T2Rs, which mediate sweet, savory, and bitter tastes. Even though these tastes rely on different receptors, the shared PLC-β2 expression indicates that they possibly rely on the same transduction channel (Yasuoka et al., 2007). Assessing the innervation of Advillin-positive fibers on these taste receptor cells could potentially shed light on this phenomenon. This could lead to the use of Advillin as a way to define a subtype of taste receptor cells, which in turn could lead to a variety of real-world applications, as the relationship between our brains and our taste system is not yet fully explored. Further research could indicate new connections, which could be used to our advantage when assessing neurological diseases. WORKS CITED

1. Bachmanov, A. A., & Beauchamp, G. K. (2007). Taste Receptor Genes. Annual Review of Nutrition, 27, 389-414. doi:10.1146/annurev.nutr.26.061505.111329
2. Chandrashekar, J., Hoon, M. A., Ryba, N. J., & Zuker, C. S. (2006). The receptors and cells for mammalian taste. Nature, 444(7117), 288-294. doi:10.1038/nature05401
3. Doty, R. L., Tourbier, I. A., Pham, D. L., et al. J Neurol (2016) 263: 677. doi:10.1007/s00415-016-8030-6
4. European Bioinformatics Institute, Protein Information Resource, SIB Swiss Institute of Bioinformatics. (2017, February 15). Advillin. Retrieved from http://www.uniprot.org/uniprot/O75366
5. Hasegawa, H., Abbott, S., Han, B., Qi, Y., & Wang, F. (2007). Analyzing Somatosensory Axon Projections with the Sensory Neuron-Specific Advillin Gene. Journal of Neuroscience, 27(52), 14404-14414. doi:10.1523/jneurosci.4908-07.2007
6. Hutchins, M. O. (n.d.). Chemical Senses: Olfaction and Gustation. In J. H. Byrne (Ed.). Retrieved November 11, 2016, from http://neuroscience.uth.tmc.edu/s2/chapter09.html
7. Jacob, T. (n.d.). Taste (Gustation). Retrieved from http://www.cardiff.ac.uk/biosi/staffinfo/jacob/teaching/sensory/taste.html
8. Martens, J., Baars, J., Smedts, F., Holterheus, M., Kok, M.-J., Vooijs, P., and Ramaekers, F. (1999). Can keratin 8 and 17 immunohistochemistry be of diagnostic value in cervical cytology? Cancer, 87: 87–92. doi:10.1002/(SICI)1097-0142(19990425)87:2<87::AID-CNCR8>3.0.CO;2-L
9. Marks, P. W., Arai, M., Bandura, J. L., & Kwiatkowski, D. J. (1998). Advillin (p92): a new member of the gelsolin/villin family of actin regulatory proteins. Journal of Cell Science, 111(15), 2129-2136.
10. Müller, W. A., Hassel, M., & Grealy, M. (2012). Development and reproduction in humans and animal model species. Berlin: Springer-Verlag.
11. Online Mendelian Inheritance in Man. (n.d.). ADVILLIN; AVIL. Retrieved from http://www.omim.org/entry/613397
12. Patestas, M. A., & Gartner, L. P. (2016). A textbook of neuroanatomy. Malden, MA: Blackwell.
13. Roth, E. (2016). Impaired Taste. Retrieved from http://www.healthline.com/health/tasteimpaired#overview1
14. Wauson, E., Zaganjor, E., Lee, A., Guerra, M., Ghosh, A., Bookout, A., . . . Cobb, M. (2012). The G Protein-Coupled Taste Receptor T1R1/T1R3 Regulates mTORC1 and Autophagy. Molecular Cell, 47(6), 851-862. doi:10.1016/j.molcel.2012.08.001
15. Yasuoka, A., Aihara, Y., Matsumoto, I., & Abe, K. (2004). Phospholipase C-beta 2 as a mammalian taste signaling marker is expressed in the multiple gustatory tissues of medaka fish, Oryzias latipes. Mechanisms of Development, 121(7-8), 985-989. doi:10.1016/j.mod.2004.03.009
16. Zhang, Y., Hoon, M. A., Chandrashekar, J., Mueller, K. L., Cook, B., Wu, D., . . . Ryba, N. J. (2003). Coding of Sweet, Bitter, and Umami Tastes. Cell, 112(3), 293-301. doi:10.1016/s0092-8674(03)00071-0
17. Zurborg, S., Piszczek, A., Martínez, C., Hublitz, P., Al Banchaabouchi, M., Moreira, P., … Heppenstall, P. A. (2011). Generation and characterization of an Advillin-Cre driver mouse line. Molecular Pain, 7, 66. http://doi.org/10.1186/1744-8069-7-66

ACKNOWLEDGEMENTS I would like to thank University of Louisville’s Medical-Dental Research Lab as well as Dr. Robin Krimm, Jennifer Rios-Plier, Lisa Ohman, and everyone else who helped me complete this project.

42


TRPA1 Mediates Cardiac Dysfunction through Sympathetic Dominance in Mice Exposed to Concentrated Ambient Particulates Minh Phuc Tran1, Alex Carll2, Kyle Fulghum2 1duPont Manual High School 2University of Louisville, Diabetes and Obesity Center SUMMARY Epidemiological studies have tied exposure to fine particulate matter (PM2.5) air pollution to cardiovascular morbidity and mortality, including arrhythmia and heart failure. PM2.5 causes an estimated 3.5 million cardiopulmonary deaths a year. However, it remains unclear how PM2.5 causes cardiovascular pathophysiology. One likely biological mechanism of PM2.5-induced cardiovascular pathophysiology is autonomic nervous system imbalance. Research suggests autonomic imbalance—typically involving dominance of the sympathetic division over the parasympathetic division—mediates PM2.5-induced cardiac arrhythmia, electrophysiological dysfunction, and heart failure exacerbation. This study investigated the role of TRPA1 in the cardiac response of mice exposed to concentrated ambient particulates (CAPs). Conscious, unrestrained wild-type (WT) and TRPA1 knockout (KO) mice implanted with radio telemeters were exposed once to 60-120 μg/m3 CAPs or HEPA-filtered air. The electrocardiogram was monitored for 18 hours after each exposure, and heart rate (HR) was also measured and recorded. Electrocardiograms (ECGs) were analyzed to determine whether a single six-hour exposure to CAPs differentially alters electrophysiology, heart rate, and autonomic balance in wild-type mice versus separate TRPA1 genetic knock-out mice. HRV increased as the mice were exposed to CAPs. The data showed that a single exposure to CAPs causes cardiac dysfunction through TRPA1 activation and autonomic imbalance involving a shift toward dominance of the sympathetic branch of the ANS.

INTRODUCTION Epidemiological studies have tied

exposure to fine particulate matter (PM2.5) air pollution to cardiovascular morbidity and mortality, including arrhythmia and heart failure (Brook et al. 2010). For example, a 2013 meta-analysis by Shah and colleagues found that hospitalization or death increased by 2-12% for each 10 µg/m3 increase in PM2.5 (Shah et al. 2013). In addition, PM2.5 causes an estimated 2.5 million cardiopulmonary deaths a year (Anenberg et al. 2010; Brook et al. 2010). However, it remains unclear how PM2.5 exposure causes cardiovascular pathophysiology. One likely biological mechanism of PM2.5-induced cardiovascular pathophysiology is physiologic agitation involving modulation of the autonomic nervous system. Other effects, which are discussed but are not the focus of this paper, include myocardial ischemia, oxidative stress, inflammation, and altered cardiac ion channel and vascular function (Brook et al. 2010; Schulz et al. 2005).

Figure 1: Diagram comparing the different sizes of PM (EPA)

The autonomic nervous system (ANS) maintains cardiovascular homeostasis by harmonizing sympathetic and parasympathetic neural influences. There are two main divisions of the ANS: the sympathetic nervous system and the parasympathetic nervous system. Sympathetic nervous system activity is related to situations where metabolic exertion is needed, the classic example being the "fight or flight" reaction (a situation where life is threatened by some external disturbance). The parasympathetic

43


nervous system is most often associated with rest, growth and increased resources (Coheren). Substantial research over the years has demonstrated that autonomic imbalance—typically involving the dominance of the sympathetic division over the parasympathetic division—mediates PM2.5-induced effects. As the primary cardiac target of increased sympathetic tone, the β1 adrenergic receptor (β1AR) increases heart rate (HR) and decreases an index of autonomic influence called heart rate variability (HRV). By incorporating HRV measures and β1AR-inhibiting drugs, research has provided evidence that PM2.5 exposure induces sympathetic dominance, oxidative stress, and arrhythmia, and exacerbates myocardial injury (Brook et al. 2010; Carll 2013; de Hartog et al. 2009; Dockery 2005; Hazari et al. 2012; Hazari et al. 2011; Rhoden et al. 2005; Robertson et al. 2013; Ying et al. 2014). Research has shown that PM2.5 from diesel engine exhaust directly stimulates an irritant receptor, transient receptor potential ankyrin 1 (TRPA1), which is an established trigger of sympathetic activation (Koba et al. 2011). Furthermore, PM2.5 can induce oxidative stress and subsequent lipid peroxidation (Brook et al. 2010), which releases an endogenous aldehyde that is a potent agonist of TRPA1 (Trevisani 2007) and induces cardiomyocyte ion channel dysfunction (Bhatnagar 1995). A recent animal study found that the inhalation of diesel exhaust at high concentrations promotes cardiac electrical dysfunction and sympathetic dominance through TRPA1 activation. Epidemiologists have used the electrocardiogram (ECG) to link air pollution exposure to changes in autonomic balance, in part by identifying changes in heart rate variability (HRV). The primary aim of this study was to determine whether the TRPA1 irritant receptor influences the acute effects of PM2.5 exposure on cardiac electrophysiology and autonomic imbalance. ECGs were analyzed to determine whether a single six-hour exposure to concentrated ambient particulates (CAPs) differentially alters electrophysiology, HR, and autonomic imbalance in wild-type (WT) mice versus separate TRPA1 genetic knock-out (KO) mice. For each strain, CAPs-exposed mice were compared to strain-matched mice concurrently exposed to air (wild

type n=4/group; TRPA1 KO n=3/group). ECGs were analyzed for alterations in cardiac conduction, repolarization, arrhythmia, and HRV. The hypothesis was that TRPA1 KO mice would be protected from CAPs-induced changes in ECG morphology, arrhythmia and autonomic imbalance, as seen through the measured HRV. METHODOLOGY Animal Study (TRPA1 KO and WT) Mice were implanted with radio telemeters. The exposure design for the TRPA1 KO mice is summarized in Table 1 and the exposure design for the WT mice is summarized in Table 2. Before the exposures, the ECGs of the mice were recorded for 18 hours on Day 0. All mice were then exposed to high-efficiency particulate arrestance (HEPA) filtered air for six hours on Day 1, and their ECGs were recorded for 18 hours afterward. Finally, on Day 2, each mouse was exposed to either HEPA-filtered air or CAPs (60-120 µg/m3), depending on its group. The TRPA1 KO mice in Group 1 were exposed to 60 µg/m3 and the WT mice were exposed to 120 µg/m3 (on different days); however, research has shown that exposures above 30 µg/m3 are potent. The two groups of mice were exposed to different amounts of PM2.5 because the delivered PM2.5 depended on the concentration in the ambient air.

44

Table 1: Design for KO Exposure

Table 2: Design for WT Exposure


Electrocardiogram Acquisition and Analysis ECGAuto software (EMKA Technologies USA, Falls Church, VA) was used to visualize individual ECG signals, analyze and quantify ECG segment durations, and identify cardiac arrhythmias. Using ECGAuto, the P wave, QRS complex, and T wave were identified for individual ECG waveforms and compiled into a library (with an average of 40 waveforms) used for analysis of all experimental ECG traces. The following measurements were determined for each ECG waveform: PR interval, QRS duration, ST interval and, using Bazett's formula, the QT interval corrected for HR (QTc). The Lambeth conventions (Walker et al. 1988) were used as guidelines for the identification of cardiac arrhythmic episodes. Arrhythmias occurring during the exposures were classified as ventricular fibrillation (VF), ventricular tachycardia (VT) or ventricular premature beats (VPBs).
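For reference, Bazett's correction as conventionally written divides the measured QT interval by the square root of the preceding RR interval (both in seconds); whether ECGAuto applies any additional species-specific scaling for mice is not stated in the text:

$$ QT_c = \frac{QT}{\sqrt{RR}} $$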

Graph 1

Graph 2

Graph 3

Graph 4

Graph 5

Table 3: Time and Frequency Domain Measurements for HRV

Heart rate variability (HRV) was also computed as the mean of the differences between sequential HRs for the complete set of ECG signals. For each 5-min stream of ECG waveforms, the mean time between successive QRS complex peaks (RR interval), mean HR, and mean HRV-analysis-generated time-domain measures were obtained. The time-domain measures included the standard deviation of the time between normal-to-normal beats (SDNN) and the root mean square of successive differences (RMSSD). Additionally, HRV analysis was conducted in the frequency domain using a Fourier transform. In this study, the spectrum was divided into low-frequency (LF) and high-frequency (HF) sections. The ratio of these two frequency bands (LF/HF) was calculated as an estimate of the relative balance between sympathetic (LF) and vagal (HF) activity. RESULTS
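As a rough, illustrative sketch of how SDNN, RMSSD, and an LF/HF ratio can be computed from a series of RR intervals (this is not the ECGAuto implementation, and the LF/HF band edges shown are assumptions, since the paper does not state the cutoffs used for mice):

```python
import numpy as np

def hrv_measures(rr_ms, lf_band=(0.4, 1.5), hf_band=(1.5, 4.0)):
    """Compute SDNN, RMSSD, and an LF/HF ratio from RR intervals in milliseconds.

    The LF/HF bands are placeholder values; published murine HRV analyses use
    species-specific cutoffs that differ from the human 0.04-0.4 Hz bands.
    """
    rr = np.asarray(rr_ms, dtype=float)

    sdnn = rr.std(ddof=1)                       # SD of normal-to-normal intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # root mean square of successive differences

    # Resample the RR series onto a uniform time grid before the Fourier transform.
    t = np.cumsum(rr) / 1000.0                  # beat times in seconds
    fs = 20.0                                   # resampling rate (Hz)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = np.interp(grid, t, rr)

    spectrum = np.abs(np.fft.rfft(rr_uniform - rr_uniform.mean())) ** 2
    freqs = np.fft.rfftfreq(len(rr_uniform), d=1.0 / fs)

    lf = spectrum[(freqs >= lf_band[0]) & (freqs < lf_band[1])].sum()
    hf = spectrum[(freqs >= hf_band[0]) & (freqs < hf_band[1])].sum()
    return sdnn, rmssd, lf / hf

# Example with synthetic RR intervals around 100 ms (roughly 600 bpm, as in a mouse).
rng = np.random.default_rng(0)
rr_example = 100 + rng.normal(0, 3, size=2000)
print(hrv_measures(rr_example))
```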

Graph 6

Graph 1. All groups except PM-exposed wild-type mice had decreased heart rate at Post-Exposure relative to their own Pre-Exposure values (collected at the same time of day on the prior day). Graph 2. All mice except PM-exposed wild-type mice had consistent SDNN HRV between Pre- and Post-Exposure. SDNN for Post-WT was lower than for Post-TRPA1. Graph 3. HR averaged between hours 3-13. PM exposure in WT mice increased HR relative to Air exposure (p=0.0045, determined through a 2-way ANOVA), and TRPA1 KO abolished this effect. Graph 4. RMSSD for Post-WT was higher than for Post-TRPA1. There was a large difference in RMSSD between the Post Air WT and Post PM WT groups, with the latter being much higher. Graph 5. The average LF/HF ratio for the WT Post

45


Exposure was much higher than that for the TRPA1 KO. Graph 6. The average LF/HF ratio for the KO Post Exposure was much lower than that for the WT. Summary of Results PM increased HR (sympathetic dominance) in WT mice, whereas it had no such effect in TRPA1 knock-out mice. PM tended to decrease SDNN (sympathetic dominance) in WT mice, whereas it did not in the KO mice. When the parameters were averaged between hours 3-13 instead of hours 0-16, the HR effects became statistically significant. WT mice had an overall higher LF/HF ratio (more SNS activity) than the TRPA1 KO mice. CONCLUSION In Graphs 1 and 2, the relatively elevated HR and reduced SDNN in PM-exposed WT mice suggest that PM caused sympathetic activation. Graph 3 indicates that the data is significant, but only in hours 3-13 after the exposure to PM2.5. Graph 4 suggests a decrease in parasympathetic activation in the Post PM WT group. Graph 5 suggests that the WT mice experienced more sympathetic dominance than the KO mice, and Graph 6 suggests that the KO mice experienced more parasympathetic dominance than the WT mice. Because of this data, the hypothesis was partially supported by the results. TRPA1 does play some role in mediating cardiac dysfunction when mice are exposed to CAPs. There are a wide range of applications for this project. It improves understanding of how air pollutants, specifically fine particulate matter, cause cardiac pathophysiology. It shows that exposure to PM2.5 harms the heart through innate stress responses. It may also guide the development of drug therapies to prevent adverse cardiac effects of pollutants by targeting specific mechanisms such as sympathetic excitation. Ultimately, it may help guide regulations that decrease the air pollution-related cardiovascular disease burden. There were some limitations to this project. One was loss of the ECG signal: sometimes the ECG did not record correctly because the radiotelemeters malfunctioned. In the future, this project will be explored on a more cellular and molecular level. Several types of drugs will also be used to test their effectiveness in preventing certain mechanisms when exposed to CAPs.

ACKNOWLEDGEMENTS M.T. would like to express his thanks to A.C. for the use of his lab and for his mentorship throughout this project. Additionally, the author expresses his thanks to K.F., who guided him through using ECGAuto and handled the mice. WORKS CITED

1. Anenberg, S. C., Horowitz, L. W., Tong, D. Q., & West, J. J. (2010). An Estimate of the Global Burden of Anthropogenic Ozone and Fine Particulate Matter on Premature Human Mortality Using Atmospheric Modeling. Environmental Health Perspectives, 118(9), 1189-1195. doi:10.1289/ehp.0901220
2. Bhatnagar, A. (1995). Electrophysiological Effects of 4-Hydroxynonenal, an Aldehydic Product of Lipid Peroxidation, on Isolated Rat Ventricular Myocytes. Circulation Research, 76(2), 293-304. doi:10.1161/01.res.76.2.293
3. Brook, R. D., Rajagopalan, S., Pope, C. A., Brook, J. R., Bhatnagar, A., Diez-Roux, A. V., . . . Kaufman, J. D. (2010). Particulate Matter Air Pollution and Cardiovascular Disease: An Update to the Scientific Statement From the American Heart Association. Circulation, 121(21), 2331-2378. doi:10.1161/cir.0b013e3181dbece1
4. Carll, A. P., Hazari, M. S., Perez, C. M., Krantz, Q. T., King, C. J., Haykal-Coates, N., . . . Farraj, A. K. (2013). An Autonomic Link Between Inhaled Diesel Exhaust and Impaired Cardiac Performance: Insight From Treadmill and Dobutamine Challenges in Heart Failure-Prone Rats. Toxicological Sciences, 135(2), 425-436. doi:10.1093/toxsci/kft155
5. Deering-Rice, C. E., Romero, E. G., Shapiro, D., Hughen, R. W., Light, A. R., Yost, G. S., . . . Reilly, C. A. (2011). Electrophilic Components of Diesel Exhaust Particles (DEP) Activate Transient Receptor Potential Ankyrin-1 (TRPA1): A Probable Mechanism of Acute Pulmonary Toxicity for DEP. Chemical Research in Toxicology, 24(6), 950-959. doi:10.1021/tx200123z
6. de Hartog, J. J., Lanki, T., Timonen, K. L., Hoek, G., Janssen, N. A., Ibald-Mulli, A., . . . Pekkanen, J. (2008). Associations between PM2.5 and Heart Rate Variability Are Modified by Particle Composition and Beta-Blocker Use in Patients with Coronary Heart Disease. Environmental Health Perspectives, 117(1), 105-111. doi:10.1289/ehp.11062
7. Hazari, M. S., Haykal-Coates, N., Winsett, D. W., Krantz, Q. T., King, C., Costa, D. L., & Farraj, A. K. (2011). TRPA1 and Sympathetic Activation Contribute to Increased Risk of Triggered Cardiac Arrhythmias in Hypertensive Rats Exposed to Diesel Exhaust. Environmental Health Perspectives, 119(7), 951-957. doi:10.1289/ehp.1003200
8. Hazari, M. S., Callaway, J., Winsett, D. W., Lamb, C., Haykal-Coates, N., Krantz, Q. T., . . . Farraj, A. K. (2012). Dobutamine "Stress" Test and Latent Cardiac Susceptibility to Inhaled Diesel Exhaust in Normal and Hypertensive Rats. Environmental Health Perspectives, 120(8), 1088-1093. doi:10.1289/ehp.1104684
9. Koba, S., Hayes, S. G., & Sinoway, L. I. (2010). Transient receptor potential A1 channel contributes to activation of the muscle reflex. AJP: Heart and Circulatory Physiology, 300(1). doi:10.1152/ajpheart.00547.2009
10. Luttmann-Gibson, H., Laden, F., Schwartz, J., Coull, B., Gold, D., Dockery, D. W., & Link, M. (2011). Air Pollution and Atrial Arrhythmias in Patients With Implanted Cardioverter Defibrillators. Epidemiology, 22. doi:10.1097/01.ede.0000391826.03966.a2
11. Rhoden, C. R., Wellenius, G. A., Ghelfi, E., Lawrence, J., & González-Flecha, B. (2005). PM-induced cardiac oxidative stress and dysfunction are mediated by autonomic stimulation. Biochimica et Biophysica Acta (BBA) - General Subjects, 1725(3), 305-313. doi:10.1016/j.bbagen.2005.05.025
12. Robertson, S., Thomson, A. L., Carter, R., Stott, H. R., Shaw, C. A., Hadoke, P. W., . . . Gray, G. A. (2014). Pulmonary diesel particulate increases susceptibility to myocardial ischemia/reperfusion injury via activation of sensory TRPV1 and β1 adrenoreceptors. Particle and Fibre Toxicology, 11(1), 12. doi:10.1186/1743-8977-11-12
13. Shapiro, D., Deering-Rice, C. E., Romero, E. G., Hughen, R. W., Light, A. R., Veranth, J. M., & Reilly, C. A. (2013). Activation of Transient Receptor Potential Ankyrin-1 (TRPA1) in Lung Cells by Wood Smoke Particulate Material. Chemical Research in Toxicology, 26(5), 750-758. doi:10.1021/tx400024h
14. Trevisani, M., Siemens, J., Materazzi, S., Bautista, D. M., Nassini, R., Campi, B., . . . Geppetti, P. (2007). 4-Hydroxynonenal, an endogenous aldehyde, causes pain and neurogenic inflammation through activation of the irritant receptor TRPA1. Proceedings of the National Academy of Sciences, 104(33), 13519-13524. doi:10.1073/pnas.0705923104
15. Ying, Z., Xu, X., Bai, Y., Zhong, J., Chen, M., Liang, Y., . . . Rajagopalan, S. (2013). Long-Term Exposure to Concentrated Ambient PM2.5 Increases Mouse Blood Pressure through Abnormal Activation of the Sympathetic Nervous System: A Role for Hypothalamic Inflammation. Environmental Health Perspectives. doi:10.1289/ehp.1307151

46


Perfectionism and Body Image Dissatisfaction in Adolescents as a Predictor of Eating Disorders Betty Ngo duPont Manual High School

ABSTRACT Previous studies have shown body dissatisfaction and perfectionism to be elevated in individuals diagnosed with clinically significant eating disorders. However, past research has consistently focused on adult females, and thus other demographics have been under-researched. The purpose of this study was to characterize the relationship between body dissatisfaction and perfectionism in both male and female adolescents in order to identify and diminish eating disorders in their early stages. The relationship was assessed with a survey consisting of three measures: the Frost Multidimensional Perfectionism Scale (FMPS), the Male Body Dissatisfaction Scale, and the Eating Disorder Inventory 2. From the FMPS, two subscales were used as measures of perfectionism: parental criticism and parental expectation. After distributing questionnaires to voluntary participants, ANOVA tests were run to explore the relationship between body dissatisfaction (BD) and perfectionism. The hypothesis was partially supported; parental criticism (fmpsPC) per the FMPS and adolescent body dissatisfaction had a moderate, significant positive correlation (R-squared = 0.194, p = .007), whereas parental expectation (fmpsPE) and body dissatisfaction did not. In past research, high expectations deriving from the individual are an indicator of adaptive perfectionism; this study shows that high fmpsPE acts similarly. fmpsPC is an indicator of body dissatisfaction because it leads to maladaptive perfectionism. However, due to the small sample size, only female BD and fmpsPC had a significant correlation. In future studies, if a larger sample size were obtained, particularly of adolescent males, it may be possible to find other significant relationships.

INTRODUCTION The term body image was first coined by Paul Ferdinand Schilder, an Austrian psychiatrist and psychoanalyst. It refers to a person's personal beliefs, attitudes, and perception of their own body. This issue has become a topic of study for psychologists since the twentieth century due to popular culture and, later, social media influences (Baum, 2000). As time passes, societal beauty standards become increasingly difficult to attain. In the United States, nearly 20 million women and 10 million men suffer from a clinically significant eating disorder at some time in their lives. These eating disorders include anorexia nervosa, bulimia, binge eating disorders, and others (Wade, Keski-Rahkonen, & Hudson, 2011). In addition, many individuals who are diagnosed with eating disorders are dissatisfied with their body (Stice, 2002). There are many factors that provoke body dissatisfaction, notably perfectionism. In fact, according to researchers at the University of Missouri, "research consistently shows perfectionism to be elevated in people with eating disorders and people recovering from eating disorders compared to controls" (Bardone-Cone AM, Sturm K, Lawson MA, Robinson PA, Smith R, 2013). According to Sidney J. Blatt in the paper The Destructiveness of Perfectionism (1995), perfectionism is defined as "the practice of demanding of oneself or others a higher quality of performance than is required by the situation." Blatt also states that perfectionists themselves must characterize their behaviors as perfect, which may be difficult since self-evaluation is flawed. Perfectionism manifests in two forms: maladaptive and adaptive. Adaptive perfectionism is the personal drive that is characterized as normal and healthy; adaptive perfectionists gain satisfaction from their achievements (Park & Jeong, 2015). Maladaptive perfectionism, on the other

47


hand, is defined by higher performance standards and increased self-criticism during self-evaluation. If maladaptive perfectionists see imperfection in their lives, they are more apt to become discouraged and to seek an alternative way to gain acceptance. Psychological maladjustments associated with maladaptive perfectionism include depression, suicidal ideation, anxiety, stress, eating disorders, emotional dysregulation, recurrent physical pain and other medical problems, and less desirable academic performance (Rice & Stuart, 2010). For maladaptive perfectionists, events that cause failure can result in behavioral withdrawal and avoidance. For the purposes of this study, the Frost Multidimensional Perfectionism Scale, the Male Body Dissatisfaction Scale, and the Eating Disorder Inventory 2 were used. The Frost Multidimensional Perfectionism Scale contains 35 items subdivided into 6 sub-categories or subscales: Concern over Mistakes, Personal Standards, Parental Expectations, Parental Criticism, Doubts about Actions, and Organization, which are evaluated on a Likert scale. The Male Body Dissatisfaction Scale was developed to establish initial psychometric properties for measuring male body dissatisfaction, which has risen substantially in recent decades due to Western ideals of male attractiveness. It is a self-report assessment answered on a Likert scale, paired with a rating of importance from 1-10. The Eating Disorder Inventory 2 (EDI-2) is a 64-item, self-report, multiscale measure designed for the assessment of psychological and behavioral traits common in anorexia nervosa and bulimia. It is used as a method for early detection of the risk of developing eating disorders. The measure is divided into 8 subscales: Drive for Thinness, Bulimia, Body Dissatisfaction, Ineffectiveness, Perfectionism, Interpersonal Distrust, Interoceptive Awareness, and Maturity Fears. Because research has concentrated on the large population of adult females diagnosed with eating disorders, other demographics, namely adolescents, have been under-researched. This research seeks to find the correlation between perfectionism and body image dissatisfaction in both male and female adolescents aged 14-19.
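As a generic illustration of how subscale scores such as fmpsPC and fmpsPE are typically obtained (summing Likert item responses per subscale), the sketch below uses placeholder item labels and item counts that do not correspond to the actual FMPS items:

```python
# Hypothetical item responses (1 = strongly disagree ... 5 = strongly agree).
responses = {
    "pc1": 4, "pc2": 3, "pc3": 5, "pc4": 2,   # parental criticism items
    "pe1": 5, "pe2": 4, "pe3": 3,             # parental expectations items
}

subscales = {
    "fmpsPC": ["pc1", "pc2", "pc3", "pc4"],
    "fmpsPE": ["pe1", "pe2", "pe3"],
}

# Each subscale score is the sum of its item responses.
scores = {name: sum(responses[item] for item in items)
          for name, items in subscales.items()}
print(scores)   # e.g. {'fmpsPC': 14, 'fmpsPE': 12}
```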

Intensive research on this issue will provide further insight into body image in adolescents. It is hypothesized that if both subscales of perfectionism are elevated per the Frost Multidimensional Perfectionism Scale, then body image dissatisfaction will be more prevalent in both males and females. Perfectionists are highly critical of themselves; hence, they are more apt to become discouraged. They are also more driven to seek alternative ways to gain acceptance; in this case, they are more prompted to actively lose weight even if those methods are unhealthy. METHODOLOGY The best method to assess the correlations between perfectionism and body dissatisfaction in adolescents was to distribute a questionnaire to interested voluntary participants. Participants were recruited via flyers posted around the school, word-of-mouth, and email. Some participants were also given the opportunity by their teachers to complete the questionnaire during class for extra credit. Upon agreeing to participate in the study, and after consent was received from each minor's parent or legal representative along with subject assent, the questionnaire was given out via an email link. Each participant was de-identified and given a unique ID. Participant names were not included on the questionnaires. Each participant's unique ID was kept on a secured online Excel document. General demographic information, which included age, gender, work status, and religious affiliation, was kept with each individual's unique ID. A link depending on their age and gender was sent to each participant. The individual completed a series of self-report questionnaires that assess personality and behavior on REDCap, a secure web application for building and managing online surveys. The survey took approximately twenty minutes to complete. If the survey was handed out to a student as a hard copy, the student placed the completed questionnaire inside a sealed box in the classroom. DISCUSSION Fifty students participated in the study; however, only 36 females and nine males (20%)

48


fully completed the study. After conducting analysis of variance tests, a moderate positive correlation between female adolescent body dissatisfaction and parental criticism as the measure of perfectionism was found. The r-squared value was 0.194, the F-score (1,35) was 8.173, and the p-value was .007. A simple line of best fit between parental criticism and body dissatisfaction in females was fitted using SPSS (Figure 1).

Figure 1: Simple Linear Regression Model between fmpsPC and BodyDis

Table 1: Descriptive Statistics

The mean value for body dissatisfaction in adolescent females was 28.33 and the mean value for male body dissatisfaction was 43.42 (Table 1). Compared with the norm values from Development and Validation of a Multidimensional Eating Disorder Inventory for Anorexia Nervosa and Bulimia, in which the average score for an individual who is not dissatisfied with their body is 15.14 for females and 15.43 for males, the mean body dissatisfaction scores in this sample are higher. There were no other significant correlations found. Body dissatisfaction did not have a significant correlation with parental expectation in either females or males: for females, the r-value was 0.122 and the p-value was 0.479, and for males the r-value was 0.359 and the p-value was 0.343. Male body dissatisfaction also did not have a significant correlation with parental criticism; the r-value was 0.402 and the p-value was 0.283 (Table 2).

Table 2: Correlational Statistics

There are, however, limitations to the results. Power tests indicated that a sample of 60 students was needed for the correlational analyses: 30 females and 30 males. Since the sample size for adolescent males was below this benchmark, that could be a reason why no significant relationships were found in that demographic.
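A correlation of this kind can also be checked outside SPSS; the sketch below fits a simple least-squares line with SciPy and reports r, r-squared, and the p-value, using fabricated placeholder scores rather than the study data:

```python
from scipy import stats

# Hypothetical fmpsPC (parental criticism) and body dissatisfaction scores.
fmps_pc  = [8, 12, 15, 9, 14, 18, 11, 16, 13, 10]
body_dis = [20, 26, 31, 22, 29, 38, 25, 33, 27, 24]

result = stats.linregress(fmps_pc, body_dis)   # simple least-squares line of best fit
print("slope:", round(result.slope, 2),
      "r:", round(result.rvalue, 3),
      "r^2:", round(result.rvalue ** 2, 3),
      "p:", round(result.pvalue, 4))
```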

CONCLUSION Female adolescent body dissatisfaction and parental criticism were moderately positively correlated, with an r-squared value of 0.194, an F-score (1,35) of 8.173, and a p-value of 0.007. However, there were no significant relationships between body dissatisfaction and parental expectation. Past research by Enns, Cox, and Clara states that high expectations in general are an indicator of positive outcomes and, in this case, adaptive perfectionism. Adaptive perfectionists strive to meet high expectations from themselves by applying large amounts of effort to an activity. It is characterized as a normal, healthy type of perfectionism and is defined by satisfaction from achievements made through intense effort while tolerating imperfections without resorting to harsh self-criticism. The results of this study show that parental expectation acts in the same way. As parents expect more from their child (an example item from the fmpsPE being, "My parents set high standards for me"), the child will strive to meet these expectations to appease their parents; but since there are no consequences paired with not meeting the expectations, only satisfaction gained from meeting them, the child does not resort to high self-criticism. On the other hand, parental criticism is an indicator of body dissatisfaction because, according to "Adaptive and maladaptive perfectionism: developmental origins and association with depression proneness" by Enns,

49


Cox, and Clara, it leads to negative outcomes, in this case maladaptive perfectionism. An example item from the fmpsPC subscale is "As a child, I was punished for doing things less than perfectly," signifying consequences faced for imperfect actions. Thus, body dissatisfaction is moderately positively correlated with parental criticism.

LIMITATIONS There are some limitations to the results and the data collected. There may have been bias in the sample: not everyone who might have wanted to participate was aware of the study I was conducting. There may also have been convenience bias, since it was easier to collect consent and assent forms from my classmates, and so the data are skewed toward my classmates. Because the sample sizes were relatively small, especially for adolescent males, only one significant relationship may have been detectable. With larger sample sizes, body dissatisfaction might be significantly correlated with parental expectation, and adolescent male body dissatisfaction might be significantly correlated with parental criticism.

FUTURE STEPS If this project were expanded, the sample sizes for both adolescent females and males would be increased and bias would be reduced as much as possible. It would also be interesting to see how the results change when using other subscales of the FMPS or an entirely different perfectionism scale, such as the Hewitt and Flett Multidimensional Perfectionism Scale. It would also be interesting to include gender-fluid individuals to examine their ideals of the "perfect body." Finally, looking more deeply into the results to identify differences in perfectionism levels between magnet programs at duPont Manual High School would be worthwhile.
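The 60-student benchmark mentioned in the discussion came from a power analysis. The sketch below is not the authors' calculation; it only illustrates one common approximation, based on Fisher's z-transformation, of the sample size needed to detect a Pearson correlation of an assumed magnitude at a chosen alpha and power.

```python
# Hedged sketch: approximate sample size needed to detect a Pearson
# correlation via Fisher's z-transformation. The target correlation, alpha,
# and power are assumptions, not the authors' actual power-analysis inputs.
import numpy as np
from scipy.stats import norm

def n_for_correlation(r_target, alpha=0.05, power=0.80):
    """Approximate n needed to detect |r| >= r_target in a two-tailed test."""
    z_alpha = norm.ppf(1 - alpha / 2)    # critical z for the chosen alpha
    z_beta = norm.ppf(power)             # z corresponding to the desired power
    fisher_z = np.arctanh(r_target)      # Fisher z-transform of the correlation
    return int(np.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3))

# Example: detecting a moderate correlation of about 0.44 (r-squared near 0.19,
# as found for females) would require roughly this many participants.
print(n_for_correlation(0.44))
```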

WORKS CITED

Bardone-Cone, A. M., Sturm, K., Lawson, M. A., Robinson, P. A., & Smith, R. (2010). Perfectionism across stages of recovery from eating disorders. International Journal of Eating Disorders, 43, 139-148.
Blatt, S. J. (1995). The destructiveness of perfectionism: Implications for the treatment of depression. American Psychologist, 50(12), 1003-1020. doi:10.1037/0003-066x.50.12.1003
Enns, M. W., Cox, B. J., & Clara, I. (2002). Adaptive and maladaptive perfectionism: developmental origins and association with depression proneness. Personality and Individual Differences, 33(6), 921-935. doi:10.1016/s0191-8869(01)00202-1
Frost, R. O., Marten, P., Lahart, C., & Rosenblate, R. (1990). The dimensions of perfectionism. Cognitive Therapy and Research, 14(5), 449-468. doi:10.1007/bf01172967
Garner, D. M., Olmstead, M. P., & Polivy, J. (1983). Development and validation of a multidimensional eating disorder inventory for anorexia nervosa and bulimia. International Journal of Eating Disorders, 2(2), 15-34. doi:10.1002/1098-108x(198321)2:2<15::aid-eat2260020203>3.0.co;2-6
Ochner, C. N., Gray, J. A., & Brickner, K. (2009). The development and initial validation of a new measure of male body dissatisfaction. Eating Behaviors, 10(4), 197-201. doi:10.1016/j.eatbeh.2009.06.002
Park, H.-J., & Jeong, D. Y. (2015). Psychological well-being, life satisfaction, and self-esteem among adaptive perfectionists, maladaptive perfectionists, and nonperfectionists. Personality and Individual Differences, 72, 165-170.
Rice, K. G., & Stuart, J. (2010). Differentiating adaptive and maladaptive perfectionism on the MMPI-2 and MIPS Revised. Journal of Personality Assessment, 92(2), 158-167.
Shomaker, L. B., & Furman, W. (2010). A prospective investigation of interpersonal influences on the pursuit of muscularity in late adolescent boys and girls. Journal of Health Psychology, 15(3), 391-404. doi:10.1177/1359105309350514
Shore, R. A., & Porter, J. E. (1990). Normative and reliability data for 11 to 18 year olds on the eating disorder inventory. International Journal of Eating Disorders, 9(2), 201-207. doi:10.1002/1098-108x(199003)9:2<201::aid-eat2260090209>3.0.co;2-9

ACKNOWLEDGEMENTS I would like to express my gratitude to Leigh C. Brosof, B.A., Lisa Michelson, B.A., Benjamin Calebs, B.A., and Cheri A. Levinson, Ph.D., of the Eating Anxiety Treatment Lab in the University of Louisville's Department of Psychological and Brain Sciences for assisting with and overseeing my project (16.0876). I would also like to thank my family and Mr. Zwanzig for their unconditional support.

50


The Effects of the Pacifier Activated Lullaby on the Sleep Scores, Feeding Scores, Feeding Volumes, and Finnegan Scores of Newborns with Neonatal Abstinence Syndrome
Aakash Mehta, Sanya Mehta
duPont Manual High School

Summary Neonatal abstinence syndrome (NAS) is a disorder that occurs in newborns whose mothers used drugs during pregnancy. It is diagnosed when a patient has withdrawal symptoms such as tremors, mottling, seizures, and irritability. The Pacifier Activated Lullaby (PAL) device (made by Powers Medical Devices LLC) could be used as a therapeutic intervention for newborns with NAS. In this study, the effect of the PAL system was investigated by comparing sleep scores, feeding scores, feeding volumes, and total Finnegan scores after PAL interventions to those before PAL interventions, and by comparing the scores of an experimental group that received PAL interventions to those of a control group that received interventions with a standard orange pacifier. The results suggested an improvement in overall Finnegan scores after PAL interventions compared to before PAL interventions, which may mean that the PAL system could be used as a treatment modality for newborns with NAS.

INTRODUCTION Newborns with NAS commonly suffer from withdrawal symptoms. Morphine is often used in this setting to decrease the incidence of seizures, decrease agitation, decrease diarrhea, improve feeding, and increase sleep duration (Kocherlakota, 2014). Other symptoms associated with neonatal abstinence syndrome are increased muscle tone, disturbed tremors, undisturbed tremors, myoclonic jerks, and decreased sleep duration (Tierney, 2013). These symptoms result from withdrawal from a drug or combination of drugs that the newborn's mother took during her pregnancy, such as methamphetamines, heroin, methadone, other opiates, and selective serotonin reuptake inhibitors (SSRIs) (Tierney, 2013). In addition to morphine, music therapy using the

PAL device may help lower Finnegan scores, which measure the severity of NAS, as well as stabilize sleep cycles and feeding behavior. Other researchers have found a decrease in length of hospital stay with the use of music therapy (Standley, 1998). Overall, the research on the effect of music therapy on patients with neonatal abstinence syndrome is mixed, with some studies showing favorable results and others showing no difference. The PAL device plays music for a newborn when the newborn sucks on a pacifier; in newborns with relatively healthy sucking patterns, this contingent feedback can further improve the suck pattern. "A pressure transducer system, relying on sensing, control and feedback algorithms integrated into the device, can be calibrated to individual babies to provide the lullaby feedback whenever the preset pressure criteria indicating correct sucking is met" (Quick, 2012). The device relies on this system to teach infants how to suck properly, as sucking is an essential skill for developing the ability to feed. One study concluded that the PAL device can decrease a premature infant's length of stay ("New Musical Pacifier Helps Premature Babies Get Healthy," 2012). However, another study showed somewhat contradictory results, indicating that music therapy using the PAL device did not make a significant difference in premature infants' sucking patterns (Cevasco and Grant, 2005). In short, some research supports the idea that the PAL device improves the feeding behavior of premature infants, while other research indicates no significant effect. This study looked at the effect of the PAL device on the Finnegan scores, sleep scores, feeding scores, and feeding volumes of newborns with NAS. The Finnegan scoring system is used to measure the severity of the symptoms of NAS.

51


It consists of 21 symptoms under 3 categories: central nervous system disturbances, motor/vasomotor/respiratory disturbances, and gastrointestinal disturbances. The minimum score is 0 and the theoretical maximum is 37. It was hypothesized that if the PAL device were given to newborns with NAS, then their Finnegan scores, feeding scores, and sleep scores would decrease and their feeding volumes would increase compared to those of newborns receiving the standard of care only. This outcome was expected for several reasons. First, lower Finnegan scores were expected because they indicate an improvement in symptoms. Second, the music therapy was expected to calm the baby to sleep. Third, feeding volumes might increase because music therapy may promote more coordinated sucking patterns. Many doctors observe newborns with NAS suffering from extreme symptoms, but currently, other than pharmacological modalities, few interventions exist. METHODOLOGY The study was designed as a prospective, randomized controlled trial with one experimental group and one control group. The study period lasted four months, from mid-September 2016 until mid-January 2017. Participants were newborns with a gestational age greater than 36 weeks who were admitted to Norton Women's and Children's Hospital's Neonatal Intensive Care Unit (NICU) with a diagnosis of NAS and were treated with morphine and clonidine. A diagnosis of NAS was given to newborns who had 2 Finnegan scores of 8 or greater or 1 Finnegan score of 12 or greater. Researchers excluded patients if they had a diagnosis that would interfere with feeding, were referred on a hearing screen, were transferred to another facility, or had neurological symptoms not associated with NAS. Following a physician's order for music therapy, it was confirmed that an eligible participant met the inclusion criteria, after which the patient was randomized to either the control group or the experimental group using an online random number generator on http://vassarstats.net/. The control group received the standard of

care, which included the provision of a standard orange (soothie) pacifier for 10 minutes, within 30 minutes of a feeding, about 4 times per week throughout their length of stay. The experimental group received standard care along with music therapy via the PAL device beginning within 72 hours of starting pharmacological intervention (e.g., morphine and clonidine). PAL therapy was also provided for 10 minutes, within 30 minutes of a feeding, about 4 times per week throughout each infant’s stay. The PAL device, an FDA-approved medical device, uses a standard soothie pacifier fitted on a pressure-sensitive sensor that activates contingent lullaby music as positive reinforcement for an infant sucking on the pacifier. The recorded lullaby music was evidence-based and intended for soothing neonates. The music was played via speakers within the PAL device placed at the head of the infant’s bed. After the infant activated the music by sucking on the pacifier, the music automatically played for 10 seconds and subsequently shut off unless reactivated by the infant’s suck. The music therapist adjusted the settings on the PAL device to match the infant’s suck strength. For example, if the infant had a strong suck strength (measured by the “threshold”

Figure 1: Finnegan Score Before and After Intervention for the Control and Experimental Patients

with scores from 1-10), the therapist would increase the threshold setting, requiring the infant to suck at the higher threshold in order to activate the music. An electronic medical record review retrieved demographic data, including gestational age, date of birth, date of admission to the NICU, and maternal drug exposure. In addition, it included

52


all dependent variables. Dependent variables for both study groups included 3 Finnegan scores prior to and after each intervention, 3 feeding volumes prior to and after each intervention, and 3 feeding quality descriptions prior to and after each intervention. Nursing staff trained in Finnegan scoring recorded Finnegan scores prior to each feed, recorded each feeding volume in milliliters, and reported feeding quality as "poor," "fair," or "well."
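The contingent-feedback rule described above (a suck above the calibrated pressure threshold triggers 10 seconds of lullaby playback, which stops unless reactivated) can be pictured with a short simulation. This is an illustrative sketch only, not the PAL device's firmware; the pressure values and per-second suck stream are hypothetical.

```python
# Illustrative sketch of the contingent-playback rule: a suck at or above the
# calibrated threshold (re)starts a 10-second music window. Pressure units and
# the suck-event stream are hypothetical, not PAL device internals.
PLAY_SECONDS = 10  # music plays for 10 s per activation, per the protocol

def playback_schedule(suck_pressures, threshold):
    """Return (second, music_on) pairs for a per-second stream of suck pressures."""
    music_off_at = 0          # time at which the current playback window ends
    schedule = []
    for t, pressure in enumerate(suck_pressures):
        if pressure >= threshold:            # correct suck detected
            music_off_at = t + PLAY_SECONDS  # (re)start the 10-second window
        schedule.append((t, t < music_off_at))
    return schedule

# Example: raising the threshold (the device's 1-10 scale) for an infant with a
# strong suck makes the music harder to activate, as the therapist did above.
sucks = [0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 0, 0]
for threshold in (3, 6):
    seconds_on = sum(on for _, on in playback_schedule(sucks, threshold))
    print(f"threshold {threshold}: music on for {seconds_on} of {len(sucks)} seconds")
```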

Figure 2: Feeding Volumes Before and After Intervention for the Control and Experimental Patients

Figure 3: Total Finnegan Scores Before and After Intervention for the Experimental Patients

RESULTS Figure 1 shows the p-values for the sleep scores, feeding scores, and Finnegan scores. P-values were derived from t-tests. The p-value for the sleep scores was 0.77 in the control group and 0.24 in the experimental group, the p-value for the feeding scores was 0.39 in the control group and 0.86 in the experimental group, and the p-value for the Finnegan scores was 0.56 in the control group and 0.016 in the experimental group. Figure 2 shows that the p-value for the feeding volumes was 0.72 in the control group and 0.98 in the experimental group. Figure 3 shows that the trend line is lower for the patients after the intervention than before the intervention. The red dots in Figure 3 represent the average of the 3 Finnegan scores recorded before the intervention, and the blue dots represent the average of the 3 Finnegan scores recorded after the intervention; in general, the red dots tend to be higher than the blue dots.

DISCUSSION The purpose of this study was to determine whether the PAL system would have an effect on the sleep scores, feeding scores, feeding volumes, and Finnegan scores of newborns with neonatal abstinence syndrome. If the PAL system had an effect on any of these variables, it could serve as a treatment modality for babies with NAS. The data do not support the hypothesis that the Finnegan scores, feeding scores, feeding volumes, and sleep scores of newborns with NAS who received PAL interventions would improve compared to those of newborns who received the standard of care only. However, the data do suggest improvement in newborns with NAS after receiving PAL interventions. This was shown by the p-values comparing the averages of the 3 Finnegan scores recorded before the interventions to the averages of the 3 Finnegan scores recorded after the interventions in each group. The p-value for the difference in total Finnegan scores between the experimental and control groups was 0.1. The p-value comparing the average Finnegan scores before and after the interventions was 0.56 in the control group, while the corresponding p-value in the experimental group was 0.016. Also, Figure 3 shows that the average of the 3 Finnegan scores decreased after the interventions compared to the average of the 3 Finnegan scores before the interventions; as noted above, the red dots (pre-intervention averages) tend to be higher than the blue dots (post-intervention averages).

53


We conclude that the data supported the hypothesis that the PAL system decreased the Finnegan scores. However, the p-values comparing the averages of the sleep scores, feeding scores, and feeding volumes before the interventions to those after the interventions were not significant, so the data do not suggest that the PAL system had any effect on those measures. Building on this study, further experimentation can be conducted to observe additional effects of music therapy on babies with NAS. It is suggested that interventions be conducted multiple times per day and for a longer duration in order to make the music therapy more effective. In future studies, it would be interesting to examine the roles of gender and of exposure to one versus multiple drugs in individual improvement before and after PAL interventions. It would also be interesting to look at the effect of music therapy using the PAL system on length of treatment.
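As noted in the results, the p-values came from t-tests. The sketch below shows the kind of comparisons that could produce them; whether the authors used paired or independent tests is not stated, so both are shown here as assumptions, with made-up Finnegan score averages.

```python
# Hedged sketch of t-tests like those that could produce the reported p-values.
# All score values are made up; paired vs. independent testing is an assumption.
import numpy as np
from scipy import stats

# Hypothetical per-infant averages of the 3 Finnegan scores recorded before
# and after the interventions in the experimental group.
before = np.array([9.7, 11.3, 8.0, 12.7, 10.3, 9.0, 13.3, 11.0])
after = np.array([8.3, 10.0, 8.7, 10.7, 9.0, 8.3, 11.7, 10.3])

# Within-group comparison: did the averages change after the intervention?
t_within, p_within = stats.ttest_rel(before, after)

# Between-group comparison: experimental vs. a hypothetical control group,
# comparing each infant's before-minus-after change with an independent t-test.
experimental_change = before - after
control_change = np.array([0.3, -0.7, 0.0, 1.0, -0.3, 0.7, 0.3, -1.0])
t_between, p_between = stats.ttest_ind(experimental_change, control_change)

print(f"before vs. after (paired): t = {t_within:.2f}, p = {p_within:.3f}")
print(f"experimental vs. control change: t = {t_between:.2f}, p = {p_between:.3f}")
```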

ACKNOWLEDGEMENTS Our study was conducted at Norton Women’s and Children’s Hospital. We would like to thank our mentor Dawn Forbes, music therapist Michael Detmer, and the nursing staff that conducted the interventions.

WORKS CITED

1. Allen, K. A. (2013, October). Music Therapy in the NICU: Is There Enough Evidence to Support Integration for Procedural Support? Retrieved November 6, 2016, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3826794
2. Arnon, S., Shapsa, A., Forman, L., Regev, R., Bauer, S., Litmanovitz, I., & Dolfin, T. (2006, May 24). Live Music is Beneficial to Preterm Infants in the Neonatal Intensive Care Unit Environment. Retrieved November 2, 2016, from http://onlinelibrary.wiley.com/doi/10.1111/j.0730-7659.2006.00090.x/full
3. Cevasco, A. M., & Grant, R. E. (2005). Effects of the Pacifier Activated Lullaby on Weight Gain of Premature Infants. Retrieved November 6, 2016, from http://www.ncbi.nlm.nih.gov/pubmed/15913390
4. Hartling, L., Shaik, M. S., Tjosvold, L., Leicht, R., Liang, Y., & Kumar, M. (2009, May 28). Music for Medical Indications in the Neonatal Period: A Systematic Review of Randomized Controlled Trials. Retrieved November 2, 2016, from http://fn.bmj.com/content/94/5/F349.short
5. Kocherlakota, P. (2014, August). Neonatal Abstinence Syndrome. Retrieved November 2, 2016, from http://pediatrics.aappublications.org/content/134/2/e547
6. Loewy, J., Stewart, K., Dassler, A., Telsey, A., & Homel, P. (2013, April 15). The Effects of Music Therapy on Vital Signs, Feeding, and Sleep in Premature Infants. Retrieved November 2, 2016, from http://pediatrics.aappublications.org/content/pediatrics/early/2013/04/10/peds.2012-1367.full.pdf
7. Quick, D. (2012, May 21). Pacifier Activated Lullaby Device Uses Music to Teach Premature Babies How to Feed. Retrieved November 2, 2016, from http://newatlas.com/pacifieractivated-lullaby-device/22617/
8. New Musical Pacifier Helps Premature Babies Get Healthy. (2012, May 21). Retrieved November 2, 2016, from https://www.eurekalert.org/pub_releases/2012-05/fsu-nmp052112.php
9. Standley, J. M. (1998, November/December). The Effect of Music and Multimodal Stimulation on Responses of Premature Infants in Neonatal Intensive Care. Retrieved November 2, 2016, from http://search.proquest.com/openview/16b96431ffbe800457f284a34ffa20a3/1?pq-origsite=gscholar
10. Tierney, S. (2013, February 11). Identifying Neonatal Abstinence Syndrome (NAS) and Treatment Guidelines. Retrieved November 2, 2016, from https://www.uichildrens.org/uploadedFiles/UIChildrens/Health_Professionals/Iowa_Neonatology_Handbook/Pharmacology/Neonatal Abstinence Syndrome Treatment Guidelines Feb2013 revision.pdf
11. Standley, J. M., Cassidy, J., Grant, R., Cevasco, A., Szuch, C., Nguyen, J., . . . Adams, K. (2010). The Effect of Music Reinforcement for Non-Nutritive Sucking on Nipple Feeding of Premature Infants. Continuing Nursing Education, 36(3), 138-145. doi:20687305

54


