
INNOVATION
THE MANUAL SCIENCE REVIEW

VOLUME 1 2015-16


Table of Contents

Cover
Innovation Team

Feature Articles
Advice from the Best: Interviews of Successful High School Research Scientists
Succeeding in Research without a Mentor
Team Projects in Science Fair and How to Make Them Work
A Brief History of Cancer

Research Articles
MicroRNA-203 induces cell death and inhibits PI3K/Akt signaling through PIK3CA in hepatocellular carcinoma
Vapor Propelled Marangoni Effect
Implementation of a Diverter Based Water Conservation System for Showers
The Effect of Shape, Weight, and Diameter on Haptic Perception: An Active Haptic Sensing Study of the Predicted and Actual Grip Forces and their Impacts on Weight Detection Thresholds
Programming a Handwriting Interpreter using Artificial Neural Networks
Individual Based Computer Modeling of MERS Super Spreaders



Innovation Team

Managing Editor: Harsha Paladugu

Peer Reviewers: Conor Blackburn, Sophie Lai, Emily Liu, Harsha Paladugu, Matthew Raj

Feature Article Authors: Sophie Korner, Diya Mathur, Harsha Paladugu, Allison Tu, Sasank Vishnubhatla

Research Article Authors: Annie Zhang, William Schuhmann, Conor Blackburn, Diya Mathur, Sophie Korner, Joshua Ashley, Abraham Riedel-Mishaan, Greg Schwartz, Sarah Schwartz



Advice from the Best: Interviews of Successful High School Research Scientists

When I began my high school research career, I solicited advice from just about everyone: teachers, students, professors, and parents. However, the advice I could most relate to was that of the students. For this interview section, we asked four of our experienced alumni, all extremely successful at science research, for their advice on high school research. As you read, you will notice varying perspectives; this is meant to illustrate the variety of opinions and practices when it comes to high school research. We hope that these responses will give you useful advice to carry with you throughout your high school career. - Harsha

When did you get involved in high school science research?

I started doing science fair in the 5th grade, but it wasn't until sophomore year that I became heavily involved in research. I started off doing a team project with my best friend Rishi Jonnala on sodium-ion batteries. We never really thought of making it far, but once I was selected to compete at the 2014 Intel ISEF, that changed my entire mindset on research. Since then I have become heavily invested in research and hope to pursue a career in it later down the road. Roshan Duggineni, duPont Manual High School Class of 2016 (R.D.)

In the summer before sophomore year, I came across my research mentor after searching for labs that specialized in stem cells. I wasn't very familiar with stem cells, or biology in general, at the time, but he walked me through everything I needed to know. That learning process, coupled with the classes I was taking at Manual, really helped spur my involvement in biological research. Kevin Tien, duPont Manual High School Class of 2016 (K.T.)

I became involved in high school science research in the summer of my freshman year, through the James Graham Brown Cancer Center (JBCC) Summer Research Program. Jingjing Xiao, duPont Manual High School Class of 2014 (J.X.)

I actually got started with research in middle school after my then-science teacher encouraged me. I enjoyed it so much that when I got to high school, conducting research was something I wanted to pursue, especially since I was a part of the Math, Science, and Technology (MST) Program at Manual. Sanjana Rane, duPont Manual High School Class of 2016 (S.R.)

Who has been the single most influential person for your high school science career?

I would have to say my science teacher Mr. Zwanzig has been the greatest supporter of mine since day one. He has been a mentor to me since freshman year and has traveled with me to two international fairs. (R.D.)

Venkat Ramakrishnan, the MD-PhD student who supervised me in the lab, was the most integral figure in my high school science career. Venkat graduated from Dunbar not too long ago, so in addition to all the guidance he gave me for my project, he was able to use his high school experiences in Dunbar's STEM program to advise me on what science-related things I should be doing within school. (K.T.)

Dr. Kara Sedoris, my research mentor, has been the single most influential person for my high school science career. Her guidance and support have been critical for my four years of cancer research. (J.X.)

Without a doubt, my mother has been my biggest role model. Having spent a large part of my childhood in her office or in her laboratory, I was exposed to science at a very young age. My mother, being a researcher herself, shared with me her experience with the scientific process as well as the details of her discoveries. Those conversations with my mom are what originally sparked my curiosity about the way things work. The best thing she ever did for me was to speak to me, not as a child, but as another researcher. (S.R.)

What do you think was the most formative part of your science research experience?

I believe the most formative part would be the opportunity to present at both the 2014 and 2016 Intel ISEF and the 2015 and 2016 ISWEEEP. At these fairs, it's not necessarily the idea of competing that was rewarding; rather, it was the opportunity to see the scope of research that others were doing. Seeing that scope of work and the potential applications forced me to evaluate the practical and realistic impact of my own work. I also got to meet people from around the world. In this diverse environment, I was inspired by the exchange of ideas and the network and community that I quickly built with my peers. Throughout these fairs I also had the opportunity to meet with multiple organizations, such as NASA, Alcoa, and Tesla, all of which spurred my thinking toward possible market applications of my research. (R.D.)

Presenting at national conferences and competitions was the most formative part of my high school research experience. It was wonderful to meet my peers, learn about their work, and learn from how they tell their stories. (J.X.)

Ironically, I think the most formative part of my research career came after the end of sophomore year, when I seemed to have reached an impasse in my project. I knew acrolein was affecting the biochemistry of the cell, but I couldn't pinpoint a single marker or protein it was using that justified the development of fibrosis. During those weeks, however, I learned the most about the scientific process, and more importantly I learned about failure's crucial role in any research project. I tried out a multitude of assays, I read as many previous studies as I could get my hands on, and I tested for various protein markers. My patience was at an all-time low, but my creativity was at an all-time high, and that is exactly what research is all about. You never know where to start, so you try everything and anything. (S.R.)

What is one thing you wish you had known before you began scientific research?

The one thing I wish I had known before I began scientific research is the amount of criticism you encounter during the process. Ever since my sophomore year I have been doubted as to whether my project idea would succeed or whether I deserved to attend four international fairs. But now I gladly accept criticism because it allows me to gain another perspective on my work, and at the same time I get to prove others wrong. (R.D.)

I wish I had known the importance of organization, especially when it comes to data collection. When it came time to start writing my paper, I spent way too much time searching for old results simply because I didn't label the files with the necessary details. It's critical to plan where and how to store all of your data in case you need it in the future. (K.T.)

I wish I had known that presentation and paper writing are just as important as, if not more important than, the research itself. No one will know how to value my research unless I prove its value through my presentation and my writing. With biology research especially, the path to making a particular discovery an actuality is a very long one. For me, research was intriguing because it gave me the potential to solve important problems that could stand to help a large number of people. However, before any benefit can be extracted, there are countless experiments with cells, then tissue, then mice, and so on. It is important to recognize that research is more of a marathon than a sprint, but it is nonetheless still a race worth running. (J.X.)

What effective presentation techniques did you use?

The one thing I always kept in mind was body language: whether I was making eye contact and whether I was properly using my hands to demonstrate my project, because doing this allows me to engage the listener. Another thing I always did was purposefully lead the judges to ask certain questions. Those were just a few of the things I used, but most importantly I knew my project inside and out. (R.D.)

I tried to incorporate animated images into my PowerPoints to explain some concepts of my project and to bring more liveliness to my presentation. I used them to illustrate the organization and the growth of the blood vessels of my device, and I think this visual representation communicated progression over time in a cleaner way than a paragraph of words would have. On my presentation board, I converted this graphic into a flowchart for the same purpose. (K.T.)

To me, the presentation is one of the most important aspects of science fair. Conducting good research is only half the battle; scientists must also know how to communicate their ideas effectively. I like to try to establish a friendly connection with the judges. It is the little things that count, like shaking a person's hand, making eye contact, and having a smile on your face. During your presentation, you want to seem enthusiastic (but not too enthusiastic! Being super peppy can overwhelm them), and you want to speak at a pace that is not too fast, so that the audience can actually follow what you've done (remember, you did the work, so you know your project better than anyone else; it will take other people more time to pick up what you have done). Also, try to limit your gestures (keep them small!). If you have huge gestures, people will be distracted; you want them focused completely on you. Additionally, make sure you keep it professional. You (especially as a younger person) want people to respect you and respect the work that you have done. Excessive giggling, hair twirling, and slang can all take away from the hard work that you have done. Lastly, and most importantly, confidence is key. The secret to life is 25% actual content and 75% confidence (but beware of overconfidence; don't be full of yourself). If it isn't clear that you think your work is important, then why should anyone else? Take a deep breath, and give it your best shot. (S.R.)

What are some effective ways you prepared for research presentations?

Unlike most people, I never really practiced except for maybe the night before, and when I did practice I would review a PowerPoint I made once or twice, not rehearsing it word for word. Another thing I would do was watch a few TED Talks the night before or the day of competition. I was also a bit superstitious, so I always wore the same suit and tie that I'd won in wherever I competed. (R.D.)

I really like Pecha Kucha, a style of presentation in which you talk about 20 slides for 20 seconds each. My slide shows often ran longer than 20 slides, but the need to break every presentation down into 20-second intervals helped ensure that I made my highly specialized research accessible. (J.X.)

I prepare for research presentations by videotaping myself and critiquing the videos repeatedly. (J.X.)

The most important part of my presentation is the idea flowchart I visualize. Rather than planning out an entirely scripted presentation, which in my opinion can lead to your pitch sounding overly rehearsed and less dynamic, I would create a rough sketch of the ideas I wanted to touch upon and the order I would most likely follow. I knew the important parts of my project that I really wanted the judges to internalize, such as the background information, conclusions, and practical applications, so those parts of the presentation were slower and more emphasized. Knowing where I wanted to go next in my thought process was always helpful because it helped me avoid stumbling over my words or drawing a blank. (S.R.)

How did you become interested in your topic of research?

I've been interested in the idea of renewable energy since the 6th grade, but once I read that batteries were the future of renewable energy, I began thinking about the possibilities of making lithium-ion batteries better. Thus, I thought of creating an alternative sodium-based battery. Since then, I have developed a sodium-CFx and a sodium-sulfur battery. (R.D.)

I became interested through my first research experience with the JBCC summer research program. (J.X.)

For the past four years, I have been researching how the unregulated environmental pollutant acrolein affects the kidneys. The idea stemmed from a study I read, conducted by USA Today, that placed duPont Manual in the top 2% for worst air quality out of more than 100,000 schools tested in the country. This concerned me, and ultimately made me interested in environmental pollutants. I chose to study acrolein because not only is it present in car exhaust fumes, cigarette smoke, and plastic and petrochemical industrial smoke, it is also currently unregulated by the EPA. In the past few years, more and more studies have connected high levels of pollution with kidney disease. In a study conducted at Mt. Sinai Hospital, many 9/11 first responders are now showing advanced signs of kidney disease, something most doctors overlooked in the aftermath. Therefore, I decided to investigate whether acrolein had harmful effects, and thus should be regulated, and how exactly it was straining our kidneys. (S.R.)

What do you think are some challenges of starting high school research?

Some challenges include time management, the commitment to research, and the criticism and competitiveness that you face during high school. Everyone wants to win at science fair, but the only way you win is through 100% commitment to your work. If you show passion, creativity, and dedication, the possibilities are endless. (R.D.)

It depends on where you go to school and where you seek to perform research. At duPont Manual and the University of Louisville, there is a lot of institutional support for high school researchers. I've heard that getting into a lab can be prohibitively difficult in environments where this is not the case. (J.X.)


Research requires a big commitment; that goes without saying. As a high schooler, you are limited not just by time commitments from homework and extracurriculars, but also by transportation and accessibility. Many PIs underestimate the ability of high schoolers to make a lasting impact in the lab and actually contribute to the research being done. I think being taken seriously is a major challenge. On the same note, however, I want to say that it is imperative to prove them wrong. Conduct background research on other publications; become more knowledgeable in that particular field; take initiative and contact people who are doing interesting research. (S.R.)


Succeeding in Research without a Mentor
Sasank Vishnubhatla

Though scientific research has been a huge part of my high school career, I have gone through the process for the past two years without a research mentor to guide me. Without one, I have successfully completed two major research projects: the first, the creation of an algorithm built to facilitate communication between two people, and the most recent, the development of an iris recognition algorithm designed for the protection of sensitive data. Both of my projects fell under the field of cryptography, the study of information security. This remains a relatively obscure domain, particularly within the Louisville computer science community, and many professors initially deemed my research ideas too complex and challenging for a high school student to tackle. While both endeavors seemed daunting at first, executing two projects without a mentor has allowed me to grow and mature in my ability to both work independently and actively seek out assistance when necessary.

Identifying one's topic is easily the most difficult part of any independent research project. After the 2011 Sony PlayStation hack, of which I was a victim, I became interested in the fields of cybersecurity and cryptography. In general, basing one's research on existing passions or significant personal events is a great starting point. If a student already has knowledge or background in a particular field, s/he will have more context for how to approach the research process, even without an adult mentor.

Without a research mentor, the task of establishing a fundamental knowledge base in one's area of study can be difficult and time-consuming. I've found that online educational resources can be an excellent substitute. Among them, I took Stanford University's Cryptography I course via Coursera, which also offers many other free or inexpensive (under $100) classes. After acquiring some general information, I read many research papers to gain a deeper, specialized understanding. For my first project, I read over 50 articles, and the next year, I found over 150 papers on iris recognition. The more information you find, and the more you read, the better your knowledge base will be.

Once you've extensively investigated your area of study, you need to take a step back and design your research plan. It doesn't matter what format or structure this plan takes, but it should highlight two things: What is the problem you're trying to solve? How are you going to solve that problem? It may take several days to design your research plan, but be sure to do so: it will keep you on track during your execution phase and provide a clear end goal. It's not uncommon to go back and read more research papers about your topic at this point.

After designing your research plan, execution is the next step. This may seem intimidating, but I've found through experience that the best strategy is just to start working. If you encounter a roadblock, consider alternate routes to your end goal. Perseverance is key in any computer science or engineering project, especially when you don't have a mentor. When I was developing my cryptographic algorithm, I worked from October to February without having any functional code. For my computer science questions, I consulted Stack Exchange, a useful resource for answering technical questions in the engineering disciplines.

Although the process seems (and is) difficult, research without a mentor has been one of the most educational and fulfilling experiences of my life. It's an amazing accomplishment to successfully prepare for, design, and execute a research project. If you proceed with passion, motivation, and curiosity, independent research won't seem like such a daunting task.

Sasank is a graduate of duPont Manual High School. He qualified twice for the International Science and Engineering Fair, winning the Mu Alpha Theta Math Honors Society Mathematical Excellence award during his sophomore year. He currently studies computer science at Carnegie Mellon University.



Team Projects in Science Fair and How to Make Them Work
Diya Mathur and Sophia Korner

A lot of students wonder whether they should work with a partner or be part of a group when they start their scientific research. As you will read below, Diya Mathur and Sophia Korner have worked as a research team for seven years, an experience that has taken them across the United States and the world. So, we asked them what makes their partnership successful and to impart some advice to students who are interested in a group project.

Science research is not easy. Whether it is choosing a research topic, choosing the perfect teammate, or constructing an effective and concise presentation, the thought of having to conduct a research project can be overwhelming. However, if we meticulously break down each part of the process, we realize that it is not as daunting as it may seem and that, in fact, it is an extremely rewarding and exciting endeavor.

Diya and Sophia have been engaging in science fairs for seven years now and have competed in over 20 competitions since they began. Their research has been named among the top 30 in the world and has been recognized for its innovation by organizations such as Intel, the National Aeronautics and Space Administration (NASA), the Massachusetts Institute of Technology (MIT), the American Intellectual Property Law Association (AIPLA), Sigma Xi, the Society for Neuroscience, the American Physiological Society, and numerous other universities and organizations. Not only have they been selected as competitors for Team Kentucky at the Intel International Science and Engineering Fair (ISEF), but they were also chosen to represent the United States in China at the Chinese Adolescents Science and Innovation Contest.

Although it may seem otherwise, success did not come easily to them at first. In their 6th grade science fairs, neither of the current teammates advanced past their class science fair or was selected as a top presenter in their grade. So as 7th grade began, Diya and Sophia, each on her own, decided to attempt this seemingly grueling process yet again. Diya, who was inspired by her friends' daily complaints about heavy backpacks, and Sophia, who had an aunt struggling with back problems, each devised her own idea for a more ergonomic backpack that reduced pain almost completely. When they realized that they had plans to conduct similar experiments, the two decided to collaborate on their research, rather than compete against one another, and create an even more impressive project. Ever since, the dynamic duo have remained teammates and have travelled around the world because of it.

The best teams come together organically. People often choose to collaborate on research projects simply because they are best friends or because one person is extremely intelligent; however, this strategy commonly leads to distractions or to one partner contributing unequally to the project (which judges rarely appreciate). To create an optimal research project, the research must come first: decide on the project, and then decide on a partner if need be. You should only add a partner if it will improve the research and presentation immensely. Diya and Sophia quickly realized the diverse and significant contributions each of them could make to the entire research process, which was an integral reason they remained teammates. Diya is fascinated by the biological aspects of science and keeps up-to-date with current scientific advancements in the medical field, while Sophia enjoys mechanical and electrical engineering and programs microprocessors to carry out numerous tasks in her free time. By collaborating cohesively, they have been able to engineer and program a 3D prosthetic robotic arm that allows a user to more accurately perceive objects through non-invasive vibration feedback. Their studies of haptic and vibrotactile perception and of the effects of weight illusions and other factors on perception accuracy have allowed them to integrate their interests into an interesting, innovative, and applicable research project over the years.

Furthermore, with teammates, it is important to have a system to complete work efficiently. When it comes to composing any document, email, or presentation, Sophia quickly and concisely drafts an outline (in rough language) of all the ideas that need to be included. Then, Diya reads through Sophia's notes and formulates a final paper by expanding on each idea, morphing simple sentences into professionally written paragraphs. They then make any minor edits and finalize the piece (such as this one!).

Although the actual research is the most time-consuming aspect of the process, the presentation and the project's application are also very important. The judges are aware that all competitors at a research competition are diligent and deserving, so the main thing that can distinguish you from everyone else is how concisely and effectively you convey your findings in the allotted time. When presenting, be sure to clearly let your audience know why and how you conducted your experiments, their overall importance, and the impact of your results. Instead of consuming time reciting your title, create a 15-second summary of your research that can hook the judges into continuing to listen.

Although it is more difficult to present team research with little prior practice, there are two main methods you can use to overcome this. The first requires you to plan and write out the entire spiel, print each person's part onto index cards, and then practice until you can smoothly present the entire project. It is important not to memorize the cards, which leads to a more monotone and inauthentic presentation, but instead to remember the main points discussed on each card, which creates a more conversational spiel. The other method is to assign each team member a certain section of the research to discuss. While this may not lead to the most consistent or eloquent speech, it is a faster alternative and ensures a more impromptu style of presentation. Another important tip is to watch the judges' body language, so you can quickly notice if they are uninterested or confused. Diya and Sophia have created numerous subtle signals to let each other know when to switch off presenting (by stepping in a certain direction) and when to slow down or speed up a certain part of the presentation (by a series of hand gestures).

If you follow these tips and pursue a field of research you are personally interested in, you will soon realize that science research is extremely exciting and not as daunting as you may have first thought. Integrate your fascination with science into your research project, create a system to complete work efficiently, practice your presentation in front of as many critical audience members as you can before the fair, and continue to improve and build on what you discover. You can only continue to improve from where you are right now.

Diya Mathur and Sophia Korner are current seniors at duPont Manual High School. They have qualified twice for the International Science and Engineering Fair and have, among other awards, represented the United States in China at the Chinese Adolescents Science and Innovation Contest.



A Brief History of Cancer
Allison Tu

Five years ago, I sat idly in a classroom waiting for the next substitute teacher to walk through the doors. It was our fourth sub in a month. We were waiting for the return of my art teacher from the hospital, where she pulled out the last chunks of her hair as doctors mixed chemicals into her blood. She came back every so often, bringing good news and good spirits, pushing through to the end. Eventually, we were lucky enough that she came back in the clear.

At some point, cancer will be a part of everyone's life. By the end of this year, cancer will have affected over 14 million people and caused over 8 million deaths. Cancer is an ancient disease, first described by Egyptian physicians 5,000 years ago. The condition was described in an ancient textbook-like document as bulging masses under the skin. In the therapy section, the author simply wrote, "There is none." Since then, the illness waxed and waned from prominence in historical documents. As civilization developed and lives were extended, cancer took a brighter spotlight.

The development of anesthesia allowed for the pioneering of invasive surgery: the extensive removal of contaminated tissues. The radical mastectomy, pioneered by surgeon William Halsted, removed not only the breast, but surrounding lymph nodes, layers of chest muscle, and, in some cases, several ribs. The more nuanced era of chemotherapy began in the 1940s, but its progression was slow and followed a brute-force approach similar to the surgeries of the past. Oftentimes, researchers blindly mixed previously discovered chemotherapeutic agents and created treatment regimens administering higher and higher doses, pushing their patients closer and closer to the limit. New chemotherapeutic agents were discovered through years of blind research and testing. For example, the highly toxic VAMP regimen of the early 1960s involved blasting young leukemia patients with high doses of four different chemotherapeutic agents at once.

It was only recently, as modern scientific techniques were developed, that scientists began to realize the importance of understanding a disease's pathophysiology, or the mechanisms by which diseases progress. Lewis Thomas, a physician and prominent biological writer, discussed three levels of technology when it came to treating illness: nontechnology, halfway technology, and high technology. Nontechnology is utilized when the mechanisms underlying a disease are not well understood, and mitigates the symptoms without rectifying the underlying issues. Halfway technology, such as a kidney transplant, is designed to compensate for a disease, or postpone death, while high technology is developed after the disease is mechanistically understood and represents a cure. Many of the cancer treatments of the past were misguided nontechnologies or halfway technologies.

Image of a dividing cancer cell

As scientists began to research the causes of cancer, treatments began to advance toward high technology. For example, researchers Barry Marshall and Robin Warren identified the bacterium H. pylori as the cause of gastric inflammation and, sometimes, stomach cancer. This discovery allowed for the development of more effective preventative screening for those at high risk of developing stomach cancer. As causes of cancer were identified, others were evaluating the role of prevention in cancer mitigation. An example is the New York Health Insurance Plan (HIP) breast cancer screening trial, which showed that frequent breast examination significantly reduced mortality rates from breast cancer. Advances on both sides of the disease, prevention and treatment, helped refine cancer technology.

As others were investigating foreign causes of cancer, researcher Bruce Ames was developing a method to quickly determine whether a certain chemical was a mutagen: a factor that could cause an organism's DNA to change. As he recorded results from his tests, he realized that many of the mutagens were also carcinogens, chemicals that increase the risk of developing cancer. In the 1980s, J. Michael Bishop and Harold Varmus built on this knowledge, and the previous research of many others, and confirmed the first oncogene: a gene that causes cancer when mutated. This finally allowed for the development of cancer treatments based on the cause of cancer itself, rather than virtually random testing of chemicals. Bishop and Varmus's research sparked the investigation into other oncogenes and the subsequent development of treatments. One of the first cancer treatments based on oncogene research was the drug Herceptin, which was created based on the expression of an oncogene called HER2 in some breast cancer patients. Herceptin is a step toward Lewis Thomas's high technology; though it doesn't represent a flat-out cure for cancer, it connects the understanding of the disease to its efficient remedy.

Modern cancer research uses oncogene knowledge as a strong foundation, but has progressed farther and more rapidly than ever before. Many novel developments in the treatment of cancer involve harnessing the patient's own immune system to target cancer cells, a technique called immunotherapy. Immunotherapy may involve stimulating the immune system to recognize and attack cancer cells or giving the immune system artificially made components to aid in the fight. Other research focuses on oncolytic viruses, which preferentially attack only cancer cells, directly killing cancer while leaving healthy cells untouched.

The quest to cure cancer has been a tedious, painful, and disastrous journey, but modern developments are slowly carrying us toward the finish line. In his book The Emperor of All Maladies, Siddhartha Mukherjee wrote, "Down to their innate molecular core, cancer cells are hyperactive, survival-endowed, scrappy, fecund, inventive copies of ourselves." The disease is unique because it is the patient's own self that initiates disease, not a foreign particle. The syndrome's manifestation is a more perfect version of the patient's body. Subtle mutations lead to almost immortal cells, which develop strategies to avoid the defenses of the body to which they once belonged. Cancer is the sufferer's personal civil war, a battle millions of people are fighting at this moment. But perhaps one day, it will fade back into the corner of history just as it did thousands of years ago.

Allison Tu is a current sophomore at duPont Manual High School. She will represent Kentucky at the American Junior Academy of Science in 2017.

Image Source: http://fineartamerica.com/featured/cancer-cell-division-spl-and-photo-researchers.html



MicroRNA-203 induces cell death and inhibits PI3K/Akt signaling through PIK3CA in hepatocellular carcinoma

Annie Zhang (1), Baochun Zhang (2)
(1) duPont Manual High School
(2) Price Institute of Surgical Research, University of Louisville

Summary

Hepatocellular carcinoma (primary liver cancer) is a leading cause of cancer-related deaths, and alternative ways to treat this disease are urgently needed (1). In recent years, novel approaches to cancer treatment have been based on microRNAs, small non-coding RNA molecules that play a crucial role in cancer progression by regulating gene expression (2). The aim of the present study was to investigate microRNA-203 (miR-203) as a potential therapeutic agent against hepatocellular carcinoma and evaluate its molecular targets. HepG2 cells were transfected with either miR-203 mimics (synthetic RNA molecules used to overexpress miR-203) or negative control RNA molecules (which do not affect cellular function). Afterwards, the cells were subjected to a cell viability assay and western blotting. MiR-203 significantly reduced cell viability and inhibited expression of PIK3CA (an oncogene commonly found in liver cancer) and phosphorylated Akt (a cancer-promoting protein kinase). Both PIK3CA and Akt are part of the oncogenic PI3K/Akt signaling pathway. It was concluded that miR-203 induces the death of HepG2 cells by downregulating the PI3K/Akt signaling pathway through PIK3CA. We demonstrated that microRNA-203 overexpression may be a potential new treatment for cancer; however, further investigation is needed.

Introduction

Hepatocellular carcinoma (primary liver cancer) is the second-leading cause of cancer-related deaths worldwide and the fastest-rising cause of cancer-related deaths in the United States (1). It is typically diagnosed in its advanced stages, when treatment options are considerably limited (2). Therefore, there is an urgent need to develop alternative ways to treat liver cancer and to research the underlying molecular mechanisms involved.

MicroRNAs are small non-coding RNA molecules that negatively regulate gene expression by binding to the 3' untranslated regions (3'UTRs) of target mRNAs. Through this mechanism, they play an important role in apoptosis, proliferation, migration, and other biological processes (2). Numerous studies report that microRNAs are dysregulated in cancer, and they have been strongly implicated in the development and progression of various cancers, including liver cancer (2)(3). MicroRNAs influence tumorigenesis by functioning as either oncogenes or tumor suppressors, depending on the particular genes and signaling pathways that they target (3).


Tumor suppressor microRNAs are downregulated in cancerous tissues compared to their normal tissue counterparts (4). Researchers have found that forced overexpression of these particular microRNAs in liver cancer can produce a wide range of anti-cancer effects (5)(6). For example, overexpression of miR-26a in mice with hepatocellular carcinoma inhibited proliferation and induced apoptosis (7), and overexpression of microRNA-122 and microRNA-144 also inhibited metastasis of liver cancer in vivo (8)(9).

There is evidence that microRNA-203 (miR-203) is relevant to the progression of liver cancer. Notably, a study by Liu et al. demonstrated that miR-203 is downregulated in patients with hepatocellular carcinoma, and miR-203 expression was shown to be inversely correlated with cancer progression in the patients (10). However, there has been very little investigation into its therapeutic potential or its molecular targets in liver cancer.

MicroRNAs have previously been shown to target components of the PI3K/Akt signaling pathway. Aberrant activation of the PI3K/Akt signaling pathway has been described as critical to the progression of many cancers, including liver cancer (11). PIK3CA, the p110α catalytic subunit of PI3K, is an oncogene commonly reported in liver cancer; as such, it has become a therapeutic target in several microRNA-based studies (12). For example, microRNA-1 and microRNA-124 have been reported to target PIK3CA to inhibit non-small cell lung cancer and liver cancer, respectively (13)(14). Downstream of PIK3CA is Akt, a protein kinase that drives cancer progression by phosphorylating proteins that promote cellular growth, proliferation, and survival (15).

In the present study, we aimed to investigate the therapeutic potential of miR-203 in liver cancer and to evaluate its molecular targets. Since miR-203 was shown to be downregulated in hepatocellular carcinoma patients (10), a characteristic of tumor suppressor microRNAs, we investigated the effect of restoring miR-203 expression on cell viability. Furthermore, we analyzed PIK3CA as one of miR-203's molecular targets. Our results confirm that miR-203 takes on a tumor suppressor role in hepatocellular carcinoma, and that it exerts its anti-cancer effect by regulating the PI3K/Akt signaling pathway through direct inhibition of PIK3CA.

Methodology

Cell culture
The human hepatocellular carcinoma cell line HepG2 (ATCC) was maintained in Minimum Essential Medium with 10% fetal bovine serum and penicillin/streptomycin (ThermoFisher Scientific, Inc., Invitrogen). The HepG2 cells were grown in 5% CO2 at 37°C, then seeded into four 6-well plates at a density of 5 × 10^5 cells per well.

MicroRNA target prediction
Targets of miR-203 were predicted using TargetScan (http://www.targetscan.org/) (16), miRmap (http://mirmap.ezlab.org/app/) (17), and microRNA.org (http://www.microrna.org/) (18).

MicroRNA transfection
The Ambion Pre-miR hsa-miR-1 miRNA precursor negative control and Pre-miR hsa-miR-203b-5p precursor were obtained from ThermoFisher Scientific, Inc. Pre-miR miR-203 precursor molecules are synthetic RNA molecules designed to mimic endogenous miR-203, while the Pre-miR negative control is a random-sequence RNA molecule that has no impact on gene expression or cellular function. The sequences of hsa-miR-203b-5p are shown below:
- Mature miR-203 sequence: UAGUGGUCCUAAACAUUUCACA
- Stem-loop sequence: GCGCCCGCCGGGUCUAGUGGUCCUAAACAUUUCACAAUUGCGCUACAGAACUGUUGAACUGUUAAGAACCACUGGACCCAGCGCGC

HepG2 cells were plated into 12- or 6-well plates, and 24 hours after subculture, the miR-203 mimics or the negative control RNA were prepared and added to each of the wells. Transfection of the miRNA into the HepG2 cell line was performed using the Xfect RNA Transfection Reagent (Clontech Laboratories, Inc.) according to the manufacturer's instructions. At the appropriate time after transfection, cells were either subjected to a cell viability assay (MTT assay) or harvested for protein analysis (western blotting).

Cell viability assay
The number of viable HepG2 cells was assessed 1, 3, and 6 days after transfection with the MTT assay. An MTT stock solution (5 mg/mL in 70% ethanol) was stored at -20°C, protected from light. The stock solution was diluted 1:50 in 4 mL of fresh culture medium to create an MTT working solution, which was added at 1 mL/well to the 12-well plates. Cells were incubated in 5% CO2 at 37°C. Afterwards, the MTT working solution was aspirated off, and DMSO and Tris were added to the wells. The supernatant solutions were transferred into a 96-well plate in duplicate, and the plate was loaded into a Multiskan MCC/340 microplate reader (Fisher Scientific). The Ascent software measured absorbance at 540 nanometers.

Western blotting assay
To prepare protein samples for western blotting, cells were lysed 2 and 3 days after transfection and stored at -80°C for 24 hours. A bicinchoninic acid (BCA) assay was performed to determine the protein concentration in the cell lysates, and afterwards, the lysates were diluted to the same concentration. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) was performed on the PROTEAN II xi cell protein electrophoresis system (Bio-Rad, Inc.).


Protein samples were then transferred to a polyvinylidene fluoride (PVDF) membrane. The membrane was blocked with 5% skim milk in Tris-buffered saline with Tween 20 (TBST). Anti-PI3K (1:1000) and anti-p-Akt (1:2000) antibodies (Cell Signaling Technology, Inc.) were diluted in TBST with 5% bovine serum albumin (Sigma-Aldrich), and the membranes were incubated in the anti-PI3K and anti-p-Akt antibody solutions. Anti-mouse (1:10,000) and anti-rabbit (1:10,000) secondary antibodies (Sigma-Aldrich) were diluted in TBST with 1% skim milk, and the membranes were incubated with these HRP-labeled secondary antibodies. The proteins were detected with ECL Western Blotting Substrate, and film was developed with a Kodak developer and fixer. Afterwards, the membrane was re-stripped and re-probed with anti-Actin and anti-t-Akt antibodies to check for equal loading.

Densitometric scan of western blots
The ImageJ software from NIH was used to quantify the optical density of the protein bands in the immunoblots.

Statistical analysis
A Student's t-test was performed to determine the statistical significance of data from the cell viability assay and the densitometric analysis of the western blots (significance at p < 0.05).
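To make the normalization and significance test concrete, here is a minimal sketch in Python with SciPy. The absorbance numbers are invented placeholders for illustration, not data from this study, and the original analysis was not necessarily performed in Python.

```python
# Hypothetical illustration of the analysis described above; the absorbance
# numbers are invented placeholders, not data from the study.
from scipy import stats

# MTT absorbance at 540 nm, day 6 (n = 3 wells per condition, placeholder values)
control_abs = [0.82, 0.79, 0.85]  # negative-control transfection
mir203_abs = [0.47, 0.44, 0.49]   # miR-203 mimic transfection

# Express viability of miR-203 wells as a percentage of the mean control signal,
# mirroring the normalization used for Figure 1.
control_mean = sum(control_abs) / len(control_abs)
viability_pct = [100 * a / control_mean for a in mir203_abs]
print("Viability vs. control (%):", [round(v, 1) for v in viability_pct])

# Two-sample Student's t-test on the raw absorbances; significance at p < 0.05.
t_stat, p_value = stats.ttest_ind(control_abs, mir203_abs)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```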

Results

Overexpression of microRNA-203 decreases cell viability in the HepG2 line
The MTT assay was performed to determine the effect of miR-203 overexpression on HepG2 cell viability. The viability of cells transfected with miR-203 mimics was calculated as a percentage of the viability of cells transfected with the negative control (set at 100%). One and three days after transfection, the viability of miR-203-overexpressing cells was not significantly different from the negative control. Six days after transfection, the viability of the miR-203-overexpressing cells had significantly decreased, to 57% (p < 0.01) (Figure 1).

Overexpression of microRNA-203 inhibits PI3K/Akt signaling
Two and three days after transfection with miR-203 mimics or negative control, western blotting and densitometry were used to analyze the effect of miR-203 overexpression on PIK3CA expression and Akt phosphorylation (p-Akt). The PIK3CA and p-Akt fold intensities for the miR-203-overexpressing cells were calculated as percentages of the fold intensities for the miR control cells (set at 100%). Two and three days after transfection, PIK3CA levels had decreased to 40.5% (p < 0.01) and 66.3%, respectively (Figure 2A). In addition, two and three days after transfection, phosphorylated Akt levels had decreased to 26.2% (p < 0.01) and 50.0% (p < 0.05), respectively (Figure 2B).


Figure 1. Effect of miR-203 overexpression on cell viability in the HepG2 cell line. HepG2 cells were plated in 12-well plates and transfected with microRNA-203 mimics or negative control in triplicate. One, three, and six days after transfection, the MTT assay was performed to measure cell viability. Cell viabilities of miR-203-overexpressing cells are expressed as percentages of the cell viabilities of control cells (set at 100%). Data are presented as mean ± SD, n = 3. *p < 0.01 versus negative control.

Figure 2. Western blot analysis of the effects of microRNA-203 overexpression on PIK3CA expression and Akt phosphorylation. HepG2 cells were plated in 12-well plates and transfected with miR-203 mimics or negative control in triplicate. (A) Western blots were performed using antibodies against PIK3CA and Actin 48 and 72 hours after miRNA transfection. The right panel shows the densitometric analysis (the PIK3CA fold intensity of miR-203-overexpressing cells is expressed as a percentage of the fold intensity of control cells, set at 100%). Data are presented as mean ± SE, n = 3. *p < 0.01 versus negative control. (B) Western blots were performed using antibodies against p-Akt, t-Akt, and Actin 48 and 72 hours after miRNA transfection. The right panel shows the densitometric analysis (the p-Akt fold intensity of miR-203-overexpressing cells is expressed as a percentage of the fold intensity of control cells, set at 100%). Data are presented as mean ± SE, n = 3. **p < 0.05 versus negative control.

Discussion

MicroRNA-based treatments are being identified as novel approaches to cancer therapy. MicroRNA-203 has been shown to act as a tumor suppressor in breast (19) and lung (20) cancers, and past studies have verified microRNA-203's significance in liver cancer. For example, one previous study demonstrated that microRNA-203 played an important role in metastasis (10). Up until now, only one study has investigated miR-203 overexpression in liver cancer: Wang et al. demonstrated that miR-203 inhibited proliferation of hepatocellular carcinoma by targeting survivin (21). In the present study, miR-203 caused a significant decline in cell viability.

The PI3K/Akt signaling pathway is commonly activated in cancer and promotes cellular growth, proliferation, and survival, making it a useful target for cancer therapies. In this study, we investigated regulation of the PI3K/Akt pathway as a potential mechanism through which miR-203 suppresses hepatocellular carcinoma. The western blots indicated that miR-203 inhibited PIK3CA expression 2 days after transfection. Since PIK3CA is the catalytic subunit of the PI3K protein kinase, we concluded that miR-203 directly targeted PIK3CA. In addition, the western blots indicated that Akt phosphorylation decreased 2 and 3 days after transfection; however, total Akt expression was not impacted, indicating that miR-203 did not directly target Akt. Since Akt is downstream of PIK3CA, it was concluded that Akt phosphorylation decreased as a direct result of changes in PIK3CA expression.

Based on these results, we concluded that microRNA-203 induces the death of HepG2 cells in vitro by downregulating the oncogenic PI3K/Akt signaling pathway through direct inhibition of PIK3CA. Notably, this indicates that overexpression of miR-203 could potentially be used in cancer treatment. However, to support this conclusion, further investigation is needed. The restoration of microRNA-203 must be tested in other hepatocellular carcinoma cell lines, and microRNA-203's effects on apoptosis, invasion, and migration should be examined as well.

MicroRNAs have demonstrated several advantages over both traditional and alternative chemotherapeutics. While most cancer drugs come in the form of chemicals, microRNAs exist in nature, and simply altering the expression of a tumor-suppressor microRNA inside cancer cells can inhibit oncogene expression and suppress cancer progression. Unlike some other cancer therapeutics, many microRNAs can regulate multiple genes in the same signaling cascade, and multiple signaling pathways, at a single time, making it harder for tumors to resist a microRNA-based cancer treatment (22). This also indicates that altering the expression of a single microRNA may dramatically impact gene expression and cancer progression (23). Furthermore, tumor-suppressor microRNAs may potentially be used as biomarkers in liver cancer patients (2). Whether miR-203 is toxic to normal cells and targets multiple signaling pathways will be investigated in the future. Interestingly, a previous study demonstrated no significant toxicity when the tumor suppressor miR-26a was delivered into mice with hepatocellular carcinoma (7).

References
1. Mittal, S., & El-Serag, H. (2013). Epidemiology of HCC: Consider the population. Journal of Clinical Gastroenterology, 47(0), S2-S6.
2. Hung, C.-H., Chiu, Y.-C., Chen, C.-H., & Hu, T.-H. (2014). MicroRNAs in hepatocellular carcinoma: Carcinogenesis, progression, and therapeutic target. BioMed Research International, 2014, 486407. http://doi.org/10.1155/2014/486407
3. Hayes, J., Peruzzi, P. P., & Lawler, S. (2014). MicroRNAs in cancer: Biomarkers, functions and therapy. Trends in Molecular Medicine, 20(8), 460-469.
4. Lee, Y. S., & Dutta, A. (2009). MicroRNAs in cancer. Annual Review of Pathology, 4, 199-227. http://doi.org/10.1146/annurev.pathol.4.110807.092222
5. Price, C., & Chen, J. (2014). MicroRNAs in cancer biology and therapy: Current status and perspectives. Genes & Diseases, 1(1), 53-63. doi:10.1016/j.gendis.2014.06.004
6. Sun, J., Lu, H., Wang, X., & Jin, H. (2013). MicroRNAs in hepatocellular carcinoma: Regulation, function, and clinical implications. The Scientific World Journal, 2013, 924206. http://doi.org/10.1155/2013/924206
7. Kota, J., Chivukula, R. R., O'Donnell, K. A., Wentzel, E. A., Montgomery, C. L., Hwang, H.-W., … Mendell, J. T. (2009). Therapeutic delivery of miR-26a inhibits cancer cell proliferation and induces tumor-specific apoptosis. Cell, 137(6), 1005-1017. http://doi.org/10.1016/j.cell.2009.04.021
8. Tsai, W. C., Hsu, P. W., Lai, T. C., Chau, G. Y., Lin, C. W., Chen, C. M., … Tsou, A. P. (2009). MicroRNA-122, a tumor suppressor microRNA that regulates intrahepatic metastasis of hepatocellular carcinoma. Hepatology, 49(5), 1571-1582. doi:10.1002/hep.22806
9. Zhang, X., Liu, S., Hu, T., Liu, S., He, Y., & Sun, S. (2009). Up-regulated microRNA-143 transcribed by nuclear factor kappa B enhances hepatocarcinoma metastasis by repressing fibronectin expression. Hepatology, 50(2), 490-499. doi:10.1002/hep.23008
10. Liu, Y., Ren, F., Rong, M., Luo, Y., Dang, Y., & Chen, G. (2015). Association between underexpression of microRNA-203 and clinicopathological significance in hepatocellular carcinoma tissues. Cancer Cell International, 15, 62. http://doi.org/10.1186/s12935-015-0214-0
11. Avila, M. A., Berasain, C., Sangro, B., & Prieto, J. (2006). New therapies for hepatocellular carcinoma. Oncogene, 25, 3866-3884. doi:10.1038/sj.onc.1209550
12. Karakas, B., Bachman, K. E., & Park, B. H. (2006). Mutation of the PIK3CA oncogene in human cancers. British Journal of Cancer, 94, 455-459. doi:10.1038/sj.bjc.6602970
13. Yu, Q. Q., Wu, H., Huang, X., Shen, H., Shu, Y. Q., Zhang, B., … Chen, L. (2014). MiR-1 targets PIK3CA and inhibits tumorigenic properties of A549 cells. Biomedicine & Pharmacotherapy, 68(2), 155-161. doi:10.1016/j.biopha.2014.01.005
14. Lang, Q., & Ling, C. (2012). MiR-124 suppresses cell proliferation in hepatocellular carcinoma by targeting PIK3CA. Biochemical and Biophysical Research Communications, 426(2), 247-252. doi:10.1016/j.bbrc.2012.08.075
15. Altomare, D., & Testa, J. (2005). Perturbations of the AKT signaling pathway in human cancer. Oncogene, 24, 7455-7464. doi:10.1038/sj.onc.1209085
16. Witkos, T. M., Koscianska, E., & Krzyzosiak, W. (2011). Practical aspects of microRNA target prediction. Current Molecular Medicine, 11(2), 93-109. http://doi.org/10.2174/156652411794859250
17. Vejnar, C. E., Blum, M., & Zdobnov, E. M. (2013). miRmap web: Comprehensive microRNA target prediction online. Nucleic Acids Research, 41(Web Server issue), W165-W168. http://doi.org/10.1093/nar/gkt430
18. Betel, D., Wilson, M., Gabow, A., Marks, D. S., & Sander, C. (2008). The microRNA.org resource: Targets and expression. Nucleic Acids Research, 36(Database issue), D149-D153. http://doi.org/10.1093/nar/gkm995
19. Zhang, Z., Zhang, B., Li, W., Fu, L., Fu, L., Zhu, Z., & Dong, J.-T. (2011). Epigenetic silencing of miR-203 upregulates SNAI2 and contributes to the invasiveness of malignant breast cancer cells. Genes & Cancer, 2(8), 782-791. http://doi.org/10.1177/1947601911429743
20. Wang, N., Liang, H., Zhou, Y., Wang, C., Zhang, S., Pan, Y., … Chen, X. (2014). miR-203 suppresses the proliferation and migration and promotes the apoptosis of lung cancer cells by targeting SRC. PLoS ONE, 9(8), e105570. http://doi.org/10.1371/journal.pone.0105570
21. Wang, W., Liu, W., Sun, H., Chen, D., Yao, X., & Zhao, J. (2013). MiR-203 inhibits proliferation of HCC cells by targeting survivin. Cell Biochemistry and Function, 31(1), 82-85. doi:10.1002/cbf.2863
22. van Rooij, E. (2011). The art of microRNA research. Circulation Research, 108, 219-234. doi:10.1161/CIRCRESAHA.110.227496
23. van Rooij, E., Purcell, A. L., & Levin, A. A. (2012). Developing microRNA therapeutics. Circulation Research, 110, 496-507. doi:10.1161/CIRCRESAHA.111.247916

Acknowledgements
A.Z. would like to thank B.Z. from the Price Institute of Surgical Research (University of Louisville) for his assistance.


Vapor Propelled Marangoni Effect

William Schuhmann (1), Jagannadh Satyavolu (2)
(1) Ballard High School, Louisville, KY
(2) Conn Center for Renewable Energy Research, University of Louisville, Louisville, KY

Summary

The Marangoni effect describes the transfer of mass down a surface tension gradient along the interface of two fluids. While this effect has been exploited for millennia by spiders and Microvelia for propulsion on water, its potential uses in engineering have only recently been explored. Here, we investigate the Marangoni effect's applicability to micro-vessel propulsion, hypothesizing that surface-tension-gradient propulsion holds scaling advantages over more conventional methods of propulsion. We present results on optimized Gibbs-Marangoni propulsion systems and discuss real-world applications such as high-efficiency autonomous vehicles and surfactant-propelled micro-vessels for cleaning oil spills.

Introduction The first use of dispersants to clean an oil spill was in 1962, when the supertanker Torrey Canyon leaked oil off the coast of England. Alkylphenol dispersants were originally used, but it proved to be extremely toxic, killing many types of aquatic life (1). Dispersants are simply mixtures of surfactants. Surfactants are substances that reduce the surface tension of the liquid it is dissolved in. Since dispersants are composed of hydrophobic tails and hydrophilic heads, they can surround individual molecules of oil, making them smaller and more easily broken down by aquatic bacteria(2). The Marangoni Effect was first observed by James Thomson (1822-1892) while he looked at wine tears in his glass. He noticed, that given enough time, a region of low alcohol concentration would form above the main body of wine, an occurrence like this is commonly referred to as a “tear” or “leg”. The effect itself was named after Carlo Marangoni (1840-1925). Marangoni studied the effect for his dissertation called, “Sull’ espansione delle gocce liquide” or, “On the Spreading of Liquid Droplets”(3). The Marangoni Effect occurs when a surfactant, such as soap, comes into contact with a solvent in this case, water. By definition, the Marangoni Effect is the mass transfer along an interface between two fluids due to surface tension gradient. The surfactant, on contact, will lower the surface tension of the solvent. This creates an outward current that pushes anything on the surface away from the reaction. (4) The Marangoni Effect is very commonly used in aquatic insects such as water striders and Microv-


Description of the Marangoni effect. Source: Royal Society of Chemistry, "Surfactant Driven Propulsion".

Water striders have an excess appendage underneath the body that, when the insect is threatened, is lowered into the water. The appendage is coated with a surfactant, enabling the insect's fast escape (5). The boats in this experiment were ultimately powered by a vapor-propelled Marangoni Effect: instead of dripping a liquid surfactant onto water, a volatile liquid was allowed to turn into gas and disperse into the water below, producing the same motion. The purpose of this project is to create a vessel capable of dispersing oil without harming the environment; unlike dropping surfactant from an airplane over an oil spill, the vessel does not release massive amounts of surfactant.

Methodology

Three tests were performed overall. The first compared how ethanol and 70% isopropyl alcohol affect the performance of the boat. The reservoir on the boat was filled with ethanol and the boat was placed on a still body of water. The time over which the ethanol continually wicked into the water was recorded; the reaction of the ethanol meeting the surface tension of the water is what pushes the boat forward. The reservoir where the fuel was stored was then rinsed under water until clean, refilled with the other fuel, 70% isopropyl alcohol, and run under exactly the same conditions as above. This process was repeated for all three tests; the only variable was the fuel itself.


Discussion


For the purpose of Marangoni propulsion, surfactants with a low surface tension and a high vapor pressure are the optimal fuel and dispersant. A surfactant such as diethyl ether (C4H10O) has a vapor pressure of 442 Torr and a surface tension of 17 mN/m; the drawback of so extreme a vapor pressure is that the fuel does not last very long. Based on the data alone, the optimal fuel should be methanol, with a vapor pressure of 97 Torr and a surface tension of 22.5 mN/m. Plotted against these two properties, the performance of the various surfactants should trace a downward-opening parabola; constructing that curve would require testing the effects of the vapor pressure and surface tension of a large number of surfactants. The boats made here could be scaled up slightly and placed on a body of water where an oil spill has occurred. The use of these boats would save oil companies money by reducing the amount of surfactant needed, and would also decrease the environmental damage done during oil spill cleanup.
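To make the fuel trade-off concrete, the short Python sketch below tabulates the two properties discussed above for several candidate fuels. It is illustrative only: the diethyl ether and methanol figures come from this section and the surface tensions from the Results, while the ethanol and isopropyl alcohol vapor pressures are approximate room-temperature handbook values, not measurements from this project.

```python
# Illustrative only: ranking candidate Marangoni fuels by the two
# properties discussed in the text.
fuels = {
    # name: (vapor pressure in Torr, surface tension in mN/m)
    "diethyl ether":     (442, 17.0),   # values from the Discussion
    "methanol":          (97, 22.5),    # values from the Discussion
    "ethanol":           (44, 22.1),    # vapor pressure assumed (~44 Torr at 20 C)
    "isopropyl alcohol": (33, 23.0),    # vapor pressure assumed (~33 Torr at 20 C)
}

WATER_SURFACE_TENSION = 71.97  # mN/m, from the Results section

for name, (vp, st) in fuels.items():
    # The surface-tension gradient against water drives the Marangoni flow;
    # the vapor pressure governs how readily the fuel disperses as a gas.
    gradient = WATER_SURFACE_TENSION - st
    print(f"{name:18s} vapor pressure = {vp:3d} Torr   "
          f"gradient vs. water = {gradient:.2f} mN/m")
```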

Figure 1. Results of experiments (panels A, B, and C).

Results

Ethanol and isopropyl alcohol were tested mainly to learn which properties affect Marangoni propulsion and how surfactants vary. One of the major variables identified was vapor pressure; as a general rule of thumb, the higher the vapor pressure, the more volatile the substance. While this proved to be very useful knowledge later, it was not known immediately. Three different experiments were performed. The first tested which fuel was more capable of propulsion, ethanol or 70% isopropyl alcohol. In this experiment, isopropyl alcohol proved to be the better fuel, which was unexpected. A second test then compared two different concentrations of isopropyl alcohol; the higher concentration, 91%, was the better fuel. The final experiment compared vaporized ethanol and vaporized 91% isopropyl alcohol; both liquids are volatile and readily evaporate at room temperature. This time the results reversed: ethanol lasted, on average, 1.88 seconds longer than the isopropyl alcohol. In the liquid tests, the two fuels produce similar surface tension gradients against water (71.97 mN/m), since the surface tensions of isopropyl alcohol (23.00 mN/m) and ethanol (22.10 mN/m) differ only slightly. We hypothesized that ethanol worked better as a gas because it has a higher vapor pressure than isopropyl alcohol; the higher the vapor pressure, the more volatile the liquid.

References

1. Cleveland, C. (2010). Deepwater Horizon Oil Spill. Web. 27 Apr. 2016. <http://www.eoearth.org/view/article/161185/>.
2. "Bioalcohols." Biofuels. Web. 17 Dec. 2015. <http://biofuel.org.uk/bioalcohols.html>.
3. "Lecture 4: Marangoni Flows." MIT.edu. Web. 6 Oct. 2015. <http://web.mit.edu/2.21/www/Lec-notes/Surfacetension/Lecture4.pdf>.
4. "Vapour-driven Marangoni Propulsion: Continuous, Prolonged and Tunable Motion." Chemical Science (RSC Publishing). Web. 23 Sept. 2015. <http://pubs.rsc.org/en/content/articlelanding/2012/sc/c2sc20355c#!divAbstract>.
5. Clarke, J. Mechanism, Chemistry, and Physics of Dispersants in Oil Spill Response. 2004. Web. 19 Apr. 2016. <http://ehp.niehs.nih.gov/118-a338/>.
6. Jones, Nicola. "Rising Waters: How Fast and How Far Will Sea Levels Rise?" Yale Environment 360. Web. 6 Oct. 2015. <http://e360.yale.edu/feature/rising_waters_how_fast_and_how_far_will_sea_levels_rise/2702/>.
7. "Focus: Motoring Oil Drops." Physics. 22 Feb. 2005. Web. 16 Dec. 2015. <http://physics.aps.org/story/v15/st7>.
8. "Surfactant Driven Propulsion." RSC RSS. Web. 23 Sept. 2015. <http://www.rsc.org/chemistryworld/2012/06/surfactant-driven-propulsion>.
9. "True Scale of CO2 Emissions from Shipping Revealed." 13 Feb. 2008. Web. 16 Dec. 2015. <http://www.theguardian.com/environment/2008/feb/13/climatechange.pollution>.
10. Campbell, Neil A., and Jane B. Reece. Biology. 7th ed. San Francisco: Pearson, Benjamin Cummings, 2005. Print.
11. Dunbar, Brian. "The Marangoni Effect: A Fluid Phenom." NASA. NASA, 10 Mar. 2011. Web. 6 Oct. 2015. <http://www.nasa.gov/mission_pages/station/research/news/marangoni.html>.
12. "Ethanol: What Is It?" Ethanol. Web. 16 Dec. 2015. <http://web.extension.illinois.edu/ethanol/>.
13. San Martin, Giles. Microvelia Reticulata. 2014. Jambes, Belgium. Web. 25 Mar. 2016. <https://commons.wikimedia.org/wiki/File:20140427_130230_7250M.JPG>.
14. Costa, Brianne. Tears of Wine and the Marangoni Effect. 2015. Web. 24 Apr. 2016. <https://www.comsol.com/blogs/tears-of-wine-and-the-marangoni-effect/>.

Acknowledgements

I would like to thank Dr. Satyavolu at the University of Louisville for guiding me through my experiments and providing advice along the way. I would also like to thank Mrs. Fields, Mrs. Haffell, and the Ballard High School science department.



Implementation of a Diverter Based Water Conservation System for Showers
Conor Blackburn1
1 duPont Manual High School

Summary This project aims to eliminate the water wasted while shower water heats up, using a simple water diverter connected electronically to a temperature sensor. The user inputs the specific water temperature preferred for showering or bathing on a small control panel. Any water below the specified temperature is diverted back down to the source, such as the water heater or the inflow pipe, so it can be reheated and used again. Then, as the flowing water becomes greater than or equal to the specified temperature, the diverter allows water to flow out of the shower head and into the shower or tub. Thus, no water is wasted while heating it up. During this research project, a full-scale model of the device was installed in the researcher's home and tested on the basis of effectiveness, user-friendliness, and efficiency. Not only was all the water used while heating up the shower saved through quick diversion, but a more user-friendly interface was created using an Arduino microcomputer, forming a much simpler medium for controlling the diversion. After testing, it was concluded that the device could save approximately 1.45-2.65 gallons of water per shower, on average. This seemingly small amount of water saved has a large impact not only on the environment but also on the user's water expenses.

Introduction

Last year, research was done to design and evaluate the effectiveness of a water conservation system for showers based on a temperature-sensitive water diverter. The goal was to eliminate the water wasted while heating up a shower or bath by diverting the cold water not out of the shower head but back down to the source pipe to be reheated. Preliminary testing concluded that the design could save up to approximately nine gallons of water, although that figure conflicts somewhat with other research, which found that approximately 2 gallons of water are wasted while a shower heats up1. Either way, this is a substantial amount of water that could be saved over the course of a single shower: the nine gallons that people use and waste while heating up the shower would be a large portion of the 17.2 gallons used in an average shower2. Last year, the average yearly savings on water was calculated to be 14,235 gallons, allowing the user a rapid payback and a large positive effect on the environment.


This project’s goal is to eliminate the amount of water wasted while increasing the temperature of the shower water by using a simple water diverter (connected electronically to a temperature sensor). The user will be able to input his/her preferred temperature of water on a small control panel. Water that is colder than the specified temperature will be diverted back down to a source, such as a water heater or the warm inflow pipe, so that it can be reheated and reused. Then, as the flowing water becomes greater than or equal to the specified temperature, the diverter will allow water to flow out the shower head and into the shower/tub. Thus no water would be wasted when attempting to heat it up. Last year, a full-size model was drawn in Solidworks CAD. That model was used to simulate and evaluate the device in Solidworks FloXpress and to examine the potential use of a circulation pump to facilitate movement of water through the system. After the simulation was analyzed, a small prototype was connected to a full scale plumbing system and the solenoid valve and simple control panel were tested, but the recirculation was not tested due to constraints during construction. This year, the goal was to build a full scale model with all working components and to analyse its effects on water usage over a longer period of time, in multiple locations, as to formulate a more exact value of how much water would be saved by using this device. The whole device including electronic components would be installed in the researchers household and evaluated. The electronic systems that were involved in the usage of the device was also modified. Last year a small pre-programmed temperature controller was used and was far from user-friendly, and had very little insulation causing many potential dangers when using the device. This year a Arduino microcomputer and a panel will be programmed


and will act as a more user-friendly alternative to the bulky, parameter-controlled temperature sensor. Last year, it was observed that the design was not without flaws: not only would the device be relatively expensive to construct and install, but the research done last year was far from enough to prove it the best design. This year the design was evaluated in the hope of modifying and streamlining it for both cost and user-friendliness. This device could have large effects on the water usage of showers and, with the new user-friendly capabilities, will be more effective than ever.

Figure 1. Structure of the electrical system.

Engineering Goals and Procedures

The purpose of this project is to conserve as much water as possible using a simple conservation method. Three goals were set to outline the research: improve the electrical system of the device; construct a full-scale prototype including all components and evaluate the effectiveness of its electrical and mechanical aspects; and continue research on how much water would be saved through usage of this device. The complete description of the theorized system is as follows (see Figure 1): When a shower is turned on, the initial water is closer to room temperature than body temperature and, in general, is not initially used3. To prevent this water from being wasted, a motorized water diverter, actuated by the temperature of the water, is installed between the water mixer and the shower head. While the cold water is running, the diverter forces it back down through an additional pipe to a warm water source, such as a water heater or warm water inflow pipe. The water is circulated through this cycle, allowing it to heat up. When the water reaches the specified temperature, the sensor alerts the user, and either lets the water flow through the shower or prompts the user to manually engage the diverter and let the water flow through the shower head. Thus, no water is wasted while heating up the water. The process by which the preceding system was implemented is as follows: For reference, two terms are used to describe the


position of the diverter: open (when the diverter allows water to come out of the shower head) and closed (when the water is being sent back to the source). Directly before the water reaches the shower head, it reaches the sensor and diverter assembly; the diverter comes after the water mixer, so the temperature sensor measures the temperature of the water that would be coming out of the shower head, to more accurately meet the user's needs. The diverter was a simple three-way solenoid diverter valve, set on a spring so that it is in the closed state by default and requires the force of a motor to open. In order to create flow, the pressure in the pipes had to be manipulated. This was to be achieved using a system that could propel an unlimited amount of water through itself. After consideration, a recirculation pump was selected to create the propulsion needed for a cycle between the diverter and water heater, due to its ability to propel an unlimited volume of water, the efficiency with which it does this, and the amount of energy it would use in the process. This propulsion causes the inlet warm water to mix with the cold returning water and be sent into the water heater, and the water recirculates as much as necessary to reheat it. The entire device is controlled by the user through a simple user interface that lets the user easily set the preferred shower temperature, view the current temperature, and, in special cases, actuate the diverter manually. The last goal of the experiment was to evaluate the effect of the device on the water usage of the shower. In order to calculate the potential amount


of water saved by using the device, multiple showers in the researcher's home were used, as the amount of water saved depends on the distance from the water heater to the shower head. To find the amount of water used while heating up a shower, a shower was turned on as hot as possible and a timer was started. A thermometer was used to track the temperature of the flowing water. When the water reached 110° F (an arbitrary value used for testing, as results would likely vary depending on the specified temperature), the timer was stopped. This process was repeated 28 times over the course of testing. It was ensured that measurements were taken after two hours of no hot water being used in the researcher's residence, to make sure the water in the pipes had cooled to room temperature and the water heater had returned to its idle temperature. These values, together with the gallons per minute of the shower head, were used to calculate the amount of water used while heating up the water.
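The conversion just described is simple enough to state directly. The Python sketch below is a minimal illustration of it (the function name is ours, not part of the project's materials): the gallons wasted during warm-up equal the warm-up time multiplied by the shower head's flow rate.

```python
# Minimal sketch of the conversion described above: gallons wasted while
# heating equal warm-up time multiplied by the shower head's flow rate.
FLOW_RATE_GPM = 2.5  # gallons per minute, from the tested shower head

def gallons_saved(warmup_seconds: float, gpm: float = FLOW_RATE_GPM) -> float:
    """Water that would have gone down the drain during warm-up."""
    return warmup_seconds / 60.0 * gpm

# Example using the first closer-room trial from Figure 2:
print(round(gallons_saved(37.58), 3))  # -> 1.566 gallons (table lists 1.565)
```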

Closer Room (~15 ft from water heater)    Farther Room (~40 ft from water heater)
Time (sec)   Volume Saved (gal)           Time (sec)   Volume Saved (gal)
37.58        1.565                        73.09        3.045
31.7         1.32                         63.13        2.63
33.82        1.409                        52.94        2.205
24.91        1.037                        56.92        2.371
40.88        1.703                        62.18        2.59
29.9         1.24                         59.28        2.47
32.46        1.352                        66.24        2.76
38.74        1.614                        60.78        2.532
36.28        1.511                        64.85        2.702
34.21        1.425                        62.89        2.62
39.42        1.642                        70.97        2.957
31.48        1.311                        68.98        2.874
37.5         1.56                         65.45        2.727
34.79        1.449                        61.89        2.578

Figure 2. Amount of water saved, using a shower head flowing at 2.5 gallons per minute.

Results

The goal of a fuller evaluation of the device was achieved through the setup, installation, and testing of the proposed device. A prototype was installed in a residential environment and its operation with a shower was tested. Additionally, a more precise calculation of the amount of water saved was performed, using two different showers. A full installation was performed in accordance with the description below. The entire electrical system--including the control panel, settings control, water temperature intake, solenoid valve power control, and user interface--was controlled through an Arduino Uno, in both wiring and code. Unfortunately, power for the circulation pump was not implemented, and a manually operated pump was used to simulate operation of the device. The user had complete control of the system through a small set of switches and buttons on the control panel. These controls included two buttons that could increase or decrease the goal temperature used to actuate the valve and a 2×16 LCD that displayed the current temperature, the target temperature, and the basic operation settings. Also included was a switch that controlled whether the user wanted automatic or manual operation: automatic operation actuated the valve depending on the goal temperature and the current water temperature, while manual operation allowed the user to actuate the diversion of the water manually with the aid of the current temperature printed on the LCD. Another button was added for manual mode, for when the user wishes to actuate the diverter manually. The Arduino and the various parts of the user interface took in the temperature and target temperature and controlled a two-way diverter valve that determined which way the water flowed. The valve used was a Honeywell V40441019 motorized diverter valve, which uses a 120V current to drive its motor. When the current was flowing, the water was diverted out of the shower head; when it was not, the water was diverted down to be reheated. The 120V current was controlled by a PowerSwitch Tail II 120V relay, which in turn can be controlled by the Arduino; the Arduino thus channels or blocks the 120V current, setting the position of the diverter motor and with it the direction of the water movement. The Arduino was not used to control the circulation motor: if circulation were only to occur right before the user's shower, the most likely form of activation would be usage of the water mixer, and otherwise another button would have to be added to the control panel to begin circulation. Either method would add a layer of complexity in construction or in usage, so a manually operated circulation pump was used instead. The circulation pump was located on an additional pipe between the diverter and the warm water inlet, pointing toward the water heater to be reheated.
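To show the valve logic in one place, the following Python sketch is a simplified, hypothetical rendering of the loop described above. The actual device runs Arduino firmware; the function names and structure here are invented for illustration and are not the author's code.

```python
# Hypothetical sketch of the diverter control logic described above.
# The real device is Arduino firmware; all names here are invented.
import time

def control_loop(read_temp_f, set_valve_open, read_mode_switch, read_target):
    """Divert water back to the heater until it reaches the target temperature.

    read_temp_f      -- callable returning the current water temperature (F)
    set_valve_open   -- callable(bool): True routes water to the shower head,
                        False routes it back to the source for reheating
    read_mode_switch -- callable returning "auto" or "manual"
    read_target      -- callable returning the user-set target temperature (F)
    """
    while True:  # runs continuously, like the device's firmware loop
        temp = read_temp_f()
        if read_mode_switch() == "auto":
            # Automatic mode: open the valve only once the water is hot enough.
            set_valve_open(temp >= read_target())
        # In manual mode the user actuates the valve; the loop only reports
        # the current and target temperatures (as on the 2x16 LCD).
        print(f"current: {temp:.1f} F  target: {read_target():.1f} F")
        time.sleep(1)
```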



The necessary amount of power used during circulation was estimated for this installation, but the power needed may differ from case to case, as different volumes of water flow through different systems; the pump used here may be underpowered or overpowered for a given situation. During testing, a complete full-scale model of the device was installed and tested for functionality and user-friendliness. The flow of the water, the operation of the diverter, and the ease of use through the Arduino were all tested. The water flow and the diverter operated with little to no noticeable error. The device produced an effective and consistent increase in temperature, allowing fast and simple actuation without detrimentally impacting the operation of the shower. The user interface, while fulfilling the required operations, may not have been the most user-friendly: almost all of the switches and buttons were sparsely insulated and seemed difficult to use compared to less primitive technologies, but for the experiment it was enough to test both the automatic and manual operation and the effects of changing the target temperature. The effect on water consumption was also examined. Two shower heads were examined, one relatively close to the tested water heater and one farther away, in order to collect a wider range of samples. The two distances were measured to be about 15 ft and 40 ft (Figures 3 and 4, respectively). As the amount of water saved is completely dependent on the distance from the water heater to the shower head, the shortest and longest distances available were used to form a minimum, maximum, and average amount of water saved when using the device. The piping distance from the shower head to the water heater was measured. The results, converted from the time it took the water to heat up to the amount of water saved using the shower head's flow rate of 2.5 gallons per minute, are shown in Figure 2.

Discussion This project’s goal was to prevent all water from being wasted in a shower while the water in the shower was being heated up, to improve upon this design, and implement it into a more use friendly interface, with both an automatic and manual operating system. The general goal of the project was achieved. Not only was all the water being used while heating up the shower, but a more user-friendly interface was created using an Arduino microcomputer to form a much simpler medium of controlling the device, specifically the addition of

Figure 3. Gallons of Water Conserved - short distance testing.

Figure 4. Gallons of Water Conserved - long distance testing.

an automated and a manual mode of operation. Easier ways to change the target temperature, view the current temperature, and switch between automatic and manual modes were added to the user interface. While too little testing was performed to produce a statistically significant value for how much water could be saved, it was confirmed that water would be saved through usage of the device. The general design of the device was not modified, but in the future a better design could be created to streamline installation, usage, and resource consumption. The current design is relatively expensive to build and install, but it is proposed that a cheaper design could be made that would use fewer resources and make installation easier. Another potential aim would be to apply the same design to other applications, such as more automated household products like dishwashers or laundry machines, which need little user input. Another future interest would be a more thorough analysis of the necessary model of actuated solenoid valve and circulation pump, because the models used to test operation here were extremely heavy-duty and likely excessive for feasible application in a real-world environment.

References

1. Lutz, J. D. (2012). Water and Energy Wasted During Residential Shower Events: Findings from a Pilot Field Study of Hot-Water Distribution Systems. ASHRAE Transactions, 118(1), 890-900.
2. Home Water Works. (n.d.). Showers. Retrieved August 01, 2016.
3. Mooney, C. (2015, March 4). Your shower is wasting huge amounts of energy and water. Here's what you can do about it. Retrieved August 01, 2016.



The Effect of Shape, Weight, and Diameter on Haptic Perception: An Active Haptic Sensing Study of the Predicted and Actual Grip Forces and their Impacts on Weight Detection Thresholds

Diya Mathur1, Sophia Korner1
1 duPont Manual High School

Summary

It is necessary to better understand human haptics because of the growing need for multi-sensory interfaces and devices that use weight perception to decide the force required for object handling. The difference between predicted and observed exerted force values can help accurately calibrate prosthetic and motor-assist arms to apply the correct force needed to lift objects, and can inform medical rehabilitation, invasive surgery, and gaming technology and steering wheel design. This research analyzed the correlation between predicted and observed force, the effect of object shape (cylindrical or rectangular) and diametric properties on human haptics, and the effects on weight detection thresholds. Two sets of weights (small and large diameters) and a sensor-embedded glove were engineered to measure applied force. Subjects perceived each weight three times: first, they lifted the object as a personal reference; second, they applied their estimate of the force needed to lift the weight (predicted); and finally, they lifted the weight (observed). 5,040 pieces of data were obtained. One significant finding was that most people underestimate the force needed to lift an object through visual cognition, which can help with prosthetic calibration. It was also observed that more force was required to lift a cylindrical shape than a rectangular prism of the same weight.

Introduction Haptic perception is the process of recognizing surroundings through touch. This study delves into weight perception of an object through haptics. Perceiving the weight of an object is a multi-dimensional process – sensory, perceptual, and decisional – whose components are combined to decide how heavy an object is (Sensory and Perceptual Interactions in Weight Perception). To better understand how accurately humans perceive an object's weight, this study concentrated on the correlation between the predicted and actual amounts of pressure applied to weights through haptic lifting, along with the impact on weight detection thresholds; the effect of an object's weight, shape (cylindrical or rectangular), volume, and diametric properties on haptic perception accuracy; and differences between genders. Restoring Natural Sensory Feedback in Real-Time Bidirectional Hand Prostheses explained the processes of multiple sensors on prosthetics when electrodes are attached to the ulnar and median nerves. Development of robot hand aiming at nursing care services to humans discussed robotic


sensors delicate enough to work with humans in nursing homes. Although this research used a different kind of sensor than those available to us, these articles provided guiding rudiments and an introduction to programming circuit boards and robotics. Previous research conducted on the size-weight illusion has demonstrated that many subjects perceive smaller-diameter objects as heavier than larger-diameter objects and subsequently apply more force when lifting them. Additionally, no distinct relationship between age and weight perception accuracy was observed. This year, it was hypothesized that the size-weight illusion could be explained by the relation of grip pressure to surface area contact; that humans exert pressure on an object proportional to the object's perceived weight while lifting it; that the predicted force values would be greater than the actual applied force values because of the tendency to overcompensate for the weight of an object; that when an object is squeezed, discrimination thresholds would decrease; and that the difference in diameter and density would no longer affect human haptic perception. It was also hypothesized that the rectangular weights would yield higher force perception accuracy than the cylindrical weights due to the closer finger placement locations on the weights. This study was undertaken so its results can be used to more accurately design prosthetic arms, robotic lifting actions, medical rehabilitation of stroke patients, and other devices that use weight perception to decide the force required for object handling. Using the measurements of predicted pressure and the observed exerted values, the difference between these values can be applied to prosthetic or motor-assist arms to help calibrate them to apply the amount of pressure necessary to lift objects. Because many


users of prosthetic arms are left to visually perceive objects before lifting, the results from this experimentation can empower them to tactilely and correctly perceive the amount of pressure needed to lift a weight. The sensor-embedded glove used in this experimentation, and the data obtained from it, can also help engineer better robotic arm designs. Furthermore, this research can add to the understanding of steering mechanisms and hand tools. By determining human weight detection thresholds, vehicle steering wheels and gaming peripherals for touch interfaces can be engineered to let users feel over-correction through vibration and wheel resistance, whether from a bumpy off-road course or the feel of a car sliding in the rain (Formula One Car, 2013). Haptic perception and the difference between the sized weights are closely connected to handling an object. Understanding how variation in the size and/or resistance of an object affects human perception can also be a great contribution to learning more about everyday hand tools and more (Jones L. A., 2006).

Methodology Table 1 details the weights utilized in the experiment. A glove was engineered and embedded with a force sensor in its thumb, index finger, and middle finger to record values of applied pressure during data collection. The three sensors (Round Force Sensitive Resistor, Adafruit Inc., NYC, NY) were wired to an Arduino circuit board, and the program Fritzing was used to troubleshoot the circuitry. The Arduino open-source software was used to program the circuit board and sensors to measure the amount of pressure being applied to an object through the force sensors, in grams and volts. A basic FSR testing program was downloaded from the Adafruit website, which supplied code for the analog readings along with the formulas for converting to voltage (mV), conductance (microMhos), and force (Newtons); a sketch of this conversion appears after Table 1. Using these publicly provided resources, the program was further edited to work with three sensors (one for each finger) and to output the voltage applied to each sensor, per second.

Weight (grams)
                Lighter   Medium   Heavier
Small Cylinder  500       750      1000
Large Cylinder  500       750      1000
Small Prism     500       750      1000
Large Prism     500       750      1000

Table 1. Weights utilized in this experiment.
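The Python sketch below illustrates the reading-to-force conversion mentioned above, adapted from the example code in the Adafruit FSR guide cited in the references (which assumes a 10K pulldown resistor and a 5 V supply); the exact constants in the authors' program may differ.

```python
# Sketch of the FSR reading-to-force conversion, adapted from the example
# code in the Adafruit FSR guide (10K pulldown resistor, 5 V supply).
VCC_MV = 5000          # supply voltage in millivolts
PULLDOWN_OHMS = 10_000

def fsr_force_newtons(analog_reading: int) -> float:
    """Convert a 10-bit Arduino analog reading (0-1023) to approximate Newtons."""
    voltage_mv = analog_reading * VCC_MV / 1023
    if voltage_mv == 0:
        return 0.0  # no pressure detected
    # Voltage divider: FSR on top, 10K pulldown resistor on the bottom.
    resistance = (VCC_MV - voltage_mv) * PULLDOWN_OHMS / voltage_mv
    conductance = 1_000_000 / resistance  # micromhos
    # Piecewise-linear fit from the FSR guide's force curve.
    if conductance <= 1000:
        return conductance / 80
    return (conductance - 1000) / 30

print(fsr_force_newtons(512))  # mid-scale reading -> roughly 1.25 N
```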

The displayed voltage readings were then used as a basis to calculate and convert the values into an approximation of the grams of force applied by each finger. Once the program and wire connections were finalized, all resistors, sensors, and wires were soldered onto a perf board, which was then placed on the Arduino. The sensors were embedded in a large cotton glove and secured in place with a piece of fabric sewn over each sensor on each finger. A scale was used to calibrate the force sensors and to test the threshold needed for each of them to sense applied pressure. Using the serial monitor of the Arduino program, these values were displayed on a computer screen and transferred to an Excel database file. Before collecting any data, all participants were informed about the purpose and goal of the study. The correct way to grasp and lift the weights using the glove was demonstrated to each subject before the experiment. A survey asked subjects for their name, gender, dominant hand, and age. For the data collection, each subject was asked to stand, with elbows at 90-degree angles, and lift a given weight vertically. With the glove on his/her dominant hand, all five fingers were grasped around the top circumference/perimeter of the weight. The participant initially lifted the weight vertically once to get a sense of how much pressure was needed to lift the object. This initial lift was solely for the participant's reference, and its values were not recorded. After setting the weight down, the participant was asked to apply the amount of pressure that he/she estimated would be needed to lift the weight, using observations from the initial reference lift. The



subject applied his/her prediction of the needed amount of pressure to the stationary weight; these values were observed through the sensors and recorded. Lastly, the participant was asked to lift the weight with the glove, allowing the sensors to record the amount of pressure applied when the weight was handled. This process took approximately 5-7 minutes in total. A total of 35 people were surveyed, which complies with the Central Limit Theorem, with 145 data points from each, totaling 5,040 pieces of data. Another small experiment was conducted to test the effect of size on perception and the size-weight illusion. Ten participants were asked to lift two different objects of the same weight and diameter but of different shapes (cylindrical and rectangular). They were asked to lift the weights sequentially with their dominant hands and to say whether the rectangular weight felt heavier than, lighter than, or equal to the cylindrical weight. There were no potential risks to the subjects from this research; confidentiality was maintained through a randomized number system in which all participants were assigned a number that was accessed solely for data interpretation. Both males and females of any racial composition were welcome to take part in the data collection. The tested subjects ranged in age from 10 to 80 years old. There were no vulnerable populations in this project. Data was collected at a chess tournament, some large public gatherings and events, and a few other locations with willing participants. The benefit offered to the participants was greater awareness and knowledge of their personal haptic perception. This research involving human subjects was conducted under the supervision of an experienced teacher and a designated supervisor and followed all state and federal regulatory guidelines applicable to the human and ethical conduct of such research.

Results

Graph 1 summarizes the average grip force applied with each finger for each of the weights. As can be seen, the index finger typically applies the most force during lifting. It can also

be observed that more force is typically applied to the cylinders than to the rectangular prisms. Graph 2 displays an approximation of the average grip force applied (in grams) to the smaller weights by each finger. Three sets of results for the thumb, middle, and index fingers are shown: the first set is for the 500 gram weights, the next for 750 grams, and the last for 1000 grams. The smallest difference is consistently at the middle finger, while the largest is at the thumb for 500 grams and at the index finger for 750 and 1000 grams. It can again be observed that the force is always higher when lifting the cylindrical weight than the rectangular one. Graph 3 shows similar information for the big cylinder and the big rectangular weights. The difference between the two is fairly consistent, with the big cylinder's force average always larger than the big rectangle's. Graph 4 presents the difference between predicted and actual lifting force for all fingers and the different weights and sizes. It is significant to note that in almost all cases, the actual force required to lift the object was higher than most people predicted would be needed. Graphs 5, 6, 7, and 8 show the average percentage of force that each finger applied for the different shape configurations. Graph 10 depicts the amount of force applied by males and females; there is no pattern concerning which gender applies more force. Graphs 11 and 12 show a fairly consistent pattern in which prediction values are approximately 90% of actual values, and also show how the shape differences compare. A factor of about 1.1 could therefore be applied to the force reading taken when participants gripped the weight by the amount they thought they would need; this factor would bring the predicted value very close to what was actually needed. The average thumb is about 2.5 cm wide and the average index finger is 1.6 to 2 cm wide (T., 2012). The Adafruit FSR force sensors' diameter was 1.25 cm for the thumb, index, and middle fingers. This means that all subjects applied different amounts of force to the same amount of surface area.
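As a worked illustration of that correction factor (the example value below is ours, not measured data):

```python
# Illustrative application of the ~1.1 correction discussed above: scaling a
# predicted grip-force reading to approximate the force actually needed.
CALIBRATION_FACTOR = 1.1  # predictions averaged ~90% of actual values

def corrected_force(predicted_grams: float) -> float:
    return predicted_grams * CALIBRATION_FACTOR

print(corrected_force(900))  # a 900 g prediction -> ~990 g, near the actual need
```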



Graphs 1-10. Grip force results referenced in the text (plots not reproduced).

This study was limited by the questionable accuracy of the FSR force sensing resistors used in the sensor-embedded glove. According to the FSR Integration Guide and Evaluation Parts Catalog, "Force accuracy ranges from approximately ± 5% to ± 25% depending on the consistency of the measurement and actuation system, the repeatability tolerance held in manufacturing, and the use of part calibration." This variability reflects the fact that this circuit system cost 90% less than one with a ±3% accuracy range, as aforementioned. With this and the informal sensor calibration methods in mind, the sensors may have produced slight inaccuracies in the data. Furthermore, the fragile glove often had to be re-stitched due to overuse and incompatibility with various hand sizes. The sample demographic skewed toward females and young males, who typically have smaller hands; hence the results were inconclusive in relating gender to the prediction and observed exertion of force upon an object. Perhaps a more representative sample would clarify this relationship.


Graphs 11-12. Predicted versus actual force comparisons (plots not reproduced).

Conclusions It was initially hypothesized that the predicted force values to lift an object would be greater than the actual applied force values, because of the general tendency to overcompensate for the weight of an object rather than undercompensate. However, through successful programming of sensors sewn onto a glove and through experimentation, it was concluded that the actual applied pressure was always greater than the predicted grip force value. As shown in Graph 4, it can be deduced that most people underestimate, through visual cognition, the force needed to actually lift an object. Another significant finding was that of the three fingers used to lift the object – thumb, index, and middle – the most force (in grams and volts) was applied by the index finger while lifting. Regarding object shapes, it was noted that more force was required to lift a cylindrical object than a rectangular


one, which supports one of the hypotheses. Ernst Heinrich Weber (1795-1878) and his disciple Theodor Fechner (1801-1887) are often credited with first studying human response to physical stimulus in a methodical way. In one experiment, E.H. Weber tested a blindfolded man by gradually increasing a weight in his hands and asking him to respond when he first felt an increase. Weber found that the response was proportional to a relative increase in the weight: when the weight is 1 kg, an increase of a few grams will not be noticed; rather, an increase in weight is perceived only when the mass is increased by a certain factor. If the mass is doubled, the threshold is also doubled. This relationship is known as the Weber-Fechner law (Weber-Fechner Law) and describes a logarithmic relationship between sensation and stimulus. The perceptible increment of sensation, also called the Just Noticeable Difference (JND), is about 1/50 of the reference weight for humans lifting weights. The small administered survey of 10 subjects tested a similar idea and found that even when two weights weighed the same, the participants consistently perceived the 750 and 1000 gram cylindrical weights as heavier than the rectangular ones. The differences among the 500 gram weights fell closer to the detection threshold and were thus harder to perceive. With this in mind, it can be concluded that this year's experimentation showed that the size-weight illusion can be explained by examining grip pressure while haptically lifting objects, in light of the Weber-Fechner law. It is also important to note the relationship between pressure and weight. Pressure is equal to force divided by area, and force is equal to mass times acceleration; with the acceleration of gravity effectively constant on Earth, an object's weight is its mass times gravity, so pressure equals weight divided by the area of application. Hence, for a fixed contact area, pressure is directly proportional to weight. This supports one of the aforementioned hypotheses and is attested by multiple graphs. Additionally, pressure can be calculated from the obtained data by dividing the output force values from the sensors by the sensors' contact area (each sensor is 1.25 cm in diameter, an area of about 1.23 cm²).
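In symbols, using a standard statement of these relations (added here for clarity; the notation is ours):

```latex
% Weber's law: the just noticeable difference \Delta I grows with intensity I
\frac{\Delta I}{I} = k, \qquad k \approx \tfrac{1}{50} \text{ for lifted weight}

% Integrating yields the Weber-Fechner logarithmic relation between
% sensation S and stimulus I, with detection threshold I_0:
S = c \,\ln\frac{I}{I_0}

% Pressure-weight relation used above (contact area A, gravitational g):
P = \frac{F}{A} = \frac{mg}{A} = \frac{W}{A}
```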


Understanding human weight perception well enough to create proper pressure signals can greatly improve object handling. Providing the right amount of pressure and vibration to a steering wheel can send the right signal for the pilot or driver to take corrective action, without the distraction of watching multiple gauges, and help save lives. These findings can also be applied to motor-assist arms to help with rehabilitation, for example of stroke patients during recovery therapy. Recent years have seen an explosion of multi-sensory interfaces and a wide range of human weight/force perception applications in prosthetic and motor-assist arms, robotic lifting actions, medical rehabilitation, invasive surgery, gaming technology, touch interfaces, and racing and aviation.

References

1. (n.d.). Retrieved January 12, 2014, from cns: http://www.cns.nyu.edu/~msl/courses/0044/handouts/Weber.pdf
2. "Size Matters: A Single Representation Underlies Our Perceptions of Heaviness in the Size-Weight Illusion." (n.d.). Retrieved October 07, 2013, from http://www.plosone.org/article/info:doi/10.1371/journal.pone.0054709
3. Adafruit. (n.d.). FSR Force Sensing Resistor Integration Guide and Evaluation Parts Catalog. Retrieved November 12, 2014, from Adafruit Inc.: https://learn.adafruit.com/system/assets/assets/000/010/126/original/fsrguide.pdf
4. Donald Ary, L. C. (2002). The One-Variable Chi-Square (Goodness of Fit). In L. C. Donald Ary, Introduction to Research in Education (p. 203).
5. Dynamic mapping of human cortical development during childhood through early adulthood. (2004, January 7). Retrieved October 07, 2013, from http://www.pnas.org/content/101/21/8174.long
6. Formula One Car. (2013, December 31). Retrieved January 7, 2014, from Wikipedia: http://en.wikipedia.org/wiki/Formula_One_car
7. Gaming steering wheel review. (n.d.). Retrieved January 12, 2014, from Top Ten Reviews: http://gaming-steering-wheel-review.toptenreviews.com/
8. Human Haptic Perception: Basics and Applications. (2008). Retrieved January 7, 2014, from http://books.google.com/books?id=uw5llO5WdrEC&pg=PA329&lpg=PA329&dq=age+and+haptic+perception&source=bl&ots=XDHRXB37TS&sig=chl26St4ToiG7G_6xasAuPFPzrc&hl=en&sa=X&ei=m6C8Uvb3GYSs2wWOqoHIBA&ved=0CEcQ6AEwBA%20-%20v=onepage&q=age%20and%20haptic%20perception&f
9. Grunwald, M. (2008). Human Haptic Perception: Basics and Applications. Basel, Switzerland: Birkhauser Verlag.
10. Hockenbury, H. & H. (2000, 1997, 1998). Psychology, Second Edition. New York, New York: Worth Publishers.
11. J. Farley Norman, A. M. (2011, April). Aging and the haptic perception of 3D surface shape. Retrieved January 7, 2014, from Springer Link: http://link.springer.com/article/10.3758%2Fs13414-010-0053-y#page-2
12. Jones, L. A. (2006). The Human Hand Function Book. L.A.: Oxford UP.
13. Kajikawa, S. (2009). Development of robot hand aiming at nursing care services to humans. IEEE, 3663-3669.
14. Sensory and Perceptual Interactions in Weight Perception. (n.d.). Retrieved December 19, 2013, from https://psychology.clas.asu.edu/sites/default/files/valdezamazeenpp.pdf
15. Stanisa Raspopovic, M. C. (2014). Restoring Natural Sensory Feedback in Real-Time Bidirectional Hand Prostheses. Science Translational Medicine, 1-10.
16. T., A. (2012, 02 21). Smashing Magazine. Retrieved 02 14, 2014, from Finger Friendly Design: Ideal Mobile Touchscreen Targets: http://www.smashingmagazine.com/2012/02/21/finger-friendly-design-ideal-mobile-touchscreen-target-sizes/
17. Understanding the sport. (n.d.). Retrieved September 12, 2014, from formula 1: http://www.formula1.com/inside_f1/understanding_the_sport/5287.html
18. Weber, E. H. (1978). E.H. Weber: The Sense of Touch. London: Academic for Experimental Psychology Society.
19. Weber-Fechner Law. (n.d.). Retrieved January 12, 2014, from Wikipedia: http://simple.wikipedia.org/wiki/Weber-Fechner_law
20. Westerfield, M. (2013). iPhone & iPad Electronic Projects. Sebastopol: O'Reilly.
21. Wilcher, D. (2014). Make: Basic Arduino Projects. California: Maker Media.


Programming a Handwriting Interpreter using Artificial Neural Networks
Joshua Ashley1, Abraham Riedel-Mishaan1
1 duPont Manual High School

Summary

The purpose of this project was to create a handwritten digit and letter interpreter using an artificial neural network. An artificial neural network is a programming technique that emulates how a brain computes tasks by adjusting weights and biases. These networks 'learn' by using test inputs and outputs to adjust the weights and biases to achieve the greatest accuracy. In this project, an artificial neural network was created in Python. It works by creating the network with an inputted number of neurons in each layer and a set of test data, initially setting each bias and weight to a random number. The test data used came from the Mixed National Institute of Standards and Technology (MNIST) database and a separate alphabetical dataset. The program first runs the inputs from the dataset through the current network and creates a cost function based on how many inputs gave incorrect outputs. Then, stochastic gradient descent lowers the cost of the network by adjusting the neurons' weights and biases. The program was set up to accept outside input images once the network was created. The number network, on average, had 90-95% accuracy in classifying digits, compared to a theoretical null hypothesis of 10% accuracy for merely guessing the digit randomly. The alphabetical network had 65-70% accuracy compared to a 3.84% guessing chance. Thus, the network showed a vast improvement over random guessing. The effects of the number of hidden-layer neurons, epochs, and learning rate were also examined, with the greatest accuracy trending toward a low number of hidden neurons and moderate numbers of epochs and learning rates.

Introduction

The goal of this project was to use neural network programming techniques to create software that can effectively and efficiently read images of handwritten digits based on patterns in the given data. The program used learning techniques derived from the brain's neural structure to see patterns in the data more effectively than conventional programming methods. The net was trained and evaluated by testing the program on various types of handwriting from a database and observing the program's cost efficiency. If the program shows rapid improvement in detecting the correct characters, then it can be assumed that it has effectively used the neural network design to detect patterns in the data. The brain completes a task by feeding information through neurons, which process the data and hand the output off to other neurons through synapses. These synapses have weights, which are decided by the signal strength between the input and


output neuron. This weight is important because it decides how strongly each input influences the receiving neuron. Neural networking is a programming technique that simulates this process by using a learning rule to set synapse weights that can be changed as the program receives inputs. Brains are far more efficient at doing and learning tasks than any man-made computer: they are comparable to supercomputers in processing power and storage, and far surpass them in power efficiency (fastest supercomputer: 9.9 million watts; brain: 20 watts) (Fischetti 2011). Because of this, neural networks have been used to attempt large, complex computing tasks and tasks that require the software itself to adapt or learn. These networks are typically used in engineering, science, economics, and finance. More practical use of neural networks will improve the efficiency of computers without changes to hardware, and help accomplish tasks on a computer that seem so easy for our neuron-filled brains to do ("Neural Networks Software…"). Neural networks were first conceived in 1943, when Warren McCulloch and Walter Pitts wrote a paper about how the neurons in the human brain work. Around 1959, as computers were coming into existence, Stanford created the first artificial neural network for practical use: Multiple Adaptive Linear Elements (MADALINE), built to predict the next bit received over a phone line and use that prediction to limit outside "noise" from interfering with the call. After this, however, a significant advancement in neural networks was not made until around 1975, with the advent of multiple layers of neurons and the resulting so-called "hidden layer." After this particular discovery, the field of artificial neural networks evolved and spread like wildfire, with many people creating new types of neural networks and applying them to an extremely wide range of fields ("Neural Networks: History"). The typical neural network in the brain works on many levels using the natural structure of the


brain that has evolved over billions of years; however, a computer has none of this preexisting biological setup, and since much work remains in understanding how the brain actually works, the artificial application of these networks rests on a large base of mathematics and testing. The most common process for creating such neural networks is known as backpropagation, in which the topography, or layout, of the network is predetermined, but the weights along the synapses and the underlying functions within the neurons are randomized at first. Test inputs and outputs are stored, the inputs are run through the randomly weighted network to give an expected output, and this expected output is measured against the actual, recorded output. Then the actual learning comes: the network determines a cost function based on the neurons and weights, in which the cost represents how different the expected output was from the actual output, with the goal of the neural network being to minimize the cost in a reasonably efficient way (Palka & Palka 2011). This backpropagation approach brings into question not only the efficiency of the solution but also the efficiency of finding the solution. The mere "brute force" approach of trying around ten points on the cost function may be inaccurate and, when combining multiple cost functions based on multiple test samples, may require an extreme number of operations. To lower the inaccuracy and the number of needed operations, a process known as gradient descent is applied to the cost function, so long as the cost function can be represented as a differentiable function. Gradient descent entails using the derivative of the cost function to repeatedly step in the direction of steepest descent, moving toward a minimum of the cost over the region searched. This method is accurate while also maximizing cost efficiency (Baldi & Brunak, 2001). The neural network used the process of gradient descent to attempt to expand upon Optical Character Recognition (OCR) at high efficiency. OCR is a technology that enables you to scan documents or PDFs and put them back into digital format. Handwritten documents have been a challenge for conventional OCR software due to handwriting being

less procedural and more flawed. Neural networks in OCR can be more versatile because of their ability to see patterns in data and adapt the way the program functions to fit those patterns. The main problem with using neural networks as practical OCR software is that it takes a large number of operations, or control samples, to reach a low cost. The use of gradient descent enabled the neural network to reach a lower cost faster and therefore made it a more effective and viable solution for practical OCR.
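The idea of gradient descent can be shown in a few lines. The sketch below is a generic illustration, not the project's code: it repeatedly steps a parameter opposite the gradient of a simple one-dimensional cost.

```python
# Generic illustration of gradient descent on a differentiable cost function.
def gradient_descent(grad, x0, learning_rate=0.1, epochs=100):
    """Repeatedly step opposite the gradient to reduce the cost."""
    x = x0
    for _ in range(epochs):
        x = x - learning_rate * grad(x)
    return x

# Example: minimize C(w) = (w - 3)^2, whose gradient is 2(w - 3).
w_min = gradient_descent(lambda w: 2 * (w - 3), x0=0.0)
print(w_min)  # converges toward 3.0, the minimum of the cost
```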

Methodology An artificial neural network was coded in Python, using NumPy for list operations, following the theory outlined above. The program was separated into three modules: init.py, mnist_loader.py, and network.py. The init module is the basis of the program; it is used to call the mnist_loader and network modules as well as to initialize all of the values that may vary with the user's wishes. It first loads all of the data from the mnist_loader module and partitions it into training and test data, before sending this data and instructions for the setup of the topography to the network module. Once the network is created and optimized, it takes in outside image inputs and decides what number each image represents using the network. The network module represents the actual network, a construction containing all of the weights and biases in the network, as well as the number of neurons in each layer. Naturally, it includes a stochastic gradient descent (SGD) function used in conjunction with the backpropagation and update_mini_batch functions to do the actual learning of the network. The SGD function partitions the training data into random mini batches of the stated size so that each mini batch can be used to represent the data as a whole, greatly increasing the efficiency of operation. Each mini batch is then passed to the update_mini_batch function, which contains the bulk of the mathematics behind the network: it updates each of the network's weights and biases according to the gradient vectors obtained from the backpropagation function, computed by evaluating the cost with relation to these weights and biases, so as to minimize that cost.
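The description above closely matches the standard NumPy feedforward-network design, so a condensed, illustrative reconstruction is given below. This is a sketch of that general structure, not the authors' exact network.py.

```python
# Illustrative reconstruction of the network structure described above
# (standard NumPy feedforward design; not the authors' exact code).
import random
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

class Network:
    def __init__(self, sizes):
        # e.g. sizes = [784, 30, 10]: input pixels, hidden layer, output digits.
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(w @ a + b)
        return a

    def SGD(self, training_data, epochs, mini_batch_size, eta):
        # Partition the data into random mini batches each epoch, then nudge
        # the weights and biases down the cost gradient for each batch.
        for _ in range(epochs):
            random.shuffle(training_data)
            for k in range(0, len(training_data), mini_batch_size):
                self.update_mini_batch(training_data[k:k + mini_batch_size], eta)

    def update_mini_batch(self, batch, eta):
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in batch:
            db, dw = self.backprop(x, y)
            nabla_b = [nb + d for nb, d in zip(nabla_b, db)]
            nabla_w = [nw + d for nw, d in zip(nabla_w, dw)]
        self.biases = [b - (eta / len(batch)) * nb
                       for b, nb in zip(self.biases, nabla_b)]
        self.weights = [w - (eta / len(batch)) * nw
                        for w, nw in zip(self.weights, nabla_w)]

    def backprop(self, x, y):
        # Forward pass, storing activations and weighted inputs per layer.
        activation, activations, zs = x, [x], []
        for b, w in zip(self.biases, self.weights):
            z = w @ activation + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # Backward pass for a quadratic cost, layer by layer.
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = delta @ activations[-2].T
        for l in range(2, len(self.sizes)):
            delta = (self.weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
            nabla_b[-l] = delta
            nabla_w[-l] = delta @ activations[-l - 1].T
        return nabla_b, nabla_w
```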



Figure 1. The accuracy and cost of the number network as the mini-batch size, hidden neuron number, and learning rate were changed.

Figure 2. The same measurements as above, for the alphabetical network.

Results

The data shown consist first of runs of the network created and refined for numerical digit processing, and second of runs of the network created and refined for alphabetical character image processing. In both, the hyperparameters were changed as prescribed in the legend, and the accuracy and cost versus number of epochs were plotted.

Conclusions

In experimentation, many testing runs of the network were made, including many whose data and parameters were observed and recorded. The data shown include the cost and accuracy of the typical test of the network, along with tests with adjusted parameters, to show how the network's accuracy and cost are affected by those parameters. These graphs show the pattern by which the networks learn. As a network progressed, its rate of improvement in accuracy would decrease until it reached almost zero; the accuracy continued to fluctuate, however, as the network kept attempting to find better weights, with some random luck involved. The cost showed the same effect, its rate of improvement decreasing, though in the negative direction. None of the changed parameters significantly affected these processes. A lower learning rate, however, did appear to reach a lower cost, or at least undergo a steeper descent in cost, than others. This finding is in line with the idea that a smaller learning rate may take longer to reduce cost but will do so to a greater extent than a higher one. The changed parameters had more of an effect on each network's peak accuracy and how quickly it was reached; though this is more noticeable in the number network, the same is true for the alphabet network. The two main differences between the alphabet network and the number network were that their peak efficiencies were very different and that the runtime was significantly shorter for the alphabet network. Both differences can most likely be attributed to the alphabet network's smaller dataset as well as to less data normalization.

References

1. Baldi, P., & Brunak, S. (2001). Bioinformatics: The machine learning approach. Cambridge, MA: MIT Press.
2. Fischetti, M. (2011, October 12). Computers versus brains. Scientific American. Retrieved October 17, 2015, from http://www.scientificamerican.com/article/computers-vs-brains/
3. Neural Networks - History. (n.d.). Retrieved October 28, 2015, from http://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/History/history1.html
4. Neural Networks Software: Train, Visualize, and Validate Neural Network Models. (n.d.). Retrieved October 17, 2015, from https://www.wolfram.com/products/applications/neuralnetworks/
5. Palka, J., & Palka, J. (2011). OCR systems based on neural network. Annals of DAAAM & Proceedings, 555-556.



Individual Based Computer Modeling of MERS Super Spreaders

Summary

Gregory Schwartz1, Sarah Schwartz1
1 duPont Manual High School

It is hypothesized that super spreader modeling will replicate Middle Eastern Respiratory Syndrome (MERS) outbreaks in a moderate size city and that more rapid institution of precautions to reduce hospital transmission will decrease outbreak severity. This study used a computer simulation to evaluate an individual based model of a MERS outbreak with and without super spreaders, in addition to the effects of early hospital containment. The project found that simulations with a small number of super spreaders (an average of 3.5) significantly increased outbreak duration, transmissions, and deaths (p < 0.001). Super spreaders caused an almost sixfold increase in infections (11 vs. 65, p < 0.001). Early intervention to reduce hospital transmission at 30, 60, and 90 days after outbreak initiation significantly reduced outbreak duration, deaths, and infections (p < 0.01). The study supported the hypothesis that super spreaders significantly affect MERS outbreaks, as well as the hypothesis that early protective measures to reduce hospital transmission are highly effective in reducing outbreak severity. While the model does have significant limitations, it provides a tool for evaluating infection spread and containment measures.

Introduction

Modeling of human populations using mathematical formulas is widely considered to have begun in the late 18th century. At that time, biologists began using mathematical population modeling to understand the dynamics of growing and shrinking populations of living organisms.1 Another type of modeling is individual based population modeling, which works by tracking each individual in a population.2 These models are very data intensive and were not possible on a large scale until the availability of modern computers.

In the past, both individual and population based modeling assumed that the probability of infection spread has a normal distribution and that each individual has an equal probability of spreading disease. This probability could change as the disease progressed and viral loads increased, as with Ebola and Human Immunodeficiency Virus (HIV), but each individual was treated similarly even though it was known that certain people would spread more than others. For example, a prostitute's transmission of sexually transmitted diseases would be greater than that of


the general population. Famous cases such as Typhoid Mary were seen as outliers along the normal distribution curve. The heterogeneity of spread was first appreciated in the 1990s, when outbreaks of HIV, Ebola, and tuberculosis demonstrated marked variation in individual transmission.3 The 80/20 rule was conceived, which states that twenty percent of the population spreads eighty percent of the disease.4 The outbreaks evaluated at that time were generally small and did not stimulate a reevaluation of epidemiologic modeling.3

Super spreader events have had a variety of definitions. Lloyd-Smith et al. in 2005 defined a protocol for determining super spreading as a case that causes more infections than would occur in 99% of infectious histories in a homogeneous population.5 During the 2003 SARS outbreak, epidemiologists defined a super spreader as an individual who transmitted to at least eight contacts.6 Super spreader events can be caused by individual behavior, immunologic factors, co-infection, and environment.11 The super spreader phenomenon upended traditional epidemiologic models. A study by James Lloyd-Smith at UCLA showed skewed transmission distributions for SARS, measles, smallpox, monkeypox, and pneumonic plague.5 Traditional homogeneous modeling does not take super spreaders into account and therefore does not accurately track transmission.
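Lloyd-Smith's 99th-percentile criterion can be made concrete with a short calculation. The following is a minimal sketch, assuming (as in that protocol) that secondary case counts in a homogeneous population follow a Poisson distribution with mean R0; the function name and example value are illustrative only.

```python
from scipy.stats import poisson

def superspreading_threshold(r0, percentile=0.99):
    """Minimum number of secondary cases that qualifies as a super
    spreading event: one more than the given percentile of a
    Poisson(r0) offspring distribution."""
    return int(poisson.ppf(percentile, r0)) + 1

# Example: with R0 = 0.52 (the non-super-spreader rate reported later
# in this study), the 99th percentile of Poisson(0.52) is 3 secondary
# cases, so any case infecting 4 or more people would qualify.
print(superspreading_threshold(0.52))  # -> 4
```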


Middle East respiratory syndrome (MERS) was first diagnosed in Saudi Arabia in June of 2012. Clinically, infections range from asymptomatic to severe pneumonia and death. The mean incubation period is 5 days, but it varies from 2 to 13 days.7 MERS patients progress through two stages. The most common symptoms are fever, cough, sore throat, myalgia, and dyspnea, which are stage one symptoms. Patients may progress to pneumonia, multi-organ failure, and death (40-60%), which are stage two symptoms.7 Treatment is primarily supportive care, with potential future treatment through human monoclonal antibodies. In the hospital, various precautions can be taken, such as hand washing, using gloves, and wearing a high-filtration mask, to limit transmission. Most cases of MERS have been diagnosed in Saudi Arabia and the United Arab Emirates. Studies from Saudi Arabia demonstrated transmission rates in the home of approximately 5%.8 Some cases have spread to the USA, Europe, and Asia through travel to the Middle East. The first confirmed case in Korea occurred in May of 2015; the outbreak included 185 cases and 36 deaths within 2 months, causing the closure of two hospitals and the quarantining of more than 16,000 people.

Models of super spreading with both SARS and MERS have been built using various mathematical formulas, but not individual based population modeling. This study uses an individual based computer model to evaluate the spread of the MERS virus in a moderate sized community. An accurate model of MERS would allow testing of various techniques for infection control and prevention. It is hypothesized that super spreader modeling will replicate infection spread and that more rapid institution of precautions to reduce hospital transmission will lead to a decreased transmission rate of MERS in a moderate sized population center.

Methodology

Using Microsoft Visual Studio 2012 Release 4 C++, a program was written to simulate a simple moderate sized population area. Multiple factors were used by the program during the simulation. Parameters included population size, family size, schools per family, transmission rates per hour at various settings, and many others. The values of the various parameters came from multiple sources.8,9,10 The program begins by creating an environment for the population using US Census data.10 The population was derived from 100,000 family units, whose makeup was determined from 2014 US Census data for the city of Lexington, Kentucky. The simulation then ran through the hours of the day. Each hour, each individual was moved to a location

based on the time of day and random probability. At each location, the risk of transmission was calculated based on the number of infected individuals present and the transmission rate associated with that type of location. The transmission rates per hour were estimated from available data.8,9 Once an individual's health level fell below a cutoff, the case was relocated to the hospital. The hospital transmission rate was adjusted by an intervention factor, which could reduce transmission by 80%. Once a new case occurred, the incubation time, time in stage one, and time in stage two of the infection were determined using a normal distribution random number generator. If it was determined that a case was a super spreader, its transmission factor was increased. After the incubation time elapsed, the individual became infectious. If the individual survived the course of infection, he or she became immune and his or her health slowly increased.

The simulations were run with different parameters. Simulations were run without super spreaders and with a 5% rate of super spreaders. The time until increased hospital precautions decreased transmission by 80% was tested at 30, 60, 90, 120, 150, and 180 days, along with a control of no increased hospital precautions. If a simulation ran without transmission from the index case, the simulation was repeated. Each set of parameters was tested until a total of 100 runs with at least one transmission was obtained; a total of 1,400 transmitted runs were performed. The data were then analyzed using Microsoft Excel. A Student's t-test was used for comparison of the different groups, with the p-value for significance set at 0.01, which is often used in computer and engineering testing.
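The per-hour transmission step described above can be sketched as follows. The original model was written in C++, so this Python rendering only mirrors the logic; the class, parameter values, and rate table are assumptions for illustration, not the authors' actual code or data.

```python
import random
from dataclasses import dataclass

@dataclass
class Person:
    is_susceptible: bool = True
    is_infectious: bool = False
    is_super_spreader: bool = False

    def start_incubation(self):
        # In the full model, incubation and stage durations are drawn
        # here from a normal distribution random number generator.
        self.is_susceptible = False

# Illustrative per-hour transmission probabilities by location type;
# the study's actual rates were estimated from published data.
TRANSMISSION_RATE = {"home": 0.002, "school": 0.001, "hospital": 0.004}
SUPER_SPREADER_FACTOR = 10.0   # assumed boost for super spreader cases
INTERVENTION_FACTOR = 0.2      # hospital precautions cut transmission 80%

def hourly_transmission(location_type, occupants, intervention_active=False):
    """Expose each susceptible occupant to every infectious occupant
    sharing a location during one simulated hour."""
    rate = TRANSMISSION_RATE[location_type]
    if location_type == "hospital" and intervention_active:
        rate *= INTERVENTION_FACTOR
    infectious = [p for p in occupants if p.is_infectious]
    for person in occupants:
        if not person.is_susceptible:
            continue
        for case in infectious:
            p = rate * (SUPER_SPREADER_FACTOR if case.is_super_spreader else 1.0)
            if random.random() < p:
                person.start_incubation()
                break
```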

Results

This experiment looked at several different factors in a MERS outbreak in a moderate sized population center. The factors varied in the computer simulation were the presence of super spreaders and the time until effective hospital control measures were implemented. Simulations with at least one transmission from the index case were repeated until 100 runs of each type were completed. The control simulation of no super spreaders and



no intervention was run repeatedly. The first 100 runs were compared to the next five runs of 100 each. The control group took 186 runs to generate 100 outbreaks, while the average for the other five sets was 194 runs. There was no statistically significant difference in the duration of the outbreaks, deaths, or total infected. This was repeated for the control with super spreaders compared to the next five runs of 100 each; there was again no statistically significant difference in the duration of the outbreaks, number of deaths, or total number of infected individuals.

The control simulation without super spreaders was compared to the simulations with increased hospital precautions to prevent the spread of infection without super spreaders. There was a statistically significant reduction in the duration of outbreak, number of deaths, and total number of infected individuals for intervention at 30, 60, and 90 days. The simulations with super spreaders were compared with control simulations without super spreaders for the same start day of increased precautions. There was a statistically significant increase in duration of outbreak, number of deaths, and total number of infected individuals in the super spreader simulations compared to those without super spreaders, even though the number of super spreaders was small compared with the number of overall cases.

Earlier intervention to reduce hospital transmission at 30, 60, and 90 days had an even larger impact on simulations with super spreaders. For example, there was an 84% reduction in total infections with a 30 day intervention in super spreader simulations (Graph 1), compared with a 62% reduction in those without super spreaders. The duration of the outbreak decreased 7% with intervention at 180 days, 28% with intervention at 90 days, and 50% with a 30 day intervention when compared to the super spreader control. This greater reduction was seen at all intervention times tested.
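The group comparisons above use the Student's t-test described in the Methodology. A minimal sketch of one such comparison follows; the study performed its analysis in Microsoft Excel, so this scipy rendering and its sample data are illustrative only.

```python
from scipy import stats

# Hypothetical outbreak durations (days) from two groups of runs.
control_durations = [120, 95, 140, 110, 130]
intervention_durations = [60, 75, 55, 80, 70]

# Two-sample t-test; significance threshold of 0.01 per the study.
t_stat, p_value = stats.ttest_ind(control_durations, intervention_durations)
if p_value < 0.01:
    print(f"Statistically significant difference (p = {p_value:.4f})")
```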

Graph 1.

The effect of the super spreaders was analyzed separately from the general population (Graphs 2 and 3). The R0 for super spreaders in the control group was 9.2 (Table 4). There was a reduction in the R0 in the 30 and 60 day intervention groups. The R0 for the non-super spreaders in the super spreading control was 0.52.

Graph 2. Number of Super Spreaders vs. Number Infected per Super Spreader (x-axis: number of people infected per super spreader; y-axis: number of super spreaders; series: Control, 30 Day, 60 Day, 90 Day, 180 Day).

Graph 3.

The super spreaders not only directly increase the outbreak, but have a compounding effect, increasing the number of non-super spreaders who infect others (Graph 4). The effect of each is demonstrated graphically (Graphs 4 and 5). The super spreaders and non-super spreaders show two distinct populations when combined (Graph 6).



Graph 4. Number Infected vs. Number Infected per Infected Case (x-axis: numbers infected per infectious subject; y-axis: number of infected subjects; series: Control; 30, 60, 90, and 180 Day with and without SS; Control SS).

Graph 5. Number of Super Spreaders.

Conclusions

The simulation demonstrated reproducibility and was consistent with several known characteristics of MERS outbreaks. The results support the importance of super spreaders in transmission and demonstrate the benefits of rapid intervention to reduce spread within hospitals.

Epidemiologic models need to replicate the disease being studied. There have been only a limited number of MERS outbreaks since the disease's discovery in 2012, so multiple comparisons could not be performed. The only major outbreak in a developed country occurred in South Korea during the summer of 2015 and started with a single business traveler returning from the Middle East. Several characteristics of this outbreak were reproduced by the computer simulation: several simulations produced infections of similar duration and total cases, and several super spreader patients were identified during the Korean outbreak, which also correlated with this model.18


Graph 6.

The calculated R0 of 0.52 for the non-super spreaders correlated well with the meta-analysis by Gryphon Scientific, which calculated an R0 of 0.54 [0.45-0.63].10 The model was also tested for internal consistency, with no significant difference when the first 100 control runs were compared with the next five runs of 100.

The model demonstrated several other characteristics consistent with currently available data. MERS does have a high mortality rate, but outbreaks in the Middle East and South Korea appear limited. The computer model's outbreaks, even in the worst scenario of super spreaders and no hospital infection control intervention, were limited and did not spread to the majority of the population. The study also showed that, on average, two runs were needed to produce even one transmitted case, and simulations with large numbers of cases were even less frequent. This seems reasonable, as MERS infections can be mild or asymptomatic; it is likely that travelers develop mild cases that are not transmitted, as was seen in this model. The model appeared consistent with


available data regarding MERS outbreaks' limited course and limited spread.

The inclusion of super spreaders in epidemiologic modeling is crucial for accuracy. Many recent studies have examined the effects of super spreaders and the heterogeneity of population transmission, and this study supports the importance of super spreading and its effect on infectious outbreaks. Previously, populations were assumed to be homogeneous, with differences attributed to normal population variation. Based on the MERS outbreak in Korea,18 related SARS studies,15 and the results of this study, there appear to be two or possibly more distinct populations, each varying along its own normal curve. This has potentially profound implications for epidemiology, since it implies that a single R0 is not a good representation of infection spread.

The concept of super spreaders may also better explain the course of many infectious agents than homogeneous spreading. Homogeneous models generally account for the start of an outbreak, but do not explain the frequent increases in infection spread during the later stages of an outbreak. Analysis using super spreading models better accounts for the sudden increases in cases observed in many infections, including Ebola, measles, and SARS.12 The super spreader model may also explain sudden infection outbreaks. Ebola outbreaks, for example, are thought to start with transmission from an animal reservoir to a human. If that individual does not spread the infection, or spreads it to only a single other person, the outbreak is likely to end, often before it is even recognized. If, on the other hand, a super spreader event occurs in that first infected person, then suddenly ten or twenty people are infected. Heterogeneous modeling may better explain the sudden severe outbreaks seen with many infectious agents.

Super spreader events could have a profound effect on the control of outbreaks. Some super spreader events are related to environmental factors, such as the improper plumbing at a South Korean hospital.9 The control of environmental factors should reduce outbreak risk and severity. There are also host factors that increase super spreading, including hygiene, pre-existing illness, and co-infection. These

factors have been shown to increase transmission. It may be possible to target certain high-risk individuals during an outbreak to reduce its severity. For example, an immunocompromised host may have increased viral shedding; immunocompromised patients in the general population could be advised to wear masks, wash their hands more frequently, or avoid high-risk areas. It may also be possible to start prophylactic treatments if available.

The study demonstrated a very significant impact from early intervention to reduce hospital transmission. The earlier the intervention, the greater the reduction in outbreak severity across multiple parameters, including duration, number of deaths, and total number of infected individuals. Analysis of the MERS outbreak in South Korea showed that a large number of cases occurred within hospitals, forcing the closure of two hospitals and significant changes in the isolation of hospitalized patients. SARS, which has some similarity to MERS, has also been shown to spread through hospital transmission. This study strongly supports rapid efforts to reduce hospital transmission, such as personal protective equipment, rapid isolation, and aggressive infection control measures.

The study has several potential limitations. All individual based population models are limited by the inability to replicate every person's individual behavior. The model attempted to replicate a population center of about 200,000 people in a developed country, which required simplifying the number of possible locations and behaviors; this is inherent in all individual based models. The MERS model was built with currently available data regarding transmission. MERS was first reported in 2012, and data regarding transmission are still limited, particularly in environments outside the Middle East. There has been only one large outbreak in a developed country, so data for comparing the model to known outbreaks are limited. The super spreading model represents only two populations; while some studies have suggested this type of dichotomy, there may in fact be multiple populations of spreaders. The model did not evaluate the different types of super spreader events, but instead focused only on the effect of heterogeneity in the population.



There are several possible future areas of research using individual based computer modeling of MERS. The model could be modified to evaluate the course of the infection; this, combined with super spreader modeling, may help explain some of the episodic nature of infectious outbreaks. Other outbreak control methods, such as super spreader prophylaxis or vaccination efforts, could also be simulated. While MERS outbreaks have thus far been limited, there is concern about possible mutation of the virus and greater spread; these scenarios could also be simulated to optimize future infection control responses.

The study supported the hypothesis that super spreaders would significantly affect MERS outbreaks: this small number of highly infectious individuals had a significant effect on outbreak severity. The study also supported the hypothesis that earlier protective measures to reduce hospital transmission are highly effective in reducing outbreak severity. While the model does have significant limitations, it provides a tool for evaluating MERS infection spread and containment measures.

Acknowledgements

We would like to thank Mr. Zwanzig and Ms. Thomas for being our science fair advisors, along with our families and duPont Manual High School for their support.

References

1. Cohen, J. (1995, July 21). Population growth and Earth's carrying capacity. Retrieved September 2, 2014, from http://www.montana.edu/screel/Webpages/conservation biology/cohen.pdf
2. Lomnicki, A. (2011, January 1). Individual-based models in population ecology. Retrieved September 2, 2014, from http://www.els.net/WileyCDA/ElsArticle/refId-a0003312.html
3. Stein, R. (2011). Super-spreaders in infectious diseases. International Journal of Infectious Diseases, 15, e510-e513.
4. Woolhouse, M. E., Dye, C., Etard, J. F., Smith, T., Charlwood, J. D., Garnett, G. P., et al. (1997). Heterogeneities in the transmission of infectious agents: Implications for the design of control programs. Proc Natl Acad Sci USA, 94, 338-342.
5. Lloyd-Smith, J. O., Schreiber, S. J., Kopp, P. E., & Getz, W. M. (2005). Superspreading and the effect of individual variation on disease emergence. Nature, 438, 355-359. doi:10.1038/nature04153
6. Shen, Z., Ning, F., Zhou, W., He, L., Lin, C., Chin, D., Zhu, Z., & Schuchat, A. (2004, February). Superspreading SARS events, Beijing, 2003. Emerging Infectious Diseases, 10(2).
7. Zumla, A., Hui, D., & Perlman, S. (2015, June 3). Middle East respiratory syndrome. Lancet. http://dx.doi.org/10.1016/S0140-6736(15)60454-8
8. Drosten, C., et al. (2014, August). Transmission of MERS-coronavirus in household contacts. New England Journal of Medicine, 371(9), 828-835.
9. Ki, M. (2015, July). 2015 MERS outbreak in Korea: Hospital-to-hospital transmission. Epidemiology and Health, 37, 1-4.
10. US Census Data. (2014). Accessed September 2015. http://quickfacts.census.gov/qfd/states/21000.html


