Thursday, July 31, 2014

Lynna Ye - Week 6: My Last Week at the Ware Lab

I have completed my lab experience at the Ware Lab in Rutgers Newark!

This last week I worked on finishing up my project, which, though ultimately unsuccessful, was still a thrilling experience. Dr. Ware drove off to her hometown in Canada, and most of the grad students were busy running the high school program, so the lab was very quiet, with only grad student Dominic working at his desk, much like my first week. The only difference was that he had started to leave his laptop and do some work at the benches where I was, which was an exciting change. On the other hand, I began to spend more time on the computer doing data analysis.

On Monday, I used NCBI's BLASTN to compare the sequences I received from Friday's PCR against others in the database. Unfortunately, the results mostly suggested similarities with genes from Homo sapiens rather than species from Odonata (dragonflies or damselflies), which means that the DNA may have been contaminated, or that there was a low yield of PCR product. After blasting all my sequences, I was disappointed to discover that only one yielded similarities to species of dragonflies. While I was fairly upset that most of my sequences were not successful, I was very relieved that one worked. For the rest of the day, I worked on my last batch of extractions and tried out the programs I would have used for my analysis.
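For anyone repeating this kind of contamination check, here is a minimal sketch of how the top-hit descriptions could be screened once BLAST results are in hand; the function and the example hits are made up for illustration, not the actual output from the run described above.

```python
# Screen BLAST top hits for likely contamination: flag sequences whose best
# hit is human rather than an odonate (dragonfly/damselfly) species.
def screen_hits(top_hits):
    """top_hits: dict mapping sequence name -> description of its best BLAST hit.
    Returns the names whose best hit looks like an Odonata match."""
    odonate_keywords = ("Odonata", "dragonfly", "damselfly", "Cordulia")
    usable = []
    for seq_name, description in top_hits.items():
        if any(k.lower() in description.lower() for k in odonate_keywords):
            usable.append(seq_name)
    return usable

# Hypothetical example: only one of three sequences has an odonate top hit.
hits = {
    "seq1": "Homo sapiens chromosome 7 genomic contig",
    "seq2": "Cordulia aenea cytochrome oxidase subunit I (COI) gene",
    "seq3": "Homo sapiens BAC clone",
}
screen_hits(hits)  # -> ["seq2"]
```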

Tuesday, I finished extracting the DNA from the last 32 samples of the nearly 500 legs, which felt like a great achievement. As I was doing this, I ran a few more PCRs in hopes of getting a few more samples sequenced to add to my analysis. I tried the usual troubleshooting methods again, adjusting thermal cycler temperatures and PCR recipes, but alas, I was met with more failure. Dominic told me that I should probably stop trying to do PCRs, since I would only be here for a few more days and they were not likely to work. So, I decided to re-PCR all the samples that had previously worked, to try to sequence them again for a hopefully better result, and to start my analysis with the one eligible sequence that I acquired, as a sort of simulation of what I would have done had I had more data. However, I encountered some trouble at this stage as well. I discovered that the primers I used in my PCR, TL2 and C1J, amplified a different region of the COI gene than most of the other sequences I downloaded from GenBank, and as a result it was difficult to align all the sequences for comparison. To combat this issue, I downloaded some longer COI sequences that included both regions of the gene, so that sequences from different regions could be aligned with each other.
Brightly color-coded nucleotide bases assisted in alignment. Notice how some sequences at the bottom do not begin until most others have already ended.

Wednesday, I focused on making a phylogenetic tree, which would in theory tell me what species my mystery sequence was by grouping my sequence on a sister branch with a previously identified sequence from the database. After aligning the sequences with ClustalX, I had to do manual alignment in Mesquite to make sure that the complementary portions of each sequence were lined up with each other. Then, I ran GARLI, which uses a complicated algorithm and many generations of repetitions to create a "best tree". After this was complete, the same program was used to create bootstrap replicate trees, which estimate the support for each branch of the "best tree" based on how often the same grouping appears across the replicates. While the program was running, I worked on my poster. I would like to give a big thank you to Dominic for giving me PCR advice, showing me how to use all the complicated programs, and reading over my poster!
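As a toy illustration of what the bootstrap step computes (not GARLI's actual implementation), the support for a grouping is simply the fraction of replicate trees that recover the same clade:

```python
def bootstrap_support(clade, replicate_trees):
    """clade: frozenset of taxon names; replicate_trees: a list where each
    tree is represented as the set of its clades (each a frozenset of taxa).
    Returns the percentage of replicates containing the clade."""
    hits = sum(1 for tree in replicate_trees if clade in tree)
    return 100.0 * hits / len(replicate_trees)

# If 3 of 4 replicate trees recover the (A, B) grouping, its support is 75%.
replicates = [
    {frozenset({"A", "B"}), frozenset({"C", "D"})},
    {frozenset({"A", "B"})},
    {frozenset({"A", "C"})},
    {frozenset({"A", "B"}), frozenset({"B", "C"})},
]
bootstrap_support(frozenset({"A", "B"}), replicates)  # -> 75.0
```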

Thursday was the last day of my 6-7 week adventure at the Ware Lab. Dominic helped me to combine the bootstrap and the best trees to create my final tree.
This tree suggests that my sequence belongs to Cordulia aenea. The percentages on the branches show the bootstrap support for each grouping.
Since I was pretty much done with my project (although I was not successful in collecting much data), I helped out some grad students: first organizing Dominic's vast collection of beetles and other bugs, then joining undergrad Miriam in scanning the wings of dragonflies that Melissa and Will caught in Wisconsin, so they can improve the program they are developing to identify species through wing venation alone. I made sure to personally thank and say goodbye to all the grad students, and after I organized my many boxes of DNA extractions and legs for someone to possibly use in the future, I left the lab for the last time to catch my final train ride home.

Overall, I had a very robust and enjoyable lab experience. I'm glad to have met so many wonderful people - the friendly undergrads, the brilliant grad students, and the fabulously kind PI, Dr. Ware - all of whom I hope to see again in the future. I've learned so many things about lab life (and public transportation) and I'm very happy to have spent my summer at the Ware Lab!

Thanks for reading,

       Lynna Ye

Michael King - Week 7 - Loh Lab UCSF

Our cell cultures have finally started to grow healthily, and we were able to perform a few experiments on them this week. The first experiment we performed was to treat five different cell lines with two different experimental drugs. After a 24-hour incubation, we determined the viability of each cell line with and without the drugs by first looking at the samples under a microscope, and later with the Vicell machine. After assessing viability, we performed a western blot to determine any changes in protein expression between treated and untreated cells. We also demoed a new gel reading machine from a local company, and the images turned out very well. My postdoc was pleased with the results from the experiment, as they clearly indicated there was one protein that would be induced when cells were susceptible to drug therapy.

I also set up my own experiment last week with just one cell line, but multiple types of chemotherapy. We will again perform a similar analysis of each sample with a western blot to look for protein markers. I am fairly independent in terms of taking care of the cells now, and I usually check viability every day and look at them under the microscope to ensure they are not contaminated. I also have to prepare and change the media, and I have been coming in for around an hour each weekend to make sure the cells' media is replaced. All of this work has to be done under the hood to ensure sterility.

I also finally completed all of my mouse training, which involved me taking a three hour training class. I learned how to perform injections on mice and how to give them anesthesia. I also had to take the mandatory euthanasia training that involved severing the necks of mice. The mice are not the easiest to work with, and they do bite quite often.

We have been getting some very good results these past weeks, and everything is going smoothly. I only have two weeks left, and I have definitely learned a lot in the last six. My postdoc will not actually be around for the last week, so I will need to be even more independent in taking care of the cell cultures and performing western blots.

Week 6 at the Deo Lab-- Zui Dighe

This week the other lab technician who is on the same project went on vacation, so I was all alone. This gave me even more independence to come, work, and go as I pleased. It took a bit longer to find things, but I was able to continue along.

Another problem occurred this week. The competent bacterial cells used for transformation turned out to be incompetent. Thus, I had to make competent cells. I printed out a protocol with the help of my PI and began the process on Tuesday. The process took the whole week and I will test them out on Tuesday to see if they work. If the homemade competent cells are successful, I will help save the lab thousands of dollars. 

To make competent cells, I first had to prepare lots of chemicals and calculate the correct dilutions for the LB, magnesium chloride, calcium chloride, and glycerol. No one in the building seemed to have plain LB plates (without antibiotic), so I had to make those as well! This took a day, plus a couple hours for cooling. Next, I used a zigzag method with a pipet tip to streak LB plates with the Top 10 cells that did not work. I would be reactivating the cells.
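The dilution arithmetic here is just C1 x V1 = C2 x V2. A quick sketch (the concentrations below are made-up examples, not the lab's actual recipe):

```python
def stock_volume_needed(stock_conc, final_conc, final_vol):
    """Solve C1*V1 = C2*V2 for V1, the volume of stock solution to add.
    Concentrations must share units; the result is in the units of final_vol."""
    return final_conc * final_vol / stock_conc

# e.g. diluting a hypothetical 1.0 M CaCl2 stock to 0.1 M in 500 mL final volume:
stock_volume_needed(1.0, 0.1, 500.0)  # -> 50.0 (mL of stock, top up with water)
```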

I made three LB plates and let them grow overnight. I was looking for single colonies on the plates; that is what streaking is for (to isolate single colonies). Out of my three plates, two had single colonies. Yay, good results. I picked the colonies with a pipet tip and added them to 250 mL of LB in a beaker. The cells were left to grow overnight.

The next day I measured the OD of the cells with a cuvette in the spectrophotometer. The OD (optical density) indicates how dense the bacterial culture has grown. For competent cells I could not let the OD rise above 0.9, so I had to keep running upstairs and measuring to keep the cells in check.

On Monday I will perform a bunch of centrifuging, decanting supernatants, and re-suspending with new chemicals to reactivate cells. Then I will flash freeze the competent cells in a mixture of ethanol and dry ice. 



Above is a video that I took in our microscope room. It shows zebrafish embryos from the other project in my lab. The spine-like structure is the forming artery wrapped around the fish's veins. The color is green because of GFP (green fluorescent protein). That group is classifying heart disease mutations in zebrafish to further understand the process in humans.





Tuesday, July 29, 2014

Alex Hauschild - Shields Oncology Research Rotation - Week 7

Monday, as usual, started out with the weekly tumor board conference. It was fairly typical of what I saw during my rotation: PAM, melanoma, CHRPE, retinoblastoma, etc. Most of my day was spent on follow-up phone calls, working on my poster, and filling spreadsheets with data. Tuesday was very similar; nothing of note really happened.
Wednesday was when the fun began for me. During my rotation we saw a patient whom the doctor suspected of having an MYH11 mutation causing pupillary margin cysts (PMC), though he could not be quite sure of it until the patient mentioned that he also had a history of thoracic aortic aneurysms (TAA). This mutation is apparently very rare, and only a handful of cases have ever been reported. The correlation between TAA and PMC was discovered just a few years ago. I also got to see my mom in action once more at the Wills Eye clinic. She took me to dinner in Chinatown after work.
Thursday we witnessed an orbital exenteration (surgical removal of the area surrounding and including the eye). There were plaques and conjunctival surgeries as well, pretty typical stuff to watch in this clinic. After the exenteration, I decided to go to see Dr. Alex Levine's genetics clinic on the 12th floor. It was fun, and we got to see a lot of cool disorders, the names of which I struggled to even enunciate.
Friday was spent entirely on my poster and other research before heading off with the students to a very good pizza restaurant for dinner. I then left for the train to go home for my mom's birthday.
Unrelated to EXP, I went to Johns Hopkins that Saturday. It was very nice and I was impressed by most everything there.

Dominique Escandon - NJIT Lab - Week 6

Hi guys! Week 6 was pretty exciting. The beginning of the week didn't make it seem so -- we've been running low on carbon nanotubes so we spent almost a full three days making 5 carbon nanotubes (which we go through in about 4 days). On Tuesday, we had to evacuate the building because someone in the lab smelled smoke. We had to do the same thing last week (someone smelled plastic burning), so this time around I didn't get anxious at all. You can basically say I've become a pro at evacuating buildings full of explosive and flammable substances. Thankfully, it was a false alarm so Qin and I were able to get back to doing our experiments, but not before we took this picture:

On Wednesday, Dr. Crider visited my lab. It was a really great feeling to talk about the experiments I had performed and what I had learned. Afterwards, Qin and I ran 4 fuel cells, all with different combinations of substances for the anode and cathode (using carbon nanotubes which had been suspended in a solution of their enzymes, then using a platinum suspension instead of laccase enzyme, then a cathode using suspended GOx and an anode using platinum, then compressed GOx with a platinum anode), but nothing lasted more than an hour, which was extremely frustrating.
Finally, on Thursday, we tried mechanically compressing the powdered forms of laccase and GOx enzyme onto our carbon nanotubes (surrounded by a carbon sheet to sandwich them effectively and protect the CNT), and it worked great! We got an all-time high for our lab of 163.04 uW for our power density, and the cell lasted 5 days (another record for our lab). We are extremely happy with these results, but of course we are always looking for ways to make it even better. Hopefully this can be achieved in my last 2 weeks at this lab!
I'm hoping to see some broken records in these next two weeks, but if not, I am still so happy that I've been able to participate in this process and feel like I've made an impact. 
Have a great week everyone!

Monday, July 28, 2014

Chris Oh - Gabrieli Lab, MIT Week 2

Hello Everyone,

My second week at the Gabrieli Lab went by with much progress.  
On Monday, I finally met Zhenghan, the postdoc I am working for, who explained to me both my project and the big picture of the experiment.  As I explained in my previous blog entry, my project is to find out the time it takes for the subjects of all three groups to respond to the non-word stimuli.  The goal of the experiment is to find any overlapping characteristics of language impairment in autism patients and SLI patients.  Zhenghan also emphasized that I should use Python for all data analysis after I create the spreadsheet with the average durations for each subject.  Using Python, she wants me to add a column to each spreadsheet to indicate the subject ID, and to calculate the average response time for all subjects as well as for each syllable group (i.e. average RT for 2-syllable words, 3-syllable words, etc.).  She also told me to graph these calculations using Python.  Zhenghan explained that she wants me to master Python by the end of the summer, as it will help me substantially in the near future.  Thus it is my "mission" to complete all the tasks with Python.  
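A rough sketch of how that per-subject and per-syllable averaging might look with pandas; the column names, subject ID, and numbers here are all invented for illustration, not the real data:

```python
import pandas as pd

# Hypothetical response-time data for one subject (names and values made up).
df = pd.DataFrame({
    "nonword":   ["baku", "tadi", "kalomi", "penagu"],
    "syllables": [2, 2, 3, 3],
    "rt_ms":     [500.0, 700.0, 600.0, 800.0],
})

df["subject_id"] = "S01"  # tag every row with the subject ID

overall_mean = df["rt_ms"].mean()                      # average RT across all items
by_syllable = df.groupby("syllables")["rt_ms"].mean()  # average RT per syllable count

# by_syllable.plot(kind="bar") would graph the per-syllable averages.
```

With these sample numbers, `overall_mean` comes out to 650.0 ms, with 600.0 ms for 2-syllable and 700.0 ms for 3-syllable non-words.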
After the meeting with Zhenghan, I continued filling in the spreadsheets for each subject.  Many of the soundfiles required me to play around with the script because Praat kept giving me different error messages.  For many of these soundfiles, I also had to use Audacity to remove unnecessary noises that Praat was picking up as the subject's voice.
On Tuesday, Mr. Honsel visited me at the lab.  We met with Zhenghan and had a chat about what I have done so far and what I will be doing for the next 5 weeks.  Then, we went upstairs to show him where I work, and I showed him Praat and the soundfiles that I was working on.  
On Wednesday, the weekly reading group for undergraduates and high school students met to discuss the relationship between stress and memory.  Allyson, the speaker, talked mostly about why a little bit of stress enhances memory while prolonged stress does the opposite.  She explained the basics of how and where memory is formed, and then went into more specific details, such as MR and GR, two different receptors that are activated when we are stressed.  With a little bit of stress, the body activates mainly MR, which enhances our brain functions, but with a lot of stress, mostly GR is activated, which does the opposite of MR.  
On Thursday, I finally completed the spreadsheet for all 25 subjects, which turned out to be quite a time-consuming process.  I had to edit the script frequently for the Subject Onset, and had to run Scanner Onset again for most of the files, as I had to cut out the random noises in many of the soundfiles.  Then, I looked for online resources that would help me figure out how to use Python to manipulate Excel spreadsheets, but they were mostly very hard to understand with my limited knowledge of the language.  So I decided to finish the Python tutorial on Codecademy before looking at the manuals on how to edit Excel spreadsheets.  

I finally completed the Python tutorial this morning and I am about to start teaching myself how to analyze data in Excel spreadsheets using Python with the tutorial videos I found on YouTube.  Hopefully I will be able to learn quickly so that I can apply it to my project soon.  

Colette Gazonas-Week 5-Shadlen Lab, Columbia University

During my fifth week in the lab I finished piloting the new mask paradigm we have been working on. I reached 40 percent as my minimum success rate for each window, but the data was not as promising as anticipated, so I was asked to increase this success rate to 80 percent. The grad student I work with analyzed this data and determined that there was a lack of symmetry within the graphs of psychometric curves folded one on top of the other. Therefore, he decided that we would need to define when a stimulus is difficult and when it is not for each of the unfolded conditions, which would require more trials.

I ran this paradigm about 50 more times this week which provided Yul with what he considered promising evidence in support of our hypothesis. Before this paradigm can be tested on naive subjects Yul said he would need to perform a little more data analysis but that he is optimistic about the results. By our next meeting he will have completed the analysis and will share with You-Nah and me what he has discovered by further investigating some of his findings. We hope that this paradigm yields more reliable results than the less complex ones we previously ran in the lab and that we will be able to begin running it on naive subjects soon.

This week I also worked on completing a rough draft of my poster. Yul assigned me a different section to complete each night and by the end of the week I sent him a rough draft of the poster with everything organized on the template.

Over the weekend he got back to me with revisions. He suggested that in order to make my poster more approachable, I should cut down on the text and try to add at least two more figures: one that illustrates the experimental procedure and another that illustrates the diffusion-to-bound model. Also, he noted that in one of the graphs I created in the results section, the fit didn't seem right. His argument was that the curve doesn't overlap the data points from the actual data, and he suggested I may need to check it.

By our next meeting on Tuesday I will have sent him a revised copy of my poster with less writing and more visuals. I will also have edited my MATLAB code for the graph of the predicted data so that the curve overlaps the observed data. I will do this once for each of the three windows of stimulus duration and will plot all three curves on the same graph so they can easily be compared. Yul and I hope that my revised poster will serve as my final copy and that this will be done and approved by Tuesday night.

Sunday, July 27, 2014

Alex Baum- Cohen Lab- CHOP Week 6

Hello! It is Alex and this is my sixth week at the Cohen Lab at CHOP/ UPenn!

This week was more exciting because I spent a lot of time in the main lab in Abramson with the rest of the lab. I talked to more people this week than I have throughout my entire lab experience. I did not realize how isolated I was, researching in a different building from everyone else. Working in both buildings is exciting because I work alone in the morning and with other people in the afternoon. I have learned more about what other members of the lab do this week.

I do not have a specific project - I am helping Colin with his research - but a lot of the other people in the lab have their own projects. Most of these projects revolve around behavior, recording data, analyzing data, etc. A lot of the cool experiments happen in Abramson; these include cutting out brains and stimulating brain cells with electrodes. The animal behavior research that I am doing is important too; I need to figure out how it plays into Colin's research, so I will find that out in my last two weeks in the lab.

On Monday I walked into the lab and saw a few people I did not know. I think that one of them is an undergraduate intern, and he would be the next youngest person in the lab besides me. Monday was my "Bring in Food" Day! On Sunday I baked dark chocolate brownies and then brought them on the train with me in Tupperware. I was actually pretty nervous to bring in the food because it would be my first time in the main lab in a while. I spent the morning in CTRB and around noon came in to give everyone food. I thanked everyone for letting me research with them this summer. In the afternoon I worked on a second project. It is not exciting - just something to do - I am basically looking at old photos of brain slices that Colin took on a fancy microscope and editing them in Adobe Photoshop.

Friday was probably my most exciting day at the lab so far because someone else was researching in CTRB with me! Her name is Jackie and she is a nurse at UPenn who also works in the lab. She was giving her mice brain injury using the fluid machine I showed everyone in an earlier post. It was sad to see her give the mice TBI, but it was also really cool! I am also glad that I had someone else to talk to while I was doing my research.
She told me about everyone's specific projects in the lab (although not in a lot of detail) and she told me how the lab was getting really expensive equipment next year that would allow them to put electrodes in the brain while the mice are alive. With this advancement the lab would be able to do more behavior research with TBI on live mice. We also talked about animal research as a whole and the primate facilities on sub-level D. She told me she might be able to get me in to see it! That would be so amazing because I have been wanting to see those facilities since the first week but security is really tight around all of these facilities. According to Jackie, each mouse costs $27 but each primate study costs hundreds of thousands of dollars. In the mornings I run the T-maze and in the afternoons I go to the main lab to look at data and I'm glad I have more research to analyze during the day. I like conducting behavior research and then analyzing research because it gives me more exposure to different types of lab work. 

Next week I am starting 1 minute trials and I am expecting the mice to be less accurate in the alternations because they will probably forget what side of the maze they were forced to choose the first time. 

Lauren Donato Week 6 Mason Lab: CD 21 success

As I said in my last blog, in order to verify that the ligation was successful, highly competent bacterial cells were transformed. Bacterial cells were grown on LB-ampicillin plates and incubated overnight. There were two types of colonies visible on the plates 24 hours post transformation that differed in size and brightness, so I assumed the transformation was successful. Random colonies were selected for a colony PCR, which would verify that the bacteria were transformed. I designed primers to amplify a 1200 base pair fragment that would indicate correct orientation if colonies harbored the plasmid. Results from gel electrophoresis verified that a colony harbored the plasmid. This selected colony and three random others were miniprepped and sent for DNA sequencing. The sequence had 7 mutations when I compared it to one on NCBI. Looking at a DNA waveform without DSAP was not challenging, because I was very comfortable doing this. My lab was very surprised that I knew what a waveform even was, and they were thrilled that I could do this tedious task by myself.

I constructed a glycerol stock using the colony that harbored the plasmid. A glycerol stock is a way to store bacteria for later use, so for this particular experiment it is useful. I then did a maxiprep of the colony. A maxiprep is a very tedious, full-day task. However, the day was broken up by a wonderful visit from Dr. Peretz, who got to see my awesome lab and meet my even more awesome PI. I am hoping Dr. Mason can find time in her very, very busy schedule to fit us in! It was nice to share my work with somebody outside of the lab. We also went to the cutest restaurant ever!! The maxiprep yielded 2,500 nanograms of DNA, so my lab was excited!! Human stem cells were transfected on a 6-well plate with TurboFect and Lipofectamine 2000. TurboFect Transfection Reagent is recommended for transfection of plasmid DNA. 
While most transfection reagents are lipid-based formulas like Lipofectamine 2000, TurboFect is a sterile solution of a proprietary cationic polymer in water. The polymer forms positively charged complexes with DNA that are both compact and stable. These complexes protect DNA from degradation and facilitate efficient plasmid delivery into eukaryotic cells. 24 hours post transfection, the cells were glowing green from the GFP protein in the vector. About 70% of cells were transfected. Cells were prepared and stained for flow cytometry, which with various controls would show whether CD21 was expressed on the surface of the cells. I saw a slight shift in staining, which suggested that CD21 was probably expressed on the surface, but not in an immense amount. The lab is now going to continue to work on this gene due to my success.

Overall, I feel like these 6 weeks in the lab taught me a lot of life skills (taking trains and such), and also proved my abilities to work in a lab. I know I had a very solid project with many promising applications, and I completed it mostly alone. I know I helped the lab out a lot this summer, whether it be a miniprep here or there or saving time for others. My last day at the lab was extremely bittersweet. My lab took me out to lunch at a very fun Mexican restaurant and wrote me a very meaningful card. I gave gifts and cards to everyone in the lab who has been so great to me, and it was sad for everyone to see me go. I am welcome back to work at the Mason lab anytime in the future, and I will definitely miss everyone and my work. I feel confident that I was productive in the lab and that I took advantage of this amazing opportunity from Peddie. As for continuing a career in research, a PhD student I spent time with at the lab wrote to me, "You still have a long while to go before you make any decisions (and then decide again after getting a different degree), and there is no need to rush. 
You certainly seem comfortable in the lab, and have what we call "good hands" - a knack for things going well at the bench - but I imagine that you would be an asset wherever you go and whatever you decide to do. But, if you do decide to go into lab research, you can be sure that the Mason lab will be cheering you on. I cannot say that it will be easy or even fun all the time, but it will definitely be an adventure." On that note, have a good summer everybody <3 

Colby Saxton- Week 6 (Final Week), Linksvayer Lab- UPenn

        Hello readers, this past week was my final one at the Linksvayer Lab. It saw very little progress in my personal research, which came only in preliminary viewing of pathfinder ant walking videos. This week, everyone in the lab devoted their time to helping one of the lab members with a large genetic cross-mating project. This involved countless hours of removing pupae from colonies, weighing pupae, and setting up more colonies. I spent most of my time weighing pupae, a very boring process, which entailed weighing each individual pupa, one by one. Over the last couple of weeks, my project had considerably slowed due to multiple factors, including problems with the ant tracking program and others around the lab needing help.
        When Dr. Peretz visited my lab, she asked me if I would now like to pursue a career in research. This was the reason each of us signed up for EXP: it exposes us to a potential career field. Yet I was caught off-guard by the question; I had not thought about it at all. I gave a reluctant "no" to Dr. Peretz, but after careful thinking, I believe I have a better answer. Only six weeks at one lab is not enough information on which to choose a career path. I have not experienced nearly enough of the lab setting in only six weeks, and every lab works, researches, and cooperates differently. Although I did not see a potential future career at the particular lab I worked at, that does not mean I would not fall in love with another. My conclusion is that instead of learning that I definitively want a career in research, I have learned to keep my options open and expose myself to all different fields of science; I can't make a decision from only one AP Biology class at Peddie, for example, or 6 weeks at a lab at UPenn. The more you explore, the more you learn about yourself.
         I would like to thank Dr. Peretz and Dr. Crider for giving me this opportunity and wonderful experience. I would also like to thank my lab for a wonderful experience. Thank you Dr. Linksvayer for granting me the ability to work at your lab. We were a very close lab, and it would not have been this way without the wonderful character of each of my co-workers. Lastly, thank you to Dan and Jake Fine for giving me a wonderful living experience in the city.

Signing Out for the Last Time,

Colby Saxton


Friday, July 25, 2014

Caroline Casey - Week 6 - Complex Systems Group: Connector Hubs, Participation Coefficients, and More Writing - University of Pennsylvania

This week was exciting as I navigated my project completely by myself! Dr. Bassett was on vacation this week, so it was up to me to figure out what analyses to complete and how to complete them. I feel very comfortable with Matlab as well as network science and managed to complete a lot this week.

On Monday morning, I added what I worked on the previous week to my PowerPoint. I also began looking at the communities I identified in order to determine the next step in the process of analyzing the networks. Although Dr. Bassett was on vacation this week, she came back for our lab meeting. In the meeting, one of the post-docs in the lab presented his work on temporal (time dependent) brain networks and how they change when completing memory tests. It was a very interesting lecture and I felt like I could understand most of the concepts. After the lecture, each member of the lab gave Dr. Bassett a five minute update on his/her completed work from the past week. I enjoyed hearing what everyone in the lab is working on; the projects are all very diverse. Dr. Bassett suggested that for this week I look more into the communities I identified and anything else I find important. I ended Monday drafting ideas for how I would go about analyzing the communities I found and what interesting data and results I could pull from the networks.

On Tuesday, I performed the consensus partitions multiple times again for each of the four scenarios, because not all of the 100 optimizations were yielding the same community partitions for all 112 nodes. However, the consensus partitions came back the same as before: there was still disagreement on the community partitions for some of the nodes. So, I took the community each node appeared in most often and let that be the community assignment for that node. Next, I created four 1x112 vectors, which contained the community assignment of each node for each scenario. I then plotted the resulting nodes, where the color of the node indicated which community it was assigned to. I created and saved the networks and then plotted the nodes onto a brain surface for each scenario. I analyzed the resulting networks for trends among the four scenarios in order to identify whether or not certain nodes tended to appear in a community together. Unfortunately, there were no obvious trends. I added each of the images and explanations of what I did to the PowerPoint. I then added the community analyses I completed to the methods and results sections of the paper I am writing.
Brain surface plots of the first scenario (edges from the correlation of interference session 2 with scan 1). The color of each node indicates which community it is assigned to.
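The majority-vote step described above can be sketched in a few lines. This is a minimal illustration, not the lab's actual code: the partition matrix here is random placeholder data, and a real consensus-clustering step would also have to account for community labels being permuted between optimization runs.

```python
# Majority-vote community assignment: given 100 partitions of 112 nodes
# (rows = optimization runs, columns = nodes), assign each node the
# community label it received most often. Data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
partitions = rng.integers(0, 4, size=(100, 112))  # 100 runs, 112 nodes

def majority_assignment(partitions):
    n_nodes = partitions.shape[1]
    assignment = np.empty(n_nodes, dtype=int)
    for node in range(n_nodes):
        labels, counts = np.unique(partitions[:, node], return_counts=True)
        assignment[node] = labels[np.argmax(counts)]
    return assignment

consensus = majority_assignment(partitions)  # one 1x112 vector per scenario
```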

Wednesday was devoted to reading literature on network analysis methods and graph theory, looking for other ways of analyzing the networks. By Wednesday afternoon, I had two analysis techniques that I wanted to apply: finding the betweenness centrality of each node and the participation coefficient of each node. Betweenness centrality is the fraction of all shortest paths in the network that pass through a given node. A high betweenness centrality indicates that the node connects various parts of the network together (it plays a very central role); nodes on the periphery of a network tend to have a very low betweenness centrality. The participation coefficient helps identify which nodes connect to other communities and which nodes only connect within their own community. A participation coefficient close to 1 indicates that a node's connections are evenly distributed among all the communities, while a participation coefficient of 0 indicates that all of its connections are within its own community. I spent the rest of Wednesday finding the algorithms and figuring out how to employ the two techniques.
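The two measures described above can be sketched as follows. This is a toy illustration, not the lab's actual pipeline: it uses NetworkX's built-in karate club graph and an invented two-community split in place of the real 112-node brain networks and consensus partition.

```python
# Betweenness centrality (via NetworkX) and participation coefficient
# (Guimera & Amaral's definition: P_i = 1 - sum_c (k_ic / k_i)^2)
# computed on a small example graph with made-up communities.
import networkx as nx

G = nx.karate_club_graph()

# Fraction of all shortest paths passing through each node.
bc = nx.betweenness_centrality(G)

# Placeholder community assignment (the real one comes from the
# consensus partition).
communities = {n: 0 if n < 17 else 1 for n in G.nodes}

def participation_coefficient(G, communities):
    pc = {}
    for i in G.nodes:
        k_i = G.degree(i)
        if k_i == 0:
            pc[i] = 0.0
            continue
        # Count how many of node i's links go into each community.
        counts = {}
        for j in G.neighbors(i):
            c = communities[j]
            counts[c] = counts.get(c, 0) + 1
        pc[i] = 1.0 - sum((k_c / k_i) ** 2 for k_c in counts.values())
    return pc

pc = participation_coefficient(G, communities)
```

A node whose links all stay inside its own community gets a participation coefficient of exactly 0, matching the definition in the text.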

On Thursday, I began by finding the betweenness centrality of each node for all of the networks. After obtaining the values, I found the connector hubs of each network: nodes whose betweenness centrality is greater than the mean plus one standard deviation. These nodes connect various parts of the network together and play a very central, global role. I identified these nodes and added them to an Excel sheet. I compared the connector hubs of the two scenarios in the predictive group (edges from the correlation of interference session 2 with scan 1 and edges from the correlation of interference session 3 with scan 2) to see if any regions were hubs in both scenarios. I then did the same for both scenarios in the retrospective group (edges from the correlation of interference session 2 with scan 2 and edges from the correlation of interference session 3 with scan 3). I found some conserved regions and added my work to the PowerPoint. Thursday afternoon, I began working on finding the participation coefficient of each node.
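The connector-hub rule described above (betweenness greater than mean plus one standard deviation) reduces to a one-line threshold. The centrality values below are invented for illustration only:

```python
# Flag connector hubs: nodes whose betweenness centrality exceeds
# mean + one standard deviation. Values are made up for illustration.
import numpy as np

bc = np.array([0.01, 0.02, 0.30, 0.05, 0.02, 0.25, 0.03])
threshold = bc.mean() + bc.std()
hubs = np.where(bc > threshold)[0]  # indices of the connector hubs
```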

Friday, I finished finding the participation coefficients of the nodes. I then found the node with the highest participation coefficient in each of the four scenarios, as well as the nodes with the lowest participation coefficients (a participation coefficient of 0). I compared the two scenarios of the predictive group, and the two scenarios of the retrospective group, to find conserved regions. I believe there is more I can do with the participation coefficients, but I will talk to Dr. Bassett on Monday about her suggestions. I then added my work to the PowerPoint and continued working on my paper.

As I was going through my paper, I realized that last week, when I was computing the z-score (to find the hubs in the networks) and comparing the two scenarios in the predictive group and the two scenarios in the retrospective group, I did not necessarily find all hubs in common. This is because in my Excel sheet of hubs for each scenario, I separated the left side of the brain from the right. For example, if the left superior frontal gyrus was a hub in one scenario and the right superior frontal gyrus was a hub in the other scenario, it was not identified as a conserved hub region for that group. So, I decided to go back and identify all the conserved hubs in both groups regardless of whether the hub was on the right side of the region for one scenario and on the left side for the other. This resulted in many more conserved hubs. I then researched the regions of the connector hubs (from the betweenness centrality) and the z-score hubs that were conserved between the two scenarios in each group (predictive and retrospective). Something interesting I noticed was that a majority of those regions had visual or environmental functions such as face recognition, object recognition, color recognition, spatial orientation, and self-awareness.
I think this finding might indicate that being in a different or unknown environment could be a cause of the interference effect; however, this is just a hunch. I will talk to Dr. Bassett on Monday about her take on the meaning of these results.
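The hemisphere-agnostic comparison described above amounts to stripping the left/right prefix from each hub's region name before intersecting the two scenarios' hub sets. A small sketch, with invented region names (requires Python 3.9+ for `str.removeprefix`):

```python
# Find conserved hub regions across two scenarios, ignoring which
# hemisphere the hub appeared in. Region names are made up.
def region(name):
    return name.removeprefix("Left ").removeprefix("Right ")

scenario_a = {"Left Superior Frontal Gyrus", "Right Fusiform Gyrus"}
scenario_b = {"Right Superior Frontal Gyrus", "Left Lingual Gyrus"}

conserved = {region(n) for n in scenario_a} & {region(n) for n in scenario_b}
# Superior Frontal Gyrus counts as conserved even though it
# switched hemispheres between the two scenarios.
```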

On Friday, August 1, I will have a Skype conference with two other researchers, where I will share my PowerPoint and discuss my work and results. I enjoy how my project sits at the crossroads of so many fields, such as biology (specifically neurology), graph theory, and computer science. Not to mention, I am also gaining skills in making presentations and writing research papers! So far, I am loving the experience in my lab and really enjoy the work that I am doing!
My desk at the lab.


Thursday, July 24, 2014

Pieter de Buck - Week 4/5 - Duke University

Hi everyone, my name is Pieter and these are weeks 4 and 5 at Duke.

Again, I have learned and achieved a lot this week. I have made great progress on both the programming and physics sides of my research. These past weeks I have had three sessions with Dr. Bass and a Duke undergrad, JP, where Dr. Bass teaches us things that are relevant to our projects. He talks mostly about the computer model we use to simulate particle collisions (UrQMD), which he made. He tells us about the theories in this field of physics, how they were discovered, and the equations behind them, and then explains how one would incorporate these theories into a computer model, which in most cases is not as easy as just plugging in the equations, because the equations are huge and time-consuming to compute. A nice example is the Boltzmann equation, which describes how a gas expands as time passes.
Boltzmann Equation
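For reference, the standard form of the Boltzmann transport equation, for a distribution function $f(\mathbf{x}, \mathbf{p}, t)$ of particles with mass $m$ under an external force $\mathbf{F}$, with all the interaction physics collected into the collision term on the right-hand side, is:

```latex
\frac{\partial f}{\partial t}
  + \frac{\mathbf{p}}{m} \cdot \nabla_{\mathbf{x}} f
  + \mathbf{F} \cdot \nabla_{\mathbf{p}} f
  = \left( \frac{\partial f}{\partial t} \right)_{\mathrm{coll}}
```

The collision term is what explodes in size as more particles are added, which is the difficulty discussed below.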

This equation only handles two separate particles; to simulate the ~5000 particles that come out of a lead+lead collision, it would grow unbelievably fast. Dr. Bass showed us the version for a system of three particles, and it spanned about four full pages. This is obviously not a good approach for a computer model, since even the two-particle form takes a lot of computing power. So when Dr. Bass and others built the UrQMD model, they had to employ some tricks to make the program run at an acceptable rate. These tricks are fairly involved, but they include only working with two particles at a time and only doing calculations when a collision between two particles actually happens.
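The two tricks can be illustrated with a toy sketch. This is not UrQMD's actual algorithm, and all the numbers are invented: particles simply stream freely, and the expensive physics would only be evaluated pairwise, at the moment two particles come within a collision radius of each other.

```python
# Toy illustration: free streaming plus pairwise collision detection.
# The real physics would run only when a pair is flagged as colliding.
import itertools
import math

particles = [
    {"x": 0.0, "y": 0.0, "vx": 1.0, "vy": 0.0},
    {"x": 1.0, "y": 0.05, "vx": -1.0, "vy": 0.0},   # heading at particle 0
    {"x": 5.0, "y": 5.0, "vx": 0.0, "vy": 1.0},     # far away, never collides
]
RADIUS = 0.2   # collision radius
DT = 0.05      # time step

def step(particles):
    """Advance all particles freely, then check pairs for collisions."""
    for p in particles:
        p["x"] += p["vx"] * DT
        p["y"] += p["vy"] * DT
    collisions = []
    for a, b in itertools.combinations(range(len(particles)), 2):
        pa, pb = particles[a], particles[b]
        if math.hypot(pa["x"] - pb["x"], pa["y"] - pb["y"]) < RADIUS:
            collisions.append((a, b))  # only here would the physics run
    return collisions

hits = []
for _ in range(20):
    hits.extend(step(particles))
```

Because the check is pairwise, the cost per step grows only quadratically with particle count, instead of the combinatorial blow-up of the full many-body equation.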

We also talked about the idea behind the model, because there are a few things that are weird about it. The most glaring weirdness is the fact that Dr. Bass' research group managed to create a computer model of a phenomenon that has never been directly observed, and probably never will be. So how are they able to model this Quark-Gluon Plasma from experimental data? First, a comment on experimental data from particle colliders such as RHIC on Long Island and, more recently, the LHC at CERN (France/Switzerland). The problem with looking for Quark-Gluon Plasma (QGP) with these colliders is that they can only detect the end state of the collision, when all the particles are done scattering. QGP is thought to exist for an incredibly short time and in a very small region, making it impossible to observe directly. This is the reason for creating computer models that simulate QGP: they can use any timescale and "zoom in" as far as needed.

So to accurately model QGP from experimental data, we have to look for clues and evidence of the QGP in the end state of collisions at colliders, and Dr. Bass gave us an in-depth talk about some of these "observables". But this still leaves the problem that it is impossible to write down some kind of equation for the creation of QGP, so we resort to modeling the clouds of particles that result from these collisions with better-known, verifiable phenomena: gases, liquids, and plasmas. We can take the equations of these states of matter, apply them to the particles in the simulation, and see if they give us the same end result that we see in experiments. It turns out that QGP can be modeled with a hybrid of gas and liquid modeling, using the two states at different times and in different parts of the model; the gas part is seen in the aforementioned Boltzmann equation. With a set of equations taken directly from other parts of physics, we have a model that, after some tweaking, corresponds to the findings at RHIC and the LHC. That means we can turn back time and see what is going on during the creation of QGP, which is the ultimate goal of the model.

On my side of the project, the handling of output with computer scripts, I have also made progress. When plotting histograms I now "normalize" the graphs so that the integral (in the case of a histogram, this is just the total area of the bars) always equals the same value, regardless of the number of particles in a simulation. So basically everything has the same scale. I have also assigned weights to the data depending on the value, so that regardless of the size or range of the simulation you should expect the same graph. Here is an example of a normalized graph with a log scale for the y-axis.

The transverse momentum of the particles (i.e. momentum perpendicular to the particle beam) in GeV. Note that the log scale on the y-axis means the actual distribution falls off exponentially.
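The normalization idea can be sketched with numpy's histogram routine, where `density=True` scales the bars so their total area is 1. The data here are random placeholders, not UrQMD output:

```python
# Normalized histograms: two samples of very different sizes end up
# on the same scale because each histogram integrates to 1.
import numpy as np

rng = np.random.default_rng(42)
small = rng.exponential(scale=0.5, size=1_000)    # a "small run"
large = rng.exponential(scale=0.5, size=100_000)  # a "large run"

def normalized_hist(data, bins=50):
    counts, edges = np.histogram(data, bins=bins, density=True)
    return counts, edges

c1, e1 = normalized_hist(small)
c2, e2 = normalized_hist(large)

area1 = np.sum(c1 * np.diff(e1))  # total bar area, ~1 for any sample size
area2 = np.sum(c2 * np.diff(e2))
```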



Picture of my part of the office, double screens!


I hope I was able to explain my thoughts clearly; it is really hard to condense all this theory into one blog post.

Pieter





Week 5 at the Deo Lab -- Zui Dighe

This week I repeated my PCR reactions. There are two sets of reactions: the first PCR and the nested PCR. The first PCR is supposed to amplify the target DNA, guided by the primers we created. You must go through the full sequencing process once to make sure that the primers are amplifying the right products. Once the first PCR reaction shows the right bands, the nested reaction is another check that the bands are correct. In nested reactions you use nested primers, which are more specific to the regions needed.

On Monday I came in extra early to repeat my RT PCR reaction (turning rat lymph node RNA into DNA). I was the first one in the lab! It took me some time to gather all the materials on my own and I felt independent being able to work on my own. After completing the RT PCR (it takes about 5 hours) I began my first PCR reaction to test if my RT reaction was successful.

The results of my first PCR seemed to be at the right size or slightly larger (successful!). Thus, I proceeded to the nested reaction on Tuesday. The nested reaction, however, showed no clear band at the expected size of the amplified product; instead, there was a blurred, elongated band ranging from 200 to 800 base pairs. This could be amplifying what is needed, but it is not possible to tell for certain.

Thus, my PI advised me to continue the cloning with the first PCR reactions rather than the nested one.

I also realized how sensitive the RT PCR samples are to temperature. Thus, I aliquoted my samples into smaller single-use tubes so as not to waste the entire stock. Hopefully, the samples will not get contaminated and I will not have to repeat the 5-hour process of turning RNA into DNA.

Here is a picture I took of one of the animals I saw on the animal floor in the CRVI building. This amphibian (the Mexican axolotl) can regenerate its limbs after an arm or leg is cut off. A neighboring lab has a grant from the army to research limb regeneration in the field of biomedicine.

Shivani Gupta-Week 5- Reddy Lab



On Monday, we got back our sequencing of the BirA-YY1 and BirA-L2B which we sent in last Friday. We used software known as BioEdit to align our sequences with the sequences we were supposed to obtain using the original construct we built on Pdraw32, as shown below. 

Unfortunately, when we got back our sequencing from this plasmid, it aligned in the reverse order. We then realized that we chose the wrong enzymes for the double digest as they were not in frame. We then chose two other enzymes to do the double digest with, BamHI and PmeI. Because the BirA-L2B was in frame, we are going to start the maxi preps for them next week. Below is the new construct for BirA-YY1.

We then spent the rest of the week re-doing the cloning. We first had to digest the two plasmids, BirA-Myc/BioID and PcDNA3-YY1, with the enzymes BamHI and PmeI. To do this double digest, I also had to add 10x BSA, NEB buffer 4, and enough water to bring each sample to 25 ul. After incubating the samples for an hour, I ran them on an agarose gel containing ethidium bromide. Because the fragments were the correct size (6382 bp and 29 bp for the BirA-Myc plasmid, and 1372 and 4751 bp for pcDNA6_YY1), I could then run the samples on a SYBR Safe gel. I isolated the fragments by cutting them out of the gel and began the gel extraction protocol. After I isolated the DNA, I ligated the two fragments together using the NEB 5-minute ligase. I then did the transformation, where I transformed competent DH5 alpha cells with my ligation reaction, and plated them on LB-amp plates. If the colonies grow successfully, we will then do a mini prep and a restriction digest verification/sequencing.