MCB 372: Introduction to Bayesian Analyses

Answer the questions in green (email to bioinf@carrot.mcb.uconn.edu).

MrBayes is installed on the cluster. You invoke it by typing mb at the command line. In case you want to install the software on additional computers, go here and follow the instructions to download and install MrBayes.

Exercise 1:
The goal of this exercise is to learn how to use MrBayes to reconstruct phylogenies.

  1. Save testseq1c.nex (or sequences of your choice, or here or here -- the latter two analyses might take a long time. I placed the files into archives; after you expand an archive, execute the file, and you can then summarize the results from the previous run. The archives are here and here.) into the folder from which you want to run MrBayes. The first file is the dataset of the vacuolar ATPase A subunit you used earlier, but in the NEXUS format that MrBayes reads (Clustal and many other programs write NEXUS-formatted files, but you usually need to open the file in a text editor and change things like the way gaps are treated, or the data type). The other two are aminoacyl-tRNA synthetases. You might need to rename the file after downloading. Start MrBayes by typing mb.

  2. At the MrBayes command prompt type "execute filename". This will load the data into the program.

  3. Type some of the following commands one by one to select a model and settings for your analysis:

    prset aamodelpr=fixed(jones) [this sets the substitution matrix to JTT, a modern version of PAM]
    lset rates=invgamma
    lset ngammacat=4 [this selects a model in which ASRV is described by a gamma distribution approximated by 4 categories]
    mcmcp samplefreq=50 printfreq=50 nchains=2 startingtree=random [this sets the frequency with which the "robot" reports results to the screen and to the files (different files for parameters (.p) and trees (.t))]
    mcmcp savebrlens=yes [mcmcp sets parameters for the chain. savebrlens tells it to save the trees with branch lengths]
    [mcmcp filename=testseq1c] [if you use this command, it tells the program to save the files under a certain name. This is handy if you want to read in data from a previous run, but it usually is easier to go with the default, which in this case is testseq1c.nex]
    mcmc ngen=20000 [this starts the chain; at the end you'll be asked if you want to continue the run -- read the material below while you wait]
    sump [or sump filename=testseq1c] [this summarizes the data stored in the parameter file]

    help prset lists the settings for the priors, help lset lists the settings for the evolutionary model, and help mcmcp lists the settings of the chains.
    Information on the beta and Dirichlet distributions is here and here. More info is in the MrBayes manual.
    Before you start the run, type showmodel.

    Rather than typing the commands one by one you could use a MrBayes block at the end of the input file, e.g.:
    begin mrbayes;
        prset aamodelpr=fixed(jones);
        mcmcp samplefreq=50 printfreq=50 nchains=2 startingtree=random;
        mcmcp savebrlens=yes filename=testseq1c;
        mcmc ngen=20000; [you need to add "set autoclose=yes" before this command to get to the next step without being prompted]
        sump filename=testseq1c;

    end;

    "prset" sets the priors for the model used to calculate likelihoods. In this case we choose the substitution parameters from the JTT amino acid substitution model (Jones et al., 1992).
    "mcmcp" sets parameters for the Markov chain Monte Carlo run: we sample every 50th generation, print results to the screen every 50th generation, run 2 chains simultaneously, start with a random tree, and save branch lengths.
    "mcmc" actually runs the chain, and we set it to run for 20000 generations.
    The program runs two analyses in parallel (by default each with 4 chains, three of which are heated; we use only two chains to make things a little faster -- it definitely is a good idea to run mb on a fast, remote computer). The smaller the reported average standard deviation of split frequencies, the more reliable the result (i.e., your run is close enough to infinitely long). When the value drops below 0.015, or when your patience is exhausted, terminate the run by typing no at the prompt.

    After the run is finished, the "sump" command will plot logL vs. generation number, which allows you to determine the necessary burnin (you want to discard as "burnin" those samples taken while the logL is still rising steadily).

    [Rather than using the sump command, you can also import the parameter file into Excel and plot the logL values as a chart in Excel. See below.]

    During the start of the run, the likelihood rapidly increases by orders of magnitude. If the first samples are included in the plot, one really cannot see whether the likelihood values fluctuate around a constant value. You can exclude the first couple of samples by specifying a burnin: sump burnin=20 excludes the first 20 samples.
    Note that the y-axis is rescaled automatically, which gives you a better spread to judge whether the chain is "mixing well" or still climbing towards the stationary distribution.

    Type "sumt burnin=20" or "sumt filename=testseq1c burnin=20", where you need to substitute '20' with the number you obtained in the previous step of the exercise (note: the burnin value is the number of generations divided by 50, since we sample every 50th generation). This command creates a consensus tree and shows the posterior probabilities of the clades. You can take a look at the tree in TreeView or njplot by loading the testseq1.con file.

    Which branch in the tree is the longest?
    How long is it?
    What is the measure?
    Can you explain in a few words why it is important to exclude a 'burnin' from our analyses?

    Did your analysis estimate a shape parameter and a proportion of invariant sites?
    If yes, what values did you obtain, and what is the 95% credibility interval for these values?
    (To estimate the latter, load the parameter file into Excel, copy the values for each generation after the burnin, and sort them. Then discard the top and bottom 2.5% of the values. The remainder represents the 95% credibility interval.)
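    This trim-the-tails recipe is straightforward to script as a cross-check of the Excel steps. A minimal Python sketch (the synthetic Gaussian draws stand in for one post-burnin parameter column of the .p file and are purely illustrative):

```python
import random

def credibility_interval(samples, level=0.95):
    """Equal-tailed credibility interval from posterior samples.

    samples: parameter values drawn after the burnin (e.g. one column
    of a MrBayes .p file); level: interval width (0.95 = 95%).
    """
    tail = (1.0 - level) / 2.0                # 2.5% discarded per side
    values = sorted(samples)
    cut = int(round(tail * len(values)))      # samples to drop per side
    trimmed = values[cut:len(values) - cut]
    return trimmed[0], trimmed[-1]

# Illustration with 400 synthetic draws of a shape parameter
random.seed(1)
draws = [random.gauss(0.6, 0.1) for _ in range(400)]
low, high = credibility_interval(draws)
print("95%% credibility interval: %.3f - %.3f" % (low, high))
```

    The bounds should match what you get in Excel after sorting the column and deleting the top and bottom 2.5% of the rows.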

 

If you were a student in MCB221, you might want to skip this exercise and work on your student project.

MrBayes by example: Identification of sites under positive selection in a protein

Background:

Professor Walter M. Fitch and assistant research biologist Robin M. Bush of UCI's Department of Ecology and Evolutionary Biology, working with researchers at the Centers for Disease Control and Prevention, studied the evolution of a prevalent form of the influenza A virus during an 11-year period from 1986 to 1997. They discovered that viruses having mutations in certain parts of an important viral surface protein were more likely than other strains to spawn future influenza lineages. Human susceptibility to infection depends on immunity gained during past bouts of influenza; thus, new viral mutations are required for new epidemics to occur. Knowing which currently circulating mutant strains are more likely to have successful offspring potentially may help in vaccine strain selection. The researchers' findings appear in the Dec. 3 issue of Science magazine.

Fitch and his fellow researchers followed the evolutionary pattern of the influenza virus, one that involves a never-ending battle between the virus and its host. The human body fights the invading virus by making antibodies against it. The antibodies recognize the shape of proteins on the viral surface. Previous infections only prepare the body to fight viruses with recognizable shapes. Thus, only those viruses that have undergone mutations that change their shape can cause disease. Over time, new strains of the virus continually emerge, spread and produce offspring lineages that undergo further mutations. This process is called antigenic drift. "The cycle goes on and on -- new antibodies, new mutants," Fitch said.

The research into the virus' genetic data focused on the evolution of the hemagglutinin gene -- the gene that codes for the major influenza surface protein. Fitch and fellow researchers constructed "family trees" for viral strains from 11 consecutive flu seasons. Each branch on the tree represents a new mutant strain of the virus. They found that the viral strains undergoing the greatest number of amino acid changes in specified positions of the hemagglutinin gene were most closely related to future influenza lineages in nine of the 11 flu seasons tested.

By studying the family trees of various flu strains, Fitch said, researchers can attempt to predict the evolution of an influenza virus and thus potentially aid in the development of more effective influenza vaccines.

The research team is currently expanding its work to include all three groups of circulating influenza viruses, hoping that contrasting their evolutionary strategies may lend more insight into the evolution of influenza.

Along with Fitch and Bush, Catherine A. Bender, Kanta Subbarao and Nancy J. Cox of the Centers for Disease Control and Prevention participated in the study.

A talk by Walter Fitch (slides and sound) is here

 

Exercise:

The goal of this exercise is to detect sites in hemagglutinin that are under positive selection.

Since the analysis takes a very long time to run (several days), here are the saved results of the MrBayes run: Fitch_HA.nex.p and Fitch_HA.nex.t.

The original data file is flu_data.paup. The dataset comes from an article by Yang et al., 2000. The file used for MrBayes is here.


The MrBayes block used to obtain results above is:

begin mrbayes;
set autoclose=yes;
lset nst=2 rates=gamma nucmodel=codon omegavar=Ny98;
mcmcp samplefreq=500 printfreq=500;
mcmc ngen=500000;
sump burnin=50;
sumt burnin=50;
end;

Selecting nucmodel=codon with omegavar=Ny98 specifies a model in which, for every codon, the ratio of the rate of non-synonymous to synonymous substitutions is considered. This ratio is called omega. The Ny98 model considers three different omegas: one equal to 1 (no selection; the site is neutral); a second with omega < 1 (these sites are under purifying selection); and a third with omega > 1 (these sites are under positive or diversifying selection). (The problem with this model is that only three distinct omegas are estimated, together with, for each site, the probability of falling into each of these three classes. If the omega > 1 value is estimated to be very large because one site has a large omega, the other sites might not have a high probability of having the same omega, even though they might also be under positive selection. This leads to the site with the largest omega being identified with confidence, while the others have more moderate probabilities of being under positive selection.)
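As a concrete illustration of how these per-site class probabilities are used, here is a small, hypothetical Python sketch (the function name and the 0.95 cutoff are illustrative choices, not part of MrBayes) that flags codons whose posterior probability of belonging to the omega > 1 class exceeds a threshold:

```python
def positively_selected_sites(site_probs, threshold=0.95):
    """Return 1-based codon numbers inferred to be under positive selection.

    site_probs: for each codon, a (p_purifying, p_neutral, p_positive)
    triple of posterior probabilities for the three Ny98 omega classes
    (omega < 1, omega = 1, omega > 1).
    """
    return [i + 1 for i, (_, _, p_pos) in enumerate(site_probs)
            if p_pos > threshold]

# Three codons: the second is confidently positively selected, the
# third only moderately so (the situation described in the text above).
probs = [(0.90, 0.05, 0.05), (0.01, 0.01, 0.98), (0.30, 0.30, 0.40)]
```

With the default 0.95 cutoff only the second codon is reported; lowering the threshold also picks up the moderately supported third codon.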

Note: Version 2.0 of MrBayes had a model that estimates omega for each site individually; the new version only allows the Ny98 model as described above.

  1. First, you need to determine how many generations to discard as burnin (i.e., the number of samples you will have to throw away). Import the file Fitch_HA.nex.p.txt into Excel (link is above; open a new spreadsheet, select Data, Get External Data, Import Text File (if your file does not have a .txt extension, select "enable all document types"), choose Windows format, otherwise accept the defaults; to import the remaining columns of the spreadsheet see step 2 below) and plot the number of generations versus the -LnL values. Determine after how many generations the graph becomes "stationary". The burnin value is that number of generations divided by 500 (since, per the samplefreq setting above, only every 500th generation was sampled; i.e., the burnin value is roughly equal to the number of rows -- not quite, because there is a header). To determine the burnin more accurately, rescale the y-axis (click on the y-axis; if you aim accurately, you'll get a box that allows rescaling). Selecting the values to plot becomes easier if you delete the first row from the spreadsheet (then the first row acts as headers).
    The result (scatterplot of LogL versus generation) might look like this:
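If you prefer a script to Excel, the import and a first, crude burnin guess can be sketched in Python. Assumptions: the .p file is whitespace-delimited, its header lines do not start with a number, and the first two data columns are the generation and the log-likelihood; burnin_guess is an illustrative heuristic, not a MrBayes feature, and nothing replaces actually looking at the plot.

```python
def read_p_file(path):
    """Parse a MrBayes .p file into (generation, lnL) pairs.

    Skips any line whose first field is not a number (the ID line and
    the column-header line).
    """
    rows = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if not fields or not fields[0].lstrip('-').isdigit():
                continue
            rows.append((int(fields[0]), float(fields[1])))
    return rows

def burnin_guess(rows, tolerance=2.0):
    """Index of the first sample whose lnL is within `tolerance` log
    units of the mean over the last half of the run -- a crude stand-in
    for eyeballing where the scatterplot becomes stationary."""
    tail = [lnl for _, lnl in rows[len(rows) // 2:]]
    target = sum(tail) / len(tail)
    for i, (_, lnl) in enumerate(rows):
        if abs(lnl - target) <= tolerance:
            return i
    return len(rows) // 2
```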



  2. Load Fitch_HA.nex.p into Excel. (You need to load the data into 2 sheets, since Excel cannot load more than 256 columns per sheet. To load the data, create a new Excel spreadsheet, go to Data -> Get External Data -> Import Text File, and load the first 256 columns.

    Go to a separate sheet and repeat the "Get External Data" command to load the rest of the data; you need to select (make black) and exclude the first 256 columns (the last imported codon ended on nucleotide 555) -- you need to click the radio button AFTER you have selected the columns to skip!) This file contains the posterior probabilities for each codon (columns) at each sampled generation (rows).

  3. Calculate the average posterior probability for each site of being under positive selection (see the example from Monday). Do not forget to exclude the first N rows as burnin; you should have determined the value of N in the first question of this exercise. To be clear on where the burnin ends, you might want to highlight the rows representing the burnin and select a different font color. (Use the AVERAGE() function of Excel, enter the formula in a cell below the values for the individual trees, and copy the formula to all columns.)
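The column averages can equally be computed in a short script; a Python sketch, assuming you have already parsed the pr+ columns of the .p file into one list of floats per sampled generation:

```python
def column_means(rows, burnin):
    """Average each column over the post-burnin rows -- the script
    equivalent of dragging Excel's AVERAGE() across all pr+ columns.

    rows: one list of per-site probabilities per sampled generation;
    burnin: number of leading rows (samples) to discard.
    """
    kept = rows[burnin:]
    if not kept:
        raise ValueError("burnin discards every sample")
    ncols = len(kept[0])
    return [sum(row[c] for row in kept) / len(kept) for c in range(ncols)]
```

column_means(rows, burnin)[k] is then the average posterior probability that codon k+1 is under positive selection.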

  4. Plot the average posterior probability vs. the site number (select the row in which you calculated the averages, then click Chart and select a bar graph). Write down the codon positions for a few sites with the highest posterior probability of being positively selected. (The columns are named pr+(1,2,3), pr+(4,5,6), and so on; pr+(1,2,3) means the probability of codon #1 (nucleotides #1, #2 and #3) being under positive selection.) (Examples are here and here; check all sheets. Note: these were calculated from only 30,000 generations.)

  5. Create histograms for the omega < 1 and the omega > 1 values. To do this you need to use the Analysis ToolPak of Excel (Tools -> Data Analysis; if you do not see this item in the Tools menu, you need to activate the ToolPak: go to Tools -> Add-Ins... and check "Analysis ToolPak").
    The histogram data will be placed into a separate sheet. Again, do not forget to discard the burnin from consideration. Plot the calculated histogram data as a histogram.
    What is the shape of the distribution?
    The histogram for omega < 1 should look something like this:
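If you want to cross-check the ToolPak output, an equal-width histogram takes only a few lines; a Python sketch (the bin count is an arbitrary illustrative choice, not necessarily what the ToolPak uses):

```python
def histogram(samples, nbins=20):
    """Bin samples into nbins equal-width bins.

    Returns (bin_left_edge, count) pairs -- roughly the table the
    Analysis ToolPak writes to its output sheet before charting.
    """
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / nbins or 1.0           # guard: all samples equal
    counts = [0] * nbins
    for x in samples:
        i = min(int((x - lo) / width), nbins - 1)  # hi itself -> last bin
        counts[i] += 1
    return [(lo + i * width, counts[i]) for i in range(nbins)]
```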



  6. Determine the 95% credibility interval for the omega < 1 value. To do this, sort the posterior probability column in ascending order (select the data you want to sort and go to Data -> Sort...). Again, do not forget to discard the burnin; the easiest might be to actually delete those rows. After sorting, exclude 2.5% of the data at the top and 2.5% at the bottom. The range of the remaining data gives you the 95% credibility interval. (The values for the 95% credibility interval should be approximately 0.31 - 0.52; using a burnin of 55500 generations, 2.5% of the remainder is about 7 samples.)
    What is the 95% credibility interval for the omega value?