~ubuntu-branches/ubuntu/natty/python-cogent/natty


Viewing changes to cogent/cluster/metric_scaling.py

  • Committer: Bazaar Package Importer
  • Author(s): Steffen Moeller
  • Date: 2010-12-04 22:30:35 UTC
  • mfrom: (1.1.1 upstream)
  • Revision ID: james.westby@ubuntu.com-20101204223035-j11kinhcrrdgg2p2
Tags: 1.5-1
* Bumped Standards-Version to 3.9.1, no changes required.
* New upstream version.
  - major additions to Cookbook
  - added AlleleFreqs attribute to ensembl Variation objects.
  - added getGeneByStableId method to genome objects.
  - added Introns attribute to Transcript objects and an Intron class.
  - added Mann-Whitney test and a Monte-Carlo version
  - exploratory and confirmatory period estimation techniques (suitable for
    symbolic and continuous data)
  - Information theoretic measures (AIC and BIC) added
  - drawing of trees with collapsed nodes
  - progress display indicator support for terminal and GUI apps
  - added parser for Illumina HiSeq2000 and GAIIx sequence files as
    cogent.parse.illumina_sequence.MinimalIlluminaSequenceParser.
  - added parser for FASTQ files, one of the output options for Illumina's
    workflow; also added a cookbook demo (see the sketch after this list).
  - added functionality for parsing of SFF files without the Roche tools in
    cogent.parse.binary_sff
  - thousand-fold performance improvement to nmds
  - >10-fold performance improvements to some Table operations
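
For context on the FASTQ item above: a FASTQ record is four lines (a "@"-prefixed label, the sequence, a "+" separator, and a quality string). The sketch below is a minimal, generic illustration of reading that record structure; it is not PyCogent's implementation, and the function name and file name are hypothetical.

    def minimal_fastq_records(lines):
        # Hypothetical helper, not cogent's API: yields (label, seq, qual)
        # tuples from a four-line-per-record FASTQ stream.
        record = []
        for line in lines:
            record.append(line.rstrip("\n"))
            if len(record) == 4:
                label, seq, _, qual = record
                yield label.lstrip("@"), seq, qual
                record = []

    # usage (file name is hypothetical):
    # for label, seq, qual in minimal_fastq_records(open("reads.fastq")):
    #     process(label, seq, qual)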

--- cogent/cluster/metric_scaling.py (old)
+++ cogent/cluster/metric_scaling.py (new)
@@ -15,9 +15,9 @@
 __author__ = "Catherine Lozupone"
 __copyright__ = "Copyright 2007-2009, The Cogent Project"
 __credits__ = ["Catherine Lozuopone", "Rob Knight", "Peter Maxwell",
-               "Gavin Huttley", "Justin Kuczynski"]
+               "Gavin Huttley", "Justin Kuczynski", "Daniel McDonald"]
 __license__ = "GPL"
-__version__ = "1.4.1"
+__version__ = "1.5.0"
 __maintainer__ = "Catherine Lozupone"
 __email__ = "lozupone@colorado.edu"
 __status__ = "Production"
@@ -80,17 +80,23 @@
     """
     num_rows, num_cols = shape(E_matrix)
     #make a vector of the means for each row and column
-    column_means = add.reduce(E_matrix) / num_rows
+    #column_means = (add.reduce(E_matrix) / num_rows)
+    column_means = (add.reduce(E_matrix) / num_rows)[:,newaxis]
     trans_matrix = transpose(E_matrix)
     row_sums = add.reduce(trans_matrix)
     row_means = row_sums / num_cols
     #calculate the mean of the whole matrix
     matrix_mean = sum(row_sums) / (num_rows * num_cols)
     #adjust each element in the E matrix to make the F matrix
-    for i, row in enumerate(E_matrix):
-        for j, val in enumerate(row):
-            E_matrix[i,j] = E_matrix[i,j] - row_means[i] - \
-                    column_means[j] + matrix_mean
+
+    E_matrix -= row_means
+    E_matrix -= column_means
+    E_matrix += matrix_mean
+
+    #for i, row in enumerate(E_matrix):
+    #    for j, val in enumerate(row):
+    #        E_matrix[i,j] = E_matrix[i,j] - row_means[i] - \
+    #                column_means[j] + matrix_mean
     return E_matrix
 
 def run_eig(F_matrix):
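
The second hunk replaces an element-by-element Python loop with NumPy broadcasting: the whole array has its row means and column means subtracted and the grand mean added back, which is the double-centering step of classical metric scaling. Below is a minimal standalone sketch of the same idea, assuming a square symmetric E matrix as used in PCoA; the function and variable names are illustrative, not the module's, and the sketch uses the standard centering orientation, which coincides with the diff's behavior for symmetric input.

    import numpy as np

    def double_center(E):
        # F[i,j] = E[i,j] - row_mean[i] - col_mean[j] + grand_mean
        row_means = E.mean(axis=1)[:, np.newaxis]   # column vector of row means
        col_means = E.mean(axis=0)                  # row vector of column means
        grand_mean = E.mean()
        return E - row_means - col_means + grand_mean

    # usage sketch:
    # E = np.random.rand(5, 5); E = (E + E.T) / 2   # symmetric test matrix
    # F = double_center(E)   # rows and columns of F now average to ~0

Vectorizing this way removes the O(n^2) Python-level loop overhead, which is the kind of change behind the performance improvements noted in the changelog.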