Dimensionality Reduction Methods for Molecular Motion

Introduction

The study of many biological processes at the molecular level involves understanding how biological molecules (especially proteins) behave dynamically. The three-dimensional shape of these molecules, which we call a conformation, usually determines the chemical action they perform. Both the stable (also called native) shape of a biomolecule and the dynamical deviations from it are important for understanding how it interacts with other molecules, such as pharmaceutical drugs or other complexes. For these reasons, understanding the main shapes and motions of these molecules is of utmost importance.

Current structural biology experimental methods are restricted in the amount of information they can provide regarding protein motions because they were designed mainly to determine the three-dimensional static representation of a molecule. For this reason, in silico methods (i.e., run on a computer) are used to extensively sample protein conformations. In summary, the most popular methods to gather protein conformations are:

The computations required to simulate protein motion in silico are very expensive and involve non-trivial force field evaluations, as explained above. These simulations provide us with the (x, y, z) positions of all atoms in the molecule; for a molecule with N atoms, this amounts to 3N numbers per conformation. For molecules of interesting size (such as proteins) the number of atoms is large, and thus the dimensionality of the obtained data is extremely high. That is, a conformation sample for a protein with N atoms can be thought of as a 3N-dimensional point.

Protein folding is an important biological process. Individual atom motions are very complex, but the overall process seems almost one-dimensional, that is, following a direction from "unfolded" to "folded".

However, many biological processes are known to be very structured at the molecular level, since the constituent atoms self-organize to achieve their bio-chemical goal. An example of such a process is protein folding, the process by which a protein achieves its thermodynamically-stable three-dimensional shape to perform its biological function (a depiction of a protein folding process is shown in figure 1). To study such processes based on data gathered through simulations, there is a need to "summarize" the high-dimensional conformational data. Simply visualizing the time-series of a moving protein as produced by simulation packages does not provide a lot of insight into the process itself. One way of summarizing these motions is to turn conformations into a low-dimensional representation, such as a vector with very few components, that somehow gives the "highlights" of the process. This data analysis process, turning high-dimensional data into low-dimensional data for better interpretation, is called dimensionality reduction and is discussed next.

Dimensionality Reduction

When molecular shapes are sampled throughout some physical-chemical process that involves the motion of the molecule, there is a need to simplify the high-dimensional (albeit redundant) representation of a molecule given as a 3N-dimensional point, since it is believed that the actual degrees of freedom (DoFs) of the process are far fewer, as explained before. The resulting, simplified representation has to be useful for classifying the different conformations along one or more "directions" or "axes" that provide enough discrimination between them.

Dimensionality Reduction techniques aim at analyzing a set of points, given as input, and producing the corresponding low-dimensional representation for each. The goal is to discover the true dimensionality of a data set that is only apparently high-dimensional. There exist mathematical tools to perform automatic dimensionality reduction, based on arbitrary input data in the form of high-dimensional points (not just molecules). Although different techniques achieve their goals in different ways, and have both advantages and disadvantages, the most general definition for dimensionality reduction could be stated as:

Dimensionality Reduction

Some dimensionality reduction methods can also produce other useful information, such as a "direction vector" that can be used to interpolate atomic positions continuously along the main motions (like in PCA, see below). For both a general discussion and specific methodology on dimensionality reduction, refer to [7], and for more information on non-linear methods, see [8].

As a simple example of dimensionality reduction, consider the case of a bending string of beads, as depicted in figure 2. The input data has 3x7 = 21 dimensions (if given as the (x, y, z) coordinates of each bead), but the beads always move collectively from the "bent" arrangement (left) to the "straight" arrangement (right). Under this simplified view, the process can be considered as one-dimensional, and a meaningful axis for it would represent the "degree of straightness" of the system. Using this axis, each string of beads can be substituted by one single number, its "coordinate" along the proposed axis. Thus, the location of a shape along this axis can quickly indicate in what stage of the bending process it is.

Sampled data from a chain of 7 beads can be given by the (x,y,z) positions of each bead, for a total of 21 "apparent" degrees of freedom. However, the process is inherently one-dimensional.

When dimensionality reduction methods are applied to molecular motion data, the goal is to find the main "directions" or "axes" collectively followed by the atoms, and the placement of each input conformation along these axes. The meaning of such axes can be intuitive or abstract, depending on the technique used and how complex the system is. We can reword the definition of dimensionality reduction when working with molecular motion samples as:

Dimensionality Reduction of Molecular Motion Data

Dimensionality reduction methods can be either linear or non-linear. Linear methods typically compute the low-dimensional representation of each input point by a series of mathematical operations involving linear combinations and/or linear matrix operations. Non-linear methods use either non-linear mathematics or modify linear methods with algorithmic techniques that encode the data's "curvature" (such as Isomap, explained later). Both categories of methods have advantages and disadvantages, which will become clear through the rest of this module. More information on different linear and non-linear techniques for dimensionality reduction is given in [7]. Also, a more extensive classification of dimensionality reduction techniques into several categories, along with examples of different algorithms, can be found by navigating this hypertext article.

The remainder of this module describes two dimensionality reduction techniques and their application to molecular motion data. These are Principal Components Analysis (PCA), a linear method, and ISOmetric feature MAPping (Isomap), a non-linear method.

Principal Components Analysis

We will start our discussion of Principal Components Analysis, or PCA, considering a very general data set of points as depicted in figure 3. Each point in this simple data set is given as a 3-dimensional vector (x, y, z) (the discussion will later be turned to the molecular motion domain, and the interpretation of such data). Even though this data set is given as 3-dimensional points, it is obvious from the figure that the data points are distributed mostly on a two-dimensional surface. Our objective is then to find the inherent, 2-dimensional parameterization of this data set. (For a full discussion on PCA, see Principal Components Analysis by I. T. Jolliffe).

Principal Components Analysis
An illustration of PCA. a) A data set given as 3-dimensional points. b) The three orthogonal Principal Components (PCs) for the data, ordered by variance. c) The projection of the data set into the first two PCs, discarding the third one.

For data points of dimensionality M, the goal of PCA is to compute M so-called Principal Components (PCs), which are M-dimensional vectors that are aligned with the directions of maximum variance (in the mathematical sense) of the data. These PCs have the following properties:
  1. They are of unit length and mutually orthogonal, i.e., they form an orthonormal set.
  2. They are ordered by the amount of data variance they account for, from largest to smallest.

For example, in figure 3 b), the PCs have been superimposed with the data set (PCs are drawn with different lengths to illustrate the amount of data variance they account for, but remember they are actually of unit length). Since the PCs are orthonormal, they form a vector basis in terms of which the data set can be expressed. An alternative, equivalent view, is that the PCs become aligned with the canonical axes X,Y,Z,... if the data set is rotated or rigidly transformed so that the directions of maximum cumulative variance are made to coincide with the canonical base. For the simple example, the reader can agree that the last direction of maximum variance, the 3rd in this case, accounts for little or no data variability. It is customary, when projecting the original data set into the principal components, to discard the components that do not add a significant contribution to the data variance. In figure 3 c), the third component has been discarded and the projection of the data onto the first two components is shown. Discarding the least-important components is how dimensionality reduction is performed.

For M-dimensional input data, the Principal Components (PCs) are M-dimensional vectors and have two main uses. These are to:
  1. Project the original points onto the lower-dimensional space spanned by the first few PCs, obtaining a compact, low-dimensional representation of each point.
  2. Synthesize (interpolate) new points by starting from a known point and adding multiples of the PCs.

There is one detail left to introduce before going into the mechanics of computing the principal components. Notice that in figure 3-a the data set is not shown at any particular position in 3D space; that is, the data set could be centered at any point in space, not necessarily around the origin (0, 0, 0). However, in order to compute the principal components, compute the low-dimensional projections, and synthesize new points, the data set is assumed to be centered in every direction. In other words, the data set needs to have its centroid removed before the computations. If new points need to be synthesized using the PCs, the centroid can be added back to place the newly synthesized points in the correct region of space.

To compute the principal components, let X be an Mxn matrix that contains the n M-dimensional data points in its columns. Furthermore, assume that the mean for all dimensions is zero, i.e., the data are centered. The goal is to find an MxM orthonormal transformation matrix P containing the PCs in its columns, such that the projected data

    Y = P^T X

has a diagonal covariance matrix, i.e., the projected coordinates are uncorrelated.

The resulting covariance matrix YY^T can be re-written as:

    YY^T = (P^T X)(P^T X)^T = P^T (XX^T) P

We want YY^T to be a diagonal matrix D, so we can write:

    P^T (XX^T) P = D

So then, by multiplying by P to the left and by P^T to the right:

    XX^T = P D P^T

since P P^T = I (P is orthonormal). Note that the Singular Value Decomposition (SVD) of XX^T yields:

    XX^T = V S W^T

where V and W contain the left and right eigenvectors of XX^T, and S is a diagonal matrix with the eigenvalues. But since XX^T is a symmetric matrix by construction, the left and right eigenvectors coincide, so W = V. So we can write:

    XX^T = V S V^T

This fact, together with the fact that both P and V are orthonormal, means that P = V and D = S (because both D and S are diagonal). In other words, the Principal Components, or PCs, of the data set X are given by the eigenvectors of the covariance matrix XX^T of the original (centered) data. Moreover, the diagonal matrix of eigenvalues, S, is equal to the matrix D, which is the covariance of the projected points Y. Since it is a diagonal matrix, the diagonal contains the variance of the projected data set along each dimension. By retrieving the eigenvectors (or PCs) in order of decreasing eigenvalue, the PCs that explain the most data variance are automatically chosen before the rest. Since most SVD computations return the eigenvectors and eigenvalues in this order anyway, the resulting PCs are already sorted by variance.

It is important to note at this point that the eigenvalues actually correspond to the variance along each PC. By computing the ratio of each eigenvalue s_i to the total sum, one can obtain the fraction of total variance explained by each PC when the data is projected onto it. Subtracting the sum of variance fractions for the first d PCs from 1, we obtain what is known as the residual variance r_d, that is, the amount of variance in the original data left unexplained by discarding the PCs corresponding to the lowest M-d eigenvalues:

    r_d = 1 - (s_1 + s_2 + ... + s_d) / (s_1 + s_2 + ... + s_M)

which is a typical measure of the error made by approximating the data set using only the first d principal components.
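As a small illustration, the residual variance can be computed directly from the eigenvalues returned by the decomposition. The following is a minimal sketch in Python/NumPy, assuming the eigenvalues are already sorted in decreasing order (names are illustrative):

    import numpy as np

    def residual_variance(eigvals, d):
        """Fraction of total variance left unexplained by the first d PCs."""
        eigvals = np.asarray(eigvals, dtype=float)    # eigenvalues, sorted high to low
        explained = eigvals[:d].sum() / eigvals.sum()
        return 1.0 - explained

    # Example: when the first PC dominates, r_1 is small.
    print(residual_variance([5.0, 2.0, 0.5, 0.1], d=1))   # prints about 0.34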

The above derivation can be written as an algorithm fairly easily.

PCA algorithm
  1. Let the input data consist of n observations x_i, each of dimensionality M. Construct an Mxn matrix X of centered observations by subtracting the data mean from each point, so that column i of X equals x_i - <x>, where <x> is the mean of all observations.
  2. Construct the covariance matrix C = XX^T.
  3. Compute the top d eigenvalues and corresponding eigenvectors of C, for example by performing an SVD of C.
  4. The first d Principal Components (PCs) of the data are given by these eigenvectors, which can be placed in the columns of an Mxd matrix P. The residual variance can be computed from the eigenvalues as explained above.
  5. To project the original (centered) points onto the optimal d-dimensional hyperplane, compute the dot product of each point with the PCs to obtain the projections y_i. This can be written as Y = P^T X.
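As an illustration, these five steps translate almost directly into NumPy. The sketch below is a minimal, unoptimized version that follows the conventions of this section (points stored as columns of X, covariance XX^T, projections Y = P^T X); all function and variable names are illustrative:

    import numpy as np

    def pca(points, d):
        """PCA of n M-dimensional points given as an (n, M) array.

        Returns the first d PCs (columns of an (M, d) matrix), the projected
        coordinates (d, n), and all eigenvalues (variances), sorted high to low.
        """
        # Step 1: center the data and store the points as columns of X (M x n).
        mean = points.mean(axis=0)
        X = (points - mean).T

        # Step 2: covariance matrix C = X X^T (M x M).
        C = X @ X.T

        # Step 3: eigen-decomposition via SVD; since C is symmetric and positive
        # semi-definite, the singular vectors are the eigenvectors, already
        # sorted by decreasing eigenvalue.
        U, S, _ = np.linalg.svd(C)

        # Step 4: keep the first d PCs as columns of P.
        P = U[:, :d]

        # Step 5: project the centered data onto the PCs: Y = P^T X.
        Y = P.T @ X
        return P, Y, S

    # Toy usage: 200 nearly planar 3-D points; two PCs capture almost all variance.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 3))
    pts += 0.01 * rng.normal(size=pts.shape)
    P, Y, S = pca(pts, d=2)
    print("residual variance with d = 2:", 1.0 - S[:2].sum() / S.sum())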

PCA is very well established as a dimensionality reduction technique, and efficient algorithms with guaranteed convergence for its computation are readily available. Software packages that perform SVD and PCA are freely available and trivial to use. For example, MATLAB has built-in commands for both, and moderately experienced C and Fortran programmers can use the popular and extremely efficient LAPACK linear algebra package. The concepts explained in this section have not assumed any particular number of dimensions. Even though a 3D example was given at the beginning, the concepts can be lifted to deal with spaces of arbitrary dimensionality, where the input data set can have a large dimension M. PCA has the advantage over other available methods that the principal components have a direct physical interpretation, especially when working with molecular motion data.

PCA of conformational data

In structural bioinformatics we want to apply PCA to a set of molecular conformations, which will serve as our high-dimensional points. The input dimensionality of each point is 3N, where N is the number of atoms in the molecule. We will have n such conformations, gathered through some form of sampling (for example through molecular dynamics simulations), and we want to reduce the dimensionality of each "point" (conformation) for analysis purposes. The data used as input for PCA is a set of atomic position vectors corresponding to different structural conformations, which together constitute a vector set. Each vector in the conformational vector set has dimension 3N and is of the form [x_1, y_1, z_1, x_2, y_2, z_2, ..., x_N, y_N, z_N], where [x_i, y_i, z_i] are the Cartesian coordinates of the i-th atom.

However, before running the PCA procedure outlined above, all conformations need to be aligned with a reference structure first, as discussed in the module Molecular Distance Measures. The reason why alignment is important is that most simulation packages model the exchange of heat between the molecule and a thermal bath in the form of random velocity perturbations on the atoms. These perturbations will in general add non-zero linear and angular momenta to the molecular structure, and not all simulation packages remove them. As a result, molecular conformations that are almost identical in shape but translated/rotated with respect to each other will have significantly different coordinates, and will be considered as different 3N-dimensional points. The reference structure (to which all structures are aligned) can be chosen from the set (e.g., the native structure) or provided from outside the set. Results may vary a little, but aligning all conformations to the same reference structure yields comparable results in general.

After all conformations have been aligned, the PCA procedure can be used exactly as detailed above. The first step is to determine the average vector from the conformational vector set, so it can be subtracted from all conformations to build the matrix X. This is done by computing an average conformation that contains the average for all 3N dimensions of the data set. It is important to note that this "average conformation" no longer represents a physically feasible molecule, since the Cartesian coordinates of all atoms are being averaged throughout the entire data set. When this "average conformation" is subtracted from the aligned input conformations, the "centered" data now becomes atomic displacements, not positions, and so building a covariance matrix for this data makes sense.
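As a small sketch of this pre-processing step (assuming the conformations have already been aligned to a common reference, as discussed above), the flattening and centering might look as follows; names are illustrative:

    import numpy as np

    def build_centered_matrix(conformations):
        """conformations: sequence of aligned (N, 3) atomic coordinate arrays.

        Returns X, a (3N, n) matrix of centered conformations (one per column,
        i.e., atomic displacements), and the "average conformation".
        """
        # Flatten each conformation to a 3N-vector [x1, y1, z1, ..., xN, yN, zN].
        flat = np.stack([c.reshape(-1) for c in conformations])   # (n, 3N)
        mean = flat.mean(axis=0)                                   # average conformation
        X = (flat - mean).T                                        # (3N, n) displacements
        return X, mean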

The PCs for molecular conformation data can be used for the two purposes explained before, i.e., to obtain a low-dimensional representation of each point and to synthesize (or interpolate) new conformations by following the PCs. Now, the PCs have a physical meaning: they represent the "main directions" followed by the molecule's 3N degrees of freedom, or, in other words, the directions followed collectively by the atoms. Interpolating along each PC makes each atom follow a linear trajectory that corresponds to the direction of motion explaining the most data variance. For this reason, the PCs are often called Main Modes of Motion or Collective Modes of Motion when computed from molecular motion data. Interpolating along the first few PCs has the effect of removing atomic "vibrations" that are normally irrelevant for the molecule's larger-scale motions, since the vibration directions have little data variance and would correspond to the last (least important) modes of motion. It is now possible to define a lower-dimensional subspace of protein motion spanned by the first few principal components and to use these to project the initial high-dimensional data onto this subspace. Since the PCs are displacements (and not positions), in order to interpolate conformations along the main modes of motion one has to start from one of the known structures and add a multiple of the PCs as perturbations. In mathematical terms, in order to produce conformations interpolated along the first PC one can compute:

    c(α) = c + α p_1

where c is a molecular conformation from the (aligned) data set, p_1 is the first PC, and the interpolating parameter α can be used to add a deviation from the structure c along the main direction of motion. The parameter α can be either positive or negative. However, it is important to keep in mind that large values of the interpolating parameter will start stretching the molecule beyond physically acceptable shapes, since the PCs make all atoms follow straight lines and will fairly quickly destroy the molecule's topology. A typical way of improving the physical feasibility of interpolated conformations is to subject them to a few iterations of energy minimization under some empirical force field.
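A minimal sketch of this interpolation, assuming the PCs have been computed as above and are available as 3N-dimensional displacement vectors; alpha is the interpolating parameter and all names are illustrative:

    import numpy as np

    def interpolate_along_pc(conformation, pc, alpha):
        """Displace an aligned (N, 3) conformation along one principal component.

        pc    : a 3N-dimensional displacement vector (one PC).
        alpha : interpolating parameter, positive or negative; large values
                quickly distort the molecular geometry, as noted above.
        """
        displaced = conformation.reshape(-1) + alpha * pc
        return displaced.reshape(-1, 3)

    # Example: a short "movie" along the first mode of motion (P from the PCA sketch).
    # frames = [interpolate_along_pc(c, P[:, 0], a) for a in np.linspace(-2.0, 2.0, 9)]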

Although it is possible to determine as many principal components as the number of original variables (3N), PCA is typically used to determine the smallest number of uncorrelated principal components that explain a large percentage of the total variation in the data, as quantified by the residual variance. The exact number of principal components chosen is application-dependent and constitutes a truncated basis of representation.

As an example of the application of PCA to produce low-dimensional points for high-dimensional input, consider the Cyanovirin-N (CV-N) protein depicted in figure 4 a), corresponding to PDB code 2EZM. This protein acts as a virucidal agent against many viruses, such as HIV. Protein folding simulations of this protein can be used as input for PCA. A model of this protein that considers only the alpha-carbon positions has 101 such atoms, for a total of 303 degrees of freedom. Folding/unfolding simulations starting from the native PDB structure produce abundant conformation samples of CV-N along the folding reaction.

CV-N protein and PCA projections. a) A cartoon rendering of the CV-N protein. b) The projection of each simulated conformation onto the first two PCs. The concentrated cluster to the left corresponds to the "folded" protein, and the more spread cluster to the right corresponds to the "unfolded" protein. c) The PCA coordinates used as a basis for a free-energy (probability) plot.

Figure 4 b) shows the projection of each simulated conformation onto the first two principal components. Each point in the plot corresponds to exactly one of the input conformations, which have been classified along the first two PCs. Note that even the first PC alone can be used to approximately classify a conformation as being "folded" or "unfolded", and the second PC explains more of the data variability by assigning a wider spread to the unfolded region (for which the coordinate variability is much larger). The low-dimensional projections can be used as a basis to compute probability and free-energy plots, for example as in Das et al. Figure 4 c) shows such a plot for CV-N, which makes the identification of clusters easier. Using PCA to interpolate conformations along the PCs is not a good idea for protein folding data since all of the protein experiences large, non-linear motions of its atoms. Using only a few PCs would quickly destroy the protein topology when large-scale motions occur. In some cases, however, when only parts of a protein exhibit a small deviation from its equilibrium shape, the main modes of motion can be exploited effectively as the next example shows.
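As a rough outline of how such a free-energy (probability) plot can be produced from the projections (this is only the general idea, not the exact procedure of Das et al.), one can bin the projected points and take the negative logarithm of the bin populations, with energies expressed in units of kT:

    import numpy as np

    def free_energy_surface(y1, y2, bins=50):
        """Estimate -ln(P) over a 2-D grid of the first two projected coordinates.

        y1, y2 : arrays holding the first and second coordinate of every sample.
        Returns the free energy in units of kT (empty bins become infinite).
        """
        counts, xedges, yedges = np.histogram2d(y1, y2, bins=bins)
        prob = counts / counts.sum()
        with np.errstate(divide="ignore"):
            F = -np.log(prob)
        F -= F[np.isfinite(F)].min()   # set the lowest free-energy bin to zero
        return F, xedges, yedges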

As an example of the application of PCA to analyze the main modes of motion, consider the work of Teodoro et al. on the HIV-1 protease, depicted in figure 5. The HIV-1 protease plays a vital role in the maturation of the HIV-1 virus by targeting amino acid sequences in the gag and gag-pol polyproteins. Cleavage of these polyproteins produces proteins that contribute to the structure of the virion, RNA packaging, and condensation of the nucleoprotein core. The active site of HIV-1 protease is formed by the homodimer interface and is capped by two identical beta-hairpin loops from each monomer, which are usually referred to as "flaps". The structure of the HIV-1 protease complexed with an inhibitor is shown in figure 5. The active site structure for the bound form is significantly different from the structure of the unbound conformation. In the bound state the flaps adopt a closed conformation, acting as clamps on the bound inhibitors or substrates, whereas in the unbound conformation the flaps are more open. Molecular dynamics simulations can be run to sample the flexibility of the flaps and produce an input data set of HIV-1 conformations to which PCA can be applied. A backbone-only representation of HIV-1 protease has 594 atoms, which amounts to 1,782 degrees of freedom per conformation.

Alternative conformations for HIV-1 protease. Tube representation of HIV-1 protease (PDB codes 4HVP and 1AID) bound to different inhibitors represented by spheres. The plasticity of the binding site of the protein allows the protease to change its shape in order to accommodate ligands with widely different shapes and volumes.

Applying the PCA procedure as outlined before to a set of HIV-1 samples from simulation produces 1,782-dimensional principal components. Since the physical interpretation of the PCs is quite intuitive in this case, the PC coordinates can be split into groups of 3 to obtain the (x, y, z) components for each of the 594 atoms. These components are 3-dimensional vectors that point in the direction each atom would follow along the first PC. In figure 6 a), the per-atom components of the first PC have been superimposed in purple.
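Splitting a PC into per-atom components is a simple reshape of the 3N-dimensional vector; a minimal sketch:

    import numpy as np

    def per_atom_components(pc):
        """Reshape a 3N-dimensional PC into an (N, 3) array of per-atom vectors."""
        vectors = np.asarray(pc).reshape(-1, 3)
        # The norm of each vector indicates how strongly that atom participates
        # in this mode of motion (e.g., large for the "flap" atoms below).
        magnitudes = np.linalg.norm(vectors, axis=1)
        return vectors, magnitudes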

First mode of motion for the HIV-1 protease. a) The purple arrows are a convenient representation of the first PC grouped every 3 coordinates, (x, y, z) for each atom, to indicate the linear path each atom would follow. Note that the "flaps" have the most important motion component, which is consistent with the simulation data. b) A reference structure (middle) can be interpolated along the first PC in a negative direction (left) or a positive one (right). Using only one degree of freedom, the flap motion can be approximated quite accurately.

Figure 6 b) shows the effect of interpolating HIV-1 conformations along the first PC, or first mode of motion. Starting from an aligned conformation from the original data set, multiples of the first PC can be added to produce interpolated conformations. Note that the first mode of motion corresponds mostly to the "opening" and "closing" of the flaps, as can be inferred from the relative magnitude of the first PC components in the flap region. Thus, interpolating in the direction of the first PC produces an approximation of this motion, but using only one degree of freedom. This way, the complex dynamics of the system and the 1,782 apparent degrees of freedom have been approximated by just one, effectively reducing the dimensionality of the representation.

The residual variance (solid line) and percentage of overall variance explained (dashed line) after each principal component.

Figure 7 (solid line) shows the residual variance left unexplained by discarding the lower-ranked principal components. Residual variance plots always decrease, and in this case, the first PC accounts for approximately 40% of the total variance along the motion (the dashed line shows the percentage of total variance explained up to the given PC). Also, the first 10 PCs account for more than 70% of the total data variance. Given the dominance of only a few degrees of freedom it is possible to represent the flexibility of the protein in a drastically reduced search space.

Non-Linear Methods

PCA falls in the category of what is called a linear method since, mathematically, the PCs are computed as a series of linear operations on the input coordinates. Linear methods such as PCA work well only when the collective atom motions are small (or nearly linear), which is hardly the case for most interesting biological processes. Non-linear dimensionality reduction methods do exist, but they are normally much more computationally expensive and have other disadvantages as well. However, non-linear methods are much more effective in describing complex processes using far fewer parameters.

As a very simple and abstract example of when non-linear dimensionality reduction would be preferred over a linear method such as PCA, consider the data set depicted in figure 8 a). The data is apparently two-dimensional, and naively considering the data variance in this way leads to two "important" principal components, as figure 8 b) shows. However, the data has been sampled from a one-dimensional, but non-linear, process. Figure 8 c) shows the correct interpretation of this data set as non-linear but one-dimensional.

Effects of dimensionality reduction on an inherently non-linear data set. a) The original data given as a two-dimensional set. b) PCA identifies two PCs as contributing significantly to explain the data variance. c) However, the inherent topology (connectivity) of the data helps identify the set as being one-dimensional, but non-linear.

Most interesting molecular processes share this characteristic of being low-dimensional but highly non-linear in nature. For example, in the protein folding example depicted in figure 1, the atom positions follow very complicated, curved paths to achieve the folded shape. However, the process can still be thought of as mainly one-dimensional. Linear methods such as PCA would fail to correctly identify collective modes of motion that do not destroy the protein when followed, much like in the simple example of figure 8.

Several non-linear dimensionality reduction techniques exist; they can be classified as either parametric or non-parametric.

The Isomap algorithm is more intuitive and numerically stable than LLE (Locally Linear Embedding) and will be discussed next.

Isometric Feature Mapping (Isomap)

Isomap was introduced by Tenenbaum et al. in 2000. It improves on a previous technique known as MultiDimensional Scaling (MDS). Isomap aims at capturing the non-linearity of a data set from the set itself, by computing relationships between neighboring points. Both MDS and the non-linear improvement will be presented first, and then the Isomap algorithm will be detailed.

MDS. Multidimensional Scaling (see Cox et al.) is a technique that produces a low-dimensional representation for an input set of n points, where the dimensions are ordered by variance, so it is similar to PCA in this respect. However, MDS does not require coordinates as input, but rather distances between points. More precisely, MDS requires as input a data set of points and a distance measure d(i, j) between any pair of points x_i and x_j. MDS works best when the distance measure is a metric. MDS starts by computing all possible pairwise distances between the input points and placing them in a matrix D so that D_ij = d(i, j). The goal of MDS is to produce, for every point, a set of n Euclidean coordinates such that the Euclidean distances between all pairs match as closely as possible the original pairwise distances D_ij, as depicted in figure 9.

MDS at work. A matrix of interpoint distances D is used to compute a set of Euclidean coordinates for each of the input points.

If the distance measure between points has the metric properties explained above, then n Euclidean coordinates can always be found for a set of n points, such that the Euclidean distances between them match the original distances. In order to obtain these Euclidean coordinates, the coordinates are assumed to have a mean of zero, i.e., they are centered. In this way, the matrix of pairwise distances D can be converted into a matrix of dot products by squaring the distances and performing a "double centering" on them to produce the matrix B of pairwise dot products:

    B = -(1/2) J D^(2) J

where D^(2) is the matrix of squared distances and J = I - (1/n) 1 1^T is the centering matrix.

Let us assume that n Euclidean coordinates exist for each data point and that such coordinates can be placed in the rows of a matrix X. Then, multiplying X by its transpose should equal the matrix B of dot products computed before, i.e., B = XX^T. Finally, in order to retrieve the coordinates in the unknown matrix X, we can perform an SVD of B, which can be expressed as:

    B = V S V^T

where the left and right singular vectors coincide because B is a symmetric matrix. The diagonal matrix of eigenvalues can be split into two identical matrices, each containing the square roots of the eigenvalues, and by doing this a solution for X can be found:

    B = (V S^(1/2)) (S^(1/2) V^T) = XX^T,  so  X = V S^(1/2)

Since X has been computed through an SVD, the coordinates are ordered by data variance as in PCA, so the first coordinate (column) of X is the most important, and so on. Keeping this in mind, the dimensionality reduction can be performed by ignoring the higher dimensions of X and keeping only the first few, as in PCA.
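A compact sketch of classical MDS as just described, combining the double-centering formula with an eigen-decomposition of B (names are illustrative):

    import numpy as np

    def classical_mds(D, d):
        """Classical MDS on an (n, n) matrix of pairwise distances D.

        Returns an (n, d) array of Euclidean coordinates whose pairwise distances
        approximate the entries of D as closely as possible.
        """
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        B = -0.5 * J @ (D ** 2) @ J              # double-centered dot products
        eigvals, eigvecs = np.linalg.eigh(B)     # B is symmetric
        order = np.argsort(eigvals)[::-1]        # sort by decreasing eigenvalue
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        # Keep the top d (non-negative) eigenvalues: X = V * sqrt(S).
        L = np.sqrt(np.clip(eigvals[:d], 0.0, None))
        return eigvecs[:, :d] * L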

Geodesic distances. The notion of "geodesic" distance was originally defined as the length of a geodesic path, where the geodesic path between two points on the surface of the Earth is considered the "shortest path" since one is constrained to travel on the Earth's surface. The concept of geodesic distance can be generalized to any mathematical surface, and defined as "the length of the shortest path between two points that lie on a surface, when the path is constrained to lie on the surface." Figure 10 shows a schematic of this generalized concept of geodesic path and distance.

Geodesic distance. The geodesic distance between the two red points is the length of the geodesic path, which is the shortest path between the points, that lies on the surface.

The Isomap algorithm. The Isomap method augments classical MDS with the notion of geodesic distance, in order to capture the non-linearity of a data set. To achieve its goal, Isomap uses MDS to compute a few Euclidean coordinates that best preserve pairwise geodesic distances, rather than direct distances. Since the coordinates computed by MDS are Euclidean, these can be plotted on a Cartesian set of axes. The effect of Isomap is similar to "unrolling" the non-linear surface into a natural parameterization, as depicted in figure 11. Isomap approximates the geodesic distances from the data itself, by first building a neighborhood graph for the data. A neighborhood graph consists of the original set of points, together with a connection between "neighboring" points (neighboring points are defined as the closest points to a given point, as determined by the distance measure used), as shown in figure 11-b. After the neighborhood graph has been built, it can be used to approximate the geodesic distance between all pairs of points as the shortest-path distance along the graph (red path in figure 11-b). Naturally, the sampling of the data set has to be dense enough to capture the inherent topology of the non-linear space for this approximation to work. MDS then takes these geodesic distances and produces the Euclidean coordinates for the set. Figure 11-c shows two Euclidean coordinates for each point in the example.

Isomap at work on the "swiss roll" data set. a) The input data is given as three-dimensional, but is really two-dimensional in nature. b) Each point in the data set is connected to its neighbors to form a neighborhood graph, overlaid in light grey. Geodesic distances are approximated by computing shortest paths along the neighborhood graph (red). c) MDS applied to the geodesic distances has the effect of "unrolling" the swiss roll into its natural, two-dimensional parameterization. The neighborhood graph has been overlaid for comparison. Now, the Euclidean distances (in blue) approximate the original geodesic distances (in red).

The Isomap method can be put in algorithmic form in the following way:

The Isomap algorithm
  1. Build the neighborhood graph. Take all input points and connect each point to its closest neighbors according to the distance measure used. Different criteria can be used to select the closest neighbors, such as the k closest points or all points within some threshold distance.
  2. Compute all pairwise geodesic distances. These are computed as shortest paths on the neighborhood graph. Efficient algorithms exist for computing all-pairs shortest paths, for example by running Dijkstra's algorithm from every node or by using the Floyd-Warshall algorithm. All geodesic distances can be put in a matrix D.
  3. Perform MDS on the geodesic distances. Take the matrix D and apply MDS to it. That is, apply the double-centering formula explained above to produce B, and compute the low-dimensional coordinates from B's eigenvectors and eigenvalues.
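A minimal sketch of these three steps in Python is shown below. It builds a k-nearest-neighbor graph, computes shortest paths with SciPy, and reuses the classical_mds routine from the MDS sketch above; parameter names are illustrative and no attempt is made to handle a disconnected neighborhood graph:

    import numpy as np
    from scipy.sparse.csgraph import shortest_path
    from scipy.spatial.distance import pdist, squareform

    def isomap(points, k, d):
        """Isomap on an (n, m) array of points, using Euclidean input distances.

        k : number of nearest neighbors used to build the neighborhood graph.
        d : number of low-dimensional coordinates to return.
        """
        # Step 1: pairwise input distances and the k-nearest-neighbor graph.
        D = squareform(pdist(points))
        n = D.shape[0]
        graph = np.zeros_like(D)
        for i in range(n):
            neighbors = np.argsort(D[i])[1:k + 1]     # skip the point itself
            graph[i, neighbors] = D[i, neighbors]     # keep edges to neighbors only
        graph = np.maximum(graph, graph.T)            # make the graph symmetric

        # Step 2: geodesic distances as shortest paths along the graph
        # (Dijkstra from every node; zero entries mean "no edge").
        geodesic = shortest_path(graph, method="D", directed=False)

        # Step 3: classical MDS on the geodesic distances (see the MDS sketch above).
        return classical_mds(geodesic, d)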

The Isomap algorithm captures the non-linearity of the data set automatically, from the data itself. It returns a low-dimensional projection for each point; these projections can be used to better understand the underlying data distribution. However, Isomap does not return "modes of motion" like PCA does, along which other points can be interpolated. Also, Isomap is much more expensive than PCA, since building a neighborhood graph and computing all-pairs shortest paths can have at least quadratic complexity in the number of input points. In addition, there is the hidden cost of the distance measure itself, which can also be quite expensive. Regardless of its computational cost, Isomap has proven extremely useful in describing non-linear processes with few parameters. In particular, the work in Das et al. applied Isomap successfully to the study of a protein folding reaction.

In order to apply Isomap to a set of molecular conformations (gathered, for example, through molecular dynamics simulations), all that is needed is a distance measure between two conformations. The most popular distance measure for conformations is lRMSD, discussed in the module Molecular Distance Measures. There is no need to pre-align all the conformations in this case, since the lRMSD distance already includes pairwise alignment. Thus, the Isomap algorithm as described above can be directly applied to molecular conformations. Choosing an appropriate value for a neighborhood parameter (such as k) may require a bit of experience, though, and it may depend on the amount and variability of the data. It should be noted that now the shape of the low-dimensional surface on which we hypothesize the points lie is unknown to us. In the swiss roll example above, it was obvious that the data was sampled from a two-dimensional surface. For molecular data, however, we do not know, a priori, what the surface looks like. But we know that the process should be low-dimensional and highly non-linear in nature. Of course, the distance measure used as an underlying operation has an impact on the final coordinates as well. The validation of the Isomap method using the lRMSD distance on protein folding data was done in Das et al., where a statistical analysis method was used to demonstrate its usefulness.
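For completeness, a minimal Kabsch-based lRMSD between two conformations (each given as an (N, 3) array of matched atoms) is sketched below; the module Molecular Distance Measures covers this measure in detail, and any equivalent implementation can be plugged into the Isomap sketch above as the distance measure:

    import numpy as np

    def lrmsd(P, Q):
        """Least RMSD between two (N, 3) conformations after optimal superposition."""
        P = P - P.mean(axis=0)                  # remove translation
        Q = Q - Q.mean(axis=0)
        H = P.T @ Q                             # 3x3 correlation matrix
        U, _, Vt = np.linalg.svd(H)
        sign = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T   # optimal rotation, no reflection
        return np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P))

    # Pairwise distance matrix for Isomap (conformations is a list of (N, 3) arrays):
    # D = np.array([[lrmsd(a, b) for b in conformations] for a in conformations])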

Figure 12 shows the results of applying Isomap to the same molecular trajectory used for the CV-N example in the PCA section. CV-N is a protein that is known to have an "intermediate" state in the folding mechanism, and it is also known to fold through a very ordered process following a well-defined "folding route".

The first two Isomap coordinates for the CV-N protein folding trajectory. Left: the projection of each simulated point onto the first two low-dimensional coordinates. Right: A free-energy (probability) surface computed using the Isomap coordinates.

The free-energy plot in figure 12 (right) clearly shows the superior quality of the Isomap coordinates when compared with PCA. Isomap clearly identifies the CV-N intermediate (seen as a free-energy minimum in blue) in between the unfolded and folded states. Also, the highest-probability route connecting the intermediate to the folded state is clearly seen in this rendering. The PCA coordinates did not identify an intermediate or a folding route, since the PCA coordinates are simply a linear projection of the original coordinates. Isomap, on the contrary, finds the underlying connectivity of the data and returns coordinates that clearly show the progression of the reaction along its different stages.

The Isomap procedure can be applied to different molecular models. As another, smaller example, consider the Alanine Dipeptide molecule shown in figure 13 (left). This is an all-atom molecule that has been studied thoroughly and is known to have two predominant shapes: an "extended" shape (also called C5, with a variant called PII) and a "helical" shape (also called alpha, with three variants: right-handed, P-like, and left-handed). The application of the Isomap algorithm to this molecule, without any bias from a priori known information, yields the low-dimensional coordinates on the right of figure 13 (shown again as a free-energy plot). The first two coordinates (top right) are already enough to identify all five major states of the peptide, and the possible transitions between them. Applying PCA to such a simple molecule yields comparable results (not shown), but PCA cannot differentiate the left-handed helix shape from the right-handed one. Only the geodesic formulation of Isomap can.

Alanine Dipeptide. Left: a) A helical conformation. b) An extended conformation. Right: Top: free-energy as a function of the first two Isomap coordinates computed from a Molecular Dynamics trajectory. Bottom: free-energy as a function of the first and third coordinates, for comparison (these explain the data variance better but do not add new states or transitions).

Although the Isomap coordinates describe non-linear molecular processes extremely well, their computation is very expensive, as mentioned earlier. The most expensive step is the computation of the neighborhood graph. Some approximations are possible to speed up its computation, at the expense of some accuracy in the identification of the "true" nearest neighbors for each point. The work in Plaku et al. proposed the use of an approximation technique that allows the computation of nearest-neighbors for high-dimensional data sets orders of magnitude faster, by quickly reducing the dimensionality of all points, and then querying for neighbors. The tradeoff between precision and speed is controllable through several algorithm parameters.