'''Cluster analysis''', or '''clustering''', is the [[Statistical classification|classification]] of objects into different groups, or more precisely, the [[partition of a set|partitioning]] of a [[data set]] into [[subset]]s (clusters), so that the data in each subset (ideally) share some common trait - often proximity according to some defined [[metric (mathematics)|distance measure]]. Data clustering is a common technique for [[statistics|statistical]] [[data analysis]], which is used in many fields, including [[machine learning]], [[data mining]], [[pattern recognition]], [[image analysis]] and [[bioinformatics]]. The computational task of classifying the data set into ''k'' clusters is often referred to as '''''k''-clustering'''''.

Besides the term ''data clustering'' (or just ''clustering''), there are a number of terms with similar meanings, including ''cluster analysis'', ''automatic classification'', ''numerical taxonomy'', ''botryology'' and ''typological analysis''.

== Types of clustering ==
Data clustering algorithms can be [[hierarchical]] or partitional. Hierarchical algorithms find successive clusters using previously established clusters. Hierarchical algorithms can be agglomerative ("bottom-up") or divisive ("top-down"). Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters. [[partition of a set|Partitional]] algorithms typically determine all clusters at once, but can also be used as divisive algorithms in [[hierarchical]] clustering.

''Two-way clustering'', ''co-clustering'' or [[biclustering]] are clustering methods where not only the objects are clustered but also the features of the objects, i.e., if the data is represented in a [[data matrix (statistics)|data matrix]], the rows and columns are clustered simultaneously.

Another important distinction is whether the clustering uses symmetric or asymmetric distances. A property of [[Euclidean space]] is that distances are symmetric (the distance from object ''A'' to ''B'' is the same as the distance from ''B'' to ''A''). In other applications (e.g., sequence-alignment methods, see Prinzie & Van den Poel (2006)), this is not the case.

== Distance measure ==
An important step in any clustering is to select a [[Distance|distance measure]], which will determine how the ''similarity'' of two elements is calculated. This will influence the shape of the clusters, as some elements may be close to one another according to one distance and farther away according to another. For example, in a two-dimensional space, the distance between the point (x=1, y=0) and the origin (x=0, y=0) is always 1 according to the usual norms, but the distance between the point (x=1, y=1) and the origin can be 2, \sqrt 2 or 1 under the 1-norm, 2-norm or infinity-norm distance, respectively.

Common distance functions:
* The [[Euclidean distance]] (also called distance [[as the crow flies]] or 2-norm distance). A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.
* The [[Manhattan distance]] (also called taxicab norm or 1-norm)
* The [[Maximum_norm|maximum norm]]
* The [[Mahalanobis distance]] corrects data for different scales and correlations in the variables
* The angle between two vectors can be used as a distance measure when clustering high dimensional data. See [[Inner product space]].
* The [[Hamming distance]] (sometimes edit distance) measures the minimum number of substitutions required to change one member into another.
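The effect of the chosen norm can be checked directly. The following minimal sketch (assuming the NumPy library is available; the point and the three norms are just the example above, not part of any particular clustering method) computes the 1-norm, 2-norm and infinity-norm distances between the point (1, 1) and the origin:

<source lang="python">
import numpy as np

# Distance between the point (1, 1) and the origin under three common norms.
p = np.array([1.0, 1.0])
origin = np.zeros(2)
diff = p - origin

print(np.linalg.norm(diff, ord=1))       # 1-norm (Manhattan): 2.0
print(np.linalg.norm(diff, ord=2))       # 2-norm (Euclidean): 1.414... = sqrt(2)
print(np.linalg.norm(diff, ord=np.inf))  # infinity-norm (maximum): 1.0
</source>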
==Hierarchical clustering==
===Creating clusters===
Hierarchical clustering builds (agglomerative), or breaks up (divisive), a hierarchy of clusters. The traditional representation of this hierarchy is a [[tree data structure|tree]] (called a [[dendrogram]]), with individual elements at one end and a single cluster containing every element at the other. Agglomerative algorithms begin with the individual elements (the leaves of the tree), whereas divisive algorithms begin at the root. (In the figure, the arrows indicate an agglomerative clustering.)

Cutting the tree at a given height will give a clustering at a selected precision. In the following example, cutting after the second row will yield clusters {a} {b c} {d e} {f}. Cutting after the third row will yield clusters {a} {b c} {d e f}, which is a coarser clustering, with a smaller number of larger clusters.

===Agglomerative hierarchical clustering===
For example, suppose this data is to be clustered, and the [[euclidean distance]] is the [[Metric (mathematics)|distance metric]].

[[Image:Clusters.PNG|frame|none|Raw data]]

The hierarchical clustering [[dendrogram]] would be as such:

[[Image:Hierarchical_clustering_diagram.png|frame|none|Traditional representation]]

This method builds the hierarchy from the individual elements by progressively merging clusters. In our example, we have six elements {a} {b} {c} {d} {e} and {f}. The first step is to determine which elements to merge in a cluster. Usually, we want to take the two closest elements, according to the chosen distance.

Optionally, one can also construct a [[distance matrix]] at this stage, where the number in the ''i''-th row ''j''-th column is the distance between the ''i''-th and ''j''-th elements. Then, as clustering progresses, rows and columns are merged as the clusters are merged and the distances updated. This is a common way to implement this type of clustering, and has the benefit of caching distances between clusters. A simple agglomerative clustering algorithm is described in the [[single linkage clustering]] page; it can easily be adapted to different types of linkage (see below).

Suppose we have merged the two closest elements ''b'' and ''c''; we now have the following clusters {''a''}, {''b'', ''c''}, {''d''}, {''e''} and {''f''}, and want to merge them further. To do that, we need to take the distance between {a} and {b c}, and therefore need to define the distance between two clusters. Usually the distance between two clusters \mathcal{A} and \mathcal{B} is one of the following:
* The maximum distance between elements of each cluster (also called complete linkage clustering):
:: \max \{\, d(x,y) : x \in \mathcal{A},\, y \in \mathcal{B}\,\}
* The minimum distance between elements of each cluster (also called [[single linkage clustering]]):
:: \min \{\, d(x,y) : x \in \mathcal{A},\, y \in \mathcal{B} \,\}
* The mean distance between elements of each cluster (also called average linkage clustering, used e.g. in [[UPGMA]]):
:: {1 \over {|\mathcal{A}|\cdot|\mathcal{B}|}}\sum_{x \in \mathcal{A}}\sum_{ y \in \mathcal{B}} d(x,y)
* The sum of all intra-cluster variance
* The increase in variance for the cluster being merged ([[Ward's criterion]])
* The probability that candidate clusters spawn from the same distribution function (V-linkage)

Each agglomeration occurs at a greater distance between clusters than the previous agglomeration, and one can decide to stop clustering either when the clusters are too far apart to be merged (distance criterion) or when there is a sufficiently small number of clusters (number criterion).
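As an illustration of the linkage criteria and the distance criterion above, the following sketch (assuming the SciPy library is available; the six two-dimensional points are invented for the example) builds an agglomerative hierarchy and then cuts it at a chosen distance:

<source lang="python">
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Six invented points playing the role of the elements a..f above.
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                   [4.0, 4.0], [4.1, 4.2], [9.0, 9.0]])

# 'single', 'complete', 'average' and 'ward' correspond to the minimum-,
# maximum-, mean-distance and Ward linkage criteria described above.
Z = linkage(points, method='single', metric='euclidean')

# Distance criterion: stop merging once clusters are more than 2.0 apart.
labels = fcluster(Z, t=2.0, criterion='distance')
print(labels)  # e.g. [1 1 1 2 2 3]
</source>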
=== Concept clustering ===
Another variation of the agglomerative clustering approach is [[conceptual clustering]].

==Partitional clustering==
===''K''-means and derivatives===
====''K''-means clustering====
The [[K-means algorithm|''K''-means algorithm]] assigns each point to the cluster whose center (also called centroid) is nearest. The center is the average of all the points in the cluster; that is, its coordinates are the arithmetic mean for each dimension separately over all the points in the cluster.

:''Example:'' The data set has three dimensions and the cluster has two points: ''X'' = (''x''<sub>1</sub>, ''x''<sub>2</sub>, ''x''<sub>3</sub>) and ''Y'' = (''y''<sub>1</sub>, ''y''<sub>2</sub>, ''y''<sub>3</sub>). Then the centroid ''Z'' becomes ''Z'' = (''z''<sub>1</sub>, ''z''<sub>2</sub>, ''z''<sub>3</sub>), where ''z''<sub>1</sub> = (''x''<sub>1</sub> + ''y''<sub>1</sub>)/2, ''z''<sub>2</sub> = (''x''<sub>2</sub> + ''y''<sub>2</sub>)/2 and ''z''<sub>3</sub> = (''x''<sub>3</sub> + ''y''<sub>3</sub>)/2.

The algorithm steps are (J. MacQueen, 1967):
* Choose the number of clusters, ''k''.
* Randomly generate ''k'' clusters and determine the cluster centers, or directly generate ''k'' random points as cluster centers.
* Assign each point to the nearest cluster center.
* Recompute the new cluster centers.
* Repeat the two previous steps until some convergence criterion is met (usually that the assignment hasn't changed).

The main advantages of this algorithm are its simplicity and speed, which allow it to run on large datasets. Its disadvantage is that it does not yield the same result with each run, since the resulting clusters depend on the initial random assignments. It minimizes intra-cluster variance, but does not ensure that the result has a global minimum of variance.
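The assign-then-recompute loop is short enough to sketch directly. The following minimal NumPy implementation is only an illustration of the steps listed above; the data, the iteration cap and the random seed are assumptions made for the example, not part of MacQueen's formulation:

<source lang="python">
import numpy as np

def k_means(points, k, n_iter=100, seed=0):
    """Minimal k-means: assign points to the nearest center, then recompute centers."""
    rng = np.random.default_rng(seed)
    # Directly pick k random points as the initial cluster centers.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: index of the nearest center for every point.
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points.
        # (For brevity, this sketch does not handle the rare empty-cluster case.)
        new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):  # convergence: centers no longer change
            break
        centers = new_centers
    return labels, centers

data = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9], [9.0, 1.0], [8.8, 1.2]])
labels, centers = k_means(data, k=3)
print(labels)
</source>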
====Fuzzy ''c''-means clustering====
In [[fuzzy clustering]], each point has a degree of belonging to clusters, as in [[fuzzy logic]], rather than belonging completely to just one cluster. Thus, points on the edge of a cluster may be ''in the cluster'' to a lesser degree than points in the center of the cluster. For each point ''x'' we have a coefficient giving the degree of being in the ''k''th cluster u_k(x). Usually, the sum of those coefficients is defined to be 1:

: \forall x \sum_{k=1}^{\mathrm{num.}\ \mathrm{clusters}} u_k(x) \ =1.

With fuzzy ''c''-means, the centroid of a cluster is the mean of all points, weighted by their degree of belonging to the cluster:

:\mathrm{center}_k = {{\sum_x u_k(x)^m x} \over {\sum_x u_k(x)^m}}.

The degree of belonging is related to the inverse of the distance to the cluster center,

:u_k(x) = {1 \over d(\mathrm{center}_k,x)},

then the coefficients are normalized and fuzzified with a real parameter m>1 so that their sum is 1. So

:u_k(x) = \frac{1}{\sum_j \left(\frac{d(\mathrm{center}_k,x)}{d(\mathrm{center}_j,x)}\right)^{2/(m-1)}}.

For ''m'' equal to 2, this is equivalent to normalizing the coefficients linearly so that their sum is 1. When ''m'' is close to 1, the cluster center closest to the point is given much more weight than the others, and the algorithm is similar to ''k''-means.

The fuzzy ''c''-means algorithm is very similar to the ''k''-means algorithm:
* Choose a number of clusters.
* Assign randomly to each point coefficients for being in the clusters.
* Repeat until the algorithm has converged (that is, the coefficients' change between two iterations is no more than \epsilon, the given sensitivity threshold):
** Compute the centroid for each cluster, using the formula above.
** For each point, compute its coefficients of being in the clusters, using the formula above.

The algorithm minimizes intra-cluster variance as well, but has the same problems as ''k''-means: the minimum is a local minimum, and the results depend on the initial choice of weights. The [[Expectation-maximization algorithm]] is a more statistically formalized method which includes some of these ideas: partial membership in classes. It has better convergence properties and is in general preferred to fuzzy ''c''-means.
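A compact sketch of this update loop, transcribing the two formulas above directly into NumPy, is given below; the data, the parameter values and the convergence tolerance are invented for the example:

<source lang="python">
import numpy as np

def fuzzy_c_means(points, c, m=2.0, eps=1e-5, n_iter=100, seed=0):
    """Minimal fuzzy c-means following the centroid and membership formulas above."""
    rng = np.random.default_rng(seed)
    # Random initial membership coefficients, normalized to sum to 1 per point.
    u = rng.random((len(points), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # center_k = sum_x u_k(x)^m x / sum_x u_k(x)^m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        # Distance from every point to every center (small constant avoids division by zero).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_k(x) = 1 / sum_j (d(center_k, x) / d(center_j, x))^(2/(m-1))
        new_u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(new_u - u).max() < eps:  # coefficients changed by less than eps
            return new_u, centers
        u = new_u
    return u, centers

data = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9]])
u, centers = fuzzy_c_means(data, c=2)
print(u.round(2))
</source>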
====QT clustering algorithm====
QT (quality threshold) clustering (Heyer et al., 1999) is an alternative method of partitioning data, invented for gene clustering. It requires more computing power than ''k''-means, but does not require specifying the number of clusters ''a priori'', and always returns the same result when run several times.

The algorithm is:
* The user chooses a maximum diameter for clusters.
* Build a candidate cluster for each point by including the closest point, the next closest, and so on, until the diameter of the cluster surpasses the threshold.
* Save the candidate cluster with the most points as the first true cluster, and remove all points in the cluster from further consideration.
* [[Recursion|Recurse]] with the reduced set of points.

The distance between a point and a group of points is computed using complete linkage, i.e. as the maximum distance from the point to any member of the group (see the "Agglomerative hierarchical clustering" section about distance between clusters).

=== Locality-sensitive hashing ===
[[Locality-sensitive hashing]] can be used for clustering. Feature space vectors are sets, and the metric used is the [[Jaccard distance]]. The feature space can be considered high-dimensional. The ''min-wise independent permutations'' LSH scheme (sometimes MinHash) is then used to put similar items into buckets. With just one set of hashing methods, there are only clusters of very similar elements. By seeding the hash functions several times (e.g., 20), it is possible to get bigger clusters. [http://www2007.org/program/paper.php?id=570 Google News personalization: scalable online collaborative filtering]

=== Graph-theoretic methods ===
[[Formal concept analysis]] is a technique for generating clusters of objects and attributes, given a [[bipartite graph]] representing the relations between the objects and attributes. Other methods for generating ''overlapping clusters'' (a [[Cover (topology)|cover]] rather than a [[partition of a set|partition]]) are discussed by Jardine and Sibson (1968) and Cole and Wishart (1970).

== Elbow criterion ==
The elbow criterion is a common [[rule of thumb]] for determining what number of clusters should be chosen, for example for ''k''-means and agglomerative hierarchical clustering. The initial assignment of cluster seeds also has a bearing on the final model performance, so it is appropriate to re-run the cluster analysis multiple times.

The elbow criterion says that you should choose a number of clusters so that adding another cluster doesn't add sufficient information. More precisely, if you graph the percentage of variance explained by the clusters against the number of clusters, the first clusters will add much information (explain a lot of variance), but at some point the marginal gain will drop, giving an angle in the graph (the elbow). This elbow cannot always be unambiguously identified. Percentage of variance explained is the ratio of the between-group variance to the total variance. On the following graph, the elbow is indicated by the red circle. The number of clusters chosen should therefore be 4.

[[Image:DataClustering_ElbowCriterion.JPG|Explained Variance]]

== Spectral clustering ==
Given a set of data points A, the [[similarity matrix]] may be defined as a matrix S where S_{ij} represents a measure of the similarity between points i, j\in A. Spectral clustering techniques make use of the [[Spectrum of a matrix|spectrum]] of the similarity matrix of the data to perform [[dimensionality reduction]] for clustering in fewer dimensions.

One such technique is the ''[[Shi-Malik algorithm]]'', commonly used for [[segmentation (image processing)|image segmentation]]. It partitions points into two sets (S_1,S_2) based on the [[eigenvector]] v corresponding to the second-smallest [[eigenvalue]] of the [[Laplacian matrix]]

:L = I - D^{-1/2}SD^{-1/2}

of S, where D is the diagonal matrix

:D_{ii} = \sum_{j} S_{ij}.

This partitioning may be done in various ways, such as by taking the median m of the components in v, and placing all points whose component in v is greater than m in S_1, and the rest in S_2. The algorithm can be used for hierarchical clustering by repeatedly partitioning the subsets in this fashion.

A related algorithm is the ''[[Meila-Shi algorithm]]'', which takes the [[eigenvector]]s corresponding to the ''k'' largest [[eigenvalue]]s of the matrix P = SD^{-1} for some ''k'', and then invokes another clustering algorithm (e.g. ''k''-means) to cluster points by their respective ''k'' components in these eigenvectors.
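A small sketch of the Shi-Malik bipartitioning step described above follows. The similarity matrix here is built from invented points with a Gaussian kernel, which is an assumption made for the example (the article defines S only abstractly); only NumPy is used:

<source lang="python">
import numpy as np

# Invented 2-D points forming two loose groups.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.0],
              [5.0, 5.0], [5.1, 5.2], [4.9, 5.1]])

# Gaussian-kernel similarity matrix S_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
sigma = 1.0
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
S = np.exp(-sq_dists / (2 * sigma ** 2))

# Normalized Laplacian L = I - D^{-1/2} S D^{-1/2}.
d = S.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(X)) - D_inv_sqrt @ S @ D_inv_sqrt

# Eigenvector of the second-smallest eigenvalue; split the points at its median.
eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
v = eigvecs[:, 1]
labels = (v > np.median(v)).astype(int)
print(labels)  # e.g. [0 0 0 1 1 1] (or with the two labels swapped)
</source>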
==Applications==

=== Biology ===
In [[biology]] '''clustering''' has many applications:
*In imaging, data clustering may take different forms depending on the data dimensionality. For example, the [http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_2D_PointSegmentation_EM_Mixture SOCR EM Mixture model segmentation activity and applet] shows how to obtain point, region or volume classification using the online [[SOCR]] computational libraries.
*In the fields of [[plant]] and [[animal]] [[ecology]], clustering is used to describe and to make spatial and temporal comparisons of communities (assemblages) of organisms in heterogeneous environments; it is also used in [[Systematics|plant systematics]] to generate artificial [[Phylogeny|phylogenies]] or clusters of organisms (individuals) at the species, genus or higher level that share a number of attributes.
*In computational biology and [[bioinformatics]]:
** In [[transcriptome|transcriptomics]], clustering is used to build groups of [[genes]] with related expression patterns (also known as coexpressed genes). Often such groups contain functionally related proteins, such as [[enzyme]]s for a specific [[metabolic pathway|pathway]], or genes that are co-regulated. High throughput experiments using [[expressed sequence tag]]s (ESTs) or [[DNA microarray]]s can be a powerful tool for [[genome annotation]], a general aspect of [[genomics]].
** In [[sequence analysis]], clustering is used to group homologous sequences into [[list of gene families|gene families]]. This is a very important concept in bioinformatics, and [[evolutionary biology]] in general. See evolution by [[gene duplication]].
** In high-throughput genotyping platforms, clustering algorithms are used to automatically assign [[genotypes]].

=== Medicine ===
In [[medical imaging]], such as [[PET scan|PET scans]], cluster analysis can be used to differentiate between different types of [[tissue (biology)|tissue]] and [[blood]] in a three-dimensional image. In this application, actual position does not matter, but the [[voxel]] intensity is considered as a [[coordinate vector|vector]], with a dimension for each image that was taken over time. This technique allows, for example, accurate measurement of the rate at which a radioactive tracer is delivered to the area of interest, without a separate sampling of [[arterial]] blood, an intrusive technique that is most common today.

=== Market research ===
Cluster analysis is widely used in [[market research]] when working with multivariate data from [[Statistical survey|surveys]] and test panels. Market researchers use cluster analysis to partition the general [[population]] of [[consumers]] into market segments and to better understand the relationships between different groups of consumers/potential [[customers]].
* Segmenting the market and determining [[target market]]s
* [[positioning (marketing)|Product positioning]]
* [[New product development]]
* Selecting test markets (see: [[experimental techniques]])

=== Other applications ===
'''Social network analysis''': In the study of [[social networks]], clustering may be used to recognize [[communities]] within large groups of people.

'''Image segmentation''': Clustering can be used to divide a [[digital]] [[image]] into distinct regions for [[border detection]] or [[object recognition]].

'''Data mining''': Many [[data mining]] applications involve partitioning data items into related subsets; the marketing applications discussed above represent some examples. Another common application is the division of documents, such as [[World Wide Web]] pages, into genres.

'''Search result grouping''': In the process of intelligent grouping of the files and websites, clustering may be used to create a more relevant set of search results compared to normal search engines like [[Google]]. There are currently a number of web-based clustering tools such as [[Clusty]].

'''Slippy map optimization''': [[Flickr]]'s map of photos and other map sites use clustering to reduce the number of markers on a map. This makes the map both faster to load and less visually cluttered.

'''IMRT segmentation''': Clustering can be used to divide a fluence map into distinct regions for conversion into deliverable fields in MLC-based radiation therapy.

'''Grouping of shopping items''': Clustering can be used to group all the shopping items available on the web into a set of unique products. For example, all the items on eBay can be grouped into unique products (eBay does not have the concept of a SKU).

'''[[Mathematical chemistry]]''': To find structural similarity, etc.; for example, 3000 chemical compounds were clustered in the space of 90 [[topological index|topological indices]]. Basak S.C., Magnuson V.R., Niemi C.J., Regal R.R., "Determining Structural Similarity of Chemicals Using Graph Theoretic Indices", ''Discr. Appl. Math.'', '''19''', 1988: 17-44.
"Determing Structural Similarity of Chemicals Using Graph Theoretic Indices". ''Discr. Appl. Math.'', '''19''', 1988: 17-44. '''Petroleum Geology''': Cluster Analysis is used to reconstruct missing bottom hole core data or missing log curves in order to evaluate reservoir properties. == Comparisons between data clusterings == There have been several suggestions for a measure of similarity between two clusterings. Such a measure can be used to compare how well different data clustering algorithms perform on a set of data. Many of these measures are derived from the [[matching matrix]] (aka [[confusion matrix]]), e.g., the [[Rand index|Rand measure]] and the Fowlkes-Mallows ''B''''k'' measures.{{Cite journal | author = E. B. Fowlkes & C. L. Mallows | title = A Method for Comparing Two Hierarchical Clusterings | journal = [[Journal of the American Statistical Association]] | volume = 78 | issue = 383 | pages = 553–584 | month = September | year = [[1983]] | doi = 10.2307/2288117 }} [[Marina Meila]]'s Variation of Information metric is a more recent approach for measuring distance between clusterings. It uses [[Mutual information|mutual information]] and [[entropy]] to approximate the distance between two clusterings across the lattice of possible clusterings. ==Algorithms== In recent years considerable effort has been put into improving algorithm performance (Z. Huang, 1998). Among the most popular are ''CLARANS'' (Ng and Han,1994), ''[[DBSCAN]]'' (Ester et al., 1996) and ''BIRCH'' (Zhang et al., 1996). ==See also== * [[Artificial neural network]] (ANN) * [[Canopy clustering algorithm]] * [[Cluster-weighted modeling]] * [[Cophenetic correlation]] * [[Expectation-maximization algorithm|Expectation maximization]] (EM) * [[FLAME clustering]] * [[K-means]] * [[Multidimensional scaling]] * [[Self-organizing map]] * [[Structured data analysis (statistics)]] == Bibliography ==
==Algorithms==
In recent years considerable effort has been put into improving algorithm performance (Z. Huang, 1998). Among the most popular are ''CLARANS'' (Ng and Han, 1994), ''[[DBSCAN]]'' (Ester et al., 1996) and ''BIRCH'' (Zhang et al., 1996).

==See also==
* [[Artificial neural network]] (ANN)
* [[Canopy clustering algorithm]]
* [[Cluster-weighted modeling]]
* [[Cophenetic correlation]]
* [[Expectation-maximization algorithm|Expectation maximization]] (EM)
* [[FLAME clustering]]
* [[K-means]]
* [[Multidimensional scaling]]
* [[Self-organizing map]]
* [[Structured data analysis (statistics)]]

== Bibliography ==
=== Others ===
* Clatworthy, J., Buick, D., Hankins, M., Weinman, J., & Horne, R. (2005). The use and reporting of cluster analysis in health psychology: A review. ''British Journal of Health Psychology'' 10: 329-358.
* Cole, A. J. & Wishart, D. (1970). An improved algorithm for the Jardine-Sibson method of generating overlapping clusters. ''The Computer Journal'' 13(2): 156-163.
* Ester, M., Kriegel, H.P., Sander, J., and Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, Oregon, USA: AAAI Press, pp. 226–231.
* Heyer, L.J., Kruglyak, S. and Yooseph, S. (1999). Exploring Expression Data: Identification and Analysis of Coexpressed Genes. ''Genome Research'' 9: 1106-1115.
* Kotsiantis, S. & Pintelas, P. (2004). Recent Advances in Clustering: A Brief Survey. ''WSEAS Transactions on Information Science and Applications'', Vol 1, No 1 (73-81).
* Huang, Z. (1998). Extensions to the K-means Algorithm for Clustering Large Datasets with Categorical Values. ''Data Mining and Knowledge Discovery'', 2, p. 283-304.
* Jardine, N. & Sibson, R. (1968). The construction of hierarchic and non-hierarchic classifications. ''The Computer Journal'' 11: 177.
* [http://www.inference.phy.cam.ac.uk/mackay/itila/ The on-line textbook: Information Theory, Inference, and Learning Algorithms], by [[David J.C. MacKay]], includes chapters on k-means clustering, soft k-means clustering, and derivations including the E-M algorithm and the variational view of the E-M algorithm.
* MacQueen, J. B. (1967). Some Methods for Classification and Analysis of Multivariate Observations. Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, University of California Press, 1: 281-297.
* Ng, R.T. and Han, J. (1994). Efficient and effective clustering methods for spatial data mining. Proceedings of the 20th VLDB Conference, Santiago, Chile, pp. 144–155.
* Prinzie, A. & Van den Poel, D. (2006). [http://econpapers.repec.org/paper/rugrugwps/05_2F292.htm Incorporating sequential information into traditional classification models by using an element/position-sensitive SAM]. ''Decision Support Systems'' 42 (2): 508-526.
* Romesburg, H. Charles (2004). ''Cluster Analysis for Researchers'', 340 pp. ISBN 1-4116-0617-5, reprint of the 1990 edition published by [[Krieger Pub. Co.]]. A Japanese language translation is available from [[Uchida Rokakuho Publishing Co.]], Ltd., Tokyo, Japan.
* Sheppard, A. G. (1996). The sequence of factor analysis and cluster analysis: Differences in segmentation and dimensionality through the use of raw and factor scores. ''Tourism Analysis'', 1 (Inaugural Volume), 49-57.
* Zhang, T., Ramakrishnan, R., and Livny, M. (1996). BIRCH: An efficient data clustering method for very large databases. Proceedings of ACM SIGMOD Conference, Montreal, Canada, pp. 103–114.

For spectral clustering:
* Jianbo Shi and Jitendra Malik (2000). "Normalized Cuts and Image Segmentation". ''IEEE Transactions on Pattern Analysis and Machine Intelligence'', 22(8): 888-905, August 2000. Available on [http://www.cs.berkeley.edu/~malik/malik-pubs-ptrs.html Jitendra Malik's homepage]
* Marina Meila and Jianbo Shi (2001). "Learning Segmentation with Random Walk". Neural Information Processing Systems, NIPS, 2001. Available from [http://www.cis.upenn.edu/~jshi/jshi_publication.htm Jianbo Shi's homepage]
* See also the referenced articles [http://www.luigidragone.com/datamining/spectral-clustering.html#references here]

For estimating the number of clusters:
* I. O. Kyrgyzov, O. O. Kyrgyzov, H. Maître and M. Campedel. [http://www.tsi.enst.fr/~kyrgyzov/publications.html Kernel MDL to Determine the Number of Clusters], [http://www.springerlink.com/content/j646uqx4p435j530/ MLDM, pp. 203-217, 2007].
* Stan Salvador and Philip Chan, [http://cs.fit.edu/~pkc/papers/ictai04salvador.pdf Determining the Number of Clusters/Segments in Hierarchical Clustering/Segmentation Algorithms], Proc. 16th IEEE Intl. Conf. on Tools with AI, pp. 576-584, 2004.
* Can, F., Ozkarahan, E. A. (1990). "Concepts and effectiveness of the cover coefficient-based clustering methodology for text databases." ''ACM Transactions on Database Systems'' 15 (4): 483-517.

For discussion of the elbow criterion:
* Aldenderfer, M.S., Blashfield, R.K. (1984). ''Cluster Analysis''. Newbury Park (CA): Sage.

==External links==
* ''[http://adios.tau.ac.il/compact/ COMPACT - Comparative Package for Clustering Assessment]''. A free Matlab package, 2006.
* P. Berkhin, ''[http://citeseer.ist.psu.edu/berkhin02survey.html Survey of Clustering Data Mining Techniques]'', Accrue Software, 2002.
* Jain, Murty and Flynn: ''[http://citeseer.ist.psu.edu/jain99data.html Data Clustering: A Review]'', ACM Comp. Surv., 1999.
* For another presentation of hierarchical, ''k''-means and fuzzy ''c''-means clustering, see this [http://www.elet.polimi.it/upload/matteucc/Clustering/tutorial_html/index.html introduction to clustering], which also explains mixtures of [[normal distribution|Gaussians]].
* David Dowe, ''[http://www.csse.monash.edu.au/~dld/cluster.html Mixture Modelling page]'' - other clustering and mixture model links.
* A tutorial on clustering [http://gauss.nmsu.edu/~lludeman/video/ch6pr.html]
* [http://www.nerd-cam.com/cluster-results/ An overview of non-parametric clustering and computer vision]
* [http://blog.peltarion.com/2007/04/10/the-self-organized-gene-part-1/ "The Self-Organized Gene"], a tutorial explaining clustering through competitive learning and self-organizing maps.
* [http://cran.r-project.org/web/packages/kernlab/index.html kernlab] - R package for kernel-based machine learning (includes a spectral clustering implementation)
* [http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/ Tutorial] - Tutorial with an introduction to clustering algorithms (k-means, fuzzy-c-means, hierarchical, mixture of Gaussians) plus some interactive demos (Java applets)
* [http://dmoz.org/Computers/Software/Databases/Data_Mining/Public_Domain_Software/ Data Mining Software] - Data mining software frequently utilizes clustering techniques.
* [http://homepages.feis.herts.ac.uk/~nngroup/software.html Java Competitive Learning Application] A suite of unsupervised neural networks for clustering. Written in Java. Complete with all source code.
* [http://dmoz.org/Computers/Artificial_Intelligence/Machine_Learning/Software/ Machine Learning Software] - Also contains much clustering software.
*[http://cism.kingston.ac.uk/people/shihab/dissertation.pdf ''Fuzzy Clustering Algorithms and their Application to Medical Image Analysis''. PhD Thesis, 2001, by A.I. Shihab.]
* [http://www.youtube.com/watch?v=1ZDybXl212Q Cluster Computing and MapReduce Lecture 4]
* [http://factominer.free.fr/ FactoMineR] (free exploratory multivariate data analysis software linked to [[R programming language|R]])
* [http://www.springer.com/statistics/statistical+theory+and+methods/journal/357 The Journal of Classification]. A publication of the [http://thames.cs.rhul.ac.uk/~fionn/classification-society Classification Society of North America] that specializes in the mathematical and statistical theory of cluster analysis.

[[Category:Data mining]]
[[Category:Data analysis]]
[[Category:Data clustering algorithms|*]]
[[Category:Machine learning]]
[[Category:Multivariate statistics]]
[[Category:Knowledge discovery in databases]]

[[ca:Clusterització de dades]]
[[cs:Shluková analýza]]
[[de:Clusteranalyse]]
[[es:Algoritmo de agrupamiento]]
[[fr:Partitionnement de données]]
[[hr:Grupiranje]]
[[it:Clustering]]
[[ja:データ・クラスタリング]]
[[nl:Classificatie]]
[[pl:Analiza skupień]]
[[pt:Clustering]]
[[ru:Кластерный анализ]]
[[sl:Grupiranje]]
[[th:การแบ่งกลุ่มข้อมูล]]
[[vi:Phân nhóm dữ liệu]]
[[zh:数据聚类]]