Statistics
{{Two other uses|the field of statistics|statistics about [[Wikipedia]]|Wikipedia:Statistics||}}
{{redirect|Statistical science|the review journal|Statistical Science (journal)}}
[[Image:The Normal Distribution.svg|thumb|350px|right|A graph of a [[Normal distribution|normal bell curve]] showing statistics used in [[standardized testing]] assessment. The scales include ''[[standard deviations]], cumulative percentages, percentile equivalents, Z-scores, T-scores, standard nines,'' and ''percentages in standard nines.'']]
'''Statistics''' is a [[Mathematics|mathematical science]] pertaining to the collection, analysis, interpretation or explanation, and presentation of [[data]]. It is applicable to a wide variety of [[academic discipline]]s, from the [[Natural science|natural]] and [[social science]]s to the [[humanities]], government and business.

Statistical methods can be used to summarize or describe a collection of data; this is called '''[[descriptive statistics]]'''. In addition, patterns in the data may be [[mathematical model|modeled]] in a way that accounts for [[random]]ness and uncertainty in the observations, and the model can then be used to draw inferences about the process or population being studied; this is called '''[[inferential statistics]]'''. Together, descriptive and inferential statistics comprise '''applied statistics'''. There is also a discipline called '''[[mathematical statistics]]''', which is concerned with the theoretical basis of the subject.

The word '''''statistics''''' is also the plural of '''''[[statistic]]''''' (singular), which refers to the result of applying a statistical algorithm to a set of data, as in [[economic statistics]], [[crime statistics]], etc.

==History==
:{{main|History of statistics}}
''"Five men, [[Hermann Conring|Conring]], [[Gottfried Achenwall|Achenwall]], [[Johann Peter Süssmilch|Süssmilch]], [[John Graunt|Graunt]] and [[William Petty|Petty]] have been honored by different writers as the founder of statistics,"'' one source claims (Willcox, Walter (1938) ''The Founder of Statistics''. Review of the [[International Statistical Institute]] 5(4): 321-328). Some scholars pinpoint the origin of statistics to 1662, with the publication of "[[Observations on the Bills of Mortality]]" by John Graunt.

Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data. The scope of the discipline broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and the natural and social sciences.

Because of its empirical roots and its applications, statistics is generally considered not a subfield of pure mathematics but rather a distinct branch of applied mathematics. Its mathematical foundations were laid in the 17th century with the development of [[probability theory]] by [[Pascal]] and [[Fermat]]; probability theory arose from the study of games of chance. The [[method of least squares]] was first described by [[Carl Friedrich Gauss]] around 1794. The use of modern [[computer]]s has expedited large-scale statistical computation and has also made possible new methods that would be impractical to perform manually.

==Overview==
In applying statistics to a scientific, industrial, or societal problem, one begins with a process or [[statistical population|population]] to be studied.
This might be a population of people in a country, of crystal grains in a rock, or of goods manufactured by a particular factory during a given period. It may instead be a process observed at various times; data collected about this kind of "population" constitute what is called a [[time series]].

For practical reasons, rather than compiling data about an entire population, one usually studies a chosen subset of the population, called a [[sampling (statistics)|sample]]. Data are collected about the sample in an observational or [[experiment]]al setting. The data are then subjected to statistical analysis, which serves two related purposes: description and inference.
*[[Descriptive statistics]] can be used to summarize the data, either numerically or graphically, to describe the sample. Basic examples of numerical descriptors include the [[mean]] and [[standard deviation]]. Graphical summarizations include various kinds of charts and graphs.
*[[Inferential statistics]] is used to model patterns in the data, accounting for randomness and drawing inferences about the larger population. These inferences may take the form of answers to yes/no questions ([[hypothesis testing]]), estimates of numerical characteristics ([[estimation]]), descriptions of association ([[correlation]]), or modeling of relationships ([[regression analysis|regression]]). Other [[mathematical model|modeling]] techniques include [[ANOVA]], [[time series]], and [[data mining]].
{{Quote box | quote = “… it is only the manipulation of uncertainty that interests us. We are not concerned with the matter that is uncertain. Thus we do not study the mechanism of rain; only whether it will rain.” | source = [[Dennis Lindley]], "The Philosophy of Statistics", ''The Statistician'' (2000). | width = 50% | align= right }}
The concept of correlation is particularly noteworthy. Statistical analysis of a [[data set]] may reveal that two variables (that is, two properties of the population under consideration) tend to vary together, as if they were connected. For example, a study of annual income and age of death among people might find that poor people tend to have shorter lives than affluent people. The two variables are then said to be correlated (positively, in this case). However, one cannot immediately infer the existence of a causal relationship between the two variables. (See [[Correlation does not imply causation]].) The correlated phenomena could be caused by a third, previously unconsidered phenomenon, called a [[lurking variable]] or [[confounding variable]].

If the sample is representative of the population, then inferences and conclusions made from the sample can be extended to the population as a whole. A major problem lies in determining the extent to which the chosen sample is representative. Statistics offers methods to estimate and correct for randomness in the sample and in the data collection procedure, as well as methods for designing robust experiments in the first place. (See [[experimental design]].) The fundamental mathematical concept employed in understanding such randomness is [[probability]].

[[Mathematical statistics]] (also called [[statistical theory]]) is the branch of [[applied mathematics]] that uses probability theory and [[mathematical analysis|analysis]] to examine the theoretical basis of statistics. The use of any statistical method is valid only when the system or population under consideration satisfies the basic mathematical assumptions of the method.
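The short Python sketch below illustrates the descriptive measures and the correlation idea discussed above. It is only a minimal illustration: the variable names and data values are invented for the purpose of the example and are not drawn from any actual study.

<source lang="python">
from math import sqrt

# Hypothetical sample (invented numbers): annual income in thousands,
# and age at death, for eight people.
income = [18, 25, 31, 42, 55, 63, 70, 88]
age_at_death = [66, 70, 69, 74, 78, 77, 81, 84]

def mean(xs):
    """Arithmetic mean: a basic numerical descriptor of a sample."""
    return sum(xs) / len(xs)

def sample_stdev(xs):
    """Sample standard deviation (divide by n - 1)."""
    m = mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two variables."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Descriptive summaries of one variable in the sample.
print(mean(income), sample_stdev(income))

# A value near +1 indicates the two variables tend to rise together.
print(pearson_r(income, age_at_death))
</source>

Here the mean and standard deviation summarize each variable on its own, while the Pearson coefficient (a number between −1 and 1) describes how strongly the two variables vary together. As noted above, even a strong correlation does not by itself establish a causal relationship.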
[[Misuse of statistics]] can produce subtle but serious errors in description and interpretation — subtle in the sense that even experienced professionals sometimes make such errors, serious in the sense that they may affect, for instance, social policy, medical practice and the reliability of structures such as bridges. Even when statistics is correctly applied, the results can be difficult for the non-expert to interpret. For example, the [[statistical significance]] of a trend in the data, which measures the extent to which the trend could be caused by random variation in the sample, may not agree with one's intuitive sense of its significance. The set of basic statistical skills (and skepticism) needed by people to deal with information in their everyday lives is referred to as [[statistical literacy]].

==Statistical methods==
===Experimental and observational studies===
A common goal of a statistical research project is to investigate [[causality]], and in particular to draw a conclusion about the effect of changes in the values of predictors or [[independent variable]]s on response or [[dependent variable]]s. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences in an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective.

An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine whether the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and responses are investigated.

An example of an experimental study is the famous [[Hawthorne studies]], which examined the effects of changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the [[assembly line]] workers. They first measured productivity in the plant, then modified the illumination in an area of the plant and checked whether the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). (See [[Hawthorne effect]].) However, the study is heavily criticized today for errors in experimental procedures, specifically the lack of a [[control group]] and [[double-blind|blinding]].

An example of an observational study is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a [[case-control study]], and then look for the number of cases of lung cancer in each group.

The basic steps of an experiment are:
# Planning the research, including determining information sources, research subject selection, and [[ethics|ethical]] considerations for the proposed research and method.
# [[Design of experiments]], concentrating on the system model and the interaction of independent and dependent variables.
# [[summary statistics|Summarizing a collection of observations]] to highlight their common features, suppressing detail. ([[Descriptive statistics]])
# Reaching consensus about what [[statistical inference|the observations tell us]] about the world being observed. ([[Statistical inference]])
# Documenting and presenting the results of the study.

===Levels of measurement===
:''See: [[Levels of measurement|Stanley Stevens' "Scales of measurement" (1946): nominal, ordinal, interval, ratio]]''

There are four types of measurements, or [[level of measurement|levels of measurement]] (measurement scales), used in statistics: nominal, ordinal, interval, and ratio. They have different degrees of usefulness in statistical [[research]]. Ratio measurements have both a meaningful zero value and meaningful distances between measurements; they provide the greatest flexibility in the statistical methods that can be used for analyzing the data. Interval measurements have meaningful distances between measurements but no meaningful zero value (as is the case with IQ measurements or with temperature measurements in [[Fahrenheit]]). Ordinal measurements have imprecise differences between consecutive values but a meaningful order to those values. Nominal measurements have no meaningful rank order among values.

Since variables conforming only to nominal or ordinal measurements cannot reasonably be measured numerically, they are sometimes grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative or [[continuous variables]] because of their numerical nature.

===Statistical techniques===
Some well-known statistical [[Statistical hypothesis testing|test]]s and [[procedure]]s for [[research]] [[observation]]s are listed below, followed by a short illustrative example:
* [[Student's t-test]]
* [[chi-square test]]
* [[Analysis of variance]] (ANOVA)
* [[Mann-Whitney U]]
* [[Regression analysis]]
* [[Factor Analysis]]
* [[Correlation]]
* [[Pearson product-moment correlation coefficient]]
* [[Spearman's rank correlation coefficient]]
* [[Time Series Analysis]]
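As a minimal illustration of one of the procedures above, the sketch below applies a two-sample Student's t-test to invented measurements. It assumes the open-source SciPy library is installed; the group names and numbers are purely hypothetical.

<source lang="python">
# Illustration only: two-sample Student's t-test on invented data.
from scipy import stats

# Hypothetical measurements from a control group and a treatment group.
control   = [4.1, 3.9, 4.5, 4.2, 4.0, 4.3, 3.8, 4.4]
treatment = [4.6, 4.9, 4.4, 5.1, 4.8, 4.7, 5.0, 4.5]

# Test the null hypothesis that the two groups share the same mean.
t_statistic, p_value = stats.ttest_ind(control, treatment)

print(t_statistic, p_value)
# A small p-value (conventionally below 0.05) is read as evidence
# against the null hypothesis of equal means.
</source>

The test asks a yes/no question of the kind described under inferential statistics above: whether the observed difference between the two group means is larger than would plausibly arise from random variation alone.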
==Specialized disciplines==
Some fields of inquiry use applied statistics so extensively that they have [[specialized terminology]]. These disciplines include:
* [[Actuarial science]]
* [[Applied information economics]]
* [[Biostatistics]]
* [[Bootstrapping (statistics)|Bootstrap]] & [[Resampling (statistics)|Jackknife Resampling]]
* [[Business statistics]]
* [[Data analysis]]
* [[Data mining]] (applying statistics and [[pattern recognition]] to discover knowledge from data)
* [[Demography]]
* [[Economic statistics]] (Econometrics)
* [[Energy statistics]]
* [[Engineering statistics]]
* [[Environmental Statistics]]
* [[Epidemiology]]
* [[Geography]] and [[Geographic Information Systems]], more specifically in [[Spatial analysis]]
* [[Image processing]]
* [[Multivariate statistics|Multivariate Analysis]]
* [[Psychological statistics]]
* [[Quality]]
* [[Social statistics]]
* [[Statistical literacy]]
* [[Statistical modeling]]
* [[Statistical survey]]s
* Process analysis and [[chemometrics]] (for analysis of data from [[analytical chemistry]] and [[chemical engineering]])
* [[Structured data analysis (statistics)]]
* [[Survival analysis]]
* [[Reliability engineering]]
* Statistics in various sports, particularly [[Baseball statistics|baseball]] and [[Cricket statistics|cricket]]
Statistics is also a fundamental tool in business and manufacturing. It is used to understand the variability of measurement systems, to control processes (as in [[statistical process control]] or SPC), to summarize data, and to make data-driven decisions. In these roles it is a key tool, and perhaps the only reliable one.

==Statistical computing==
The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class of [[linear model]]s, but powerful computers, coupled with suitable numerical [[algorithms]], caused an increased interest in [[nonlinear regression|nonlinear models]] (especially [[neural networks]] and [[decision tree]]s) as well as the creation of new types, such as [[generalized linear model|generalised linear model]]s and [[multilevel model]]s. Increased computing power has also led to the growing popularity of computationally intensive methods based on [[resampling (statistics)|resampling]], such as permutation tests and the [[bootstrapping (statistics)|bootstrap]], while techniques such as [[Gibbs sampling]] have made Bayesian methods more feasible. The computer revolution has implications for the future of statistics, with a new emphasis on "experimental" and "empirical" statistics. A large number of both general- and special-purpose [[List of statistical packages|statistical software]] packages are now available.

== Misuse ==
:{{main|Misuse of statistics}}
There is a general perception that statistical knowledge is all too frequently intentionally [[Misuse of statistics|misused]] by finding ways to interpret only the data that are favorable to the presenter. A famous saying attributed to [[Benjamin Disraeli]] is, "[[Lies, damned lies, and statistics|There are three kinds of lies: lies, damned lies, and statistics]]"; and Harvard President [[Lawrence Lowell]] wrote in 1909 that statistics, ''"like veal pies, are good if you know the person that made them, and are sure of the ingredients"''.

If various studies appear to contradict one another, the public may come to distrust such studies. For example, one study may suggest that a given diet or activity raises [[blood pressure]], while another may suggest that it lowers blood pressure. The discrepancy can arise from subtle variations in experimental design, such as differences in the patient groups or research protocols, that are not easily understood by the non-expert. (Media reports sometimes omit this vital contextual information entirely.)

By choosing (or rejecting, or modifying) a certain sample, results can be manipulated. Such manipulations need not be malicious or devious; they can arise from unintentional biases of the researcher. The graphs used to summarize data can also be misleading.

Deeper criticisms come from the fact that the hypothesis testing approach, widely used and in many cases required by law or regulation, forces one hypothesis (the [[null hypothesis]]) to be "favored", and can also seem to exaggerate the importance of minor differences in large studies. A difference that is highly statistically significant can still be of no practical significance. (See [[Hypothesis test#Criticism|criticism of hypothesis testing]] and [[Null hypothesis#Controversy|controversy over the null hypothesis]].)
One response is to place greater emphasis on the [[p-value|''p''-value]] itself, rather than simply reporting whether a hypothesis is rejected at a given level of significance. The ''p''-value, however, does not indicate the size of the effect. Another increasingly common approach is to report [[confidence interval]]s. Although these are produced from the same calculations as those of hypothesis tests or ''p''-values, they describe both the size of the effect and the uncertainty surrounding it.

==See also==
{{col-start}}
{{col-break}}
* [[List of basic statistics topics]]
* [[List of statistical topics]]
* [[List of academic statistical associations]]
* [[List of national and international statistical services]]
* [[List of publications in statistics]]
* [[List of statisticians]]
* [[Glossary of probability and statistics]]
* [[Notation in probability and statistics]]
{{col-break}}
* [[Forecasting]]
* [[Foundations of statistics]]
* [[Multivariate statistics]]
* [[Regression analysis]]
* [[Statistical phenomena]]
* [[Statistical consultant]]s
* [[Statistician]]
* [[Structural equation modeling]]
{{col-end}}

==Bibliography==
*{{cite book | last = Best | first = Joel | year = 2001 | title = Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists | publisher = University of California Press | id = ISBN 0-520-21978-3 }}
*{{cite book | last = Desrosières | first = Alain | authorlink = Alain Desrosières | year = 2004 | title = The Politics of Large Numbers: A History of Statistical Reasoning | others = Trans. Camille Naish | publisher = Harvard University Press | id = ISBN 0-674-68932-1 }}
*{{cite book | last = Hacking | first = Ian | authorlink = Ian Hacking | year = 1990 | title = The Taming of Chance | publisher = Cambridge University Press | id = ISBN 0-521-38884-8 }}
*{{cite book | last = Lindley | first = D.V. | authorlink = Dennis Lindley | edition = 2nd ed. | year = 1985 | title = Making Decisions | publisher = John Wiley & Sons | id = ISBN 0-471-90808-8 }}
*{{cite book | last = Tijms | first = Henk | year = 2004 | title = Understanding Probability: Chance Rules in Everyday Life | publisher = Cambridge University Press | id = ISBN 0-521-83329-9 }}

==External links==
===General sites and organizations===
* [http://www.amstat.org/ American Statistical Association]
* [http://freestatistics.altervista.org/ Free Statistics (free and open source software, data and tutorials)]
* [http://isi.cbs.nl/ International Statistical Institute]
* [http://www.imstat.org/ Institute of Mathematical Statistics]
* [http://www.mathcs.carleton.edu/probweb/probweb.html Probability Web]
* [http://www.rss.org.uk/ Royal Statistical Society]
* [http://www.census.gov/main/www/stat_int.html Statistical Agencies (International)]
* [http://lib.stat.cmu.edu/ Statlib: Data, Software and News from the Statistics Community (Carnegie Mellon)]
* [http://statpages.org/ StatPages.net (statistical calculations, free software, etc.)]
* [http://www.statsci.org StatSci.org: Statistical Science Web]

===Online courses and textbooks===
{{wikibooks|Statistics}}
{{Wikiversity|Introduction to Statistics}}
{{Wikiversity|Probability and statistics}}
* [http://www.stats4students.com/index.php Stats4students: Introductory guides to understanding & calculating statistics]
* [http://sportsci.org/resource/stats/ "A New View of Statistics" by Will G.
Hopkins] * [http://www.statsoft.com/textbook/stathome.html Electronic Statistics Textbook (StatSoft,Inc., The Statistics Homepage)] * [http://davidmlane.com/hyperstat/index.html HyperStat Online: An Introductory Statistics Textbook and Online Tutorial for Help in Statistics Courses (David Lane)] * [http://www.itl.nist.gov/div898/handbook/ NIST/SEMATECH e-Handbook of Statistical Methods] * [http://www2.chass.ncsu.edu/garson/pa765/statnote.htm Statnotes: Topics in Multivariate Analysis, by G. David Garson] * [http://www.StatisticalPractice.com "The Little Handbook of Statistical Practice"] by [http://www.tufts.edu/~gdallal/ Dr. Gerard E. Dallal], [[Tufts University]]. ===Other resources=== {{Wikiquote|Statistics}} * [http://www.informath.org/StatDis.pdf Disputes in statistical analyses] a single-page review (PDF). * [http://www.mbhs.edu/~steind00/statistics.html Exploratory Statistics Applets] * [http://www.ericdigests.org/1993/marriage.htm Resampling: A Marriage of Computers and Statistics (ERIC Digests)] * [http://www.ericdigests.org/2000-2/resources.htm Resources for Teaching and Learning about Probability and Statistics (ERIC Digests)] * [http://www.csdassn.org/software_reports.cfm Software Reports (by the International Association for Statistical Computing)] {{Mathematics-footer}} {{Statistics}} [[Category:Applied mathematics]] [[Category:Formal sciences]] [[Category:Evaluation methods]] [[Category:Mathematical science occupations]] [[Category:Statistics| ]] [[Category:Psychometrics]] [[Category:Quality]] [[af:Statistiek]] [[ar:إحصاء]] [[an:Estadistica]] [[az:Statistika]] [[bn:পরিসংখ্যান]] [[zh-min-nan:Thóng-kè-ha̍k]] [[ba:Статистика]] [[bs:Statistika]] [[br:Statistikoù]] [[bg:Статистика]] [[ca:Estadística]] [[cs:Statistika]] [[cy:Ystadegaeth]] [[da:Statistik]] [[de:Statistik]] [[dv:ތަފާސް ހިސާބު]] [[et:Statistika]] [[el:Στατιστική]] [[es:Estadística]] [[eo:Statistiko]] [[eu:Estatistika]] [[fa:آمار]] [[fo:Hagfrøði]] [[fr:Statistiques]] [[fy:Statistyk]] [[fur:Statistiche]] [[ga:Staidreamh]] [[gv:Steat-choontey]] [[gd:Staitistearachd]] [[gl:Estatística]] [[ko:통계학]] [[hi:सांख्यिकी]] [[hr:Statistika]] [[io:Statistiko]] [[id:Statistika]] [[ia:Statistica]] [[iu:ᑭᓯᑦᓯᓯᖕᖑᕐᓗᒋᑦ ᐹᓯᔅᓱᑎᔅᓴᑦ/kisitsisillgurlugitpasissitissat]] [[is:Tölfræði]] [[it:Statistica]] [[he:סטטיסטיקה]] [[jv:Statistika]] [[ka:სტატისტიკა]] [[kk:Статистика]] [[lad:Estadistika]] [[lo:ສະຖິຕິສາດ]] [[la:Statistica]] [[lv:Statistika]] [[lb:Statistik]] [[lt:Statistika]] [[li:Sjtatistiek]] [[hu:Statisztika]] [[mg:Statistika]] [[mr:संख्याशास्त्र]] [[ms:Statistik]] [[nl:Statistiek]] [[ja:統計学]] [[no:Statistikk]] [[nn:Statistikk]] [[pl:Statystyka]] [[pt:Estatística]] [[ro:Statistică matematică]] [[ru:Статистика]] [[sq:Statistika]] [[scn:Statìstica]] [[simple:Statistics]] [[sk:Štatistika]] [[sl:Statistika]] [[sr:Статистика]] [[su:Statistik]] [[fi:Tilastotiede]] [[sv:Statistik]] [[tl:Estadistika]] [[ta:புள்ளியியல்]] [[th:สถิติศาสตร์]] [[vi:Thống kê]] [[tg:Омор]] [[tr:İstatistik]] [[uk:Статистика]] [[vec:Statìstega]] [[fiu-vro:Statistiga]] [[war:Estadistika]] [[yi:סטאטיסטיק]] [[zh-yue:統計學]] [[bat-smg:Statėstėka]] [[zh:统计学]]