Exploratory data analysis


In statistics, exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task. Exploratory data analysis was promoted by John Tukey to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. EDA is different from initial data analysis (IDA), which focuses more narrowly on checking assumptions required for model fitting and hypothesis testing, and handling missing values and making transformations of variables as needed. EDA encompasses IDA.





Overview

Tukey defined data analysis in 1961 as: "[P]rocedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."

Tukey's championing of EDA encouraged the development of statistical computing packages, especially S at Bell Labs. The S programming language inspired the systems S-PLUS and R. This family of statistical-computing environments featured vastly improved dynamic visualization capabilities, which allowed statisticians to identify outliers, trends and patterns in data that merited further study.

Tukey's EDA was related to two other developments in statistical theory: robust statistics and nonparametric statistics, both of which tried to reduce the sensitivity of statistical inferences to errors in formulating statistical models. Tukey promoted the use of the five-number summary of numerical data, the two extremes (maximum and minimum), the median, and the quartiles, because the median and quartiles, being functions of the empirical distribution, are defined for all distributions, unlike the mean and standard deviation; moreover, the quartiles and median are more robust to skewed or heavy-tailed distributions than those traditional summaries. The packages S, S-PLUS, and R included routines using resampling statistics, such as Quenouille and Tukey's jackknife and Efron's bootstrap, which are nonparametric and robust (for many problems).
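
For concreteness, here is a minimal sketch (assuming Python with NumPy, which the article does not prescribe) that computes the five-number summary of a heavy-tailed sample and a percentile-bootstrap confidence interval for its median:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_t(df=3, size=200)  # a heavy-tailed sample

    # Five-number summary: minimum, lower quartile, median, upper quartile, maximum.
    five_num = np.percentile(data, [0, 25, 50, 75, 100])
    print("min, Q1, median, Q3, max:", np.round(five_num, 3))

    # Percentile bootstrap: resample with replacement, collect the medians,
    # and read off the middle 95% of their distribution.
    boot_medians = [np.median(rng.choice(data, size=data.size, replace=True))
                    for _ in range(2000)]
    lo, hi = np.percentile(boot_medians, [2.5, 97.5])
    print(f"95% bootstrap CI for the median: [{lo:.3f}, {hi:.3f}]")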

Exploratory data analysis, robust statistics, nonparametric statistics, and the development of statistical programming languages facilitated statisticians' work on scientific and engineering problems. Such problems included the fabrication of semiconductors and the understanding of communications networks, which concerned Bell Labs. These statistical developments, all championed by Tukey, were designed to complement the analytic theory of testing statistical hypotheses, particularly the Laplacian tradition's emphasis on exponential families.





Development

John W. Tukey wrote the book Exploratory Data Analysis in 1977. Tukey held that too much emphasis in statistics was placed on statistical hypothesis testing (confirmatory data analysis); more emphasis needed to be placed on using data to suggest hypotheses to test. In particular, he held that confusing the two types of analyses and employing them on the same set of data can lead to systematic bias owing to the issues inherent in testing hypotheses suggested by the data.

The objectives of EDA are to:

  • Suggest hypotheses about the causes of observed phenomena
  • Assess assumptions on which statistical inference will be based
  • Support the selection of appropriate statistical tools and techniques
  • Provide a basis for further data collection through surveys or experiments

Many EDA techniques have been adopted into data mining, as well as into big data analytics. They are also being taught to young students as a way to introduce them to statistical thinking.




Techniques

There are a number of tools that are useful for EDA, but EDA is characterized more by the attitude taken than by particular techniques.

Typical graphical techniques used in EDA are the following (a short code sketch of a few of them appears after the list):

  • Box plot
  • Histogram
  • Multi-vari chart
  • Run chart
  • Pareto chart
  • Scatter plot
  • Stem-and-leaf plot
  • Parallel coordinates
  • Odds ratio
  • Multidimensional scaling
  • Targeted projection pursuit
  • Principal component analysis (PCA)
  • Multilinear PCA
  • Dimensionality reduction
  • Nonlinear dimensionality reduction (NLDR)
  • Projection methods such as grand tour, guided tour and manual tour
  • Interactive versions of these plots
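
As a minimal illustration of the first few techniques (a sketch assuming Python with NumPy and Matplotlib, neither of which the article prescribes), the following draws a box plot, a histogram, and a scatter plot from synthetic data:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    x = rng.normal(size=300)
    y = 2 * x + rng.normal(scale=0.8, size=300)

    fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))

    axes[0].boxplot([x, y])          # medians, quartiles and outliers at a glance
    axes[0].set_title("Box plot")

    axes[1].hist(x, bins=30)         # shape of a single distribution
    axes[1].set_title("Histogram")

    axes[2].scatter(x, y, s=10)      # relationship between two variables
    axes[2].set_title("Scatter plot")

    plt.tight_layout()
    plt.show()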

Typical quantitative techniques are the following (a short code sketch follows the list):

  • Median polish
  • Trimean
  • Ordination
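
By way of illustration, the following Python/NumPy sketch (the article names no implementation) covers two of these: the trimean, a weighted average of the median and quartiles, and median polish, which repeatedly sweeps medians out of the rows and columns of a two-way table:

    import numpy as np

    def trimean(x):
        """Tukey's trimean: (Q1 + 2*median + Q3) / 4."""
        q1, med, q3 = np.percentile(x, [25, 50, 75])
        return (q1 + 2 * med + q3) / 4

    def median_polish(table, n_iter=10):
        """Decompose a two-way table into overall + row + column effects
        plus residuals by iteratively subtracting row and column medians."""
        resid = np.asarray(table, dtype=float).copy()
        row = np.zeros(resid.shape[0])
        col = np.zeros(resid.shape[1])
        for _ in range(n_iter):
            r = np.median(resid, axis=1)   # sweep out row medians
            row += r
            resid -= r[:, None]
            c = np.median(resid, axis=0)   # sweep out column medians
            col += c
            resid -= c[None, :]
        # Fold the medians of the effects into a single overall term.
        overall = np.median(row) + np.median(col)
        return overall, row - np.median(row), col - np.median(col), resid

    print(trimean([1, 2, 2, 3, 7, 9, 12]))
    overall, row, col, resid = median_polish([[1, 2, 3],
                                              [4, 5, 6],
                                              [10, 8, 9]])
    print(overall, row, col, resid, sep="\n")

This simplified variant centers the row and column effects once at the end; Tukey's original procedure alternates sweeps until the residuals stabilize.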



History

Many EDA ideas can be traced back to earlier authors, for example:

  • Francis Galton emphasized order statistics and quantiles.
  • Arthur Lyon Bowley used precursors of the stemplot and five-number summary. (Bowley actually used a "seven-figure summary", including the extremes, deciles and quartiles, along with the median; see his Elementary Manual of Statistics (3rd edn., 1920), p. 62, where he defines "the maximum and minimum, median, quartiles and two deciles" as the "seven positions".)
  • Andrew Ehrenberg articulated a philosophy of data reduction (see his book of the same name).

The Open University course Statistics in Society (MDST 242) took the above ideas and merged them with Gottfried Noether's work, which introduced statistical inference via coin-tossing and the median test.




Example

Findings from EDA are often orthogonal to the primary analysis task, as the following example illustrates. The analysis task is to find the variables that best predict the tip that a dining party will give to the waiter. The variables available are tip, total bill, gender, smoking status, time of day, day of the week, and size of the party. The analysis task requires that a regression model be fit with either tip or tip rate as the response variable. The fitted model is

  tip rate = 0.18 - 0.01×size  

which says that as the size of the dining party increases by one person, the predicted tip rate decreases by about one percentage point. Making plots of the data reveals other interesting features not described by this model.
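
A fit of this form can be reproduced with a short sketch (assuming Python with NumPy, Matplotlib, and the "tips" dataset bundled with seaborn, whose columns match the variables listed above; the article itself names no software, and the coefficients will only approximately match those quoted):

    import numpy as np
    import seaborn as sns
    import matplotlib.pyplot as plt

    tips = sns.load_dataset("tips")   # total_bill, tip, sex, smoker, day, time, size
    tip_rate = tips["tip"] / tips["total_bill"]

    # Least-squares fit of tip rate against party size.
    slope, intercept = np.polyfit(tips["size"], tip_rate, 1)
    print(f"tip rate ~ {intercept:.2f} + {slope:.3f} * size")

    # The EDA step: plot the raw data; the scatter shows skewness and
    # outliers that the single fitted line says nothing about.
    plt.scatter(tips["size"], tip_rate, s=12, alpha=0.5)
    xs = np.linspace(tips["size"].min(), tips["size"].max(), 2)
    plt.plot(xs, intercept + slope * xs, color="red")
    plt.xlabel("party size")
    plt.ylabel("tip rate")
    plt.show()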

What is learned from the graphics is different from what could be learned from the modeling. The pictures help the data tell a story: they reveal features of tipping that perhaps were not anticipated in advance.




Software

  • Automated exploratory and data science software (surveyed on KDnuggets).
  • GGobi, free software for interactive data visualization.
  • CMU-DAP (Carnegie-Mellon University Data Analysis Package), FORTRAN source for EDA tools with English-style command syntax, 1977.
  • Graph Commons, a web-based collaborative network mapping, analysis, and publishing platform.
  • Data Applied, a comprehensive web-based data visualization and data mining environment.
  • High-D, for multivariate analysis using parallel coordinates.
  • JMP, an EDA package from SAS Institute.
  • KNIME (Konstanz Information Miner), an open-source data exploration platform based on Eclipse.
  • Orange, an open-source data mining and machine learning software suite.
  • SOCR, which provides a large number of free online tools.
  • TinkerPlots, for upper elementary and middle school students.
  • Weka, an open-source data mining package that includes visualisation and EDA tools such as targeted projection pursuit.

Source of the article: Wikipedia


