Complexity curve: a graphical measure of data complexity and classifier performance

We describe a method for assessing data set complexity based on the estimation of the underlying probability distribution and Hellinger distance. Contrary to some popular measures, it is not focused on the shape of the decision boundary in a classification task but on the amount of available data with respect to the attribute structure. Complexity is expressed in terms of a graphical plot, which we call the complexity curve. We use it to propose a new variant of the learning curve plot called the generalisation curve. The generalisation curve is a standard learning curve with the x-axis rescaled according to the data set complexity curve. It is a classifier performance measure which shows how well the information present in the data is utilised. We perform a theoretical and experimental examination of the properties of the introduced complexity measure and show its relation to the variance component of classification error. We compare it with popular data complexity measures on 81 diverse data sets and show that it can contribute to explaining the performance of specific classifiers on these sets. Then we apply our methodology to a panel of benchmarks of standard machine learning algorithms on typical data sets, demonstrating how it can be used in practice to gain insights into data characteristics and classifier behaviour. Moreover, we show that the complexity curve is an effective tool for reducing the size of the training set (data pruning), making it possible to significantly speed up the learning process without reducing classification accuracy. Associated code is available to download at: https://github.com/zubekj/complexity_curve (open source Python implementation).


INTRODUCTION
It is common knowledge in the machine learning community that the difficulty of classification problems varies greatly. Sometimes it is enough to use a simple out-of-the-box classifier to get a very good result, and sometimes careful preprocessing and model selection are needed to get any non-trivial result at all. The difficulty of a classification task clearly stems from certain properties of the data set, yet we still have problems with defining those properties in general.
Bias-variance decomposition (Domingos, 2000) demonstrates that the error of a predictor can be attributed to three sources: bias, coming from the inability of an algorithm to build an adequate model for the relationship present in data; variance, coming from the inability to estimate correct model parameters from an imperfect data sample; and some irreducible noise. Following this line of reasoning, the difficulty of a classification problem may come partly from the complexity of the relation between the dependent variable and the explanatory variables, partly from the scarcity of information in the training sample, and partly from an overlap between classes. This is identical to the sources of classification difficulty identified by Ho and Basu (2002), who labelled the three components: 'complex decision boundary', 'small sample size and dimensionality induced sparsity' and 'ambiguous classes'.
In this article we introduce a new measure of data complexity targeted at sample sparsity, which is mostly associated with the variance error component. We aim to measure the information saturation of a data set without making any assumptions on the form of the relation between the dependent variable and the rest of the variables, explicitly disregarding the shape of the decision boundary and class ambiguity. Our complexity measure takes into account the number of samples, the number of attributes, and the attributes' internal structure, under a simplifying assumption of attribute independence. The key idea is to check how well a data set can be approximated by its subsets. If the probability distribution induced by a small data sample is very similar to the probability distribution induced by the whole data set, we say that the set is saturated with information and presents an opportunity to learn the relationship between variables without promoting the variance. To operationalise this notion we introduce two kinds of plots:

• Complexity curve - a plot presenting how well subsets of growing size approximate the distribution of attribute values. It is a basic method applicable to clustering, regression and classification problems.
• Conditional complexity curve - a plot presenting how well subsets of growing size approximate the distribution of attribute values conditioned on class. It is applicable to classification problems and more robust against class imbalance or differences in attribute structure between classes.
Since the proposed measure characterises the data sample itself, without making any assumptions as to how that sample will be used, it should be applicable to all kinds of problems involving reasoning from data. In this work we focus on classification tasks, since this is the context in which data complexity measures were previously applied. We compare the area under the complexity curve with popular data complexity measures and show how it complements the existing metrics. We also demonstrate that it is useful for explaining classifier performance by showing that the area under the complexity curve is correlated with the area under the receiver operating characteristic (AUC ROC) for popular classifiers tested on 81 benchmark data sets.
We propose two immediate applications of the developed method. The first one is connected with the fundamental question: how much of the original sample is needed to build a successful predictor? We pursue this topic by proposing a data pruning strategy based on the complexity curve and evaluating it on large data sets. We show that it can be considered an alternative to progressive sampling strategies (Provost et al., 1999).
The second proposed application is classification algorithm comparison. Knowing the characteristics of benchmark data sets, it is possible to check which algorithms perform well in the context of scarce data. To fully utilise this information, we present a graphical performance measure called the generalisation curve. It is based on the learning curve concept and makes it possible to compare the learning process of different algorithms while controlling for the variance of the data. To demonstrate its validity we apply it to a set of popular algorithms. We show that the analysis of generalisation curves points to important properties of the learning algorithms and benchmark data sets, which were previously suggested in the literature.

RELATED LITERATURE
The problem of measuring data complexity in the context of machine learning is broadly discussed. Our beliefs are similar to those of Ho (2008), who stated the need for including data complexity analysis in algorithm comparison procedures. The same need is also discussed in fields outside machine learning, for example in combinatorial optimisation (Smith-Miles and Lopes, 2012).
The general idea is to select a sufficiently diverse set of problems to demonstrate both strengths and weaknesses of the analysed algorithms. The importance of this step was stressed by Macià et al. (2013), who demonstrated how algorithm comparison may be biased by benchmark data set selection, and showed how the choice may be guided by complexity measures. Characterising the problem space with some metrics makes it possible to estimate regions in which certain algorithms perform well (Luengo and Herrera, 2013), and this opens up possibilities of meta-learning (Smith-Miles et al., 2014).
In this context complexity measures are used not only as predictors of classifier performance but, more importantly, as diversity measures capturing various properties of the data sets. It is useful when the measures themselves are diverse and focus on different aspects of the data, to give as complete a characterisation of the problem space as possible. In the latter part of the article we demonstrate that the complexity curve fits well into the landscape of currently used measures, offering new insights into data characteristics.

Measuring data complexity
A set of practical measures of data complexity with regard to classification was introduced by Ho and Basu (2002), and later extended by Ho et al. (2006) and Orriols-Puig et al. (2010). These measures are routinely used in tasks involving classifier evaluation (Macià et al., 2013; Luengo and Herrera, 2013) and meta-learning (Díez-Pastor et al., 2015; Mantovani et al., 2015). Some of them are based on the overlap of values of specific attributes; examples include Fisher's discriminant ratio, volume of the overlap region, attribute efficiency, etc. The others focus directly on class separability; this group includes measures such as the fraction of points on the boundary, linear separability, and the ratio of intra/inter class distance. In contrast to our method, such measures focus on specific properties of the classification problem, measuring decision boundary and class overlap. Topological measures concerned with data sparsity, such as the ratio of attributes to observations, attempt to capture similar properties as the complexity curve.

Li and Abu-Mostafa (2006) defined data set complexity in the context of classification using the general concept of Kolmogorov complexity. They proposed a way to measure data set complexity using the number of support vectors in a support vector machine (SVM) classifier, and analysed the problems of data decomposition and data pruning using this methodology. A graphical representation of the data set complexity called the complexity-error plot was also introduced. The main problem with their approach is the reliance on a very specific and complex machine learning algorithm, which makes the results less universal and prone to biases specific to SVMs. This makes their method unsuitable for comparing diverse machine learning algorithms.
Another approach is to analyse data complexity on the instance level. This kind of analysis was performed by Smith et al. (2013), who attempted to identify which instances are misclassified by various classification algorithms. They devised local complexity measures calculated with respect to single instances and later tried to correlate average instance hardness with the global data complexity measures of Ho and Basu (2002). They discovered that it is mostly correlated with class overlap. This makes our work complementary, since in our complexity measure we deliberately ignore class overlap and individual instance composition to isolate another source of difficulty, namely data scarcity.

Yin et al. (2013) proposed a method of feature selection based on Hellinger distance (a measure of similarity between probability distributions). The idea was to choose features whose conditional distributions (depending on the class) have minimal affinity. In the context of our framework this could be interpreted as measuring data complexity for single features. The authors demonstrated experimentally that for high-dimensional imbalanced data sets their method is superior to popular feature selection methods using the Fisher criterion or mutual information.

Evaluating classifier performance
The basic schema of classifier evaluation is to train a model on one data sample (training set) and then collect its predictions on another, independent data set (testing set). Overall performance is then calculated using some measure taking into account the errors made on the testing set. The most intuitive measure is accuracy, but other measures such as precision, recall or F-measure are widely used. When we are interested in comparing classification algorithms, not just trained classifiers, this basic schema is limited.
It allows only a static comparison of different algorithms under specified conditions. All algorithm parameters are fixed, as are the data sets. The results may not be conclusive, since the same algorithm may perform very well or very poorly depending on the conditions. Such analysis provides a static view of the classification task - there is little to be concluded on the dynamics of the algorithm: its sensitivity to parameter tuning, requirements regarding the sample size, etc.
A different approach, which preserves some of the dynamics, is the receiver operating characteristic (ROC) curve (Fawcett, 2006). It is possible to perform ROC analysis for any binary classifier which returns continuous decisions. The fraction of correctly classified examples in class A is plotted against the fraction of incorrectly classified examples in class B for different values of the classification threshold. The ROC curve captures not only the sole performance of a classifier, but also its sensitivity to the threshold value selection.

Another graphical measure of classifier performance, which visualises its behaviour depending on a threshold value, is the cost curve introduced by Drummond and Holte (2006). They claim that their method is more convenient to use because it allows visualisation of confidence intervals and statistical significance of differences between classifiers. However, it still measures the performance of a classifier in a relatively static situation where only the threshold value changes.

Both ROC curves and cost curves are applicable only to classifiers with continuous outputs and to two-class problems, which limits their usage. What is important is the key idea behind them: instead of giving the user a final solution, they give freedom to choose an optimal classifier according to some criteria from a range of options.
The learning curve technique presents in a similar fashion the impact of the sample size on the classification accuracy. The concept itself originates from psychology, where it is defined as a plot of a learner's performance against the amount of effort invested in learning. Such graphs are widely used in medicine (Schlachta et al., 2001), economics (Nemet, 2006), education (Karpicke and Roediger, 2008), and engineering (Jaber and Glock, 2013). They describe the amount of training required for an employee to perform a certain job. They are also used in the entertainment industry to scale the difficulty level of video games (Sweetser and Wyeth, 2005). In the machine learning context they are sometimes referred to as the performance curve (Sing et al., 2005), and the effort is measured with the number of examples in the training set.
The learning curve is a visualisation of an incremental learning process in which data is accumulated and the accuracy of the model increases. It captures the algorithm's generalisation capabilities: using the curve it is possible to estimate what amount of data is needed to successfully train a classifier and when collecting additional data does not introduce any significant improvement. This property is referred to in the literature as the sample complexity - the minimal size of the training set required to achieve acceptable performance.
As noted above, the standard learning curve in machine learning expresses the effort in terms of the training set size. However, for different data sets the impact of including an additional data sample may be different. Also, within the same set, the effect of including the first 100 samples and the last 100 samples is very different. The generalisation curve - an extension of the learning curve proposed in this article - deals with these problems by using an effort measure founded on data complexity instead of raw sample size.

DEFINITIONS
In the following sections we formally define all measures used throughout the paper. Basic intuitions, assumptions, and implementation choices are discussed. Finally, algorithms for calculating the complexity curve, the conditional complexity curve, and the generalisation curve are given.

Measuring data complexity with samples
In a typical machine learning scenario we want to use the information contained in a collected data sample to solve a more general problem which the data describe. Problem complexity can be naturally measured by the size of a sample needed to describe the problem accurately. We call the problem complex if we need to collect a lot of data in order to get any results. On the other hand, if a small amount of data suffices, we say the problem has low complexity.
How can we determine whether a data sample describes the problem accurately? Any problem can be described with a multivariate probability distribution P of a random vector X. From P we sample our finite data sample D. Now, we can use D to build an estimated probability distribution of X, denoted P_D. P_D is an approximation of P. If P and P_D are identical, we know that the data sample D describes the problem perfectly and collecting more observations would not give us any new information. Analogously, if P_D is very different from P, we can be certain that the sample is too small.
To measure similarity between probability distributions we use Hellinger distance. For two continuous distributions P and P_D with probability density functions p and p_D it is defined as:

$$H^2(P, P_D) = \frac{1}{2} \int \left( \sqrt{p(x)} - \sqrt{p_D(x)} \right)^2 dx.$$

The minimum possible distance 0 is achieved when the distributions are identical; the maximum 1 is achieved when any event with non-zero probability in P has probability 0 in P_D and vice versa. Simplicity and the naturally defined 0-1 range make Hellinger distance a good measure for capturing sample information content.
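For discrete distributions the integral becomes a finite sum, which makes the definition easy to check directly. A minimal sketch (probability vectors over a shared support are assumed):

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance between two discrete distributions
    given as probability vectors over the same support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

print(hellinger_sq([0.5, 0.5], [0.5, 0.5]))  # identical distributions -> 0.0
print(hellinger_sq([1.0, 0.0], [0.0, 1.0]))  # disjoint supports       -> 1.0
```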
In most cases we do not know the underlying probability distribution P representing the problem, and all we have is a data sample D, but we can still use the described complexity measure. Let us picture our data D as the true source of knowledge about the problem and the estimated probability distribution P_D as the reference distribution. Any subset S ⊂ D can be treated as a data sample, and a probability distribution P_S estimated from it will be an approximation of P_D. By calculating H^2(P_D, P_S) we can assess how well a given subset represents the whole available data, i.e. determine its information content.
Obtaining a meaningful estimation of a probability distribution from a data sample poses difficulties in practice. The probability distribution we are interested in is the joint probability on all attributes. In that context most realistic data sets should be regarded as extremely sparse, and naïve probability estimation using frequencies of occurring values would result in a mostly flat distribution. This can be called the curse of dimensionality. Against this problem we apply a naïve assumption that all attributes are independent. This may seem like a radical simplification but, as we will demonstrate later, it yields good results in practice and constitutes a reasonable baseline for common machine learning techniques. Under the independence assumption we can calculate the joint probability density function f from the marginal density functions f_1, ..., f_n:

$$f(x_1, x_2, \ldots, x_n) = f_1(x_1) f_2(x_2) \cdots f_n(x_n).$$

We will now show the derived formula for Hellinger distance under the independence assumption.
Observe that the Hellinger distance for continuous variables can be expressed in another form:

$$H^2(P, P_D) = \frac{1}{2} \int \left( \sqrt{p(x)} - \sqrt{p_D(x)} \right)^2 dx = \frac{1}{2} \left( \int p(x)\, dx + \int p_D(x)\, dx \right) - \int \sqrt{p(x)\, p_D(x)}\, dx = 1 - \int \sqrt{p(x)\, p_D(x)}\, dx.$$

In the last step we used the fact that the integral of a probability density over its domain must be one.
We will consider two multivariate distributions F and G with density functions:

$$f(x_1, \ldots, x_n) = \prod_{i=1}^{n} f_i(x_i), \qquad g(x_1, \ldots, x_n) = \prod_{i=1}^{n} g_i(x_i).$$

The last formula for Hellinger distance will now expand:

$$H^2(F, G) = 1 - \int \sqrt{f(x)\, g(x)}\, dx = 1 - \prod_{i=1}^{n} \int \sqrt{f_i(x_i)\, g_i(x_i)}\, dx_i.$$

In this form the variables are separated and parts of the formula can be calculated separately.

Practical considerations
Calculating the introduced measure of similarity between data sets in practice poses some difficulties.

First, the derived formula contains a direct multiplication of probabilities, which leads to problems with numerical stability. We increased the stability by switching to the following equivalent formula:

$$H^2(F, G) = 1 - \exp\left( \sum_{i=1}^{n} \log \int \sqrt{f_i(x_i)\, g_i(x_i)}\, dx_i \right).$$

For continuous variables, probability density estimation is routinely done with kernel density estimation (KDE) - a classic technique for estimating the shape of a continuous probability density function from a finite data sample (Scott, 1992). For a sample (x_1, x_2, ..., x_n) the estimated density function has the form:

$$\hat{f}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left( \frac{x - x_i}{h} \right),$$

where K is the kernel function and h is a smoothing parameter - the bandwidth. In our experiments we used the Gaussian function as the kernel. This is a popular choice, which often yields good results in practice. The bandwidth was set according to the modified Scott's rule (Scott, 1992):

$$h = n^{-1/(d+4)},$$

where n is the number of samples and d the number of dimensions.
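A sketch of this estimation pipeline (scipy's gaussian_kde applies Scott's rule by default; the per-attribute integral of sqrt(f_i * g_i) is approximated on a regular grid, and the product over attributes is accumulated as a sum of logarithms):

```python
import numpy as np
from scipy.stats import gaussian_kde

def attribute_affinity(a, b, grid_size=512):
    """Approximate the Bhattacharyya coefficient (integral of sqrt(f*g))
    of two 1-D samples using Gaussian KDE with Scott's bandwidth rule."""
    kde_a = gaussian_kde(a, bw_method="scott")
    kde_b = gaussian_kde(b, bw_method="scott")
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    xs = np.linspace(lo, hi, grid_size)
    return np.sum(np.sqrt(kde_a(xs) * kde_b(xs))) * (xs[1] - xs[0])

def hellinger_sq_product(A, B):
    """Squared Hellinger distance between product distributions estimated
    from data matrices A and B (rows = observations, columns = attributes).
    The product over attributes is computed as exp of a sum of logs
    for numerical stability."""
    log_sum = sum(np.log(attribute_affinity(A[:, i], B[:, i]))
                  for i in range(A.shape[1]))
    return 1.0 - np.exp(log_sum)
```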
In many cases the independence assumption can be supported by preprocessing input data in a certain way. A very common technique which can be applied in this situation is the whitening transform. It transforms any set of random variables into a set of uncorrelated random variables. For a random vector X with a covariance matrix Σ, a new uncorrelated vector Y can be calculated as follows:

$$Y = X P D^{-1/2},$$

where D is the diagonal matrix containing the eigenvalues and P is the matrix of right eigenvectors of Σ (i.e. Σ = P D P^T). Naturally, lack of correlation does not imply independence, but it nevertheless reduces the error introduced by our independence assumption. Furthermore, it blurs the difference between categorical variables and continuous variables, putting them on an equal footing. In all further experiments we use whitening transform preprocessing and then treat all variables as continuous.
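A minimal sketch of the transform (centring the data first and flooring the eigenvalues are our additions, the latter guarding against degenerate covariance):

```python
import numpy as np

def whiten(X):
    """Whitening transform Y = X P D^(-1/2), where Sigma = P D P^T
    is the eigendecomposition of the covariance matrix of X."""
    Xc = X - X.mean(axis=0)                       # centre the data
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    eigvals = np.maximum(eigvals, 1e-12)          # numerical guard
    return Xc @ eigvecs @ np.diag(eigvals ** -0.5)
```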
A more sophisticated method is a signal processing technique known as Independent Component Analysis (ICA) (Hyvärinen and Oja, 2000). It assumes that all components of an observed multivariate signal are mixtures of some independent source signals and that the distribution of values in each source signal is non-Gaussian. Under these assumptions the algorithm attempts to recreate the source signals by splitting the observed signal into components that are as independent as possible. Even if the assumptions are not met, the ICA technique can reduce the impact of attribute interdependencies. Because of its computational complexity we used it as an optional step in our experiments.
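A usage sketch based on scikit-learn's FastICA (the heavy-tailed sources and the mixing matrix here are illustrative):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
S = rng.standard_t(df=1.5, size=(300, 2))       # non-Gaussian source signals
X = S @ np.array([[1.0, 0.5], [0.5, 1.0]])      # observed mixtures

# Recover components that are as independent as possible.
X_ica = FastICA(n_components=2, whiten="unit-variance",
                random_state=0).fit_transform(X)
```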

Machine learning task difficulty
Our data complexity measure can be used for any type of problem described through a multivariate data sample. It is applicable to regression, classification and clustering tasks. The relation between the defined data complexity and the difficulty of a specific machine learning task needs to be investigated. We will focus on the supervised learning case. Classification error will be measured as mean 0-1 error. Data complexity will be measured as the mean Hellinger distance between real and estimated probability distributions of attributes conditioned on the target variable:

$$\frac{1}{m} \sum_{i=1}^{m} H^2\big( P(X \mid Y = y_i),\; P_D(X \mid Y = y_i) \big),$$

where X is the vector of attributes, Y the target variable, and y_1, y_2, ..., y_m the values taken by Y.
It has been shown that the error of an arbitrary classification or regression model can be decomposed into three parts: bias, variance, and noise. Domingos (2000) proposed a universal scheme of decomposition, which can be adapted for different loss functions. For a classification problem and 0-1 loss L, the expected error on a sample x, for which the true label is t and the predicted label given a training set D is y, can be expressed as:

$$E_{D,t}[L(t, y)] = c_1 N(x) + B(x) + c_2 V(x),$$

where B denotes bias, V variance, and N noise. Coefficients c_1 and c_2 are added to make the decomposition consistent for different loss functions. In this case c_2 = 1 if the prediction is unbiased on x (B(x) = 0), and c_2 = -P_D(y = y_* \mid y \neq y_m) otherwise, where y_* is the optimal prediction and y_m the main (most frequent) prediction; an analogous coefficient c_1 rescales the noise term (see Domingos, 2000, for the exact form).
Bias comes from an inability of the applied model to represent the true relation present in data; variance comes from an inability to estimate optimal model parameters from the data sample; noise is inherent to the solved task and irreducible. Since our complexity measure is model agnostic, it clearly does not include the bias component. As it does not take into account the dependent variable, it cannot measure noise either. All that is left to investigate is the relation between our complexity measure and the variance component of the classification error. The variance error component is connected with overfitting, when the model fixates on specific properties of a data sample and loses generalisation capabilities over the whole problem domain. If the training sample represented the problem perfectly and the model was fitted with a perfect optimisation procedure, variance would be reduced to zero. The less representative the training sample is for the whole problem domain, the larger the chance of variance error.

This intuition can be supported by comparing our complexity measure with the error of the Bayes classifier. We will show that they are closely related. Let Y be the target variable taking on values v_1, v_2, ..., v_m, f_i(x) an estimation of P(X = x|Y = v_i) from a finite sample D, and g(y) an estimation of P(Y = y). In such setting the Bayes classifier predicts the label maximising f_i(x) g(v_i), so its 0-1 loss on a sample x with the true label t is:

$$L(t, y) = \begin{cases} 0 & \text{if } \arg\max_{v_i} f_i(x)\, g(v_i) = t, \\ 1 & \text{otherwise.} \end{cases}$$

Let us assume that t = v_j. Observe that the prediction is correct when:

$$f_j(x)\, g(v_j) \geq f_i(x)\, g(v_i) \quad \text{for all } i \neq j,$$

which for the case of equally frequent classes reduces to:

$$f_j(x) \geq f_i(x) \quad \text{for all } i \neq j.$$

We can simultaneously add and subtract the true conditional probabilities:

$$f_j(x) - f_i(x) = \big[ P(X = x|Y = v_j) - P(X = x|Y = v_i) \big] + \big[ f_j(x) - P(X = x|Y = v_j) \big] - \big[ f_i(x) - P(X = x|Y = v_i) \big],$$

so as long as the estimations f_i(x), f_j(x) do not deviate too much from the real distributions, the inequality is satisfied. It will not be satisfied (i.e. an error will take place) only if the estimations deviate from the real distributions in a certain way (i.e. f_j(x) underestimates and f_i(x) overestimates the true values) and the sum of these deviations is greater than the true margin P(X = x|Y = v_j) - P(X = x|Y = v_i). The Hellinger distance between f_i(x) and P(X = x|Y = v_i) measures the deviation. This shows that by minimising Hellinger distance we are also minimising the error of the Bayes classifier. The converse may not be true: not all deviations of probability estimates result in classification error.
In the introduced complexity measure we assumed independence of all attributes, which is analogous to the assumption of naïve Bayes. A small Hellinger distance between class-conditioned attribute distributions induced by sets A and B means that naïve Bayes trained on set A and tested on set B will have only a very slight variance error component. Of course, if the independence assumption is broken, the bias error component may still be substantial.

Complexity curve
Complexity curve is a graphical representation of a data set complexity. It is a plot presenting the expected Hellinger distance between a subset and the whole set versus subset size:

$$CC(n) = E[H^2(P, Q_n)],$$

where P is the empirical probability distribution estimated from the whole set and Q_n is the probability distribution estimated from a random subset of size n ≤ |D|. Let us observe that CC(|D|) = 0, because a subset of size |D| is simply the whole set; for the sake of convenience we assume CC(0) = 1.

To estimate the complexity curve in practice, for each subset size K random subsets are drawn and the mean value of Hellinger distance, along with the standard error, is marked on the plot. Algorithm 1 presents the exact procedure. Parameters K (the number of samples of a specified size) and d (sampling step size) control the trade-off between the precision of the calculated curve and the computation time. In all experiments, unless stated otherwise, we used values K = 20, d = |D|/60. Regular shapes of the obtained curves did not suggest the need for using larger values.

Algorithm 1. Procedure for calculating complexity curve. D - original data set, K - number of random subsets of the specified size.
1. Transform D with whitening transform and/or ICA to obtain D_I.
2. Estimate the probability distribution for each attribute of D_I and calculate the joint probability distribution P.
3. For i in 1 ... |D_I| (with an optional step size d):
   (a) For j in 1 ... K:
      i. Draw subset S_i^j ⊆ D_I such that |S_i^j| = i.
      ii. Estimate the probability distribution for each attribute of S_i^j and calculate the joint probability distribution Q_i^j.
      iii. Calculate the Hellinger distance H^2(P, Q_i^j).
   (b) Calculate mean m_i and standard error s_i of the obtained distances.
The complexity curve is a plot of m_i ± s_i vs i.

Figure 1 presents a sample complexity curve. It demonstrates how by drawing larger subsets of the data we get better approximations of the original distribution, as indicated by the decreasing Hellinger distance. The logarithmic decrease of the distance is characteristic: it means that with a relatively small number of samples we can recover the general characteristics of the distribution, but to model the details precisely we need a lot more data points. The shape of the curve is very regular, with just minimal variations. This means that the subset size has a far greater impact on the Hellinger distance than the composition of the individual subsets.

The shape of the complexity curve captures information on the complexity of the data set. If the data is simple, it is possible to represent it relatively well with just a few instances. In such case, the complexity curve is very steep at the beginning and flattens towards the end of the plot. If the data is complex, the initial steepness of the curve is smaller. That information can be aggregated into a single parameter - the area under the complexity curve (AUCC). If we express the subset size as a fraction of the whole data set, the value of the area under the curve becomes limited to the range [0, 1] and can be used as a universal measure for comparing the complexity of different data sets.
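A compact sketch of Algorithm 1 (hellinger is any set-to-set distance, e.g. the hellinger_sq_product function sketched earlier; the size grid and fixed seed are illustrative choices):

```python
import numpy as np

def complexity_curve(X, hellinger, K=20, steps=60, seed=0):
    """Estimate the complexity curve of data matrix X: for each subset
    size draw K random subsets and record mean and standard error of
    the squared Hellinger distance to the whole set."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    sizes = np.unique(np.linspace(1, n, steps, dtype=int))
    curve = []
    for size in sizes:
        dists = np.array([hellinger(X, X[rng.choice(n, size, replace=False)])
                          for _ in range(K)])
        curve.append((size, dists.mean(), dists.std(ddof=1) / np.sqrt(K)))
    return curve

# Example: curve = complexity_curve(X, hellinger=hellinger_sq_product)
```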

Conditional complexity curve
The complexity curve methodology presented so far deals with the complexity of a data set as a whole.
While this approach gives information about data structure, it may assess the complexity of the classification task incorrectly. This is because the data distribution inside each of the classes may differ greatly from the overall distribution. For example, when the number of classes is larger, or the classes are imbalanced, a random sample large enough to represent the whole data set may be too small to represent some of the classes. To take this into account we introduce the conditional complexity curve. We calculate it by splitting each data sample according to the class value and taking the arithmetic mean of the complexities of each sub-sample. Algorithm 2 presents the exact procedure.

Algorithm 2. Procedure for calculating conditional complexity curve. D - original data set, C - number of classes, N - number of subsets, K - number of samples.
1. Transform D with whitening transform and/or ICA to obtain D_I.
2. Split D_I according to the class into D_I^1, D_I^2, ..., D_I^C.
3. From D_I^1, D_I^2, ..., D_I^C estimate probability distributions P_1, P_2, ..., P_C.
4. For i in 1 ... |D_I| with a step size |D_I|/N:
   (a) For j in 1 ... K:
      i. Draw subset S_i^j ⊆ D_I such that |S_i^j| = i.
      ii. Split S_i^j according to the class into S_i^{j,1}, S_i^{j,2}, ..., S_i^{j,C}.
      iii. From S_i^{j,1}, S_i^{j,2}, ..., S_i^{j,C} estimate probability distributions Q_i^{j,1}, Q_i^{j,2}, ..., Q_i^{j,C}.
      iv. Calculate the mean Hellinger distance: (1/C) Σ_{c=1}^{C} H^2(P_c, Q_i^{j,c}).
   (b) Calculate mean m_i and standard error s_i.
The conditional complexity curve is a plot of m_i ± s_i vs i.
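Step iv of Algorithm 2, sketched in code (this assumes every class is represented in the subset; hellinger is the same set-to-set distance as before):

```python
import numpy as np

def conditional_distance(X, y, subset_idx, hellinger):
    """Mean class-conditional squared Hellinger distance between
    a subset of the data and the whole set (step iv of Algorithm 2)."""
    sub_X, sub_y = X[subset_idx], y[subset_idx]
    return np.mean([hellinger(X[y == c], sub_X[sub_y == c])
                    for c in np.unique(y)])
```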
A comparison of the standard complexity curve and the conditional complexity curve for the iris data set is given in Figure 2. This data set has 3 distinct classes. Our expectation is that estimating conditional distributions for each class would require larger data samples than estimating the overall distribution. The shape of the conditional complexity curve is consistent with this expectation: it is less steep than the standard curve and has a larger AUCC value.

Generalisation curve
Generalisation curve is the proposed variant of the learning curve based on data set complexity. It is a plot presenting the accuracy of a classifier trained on a data subset versus the subset's information content, i.e. its Hellinger distance from the whole set. To construct the plot, a number of subsets of a specified size are drawn, and the mean Hellinger distance and the mean classifier accuracy are marked on the plot. Trained classifiers are always evaluated on the whole data set, which represents the source of full information.
Using such resubstitution in the evaluation procedure may be unintuitive, since the obtained scores do not represent true classifier performance on independent data. However, this strategy corresponds to the information captured by the complexity curve and allows us to utilise the full data set for evaluation without relying on a separate testing sample. Algorithm 3 presents the exact procedure.

Algorithm 3. Procedure for calculating generalisation curve. D - original data set, K - number of random subsets of the specified size.
1. Transform D with whitening transform and/or ICA to obtain D_I.
2. Estimate the probability distribution for each attribute of D_I and calculate the joint probability distribution P.
3. For i in 1 ... |D_I| (with an optional step size d):
   (a) For j in 1 ... K:
      i. Draw subset S_i^j ⊆ D_I such that |S_i^j| = i.
      ii. Estimate the distribution Q_i^j from S_i^j and calculate the Hellinger distance l_i^j = H^2(P, Q_i^j).
      iii. Train the classifier on S_i^j and evaluate it on D to get its accuracy a_i^j.
   (b) Calculate mean l_i and mean a_i.
The generalisation curve is a plot of a_i vs l_i.
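A sketch of Algorithm 3 in the same conventions (clf is any scikit-learn-style classifier; subsets are not stratified here, so very small draws may miss a class):

```python
import numpy as np
from sklearn.base import clone

def generalisation_curve(X, y, clf, hellinger, K=20, steps=30, seed=0):
    """Mean resubstitution accuracy on the whole set plotted against the
    mean Hellinger distance of growing training subsets (Algorithm 3)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    points = []
    for size in np.unique(np.linspace(5, n, steps, dtype=int)):
        dists, accs = [], []
        for _ in range(K):
            idx = rng.choice(n, size, replace=False)
            dists.append(hellinger(X, X[idx]))
            # Train on the subset, evaluate on the full data set.
            accs.append(clone(clf).fit(X[idx], y[idx]).score(X, y))
        points.append((np.mean(dists), np.mean(accs)))
    return points
```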
The standard learning curve and the generalisation curve for the same data and classifier are depicted in Figure 3. The generalisation curve gives more insight into the algorithm's learning dynamics, because it emphasises the initial learning phases in which new information is acquired. In the case of the k-neighbours classifier we can see that it is unable to generalise if the training sample is too small. Then it enters a rapid learning phase, which gradually shifts to a final plateau, when the algorithm is unable to incorporate any new information into the model.
In comparison with the standard learning curve, the generalisation curve should be less dependent on data characteristics and more suitable for the comparison of algorithms. Again, a score which can be easily obtained from such a plot is the area under the curve.

PROPERTIES
To support the validity of the proposed method, we perform an in-depth analysis of its properties. We start from a purely mathematical analysis, giving some intuitions on complexity curve convergence rate and identifying border cases. Then we perform experiments with toy artificial data sets, testing basic assumptions behind the complexity curve. After that we compare it experimentally with other data complexity measures and show its usefulness in explaining classifier performance.

Mathematical properties
Drawing a random subset S_n from a finite data set D of size N corresponds to sampling without replacement. Let us assume that the data set contains k distinct values {v_1, v_2, ..., v_k} occurring with frequencies P = (p_1, p_2, ..., p_k). Q_n = (q_1, q_2, ..., q_k) will be the random vector of frequencies observed in the subset, which follows a multivariate hypergeometric distribution.
The expected value for any single element of Q_n is:

$$E[q_i] = p_i.$$

The probability of obtaining any specific vector of frequencies is:

$$P\big(Q_n = (q_1, q_2, \ldots, q_k)\big) = \frac{\prod_{i=1}^{k} \binom{N p_i}{n q_i}}{\binom{N}{n}}.$$

We will consider the simplest case of a discrete probability distribution estimated through frequency counts, without using the independence assumption. In such case the complexity curve is by definition:

$$CC(n) = E[H^2(P, Q_n)] = \sum_{q} P(Q_n = q)\, H^2(P, q).$$

It is obvious that CC(N) = 0, because when n = N we draw all available data. This means that the complexity curve always converges. We can ask whether it is possible to say anything about the rate of this convergence. This is a question about the upper bound on the tail of the hypergeometric distribution. Such a bound is given by the Hoeffding-Chvátal inequality (Chvátal, 1979; Skala, 2013). For the univariate case it has the following form:

$$P(q_1 \geq p_1 + t) \leq e^{-2t^2 n},$$

which generalises to the multivariate case as:

$$P(|Q_n - P| \geq t) \leq 2^k e^{-2t^2 n},$$

where |Q_n - P| is the total variation distance. Since H^2(P, Q_n) ≤ |Q_n - P|, this guarantees that the complexity curve converges at least as fast.

Now we will consider the special case when n = 1. In this situation the multivariate hypergeometric distribution is reduced to a simple categorical distribution P, and a single draw concentrates the whole probability mass of Q_1 on one category. In such case the expected Hellinger distance is:

$$E[H^2(P, Q_1)] = \sum_{i=1}^{k} p_i \left( 1 - \sqrt{p_i} \right) = 1 - \sum_{i=1}^{k} p_i^{3/2}.$$

This corresponds to the first point of the complexity curve and determines its overall steepness.
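This closed form is easy to verify numerically; a quick Monte Carlo check (the distribution p is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])

# Closed form: E[H^2(P, Q_1)] = 1 - sum_i p_i^(3/2)
print(1 - np.sum(p ** 1.5))            # approx. 0.3927

# A single draw puts all mass on one category, giving
# H^2(P, Q_1) = 1 - sqrt(p_i) with probability p_i.
draws = rng.choice(len(p), size=200_000, p=p)
print(np.mean(1 - np.sqrt(p[draws])))  # matches the closed form
```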
Theorem: E[H^2(P, Q_1)] is maximal for a given k when P is a uniform categorical distribution over k categories, i.e. when p_1 = p_2 = ... = p_k = 1/k, in which case:

$$E[H^2(P, Q_1)] = 1 - k \left( \frac{1}{k} \right)^{3/2} = 1 - \frac{1}{\sqrt{k}}.$$

Proof: We will consider an arbitrary distribution P and the expected Hellinger distance E[H^2(P, Q_1)] = 1 - Σ_i p_i^{3/2}. We can modify this distribution by choosing two states l and k occurring with probabilities p_l and p_k such that p_l - p_k is maximal among all pairs of states. We will redistribute the probability mass between the two states, creating a new distribution P'. The expected Hellinger distance for the distribution P' will be:

$$E[H^2(P', Q_1)] = 1 - \sum_{i \neq k, l} p_i^{3/2} - a^{3/2} - (p_k + p_l - a)^{3/2},$$

where a and p_k + p_l - a are the new probabilities of the two states in P'. We will consider the function

$$f(a) = 1 - \sum_{i \neq k, l} p_i^{3/2} - a^{3/2} - (p_k + p_l - a)^{3/2}$$

and look for its maxima. Its derivative is:

$$f'(a) = -\frac{3}{2} \left( \sqrt{a} - \sqrt{p_k + p_l - a} \right).$$

The derivative is equal to 0 if and only if a = (p_k + p_l)/2. We can easily see that f'(a) > 0 for a < (p_k + p_l)/2 and f'(a) < 0 for a > (p_k + p_l)/2. This means that f(a) reaches its maximum for a = (p_k + p_l)/2. From that we can conclude that for any distribution P, if we produce a distribution P' by redistributing probability mass between two states equally, the following holds:

$$E[H^2(P, Q_1)] \leq E[H^2(P', Q_1)].$$

If we repeat such redistribution an arbitrary number of times, the outcome distribution converges to the uniform distribution. This proves that the uniform distribution leads to the maximal expected Hellinger distance for a given number of states.
Theorem: Increasing the number of categories by dividing an existing category into two new categories always increases the expected Hellinger distance, i.e. if a category l with probability p_l is split into two categories with probabilities a and p_l - a, then:

$$1 - \sum_{i=1}^{k} p_i^{3/2} < 1 - \sum_{i \neq l} p_i^{3/2} - a^{3/2} - (p_l - a)^{3/2}.$$

Proof: Without loss of generality we can assume that a < 0.5 p_l. We can subtract the terms occurring on both sides of the inequality, obtaining:

$$a^{3/2} + (p_l - a)^{3/2} < p_l^{3/2}.$$

Now we can see that:

$$a^{3/2} + (p_l - a)^{3/2} = a\sqrt{a} + (p_l - a)\sqrt{p_l - a} < a\sqrt{p_l} + (p_l - a)\sqrt{p_l} = p_l^{3/2},$$

which concludes the proof.
From the properties stated by these two theorems we can gain some intuitions about complexity curves in general. First, by looking at the formula for the uniform distribution, E[H^2(P, Q_1)] = 1 - 1/√k, we see that the expected distance grows with the number of categories k. The complexity curve will be less steep if the variables in the data set take multiple values and each value occurs with equal probability. This is consistent with our intuition: we need a larger sample to cover such a space and collect information. For a smaller number of distinct values, or for distributions with mass concentrated mostly in a few points, a smaller sample will be sufficient to represent most of the information in the data set.

Complexity curve and the performance of an unbiased model
To confirm the validity of the assumptions behind the complexity curve, we performed experiments with artificial data generated according to a known model. The error of the corresponding classifier trained on such data does not contain the bias component, so it is possible to observe whether the variance error component is indeed upper bounded by the complexity curve. We used the same scenario as when calculating the complexity curve: classifiers were trained on random subsets and tested on the whole data set. We matched the first and last points of the complexity curve and the learning curve and observed their relation in between.

The first kind of data followed the logistic model (logit data set). Matrix X (1000 observations, 12 attributes) contained values drawn from a normal distribution with mean 0 and standard deviation 1. The class vector Y was drawn from the Bernoulli distribution with success probability given by the logistic function:

$$P(Y = 1 \mid X) = \frac{1}{1 + e^{-X\beta}},$$

where β = (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0, 0, 0, 0, 0, 0). All attributes were independent and conditionally independent. Since Y values were not deterministic, there was some noise present - the classification error of the logistic regression classifier trained and tested on the full data set was larger than zero.

Figure 4 presents the complexity curve and the adjusted error of logistic regression for the generated data. After ignoring the noise error component, we can see that the variance error component is indeed upper bounded by the complexity curve.

A different kind of artificial data represented a multidimensional space with parallel stripes in one dimension (stripes data set). It consisted of an X matrix with 1000 observations and 10 attributes drawn from a uniform distribution on the range [0, 1). Class values Y depended only on the value of one of the attributes: for values less than 0.25 or greater than 0.75 the class was 1, for other values the class was 0. This kind of relation can be naturally modelled by a decision tree, and all the attributes are again independent and conditionally independent.
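The following sketch reproduces both generators under the stated parameters (sampling Y from a Bernoulli distribution is our reading of the logistic model; taking the first attribute as the stripes threshold attribute is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# logit data set: 12 normal attributes, logistic class probability
beta = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0, 0, 0, 0, 0, 0])
X_logit = rng.normal(size=(1000, 12))
Y_logit = rng.binomial(1, 1.0 / (1.0 + np.exp(-X_logit @ beta)))

# stripes data set: the class depends on one attribute only
X_stripes = rng.uniform(0.0, 1.0, size=(1000, 10))
Y_stripes = ((X_stripes[:, 0] < 0.25) | (X_stripes[:, 0] > 0.75)).astype(int)
```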
Figure 5 presents the complexity curve and the adjusted error of the decision tree classifier on the generated data. Once again the assumptions of the complexity curve methodology are satisfied, and the complexity curve is indeed an upper bound for the error.
What would happen if the attribute conditional independence assumption was broken? To answer this question we generated another type of data, modelled after a multidimensional chessboard (chessboard data set). The X matrix contained 1000 observations and 2 or 3 attributes drawn from a uniform distribution on the range [0, 1). The class vector Y had the following values:

$$Y = \begin{cases} 0 & \text{if } \sum_i \lfloor x_i / s \rfloor \text{ is even}, \\ 1 & \text{otherwise}, \end{cases}$$

where s is a grid step, in our experiments set to 0.5. There is clearly a strong attribute dependence, but since all parts of the decision boundary are parallel to one of the attributes, this kind of data can be modelled with a decision tree with no bias.

Figure 6 presents complexity curves and error curves for different dimensionalities of the chessboard data. Indeed, here the classification error becomes larger than indicated by the complexity curve. The more dimensions, the more dependencies between attributes violating complexity curve assumptions. For the 3-dimensional chessboard the classification problem becomes rather hard and the observed error decreases slowly, but the complexity curve remains almost the same as for the 2-dimensional case.
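A sketch of the chessboard generator (the 2-attribute variant; using 3 columns gives the 3-dimensional case):

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.5                                     # grid step
X = rng.uniform(0.0, 1.0, size=(1000, 2))   # 2-D chessboard
Y = (np.floor(X / s).sum(axis=1) % 2).astype(int)  # chessboard colouring
```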
Results of the experiments with controlled artificial data sets are consistent with our theoretical expectations. Based on them, we can introduce a general interpretation of the difference between complexity curve and learning curve: a learning curve below the complexity curve is an indication that the algorithm is able to build a good model without sampling the whole domain, limiting the variance error component. On the other hand, a learning curve above the complexity curve is an indication that the algorithm includes complex attribute dependencies in the constructed model, promoting the variance error component.

Impact of whitening and ICA
To evaluate the impact of the proposed preprocessing techniques (whitening and ICA - Independent Component Analysis) on complexity curves, we performed experiments with artificial data. In the first experiment we generated two data sets of 300 observations with 8 attributes distributed according to Student's t distribution with 1.5 degrees of freedom. In one data set all attributes were independent; in the other the same attribute was repeated 8 times. To both sets small Gaussian noise was added. Figure 7 shows complexity curves calculated before and after the whitening transform. We can see that whitening had no significant effect on the complexity curve of the independent set. In the case of the dependent set, the complexity curve calculated after whitening decreases visibly faster and the area under the curve is smaller. This is consistent with our intuitive notion of complexity: a data set with repeated attributes should be significantly less complex.

Figure 7. Complexity curves for whitened data (dashed lines) and not whitened data (solid lines). Areas under the curves are given in the legend. 8I - set of 8 independent random variables with Student's t distribution. 8R - one random variable with Student's t distribution repeated 8 times. 8I_w - whitened 8I. 8R_w - whitened 8R.
In the second experiment two data sets with 100 observations and 4 attributes were generated. The first data set was generated from the continuous uniform distribution on the interval [0, 2], the second one from the discrete (categorical) uniform distribution on the same interval. To both sets small Gaussian noise was added. Figure 8 presents complexity curves for the original, whitened and ICA-transformed data. Among the original data sets the intuitive notion of complexity is preserved: the area under the complexity curve for categorical data is smaller. The difference disappears for the whitened data but is again visible in the ICA-transformed data.

Complexity curve variability and outliers
The complexity curve is based on the expected Hellinger distance, and the estimation procedure includes some variance. The natural assumption is that the variability caused by the sample size is greater than the variability resulting from the specific composition of a sample. Otherwise averaging over samples of the same size would not be meaningful. This assumption is already present in the standard learning curve methodology, where classifier accuracy is plotted against training set size. We expect that the exact variability of the complexity curve will be connected with the presence of outliers in the data set. Such influential observations will have a huge impact depending on whether they are included in a sample or not.
To verify whether these intuitions were true, we constructed two new data sets by artificially introducing outliers to the WINE data set. In WINE001 we modified 1% of the values by multiplying them by a random number from the range (−10, 10). In WINE005, 5% of the values were modified in the same manner.
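A sketch of this corruption procedure (selecting the affected values uniformly at random is our assumption; wine_X stands for the attribute matrix of the WINE data set):

```python
import numpy as np

def inject_outliers(X, fraction, rng):
    """Multiply a random fraction of the values by a random factor
    drawn from (-10, 10), as in the WINE001 / WINE005 variants."""
    X = X.copy().astype(float)
    mask = rng.random(X.shape) < fraction
    X[mask] *= rng.uniform(-10, 10, size=mask.sum())
    return X

rng = np.random.default_rng(0)
# wine001 = inject_outliers(wine_X, 0.01, rng)
# wine005 = inject_outliers(wine_X, 0.05, rng)
```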
Figure 9 presents conditional complexity curves for all three data sets. The WINE001 curve has indeed a higher variance and is less regular than the WINE curve. The WINE005 curve is characterised not only by a higher variance but also by a larger AUCC value. This means that adding so much noise increased the overall complexity of the data set significantly.
The results support our hypothesis that large variability of the complexity curve signifies the occurrence of highly influential observations in the data set. This makes the complexity curve a valuable diagnostic tool for such situations. However, it should be noted that our method is unable to distinguish between important outliers and plain noise. To obtain this kind of insight one has to employ different methods.

Comparison with other complexity measures
The set of data complexity measures developed by Ho and Basu (2002) and extended by Ho et al. (2006) continues to be used in experimental studies to explain the performance of various classifiers (Díez-Pastor et al., 2015; Mantovani et al., 2015). We decided to compare the complexity curve experimentally with those measures. Descriptions of the measures used are given in Table 1.
According to our hypothesis, the conditional complexity curve should be robust in the context of class imbalance. To demonstrate this property we used for the comparison 88 imbalanced data sets used previously in the study by Díez-Pastor et al. (2015). These data sets come originally from the HDDT (Cieslak et al., 2011) and KEEL (Alcalá et al., 2010) repositories. We selected only binary classification problems.
Results are presented in Figure 10. Clearly, AUCC is mostly correlated with the log T2 measure. This is to be expected, as both measures are concerned with sample size in relation to attribute structure. The difference is that T2 takes into account only the number of attributes, while AUCC considers also the complexity of the distributions of the individual attributes. Correlations of AUCC with other measures are much lower, and it can be assumed that they capture different aspects of data complexity and may be potentially complementary.
The next step was to show that the information captured by AUCC is useful for explaining classifier performance. In order to do so, we trained a number of different classifiers on the 81 benchmark data sets and evaluated their performance using a random train-test split with proportion 0.5 repeated 10 times. The performance measure used was the area under the ROC curve. We selected three linear classifiers - naïve Bayes, logistic regression, and linear discriminant analysis (LDA) - as well as k-NN and decision tree classifiers with varied parameter settings; Table 4 presents the correlations observed. As can be seen, AUC ROC scores of linear classifiers have very little correlation with AUCC and log T2. This may be explained by the high-bias and low-variance nature of these classifiers: they are not strongly affected by data scarcity, but their performance depends on other factors. This is especially true for the LDA classifier, which has the weakest correlation among linear classifiers.

In the k-NN classifier, complexity depends on the k parameter: with low k values it is more prone to variance error, with larger k it is prone to bias if the sample size is not large enough (Domingos, 2000). Both AUCC and log T2 seem to capture the effect of sample size in the case of large k values well (correlations -0.2249 and 0.2395 for 35-NN). However, for k = 1 the correlation with AUCC is stronger (-0.1256 vs 0.0772).
The depth parameter in a decision tree also regulates complexity: the larger the depth, the more the classifier is prone to variance error and the less to bias error. This suggests that AUCC should be more strongly correlated with the performance of deeper trees. On the other hand, complex decision trees explicitly model attribute interdependencies ignored by the complexity curve, which may weaken the correlation. This is observed in the obtained results: for a decision stump (tree of depth 1), which is a low-variance high-bias classifier, the correlation with AUCC and log T2 is very weak. For d = 3 and d = 5 it becomes visibly stronger, and then for larger tree depths it again decreases. It should be noted that with large tree depth, as with small k values in k-NN, AUCC has a stronger correlation with the classifier performance than log T2.
A slightly more sophisticated way of applying data complexity measures is an attempt to explain classifier performance relative to some other classification method. In our experiments LDA is a good candidate for a reference method, since it is simple, has low variance, and is not correlated with either AUCC or log T2. Table 5 presents correlations of both measures with classifier performance relative to LDA.
Here we can see that the correlations for AUCC are generally higher than for log T2 and reach significance for the majority of classifiers. Especially in the case of the decision tree, AUCC explains relative performance better than log T2 (correlation 0.1809 vs −0.0303 for d = inf).
The results of the presented correlation analyses demonstrate the potential of the complexity curve to complement the existing complexity measures in explaining classifier performance. As expected from theoretical considerations, there is a relation between how well AUCC correlates with classifier performance and the classifier's position in the bias-variance spectrum. It is worth noting that despite the attribute independence assumption, the complexity curve method proved useful for explaining the performance of complex non-linear classifiers.

Large p, small n problems
There is a special category of machine learning problems in which the number of attributes p is large with respect to the number of samples n, perhaps even orders of magnitude larger. Many important biological data sets, most notably data from microarray experiments, fall into this category (Johnstone and Titterington, 2009). To test how our complexity measure behaves in such situations, we calculated AUCC scores for a few microarray data sets and compared them with AUC ROC scores of some simple classifiers. Classifiers were evaluated as in the previous section. Detailed information about the data sets is given in Table 6.
Results of the experiment are presented in Table 7. As expected, with the number of attributes much larger than the number of observations, the data is considered by our metric as extremely scarce - values of AUCC are in all cases above 0.95. On the other hand, AUC ROC classification performance varies greatly between data sets, with scores approaching or equal to 1.0 for the LEUKEMIA and LYMPHOMA data sets, and scores around the 0.5 baseline for PROSTATE. This is because despite the large number of dimensions the form of the optimal decision function can be very simple, utilising only a few of the available dimensions.
The complexity curve does not consider the shape of the decision boundary at all and thus does not reflect these differences in classification performance.
From this analysis we concluded that the complexity curve is not a good predictor of classifier performance for data sets containing a large number of redundant attributes, as it does not differentiate between important and unimportant attributes. The logical way to proceed in such a case would be to perform some form of feature selection or dimensionality reduction on the original data, and then calculate the complexity curve in the reduced dimensions.

APPLICATIONS

Interpreting complexity curves
In order to prove the practical applicability of the proposed methodology, and to show how complexity curve plots can be interpreted, we performed experiments with six simple data sets from the UCI Machine Learning Repository (Frank and Asuncion, 2010). The sets were chosen only as illustrative examples. They have no missing values and represent only classification problems, not regression ones. Basic properties of the data sets are given in Table 8. For each data set we calculated the conditional complexity curve, as it should capture data properties in the context of classification better than the standard complexity curve. The curves are presented in Figure 11.
The shape of the complexity curve portrays the learning process. The initial examples are the most important, since there is a huge difference between having some information and having no information at all. After some point, including additional examples still improves the probability estimation, but does not introduce such a dramatic change.
Looking at the individual graphs, it is now possible to compare the complexity of different sets. Of the sets considered, MONKS-1 and CAR are dense data sets with a lot of instances and a medium number of attributes. The information they contain can be to a large extent recovered from relatively small subsets. Such sets are natural candidates for data pruning. On the other hand, WINE and GLASS are small data sets with a larger number of attributes or classes - they can be considered complex, with no redundant information.
Besides the slope of the complexity curve we can also analyse its variability. We can see that the shape of the WINE complexity curve is very regular, with small variance at each point, while the GLASS curve displays much higher variance. This means that the observations in the GLASS data set are more diverse and some observations (or their combinations) are more important for representing the data structure than others.



Figure 2. Complexity curve (solid) and conditional complexity curve (dashed) for the iris data set.

Figure 4. Complexity curve and learning curve of the logistic regression on the logit data.



Figure 8. Complexity curves for whitened data (dashed lines), not whitened data (solid lines) and ICA-transformed data (dotted lines). Areas under the curves are given in the legend. U - data sampled from uniform distribution. C - data sampled from categorical distribution. U_w - whitened U. C_w - whitened C. U_ICA - U_w after ICA. C_ICA - C_w after ICA.

Figure 9. Complexity curves for WINE and its counterparts with introduced outliers. For the sake of clarity only contours were drawn.

Table 1. Data complexity measures used in experiments.

Table 2. Properties of HDDT data sets used in experiments.

Table 3. Properties of KEEL data sets used in experiments.

Table 4. Pearson's correlation coefficients between classifier AUC ROC performances and complexity measures. Values larger than 0.22 or smaller than -0.22 are significant at the α = 0.05 significance level.

Table 6. Properties of microarray data sets used in experiments.

Table 8. Basic properties of the benchmark data sets.