ddl.univariate.HistogramUnivariateDensity

class ddl.univariate.HistogramUnivariateDensity(bins=None, bounds=0.1, alpha=1e-06)[source]

Bases: ddl.univariate.ScipyUnivariateDensity

Histogram univariate density estimator.

Parameters:
bins : int or sequence of scalars or str, optional

Same as the bins parameter of numpy.histogram. Copied from the numpy documentation:

If bins is an int, it defines the number of equal-width bins in the given range (10, by default). If bins is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths.

If bins is a string from the list below (new in numpy 1.11.0), histogram will use the chosen method to calculate the optimal bin width, and consequently the number of bins (see Notes for more detail on the estimators), from the data that falls within the requested range. While the bin width will be optimal for the actual data in the range, the number of bins will be computed to fill the entire range, including the empty portions. For visualisation, using the ‘auto’ option is suggested. Weighted data is not supported for automated bin size selection.

‘auto’

Maximum of the ‘sturges’ and ‘fd’ estimators. Provides good all around performance.

‘fd’ (Freedman Diaconis Estimator)

Robust (resilient to outliers) estimator that takes into account data variability and data size.

‘doane’

An improved version of Sturges’ estimator that works better with non-normal datasets.

‘scott’

Less robust estimator that that takes into account data variability and data size.

‘rice’

Estimator does not take variability into account, only data size. Commonly overestimates number of bins required.

‘sturges’

R’s default method, only accounts for data size. Only optimal for gaussian data and underestimates number of bins for large non-gaussian datasets.

‘sqrt’

Square root (of data size) estimator, used by Excel and other programs for its speed and simplicity.
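Since bins is forwarded to numpy.histogram, the options above can be checked directly against numpy (a minimal sketch; the ddl estimator itself is not needed here):

```python
import numpy as np

# Illustration of the `bins` options forwarded to numpy.histogram.
rng = np.random.RandomState(0)
x = rng.normal(size=1000)

# An integer gives that many equal-width bins over the data range.
counts, edges = np.histogram(x, bins=10)

# An explicit sequence gives the bin edges directly (non-uniform allowed).
counts2, edges2 = np.histogram(x, bins=[-10.0, -1.0, 0.0, 0.5, 10.0])

# A string selects an automatic bin-width estimator (e.g. 'auto', 'fd').
counts3, edges3 = np.histogram(x, bins='auto')
```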

bounds : float or array-like of shape (2,)

Specification for the finite bounds of the histogram. Either a float giving the fractional (percentage) extension of the data range, or a specified interval [a, b].

alpha : float

Regularization parameter corresponding to the number of pseudo-counts to add to each bin of the histogram. This can be seen as putting a Dirichlet prior on the empirical bin counts with Dirichlet parameter alpha.
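The two preprocessing ideas above can be sketched in numpy: fractional bound extension and Dirichlet pseudo-count smoothing. The helper names and the interpretation of a float bounds value as a fractional extension of the data range are illustrative assumptions, not the library's internals:

```python
import numpy as np

# Illustrative sketch (not the library's internal code) of the
# `bounds` and `alpha` parameters described above.

def extend_bounds(x, bounds=0.1):
    """Interpret `bounds`: a float is a fractional extension of the data
    range; a pair [a, b] is used verbatim. (Assumed interpretation.)"""
    if np.isscalar(bounds):
        lo, hi = x.min(), x.max()
        pad = bounds * (hi - lo)
        return lo - pad, hi + pad
    return tuple(bounds)

def smooth_counts(counts, alpha=1e-6):
    """Add `alpha` pseudo-counts to each bin: equivalent to a Dirichlet
    prior on the bin probabilities with concentration alpha."""
    counts = np.asarray(counts, dtype=float) + alpha
    return counts / counts.sum()  # normalized bin probabilities

x = np.array([0.0, 0.25, 0.5, 1.0])
a, b = extend_bounds(x, bounds=0.1)          # range [0, 1] padded by 10%
probs = smooth_counts([3, 0, 1], alpha=1.0)  # empty bins get positive mass
```

Note that with alpha > 0 every bin receives positive probability, so the fitted density is strictly positive on its support.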

Attributes:
bin_edges_ : array of shape (n_bins + 1,)

Edges of bins.

pdf_bin_ : array of shape (n_bins,)

pdf values of bins. Note that histograms have a constant pdf value within each bin.

cdf_bin_ : array of shape (n_bins + 1,)

cdf values at bin edges. Used with linear interpolation to compute pdf, cdf and inverse cdf.
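The relationship between these fitted attributes can be sketched with numpy. The names mirror the attributes above, but this is an illustration, not the library's internal code:

```python
import numpy as np

# How bin_edges_, pdf_bin_ and cdf_bin_ relate (illustrative sketch).
x = np.array([0.1, 0.2, 0.2, 0.7, 0.9])
counts, bin_edges = np.histogram(x, bins=4, range=(0.0, 1.0))
widths = np.diff(bin_edges)

# pdf is constant within each bin: probability mass / bin width.
pdf_bin = counts / counts.sum() / widths

# cdf at the bin edges; piecewise linear in between, hence the
# linear interpolation mentioned above.
cdf_bin = np.concatenate([[0.0], np.cumsum(pdf_bin * widths)])

def cdf(q):
    return np.interp(q, bin_edges, cdf_bin)

def inverse_cdf(u):
    return np.interp(u, cdf_bin, bin_edges)
```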

Methods

cdf(self, X[, y]) Evaluate the cumulative distribution function (cdf) at X.
create_fitted(hist, bin_edges, **kwargs) Create fitted density.
fit(self, X[, y, histogram_params]) Fit estimator to X.
get_params(self[, deep]) Get parameters for this estimator.
get_support(self) Get the support of this density (i.e., the positive density region).
inverse_cdf(self, X[, y]) Evaluate the inverse cdf (quantile function) at X.
sample(self[, n_samples, random_state]) Generate random samples from this density/destructor.
score(self, X[, y]) Return the mean log likelihood (or log(det(Jacobian))).
score_samples(self, X[, y]) Compute log-likelihood (or log(det(Jacobian))) for each sample.
set_params(self, **params) Set the parameters of this estimator.
__init__(self, bins=None, bounds=0.1, alpha=1e-06)[source]

Initialize self. See help(type(self)) for accurate signature.

cdf(self, X, y=None)

Evaluate the cumulative distribution function (cdf) at each value in X.

Parameters:
X : array-like, shape (n_samples, 1)

Values at which to evaluate the cdf.

y : None, default=None

Not used but kept for compatibility.

Returns:
cdf_values : array-like, shape (n_samples, 1)

cdf evaluated at each value in X.
classmethod create_fitted(hist, bin_edges, **kwargs)[source]

Create fitted density.

Parameters:
hist : array-like, shape (n_bins,)

Bin counts (or normalized bin probabilities) of the histogram, e.g. the first value returned by numpy.histogram.

bin_edges : array-like, shape (n_bins + 1,)

Bin edges of the histogram, e.g. the second value returned by numpy.histogram.

**kwargs

Other parameters to pass to object constructor.

Returns:
fitted_density : Density

Fitted density.

fit(self, X, y=None, histogram_params=None)[source]

Fit estimator to X.

Parameters:
X : array-like, shape (n_samples, 1)

Training data, where n_samples is the number of samples. Note that the shape must have a second dimension of 1 since this is a univariate density estimator.

y : None, default=None

Not used in the fitting process but kept for compatibility.

histogram_params : list or tuple of size 2

Tuple or list of precomputed bin counts and bin edges, e.g. as returned by numpy.histogram.

Returns:
self : estimator

Returns the instance itself.

get_params(self, deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

get_support(self)[source]

Get the support of this density (i.e. the positive density region).

Returns:
support : array-like, shape (2,) or shape (n_features, 2)

If shape is (2, ), then support[0] is the minimum and support[1] is the maximum for all features. If shape is (n_features, 2), then each feature’s support (which could be different for each feature) is given similar to the first case.

inverse_cdf(self, X, y=None)

Evaluate the inverse cdf (quantile function) at each value in X.

Parameters:
X : array-like, shape (n_samples, 1)

Probabilities at which to evaluate the inverse cdf.

y : None, default=None

Not used but kept for compatibility.

Returns:
quantiles : array-like, shape (n_samples, 1)

Inverse cdf evaluated at each value in X.
sample(self, n_samples=1, random_state=None)

Generate random samples from this density/destructor.

Parameters:
n_samples : int, default=1

Number of samples to generate. Defaults to 1.

random_state : int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by numpy.random.

Returns:
X : array, shape (n_samples, n_features)

Randomly generated sample.
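Sampling from a fitted histogram can be sketched as inverse-transform sampling through the piecewise-linear cdf described in the attributes above (an illustrative sketch, not the library's implementation):

```python
import numpy as np

# Inverse-cdf sampling from a fitted histogram density (sketch).
bin_edges = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
cdf_bin = np.array([0.0, 0.6, 0.7, 0.8, 1.0])  # cdf at each bin edge

def sample(n_samples=1, random_state=None):
    rng = np.random.RandomState(random_state)
    u = rng.uniform(size=n_samples)  # uniforms on [0, 1)
    # The piecewise-linear inverse cdf maps uniforms to the data scale.
    return np.interp(u, cdf_bin, bin_edges).reshape(-1, 1)

X = sample(n_samples=5, random_state=0)  # shape (n_samples, 1)
```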

score(self, X, y=None)

Return the mean log likelihood (or log(det(Jacobian))).

Parameters:
X : array-like, shape (n_samples, n_features)

New data, where n_samples is the number of samples and n_features is the number of features.

y : None, default=None

Not used but kept for compatibility.

Returns:
log_likelihood : float

Mean log likelihood of the data points in X.

score_samples(self, X, y=None)

Compute log-likelihood (or log(det(Jacobian))) for each sample.

Parameters:
X : array-like, shape (n_samples, n_features)

New data, where n_samples is the number of samples and n_features is the number of features.

y : None, default=None

Not used but kept for compatibility.

Returns:
log_likelihood : array, shape (n_samples,)

Log likelihood of each data point in X.
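For a histogram density, the per-sample log-likelihood is the log of the constant pdf of the bin containing each point; a sketch under that assumption:

```python
import numpy as np

# Per-sample log-likelihood for a histogram density (sketch): each
# point scores the log of the (constant) pdf of its bin.
bin_edges = np.array([0.0, 0.5, 1.0])
pdf_bin = np.array([1.6, 0.4])  # constant pdf value within each bin

def score_samples(X):
    X = np.asarray(X).ravel()
    # Locate the bin of each point; clip keeps boundary points in range.
    idx = np.clip(np.searchsorted(bin_edges, X, side='right') - 1,
                  0, len(pdf_bin) - 1)
    return np.log(pdf_bin[idx])

logp = score_samples([[0.25], [0.75]])
```

The mean of these values over X is what score(self, X) returns.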

set_params(self, **params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
self
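The `<component>__<parameter>` routing described above can be sketched as a split on the first double underscore (the parameter name used below is purely illustrative, not necessarily one this class exposes):

```python
# Sketch of the nested-parameter naming convention: the part before
# the first "__" names the sub-estimator, the rest is forwarded to it
# (so deeper nesting like "a__b__c" stays intact after one split).
def split_param(key):
    component, _, parameter = key.partition('__')
    return component, parameter

component, parameter = split_param('univariate_density__bins')
```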