ddl.independent.IndependentDensity

class ddl.independent.IndependentDensity(univariate_estimators=None)[source]

Bases: sklearn.base.BaseEstimator, ddl.base.ScoreMixin

Independent density estimator.

This density model assumes that the features are independent, i.e. that the joint density factorizes into a product of univariate densities. The user can specify the univariate density estimator(s) for each feature.

Parameters:
univariate_estimators : estimator or array-like of shape (n_features,)

Univariate estimator(s) for this independent density. Default assumes univariate Gaussian densities for all features. Should be one of the following (see also the Examples below):

  1. None (default, assumes independent Gaussian density).
  2. A single univariate density estimator (assumes all features share the same density class, although the fitted parameters may differ; e.g., features 1 and 2 could have different fitted means even though both are Gaussian estimators).
  3. Array-like of univariate density estimators, one per feature.
Attributes:
univariate_densities_ : array, shape (n_features, )

Fitted univariate estimators for each feature.

n_features_ : int

Number of features.
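
Examples

A minimal usage sketch with assumed toy NumPy data; it relies only on the constructor and the methods documented on this page.

>>> import numpy as np
>>> from ddl.independent import IndependentDensity
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(100, 2)  # toy data: two independent features
>>> density = IndependentDensity()  # default: independent Gaussian density per feature
>>> density = density.fit(X)
>>> density.n_features_
2
>>> density.sample(n_samples=10, random_state=0).shape
(10, 2)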

Methods

conditional_densities(self, X, cond_idx, …) [Placeholder].
create_fitted(fitted_univariate_densities[, …]) Create fitted density.
fit(self, X[, y]) Fit estimator to X.
get_params(self[, deep]) Get parameters for this estimator.
get_support(self) Get the support of this density (i.e. the positive density region).
marginal_cdf(self, x, target_idx) [Placeholder].
marginal_density(self, marginal_idx) [Placeholder].
marginal_inverse_cdf(self, x, target_idx) [Placeholder].
sample(self[, n_samples, random_state]) Generate random samples from this density/destructor.
score(self, X[, y]) Return the mean log likelihood (or log(det(Jacobian))).
score_samples(self, X[, y]) Compute log-likelihood (or log(det(Jacobian))) for each sample.
set_params(self, **params) Set the parameters of this estimator.
__init__(self, univariate_estimators=None)[source]

Initialize self. See help(type(self)) for accurate signature.

conditional_densities(self, X, cond_idx, not_cond_idx)[source]

[Placeholder].

Parameters:
X :
cond_idx :
not_cond_idx :
Returns:
obj : object

classmethod create_fitted(fitted_univariate_densities, n_features=None, **kwargs)[source]

Create fitted density.

Parameters:
fitted_univariate_densities : array-like of Density or Density, shape (n_features,)

Fitted univariate densities. If a single fitted density is given, the n_features parameter must be provided so that it can be appropriately expanded into an array.

n_features : int, optional

Number of features. If not supplied, will be inferred from fitted_univariate_densities.

**kwargs

Other parameters to pass to object constructor.

Returns:
fitted_density : Density

Fitted density.
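
A hedged sketch of one way to use create_fitted, reusing the univariate_densities_ attribute of an already fitted IndependentDensity; the toy data and variable names are assumptions, not part of the library's documentation.

>>> import numpy as np
>>> from ddl.independent import IndependentDensity
>>> X = np.random.RandomState(0).randn(50, 2)
>>> fitted = IndependentDensity().fit(X)
>>> # rebuild a fitted density directly from the per-feature estimators
>>> same = IndependentDensity.create_fitted(fitted.univariate_densities_)
>>> # a single fitted univariate density can be expanded to several features
>>> shared = IndependentDensity.create_fitted(
...     fitted.univariate_densities_[0], n_features=2)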

fit(self, X, y=None, **fit_params)[source]

Fit estimator to X.

Parameters:
X : array-like, shape (n_samples, n_features)

Training data, where n_samples is the number of samples and n_features is the number of features.

y : None, default=None

Not used in the fitting process but kept for compatibility.

fit_params : dict, optional

Optional extra fit parameters.

Returns:
self : estimator

Returns the instance itself.
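
As a hedged illustration with assumed toy data: fit returns the estimator itself, so it can be chained with the other methods documented here.

>>> import numpy as np
>>> from ddl.independent import IndependentDensity
>>> X = np.random.RandomState(0).randn(50, 3)
>>> density = IndependentDensity().fit(X)  # fit returns self, enabling chaining
>>> density.n_features_
3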

get_params(self, deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.
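
A small sketch, assuming the default constructor shown above: get_params exposes the single constructor parameter of this estimator.

>>> from ddl.independent import IndependentDensity
>>> IndependentDensity().get_params()
{'univariate_estimators': None}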

get_support(self)[source]

Get the support of this density (i.e. the positive density region).

Returns:
support : array-like, shape (2,) or shape (n_features, 2)

If the shape is (2,), then support[0] is the minimum and support[1] is the maximum for all features. If the shape is (n_features, 2), then each row gives the [minimum, maximum] support of the corresponding feature, which may differ across features.
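
A small sketch with assumed toy data; whether the returned support has shape (2,) or (n_features, 2) depends on the fitted univariate densities, so the snippet only inspects the shape rather than asserting it.

>>> import numpy as np
>>> from ddl.independent import IndependentDensity
>>> X = np.random.RandomState(0).randn(50, 2)
>>> support = IndependentDensity().fit(X).get_support()
>>> shape = np.asarray(support).shape  # (2,) or (n_features, 2), as described above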

marginal_cdf(self, x, target_idx)[source]

[Placeholder].

Parameters:
x :
target_idx :
Returns:
obj : object

marginal_density(self, marginal_idx)[source]

[Placeholder].

Parameters:
marginal_idx :
Returns:
obj : object

marginal_inverse_cdf(self, x, target_idx)[source]

[Placeholder].

Parameters:
x :
target_idx :
Returns:
obj : object

sample(self, n_samples=1, random_state=None)[source]

Generate random samples from this density/destructor.

Parameters:
n_samples : int, default=1

Number of samples to generate. Defaults to 1.

random_state : int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by numpy.random.

Returns:
X : array, shape (n_samples, n_features)

Randomly generated sample.
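
A hedged usage sketch with assumed toy training data: fixing random_state as an integer seed makes the draw reproducible.

>>> import numpy as np
>>> from ddl.independent import IndependentDensity
>>> X = np.random.RandomState(0).randn(100, 2)
>>> density = IndependentDensity().fit(X)
>>> X_new = density.sample(n_samples=5, random_state=0)
>>> X_new.shape
(5, 2)
>>> np.allclose(X_new, density.sample(n_samples=5, random_state=0))  # same seed, same draw
True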

score(self, X, y=None)

Return the mean log likelihood (or log(det(Jacobian))).

Parameters:
X : array-like, shape (n_samples, n_features)

New data, where n_samples is the number of samples and n_features is the number of features.

y : None, default=None

Not used but kept for compatibility.

Returns:
log_likelihood : float

Mean log likelihood of the data points in X.
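
A brief sketch with assumed toy data: score evaluates held-out samples and returns a single float, the mean log-likelihood.

>>> import numpy as np
>>> from ddl.independent import IndependentDensity
>>> rng = np.random.RandomState(0)
>>> density = IndependentDensity().fit(rng.randn(200, 2))
>>> mean_ll = density.score(rng.randn(50, 2))  # mean log-likelihood (a float)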

score_samples(self, X, y=None)[source]

Compute log-likelihood (or log(det(Jacobian))) for each sample.

Parameters:
X : array-like, shape (n_samples, n_features)

New data, where n_samples is the number of samples and n_features is the number of features.

y : None, default=None

Not used but kept for compatibility.

Returns:
log_likelihood : array, shape (n_samples,)

Log likelihood of each data point in X.
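
A hedged sketch with assumed toy data; per the descriptions above, score is the mean of score_samples, which the last line checks numerically.

>>> import numpy as np
>>> from ddl.independent import IndependentDensity
>>> rng = np.random.RandomState(0)
>>> density = IndependentDensity().fit(rng.randn(200, 2))
>>> X_test = rng.randn(50, 2)
>>> log_lik = density.score_samples(X_test)
>>> log_lik.shape
(50,)
>>> np.isclose(log_lik.mean(), density.score(X_test))
True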

set_params(self, **params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
self
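
A brief sketch: the top-level parameter can be set directly, and the nested <component>__<parameter> form applies when univariate_estimators holds an estimator object (the nested parameter name in the comment is purely illustrative, not part of this library).

>>> from ddl.independent import IndependentDensity
>>> density = IndependentDensity()
>>> density = density.set_params(univariate_estimators=None)
>>> # with a nested estimator, its parameters could be reached as, e.g.,
>>> # density.set_params(univariate_estimators__some_param=value)  # illustrative only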