gators.feature_selection.SelectFromModels

class gators.feature_selection.SelectFromModels(models: List[object], k: int)[source]

Select From Models By Vote Transformer.

Select the top k features based on the feature importance of the given machine learning models.
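
The selection is done "by vote": conceptually, each model ranks the features by importance, and the features most often ranked in the top k are kept. Below is an illustrative sketch of such a vote-based aggregation (not the library's actual implementation), given each model's feature importances:

>>> import numpy as np
>>> def vote_select(importances_per_model, feature_names, k):
...     # Each model votes for its k most important features;
...     # the k features collecting the most votes are kept.
...     votes = {name: 0 for name in feature_names}
...     for importances in importances_per_model:
...         for idx in np.argsort(importances)[::-1][:k]:
...             votes[feature_names[idx]] += 1
...     return sorted(feature_names, key=lambda name: -votes[name])[:k]
>>> vote_select([[0.7, 0.2, 0.1], [0.6, 0.1, 0.3]], ['A', 'B', 'C'], k=1)
['A']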

Parameters
models : List[model]

List of machine learning models.

k : int

Number of features to keep.

See also

gators.feature_selection.SelectFromModel

Similar method using one model.

Examples

Imports and initialization:

>>> from gators.feature_selection import SelectFromModels

Note that the models can be:

  • an xgboost.dask or a sklearn model for dask dataframes

  • a sklearn model for pandas dataframes

  • a pyspark.ml model for koalas dataframes

The fit, transform, and fit_transform methods accept:

  • dask dataframes:

>>> import dask.dataframe as dd
>>> import pandas as pd
>>> from xgboost.dask import XGBClassifier
>>> from distributed import Client, LocalCluster
>>> cluster = LocalCluster()
>>> client = Client(cluster)
>>> X = dd.from_pandas(pd.DataFrame({
... 'A': [0.94, 0.09, -0.43, 0.31, 0.99, 1.05, 1.02, -0.77, 0.03, 0.99],
... 'B': [0.13, 0.01, -0.06, 0.04, 0.14, 0.14, 0.14, -0.1, 0.0, 0.13],
... 'C': [0.8, 0.08, -0.37, 0.26, 0.85, 0.9, 0.87, -0.65, 0.02, 0.84]}), npartitions=1)
>>> y = dd.from_pandas(pd.Series([1, 0, 0, 0, 1, 1, 1, 0, 0, 1], name='TARGET'), npartitions=1)
>>> models = [
... XGBClassifier(n_estimators=1, random_state=0, eval_metric='logloss', use_label_encoder=False),
... XGBClassifier(n_estimators=1, random_state=1, eval_metric='logloss', use_label_encoder=False)]
>>> models[0].client = client
>>> models[1].client = client
>>> obj = SelectFromModels(models=models, k=1)

  • koalas dataframes:

>>> import databricks.koalas as ks
>>> from pyspark import SparkConf, SparkContext
>>> from pyspark.ml.classification import RandomForestClassifier as RFCSpark
>>> conf = SparkConf()
>>> _ = conf.set('spark.executor.memory', '2g')
>>> _ = SparkContext(conf=conf)
>>> X = ks.DataFrame({
... 'A': [0.94, 0.09, -0.43, 0.31, 0.99, 1.05, 1.02, -0.77, 0.03, 0.99],
... 'B': [0.13, 0.01, -0.06, 0.04, 0.14, 0.14, 0.14, -0.1, 0.0, 0.13],
... 'C': [0.8, 0.08, -0.37, 0.26, 0.85, 0.9, 0.87, -0.65, 0.02, 0.84]})
>>> y = ks.Series([1, 0, 0, 0, 1, 1, 1, 0, 0, 1], name='TARGET')
>>> models = [RFCSpark(numTrees=1, maxDepth=2, labelCol=y.name, seed=0),
... RFCSpark(numTrees=1, maxDepth=2, labelCol=y.name, seed=1)]
>>> obj = SelectFromModels(models=models, k=1)

  • and pandas dataframes:

>>> import pandas as pd
>>> from xgboost import XGBClassifier
>>> X = pd.DataFrame({
... 'A': [0.94, 0.09, -0.43, 0.31, 0.99, 1.05, 1.02, -0.77, 0.03, 0.99],
... 'B': [0.13, 0.01, -0.06, 0.04, 0.14, 0.14, 0.14, -0.1, 0.0, 0.13],
... 'C': [0.8, 0.08, -0.37, 0.26, 0.85, 0.9, 0.87, -0.65, 0.02, 0.84]})
>>> y = pd.Series([1, 0, 0, 0, 1, 1, 1, 0, 0, 1], name='TARGET')
>>> models = [XGBClassifier(n_estimators=1, max_depth=3, random_state=0, eval_metric='logloss'),
... XGBClassifier(n_estimators=1, max_depth=4, random_state=1, eval_metric='logloss')]
>>> obj = SelectFromModels(models=models, k=1)

The result is a transformed dataframe belonging to the same dataframe library.

>>> obj.fit_transform(X, y)
      A
0  0.94
1  0.09
2 -0.43
3  0.31
4  0.99
5  1.05
6  1.02
7 -0.77
8  0.03
9  0.99
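
Once fitted, the same selection can also be applied to a NumPy array with transform_numpy. A quick check on the pandas example above (only the shape is shown, since a single column is kept when k=1):

>>> obj.transform_numpy(X.to_numpy()).shape
(10, 1)
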
fit(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame], y: Union[pd.Series, ks.Series, dd.Series] = None) → gators.feature_selection.select_from_models.SelectFromModels[source]

Fit the transformer on the dataframe X.

Parameters
X : DataFrame

Input dataframe.

y : Series, default None

Target values.

Returns
self : "SelectFromModels"

Instance of itself.
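
Fit and transform may also be called separately. A minimal sketch, assuming the pandas X, y, and models defined in the examples above:

>>> obj = SelectFromModels(models=models, k=1)
>>> _ = obj.fit(X, y)
>>> list(obj.transform(X).columns)
['A']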

static check_array(X: numpy.ndarray)

Validate array.

Parameters
X : np.ndarray

Array.

check_array_is_numerics(X: numpy.ndarray)

Check if the array contains only numerical values.

Parameters
X : np.ndarray

Array.

static check_binary_target(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame], y: Union[pd.Series, ks.Series, dd.Series])

Raise an error if the target is not binary.

Parameters
y : Series

Target values.

static check_dataframe(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame])

Validate dataframe.

Parameters
X : DataFrame

Dataframe.

static check_dataframe_contains_numerics(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame])

Check if the dataframe contains numerical columns.

Parameters
X : DataFrame

Dataframe.

static check_dataframe_is_numerics(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame])

Check if the dataframe contains only numerical columns.

Parameters
X : DataFrame

Dataframe.

check_dataframe_with_objects(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame])

Check if dataframe contains object columns.

Parameters
X : DataFrame

Dataframe.

check_datatype(dtype, accepted_dtypes)

Check if the data type is among the accepted data types.

Parameters
dtype : np.dtype

Data type to check.

accepted_dtypes : List[np.dtype]

Accepted data types.

static check_multiclass_target(y: Union[pd.Series, ks.Series, dd.Series])

Raise an error if the target is not discrete.

Parameters
y : Series

Target values.

check_nans(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame], columns: List[str])

Raise an error if X contains NaN values.

Parameters
X : DataFrame

Dataframe.

columns : List[str]

List of columns.

static check_regression_target(y: Union[pd.Series, ks.Series, dd.Series])

Raise an error if the target is not continuous.

Parameters
y : Series

Target values.

static check_target(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame], y: Union[pd.Series, ks.Series, dd.Series])

Validate target.

Parameters
X : DataFrame

Dataframe.

y : Series

Target values.

fit_transform(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame], y: Union[pd.Series, ks.Series, dd.Series] = None) → Union[pd.DataFrame, ks.DataFrame, dd.DataFrame]

Fit and transform the dataframe X.

Parameters
X : DataFrame

Input dataframe.

y : Series, default None

Input target.

Returns
X : DataFrame

Transformed dataframe.

static get_column_names(inplace: bool, columns: List[str], suffix: str)

Return the names of the modified columns.

Parameters
inplace : bool

If True return columns. If False return columns__suffix.

columns : List[str]

List of columns.

suffix : str

Suffix used if inplace is False.

Returns
List[str]

List of column names.
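
For example, an illustrative call based on the columns__suffix convention described above (the exact separator is an assumption):

>>> SelectFromModels.get_column_names(False, ['A', 'B'], 'selected')
['A__selected', 'B__selected']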

get_params(deep=True)

Get parameters for this estimator.

Parameters
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : dict

Parameter names mapped to their values.
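
For example, with the transformer created above (relying on standard scikit-learn get_params semantics):

>>> obj.get_params()['k']
1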

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters
**params : dict

Estimator parameters.

Returns
self : estimator instance

Estimator instance.
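
For example (again assuming scikit-learn semantics; set_params returns the estimator itself):

>>> _ = obj.set_params(k=2)
>>> obj.get_params()['k']
2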

transform(X: Union[pd.DataFrame, ks.DataFrame, dd.DataFrame], y: Union[pd.Series, ks.Series, dd.Series] = None) → Union[pd.DataFrame, ks.DataFrame, dd.DataFrame]

Transform the dataframe X.

Parameters
X : DataFrame

Input dataframe.

y : Series, default None

Target values.

Returns
X : DataFrame

Transformed dataframe.

transform_numpy(X: numpy.ndarray) → numpy.ndarray

Transform the array X.

Parameters
X : np.ndarray

Input array.

Returns
X : np.ndarray

Transformed array.