gators.discretizers package#

Module contents#

class gators.discretizers.CustomDiscretizer[source]#

Bases: _BaseDiscretizer

Discretizes numerical variables using user-defined bin edges.
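
The mapping from user-supplied edges to interval labels can be sketched with plain NumPy (an illustration of the idea, not the class's internals; edge-inclusion conventions may differ from the library's):

```python
import numpy as np

# User-supplied edges define half-open intervals; -np.inf / np.inf
# give open-ended outer bins, as with the bins parameter below.
edges = [-np.inf, 0.2, 0.3, np.inf]
values = np.array([0.1, 0.2, 0.2, 0.4])

# With right=True, np.digitize returns i such that edges[i-1] < x <= edges[i].
idx = np.digitize(values, edges, right=True)
labels = [f"({edges[i - 1]},{edges[i]}]" for i in idx]
```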

Parameters:
  • subset (Optional[List[str]], default=None) – List of numeric column names to discretize. If None, uses all columns in the bins dictionary.

  • bins (Dict[str, List[float]]) – Dictionary specifying bin edges for each column. Keys are column names, values are lists of bin boundaries. Use -np.inf and np.inf for open-ended bins.

  • num_bins (PositiveInt, default=5) – Number of bins (used for validation, not for computation since bins are custom).

  • rounding (PositiveInt, default=3) – Decimal places to round bin edges for labels.

  • inplace (bool, default=True) – If True, replace original columns with discretized values. If False, create new columns with suffix ‘__discretize_custom’.

  • drop_columns (bool, default=True) – If inplace=False, whether to drop the original columns after discretizing. Ignored when inplace=True.

  • as_numerics (bool, default=False) – If True, create numeric labels (0, 1, 2, …) instead of interval strings.

Examples

>>> from gators.discretizers import CustomDiscretizer
>>> import numpy as np
>>> import polars as pl
>>> X = pl.DataFrame({
...     'A': [0.1, 0.2, 0.2, 0.4],
...     'B': [10, 20, 30, 40]
... })
>>> bins = {
...     'A': [-np.inf, 0.2, 0.3, np.inf],
...     'B': [-np.inf, 20, 30, np.inf]
... }
>>> discretizer = CustomDiscretizer(bins=bins, num_bins=3, drop_columns=True)
>>> discretizer.subset = ['A', 'B']
>>> discretizer.fit(X)
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 2)
┌───────────────┬─────────────────────┐
│ A__discretize │ B__discretize       │
│ _custom       │ _custom             │
│ ---           │ ---                 │
│ str           │ str                 │
├───────────────┼─────────────────────┤
│ (-inf,0.2]    │ (-inf,20.0]         │
│ (0.2,0.3]     │ (20.0,30.0]         │
│ (0.2,0.3]     │ (20.0,30.0]         │
│ (0.3,inf)     │ (30.0,inf)          │
└───────────────┴─────────────────────┘
>>> discretizer.drop_columns = False
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 4)
┌─────┬─────┬───────────────┬───────────────┐
│ A   │ B   │ A__discretize │ B__discretize │
│ --- │ --- │ _custom       │ _custom       │
│ f64 │ i64 │ str           │ str           │
├─────┼─────┼───────────────┼───────────────┤
│ 0.1 │ 10  │ (-inf,0.2]    │ (-inf,20.0]   │
│ 0.2 │ 20  │ (0.2,0.3]     │ (20.0,30.0]   │
│ 0.2 │ 30  │ (0.2,0.3]     │ (20.0,30.0]   │
│ 0.4 │ 40  │ (0.3,inf)     │ (30.0,inf)    │
└─────┴─────┴───────────────┴───────────────┘
>>> discretizer.subset = None
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 2)
┌───────────────┬─────────────────────┐
│ A__discretize │ B__discretize       │
│ _custom       │ _custom             │
│ ---           │ ---                 │
│ str           │ str                 │
├───────────────┼─────────────────────┤
│ (-inf,0.2]    │ (-inf,20.0]         │
│ (0.2,0.3]     │ (20.0,30.0]         │
│ (0.2,0.3]     │ (20.0,30.0]         │
│ (0.3,inf)     │ (30.0,inf)          │
└───────────────┴─────────────────────┘
>>> discretizer.subset = ['A']
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 3)
┌─────┬─────┬───────────────┐
│ A   │ B   │ A__discretize │
│ --- │ --- │ _custom       │
│ f64 │ i64 │ str           │
├─────┼─────┼───────────────┤
│ 0.1 │ 10  │ (-inf,0.2]    │
│ 0.2 │ 20  │ (0.2,0.3]     │
│ 0.2 │ 30  │ (0.2,0.3]     │
│ 0.4 │ 40  │ (0.3,inf)     │
└─────┴─────┴───────────────┘
fit(X, y=None)[source]#

Fit the discretizer using predefined custom bins.

Parameters:
  • X (DataFrame) – Input DataFrame with numeric columns.

  • y (Series | None) – Target series (not used, present for sklearn compatibility).

Returns:

The fitted discretizer instance.

Return type:

CustomDiscretizer

class gators.discretizers.EqualLengthDiscretizer[source]#

Bases: _BaseDiscretizer

Discretizes numerical variables using equal-length bins.

Creates bins of equal width by dividing the data range into num_bins intervals of equal length. This works well for uniformly distributed data.
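
The edge computation described above amounts to a single np.linspace call (a sketch of the idea, not the class's actual code):

```python
import numpy as np

# Equal-length binning: num_bins + 1 equally spaced edges over [min, max].
x = np.array([10, 20, 30, 40], dtype=float)
num_bins = 3

edges = np.linspace(x.min(), x.max(), num_bins + 1)
# Each interval (edges[i], edges[i + 1]] has the same width.
```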

Parameters:
  • subset (Optional[List[str]], default=None) – List of numeric column names to discretize. If None, all numeric columns are selected.

  • num_bins (PositiveInt, default=5) – Number of equal-length bins to create.

  • rounding (PositiveInt, default=3) – Decimal places to round bin edges for labels.

  • inplace (bool, default=True) – If True, replace original columns with discretized values. If False, create new columns with suffix ‘__dic_length’.

  • drop_columns (bool, default=True) – If inplace=False, whether to drop the original columns after discretizing. Ignored when inplace=True.

  • as_numerics (bool, default=False) – If True, create numeric labels (0, 1, 2, …) instead of interval strings.

Examples

>>> from gators.discretizers import EqualLengthDiscretizer
>>> import polars as pl
>>> X = pl.DataFrame({
...     'A': [0.1, 0.2, 0.2, 0.4],
...     'B': [10, 20, 30, 40]
... })
>>> discretizer = EqualLengthDiscretizer(num_bins=3, drop_columns=True)
>>> discretizer.subset = ['A', 'B']
>>> discretizer.fit(X)
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 2)
┌───────────────┬───────────────┐
│ A__dic_length │ B__dic_length │
│ ---           │ ---           │
│ str           │ str           │
├───────────────┼───────────────┤
│ (0.1,0.2]     │ (10,20]       │
│ (0.1,0.2]     │ (20,30]       │
│ (0.2,0.3]     │ (20,30]       │
│ (0.3,0.4]     │ (30,40]       │
└───────────────┴───────────────┘
>>> discretizer.drop_columns = False
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 4)
┌─────┬─────┬───────────────┬───────────────┐
│ A   │ B   │ A__dic_length │ B__dic_length │
│ --- │ --- │ ---           │ ---           │
│ f64 │ i64 │ str           │ str           │
├─────┼─────┼───────────────┼───────────────┤
│ 0.1 │ 10  │ (0.1,0.2]     │ (10,20]       │
│ 0.2 │ 20  │ (0.1,0.2]     │ (20,30]       │
│ 0.2 │ 30  │ (0.2,0.3]     │ (20,30]       │
│ 0.4 │ 40  │ (0.3,0.4]     │ (30,40]       │
└─────┴─────┴───────────────┴───────────────┘
>>> discretizer.subset = None
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 2)
┌───────────────┬───────────────┐
│ A__dic_length │ B__dic_length │
│ ---           │ ---           │
│ str           │ str           │
├───────────────┼───────────────┤
│ (0.1,0.2]     │ (10,20]       │
│ (0.1,0.2]     │ (20,30]       │
│ (0.2,0.3]     │ (20,30]       │
│ (0.3,0.4]     │ (30,40]       │
└───────────────┴───────────────┘
>>> discretizer.subset = ['A']
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 3)
┌─────┬─────┬───────────────┐
│ A   │ B   │ A__dic_length │
│ --- │ --- │ ---           │
│ f64 │ i64 │ str           │
├─────┼─────┼───────────────┤
│ 0.1 │ 10  │ (0.1,0.2]     │
│ 0.2 │ 20  │ (0.2,0.3]     │
│ 0.2 │ 30  │ (0.2,0.3]     │
│ 0.4 │ 40  │ (0.3,0.4]     │
└─────┴─────┴───────────────┘
fit(X, y=None)[source]#

Fit the discretizer by computing equal-length bin boundaries.

Parameters:
  • X (DataFrame) – Input DataFrame with numeric columns.

  • y (Series | None) – Target series (not used, present for sklearn compatibility).

Returns:

The fitted discretizer instance.

Return type:

EqualLengthDiscretizer

class gators.discretizers.EqualSizeDiscretizer[source]#

Bases: _BaseDiscretizer

Discretizes numerical variables using equal-size (quantile-based) bins, so that each bin contains approximately the same number of samples.
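
A minimal sketch of the quantile-edge computation, assuming the class places edges at equally spaced quantiles (an illustration, not the class's actual code):

```python
import numpy as np

# Equal-size binning: edges at equally spaced quantiles, so each bin
# holds roughly the same number of samples.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=float)
num_bins = 4

qs = np.linspace(0, 1, num_bins + 1)  # 0%, 25%, 50%, 75%, 100%
edges = np.quantile(x, qs)
```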

Parameters:
  • subset (Optional[List[str]], default=None) – List of column names to discretize. If None, all numeric columns are used.

  • num_bins (PositiveInt, default=5) – Number of bins to divide each numeric column into.

  • rounding (PositiveInt, default=3) – Decimal places to round bin edges for labels.

  • drop_columns (bool, default=True) – If True, drops original columns after discretizing.

Examples

>>> from gators.discretizers import EqualSizeDiscretizer
>>> import polars as pl
>>> X = pl.DataFrame({
...     'A': [0.1, 0.2, 0.2, 0.4],
...     'B': [10, 20, 30, 40]
... })
>>> discretizer = EqualSizeDiscretizer(num_bins=3, drop_columns=True)
>>> discretizer.subset = ['A', 'B']
>>> discretizer.fit(X)
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 2)
┌───────────────┬───────────────┐
│ A__dic_size   │ B__dic_size   │
│ ---           │ ---           │
│ str           │ str           │
├───────────────┼───────────────┤
│ (0.1,0.2]     │ (10,20]       │
│ (0.1,0.2]     │ (20,30]       │
│ (0.2,0.3]     │ (20,30]       │
│ (0.3,0.4]     │ (30,40]       │
└───────────────┴───────────────┘
>>> discretizer.drop_columns = False
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 4)
┌─────┬─────┬───────────────┬───────────────┐
│ A   │ B   │ A__dic_size   │ B__dic_size   │
│ --- │ --- │ ---           │ ---           │
│ f64 │ i64 │ str           │ str           │
├─────┼─────┼───────────────┼───────────────┤
│ 0.1 │ 10  │ (0.1,0.2]     │ (10,20]       │
│ 0.2 │ 20  │ (0.1,0.2]     │ (20,30]       │
│ 0.2 │ 30  │ (0.2,0.3]     │ (20,30]       │
│ 0.4 │ 40  │ (0.3,0.4]     │ (30,40]       │
└─────┴─────┴───────────────┴───────────────┘
>>> discretizer.subset = None
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 2)
┌───────────────┬───────────────┐
│ A__dic_size   │ B__dic_size   │
│ ---           │ ---           │
│ str           │ str           │
├───────────────┼───────────────┤
│ (0.1,0.2]     │ (10,20]       │
│ (0.1,0.2]     │ (20,30]       │
│ (0.2,0.3]     │ (20,30]       │
│ (0.3,0.4]     │ (30,40]       │
└───────────────┴───────────────┘
>>> discretizer.subset = ['A']
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 3)
┌─────┬─────┬───────────────┐
│ A   │ B   │ A__dic_size   │
│ --- │ --- │ ---           │
│ f64 │ i64 │ str           │
├─────┼─────┼───────────────┤
│ 0.1 │ 10  │ (0.1,0.2]     │
│ 0.2 │ 20  │ (0.2,0.3]     │
│ 0.2 │ 30  │ (0.2,0.3]     │
│ 0.4 │ 40  │ (0.3,0.4]     │
└─────┴─────┴───────────────┘
fit(X, y=None)[source]#

Fit the discretizer by computing equal-size (quantile-based) bin boundaries.

Parameters:
  • X (DataFrame) – Input DataFrame with numeric columns.

  • y (Series | None) – Target series (not used, present for sklearn compatibility).

Returns:

The fitted discretizer instance.

Return type:

EqualSizeDiscretizer

class gators.discretizers.GeometricDiscretizer[source]#

Bases: _BaseDiscretizer

Discretizes numerical variables using geometric progression bins.

Creates bins following a geometric progression where each bin edge is a constant multiple of the previous edge. The common ratio is calculated as r = (max/min)^(1/num_bins). This is particularly useful for data spanning multiple orders of magnitude (e.g., transaction amounts).

For columns with zero or negative values, the data is temporarily shifted to positive range before computing geometric bins.
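
The ratio formula and the shift for non-positive columns can be sketched as follows (an illustration of the stated formula, not the class's actual code):

```python
import numpy as np

def geometric_edges(x, num_bins):
    """Bin edges following the common ratio r = (max/min) ** (1/num_bins)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    shift = 0.0
    if lo <= 0:
        # Temporarily shift the data to a strictly positive range.
        shift = 1.0 - lo
        lo, hi = lo + shift, hi + shift
    r = (hi / lo) ** (1.0 / num_bins)
    edges = lo * r ** np.arange(num_bins + 1)
    return edges - shift  # undo the shift

# Data spanning several orders of magnitude yields log-spaced edges.
edges = geometric_edges([1, 10, 100, 1000], num_bins=3)
```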

Parameters:
  • subset (Optional[List[str]], default=None) – List of numeric column names to discretize. If None, all numeric columns are selected.

  • num_bins (PositiveInt, default=5) – Number of geometric bins to create.

  • rounding (PositiveInt, default=3) – Decimal places to round bin edges for labels.

  • inplace (bool, default=True) – If True, replace original columns with discretized values. If False, create new columns with suffix ‘__dic_geom’.

  • drop_columns (bool, default=True) – If inplace=False, whether to drop the original columns after discretizing. Ignored when inplace=True.

  • as_numerics (bool, default=False) – If True, create numeric labels (0, 1, 2, …) instead of interval strings.

Examples

>>> from gators.discretizers import GeometricDiscretizer
>>> import polars as pl
>>> X = pl.DataFrame({
...     'A': [1, 10, 100, 1000],
...     'B': [0.1, 1, 10, 100]
... })
>>> discretizer = GeometricDiscretizer(num_bins=3, inplace=False)
>>> discretizer.fit(X)
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 2)
┌──────────────┬──────────────┐
│ A__dic_geom  │ B__dic_geom  │
│ ---          │ ---          │
│ str          │ str          │
├──────────────┼──────────────┤
│ (1,10]       │ (0.1,1.0]    │
│ (1,10]       │ (0.1,1.0]    │
│ (10,100]     │ (1.0,10.0]   │
│ (100,1000]   │ (10.0,100.0] │
└──────────────┴──────────────┘
>>> # With numeric labels
>>> discretizer = GeometricDiscretizer(num_bins=3, as_numerics=True, inplace=True)
>>> discretizer.fit(X)
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (4, 2)
┌─────┬─────┐
│ A   │ B   │
│ --- │ --- │
│ i32 │ i32 │
├─────┼─────┤
│ 0   │ 0   │
│ 0   │ 0   │
│ 1   │ 1   │
│ 2   │ 2   │
└─────┴─────┘
>>> # Handling zero/negative values
>>> X_neg = pl.DataFrame({
...     'C': [-10, 0, 10, 100, 1000]
... })
>>> discretizer = GeometricDiscretizer(num_bins=4, inplace=False)
>>> discretizer.fit(X_neg)
>>> transformed = discretizer.transform(X_neg)
>>> print(transformed)
shape: (5, 1)
┌─────────────┐
│ C__dic_geom │
│ ---         │
│ str         │
├─────────────┤
│ (-10,0]     │
│ (-10,0]     │
│ (0,10]      │
│ (10,100]    │
│ (100,1000]  │
└─────────────┘
fit(X, y=None)[source]#

Fit the discretizer by computing geometric progression bin boundaries.

Parameters:
  • X (DataFrame) – Input DataFrame with numeric columns.

  • y (Series | None) – Target series (not used, present for sklearn compatibility).

Returns:

The fitted discretizer instance.

Return type:

GeometricDiscretizer

class gators.discretizers.KMeansDiscretizer[source]#

Bases: _BaseDiscretizer

Clustering-based discretizer using k-means to find natural data clusters.

Uses k-means clustering to identify natural groupings in the data, creating bins based on cluster boundaries. This is more effective than equal-length binning for non-uniform distributions as it groups similar values together.
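
The mechanics can be sketched with a tiny hand-rolled Lloyd's loop (a dependency-free stand-in for a k-means library; the midpoint rule for deriving edges from cluster centers is an assumption for illustration, not the class's documented algorithm):

```python
import numpy as np

# 1-D k-means on one column, then bin edges midway between sorted centers.
x = np.array([10, 12, 15, 18, 100, 105, 110, 500, 520, 550], dtype=float)
k = 3

centers = np.quantile(x, np.linspace(0, 1, k))  # spread initial centers
for _ in range(100):
    # Assign each point to its nearest center, then recompute means.
    assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    new = np.array([x[assign == j].mean() for j in range(k)])
    if np.allclose(new, centers):  # converged (empty clusters not handled)
        break
    centers = new

centers = np.sort(centers)
edges = np.concatenate(([-np.inf], (centers[:-1] + centers[1:]) / 2, [np.inf]))
```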

Parameters:
  • subset (Optional[List[str]], default=None) – List of numeric column names to discretize. If None, all numeric columns are selected.

  • num_bins (PositiveInt, default=5) – Number of clusters (bins) to create using k-means.

  • rounding (PositiveInt, default=3) – Decimal places to round bin edges for labels.

  • inplace (bool, default=True) – If True, replace original columns with discretized values. If False, create new columns with suffix ‘__dic_kmeans’.

  • drop_columns (bool, default=True) – If inplace=False, whether to drop the original columns after discretizing. Ignored when inplace=True.

  • as_numerics (bool, default=False) – If True, create numeric labels (0, 1, 2, …) instead of interval strings.

  • random_state (Optional[int], default=None) – Random state for reproducibility of k-means clustering.

  • max_iter (int, default=300) – Maximum number of iterations for k-means algorithm.

  • n_init (int, default=10) – Number of times k-means will be run with different centroid seeds.

Examples

Example: Non-uniform distribution clustering

>>> from gators.discretizers import KMeansDiscretizer
>>> import polars as pl
>>> X = pl.DataFrame({
...     'price': [10, 12, 15, 18, 100, 105, 110, 500, 520, 550],
...     'quantity': [1, 2, 3, 4, 5, 10, 12, 15, 20, 25]
... })
>>> discretizer = KMeansDiscretizer(
...     subset=['price', 'quantity'],
...     num_bins=3,
...     drop_columns=True,
...     random_state=42
... )
>>> discretizer.fit(X)
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (10, 2)
┌─────────────────┬──────────────────┐
│ price__dic_kme… ┆ quantity__dic_k… │
│ ---             ┆ ---              │
│ str             ┆ str              │
├─────────────────┼──────────────────┤
│ (-inf,56.25]    ┆ (-inf,4.5]       │
│ (-inf,56.25]    ┆ (-inf,4.5]       │
│ ...             ┆ ...              │
└─────────────────┴──────────────────┘

K-means groups similar values: [10-18], [100-110], [500-550]. This is more meaningful than equal-length bins like [10-190], [190-370], [370-550].

>>> discretizer.drop_columns = False
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (10, 4)
┌───────┬──────────┬─────────────────┬──────────────────┐
│ price ┆ quantity ┆ price__dic_kme… ┆ quantity__dic_k… │
│ ---   ┆ ---      ┆ ---             ┆ ---              │
│ i64   ┆ i64      ┆ str             ┆ str              │
├───────┼──────────┼─────────────────┼──────────────────┤
│ 10    ┆ 1        ┆ (-inf,56.25]    ┆ (-inf,4.5]       │
│ 12    ┆ 2        ┆ (-inf,56.25]    ┆ (-inf,4.5]       │
│ ...   ┆ ...      ┆ ...             ┆ ...              │
└───────┴──────────┴─────────────────┴──────────────────┘
fit(X, y=None)[source]#

Fit the discretizer by learning cluster boundaries using k-means.

Parameters:
  • X (DataFrame) – Input DataFrame with numeric columns.

  • y (Series | None) – Target series (not used, present for sklearn compatibility).

Returns:

The fitted discretizer instance.

Return type:

KMeansDiscretizer

class gators.discretizers.QuantileDiscretizer[source]#

Bases: _BaseDiscretizer

Flexible quantile-based discretizer with explicit quantile control.

Creates bins based on specified quantiles, allowing fine-grained control over bin boundaries. More flexible than EqualSizeDiscretizer as you can specify custom quantiles. Handles skewed distributions well by ensuring each bin contains similar numbers of samples.
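
Turning an explicit quantile list into bin edges is essentially one np.quantile call, sketched below (an illustration of the idea, not the class internals; the open-ended outer bins mirror the interval labels in the examples):

```python
import numpy as np

# Explicit quantiles become interior bin edges; outer bins are open-ended.
x = np.array([20000, 25000, 30000, 35000, 40000, 50000,
              60000, 70000, 80000, 90000, 100000, 120000], dtype=float)
quantiles = [0.25, 0.5, 0.75]  # quartile boundaries

edges = np.concatenate(([-np.inf], np.quantile(x, quantiles), [np.inf]))
# Interior edges for this data: 33750.0, 55000.0, 82500.0
```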

Parameters:
  • subset (Optional[List[str]], default=None) – List of numeric column names to discretize. If None, all numeric columns are selected.

  • num_bins (PositiveInt, default=5) – Number of quantile-based bins to create (used if quantiles not specified). Ignored if quantiles parameter is provided.

  • quantiles (Optional[List[float]], default=None) – Explicit list of quantiles (0.0-1.0) to use as bin boundaries. If None, equally-spaced quantiles are generated based on num_bins. Example: [0.25, 0.5, 0.75] creates quartile bins.

  • rounding (PositiveInt, default=3) – Decimal places to round bin edges for labels.

  • inplace (bool, default=True) – If True, replace original columns with discretized values. If False, create new columns with suffix ‘__dic_quantile’.

  • drop_columns (bool, default=True) – If inplace=False, whether to drop the original columns after discretizing. Ignored when inplace=True.

  • as_numerics (bool, default=False) – If True, create numeric labels (0, 1, 2, …) instead of interval strings.

  • handle_duplicates (str, default='drop') –

    How to handle duplicate quantile values:

    • ‘drop’: Remove duplicate bin edges (recommended for low-variance data)

    • ‘raise’: Raise an error if duplicates are found

Examples

Example 1: Quartiles (default behavior with num_bins=4)

>>> from gators.discretizers import QuantileDiscretizer
>>> import polars as pl
>>> X = pl.DataFrame({
...     'age': [20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75],
...     'income': [20000, 25000, 30000, 35000, 40000, 50000,
...                60000, 70000, 80000, 90000, 100000, 120000]
... })
>>> discretizer = QuantileDiscretizer(
...     subset=['age', 'income'],
...     num_bins=4,
...     drop_columns=True
... )
>>> discretizer.fit(X)
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (12, 2)
┌─────────────────┬──────────────────┐
│ age__dic_quant… ┆ income__dic_qua… │
│ ---             ┆ ---              │
│ str             ┆ str              │
├─────────────────┼──────────────────┤
│ (-inf,33.75]    ┆ (-inf,33750.0]   │
│ (-inf,33.75]    ┆ (-inf,33750.0]   │
│ ...             ┆ ...              │
└─────────────────┴──────────────────┘

Example 2: Custom quantiles (deciles)

>>> discretizer = QuantileDiscretizer(
...     subset=['income'],
...     quantiles=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
...     drop_columns=False
... )
>>> discretizer.fit(X)
>>> transformed = discretizer.transform(X)

Example 3: Asymmetric quantiles for skewed data

>>> discretizer_skewed = QuantileDiscretizer(
...     subset=['income'],
...     quantiles=[0.5, 0.75, 0.9, 0.95, 0.99],
...     drop_columns=True
... )
>>> discretizer_skewed.fit(X)
>>> transformed = discretizer_skewed.transform(X)

Example 4: Tertiles (3 bins)

>>> discretizer_tertile = QuantileDiscretizer(
...     subset=['age'],
...     quantiles=[0.333, 0.667],
...     drop_columns=True
... )
>>> discretizer_tertile.fit(X)
>>> transformed = discretizer_tertile.transform(X)
fit(X, y=None)[source]#

Fit the discretizer by computing quantile-based bin boundaries.

Parameters:
  • X (DataFrame) – Input DataFrame with numeric columns.

  • y (Series | None) – Target series (not used, present for sklearn compatibility).

Returns:

The fitted discretizer instance.

Return type:

QuantileDiscretizer

class gators.discretizers.TreeBasedDiscretizer[source]#

Bases: _BaseDiscretizer

Supervised discretizer using decision tree splits for optimal bin boundaries.

Finds bin boundaries that maximize information gain or reduce variance by using decision tree split points. This is particularly effective for tree-based models as bins align with natural decision boundaries in the data.
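
The split-point idea can be sketched with a single depth-1 search (the class delegates to LightGBM; the hand-rolled Gini scan below only illustrates how a tree threshold becomes a bin edge):

```python
import numpy as np

# Supervised binning: pick the threshold a decision stump would pick,
# i.e. the candidate split minimising weighted Gini impurity.
age = np.array([25, 30, 35, 40, 45, 50, 55, 60, 65, 70], dtype=float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

def gini(labels):
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels) / len(labels)
    return 1.0 - np.sum(p ** 2)

order = np.argsort(age)
xs, ys = age[order], y[order]
mids = (xs[:-1] + xs[1:]) / 2  # candidate thresholds between neighbours
scores = [len(ys[xs <= t]) * gini(ys[xs <= t])
          + len(ys[xs > t]) * gini(ys[xs > t]) for t in mids]
best = mids[int(np.argmin(scores))]

# The winning threshold becomes an interior bin edge.
edges = np.array([-np.inf, best, np.inf])
```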

Parameters:
  • subset (Optional[List[str]], default=None) – List of numeric column names to discretize. If None, all numeric columns are selected.

  • num_bins (PositiveInt, default=5) – Maximum number of bins to create. Actual number may be less if tree finds fewer optimal splits.

  • rounding (PositiveInt, default=3) – Decimal places to round bin edges for labels.

  • inplace (bool, default=True) – If True, replace original columns with discretized values. If False, create new columns with suffix ‘__dic_tree’.

  • drop_columns (bool, default=True) – If inplace=False, whether to drop the original columns after discretizing. Ignored when inplace=True.

  • as_numerics (bool, default=False) – If True, create numeric labels (0, 1, 2, …) instead of interval strings.

  • task (str, default='classification') – Type of supervised learning task: ‘classification’ or ‘regression’. Determines which tree algorithm to use (LGBMClassifier vs LGBMRegressor).

  • min_samples_leaf (int, default=10) – Minimum number of samples required in each leaf node (min_data_in_leaf in LightGBM). Controls the granularity of binning (higher = fewer, coarser bins).

  • random_state (Optional[int], default=None) – Random state for reproducibility of tree splits.

Examples

Example 1: Classification task

>>> from gators.discretizers import TreeBasedDiscretizer
>>> import polars as pl
>>> X = pl.DataFrame({
...     'age': [25, 30, 35, 40, 45, 50, 55, 60, 65, 70],
...     'income': [30000, 35000, 40000, 50000, 55000, 60000, 70000, 75000, 80000, 90000],
...     'target': [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
... })
>>> discretizer = TreeBasedDiscretizer(
...     subset=['age', 'income'],
...     num_bins=3,
...     task='classification',
...     drop_columns=True
... )
>>> discretizer.fit(X, y=X['target'])
>>> transformed = discretizer.transform(X)
>>> print(transformed)
shape: (10, 3)
┌──────────────┬────────────────┬────────┐
│ age__dic_tre ┆ income__dic_tr ┆ target │
│ e            ┆ ee             ┆ ---    │
│ ---          ┆ ---            ┆ i64    │
│ str          ┆ str            ┆        │
├──────────────┼────────────────┼────────┤
│ (-inf,42.5]  ┆ (-inf,52500.0] ┆ 0      │
│ ...          ┆ ...            ┆ ...    │
└──────────────┴────────────────┴────────┘

Example 2: Regression task

>>> X_reg = pl.DataFrame({
...     'feature1': [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
...     'target': [10, 15, 18, 25, 30, 35, 42, 50]
... })
>>> discretizer_reg = TreeBasedDiscretizer(
...     subset=['feature1'],
...     num_bins=4,
...     task='regression'
... )
>>> discretizer_reg.fit(X_reg, y=X_reg['target'])
>>> transformed = discretizer_reg.transform(X_reg)
fit(X, y=None)[source]#

Fit the discretizer by learning optimal splits from decision tree.

Parameters:
  • X (DataFrame) – Input DataFrame with numeric columns.

  • y (Series) – Target values (binary for classification, continuous for regression). Required for TreeBasedDiscretizer.

Returns:

The fitted discretizer instance.

Return type:

TreeBasedDiscretizer

Raises:

ValueError – If y is None.