dislib.classification.RandomForestClassifier

class dislib.classification.rf.forest.RandomForestClassifier(n_estimators=10, try_features='sqrt', max_depth=inf, distr_depth='auto', sklearn_max=100000000.0, hard_vote=False, random_state=None)[source]

Bases: object

A distributed random forest classifier.

Parameters:
  • n_estimators (int, optional (default=10)) – Number of trees to fit.

  • try_features (int, str or None, optional (default=’sqrt’)) – The number of features to consider when looking for the best split:

    • If int, then the given number of features is considered at each split.
    • If “sqrt”, then try_features=sqrt(n_features).
    • If “third”, then try_features=n_features // 3.
    • If None, then try_features=n_features.

    Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than try_features features.

  • max_depth (int or np.inf, optional (default=np.inf)) – The maximum depth of the tree. If np.inf, then nodes are expanded until all leaves are pure.

  • distr_depth (int or str, optional (default=’auto’)) – Number of levels of the tree in which the nodes are split in a distributed way.

  • sklearn_max (int or float, optional (default=1e8)) – Maximum size (len(subsample) * n_features) of the arrays passed to sklearn’s DecisionTreeClassifier.fit(), which is called to fit the subtrees (subsamples) of our DecisionTreeClassifier. sklearn’s fit() is used because it is faster, but it requires loading the data into memory, which can cause memory problems for large datasets. This parameter can be adjusted to match the hardware capabilities.

  • hard_vote (bool, optional (default=False)) – If True, the predicted class is chosen by majority voting over the predict() outputs of the individual decision trees. If False, the class with the highest probability given by predict_proba(), which is an average of the probabilities given by the decision trees, is chosen. A standalone sketch of the two voting schemes is shown after this parameter list.

  • random_state (int, RandomState instance or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
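To make the hard_vote option concrete, here is a small standalone NumPy sketch (independent of dislib) of how the outputs of three trees would be combined for a single sample under each voting scheme; the probability values are made up for illustration:

    import numpy as np

    # Per-tree class probabilities for one sample (3 trees, 2 classes).
    tree_probas = np.array([[0.9, 0.1],
                            [0.4, 0.6],
                            [0.4, 0.6]])

    # hard_vote=True: majority vote over each tree's predicted class.
    per_tree_class = tree_probas.argmax(axis=1)             # [0, 1, 1]
    hard_prediction = np.bincount(per_tree_class).argmax()  # class 1

    # hard_vote=False (default): argmax of the averaged probabilities.
    soft_prediction = tree_probas.mean(axis=0).argmax()     # class 0

The two schemes can disagree, as above: two of the three trees lean towards class 1, but the first tree is much more confident, so the averaged probabilities favour class 0.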

Variables:
  • classes (None or ndarray) – Array of distinct classes, set at fit().
  • trees (list of DecisionTreeClassifier) – List of the tree classifiers of this forest, populated at fit().
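A minimal construction sketch; the keyword values below are illustrative only, and all of them are optional:

    from dislib.classification import RandomForestClassifier

    forest = RandomForestClassifier(n_estimators=30,
                                    try_features='sqrt',
                                    distr_depth=2,
                                    hard_vote=True,
                                    random_state=0)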
fit(dataset)[source]

Fits the RandomForestClassifier.

Parameters: dataset (dislib.data.Dataset) – Note: in the implementation of this method, the dataset is transformed into a dislib.classification.rf.data.RfDataset. To avoid the cost of this transformation, RfDataset objects are also accepted as an argument.
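A hedged sketch of fitting the forest; train is assumed to be a dislib.data.Dataset that was already built with the data-loading helpers of your dislib version (the loader itself is outside the scope of this class):

    from dislib.classification import RandomForestClassifier

    # train: a dislib.data.Dataset with samples and labels already loaded.
    forest = RandomForestClassifier(n_estimators=10, random_state=0)
    forest.fit(train)
    # forest.classes and forest.trees are populated after fit().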
fit_predict(dataset)[source]

Fits the forest and predicts the classes for the same dataset.

Parameters: dataset (dislib.data.Dataset) – Dataset used to fit the RandomForestClassifier. The label of each sample is then overwritten with the prediction that the fitted forest makes for that same sample.
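A short sketch; as in the fit() sketch above, train is assumed to be a pre-built dislib.data.Dataset, whose labels are overwritten in place:

    forest = RandomForestClassifier(n_estimators=10, random_state=0)
    forest.fit_predict(train)
    # train.labels now hold the forest's predictions for the training samples.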
predict(dataset)[source]

Predicts classes using a fitted forest.

Parameters: dataset (dislib.data.Dataset) – Dataset with the samples to predict. The label corresponding to each sample is filled with the prediction.
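A sketch of the usual train/test workflow; as in the fit() sketch above, train and test are assumed to be pre-built dislib.data.Dataset objects:

    forest = RandomForestClassifier(n_estimators=10, random_state=0)
    forest.fit(train)
    forest.predict(test)
    # test.labels is filled with the predicted class of each test sample.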
predict_proba(dataset)[source]

Predicts class probabilities using a fitted forest.

The probabilities are obtained as an average of the probabilities of each decision tree.

Parameters: dataset (dislib.data.Dataset) – Dataset with the samples to predict. The label corresponding to each sample is filled with an array of the predicted probabilities. The class corresponding to each position in the array is given by self.classes.
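A sketch of recovering class labels from the probability arrays; the datasets are assumed as in the earlier sketches, and how the filled labels are synchronized locally depends on the COMPSs runtime:

    import numpy as np

    forest = RandomForestClassifier(n_estimators=10, random_state=0)
    forest.fit(train)
    forest.predict_proba(test)

    # Each label is now an array of class probabilities; position i maps to
    # forest.classes[i], so the most likely class for a sample's probability
    # array `probas` would be forest.classes[np.argmax(probas)].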
score(dataset)[source]

Accuracy classification score.

Returns the mean accuracy on the given test dataset. This method assumes that dataset.labels contains the true labels.

Parameters: dataset (Dataset) – Dataset where dataset.labels are the true labels for dataset.samples.
Returns: score – Fraction of correctly classified samples.
Return type: float
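A sketch of evaluating a fitted forest on a labelled test set; the datasets are assumed as in the earlier sketches:

    forest = RandomForestClassifier(n_estimators=10, random_state=0)
    forest.fit(train)
    accuracy = forest.score(test)  # fraction of correctly classified samples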