dislib.optimization.ADMM

ADMM Lasso

Authors: Aleksandar Armacki and Lidija Fodor
Affiliation: Faculty of Sciences, University of Novi Sad, Serbia

This work is supported by the I-BiDaaS project, funded by the European Commission under Grant Agreement No. 780787.

class dislib.optimization.admm.base.ADMM(loss_fn, k, rho=1, max_iter=100, rtol=0.01, atol=0.0001, verbose=False)

Bases: sklearn.base.BaseEstimator

Alternating Direction Method of Multipliers (ADMM) solver. ADMM is well suited to distributed settings [1], offers guaranteed convergence, and is robust with respect to its parameters. Moreover, the algorithm has a generic form that can be adapted to a wide range of machine learning problems with only minor changes to the code; a serial sketch of its iterations for the Lasso problem is given below.
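
For the Lasso problem minimize (1/2)||Ax - b||^2 + lambda*||x||_1, the iterations follow [1]: an x-update (a ridge-type linear solve), a z-update (elementwise soft thresholding, where the threshold plays the role of the parameter k below), and a dual update, repeated until the primal and dual residuals fall below tolerances built from atol and rtol. The following serial NumPy sketch illustrates the algorithm only; it is not dislib's distributed implementation, and the names soft_threshold and admm_lasso are ours.

    import numpy as np

    def soft_threshold(a, kappa):
        # S_kappa(a) = sign(a) * max(|a| - kappa, 0), applied elementwise.
        return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

    def admm_lasso(A, b, k, rho=1.0, max_iter=100, rtol=1e-2, atol=1e-4):
        m, n = A.shape
        z = np.zeros(n)
        u = np.zeros(n)
        AtA = A.T @ A + rho * np.eye(n)  # reused by every x-update
        Atb = A.T @ b
        for it in range(max_iter):
            x = np.linalg.solve(AtA, Atb + rho * (z - u))  # x-update
            z_old = z
            z = soft_threshold(x + u, k)                   # z-update
            u = u + x - z                                  # dual update
            # Early stopping on primal/dual residuals ([1], Sec. 3.3).
            r_norm = np.linalg.norm(x - z)
            s_norm = np.linalg.norm(rho * (z - z_old))
            eps_pri = np.sqrt(n) * atol + rtol * max(np.linalg.norm(x),
                                                     np.linalg.norm(z))
            eps_dual = np.sqrt(n) * atol + rtol * np.linalg.norm(rho * u)
            if r_norm < eps_pri and s_norm < eps_dual:
                return z, it + 1, True
        return z, max_iter, False

The triple returned by the sketch mirrors the z, n_iter and converged attributes documented below.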

Parameters:
  • loss_fn (func) – Loss function.
  • k (float) – Soft thresholding value.
  • rho (float, optional (default=1)) – The penalty parameter for constraint violation.
  • max_iter (int, optional (default=100)) – Maximum number of iterations to perform.
  • rtol (float, optional (default=1e-2)) – The relative tolerance used in the early stopping criterion.
  • atol (float, optional (default=1e-4)) – The absolute tolerance used in the early stopping criterion.
  • verbose (boolean, optional (default=False)) – Whether to print information about the optimization process.
Variables:
  • z (ds-array, shape=(1, n_features)) – Value of the consensus variable z computed by the optimization.
  • n_iter (int) – Number of iterations performed.
  • converged (boolean) – Whether the optimization converged.

References

[1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein (2011). Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3(1):1–122.
fit(x, y)

Fits the model with training data.

Parameters:
  • x (ds-array, shape=(n_samples, n_features)) – Training samples.
  • y (ds-array, shape=(n_samples, 1)) – Labels (or regression targets) of x.
Returns:
  self
Return type:
  ADMM
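
For illustration, the serial sketch above can be run on synthetic data to mimic what fit computes and what the z, n_iter and converged attributes report. Note that admm_lasso is the helper sketched earlier on this page, not a dislib function; within dislib this solver underlies the Lasso estimator referenced in the title above, and the exact callable expected for loss_fn is not documented on this page.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20)
    x_true[:3] = [1.5, -2.0, 0.5]  # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(100)

    z, n_iter, converged = admm_lasso(A, b, k=0.5)
    print(converged, n_iter)   # typically converges well before max_iter
    print(np.round(z[:5], 2))  # large entries track the nonzero coefficients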