Sign-constrained regularized loss minimization

Tsuyoshi Kato, Misato Kobayashi, Daisuke Sano

Research output: Contribution to journal › Article › peer-review


In practical analysis, domain knowledge about the analysis target has often been accumulated, yet such knowledge is typically discarded at the statistical analysis stage, where the statistical tool is applied as a black box. In this paper, we introduce sign constraints, a handy and simple representation for non-experts, into generic learning problems. We have developed two new optimization algorithms for sign-constrained regularized loss minimization, called the sign-constrained Pegasos (SC-Pega) and the sign-constrained SDCA (SC-SDCA), obtained by simply inserting a sign correction step into the original Pegasos and SDCA, respectively. We present theoretical analyses guaranteeing that insertion of the sign correction step does not degrade the convergence rate of either algorithm. Two applications in which sign-constrained learning is effective are presented. One is the exploitation of prior information about the correlation between explanatory variables and a target variable. The other is the introduction of sign constraints into the SVM-Pairwise method. Experimental results demonstrate a significant improvement in generalization performance when sign constraints are introduced in both applications.
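The idea of inserting a sign correction step into Pegasos can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name `sc_pegasos`, the `signs` encoding, and the toy data are all assumptions, and the sign correction is realized here as a projection of the weight vector onto the orthant specified by the constraints after each stochastic subgradient step.

```python
import numpy as np

def sc_pegasos(X, y, signs, lam=0.1, n_iters=1000, seed=0):
    """Hypothetical sketch of sign-constrained Pegasos (SC-Pega).

    X     : (n, d) feature matrix
    y     : (n,) labels in {-1, +1}
    signs : (d,) entries in {-1, 0, +1}; +1 forces w_j >= 0,
            -1 forces w_j <= 0, 0 leaves w_j unconstrained
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)  # standard Pegasos step size
        # Stochastic subgradient step on the L2-regularized hinge loss
        if y[i] * X[i].dot(w) < 1:
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1 - eta * lam) * w
        # Sign correction step: project w onto the feasible orthant
        w[(signs > 0) & (w < 0)] = 0.0
        w[(signs < 0) & (w > 0)] = 0.0
    return w

# Toy usage: suppose feature 0 is known to correlate positively
# with the target, so its weight is constrained to be nonnegative.
X = np.array([[1.0, 0.0], [2.0, 1.0], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = sc_pegasos(X, y, signs=np.array([1, 0]))
```

Because the projection is applied after every update, the returned weight vector always satisfies the sign constraints exactly, which is what makes the correction step cheap to insert into the original algorithm.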

Original language: English
Journal: Unknown Journal
Publication status: Published - 2017 Oct 12

ASJC Scopus subject areas

  • General

