      Is Open Access

      On Tensors, Sparsity, and Nonnegative Factorizations

      Preprint


          Abstract

          Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which models the zeros observed in the data better than the typical Gaussian assumption. Under the Poisson assumption, we fit a model to observed data by minimizing the negative log-likelihood. We present a new algorithm for Poisson tensor factorization called CANDECOMP-PARAFAC Alternating Poisson Regression (CP-APR) that is based on a majorization-minimization approach. CP-APR is a generalization of the Lee-Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions. We also explain how to implement CP-APR for large-scale sparse tensors and present results on several data sets, both real and simulated.
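          The abstract's central idea, minimizing the Poisson negative log-likelihood with multiplicative updates that CP-APR generalizes, can be sketched in the two-way (matrix) case. The snippet below is an illustrative Lee-Seung-style Poisson/KL update for nonnegative matrix factorization, not the paper's CP-APR algorithm itself; the function name `poisson_nmf` and its parameters are this sketch's own, not the authors'.

```python
import numpy as np

def poisson_nmf(X, rank, n_iters=200, eps=1e-10, seed=0):
    """Lee-Seung multiplicative updates minimizing the Poisson negative
    log-likelihood sum(M - X * log M) with M = W @ H and W, H >= 0.
    CP-APR generalizes updates of this form from matrices to tensors."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iters):
        M = W @ H
        # Multiplicative update for W: scale by (X / M) H^T over row sums of H.
        W *= (X / np.maximum(M, eps)) @ H.T / np.maximum(H.sum(axis=1), eps)
        M = W @ H
        # Symmetric update for H: scale by W^T (X / M) over column sums of W.
        H *= W.T @ (X / np.maximum(M, eps)) / np.maximum(W.sum(axis=0)[:, None], eps)
    return W, H
```

          Because the updates only multiply by nonnegative factors, iterates stay nonnegative automatically; the `eps` guards keep factors strictly positive, which relates to the paper's concern about convergence to non-KKT points on the boundary.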


          Most cited references (24)


          Projected gradient methods for nonnegative matrix factorization.

          Nonnegative matrix factorization (NMF) can be formulated as a minimization problem with bound constraints. Although bound-constrained optimization has been studied extensively in both theory and practice, so far no study has formally applied its techniques to NMF. In this letter, we propose two projected gradient methods for NMF, both of which exhibit strong optimization properties. We discuss efficient implementations and demonstrate that one of the proposed methods converges faster than the popular multiplicative update approach. A simple Matlab code is also provided.
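            The bound-constrained formulation this reference describes can be sketched with plain projected gradient descent: take a gradient step on the Frobenius objective, then project back onto the nonnegative orthant. This is a simplified illustration under assumed fixed step sizes; Lin's actual method chooses steps with an Armijo-style line search, and the names below are this sketch's own.

```python
import numpy as np

def pg_nmf(X, rank, step=0.01, n_iters=500, seed=0):
    """Projected gradient descent for min ||X - W H||_F^2 s.t. W, H >= 0.
    The constant factor 2 in the gradient is absorbed into `step`."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iters):
        R = W @ H - X                               # residual
        W = np.maximum(W - step * (R @ H.T), 0.0)   # gradient step on W, then project
        R = W @ H - X
        H = np.maximum(H - step * (W.T @ R), 0.0)   # gradient step on H, then project
    return W, H
```

            The projection is just an elementwise `max(·, 0)`, which is what makes the bound-constrained view attractive: standard gradient machinery applies, with only a cheap clipping step added.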

            Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem


              Algorithms for Nonnegative Matrix Factorization with the β-Divergence


                Author and article information

                Journal
                11 December 2011; 14 August 2012
                Article
                DOI: 10.1137/110859063
                arXiv: 1112.2414

                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                SIAM Journal on Matrix Analysis and Applications 33(4):1272-1299, 2012
                math.NA
