From Nonnegative Matrix Factorization to Nonnegative Tensor Factorization

Event Sponsor: 
LANS Informal Seminar Series
Start Date: 
Mar 4 2009 - 3:00pm to 4:00pm
Building 221 Conference Room A261
Argonne National Laboratory
Speaker(s): 
Haw-ren Fang

In machine learning and data mining, data are often represented in numerical form, as a matrix or a tensor. In many cases, the data are nonnegative. For example, in text mining, a collection of documents is converted to a term-document matrix whose (i,j) entry is the frequency of term i in document j. In face recognition, a grayscale face image is represented as a nonnegative matrix, and a set of face images forms a 3-dimensional data tensor.
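The term-document construction above can be sketched in a few lines; the toy corpus and variable names here are illustrative, not from the talk:

```python
from collections import Counter

# Hypothetical toy corpus; M[i][j] = frequency of term i in document j.
docs = [
    "nonnegative matrix factorization",
    "tensor factorization of data",
    "matrix and tensor data",
]
terms = sorted({w for d in docs for w in d.split()})
counts = [Counter(d.split()) for d in docs]
M = [[c[t] for c in counts] for t in terms]  # rows = terms, columns = documents
```

Every entry of M is a count, so the matrix is nonnegative by construction.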

Low-rank approximation by principal component analysis (for a matrix) or tensor decomposition (for a tensor) is frequently used to filter out noise or capture important features. However, certain properties may be lost due to the introduction of negative values. Numerical evidence has shown that adding nonnegativity constraints helps detect the essential features of the data. The resulting technique is called nonnegative matrix factorization (NMF) for matrices or nonnegative tensor factorization (NTF) for tensors.
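The loss of nonnegativity can be seen directly from the SVD, the workhorse behind PCA-style low-rank approximation. In this illustrative sketch (random data, not from the talk), the matrix is entrywise positive, yet beyond the leading singular pair the factors necessarily mix signs:

```python
import numpy as np

# A strictly positive data matrix (stand-in for counts or pixel intensities).
rng = np.random.default_rng(0)
A = rng.random((6, 4)) + 0.1

# Unconstrained low-rank factors come from the SVD.
U, s, Vt = np.linalg.svd(A)

# The leading singular vectors of a positive matrix can be taken single-signed
# (Perron-Frobenius), but the second left singular vector is orthogonal to the
# first, so it must contain both positive and negative entries.
second = U[:, 1]
```

This sign mixing is exactly what NMF/NTF rule out by constraining the factors to be nonnegative.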

The NMF and NTF computations can be formulated as simple-bound nonconvex optimization problems: minimize a matrix distance or divergence subject to the nonnegativity constraints. Various alternating algorithms have been proposed to solve them. We show that virtually all alternating algorithms for NMF can be adapted to compute NTF, and we relate the algorithms in the literature to one another. Initialization schemes and sparseness constraints for NMF and NTF will also be discussed.
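As one concrete instance of the alternating approach, here is a minimal sketch of the classic Lee-Seung multiplicative updates for NMF under the Frobenius-norm objective; the function name, rank, and iteration count are illustrative choices, not specifics of the talk:

```python
import numpy as np

def nmf(A, r, iters=200, eps=1e-9, seed=0):
    """Sketch of multiplicative-update NMF: minimize ||A - W H||_F^2, W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, r))  # nonnegative random initialization
    H = rng.random((r, n))
    for _ in range(iters):
        # Alternate between the two factors; the elementwise ratio updates
        # preserve nonnegativity automatically (eps guards against 0/0).
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage on a random nonnegative matrix:
A = np.random.default_rng(1).random((8, 5))
W, H = nmf(A, 2)
```

Because each update multiplies by a nonnegative ratio, no projection step is needed, which is one reason these updates adapt naturally to the tensor (NTF) setting.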
