Nonnegative Matrix and Tensor Factorizations


By Andrzej Cichocki

Publisher: John Wiley & Sons, Ltd

ABOUT Andrzej Cichocki

Andrzej Cichocki received the M.Sc. (with honors), Ph.D., and Dr.Sc. (Habilitation) degrees, all in electrical engineering, from the Warsaw University of Technology (Poland). Since 1972, he has been with the Institute of Theory of Electrical Engineering, Measurement and Information Systems.



Signal processing, data analysis and data mining are pervasive throughout science and engineering.
Extracting meaningful knowledge from raw experimental datasets, measurements, and
observations, and understanding complex data, have become important challenges and objectives.
Datasets collected from complex phenomena often represent the integrated result of several
inter-related variables, or are combinations of underlying latent components or factors. Such
datasets can first be decomposed or separated into their underlying components in order
to discover structures and extract hidden information. In many situations, the measurements are
gathered and stored as data matrices or multi-way arrays (tensors), and described by linear or
multi-linear models.
Approximate low-rank matrix and tensor factorizations or decompositions play a fundamental
role in enhancing the data and extracting latent components. A common thread in various
approaches for noise removal, model reduction, feasibility reconstruction, and Blind Source
Separation (BSS) is to replace the original data by a lower dimensional approximate representation
obtained via a matrix or multi-way array factorization or decomposition. The notion of a
matrix factorization arises in a wide range of important applications and each matrix factorization
makes a different assumption regarding component (factor) matrices and their underlying
structures, so choosing the appropriate one is critical in each application domain. Very often
the data, signals, or images to be analyzed are nonnegative (or at least partially nonnegative), and
sometimes they also admit sparse or smooth representations. For such data, it is preferable to take
these constraints into account in the analysis, so as to extract nonnegative and sparse/smooth
components or factors with physical meaning or a reasonable interpretation, thereby avoiding
absurd or unpredictable results; classical tools cannot guarantee that nonnegativity is maintained.
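To make the idea concrete, the following is a minimal sketch of NMF with the classical Lee-Seung multiplicative updates for the Frobenius-norm cost, one of the algorithm families this kind of survey covers. The function name, the small example matrix, and the iteration count are illustrative choices of this sketch, not taken from the monograph; multiplicative updates are attractive here because they preserve elementwise nonnegativity of both factors at every step.

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9, seed=0):
    """Approximate a nonnegative matrix V as W @ H with W, H >= 0,
    using Lee-Seung multiplicative updates for the Frobenius norm.
    (Illustrative sketch, not the book's reference implementation.)"""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Each update multiplies by a nonnegative ratio, so W and H
        # can never become negative; eps guards against division by zero.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Example: a nonnegative matrix with nonnegative rank 2
# (the first two rows are proportional).
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H))  # small reconstruction error
```

Unlike an unconstrained SVD truncation of the same rank, the factors W and H returned here are elementwise nonnegative, which is exactly the property the text argues is needed for interpretable components.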
In this research monograph, we provide a broad survey of models and algorithmic aspects of
Nonnegative Matrix Factorization (NMF) and its various extensions and modifications, especially
the Nonnegative Tensor Factorization (NTF) and the Nonnegative Tucker Decomposition