.. Tensor Rank Visualization documentation master file, created by
   sphinx-quickstart on Fri Aug 7 11:14:54 2009.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Tensor Rank Visualization
=====================================================
`Martin J. Mohlenkamp `_

Copyright (c) 2009, 2013 Martin J. Mohlenkamp.  This documentation is
distributed under the `GNU Free Documentation License `_.

Introduction
============
.. Subject Classification: 65D15 (numerical approximation; algorithms
   for functional approximation)

   Keywords: curse of dimensionality

Since the notation and terminology are inconsistent in the literature,
we first have to define what we are talking about. A *tensor* is an
object with some number of indices that, for each choice of index
values, yields a real or complex number. We denote it

.. math::

   T(i_1,i_2,\ldots,i_d).

The *dimension* :math:`d` of the tensor is the number of indices. Thus
an ordinary vector is a tensor in dimension one, and an ordinary
matrix is a tensor in dimension two. Each index has some range of
allowed values. The *resolution* :math:`M_k` in an index
:math:`i_k` is the number of allowed values, so

.. math::

   i_k=1,2,\ldots,M_k\,.

For simplicity we will often
assume the resolution is the same in all indices and denote it
:math:`M`.
A tensor :math:`T` in dimension :math:`d` with resolution :math:`M`
contains :math:`M^d` numbers. For even moderate :math:`d` and
:math:`M` this is an astronomical number, and there is no way to
directly store, compute, or manipulate such a tensor. This effect is
known as the *Curse of Dimensionality* [BELLMA1961]_.
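
To make the growth concrete, here is a small Python sketch (illustrative
only, not part of the accompanying software) of how quickly :math:`M^d`
grows:

```python
# Illustrative sketch: a dense tensor in dimension d with resolution M
# in every index stores M**d numbers.
def dense_entries(M, d):
    """Number of entries in a dense d-dimensional tensor of resolution M."""
    return M ** d

# Even moderate M and d give astronomical counts.
for M, d in [(2, 3), (10, 6), (100, 10)]:
    print(f"M={M:>3}, d={d:>2}: {dense_entries(M, d):,} entries")
```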
A *separable* tensor is one that can be written as a product

.. math::

   T(i_1,i_2,\ldots,i_d)=V_1(i_1)V_2(i_2)\cdots V_d(i_d)
   =\prod_{k=1}^d V_k(i_k)\,.

To write :math:`T` without explicitly listing its indices, we use the
tensor product

.. math::

   T=V_1\otimes V_2\otimes\cdots\otimes V_d
   =\bigotimes_{k=1}^d V_k\,.

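
A separable tensor can be built directly from its factor vectors. The
following pure-Python sketch (the function name is illustrative, not
taken from the accompanying software) stores :math:`T` as a dictionary
keyed by index tuples:

```python
from itertools import product

def separable_tensor(factors):
    """Build T(i_1,...,i_d) = V_1(i_1) * ... * V_d(i_d) from factor
    vectors (lists of numbers), returned as a dict keyed by 0-based
    index tuples."""
    T = {}
    for idx in product(*(range(len(V)) for V in factors)):
        value = 1.0
        for V, i in zip(factors, idx):
            value *= V[i]
        T[idx] = value
    return T

# A 2 x 2 x 2 separable tensor from three factor vectors.
T = separable_tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(T[(0, 1, 1)])  # 1.0 * 4.0 * 6.0 = 24.0
```

Note that this dense dictionary still holds :math:`M^d` entries; the
point of storing only the factors is to avoid ever building it.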
A tensor can be expressed with *separation rank* :math:`r` if it can
be written

.. math::

   T(i_1,i_2,\ldots,i_d)&=V^1_1(i_1)V^1_2(i_2)\cdots V^1_d(i_d)+
   V^2_1(i_1)V^2_2(i_2)\cdots V^2_d(i_d)+\cdots
   +V^r_1(i_1)V^r_2(i_2)\cdots V^r_d(i_d)\\
   &=\sum_{l=1}^r \prod_{k=1}^d V^l_k(i_k)\,,

or equivalently

.. math::

   T=\sum_{l=1}^r \bigotimes_{k=1}^d V^l_k\,,

where the superscript in :math:`V^l_k` is an index rather than a
power. Such a tensor can be stored using only :math:`rdM` numbers
instead of :math:`M^d`. Any tensor can be written in this form with
:math:`r=M^{d-1}`, but only when :math:`r` is small is this
representation useful.
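
An entry of a tensor stored in separated form can be evaluated without
ever forming the dense array of :math:`M^d` numbers. A minimal sketch
(hypothetical names, assuming the factors are stored as nested Python
lists):

```python
def eval_separated(factors, idx):
    """Evaluate T(idx) for a tensor in separated form, where
    factors[l][k] is the factor vector V^l_k, so that
    T(idx) = sum_l prod_k V^l_k(idx[k])."""
    total = 0.0
    for term in factors:             # l = 1, ..., r
        p = 1.0
        for V, i in zip(term, idx):  # k = 1, ..., d
            p *= V[i]
        total += p
    return total

# A rank-2 separated representation of a 2 x 2 x 2 tensor.  Here
# r*d*M = 12 happens to exceed M**d = 8, but for larger d the separated
# form stores only r*d*M numbers instead of M**d.
factors = [
    [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]],  # l = 1
    [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],  # l = 2
]
print(eval_separated(factors, (0, 0, 0)))  # 1.0
print(eval_separated(factors, (0, 1, 1)))  # 0.0
```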
Given a tensor with separation rank :math:`r`, it may or may not be
possible to express it with smaller :math:`r`. There are several
examples where tensors have been written or approximated with much
smaller :math:`r` than expected (see e.g. [BEY-MOH2005]_ [MOH-MON2005]_ ).
In each of these examples a proof can be given, but there is no
apparent connection between the proofs. Overall, there is very little
understanding of why approximations with small :math:`r` seem to be
effective. Our understanding is hampered by the following:

* The set of tensors with separation rank at most :math:`r` is not a
  subspace, since in general the sum of two such tensors has
  separation rank :math:`2r`. Thus this set is not flat, but instead
  has nontrivial geometry and topology.

* The smallest interesting example is :math:`2\times 2 \times 2`,
  which has 8 entries and so would need to be plotted in
  :math:`\mathbb{R}^8`.

The goal of this work is to provide a way to visually explore the set
of tensors with a given :math:`r`. We hope that this exploration will
yield unexpected structure and inspire better understanding of such
tensors. This understanding may then lead to proofs on why
approximations with small :math:`r` are unexpectedly effective.
See [KOL-BAD2009]_ for a general review of tensor decompositions.

Visualization Theory
====================
In these sections we describe how one can visualize a set of tensors in
a canonical fashion, and some practical issues that arise.

.. toctree::
   :maxdepth: 2

   theory.rst
   practical.rst

Examples
===========
In these sections we explore some particular examples and show the
visualizations produced.

.. toctree::
   :maxdepth: 2

   2x2x2.rst
   3x3x3.rst
   3x3x4.rst
   2to9.rst
   ALSfail.rst

The :math:`2 \times 2 \times 2` case gives interesting holes that the
:math:`3 \times 3 \times 3` and :math:`3 \times 3 \times 4` cases
lack. Based on Table 1 in [C-B-L-C2009]_, this was to be expected,
since the :math:`2 \times 2 \times 2` case has two typical ranks and
the others do not. Other known cases with more than one typical rank are:

* :math:`2 \times 3 \times 3` with ranks 3 and 4
* :math:`2 \times 4 \times 4` with ranks 4 and 5
* :math:`2 \times 5 \times 5` with ranks 5 and 6
* :math:`3 \times 3 \times 5` with ranks 5 and 6
* :math:`3 \times 4 \times 8` with ranks 8 and 9
* :math:`4 \times 4 \times 12` with ranks 12 and 13

In some other cases, the smallest typical rank is known, but it is
unknown whether it is the only typical rank. For example, for
:math:`3 \times 4 \times 4` tensors the typical rank is either 6 alone
or both 6 and 7.

Software (including download)
=============================
In this section we explain what the software does to produce the
visualizations and how to use it.

.. toctree::
   :maxdepth: 2

   software.rst

Acknowledgments
===============
This material is based upon work supported by the
National Science Foundation under Grant DMS-0545895.
I would like to thank Sruthi Devarachetty, Nam Nguyen, Krishna
Chaitanya Palavarpu, and Robert Vanyo, who were research assistants on
this project while students at Ohio University.
I would also like to thank
`Shmuel Friedland `_,
`Lek-Heng Lim `_,
`Fernando Perez `_,
`Alwin Stegeman `_, and
`Jos ten Berge `_
for their comments, corrections, and suggestions.

Bibliography
============
.. [BELLMA1961] Richard Bellman, *Adaptive Control Processes: A Guided
   Tour*, Princeton Univ. Press, Princeton, New Jersey, 1961.

.. [BEY-MOH2005] G. Beylkin and M. J. Mohlenkamp, Algorithms for
   numerical analysis in high dimensions, *SIAM J. Sci. Comput.*,
   26 (2005), pp. 2133-2159.

.. [BRE-HU2013] Murray R. Bremner and Jiaxiong Hu, On Kruskal's theorem
   that every 3x3x3 array has rank at most 5, *Linear Algebra Appl.*,
   439 (2013), pp. 401-421. doi:10.1016/j.laa.2013.03.021

.. [C-B-L-C2009] P. Comon, J. M. F. ten Berge, L. De Lathauwer, and
   J. Castaing, Generic and typical ranks of multi-way arrays,
   *Linear Algebra Appl.*, 430 (2009), pp. 2997-3007.
   doi:10.1016/j.laa.2009.01.014

.. [DES-LIM2008] Vin De Silva and Lek-Heng Lim, Tensor rank and the
   ill-posedness of the best low-rank approximation problem, *SIAM
   Journal on Matrix Analysis and Applications*, 30 (3), September 2008.

.. [FRI-MEH] S. Friedland and V. Mehrmann, Best subspace tensor
   approximations, http://arxiv.org/abs/0805.4220

.. [KOL-BAD2009] Tamara G. Kolda and Brett W. Bader, Tensor
   decompositions and applications, *SIAM Review*, 51 (2009),
   pp. 455-500.

.. [KRUSKA1989] J. B. Kruskal, Rank, decomposition, and uniqueness for
   3-way and N-way arrays, in *Multiway Data Analysis*, R. Coppi and
   S. Bolasco, eds., North-Holland, Amsterdam, 1989, pp. 7-18.

.. [MOH-MON2005] M. J. Mohlenkamp and L. Monzon, Trigonometric
   identities and sums of separable functions, *The Mathematical
   Intelligencer*, 27 (2005), pp. 65-69.
   http://www.ohio.edu/people/mohlenka/research/sine.pdf

.. ALL-RHO2008.pdf is 4x4x4 gene related

Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`