Rasmus Bonnevie • 5 years ago

I spent the last part of my PhD looking into tensor networks because of the relationship with graphical models, and I really think it's an amazing way to reason about a lot of computational algorithms and models. I think the relationship goes beyond the Robeva article, with message-passing algorithms in graphical models corresponding directly to tensor network contractions and the statistical subfield of spectral learning (which tries to learn models by calculating SVDs on high-order tensors) standing to be elucidated by diagrammatic reasoning.
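As a minimal illustration of that correspondence (plain NumPy, with made-up chain potentials), summing the variables out of a chain graphical model one at a time, i.e., passing messages, computes the same number as contracting the whole tensor network at once:

```python
import numpy as np

rng = np.random.default_rng(0)
factors = [rng.random((4, 4)) for _ in range(5)]  # pairwise potentials on a chain

# Message passing: each step contracts one shared index (sums out one variable).
message = np.ones(4)
for f in factors:
    message = message @ f

Z_mp = message.sum()  # partition function via messages

# The same quantity as a single tensor network contraction.
Z_tn = np.einsum('ab,bc,cd,de,ef->', *factors)

assert np.isclose(Z_mp, Z_tn)
```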
I love your specialized notation by the way, with the embeddings and symmetry considerations!
One missing piece for me is how to handle inverses - is there any sensible way to handle something like the Woodbury matrix identity? This would be particularly helpful in the spectral learning setting, where you need to find compositions where the tensors "cancel out".
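For reference, the identity in question is

$$(A + UCV)^{-1} = A^{-1} - A^{-1}U\,(C^{-1} + VA^{-1}U)^{-1}VA^{-1},$$

and the sticking point, as I see it, is that inversion, unlike contraction, is not a multilinear operation, so it has no obvious node-and-edge representation.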

Tai-Danae • 5 years ago

No convention comes to mind at the moment, though I'm sure one exists!

Yaroslav Bulatov • 2 years ago

BTW, there could be something special about a tensor network that's not completely captured by looking at the graphical model corresponding to its line graph. There's a fast algorithm for contracting planar tensor networks, but no equivalent for graphical models is known. See Jakes-Schauer, J., D. Anekstein, and P. Wocjan. 2019. "Carving-Width and Contraction Trees for Tensor Networks." arXiv [cs.DM]. https://doi.org/10.1016/j.j....

isomorphisms • 5 years ago

Using this, can you relate familiar deep-learning concepts to chemistry concepts such as Racah coefficients? What would the Clebsch-Gordan decomposition of a deep learning model mean?

park • 5 years ago

Hi there, I have a question about concrete k-tensor operations. As far as I know, a 2-tensor of size (m × n) is a map from an m-dimensional vector space to an n-dimensional one, i.e., we may consider a matrix as a connection between layers with m and n nodes, respectively (there is a natural direction: from m to n). In this sense, for a 3-tensor of size (i, j, k), I wonder how to determine the input and output data. Thank you for your kind post!

Tai-Danae • 5 years ago

Yes, a 2-tensor is a matrix: it can be thought of as an m×n rectangular array of numbers. A 3-tensor has a similar description: it can be thought of as a rectangular prism with an i×j base and height k.

Understanding a 2-tensor as an array is a bit different from viewing it as a "connection" between m and n nodes, which sounds more like a bipartite graph, as described in detail in a previous blog post: https://www.math3ma.com/blo.... Maybe there are useful ways of describing higher-order tensors as graphs, though I am not aware of them.
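To make that concrete, here is a small sketch (plain NumPy, with arbitrary shapes): a 3-tensor has no single canonical input and output; each way of grouping its three indices gives a different linear map.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((2, 3, 4))  # a 3-tensor of size (i, j, k)

# Plugging a vector into the i leg leaves a matrix of shape (j, k):
v = rng.random(2)
M = np.einsum('ijk,i->jk', T, v)  # shape (3, 4)

# Flattening (i, j) into one index instead gives a linear map from a
# 6-dimensional space to a 4-dimensional one:
A = T.reshape(2 * 3, 4)  # shape (6, 4); acts on row vectors via w @ A

print(M.shape, A.shape)  # (3, 4) (6, 4)
```

So rather than one distinguished direction, a 3-tensor gives several maps, one for each bipartition of its legs.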

Oreste • 4 years ago

You are wrong in your second image, "the matrix M_ij is the node with edges i and j": this very sentence contains the mistake that M_ij is not a matrix but one of the m·n entries of the matrix M.