Comment by seanhunter 3 days ago
Definitely not an expert, so I'm on a journey learning this stuff, but as I understand it at the moment: a multidimensional array can represent a tensor, but to be a tensor, a multidimensional array needs the specific additional property that it "transforms like a tensor". That is, when you apply some transformation to its components, its basis vectors transform in such a way as to preserve the "meaning" of the tensor.

An example will make this clear. Say I'm in Manhattan and I have a vector (rank-1 tensor) which points from my current position to the top of the Empire State Building. I can take the components of this vector in Cartesian (x, y, z) form and represent it as ai + bj + ck, where i, j, and k are the Cartesian basis vectors. But I can use another representation if I want to. Say I transform this vector so I'm using spherical coordinates: the basis vectors transform using the inverse of whatever transformation I applied to the (x, y, z) components, so the new basis vectors multiplied by the new components give me the exact same actual vector I had before (i.e. it still points from me to the Empire State Building).
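Here's a minimal numpy sketch of that idea (a made-up 2-D example, not real Manhattan coordinates): the basis vectors transform one way, the components transform with the inverse, and the reconstructed arrow comes out identical.

    import numpy as np

    # The "arrow to the building", expressed in the standard basis.
    v_components = np.array([3.0, 4.0])   # components a, b
    old_basis = np.eye(2)                 # columns are i, j

    # Some new (non-orthogonal) basis: columns of B are the new
    # basis vectors written in the old coordinates.
    B = np.array([[2.0, 1.0],
                  [0.0, 1.0]])

    # Basis vectors change via B; components must change with the
    # inverse of B so the actual arrow is unchanged.
    new_components = np.linalg.inv(B) @ v_components

    # Reconstruct the vector in each basis: same arrow either way.
    v_old = old_basis @ v_components
    v_new = B @ new_components
    assert np.allclose(v_old, v_new)  # still points at the building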
Replying to myself to explain:

- The components of the vector (in whatever coordinate system) are simply an array.
- The combination of components + basis vectors + operators that transform components and basis vectors in such a way as to preserve their relationship is a tensor.
In ML (and computer science more broadly), people often use the word tensor just to mean a multi-dimensional array. ML people do use tensor products etc., so they perhaps have more justification than some folks for using the word, but I'm not 100% convinced. Not an expert, as I say.
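For what it's worth, the tensor product of two rank-1 tensors is just the outer product, which is what the ML libraries expose; a minimal numpy sketch (PyTorch/TensorFlow have direct equivalents):

    import numpy as np

    # Outer (tensor) product of two rank-1 tensors gives a rank-2 tensor.
    u = np.array([1.0, 2.0])
    w = np.array([3.0, 4.0, 5.0])
    T = np.tensordot(u, w, axes=0)  # T[i, j] = u[i] * w[j]
    print(T.shape)                  # (2, 3)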