There’s no difference. A tensor is a very generic term: a way of arranging numbers that generally has some geometric interpretation.
A scalar (math speak for a number) is a 0-tensor.
A vector (math speak for a list of numbers) is a 1-tensor.
A matrix (math speak for a grid of numbers) is a 2-tensor.
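The scalar/vector/matrix hierarchy above can be sketched in code. This is just an illustration using NumPy (my choice, not something the answer specifies), where `ndim` reports the tensor's order:

```python
import numpy as np

scalar = np.array(5.0)            # 0-tensor: a single number
vector = np.array([1.0, 2.0])     # 1-tensor: a list of numbers
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])   # 2-tensor: a grid of numbers

print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
```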
So on and so forth. Often people will only use the word tensor to refer to 3+ tensors, since we’ve got other words for the smaller ones. This is slightly more common in physics, but I don’t have any data confirming it’s used differently between physics and ML.
A tensor can represent pretty much anything, though. For computational reasons it’s rare to deal with 3+ tensors in ML, and 3-tensors are more common in physics. You do see them in ML, though, for things like color images (3-tensor), video (4-tensor) and more complex imaging (e.g. MRIs).
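To make the image and video examples concrete, here’s a small sketch (again using NumPy, with made-up sizes): a color image stacks height, width, and RGB channels into three modes, and a video adds a frame axis as a fourth:

```python
import numpy as np

# Hypothetical sizes, purely for illustration.
image = np.zeros((64, 48, 3))      # height x width x RGB channels -> 3-tensor
video = np.zeros((30, 64, 48, 3))  # frames x height x width x channels -> 4-tensor

print(image.ndim, video.ndim)  # 3 4
```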
One important note and point of confusion is that the “dimensions” of the tensor are referred to as modes. A vector is a tensor with a single mode, not a single dimension. A vector can have infinitely many dimensions, but it only has a single mode.
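The mode/dimension distinction shows up directly in array libraries. In this NumPy sketch (my illustration, not part of the original answer), a 5-dimensional vector still reports a single mode:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # a 5-dimensional vector

print(v.ndim)   # 1  -- one mode
print(v.shape)  # (5,) -- five dimensions along that single mode
```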