Solving linear systems is better than inverting
The main reason is the one in xigoi's answer, noncommutativity, but let me add that a separate operator is also handy for matrix operations: computing the matrix `K = inverse(A)` and then multiplying `x = K * b` is inferior from the point of view of both stability and computational cost. So treating `A \ b` as a single operation with two operands is more effective than thinking of it as "first invert $A$, then multiply by $b$".
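To illustrate the point outside of Matlab notation (a sketch in Python with NumPy, which is not the language discussed in this answer): `numpy.linalg.solve` plays the role of the fused `A \ b`, while `numpy.linalg.inv` followed by a multiplication is the two-step approach being argued against.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Fused operation, analogous to A \ b: an LU factorization plus
# two triangular solves, without ever forming inverse(A).
x_solve = np.linalg.solve(A, b)

# Two-step approach: explicitly invert, then multiply.
# More flops, and typically a larger floating-point error.
x_inv = np.linalg.inv(A) @ b

# Compare the residuals ||A x - b|| of the two approaches.
print(np.linalg.norm(A @ x_solve - b))
print(np.linalg.norm(A @ x_inv - b))
```

On a well-conditioned matrix both residuals are small, but the solve-based one is generally the smaller of the two.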
Proxy object
An alternative design choice that solves the same problem, and has some advantages, is a method `K = factorize(A)` which computes an LU factorization (or another, more convenient factorization) of the matrix $A$ and returns an "inverse-like" object `K` for which a method `K * b` is defined. This choice is very convenient because it lets one separate out and reuse the most expensive part of solving a linear system, the $O(n^3)$ factorization. So you can write the clearer and more efficient
```
K = factorize(A); % 2/3*n^3 + O(n^2) for a general nxn matrix
x1 = K * b1;      % O(n^2) if b1 is a length-n vector
x2 = K * b2;      % O(n^2) if b2 is a length-n vector
```
rather than
```
x1 = A \ b1; % 2/3*n^3 + O(n^2)
x2 = A \ b2; % 2/3*n^3 + O(n^2)
% total cost 4/3*n^3: the factorization is computed twice.
% Also, with this syntax the factorization cannot be precomputed
% before knowing b1, b2.
```
or
```
K = inv(A); % 2*n^3 + O(n^2)
x1 = K * b1; % O(n^2)
x2 = K * b2; % O(n^2)
% x1, x2 computed in this way typically have a higher
% numerical error than A \ b, in floating point.
```
This "implicit inverse" trick is implemented in Matlab's `decomposition`, and (with more generality and more methods defined) in Julia's `factorize`.
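As a sketch of the proxy-object idea in Python with SciPy (rather than Matlab or Julia; the class name `Factorized` is my own, and SciPy exposes the two halves as `lu_factor` and `lu_solve`):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

class Factorized:
    """Inverse-like proxy: stores an LU factorization of A and
    solves A x = b on demand, without ever forming inv(A)."""
    def __init__(self, A):
        # O(n^3): done once, at construction time.
        self.lu_piv = lu_factor(A)

    def __matmul__(self, b):
        # O(n^2) per right-hand side: two triangular solves.
        return lu_solve(self.lu_piv, b)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
b1 = rng.standard_normal(5)
b2 = rng.standard_normal(5)

K = Factorized(A)   # plays the role of K = factorize(A)
x1 = K @ b1         # plays the role of x1 = K * b1
x2 = K @ b2         # reuses the same factorization, O(n^2)
```

The expensive factorization happens once in the constructor, and each subsequent `K @ b` only pays for the cheap triangular solves, exactly the cost split argued for above.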
In a new language, I can see arguments for going even further and calling this function `inverse(A)`, making it the preferred syntax for both matrix inversion and linear system solution.