Lately I've been playing around with static dimensional analysis in Rust. The overall idea is similar: Use PhantomData to add a type parameter and define an empty struct for each unit. So you might end up with, say, Scalar<Newtons> or Vec3<Meters>.
Dividing and multiplying units statically is where I've had trouble so far. I think I've found a way, but it would depend on negative trait bounds, as discussed here: https://github.com/rust-lang/rfcs/issues/1053
Ideally, I'd like to be able to do something like this:
let a: Scalar<Joules> = Scalar::new(2.0);
let b: Scalar<Seconds> = Scalar::new(3.0);
let c: Scalar<Watts> = a / b;
// Watts is a type synonym for Over<Joules, Seconds>.
// Other derived units would combine Times<U, V> and Over<U, V>. E.g.:
type Pascals = Over<Newtons, Times<Meters, Meters>>;
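A minimal sketch of the division half, assuming the PhantomData scheme described above (the names Scalar, Over, and the unit structs are illustrative, and no canonicalization is attempted):

```rust
use std::marker::PhantomData;
use std::ops::Div;

// Illustrative unit markers.
struct Joules;
struct Seconds;

// Over<U, V> represents the quotient unit U / V.
struct Over<U, V>(PhantomData<(U, V)>);

struct Scalar<U> {
    value: f64,
    _unit: PhantomData<U>,
}

impl<U> Scalar<U> {
    fn new(value: f64) -> Self {
        Scalar { value, _unit: PhantomData }
    }
}

// One generic impl covers every pair of units: dividing a Scalar<U>
// by a Scalar<V> yields a Scalar<Over<U, V>>.
impl<U, V> Div<Scalar<V>> for Scalar<U> {
    type Output = Scalar<Over<U, V>>;
    fn div(self, rhs: Scalar<V>) -> Self::Output {
        Scalar::new(self.value / rhs.value)
    }
}

type Watts = Over<Joules, Seconds>;

fn main() {
    let a: Scalar<Joules> = Scalar::new(2.0);
    let b: Scalar<Seconds> = Scalar::new(3.0);
    let c: Scalar<Watts> = a / b;
    println!("{} W", c.value);
}
```

The nice part is that the Div impl needs no per-unit boilerplate; the hard part, as discussed below, is that nothing here normalizes structurally different but physically equal unit types.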
Canonicalization is tough, and requires you to define some common ordering on your units.
Some systems use an array of unit powers, so that if the basis were defined as <Joules, Seconds, Newtons, Meters>, then acceleration would be <0, -2, 0, 1> and watts would be <1, -1, 0, 0>. Addition and subtraction require that the arrays be equal; multiplication and division add and subtract the exponents pairwise.
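The exponent arithmetic itself is simple; here it is at the value level for illustration (a real version would encode the exponents in types; the basis order and function names are made up):

```rust
// Basis order: [Joules, Seconds, Newtons, Meters]
fn mul_units(a: [i32; 4], b: [i32; 4]) -> [i32; 4] {
    // Multiplying quantities adds exponents pairwise.
    [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
}

fn div_units(a: [i32; 4], b: [i32; 4]) -> [i32; 4] {
    // Dividing quantities subtracts exponents pairwise.
    [a[0] - b[0], a[1] - b[1], a[2] - b[2], a[3] - b[3]]
}

fn main() {
    let watts = [1, -1, 0, 0]; // J / s
    let seconds = [0, 1, 0, 0];
    assert_eq!(mul_units(watts, seconds), [1, 0, 0, 0]); // W * s = J

    let meters = [0, 0, 0, 1];
    let seconds_sq = [0, 2, 0, 0];
    assert_eq!(div_units(meters, seconds_sq), [0, -2, 0, 1]); // m / s^2
}
```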
Sorry! I never really spent much time in the physical sciences, so I didn't know that. You would obviously not want to pick Joules as a fundamental unit.
Will the Rust compiler ever understand numbers in types? I.e. will the numbers ever be more than just part of the string that is the type name? If not, then I don't know how powers would be possible. Maybe the solution would be to define some types for commonly-used powers, e.g. PowN4, PowN3, PowN2, PowN1, Pow2, Pow3, Pow4. Users who needed higher powers could define those types themselves, I guess.
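One workaround that needs no compiler support is unary type-level integers, which crates like typenum do for real (with a more efficient binary encoding, and signed variants for negative powers). A toy version, with illustrative names:

```rust
use std::marker::PhantomData;

// Unary type-level naturals: Zero, Succ<Zero>, Succ<Succ<Zero>>, ...
struct Zero;
struct Succ<N>(PhantomData<N>);

// Recover the runtime value by recursing over the type structure.
trait ToInt {
    fn to_int() -> i32;
}
impl ToInt for Zero {
    fn to_int() -> i32 { 0 }
}
impl<N: ToInt> ToInt for Succ<N> {
    fn to_int() -> i32 { 1 + N::to_int() }
}

// Synonyms for the commonly-used powers mentioned above.
type Pow1 = Succ<Zero>;
type Pow2 = Succ<Pow1>;
type Pow3 = Succ<Pow2>;

fn main() {
    assert_eq!(Pow3::to_int(), 3);
}
```

Negative powers would need a separate marker (say, a Neg<N> wrapper), which is presumably why the PowN* aliases come in pairs.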
Great question. I believe each unit struct would need to implement Ord or something similar at the type level; that would define the canonical nesting order. Alphabetical order would make sense.
We would need to resolve the nesting during multiplication and division. For example:
let a: Scalar<Times<A, B>> = Scalar::new(10.0);
let b: Scalar<Times<A, Times<B, C>>> = Scalar::new(5.0);
let c: Scalar<Times<A, Times<A, Times<B, Times<B, C>>>>> = a * b;
Another challenge is that the type of c is ugly. But this could be mitigated by generous use of type synonyms.
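For contrast, here is what a naive Mul looks like without any canonicalization, reusing the illustrative Scalar and Times names: the output type simply pairs the two operand units, and a type synonym hides the nesting at use sites.

```rust
use std::marker::PhantomData;
use std::ops::Mul;

// Illustrative unit markers.
struct A;
struct B;

// Times<U, V> represents the product unit U * V.
struct Times<U, V>(PhantomData<(U, V)>);

struct Scalar<U> {
    value: f64,
    _unit: PhantomData<U>,
}

impl<U> Scalar<U> {
    fn new(value: f64) -> Self {
        Scalar { value, _unit: PhantomData }
    }
}

// Naive multiplication: just pair the units. Times<A, B> and
// Times<B, A> come out as distinct types, which is exactly the
// canonicalization problem discussed above.
impl<U, V> Mul<Scalar<V>> for Scalar<U> {
    type Output = Scalar<Times<U, V>>;
    fn mul(self, rhs: Scalar<V>) -> Self::Output {
        Scalar::new(self.value * rhs.value)
    }
}

// A synonym keeps signatures readable even though the type nests.
type AB = Times<A, B>;

fn main() {
    let a: Scalar<A> = Scalar::new(10.0);
    let b: Scalar<B> = Scalar::new(5.0);
    let c: Scalar<AB> = a * b;
    println!("{}", c.value);
}
```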
If I understand the author's remark about Rust's lack of support for value parameters correctly, the missing feature seems equivalent to Nim's static[int], which has allowed me to write type-checked matrix operations there.
Ok this is seriously amazing (coming from a C++ guy). However, one thing I don't like with putting matrix dimensions in templates is that then you can't construct them at runtime. I do understand the obvious - that you can't have both static type checks on all operations and runtime-determined matrix sizes. Though I would kill for a language which would specialise my code at runtime and throw an exception for compilation errors. So you could write, e.g.
template <int m, int n, int l>
Matrix<m, l>
mul(Matrix<m, n> lhs, Matrix<n, l> rhs)
{...impl...}
And then be able to call it like
int m, n, k, l = ...; // read from file or whatever
Matrix<m, n> m1 = ...;
Matrix<k, l> m2 = ...;
try {
    Matrix<int o, int p> m3 = m1 * m2;
    // ^ code compiled dynamically,
    // or loaded from cache, based on
    // runtime types of m1 and m2.
    // o and p set to the result
    // of type inference.
    // I could imagine even having
    // specialised versions with inline
    // assembly for specific dimensions.
} catch (DynamicCompilationException e) {
    print("dimensions not compatible");
}
Java could be it, if it had reified generics. You'd create an implementation of Num, or load one from cache, then instantiate the template and attempt to call the mul function.
Or you could abuse the invokedynamic feature: create specialised functions matrixMultiply$m$n$l and classes Matrix$m$n from some other templating language as needed, then dispatch via invokedynamic based on the runtime types. But this would be very cumbersome to use, I think.
Julia does this at runtime. You can create an arbitrarily sized and typed array and call a function. If no implementation exists specialized for that type, it will automatically compile one using LLVM.
(The function that creates the array will be slower since it's not type-stable, but it can generate a type-stable function that's fast.)
Statically checked, but still runtime-determined, matrix sizes can be done in Scala. Doing specialized implementations would require bytecode magic, though.
case class Dim(size: Int)

trait Matrix[X <: Dim with Singleton, Y <: Dim with Singleton] {
  def *[Z <: Dim with Singleton](other: Matrix[Y, Z]): Matrix[X, Z] = ???
}

object Matrix {
  def apply[X <: Dim with Singleton, Y <: Dim with Singleton](x: X, y: Y): Matrix[X, Y] = ???
}

val x = Dim(100)
val y = Dim(readFromFile(...))
val z = Dim(37)

val A = Matrix[x.type, y.type](x, y)
val B = Matrix[y.type, z.type](y, z)

A * A // compile error
A * B // ok