Calculation Errors and Best Practices
-
Hi all,
I have an expression that reads a value from an object and writes the same value back at every frame.
For example:
value = obj.GetRelPos()
# do other matrix transformations
obj.SetRelPos(value)
Is there a danger of the value drifting due to calculation errors after many executions?
In Cinema 4D it often happens that we select multiple objects that are supposed to have the same value and see <<Multiple Values>> in the Attribute Manager, meaning something caused some decimals to drift apart.
While the user is working, this expression could execute thousands or tens of thousands of times.
Because of this concern I tend to prefer giving objects absolute, static values, but is there really something to be concerned about?
-
In general, yes, this is a real consideration in numeric calculations when you work with limited-size floating point numbers. Cinema 4D even takes precautions against this in the case of time values, which are internally represented as rationals with a numerator and a denominator. (Maybe in other places too; I don't know the source code.)
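Purely as an illustration of the idea (this is not the actual Cinema 4D implementation), a rational time value can be modeled with Python's fractions module, which stays exact where repeated float addition would not:

# Conceptual sketch only: modeling time as a rational number, similar in
# spirit to storing a numerator/denominator pair instead of a float.
from fractions import Fraction

fps = 30
step = Fraction(1, fps)           # one frame as an exact rational
t = sum(step for _ in range(90))  # 90 frames

print(t)         # 3 -- exactly three seconds, no rounding error
print(float(t))  # 3.0

# The naive float version accumulates a tiny error:
tf = 0.0
for _ in range(90):
    tf += 1.0 / fps
print(tf)        # very close to, but likely not exactly, 3.0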
Whether the calculations are affected "visibly" (either in the numeric fields, which, however, only show three digits after the decimal point, or in the viewport) is a question only you can answer.
To mitigate, you can, for example, avoid certain calculations (mixing very large and very small numbers in one calculation is generally bad), compare against an epsilon tolerance around the target value, or use modulos to snap numbers onto a grid, whatever is most fitting for the specific situation. (Cinema 4D has an epsilon comparison function somewhere.) You can find the necessary techniques on the web; they are of course not unique to C4D.
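As a little illustration of both ideas (plain Python here; math.isclose is just a stand-in for whatever tolerance helper you prefer, and the grid size of 0.001 is an arbitrary choice):

import math

a = 0.1 + 0.2
b = 0.3

# Naive equality fails because of representation error.
print(a == b)                            # False

# Epsilon / tolerance comparison instead of exact equality.
print(math.isclose(a, b, rel_tol=1e-9))  # True

# Snapping a drifting value back onto a fixed grid, e.g. 0.001 units.
def snap_to_grid(value, grid=0.001):
    """Round value to the nearest multiple of grid."""
    return round(value / grid) * grid

print(snap_to_grid(0.30000000000000004))  # 0.3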
However, in practice I only care about a certain subset of situations that I deem likely to cause issues. A complete numeric analysis of your algorithms is generally too much effort; most cases will just lead to an invisible imprecision in an object's or a point's position and can be disregarded. That's not an excuse to drop these considerations altogether; use reasonable caution.
-
Hello @orestiskon,
this is a personal answer of mine, and I think @Cairyn has already given you a nice answer, but I would like to clarify some things. And to be clear: your question is formally out of the scope of support, so for further answers you will have to rely on the community.
What is floating point precision?
[Image: "e to the pi Minus pi", xkcd, CC-BY-NC 2.5]
I cannot unpack this in all detail and am going for an intuitive understanding. For more details I would recommend the excellent Wikipedia article on Floating-point arithmetic.
So, you have probably seen popular examples of floating point precision errors like print(1.1 + 2.2), which will print 3.3000000000000003, accompanied by devastating statements like "the vast majority of real numbers cannot be represented in IEEE 754". While this is all true, it can be a bit misleading in my opinion. So, let us try it out ourselves in Python in order to figure out what that mysterious error is. We can run this code here:
# Used to force Python to print numbers with up to 51 decimal places, as
# Python will start rounding otherwise.
PrintNumber = lambda n: print("{:.51f}".format(n))

n = 0.000000000000000444089209850062616169452667236328125
PrintNumber(4 + n)
Which will print the following:
4.000000000000000000000000000000000000000000000000000
Yikes, that is not correct. Python seems to have completely ignored that we added n to four and just returned the number four. So, did we find that pesky floating point error? n seems to be some kind of limit which cannot be represented, right? Before we book this as solved, let us try two more calculations:
>>> PrintNumber(2 + n)
2.000000000000000444089209850062616169452667236328125
>>> PrintNumber(2 + n + n)
2.000000000000000888178419700125232338905334472656250
That is confusing: for the number two the calculation is correct. And when we then add n to that result again, the new result is still correct. So, there seems to be more to floating point precision than just a fixed value. To understand what is happening, we need to understand at least roughly how floating point numbers are represented.
Unlike whole numbers, the real numbers are an uncountable set. While there are infinitely many whole numbers, we can still start counting towards infinity, i.e., count 0, 1, 2, 3, .... For the real numbers, which we are trying to represent with floating point numbers, this cannot be done. When we try to start counting from 0, we cannot even name the next element, since there is always a smaller one; e.g., 0.1 is not the next element, because 0.01 is smaller, which is not the next element, because 0.001 is even smaller, which is not the next element, and so on. So, we need to pick a certain precision with which to represent real numbers on the computer, since we cannot handle infinite precision there.
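You can actually look at the precision that gets picked: converting a float to Decimal reveals the exact value that is really stored when you write 0.1 (a small plain-Python aside):

from decimal import Decimal

# 0.1 has no finite binary representation, so the closest representable
# double is stored instead. Decimal() reveals that stored value exactly.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625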
Floating point numbers can be represented in many ways, but the dominant form is the standard called IEEE 754. These representations come in different bit lengths: 16 bit (half), 32 bit (single), 64 bit (double), and 128 bit (quadruple) precision. Cinema 4D, Python, and most software predominantly use 64 bit, i.e., double precision, for floating point representation. These representations are composed of an exponent and a mantissa; for 64 bit their sizes are:
exponent: 11 bit as an unsigned integer
mantissa: 53 bit as an unsigned integer
Now we skip some nasty technical details, but we can realize a fundamental thing:
There are always two powers of two bracketing a given real number: the closest power of two that is smaller than the given number and the closest power of two that is bigger.
So, for the number 3, the closest power of two which is smaller would be 2¹, i.e., 2, and the closest power of two which is bigger would be 2², i.e., 4. The information between which powers of two a floating point number lies is stored in the exponent and forms an interval. For the example of 3, the smaller power of two was 2¹ and the greater power 2², so the interval is [2, 4]. The mantissa is an unsigned integer with a very high precision (53 bits, i.e., 2⁵² values) which divides that interval into representable values. From this it follows that the precision for each interval is always the same, 53 bits, but the length of the interval changes exponentially with the distance of the lower boundary of the interval from zero:
53 bit mantissa max value = 4503599627370496

Precision for the interval from 2¹ to 2²:
    interval length = 2² - 2¹ = 2
    maximum error   = 2 / 4503599627370496 = 4.4408920985006261616945266723633e-16

Precision for the interval from 2² to 2³:
    interval length = 2³ - 2² = 4
    maximum error   = 4 / 4503599627370496 = 8.8817841970012523233890533447266e-16

Precision for the interval from 2³ to 2⁴:
    interval length = 2⁴ - 2³ = 8
    maximum error   = 8 / 4503599627370496 = 1.7763568394002504646778106689453e-15

...
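On Python 3.9 or newer you can verify these numbers yourself; math.ulp(x) returns exactly this spacing between x and the next representable double:

import math

# math.ulp(x) is the gap between x and the next larger representable double,
# i.e. the "grid width" of the interval x lies in (Python 3.9+).
print(math.ulp(2.0))       # 4.440892098500626e-16   -> interval [2, 4]
print(math.ulp(4.0))       # 8.881784197001252e-16   -> interval [4, 8]
print(math.ulp(8.0))       # 1.7763568394002505e-15  -> interval [8, 16]
print(math.ulp(100000.0))  # 1.4551915228366852e-11  -> interval [2^16, 2^17]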
Wikipedia has a nice visualization of that exponential distribution of precision for this form of floating point representation:
[Image: Visualization of floating point precision loss, CC-SA 4.0 Joeleoj123]
From this we can come to a few conclusions:
- We can explain why 4 + n in our example from above did not work while 2 + n did: n is exactly the precision of the [2, 4] interval, but n is below the precision of the next interval, [4, 8], which resulted in the result being rounded down to the closest value, 4.0. IEEE 754 rounds to the closest representable value.
- IEEE 754 can only represent numbers in the interval [2, 4] which are the sum of 2 and multiples of that precision n.
- For IEEE 754 doubles, there are 4,503,599,627,370,496 values in the interval [2⁰, 2¹], a range of 1 with a maximum error of 2.220446049250313e-16.
- For IEEE 754 doubles, there are also 4,503,599,627,370,496 values in the interval [2¹⁰²², 2¹⁰²³], a range of 4.49423284e307 with the mind-boggling maximum error of 9.97920155e291. For comparison: there are approximately 1e82 atoms in the observable universe.
- Especially that last number should make clear that floating point precision error is not inherently a small value and that many bins have an error that is larger than one, which affects our ability to represent integers with floats (see the short demonstration after this list):
- Integers from −2⁵³ to 2⁵³ can be represented exactly.
- Integers between 2⁵³ and 2⁵⁴ round to a multiple of 2.
- Integers between 2⁵⁴ and 2⁵⁵ round to a multiple of 4.
- Integers between 2⁵⁵ and 2⁵⁶ round to a multiple of 8.
- ...
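Here is the short demonstration mentioned above; it shows the integer rounding directly in plain Python:

# Above 2**53 not every integer has its own double anymore; neighboring
# integers collapse onto the same representable value.
print(float(2**53) == float(2**53 + 1))  # True  -- 2**53 + 1 is lost
print(float(2**53) == float(2**53 + 2))  # False -- multiples of 2 survive
print(int(float(2**54 + 1)))             # 18014398509481984, i.e. 2**54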
Yikes, how do we deal with that?
There is no definitive answer to this, as already pointed out by @Cairyn. If there were a "Computers hate this: Five easy tricks to avoid floating point errors!", it would be integrated into hardware or compilers. @Cairyn named a few tricks which more or less work in some situations, and there are also more advanced techniques. Some languages also have built-in tricks up their sleeves, for example Python with float.as_integer_ratio, math.fsum and the type decimal. But the fancy techniques and language tricks do not help much in your case, since the language tricks won't work with c4d.Vector and c4d.Matrix, and the fancy techniques usually aim more towards comparing and solving stuff.
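For completeness, this is roughly what those language-level tools look like in plain Python; they operate on plain floats, which is exactly why they do not transfer to c4d.Vector or c4d.Matrix:

import math
from decimal import Decimal

# The exact rational number a float actually stores.
print((1.1).as_integer_ratio())  # e.g. (2476979795053773, 2251799813685248)

# Summation with error compensation versus naive summation.
values = [0.1] * 10
print(sum(values))               # 0.9999999999999999
print(math.fsum(values))         # 1.0

# Decimal does exact base-10 arithmetic on string inputs.
print(Decimal("1.1") + Decimal("2.2"))  # 3.3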
It would have been better if you had shown something more concrete, but I assume you basically have a script which does something like the following:
import c4d

def main():
    obj = op.GetObject()
    mg = obj.GetMg()
    mg.off += c4d.Vector(1.1, 2.2, 3.3)
    obj.SetMg(mg)
So, each iteration of this script will pile up a bit of error, because none of the components of the vector we add are representable as 64 bit floats in the first place, and while we run the script, we will traverse through different intervals of precision. When we do this long enough, we theoretically land at some time t at an offset where the error is larger than the value we add. But when we assume a starting point of the object somewhere in a cube with a side length of 100,000 units at the origin, then the maximum combined error for moving 100 units per frame for 1,000,000 frames will be 0.00145 units. Which follows from:
100,000 units lies within [2¹⁶, 2¹⁷] -> max error: 1.4551915228366851806640625e-11
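As a rough sanity check in plain Python (the step value and the frame count below are arbitrary placeholders, not your actual setup), you can compare adding an offset every frame against computing the result directly:

# Compare accumulating a step per frame against computing the value directly.
step = 1.1
frames = 1_000_000

accumulated = 0.0
for _ in range(frames):  # emulates adding an offset on every frame
    accumulated += step

direct = step * frames   # emulates computing the state directly

print(accumulated)
print(direct)
# The two results usually differ slightly: the accumulated version carries
# the rounding error of every single addition, the direct one only a single
# rounding step. Relative to the magnitude of roughly 1,100,000 units, both
# errors are far below anything visible in a viewport.
print(abs(accumulated - direct))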
So, 64 bit floats are relatively robust when it comes to entertainment-oriented CGI and its precision needs for visually representing something, as we usually have both sane frame counts and sane world units (which is not always the case in games, and why they can struggle with this). You will still run into floating point problems when you, for example, want to test if two vectors are parallel or when you have to solve complex equations. But if you want to improve your script anyway, then the best approach is to replace iteration with interpolation. Your script is in general problematic because it couples the world state to the frame rate of the document, i.e., the position of your object at T=1.0 will be different in a document with 25 fps compared to a document with 50 fps. An interpolation-based approach will not have that problem. How you interpolate can vary; here are two very simple examples:
import c4d
import math

# The point the object does move away from.
ID_POINT_START = (c4d.ID_USERDATA, 1)
# The point the object does move towards.
ID_POINT_END = (c4d.ID_USERDATA, 2)
# The time span it takes to move from start to end.
ID_DURATION = (c4d.ID_USERDATA, 3)

def main():
    """Interpolate linearly between two points.

    This will "snap back" after ID_DURATION has passed.
    """
    # Get the object.
    obj = op.GetObject()
    # Current document time in seconds.
    t = doc.GetTime().Get()
    # The current offset.
    dt = math.fmod(t, op[ID_DURATION]) * (1. / op[ID_DURATION])
    # Interpolate the current position and write the value.
    p = c4d.utils.MixVec(op[ID_POINT_START], op[ID_POINT_END], dt)
    obj[c4d.ID_BASEOBJECT_ABS_POSITION] = p
import c4d

# The point the object does move away from.
ID_POINT_START = (c4d.ID_USERDATA, 1)
# The travel per second of the object as a vector.
ID_TRAVEL = (c4d.ID_USERDATA, 2)

def main():
    """Interpolate linearly with a starting point and a velocity vector.

    This will travel towards infinity.
    """
    # Get the object.
    obj = op.GetObject()
    # Current document time in seconds.
    t = doc.GetTime().Get()
    # Interpolate the current position and write the value.
    p = op[ID_POINT_START] + op[ID_TRAVEL] * t
    obj[c4d.ID_BASEOBJECT_ABS_POSITION] = p
These two will also be subject to floating point errors, but you will not carry around the error of previous calculations, since you do not depend on the state of the last frame. These are just two very simple examples; many things can be restated as interpolations.
Cheers,
Ferdinand
-
Hey @Cairyn and @ferdinand, thank you both for your input. Ferdinand, thanks for going to all the trouble of writing your thoughts out so extensively.
It's nice to have some approximate numbers for what the error can be. I'm not using time-based calculations right now; it's a good idea to use the time delta. I did some experimenting yesterday but it's not easy to implement in my setup. Definitely a good idea to keep in mind for the future.
My exact case is that I do matrix transformations that normalize the matrix of the object, but I want to preserve its scale.
I have two ways of doing it:
- Get the current scale from the object, do the matrix transforms, re-apply the pre-transform scale values. But I thought running this on every frame could cause value drifting.
- Save the original scale as a constant, do the matrix transforms, re-apply the scale from the constant. The constant wouldn't evolve so it wouldn't drift, and it is the approach I'm taking now. The disadvantage is that the scale is now locked and can't be changed directly.
My take is that the drift might be tiny but it can be a potential issue, so it would be wiser to stick with the second approach.
-
Hello @orestiskon,
well, are you experiencing "drift"? I am not 100% sure what you mean by drift, but I assume you just mean accumulated error.
As I laid out above, floating point error at 64 bits is usually not something you have to worry about in the scenario of what you are doing. There will be an error, but if you are only interested in visually representing something, it will be negligible when you meet the conditions I laid out above. I also do not quite get why you have to store the scale of the matrix. When you transform an input matrix with another matrix that has a scale of (1, 1, 1), then the result will have the same scale as the input matrix, since x * 1 is still x.
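Two properties you can rely on here, shown with plain Python floats (the same holds for the components of c4d.Vector): copying a value and writing it back unchanged is lossless, and multiplying by exactly 1.0 introduces no rounding:

# Copying a float never changes it, and multiplying by exactly 1.0 is exact;
# only "real" arithmetic (adding offsets, scaling by non-trivial factors,
# matrix products) can introduce rounding.
x = 0.30000000000000004
copy = x
print(copy == x)     # True -- reading a value and writing it back is lossless
print(x * 1.0 == x)  # True -- scaling a component by exactly 1.0 is exact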
In general, you should try to avoid doing many arithmetic operations in a row (see the example at the end), which is also the reason I proposed interpolations. But again, for just moving a cube around, you should be fine.
Cheers,
Ferdinand"""Showcases the impact of doing the same calculation in 'different' ways. """ import math # List of ten 1.1's, 1.1 cannot be represented by 64 bit floats. items = [1.1] * 10 # This will amass the precision error of ten additions of a not representable # value. print (sum(items)) # This won't. print ((1.1 * 10)) # And this won't either. print (9.9 + 1.1) # One of Python's floating point tricks, fsum is addition with an error term. print (math.fsum(items))
10.999999999999998
11.0
11.0
11.0