Simply put:
I have an image of a human body and two reference points, which are the left and right waist locations. Let's say, for example, that (100,100) and (200,100) are the left and right waist points respectively.
In addition to those two points, I also know the person's real-life waist measurement in inches.
I'm trying to take those three data points and work out how many pixels in the image equal one inch in real life. This shouldn't be that hard, but I'm having some kind of brain block on it.
Looking for the simple formula. The one I started with is:
(RightPoint.X - LeftPoint.X) / 34"
This does not work: the smaller the waist gets, the larger the pixels-per-inch value. With the points above, it comes out to about 2.9 pixels == 1".
If I change the 34" to 10", it shoots up to 10 pixels == 1". Or maybe that's correct? Ugh... brain, where are you tonight?!
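In code, here's roughly what I'm doing (a Python sketch using the example numbers above; the variable names are just placeholders):

```python
# Rough sketch of my current attempt, using the example values from above.
left_waist = (100, 100)    # left waist point in the image (pixels)
right_waist = (200, 100)   # right waist point in the image (pixels)
waist_inches = 34.0        # real-life waist measurement in inches

# pixel distance across the waist in the image
pixel_width = right_waist[0] - left_waist[0]   # 100 pixels

# my attempted conversion: pixels per real-life inch
pixels_per_inch = pixel_width / waist_inches   # ~2.94

print(f"{pixels_per_inch:.2f} pixels == 1 inch")
```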
The Question:
I'm looking for the correct formula that, based on those three reference data points, will let me determine how many pixels in the image == 1". So if I know that in real life the person's waist is 34 inches, I want to determine that in the image, say, 2.5 pixels == 1 inch.