This recurring question is often a source of debate and questions regarding UFOs.
I propose here to better understand the circumstances in which these estimates are possible and, when they are, to provide tools that can give good results.
I'll try to be as educational as possible. In addition, I will illustrate my point with a well-known photographic case.
I'm also fully open to any idea, opinion or criticism regarding my methodology, in order to improve it.
First, let's talk about theory! For those who do not like maths/geometry, you can scroll straight down to the "Practical Analysis Methodology"
chapter, where it's somewhat more fun!
MEASURABLE PARAMETERS ON A PHOTO
Parameters that may be measured from a photo are expressed in two domains: geometric and radiometric.
Geometric parameters – angles, sizes or distances – make use of pixel positions, in rows and columns, while radiometric parameters are calculated
from pixel luminance levels.
Photography is based on a principle of conical projection and time integration, which permits representation by a 2-dimensional image of information
that occupies a 4-dimensional space (elevation, azimuth, depth and time). It is therefore impossible to reconstruct the whole geometry of a scene from
a single photo, except if additional information is available (such as other photos shot from another direction, or data from other sources).
In particular, while pixel positions in a photo allow (provided other indispensable technical data are available) the calculation of an object's
angular dimensions, an assessment of its real dimensions is only possible if the distance between that object and the camera at shooting time is
known or can be estimated.
We shall deal successively with the angular distance of a given point from the line of sight (often referred to as the principal axis), then with the
measurement of an object's angular dimension. After a reminder of how to derive a linear dimension from that angle and from the distance to the
lens, we shall review different ways to assess that distance or, at least, a range of possible values.
In order to calculate the angular distance from the line of sight, of a point of the scene represented by point A on the sensor, or the angular
distance δ between 2 points of the scene represented by points A and B on the sensor, one needs several geometric data: focal length f used for the
shot, and distances d, a and b, measured on the photosensitive medium (silver film or CCD array), defined as follows:
f : focal length
a : measure of distance PA on the sensitive medium
b : measure of distance PB on the sensitive medium
d : measure of distance AB on the sensitive medium
O : optical center of the lens
P : center of the photo on the sensitive medium.
Angular localization of an object in the scene
Inside the solid angle defining the camera angular field at shooting time (i.e. the frame of the scene), it is straightforward to determine the
angular distance α of a given point A of the image from the line of sight.
α = arctan (a/f)
In certain cases it will be possible, using additional data, to derive an altitude estimate, provided the line of sight is known.
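As a minimal sketch of the relation α = arctan(a/f) (function name is mine; both lengths are assumed to be in millimetres):

```python
import math

def angular_offset(a_mm: float, f_mm: float) -> float:
    """Angular distance (degrees) of an image point A from the line of
    sight, given its distance a from the photo centre P on the sensitive
    medium and the focal length f: alpha = arctan(a / f)."""
    return math.degrees(math.atan(a_mm / f_mm))

# Example: a point 3 mm from the image centre, shot at f = 50 mm,
# lies about 3.43 degrees off the line of sight.
print(round(angular_offset(3.0, 50.0), 2))
```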
Angular dimensions of an object
Supposing the distance between the points of interest in the scene and the camera is significantly larger than the focal length (which is always true
in practice, with the exception of macrophotography), one may assume the following approximation:
OP ≈ f
Applying the law of cosines (the generalized Pythagorean theorem) to triangle OAB, one may calculate the angular size δ of the object between points A
and B, with the following formula:
δ = arccos ((OA² + OB² − d²) / (2 × OA × OB))
where OA = √(f² + a²) and OB = √(f² + b²).
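The law of cosines on triangle OAB, with OA = √(f² + a²) and OB = √(f² + b²), can be sketched as follows (function name is mine; all lengths in millimetres):

```python
import math

def angular_size(a_mm: float, b_mm: float, d_mm: float, f_mm: float) -> float:
    """Angular size delta (degrees) between two scene points imaged at A
    and B, from the law of cosines on triangle OAB (O = optical centre):
    a = PA, b = PB, d = AB, all measured on the sensitive medium."""
    oa = math.hypot(f_mm, a_mm)   # OA = sqrt(f^2 + a^2)
    ob = math.hypot(f_mm, b_mm)   # OB = sqrt(f^2 + b^2)
    cos_delta = (oa**2 + ob**2 - d_mm**2) / (2 * oa * ob)
    return math.degrees(math.acos(cos_delta))
```

For points on opposite sides of the image centre and collinear with it, this agrees with summing the two arctan offsets, e.g. angular_size(3, 3, 6, 50) ≈ 2 × arctan(3/50).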
Dimensions of an object
To be able to measure – or estimate – the dimension D of an object in a given direction, perpendicular to the line of sight, one must previously
know the value – or the estimated value – of 2 pieces of data: the angular dimension δ of the object in that direction and the distance x between
that object and the lens.
The applicable relation is then:
D = 2 × x × tan (δ/2)
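In code, the relation D = 2x tan(δ/2) is a one-liner (function name is mine; the distance unit of x carries through to D):

```python
import math

def linear_size(delta_deg: float, x: float) -> float:
    """Size D of an object in a direction perpendicular to the line of
    sight, from its angular size delta and its distance x from the lens:
    D = 2 * x * tan(delta / 2)."""
    return 2 * x * math.tan(math.radians(delta_deg) / 2)

# Example: an object subtending 0.5 degrees at an assumed distance of
# 1000 m is about 8.7 m across.
print(round(linear_size(0.5, 1000.0), 2))
```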
Estimate of the object’s distance
The distance between an object under study and the lens can of course not be directly derived from the photo, but different analytic approaches can
allow an estimate to be made, or at least limits for possible values to be set.
Estimate from other identified and localized objects
If the configuration of elements in the scene allows the distance between the lens and the object under study to be bounded by known or measurable
distances to reference objects in its vicinity, one may easily derive a range of possible dimensions for that object from its angular dimensions.
Depending on cases, reference objects may be buildings, clouds, vegetation or vehicles, etc.
Exploitation of cast shadow
If the object under study casts a shadow visible on the photo, one may try to extract geometric information, in particular if the position of the
light source (the sun in most cases) can be determined in the scene, or if shadows of other objects in that scene can also be brought out.
Analysis of the depth of field
The depth of field defines a range of distances from the lens within which objects appear sharp on a photo (motion blur aside). If the object does
appear sharp, it therefore indicates possible limits on the distance between that object and the lens.
This parameter sometimes reveals an incompatibility between the sharpness – or blur – of an object's contours on one hand, and its supposed
distance from the lens on the other hand (as with « orbs »).
When focus has been set to infinity, which is the case for most photos taken with a digital camera, the depth of field extends from the hyperfocal
distance to infinity. In that case, only objects that are « too close » to the lens can fall outside the depth of field, and thus appear blurred for
that reason.
Hyperfocal distance H is calculated as follows:
H = f² / (n × e)
f : focal length
n : f-number
e : circle of confusion or acceptable sharpness.
Note: I usually use an online DOF calculator that has a large range of cameras in its database and gives really accurate results for these data.
Parameter "e" is rather subjective by nature. In practice, one assigns a value of around 0.03 mm in film photography and, for a digital camera, a
value equal to the size of 2 pixels (generally on the order of 0.01 to 0.02 mm).
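The hyperfocal formula H = f²/(n × e) is easy to check by hand (function name is mine; all lengths in millimetres):

```python
def hyperfocal_mm(f_mm: float, n: float, e_mm: float) -> float:
    """Hyperfocal distance H = f^2 / (n * e), where f is the focal
    length, n the f-number and e the circle of confusion, all in mm."""
    return f_mm**2 / (n * e_mm)

# Example: f = 50 mm at f/8 with e = 0.02 mm (digital, ~2 pixels)
# gives H = 2500 / 0.16 = 15625 mm, i.e. about 15.6 m.
print(hyperfocal_mm(50.0, 8.0, 0.02))
```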
If focusing has been done on an object located at a distance D, depth of field limits may be calculated as follows:
DoF = Dp – Da
Da = (H × D) / (H + D)
Dp = (H × D) / (H – D)
DoF : depth of field
Da : front distance (lower limit of the depth of field)
Dp : back distance (upper limit of the depth of field)
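The three relations above can be sketched together (function name is mine; H and D must share the same unit, and the far limit becomes infinite once the focus distance reaches the hyperfocal distance):

```python
def depth_of_field(h: float, d: float) -> tuple:
    """Near limit Da, far limit Dp and depth of field for focus set at
    distance d, given hyperfocal distance h (same unit for both):
    Da = H*D/(H+D), Dp = H*D/(H-D), DoF = Dp - Da."""
    da = (h * d) / (h + d)
    # Beyond the hyperfocal distance, everything out to infinity is sharp.
    dp = float("inf") if d >= h else (h * d) / (h - d)
    return da, dp, dp - da

# Example (hypothetical values): H = 15.6 m, focus set at 5 m
# -> the scene is sharp roughly from 3.8 m to 7.4 m.
da, dp, dof = depth_of_field(15.6, 5.0)
print(round(da, 2), round(dp, 2), round(dof, 2))
```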