Adobe has admitted an image used in its 'image deblur' presentation was artificially blurred for the purposes of the demonstration. The company
said the blur on the image was 'more complicated than anything we can simulate using Photoshop's blur capabilities.' It described the move as 'common
practice in research' and defended the use of the image because 'we wanted it to be entertaining and relevant to the audience.' The other images shown
were the result of camera shake, it said.
Source
If you would like to see the video demonstration, go to this thread
I suppose this news article brings up something that seems to come up a lot on ATS in image manipulation threads ...
Adobe states that the application didn't 'know' the blur was synthetic and therefore stands by its example ... however, a synthetic blur is in many
cases a simple linear function with a known, uniform kernel, which makes it quite easy to reverse compared to, say, a non-linear random one.
The point spread function required to reverse the blur really isn't as difficult to work out as in a real-world chaotic situation with poor
focus, motion blur, bad lighting, noise, lens flares and all the rest of it. This isn't the first time a deblurring app has reversed a synthetic blur
and claimed magical results, and it won't be the last. (PS: Armap was correct about some of this in a previous thread.)
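Just to illustrate the point, here's a rough Python/NumPy sketch of generic frequency-domain deconvolution. The function names are mine and this is nothing like Adobe's actual algorithm; it just shows that when the blur kernel is known and synthetic, undoing it is close to trivial.

```python
import numpy as np

def gaussian_psf(size=9, sigma=1.0):
    # A small synthetic blur kernel, normalised to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def pad_psf(psf, shape):
    # Pad the kernel out to image size for FFT-based (circular) convolution.
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    return padded

def blur(image, psf):
    # Convolution is just multiplication in frequency space.
    H = np.fft.fft2(pad_psf(psf, image.shape))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def deblur(blurred, psf, eps=1e-8):
    # Naive inverse filter: divide by the kernel's spectrum. Fine for a
    # clean synthetic blur; on a real photo, noise and near-zero spectrum
    # values would explode, and you'd need regularisation (e.g. a Wiener
    # filter) plus blind estimation of the PSF itself.
    H = np.fft.fft2(pad_psf(psf, blurred.shape))
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) / (H + eps)))

rng = np.random.default_rng(0)
image = rng.random((256, 256))          # stand-in for a test image
restored = deblur(blur(image, gaussian_psf()), gaussian_psf())
print("max abs error:", float(np.abs(restored - image).max()))  # tiny
```

The 'magic' here is that the PSF was known exactly in advance. Hand the same code a real handheld shot and you'd first have to estimate the PSF blind, which is the genuinely hard part.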
It's just not popular to state the limitations of such functions (or the fact that they're based on maths that's decades old), and it's something
ATS users need to be aware of when they look at not only their own imagery but other people's. When looking at what a piece of image software
allegedly does, you have to look at it from the perspective of a washing-up liquid commercial. Ever had to clean soap scum out of a shower? Did it
come off in one wipe like in the advert?
Here is another example raised in a recent thread:
Lucis vs. Unsharp Mask Fight!
In this document Lucis compares their product with an unsharp mask. This is like comparing using bleach with rolling your face across the floor and
hoping the stain goes away. The unsharp mask was in vogue around the 1930s and 1940s; it's not exactly a new thing.
And another example:
Lucis Competitive Advantages (PDF)
These examples, at best, are poor, and at worst are blatantly misleading; it's a boxing match where the first fighter gets to choose any opponent.
Lucis-style contrast equalization techniques and unsharp masks aren't really mathematically similar. They do similar things in very different
ways with very different trade-offs, but the comparison would make you believe there's something amazing in this new patented algorithm.
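For reference, the unsharp mask itself is embarrassingly simple. Here's a rough sketch in Python with SciPy; parameter names like radius and amount are just the common convention, not any vendor's API:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=1.0):
    # Subtract a blurred copy to isolate fine detail, then add the detail
    # back, scaled. Boosts edges, but also boosts any noise riding on them.
    blurred = gaussian_filter(image, sigma=radius)
    return image + amount * (image - blurred)

rng = np.random.default_rng(1)
image = rng.random((128, 128))          # stand-in for a test image
sharpened = unsharp_mask(image, radius=2.0, amount=0.8)
```

Two lines of actual maths, give or take, which is roughly what the 1930s darkroom version was doing with a defocused contact print.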
Just to give an example ... I might use an unsharp mask to tighten up an image so I can perhaps get an easier tracking point for science or
compositing purposes.
I wouldn't want to use a Laplacian because, for a particular scene, it might bring up too much noise, and a histogram equalization technique may also
be prone to noise and can turn my blacks to grey. Not only will this throw off my tracking, since my track is likely based on luma values, but it will
also take more processing power than the unsharp mask and Laplacian combined in many scenarios, requiring many more calculations per frame. (Though
this limitation is rapidly being eased by faster computers.)
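To see the 'blacks to grey' problem concretely, here's a rough sketch of plain histogram equalization on a mostly-dark 8-bit frame. Again this is Python/NumPy, a textbook version rather than any particular product's implementation:

```python
import numpy as np

def equalize(image8):
    # Classic histogram equalization: map each value through the
    # normalised cumulative histogram so the output fills 0..255.
    hist = np.bincount(image8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[image8]

# A mostly-dark toy frame: shadow detail (and noise) clustered near black.
rng = np.random.default_rng(2)
frame = rng.normal(20, 8, (128, 128)).clip(0, 255).astype(np.uint8)

eq = equalize(frame)
print("10th/50th percentile before:", np.percentile(frame, [10, 50]))
print("10th/50th percentile after: ", np.percentile(eq, [10, 50]))
```

The shadow values leap from around code value 20 toward the middle of the range, which is exactly the kind of level shift that would wreck a luma-based track.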
I suppose the point I'm trying to make is ... it's not the tools that make the image analyst/artist, and no special plugin is going to act as a magic
bullet regardless of the advertising. If I were up against John Knoll in an art competition, and Knoll was given two crayons, an elastic band, and an
out-of-print copy of Photoshop 6.0 emulated on a rusty old Amiga, while I was allowed to use CS5 and Nuke ... my money might still end up being on
Knoll. Obviously that's a ridiculous (and maybe optimistic) scenario, but a good, knowledgeable person using software that doesn't cost a million
bucks will do well regardless of the limitations of their tools.
Image theory doesn't suddenly change because a person spent a large amount of money or bought a plugin. For example, many of the more expensive
systems aren't expensive because they provide new functionality or have magical tools. There are expensive systems such as Smoke and Flame in
the film industry, and many other expensive systems in the medical and science fields. They often actually lack the functionality of lesser,
allegedly inferior systems, and have fewer tools developed for them these days because there are many times fewer high-end customers. Generally what
you're paying for is things like 24/7 on-call support and reliability, and a single tested one-stop-shop system and workflow which can tackle
the latest high resolution imagery in real time (often the hardware isn't even physically that cutting edge); in the case of medical imaging you may
be receiving some insurance regarding results and assurance that the algorithms and tools provided are the accepted scientific standard (therefore
often not cutting edge). In a case where a break in your workflow might cost you a million-dollar-a-year VFX client or, worse, result in
someone's death, you want tried and tested, proven workflows and results.
These big-gun apps and hardware don't make a person any better at controlling an image, editing it, or analyzing it. (Quite the opposite in some
cases! You would be amazed what happens when a person is allowed to add as many nodes/filters as they want without being punished by a crashing
workspace.) A martial artist learns the very base techniques of their art before learning how to Chuck Norris a person in the face. It is the same
for image analysts.
Just an example of the price of Smoke, which is a video editing system in the Autodesk family:
An Autodesk Smoke 2012 for Mac OS X license is available at a suggested retail price of $14,995.* Autodesk Subscription is available for purchase
simultaneously with the product license for $1,995 MSRP per year.
Source
Perhaps one day we will have algorithms which not only analyze and process your image but work out all the steps in between ... so perhaps you may
just be able to click the 'I wanna see it better, kthxgo' button ... but until then it's highly likely we're just going to be working with our
individual filters and functions, trying to choose the best one for what we're aiming to do.
And until then I really would urge people to learn as much image theory as they can if it's what interests them. It can save you money on expensive
ABC plugins, and it's a great investment in the future because, oddly enough, how light, color and maths work most likely isn't going to change in
our lifetimes.