
Wednesday, October 09, 2013

Full-Frame Camera Mind Share

For those not into digital photography, full-frame is today's name for a digital sensor that has the same size as the effective picture area of traditional 35 mm film. If you subtract the perforations of the 35 mm film at the top and bottom, a 36x24 mm area remains for each photo.

When digital sensors arrived, sensors that big were initially too expensive to produce. The bigger the sensor area, the higher the chance that a single pixel is defective, resulting in much lower production yields.

The standard sensor size for DSLRs (digital single-lens reflex cameras) has therefore become the so-called APS-C size, which is 24x16 mm. Most DSLRs use sensors of this format. BTW, each side of the full-frame format is 1.5 times as long as for APS-C, but the area of a full-frame sensor is 2.25 times as big as an APS-C sensor (OK, you do the math).
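If you don't want to do the math yourself, here is a quick Python sketch. The area figures follow directly from the dimensions above; the yield part at the end uses a simple exponential defect model with a made-up defect density, purely to illustrate why a bigger sensor is harder to produce, not to give real manufacturing numbers:

```python
import math

# Sensor dimensions in mm, as given above.
full_frame = (36.0, 24.0)
aps_c = (24.0, 16.0)

def area(size):
    """Sensor area in square millimetres."""
    width, height = size
    return width * height

print("Linear ratio:", full_frame[0] / aps_c[0])        # 36 / 24 = 1.5 (same for the short side)
print("Area ratio:  ", area(full_frame) / area(aps_c))  # 864 / 384 = 2.25

# Purely illustrative: a simple exponential yield model, yield ~ exp(-defect_density * area).
# The defect density is a made-up number, only meant to show the trend that a
# larger sensor is exponentially more likely to contain a defect.
defects_per_cm2 = 0.1
for name, size in (("APS-C", aps_c), ("full-frame", full_frame)):
    area_cm2 = area(size) / 100.0
    print(f"{name}: relative yield ~ {math.exp(-defects_per_cm2 * area_cm2):.2f}")
```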

Since the advent of the Canon 5D, digital cameras with the bigger 36x24 mm sensor size have become available and affordable for serious enthusiasts and of course professional photographers.

I took a look at Flickr's camera statistics and summed up the average daily users (as of 2013-10-09) of full-frame cameras:


I use this as a proxy for the possible market share of the different manufacturers. Of course, this is just a rough idea, so let's call it market mind share instead.


The Leica M9 actually costs closer to USD 10'000 than 1'000, so it is no surprise that it sees only marginal usage.

What is also obvious: Sony has only minuscule usage numbers. If we look only at currently produced models, the picture gets even worse for Sony (BTW, Sony took over the camera business of Konica Minolta, which itself had taken over the Minolta system).


And Nikon is catching up with Canon big time.

Completely missing from this list is Pentax. It has a digital medium-format camera, which is even bigger than full-frame. Apart from that, however, it offers only APS-C cameras, and it ran into such big business problems that it was bought by Hoya and then resold to Ricoh.

Also absent is Olympus. Today they don't even produce cameras with an APS-C sensor; years ago they opted for an even smaller image size (the Micro Four Thirds and formerly the Four Thirds format).

But the list can also change considerably very soon, as Sony is about to introduce new cameras without the traditional mirror box. It already has a mount for its smaller-sensor mirrorless NEX line, the E-mount. The NEX uses an electronic viewfinder or alternatively the back screen, like a compact camera. No mechanical mirror is needed anymore. That also allows the so-called flange focal distance, the distance of the mount from the sensor/film plane, to be made much, much shorter. Without a mirror in between, there is a lot of space that can be saved.

But that space can also be filled with a simple tube or adapter to recreate the exact flange focal distance of some other traditional mount from Canon, Nikon or whoever.
That way, lenses from almost any manufacturer can be used on such a camera. And should this camera have a full-frame sensor, these lenses would be used the way they were originally designed.
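To put some rough numbers on that idea: an adapter basically just has to make up the difference between the foreign mount's flange focal distance and the E-mount's very short one. The distances below are the commonly cited approximate values, so treat this little Python sketch as a back-of-the-envelope illustration rather than an adapter spec:

```python
# Approximate flange focal distances in mm (commonly cited figures, ballpark values only).
E_MOUNT = 18.0          # Sony E-mount (NEX)
MOUNTS = {
    "Canon EF": 44.0,
    "Nikon F": 46.5,
    "Leica M": 27.8,    # a rangefinder mount, not an SLR one, but adaptable just the same
}

# An adapter only has to add back the missing distance, so an adapted lens sits
# exactly as far from the sensor as its designers intended.
for mount, flange in MOUNTS.items():
    print(f"{mount} -> E-mount adapter: ~{flange - E_MOUNT:.1f} mm")
```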

So let's wait and see when and with what Sony will come out, and how it will change the camera landscape.

Tuesday, March 09, 2010

Google - Translating

BTW, if the article is not accessible directly, go to Google News, search for the title, and go from there. At the moment most pay-per-view articles are free for Google users.

BTW BTW, this will be a very difficult area for Apple to compete in. Though Apple has the cash reserves now, they have neither the expertise nor any synergies with other areas of their business. They are already trying to catch up on maps.

BTW BTW BTW, the biggest progress in terms of artificial intelligence (well, since the invention of AI, whatever) is IMHO the Google Search database/engine. It is maybe not much more than a big, big memory, but memory IS a VERY big part of intelligence! And every further software/module/engine/project can be built on top of that, with whatever new results and effects that might bring! However, and here is the problem with any further global progress and with Google itself: you'd better work for Google if you want to have the means and access to the goodies. Google has a treasure with nothing comparable in human history (gold, oil, land, resources, money, people, armies, you name it). And as you can see below, they intend to use it.

NY Times article: Google’s Computing Power Betters Translation Tool
Creating a translation machine has long been seen as one of the toughest challenges in artificial intelligence. For decades, computer scientists tried using a rules-based approach — teaching the computer the linguistic rules of two languages and giving it the necessary dictionaries.

But in the mid-1990s, researchers began favoring a so-called statistical approach. They found that if they fed the computer thousands or millions of passages and their human-generated translations, it could learn to make accurate guesses about how to translate new texts.

It turns out that this technique, which requires huge amounts of data and lots of computing horsepower, is right up Google’s alley.

“Our infrastructure is very well-suited to this,” Vic Gundotra, a vice president for engineering at Google, said. “We can take approaches that others can’t even dream of.”

...

“This technology can make the language barrier go away,” said Franz Och, a principal scientist at Google who leads the company’s machine translation team. “It would allow anyone to communicate with anyone else.”

Mr. Och, a German researcher who previously worked at the University of Southern California, said he was initially reluctant to join Google, fearing it would treat translation as a side project. Larry Page, Google’s other founder, called to reassure him.

“He basically said that this is something that is very important for Google,” Mr. Och recalled recently. Mr. Och signed on in 2004 and was soon able to put Mr. Page’s promise to the test.

While many translation systems like Google’s use up to a billion words of text to create a model of a language, Google went much bigger: a few hundred billion English words. “The models become better and better the more text you process,” Mr. Och said.
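To make the "statistical approach" described in the article a bit more tangible, here is a toy Python sketch of the classic IBM Model 1 word-alignment algorithm run on a made-up three-sentence parallel corpus. It is a drastic simplification for illustration only and certainly not how Google's actual system works:

```python
from collections import defaultdict
from itertools import product

# A made-up, tiny parallel corpus (German -> English), purely for illustration.
corpus = [
    ("das haus ist klein".split(), "the house is small".split()),
    ("das haus ist gross".split(), "the house is big".split()),
    ("der hund ist klein".split(), "the dog is small".split()),
]

src_vocab = {w for src, _ in corpus for w in src}
tgt_vocab = {w for _, tgt in corpus for w in tgt}

# Start with uniform translation probabilities t(target | source).
t = {(s, e): 1.0 / len(tgt_vocab) for s, e in product(src_vocab, tgt_vocab)}

# A few rounds of the IBM Model 1 EM algorithm: collect expected alignment
# counts (E-step), then re-normalize them into probabilities (M-step).
for _ in range(10):
    count = defaultdict(float)
    total = defaultdict(float)
    for src, tgt in corpus:
        for e in tgt:
            norm = sum(t[(s, e)] for s in src)
            for s in src:
                frac = t[(s, e)] / norm
                count[(s, e)] += frac
                total[s] += frac
    t = {(s, e): count[(s, e)] / total[s] for (s, e) in t}

def best_translation(word):
    """Most probable English word for a German word under the learned model."""
    return max(tgt_vocab, key=lambda e: t[(word, e)])

# Content words get picked up from just three sentence pairs;
# function words would need far more data to disambiguate.
print("haus  ->", best_translation("haus"))   # house
print("klein ->", best_translation("klein"))  # small
```

The same principle, scaled up from three sentence pairs to hundreds of billions of words and far more sophisticated models, is what the article means by the models becoming "better and better the more text you process".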