Google AI Retouches Photos and Deceives the Professionals

A Google Street View experiment turns algorithms into post-production and framing professionals, thanks to machine learning. Enjoy the read.

Google is pushing the frontier of photo editing. Several projects circulating in recent months show off the potential of machine learning and, more broadly, of artificial intelligence in an area defined by one key trait: the absence of objective judgment. Paradoxically, that subjectivity has long been seen as a lifeline for certain trades, whose specialization and artistic component were thought to shield them from automation.

"Machine learning excels in many areas with precise goals, tasks where a right or wrong answer exists and supports training the algorithm toward the desired result, such as identifying objects in images or producing a decent translation from one language to another," explains Hui Fang, software engineer, in a post recently published on Mountain View's research blog. "But there are areas where such objective evaluations are not available. Whether a photograph is beautiful, for example, depends on its aesthetic value, which is an extremely subjective concept."

How, then, does machine learning tackle creative work? Big G researchers Hui Fang and Meng Zhang of Google Research's Machine Perception group have developed an experimental deep-learning system dedicated to images, Creatism, and described it all in a paper published on arXiv.

How does it work? It essentially mimics the workflow of a professional photographer, from framing to composition, to carry out a specific task: turning panoramas captured by Google Street View into aesthetically pleasing creations through cropping and post-production. The researchers call it a "virtual photographer": it analyzed over 40,000 panoramas from the most diverse points on the globe (the Alps, the Canadian national parks Banff and Jasper, Big Sur in California, Yellowstone) and reworked them in depth, so much so that, according to the experiment, the results approach professional quality. "At least judging by professional photographers." Well, yes, because the judges were six of them. But let's proceed in order.
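The crop-then-retouch workflow the paragraph describes can be sketched in a few lines. This is a toy illustration, not the actual Creatism pipeline: the function name, crop box, and the simple brightness/saturation formulas are all assumptions made for clarity.

```python
import numpy as np

def virtual_photographer(panorama, crop_box, saturation=1.2, brightness=1.1):
    """Toy sketch of a crop-then-retouch step (NOT the real Creatism system):
    pick a framing out of a panorama, then apply simple brightness and
    saturation adjustments to the cropped region."""
    top, left, height, width = crop_box
    crop = panorama[top:top + height, left:left + width].astype(float)

    # Brightness: scale all channels uniformly.
    crop *= brightness

    # Saturation: push each pixel away from its per-pixel gray (mean) value.
    gray = crop.mean(axis=2, keepdims=True)
    crop = gray + saturation * (crop - gray)

    return np.clip(crop, 0, 255).astype(np.uint8)

# A synthetic 100x400 RGB "panorama" stands in for a Street View capture.
pano = np.random.randint(0, 256, size=(100, 400, 3), dtype=np.uint8)
shot = virtual_photographer(pano, crop_box=(10, 50, 80, 120))
print(shot.shape)  # (80, 120, 3)
```

The real system, of course, learns where to crop and how far to push each adjustment; here both are fixed by hand.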

The project apparently intersects with another system developed by Google, this time in collaboration with MIT in Boston, which we have covered before: a set of mechanisms able to retouch images with the skill of a photographer, in real time and with no effort from the user. The principle was the same, but in that case the researchers used 5,000 images from a dataset created by Adobe and MIT. Each photo had been retouched by five different experts, from whom the algorithms "stole the craft," letting the system process image data practically before the shot is taken.

The other experiment, the one discussed above, was, if possible, even more sophisticated. The system did virtually everything on its own, breaking the landscape down into its compositional dimensions (saturation, high dynamic range, brightness) and learning not only from 15,000 highly rated landscapes (according to users) taken from the site 500px.com, but also from bad examples fed to it, that is, bad instances of saturation, HDR and composition randomly produced by degrading images. In short, the algorithm was trained with a so-called generative adversarial network, a class of artificial-intelligence models introduced in 2014 and composed of two neural networks that face off in a zero-sum game.

Technical details aside, what matters are the results. And they are impressive images that managed to deceive even professional photographers. "To judge the effectiveness of the algorithm we created a kind of Turing test," the Google researchers explain. "We mixed our creations with other photos of varying quality and showed them to a number of professional photographers." Their task? Exactly: to evaluate them, assigning each image a score without knowing it belonged to this mix of "artificial" and real photos, on four possible levels: "point-and-shoot," devoid of any kind of consideration; "good photo," one anyone could take without a background in photography and with nothing artistic about it; "semi-pro," a beautiful photo with some evident artistic qualities; and "professional." Well, of the 173 "artificial" pictures (you can see them here), 41% earned the semi-professional judgment and another 13% the good level, closely tracking the researchers' own evaluations. The system thus deceived even highly experienced eyes, which gave similar judgments to real shots, those not generated or reworked but taken by a flesh-and-blood photographer, in only 45% of cases.
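Tallying such a rating exercise is straightforward. The snippet below is a hypothetical illustration of the four-level rubric, with a toy list of ratings chosen to roughly mirror the reported split (41% semi-pro, 13% good out of 173 photos); the function and label names are assumptions, not the paper's code.

```python
from collections import Counter

LEVELS = ["point-and-shoot", "good", "semi-pro", "professional"]

def score_summary(ratings):
    """Percentage of photos at each of the four quality levels."""
    counts = Counter(ratings)
    total = len(ratings)
    return {level: round(100 * counts[level] / total, 1) for level in LEVELS}

# Toy ratings approximating the reported result for the 173 machine photos.
ratings = ["semi-pro"] * 71 + ["good"] * 22 + ["point-and-shoot"] * 80
print(score_summary(ratings))
```

Running the same tally over the real photographers' scores against real photos is how the 45% comparison figure above would be obtained.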

The project obviously has a long road ahead. "Street View panoramas served as a platform for our project," says Hui Fang, who hopes one day, as with the other experiment, to help users take better photos in their daily lives, surpassing the systems currently available on our smartphones with the power of artificial intelligence.
