Google showed off the muscle of its machine learning algorithms by demonstrating how it could remove a chain-link fence from a photo without any negative impact on the rest of the image. That was a year ago, and the feature is still nowhere to be found.

Google's object removal feature for Photos not coming anytime soon

A lot of exciting features were shown off at Google I/O 2018, and last year’s I/O was no different. Of all the announcements and demos from last year, the one involving an algorithm capable of removing objects from photos was probably the most exciting. The demo used a photo in which a chain-link fence blocked the view of the subject, and just like that, Google’s AI magic removed the fence while retaining everything else in the scene. It has been a year since that demo and the feature still hasn’t made it to people’s smartphones. We finally know why.
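For a rough sense of the general idea (and emphatically not Google’s method), classical computer vision already offers a crude form of object removal: mark the unwanted pixels with a mask, then fill them in from the surrounding scene. The Python sketch below uses OpenCV’s inpainting API purely for illustration; the file names and the ready-made fence mask are hypothetical stand-ins for what Google’s demo presumably produced automatically with a learned model.

import cv2

# Illustrative sketch only; Google's demo used a learned model, not this
# classical heuristic. File names and the mask are hypothetical.
photo = cv2.imread("fence_photo.jpg")

# Binary mask: white pixels mark the fence region to be removed.
mask = cv2.imread("fence_mask.png", cv2.IMREAD_GRAYSCALE)

# Fill the masked region from neighbouring pixels (Telea's algorithm).
result = cv2.inpaint(photo, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("fence_removed.jpg", result)

The hard part, and presumably what Google’s model was meant to handle, is generating that mask and a plausible fill automatically for arbitrary scenes.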


At Google I/O, executives shared that the feature was simply ‘down prioritized.’ According to a report by XDA Developers, the object removal feature was teased during the 2017 keynote to demonstrate Google’s machine learning capabilities. XDA Developers says that “while the technology is certainly available and can be deployed, the team approaches building their product by prioritizing what’s most important for people. Hence, the Photos team prioritized other applications of machine learning above this feature.”


While XDA says this was just a case of reprioritization, 9to5Google tells a whole different story. According to its report, Google executives refused to say whether the feature, even a year later, was still expected to be ‘coming soon.’ “I wouldn’t say it’s coming very soon,” the Product Lead for Google Photos told 9to5Google. A tipster told the website that a serious underlying issue, which the company has not managed to fix yet, was the reason behind not just the delay but also the change in priorities.


What we did get this year, though, is new functionality in the Google Photos app that suggests actions based on the contents of a photo. For example, if Photos recognizes a friend in a picture, it will suggest sharing it with them. Photos is also now capable of selective colour removal and enhancements, a feature that has apparently already started rolling out.

Digit NewsDesk




