Paper Title
Salienteye: Maximizing Engagement While Maintaining Artistic Style on Instagram Using Deep Neural Networks
Paper Authors
Paper Abstract
Instagram has become a great venue for amateur and professional photographers alike to showcase their work. It has, in other words, democratized photography. Photographers typically take thousands of photos in a session, from which they pick a few to showcase on Instagram. Photographers trying to build a reputation on Instagram must strike a balance between maximizing their followers' engagement with their photos and maintaining their artistic style. We used transfer learning to adapt Xception, an object recognition model trained on the ImageNet dataset, to the task of engagement prediction, and we utilized Gram matrices generated from VGG19, another object recognition model trained on ImageNet, for the task of style similarity measurement on photos posted on Instagram. Our models can be trained on individual Instagram accounts to create personalized engagement prediction and style similarity models. Once the models are trained on an account, the user can have new photos sorted by predicted engagement and by style similarity to their previous work, enabling them to upload photos that not only have the potential to maximize engagement from their followers but also maintain their style of photography. We trained and validated our models on several Instagram accounts, showing them to be adept at both tasks and outperforming several baseline models and human annotators.
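The following is a minimal Keras sketch of the two components the abstract describes, not the authors' actual implementation: a frozen Xception backbone with a small trainable head for engagement prediction, and a style distance computed from Gram matrices of VGG19 features. The choice of head, the binary "high/low engagement" target, and the `block3_conv1` style layer are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception, VGG19
from tensorflow.keras.applications.xception import preprocess_input as xception_pre
from tensorflow.keras.applications.vgg19 import preprocess_input as vgg19_pre

IMG_SIZE = (299, 299)  # Xception's default input resolution

# --- Engagement prediction via transfer learning on Xception ---
def build_engagement_model():
    base = Xception(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=IMG_SIZE + (3,))
    base.trainable = False  # keep ImageNet features frozen; train only the new head
    x = layers.Dense(256, activation="relu")(base.output)
    out = layers.Dense(1, activation="sigmoid")(x)  # assumed target: above-median engagement
    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# --- Style similarity via Gram matrices of VGG19 features ---
_vgg = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
_style_layer = Model(_vgg.input, _vgg.get_layer("block3_conv1").output)  # assumed layer

def gram_matrix(features):
    # features: (1, H, W, C) feature map -> (C, C) matrix of channel correlations
    f = tf.reshape(features, (-1, features.shape[-1]))
    return tf.matmul(f, f, transpose_a=True) / tf.cast(tf.shape(f)[0], tf.float32)

def style_distance(img_a, img_b):
    # img_a, img_b: HxWx3 float32 arrays resized to IMG_SIZE; lower = more similar style
    g_a = gram_matrix(_style_layer(vgg19_pre(img_a[None])))
    g_b = gram_matrix(_style_layer(vgg19_pre(img_b[None])))
    return float(tf.reduce_mean(tf.square(g_a - g_b)))
```

In this sketch, new candidate photos would be scored with the trained engagement head (after `xception_pre` preprocessing) and ranked against the account's previous posts by `style_distance`, which mirrors the abstract's sorting by predicted engagement and style similarity.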