Code and data for paper "Deep Painterly Harmonization"
This software is published for academic and non-commercial use only.
This code is based on Torch. It has been tested on Ubuntu 16.04 LTS.

Dependencies: Torch, plus Matlab or Octave for the post-processing step.

CUDA backend: the CUDA toolkit (nvcc is needed to compile cuda_utils.cu).
Download VGG-19:

```
sh models/download_models.sh
```

Compile cuda_utils.cu (adjust PREFIX and NVCC_PREFIX in the makefile for your machine):

```
make clean && make
```
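If `make` fails because it cannot find the CUDA toolkit, a quick check like the sketch below can suggest values for PREFIX and NVCC_PREFIX. This is not part of the repository; the `/usr/local/cuda` path is only a common Ubuntu default, not something the makefile guarantees.

```python
# Sketch: suggest candidate values for the makefile's PREFIX / NVCC_PREFIX.
# The default prefix below is a typical Ubuntu install path (an assumption).
import os
import shutil

def cuda_hints(default_prefix="/usr/local/cuda"):
    """Return candidate makefile settings and whether the default prefix exists."""
    nvcc = shutil.which("nvcc")  # full path to nvcc, or None if not on PATH
    return {
        "nvcc": nvcc,                                 # basis for NVCC_PREFIX
        "prefix": default_prefix,                     # candidate PREFIX
        "prefix_exists": os.path.isdir(default_prefix),
    }

if __name__ == "__main__":
    print(cuda_hints())
```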
To generate all results (in data/) using the provided scripts, simply run

```
python gen_all.py
```

in Python, and then

```
run('filt_cnn_artifact.m')
```

in Matlab or Octave. The final output will be in results/.
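As a rough illustration of what the batch driver does, the sketch below groups the files in data/ by a numeric example id, the kind of iteration a script like gen_all.py performs. The naming pattern (a shared numeric prefix such as `0_target.jpg`, `0_naive.jpg`) is an assumption about the data layout for illustration, not a documented contract of this repository.

```python
# Hypothetical helper: group files in data/ by numeric example id so each
# example can be processed in one pass. The "<id>_*" naming is assumed.
import os
from collections import defaultdict

def collect_examples(data_dir):
    """Map each numeric example id to the sorted list of files belonging to it."""
    examples = defaultdict(list)
    for name in sorted(os.listdir(data_dir)):
        prefix = name.split("_", 1)[0]
        if prefix.isdigit():          # skip files without a numeric id prefix
            examples[int(prefix)].append(name)
    return dict(examples)

if __name__ == "__main__":
    for idx, files in sorted(collect_examples("data").items()):
        print(idx, files)  # one harmonization run per example id would go here
```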
Note that in the paper we trained a CNN on a dataset of 80,000 paintings collected from wikiart.org; it estimates the stylization level of a given painting and adjusts the weights accordingly. We will release the pre-trained model in the next update. For now, users running the code on their own paintings will need to set those weights manually.
A few images have been removed due to copyright issues. The full set is available here for testing use only.
Here are some results from our algorithm (from left to right: original painting, naive composite, and our output):
If you find this work useful for your research, please cite:
```
@article{luan2018deep,
  title={Deep Painterly Harmonization},
  author={Luan, Fujun and Paris, Sylvain and Shechtman, Eli and Bala, Kavita},
  journal={arXiv preprint arXiv:1804.03189},
  year={2018}
}
```
Feel free to contact me if you have any questions (Fujun Luan, fl356@cornell.edu).