Estimating reflectance and shape of objects from a single cartoon-shaded image

Computational Visual Media, 2017, Issue 1



Hideki Todo1, Yasushi Yamaguchi2

Although many photorealistic relighting methods provide a way to change the illumination of objects in a digital photograph, it is currently difficult to relight digital illustrations having a cartoon shading style. The main difference between photorealistic and cartoon shading styles is that cartoon shading is characterized by soft color quantization and nonlinear color variations, which cause noticeable reconstruction errors under a physical reflectance assumption such as Lambertian reflection. To handle this non-photorealistic shading property, we focus on shading analysis of the most fundamental cartoon shading technique. Based on the color map shading representation, we propose a simple method to interpret the input shading as that of a smooth shape with a nonlinear reflectance property. We have conducted simple ground-truth evaluations to compare our results to those obtained by other approaches.

non-photorealistic rendering; cartoon shading; relighting; quantization

1 Introduction

Despite recent progress in 3D computer graphics techniques, traditional cartoon shading styles remain popular for 2D digital art. Artists can use a variety of commercial software (e.g., Photoshop, Painter) to design their own expressive shading styles. Although the design principle used roughly follows a physical illumination model, editing is restricted to 2D drawing operations. We are interested in exploring new interactions which allow relighting of a painted shading style given a single input image.

Reconstructing surface shape and reflectance from a single image is known as the shape-from-shading problem [1]. Based on this fundamental problem setting, most relighting approaches assume shading follows a Lambertian model [2–4]. Although these approaches work well for photorealistic images, they often fail to interpret cartoon shading styles in digital illustrations.

The main difference between photorealistic and cartoon shading styles is that cartoon shading is characterized by nonlinear color variation with soft quantization. The designed shading is typically more quantized than the inherent surface shape and its illumination. This assumption is common in many 3D stylized rendering techniques based on a color map representation [5–7], which simply converts smooth 3D illumination to an artistic shading style. As shown in Fig. 1, this simple mechanism can produce a variety of shading styles with different quantization effects. However, such stylization makes it more difficult for shading analysis to reconstruct a surface shape and reflectance from the shading.

Fig.1 Stylized shading styles obtained by color map representation.

In this paper, we propose a simple shading analysis method to recover a reasonable shading representation from the input quantized shading. As a first step, we focus on the most fundamental cartoon shading [6]. Our primary assumption is that the main nonlinear factor in the final shading can be encoded by a color map function. With this in mind, we aim to reconstruct a smooth surface field and a nonlinear reflectance property from the input shading. Using these estimated data, our method provides a way to change the illumination of the input image while retaining its quantized shading style. To evaluate our approach, we conducted a simple pilot study using a prepared set of 3D models and color maps covering a variety of stylization inputs. The proposed method was quantitatively compared to related approaches, which provided several key insights regarding relighting of stylized shading.

2 Related work

Color mapping is a common approach used to generate stylized appearances in comics or illustrations. In stylized rendering of a 3D scene, the color map representation is used to convert smooth 3D illumination into quantized nonlinear shading effects [5–7]. Similar conversion techniques are used in 2D image abstraction methods for photorealistic images or videos [8–11]. As a starting point, our work follows the basic assumption that stylized shading appearance is based on a smooth surface shape.

Previous shape reconstruction methods for painted illustrations also attempt to recover a smooth surface shape from the limited information provided by feature lines. Lumo [12] generates an approximate normal field by interpolating normals on region boundaries and interior contours. Sýkora et al. [13] extended this approach with a simple set of user annotations to recover full 3D shape for global illumination rendering. CrossShade [14] enables the user to design cross-section curves for better control of the constructed normal field. The CrossShade technique was extended by Iarussi et al. [15] to construct generalized bend fields from rough sketches in bitmap form. However, these approaches only focus on shape modeling from boundary constraints. The recently proposed inverse toon shading modeling framework [16] also follows the strategy of modeling normal fields by designing isophote curves. In that work, the interpolation scheme requires manual editing to design two sets of isophotes under different illumination conditions for robust interpolation; reliable isophote values are also assumed. In contrast, our objective is to use a single cartoon-shaded image to obtain a shading representation that contains both a shape and a nonlinear color map reflectance.

An entire illumination constraint is considered in the well-known shape-from-shading (SFS) problem [1] for photorealistic images. Since the problem is severely ill-posed, accurate surface reconstruction requires skilled user interaction [3, 4, 17]: the user must specify shape constraints to reduce the solution space of the SFS problem. To reduce the user's burden, another class of approaches uses rough approximations from luminance gradients [2, 18] that can be tolerated by human perception. However, such approaches assume a photorealistic reflectance model, which often results in large reconstruction errors for the nonlinear shading in digital illustrations.

Motivated by these considerations, we attempt to leverage limited cartoon shading information to model a smooth surface shape and nonlinear reflectance to reproduce the original shading appearance.

3 Problem definition

3.1 Shading model assumptions

As proposed in the cartoon shading technique [6], we assume a color map representation is used to reproduce the artist's nonlinear shading effects. Figure 2 illustrates the basic cartoon shading process. In this model, the shading color c ∈ R³ is computed as follows:

c = M(I)  (1)

Fig.2 Cartoon shading process.

where I ∈ R is the luminance value of the illumination, and M: R → R³ is a 1D color map function which converts the luminance value to the final shading color. For a diffuse shading material, we set I = L·N, where L is a light vector and N is the surface normal vector. We are interested in manipulating L to L′ to produce a new lighting result, i.e., c′ = M(L′·N).
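As a concrete illustration, this shading model can be sketched with a discrete lookup table standing in for the color map M; the table contents, array shapes, and the helper name `cartoon_shade` are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cartoon_shade(N, L, color_map):
    """Cartoon shading c = M(L . N): the diffuse term indexes a discrete
    1D color map (K, 3) defined over I in [0, 1]."""
    I = np.clip(np.einsum('hwc,c->hw', N, L), 0.0, 1.0)   # I = L . N per pixel
    idx = np.clip((I * (len(color_map) - 1)).astype(int), 0, len(color_map) - 1)
    return color_map[idx]                                 # (H, W, 3) shading colors

# Toy example: a 2-tone quantized map on a viewer-facing patch, frontal light.
cmap = np.array([[0.2, 0.2, 0.5], [0.2, 0.2, 0.5],   # dark tone, I < 0.5
                 [0.9, 0.8, 0.7], [0.9, 0.8, 0.7]])  # bright tone, I >= 0.5
N = np.zeros((2, 2, 3)); N[..., 2] = 1.0
L = np.array([0.0, 0.0, 1.0])                        # frontal light: I = 1
c = cartoon_shade(N, L, cmap)
```

Relighting in this model amounts to calling the same function with a new light vector L′ while keeping M and N fixed.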

However, the inverse problem is ill-posed if only the shading color c is available. Our primary strategy in this paper is to limit the solution space for the other factors while preserving the final shading appearance. The basic assumptions considered in this paper are as follows.

• Smooth shape and illumination. We assume that the surface shape N and the illumination I are smooth and follow a linear relationship. The only nonlinear factor is the color map function M, which is used to produce the stylized shading appearance.

• Monotonic function for color map. For the color map function M, we assume a monotonic relation between image luminance Ic (obtained from c) and surface illumination I. This assumption is important to simplify our problem definition as a variation of a photorealistic relighting problem.

• Diffuse lighting for illumination. We analyze all shading effects as due to diffuse lighting. We do not explicitly model specular reflections and shadows in our shading analysis experiments.

4 Methods

Figure 3 illustrates the main process of the proposed shading analysis and relighting approach. Here we provide the primary objective and summarize each step.

• Initial normal estimation. First, an initial normal field N0 is required as input for the reflectance estimation and normal refinement steps. Since the reflectance property is not available, we simply approximate a smooth rounded normal field from the silhouette.

• Reflectance estimation. Given the initial normal field N0, we estimate a key light direction L and a color map function M which best fit c = M(L·N0). This decomposition result roughly matches the original shading c for the given N0.

• Normal refinement. Since the estimated decomposition does not satisfy c = M(L·N0) exactly, we refine the surface normals from N0 to N to reproduce the original shading c.

Fig. 3 Method overview. (a) Initial normal estimation to approximate a smooth rounded normal field. (b) Reflectance estimation to obtain a light and a color map. (c) Normal refinement to modify the initial normals by fitting the shading appearance. (d) Relighting to provide lighting interactions based on the shading analysis data.

• Relighting. Based on the above analysis results, the proposed method can relight the given input illustration. We change the light vector L to L′ to obtain the final shading color c′ = M(L′·N).

In the following sections, each step of the proposed shading analysis and relighting approach is described in detail.

4.1 Initial normal estimation

For the target region Ω, we can obtain a rounded normal field N0 from the silhouette inflation constraints [12, 13]:

N0(p) = N∂Ω(p), p ∈ ∂Ω  (2)

where N∂Ω = (N∂Ωx, N∂Ωy, 0) is the normal constraint from the silhouette ∂Ω. These normals are propagated to the interior of Ω using a diffusion method [19]. As shown in Fig. 4, we can obtain a smooth initial normal field N0 as a rounded shape.
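A minimal sketch of this inflation step, assuming a binary region mask and substituting simple Jacobi diffusion sweeps for the diffusion method of Ref. [19]; the boundary-normal construction, function name, and iteration count are illustrative assumptions.

```python
import numpy as np

def initial_normals(mask, iters=500):
    """Rounded normal field N0 by silhouette inflation: in-plane boundary
    normals are diffused into the interior, then a z component is added
    and the field is renormalized."""
    H, W = mask.shape
    N = np.zeros((H, W, 3))
    gy, gx = np.gradient(mask.astype(float))   # points inward across the boundary
    boundary = (np.abs(gx) + np.abs(gy)) > 0
    N[boundary, 0] = -gx[boundary]             # outward in-plane normal, z = 0
    N[boundary, 1] = -gy[boundary]
    interior = mask & ~boundary
    for _ in range(iters):                     # diffuse boundary normals inward
        avg = 0.25 * (np.roll(N, 1, 0) + np.roll(N, -1, 0)
                      + np.roll(N, 1, 1) + np.roll(N, -1, 1))
        N[interior] = avg[interior]
    N[..., 2] = np.sqrt(np.clip(1.0 - N[..., 0]**2 - N[..., 1]**2, 0.0, 1.0))
    return N / np.maximum(np.linalg.norm(N, axis=-1, keepdims=True), 1e-8)

# Toy check: a disc mask should give a viewer-facing normal at its center.
mask = np.zeros((33, 33), dtype=bool)
yy, xx = np.ogrid[:33, :33]
mask[(yy - 16)**2 + (xx - 16)**2 <= 100] = True
N0 = initial_normals(mask)
```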

4.2 Reflectance estimation

Once the initial normal field N0 has been obtained, our system estimates reflectance factors based on the cartoon shading representation c = M(L·N).

The reflectance estimation process takes the original color c and the initial normals N0 as inputs to estimate the light direction L and the color map function M. We assume that the scene is illuminated by a single key light direction (i.e., L is the same for the entire image). The color map function M is estimated for each target object.

In the early stage of our experiments, we observed that the key light estimation step was significantly affected by the input material style and shape. Our simple experiment is summarized in the Appendix. Since L is a key factor in the following estimation steps, we assume that a reliable light direction is provided by the user. In our evaluation, we used a predefined ground-truth light direction Lt to observe errors caused by the other estimation steps.

Fig. 4 Initial normal field obtained by silhouette inflation.

Color map estimation. Given the smooth illumination result I0 = L·N0, we estimate a color map function M to fit c = M(I0).

As shown in Fig. 5, isophote pixels of I0 do not all share the same color c. Therefore, a straightforward minimization of Σp‖c(p) − M(I0(p))‖² produces a blurred color map M.

To avoid this invalid correspondence between I0 and c, we enforce monotonicity by sorting the target pixels in dark-to-bright order, as shown in Fig. 6. From the sorted pixels, we can obtain a valid correspondence between each luminance range [Ii, Ii+1] and each shading color ci in the same luminance order. As a result, a color map function M is recovered as a lookup table that returns ci for [Ii, Ii+1]. We also construct the corresponding inverse map M−1, an additional lookup table that retrieves the luminance range [Ii, Ii+1] from a shading color ci.
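The sorting-based estimation can be sketched as follows; the luminance weights, bin count, and resampling into a fixed-size table are our own assumptions layered on the paper's dark-to-bright matching idea.

```python
import numpy as np

def estimate_color_map(I0, c, bins=256):
    """Recover a monotone 1D color map M from the smooth illumination
    I0 (H, W) and shading colors c (H, W, 3) by sorting both in
    dark-to-bright order and matching the two orders."""
    order_I = np.argsort(I0.ravel())                      # illumination order
    lum = 0.299*c[..., 0] + 0.587*c[..., 1] + 0.114*c[..., 2]
    order_c = np.argsort(lum.ravel(), kind='stable')      # shading-luminance order
    colors = c.reshape(-1, 3)[order_c]                    # colors, dark to bright
    I_sorted = I0.ravel()[order_I]                        # illumination, dark to bright
    # The k-th darkest illumination maps to the k-th darkest color;
    # resample this pairing into a fixed-size lookup table.
    idx = np.linspace(0, len(colors) - 1, bins).astype(int)
    M = colors[idx]                    # lookup table for M
    edges = I_sorted[idx]              # luminance values for the inverse map M^-1
    return M, edges

# Toy check: a 2-tone shading over a linear illumination ramp.
I0 = np.linspace(0.0, 1.0, 100).reshape(10, 10)
c = np.where(I0[..., None] < 0.5, 0.1, 0.9) * np.ones(3)
M, edges = estimate_color_map(I0, c, bins=10)
```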

4.3 Normal refinement

As shown in the right image of Fig. 6, the shading result of M(L·N0) does not match c perfectly. Here we consider refining the normals N0 to reproduce the original color c by minimizing the following objective function:

Σp‖c(p) − M(L·N(p))‖²  (3)

Fig. 5 Invalid correspondence between the initial illumination I0 and the input shading c.

Fig. 6 Color map estimation. Given the set of illumination values L·N0 and original colors c, a color map function M is estimated by matching the ranges in luminance order.

To address this issue, we provide the following complementary objective function to Eq. (3):

Figure 7 illustrates the illumination constraints for the normal refinement process. From the color map estimation process described in Section 4.2, the luminance range [Ii, Ii+1] is known for each shading color ci. Therefore, the illumination is restricted by the following conditions:

Ii ≤ L·N(p) ≤ Ii+1, ∀p ∈ Ci

where Ci := {p ∈ Ω | c(p) = ci} is the quantized color area, in which the illumination L·N(p) is constrained to [Ii, Ii+1].

We solve the problem by minimizing the following energy:

Fig. 7 Illumination constraints for normal refinement. The initial illumination result is modified by luminance range constraints derived from M−1.

The normals N are updated iteratively from the estimated initial normals N0 using Gauss–Seidel iterations. Here we chose λ = 1.5 to obtain the refinement results. Compared to the initial normals N0, the refined normals N better fit the original color c.
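A sketch of this refinement as projected gradient descent; the paper uses Gauss–Seidel iterations with λ = 1.5, while the step size, iteration count, and interval-penalty form here are our own illustrative assumptions.

```python
import numpy as np

def refine_normals(N0, L, lo, hi, lam=1.5, iters=200, step=0.1):
    """Refine normals so the illumination I = L . N falls inside the
    per-pixel luminance range [lo, hi] given by the inverse color map,
    while staying close to the smooth initial field N0."""
    N = N0.copy()
    for _ in range(iters):
        I = np.einsum('hwc,c->hw', N, L)
        # Residual pushing I back into [lo, hi]; zero where satisfied.
        r = np.where(I < lo, lo - I, np.where(I > hi, hi - I, 0.0))
        N += step * (lam * r[..., None] * L - (N - N0))   # data term + prior
        N /= np.maximum(np.linalg.norm(N, axis=-1, keepdims=True), 1e-8)
    return N

# Toy check: one oblique normal whose illumination (0.8) must drop toward [0.2, 0.5].
N0 = np.zeros((1, 1, 3)); N0[0, 0] = [0.6, 0.0, 0.8]
L = np.array([0.0, 0.0, 1.0])
N = refine_normals(N0, L, lo=np.full((1, 1), 0.2), hi=np.full((1, 1), 0.5))
```

The prior term keeps the result near the smooth initial field, so the final illumination settles between the initial value and the constrained range rather than snapping hard to the interval boundary.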

4.4 Relighting

Based on the cartoon shading representation c = M(L·N), our system enables lighting interactions for the input illustration. We can obtain a relighting result c′ by changing the light vector L to L′ as follows:

c′ = M(L′·N)

where the estimated factors M and N are preserved during relighting.

5 Evaluation of shading analysis

To evaluate our shading analysis approach, we conducted a simple pilot study via a ground-truth comparison. We compare our estimated results with those of several existing approaches and with ground-truth inputs.

5.1 Experimental design

To generate a variety of stylized appearances, we first prepared shape and color map datasets (see Fig. 8).

Shape dataset. We prepared 20 ground-truth 3D models of varying shape complexity and recognizability. This dataset includes 7 simple primitive shapes and 13 other shapes from 3D shape repositories. Each ground-truth model is rendered from a specific viewpoint to generate a 512×512 normal field.

Fig.8 20 ground-truth 3D shapes and 24 color maps in our datasets.

Color map dataset. To better reflect real situations, we extracted color maps from existing digital illustrations. We selected a small portion of a material area with a stroke; the selected pixels were then simply sorted in luminance order to obtain a color map. We extracted more than 100 material areas from different digital illustration sources. From the extracted color maps, we selected 24 distinctive ones with different quantization effects.

Given the ground-truth normal field Nt and color map Mt, a final input image was obtained as ct = Mt(Lt·Nt). Note that we also provide the ground-truth light direction Lt in our evaluation process.

5.2 Comparison of reflectance models

We first compared the visual difference between our target cartoon shading model and a common photorealistic Lambertian model, as shown in Fig. 9. To obtain an ambient color ka and a diffuse reflectance color kd for the Lambertian shading representation c = ka + kd I, we minimized ‖M(I) − (ka + kd I)‖ given the input color map function M. The color difference suggests that cartoon shading includes nonlinear parts which cannot be described by a simple Lambertian model. We will discuss how this nonlinear reflectance property affects the estimation results.
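The Lambertian fit described above reduces to a per-channel linear least-squares problem; this sketch assumes the color map is sampled uniformly on I ∈ [0, 1], and the function name and sample counts are illustrative.

```python
import numpy as np

def fit_lambertian(M):
    """Fit c = ka + kd * I to a discrete color map M (K, 3), assumed
    sampled uniformly on I in [0, 1], by per-channel least squares."""
    K = len(M)
    I = np.linspace(0.0, 1.0, K)
    A = np.stack([np.ones(K), I], axis=1)          # design matrix [1, I]
    coef, *_ = np.linalg.lstsq(A, M, rcond=None)
    ka, kd = coef[0], coef[1]                      # ambient and diffuse colors
    residual = np.abs(M - (ka + kd * I[:, None])).mean()
    return ka, kd, residual

# A linear ramp fits exactly; a 2-tone quantized map leaves a clear residual,
# mirroring the nonlinearity that a Lambertian model cannot describe.
linear = np.linspace(0.0, 1.0, 8)[:, None] * np.ones(3)
ka_l, kd_l, r_l = fit_lambertian(linear)
twotone = np.where(np.linspace(0.0, 1.0, 8)[:, None] < 0.5, 0.1, 0.9) * np.ones(3)
ka_t, kd_t, r_t = fit_lambertian(twotone)
```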

Fig. 9 Comparison of reflectance models. Top: color map materials selected from our dataset. Middle: Lambertian materials fitted to the corresponding color maps. Bottom: color difference between the color map materials and Lambertian materials. The materials are ordered by color difference.

5.3 Shading analysis

Figure 10 summarizes a comparison of our estimation results with those from Lumo [12] and the Lambertian assumption [4]. To simulate Lumo, we used the silhouette inflation constraints of the initial normal estimation in Eq. (2). For the Lambertian assumption, we used the illumination constraint in Eq. (5) with a small value λ = 1.0 to fit the input image luminance Ic. In all examples, we used our color map estimation method (Section 4.2) to reproduce the original shading appearance.

As shown in Fig. 10, Lumo cannot produce the details of the illumination due to the lack of inner shading constraints. The Lambertian assumption recovers the original shading appearance well; however, the estimated normal field is overfitted to the quantized illumination. Although our method distributes certain shading errors near the boundaries of the color areas, it produces a relatively smooth normal field and illumination that are both similar to the ground truth.

Figure 11 summarizes the shading analysis results for different material settings. Although our method cannot recover the same shape from different quantization styles, the estimated normal field is smoother than the input shading.

We also computed the mean squared error (MSE) to compare the estimated results quantitatively (see Figs. 12–15). In each comparison, we used the same shape and varied the material when computing the shape estimation errors.

Fig. 10 Comparison of shading analysis results with Lumo [12] and the Lambertian assumption [4]. The proposed method reproduces the original shading appearance as well as the Lambertian assumption, with a smooth normal field as in Lumo.

Fig. 11 Shading analysis results for different color map materials.

Fig.12 Errors of estimated shape depending on input material (simple shape Three Box).

Fig.13 Errors of estimated shape depending on input material (medium complexity shape Fertility).

Note that our method tends to produce smaller errors for simple rounded shapes, but the errors become larger than those of the Lambertian assumption for more complex shapes. For a complex shape like the Pulley shown in Fig. 15, even the Lambertian assumption results in large errors. Since the initial normal estimation errors become large in such cases, our method fails to recover a valid shape when only minimizing the appearance error. We provide further discussion of initial normal estimation errors in Section 7.

Fig.14 Errors of estimated shape depending on input material (medium complexity shape Venus).

Fig.15 Errors of estimated shape depending on input material (complex shape Pulley).

Though the estimated shape may not be accurate, our method successfully reduces the influence of material differences in all comparisons. Thanks to the proposed shading analysis based on the cartoon shading model assumption, our method yields consistent estimated reflectance properties across various quantization settings.

5.4 Relighting

Fig. 16 Comparison of our relighting results with those from Lumo [12] and from the Lambertian assumption of Ref. [4]. The shading analysis rows show the estimated shading results for the input ground-truth light direction and shading. The analysis data are used to produce the subsequent relighting results. Our method can produce dynamic illumination changes from the input light directions, as Lumo does; such changes are less noticeable under the Lambertian assumption. The details of the shapes are also preserved by our method.

Figure 16 and the supplemental videos in the Electronic Supplementary Material (ESM) summarize a comparison of our relighting results with those from Lumo [12] and from the Lambertian assumption of Ref. [4]. In all examples, we first estimate the shading representations in the shading analysis step. Then we use the analysis data to produce relighting results.

As discussed in the previous evaluation of the shading analysis, the proposed method and the Lambertian assumption can both preserve the original shading appearance in the shading analysis step. However, the Lambertian assumption tends to be strongly affected by the initial input illumination, so that dynamic illumination changes from the input light directions are less noticeable in the relighting results. On the other hand, the proposed method and Lumo can produce dynamic illumination changes that are similar to the ground-truth relighting results. The proposed method cannot fully recover the details of the ground-truth shape; however, our shading decomposition result can provide both dynamic illumination changes and details of the target shape.

6 Real illustration examples

We have tested our shading analysis approach on different shading styles using three real illustrations. Figure 17 shows relighting results for one of them; the others are included in the supplemental videos in the ESM. The material regions are relatively simple, but each material region is painted with different quantization effects.

To apply our shading analysis and relighting methods, we first manually segmented the material regions of the target illustration. We also provide a key light direction L for the target illustration, which is needed for our reflectance estimation step.

Fig. 17 Relighting sequence using the proposed method. Non-diffuse parts are limited to static transitions with a simple residual representation.

Fig. 18 Reflectance and shape estimation results for a real illustration. Non-diffuse parts are encoded as residual shading.

Figure 18 illustrates the elements of the reflectance and shape estimation results for the illustration. Compared to the ideal cartoon shading in our evaluations, a material region in the real examples may include non-diffuse parts. As suggested by a photorealistic illumination estimation method [20], we encode such specular and shadow effects as residual differences ∆c = c − M(L·N) from our assumed shading representation c = M(L·N). Finally, we obtain relighting results as c′ = M(L′·N) + ∆c by changing the light direction to L′.
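The residual relighting of this section can be sketched as follows, assuming M is available as a discrete (K, 3) lookup table over I ∈ [0, 1]; all names and the toy data are illustrative.

```python
import numpy as np

def relight_with_residual(c, M_table, N, L, L_new):
    """Relight with a static residual: delta_c = c - M(L . N) captures
    specular/shadow effects; the result is c' = M(L' . N) + delta_c."""
    K = len(M_table)
    def M(I):                                  # discrete color map lookup
        idx = np.clip((I * (K - 1)).astype(int), 0, K - 1)
        return M_table[idx]
    I = np.clip(np.einsum('hwc,c->hw', N, L), 0.0, 1.0)
    delta_c = c - M(I)                         # non-diffuse residual, kept static
    I_new = np.clip(np.einsum('hwc,c->hw', N, L_new), 0.0, 1.0)
    return M(I_new) + delta_c

# Toy check: with a zero residual, relighting with the same light returns c,
# and a grazing light falls to the darkest map entry.
M_table = np.array([[0.0]*3, [0.3]*3, [0.6]*3, [1.0]*3])
N = np.zeros((1, 1, 3)); N[..., 2] = 1.0
L = np.array([0.0, 0.0, 1.0])
c = np.ones((1, 1, 3))                         # equals M(L . N) here
same = relight_with_residual(c, M_table, N, L, L)
moved = relight_with_residual(c, M_table, N, L, np.array([1.0, 0.0, 0.0]))
```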

As shown in Fig. 17 and the supplemental videos in the ESM, the residual representation can recover the appearance of the original shading. We also note that our initial experiment produced plausible shading transitions for diffuse lighting, while specular and shadow effects remain relatively static.

7 Discussion and future work

In this paper, we have demonstrated a new shading analysis framework for cartoon-shaded objects. The visual appearance of the relighting results is improved by the proposed shading analysis. We incorporate the color map shading representation in our shading analysis approach, which enables shading decomposition into a smooth normal field and a nonlinear color map reflectance. We have introduced a new way to provide lighting interaction with digital illustrations; however, several challenges remain.

Firstly, our method requires a reliable light direction provided by the user. Since the light estimation method in the Appendix is significantly affected by the input shading, more user-friendly and robust light estimation approaches for cartoon shading are needed. We consider that a perceptually motivated approach [21] might be suitable.

Secondly, the method minimizes the appearance error, because a shading image is the only input. This results in an under-constrained problem when estimating both shape and reflectance. In practice, our method achieves almost the same appearance as the input. As shown in Fig. 19, however, the proposed method cannot recover the input shape even if the material has Lambertian reflectance with full illumination constraints. Although the recovered shape satisfies appearance similarity under the color map that is estimated in advance, we need a better solution space to obtain a plausible shape. Since the desirable shape typically differs between users, we plan to integrate user constraints [3, 4, 14] into the normal refinement. More robust iterated refinement cycles of shape and reflectance estimation are also desirable.

Fig. 19 Shape analysis results for Lambertian reflectance. Blob (top): small errors in shape and shading. Pulley (middle): large errors in shape. Lucy (bottom): large errors in shading.

Another limitation is that our initial normal field approximation assumes the shape to be convex. This causes noticeable errors for complex shapes such as the Pulley, as shown in Fig. 19. We therefore plan to incorporate interior contours as concave constraints, as suggested by Lumo [12]. Even though this requires a robust edge detection process to define suitable normal constraints for various illustration styles, it is a promising direction for future work that may yield a more pleasing initial normal field.

Although large collections of 2D digital illustrations are available online, we cannot directly apply our method to them since it requires manual segmentation. A crucial area of future research is to automate albedo estimation, as suggested by work on intrinsic images [22, 23]. While our initial experiments with manual segmentation produced plausible shading transitions via the diffuse shading assumption, our method cannot fully encode additional specular and shadow effects. Therefore, incorporating specular and shadow models is important future work for more practical situations. Such shading effects are often designed using non-photorealistic principles; nevertheless, we hope that our approach will provide a promising direction for new 2.5D image representations of digital illustrations.

Appendix Light estimation

In the early stage of our experiments, we tried to estimate the key light direction L from the input shading c and the estimated initial normals N0.

As suggested by Ref. [4], we approximate the problem using Lambertian reflectance Ic = kd L·N0, where the diffuse term L·N0 is simply scaled by the diffuse constant kd. For the input illumination Ic, we compute the luminance value from the original color c as the L component in Lab color space. We estimate the light vector L by minimizing the following energy:

Σp (Ic(p) − L′·N0(p))²  (10)

where L′ is given by L′ = kd L. We finally obtain the unit light vector by normalizing L′. The diffuse reflectance constant kd is optionally computed as kd = ‖L′‖.
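The minimization of Eq. (10) is linear least squares in L′ = kd L, which can be sketched as follows; the variable names and the synthetic check are our own assumptions.

```python
import numpy as np

def estimate_light(Ic, N0):
    """Estimate the key light from Ic = kd * (L . N0) by linear least
    squares: solve for L' = kd * L over all pixels, then normalize."""
    A = N0.reshape(-1, 3)                      # one row per pixel normal
    b = Ic.ravel()                             # observed luminance
    L_scaled, *_ = np.linalg.lstsq(A, b, rcond=None)
    kd = np.linalg.norm(L_scaled)              # optional diffuse constant
    return L_scaled / max(kd, 1e-8), kd

# Synthetic check: recover a known light over random unit normals.
rng = np.random.default_rng(0)
N0 = rng.normal(size=(16, 16, 3))
N0 /= np.linalg.norm(N0, axis=-1, keepdims=True)
L_true = np.array([0.3, 0.4, np.sqrt(0.75)])
Ic = 0.7 * np.einsum('hwc,c->hw', N0, L_true)
L_est, kd = estimate_light(Ic, N0)
```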

Figure 20 summarizes our light estimation experiment. In this experiment, we give a single ground-truth light direction Lt (top left) to generate the input cartoon-shaded image ct, and then estimate a key light direction L by solving Eq. (10).

It can be observed that the estimated results look consistent for near-Lambertian materials (the left 3 maps) but inconsistent for more stylized materials (the right 3 maps). Another important factor is shape complexity. The estimated light direction is relatively consistent for rounded smooth shapes. However, the light estimation error becomes quite large when the input model contains many crease edges, especially around the silhouette.

Fig. 20 Light estimation error. Top left: input ground-truth light direction Lt. Top row: input color map materials shaded from Lt. The left 3 maps have small average errors; the right 3 maps have large average errors. Left column: input 3D models. The top 3 models have small average errors; the bottom 3 models have large average errors.

These results suggest that additional constraints are required to improve light estimation. In this paper, we simply provide a ground-truth light direction for evaluation, or a reliable user-given light direction for relighting real illustration examples.

Acknowledgements

We would like to thank the anonymous reviewers for their constructive comments. We are also grateful to Tatsuya Yatagawa, Hiromu Ozaki, Tomohiro Tachi, and Takashi Kanai for their valuable discussions and suggestions. Additional thanks go to the AIM@SHAPE Shape Repository and Keenan's 3D Model Repository for 3D models, and to Makoto Nakajima and www.piapro.net for the 2D illustrations used in this work. This work was supported in part by the Japan Science and Technology Agency CREST project and the Japan Society for the Promotion of Science KAKENHI Grant No. JP15H05924.

Electronic Supplementary Material Supplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s41095-016-0066-0.

References

[1] Horn, B. K. P.; Brooks, M. J. Shape from Shading. Cambridge, MA, USA: MIT Press, 1989.

[2] Khan, E. A.; Reinhard, E.; Fleming, R. W.; Bülthoff, H. H. Image-based material editing. ACM Transactions on Graphics Vol. 25, No. 3, 654–663, 2006.

[3] Okabe, M.; Zeng, G.; Matsushita, Y.; Igarashi, T.; Quan, L.; Shum, H.-Y. Single-view relighting with normal map painting. In: Proceedings of Pacific Graphics, 27–34, 2006.

[4] Wu, T.-P.; Sun, J.; Tang, C.-K.; Shum, H.-Y. Interactive normal reconstruction from a single image. ACM Transactions on Graphics Vol. 27, No. 5, Article No. 119, 2008.

[5] Barla, P.; Thollot, J.; Markosian, L. X-toon: An extended toon shader. In: Proceedings of the 4th International Symposium on Non-Photorealistic Animation and Rendering, 127–132, 2006.

[6] Lake, A.; Marshall, C.; Harris, M.; Blackstein, M. Stylized rendering techniques for scalable real-time 3D animation. In: Proceedings of the 1st International Symposium on Non-Photorealistic Animation and Rendering, 13–20, 2000.

[7] Mitchell, J.; Francke, M.; Eng, D. Illustrative rendering in Team Fortress 2. In: Proceedings of the 5th International Symposium on Non-Photorealistic Animation and Rendering, 71–76, 2007.

[8] DeCarlo, D.; Santella, A. Stylization and abstraction of photographs. ACM Transactions on Graphics Vol. 21, No. 3, 769–776, 2002.

[9] Kang, H.; Lee, S.; Chui, C. K. Flow-based image abstraction. IEEE Transactions on Visualization and Computer Graphics Vol. 15, No. 1, 62–76, 2009.

[10] Kyprianidis, J. E.; Döllner, J. Image abstraction by structure adaptive filtering. In: Proceedings of EG UK Theory and Practice of Computer Graphics, 51–58, 2008.

[11] Winnemöller, H.; Olsen, S. C.; Gooch, B. Real-time video abstraction. ACM Transactions on Graphics Vol. 25, No. 3, 1221–1226, 2006.

[12] Johnston, S. F. Lumo: Illumination for cel animation. In: Proceedings of the 2nd International Symposium on Non-Photorealistic Animation and Rendering, 45–52, 2002.

[13] Sýkora, D.; Kavan, L.; Čadík, M.; Jamriška, O.; Jacobson, A.; Whited, B.; Simmons, M.; Sorkine-Hornung, O. Ink-and-ray: Bas-relief meshes for adding global illumination effects to hand-drawn characters. ACM Transactions on Graphics Vol. 33, No. 2, Article No. 16, 2014.

[14] Shao, C.; Bousseau, A.; Sheffer, A.; Singh, K. CrossShade: Shading concept sketches using cross-section curves. ACM Transactions on Graphics Vol. 31, No. 4, Article No. 45, 2012.

[15] Iarussi, E.; Bommes, D.; Bousseau, A. BendFields: Regularized curvature fields from rough concept sketches. ACM Transactions on Graphics Vol. 34, No. 3, Article No. 24, 2015.

[16] Xu, Q.; Gingold, Y.; Singh, K. Inverse toon shading: Interactive normal field modeling with isophotes. In: Proceedings of the Workshop on Sketch-Based Interfaces and Modeling, 15–25, 2015.

[17] Wu, T.-P.; Tang, C.-K.; Brown, M. S.; Shum, H.-Y. ShapePalettes: Interactive normal transfer via sketching. ACM Transactions on Graphics Vol. 26, No. 3, Article No. 44, 2007.

[18] Lopez-Moreno, J.; Jimenez, J.; Hadap, S.; Reinhard, E.; Anjyo, K.; Gutierrez, D. Stylized depiction of images based on depth perception. In: Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, 109–118, 2010.

[19] Orzan, A.; Bousseau, A.; Barla, P.; Winnemöller, H.; Thollot, J.; Salesin, D. Diffusion curves: A vector representation for smooth-shaded images. Communications of the ACM Vol. 56, No. 7, 101–108, 2013.

[20] Kholgade, N.; Simon, T.; Efros, A.; Sheikh, Y. 3D object manipulation in a single photograph using stock 3D models. ACM Transactions on Graphics Vol. 33, No. 4, Article No. 127, 2014.

[21] Lopez-Moreno, J.; Garces, E.; Hadap, S.; Reinhard, E.; Gutierrez, D. Multiple light source estimation in a single image. Computer Graphics Forum Vol. 32, No. 8, 170–182, 2013.

[22] Grosse, R.; Johnson, M. K.; Adelson, E. H.; Freeman, W. T. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In: Proceedings of the IEEE 12th International Conference on Computer Vision, 2335–2342, 2009.

[23] Rother, C.; Kiefel, M.; Zhang, L.; Schölkopf, B.; Gehler, P. V. Recovering intrinsic images with a global sparsity prior on reflectance. In: Proceedings of Advances in Neural Information Processing Systems 24, 765–773, 2011.

Hideki Todo is an assistant professor in the School of Media Science at Tokyo University of Technology. He received his Ph.D. degree in information science and technology from the University of Tokyo in 2013. His research interests lie in the field of computer graphics in general, particularly non-photorealistic rendering.

Yasushi Yamaguchi, Dr. Eng., is a professor in the Graduate School of Arts and Sciences at the University of Tokyo. His research interests lie in image processing, computer graphics, and visual illusion, including visual cryptography, computer-aided geometric design, volume visualization, and painterly rendering. He has served as president of the Japan Society for Graphic Science and as vice president of the International Society for Geometry and Graphics.

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

1 Tokyo University of Technology, Tokyo 192-0982, Japan. E-mail: toudouhk@stf.teu.ac.jp.

2 The University of Tokyo, Tokyo 153-8902, Japan. E-mail: yama@graco.c.u-tokyo.ac.jp.


Received: 2016-08-30; accepted: 2016-11-10