Updated on 2024/02/28


Fukusato, Tsukasa
Faculty of Science and Engineering, School of Fundamental Science and Engineering
Job title
Assistant Professor (tenure-track)





  • Faculty of Science and Engineering / Graduate School of Fundamental Science and Engineering

Internal Special Research Projects

  • An Interactive Cel-Animation Production Support Tool for Professional Hand-Drawn Designers Working in the Field



    This research focused on a technique for designing character animation using inbetween charts. Inbetween charts are a familiar tool in cartoon production; they show how many inbetweens go between each pair of keyframes. Typically, inbetweens are not drawn for all 60+ frames required for one second of animation, and most movements can be conveyed with fewer drawings per second. In addition, to vary the speed of character movement, animators often place inbetweens at different intervals on the chart. This technique is called "the illusion of movement," a characteristic specific to hand-drawn cartoons that distinguishes them from full-CG animation (60+ fps).

    Digital 2D animation techniques such as shape interpolation have increasingly been adopted around the world. However, to our knowledge, no software exists that supports inbetween chart design; digital animators working in cartoon production therefore make inbetweens for cutout animation manually, as follows: (1) they render many images of intermediate shapes generated by such interpolation methods, (2) import them into software for digital motion compositing and editing, and (3) select several images among them as a post-process. This approach requires repeatedly converting the selected images to video and checking the result until the animator is satisfied.

    We therefore propose an interactive tool for intuitively making inbetween charts, inspired by cartoon animators' techniques. Given several keyframes, the system constructs trajectory-guided sliders that let users directly adjust inbetween values on the screen. These sliders can also visualize simple inbetween timings in the background of the slider, providing guidance on cartoon-like motions such as animating "on twos" and "slow-in/out." The method is simple enough to be easily implemented in existing animation-authoring tools. We conducted a user study with novice and amateur users and confirmed that the proposed slider is effective for manually constructing the inbetween charts envisioned by the users.
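The timing patterns mentioned above can be made concrete with a small sketch. This is a minimal illustration, not the authors' implementation: it maps an inbetween chart to interpolation parameters t in [0, 1] between two keyframes, where each t would then drive a shape-interpolation method to produce one inbetween drawing. The function names and the cosine-based easing are illustrative assumptions.

```python
import math

def uniform_chart(n):
    """Evenly spaced inbetweens: constant speed between two keyframes."""
    return [i / (n + 1) for i in range(1, n + 1)]

def slow_in_out_chart(n):
    """Slow-in/out: inbetweens cluster near both keyframes (cosine easing
    is an illustrative choice, not the paper's method)."""
    return [(1 - math.cos(math.pi * i / (n + 1))) / 2 for i in range(1, n + 1)]

def on_twos(params):
    """Hold each drawing for two frames ("on twos")."""
    return [t for t in params for _ in range(2)]

# Three inbetweens with slow-in/out spacing, each held for two frames:
frame_params = on_twos(slow_in_out_chart(3))
```

Here the small gaps between consecutive t values near 0 and 1 correspond to the closely spaced marks an animator would draw near the keyframes on a slow-in/out chart.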

  • An Illustration Production System that Promotes Creative Thinking



    To build a system that creates the high-quality illustrations users imagine, as a first step we developed an AI system that generates photorealistic images (e.g., facial images) from rough illustrations. Synthesizing photorealistic facial images from monochromatic rough sketches is one of the most fundamental tasks in image-to-image translation. However, it remains challenging to simultaneously account for (1) high-dimensional face features such as geometry and color, and (2) the characteristics of the input sketches. Existing methods often use sketches only as indirect inputs to guide AI models, resulting in the loss of sketch features or in alterations to the geometry information.

    This research proposes an LDM-based network architecture trained on a paired sketch–face dataset, named the "Sketch-Guided Latent Diffusion Model (SGLDM)." We first apply a Multi-Auto-Encoder (AE) to encode the input sketch from pixel space to a feature map in latent space by dividing the sketch into several regions, which reduces the dimensionality of the sketch input while preserving the geometry-related information of local face details. Next, we construct a sketch–face paired dataset based on an existing method that extracts an edge map from an image. In addition, we augment the dataset to improve the robustness of the SGLDM to arbitrarily abstract sketch inputs. An evaluation study shows that the SGLDM can synthesize high-quality face images with different expressions, facial accessories, and hairstyles from sketches at various abstraction levels.
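The region-wise encoding step can be sketched as follows. This is a hedged illustration of the idea only: the region names, box coordinates, and image size are invented for the example and are not the paper's actual crops; in the real system each region crop would feed its own auto-encoder branch of the Multi-AE.

```python
def crop(img, top, left, h, w):
    """Crop a region from a 2D monochrome image given as a list of rows."""
    return [row[left:left + w] for row in img[top:top + h]]

# Illustrative region boxes on a 512x512 sketch: (top, left, height, width).
# These coordinates are assumptions for demonstration, not the paper's layout.
REGIONS = {
    "left_eye":  (180, 100, 80, 120),
    "right_eye": (180, 292, 80, 120),
    "nose":      (240, 216, 100, 80),
    "mouth":     (340, 176, 80, 160),
}

def split_sketch(img):
    """Return one crop per facial region; each crop would be encoded
    separately, preserving local geometry before latent-space fusion."""
    return {name: crop(img, *box) for name, box in REGIONS.items()}

sketch = [[0] * 512 for _ in range(512)]  # placeholder blank sketch
parts = split_sketch(sketch)
```

Splitting before encoding is what lets the model keep local detail (eye shape, mouth geometry) that a single global encoder tends to blur away when compressing the whole sketch at once.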