MATSUMARU, Takafumi


Affiliation

Faculty of Science and Engineering, Graduate School of Information, Production, and Systems

Job title

Professor

Homepage URL

http://www.waseda.jp/sem-matsumaru/

Research Institute

  • 2020 - 2022   Waseda Research Institute for Science and Engineering   Adjunct Researcher

Education

  • -1987   Waseda University   Graduate School, Division of Science and Engineering   Mechanical Engineering, Biological Control Engineering

  • -1985   Waseda University   Faculty of Science and Engineering   Department of Mechanical Engineering

Degree

  • 1998.03   Waseda University   Doctor of Engineering

Research Experience

  • 2010.09 - to date   Professor, Waseda University, Kitakyushu, Japan: Bio-Robotics and Human-Mechatronics

  • 2010.09 -   Professor, Waseda University (Faculty of Science and Engineering; Graduate School of Information, Production and Systems)

  • 1999.04 - 2010.08   Visiting Professor, Shizuoka University, Hamamatsu, Japan: Bio-Robotics and Human-Mechatronics

  • 2004.04 - 2005.03   Associate Professor, Shizuoka University, Hamamatsu, Japan: Bio-Robotics and Human-Mechatronics

  • 2003.04 - 2003.12   Part-time Professor, Shizuoka Institute of Science and Technology, Fukuroi, Japan: Robotics and Mechatronics


Professional Memberships

  • Society of Biomechanisms Japan (SOBIM)

  • The Robotics Society of Japan (RSJ)

  • The Society of Instrument and Control Engineers (SICE)

  • The Japan Society of Mechanical Engineers (JSME)

  • Human Interface Society (HIS)


 

Research Areas

  • Database

  • Rehabilitation science

  • Intelligent robotics

  • Mechanics and mechatronics

  • Robotics and intelligent system

Research Interests

  • Bioengineering

  • Robotics

  • Human-Mechatronics

  • Bio-Robotics

Papers

  • Long-arm Three-dimensional LiDAR for Anti-occlusion and Anti-sparsity Point Clouds

    Jingyu Lin, Shuqing Li, Wen Dong, Takafumi Matsumaru, Shengli Xie

    IEEE Transactions on Instrumentation and Measurement    2021.08  [Refereed]  [International coauthorship]

    Summary:

    Light detection and ranging (LiDAR) systems, also called laser radars, have a wide range of applications. This paper considers two problems in LiDAR data. The first problem is occlusion: a LiDAR acquires point clouds by scanning the surrounding environment with laser beams emitted from its center, so an object behind another cannot be scanned. The second problem is sample sparsity: LiDAR scanning is usually performed with a fixed angular step, so the sample points on an object surface at a long distance are sparse, and an accurate boundary and detailed surface of the object cannot be obtained. To address the occlusion problem, we design and implement a novel three-dimensional (3D) LiDAR with a long arm, which is able to scan occluded objects from their flanks. To address the sample-sparsity problem, we propose an adaptive-resolution scanning method that detects objects and adjusts the angular step in real time according to the density of points on the object being scanned. Experiments on our prototype system and scanning method verify their effectiveness in anti-occlusion and anti-sparsity as well as their accuracy in measurement. The data and code are shared at https://github.com/SCVision/LongarmLiDAR.

    DOI
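
    To make the adaptive-resolution idea concrete, here is a minimal sketch of ours (not the released code, which is at the GitHub link above): choose each angular step so that the arc length between adjacent samples on the surface stays roughly constant, shrinking the step as range grows. The function name and all thresholds are assumptions.

        import numpy as np

        def next_angular_step(last_range_m, base_step_deg=0.5,
                              target_spacing_m=0.02, min_step_deg=0.05):
            """Keep adjacent samples ~target_spacing_m apart on the surface
            (arc length = range * angular step), within the step limits."""
            if last_range_m <= 0:
                return base_step_deg  # no return yet: fall back to base step
            step_deg = np.degrees(target_spacing_m / last_range_m)
            return float(np.clip(step_deg, min_step_deg, base_step_deg))

        # A surface 10 m away gets a ~0.11 deg step, while one at 1 m is
        # clamped to the 0.5 deg base step.
        print(next_angular_step(10.0), next_angular_step(1.0))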

  • Pre-robotic Navigation Identification of Pedestrian Crossings and Their Orientations

    Ahmed Farid, Takafumi Matsumaru

    Field and Service Robotics     73 - 84  2021.01  [Refereed]

    DOI

  • Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction

    Xin He, Takafumi Matsumaru

    Sensors   21 ( 1 )  2020.12  [Refereed]  [International journal]

    Summary:

    This paper introduces a system that can estimate the deformation process of a deformed flat object (a folded plane) and generate the input data for a robot with human-like dexterous hands and fingers to reproduce the same deformation on another similar object. The system is based on processing RGB data and depth data with three core techniques: a weighted graph clustering method for non-rigid point matching and clustering; a refined region-growing method for plane detection on depth data based on an offset error we define; and a novel sliding checking model to check the bending line and adjacency relationship between each pair of planes. Through evaluation experiments, we show how the core techniques improve on conventional studies. Applying our approach to different deformed papers, the entire system is confirmed to have around 1.59 degrees of average angular error, similar to the smallest angular discrimination of the human eye. As a result, for the deformation of a flat object caused by folding, if the system can get at least one feature-point cluster on each plane, it can obtain spatial information of each bending line and each plane with acceptable accuracy. The subject of this paper is a folded plane, but we will develop it into robotic reproduction of general object deformation.

    DOI
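
    As a rough illustration of the plane-detection step, here is a plain region-growing sketch under our own simplifications (the paper's refined method and its offset-error definition differ): grow from a seed pixel over an HxWx3 array of 3D points, accepting neighbours that stay close to a candidate plane n·p + d = 0.

        import numpy as np

        def grow_plane(points, seed, normal, d, offset_thresh=0.005):
            h, w, _ = points.shape
            visited = np.zeros((h, w), dtype=bool)
            stack, region = [seed], []
            while stack:
                y, x = stack.pop()
                if visited[y, x]:
                    continue
                visited[y, x] = True
                # signed point-to-plane distance as the offset error
                if abs(np.dot(normal, points[y, x]) + d) > offset_thresh:
                    continue  # off the candidate plane: stop growing here
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                        stack.append((ny, nx))
            return region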

  • An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel

    Takafumi Matsumaru, Ami Morikawa

    Sensors (MDPI AG (Multidisciplinary Digital Publishing Institute)) (ISSN 1424-8220; CODEN: SENSC9)   20 ( 11 )  2020.05  [Refereed]

    Summary:

    This paper introduces an object model and an interaction method for a simulated experience of pottery on a potter's wheel. First, we propose a layered cylinder model for a 3D object representing the pottery on a potter's wheel. Second, we set three kinds of deformation functions to form the object model from an initial state to a bowl shape: shaping the external surface, forming the inner shape (deepening the opening and widening the opening), and reducing the total height. Next, as the interaction method between a user and the model, we prepare a simple but similar method to actual hand-finger operations on pottery on a potter's wheel, in which the index-finger movement takes care of the external surface and the total height, and the thumb movement makes the inner shape. These are implemented in the three-dimensional aerial image interface (3DAII) developed in our laboratory to build a simulated experience system. We confirm the operation of the proposed object model (layered cylinder model) and the functions of the prepared interaction method through a preliminary evaluation with participants. The participants were asked to make three kinds of bowl shapes (cylindrical, dome-shaped, and flat-type) and then answered a survey (maneuverability, visibility, and satisfaction). All participants could make something like the three kinds of bowl shapes in less than 30 min from their first touch.

    DOI
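
    A toy sketch of how a layered-cylinder model can be deformed by a fingertip (our illustration: the Gaussian influence profile and all parameters are assumptions; the paper defines three specific deformation functions):

        import numpy as np

        class LayeredCylinder:
            """Virtual clay as a stack of thin disks, one radius per layer."""
            def __init__(self, height=0.12, layers=60, radius=0.05):
                self.z = np.linspace(0.0, height, layers)  # layer heights (m)
                self.r = np.full(layers, radius)           # layer radii (m)

            def press(self, finger_z, finger_r, width=0.01, gain=0.3):
                """Shape the external surface: pull radii near finger_z toward
                finger_r; clay is only pushed inward, never pulled out."""
                influence = np.exp(-((self.z - finger_z) / width) ** 2)
                target = np.minimum(self.r, finger_r)
                self.r += gain * influence * (target - self.r)

        bowl = LayeredCylinder()
        bowl.press(finger_z=0.06, finger_r=0.03)  # press inward at mid height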

  • Intuitive Control of Virtual Robots using Transformed Objects as Multiple Viewports

    Rajeevlochana G. Chittawadigi, Takafumi Matsumaru, Subir Kumar Saha

    2019 IEEE International Conference on Robotics and Biomimetics (IEEE Robio 2019) [Dali, Yunnan, China] (2019.12.6-8)     822 - 827  2019.12  [Refereed]

    Summary:

    In this paper, the integration of the Leap Motion controller with the RoboAnalyzer software is reported. Leap Motion is a vision-based device that tracks the motion of human hands; its output was processed and used to control a virtual robot model in RoboAnalyzer, an established robot simulation software. For intuitive control, the robot model was copied and transformed to be placed at four different locations, so that the user watches four different views in the same graphics environment. This novel method avoids multiple windows or viewports and was observed to have a marginally better rendering rate. Several trials of picking up cylindrical objects (pegs), moving them, and placing them in cylindrical holes were carried out, and the manipulation was found to be intuitive, even for a novice user.

    DOI
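
    The transformed-copies idea can be sketched as follows (our illustration; RoboAnalyzer's internals are not shown here, so the view rotations and names are assumptions): each on-screen copy is the same vertex set under a different fixed rotation, so a single render pass shows four views at once.

        import numpy as np

        def rot_x(deg):
            a = np.radians(deg)
            return np.array([[1, 0, 0],
                             [0, np.cos(a), -np.sin(a)],
                             [0, np.sin(a), np.cos(a)]])

        def rot_y(deg):
            a = np.radians(deg)
            return np.array([[np.cos(a), 0, np.sin(a)],
                             [0, 1, 0],
                             [-np.sin(a), 0, np.cos(a)]])

        # one fixed rotation per copy: front, right side, top, isometric
        COPY_ROTATIONS = {
            "front": np.eye(3),
            "side": rot_y(90),
            "top": rot_x(-90),
            "iso": rot_x(-30) @ rot_y(45),
        }

        def place_copies(vertices):
            """vertices: Nx3 model points -> one rotated copy per view (the
            translations that spread the copies across the screen are
            omitted for brevity)."""
            return {name: vertices @ R.T for name, R in COPY_ROTATIONS.items()}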


Books and Other Publications

  • "Design and Evaluation of Handover Movement Informing Reciever of Weight Load", in S. Bandyopadhyay, G. Saravana Kumar, et al (Eds): "Machines and Mechanisms"

    Takafumi Matsumaru

    Narosa Publishing House (New Delhi, India)  2011.11 ISBN: 9788184871920

  • "Study on Handover Movement Informing Receiver of Weight Load as Informative Motion of Human-friendly Robot", in Salvatore Pennacchio (ed.): "Emerging Technologies, Robotics and Control Systems - Third edition"

    Takafumi Matsumaru, Shigehisa Suzuki

    INTERNATIONALSAR (Palermo, Italy, EU)  2009.06 ISBN: 9788890192883

  • "Mobile Robot with Preliminary-announcement and Indication Function of Upcoming Operation just after the Present", in Salvatore Pennacchio (ed.): "Recent Advances in Control Systems, Robotics and Automation- Third edition Volume 2"

    Takafumi Matsumaru

    INTERNATIONALSAR (Palermo, Italy, EU)  2009.01 ISBN: 9788890192876

  • "生体機械機能工学(バイオメカニズム学会編 バイオメカニズム・ライブラリー)"

    松丸隆文

    東京電機大学出版局  2008.10 ISBN: 9784501417505

  • "Chapter 18 - Mobile Robot with Preliminary-Announcement and Indication of Scheduled Route and Occupied Area using Projector", in Aleksandar Lazinica (ed.): "Mobile Robots Motion Planning, New Challenges"

    Takafumi Matsumaru

    I-Tech Education and Publishing (Vienna, Austria, EU)  2008.07 ISBN: 9783902613356


Works

  • Waseda Open Innovation Forum 2021 (WOI’21) (Sponsor: Research Organization for Open Innovation Strategy), [hosted virtually]

    Bio-Robotics & Human-Mechatronics (T.Matsumaru) Laboratory  Artistic work 

    2021.03

  • Waseda Open Innovation Forum 2020 (WOI’20) (Sponsor: Research Organization for Open Innovation Strategy), [Waseda Arena]

    T.Matsumaru Laboratory  Artistic work 

    2020.03

  • International Robot Exhibition 2017 (iREX 2017) RT Plaza, [Tokyo International Exhibition Center], RT09, (2017.11.29-12.02).

    Bio-Robotics & Human-Mechatronics Laboratory, Waseda University  Artistic work 

    2017.11 - 2017.12

    Summary:

    (1) 3DAHII (3D Aerial Holographic Image Interface).
    [*] Asyifa Imanda SEPTIANA (M2), Duc THAN (M2), Ahmed FARID (M2), Kazuki HORIUCHI (M1), Takafumi MATSUMARU.

  • International Robot Exhibition 2015 (iREX 2015) RT Plaza, [Tokyo International Exhibition Center], RT-04, (2015.12.02-12.05).

    Bio-Robotics & Human-Mechatronics (T. Matsumaru) Laboratory, Graduate School of Information, Production and Systems, Waseda University  Artistic work 

    2015.12

    Summary:

    (1) IDAT-3 (Image-projective Desktop Arm Trainer).
    (2) CSLSS-1 (Calligraphy-Stroke Learning Support System).

  • International Home Care & Rehabilitation Exhibition 2014 (H.C.R.2014), [Tokyo International Exhibition Center], 5-16-05, (2014.10.01-10.03).

    T.Matsumaru laboratory  Artistic work 

    2014.10


Awards

  • Senior Member

    2021.02   IEEE  

  • Excellent Poster Presentation Award (ISIPS 2018)

    2018.11   ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)   "Fingertip pointing interface by hand detection using Short range depth camera"

    Winner: Kazuki Horiuchi, Takafumi Matsumaru

  • Excellent Oral Presentation Award (ISIPS 2018)

    2018.11   ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)   "Intuitive Control of Virtual Robots using Leap Motion Sensor"

    Winner: Rajeevlochana G. Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

  • Excellent Paper Award (ISIPS 2017)

    2017.11   11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan]   "Usability Study of Aerial Projection of 3D Hologram Object in 3D Workspace"

    Winner: Septiana Asyifa I, Jiono Mahfud, Kazuki Horiuchi, Takafumi Matsumaru

  • Excellent Paper Award (ISIPS 2016)

    2016.11   10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)  

    Winner: R. P. Joshi, P. Sharma, R. A. Boby, S. K. Saha, T. Matsumaru


Research Projects

  • 2D-Image/3D-Motion Mapping Methods for Human-Robot Interaction

    Project Year: 2017.04 - 2020.03

    Summary:

    Aiming to establish a mutual mapping method between 2D images and 3D motions, we are approaching the essence of the problem through the following three case studies.

    (1) Three-dimensional aerial image interface (3DAII) (mainly motion → image): As an application of a new interface whose feature is that the user can directly manipulate a 3D image projected in mid-air with the fingers, we are attempting to reproduce the process of making a vessel from clay through potter's-wheel work (shaping and trimming). The virtual object is represented with the Unity game engine (ver. 5.6.1f1), and part of the interaction function that reproduces forming has been realized by deforming the virtual object with the 3D hand-finger motion detected by a Leap Motion sensor.

    (2) Cursor operation by air pointing (mainly motion → image): To switch smoothly between key input and cursor operation in computer work, we proposed a system that operates the cursor in a 2D image using only the 3D motion of a fingertip, without taking the wrists off the keyboard. We confirmed that it not only reduces arm fatigue but also makes the operation intuitive, because the plane in which the cursor moves matches the direction in which the fingertip moves. (See the sketch after this summary.)

    (3) Sorting PET bottles by brand (mainly image → motion): Aiming at a robot-arm system that picks up a designated brand of PET bottle being cooled in a water tank, we plan the 3D motion of the robot arm from the 2D images of an RGB camera. Since objects under water are recognized through the water surface, active sensing such as SLPP (structured-light pattern projection) or ToF (time of flight) is difficult, so a passive RGB camera is used. We confirmed that object segmentation and brand classification can be performed by machine learning.

    Progress: (1) 3DAII: an original article has been submitted to a journal and is under review; by accumulating experience through various applications, we are approaching design principles for interaction between 2D and 3D. (2) Air pointing: literature surveys have clarified the differences from conventional methods and the novelty of the proposal, and performance measurements of the basic functions and comparison experiments with conventional methods were carefully designed and carried out. (3) PET-bottle sorting: there are few attempts to recognize and pick up randomly piled objects under water from above the surface, so the research subject itself is novel.

    Future plans: (1) 3DAII: complete the reproduction of potter's-wheel work at an early stage; as the next application, we plan the 3D-image reproduction of a 3D moving object (feasibility already confirmed); we also hope the journal paper now under review is settled soon. (2) Air pointing: comparison experiments with conventional methods (computer mouse, touch pad, touch panel, etc.) are being conducted; we will organize the data and prepare a journal submission. (3) PET-bottle sorting: we will continue to explore object recognition based on 2D camera images for generating the robot's 3D motion. Interesting issues remain, such as (a) how the accuracy of object segmentation and brand classification changes with the visible fraction of an object under overlap (e.g., 80%, 50%, 30%) and with the visible part (e.g., head, middle, bottom), and (b) efficient ways of generating training images, and the quantity required, for machine learning of a new brand.
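
    As a toy illustration of the air-pointing mapping in case study (2) (our sketch; the sensor and screen interfaces are hypothetical placeholders, and the sensitivity constant is an assumption):

        SENS = 2000.0  # pixels of cursor travel per meter of fingertip travel

        def fingertip_to_cursor(prev_tip, tip, cursor):
            """tip: (x, y, z) fingertip position in meters; cursor: (px, py).
            The fingertip's plane of motion maps directly onto the screen
            plane, which is what makes the operation feel intuitive."""
            dx = (tip[0] - prev_tip[0]) * SENS   # left-right finger -> cursor x
            dy = -(tip[1] - prev_tip[1]) * SENS  # up-down finger -> cursor y
            return (cursor[0] + dx, cursor[1] + dy)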

  • Functional advance and various realization of step-on interface (SOI)

    Project Year: 2011.04 - 2014.03

    Summary:

    This project aimed to make the step-on interface (SOI) for operating robotic and mechatronic systems both more functional (higher reliability and higher accuracy) and more diversified (realization in various forms). For high functionality, camera-image processing (recognition of a projection screen and a light spot using OpenCV) and depth-data processing (using a three-dimensional depth sensor) were achieved. For diversification, the following were realized: on-screen operation by on/off gestures of a laser pointer (click/drag operation), real-time follow-up of a projection screen by pan-tilt-zoom camera control, recognition of contact/non-contact operation of a fingertip against the background based on depth data, and application to a virtual keyboard and a virtual xylophone.
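
    For flavor, a minimal OpenCV sketch of the light-spot recognition step mentioned above, assuming a red laser pointer; the HSV thresholds and function name are illustrative, and the project's actual processing also recognizes the projection screen itself:

        import cv2

        def find_laser_spot(frame_bgr):
            """Return (x, y) of a red laser spot in a camera frame, or None."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            # red wraps around the hue axis, so two hue ranges are combined
            m1 = cv2.inRange(hsv, (0, 120, 200), (10, 255, 255))
            m2 = cv2.inRange(hsv, (170, 120, 200), (180, 255, 255))
            mask = cv2.medianBlur(m1 | m2, 5)  # suppress single-pixel noise
            mom = cv2.moments(mask, binaryImage=True)
            if mom["m00"] == 0:
                return None  # no sufficiently bright red pixels in this frame
            return (mom["m10"] / mom["m00"], mom["m01"] / mom["m00"])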

  • Robot with step-on interface (JST)

    Project Year: 2007 -

  • Development of HMD with gaze-tracking function for ALS patients and realization of high-performance bilateral communication (Shizuoka University)

    Project Year: 2005 - 2006

  • Trackless travelling mobile platform (Yazaki Kako Corp.)

    Project Year: 2004 -


Presentations

  • Chinese Number Gestures Recognition using Finger Joint Detection and Hand Shape Description

    Yingchuang YANG, Takafumi MATSUMARU

    ISIPS 2020 (14th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan] (12-13 November, 2020)

    Presentation date: 2020.11

    Event date: 2020.11

  • Recognition of Football’s Handball Foul Based on Depth Data in Real-time

    Pukun JIA, Takafumi MATSUMARU

    ISIPS 2020 (14th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan] (12-13 November, 2020)

    Presentation date: 2020.11

    Event date: 2020.11

  • Image Segmentation and Brand Recognition in the Robot Picking Task

    Chen Zhu, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS (Kitakyushu)

    Presentation date: 2018.11

    Summary:

    Computer-vision-guided picking systems have been developed for decades; however, picking multiple randomly ordered objects is still not handled well by existing software. In this research, six brands of randomly placed drinking bottles are to be picked up using a 6-degree-of-freedom robot. The bottles need to be classified by brand before picking them from the container. In this article, Mask R-CNN, a deep-learning-based image segmentation network, is used to process the image taken by a normal camera, and Inception v3 is used for the brand-recognition task. The Mask R-CNN is trained on the COCO dataset to detect the bottles and generate a mask on each. For the brand-recognition task, 150-200 images are first taken or found for each brand and augmented to 1000 images per brand. As a result, the segmented images can be labeled with the brand name with at least 80% accuracy in the experimental environment.
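
    A condensed sketch of the two-stage pipeline described above, using off-the-shelf torchvision models as stand-ins (the study fine-tuned its own networks; the score threshold, input handling, and class-id filtering here are illustrative assumptions):

        import torch
        import torchvision

        # Stage 1: COCO-trained Mask R-CNN proposes bottle regions.
        segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(
            weights="DEFAULT").eval()
        # Stage 2: a classifier labels each crop's brand; in the study this
        # was Inception v3 fine-tuned on ~1000 augmented images per brand.
        brand_classifier = torchvision.models.inception_v3(
            weights="DEFAULT").eval()

        COCO_BOTTLE = 44  # "bottle" id in the 91-class COCO label map

        @torch.no_grad()
        def brand_of_each_bottle(image):  # image: 3xHxW float tensor in [0, 1]
            out = segmenter([image])[0]
            keep = (out["labels"] == COCO_BOTTLE) & (out["scores"] > 0.8)
            brands = []
            for box in out["boxes"][keep].round().long():
                x0, y0, x1, y1 = box.tolist()
                crop = image[:, y0:y1, x0:x1][None]
                crop = torch.nn.functional.interpolate(
                    crop, size=(299, 299), mode="bilinear")  # Inception input
                brands.append(brand_classifier(crop).argmax(1).item())
            return brands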

  • Control Mobile Robot using Single 2d-Camera with New Proposed Camera Calibration Method

    Haitham Al Jabri, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS (Kitakyushu)

    Presentation date: 2018.11

    Summary:

    This paper presents a summary of mobile-robot control using 2D-vision feedback with a newly proposed camera calibration method. The main goal is to analyze 2D-vision feedback and use it as the main pose feedback of a mobile robot. We use a mobile robot with two active omni-wheels and a single 2D webcam in our experiments. Our main approach is to tackle the present limitations of using feature points from a single 2D camera for pose estimation, such as non-static environments and data stability. The results discuss the issues and point out the strengths and weaknesses of the techniques used. First, we use ORB (Oriented FAST and Rotated BRIEF) feature-point detection and a BF (Brute-Force) matcher to detect and match points in different frames. Second, we use FAST (Features from Accelerated Segment Test) corners and LK (Lucas-Kanade) optical flow to detect corners and track their flow across frames. Those points and corners are then used for pose estimation through an optimization process with (a) Zhang's calibration method using a chessboard pattern and (b) our proposed method using reinforcement learning in offline mode.
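
    The two feature pipelines named in the summary can be sketched with standard OpenCV calls (parameters are illustrative; the pose-estimation and calibration steps that follow them are omitted):

        import cv2
        import numpy as np

        orb = cv2.ORB_create(nfeatures=500)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        def match_orb(img0, img1):
            """ORB keypoints + brute-force matching between grayscale frames."""
            k0, d0 = orb.detectAndCompute(img0, None)
            k1, d1 = orb.detectAndCompute(img1, None)
            return k0, k1, sorted(bf.match(d0, d1), key=lambda m: m.distance)

        fast = cv2.FastFeatureDetector_create(threshold=25)

        def track_fast_lk(img0, img1):
            """FAST corners in img0 tracked into img1 by Lucas-Kanade flow."""
            pts0 = cv2.KeyPoint_convert(fast.detect(img0, None))
            pts0 = pts0.reshape(-1, 1, 2).astype(np.float32)
            pts1, status, _err = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
            ok = status.ravel() == 1
            return pts0[ok], pts1[ok]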

  • Intuitive Control of Virtual Robots using Leap Motion Sensor

    Rajeevlochana Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS (Kitakyushu)

    Presentation date: 2018.11

    Summary:

    Serial robots used in industry can be controlled by various means, such as joint and Cartesian jogging using dedicated teach pendants, or by offline and online programming in a software environment. They can also be controlled using the master-slave manipulation technique, where an exoskeleton acts as the master and the robot as the slave, mimicking the motion of the exoskeleton worn by a user. A recently developed sensor named Leap Motion can also detect the motion of a user's hands with sub-millimeter accuracy. In the proposed work, a Leap Motion sensor has been used to track one of the user's hands. The incremental motion of the tip of the index finger is used as a Cartesian increment of the end-effector of a virtual robot in the RoboAnalyzer software. Currently, work is underway on detecting the orientation of two or three fingers and accordingly controlling the orientation of the robot's end-effector. An exoskeleton has to be worn by the user, and excessive usage may cause fatigue. The proposed application of Leap Motion relieves the user of that fatigue while retaining good accuracy, and hence can serve as an intuitive control method.
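
    The incremental mapping described above reduces to a few lines (our sketch; the sensor and robot interfaces are hypothetical placeholders standing in for the Leap Motion SDK and RoboAnalyzer, and the damping constant is an assumption):

        SCALE = 0.5  # hand motion damped before being applied to the robot
        prev_tip = None

        def on_frame(tip_xyz, robot):
            """Per-frame callback: tip_xyz is the index-fingertip position
            (x, y, z) in sensor coordinates; the frame-to-frame difference
            becomes a Cartesian jog of the robot's end-effector."""
            global prev_tip
            if prev_tip is not None:
                dx = [(a - b) * SCALE for a, b in zip(tip_xyz, prev_tip)]
                robot.jog_cartesian(dx)  # hypothetical increment call
            prev_tip = tip_xyz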


Specific Research

  • Research on advancing human-robot collaborative work

    2020  

    Summary:

    We advanced research and development aimed at enhancing and deepening human-robot coexistence, collaboration, and interaction (physical and informational). [1] An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel. Sensors 20(11): 3091 (2020). [2] Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction. Sensors 21(1): 105 (2021).

  • Research on advancing human-robot interaction

    2020  

    Summary:

    We advanced research and development aimed at enhancing and deepening human-robot coexistence, collaboration, and interaction (physical and informational). [1] An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel. Sensors 20(11): 3091 (2020). [2] Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction. Sensors 21(1): 105 (2021).

  • Outdoor pedestrian-navigation robotic system characterized by passing sidewalks and crossing intersections

    2018  

    Summary:

    We studied path planning for pedestrians in outdoor environments using a two-dimensional digital map. The two-dimensional digital map is obtained on the network, like Google Maps and OpenStreetMap. However, it does not record all the data about crosswalks and pedestrian paths, for example outside urban areas. Therefore, a path planning should be realized that does not depend on preliminarily recorded map data for pedestrians, or on a large amount of data, for example to execute SLAM (simultaneous localization and mapping). Once given the departure and the destination, the system gets the map data around and between them. First, it performs image processing (contour detection) and visually recognizes city blocks. Next, graph theory is applied to deduce the pedestrian path from the departure to the destination. In trials using actual map data, it was possible to plan a reasonable path, including which side of the road to walk on, with a 70-80% success rate. In the future, we plan to detect pedestrian crossings and footbridges from satellite images and merge them into the graph data, and to guide pedestrians using a mobile robot in actual environments.
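
    The first processing step (recognizing city blocks by contour detection on the rendered map) can be sketched as follows (our illustration with assumed thresholds; the actual study derives a pedestrian graph from the recognized blocks):

        import cv2
        import numpy as np

        def block_contours(map_bgr):
            """Detect city-block outlines in a rendered 2D map image."""
            gray = cv2.cvtColor(map_bgr, cv2.COLOR_BGR2GRAY)
            _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
            contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            return [c for c in contours if cv2.contourArea(c) > 500]

        def free_space_mask(map_bgr):
            """255 where a pedestrian may walk, 0 inside (inflated) blocks."""
            mask = np.full(map_bgr.shape[:2], 255, np.uint8)
            cv2.drawContours(mask, block_contours(map_bgr), -1, 0, cv2.FILLED)
            # inflating blocks keeps the path in sidewalk-width corridors
            return cv2.erode(mask, np.ones((7, 7), np.uint8))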

  • Research on methods for teaching three-dimensional motions to humans using two-dimensional projected images

    2016  

    Summary:

    Research and development activities have continued in order to deepen human-robot interaction technology in the field of robotics and mechatronics, with the following results. (1) Visual SLAM using an RGB-D sensor: ORB-SHOT SLAM, a trajectory correction by 3D loop closing based on the Bag-of-Visual-Words (BoVW) model for RGB-D visual SLAM, has been newly proposed. --> JRM article (to appear). (2) Learning system using an RGB-D sensor and a projector: calibration and statistical learning techniques for building an interactive screen for children have been proposed, and trials in a school have been done. --> IEEE/SICE SII conference paper (presented) and IJARS article (to appear). (3) Learning system using an LM sensor and a projector: a calligraphy-stroke learning support system using a projector and a motion sensor has been proposed. --> JACIII article (to appear). (4) Interactive interface using an LM sensor and a 3D hologram: interactive aerial projection of a 3D hologram object has been proposed. --> IEEE/ROBIO conference paper (presented).

  • Touch-interaction technology under image projection using depth-image information

    2015  

    Summary:

    <1> Research and Development of a Near-field Touch Interface using a Time-of-flight Camera
    The purpose of this study is to apply a three-dimensional image sensor, a time-of-flight camera, to a projector-sensor system, to achieve the basic functions (clicking, dragging, and sliding) of a conventional touch interface and to expand it with new functions such as finger-direction detection. The research items are as follows: (1) a near-field hand extraction method, (2) high-accuracy touch/hover detection, (3) an integrated and complete projector-sensor system, and (4) evaluation experiments. (A minimal sketch of the touch/hover decision follows this summary.)

    <2> Research and Development of a Calligraphy-Brushwork Learning Support System
    A calligraphy learning support system was proposed for supporting brushwork learning by using a projector. The system was designed to provide three kinds of training styles according to the learner's ability: copying training, tracing training, and a combination of both. In order to instruct three-dimensional brushwork such as writing speed, pressure force, and orientation of the brush, we proposed an instruction method that presents information about the brush tip. This method visualizes the position, orientation, and moving direction of the brush. A preliminary learning experiment was performed and the efficiency of the proposed method was examined through the experimental results.
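
    A minimal sketch of the touch/hover decision in study <1> (our illustration; the thresholds are assumptions, and the actual pipeline also extracts the near-field hand region and the finger direction):

        TOUCH_MM, HOVER_MM = 10, 40  # illustrative height bands above surface

        def classify_fingertip(depth_mm, background_mm, tip_px):
            """Compare the live depth frame against a background depth map
            captured once without hands; tip_px is a (row, col) pixel."""
            gap = float(background_mm[tip_px] - depth_mm[tip_px])
            if gap < TOUCH_MM:
                return "touch"   # fingertip effectively on the surface
            return "hover" if gap < HOVER_MM else "far"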


 

Syllabus
