Updated on 2024/12/21


 
MATSUMARU, Takafumi
 
Affiliation: Faculty of Science and Engineering, Graduate School of Information, Production and Systems
Job title: Professor
Degree: Doctor of Engineering (1998.03, Waseda University)

Research Experience

  • 2010.09 - Now
    Waseda University   Faculty of Science and Engineering, Graduate School of Information, Production and Systems   Professor

  • 2023.07 - 2023.09
    Warsaw University of Technology   Doctoral School   Visiting Professor

  • 2023.03 - 2023.04
    Warsaw University of Technology   Doctoral School   Visiting Professor

  • 2010.09 - 2011.03
    Shizuoka University   Faculty of Engineering, Graduate School of Engineering   Visiting Professor

  • 1999.04 - 2010.08
    Shizuoka University   Faculty of Engineering, Department of Mechanical Engineering   Associate Professor

  • 2004.04 - 2005.03
    Shizuoka Institute of Science and Technology   Faculty of Science and Technology, Department of Mechanical Engineering   Lecturer

  • 2003.04 - 2003.12
    Shizuoka Institute of Science and Technology [Fukuroi, Japan]   Part-time Professor   Robotics and Mechatronics

  • 2002.04 - 2003.03
    Laboratoire Systèmes Complexes (LSC) - CNRS [Evry, France]   Invited Professor   Robotics and Mechatronics

  • 1998.04 -
    Shizuoka Industrial Research Institute [Shizuoka, Japan]   Visiting Fellow   Robotics and Mechatronics

  • 1994.04 -
    Toshiba Corporation [Kawasaki, Japan]   Senior Researcher   Robotics and Mechatronics

  • 1987.04 -
    Toshiba Corporation [Kawasaki, Japan]   Researcher   Robotics and Mechatronics


Education Background

  • - 1987.03
    Waseda University   Graduate School, Division of Science and Engineering   Mechanical Engineering, Biological Control Engineering

  • - 1985.03
    Waseda University   Faculty of Science and Engineering   Department of Mechanical Engineering

  • - 1981.03
    Kaisei Senior High School

  • - 1978.03
    Kaisei Junior High School

  • - 1975.03
    Higashi-Kioiwa Elementary School

Professional Memberships

  • Society of Biomechanisms Japan (SOBIM)

  • The Robotics Society of Japan (RSJ)

  • The Society of Instrument and Control Engineers (SICE)

  • The Japan Society of Mechanical Engineers (JSME)

  • Human Interface Society (HIS)

  • The Virtual Reality Society of Japan (VRSJ)

  • Society of Automotive Engineers of Japan, Inc. (JSAE)

  • IEEE (Institute of Electrical and Electronics Engineers, Inc.)

  • IAENG (International Association of Engineers)


Research Areas

  • Database / Rehabilitation science / Intelligent robotics / Mechanics and mechatronics / Robotics and intelligent system

Research Interests

  • Bioengineering

  • Robotics

  • Human-Mechatronics

  • Bio-Robotics

Awards

  • WASEDA e-Teaching Award Good Practice Award (Course: Human-Robot Interaction)

    2023.07   Center for Higher Education Studies, Waseda University   12th WASEDA e-Teaching Award (2022 Spring / Fall Semester)

    Winner: Takafumi Matsumaru

  • Senior member

    2021.02   IEEE  

  • Excellent Poster Presentation Award (ISIPS 2018)

    2018.11   ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)   "Fingertip pointing interface by hand detection using Short range depth camera"

    Winner: Kazuki Horiuchi, Takafumi Matsumaru

  • Excellent Oral Presentation Award (ISIPS 2018)

    2018.11   ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)   "Intuitive Control of Virtual Robots using Leap Motion Sensor"

    Winner: Rajeevlochana G. Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

  • Excellent Paper Award (ISIPS 2017)

    2017.11   11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan]   Usability Study of Aerial Projection of 3D Hologram Object in 3D Workspace

    Winner: Septiana Asyifa I, Jiono Mahfud, Kazuki Horiuchi, Takafumi Matsumaru

  • Excellent Paper Award (ISIPS 2016)

    2016.11   10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)  

    Winner: R. P. Joshi, P. Sharma, R. A. Boby, S. K. Saha, T. Matsumaru

  • Excellent Poster Presentation Award (ISIPS 2015)

    2015.11   ISIPS 2015  

  • Excellent Paper Award (ISIPS 2015)

    2015.11   ISIPS 2015  

  • Excellent Paper Award (ISIPS 2014)

    2014.11   ISIPS 2014  

  • Excellent Poster Presentation Award (7th IPS-ICS)

    2013.11   7th IPS-ICS  

  • Excellent Paper Award (7th IPS-ICS)

    2013.11   7th IPS-ICS  

  • Excellent Poster Presentation Award (6th IPS-ICS)

    2012.11   6th IPS-ICS  

  • Excellent Presentation Nomination at JSME ROBOMEC 2009

    2009.05   JSME ROBOMEC 2009  

  • Excellent Presentation Nomination at JSME ROBOMEC 2008

    2008.06   JSME ROBOMEC 2008  

  • Best Session Presentation Award at SICE SI2005

    2005.12   SICE SI2005  

  • Excellent Presentation Certificate at JSME ROBOMEC 2000

    2000.05   JSME ROBOMEC 2000  


 

Papers

  • Bidirectional Planning for Autonomous Driving Framework with Large Language Model

    Zhikun Ma, Qicong Sun, Takafumi Matsumaru

    Sensors   24 ( 20 ) 6723  2024.10  [Refereed]  [International journal]  [International coauthorship]

    Authorship:Last author


    Autonomous navigation systems often struggle in dynamic, complex environments due to challenges in safety, intent prediction, and strategic planning. Traditional methods are limited by rigid architectures and inadequate safety mechanisms, reducing adaptability to unpredictable scenarios. We propose SafeMod, a novel framework enhancing safety in autonomous driving by improving decision-making and scenario management. SafeMod features a bidirectional planning structure with two components: forward planning and backward planning. Forward planning predicts surrounding agents’ behavior using text-based environment descriptions and reasoning via large language models, generating action predictions. These are embedded into a transformer-based planner that integrates text and image data to produce feasible driving trajectories. Backward planning refines these trajectories using policy and value functions learned through Actor–Critic-based reinforcement learning, selecting optimal actions based on probability distributions. Experiments on CARLA and nuScenes benchmarks demonstrate that SafeMod outperforms recent planning systems in both real-world and simulation testing, significantly improving safety and decision-making. This underscores SafeMod’s potential to effectively integrate safety considerations and decision-making in autonomous driving.

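    The backward-planning step described above reduces to scoring candidate trajectories with a learned value function and acting on the resulting distribution. Below is a minimal sketch of that idea; the function names and the softmax weighting are illustrative assumptions, not the SafeMod implementation.

```python
import numpy as np

def backward_refine(candidate_trajectories, value_fn):
    """Score candidate trajectories with a learned value function and
    return the best one plus a softmax distribution over candidates.
    Illustrative sketch only, not the paper's code."""
    scores = np.array([value_fn(t) for t in candidate_trajectories])
    probs = np.exp(scores - scores.max())   # numerically stable softmax
    probs /= probs.sum()
    return candidate_trajectories[int(np.argmax(probs))], probs
```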

  • From Seeing to Recognising – an Extended Self-Organizing Map for Human Postures Identification

    Xin He, Teresa Zielinska, Vibekananda Dutta, Takafumi Matsumaru, Robert Sitnik

    IEEE Robotics and Automation Letters   9 ( 9 ) 7899 - 7906  2024.07  [Refereed]  [International journal]  [International coauthorship]


    The article presents a dedicated method for recognizing human postures using classification and clustering options. The ultimate goal of the research is to recognise human actions based on posture sequences. Such a task imposes expectations on the developed method. For this purpose, a Sparse Autoencoder combined with a Self-Organized Map (SOM) is proposed. SOM is equipped with an additional layer of post-labeling or clustering. This entire structure is called the extended SOM. Two task-oriented modifications are applied to improve SOM performance – a dedicated angular distance measure and a neighbourhood function for updating the SOM weights. The research contribution is the concept of extended SOM, which is trained using unlabeled data and classifies or clusters the human postures. The Sparse Autoencoder preserves the characteristics of the data while reducing its dimensionality. Better classification efficiency of the developed method is demonstrated compared to other representative methods. Ablation studies illustrate how the introduced modifications improve classification results. The developed method is characterised by good resolution in distinguishing postures. A discussion of the concept's usefulness is provided at the end of the article.

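    To make the "dedicated angular distance measure" concrete, here is a minimal self-organizing map update built around an angular winner selection; the 2-D grid, learning rate, and Gaussian neighborhood are assumptions standing in for the paper's dedicated neighborhood function.

```python
import numpy as np

def angular_distance(x, w):
    """Angular distance between a pose vector x and a SOM weight w."""
    cos_sim = np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-12)
    return np.arccos(np.clip(cos_sim, -1.0, 1.0))

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """One SOM update whose winner is chosen by angular distance.
    weights: (n_nodes, dim) codebook; grid: (n_nodes, 2) node coordinates."""
    dists = np.array([angular_distance(x, w) for w in weights])
    bmu = np.argmin(dists)                      # best-matching unit
    # Gaussian neighborhood on the 2-D grid around the BMU
    g = np.exp(-np.sum((grid - grid[bmu])**2, axis=1) / (2 * sigma**2))
    weights += lr * g[:, None] * (x - weights)  # pull neighbors toward the sample
    return bmu, weights
```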

  • An End-to-End Air Writing Recognition Method Based on Transformer

    Xuhang Tan, Jicheng Tong, Takafumi Matsumaru, Vibekananda Dutta, Xin He

    IEEE Access   11   109885 - 109898  2023.10  [Refereed]  [International journal]  [International coauthorship]


    The air-writing recognition task entails the computer’s ability to directly recognize and interpret user input generated by finger movements in the air. This form of interaction between humans and computers is considered natural, cost-effective, and immersive within the domain of human-computer interaction (HCI). While conventional air-writing recognition has primarily focused on recognizing individual characters, a recent advancement in 2022 introduced the concept of writing in the air (WiTA) to address continuous air-writing tasks. In this context, we assert that the Transformer-based approach can offer improved performance for the WiTA task. To solve the WiTA task, this study formulated an end-to-end air-writing recognition method called TR-AWR, which leverages the Transformer model. Our proposed method adopts a holistic approach by utilizing video frame sequences as input and generating letter sequences as outputs. To enhance the performance of the WiTA task, our method combines the vision transformer model with the traditional transformer model, while introducing data augmentation techniques for the first time. Our approach achieves a character error rate (CER) of 29.86% and a decoding frames per second (D-fps) value of 194.67 fps. Notably, our method outperforms the baseline models in terms of recognition accuracy while maintaining a certain level of real-time performance. The contributions of this paper are as follows: Firstly, this study is the first to incorporate the Transformer method into continuous air-writing recognition research, thereby reducing overall complexity and attaining improved results. Additionally, we adopt an end-to-end approach that streamlines the entire recognition process. Lastly, we propose specific data augmentation guidelines tailored explicitly for the WiTA task. In summary, our study presents a promising direction for effectively addressing the WiTA task and holds potential for further advancements in this domain.

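    The character error rate (CER) reported above is conventionally the edit distance between hypothesis and reference letter sequences, normalized by the reference length. A self-contained sketch under that standard definition:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance over reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

print(cer("HELLO", "HELO"))  # 0.2
```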

  • A Probabilistic Approach based on Combination of Distance Metrics and Distribution Functions for Human Postures Classification

    Xin He, Vibekananda Dutta, Teresa Zielinska, Takafumi Matsumaru

    2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)     1514 - 1521  2023.08  [Refereed]  [International journal]  [International coauthorship]

    Authorship:Last author


    The article proposes a method for classifying human postures using an improved probabilistic neural network (PNN) with different distance measures and different probabilistic distribution functions (PDF). We found that the PNN with angular distance provides better accuracy, precision, and recall for the postures classification tasks than the PNN with conventional Euclidean distance. The k Nearest Neighbors (kNN) method gives slightly better prediction results than PNN, but our PNN is much faster. Such good computational performance is beneficial for posture recognition tasks that require real-time functions. An example is the needs of cobots or service robots. The article also proposes a method for selecting the distribution smoothing parameter (σ) using the sub-optimization process based on the improved Grey Wolf optimization (I-GWO) algorithm. It was found that the impact of PDF differences on the quality of the results can be reduced by choosing the best possible σ. In order to evaluate the developed method, selected human activities were recorded. The datasets were created using two different RGB-D systems located in two different laboratories.

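    A minimal sketch of the decision rule the abstract describes: a PNN whose pattern layer uses angular distance inside a Gaussian kernel, with σ as the smoothing parameter to be tuned. The kernel form and class averaging follow the textbook PNN, not necessarily the authors' exact formulation.

```python
import numpy as np

def angular_distance(x, t):
    cos_sim = np.dot(x, t) / (np.linalg.norm(x) * np.linalg.norm(t) + 1e-12)
    return np.arccos(np.clip(cos_sim, -1.0, 1.0))

def pnn_classify(x, train_X, train_y, sigma=0.2):
    """train_X: (n, d) array of patterns; train_y: (n,) labels.
    Per class, average Gaussian kernels over angular distances to the
    training patterns and pick the class with the highest density."""
    scores = {}
    for c in np.unique(train_y):
        d = np.array([angular_distance(x, t) for t in train_X[train_y == c]])
        scores[c] = np.mean(np.exp(-d**2 / (2 * sigma**2)))
    return max(scores, key=scores.get)
```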

  • Reproduction of Flat and Flexible Object Deformation using RGB-D Sensor and Robotic Manipulator

    Xin He, Vibekananda Dutta, Takafumi Matsumaru

    2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)     1257 - 1262  2022.12  [Refereed]  [International journal]  [International coauthorship]

    Authorship:Last author


    This paper introduces a method that can represent the irregular deformations of a flat flexible object and reproduce the same deformations on a similar object which is in an initial state. The work was carried out in two stages: (a) analyze and estimate the deformation of the object in the target state by a deformation estimation method, (b) planning and execution by a real-time sensor feedback loop. The novelty of the proposal concerns detecting the major deformations and ignoring the undesired deformations due to surface wrinkling, surface roughness, or noisy depth data. Different experimental settings were applied to efficiently estimate the various deformations of the handkerchief and reproduce similar deformations by the six-degree-of-freedom robotic manipulator and RGB-D vision sensor. The subject of this work is a flat flexible object, but it can be extended for more general conditions.


  • A Transformer-Based Model for Super-Resolution of Anime Image

    Shizhuo Xu, Vibekananda Dutta, Xin He, Takafumi Matsumaru

    Sensors   22 ( 21 ) 8126 - 31  2022.10  [Refereed]  [International journal]  [International coauthorship]

    Authorship:Last author


    Image super-resolution (ISR) technology aims to enhance resolution and improve image quality. It is widely applied to various real-world applications related to image processing, especially in medical images, while relatively little applied to anime image production. Furthermore, contemporary ISR tools are often based on convolutional neural networks (CNNs), while few methods attempt to use transformers that perform well in other advanced vision tasks. We propose a so-called anime image super-resolution (AISR) method based on the Swin Transformer in this work. The work was carried out in several stages. First, a shallow feature extraction approach was employed to facilitate the features map of the input image’s low-frequency information, which mainly approximates the distribution of detailed information in a spatial structure (shallow feature). Next, we applied deep feature extraction to extract the image semantic information (deep feature). Finally, the image reconstruction method combines shallow and deep features to upsample the feature size and performs sub-pixel convolution to obtain many feature map channels. The novelty of the proposal is the enhancement of the low-frequency information using a Gaussian filter and the introduction of different window sizes to replace the patch merging operations in the Swin Transformer. A high-quality anime dataset was constructed to curb the effects of the model robustness on the online regime. We trained our model on this dataset and tested the model quality. We implement anime image super-resolution tasks at different magnifications (2×, 4×, 8×). The results were compared numerically and graphically with those delivered by conventional convolutional neural network-based and transformer-based methods. We demonstrate the experiments numerically using standard peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), respectively. The series of experiments and ablation study showcase that our proposal outperforms others.

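    The PSNR figure used in the numerical comparison can be computed as below (assuming 8-bit images); for SSIM, skimage.metrics.structural_similarity is a common off-the-shelf implementation.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")    # identical images
    return 10.0 * np.log10(peak**2 / mse)
```
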
  • Deep Learning Based One-Class Detection System for Fake Faces Generated by GAN Network

    Shengyin Li, Vibekananda Dutta, Xin He, Takafumi Matsumaru

    Sensors   22 ( 20 ) 7767 - 24  2022.10  [Refereed]  [International journal]  [International coauthorship]

    Authorship:Last author


    Recently, the dangers associated with face generation technology have been attracting much attention in image processing and forensic science. The current face anti-spoofing methods based on Generative Adversarial Networks (GANs) suffer from defects such as overfitting and generalization problems. This paper proposes a new generation method using a one-class classification model to judge the authenticity of facial images for the purpose of realizing a method to generate a model that is as compatible as possible with other datasets and new data, rather than strongly depending on the dataset used for training. The method proposed in this paper has the following features: (a) we adopted various filter enhancement methods as basic pseudo-image generation methods for data enhancement; (b) an improved Multi-Channel Convolutional Neural Network (MCCNN) was adopted as the main network, making it possible to accept multiple preprocessed data individually, obtain feature maps, and extract attention maps; (c) as a first ingenuity in training the main network, we augmented the data using weakly supervised learning methods to add attention cropping and dropping to the data; (d) as a second ingenuity in training the main network, we trained it in two steps. In the first step, we used a binary classification loss function to ensure that known fake facial features generated by known GAN networks were filtered out. In the second step, we used a one-class classification loss function to deal with the various types of GAN networks or unknown fake face generation methods. We compared our proposed method with four recent methods. Our experiments demonstrate that the proposed method improves cross-domain detection efficiency while maintaining source-domain accuracy. These studies show one possible direction for improving the correct answer rate in judging facial image authenticity, thereby making a great contribution both academically and practically.

  • Methods of Generating Emotional Movements and Methods of Transmitting Behavioral Intentions: A Perspective on Human-Coexistence Robots

    Takafumi Matsumaru

    Sensors   22 ( 12 ) 4587 - 24  2022.06  [Refereed]  [International journal]

    Authorship:Lead author, Last author, Corresponding author


    The purpose of this paper is to introduce and discuss the following two functions that are considered to be important in human‐coexistence robots and human‐symbiotic robots: the method of generating emotional movements, and the method of transmitting behavioral intentions. The generation of emotional movements is to design the bodily movements of robots so that humans can feel specific emotions. Specifically, the application of Laban movement analysis, the development from the circumplex model of affect, and the imitation of human movements are discussed. However, a general technique has not yet been established to modify any robot movement so that it contains a specific emotion. The transmission of behavioral intentions is about allowing the surrounding humans to understand the behavioral intentions of robots. Specifically, informative motions in arm manipulation and the transmission of the movement intentions of robots are discussed. In the former, the target position in the reaching motion, the physical characteristics in the handover motion, and the landing distance in the throwing motion are examined, but there are still few research cases. In the latter, no groundbreaking method has been proposed that is fundamentally different from earlier studies. Further research and development are expected in the near future.

  • Mutual Communication to Shorten the Distance between Humans and Robots -- Preliminary Announcement of Robot Operation and Transmission of Robot Intention --

    Takafumi Matsumaru

    Journal of the Society of Instrument and Control Engineers (ISSN: 0453-4662)   61 ( 3 ) 203 - 208  2022.03  [Refereed]  [Invited]  [Domestic journal]

    Authorship:Lead author


  • Training a Robotic Arm Movement with Deep Reinforcement Learning

    Xiaohan Ni, Xin He, Takafumi Matsumaru

    2021 IEEE International Conference on Robotics and Biomimetics (ROBIO 2021)     --- - ---  2021.12  [Refereed]

    Authorship:Last author

  • A depth camera-based system to enable touch-less interaction using hand gestures

    R. Damindarov, C. A. Fam, R. A. Boby, M. Fahim, A. Klimchik, T. Matsumaru

    2021 International Conference "Nonlinearity, Information and Robotics" (NIR 2021)     --- - ---  2021.08  [Refereed]

    Authorship:Lead author


  • Long-arm Three-dimensional LiDAR for Anti-occlusion and Anti-sparsity Point Clouds

    Jingyu Lin, Shuqing Li, Wen Dong, Takafumi Matsumaru, Shengli Xie

    IEEE Transactions on Instrumentation and Measurement     1 - 1  2021.08  [Refereed]  [International journal]  [International coauthorship]


    Light detection and ranging (LiDAR) systems, also called laser radars, have a wide range of applications. This paper considers two problems in LiDAR data. The first problem is occlusion. A LiDAR acquires point clouds by scanning the surrounding environment with laser beams emitting from its center, and therefore an object behind another cannot be scanned. The second problem is sample-sparsity. LiDAR scanning is usually taken with a fixed angular step; consequently the sample points on an object surface at a long distance are sparse, and thus the accurate boundary and detailed surface of the object cannot be obtained. To address the occlusion problem, we design and implement a novel three-dimensional (3D) LiDAR with a long arm which is able to scan occluded objects from their flanks. To address the sample-sparsity problem, we propose an adaptive resolution scanning method which detects the object and adjusts the angular step in real time according to the density of points on the object being scanned. Experiments on our prototype system and scanning method verify their effectiveness in anti-occlusion and anti-sparsity as well as accuracy in measuring. The data and the codes are shared at https://github.com/SCVision/LongarmLiDAR.

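    A toy version of the anti-sparsity idea, shrinking the scan's angular step as measured distance grows so that sample density on a surface stays roughly constant; the scaling law and constants here are illustrative assumptions.

```python
def adaptive_angular_step(distance, base_step_deg=0.5, ref_distance=5.0,
                          min_step_deg=0.05):
    """Keep the base step for near objects, and scale it down in
    proportion to range for far objects (never below a hardware floor)."""
    step = base_step_deg * ref_distance / max(distance, ref_distance)
    return max(step, min_step_deg)
```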

  • Pre-robotic Navigation Identification of Pedestrian Crossings and Their Orientations

    Takafumi Matsumaru

    Field and Service Robotics. Springer Proceedings in Advanced Robotics, vol 16     73 - 84  2021.01  [Refereed]  [International journal]


  • Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction

    Xin He, Takafumi Matsumaru

    Sensors   21 ( 1 ) 105 - 36  2020.12  [Refereed]  [International journal]


    This paper introduces a system that can estimate the deformation process of a deformed flat object (folded plane) and generate the input data for a robot with human-like dexterous hands and fingers to reproduce the same deformation of another similar object. The system is based on processing RGB data and depth data with three core techniques: a weighted graph clustering method for non-rigid point matching and clustering; a refined region growing method for plane detection on depth data based on an offset error defined by ourselves; and a novel sliding checking model to check the bending line and adjacent relationship between each pair of planes. Through some evaluation experiments, we show the improvement of the core techniques to conventional studies. By applying our approach to different deformed papers, the performance of the entire system is confirmed to have around 1.59 degrees of average angular error, which is similar to the smallest angular discrimination of human eyes. As a result, for the deformation of the flat object caused by folding, if our system can get at least one feature point cluster on each plane, it can get spatial information of each bending line and each plane with acceptable accuracy. The subject of this paper is a folded plane, but we will develop it into a robotic reproduction of general object deformation.

  • An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter’s Wheel

    Takafumi Matsumaru, Ami Morikawa

    Sensors   20 ( 11 ) 3091 - 3091  2020.05  [Refereed]


    This paper introduces an object model and an interaction method for a simulated experience of pottery on a potter’s wheel. Firstly, we propose a layered cylinder model for a 3D object of the pottery on a potter’s wheel. Secondly, we set three kinds of deformation functions to form the object model from an initial state to a bowl shape: shaping the external surface, forming the inner shape (deepening the opening and widening the opening), and reducing the total height. Next, as for the interaction method between a user and the model, we prepare a simple but similar method for hand-finger operations on pottery on a potter’s wheel, in which the index finger movement takes care of the external surface and the total height, and the thumb movement makes the inner shape. Those are implemented in the three-dimensional aerial image interface (3DAII) developed in our laboratory to build a simulated experience system. We confirm the operation of the proposed object model (layered cylinder model) and the functions of the prepared interaction method (a simple but similar method to actual hand-finger operations) through a preliminary evaluation of participants. The participants were asked to make three kinds of bowl shapes (cylindrical, dome-shaped, and flat-type) and then they answered the survey (maneuverability, visibility, and satisfaction). All participants could make something like three kinds of bowl shapes in less than 30 min from their first touch.

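    The layered cylinder model and its three deformation functions (shaping the external surface, forming the inner shape, reducing the total height) can be pictured with the following sketch; the layer count, wall thickness, and update rules are invented for illustration.

```python
import numpy as np

class LayeredCylinder:
    """A pottery blank as a stack of thin layers, each with an outer and
    inner radius. Hypothetical re-implementation of the layered model."""
    def __init__(self, n_layers=100, height=120.0, radius=40.0):
        self.layer_h = height / n_layers
        self.outer = np.full(n_layers, radius)   # outer radius per layer
        self.inner = np.zeros(n_layers)          # inner radius (0 = solid)

    def shape_external(self, layer, target_r, strength=0.5):
        """Index-finger contact pulls the outer surface toward target_r."""
        self.outer[layer] += strength * (target_r - self.outer[layer])

    def deepen_opening(self, depth_layers, wall=5.0):
        """Thumb pressure hollows the top layers, keeping a minimum wall."""
        top = len(self.outer)
        self.inner[top - depth_layers:] = np.maximum(
            self.inner[top - depth_layers:],
            self.outer[top - depth_layers:] - wall)

    def reduce_height(self, n):
        """Pressing down removes the top n layers."""
        if n > 0:
            self.outer, self.inner = self.outer[:-n], self.inner[:-n]
```
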
  • Intuitive Control of Virtual Robots using Transformed Objects as Multiple Viewports

    Rajeevlochana G. Chittawadigi, Takafumi Matsumaru, Subir Kumar Saha

    2019 IEEE International Conference on Robotics and Biomimetics (IEEE Robio 2019) [Dali, Yunnan, China] (2019.12.6-8)     822 - 827  2019.12  [Refereed]


    In this paper, the integration of Leap Motion controller with RoboAnalyzer software has been reported. Leap Motion is a vision based device that tracks the motion of human hands, which was processed and used to control a virtual robot model in RoboAnalyzer, an established robot simulation software. For intuitive control, the robot model was copied and transformed to be placed at four different locations such that the user watches four different views in the same graphics environment. This novel method was used to avoid multiple windows or viewports and was observed to have a marginally better rendering rate. Several trials of picking up cylindrical objects (pegs) and moving them and placing in cylindrical holes were carried out and it was found that the manipulation was intuitive, even for a novice user.

  • Dynamic Hand Gesture Recognition for Robot Arm Teaching based on Improved LRCN Model

    Kaixiang Luan, Takafumi Matsumaru

    2019 IEEE International Conference on Robotics and Biomimetics (IEEE Robio 2019) [Dali, Yunnan, China] (2019.12.6-8)     1268 - 1273  2019.12  [Refereed]


    In this research, we focus on finding a new method of human-robot interaction in an industrial environment. A vision-based dynamic hand gesture recognition system has been proposed for a robot arm picking task. 8 dynamic hand gestures are captured for this task with a 100 fps high-speed camera. Based on the LRCN model, we combine the MobileNets (V2) and LSTM for this task: the MobileNets (V2) for extracting the image features and recognizing the gestures, then the Long Short-Term Memory (LSTM) architecture for interpreting the features across time steps. Around 100 samples are taken for each gesture for training at first; then, the samples are augmented to 200 samples per gesture by data augmentation. Results show that the model is able to learn gestures varying in duration and complexity, and gestures can be recognized in 88 ms with 90.62% accuracy in the experiment on our hand gesture dataset.

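    A compact PyTorch sketch of the improved-LRCN structure named above: per-frame MobileNetV2 features fed to an LSTM across time steps. The hidden size, pooling, and classification head are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class GestureLRCN(nn.Module):
    """LRCN-style classifier: per-frame MobileNetV2 features -> LSTM."""
    def __init__(self, n_classes=8, hidden=256):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)
        self.cnn = backbone.features                 # per-frame feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clip):                         # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        f = self.cnn(clip.flatten(0, 1))             # (B*T, 1280, h, w)
        f = self.pool(f).flatten(1).view(b, t, -1)   # (B, T, 1280)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])                 # classify last time step

logits = GestureLRCN()(torch.randn(2, 16, 3, 224, 224))   # -> (2, 8)
```
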
  • Three-dimensional Aerial Image Interface, 3DAII

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics (JRM) (Fuji Technology Press Ltd.) (ISSN: 0915-3942(Print) / 1883-8049(Online))   31 ( 5 ) 657 - 670  2019.10  [Refereed]  [Domestic journal]


    In this paper, we introduce the three-dimensional aerial image interface, 3DAII. This interface reconstructs and aerially projects a three-dimensional object image, which can be simultaneously observed from various viewpoints or by multiple users with the naked eye. A pyramid reflector is used to reconstruct the object image, and a pair of parabolic mirrors is used to aerially project the image. A user can directly manipulate the three-dimensional object image by superimposing a user’s hand-finger or a rod on the image. A motion capture sensor detects the user’s hand-finger that manipulates the projected image, and the system immediately exhibits some reaction such as deformation, displacement, and discoloration of the object image, including sound effects. A performance test is executed to confirm the functions of 3DAII. The execution time of the end-tip positioning of a robotic arm has been compared among four operating devices: touchscreen, gamepad, joystick, and 3DAII. The results exhibit the advantages of 3DAII; we can directly instruct the movement direction and movement speed of the end-tip of the robotic arm, using the three-dimensional Euclidean vector outputs of 3DAII in which we can intuitively make the end-tip of the robotic arm move in three-dimensional space. Therefore, 3DAII would be one important alternative to an intuitive spatial user interface, e.g., an operation device of aerial robots, a center console of automobiles, and a 3D modelling system. A survey has been conducted to evaluate comfort and fatigue based on ISO/TS 9241-411 and ease of learning and satisfaction based on the USE questionnaire. We have identified several challenges related to visibility, workspace, and sensory feedback to users that we would like to address in the future.

  • Brand Recognition with Partial Visible Image in the Bottle Random Picking Task based on Inception V3

    Chen Zhu, Takafumi Matsumaru

    IEEE Ro-Man 2019 (The 28th IEEE International Conference on Robot & Human Interactive Communication) [Le Meridien, Windsor Place, New Delhi, India] (14-18 Oct, 2019)     1 - 6  2019.10  [Refereed]


    In the brand-wise random-ordered drinking PET bottles picking task, the overlapping and viewing angle problem makes the accuracy of the brand recognition low. In this paper, we set the problem to increase the brand recognition accuracy and try to find out how the overlapping rate affects the recognition accuracy. By using a stepping motor and a transparent fixture, the training images were taken automatically from the bottles over 360 degrees to simulate pictures taken from various viewing angles. After that, the images are augmented with random cropping and rotating to simulate the overlapping and rotation in a real application. By using the automatically constructed dataset, the Inception V3, which was transfer-learned from ImageNet, is trained for brand recognition. By generating a random mask with a specific overlapping rate on the original image, the Inception V3 can give 80% accuracy when 45% of the object in the image is visible, or 86% accuracy when the overlapping rate is lower than 30%.

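    One plausible way to "generate a random mask with a specific overlapping rate" for the accuracy-versus-visibility experiment; the rectangular masking and aspect-ratio jitter are assumptions.

```python
import numpy as np

def apply_random_occlusion(image, overlap_rate, rng=None):
    """Zero out a random rectangle covering ~overlap_rate of the image,
    so (1 - overlap_rate) of the object stays visible."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    area = overlap_rate * h * w
    rh = min(int(np.sqrt(area * rng.uniform(0.5, 2.0))), h)  # jittered aspect
    rw = min(int(area / max(rh, 1)), w)
    y = rng.integers(0, h - rh + 1)
    x = rng.integers(0, w - rw + 1)
    masked = image.copy()
    masked[y:y + rh, x:x + rw] = 0
    return masked
```
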
  • Pre-Robotic Navigation Identification of Pedestrian Crossings & Their Orientations

    Ahmed Farid, Takafumi Matsumaru

    12th Conference on Field and Service Robotics (FSR 2019) [Tokyo, Japan], (August 29-31, 2019)     000 - 000  2019.08  [Refereed]

  • Path Planning in Outdoor Pedestrian Settings Using 2D Digital Maps

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics (JRM) (Fuji Technology Press Ltd.) (ISSN: 0915-3942(Print) / 1883-8049(Online))   31 ( 3 ) 464 - 473  2019.06  [Refereed]  [Domestic journal]


    This article presents a framework for planning sidewalk-wise paths in data-limited pedestrian environments by visually recognizing city blocks in 2D digital maps (e.g. Google Maps, OpenStreetMaps) using contour detection, then applying graph theory to infer a pedestrian path from start till finish. There are two main targeted problems; firstly, several locations around the world (e.g. suburban / rural areas) do not have recorded data on street crossings and pedestrian walkways. Secondly, the continuous process of recording maps (i.e. digital cartography) is, to our current knowledge, manual and not yet fully automated in practice. Both issues contribute towards a scaling problem in which it becomes time and effort consuming to continuously monitor and record such data on a global scale. As a result, the framework’s purpose is to produce path plans that do not depend on prerecorded (e.g. using SLAM) or data-rich pedestrian maps, thus facilitating navigation for mobile robots and people of visual impairments alike. The framework was able to produce pedestrian paths for most locations where data on sidewalks and street crossings were indeed limited, but still some challenges remain. In this article, the framework’s structure, output, and challenges are explained. Additionally, we mention some works in the literature on how to make use of such path plan effectively.

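    The framework's two core steps, recognizing city blocks as contours in a 2D map image and then applying graph search over block perimeters, might look like this OpenCV/networkx sketch; the threshold, file name, and edge weighting are placeholders.

```python
import cv2
import networkx as nx

# Detect city blocks in a map tile as contours, then connect block corners
# into a graph that a shortest-path query can traverse.
img = cv2.imread("map_tile.png")                       # hypothetical snapshot
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

G = nx.Graph()
for c in contours:
    approx = cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)
    pts = [tuple(p[0]) for p in approx]                # corners of one block
    for a, b in zip(pts, pts[1:] + pts[:1]):           # perimeter = sidewalks
        G.add_edge(a, b, weight=((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5)

# path = nx.shortest_path(G, source=start, target=goal, weight="weight")
```
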
  • Image Processing for Picking Task of Random Ordered PET Drinking Bottles

    Takafumi Matsumaru

    Journal of Robotics, Networking and Artificial Life (JRNAL) (Atlantis Press SARL) (ISSN (Online): 2352-6386 / ISSN (Print): 2405-9021)   6 ( 1 ) 38 - 41  2019.06  [Refereed]  [International journal]


    In this research, six brands of soft drinks are decided to be picked up by a robot with a monocular Red Green Blue (RGB) camera. The drinking bottles need to be located and classified with brands before being picked up. The Mask Regional Convolutional Neural Network (R-CNN), a mask generation network improved from Faster R-CNN, is trained with the Common Objects in Context (COCO) dataset to detect and generate the mask on the bottles in the image. The Inception v3 is selected for the brand classification task. Around 200 images are taken or found at first; then, the images are augmented to 1500 images per brand by using random cropping and perspective transform. The result shows that the masked image can be labeled with its brand name with at least 85% accuracy in the experiment.

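    The augmentation step (random cropping and perspective transform) can be sketched as follows; the jitter range is an assumption, and a random crop would be an analogous slice of the warped result.

```python
import cv2
import numpy as np

def perspective_augment(image, jitter=0.15, rng=None):
    """Warp an image with a random perspective to mimic viewpoint changes."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    offsets = rng.uniform(-jitter, jitter, size=(4, 2)) * [w, h]
    dst = (src + offsets).astype(np.float32)           # jittered corners
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (w, h))
```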

  • Image Processing for Picking Task of Random Ordered PET Drinking Bottles

    Chen Zhu, Takafumi Matsumaru

    The 2019 International Conference on Artificial Life and Robotics (ICAROB 2019) [B-Con PLAZA, Beppu, Japan], (January 10-13, 2019), GS2-4, pp.634-637, (2019.01.12 Sat).     634 - 637  2019.01  [Refereed]


    In this research, six brands of soft drinks are decided to be picked up by a robot with a monocular RGB camera. The drinking bottles need to be located and classified with brands before being picked up. A Mask R-CNN is pretrained with COCO datasets to detect and generate the mask on the bottles in the image. The Inception v3 is selected for the brand classification task. Around 200 images are taken, then, the images are augmented to 1500 images per brand by using random cropping and perspective transform. The results show that the masked image can be labeled with its brand name with at least 85% accuracy in the experiment.


  • Proposing Camera Calibration Method using PPO (Proximal Policy Optimization) for Improving Camera Pose Estimations

    Haitham K. Al-Jabri, Takafumi Matsumaru

    2018 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2018), [Kuala Lumpur, Malaysia], (December 12-15, 2018), Thu1-8 (2), pp.790-795, (2018.12.13).     790 - 795  2018.12  [Refereed]


    This paper highlights camera orientation estimation accuracy and precision, as well as proposing a new camera calibration technique using a reinforcement learning method named PPO (Proximal Policy Optimization) in offline mode. The offline mode is used just for extracting the camera geometry parameters that are used for improving accuracy in real-time camera pose estimation techniques. We experiment and compare two popular techniques using 2D vision feedbacks and evaluate their accuracy beside other considerations related to real applications such as disturbance cases from surrounding environment and pose data stability. First, we use feature points detection ORB (Oriented FAST and Rotated BRIEF) and BF (Brute-Force) matcher to detect and match points in different frames, respectively. Second, we use FAST (Features from Accelerated Segment Test) corners and LK (Lucas–Kanade) optical flow methods to detect corners and track their flow in different frames. Those points and corners are then used for the pose estimation through optimization process with the: (a) calibration method of Zhang using chessboard pattern and (b) our proposed method using PPO. The results using our proposed calibration method show significant accuracy improvements and easier deployment for end-user compared to the pre-used methods.

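    The first compared technique, ORB keypoint detection with brute-force matching, in its standard OpenCV form (the image files are placeholders):

```python
import cv2

# Detect ORB keypoints in two frames and match them with a brute-force
# Hamming matcher, as in the feature-based pipeline described above.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # Hamming suits ORB
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
good = matches[:50]                                     # strongest matches
```
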
  • Short Range Fingertip Pointing Operation Interface by Depth Camera

    Kazuki Horiuchi, Takafumi Matsumaru

    2018 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2018), [Kuala Lumpur, Malaysia], (December 12-15, 2018), Thu1-4 (4), pp.132-137, (2018.12.13).     132 - 137  2018.12  [Refereed]


    In this research, we proposed and implemented a finger pointing detection system with a short-range type depth camera. The most widely used pointing device for a computer interface is a mouse, and as an alternative there is a depth sensor to detect hand and finger movement. However, in the present literature, since the user must operate the cursor at a relatively large distance from the detection device, the user needs to raise his arm and make wide movements, which is inconvenient over long periods of time. To solve this usability problem, we proposed a comfortable and easy-to-use method by narrowing the distance between the keyboard and the pointing device. Next, we compared various depth sensors and selected one that can recognize even small movements. Additionally, we proposed a mapping method between the user's perceived cursor position and the real one pointed by the index finger direction. Furthermore, we compared our pointing method with a mouse and touch pad for usability, accuracy, and working speed. The results showed that users have better performance on continuous operation of character input from the keyboard and cursor pointing.

  • Integration of Leap Motion Controller with Virtual Robot Module of RoboAnalyzer

    Rajeevlochana G. Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

    9th Asian Conference on Multibody Dynamics (ACMD 2018), [Xian, China] (August 19-23, 2018)    2018.08  [Refereed]


    In this paper, an integration of Leap Motion controller with RoboAnalyzer software is proposed. Leap Motion is an inexpensive sensor which has three Infrared (IR) emitters and two IR cameras and can track 10 fingers of the two human hands. The device sends the data to the computer connected to it through its controller software, which can be accessed in a variety of programming languages. In the proposed setup, the position of the index finger of the right hand is tracked in a Visual C# Server Application and the coordinates are extracted accurately with respect to a frame attached at the center of the device. The coordinates are then sent to the Virtual Robot Module (client application), in which a Coordinate System (marker) is mapped with the input it receives. Based on the movement of the index finger, the marker moves in the Virtual Robot Module (VRM). When the marker moves closer to the end-effector of the robot, the Server application attaches the marker to the end-effector. Thereafter, any incremental Cartesian motion of the index finger is mapped to equivalent Cartesian motion of the robot end-effector. This is achieved by finding the inverse kinematics solution for the new pose of the robot EE and, of the eight solutions obtained through inverse kinematics, the solution that is closest to the current pose is selected as the appropriate solution for the new pose. The server application then updates the VRM with the new joint angles and accordingly the robot moves instantly in the software. The setup connected to a laptop running Leap Motion Controller and VRM is shown in Figure 2(a) and the workflow is illustrated in Figure 2(b).
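    The branch-selection rule in the description above (of the eight inverse-kinematics solutions, keep the one closest to the current pose) reduces to a nearest-neighbor pick in joint space; a minimal sketch:

```python
import numpy as np

def pick_closest_solution(current_q, ik_solutions):
    """From the IK solutions for the new end-effector pose, pick the joint
    vector closest to the current configuration, so the virtual robot moves
    smoothly instead of jumping between solution branches."""
    sols = np.asarray(ik_solutions)        # shape (n_solutions, n_joints)
    costs = np.linalg.norm(sols - np.asarray(current_q), axis=1)
    return sols[np.argmin(costs)]

# e.g. eight 6-DOF solutions -> the branch nearest the current pose wins
```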

  • Path Planning of Sidewalks & Street Crossings in Pedestrian Environments Using 2D Map Visual Inference

    Takafumi Matsumaru

    ROMANSY 22 - Robot Design, Dynamics and Control   584   247 - 255  2018.05  [Refereed]  [International journal]


    This paper describes a path planning framework for processing 2D maps of given pedestrian locations to provide sidewalk paths and street crossing information. The intention is to allow mobile robot platforms to navigate in pedestrian environments without previous knowledge and using only the current location and destination as inputs. Depending on location, current path planning solutions on known 2D maps (e.g. Google Maps and OpenStreetMaps) from both research and industry do not always provide explicit information on sidewalk paths and street crossings, which is a common problem in suburban/rural areas. The framework's goal is to provide path planning by means of visual inference on 2D map images and search queries through downloadable map data. The results have shown both success and challenges in estimating viable city block paths and street crossings.

  • Measuring Performance of Aerial Projection of 3D Hologram Object (3DHO)

    Asyifa I. Septiana, Jiono Mahfud, Takafumi Matsumaru

    2017 IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO 2017), [Macau SAR, China], (December 5-8, 2017)     2081 - 2086  2017.12  [Refereed]


    The Aerial Projection of 3D Hologram Object (3DHO) which we have proposed, is a hand gesture-based control system with an interactive 3D hologram object floating in mid-air as the hand movement reference. This system mainly consists of the pyramid-shaped reflector which produces 3D hologram object, the parabolic mirror, and the leap motion controller for capturing hand gesture command. The user can control or interact with 3DHO by using their finger or a baton-shaped object. This paper is focusing on the evaluation of the 3DHO by comparing it to other 3D input devices (such as joystick with a slider, joystick without the slider, and gamepad) on five different positioning tasks. We also investigate the assessment of comfort, ease of learning, and user satisfaction by questionnaire survey. From the experimentation, we learn that 3DHO is not good to use in one-dimensional workspace task, but has a good performance in two-dimensional and three-dimensional workspace tasks. From questionnaire results, we found out that 3DHO is averagely comfortable but may cause some fatigues around arm and shoulder. It is also easy to learn but not satisfying enough to use.

  • Calligraphy-Stroke Learning Support System Using Projector and Motion Sensor

    Takafumi Matsumaru, Masashi Narita

    Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII) (Fuji Technology Press) (ISSN: 1343-0130(Print) / 1883-8014(Online))   21 ( 4 ) 697 - 708  2017.07  [Refereed]


    This paper presents a newly-developed calligraphy-stroke learning support system. The system has the following functions: a) displaying a brushwork, trajectory, and handwriting, b) recording and playback of an expert's calligraphy-stroke, and c) teaching a learner a calligraphy-stroke. The system has the following features, which show our contribution: (1) It is simple and compact, built up with a sensor and a projector, so as to be easy to introduce to usual educational fields and practical learning situations. (2) Three-dimensional calligraphy-stroke is instructed by presenting two-dimensional visual information. (3) A trajectory region is generated as continuous squares calculated using a brush model based on brush position information measured by a sensor. (4) A handwriting is expressed by mapping a handwriting texture image according to an ink concentration and a brush handling state. The result of a trial experiment suggests the effectiveness of the learning support function about a letter form and a calligraphy-stroke.


  • Calibration and statistical techniques for building an interactive screen for learning of alphabets by children

    Riby Abraham Boby, Ravi Prakash, Subir Kumar Saha, Takafumi Matsumaru, Pratyusha Sharma, Siddhartha Jaitly

    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS   14 ( 3 ) 1 - 17  2017.05  [Refereed]


    This article focuses on the implementation details of a portable interactive device called Image-projective Desktop Varnamala Trainer. The device uses a projector to produce a virtual display on a flat surface. For enabling interaction, the information about a user's hand movement is obtained from a single two-dimensional scanning laser range finder in contrast with a camera sensor used in many earlier applications. A generalized calibration process to obtain exact transformation from projected screen coordinate system to sensor coordinate system is proposed in this article and implemented for enabling interaction. This permits production of large interactive displays with minimal cost. Additionally, it makes the entire system portable, that is, display can be produced on any planar surface like floor, tabletop, and so on. The calibration and its performance have been evaluated by varying screen sizes and the number of points used for calibration. The device was successfully calibrated for different screens. A novel learning-based methodology for predicting a user's behaviour was then realized to improve the system's performance. This has been experimentally evaluated, and the overall accuracy of prediction was about 96%. An application was then designed for this set-up to improve the learning of alphabets by the children through an interactive audiovisual feedback system. It uses a game-based methodology to help students learn in a fun way. Currently, it has bilingual (Hindi and English) user interface to enable learning of alphabets and elementary mathematics. A user survey was conducted after demonstrating it to school children. The survey results are very encouraging. Additionally, a study to ascertain the improvement in the learning outcome of the children was done. The results clearly indicate an improvement in the learning outcome of the children who used the device over those who did not.

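    For a planar projection surface, the calibration from sensor coordinates to projected-screen coordinates can be estimated as a homography over the calibration points; the coordinate values below are placeholders, and the homography model is an assumption consistent with a flat screen rather than the paper's exact generalized procedure.

```python
import cv2
import numpy as np

# N calibration correspondences: projected markers (screen) vs. where the
# range sensor observed the user's hand (sensor). Values are illustrative.
screen_pts = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800], [640, 400]])
sensor_pts = np.float32([[412, 93], [1370, 121], [1335, 905], [398, 871],
                         [881, 498]])

H, _ = cv2.findHomography(sensor_pts, screen_pts, cv2.RANSAC)

def sensor_to_screen(p):
    """Transform one sensor-frame point into screen coordinates."""
    q = cv2.perspectiveTransform(np.float32([[p]]), H)
    return q[0, 0]
```
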
  • ORB-SHOT SLAM: Trajectory correction by 3D loop closing based on bag-of-visual-words (BoVW) model for RGB-D visual SLAM

    Zheng Chai, Takafumi Matsumaru

    Journal of Robotics and Mechatronics   29 ( 2 ) 365 - 380  2017.04  [Refereed]


    This paper proposes the ORB-SHOT SLAM or OS-SLAM, which is a novel method of 3D loop closing for trajectory correction of RGB-D visual SLAM. We obtain point clouds from RGB-D sensors such as Kinect or Xtion, and we use 3D SHOT descriptors to describe the ORB corners. Then, we train an offline 3D vocabulary that contains more than 600,000 words by using two million 3D descriptors based on a large number of images from a public dataset provided by TUM. We convert new images to bag-of-visual-words (BoVW) vectors and push these vectors into an incremental database. We query the database for new images to detect the corresponding 3D loop candidates, and compute similarity scores between the new image and each corresponding 3D loop candidate. After detecting 2D loop closures using ORB-SLAM2 system, we accept those loop closures that are also included in the 3D loop candidates, and we assign them corresponding weights according to the scores stored previously. In the final graph-based optimization, we create edges with different weights for loop closures and correct the trajectory by solving a nonlinear least-squares optimization problem. We compare our results with several state-of-the-art systems such as ORB-SLAM2 and RGB-D SLAM by using the TUM public RGB-D dataset. We find that accurate loop closures and suitable weights reduce the error on trajectory estimation more effectively than other systems. The performance of ORB-SHOT SLAM is demonstrated by 3D reconstruction application.

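    The bag-of-visual-words scoring behind the loop-candidate query can be sketched as an L1 score over normalized word histograms (DBoW-style; the system's exact weighting may differ).

```python
import numpy as np

def bovw_vector(descriptor_words, vocab_size):
    """L1-normalized histogram of visual-word IDs for one image."""
    v = np.bincount(descriptor_words, minlength=vocab_size).astype(np.float64)
    return v / max(v.sum(), 1.0)

def similarity(v1, v2):
    """L1 similarity between normalized BoVW vectors (1 = identical)."""
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()
```
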
  • Near-field touch interface using time-of-flight camera

    Lixing Zhang, Takafumi Matsumaru

    Journal of Robotics and Mechatronics   28 ( 5 ) 759 - 775  2016.10  [Refereed]


    The purpose of this study is to realize a near-field touch interface that is compact, flexible, and highly accurate. We applied a 3-dimensional image sensor (time-of-flight camera) to achieve the basic functions of conventional touch interfaces, such as clicking, dragging, and sliding, and we designed a complete projector-sensor system. Unlike conventional touch interfaces, such as those on tablet PCs, the system can sense the 3-dimensional positions of fingertips and 3-dimensional directions of fingers. Moreover, it does not require a real touch screen but instead utilizes a mobile projector for display. Nonetheless, the system is compact, with a working distance of as short as around 30 cm. Our methods solve the shadow and reflection problems of the time-of-flight camera and can provide robust detection results. Tests have shown that our approach has a high success rate (98.4%) on touch/hover detection and a small standard error (2.21 mm) on position detection on average for different participants, which is the best performance we have achieved. Some applications, such as the virtual keyboard and virtual joystick, are also realized based on the proposed projector-sensor system.

  • Feature Tracking and Synchronous Scene Generation with a Single Camera

    Takafumi Matsumaru

    International Journal of Image, Graphics and Signal Processing (IJIGSP) (ISSN: 2074-9074(Print), ISSN: 2074-9082 (Online))   8 ( 6 ) 1 - 12  2016.06  [Refereed]  [International journal]


    This paper shows a method of tracking feature points to update the camera pose and generating a synchronous map for an AR (Augmented Reality) system. Firstly, we select the ORB (Oriented FAST and Rotated BRIEF) [1] detection algorithm to detect the feature points which have depth information to be markers, and we use the LK (Lucas-Kanade) optical flow [2] algorithm to track four of them. Then we compute the rotation and translation of the moving camera by the relationship matrix between the 2D image coordinate and the 3D world coordinate, and then we update the camera pose. Lastly, we generate the map, and we draw some AR objects on it. If the feature points are missing, we can compute the same world coordinate as the one before missing to recover tracking by using new corresponding 2D/3D feature points and camera poses at that time. There are three novelties of this study: an improved ORB detection, which can obtain depth information, a rapid update of camera pose, and tracking recovery. Referring to the PTAM (Parallel Tracking and Mapping) [3], we also divide the process into two parallel sub-processes: Detecting and Tracking (including recovery when necessary) the feature points and updating the camera pose is one thread. Generating the map and drawing some objects is another thread. This parallel method can save time for the AR system and make the process work in real-time.

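    Updating the camera pose from 2D/3D feature correspondences is a perspective-n-point problem; a standard OpenCV sketch with placeholder values for the markers and intrinsics.

```python
import cv2
import numpy as np

# Recover camera rotation and translation from tracked 2D image points and
# their known 3D world coordinates (values below are placeholders).
object_pts = np.float32([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
image_pts = np.float32([[320, 240], [420, 238], [424, 338], [318, 342]])
K = np.float32([[700, 0, 320], [0, 700, 240], [0, 0, 1]])   # intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)        # rotation matrix; camera pose = [R | t]
```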

  • Extraction of representative point from hand contour data based on laser range scanner for hand motion estimation

    Chuankai Dai, Takafumi Matsumaru

    2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015     2139 - 2144  2016.02


    This paper shows a novel method to extract the hand Representative Point (RP) based on a 2-dimensional laser range scanner for hand motion estimation on a projecting system. Image-projecting Desktop Arm Trainer (IDAT) is a projecting system for hand-eye coordination training, in which a projector displays an exercise screen on the desktop, and a laser range scanner detects the trainee's hand motion. To realize multi-user HMI and expand more entertainment functions in the IDAT system, an Air Hockey application was developed in which the hand RP requires a high precision. To generate the hand RP precisely, we proposed our method in two parts to solve the data error problem and the changeable hand contour problem. In part one, a data modifier is proposed and a sub-experiment is carried out to establish a modifying function for correcting sensor original data. In part two, we proposed three RP algorithms and carried out an evaluation experiment to estimate the reliability of the three algorithms under different conditions. From the result, we get the most reliable algorithm corresponding to different situations, in which the error of the hand RP is less than 9.6 mm.


  • Simulating and displaying of puck motion in virtual air hockey based on projective interface

    Chuankai Dai, Takafumi Matsumaru

    2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015     320 - 325  2016.02


    This paper shows the simulating and displaying of puck motion in a virtual Air Hockey based on a projective interface for multi-user HRI. Image-projecting Desktop Arm Trainer (IDAT) is an upper-limb rehabilitation system for hand-eye coordination training based on a projective interface. To expand more entertainment functions and realize multi-user HRI, we want to develop a virtual Air Hockey application in IDAT. To develop this application, puck motion should be simulated correctly and displayed smoothly on the screen. There are mainly 3 problems: the data updating rate problem, the virtual collision calculation, and the individual's hand misrecognition problem. We proposed our method in three parts corresponding to the different problems. In part 1, we used multiple timers with shared memory to deal with the unsynchronized data updating rate problem. In part 2, an original physical engine is designed to calculate the puck's velocity and detect virtual collisions. In part 3, to deal with the individual's hand misrecognition problem, a history-based hand owner recognition algorithm is implemented to distinguish users' hands.

    DOI

    Scopus

  • Contour-based binary image orientation detection by orientation context and roulette distance

    Jian Zhou, Takafumi Matsumaru

    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences   E99A ( 2 ) 621 - 633  2016.02  [Refereed]

     View Summary

    This paper proposes a novel technique to detect the orientation of an image from its contour, which may be noised to varying degrees. Most image orientation detection methods address landscape images or images of a single object, whose contours are assumed to be immune to noise; this paper focuses on contours noised by image segmentation. A polar orientation descriptor, Orientation Context, is used as a feature describing the coarse distribution of the contour points. This descriptor is verified, by theory and experiment, to be independent of translation, isotropic scaling, and rotation. The relative orientation is given by the minimum Roulette Distance between the descriptor of a template image and that of a test image. The proposed method detects direction over the interval from 0 to 359 degrees, which is wider than the earlier contour-based method (Distance Phase [1], from 0 to 179 degrees). Moreover, the experiments show that both the normal binary image (Noise-0, Accuracy-1: 84.8%) (defined later) and the binary image with slight contour noise (Noise-1, Accuracy-1: 73.5%) obtain more accurate orientation than Distance Phase (Noise-0, Accuracy-1: 56.3%; Noise-1, Accuracy-1: 27.5%). Although the proposed method (O(op^2)) takes more time to detect the orientation than Distance Phase (O(st)), it runs, including preprocessing, in a real-time test at a frame rate of 30.
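
    One plausible reading of the descriptor and matching step, assuming Orientation Context is an angular histogram of contour points about the centroid (normalization gives the claimed translation and isotropic-scale invariance, and rotation becomes a cyclic shift that the Roulette Distance searches over):

        import numpy as np

        def orientation_context(contour_xy, bins=360):
            # Coarse polar distribution of the contour points around
            # their centroid, normalized to a unit-sum histogram.
            c = contour_xy.mean(axis=0)
            ang = np.arctan2(contour_xy[:, 1] - c[1], contour_xy[:, 0] - c[0])
            hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
            return hist / hist.sum()

        def roulette_distance(template, test):
            # Rotating the shape cyclically shifts its descriptor, so the
            # relative orientation is the shift minimizing the distance.
            dists = [np.abs(np.roll(test, k) - template).sum()
                     for k in range(len(test))]
            k = int(np.argmin(dists))
            return k * 360 // len(test), dists[k]   # (degrees, distance)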

    DOI CiNii

    Scopus

    1
    Citation
    (Scopus)
  • Touchless Human-Mobile Robot Interaction using a Projectable Interactive Surface

    R. Agarwal, P. Sharma, S. K. Saha, T. Matsumaru

    2016 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   WeP1E.6   723 - 728  2016  [Refereed]

     View Summary

    This paper showcases the development of a mobile robot integrated with a Projectable Interactive Surface to facilitate its interaction with human users. The system was designed to interact with users of any physical attributes, such as height or arm span, without re-calibration, and in such a way that the human never needs physical contact with the robot to give it instructions. A projector renders a virtual display on the ground, allowing large displays to be projected. A Microsoft Kinect integrated into the system performs the dual function of tracking the user's movements and mapping the surrounding environment. The tracked user's gestures are interpreted, and the robot projects an audio-visual signal in response.

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Interactive Aerial Projection of 3D Hologram Object

    Jiono Mahfud, Takafumi Matsumaru

    2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO)   TuC05.3   1930 - 1935  2016  [Refereed]

     View Summary

    In this paper we present an interactive aerial projection of 3D hologram objects using a pyramid hologram and parabolic mirror system (for 3D hologram object reconstruction) and a Leap Motion sensor (as a detector of the user's finger movement). The system not only reconstructs and projects the 3D hologram object in mid-air but also provides a way to interact with it by moving a finger. There are three main steps: reconstruction of the 3D object, projection of the 3D hologram object in mid-air, and interactive manipulation of the 3D hologram object. The first step is realized with a pyramid hologram on an LCD display, the second with the parabolic mirror hologram, and the last with the Leap Motion sensor detecting the user's finger movement. This paper traces the design concept and confirms the system's function through a prototype demonstration.

    DOI

    Scopus

    12
    Citation
    (Scopus)
  • SAKSHAR: An Image-projective Desktop Varnamala Trainer (IDVT) for Interactive Learning of Alphabets

    Ravi Prakash Joshi, Riby Abraham Boby, Subir Kumar Saha, Takafumi Matsumaru

    Developing Countries Forum - ICRA 2015, [Seattle, Washington, USA], (May 26-30, 2015)   2(3)  2015.05  [Refereed]

  • Calligraphy-Stroke Learning Support System Using Projection

    Masashi Narita, Takafumi Matsumaru

    2015 24TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN)   2015-November   640 - 645  2015  [Refereed]

     View Summary

    In this paper, a calligraphy learning support system using a projector is presented to support brushwork learning. The system provides three kinds of training according to the learner's ability: copying training, tracing training, and a combination of the two. To teach three-dimensional brushwork such as writing speed, pressure, and brush orientation, we propose an instruction method that presents the information only at the brush tip, which visualizes the brush position and orientation. A copying experiment was performed, and the efficiency of the proposed method was examined through the experiment.

    DOI

    Scopus

    14
    Citation
    (Scopus)
  • Projectable Interactive Surface Using Microsoft Kinect V2: Recovering Information from Coarse Data to Detect Touch

    P. Sharma, R. P. Joshi, R. A. Boby, S. K. Saha, T. Matsumaru

    2015 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   SuD4.3   795 - 800  2015  [Refereed]

     View Summary

    An Image-projective Desktop Varnamala Trainer (IDVT) called SAKSHAR has been designed to improve children's learning through an interactive audio-visual feedback system. The device uses a projector to render a virtual display, which permits producing large interactive displays at minimal cost. The user's hand is recognized with the help of a Microsoft Kinect Version 2. The entire system is portable, i.e., it can be projected on any planar surface. Since the Kinect does not give precise 3D coordinates for detecting a touch, recognizing a touch purely from the contact of the user's hand with the surface would not yield accurate results. We have instead modeled the touch action using multiple points along the trajectory of the tracked point of the user's hand while the hand makes contact with the surface. Fitting a curve through these points and analyzing the errors makes the touch detection accurate.
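
    A minimal sketch of the curve-fitting idea (model order and thresholds are assumptions, not the paper's values): fit a low-order curve through the height samples of the tracked hand point and accept a touch only when the fit is consistent and bottoms out at the surface.

        import numpy as np

        def is_touch(heights_mm, deg=2, tol_mm=3.0):
            # heights_mm: height of the tracked point above the surface,
            # sampled along the approach trajectory.
            z = np.asarray(heights_mm, float)
            t = np.arange(len(z))
            coef = np.polyfit(t, z, deg)
            resid = z - np.polyval(coef, t)
            rms = float(np.sqrt(np.mean(resid ** 2)))
            return rms < tol_mm and float(z.min()) < tol_mm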

    DOI

    Scopus

    10
    Citation
    (Scopus)
  • Image-Projecting Desktop Arm Trainer for Hand-Eye Coordination Training

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics   26 ( 6 ) 704 - 717  2014.12  [Refereed]  [Domestic journal]

     View Summary

    This paper presents a novel arm-training system, the image-projecting desktop arm trainer (IDAT), aimed at hand-eye coordination training. A projector displays an exercise image on a desktop in front of a seated patient, and a scanning range finder measures the patient's behavior during the exercise. IDAT is non-invasive and does not constrain the patient. Its efficiency is based on the patient's voluntary movements, although it offers neither the physical assistance nor the tactile feedback of some conventional systems. Three kinds of training content have been developed: “mole-hitting,” “balloon-bursting,” and “fish-catching,” designed to train hand-eye coordination in different directions. A patient and/or medical professional can set a suitable training level, that is, the training time, the speed of the moving objects, and the number of objects appearing at any one time, based on the patient’s condition and ability. A questionnaire survey evaluating IDAT-3 showed that it was highly acclaimed for user-friendliness, fun, and usefulness.

    DOI

    Scopus

    6
    Citation
    (Scopus)
  • Human Detecting and Following Mobile Robot Using a Laser Range Sensor

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics   26 ( 6 ) 718 - 734  2014.12  [Refereed]  [Domestic journal]

     View Summary

    To meet the rising requirements of human-machine interface technology, this paper discusses a robot with human-following capability, a classic but significant problem. We first propose a human detection method that uses only a single laser range scanner to detect the waist of the target person. Second, owing to the limited speed of a robot and the potential risk of obstructions, a new human-following algorithm is proposed in which the robot's speed and acceleration adapt to the human walking speed and the human-robot distance. Finally, the performance of the proposed control system is verified through experiments with a two-wheel mobile robot working in a real environment under different scenarios.
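
    The adaptive speed rule can be sketched in a few lines (constants are illustrative, not the paper's): match the human's walking speed, add a catch-up term proportional to the human-robot gap, and saturate at the robot's limit.

        def follow_speed(v_human, dist, v_max=1.2, d_keep=0.8, k=0.8):
            # v_human [m/s]; dist [m] between human and robot.
            v = v_human + k * (dist - d_keep)   # close the gap gradually
            return max(0.0, min(v, v_max))      # respect the speed limit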

    DOI

    Scopus

    14
    Citation
    (Scopus)
  • A Walking Training System with Customizable Trajectory Designing

    Takafumi Matsumaru

    Paladyn. Journal of Behavioral Robotics   5 ( 1 ) 35 - 52  2014.06  [Refereed]  [International journal]

     View Summary

    This paper presents a novel walking training system for foot-eye coordination. To design customizable trajectories for different users conveniently, a new system is developed that can track and record the actual walking trajectories of a tutor and use these trajectories for walking training by a trainee. Four items form its human-robot interaction design concept: feedback, synchronization, ingenuity, and adaptability. A foot model is proposed to define the position and direction of a foot. The detection errors of the system are less than 40 mm in position and 15 deg in direction. On this basis, three parts realize the system functions: Trajectory Designer, Trajectory Viewer, and Mobile Walking Trainer. The experimental results confirm that the system works as intended: the steps recorded in Trajectory Designer could be used successfully as the footmarks projected in Mobile Walking Trainer, and foot-eye coordination training could be conducted smoothly.

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Human-Machine Interaction using the Projection Screen and Light Spots from Multiple Laser Pointers

    Jian Zhou, Takafumi Matsumaru

    2014 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)     16 - 21  2014  [Refereed]

     View Summary

    A multi-user laser pointer system, in which more than one person can use a laser pointer concurrently, has promising applications such as group discussion, appliance control by several handicapped persons, and control of large displays by a few users. Conventional means employed in such applications are the common mouse, gesture control using a Kinect sensor or Leap Motion, and the single laser pointer. A common mouse or a single laser pointer cannot let several users operate simultaneously, and gesture control is limited at far distances. Most research focuses on the single laser pointer, so multiple users face the dilemma that only the one person holding the pointer can point at a target on the screen at any time. This paper proposes a novel way for three laser pointers to point at the screen without interfering with each other. First, the foreground containing all the dynamic spots is extracted, and the spots are continuously located by grasping their contours, yielding information such as coordinates, pixel values, and areas. Second, a square image containing a whole spot is taken as the input of a back-propagation neural network (BPNN), whose output indicates the laser pointer to which the spot belongs. An experiment verifies that the method works well under certain light conditions (12-727 lux) when green laser pointers are used.
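
    A hedged sketch of the spot-extraction front end (OpenCV 4 API; the BPNN classifier itself is omitted): difference against a background frame, find the contours of the dynamic spots, and cut one square patch per spot as the network input.

        import cv2

        def extract_spots(frame, background, patch=16, thresh=40):
            diff = cv2.absdiff(frame, background)
            gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
            _, fg = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            half, patches = patch // 2, []
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                cx, cy = x + w // 2, y + h // 2
                # Skip spots too close to the border for a full patch.
                if half <= cx < frame.shape[1] - half and \
                   half <= cy < frame.shape[0] - half:
                    patches.append(frame[cy - half:cy + half,
                                         cx - half:cx + half])
            return patches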

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Real-time Gesture Recognition with Finger Naming by RGB Camera and IR Depth Sensor

    Phonpatchara Chochai, Thanapat Mekrungroj, Takafumi Matsumaru

    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS IEEE-ROBIO 2014     931 - 936  2014  [Refereed]

     View Summary

    This paper introduces a real-time finger-naming method and illustrates how hand signs can be recognized from hand gestures and the relations between fingertips. It describes how to identify and name each finger, using the arm to dynamically adjust and improve stability while the hand is moving. Based on the relations between fingertips, palms, and arms, the proposed method recognizes hand gestures and translates these signs into numbers according to standard sign language. The approach relies on the depth image and the RGB image to identify hands, arms, and fingertips; the relation between the parts is then used to recognize a finger name regardless of the direction of movement. The paper also describes how to implement the proposed method with an ASUS Xtion as the sensor.

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Foreword: Living Through Half a Century, Toward the Next Half Century (in Japanese)

    Takafumi Matsumaru

    Journal of the Society of Biomechanisms Japan   37 ( 3 ) 151 - 151  2013.08

    CiNii

  • Comparison of Displaying with Vocalizing on Preliminary-Announcement of Mobile Robot Upcoming Operation

    Takafumi Matsumaru

    Advances in Robotics - Modeling, Control and Applications   7   133 - 147  2013.03  [Refereed]  [International journal]

  • Development and Evaluation of Operational Interface Using Touch Screen for Remote Operation of Mobile Robot

    Takafumi Matsumaru

    Advances in Robotics - Modeling, Control and Applications   10   195 - 217  2013.03  [Refereed]  [International journal]

  • Design and Evaluation of Throw-over Movement Informing a Receiver of Object Landing Distance

    Takafumi Matsumaru

    Advances in Robotics - Modeling, Control and Applications   9   171 - 194  2013.03  [Refereed]  [International journal]

  • Measuring the Performance of Laser Spot Clicking Techniques

    Romy Budhi Widodo, Takafumi Matsumaru

    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO)     1270 - 1275  2013  [Refereed]

     View Summary

    "Laser spot clicking technique" is the term for a remote interaction technique between human and computer using a laser pointer as a pointing device. This paper focuses on a performance test of two laser spot clicking techniques. An off-the-shelf laser pointer has a toggle switch that generates a laser spot; the presence (ON) or absence (OFF) of this spot and their combinations are candidate interaction techniques. We conducted an empirical study comparing remote pointing performed with the ON-OFF and ON-OFF-ON combinations of the laser spot, with a desktop mouse as the baseline. We present a quantitative performance test based on Fitts' law, using a one-direction tapping test following the ISO/TS 9241-411 procedure, and an assessment of comfort using a questionnaire. We hope this result contributes to interaction techniques that use a laser pointer as a pointing device, especially in selecting the appropriate clicking technique for real applications. Our results suggest that the ON-OFF technique has advantages over the ON-OFF-ON technique in throughput and comfort.
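
    For reference, the throughput figure behind such a Fitts' test can be computed as below (a simplified sketch using the nominal rather than the effective target width):

        import math

        def fitts_throughput(amplitude, width, movement_times_s):
            # Index of difficulty ID = log2(D/W + 1) bits; throughput is
            # ID divided by the mean movement time in seconds.
            iD = math.log2(amplitude / width + 1.0)
            mt = sum(movement_times_s) / len(movement_times_s)
            return iD / mt

        # e.g. a 256 mm amplitude and a 32 mm target tapped in 0.9 s on
        # average give log2(9)/0.9, roughly 3.5 bits/s.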

    DOI

    Scopus

    9
    Citation
    (Scopus)
  • Robot human-following limited speed control

    Jianzhao Cai, Takafumi Matsumaru

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication   TuA1.1P.2   81 - 86  2013  [Refereed]

     View Summary

    Human-following is an important part of the interaction between robots and people. Generally, the speed of a mobile robot is limited and far slower than the natural walking speed of human beings. In order to catch up with the human rapidly, this paper introduces a control method with adaptive acceleration of the robot's speed. The robot's speed depends mainly on the human's speed and the human-robot distance, and its acceleration also adapts to that distance. The proposed control is verified through experiments.

    DOI

    Scopus

    6
    Citation
    (Scopus)
  • Image-projective Desktop Arm Trainer IDAT for therapy

    Takafumi Matsumaru, Yi Jiang, Yang Liu

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication   ThA2T2.3   801 - 806  2013  [Refereed]

     View Summary

    The Image-projective Desktop Arm Trainer (IDAT) is designed to improve the effect of upper-limb therapy. This paper focuses on its design concept. The most important structural feature of IDAT is that it is non-invasive and unconstrained: set on the desk apart from the seated trainee, it makes no body contact. It works from the trainee's voluntary movement, although it has neither the physical assistance nor the tactile feedback of conventional systems. IDAT is developed on the design concept that direct interaction is critical for increasing motivation. Instead of the joystick, handle, robotic arm, or display screen of conventional systems, IDAT uses a screen projected on a desktop and produces the visual reaction at the time and on the spot where the trainee operates, inspiring a vivid, lively, and actual feeling in the trainee.

    DOI

    Scopus

    6
    Citation
    (Scopus)
  • Application of Step-on Interface to Therapy

    Takafumi Matsumaru, Shiyang Dong, Yang Liu

    IEEE/RSJ IROS 2012 Workshop on Motivational Aspects of Robotics in Physical Therapy, [Vilamoura, Algarve, Portugal], (October 7-12, 2012)     6 pages  2012.10  [Refereed]

     View Summary

    This paper describes the application of the step-on interface (SOI) to therapy. The SOI consists of a projector and a sensor such as a range scanner; its special feature is using the projected screen as a bidirectional interface through which information is presented from a robot to a user and the user's instructions are delivered to the robot. The human-friendly amusing mobile robot HFAMRO, which carries the SOI on a mobile platform, is a tag-playing robot that can be used for gait training. The image-projective desktop arm trainer IDAT is designed to be set on a desk in front of a seated trainee. This kind of system is adjustable and can be customized to each individual by setting parameters, using multimedia channels, or uploading programs. It can therefore provide motivation and a sense of accomplishment to a trainee and maintain his/her enthusiasm and interest.

  • Relay Essay: My Favourite 18 - To Make It True, Keeping Dreams

    Takafumi Matsumaru

    Machine Design (The Nikkan Kogyo Shimbun)   56 ( 7 ) 14  2012.06

  • Interaction Using the Projector Screen and Spot-light from a Laser Pointer: Handling Some Fundamentals Requirements

    Romy Budhi Widodo, Weijen Chen, Takafumi Matsumaru

    2012 PROCEEDINGS OF SICE ANNUAL CONFERENCE (SICE)   WeA10-04   1392 - 1397  2012  [Refereed]

     View Summary

    This paper presents an interaction model between humans and machines using a camera, a projector, and the spot-light from a laser pointer. A camera was attached on top of the projector, the projector displayed a direction screen on the wall, and the user pointed the laser pointer at the desired location on the direction screen. It is confirmed that the system can handle several distortions of the direction screen, such as an oblique rectangle, horizontal trapezoid distortion, and vertical trapezoid distortion, as well as surface illuminances of 127, 425, 630, and 1100 lux; the system is designed to work with static and moving objects. The coordinates obtained from the distorted screen can be used to give commands to a specific machine, robot, or application.
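
    A hedged sketch of one standard way to handle such distortions, assuming the four corners of the projected screen have been located in the camera image: a homography maps a detected spot from the distorted quadrilateral to ideal screen coordinates.

        import cv2
        import numpy as np

        def make_screen_mapper(corners_cam, screen_w, screen_h):
            # corners_cam: the four observed corners, ordered top-left,
            # top-right, bottom-right, bottom-left (an assumption).
            dst = np.float32([[0, 0], [screen_w, 0],
                              [screen_w, screen_h], [0, screen_h]])
            H = cv2.getPerspectiveTransform(np.float32(corners_cam), dst)

            def to_screen(pt):
                p = np.float32([[pt]])              # shape (1, 1, 2)
                return cv2.perspectiveTransform(p, H)[0, 0]
            return to_screen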

  • Development of Image-projective Desktop Arm Trainer, IDAT

    Yang Liu, Yi Jiang, Takafumi Matsumaru

    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   SP2-A.5   355 - 360  2012  [Refereed]

     View Summary

    Aiming to improve the effect of upper-limb rehabilitation, we design and develop an Image-projective Desktop Arm Trainer (IDAT). Compared with conventional therapy, IDAT provides a more effective and interesting training method. Its goal is to maintain and improve patients' upper-limb function by training their eye-hand coordination. We select the step-on interface (SOI) as the input system, which lets trainees operate IDAT directly with the hand, and trainees can make customized training settings. This provides motivation and a sense of accomplishment and maintains their enthusiasm and interest. IDAT thus offers a human-robot interaction very different from previous upper-limb rehabilitation robots equipped with a joystick or a remote controller. We proposed this idea in 2007 and have applied the SOI on some mobile robots; now we apply it to IDAT as a new approach to upper-limb rehabilitation.

    DOI

    Scopus

    10
    Citation
    (Scopus)
  • Applying Infrared Radiation Image Sensor to Step-on Interface: Touched Point Detection and Tracking

    Yi Jiang, Yang Liu, Takafumi Matsumaru

    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   MP1-B.4   752 - 757  2012  [Refereed]

     View Summary

    We propose and implement a solution for applying an infrared radiation (IR) image sensor to the step-on interface (SOI). The SOI is a kind of natural human-robot interface consisting of a projector and a laser range scanner (LRS), and it enables interactive touch applications on a desktop or floor. We attempt to introduce an IR image sensor such as the ASUS Xtion to the SOI in place of the LRS. In this paper, we describe the procedure for using the Xtion and detecting touched points. We distinguish the user's hand from the background (surface) based on the depth data from the ASUS Xtion, and we detect a touching action when a finger comes very close to the background. The proposed process involves IR depth image acquisition, finding the hand and its contours by thresholding, recognizing touched areas, and computing their center positions. This work enables the ASUS Xtion to be applied to the SOI in a simple way; moreover, the system can realize touch interaction on any surface.
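
    A minimal sketch of the depth-thresholding step (OpenCV 4 API; all thresholds are illustrative): pixels a few millimetres above a pre-captured surface depth map are treated as touching fingers, and the centers of their contours are the touched points.

        import cv2
        import numpy as np

        def touched_points(depth, surface, near=5, far=40, min_area=50):
            # Height of each pixel above the background surface [mm].
            height = surface.astype(np.int32) - depth.astype(np.int32)
            close = ((height > near) & (height < far)).astype(np.uint8) * 255
            contours, _ = cv2.findContours(close, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            centers = []
            for c in contours:
                if cv2.contourArea(c) < min_area:
                    continue
                m = cv2.moments(c)
                centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
            return centers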

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • Laser Spotlight Detection and Interpretation of Its Movement Behavior in Laser Pointer Interface

    Romy Budhi Widodo, Weijen Chen, Takafumi Matsumaru

    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   MP1-C.4   780 - 785  2012  [Refereed]

     View Summary

    A laser pointer can be used as an input interface in human-machine interaction. Such use, however, can be problematic: one main issue is the lack of reliability in laser spotlight detection, and another is how to interpret the user's movement of the spotlight as commands for the application. This paper proposes a method for laser spotlight detection that aims to improve the practicality and reliability of previous approaches. We use the maximum pixel value, obtained from the environment brightness at a specified time, as a multiplier in determining the detection threshold. For the second problem we propose a simple interpretation of events that allows the user to use the application, with three main events: laser-move, hover, and single-click. Neither the user nor the program needs to wait a specified time span to interact, and the user can give commands to the application directly after the single-click event. These approaches give better reliability and easier operation, and open opportunities for rehabilitative, recreational, and input-interface systems in the future.

    DOI

    Scopus

    10
    Citation
    (Scopus)
  • Design and Evaluation of Handover Movement Informing Receiver of Weight Load

    Takafumi Matsumaru

    15th National Conference on Machines and Mechanisms (NaCoMM 2011), [IIT Madras, India]   B-5-5  2011.11  [Refereed]

     View Summary

    This paper presents the study results on the handover movement informing a receiver of the weight load, as an example of informative motion for a human-synergetic robot. To design and generate movement depending on the weight load, human movement is measured and analyzed, and four items are selected as parameters to vary: the distance between the target point and the transfer point (in the front-back direction), the distance between the highest point and the transfer point (in the vertical direction), the elbow rotation angle, and the waist joint angle. A fitted curve of each parameter's variation with the weight load is obtained from the tendency of the subjects' movement data, and the movement for an arbitrary weight load is generated by processing the standard data at 0 kg so that each parameter follows its fitted curve. The questionnaire survey shows that, although it is difficult for a receiver to estimate the exact weight load, he may distinguish a heavy load from a light one, so that the package can be received safely and surely.

  • W021003 Robot technology to learn from the exercise of creatures

    Takafumi Matsumaru

    Mechanical Engineering Congress, Japan   2011   W021003-1 - W021003-8  2011.09

     View Summary

    This paper presents the informative motion study for making a human-coexistence robot useful. First, the usage, design, and marketing of a human-coexistence robot are considered. Next, the informative motion study we are tackling to improve a human-coexistence robot's personal affinity is explained, advocating informative kinesics for human-machine systems. As an example of application deployment, studies on continuous movement (usual movement) and preliminary operation (prior operation) are shown.

    CiNii

  • 31st SoBIM Annual Conference (SOBIM 2010 in Hamamatsu)

    Takafumi Matsumaru

      35 ( 1 ) 81 - 83  2011.02

    CiNii

  • User-Robot Interaction based on Mobile Robot Step-On Interface

    Takafumi Matsumaru, Wataru Saito, Yuichi Ito

    Transactions of the Virtual Reality Society of Japan (ISSN:1344-011X)   15 ( 3 ) 335 - 345  2010.09  [Refereed]

     View Summary

    A friendly amusing mobile (FAM) function, in which a robot and a user interact through motion, is proposed by applying the mobile-robot step-on interface (SOI), in which the user directs robot movement or operation by stepping or pointing on a button showing the desired content on an operation screen projected on the floor. The HFAMRO-2 mobile robot is developed to realize playing light tag, and three applications (animal-tail stepping, bomb-fuse stamping, and footprint stepping) are produced as trials following the design policy.

    DOI CiNii

  • Truly-Tender-Tailed Tag-Playing Robot Interface through Friendly Amusing Mobile Function

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics   22 ( 3 ) 301 - 307  2010.06  [Refereed]  [Domestic journal]

     View Summary

    To expand use of the mobile-robot Step-On Interface (SOI), originally targeting maintenance, training, and recovery of human physical and cognitive functions, we introduce a “Truly-Tender-Tailed” (T3, pronounced tee-cube) tag-playing robot as a “Friendly Amusing Mobile” (FAM) function. Displaying a previously prepared bitmap (BMP) image speeds up the display and makes it easy to design button placement and other screen parameters with a painting software package. The BMP-image scope matrix simplifies step detection and recognition, and the motion trajectory design editor facilitates robot behavior design.
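
    The scope-matrix lookup can be sketched in a few lines (coordinate conventions assumed): each cell of a matrix generated from the BMP screen image stores the ID of the button drawn there, so a step position measured by the range scanner names a button by simple indexing.

        import numpy as np

        def stepped_button(scope, foot_xy, origin_xy, cell_size):
            # scope: 2D integer array from the BMP image (0 = no button).
            col = int((foot_xy[0] - origin_xy[0]) / cell_size)
            row = int((foot_xy[1] - origin_xy[1]) / cell_size)
            if 0 <= row < scope.shape[0] and 0 <= col < scope.shape[1]:
                return int(scope[row, col])
            return 0   # stepped outside the projected screen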

    DOI

    Scopus

    9
    Citation
    (Scopus)
  • The Step-on Interface (SOI) on a Mobile Platform - Basic Functions

    Takafumi Matsumaru, Yuichi Ito, Wataru Saitou

    PROCEEDINGS OF THE 5TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2010)     343 - 344  2010  [Refereed]

     View Summary

    This video shows the basic functions of HFAMRO-2 equipped with the step-on interface (SOI). In the SOI, the projected screen is used as a bilateral interface: it not only presents information from the equipment to the user but also delivers instructions from the user to the equipment. HFAMRO is intended to embody the concept of robots interacting with users; it assumes, for example, the ability to play 'tag', in this case playing tag with light, similar to 'shadow' tag. The HFAMRO-2 mobile robot, developed to study the SOI's application with mobility, has two SOI sets, each consisting of a projector and a range scanner, on a mobile platform. The projector displays a direction screen on the travel surface, and the two-dimensional range scanner detects and measures the user's stepping to specify the selected button.

    DOI

  • The Step-on Interface (SOI) on a Mobile Platform - Rehabilitation of the Physically Challenged

    Takafumi Matsumaru, Yuichi Ito, Wataru Saitou

    PROCEEDINGS OF THE 5TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2010)     345 - 345  2010  [Refereed]

    DOI

  • Friendly Amusing Mobile Function for Human-Robot Interaction

    Takafumi Matsumaru

    2010 IEEE RO-MAN     88 - 93  2010  [Refereed]

     View Summary

    This paper introduces a tag-playing robot as a "Friendly Amusing Mobile" (FAM) function to expand the use of the mobile-robot Step-On Interface (SOI) and to promote human-robot interaction through motion, targeting maintenance, training, and recovery of physical and cognitive functions in the elderly, the physically challenged, the injured, etc. Displaying a previously prepared bitmap (BMP) image makes the display update rate faster and makes it easy to design button placement and other screen parameters with a painting software package. The scope matrix generated from the BMP image simplifies step detection and recognition, and the motion trajectory design editor facilitates robot behavior design.

    DOI

    Scopus

    3
    Citation
    (Scopus)
  • Discrimination and Implementation of Emotions on Zoomorphic Robot Movements

    Takafumi Matsumaru

    SICE Journal of Control, Measurement, and System Integration   2 ( 6 ) 365 - 372  2009.11  [Refereed]

     View Summary

    This paper discusses the discrimination and implementation of emotions in the movements of a zoomorphic robot, aiming to design emotional robot movements and improve robot friendliness. We consider four emotions: joy, anger, sadness, and fear. Two opposite viewpoints, performer and observer, are used to acquire emotional movement data for analysis: performer subjects produce emotional movements, and observer subjects select, from the measured movements, those easily recognized as expressions of the intended emotion. Discrimination of the emotion embedded in a movement is attempted using feature values based on Laban movement analysis (LMA) and on principal component analysis (PCA). With PCA, the rate of correct discrimination is about 70% for all four emotions. The features of emotional movements are inferred from the coefficients of the discrimination function obtained in the PCA-based discrimination, and emotions are implemented by converting the setup of basic movements according to design principles based on these movement features. The verification experiment suggests that the four emotional movements divide into two groups: joy and anger, and sadness and fear. Joyful or angry movements are dynamic, large-scale, and frequent, making the intended emotion relatively easy to interpret, while sad or fearful movements are static, small-scale, and slight, making their features difficult to perceive.
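
    A hedged sketch of the discrimination step, with scikit-learn's linear discriminant analysis standing in for the paper's discrimination function over principal components; the feature matrix X (LMA-style quantities per movement) and labels y are assumed data.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def train_emotion_discriminator(X, y, n_components=5):
            # Project the movement features onto principal components,
            # then fit a linear discriminant over the projections.
            pca = PCA(n_components=n_components).fit(X)
            clf = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
            return lambda x: clf.predict(pca.transform(np.atleast_2d(x)))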

    DOI CiNii

  • Informative Motion Study to Improve Human-Coexistence Robot’s Personal Affinity

    Takafumi Matsumaru

    IEEE RO-MAN 2009 Workshop on Robot Human Synergetics, [Toyama International Conference Center, Japan]    2009.09

  • Study on Handover Movement Informing Receiver of Weight Load as Informative Motion of Human-friendly Robot

    Takafumi Matsumaru

    International Journal of Factory Automation, Robotics and Soft Computing   2009 ( 3 ) 11 - 19  2009.07  [Refereed]  [International journal]

  • Motion Media and Informative Motion –System Integration Based on Motion–

    Satoshi Iwaki, Takafumi Matsumaru

    Journal of the Society of Instrument and Control Engineers   48 ( 6 ) 443 - 447  2009.06  [Refereed]

    CiNii

  • Study on Handover Movement Informing Receiver of Weight Load –Research on Informative Motion–

    Takafumi Matsumaru

    Journal of the Society of Instrument and Control Engineers   48 ( 6 ) 508 - 512  2009.06  [Refereed]

    CiNii

  • Dynamic Remodeling of Environmental Map using Range Data for Remote Operation of Mobile Robot

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics   21 ( 3 ) 332 - 341  2009.06  [Refereed]  [Domestic journal]

     View Summary

    In studying dynamic remodeling of the environmental map around a remotely operated mobile robot, where data measured by the robot's range sensor are sent from the robot to the operator, we introduce the Line & Hollow method and the Cell & Hollow method for environmental mapping. Results for three types of environmental situation clarify the features and effectiveness of our approach. In the Line & Hollow method, an isosceles triangle is set based on each range datum: the base line is drawn to express the obstacle shape, and the inside is hollowed out to express vacant space. In the Cell & Hollow method, the value of the cell corresponding to a range datum is incremented, and an obstacle is assumed to exist once the cell value exceeds the ascending threshold; the cell values on the line between the measured cell and the sensor cell are decremented, and the obstacle is deleted once a value drops below the descending threshold. We confirmed that the environmental map in either method reflects dynamic environmental change.
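
    The Cell & Hollow update can be sketched as follows (integer grid, illustrative thresholds): increment the cell a range reading hits, and decrement every cell on the ray between the sensor and the hit so that vanished obstacles fade out.

        import numpy as np

        def cell_and_hollow(grid, sensor_rc, hit_rc, up=3, down=0):
            r0, c0 = sensor_rc
            r1, c1 = hit_rc
            n = max(abs(r1 - r0), abs(c1 - c0), 1)
            for t in np.linspace(0.0, 1.0, n, endpoint=False):
                r = int(round(r0 + (r1 - r0) * t))
                c = int(round(c0 + (c1 - c0) * t))
                grid[r, c] = max(grid[r, c] - 1, down)   # hollow the ray
            grid[r1, c1] = min(grid[r1, c1] + 1, up)     # mark the hit
            return grid >= up   # cells at the ascending threshold

        # usage: grid = np.zeros((200, 200), int); feed each scan point.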

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Functions of Mobile-Robot Step-On Interface

    Takafumi Matsumaru, Kosuke Akai

    Journal of Robotics and Mechatronics   21 ( 2 ) 267 - 276  2009.04  [Refereed]

     View Summary

    To improve the maneuverability and safety of the HFAMRO-1 mobile robot, we added a step-on interface (SOI) for directing robotic or mechatronic tasks and operations (HFAMRO: “human-friendly amusing” mobile robot). A projector displays a direction screen on a floor or other surface, and an operator specifies a button showing the selected movement by stepping or pointing. We modified the direction screen so that, of the buttons displayed on two lines, the stepped-on buttons directing movement appear only on the lower line. We also shortened the retention time and executed the selected movement only when the foot was removed from the stepped-on button. The robot has two SOIs and multiple projection screens and can be controlled from either direction with the same functions. We synchronized the direction and preliminary-announcement screens to inform passersby which way the robot would move. Using range-scanner data, the robot distinguishes feet from other objects by size, and fusion control with autonomous movement for obstacle avoidance is implemented.

    DOI CiNii

    Scopus

    14
    Citation
    (Scopus)
  • Mobile Robot with Preliminary-announcement and Indication Function of Upcoming Operation just after the Present

    Takafumi Matsumaru

    International Journal of Factory Automation, Robotics and Soft Computing   2009 ( 1 ) 102 - 110  2009.01  [Refereed]  [International journal]

  • A characteristics measurement of two-dimensional range scanner and its application

    Takafumi Matsumaru

    Open Automation and Control Systems Journal   2 ( 1 ) 21 - 30  2009  [Refereed]

     View Summary

    This paper presents the characteristics measurement of a two-dimensional active range scanner, the URG made by Hokuyo Automatic Co., Ltd., which was released in 2004 and is spreading widely as an external sensor for mobile robots. The following points were clarified from the characteristics measurement of the URG-X002S under various conditions. (1) When the object has a glossy or black surface, the error rate (the frequency with which the scanner judges measurement impossible and outputs an error code) rises as the oblique angle of the object becomes large and the distance to the object becomes long. (2) When the object has a white or rough surface, the error rate is zero, and the margin of error is within a few dozen millimeters with small variance, provided the oblique angle is smaller than 60 deg and the distance is shorter than 1 m. (3) The lateral error is negligibly small if the detection distance is shorter than 1 m. The paper also examines applying the range scanner in the Step-On Interface (SOI), where it detects and measures an operator's stepping. Based on the measured results, we designed the stepping judgment method, the installation position of the scanner, and the placement of the buttons on the direction screen to apply the range scanner to the SOI for operating a mobile robot.

    DOI

    Scopus

    14
    Citation
    (Scopus)
  • Handover Movement Informing Receiver of Weight Load as Informative Motion Study for Human-Friendly Robot

    Takafumi Matsumaru

    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2     48 - 54  2009  [Refereed]

     View Summary

    This paper presents the results of a study on handover movement that informs a receiver of the weight load, as an example of the informative motion of a human-coexistence (human-friendly) robot. The movement of the human body when handing a package to a person was first measured and analyzed, and the features of the movement's variation with the weight load were clarified. A questionnaire survey was then carried out to verify whether people could read the difference in weight load from the variation of movement, which was reproduced using a simulator. It can be said that a receiver can judge whether a package is heavy, although it is difficult to estimate the exact weight load. Accordingly, if the variation of the noted points is included in the design of the handover movement of a humanoid robot, a receiver will be able to estimate the weight load, making it easy to receive a package safely and surely.

    DOI

    Scopus

    3
    Citation
    (Scopus)
  • Discrimination of Emotion from Movement and Addition of Emotion in Movement to Improve Human-Coexistence Robot's Personal Affinity

    Takafumi Matsumaru

    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2     40 - 47  2009  [Refereed]

     View Summary

    This paper presents the results of trials on discriminating emotion from movement and adding emotion to movement on a teddy bear robot, aiming both at expressing a robot's emotion by movement and at improving a robot's personal affinity. We addressed four kinds of emotion: joy, anger, sadness, and fear. Two standpoints were considered to establish the emotional-movement data used for analysis: a performer and an observer. The movement data were collected from the performer's standpoint and sorted from the observer's standpoint. In discriminating the emotion included in a movement, both a method using Laban's feature quantities and a method using principal component analysis were tried. The discrimination using principal component analysis achieved about 70% correct discrimination for all four emotions. The features of movement by which each emotion can be interpreted were inferred from the coefficients of the discrimination function obtained in that discrimination, and design principles for adding emotion to a basic movement were defined from these features. The verification experiment suggested that movements whose intended emotion people can interpret with relatively high probability can be produced for joy and anger. For fear and sadness, since the expressing movements are small and slight, it is difficult to distinguish their features and to produce clearly emotional movement.

    DOI

    Scopus

    8
    Citation
    (Scopus)
  • Step-on interface on mobile robot to operate by stepping on projected button

    Takafumi Matsumaru, Kosuke Akai

    Open Automation and Control Systems Journal   2 ( 1 ) 85 - 95  2009  [Refereed]

     View Summary

    This paper proposes the step-on interface (SOI) for operating a mobile robot, in which a projector displays a direction screen on the floor and a user specifies a button showing the selected movement by stepping on or pointing at it. The HFAMRO-1 mobile robot ("human-friendly amusing" mobile robot) has been developed to demonstrate the SOI's potential. The SOI of HFAMRO-1 consists of a projector and a range scanner on an omni-directional mobile platform. An operational comparison with other input interfaces confirmed that we can direct the robot's movement using our own feet. We had students who do not specialize in robotic systems try to operate HFAMRO-1 in their shoes; all of them could specify buttons to operate the robot satisfactorily, and everyone mastered the SOI immediately. © Matsumaru and Akai.

    DOI

    Scopus

    21
    Citation
    (Scopus)
  • Evaluation Experiment in Simulated Interactive Situation between People and Mobile Robot with Preliminary-Announcement and Indication Function of Upcoming Operation

    Takafumi Matsumaru

    Human interface   10 ( 1 ) 11 - 20  2008.02  [Refereed]

    CiNii

  • Experimental examination in simulated interactive situation between people and mobile robot with preliminary-announcement and indication function of upcoming operation

    Takafumi Matsumaru

    2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-9     3487 - 3494  2008  [Refereed]

     View Summary

    This paper presents the results of an experimental examination using "passing each other" and "positional prediction" tasks in simulated interactive situations between people and a mobile robot. We have developed four prototype robots based on four proposed methods for preliminarily announcing and indicating to people the speed and direction of the upcoming movement of a mobile robot moving on a two-dimensional plane. In each experiment we observed a significant difference between the cases with and without the preliminary-announcement and indication (PAI) function, demonstrating the effect of preliminarily announcing and indicating the upcoming operation. In addition, the features and effective usage of each type of preliminary-announcement and indication method were clarified. The method of announcing the state of operation just after the present is effective when a person must immediately judge which way to move, because simple information can be transmitted quickly. The method of continuously indicating operations from the present to some future time is effective when a person wants to avoid contact or collision reliably and accurately, because complicated information can be transmitted precisely. We would like to verify these results under various conditions, such as the case in which the lines of movement cross obliquely.

    DOI

    Scopus

    7
    Citation
    (Scopus)
  • Development of Mobile Robot with Preliminary-announcement and Display Function of Forthcoming Motion using Projection Equipment

    Takafumi Matsumaru, Yu Hoshiba, Shinji Hiraiwa, Yasuhiro Miyata

    Journal of the Robotics Society of Japan (ISSN:0289-1824)   25 ( 3 ) 410 - 421  2007.04  [Refereed]

     View Summary

    This paper discusses the mobile robot PMR-5, which has a preliminary-announcement and display function that indicates forthcoming operations to nearby people using a projector. The projector is set on the mobile robot and a 2-dimensional frame is projected on the running surface. In the frame, not only the scheduled course but also the states of operation can be clearly announced as information about movement. We examine the presentation of states of operation such as stopping or going back, including the time information of the scheduled course, on the developed robot. The scheduled course is expressed as arrows, chosen for intelligibility at a glance: an arrow expresses the direction of motion directly, and its length announces the speed of motion. Operation up to 3 seconds ahead is indicated; three arrows, color-coded per second, are connected and displayed so that they show the change of speed during the 3-second period. A sign for spot revolution and characters for stopping and going back are also displayed. We exhibited the robot, and about 200 visitors completed a questionnaire evaluation. The averages of the 5-stage evaluation were 4.5 points for the direction of motion and 3.9 points for the speed of motion, so the display was evaluated as intelligible in general.
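
    For illustration only: a schematic sketch of the arrow encoding described above, in which the predicted course over the next 3 seconds is split into three one-second arrows whose lengths follow the scheduled speed and whose colors distinguish the seconds. The specific colors and the velocity profile are assumed placeholders, not PMR-5's actual parameters.

        # Build three 1-second arrow segments from a planned speed profile.
        SEGMENT_COLORS = ["red", "yellow", "green"]  # assumed per-second coding

        def arrows_for_next_3s(speeds_mm_s):
            """speeds_mm_s: scheduled speed sampled once per second (3 values).

            Returns one (length_mm, color) pair per arrow; each arrow's length
            is the distance scheduled to be covered during that second.
            """
            return [(v * 1.0, SEGMENT_COLORS[i]) for i, v in enumerate(speeds_mm_s)]

        print(arrows_for_next_3s([300, 200, 100]))  # decelerating: arrows shrink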

    DOI CiNii

  • Development of Four Kinds of Mobile Robot with Preliminary-Announcement and Indication Function of Upcoming Operation

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics,   19 ( 2 ) 148 - 159  2007.04  [Refereed]

     View Summary

    We propose approaches and equipment for preliminarily announcing and indicating to people the speed and direction of movement of mobile robots moving on a two-dimensional plane. We introduce four approaches, categorized into (1) announcing the state just after the present and (2) continuously indicating operations from the present to some future time. To realize the approaches, we use an omni-directional display (PMR-2), a flat-panel display (PMR-6), a laser pointer (PMR-1), and projection equipment (PMR-5) as the announcement units of the prototype robots. The four prototype robots were exhibited at the 2005 International Robot Exhibition (iREX05), where we had visitors answer questionnaires in a 5-stage evaluation. The projector robot PMR-5 received the highest evaluation score of the four. An examination of differences by gender and age suggested that some people prefer simple information, friendly expressions, and a minimum of information presented at one time.

    DOI CiNii

    Scopus

    9
    Citation
    (Scopus)
  • Mobile robot with preliminary-announcement and indication function of forthcoming operation using flat-panel display

    Takafumi Matsumaru

    PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-10     1774 - 1781  2007  [Refereed]

     View Summary

    This research aims to propose methods and equipment to preliminarily announce and indicate to surrounding people both the speed and the direction of motion of a mobile robot that moves on a two-dimensional plane. This paper discusses the mobile robot PMR-6, in which a liquid crystal display (LCD) is set up on the mobile unit and the state of operation 1.5 s before the actual motion is indicated. The basis of the displayed content is an arrow, chosen for its intelligibility to people even at first sight. The speed of motion is expressed by the size (length and width) of the arrow and by its color, based on traffic signals. The direction of motion is described by the curvature of the arrow. The characters "STOP" are displayed in red in case of a stop. The robot was exhibited at the 2005 International Robot Exhibition held in Tokyo, and about 200 visitors answered questionnaires. The averages of the five-stage evaluation were 3.56 points for the speed and 3.97 points for the direction, so the method and expression were evaluated as comparatively intelligible. As for gender, females on the whole rated the speed-of-motion display higher than males did. Concerning age, some younger and older visitors rated the direction-of-motion display higher than the middle-aged did.

    DOI

    Scopus

    11
    Citation
    (Scopus)
  • Model for Analysis of Weight Lifting Motion considering the Abdominal Pressure increased by Valsalva Maneuver

    Takafumi Matsumaru, Satoshi Fukuyama, Tomohiro Sato

    Transactions of the Japan Society of Mechanical Engineers, Series C (ISSN:0387-5024)   72 ( 724 ) 3863 - 3870  2006.12  [Refereed]

     View Summary

    This paper proposed a model to estimate the load on the lumbar vertebrae that considers not only the value presumed from posture but also the increase in abdominal pressure due to the Valsalva maneuver, extrapolated from vital capacity. Using the proposed model, the compressive force is estimated to be reduced by about 30% by the effect of the abdominal pressure. Judging from the error between the estimated and actually measured ground reaction forces, the estimation accuracy of the lumbar vertebra load using the proposed model is thought to be within 10%. Furthermore, two operations with extreme start-on postures were compared in terms of the transition of the compressive force and shear force on the lumbar vertebrae. It was shown that the start-on posture significantly affects the maximum load on the lumbar vertebrae. The result suggests that the optimal start-on posture may lie between the two postures.
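
    For illustration only: a schematic static computation of the idea, not the paper's model. The compressive force implied by the extensor moment is reduced by the upward force that intra-abdominal pressure exerts over the diaphragm area; every numeric value below is an assumed placeholder, chosen so the relief lands near the 30% scale reported.

        # Schematic L5/S1 compression with an abdominal-pressure relief term.
        def l5s1_compression(extensor_moment_nm, lever_arm_m=0.05,
                             abdominal_pressure_pa=0.0, diaphragm_area_m2=0.03):
            erector_force = extensor_moment_nm / lever_arm_m    # muscle tension
            relief = abdominal_pressure_pa * diaphragm_area_m2  # IAP pushes up
            return erector_force - relief

        without_iap = l5s1_compression(150.0)
        with_iap = l5s1_compression(150.0, abdominal_pressure_pa=30e3)
        print(1 - with_iap / without_iap)  # fraction of load relieved (~0.30)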

    CiNii

  • Development of Mobile Robot with Preliminary-announcement and Display Function of Scheduled Course using Light-ray

    Takafumi Matsumaru, Takashi Kusada, Kazuya Iwase

    Journal of the Robotics Society of Japan (ISSN:0289-1824)   24 ( 8 ) 976 - 984  2006.11  [Refereed]

     View Summary

    This paper discusses the design and basic characteristics of the mobile robot PMR-1, which has a preliminary-announcement and display function that shows the forthcoming operation (the direction and speed of motion) to people around the robot by drawing the scheduled course on the running surface using a light ray. A laser pointer is used as the light source and its light is reflected by a mirror; the light ray is projected on the running surface, and the scheduled course is drawn by rotating the reflector around the pan and tilt axes. The preliminary-announcement and display unit of the developed robot can indicate the operation up to 3 seconds ahead, so the robot moves while drawing the scheduled course from the present to 3 seconds later. An experiment on coordination between the preliminary announcement and the movement was carried out, and we confirmed the correspondence of the announced course with the robot trajectory both when the movement path is given beforehand and when the robot is operated in real time with manual input from a joystick. We have thus validated the coordination algorithm between the preliminary announcement and the real movement.
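
    For illustration only: a minimal geometric sketch of aiming a floor point by pan and tilt angles. It assumes the light source rotates about a single fixed point at a known height; a real mirror assembly such as PMR-1's adds offsets not modeled here.

        import math

        def pan_tilt_for_floor_point(x_mm, y_mm, source_height_mm=1000.0):
            """Pan/tilt angles (degrees) that aim a ray at floor point (x, y)."""
            pan = math.atan2(y_mm, x_mm)                # rotation in floor plane
            reach = math.hypot(x_mm, y_mm)              # horizontal distance
            tilt = math.atan2(reach, source_height_mm)  # 0 = straight down
            return math.degrees(pan), math.degrees(tilt)

        # Aim 1 m ahead and 0.5 m to the left from a 1 m mounting height.
        print(pan_tilt_for_floor_point(1000.0, 500.0))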

    DOI CiNii

  • Examination on Lifting Motion with Different Start-on Posture, and Study on the Proper Operation using Minimum Jerk Model

    Takafumi Matsumaru

    Transactions of the Japan Society of Mechanical Engineers, Series C    2006.08  [Domestic journal]

    DOI

    Scopus

    3
    Citation
    (Scopus)
  • Analysis and Evaluation of Weight-Lifting Motion from Different Start-on Postures, and Examination of Appropriate Motion Posture Using the Minimum Jerk Criterion [in Japanese]

    Takafumi Matsumaru, Satoshi Fukuyama, Kazuyoshi Shima, Tomotaka Ito

    Transactions of the Japan Society of Mechanical Engineers. C   72 ( 720 ) 2554 - 2561  2006.08  [Refereed]

     View Summary

    The aim of this research is to establish a quantitative evaluation method for the lifting task and to clarify safe postures and optimal motion. This paper first examines the operation from three different start-on postures with four items: the maximum compressive force on the L5/S1 joint, the maximum shear force on the L5/S1 joint, the energy efficiency index, and the degree of contribution of each joint. As a result, it has been shown that the load not only on the lumbar vertebrae but also on the knee joint should be emphasized, and that pose-C, in which the knee is flexed at almost a right angle and the upper body is raised, is an appropriate start-on posture. Moreover, a simulation using the minimum jerk model yielded a proper operation, but the actual operation extends the lower body earlier than presumed. Therefore, not only criteria on the whole operation or relative criteria but also absolute criteria paying attention to a specific portion, such as the knee joint moment, seem necessary to study the optimal lifting motion.
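
    For reference, the minimum jerk criterion invoked above leads, for a single coordinate with rest-to-rest boundary conditions, to the standard quintic (a textbook result; the paper's own boundary conditions and joint-space formulation may differ):

        x(t) = x_0 + (x_f - x_0)\left(10\tau^3 - 15\tau^4 + 6\tau^5\right), \qquad \tau = t/T,

    which minimizes \int_0^T \dddot{x}(t)^2\,dt subject to zero velocity and acceleration at t = 0 and t = T.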

    CiNii

  • Study on Design of Physique and Motion for Humanoid Robot [in Japanese]

    Takafumi Matsumaru

    Transactions of the Virtual Reality Society of Japan   11 ( 2 ) 283 - 292  2006.06  [Refereed]

     View Summary

    This paper discusses a design method for the bodily shape and motion of a humanoid robot to raise not only the emotional but also the informative interpersonal affinity of the robot. Concrete knowledge and opinions are classified into movement prediction from configuration and movement prediction from continuous or preliminary motion, and they are discussed with reference to application and usage. Specifically, we consider bodily shapes and motions that make it easier for surrounding people watching a humanoid robot to predict and understand its capability and performance and its following action and intention.

    DOI CiNii

  • Evaluation of Motion and Posture during Lifting Task Operation Using Acceptance Rate

    Takafumi Matsumaru, Kazuyoshi Shima, Satoshi Fukuyama, Tomotaka Ito

    Transactions of the Society of Instrument and Control Engineers   42 ( 2 ) 174 - 182  2006.02  [Refereed]

     View Summary

    The aim of this research is to establish a quantitative evaluation method for the lifting task and to clarify safe postures and optimal motion. This paper examines the motion from three kinds of start-on posture using the acceptance rate. Operation-A starts from pose-A, in which the knee is extended to the maximum. Operation-B starts from pose-B, in which the knee is flexed to the maximum and the upper body is raised as much as possible. Operation-C starts from pose-C, in which the knee is flexed at almost a right angle and the upper body is raised. The acceptance rate is the estimated rate of the population who can tolerate the joint moment during the lifting operation, based on the presumed moment and the coefficient of variation of the acceptable marginal moment at each joint. The compressive force on the lumbar vertebrae computed from the L5/S1 joint moment at an acceptance rate of 85[%] (about Av.-1SD) did not exceed the standard value from a previous study. We therefore set 85[%] as the safety-standard acceptance rate and defined judging-A (over 95[%], recommended), judging-B (85-95[%], should note), and judging-C (under 85[%], should modify). Although the ankle joint in operation-A and the knee joint in operation-B are judged as rank C, every joint in operation-C shows a high acceptance rate, so the validity of operation-C has been clarified quantitatively.
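
    For illustration only: a minimal sketch formalizing the acceptance-rate definition above under a normality assumption (consistent with the stated "Av.-1SD" correspondence, though the paper does not spell out its distribution). The numeric values are placeholders.

        from statistics import NormalDist

        def acceptance_rate(presumed_moment, mean_limit, cv):
            """Fraction of the population whose acceptable marginal moment is
            at least the presumed moment, assuming the limit is normally
            distributed with the given mean and sd = cv * mean."""
            sd = cv * mean_limit
            return 1.0 - NormalDist(mean_limit, sd).cdf(presumed_moment)

        # A presumed moment one SD below the mean limit gives ~0.84, i.e. the
        # "about Av.-1SD" level associated with the 85% safety standard.
        print(acceptance_rate(presumed_moment=80.0, mean_limit=100.0, cv=0.2))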

    DOI CiNii

  • Mobile robot with preliminary-announcement and display function of forthcoming motion using projection equipment

    Takafumi Matsumaru

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication     443 - 450  2006  [Refereed]

     View Summary

    This paper discusses the mobile robot PMR-5, which has a preliminary-announcement and display function that indicates forthcoming operations to nearby people using a projector. The projector is set on the mobile robot and a 2-dimensional frame is projected on the running surface. In the frame, not only the scheduled course but also the states of operation can be clearly announced as information about movement. We examine the presentation of states of operation such as stopping or going back, including the time information of the scheduled course, on the developed robot. The scheduled course is expressed as arrows, chosen for intelligibility at a glance: an arrow expresses the direction of motion directly, and its length announces the speed of motion. Operation up to 3 seconds ahead is indicated; three arrows, color-coded per second, are connected and displayed so that they show the change of speed during the 3-second period. A sign for spot revolution and characters for stopping and going back are also displayed. We exhibited the robot, and about 200 visitors completed a questionnaire evaluation. The averages of the 5-stage evaluation were 3.9 points for the direction of motion and 4.5 points for the speed of motion, so the display was evaluated as intelligible in general. ©2006 IEEE.

    DOI

    Scopus

    41
    Citation
    (Scopus)
  • Mobile robot with preliminary-announcement function of forthcoming motion using light-ray

    Takafumi Matsumaru, Takashi Kusada, Kazuya Iwase

    2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12     1516 - 1523  2006  [Refereed]

     View Summary

    This paper discusses the design and basic characteristics of the mobile robot PMR-1, which has a preliminary-announcement and display function that shows the forthcoming operation (the direction and speed of motion) to people around the robot by drawing the scheduled course on the running surface using a light ray. A laser pointer is used as the light source and its light is reflected by a mirror; the light ray is projected on the running surface, and the scheduled course is drawn by rotating the reflector around the pan and tilt axes. The preliminary-announcement and display unit of the developed robot can indicate the operation up to 3 seconds ahead, so the robot moves while drawing the scheduled course from the present to 3 seconds later. An experiment on coordination between the preliminary announcement and the movement was carried out, and we confirmed the correspondence of the announced course with the robot trajectory both when the movement path is given beforehand and when the robot is operated in real time with manual input from a joystick. We have thus validated the coordination algorithm between the preliminary announcement and the real movement.

  • Foreword to the Special Issue "From Informative Motion to Informative Kinesics of Human-Machine Systems" [in Japanese]

    Takafumi Matsumaru

    Journal of the Society of Biomechanisms   29 ( 3 ) 117 - 117  2005.08

    DOI CiNii

  • Application and Development of Informative Kinesics for Human-Machine System

    Takafumi Matsumaru

    Journal of the Society of Biomechanisms   29 ( 3 ) 139 - 145  2005.08  [Refereed]

     View Summary

    To eliminate the sense of fear or discomfort toward artificial machine systems, we believe it is necessary not merely to give them a humanoid or animal-like appearance, but to give them forms and motions that match human common sense and experiential/tacit knowledge, for example in the relation between size and performance/function. Specifically, for mobile robots and humanoid robots, we consider forms and motions that make the robot's next action and intention easy for surrounding people watching it to predict. This article first explains the research items and procedure for applying informative kinesics to human-machine systems, and then discusses some of the findings obtained from the materials collected so far, classified into (1) movement prediction from form and (2) movement prediction from continuous and preliminary motion, with reference to their application.

    DOI CiNii

  • Mobile robot with eyeball expression as the preliminary-announcement and display of the robot's following motion

    T Matsumaru, K Iwase, K Akiyama, T Kusada, T Ito

    AUTONOMOUS ROBOTS   18 ( 2 ) 231 - 246  2005.03  [Refereed]

     View Summary

    This paper explains the PMR-2R (prototype mobile robot - 2 revised), a mobile robot with eyeball expression as the preliminary announcement and display of the robot's following motion. First, we indicate the importance of the preliminary-announcement and display function of a mobile robot's following motion for the informational affinity between human beings and robots, explaining conventional methods and related work. We show the proposed four methods, which are categorized into two types: one indicates the state just after the present moment, and the other displays operations from the present to some future time continuously. Then we introduce the PMR-2R, which has an omni-directional display, the magicball, on which an eyeball expresses the robot's following direction and speed of motion at the same time. An evaluation experiment confirmed the efficiency of the eyeball expression in transferring the information; it also indicated that announcing around one or two seconds before the actual motion may be appropriate. Finally, we compare four types of eyeball expression: the one-eyeball type, the two-eyeball type, the will-o'-the-wisp type, and the armor-helmet type. The evaluation experiment demonstrated the importance of making the robot's front more intelligible, especially for announcing the robot's direction of motion.

    DOI

    Scopus

    22
    Citation
    (Scopus)
  • Examination on The Combination Control of Manual Operation and Autonomous Motion for Teleoperation of Mobile Robot Using a Software Simulation [in Japanese]

    Takafumi Matsumaru, Kiyoshi Hagiwara, Tomotaka Ito

    Transactions of the Society of Instrument and Control Engineers   41 ( 2 ) 157 - 166  2005.02  [Refereed]

     View Summary

    This paper examines the combination control of manual operation and autonomous motion to improve maneuverability and safety in the teleoperation of a mobile robot. Autonomous motions that support the manual operation by processing information from simple range sensors on the mobile robot are examined using a computer simulation. Three types of autonomous motion are proposed, revolution (RV), following (FL), and slowdown (SD), and the way to equip them on the system is examined. In revolution, the robot turns autonomously when it approaches an obstacle too closely. In following, the robot translates parallel to an obstacle, keeping its orientation, so as to go along the obstacle's form. In slowdown, the robot's translation speed is restricted according to the distance to the obstacle and the robot's translation speed. The features of each autonomous motion are clarified: transit time and mileage become shorter with revolution or following, and contact between robot and obstacle is almost entirely avoided with slowdown. When the distance at which an autonomous motion is applied is adjusted according to the translation speed of the mobile robot, excessive application of autonomous motion against the operator's intention is reduced, so maneuverability can be improved. When revolution or following is incorporated with slowdown, the number of near misses between the robot and obstacles is reduced even though the transit time is prolonged, so safety can be improved.

    DOI CiNii

  • Combination Control of Manual Operation and Autonomous Motion for Teleoperation of Mobile Robot:Suitable Autonomous Motion for Situation

    Takafumi Matsumaru, Kiyoshi Hagiwara, Tomotaka Ito

    Transactions of the Society of Instrument and Control Engineers   40 ( 9 ) 958 - 967  2004.09  [Refereed]

     View Summary

    This paper examines the combination control of manual operation and autonomous motion in the teleoperation of a mobile robot. The autonomous motion suitable for the situation in which the robot passes through a passage with bends is examined using a computer simulation to improve maneuverability. The situations in which the manually operated robot contacts the sidewall of a passage were investigated experimentally. It was found that contact tends to occur around the entrance and exit of bends: a robot tends to contact the inside wall near the entrance of a bend and the outside wall around its exit. The situations in which the operator uses the autonomous motions under combination control were also investigated experimentally: the operator tends to use autonomous following (FL) near the entrance, and autonomous revolution (RV) is effective for returning the robot to the center of the passage around the exit. From these situation analyses, selective-revolution/following (S-R/F) was developed, in which the situation is estimated from the direction of the operator's manual command and the direction of the obstacle from the robot, and either autonomous revolution or autonomous following is selected and applied accordingly. The new technique was implemented in the simulation system, and it was confirmed that autonomous revolution is not applied against the operator's intention, so maneuverability can be improved.

    DOI CiNii

  • The Human-Machine-Information System and the Robotic Virtual System

    Takafumi Matsumaru

    Journal of the Society of Instrument and Control Engineers   43 ( 2 ) 116 - 121  2004.02  [Refereed]

    DOI CiNii

  • Examination on a Software Simulation of the Method and Effect of Preliminary-announcement and Display of Human-friendly Robot's Following Action

    Takafumi Matsumaru, Shinnosuke Kudo, Hisashi Endo, Tomotaka Ito

    Transactions of the Society of Instrument and Control Engineers   40 ( 2 ) 189 - 198  2004.02  [Refereed]

     View Summary

    This paper examines the preliminary-announcement and display function of a human-friendly robot's following action and intention, especially the direction and speed of motion, for a mobile robot that moves on a 2-dimensional plane. We proposed two types of methods: methods indicating the state just after the present moment (lamp and party-blowouts) and methods displaying operations from the present to some future time continuously (beam of light and projector). A simulation system was developed to confirm the effectiveness of the preliminary announcement and display. Effectiveness is evaluated by chasing the mobile robot: the mobile robot moves about at a random speed in a random direction, and the subject moves an operation robot with a joystick while looking at the preliminary announcement on the mobile robot. The display method (lamp/blowout/beam) and the announcement timing (0.5-3.0 [s] before the robot's actual motion) are thus evaluated numerically by the position/direction gap. The data were processed not only as averages and standard deviations but also with two-way ANOVA and t-tests. Translation and rotation were examined separately, and then movement on a 2-dimensional plane was examined.

    We found that the method displaying from the present to some future time continuously (beam) is easy to understand, but the displayed path needs a certain length, which implies an appropriate timing depending on conditions. The optimal timing for indicating a state is about 1.0-1.5 [s] before the motion. If the time difference is too long, the position/direction gap becomes large due to memory limitations and operational mistakes; if it is too short, the operation is late owing to reaction delay. Moreover, people seem to understand information better from a deforming object than from a color-changing object, and continuous change is easier to understand than discrete change.

    DOI CiNii

  • The 8th Robotics Symposia

    MATSUMARU Takafumi

    Journal of The Society of Instrument and Control Engineers   42 ( 6 ) 527 - 527  2003

    DOI CiNii

  • Preliminary-announcement function of mobile robot's following motion by using omni-directional display

    T Matsumaru, K Iwase, T Kusada, K Akiyama, H Gomi, T Ito

    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS 2003, VOL 1-3     650 - 657  2003  [Refereed]

     View Summary

    A human-friendly robot that supports and assists human beings directly at their side is expected as society ages and the birthrate declines. In addition to the safety function of avoiding danger to human beings, we think an informational affinity function is important, such as announcing the robot's following motion to surrounding people before it begins to move. A person can anticipate and guess another person's motion using the functional features and behavioral patterns of the human body as common sense. This paper discusses the preliminary-announcement and display function of a mobile robot's following motion, the direction and speed of motion, using an omni-directional display, magicball (R). We have been developing the mobile robot PMR-2, which tells its following action with eyeball expressions.

  • Eyeball expression for preliminary-announcement of mobile robot's following motion

    T Matsumaru, K Iwase, T Kusada, H Gomi, T Ito, K Akiyama

    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS 2003, VOL 1-3     797 - 803  2003  [Refereed]

     View Summary

    A human-friendly robot that supports and assists human beings directly at their side is expected as society ages and the birthrate declines. In addition to the safety function of avoiding danger to human beings, we think an informational affinity function is important, such as announcing the robot's following motion to surrounding people before it begins to move. A person can anticipate and guess another person's motion using the functional features and behavioral patterns of the human body as common sense. This paper discusses the preliminary-announcement and display function of a mobile robot's following motion, the direction and speed of motion, using an omni-directional display, magicball (R). It examines four types of eyeball expression: the one-eyeball type, the two-eyeball type, the will-o'-the-wisp type, and the armor-helmet type.

  • Simulation on preliminary-announcement and display of mobile robot's following action by lamp, party-blowouts, or beam-light

    T Matsumaru, S Kudo, T Kusada, K Iwase, K Akiyama, T Ito

    PROCEEDINGS OF THE 2003 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM 2003), VOLS 1 AND 2     771 - 777  2003  [Refereed]

     View Summary

    This paper examines the preliminary-announcement and display function of a robot's following action and intention, especially the direction and speed of motion, for a mobile robot that moves on a 2-dimensional plane. Based on the results of the fundamental examination, the validity of preliminarily announcing and displaying at around 1 second before the actual motion has been verified on a 2-dimensional plane. Moreover, we have found that people tend to understand the change of speed of the mobile robot better from a deforming object than from a color-changing object. Comparing the two types of proposed methods, the efficiency of presenting continuous information by drawing with a beam of light, rather than displaying a momentary state by lighting a lamp or using blowouts, has been demonstrated. Moreover, we have found that the scheduled path needs a certain length for a person to understand the mobile robot's following action.

    DOI

    Scopus

    5
    Citation
    (Scopus)
  • Robot-to-human communication of mobile robot's following motion using eyeball expression on omni-directional display

    T Matsumaru, K Akiyama, K Iwase, T Kusada, H Gomi, T Ito

    PROCEEDINGS OF THE 2003 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM 2003), VOLS 1 AND 2     790 - 796  2003  [Refereed]

     View Summary

    A human-friendly robot that supports and assists human beings directly at their side is expected as society ages and the birthrate declines. In addition to the safety function of avoiding danger to human beings, we think an informational affinity function is important, such as announcing the robot's following motion to surrounding people before it begins to move. A person can anticipate and guess another person's motion using the functional features and behavioral patterns of the human body as common sense. This paper discusses the preliminary-announcement and display function of a mobile robot's following motion, the direction and speed of motion, using an omni-directional display, magicball (R). It examines four types of eyeball expression: the one-eyeball type, the two-eyeball type, the will-o'-the-wisp type, and the armor-helmet type.

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • Synchronization of mobile robot's movement and preliminary-announcement using omni-directional display

    T Matsumaru, K Iwase, T Kusada, K Akiyama, H Gomi, T Ito

    PROCEEDINGS OF THE 2003 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM 2003), VOLS 1 AND 2     246 - 253  2003  [Refereed]

     View Summary

    A human-friendly robot that supports and assists human beings directly at their side is expected as society ages and the birthrate declines. In addition to the safety function of avoiding danger to human beings, we think an informational affinity function is important, such as announcing the robot's following motion to surrounding people before it begins to move. A person can anticipate and guess another person's motion using the functional features and behavioral patterns of the human body as common sense. This paper discusses the preliminary-announcement and display function of a mobile robot's following motion, the direction and speed of motion, using an omni-directional display, magicball (R). We have been developing the mobile robot PMR-2, which tells its following action with eyeball expressions.

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Examination by software simulation on preliminary-announcement and display of mobile robot's following action by lamp or blowouts

    T Matsumaru, H Endo, T Ito

    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS     362 - 367  2003  [Refereed]

     View Summary

    This paper discusses the preliminary-announcement and display function of a robot's following action and intention, especially the direction and speed of motion, for a mobile robot that moves on a 2-dimensional plane. We started with a software simulation of the announcement method indicating the state just after the present moment, with lighting lamps and with blowouts (elastic arrows), on translation and rotation separately. As a result, the following three remarks were obtained: not only the direction of motion but also the speed of motion is effective for estimating the following translation; not the rotation speed but the rotation angle (the target direction) is efficient for understanding the following revolution; and around 1 second before the actual motion is the optimal timing for recognizing the mobile robot's following motion, both in translation and in rotation. Moreover, we have verified the validity of the preliminary announcement and display for general movement on a plane.

    DOI

  • Advanced autonomous action elements in combination control of remote operation and autonomous control

    T Matsumaru, K Hagiwara, T Ito

    IEEE ROMAN 2002, PROCEEDINGS     29 - 34  2002  [Refereed]

     View Summary

    This paper examines combination control, in which remote operation is combined with autonomous behaviors, with the aim of realizing the remote operation of a mobile robot that moves in a human-coexisting environment. We consider the distance and direction to an obstacle and the speed of motion of the mobile robot for revolution, following, and slowdown, which we have proposed as the autonomous action elements in combination control. Fuzzy reasoning and vector components are used. Experiments with three subjects yielded almost the same results: when the distance and direction to an obstacle and the speed of motion of the mobile robot are considered, there is no great difference in following, but mileage becomes shorter in revolution and transit time is reduced in slowdown.
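
    For illustration only: a minimal sketch of blending a manual command with the slowdown element, where a trapezoidal membership function stands in for the fuzzy reasoning named above and caps speed by obstacle distance; revolution and following would modify direction analogously. The thresholds are assumed, not the paper's.

        def slowdown_gain(obstacle_dist_m, near=0.3, far=1.0):
            """Trapezoidal membership: 0 at/below `near`, 1 at/past `far`."""
            if obstacle_dist_m <= near:
                return 0.0
            if obstacle_dist_m >= far:
                return 1.0
            return (obstacle_dist_m - near) / (far - near)

        def combined_command(manual_vx, manual_vy, obstacle_dist_m):
            """Manual operation keeps priority; autonomy only scales it down."""
            g = slowdown_gain(obstacle_dist_m)
            return manual_vx * g, manual_vy * g

        print(combined_command(0.5, 0.2, obstacle_dist_m=0.65))  # half speed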

    DOI

    Scopus

    3
    Citation
    (Scopus)
  • Incorporation of autonomous control elements in combination control of remote operation and autonomous control

    T Matsumaru, K Hagiwara, T Ito

    IECON-2002: PROCEEDINGS OF THE 2002 28TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, VOLS 1-4     2311 - 2316  2002  [Refereed]

     View Summary

    This paper examined combination control, in which remote operation is combined with autonomous behaviors, with the aim of realizing the remote operation of a mobile robot that moves in a human-coexisting environment. For revolution, following, and slowdown, which we had proposed as the autonomous action elements in combination control, we examined versions that take into consideration the distance and direction to an obstacle and the speed of motion of the mobile robot at that time; fuzzy reasoning and vector components are used to achieve this. Although there was no big difference in following, mileage became shorter in revolution and transit time was shortened in slowdown. Furthermore, in order to avoid near misses with obstacles completely, slowdown was incorporated with revolution or following. A verification experiment in the software simulation was carried out with the same subjects; although the transit time became a little longer, no other adverse effect appeared.

    DOI

    Scopus

  • Examination on Virtual Environment for Preliminary-Announcement and Display of Human-Friendly Mobile Robot

    Takafumi Matsumaru, Kiyoshi Hagiwara

    Proceedings of 6th Joint International Conference on Advanced Science and Technology (JICAST 2001), [Zhejiang University, Hangzhou, China]     169 - 172  2001.12

    CiNii

  • Examination on Virtual Environment for Combination Control of Human-Friendly Remote-Operated Mobile Robot

    Takafumi Matsumaru, Shin’ichi Ichikawa

    Proceedings of 6th Joint International Conference on Advanced Science and Technology (JICAST 2001), [Zhejiang University, Hangzhou, China]     177 - 180  2001.12

  • Combination Control of Remote Operation with Autonomous Behavior in Human-Friendly Mobile Robot

    Takafumi Matsumaru, Shin’ichi Ichikawa

    Proceedings of the 10th International Conference on Advanced Robotics (ICAR 2001), [Hotel Mercure Buda, Budapest, Hungary]     567 - 572  2001.08  [Refereed]

    CiNii

  • Method and Effect of Preliminary-Announcement and Display for Translation of Mobile Robot

    Takafumi Matsumaru, Kiyoshi Hagiwara

    Proceedings of the 10th International Conference on Advanced Robotics (ICAR 2001), [Hotel Mercure Buda, Budapest, Hungary]     573 - 578  2001.08  [Refereed]

    CiNii

  • Dynamic Brief-to-Precise Strategy for Human-Friendly NeuRobot

    Takafumi Matsumaru

    Proceedings of the 32nd ISR (International Symposium on Robotics), [Seoul, Korea]     526 - 531  2001.04  [Refereed]

  • Preliminary announcement and display for human-friendly mobile robot

    T Matsumaru, Y Terasawa

    MOBILE ROBOT TECHNOLOGY, PROCEEDINGS     221 - 226  2001  [Refereed]

     View Summary

    This paper describes the preliminary announcement and display of the action and intention of a mobile robot that moves in a human-coexisting environment. First, the affinity needed to realize a human-friendly robot is mentioned. Next, the usefulness of a remote-operated human-friendly mobile robot is shown by explaining several applications. It is then argued, with reference to conventional mobile machines and remote-operated mobile robots, that informational affinity is necessary, i.e., the function of announcing and displaying the robot's following action and intention to human beings beforehand. Finally, a method using a "blowout" is proposed to inform people of both the speed and the direction of motion simultaneously. Copyright (C) 2001 IFAC.

  • Preliminary-announcement and display for translation and rotation of human-friendly mobile robot

    T Matsumaru, K Hagiwara

    ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS     213 - 218  2001  [Refereed]

     View Summary

    This paper discusses the preliminary-announcement and display method of the action and intention of a robot that moves in a human-coexisting environment. It is pointed out that the speed and direction of motion are important for announcing the action of a mobile robot. A method using a blowout is proposed to tell surrounding people these two kinds of information. The method of announcing the speed of motion in translation is examined using a software simulator, comparing the cases in which the robot is equipped with a blowout, a lamp, or no apparatus. Consequently, the effectiveness of the preliminary announcement and display using the blowout and the importance of information about the speed of motion are confirmed. Moreover, the method of announcing the direction of motion in rotation is examined, comparing announcing the target direction with announcing the rotation speed. As a result, it is found that the target direction is the more efficient announcement for the rotation of a mobile robot.

    DOI

    Scopus

  • Trial Experiment of the Learning by Experience System on Mechatronics using LEGO MindStorms

    Takafumi Matsumaru, Chieko Komatsu, Toshikazu Minoshima

    Proceedings of International Conference on Machine Automation (ICMA2000), [Osaka Institute of Technology, Osaka, Japan]     207 - 212  2000.09  [Refereed]

  • Action strategy for remote operation of mobile robot in human coexistence environment

    T Matsumaru, A Fujimori, T Kotoku, K Komoriya

    IECON 2000: 26TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, VOLS 1-4     1 - 6  2000  [Refereed]

     View Summary

    This paper proposes three action strategies for the remote operation of a mobile robot in a human-coexistence environment, as an application of network robotics toward human-coexistence robots. Navigating a mobile robot from a remote site using only picture information is very difficult, and realizing a human-coexistence type robot is a challenging subject. We therefore propose (1) a combination control method of remote operation and autonomous motion with priority on the remote operation, (2) a method extending the task-based data exchange we have proposed, and (3) a method of sharing information between coexisting humans and the mobile robot by the preliminary announcement and display of the robot's action intention.

    DOI

    Scopus

    6
    Citation
    (Scopus)
  • Task-based Data Exchange for Teleoperation Through Communication Network [in Japanese]

    Takafumi Matsumaru, Shunichi Kawabata, Tetsuo Kotoku, Nobuto Matsuhira, Kiyoshi Komoriya, Kazuo Tanie, Kunikatsu Takase

    Journal of the Robotics Society of Japan   17 ( 8 ) 1114 - 1125  1999.11  [Refereed]

     View Summary

    This paper proposes task-based data exchange for teleoperation systems through a communication network as an efficient method of transmitting data between an operation device and a remote robot. In task-based data exchange, information that is more important for the contents and conditions of the task the robot performs is given transmission priority, for example by altering the contents of the transmitted data. We built an experimental system in which a master arm in Tsukuba and a slave arm in Kawasaki are connected through N-ISDN, using standard techniques such as TCP/IP, sockets, and JPEG. A series of experimental tasks, crank operation consisting of grasping and revolution, was effectively carried out with task-based data exchange. The communication network with capacity limitations was used effectively, and high maneuverability in real time with bilateral servo control was realized, confirming the effectiveness of task-based data exchange.
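
    For illustration only: a schematic sketch of the priority idea, in which the payload composition is chosen from the current task phase so a capacity-limited channel carries the most important data first. The phase names, field contents, byte budget, and JSON packing are all illustrative assumptions, not the experimental system's protocol.

        import json

        # Which data to pack first in each task phase (assumed ordering).
        PRIORITY = {
            "grasp":      ["force", "pose", "image"],  # contact forces first
            "revolution": ["pose", "force", "image"],  # crank tracking first
        }

        def pack_update(phase, data, budget_bytes=256):
            """Add fields in priority order until the byte budget is spent."""
            payload = {}
            for key in PRIORITY[phase]:
                trial = dict(payload, **{key: data[key]})
                if len(json.dumps(trial).encode()) > budget_bytes:
                    break
                payload = trial
            return json.dumps(payload).encode()

        data = {"force": [1.2, 0.1, 3.4], "pose": [0.5] * 6, "image": "x" * 500}
        print(pack_update("grasp", data))  # the large image field is dropped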

    DOI CiNii

  • Remote Collaboration Through Time Delay in Multiple Teleoperation

    Kohtaro Ohba, Shun'ichi Kawabata, Nak Young Chong, Kiyoshi Komoriya, Takafumi Matsumaru, Nobuto Matsuhira, Kunikatsu Takase, Kazuo Tanie

    Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'99), [Kyongju, Korea]     1866 - 1871  1999.10  [Refereed]

    DOI

  • Workability Estimation of Remote Operation through Communication Circuit

    Takafumi Matsumaru, Shin’ichi Kawabata, Tetsuo Kotoku, Nobuto Matsuhira, Kiyoshi Komoriya, Kazuo Tanie, Kunikatsu Takase

    Proceedings of The 9th International Conference on Advanced Robotics ('99ICAR), [Keidanren Hall, Tokyo, Japan]     231 - 238  1999.10

  • Teleoperation Through ISDN Communication Network

    Takafumi Matsumaru

    Journal of the Robotics Society of Japan   17 ( 4 ) 481 - 485  1999.05  [Refereed]

    DOI CiNii

  • Task-based data exchange for remote operation system through a communication network

    T Matsumaru, S Kawabata, T Kotoku, N Matsuhira, K Komoriya, K Tanie, K Takase

    ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-4, PROCEEDINGS     557 - 564  1999  [Refereed]

     View Summary

    This paper proposes task-based data exchange for teleoperation systems through a communication network as an efficient method of transmitting data between an operation device and a remote robot. In task-based data exchange, information that is more important for the contents and conditions of the task the robot performs is given transmission priority, for example by altering the contents of the transmitted data. We have built an experimental system in which the master arm in Tsukuba and the slave arm in Kawasaki, separated by about 100 km, are connected through N-ISDN, using standard techniques such as TCP/IP, sockets, and JPEG. A series of experimental tasks, crank operation consisting of grasping and revolution, was effectively carried out by task-based data exchange. The communication network with capacity limitations was used effectively, and high maneuverability in real time with bilateral servo control was realized. The effectiveness of task-based data exchange has been confirmed.

    DOI

  • Technical Targets of Human Friendly Robots

    Research Committee on Human Friendly Robot (Takatoshi Nozaki, Yoji Yamada, Tsukasa Ogasawara, Shigeki Sugano, Masakatsu Fujie, Takafumi Matsumaru)

    Journal of the Robotics Society of Japan   16 ( 3 ) 288 - 294  1998.04  [Refereed]

    DOI CiNii

  • A Study of Configuration Recognition and Workability Judgment Method for Modular Manipulator [in Japanese]

    Takafumi Matsumaru, Nobuto Matsuhira

    Journal of the Robotics Society of Japan   15 ( 3 ) 408 - 416  1997.04  [Refereed]

     View Summary

    A design concept of TOMMS, the TOshiba Modular Manipulator System, has already been proposed to achieve a modular manipulator system that can be assembled into any desired configuration to provide adaptability to tasks, using as few kinds and types of modules as possible and without special handling such as modification of control software. To realize the concept, we developed a constitution- and configuration-recognition method for the assembled manipulator using electric resistance, which is simple, practical, and reliable. Moreover, to actualize a system that can offer the manipulator constitution and configuration best suited to the desired task, we developed a workability judgment method considering the degeneracy of the manipulator's degrees of freedom (d.o.f.) and the conditions of the desired task. These methods were applied to the trial system TOMMS-1, and their efficiency and practicality were confirmed.

    DOI CiNii

  • Modular Design Scheme for Robot Manipulator Systems

    Takafumi Matsumaru

    The 3rd International Symposium on Distributed Autonomous Robotic Systems (DARS'96), [Wakoh, Japan]    1996.10  [Refereed]

  • Design Disquisition on Modular Robot Systems

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics   8 ( 5 ) 408 - 419  1996.10  [Refereed]  [Domestic journal]

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Corresponding-to-Operation-Motion Type Control Method for Remote Master-Slave Manipulator System

    Takafumi Matsumaru

    Proceedings of the 3rd International Conference on Motion and Vibration Control: MOVIC, [Makuhari, Japan]     204 - 208  1996.09

  • Design and Control of the Modular Manipulator System : TOMMS [in Japanese]

    Takafumi Matsumaru, Nobuto Matsuhira

    Journal of the Robotics Society of Japan   14 ( 3 ) 428 - 435  1996.04  [Refereed]

     View Summary

    The TOshiba Modular Manipulator System, TOMMS, consists of joint modules, link modules, and a control unit with a joystick. In the trial manufacturing, a manipulator with 3 d.o.f. is assembled from three joint modules and optional link modules into any desired configuration and shape, for instance a horizontal type or a vertical type. The assembled manipulator is connected to the control unit, and the position of the end tip of the manipulator is controlled with the joystick without special handling. There is only one type of joint module and one type of link module; the joint module has three input ports and two output ports, and the distance between the fore side and the back side of the link module is adjustable. The Jacobian matrix is applied in the control software. Control experiments were carried out, and the efficiency of the TOMMS design concept for mechanical hardware and control software was confirmed.
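
    For illustration only: a minimal resolved-rate sketch of the kind of Jacobian-based tip control described, for an assumed planar 3-d.o.f. chain with placeholder link lengths; TOMMS itself derives its kinematics from the recognized module configuration.

        import numpy as np

        def planar_jacobian(q, links=(0.3, 0.25, 0.2)):
            """2x3 tip-position Jacobian of a planar 3-joint arm."""
            cum = np.cumsum(q)  # absolute angle of each link
            J = np.zeros((2, 3))
            for i in range(3):
                for j in range(i, 3):
                    J[0, i] -= links[j] * np.sin(cum[j])
                    J[1, i] += links[j] * np.cos(cum[j])
            return J

        def joint_rates_for_tip_velocity(q, v_tip):
            """Resolved-rate control: qdot = pinv(J) @ v_tip."""
            return np.linalg.pinv(planar_jacobian(q)) @ np.asarray(v_tip)

        q = np.array([0.3, 0.6, -0.4])  # current joint angles [rad]
        print(joint_rates_for_tip_velocity(q, [0.05, 0.0]))  # tip +x at 5 cm/s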

    DOI CiNii

  • Remote Operation Method for Manipulators which Control the Pressure Force [in Japanese]

    Takafumi Matsumaru, Nobuto Matsuhira

    Journal of the Robotics Society of Japan   14 ( 2 ) 255 - 262  1996.03  [Refereed]

     View Summary

    This paper describes a control method for manipulators that work by pressing the end-effector against the workpiece under constant force (e.g., grinding and cleaning) while positioning the end tip anywhere on the workpiece using the operation device. Based on ergonomics, "the operator coordinate system" is introduced, determined from the operator's line of sight to the workpiece and both eyes. Further, "the corresponding-to-operational-motion type control method" is proposed, in which the direction of motion of the operation device and the resulting direction of motion of the end-effector are matched in the operator coordinate system. Especially for workpieces with a wave shape, "the corresponding-to-objective-shape type control method" is designed, in which the winding line and the valley line of the wave are recognized during the work and the directions of motion of the operation device are made to correspond to these lines. These methods were applied to a remote control system comprising the joystick and a lightweight manipulator, and their efficiency was confirmed.

    DOI CiNii

  • Recognition of constitution/configuration and workability judgment for the modular manipulator system, TOMMS

    T Matsumaru

    PROCEEDINGS OF THE 1996 IEEE IECON - 22ND INTERNATIONAL CONFERENCE ON INDUSTRIAL ELECTRONICS, CONTROL, AND INSTRUMENTATION, VOLS 1-3     493 - 500  1996  [Refereed]

     View Summary

    A design concept of TOMMS, the TOshiba Modular Manipulator System, has already been proposed to achieve a modular manipulator system that can be assembled into any desired constitution and configuration to provide adaptability to tasks, using as few kinds and types of modules as possible and without special handling such as modification of control software. To realize the concept, we developed a constitution- and configuration-recognition method for the assembled manipulator using electric resistance, which is simple, practical, and reliable. Moreover, we developed a workability judgment method considering the degeneracy of the manipulator's degrees of freedom (d.o.f.) and the conditions of the desired task. These methods were applied to the trial system TOMMS-1, and their efficiency and practicality were confirmed.

    DOI

  • Design and control of the modular robot system: TOMMS

    T MATSUMARU

    PROCEEDINGS OF 1995 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3     2125 - 2131  1995  [Refereed]

    DOI

    Scopus

    78
    Citation
    (Scopus)
  • Development of Windshield Cleaning Robot System [in Japanese]

    Takafumi Matsumaru, Nobuto Matsuhira

    Journal of the Robotics Society of Japan   12 ( 5 ) 743 - 750  1994.07  [Refereed]

     View Summary

    This paper describes the development of the Windshield Cleaning Robot System (WSC). The system is intended for use on a Boeing 747, commonly called the jumbo jet, parked at an airport prior to service. The objects to be cleaned are spots on the windshields caused by collisions with dust, insects, and birds during takeoff and landing. The intention of this new system is that one operator performs the whole work in 10 minutes. The system consists of the manipulator (the arm and the cleaning device), the installation unit, the control unit, and the operation unit. A position and force control method is applied: the target position of the arm tip is modified using the signals from the force sensor and the joystick. With this control method, the pressing force is kept constant and the tip moves so as to follow the shape of the windshields. Various safety features are provided, including an interference limit that restricts the area of movement. System experiments were carried out, and the effectiveness of applying a lightweight manipulator with long arms to this work was confirmed.
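
    For illustration only: a minimal admittance-style sketch of the position-and-force idea described, in which the target position along the surface normal is nudged to keep the measured pressing force near a setpoint while the joystick moves the tip tangentially. The gain and setpoint are assumed placeholders.

        def update_target(target_normal_mm, measured_force_n,
                          force_setpoint_n=10.0, compliance_mm_per_n=0.5):
            """Retreat when pressing too hard, advance when too lightly, so the
            tip follows the windshield surface at roughly constant force."""
            error = force_setpoint_n - measured_force_n
            return target_normal_mm + compliance_mm_per_n * error

        target = 0.0
        for f in (6.0, 9.0, 11.5):   # force-sensor readings over three cycles
            target = update_target(target, f)
            print(round(target, 2))  # target settles toward constant force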

    DOI CiNii

  • WINDSHIELD CLEANING ROBOT SYSTEM - WSC

    T MATSUMARU, N MATSUHIRA, M JINNO

    IROS '94 - INTELLIGENT ROBOTS AND SYSTEMS: ADVANCED ROBOTIC SYSTEMS AND THE REAL WORLD, VOLS 1-3     1964 - 1971  1994  [Refereed]

    DOI

▼display all

Books and Other Publications

  • "Design and Evaluation of Handover Movement Informing Reciever of Weight Load", in S. Bandyopadhyay, G. Saravana Kumar, et al (Eds): "Machines and Mechanisms"

    Takafumi Matsumaru

    Narosa Publishing House (New Delhi, India)  2011.11 ISBN: 9788184871920

  • "Study on Handover Movement Informing Receiver of Weight Load as Informative Motion of Human-friendly Robot", in Salvatore Pennacchio (ed.): "Emerging Technologies, Robotics and Control Systems - Third edition"

    Takafumi Matsumaru, Shigehisa Suzuki

    INTERNATIONALSAR (Palermo, Italy, EU)  2009.06 ISBN: 9788890192883

  • "Mobile Robot with Preliminary-announcement and Indication Function of Upcoming Operation just after the Present", in Salvatore Pennacchio (ed.): "Recent Advances in Control Systems, Robotics and Automation- Third edition Volume 2"

    Takafumi Matsumaru

    INTERNATIONALSAR (Palermo, Italy, EU)  2009.01 ISBN: 9788890192876

  • "生体機械機能工学(バイオメカニズム学会編 バイオメカニズム・ライブラリー)"

    松丸隆文

    東京電機大学出版局  2008.10 ISBN: 9784501417505

  • "Chapter 18 - Mobile Robot with Preliminary-Announcement and Indication of Scheduled Route and Occupied Area using Projector", in Aleksandar Lazinica (ed.): "Mobile Robots Motion Planning, New Challenges"

    Takafumi Matsumaru

    I-Tech Education and Publishing (Vienna, Austria, EU)  2008.07 ISBN: 9783902613356

  • "Chapter 4 - Preliminary-Announcement Function of Mobile Robots’ Upcoming Operation", in Xing P. Gu&ocirc; (ed.): "Robotics Research Trends"

    Takafumi Matsumaru

    Nova Science Publishers (Hauppauge, NY, USA)  2008.05 ISBN: 1600219977

  • "Lesson10 遠隔操作システム", in "Webラーニングプラザ「事例に学ぶロボティックス」"

    松丸隆文, 伊藤友孝

    (独)科学技術振興機構JST  2002.03

  • "Granularity and Scaling in Modularity Design for Manipulator Systems", in H.Asama, T.Fykuda, T.Arai, I.Endo (Eds.): "Distributed Autonomous Robotic Systems 2"

    Takafumi Matsumaru

    Springer-Verlag  1996.11 ISBN: 4431701907

▼display all

Works

  • Waseda Open Innovation Forum 2021 (WOI’21) (Sponsor: Research Organization for Open Innovation Strategy), [hosted virtually]

    Bio-Robotics & Human-Mechatronics (T.Matsumaru) Laboratory  Artistic work 

    2021.03
     
     

  • Waseda Open Innovation Forum 2020 (WOI’20) (Sponsor: Research Organization for Open Innovation Strategy), [Waseda Arena]

    T.Matsumaru Laboratory  Artistic work 

    2020.03
     
     

  • International Robot Exhibition 2017 (iREX 2017) RT Plaza, [Tokyo International Exhibition Center], RT09, (2017.11.29-12.02).

    Bio-Robotics, Human-Mechatronics Laboratory, Waseda University  Artistic work 

    2017.11
    -
    2017.12

     View Summary

    (1) 3DAHII (3D Aerial Holographic Image Interface).
    [*] Asyifa Imanda SEPTIANA (M2), Duc THAN (M2), Ahmed FARID (M2), Kazuki HORIUCHI (M1), Takafumi MATSUMARU.

  • International Robot Exhibition 2015 (iREX 2015) RT Plaza, [Tokyo International Exhibition Center], RT-04, (2015.12.02-12.05).

    Bio-Robotics & Human-Mechatronics Laboratory (T. Matsumaru Lab), Graduate School of Information, Production and Systems, Waseda University  Artistic work 

    2015.12
     
     

     View Summary

    (1) IDAT-3 (Image-projective Desktop Arm Trainer).
    (2) CSLSS-1 (Calligraphy-Stroke Learning Support System).

  • International Home Care & Rehabilitation Exhibition 2014 (H.C.R.2014), [Tokyo International Exhibition Center], 5-16-05, (2014.10.01-10.03).

    T.Matsumaru laboratory  Artistic work 

    2014.10
    -
     

  • International Robot Exhibition 2013 (iREX 2013) RT Plaza, [Tokyo International Exhibition Center], SRT-21, (2013.11.06-11.09).

    Artistic work 

    2013.11
    -
     

  • 13th Industry-Academia Cooperation Fair, [Kitakyushu Science and Research Park], (2013.10.23-10.25).

    Artistic work 

    2013.10
    -
     

  • 12th Industry-Academia Cooperation Fair, [Kitakyushu Science and Research Park], (2012.10)

    Artistic work 

    2012.10
    -
     

  • The 49th Annual Meeting of the Japanese Association of Rehabilitation Medicine, [Fukuoka International Congress Center], (2012.05.31-06.02)

    Artistic work 

    2012.05
    -
     

  • Kitakyushu Science and Research Park press tour (Kitakyushu Foundation for the Advancement of Industry, Science and Technology, FAIS), [Kitakyushu Science and Research Park Conference Center]

    Artistic work 

    2012.02
    -
     

  • 2011 International Robot Exhibition, [Tokyo International Exhibition Center], SRT-5, (2011.11.09-11.12).

    Artistic work 

    2011.11
    -
     

  • 11th Industry Academia Cooperation Fair, [Kitakyushu Science and Research Park], (2011.10.19-10.21)

    Artistic work 

    2011.10
    -
     

  • 51st West Japan Machine Tool & Industry System Fair, [West Japan General Exhibition Center], (2011.06.23-25)

    Artistic work 

    2011.06
    -
     

  • 2009 International Robot Exhibition, [Tokyo International Exhibition Center], (2009.11.25-11.28)

    Artistic work 

    2009.11
    -
     

  • The 4th Motion Media Contents Contest, [Iidabashi, Tokyo], (2008.07).

    Artistic work 

    2008.07
    -
     

  • 2007 International Robot Exhibition, [Tokyo International Exhibition Center], (2007.11.28-12.01)

    Artistic work 

    2007.11
    -
     

  • Innovation Japan 2007, [Tokyo International Forum], (2007.09.12-09.14)

    Artistic work 

    2007.09
    -
     

  • High-Tech Event for High School Students / JSME Tokai Branch, [TOYOTA Commemorative Museum of Industry and Technology], (2007.08.03)

    Artistic work 

    2007.08
    -
     

  • The 2nd Motion Media Contents Contest, [University of Electro-Communications], (2006.06.28)

    Artistic work 

    2006.06
    -
     

  • 2005 International Robot Exhibition, [Tokyo International Exhibition Center], (2005.11.30-12.03)

    Artistic work 

    2005.11
    -
     

  • Character Robot 2005, [Intex Osaka], (2005.07.16-17).

    Artistic work 

    2005.07
    -
     

  • The 1st Motion Media Contents Contest, [NTT Musashino], (2005.06.22)

    Artistic work 

    2005.06
    -
     

  • International New Technology Fair 2002, [Tokyo International Exhibition Center], (2002.09.25-09.27)

    Artistic work 

    2002.09
    -
     

  • Robot Station 2002, [Matsuzakaya, Shizuoka, Japan], (2002.08.01-08.06)

    Artistic work 

    2002.08
    -
     


Presentations

  • Chinese Number Gestures Recognition using Finger Joint Detection and Hand Shape Description

    Yingchuang YANG, Takafumi MATSUMARU

    ISIPS 2020 (14th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan] (12-13 November, 2020) 

    Presentation date: 2020.11

    Event date:
    2020.11
     
     
  • Recognition of Football’s Handball Foul Based on Depth Data in Real-time

    Pukun JIA, Takafumi MATSUMARU

    ISIPS 2020 (14th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan] (12-13 November, 2020) 

    Presentation date: 2020.11

    Event date:
    2020.11
     
     
  • Image Segmentation and Brand Recognition in the Robot Picking Task

    Chen Zhu, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)  (Kitakyushu)  Waseda-IPS

    Presentation date: 2018.11

     View Summary

    Computer-vision-guided picking systems have been developed for decades; however, picking multiple randomly ordered objects is still not handled well by existing software. In this research, drinking bottles of 6 brands placed at random are to be picked up by a 6-degree-of-freedom robot. The bottles need to be classified by brand before being picked from the container. In this article, Mask R-CNN, a deep-learning-based image segmentation network, is used to process the image taken from a normal camera, and Inception v3 is used for the brand recognition task. Mask R-CNN is trained on the COCO dataset to detect the bottles and generate a mask on each of them. For the brand recognition task, 150-200 images are first taken or found for each brand and then augmented to 1000 images per brand. As a result, the segmented images can be labeled with the brand name with at least 80% accuracy in the experimental environment.
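
    The two-stage pipeline described in the summary can be pictured with off-the-shelf models. Below is a minimal sketch, assuming pretrained torchvision networks as stand-ins for the authors' trained models; the COCO "bottle" class index and the untrained 6-class Inception head are assumptions, not values from the paper.

```python
# Illustrative two-stage pipeline (not the authors' trained models):
# Mask R-CNN proposes bottle regions, Inception v3 labels each crop by brand.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

COCO_BOTTLE = 44  # "bottle" in torchvision's COCO category list (assumption)

detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.inception_v3(num_classes=6, init_weights=True).eval()
# In the paper the classifier is fine-tuned on ~1000 augmented images per
# brand; here the untrained 6-class head only marks where it would plug in.

def pick_candidates(image_path, score_thr=0.8):
    img = F.to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        det = detector([img])[0]
    results = []
    for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
        if label.item() != COCO_BOTTLE or score < score_thr:
            continue
        x0, y0, x1, y1 = (int(v) for v in box)
        if x1 <= x0 or y1 <= y0:
            continue
        crop = F.resize(img[:, y0:y1, x0:x1], [299, 299])  # Inception input size
        with torch.no_grad():
            brand = classifier(crop.unsqueeze(0)).argmax(1).item()
        results.append((box.tolist(), brand))
    return results  # [(bounding box, predicted brand id), ...]
```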

  • Control Mobile Robot using Single 2d-Camera with New Proposed Camera Calibration Method

    Haitham Al Jabri, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)  (Kitakyushu)  Waseda-IPS

    Presentation date: 2018.11

     View Summary

    This paper presents a summary of mobile robot control using 2D-vision feedback with a newly proposed camera calibration method. The main goal is to highlight 2D-vision feedback to be analyzed and used as the main pose feedback of a mobile robot. We use a mobile robot with two active omni-wheels and a single 2D webcam in our experiments. Our main approach is to tackle the present limitations of using feature points from a single 2D camera for pose estimation, such as non-static environments and data stability. The results discuss these issues and point out the strengths and weaknesses of the techniques used. First, we use ORB (Oriented FAST and Rotated BRIEF) feature-point detection and a BF (Brute-Force) matcher to detect and match points in different frames, respectively. Second, we use FAST (Features from Accelerated Segment Test) corners and LK (Lucas-Kanade) optical flow to detect corners and track their flow across frames. Those points and corners are then used for pose estimation through an optimization process with (a) Zhang's calibration method using a chessboard pattern and (b) our proposed method using reinforcement learning in offline mode.
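
    The two feature pipelines named in the summary map directly onto standard OpenCV calls. A minimal sketch follows; the parameter values are illustrative, not the authors' settings.

```python
# A minimal OpenCV sketch of the two feature pipelines named above:
# ORB keypoints matched by brute force, and FAST corners tracked with
# Lucas-Kanade optical flow.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
fast = cv2.FastFeatureDetector_create(threshold=25)

def orb_matches(prev_gray, gray):
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(gray, None)
    if des1 is None or des2 is None:
        return [], kp1, kp2
    return sorted(bf.match(des1, des2), key=lambda m: m.distance), kp1, kp2

def lk_tracks(prev_gray, gray):
    corners = fast.detect(prev_gray, None)
    if not corners:
        return np.empty((0, 1, 2)), np.empty((0, 1, 2))
    pts = np.float32([k.pt for k in corners]).reshape(-1, 1, 2)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    ok = status.reshape(-1) == 1
    return pts[ok], nxt[ok]  # corresponding point pairs for pose estimation
```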

  • Intuitive Control of Virtual Robots using Leap Motion Sensor

    Rajeevlochana Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)  (Kitakyushu)  Waseda-IPS

    Presentation date: 2018.11

     View Summary

    Serial robots used in industry can be controlled by various means, such as joint and Cartesian jogging using dedicated teach pendants, or by offline and online programming in a software environment. They can also be controlled using the master-slave manipulation technique, where an exoskeleton acts as the master and the robot as the slave, mimicking the motion of the exoskeleton worn by a user; however, an exoskeleton has to be worn by the user, and excessive usage may cause fatigue. A recently developed sensor named Leap Motion can detect the motion of a user's hands with sub-millimeter accuracy. In the proposed work, a Leap Motion sensor has been used to track one of the user's hands. The incremental motion of the tip of the index finger is used as the Cartesian increment of the end-effector of a virtual robot in the RoboAnalyzer software. Work is currently underway to detect the orientation of two or three fingers and accordingly control the orientation of the robot's end-effector. The proposed application of Leap Motion relieves the user of fatigue while retaining good accuracy, and hence can serve as an intuitive control method.
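
    The incremental mapping described above reduces to differencing successive fingertip positions and scaling them into end-effector jogs. A minimal sketch, where read_index_tip() and move_end_effector() are hypothetical stand-ins for the Leap Motion SDK query and the virtual robot's move command:

```python
# Illustrative incremental jog loop (not the authors' code). read_index_tip()
# and move_end_effector() are hypothetical stand-ins for the Leap Motion SDK
# fingertip query and the virtual robot's relative Cartesian move command.
import numpy as np

SCALE = 0.5      # end-effector mm per fingertip mm (assumed gain)
DEAD_BAND = 0.5  # mm; ignore displacements below sensor jitter (assumed)

def jog_loop(read_index_tip, move_end_effector):
    prev = np.asarray(read_index_tip(), dtype=float)
    while True:
        cur = np.asarray(read_index_tip(), dtype=float)
        delta = cur - prev
        if np.linalg.norm(delta) > DEAD_BAND:
            move_end_effector(SCALE * delta)  # relative Cartesian increment
        prev = cur
```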

  • On the Applicability of Mobile Robot Traversal in Pedestrian Settings Without Utilizing Pre-prepared Maps

    Ahmed Farid, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)  (Kitakyushu)  Waseda-IPS

    Presentation date: 2018.11

     View Summary

    This paper discusses the prospect of mobile robots' navigational ability in pedestrian settings (e.g. sidewalks and street crossings), under the condition of not utilizing feature-rich maps provided beforehand (e.g. from SLAM-based algorithms). The main motivation is to mimic the human way of interpreting 2D maps (e.g. the widely available Google Maps), which would negate the need for pre-mapping a given location. The paper starts by summarizing previous literature on robotic navigation in pedestrian settings, leading up to the outcomes of our own research. We aim to present results in path planning and real-world scene interpretation, then finally address remaining problems and future prospects.

  • Fingertip pointing interface by hand detection using Short range depth camera

    Kazuki Horiuchi, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)  (Kitakyushu)  Waseda-IPS

    Presentation date: 2018.11

     View Summary

    Computer mice and keyboards are the most widely used pointing and typing devices for working with personal computers. As alternatives to a computer mouse, there are other pointing devices that use a depth sensor or camera for pointing detection. However, current implementations are uncomfortable to use because the user must raise his or her arm(s) to control pointing, at a relatively long distance from the sensing device. To solve these usability problems, we propose a pointing device that narrows the distance over which users move their wrists between the keyboard and the pointing device itself. For this system, we compared the performance of various depth sensors and eventually chose the Intel RealSense sensor. Additionally, we performed a comparative study of our proposed system's performance against other conventional input devices. Although the total time to complete experimental tasks using our system was longer than with conventional input devices, our proposed system had the fastest time when switching between pointing and typing (i.e. moving the hand between mouse and keyboard).
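
    The sensing loop implied by the summary can be sketched with the pyrealsense2 bindings: read depth frames and take the nearest valid point inside a region over the keyboard as the fingertip candidate. The ROI bounds below are assumptions, not the system's calibrated values.

```python
# Illustrative pyrealsense2 loop (not the authors' implementation): the
# nearest valid depth pixel in a fixed region over the keyboard is taken
# as the fingertip candidate for pointing.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start(rs.config())
try:
    while True:
        depth = pipeline.wait_for_frames().get_depth_frame()
        if not depth:
            continue
        z = np.asanyarray(depth.get_data())               # uint16 depth image
        roi = z[100:300, 200:500].astype(np.uint16)       # assumed keyboard ROI
        roi = np.where(roi == 0, np.iinfo(np.uint16).max, roi)  # 0 = no data
        r, c = np.unravel_index(np.argmin(roi), roi.shape)
        fingertip_px = (c + 200, r + 100)                 # nearest point ~ fingertip
finally:
    pipeline.stop()
```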

  • Real-time remote projection with three-dimensional aerial holographic image interface

    Kazuki Horiuchi, Asyifa Imanda Septiana, Takafumi Matsumaru

    2018 JSME Conference on Robotics and Mechatronics (ROBOMECH 2018 in Kitakyushu), [Kitakyushu, Japan], (June 2-5, 2018), 2P2-H17 (4 pages), (2018.06.05)  (Kitakyushu)  JSME RMD

    Presentation date: 2018.06

     View Summary

    The three-dimensional aerial holographic image interface (3DAHII) is a system for aerial projection of varying 3D objects. The projected objects can be viewed from multiple directions without special devices such as eyeglasses, gas or vapor for projection, rotating parts, and so on. In this research, we propose a system that captures video of a human body from different perspectives in real time and projects the result using 3DAHII. Because of the real-time capturing and projection, users can interact and communicate directly. To realize this goal, we used a smaller prototype system in which four cameras capture video of a physical object inside a frame, and the result is remotely projected in real time. The smaller prototype was developed to evaluate and confirm the proposed system's operation.

  • Short range fingertip pointing interface using hand detection by depth camera

    Kazuki Horiuchi, Takafumi Matsumaru

    2018 JSME Conference on Robotics and Mechatronics (ROBOMECH 2018 in Kitakyushu), [Kitakyushu, Japan], (June 2-5, 2018), 1P2-G11 (4 pages), (2018.06.04)  (Kitakyushu)  JSME RMD

    Presentation date: 2018.06

     View Summary

    Computer mice and keyboards are the most widely used pointing and typing devices for working with personal computers. As alternatives to a computer mouse, there are other pointing devices that use a depth sensor or camera for pointing detection. However, current implementations are uncomfortable to use because the user must raise his or her arm(s) to control pointing, at a relatively long distance from the sensing device. To solve these usability problems, we propose a pointing device that narrows the distance over which users move their wrists between the keyboard and the pointing device itself. For this system, we compared the performance of various depth sensors and eventually chose the Intel RealSense sensor. Additionally, we performed a comparative study of our proposed system's performance against other conventional input devices. Although the total time to complete experimental tasks using our system was longer than with conventional input devices, our proposed system had the fastest time when switching between pointing and typing (i.e. moving the hand between mouse and keyboard). Key Words: Hand gesture recognition, Motion capture camera, Pointing device

  • A Progressively Adaptive Approach for Tactile Robotic Hand to Identify Object Handling Operations

    Duc Anh Than, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2C-2:(#019), pp.65-67, (2017.11.15.Wed).  (Kitakyushu) 

    Presentation date: 2017.11

     View Summary

    As humans, we learn the specific operations that typically belong to an object by touching it, and from sensory feedback data we form particular object-handling skills, such as which hand pose to grasp the object with, how much force to use, and which hand movement to make. During development in babyhood, we all start learning how to handle objects with our hands through repetitive interactions, from which we learn our own skills for what behaviors we can perform with an object and how to handle it. By equipping a touch-oriented robot hand with tactile sensors and using the collected sensory feedback data, our research targets forming the underlying link between a particular object and the specific operations acted on it that help feasibly accomplish a task ordered by a human. Besides, a classification of those operations on the object is presented, from which the optimal ones that best suit the human task requirements are determined. Specifically, in this paper we propose a machine-learning-based approach combined with an evolutionary method to progressively build up hand-based object cognitive intelligence. Overall, the proposed scheme exploits and reveals the robot hand's potential touch-based abilities for object interaction and manipulation, apart from the existing visually built-up cognition.

  • Outdoor Navigation for Mobile Robot Platforms with Path Planning of Sidewalk Motion Using Internet Maps

    Ahmed Farid, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2E-4:(#085), pp.298-301, (2017.11.15.Wed).  (Kitakyushu) 

    Presentation date: 2017.11

     View Summary

    This paper describes a path planning system that processes 2D color maps of a given location to provide sidewalk paths, street-crossing landmarks, and instructions for a robot platform or user to navigate, using only the current location and destination as inputs. For the navigation of robots, and especially of disabled people, in outdoor pedestrian environments, path planning that explicitly takes sidewalks and street crossings into account is of great importance. Current path planning solutions on known 2D maps (e.g. Google Maps) from both research and industry do not always provide explicit information on sidewalk paths and street crossings, which is a common problem in suburban/rural areas. Path planner test results are shown for the location of our campus.

  • Dynamic Precise Localization of Multi-Mobile Robots in 2D Plane

    Haitham Khamis Mohammed Al Jabri, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O3C-2:(#020), pp.68-70  (Kitakyushu) 

    Presentation date: 2017.11

     View Summary

    A Small Mobile Robot (SMR) moves inside another, wider Frame Mobile Robot (FMR), and one of them stops at a time to serve as a reference for the other using multiple laser pointers fixed on the FMR. The robots are designed to form a closed system that minimizes the errors accumulated while a robot moves, by regularly correcting the robots' positions. Several motivations call for highly precise and accurate mobile-robot motion. For example, printers and plotters of various sizes are available today, but each is limited to a certain paper size; this limitation could be overcome by a mobile robot that moves precisely. Can such a robot print over an unlimited area? What about the errors that inevitably accumulate as it moves? At what resolution can it print? The system proposed in this paper is a closed system in which different robots collaborate, introducing an efficient mobile-robot localization method that takes advantage of the straightness of laser beams. Its two main strengths are achieving precise and accurate robot coordinates by minimizing accumulated errors, and managing cooperation among robots in localization and task execution.

  • Dynamic Precise Localization of Multi-Mobile Robots in 2D Plane

    Haitham Khamis Mohammed Al Jabri, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2E-3:(#070), pp.255-257  (Kitakyushu) 

    Presentation date: 2017.11

     View Summary

    A Small Mobile Robot (SMR) moves inside another, wider Frame Mobile Robot (FMR), and one of them stops at a time to serve as a reference for the other using multiple laser pointers fixed on the FMR. The robots are designed to form a closed system that minimizes the errors accumulated while a robot moves, by regularly correcting the robots' positions. Several motivations call for highly precise and accurate mobile-robot motion. For example, printers and plotters of various sizes are available today, but each is limited to a certain paper size; this limitation could be overcome by a mobile robot that moves precisely. Can such a robot print over an unlimited area? What about the errors that inevitably accumulate as it moves? At what resolution can it print? The system proposed in this paper is a closed system in which different robots collaborate, introducing an efficient mobile-robot localization method that takes advantage of the straightness of laser beams. The two main strengths of the research are as follows: (1) achieving precise and accurate mobile-robot coordinates by minimizing accumulated errors, and (2) managing the robots' cooperation in localizing and performing different tasks at a given time. The robots' movements are checked discretely by the laser pointers and corrected accordingly, which contributes to accurate and precise coordinates and thus minimizes accumulated errors. The ability to collaborate with other robots can also enhance the system; for instance, progress can be accelerated by adding other SMRs or even FMRs to finish a big job in less time, with the system dividing the tasks among the available robots.
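
    The correction cycle described above can be pictured, under broad assumptions, as the stationary robot supplying an external pose fix that overwrites the mover's drifting odometry before the roles swap. A minimal sketch with hypothetical robot and measurement interfaces:

```python
# Illustrative alternation cycle (not the authors' implementation): one robot
# moves on drifting odometry while the other stays still and measures it with
# the laser pointers; the measured pose then replaces the odometry estimate.
import numpy as np

def alternate_and_correct(mover, anchor, waypoints, laser_fix):
    # mover/anchor: objects with a .pose array [x, y, theta] and .move_to();
    # laser_fix(anchor, mover): pose of `mover` measured from the stationary
    # `anchor` via the laser pointers. All interfaces are hypothetical.
    for wp in waypoints:
        mover.move_to(wp)                                  # odometry drift accumulates
        mover.pose = np.asarray(laser_fix(anchor, mover))  # discrete correction
        mover, anchor = anchor, mover                      # swap roles each leg
```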

  • Usability Study of Aerial Projection of 3D Hologram Object in 3D Workspace

    Septiana Asyifa I, Jiono Mahfud, Kazuki Horiuchi, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O2C-1:(#033), pp.115-118, (2017.11.14.Tue).  (Kitakyushu) 

    Presentation date: 2017.11

     View Summary

    We have proposed a hand-gesture-based control system with an interactive 3D hologram object floating in mid-air as the hand-movement reference, named the Aerial Projection of 3D Hologram Object (3DHO). The system consists of a hologram projector and a sensor to capture hand gestures. A Leap Motion is used to capture the hand-gesture commands while manipulating the 3DHO, which is produced by a pyramid-shaped reflector and two parabolic mirrors. We evaluate the 3DHO's performance by comparing it with other 3D input devices, such as a joystick with slider, a joystick without slider, and a gamepad, on several pointing tasks. Comfort and user satisfaction were also assessed with a questionnaire survey. We found that the 3DHO works well for three-dimensional workspace tasks. From the experiments, participants found the 3DHO easy to learn, but they felt some fatigue in their hands, so the 3DHO is not yet fully satisfying to use. Adding haptic feedback and making a wider 3DHO should improve its performance.

  • Active Secondary Suspension of a Railway Vehicle for Improving Ride Comfort using LQG Optimal Control Technique

    Kaushalendra K Khadanga, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O1C-2:(#061), pp.226-229  (Kitakyushu) 

    Presentation date: 2017.11

     View Summary

    Passenger comfort is paramount in the design of suspension systems for high-speed rail cars. The main objective of this paper is to reduce the vertical and pitch accelerations of a half-car rail model. A rigid half-car high-speed passenger vehicle with 10 degrees of freedom has been modelled to study the vertical and pitch accelerations. A state-space mathematical approach is used to model the rail input, which takes in track vibrations. An augmented model of the track and the vehicle is then designed. An active secondary suspension system based on the Linear Quadratic Gaussian (LQG) optimal control method is designed. Vehicle performance measures such as vertical and pitch accelerations, front and rear suspension travel, and control forces have been studied and compared with those of a passive system.
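
    An LQG design, as named in the summary, combines an LQR state-feedback gain with a Kalman estimator gain, both obtained from algebraic Riccati equations. A generic sketch using SciPy (the matrices are placeholders, not the paper's 10-DOF model):

```python
# Generic LQG sketch (placeholder matrices, not the paper's model):
# LQR gain K and Kalman gain L from the two algebraic Riccati equations.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    # Minimizes the integral of x'Qx + u'Ru; control law u = -K x.
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)      # K = R^{-1} B' P

def kalman_gain(A, C, W, V):
    # Dual problem: W, V are process/measurement noise covariances.
    P = solve_continuous_are(A.T, C.T, W, V)
    return P @ C.T @ np.linalg.inv(V)       # L = P C' V^{-1}
```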

  • 3D Hologram Object Manipulation

    Jiono Mahfud, Takafumi Matsumaru

    10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)  (Kitakyushu) 

    Presentation date: 2016.11

  • Use of Kinect Sensor for Building an Interactive Device

    R. P. Joshi, P. Sharma, R. A. Boby, S. K. Saha, T. Matsumaru

    10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)  (Kitakyushu) 

    Presentation date: 2016.11

  • Introduction to Robotics and Mechatronics

    Takafumi MATSUMARU  [Invited]

    Kitakyushu Yumemirai Work 2016  (West Japan General Exhibition Center Annex, Kitakyushu, Japan)  Kitakyushu City, Mynavi Corporation

    Presentation date: 2016.08

  • Development of Calligraphy-Stroke Learning Support System Using Projection (2nd report) - Proposal of Trajectory Drawing Method from Brush Position -

    Masashi Narita, Takafumi Matsumaru

    2016 JSME Conference on Robotics and Mechatronics (ROBOMECH 2016 in Yokohama), [Yokohama, Japan], 2P1-12a3 (2 pages) 

    Presentation date: 2016.06

  • Investigation of effect of preliminary announcement methods on reduction of psychological threat of a mobile robot moving toward human

    Yutaka Hiroi, Akihiro Maeda, Yuki Tanakamasashi (Osaka Institute of Technology), Takafumi Matsumaru (Waseda University), Akinori Ito

    2016 JSME Conference on Robotics and Mechatronics (ROBOMECH 2016 in Yokohama), [Yokohama, Japan], 2P1-11b2 

    Presentation date: 2016.06

  • Extraction of Representative Point from Hand Contour Based on Laser Range Scanner

    Chuankai Dai, Kaito Yano, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.1-4 

    Presentation date: 2015.11

  • An Economical Version of SAKSHAR-IDVT: Image-projective Desktop Varnamala Trainer

    Pratyusha Sharma, Vinoth Venkatesan, Ravi Prakash Joshi, Riby Abraham Boby, Takafumi Matsumaru, Subir Kumar Saha

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.5-8 

    Presentation date: 2015.11

  • Brushwork Learning Support System Using Projection

    Masashi Narita, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), p.9 

    Presentation date: 2015.11

  • Feature Tracking and Synchronous Scene Generation with a Single Camera

    Zheng Chai, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.38-39 

    Presentation date: 2015.11

  • Real-time hand side discrimination based on hand orientation and wrist point localization sensing by RGB-D sensor

    Thanapat Mekrungroj, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), p.67 

    Presentation date: 2015.11

  • Facial Expression Recognition based on Neural Network using Extended Curvature Gabor Filter Bank

    Pengcheng Fang, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.68-69 

    Presentation date: 2015.11

  • Obstacle avoidance based on improved Artificial Potential Field for mobile robot

    Sihui Zhou, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.242-243 

    Presentation date: 2015.11

  • Development of Calligraphy-Stroke Learning Support System Using Projection (1st report) - Proposal and Construction of System -

    2015 JSME Conference on Robotics and Mechatronics (ROBOMECH 2015 in Kyoto), [Kyoto, Japan] 

    Presentation date: 2015.05

  • Human-Machine Interaction using Projection Screen and Multiple Light Spots

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS9-2, (2014.11.13), PS6-2, (2014.11.13) 

    Presentation date: 2014.11

  • Development of calligraphy-stroke learning support system using projection

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS5-6, (2014.11.13) 

    Presentation date: 2014.11

  • Virtual Musical Instruments based on Interactive Multi-Touch system sensing by RGB Camera and IR Sensor

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS6-7, (2014.11.13) 

    Presentation date: 2014.11

  • Screen-Camera Position Calibration and Projected Screen Detection

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS6-10, (2014.11.13) 

    Presentation date: 2014.11

  • Touching Accuracy Improvement and New Application Development for IDAT

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-1, (2014.11.12), PS5-1, (2014.11.13) 

    Presentation date: 2014.11

  • SAKSHAR : An Image-projective Desktop Varnamala Trainer (IDVT) for Interactive Learning of Alphabets

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-2, (2014.11.12), PS5-2, (2014.11.13) 

    Presentation date: 2014.11

  • Multi-finger Touch Interface based on ToF camera and webcam

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-3, (2014.11.12), PS5-3, (2014.11.13) 

    Presentation date: 2014.11

  • Training of the upper limb of hemiplegic patients using the Image-projective Desktop Arm Trainer (IDAT): a preliminary study

    The 39th Annual Meeting of the Japan Stroke Society (STROKE2014), [Osaka International Convention Center], (2014.3.13-15) 

    Presentation date: 2014.03

  • Introduction of Bio-Robotics and Human-Mechatronics Laboratory

    Takafumi MATSUMARU  [Invited]

    Invited Talk  (Beijing Institute of Technology)  Beijing Institute of Technology

    Presentation date: 2014.02

  • Human Detection and Following Mobile Robot Control System Using Laser Range Sensor

    Takafumi MATSUMARU  [Invited]

    Joint meeting of Peking University and Waseda University  (Peking University)  Peking University

    Presentation date: 2014.02

  • Automatic size adjustment and tracking control of projected display by pan-tilt-zoom camera

    The 14th SICE System Integration Division Annual Conference (SI 2013 in Kobe), [Kobe, Japan], (2013.12.18-20), 1J1-2, pp.703-708. 

    Presentation date: 2013.12

  • Relative Position Calibration using Pan-tilt-zoom Camera for Projection Interface

    7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 November, 2013), PS-502 

    Presentation date: 2013.11

  • Human Detection and Following Mobile Robot Control System Using Range Sensor

    7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 November, 2013), OS4-2, PS-511 

    Presentation date: 2013.11

  • Kinect Sensor Application to Control Mobile Robot by Gesture, Facial movement and Speech

    7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 November, 2013), PS-515 

    Presentation date: 2013.11

  • Contact/non-contact Interaction System using Camera and Depth Sensors

    7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 November, 2013), PS-504 

    Presentation date: 2013.11

  • Human-robot interaction on training

    International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), O003, pp.41-46 

    Presentation date: 2013.10

  • Using Laser Pointer for Human-Computer Interaction

    International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P013, p.116 

    Presentation date: 2013.10

  • Automatic Adjustment and Tracking of Screen Projected by Using Pan-tilt-zoom Camera

    International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P014, p.117 

    Presentation date: 2013.10

  • Robot Human-following Limited Speed Control

    International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P015, p.118 

    Presentation date: 2013.10

  • Mobile robot control system based on gesture, speech and face track using RGB-D-S sensor

    International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P016, p.119 

    Presentation date: 2013.10

  • Human-robot interaction based on contact/non-contact sensing by camera and depth sensors

    International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P017, p.120 

    Presentation date: 2013.10

  • Plenary talk on Image-projective Desktop Arm Trainer and Touch Interaction Based on IR Image Sensor

    Takafumi MATSUMARU  [Invited]

    Joint Seminar of Peking University and Waseda University  (Peking University)  Peking University

    Presentation date: 2013.02

  • Development of Walking Training Robot with Customizable Trajectory Design System

    6th IPS International Collaboration Symposium (IPS-ICS), [Kitakyushu, Fukuoka, Japan], (14-16 November, 2012), PS-22 

    Presentation date: 2012.11

  • Development of Image-Projective Desktop Arm Trainer and Measurement of Trainee’s Performance

    6th IPS International Collaboration Symposium (IPS-ICS), [Kitakyushu, Fukuoka, Japan], (14-16 November, 2012), PS-23 

    Presentation date: 2012.11

  • Recent Advances on SOI (Step-On Interface) Applications

    International Workshop on Image &amp; Signal Processing and Retrieval (IWISPR2012), [Kitakyushu] 

    Presentation date: 2012.10

  • Development of Walking Training Robot with Customizable Trajectory Design System

    International Workshop on Image & Signal Processing and Retrieval (IWISPR2012), [Kitakyushu] 

    Presentation date: 2012.10

  • Development of Image-Projective Desktop Arm Trainer and Measurement of Trainee’s Performance

    International Workshop on Image &amp; Signal Processing and Retrieval (IWISPR2012), [Kitakyushu] 

    Presentation date: 2012.10

  • Upper Extremity Exercise, Anywhere Pleasantly - Development of Image-projective Desktop Arm Trainer

     [Invited]

    19th UOEH Rehabilitation Medical Treatment Seminar [UOEH] 

    Presentation date: 2012.07

  • Introduction of Bio-Robotics and Human-Mechatronics Laboratory

     [Invited]

    Presentation date: 2012.03

  • Introduction of Bio-Robotics and Human-Mechatronics Laboratory

    Workshop co-hosted by Waseda University and Peking University, [Peking University] 

    Presentation date: 2012.02

  • Human-Robot Interaction Design on Mobile Robot with Step-On Interface

    5th IPS International Collaboration Symposium 2011, [Kitakyushu, Japan] 

    Presentation date: 2011.11

  • Prototype of Touch Game Application on Development of Step-On Interface

    5th IPS International Collaboration Symposium 2011, [Kitakyushu, Japan] 

    Presentation date: 2011.11

  • Human-Robot Interaction using Projection Interface

    International Workshop on Target Recognition and Tracking, [Kitakyushu, Japan] 

    Presentation date: 2011.10

  • Human-Robot Interaction Design on Mobile Robot with Step-On Interface

    International Workshop on Target Recognition and Tracking (IWTRT2011), [Kitakyushu] 

    Presentation date: 2011.10

  • Prototype of Touch Game Application on Development of Step-On Interface

    International Workshop on Target Recognition and Tracking (IWTRT2011), [Kitakyushu] 

    Presentation date: 2011.10

  • Robot technology learning from the movements of living creatures

    2011 JSME Annual Meeting, [Tokyo, Japan] 

    Presentation date: 2011.09

  • Teleoperation of Human-Friendly Robot (56th report) - Remote Operation of Two-wheel Drive using Touch Screen Interface -

    2011 JSME Conference on Robotics and Mechatronics (ROBOMEC 2011 in Okayama), [Okayama, Japan] 

    Presentation date: 2011.05

  • Operation Method of Human-Friendly Robot (10th report) -Development of Touch-game Application using Step-on Interface-

    2011 JSME Conference on Robotics and Mechatronics (ROBOMEC 2011 in Okayama), [Okayama, Japan] 

    Presentation date: 2011.05

  • Operation Method of Human-Friendly Robot (11th report) - Study on Step-on Interface Operation using Laser pointer -

    2011 JSME Conference on Robotics and Mechatronics (ROBOMEC 2011 in Okayama), [Okayama, Japan] 

    Presentation date: 2011.05

  • Informative Motion of Human-Friendly Robot (14th report) - Generation of Throw-Over Motion to Transmit Landing Distance -

    2011 JSME Conference on Robotics and Mechatronics (ROBOMEC 2011 in Okayama), [Okayama, Japan] 

    Presentation date: 2011.05

  • Informative Motion of Human-Friendly Robot (15th report) - Evaluation of Throw-Over Motion to Transmit Landing Distance -

    2011 JSME Conference on Robotics and Mechatronics (ROBOMEC 2011 in Okayama), [Okayama, Japan] 

    Presentation date: 2011.05

  • Informative Motion of Human-Friendly Robot (12th report) - Analysis of Throw-over Motion to Transmit Landing Distance -

    The 31st Annual Conference on Biomechanism (SOBIM 2010 in Hamamatsu) 

    Presentation date: 2010.11

  • Informative Motion of Human-Friendly Robot (13th report) - Generation of Throw-over Motion to Transmit Landing Distance -

    The 31st Annual Conference on Biomechanism (SOBIM 2010 in Hamamatsu) 

    Presentation date: 2010.11

  • Remote Operation of Human-Friendly Robot (54th report) - Development and Evaluation of Operational Interface Using Touch Screen -

    JSME Robotics and Mechatronics Conference 2010 (ROBOMEC 2010 in Asahikawa) 

    Presentation date: 2010.06

  • Operation Method of Human-Friendly Robot (9th report) - Study on Control Method for Mobile Robot Step-On Interface -

    JSME Robotics and Mechatronics Conference 2010 (ROBOMEC 2010 in Asahikawa) 

    Presentation date: 2010.06

  • Informative Motion of Human-Friendly Robot (10th report) - Measurement and Analysis of Throw-Over Motion to Transmit Landing Distance -

    JSME Robotics and Mechatronics Conference 2010 (ROBOMEC 2010 in Asahikawa) 

    Presentation date: 2010.06

  • Informative Motion of Human-Friendly Robot (11th report) - Experiment and Evaluation of Throw-Over Motion to Transmit Landing Distance -

    JSME Robotics and Mechatronics Conference 2010 (ROBOMEC 2010 in Asahikawa) 

    Presentation date: 2010.06

  • Interaction Design between People and Mobile Robot Step-on Interface

    Correspondences on Human Interface 

    Presentation date: 2010.03

  • Operation Method of Human-Friendly Robot (8th report) -Application of Mobile Robot with Step-on Interface

    10th SICE System Integration Division Annual Conference 

    Presentation date: 2009.12

  • Informative Motion of Human-Friendly Robot (9th report) -Characteristics Extraction of Throw-over Motion to Transmit Landing Distance

    10th SICE System Integration Division Annual Conference 

    Presentation date: 2009.12

  • Design and Evaluation of Handover Movement Informing Receiver of Weight Load

    The 21st Biomechanism Symposium 

    Presentation date: 2009.08

  • Teleoperation of Human-Friendly Robot (51st report) -Environmental Map Making using Range Scanner-

    2009 JSME Conference of Robotics and Mechatronics 

    Presentation date: 2009.05

  • Teleoperation of Human-Friendly Robot (52nd report) -Operational Interface for Mobile Robot using Voice Recognition-

    2009 JSME Conference of Robotics and Mechatronics 

    Presentation date: 2009.05

  • Teleoperation of Human-Friendly Robot (53rd report) -Operational Interface for Mobile Robot using Touch Screen-

    2009 JSME Conference of Robotics and Mechatronics 

    Presentation date: 2009.05

  • Preliminary-Announcement of Robot's Operation (3rd report) -Comparison Experiment between Voice and Display-

    2009 JSME Conference of Robotics and Mechatronics 

    Presentation date: 2009.05

  • Operation Method of Human-Friendly Robot (6th report) - Improvement of Control Function of Mobile Robot with Step-on Interface-

    2009 JSME Conference of Robotics and Mechatronics 

    Presentation date: 2009.05

  • Operation Method of Human-Friendly Robot (7th report) -Method of Utilization of Mobile Robot with Step-on Interface-

    2009 JSME Conference of Robotics and Mechatronics 

    Presentation date: 2009.05

  • Informative Motion of Human-Friendly Robot (7th report) -Development of Handover Motion to Transmit Weight Load-

    2009 JSME Conference of Robotics and Mechatronics 

    Presentation date: 2009.05

  • Informative Motion of Human-Friendly Robot (8th report) -Characteristics Extraction of Throw-over Motion to Transmit Landing Position-

    2009 JSME Conference of Robotics and Mechatronics 

    Presentation date: 2009.05


Research Projects

  • "Nifty Arm" A robot arm that will act on your behalf without teaching, that will help you without saying

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research

    Project Year :

    2022.04
    -
    2025.03
     

  • 2D Image-3D Motion Mapping Methods for Human-Robot Interaction

    Project Year :

    2017.04
    -
    2020.03
     

     View Summary

    Aiming to establish methods for bidirectional mapping between 2D images and 3D motion, this project approaches the essence of the problem through three case studies. (1) 3-dimensional aerial image interface (3DAII) (mainly motion → image): as an application of a new interface in which the user can directly manipulate a 3D image projected in mid-air by overlaying his or her fingers on it, we are attempting to reproduce the process of forming a vessel from clay on a potter's wheel (shaping and trimming). The virtual object is represented with the Unity game engine (ver. 5.6.1f1), and part of the interaction function that reproduces the forming work has been realized by deforming the virtual object with the 3D finger motion detected by a Leap Motion sensor. (2) Cursor operation by air pointing (mainly motion → image): to switch smoothly between key input and cursor operation in computer work, we proposed a system that operates the cursor in a 2D image by 3D fingertip motion alone, without the wrists leaving the keyboard. We confirmed that, besides reducing arm fatigue, the operation is intuitive because the direction in which the fingertip moves matches the plane in which the cursor moves. (3) Brand-based sorting of PET bottles (mainly image → motion): toward a robot-arm system that picks up a specified brand of PET bottle from among bottles chilled in a water tank, the 3D motion of the robot arm is planned from the 2D image of an RGB camera. Because objects underwater must be recognized through the water surface, active sensing such as structured-light pattern projection (SLPP) or time-of-flight (ToF) is difficult, so a passive RGB camera is used; we confirmed that object segmentation and brand classification are possible with machine learning. Using these three case studies as a breakthrough, research and development are being deepened: for (1), an original paper is under review at a journal, and by realizing various applications we are approaching design principles for interaction between 2D and 3D; for (2), literature surveys have clarified the differences from conventional methods and the novelty of the proposal, and performance measurements of the basic functions and comparative experiments with conventional methods were carefully designed and carried out; for (3), there are few prior attempts at recognizing randomly piled objects underwater from above the surface and picking them up, so the research subject itself is novel. Future plans are as follows: for (1), complete the potter's-wheel reproduction early and, as the next application, reproduce a 3D moving object as a 3D image (feasibility already confirmed), while settling the journal paper currently under review; for (2), organize the data from the comparative experiments with conventional devices (computer mouse, touch pad, touch panel, etc.) and prepare a journal submission; for (3), continue exploring object recognition based on 2D camera images for generating the robot's 3D motion, where interesting issues remain, such as (a) how the accuracy of object segmentation and brand classification changes with the fraction of an occluded object that is visible (e.g. 80%, 50%, 30%) and with which part of it is visible (e.g. top, middle, bottom), and (b) efficient ways to generate training images, and the quantity needed, for machine-learning a new brand.

  • Functional advance and various realization of step-on interface (SOI)

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research

    Project Year :

    2011.04
    -
    2014.03
     

    MATSUMARU Takafumi

     View Summary

    This project aimed to make the step-on interface (SOI) for operating robotic and mechatronic systems both more functional (higher reliability and higher accuracy) and more diversified (realized in various forms). For higher functionality, camera-image processing (recognition of a projection screen and a light spot using OpenCV) and depth-data processing (using a three-dimensional depth sensor) were achieved. For diversification, the following were realized: on-screen operation by on/off gestures of a laser pointer (click/drag operations), real-time follow-up of a projection screen by pan-tilt-zoom camera control, recognition of contact/non-contact of a fingertip with the background based on depth data, and applications to a virtual keyboard and a virtual xylophone.
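
    The light-spot recognition mentioned above can be sketched with a few OpenCV calls: threshold the camera image for a bright, saturated spot and take the centroid of the resulting mask. The threshold values and camera index below are assumptions, not the project's actual parameters.

```python
# Illustrative laser-spot detection (not the project's actual code): threshold
# the camera image for a bright, saturated red spot and take its centroid.
import cv2

cap = cv2.VideoCapture(0)  # camera index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 220), (10, 255, 255))  # assumed red-spot range
    m = cv2.moments(mask)
    if m["m00"] > 0:  # spot present: centroid in image coordinates
        spot = (m["m10"] / m["m00"], m["m01"] / m["m00"])
        print("light spot at", spot)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```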

  • Robot with step-on interface (JST)

    Project Year :

    2007
    -
     
     

  • Development of HMD having eye-gaze detection function and realization of advanced bilateral communication for ALS patients

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research

    Project Year :

    2005
    -
    2006
     

    EBISAWA Yoshinobu, MATSUMARU Takafumi, ITO Tomotaka, YAMASHITA Atsushi

     View Summary

    The purpose of this study is to make it possible for ALS (amyotrophic lateral sclerosis) patients to communicate with surrounding people and to operate remote human-friendly mobile robots by inputting characters and selecting menu options using a residual function of the patients: the line of sight. The HMD (head-mounted display) has an eye-gaze detection function. It is attached to one eye and is characterized by its ability to detect the exact gaze point on the HMD screen in spite of deviation between the eye and the HMD. An image from a PC is presented on the screen. At the same time, a near-infrared LED irradiates the eye, and the eye image is grabbed and processed to detect the centers of the pupil and of the corneal reflection produced by the LED light source. The gaze point is calculated from the relative position of these centers. In fiscal 2005, gaze detection was difficult in the trial product because ghost images of the light source fell on the eye image, so in fiscal 2006 the cause of the ghost images was investigated, and the problem was solved by changing the light-source arrangement. However, the total size was bigger than expected, and the gaze-point signal had to be smoothed because of its dispersion. Nevertheless, the ability to compensate gaze points against deviation between the eye and the HMD was demonstrated. A remote-control experiment was carried out in which the developed HMD was used as an I/O interface in a remote operation system for a mobile robot. The robot has an omni-directional mobile mechanism with a camera to take pictures at the remote site and a range sensor to acquire distance information. The operation screen shown on the HMD has a low resolution of 640 by 480, but we developed an operation interface including five functions as the minimum needed to operate the robot remotely: a remote environmental map, live images, buttons to operate the robot, buttons to operate the camera system, and a display of the camera direction. For safety, the operation indicated by a robot-operation button is performed only after the cursor has rested on it for two seconds. Moreover, taking into account the characteristics of gaze-direction detection, stop buttons are arranged in the four corners of the operation screen. Operation experiments confirmed that both the mobile robot and the camera system could be operated.
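
    The pupil-center / corneal-reflection step described in the summary can be sketched in OpenCV: locate the dark pupil blob and the bright LED glint, and use their offset vector, which a calibration then maps to a screen point. The threshold values below are assumptions.

```python
# Illustrative pupil-corneal-reflection estimation (not the authors' code):
# the gaze direction is encoded by the vector between the pupil center and
# the glint of the near-infrared LED on the cornea.
import cv2
import numpy as np

def pupil_glint_vector(eye_gray):
    # Pupil: largest dark blob under a fixed threshold (assumed value).
    _, dark = cv2.threshold(eye_gray, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (px, py), _r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    # Glint: brightest pixel, i.e. the corneal reflection of the LED.
    _, _, _, (gx, gy) = cv2.minMaxLoc(cv2.GaussianBlur(eye_gray, (5, 5), 0))
    return np.array([px - gx, py - gy])  # calibrated later to a screen point
```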

  • Development of HMD with gaze-tracking function for ALS patients and realization of high-performance bilateral communication (Shizuoka University)

    Project Year :

    2005
    -
    2006
     

  • Trackless travelling mobile platform (Yazaki Kako Corp.)

    Project Year :

    2004
    -
     
     

  • MERG (multi-media education research group) (Tokyo University of Agriculture and Technology)

    Project Year :

    2003
    -
     
     

  • Wireless wheeled platform robot (Shizuoka Prefecture Shizuoka Industrial Technology Center, Sun-technical)

    Project Year :

    2002
    -
     
     

  • Remote control of greenhouse melon (Sawada Kogyo, Shizuoka Prefecture Agricultural Experiment Station)

    Project Year :

    2002
    -
     
     

  • Navigation control of intelligent mobile robot (Shizuoka University)

    Project Year :

    2000
    -
     
     

  • Remote operation of robots connected via communication network (MITI-MEL)

    Project Year :

    1999
    -
    2000
     

  • Fused control method of remote operation and autonomy for a human-coexistent mobile robot aimed at a remote experience and appreciation system

     View Summary

    This study examines, as a technical challenge toward a teleoperation system for a human-coexistent mobile robot intended for a "remote experience and appreciation system", fused control methods that focus on safety and operability between the operating human and the robot, effectively incorporating the advantages of autonomous motion and remote operation while compensating for their respective weaknesses; in other words, it proposes and evaluates concrete behaviors of the computer control in so-called shared control. The operator drives the robot with a joystick while viewing images from a camera mounted on the mobile robot. Three autonomous behaviors are considered: turning, wall-following, and deceleration. Software simulations were continued on how to combine the autonomous behaviors and on methods for environment recognition and situation assessment. A method that takes deceleration as the base and compounds turning and wall-following onto it halved the number of near misses with obstacles, confirming improved safety. Regarding situation-appropriate autonomy, for passing corners while traveling along corridors, we clarified in which situations autonomous behaviors should be actively used and examined which behaviors are easy to use. A method that classifies the surrounding environment and the robot's situation from the operator's joystick input and the robot's range-sensor data, and then selectively adds turning and wall-following, achieved motion that does not feel unnatural to the operator and confirmed improved operability. On the hardware side, an omnidirectional platform with a four-wheel drive mechanism using omni wheels (Fuji Manufacturing) is under development, and a range sensor (Hokuyo PB9) that can scan the distance to objects over roughly 170 degrees ahead is being prepared; we intend to complete a mobile robot combining these soon. Future work will examine methods of information presentation and feedback to the operator and conduct verification experiments of the proposed methods using the integrated hardware system.


Misc

  • 2A2-H07 Development of Calligraphy-Stroke Learning Support System Using Projection (1st report) : Proposal and Construction of System

    NARITA Masashi, MATSUMARU Takafumi

  2015   "2A2-H07(1)"-"2A2-H07(4)"  2015.05

     View Summary

    In this paper, a calligraphy learning support system for supporting brushwork learning using a projector is presented. The system was designed to provide three kinds of training according to the learner's ability: copying training, tracing training, and a combination of the two. In order to instruct three-dimensional brushwork such as the writing speed, pressure, and orientation of the brush, we proposed an instruction method that presents the information only at the brush tip. This method can visualize the brush position and orientation. In addition, a copying experiment was performed using the proposed method, and its efficiency was examined through the experiment.


  • 1P1-I01 Teleoperation of Human-Friendly Robot (55th report) : Study on relationship between Joystick Operation and Two-wheel Drive Movement (Wheeled Robot/Tracked Vehicle)

    INOUE Ryouhei, KANOU Hiroaki, KAMIYA Ryousuke, MATSUMARU Takafumi

  2011   "1P1-I01(1)"-"1P1-I01(4)"  2011.05

     View Summary

    We propose control mappings for operating a robot with a two-wheel mechanism by joystick. Driving experiments with several control mappings were carried out with the cooperation of subjects, and the recorded data were analyzed. This paper reports the results of examining the relation between joystick operation and the movement of the two-wheel mechanism.


  • 1P1-H01 Operation Method of Human-Friendly Robot (10th report) : Development of Touch-game Application using Step-on Interface (VR and Interface)

    KIKAWADA Masakazu, SAITO Wataru, MATSUMARU Takafumi

  2011   "1P1-H01(1)"-"1P1-H01(4)"  2011.05

     View Summary

    We propose a touch-game application as a new function of the Step-on Interface, developed for recreation. Users operate the screen projected by the projector by touching it by hand. We chose to project the screen onto a desk so that elderly people can enjoy the recreation safely. This paper describes the development of a recreation application that elderly people can use happily and safely at any time.


  • 2P1-I07 Teleoperation of Human-Friendly Robot (56th report) : Remote Operation of Two-wheel Drive using Touch Screen Interface (Network Robotics)

    KANO Hiroaki, KAMIYA Ryosuke, INOUE Ryohei, MATSUMARU Takafumi

  2011   "2P1-I07(1)"-"2P1-I07(4)"  2011.05

     View Summary

    This paper shows evaluation results of the touch-screen interface for mobile robot remote operation. A joystick function, a button function, and a route indication function are developed on an environmental map in the Cell & Hollow method. Their features and efficiency are confirmed by experiments in which a mobile robot is remotely operated through the touch-screen interface to pass through a slalom course or a crank course.


  • 2A1-D04 Operation Method of Human-Friendly Robot (9th report) : Study on Control Method for Mobile Robot Step-On Interface

    ITO Yuichi, SAITO Wataru, HARADA Syunntaro, MATSUMARU Takafumi

  2010   "2A1-D04(1)"-"2A1-D04(4)"  2010

     View Summary

    We propose the step-on interface (SOI), in which instructions are given by stepping on an operation screen projected onto the ground surface. We developed HFAMRO-2 (human-friendly amusing mobile robot), in which two sets of SOI are mounted on a two-wheel-drive mobile platform. This paper presents two methods, based respectively on real-time range data and on an environmental map, for avoiding obstacles and returning to the original pathway while giving priority to the given instruction.


  • 1A2-G30 Informative Motion of Human-Friendly Robot (11th report) : Experiment and Evaluation of Throw-Over Motion to Transmit Landing Distance

    HARADA Shuntaro, SUZUKI Takayuki, SAITO Wataru, MATSUMARU Takafumi

  2010   "1A2-G30(1)"-"1A2-G30(4)"  2010

     View Summary

    This research aims at realizing a robot throw-over movement from which a user can easily predict the landing distance of the object thrown to him or her by the robot, by implementing human movement characteristics. This paper shows the evaluation experiment examining the reliability of the selected representative data, in which a moving image of a throw-over motion by a virtual humanoid robot developed on a Windows PC is shown to a subject, who then reports the estimated landing distance.


  • 1A2-G29 Informative Motion of Human-Friendly Robot (10th report) : Measurement and Analysis of Throw-Over Motion to Transmit Landing Distance

    HARADA Shuntaro, SUZUKI Takayuki, SAITO Wataru, MATSUMARU Takafumi

  2010   "1A2-G29(1)"-"1A2-G29(4)"  2010

     View Summary

    This research aims at realizing a robot throw-over movement from which a user can easily predict the landing distance of the object thrown to him or her by the robot, by implementing human movement characteristics. This paper shows the classification of movement data after measurement and analysis of human movement, and the selection of representative data for the evaluation experiment. It then presents an experimental examination of how the movement changes depending on the landing distance.


  • 1A1-G18 Remote Operation of Human-Friendly Robot (54th report) : Development and Evaluation of Operational Interface Using Touch Screen

    KOMATSU Junya, GONG Liang, KANO Hiroaki, SAITO Wataru, HARADA Shuntaro, MATSUMARU Takafumi

  2010   "1A1-G18(1)"-"1A1-G18(4)"  2010

     View Summary

    This paper shows evaluation results of the touch-screen interface for mobile robot remote operation. A button function, a joystick function, and a route designation function are developed on an environmental map in the Line & Hollow or the Cell & Hollow method. Their features and efficiency are confirmed by experiments in which a mobile robot is remotely operated through the touch-screen interface to pass through a straight passage or a slalom course.


  • 2A1-B18 Teleoperation of Human-Friendly Robot (53rd report) : Operational Interface for Mobile Robot using Touch Screen

    GONG Liang, ITO Yuichi, AKAI Kosuke, YAMASHITA Wataru, MATSUMARU Takafumi

  2009   "2A1-B18(1)"-"2A1-B18(4)"  2009.05

     View Summary

    With the touch-panel interface, the operator can not only view environmental information around the robot but also control the remote robot and camera using the buttons on the touch screen. In addition, the operator can direct the robot to move along a course drawn on the environmental map on the touch screen. Both the touch-screen and joystick interfaces are tested on a developed mobile-robot remote operation system and discussed in terms of maneuverability and operating accuracy.


  • 2A1-B17 Teleoperation of Human-Friendly Robot (51st report) : Environmental Map Making using Range Scanner

    YAMASHITA Wataru, GONG Liang, ITO Yuichi, AKAI Kosuke, MATSUMARU Takafumi

  2009   "2A1-B17(1)"-"2A1-B17(4)"  2009.05

     View Summary

    We have been researching remotely operated robots, where it is difficult for the operator to understand the situation around the robot from the camera image alone; adequate safety and maneuverability cannot be secured, and the danger of accidents increases. As a solution, we have proposed methods to construct an environmental map that presents the operator with the situation around the robot. A performance trial of two range scanners was also executed and is reported.

    CiNii

  • 2A1-B19 Teleoperation of Human-Friendly Robot (52nd report) : Operational Interface for Mobile Robot using Voice Recognition

    ITO Yuichi, HAYASHI Hiroyuki, AKAI Kosuke, GONG Liang, MATSUMARU Takafumi

      2009   "2A1 - B19(1)"-"2A1-B19(4)"  2009.05

     View Summary

    The number of human-coexistence robots operating near people is increasing, so a comprehensible and safe interface is required for people operating an advanced robot for the first time. This paper presents a trial voice interface for operating a mobile robot. It reports the examination conducted in developing the voice interface, especially the mounted functions, and the results of experiments comparing it with the keyboard and the step-on interface.

    CiNii

  • 1P1-C18 Operation Method of Human-Friendly Robot (7th report) : Method of Utilization of Mobile Robot with Step-on Interface

    HORIUCHI Yasutada, AKAI Kosuke, ITO Yuichi, GONG Liang, MATSUMARU Takafumi

      2009   "1P1 - C18(1)"-"1P1-C18(4)"  2009.05

     View Summary

    We propose a new input method in which instructions are given to the robot by stepping on part of an operation screen projected onto the running surface by a projector mounted on the robot. This time, we designed the human-friendly amusing mobile (HFAM) function, a new usage of the robot equipped with the SOI, with a scenario in which the robot and a child can play together. This paper discusses the outline of the HFAM function and its underlying technology.

    CiNii

  • 1P1-C17 Operation Method of Human-Friendly Robot (6th report) : Improvement of Control Function of Mobile Robot with Step-on Interface

    AKAI Kosuke, HORIUCHI Yasutada, ITO Yuichi, GONG Liang, MATSUMARU Takafumi

      2009   "1P1 - C17(1)"-"1P1-C17(4)"  2009.05

     View Summary

    We propose a new input method in which instructions are given to the robot by stepping on part of an operation screen projected onto the running surface by a projector mounted on the robot. This paper examines the layout of the instruction buttons on the operation screen, and also discusses discriminating obstacles from the stepping foot to apply autonomous movement, as well as the path search for obstacle avoidance.

    CiNii

  • 1A1-K06 Informative Motion of Human-Friendly Robot (8th report) : Characteristics Extraction of Throw-over Motion to Transmit Landing Position

    SUZUKI Takayuki, FUKUNAGA Daiki, SUZUKI Shigehisa, ITO Yuichi, GONG Liang, MATSUMARU Takafumi

      2009   "1A1 - K06(1)"-"1A1-K06(4)"  2009.05

     View Summary

    This research aims at realizing a robot throw-over movement from which a person can easily predict the distance to the object's target position when the robot throws it, by implementing human movement characteristics. After measurement and analysis, this paper shows an experimental examination of which points of the movement change depending on the distance. In addition, we selected one subject's movement data for a future evaluation experiment.

    CiNii

  • 1A2-D13 Preliminary-Announcement of Robot's Operation (3rd report) : Comparison Experiment between Voice and Display

    HAYASHI Hiroyuki, AKAI Kosuke, ITO Yuichi, GONG Liang, MATSUMARU Takafumi

      2009   "1A2 - D13(1)"-"1A2-D13(4)"  2009.05

     View Summary

    As society ages and the birthrate declines, human-friendly robots that can directly support and assist people are expected. We think it is important for a mobile robot to have not only a safety function to avoid contact or collision with people but also a function to preliminarily announce its forthcoming motion to surrounding people before it begins to move. This paper reports the result of a side-by-side test of the voice and the display.

    CiNii

  • 1A1-K07 Informative Motion of Human-Friendly Robot (7th report) : Development of Handover Motion to Transmit Weight Load

    SUZUKI Shigehisa, KIMURA Akio, SUZUKI Takayuki, ITO Yuichi, GONG Liang, MATSUMARU Takafumi

      2009   "1A1 - K07(1)"-"1A1-K07(4)"  2009.05

     View Summary

    The deliverer in a handover task moves so that the weight of the load is conveyed to the receiver. This study aims to clarify the characteristics of human handover movement and to implement them on a humanoid robot so that it performs naturally. Human movement during person-to-person handover tasks was measured and analyzed; the results suggest that a receiver can judge whether the load is heavy. A design method for handover movement depending on the weight of the load is discussed and tested experimentally.

    CiNii

  • 1P1-G18 Informative Motion of Human-Friendly Robot (2nd report) : Study on Handover Task including Load Information for Recipient

    SUZUKI Shigehisa, SUGIURA Tatsuhiro, ITO Yuichi, MATSUMARU Takafumi

      2008   "1P1 - G18(1)"-"1P1-G18(2)"  2008.06

     View Summary

    When a person performs a handover task, the deliverer's movement conveys information about the load even without spoken remarks; the receiver picks up this meaning, prepares for the load, and receives it safely and reliably. This research aims to achieve a natural handover task from robot to human by designing the robot's body movement. As an early stage, this paper shows the results of analyzing human handover tasks.

    CiNii

  • 1P1-G17 Informative Motion of Human-Friendly Robot (3rd report) : Motion Simulation Software to Reproduce Handover Task

    SUGIURA Tatsuhiro, SUZUKI Shigehisa, ITO Yuichi, MATSUMARU Takafumi

      2008   "1P1 - G17(1)"-"1P1-G17(3)"  2008.06

     View Summary

    This research aims to achieve a smooth handover task from robot to person without causing the person stress or discomfort. We captured handover tasks between two persons and analyzed the features of the human movement in order to use them in designing the robot's movement. This paper shows the developed simulation software for a humanoid robot's movement, and then presents the results of an experiment on estimating weight from the handover movement.

    CiNii

  • 1A1-F21 Teleoperation of Human-Friendly Robot (48th report) : Examination of Line & Hollow Method and Cell & Hollow Method using Range Sensor to Present Remote Environment

    FUJITA Takumi, LIN Mingtong, ITO Yuichi, MATSUMARU Takafumi

      2008   "1A1 - F21(1)"-"1A1-F21(4)"  2008.06

     View Summary

    This paper presents methods of displaying an unknown environment as an environmental map for the operator of a mobile-robot remote-operation system, in order to improve maneuverability. The Line & Hollow method and the Cell & Hollow method, both based on range-sensor data, are examined. The results of three kinds of experiments (recognizing an aisle, understanding a changing environment, and passing a person) show the features and effectiveness of these methods.

    CiNii

  • 1A1-F22 Teleoperation of Human-Friendly Robot (49th report) : Examination of Teleoperation Interface of Mobile Robot using Touchscreen

    LIN Mingtong, FUJITA Takumi, ITO Yuichi, MATSUMARU Takafumi

      2008   "1A1 - F22(1)"-"1A1-F22(4)"  2008.06

     View Summary

    This paper discusses a touch screen as the user interface of a mobile-robot teleoperation system, in order to improve maneuverability. A button function, a joystick function, and route designation are developed with the Line & Hollow method as the environmental map. Their features and efficiency are confirmed in experiments in which the mobile robot is remotely operated, using the touch-screen interface with the developed functions, to pass through a passage with orthogonal cranks.

    CiNii

  • 2P2-E09 Operation Method of Human-Friendly Robot (3rd report) : Characterization of Range Sensor to Realize Step-on Interface

    ITO Yuichi, AKAI Kosuke, SUZUKI Shigehisa, MATSUMARU Takafumi

      2008   "2P2 - E09(1)"-"2P2-E09(3)"  2008.06

     View Summary

    We are planning to develop HFAMRO-2, which can play with people using the step-on interface; we therefore want to improve the mobility performance of HFAMRO-1 and replace its range sensor. This paper shows the modified design of the mobile unit, which will be able to double the movement speed, and compares the new range sensor with the old one based on performance measurements. The new sensor can detect and measure the stepping foot more precisely.

    CiNii

  • 2P2-E10 Operation Method of Human-Friendly Robot (4th report) : Study on Mobile Robot with Step-on Interface

    AKAI Kosuke, ITO Yuichi, SUZUKI Shigehisa, MATSUMARU Takafumi

      2008   "2P2 - E10(1)"-"2P2-E10(3)"  2008.06

     View Summary

    We propose a new input method in which instructions are given to the robot by stepping on part of an operation screen projected onto the running surface by a projector mounted on the robot. This paper examines the layout of the instruction buttons on the operation screen. Additional functions are also discussed: preliminary announcement and indication of the forthcoming movement, discrimination of obstacles from the stepping foot to apply autonomous movement, and mouse-cursor operation.

    CiNii

  • 2P1-A37 Teleoperation of Human-Friendly Robot (34th report) : Development and evaluation of the mobile robot with announcement function using projector

    MATSUMARU Takafumi, HOSHIBA Yu, MIYATA Yasuhiro, HIRAIWA Shinji

      2006   "2P1 - A37(1)"-"2P1-A37(4)"  2006

     View Summary

    As society ages and the birthrate declines, human-friendly robots that can directly support and assist people are expected. We think it is important for a mobile robot to have not only a safety function to avoid collision with people but also a function to announce its forthcoming motion to surrounding people before it begins to move. This paper discusses a preliminary-announcement function for a mobile robot using projection. We developed the PMR-5R, proposed an expression using projection, and confirmed its effect through experimental results.

    CiNii

  • 2P1-A35 Teleoperation of Human-Friendly Robot (33rd report) : Development and evaluation of the mobile robot with announcement function using laser pointer

    MATSUMARU Takafumi, HOSHIBA Yu, MIYATA Yasuhiro, HIRAIWA Shinji

      2006   "2P1 - A35(1)"-"2P1-A35(4)"  2006

     View Summary

    As society ages and the birthrate declines, human-friendly robots that can directly support and assist people are expected. We think it is important for a mobile robot to have not only a safety function to avoid collision with people but also a function to announce its forthcoming motion to surrounding people before it begins to move. This paper discusses a preliminary-announcement function for a mobile robot using a laser pointer. We developed the PMR-1, proposed an expression using a laser pointer, and confirmed its effect through experimental results.

    CiNii

  • 2P1-A31 Teleoperation of Human-Friendly Robot (31st report) : Development and evaluation of the mobile robot with announcement function using omni-directional display

    MATSUMARU Takafumi, HOSHIBA Yu, MIYATA Yasuhiro, HIRAIWA Shinji

      2006   "2P1 - A31(1)"-"2P1-A31(4)"  2006

     View Summary

    As society ages and the birthrate declines, human-friendly robots that can directly support and assist people are expected. We think it is important for a mobile robot to have not only a safety function to avoid collision with people but also a function to announce its forthcoming motion to surrounding people before it begins to move. This paper discusses a preliminary-announcement function for a mobile robot using an omni-directional display. We developed the PMR-2R, proposed an expression using the omni-directional display, and confirmed its effect through experimental results.

    CiNii

  • 1P1-S-034 Teleoperation of Human-Friendly Robot (26th report) : Examination of robot localization with a single camera (Vision-Based Mobile Robot 2, Mega-Integration in Robotics and Mechatronics to Assist Our Daily Lives)

    Hasegawa Sho, Matsumaru Takafumi, Ito Tomotaka

      2005   82 - 82  2005.06

    CiNii

  • 1A1-S-050 Teleoperation of Human-Friendly Robot (25th report) : Preliminary Announcement of Mobile Robot's Following Motion with a projection (ITS and Robot Technology, Mega-Integration in Robotics and Mechatronics to Assist Our Daily Lives)

    Hoshiba Yu, Yamamori Hiroshi, Matsumaru Takafumi, Ito Tomotaka

      2005   42 - 42  2005.06

    CiNii

  • 1A1-S-047 Teleoperation of Human-Friendly Robot (24th report) : Examination of the combination control algorithm on a maze (ITS and Robot Technology, Mega-Integration in Robotics and Mechatronics to Assist Our Daily Lives)

    Iwase Kazuya, Yamamori Hiroshi, Matsumaru Takafumi, Ito Tomotaka

      2005   42 - 42  2005.06

    CiNii

  • Study on a suitable posture during lifting task operation using minimum jerk model

    MATSUMARU Takafumi, SHIMA Kazuyoshi, FUKUYAMA Satoshi

    The JSME Symposium on Welfare Engineering   2004   145 - 148  2004.09

     View Summary

    The purpose of this study is to clarify the optimal posture for a weight-lifting task. In a simulation using a 7-rigid-body link model that considers the Valsalva effect arising from lung capacity, each joint angle was varied according to the minimum jerk model. The simulation results and actual measurements were compared and examined using four evaluation criteria: the compression force and shear force on the lumbar vertebrae, the efficiency index, and the rate of workload on each joint. It was clarified that not only the load on the hip joint but also the load on the knee should be considered when evaluating a better lifting motion.
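
    For reference, the minimum jerk model used in the simulation above is the standard formulation from the motor-control literature (stated here as background, not as this paper's specific parameterization): a joint angle \theta moving from \theta_0 to \theta_f in time T follows

        \theta(t) = \theta_0 + (\theta_f - \theta_0)\left(10\tau^{3} - 15\tau^{4} + 6\tau^{5}\right), \qquad \tau = t/T,

    the unique trajectory that minimizes the integrated squared jerk \int_0^T \dddot{\theta}(t)^{2}\,dt under zero velocity and acceleration at both endpoints.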

    CiNii

  • Teleoperation of Human-Friendly Robot (18th report) : Expression for Preliminary-Announcement of Mobile Robot's Following Motion using Omnidirectional Display

    Yamazaki T, Iwase K, Matsumaru T, Ito T

      2004   94 - 95  2004.06

    CiNii

  • Teleoperation of Human-Friendly Robot (16th report) : Recognition of Environment using Range Sensor and Its Presentation

    Akiyama K, Iwase K, Matsumaru T, Ito T

      2004   211 - 211  2004.06

    CiNii

  • Teleoperation of Human-Friendly Robot (19th report) : Maneuverability of Mobile Robot depending on Network Condition

    Iwase K, Matsumaru T, Ito T

        83 - 83  2004

    CiNii

  • Teleoperation of Human-Friendly Robot (17th report) : Coordination of Preliminary-Announcement and Motion Control of Remote Operated Mobile Robot

    Kusada T, Iwase K, Matsumaru T, Ito T

        165 - 166  2004

    CiNii

  • Analysis of Human Movement : Study of a suitable posture during operation for lifting task

    SHIMA Kazuyoshi, FUKUYAMA Satoshi, MATSUMARU Takafumi, ITO Tomotaka

      326   163 - 164  2003

     View Summary

    This paper examines the optimal motion in a lifting task to prevent lumbago during the movement. We set three different postures from which to start the motion, and adopted four evaluation criteria for the examination: compression force on the lumbar vertebrae, shear force on the lumbar vertebrae, the total efficiency degree, and the contribution rate of each joint.

    CiNii

  • Teleoperation of Human-Friendly Robot (15th report) : Reflector driving unit using stepping motor on preliminary-announcement device with beam-light

    GOMI Hirotoshi, KUSADA Takashi, AKIYAMA Kyouhei, IWASE Kazuya, MATSUMARU Takafumi, ITOU Tomotaka

        319 - 320  2003

     View Summary

    Human-friendly robots, which live and work in the same space as humans, are becoming popular. We think a function that tells surrounding people the robot's next action is as important as a safety function. We report a preliminary-announcement device for a mobile robot that draws the scheduled path on the ground with a beam of light, using a reflector driving unit actuated by a stepping motor.

    CiNii

  • Teleoperation of Human-Friendly Robot : 9th report : Basic Examination for Development of Human-Friendly Mobile Robot

    MORI Hiromitsu, OISHI Hironori, HAGIWARA Kiyoshi, ITOH Tomotaka, MATSUMARU Takafumi

      2002   110 - 111  2002

    CiNii

  • Teleoperation of Human-Friendly Robot : 8th report : Preliminary Announcement Function for Mobile Robot and its Evaluation on 3D Simulation

    ENDO Hisashi, KUDO Shinnosuke, HAGIWARA Kiyoshi, ITOH Tomotaka, MATSUMARU Takafumi

      2002   99 - 99  2002

    CiNii

  • Learning by Experience System on Mechatronics using LEGO MindStorms : 3rd report : Study for Algorithm on Programming

    FUJITA Kazutoshi, KUDO Shinnosuke, HAGIWARA Kiyoshi, ITOH Tomotaka, MATSUMARU Takafumi

      2002   58 - 58  2002

    CiNii

  • Teleoperation of Human-Friendly Robot : 7th report : Evaluation of Composition of Autonomous Behaviors for Combination Control

    HAGIWARA Kiyoshi, ITOH Tomotaka, MATSUMARU Takafumi

      2002   65 - 65  2002

    CiNii

  • 2P2-N2 Teleoperation of Human Friendly Robot : 3rd report: study on preliminary announcement function

    Hagiwara K., Terasawa Y., Matsumaru T.

      2001   80 - 80  2001.06

    CiNii

  • 2P2-K5 Teleoperation of Human Friendly Robot : 2nd report: study on the combination motion control

    Ichikawa S., Hagiwara K., Matsumaru T.

      2001   77 - 77  2001.06

    CiNii

  • 2A1-A2 Learning by Experience System on Mechatronics using LEGO MindStorms : Paying Attention to Mechanism and Movement

    Nakashima T., Hagiwara K., Matsumaru T.

        39 - 39  2001

    CiNii

  • Study on the Structure and Control of Robots for Work in Confined Spaces

    MATSUMARU Takafumi

    Journal of the Robotics Society of Japan   18 ( 4 ) 513 - 513  2000.05

    CiNii

  • 2P1-34-043 Examination of a Simulation System for a Network-Teleoperated Robot

    MINOSHIMA Toshikazu, KOMATSU Chieko, MATSUMARU Takafumi

    Proceedings of the JSME Conference on Robotics and Mechatronics   2000   82 - 82  2000

     View Summary

    We are examining the operation of a remote robot over a network, and are building a simulation system to examine communication and control methods. Two network-connected PCs are equipped with an operating device and a slave robot (simulated by computer graphics), respectively; the operator controls the slave robot with the operating device while watching the transmitted image. We examine the functions of this system.

    CiNii

  • Direct Remote Operation of Master-Slave Manipulators through ISDN (Basic Experiments).

    MATSUMARU Takafumi, KAWABATA Shunichi, KOTOKU Tetsuo, ASAKURA Makoto, KOMORIYA Kiyoshi, YOSHIMI Takashi, TANIE Kazuo

    Proceedings of the Annual Conference of the Robotics Society of Japan   14th ( 3 )  1996

    J-GLOBAL


 


 

Research Institute

  • 2022
    -
    2024

    Waseda Research Institute for Science and Engineering   Concurrent Researcher

  • 2022
    -
    2024

    Waseda Center for a Carbon Neutral Society   Concurrent Researcher

Internal Special Research Projects

  • Study on a Function for Reproducing Shape-Deformation Processing of Objects in Human-Robot Collaborative Work

    2023   Xin He, Vibekananda Dutta, Teresa Zielinska

     View Summary

    This study examines a low-cost general strategy (one that does not require expensive datasets) for reproducing plastic deformation by robotic manipulation. The work aims to achieve semi-3D deformation reproduction of plastic objects (i.e., of limited complexity and variation): reproducing target trenches made manually in a box of kinetic sand, with different structures, depths, and widths, by employing a 6-DOF manipulator and an Azure Kinect RGB-D sensor. To achieve this goal, we contribute three main techniques: (1) a 3D surface-point labeling method that classifies surface points according to their principal curvature, so that a smooth labeled result is generated for each 3D point in an RGB-D image; (2) an improved deformation representation method based on the RANSAC (Random Sample Consensus) strategy, which represents the various shapes in a standardized form as a combination of 3D splines and different variations; and (3) a manipulation trajectory generation method for robotic systems based on a task-based adaptive periodic dynamic motion primitive. Different experimental settings were applied to assess the performance of these proposals. In comparison experiments (robotic manipulation vs. human operation), we calculated the Point Cloud Structural Similarity Metric (PointSSIM) and demonstrated deformation reproduction with a similarity close to that of human operation (less than 3% difference on average). Each proposed technique also has the potential to be adapted to other fields. For future work, our goal is to extend the deformation strategy to the general (unconstrained) deformation of general 3D objects; the main challenge will be matching and deforming the splines (main structure) of such objects. If the main structure of a 3D target shape can be represented from the initial state, then by extending the proposals of this study (new variations for general deformation cases and corresponding motion primitives for robotic systems), it should be possible for a robotic system to reproduce the general deformation of general 3D objects.
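
    To make technique (2) concrete, here is a minimal illustrative sketch (not the implementation used in this project) of a RANSAC loop that fits a cubic 3D spline to the labeled trench points and keeps the largest consensus set; the choice of NumPy/SciPy, the function name fit_spline_ransac, and all thresholds are assumptions for illustration only.

        # Hypothetical sketch: RANSAC fitting of a 3D spline to labeled deformation
        # points (e.g., a trench centerline). Assumes the points are distinct and
        # roughly ordered along the trench; thresholds are illustrative values.
        import numpy as np
        from scipy.interpolate import splprep, splev

        def fit_spline_ransac(points, n_iter=200, sample_size=8, inlier_tol=0.005):
            """points: (N, 3) array of labeled 3D points; returns a spline (tck)."""
            rng = np.random.default_rng(0)
            best_inliers = None
            for _ in range(n_iter):
                idx = np.sort(rng.choice(len(points), size=sample_size, replace=False))
                tck, _ = splprep(points[idx].T, s=0.0, k=3)  # interpolating spline
                curve = np.array(splev(np.linspace(0, 1, 200), tck)).T  # (200, 3)
                # Distance of every point to its nearest sample on the candidate curve.
                d = np.linalg.norm(points[:, None, :] - curve[None, :, :], axis=2).min(axis=1)
                inliers = points[d < inlier_tol]
                if best_inliers is None or len(inliers) > len(best_inliers):
                    best_inliers = inliers
            # Refit a smoothing spline on the consensus set as the standardized form.
            tck, _ = splprep(best_inliers.T, s=len(best_inliers) * inlier_tol**2, k=3)
            return tck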

  • Study on a Function for Reproducing Shape-Deformation Processing of Objects in Human-Robot Collaborative Work

    2022  

     View Summary

    In this study, we examined a method that represents the irregular deformation of a flat, flexible object and can reproduce the same deformation. The method consists of two stages: (a) the object's deformation in the target state is analyzed and estimated by the devised deformation-estimation method, and (b) the robot's motion is planned and executed with real-time sensor feedback. One novelty of the proposed method is that it detects only the principal deformation, ignoring surface wrinkles, surface roughness, and spurious deformation caused by noisy depth data. Experiments on the deformation of a small towel under various settings, using a 6-DOF manipulator and an RGB-D sensor, showed that various deformations can be estimated efficiently and that similar deformations can be reproduced.

  • Study on Advancing Human-Robot Interaction

    2021   HE, Xin

     View Summary

    As one aspect of advancing human-robot interaction, in FY2021 we worked in particular on a reproduction function for human-robot collaborative work. In tasks that deform the shape of an object, the robot is made to substitute for the person and reproduce the same work merely by observing the states before and after the deformation, without concrete on-site teaching. The academic problems are the recognition of the post-deformation state and the estimation of the deformation process (a recognition problem), and the generation and reproduction of the work process by the robot (a planning problem). So far, we have estimated the deformation process of a folded flat object (origami) and of a flat, flexible object such as a hand towel (picking up by pinching). We are now attempting to extend the targets to solid objects with height and to three-dimensional deformation.

  • Study on Advancing Human-Robot Interaction

    2020  

     View Summary

    We advanced research and development aimed at upgrading and deepening human-robot coexistence, collaboration, and interaction (physical and informational). [1] An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel. Sensors 20(11): 3091 (2020). [2] Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction. Sensors 21(1): 105 (2021).

  • Study on Advancing Human-Robot Collaborative Work

    2020  

     View Summary

    We advanced research and development aimed at upgrading and deepening human-robot coexistence, collaboration, and interaction (physical and informational). [1] An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel. Sensors 20(11): 3091 (2020). [2] Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction. Sensors 21(1): 105 (2021).

  • Outdoor Pedestrian-Guidance Robotic System Featuring Sidewalk Passage and Intersection Crossing

    2018  

     View Summary

    Outdoor pedestrian-navigation robotic system featuring sidewalk passage and intersection crossing: We studied path planning for pedestrians in outdoor environments using two-dimensional digital maps. Such maps are obtained over the network, e.g., from Google Maps or OpenStreetMap, but they do not record all data about crosswalks and pedestrian paths, for example outside urban areas. Therefore, path planning for pedestrians should be realized that depends neither on preliminarily recorded pedestrian map data nor on large amounts of data, such as would be needed to execute SLAM (simultaneous localization and mapping). Given the departure point and the destination, the system obtains the map data around and between them. First, it performs image processing (contour detection) to visually recognize city blocks. Next, graph theory is applied to deduce the pedestrian path from the departure point to the destination. In trials using actual map data, it was possible to plan a reasonable path, including which side of the road to walk on, with a success rate of 70 to 80%. In the future, we plan to detect pedestrian crossings and footbridges from satellite images and merge them into the graph data, and to guide pedestrians using a mobile robot in actual environments.
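
    The two-stage pipeline described above (contour detection on the map image, then graph search over the recognized free space) can be sketched as follows; this is an assumption-laden illustration rather than the project's code, and OpenCV/NetworkX, the grid resolution, and the binarization threshold are hypothetical choices.

        # Hypothetical sketch: (1) contour detection to find city blocks in a
        # rendered 2D map image, (2) shortest-path search over the free grid cells.
        import cv2
        import networkx as nx
        import numpy as np

        def plan_pedestrian_path(map_img, start, goal, cell=5):
            """map_img: grayscale map (blocks dark, roads light); start/goal in pixels (row, col)."""
            # (1) Binarize so dark city blocks become foreground, then fill their contours.
            _, blocks = cv2.threshold(map_img, 200, 255, cv2.THRESH_BINARY_INV)
            contours, _ = cv2.findContours(blocks, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            mask = np.zeros_like(map_img)
            cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
            # (2) Build a coarse grid graph and delete every cell inside a block.
            G = nx.grid_2d_graph(map_img.shape[0] // cell, map_img.shape[1] // cell)
            G.remove_nodes_from([n for n in list(G) if mask[n[0] * cell, n[1] * cell] > 0])
            s = (start[0] // cell, start[1] // cell)
            g = (goal[0] // cell, goal[1] // cell)
            return nx.shortest_path(G, s, g)  # raises NetworkXNoPath if disconnected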

  • Study on Methods of Teaching Three-Dimensional Motions to People Using Two-Dimensional Projected Images

    2016  

     View Summary

    Research and development activities have continued in order to deepen human-robot interaction technology in the field of robotics and mechatronics, with the following results. (1) Visual SLAM using an RGB-D sensor: ORB-SHOT SLAM, a trajectory correction by 3D loop closing based on the Bag-of-Visual-Words (BoVW) model for RGB-D visual SLAM, was newly proposed (JRM article, to appear). (2) Learning system using an RGB-D sensor and a projector: calibration and statistical learning techniques for building an interactive screen for children were proposed, and a trial was conducted in a school (IEEE/SICE SII conference paper, presented; IJARS article, to appear). (3) Learning system using an LM sensor and a projector: a calligraphy-stroke learning support system using a projector and a motion sensor was proposed (JACIII article, to appear). (4) Interactive interface using an LM sensor and a 3D hologram: interactive aerial projection of a 3D hologram object was proposed (IEEE ROBIO conference paper, presented).

  • Touch-Interaction Technology under Image Projection Using Range-Image Information

    2015  

     View Summary

    <1> Research and Development of a Near-Field Touch Interface Using a Time-of-Flight Camera: The purpose of this study is to apply a 3-dimensional image sensor, a time-of-flight camera, to a projector-sensor system, to achieve the basic functions of a conventional touch interface (clicking, dragging, and sliding) and to expand it with new functions such as finger-direction detection. The research items are: (1) a near-field hand extraction method, (2) high-accuracy touch/hover detection, (3) an integrated and complete projector-sensor system, and (4) evaluation experiments. <2> Research and Development of a Calligraphy-Brushwork Learning Support System: A calligraphy learning support system was proposed for supporting brushwork learning using a projector. The system provides three training styles according to the learner's ability: copying training, tracing training, and a combination of both. To instruct three-dimensional brushwork such as writing speed, pressure force, and brush orientation, we proposed an instruction method that presents information about the brush tip, visualizing the position, orientation, and moving direction of the brush. A preliminary learning experiment was performed and the efficiency of the proposed method was examined through the experimental results.
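
    As an illustration of the touch/hover detection named in research item (2), a minimal sketch follows; the background-subtraction approach and the threshold values are assumptions made for this sketch, not the system's actual algorithm.

        # Hypothetical sketch: classify depth-camera pixels into "touch" and "hover"
        # by comparing the live depth image against a background image of the bare surface.
        import numpy as np

        TOUCH_MAX = 0.010  # closer than ~1 cm to the surface -> touch (assumed value)
        HOVER_MAX = 0.050  # within ~5 cm of the surface -> hover (assumed value)

        def classify_pixels(depth, background):
            """depth, background: (H, W) camera-to-scene / camera-to-surface distances [m]."""
            height = background - depth             # object height above the surface
            valid = (depth > 0) & (height > 0.002)  # drop sensor dropouts and plane noise
            touch = valid & (height < TOUCH_MAX)
            hover = valid & (height >= TOUCH_MAX) & (height < HOVER_MAX)
            return touch, hover

        def touch_pixels(touch_mask):
            """Pixel coordinates of touching regions; a real system would cluster
            these blobs into individual fingertips before reporting click events."""
            ys, xs = np.nonzero(touch_mask)
            return list(zip(xs, ys))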

  • Higher Performance and Functionality of Image-Projection Interfaces and Their Application Development

    2014  

     View Summary

    In this project, we researched and developed a virtual touch screen that multiple objects can contact simultaneously and that identifies the contacting object. Specifically, based on measurement data from an Xtion sensor, we realized a virtual keyboard and a virtual xylophone that not only perform contact/non-contact sensing but also distinguish fingers from tools and detect contact velocity. We examined how to mount the Xtion sensor on the device, as well as the algorithms for (1) finger recognition, (2) contact recognition, (3) mapping of the contact position, (4) distinguishing fingers from tools (mallets), and (5) detecting contact velocity. We believe these functions are useful for both edutainment and entertainment.
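
    Of the algorithms listed above, (5) contact-velocity detection lends itself to a compact illustration; the finite-difference scheme, the assumed frame rate, and the loudness mapping below are assumptions for this sketch, not the implemented method.

        # Hypothetical sketch: estimate the strike velocity for the virtual xylophone
        # from the tracked tip height (finger or mallet) in consecutive depth frames.
        FPS = 30.0  # assumed depth-sensor frame rate

        def contact_velocity(heights):
            """heights: tip heights above the surface [m], one per frame, newest last.
            Returns the approach speed [m/s] estimated just before contact."""
            if len(heights) < 2:
                return 0.0
            # Backward finite difference over the last two frames before contact.
            return max(0.0, (heights[-2] - heights[-1]) * FPS)

        def note_velocity(heights, v_max=1.5):
            """Map the approach speed to a normalized loudness in [0, 1]."""
            return min(1.0, contact_velocity(heights) / v_max)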

  • Study on Advanced Performance and Functionality of Projected-Screen Interfaces

    2014  

     View Summary

    As an advancement of the projected-screen interface, in which an operation-panel screen projected onto a wall or other surface is used to operate robotic and mechatronic devices, we researched and developed improvements to the versatility of the bidirectional interface projected onto an arbitrary surface (the Step-on Interface, SOI). Specifically, we aimed to realize a system in which operation buttons on a screen projected onto a wall by a projector are designated with a laser pointer to operate a device. In this project, we proposed a new method that allows three laser pointers to operate on a single screen without interfering with one another.

  • Functional Improvement and Trial Survey of IDAT, an Image-Projection Desktop Upper-Limb Training Device

    2013  

     View Summary

    Evaluation of the image-projection desktop upper-limb training device IDAT-3 by questionnaire survey (April 1, 2014)

    1. Purpose: To obtain evaluations from men and women of various ages on three items: (1) operability (ease of use, usability, intuitiveness); (2) amusingness (fun, sense of fulfillment, sense of achievement); (3) usefulness (effectiveness, practicality, convenience). A further aim was to collect opinions and impressions, so as to understand the shortcomings of the current IDAT-3 and obtain material for future performance improvements and new functions.

    2. Method and procedure: IDAT-3 was shown at two exhibitions. (1) Exhibition 1: the 13th Industry-Academia Collaboration Fair at Kitakyushu Science and Research Park (http://fair.ksrp.or.jp/), October 23-25, 2013, 10:00-17:00, in the gymnasium of Kitakyushu Science and Research Park; visitors included experts, professionals, students, and citizens. (2) Exhibition 2: International Robot Exhibition 2013 (http://www.nikkan.co.jp/eve/irex/), November 6-9, 2013, 10:00-17:00, Tokyo Big Sight, East Halls 1-3; visitors included experts, professionals, students, and citizens. The survey procedure was: 1) explain to visitors the background and purpose of the R&D and the configuration and functions of IDAT, and give a demonstration; 2) have visitors try the training programs (whack-a-mole, balloon popping, fish catching); 3) after they have tried as much as they like, have them fill in the questionnaire. The questionnaire items were: 1. Gender: □male □female / 2. Age: □-10s □20s □30s □40s □50s □60s □70s □80s- / 3. Country: □Japan □Other ( ) / 4. User-friendliness: (good)9・8・7・6・5(average)・4・3・2・1(bad), with comment / 5. Amusingness: same scale, with comment / 6. Usefulness: same scale, with comment / 7. Feedback and opinions. Respondents were asked to rate IDAT-3 on the three items as subjective absolute values, not relative ones. From seven days of exhibition in total, 88 responses were obtained: 67 male and 21 female; by age, 40 respondents were in their teens or younger, 30 in their 20s, 4 in their 30s, 4 in their 40s, and 6 in their 50s or older.

    3. Scores: The mean (standard deviation) for each evaluation item was as follows.
    (1) Operability: teens or younger 7.25 (1.80); 20s 7.23 (1.45); 30s 7.38 (1.32); 40s 7.75 (0.83); 50s 9.00 (0.00); 60s 6.33 (0.94); 70s 7.00 (0.00); male 7.28 (1.54); female 7.29 (1.72); total 7.28 (1.59).
    (2) Amusingness: teens or younger 8.25 (1.34); 20s 7.93 (1.18); 30s 8.25 (0.97); 40s 6.75 (1.09); 50s 9.00 (0.00); 60s 5.67 (0.94); 70s 7.00 (0.00); male 7.88 (1.37); female 8.33 (1.17); total 7.99 (1.34).
    (3) Usefulness: teens or younger 7.60 (1.50); 20s 7.67 (1.45); 30s 7.38 (1.32); 40s 8.00 (0.71); 50s 9.00 (0.00); 60s 8.33 (0.94); 70s 7.00 (0.00); male 7.60 (1.51); female 7.95 (1.27); total 7.67 (1.46).
    The overall means of all three items lie between 7 and 8. Operability had the lowest mean (7.28) and the largest standard deviation (1.59) of the three items; amusingness had the highest mean (7.99) and the smallest standard deviation (1.34). Because there were few female respondents, it is difficult to examine differences between genders, so differences by age are examined below.
    3.1 Operability: Grouping ages into young (-10s), middle (20s-40s), and older (50s-), the scores were 7.25 (1.80), 7.31 (1.39), and 7.33 (1.37), respectively. No significant difference between the age groups was found; the scores are nearly identical, so the operational feel of IDAT is little affected by age. However, operability was not rated as highly as the other two items; its problems are examined below based on the respondents' comments.
    3.2 Amusingness: The scores for the three age groups were 8.25 (1.34), 7.89 (1.20), and 7.00 (1.63). No significant difference was found, but the score tends to fall with age. The young group's score of 8.25 is high, so their interest was evidently captured. Game-like training content readily attracts young people, but attracting elderly people is not so simple.
    3.3 Usefulness: The scores for the three age groups were 7.60 (1.50), 7.64 (1.46), and 8.33 (0.94). No significant difference was found, but older respondents tend to rate usefulness higher. This is probably because each respondent was told that the aim of IDAT training is hand-eye coordination; elderly people, aware of declining physical and cognitive functions, readily recognize the significance of such training.

    4. Opinions and impressions: The comments received are grouped by topic and discussed.
    4.1 Display: "It might be more interesting if the moles appeared three-dimensional rather than as flat pictures; or project them on a wall." "What happens when the projection surface is sloped?" How much visual realism a rehabilitation training device needs should be examined carefully. Meanwhile, the performance of LSIs realizing real-time 3D computer graphics for home game consoles keeps improving, so high-definition, high-quality 3D images may be introduced in the future, cost-performance permitting. If the main unit is installed perpendicular to the surface, operation is unaffected even if that surface is inclined; projection onto a wall instead of a desk is also possible with a suitable device configuration.
    4.2 Response latency: "The touch response is a little slow." "The response is dull." "Please improve the response speed." "The time lag is noticeable." The search zone for detecting obstacles is a plane about 1 cm above the work surface, so the system also reacts to a hand not yet touching the surface. This should work in favor of responsiveness, yet a delay is certainly felt. Causes can be found in both hardware and software. First, signal transmission: the transfer rate of the sensor information is too slow for detecting moving objects, so the cycle time for acquiring sensor data must be shortened, including a review of the computer's processing power. Second, programming: the hit-detection computation takes too long, so the algorithm must be reviewed and optimized.
    4.3 Detection accuracy: "Please improve the accuracy of hit detection." "Consecutive hit detection is sometimes slow." To keep the response sensitivity acceptable, the exact size and shape of the obstacle are not recognized, so the calculated obstacle position varies somewhat with hand size (child/adult, ethnicity, etc.), hand state (closed or open, etc.), and the use of a toy hammer. To handle all situations more accurately, methods such as measuring the trainee's strikes before training and calculating correction values should be considered.
    4.4 Miniaturization and cost reduction: "I look forward to miniaturization." "The cost is high." A general-purpose computer is currently used for R&D, but for commercialization a one-chip microcontroller could be integrated into the main unit. We are also separately examining a low-cost RGB-D sensor that would replace the expensive laser range finder (URG) while providing the same functions.
    4.5 Overall: Many positive and encouraging responses were received: "I think rehabilitation would proceed enjoyably with this." "Easy to operate; all kinds of people can enjoy it; I would like to try it again." "Very good for rehabilitation; fun; practical." "I look forward to future development." Many visitors thus recognized the importance and necessity of this R&D and rated it highly, while also providing valuable comments on the remaining room for improvement. Based on these suggestions, we will continue research to make the system even better.

  • Research and Development of a Function Maintenance and Recovery Training System for Elderly and Disabled People Using a Projection Interface

    2011  

     View Summary

    In this project, aimed at training to maintain and recover upper-body motor and cognitive functions, we produced a simple portable device consisting only of a PC and the SOI (Step-on Interface: a new interface that uses the projector's projected screen not only to present information from the device to the person but also to accept instruction input from the person to the device), and developed game applications as training content.
    In the first half of FY2011, we developed applications on the mobile robot HFAMRO-2, which carries two SOI sets on the front and rear of a two-wheeled mobile mechanism, and established prospects for "balloon popping" (targets move upward), "fish catching" (targets move left and right), and "whack-a-mole" (targets appear at random positions). From the second half of FY2011, the project was also adopted by the city-originated robot creation project of the Kitakyushu Robot Forum and could thus be carried out as joint research with the Department of Rehabilitation Medicine of the University of Occupational and Environmental Health, the Faculty of Engineering of Kyushu Sangyo University, Leaf Co., Ltd., and Core Corporation Kyushu Company, with the Kitakyushu Foundation for the Advancement of Industry, Science and Technology as the secretariat. So far we have prototyped unit 1 (reflection type: a popular projector whose projection distance is extended with a mirror so that the device height stays low), unit 2 (direct-projection type: a somewhat expensive short-focus projector installed at a relatively low position projects directly onto the desk), and unit 3 (clinical trial prototype: the latest projector, aiming at storability and portability), and we had opportunities to bring them to the rehabilitation department of the University of Occupational and Environmental Health and to the special nursing home Mojimi-en to hear opinions. Meanwhile, to make adding and changing screen designs easy and to allow detailed adjustment, we are redeveloping the application program with a completely revised algorithm and implementation; it does not yet work completely and remains insufficient.
    At the end of FY2011 we obtained approval from the ethics committees for research involving human subjects at both the University of Occupational and Environmental Health and Waseda University, and clinical trials for rehabilitation (the university's rehabilitation department) and recreation (the nursing home) are to begin in FY2012, so continued funding is needed.
