Updated 2022/12/02


Takafumi Matsumaru (松丸 隆文)

Scopus publication data
Papers: 0  Citations: 0  h-index: 7

Citation Count is the number of citations received by papers published in the given year.

Affiliation
Faculty of Science and Engineering, Graduate School of Information, Production and Systems
Position
Professor

Concurrent Appointments at University Research Institutes

  • 2020–2022   Waseda Research Institute for Science and Engineering, Adjunct Researcher

Education

  • –1987   Waseda University, Graduate School of Science and Engineering, Major in Mechanical Engineering, Specialization in Biological Control Engineering

  • –1985   Waseda University, School of Science and Engineering, Department of Mechanical Engineering

Degree

  • March 1998   Waseda University, Doctor of Engineering

Career

  • September 2010 – March 2011   Shizuoka University, Visiting Professor (Faculty of Engineering and Graduate School of Engineering)

  • September 2010 – present   Waseda University, Professor (Faculty of Science and Engineering, Graduate School of Information, Production and Systems)

  • April 1999 – August 2010   Shizuoka University, Associate Professor (Department of Mechanical Engineering, Faculty of Engineering)

  • April 2004 – March 2005   Shizuoka Institute of Science and Technology, Part-time Lecturer (Department of Mechanical Engineering, Faculty of Science and Technology)

  • April 2003 – December 2003   LSC (Laboratoire Systèmes Complexes), CNRS (Centre National de la Recherche Scientifique), France, Visiting Professor (under the JSPS program for researchers dispatched to specific countries)

  • April 2002 – March 2003   Shizuoka Industrial Technology Center, Shizuoka Prefecture, Visiting Researcher

  • April 1998 –   Toshiba Corporation, Research and Development Center, Mechanical Systems Laboratory, Senior Researcher (after reorganization)

  • April 1994 –   Toshiba Corporation, Research and Development Center, Mechanical and Energy Laboratory, Senior Researcher

  • April 1987 –   Joined Toshiba Corporation; Corporate Research and Development Center, Mechanical Engineering Laboratory


Academic Societies

  • Society of Biomechanisms Japan

  • The Robotics Society of Japan

  • The Society of Instrument and Control Engineers (SICE)

  • The Japan Society of Mechanical Engineers (JSME)

  • Human Interface Society

  • The Virtual Reality Society of Japan

  • Society of Automotive Engineers of Japan (JSAE)

  • IEEE (Institute of Electrical and Electronics Engineers, Inc.)

  • IAENG (the International Association of Engineers)


Research Fields

  • Databases

  • Intelligent robotics

  • Rehabilitation science

  • Robotics and intelligent mechanical systems

  • Machine dynamics and mechatronics

Research Keywords

  • Bioengineering

  • Robotics

  • Human mechatronics

  • Bio-robotics

Papers

  • A Transformer-Based Model for Super-resolution of Anime Image

    Shizhuo Xu, Vibekananda Dutta, Xin He, Takafumi Matsumaru

    Sensors (MDPI) (ISSN 1424-8220)   22 ( 21 ) 8126 - 31  October 2022  [Refereed]  [International journal]  [International co-authorship]

    Role: Last author

    Image super-resolution (ISR) technology aims to enhance resolution and improve image quality. It is widely applied to various real-world applications related to image processing, especially in medical images, while relatively little applied to anime image production. Furthermore, contemporary ISR tools are often based on convolutional neural networks (CNNs), while few methods attempt to use transformers that perform well in other advanced vision tasks. We propose a so-called anime image super-resolution (AISR) method based on the Swin Transformer in this work. The work was carried out in several stages. First, a shallow feature extraction approach was employed to facilitate the features map of the input image’s low-frequency information, which mainly approximates the distribution of detailed information in a spatial structure (shallow feature). Next, we applied deep feature extraction to extract the image semantic information (deep feature). Finally, the image reconstruction method combines shallow and deep features to upsample the feature size and performs sub-pixel convolution to obtain many feature map channels. The novelty of the proposal is the enhancement of the low-frequency information using a Gaussian filter and the introduction of different window sizes to replace the patch merging operations in the Swin Transformer. A high-quality anime dataset was constructed to curb the effects of the model robustness on the online regime. We trained our model on this dataset and tested the model quality. We implement anime image super-resolution tasks at different magnifications (2×, 4×, 8×). The results were compared numerically and graphically with those delivered by conventional convolutional neural network-based and transformer-based methods. We demonstrate the experiments numerically using standard peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), respectively. 
    The series of experiments and an ablation study show that our proposal outperforms the others.
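The abstract above reports results using PSNR and SSIM. As a minimal illustration of the PSNR metric only (our own sketch over flat grayscale pixel lists, not the paper's evaluation code):

```python
import math

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio between two equally sized
    grayscale images given as flat lists of pixel values."""
    assert len(reference) == len(test) and reference
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

# A uniform error of 10 gray levels gives MSE = 100, so
# PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
print(round(psnr([0, 0, 0, 0], [10, 10, 10, 10]), 2))  # → 28.13
```

Higher PSNR means the super-resolved image is numerically closer to the ground-truth high-resolution image; SSIM complements it by comparing local structure rather than raw pixel error.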

    DOI

    Scopus

  • Deep Learning Based One-Class Detection System for Fake Faces Generated by GAN Network

    Shengyin Li, Vibekananda Dutta, Xin He, Takafumi Matsumaru

    Sensors (MDPI) (ISSN 1424-8220)   22 ( 20 ) 7767 - 24  October 2022  [Refereed]  [International journal]  [International co-authorship]

    Role: Last author

    Recently, the dangers associated with face generation technology have been attracting much attention in image processing and forensic science. The current face anti-spoofing methods based on Generative Adversarial Networks (GANs) suffer from defects such as overfitting and generalization problems. This paper proposes a new generation method using a one-class classification model to judge the authenticity of facial images for the purpose of realizing a method to generate a model that is as compatible as possible with other datasets and new data, rather than strongly depending on the dataset used for training. The method proposed in this paper has the following features: (a) we adopted various filter enhancement methods as basic pseudo-image generation methods for data enhancement; (b) an improved Multi-Channel Convolutional Neural Network (MCCNN) was adopted as the main network, making it possible to accept multiple preprocessed data individually, obtain feature maps, and extract attention maps; (c) as a first ingenuity in training the main network, we augmented the data using weakly supervised learning methods to add attention cropping and dropping to the data; (d) as a second ingenuity in training the main network, we trained it in two steps. In the first step, we used a binary classification loss function to ensure that known fake facial features generated by known GAN networks were filtered out. In the second step, we used a one-class classification loss function to deal with the various types of GAN networks or unknown fake face generation methods. We compared our proposed method with four recent methods. Our experiments demonstrate that the proposed method improves cross-domain detection efficiency while maintaining source-domain accuracy. These studies show one possible direction for improving the correct answer rate in judging facial image authenticity, thereby making a great contribution both academically and practically.

    DOI

    Scopus

  • Methods of Generating Emotional Movements and Methods of Transmitting Behavioral Intentions: A Perspective on Human-Coexistence Robots

    Takafumi Matsumaru

    Sensors (MDPI) (ISSN 1424-8220)   22 ( 12 ) 4587 - 24  June 2022  [Refereed]  [International journal]

    Role: First author, Last author, Corresponding author

    The purpose of this paper is to introduce and discuss the following two functions that are considered to be important in human‐coexistence robots and human‐symbiotic robots: the method of generating emotional movements, and the method of transmitting behavioral intentions. The generation of emotional movements is to design the bodily movements of robots so that humans can feel specific emotions. Specifically, the application of Laban movement analysis, the development from the circumplex model of affect, and the imitation of human movements are discussed. However, a general technique has not yet been established to modify any robot movement so that it contains a specific emotion. The transmission of behavioral intentions is about allowing the surrounding humans to understand the behavioral intentions of robots. Specifically, informative motions in arm manipulation and the transmission of the movement intentions of robots are discussed. In the former, the target position in the reaching motion, the physical characteristics in the handover motion, and the landing distance in the throwing motion are examined, but there are still few research cases. In the latter, no groundbreaking method has been proposed that is fundamentally different from earlier studies. Further research and development are expected in the near future.

    DOI

    Scopus

  • Communication That Closes the Distance Between Humans and Robots: Motion Preview and Intention Transmission by Robots (ヒトとロボットの距離を縮める意思疎通―ロボットの動作予告・意図伝達―)

    松丸隆文 (Takafumi Matsumaru)

    Keisoku to Seigyo (Journal of the Society of Instrument and Control Engineers) (ISSN: 04534662)   61 ( 3 ) 203 - 208  March 2022  [Refereed]  [Invited]

    Role: First author

    DOI

  • Training a Robotic Arm Movement with Deep Reinforcement Learning

    Xiaohan Ni, Xin He, Takafumi Matsumaru

    2021 IEEE International Conference on Robotics and Biomimetics (ROBIO 2021)     --- - ---  December 2021  [Refereed]

    Role: Last author

    DOI

    Scopus

  • A depth camera-based system to enable touch-less interaction using hand gestures

    R. Damindarov, C. A. Fam, R. A. Boby, M. Fahim, A. Klimchik, T. Matsumaru

    2021 International Conference "Nonlinearity, Information and Robotics" (NIR 2021)     --- - ---  August 2021  [Refereed]

    Role: First author

    DOI

  • Long-arm Three-dimensional LiDAR for Anti-occlusion and Anti-sparsity Point Clouds

    Jingyu Lin, Shuqing Li, Wen Dong, Takafumi Matsumaru, Shengli Xie

    IEEE Transactions on Instrumentation and Measurement    August 2021  [Refereed]  [International co-authorship]

    Light detection and ranging (LiDAR) systems, also called laser radars, have a wide range of applications. This paper considers two problems in LiDAR data. The first problem is occlusion. A LiDAR acquires point clouds by scanning the surrounding environment with laser beams emitting from its center, and therefore an object behind another cannot be scanned. The second problem is sample-sparsity. LiDAR scanning is usually taken with a fixed angular step, consequently the sample points on an object surface at a long distance are sparse, and thus accurate boundary and detailed surface of the object cannot be obtained. To address the occlusion problem, we design and implement a novel three-dimensional (3D) LiDAR with a long-arm which is able to scan occluded objects from their flanks. To address the sample-sparsity problem, we propose an adaptive resolution scanning method which detects object and adjusts the angular step in realtime according to the density of points on the object being scanned. Experiments on our prototype system and scanning method verify their effectiveness in anti-occlusion and anti-sparsity as well as accuracy in measuring. The data and the codes are shared on the web site of https: //github.com/SCVision/LongarmLiDAR.
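The adaptive-resolution idea in the abstract can be pictured with a small geometric sketch (our own illustration under a small-angle assumption, not the authors' implementation): the spacing between adjacent samples on a surface at range r scanned with angular step Δθ is roughly r·Δθ, so keeping a target point spacing means shrinking the step for distant objects.

```python
import math

def adaptive_step(distance_m, target_spacing_m, max_step_rad=math.radians(0.5)):
    """Choose an angular step so that adjacent laser samples on a
    surface at the given range are no farther apart than the target
    spacing (small-angle approximation: spacing ≈ range * step).
    The 0.5-degree default cap is an illustrative value."""
    if distance_m <= 0:
        return max_step_rad
    return min(max_step_rad, target_spacing_m / distance_m)

# A wall 2 m away can be scanned coarsely; one 40 m away needs a
# much finer step to keep ~5 cm point spacing.
print(adaptive_step(2.0, 0.05))   # capped at the default max step
print(adaptive_step(40.0, 0.05))  # ≈ 0.00125 rad
```

This is why a fixed angular step yields sparse points on far objects: the same Δθ sweeps a much longer arc at long range.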

    DOI

    Scopus

    Citations: 1 (Scopus)

  • Pre-robotic Navigation Identification of Pedestrian Crossings and Their Orientations

    Ahmed Farid, Takafumi Matsumaru

    Field and Service Robotics     73 - 84  January 2021  [Refereed]

    DOI

    Scopus

  • Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction

    Xin He, Takafumi Matsumaru

    Sensors   21 ( 1 ) 105 - 36  December 2020  [Refereed]  [International journal]

    This paper introduces a system that can estimate the deformation process of a deformed flat object (folded plane) and generate the input data for a robot with human-like dexterous hands and fingers to reproduce the same deformation of another similar object. The system is based on processing RGB data and depth data with three core techniques: a weighted graph clustering method for non-rigid point matching and clustering; a refined region growing method for plane detection on depth data based on an offset error defined by ourselves; and a novel sliding checking model to check the bending line and adjacent relationship between each pair of planes. Through some evaluation experiments, we show the improvement of the core techniques to conventional studies. By applying our approach to different deformed papers, the performance of the entire system is confirmed to have around 1.59 degrees of average angular error, which is similar to the smallest angular discrimination of human eyes. As a result, for the deformation of the flat object caused by folding, if our system can get at least one feature point cluster on each plane, it can get spatial information of each bending line and each plane with acceptable accuracy. The subject of this paper is a folded plane, but we will develop it into a robotic reproduction of general object deformation.

    DOI

    Scopus

  • An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel

    Takafumi Matsumaru, Ami Morikawa

    Sensors (MDPI AG (Multidisciplinary Digital Publishing Institute)) (ISSN 1424-8220; CODEN: SENSC9)   20 ( 11 )  May 2020  [Refereed]

    This paper introduces an object model and an interaction method for a simulated experience of pottery on a potter’s wheel. Firstly, we propose a layered cylinder model for a 3D object of the pottery on a potter’s wheel. Secondly, we set three kinds of deformation functions to form the object model from an initial state to a bowl shape: shaping the external surface, forming the inner shape (deepening the opening and widening the opening), and reducing the total height. Next, as for the interaction method between a user and the model, we prepare a simple but similar method for hand-finger operations on pottery on a potter’s wheel, in which the index finger movement takes care of the external surface and the total height, and the thumb movement makes the inner shape. Those are implemented in the three-dimensional aerial image interface (3DAII) developed in our laboratory to build a simulated experience system. We confirm the operation of the proposed object model (layered cylinder model) and the functions of the prepared interaction method (a simple but similar method to actual hand-finger operations) through a preliminary evaluation of participants. The participants were asked to make three kinds of bowl shapes (cylindrical, dome-shaped, and flat-type) and then they answered the survey (maneuverability, visibility, and satisfaction). All participants could make something like three kinds of bowl shapes in less than 30 min from their first touch.
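The layered cylinder model described above lends itself to a compact sketch: treat the workpiece as a stack of layers, each with an outer radius and an inner radius for the opening, and let each deformation function edit that stack. The class below is our own minimal reading of the abstract, not the authors' implementation; all names and numbers are illustrative.

```python
class LayeredCylinder:
    """A pottery workpiece as a stack of layers, bottom to top.
    Each layer stores an [outer_radius, inner_radius] pair."""

    def __init__(self, layers, radius):
        self.layers = [[radius, 0.0] for _ in range(layers)]

    def shape_outer(self, layer, radius):
        # index finger pressing the external surface of one layer
        self.layers[layer][0] = radius

    def widen_opening(self, depth, inner_radius):
        # thumb deepening/widening the opening from the top down
        for layer in self.layers[-depth:]:
            layer[1] = min(inner_radius, layer[0])

    def reduce_height(self, layers_to_remove):
        # pressing down removes layers, reducing the total height
        del self.layers[-layers_to_remove:]

    @property
    def height(self):
        return len(self.layers)

bowl = LayeredCylinder(layers=10, radius=5.0)
bowl.shape_outer(9, 6.0)       # flare the rim outward
bowl.widen_opening(4, 4.0)     # hollow the top four layers
bowl.reduce_height(1)
print(bowl.height)             # → 9
```

The three abstract operations map directly onto the three methods: shaping the external surface, forming the inner shape, and reducing the total height.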

    DOI

    Scopus

    Citations: 1 (Scopus)

  • Intuitive Control of Virtual Robots using Transformed Objects as Multiple Viewports

    Rajeevlochana G. Chittawadigi, Takafumi Matsumaru, Subir Kumar Saha

    2019 IEEE International Conference on Robotics and Biomimetics (IEEE Robio 2019) [Dali, Yunnan, China] (2019.12.6-8)     822 - 827  December 2019  [Refereed]

    In this paper, the integration of Leap Motion controller with RoboAnalyzer software has been reported. Leap Motion is a vision based device that tracks the motion of human hands, which was processed and used to control a virtual robot model in RoboAnalyzer, an established robot simulation software. For intuitive control, the robot model was copied and transformed to be placed at four different locations such that the user watches four different views in the same graphics environment. This novel method was used to avoid multiple windows or viewports and was observed to have a marginally better rendering rate. Several trials of picking up cylindrical objects (pegs) and moving them and placing in cylindrical holes were carried out and it was found that the manipulation was intuitive, even for a novice user.

    DOI

    Scopus

    Citations: 3 (Scopus)

  • Dynamic Hand Gesture Recognition for Robot Arm Teaching based on Improved LRCN Model

    Kaixiang Luan, Takafumi Matsumaru

    2019 IEEE International Conference on Robotics and Biomimetics (IEEE Robio 2019) [Dali, Yunnan, China] (2019.12.6-8)     1268 - 1273  December 2019  [Refereed]

    In this research, we focus on finding a new method of human-robot interaction in industrial environment. A visionbased dynamic hand gestures recognition system has been proposed for robot arm picking task. 8 dynamic hand gestures are captured for this task with a 100fps high speed camera. Based on the LRCN model, we combine the MobileNets (V2) and LSTM for this task, the MobileNets (V2) for extracting the image features and recognize the gestures, then, Long Short-Term Memory (LSTM) architecture for interpreting the features across time steps. Around 100 samples are taken for each gesture for training at first, then, the samples are augmented to 200 samples per gesture by data augmentation. Result shows that the model is able to learn the gestures varying in duration and complexity and gestures can be recognized in 88ms with 90.62% accuracy in the experiment on our hand gesture dataset.

    DOI

    Scopus

    Citations: 4 (Scopus)

  • Three-dimensional Aerial Image Interface, 3DAII

    Takafumi MATSUMARU, Asyifa Imanda SEPTIANA, Kazuki HORIUCHI

    Journal of Robotics and Mechatronics (JRM) (Fuji Technology Press Ltd.)   31 ( 5 ) 657 - 670  October 2019  [Refereed]

    In this paper, we introduce the three-dimensional aerial image interface, 3DAII. This interface reconstructs and aerially projects a three-dimensional object image, which can be simultaneously observed from various viewpoints or by multiple users with the naked eye. A pyramid reflector is used to reconstruct the object image, and a pair of parabolic mirrors is used to aerially project the image. A user can directly manipulate the three-dimensional object image by superimposing a user’s hand-finger or a rod on the image. A motion capture sensor detects the user’s hand-finger that manipulates the projected image, and the system immediately exhibits some reaction such as deformation, displacement, and discoloration of the object image, including sound effects. A performance test is executed to confirm the functions of 3DAII. The execution time of the end-tip positioning of a robotic arm has been compared among four operating devices: touchscreen, gamepad, joystick, and 3DAII. The results exhibit the advantages of 3DAII; we can directly instruct the movement direction and movement speed of the end-tip of the robotic arm, using the three-dimensional Euclidean vector outputs of 3DAII in which we can intuitively make the end-tip of the robotic arm move in three-dimensional space. Therefore, 3DAII would be one important alternative to an intuitive spatial user interface, e.g., an operation device of aerial robots, a center console of automobiles, and a 3D modelling system. A survey has been conducted to evaluate comfort and fatigue based on ISO/TS 9241-411 and ease of learning and satisfaction based on the USE questionnaire. We have identified several challenges related to visibility, workspace, and sensory feedback to users that we would like to address in the future.

    DOI

    Scopus

    Citations: 5 (Scopus)

  • Brand Recognition with Partial Visible Image in the Bottle Random Picking Task based on Inception V3

    Chen Zhu, Takafumi Matsumaru

    IEEE Ro-Man 2019 (The 28th IEEE International Conference on Robot & Human Interactive Communication) [Le Meridien, Windsor Place, New Delhi, India] (14-18 Oct, 2019)     1 - 6  October 2019  [Refereed]

    In the brand-wise random-ordered drinking PET bottle picking task, overlapping and viewing-angle problems cause low brand-recognition accuracy. In this paper, we set the problem of increasing the brand-recognition accuracy and try to find out how the overlapping rate affects the recognition accuracy. By using a stepping motor and a transparent fixture, the training images were taken automatically from the bottles over 360 degrees to simulate pictures taken from arbitrary viewing angles. After that, the images were augmented with random cropping and rotation to simulate the overlapping and rotation in a real application. Using the automatically constructed dataset, an Inception V3 network, transfer-learned from ImageNet, was trained for brand recognition. By generating a random mask with a specific overlapping rate on the original image, the Inception V3 can give 80% accuracy when 45% of the object in the image is visible, or 86% accuracy when the overlapping rate is lower than 30%.

    DOI

    Scopus

    Citations: 1 (Scopus)

  • Pre-Robotic Navigation Identification of Pedestrian Crossings & Their Orientations

    Ahmed Farid, Takafumi Matsumaru

    12th Conference on Field and Service Robotics (FSR 2019) [Tokyo, Japan], (August 29-31, 2019)     000 - 000  August 2019  [Refereed]

  • Image Processing for Picking Task of Random Ordered PET Drinking Bottles

    Chen Zhu, Takafumi Matsumaru

    Journal of Robotics, Networking and Artificial Life (JRNAL) (Atlantis Press)   6 ( 1 ) 38 - 41  June 2019  [Refereed]

    In this research, six brands of soft drinks are decided to be picked up by a robot with a monocular Red Green Blue (RGB) camera. The drinking bottles need to be located and classified with brands before being picked up. The Mask Regional Convolutional Neural Network (R-CNN), a mask generation network improved from Faster R-CNN, is trained with common object in contest datasets to detect and generate the mask on the bottles in the image. The Inception v3 is selected for the brand classification task. Around 200 images are taken or found at first; then, the images are augmented to 1500 images per brands by using random cropping and perspective transform. The result shows that the masked image can be labeled with its brand name with at least 85% accuracy in the experiment.

    DOI

  • Path Planning in Outdoor Pedestrian Settings Using 2D Digital Maps

    Ahmed Farid, Takafumi Matsumaru

    Journal of Robotics and Mechatronics (JRM) (Fuji Technology Press) (ISSN: 0915-3942(Print) / 1883-8049(Online))   31 ( 3 ) 464 - 473  June 2019  [Refereed]

    This article presents a framework for planning sidewalk-wise paths in data-limited pedestrian environments by visually recognizing city blocks in 2D digital maps (e.g. Google Maps, OpenStreetMaps) using contour detection, then applying graph theory to infer a pedestrian path from start till finish. There are two main targeted problems; firstly, several locations around the world (e.g. suburban / rural areas) do not have recorded data on street crossings and pedestrian walkways. Secondly, the continuous process of recording maps (i.e. digital cartography) is, to our current knowledge, manual and not yet fully automated in practice. Both issues contribute towards a scaling problem in which it becomes time and effort consuming to continuously monitor and record such data on a global scale. As a result, the framework’s purpose is to produce path plans that do not depend on prerecorded (e.g. using SLAM) or data-rich pedestrian maps, thus facilitating navigation for mobile robots and people of visual impairments alike. The framework was able to produce pedestrian paths for most locations where data on sidewalks and street crossings were indeed limited, but still some challenges remain. In this article, the framework’s structure, output, and challenges are explained. Additionally, we mention some works in the literature on how to make use of such path plan effectively.
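Once city blocks have been recognized from the map, inferring a walkable route reduces to a shortest-path search over a graph whose nodes are block corners and whose edges are sidewalks and street crossings. A generic Dijkstra sketch of that final step (our own illustration with made-up edge weights, not the paper's framework):

```python
import heapq

def shortest_path(edges, start, goal):
    """Dijkstra over an undirected weighted graph given as
    {(a, b): cost}; returns the node sequence from start to goal,
    or None if the goal is unreachable."""
    adj = {}
    for (a, b), w in edges.items():
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

# Sidewalk edges are cheap; the street crossing is penalized so the
# planner prefers staying on sidewalks where possible.
edges = {("A", "B"): 1.0, ("B", "C"): 1.0,  # sidewalks
         ("A", "C"): 5.0}                    # street crossing
print(shortest_path(edges, "A", "C"))  # → ['A', 'B', 'C']
```

Weighting crossings more heavily than sidewalk segments is one simple way to encode the safety preference the article's pedestrian setting implies.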

    DOI

    Scopus

    Citations: 5 (Scopus)

  • Image Processing for Picking Task of Random Ordered PET Drinking Bottles

    Chen Zhu, Takafumi Matsumaru

    The 2019 International Conference on Artificial Life and Robotics (ICAROB 2019) [B-Con PLAZA, Beppu, Japan], (January 10-13, 2019), GS2-4, pp.634-637, (2019.01.12 Sat).     634 - 637  January 2019  [Refereed]

    In this research, six brands of soft drinks are decided to be picked up by a robot with a monocular RGB camera. The drinking bottles need to be located and classified with brands before being picked up. A Mask R-CNN is pretrained with COCO datasets to detect and generate the mask on the bottles in the image. The Inception v3 is selected for the brand classification task. Around 200 images are taken, then, the images are augmented to 1500 images per brand by using random cropping and perspective transform. The results show that the masked image can be labeled with its brand name with at least 85% accuracy in the experiment.

    DOI

  • Proposing Camera Calibration Method using PPO (Proximal Policy Optimization) for Improving Camera Pose Estimations

    Haitham K. Al-Jabri, Takafumi Matsumaru

    2018 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2018), [Kuala Lumpur, Malaysia], (December 12-15, 2018), Thu1-8 (2), pp.790-795, (2018.12.13).     790 - 795  December 2018  [Refereed]

    This paper highlights camera orientation estimation accuracy and precision, as well as proposing a new camera calibration technique using a reinforcement learning method named PPO (Proximal Policy Optimization) in offline mode. The offline mode is used just for extracting the camera geometry parameters that are used for improving accuracy in real-time camera pose estimation techniques. We experiment and compare two popular techniques using 2D vision feedbacks and evaluate their accuracy beside other considerations related to real applications such as disturbance cases from surrounding environment and pose data stability. First, we use feature points detection ORB (Oriented FAST and Rotated BRIEF) and BF (Brute-Force) matcher to detect and match points in different frames, respectively. Second, we use FAST (Features from Accelerated Segment Test) corners and LK (Lucas–Kanade) optical flow methods to detect corners and track their flow in different frames. Those points and corners are then used for the pose estimation through optimization process with the: (a) calibration method of Zhang using chessboard pattern and (b) our proposed method using PPO. The results using our proposed calibration method show significant accuracy improvements and easier deployment for end-user compared to the pre-used methods.

    DOI

    Scopus

    Citations: 1 (Scopus)

  • Short Range Fingertip Pointing Operation Interface by Depth Camera

    Kazuki Horiuchi, Takafumi Matsumaru

    2018 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2018), [Kuala Lumpur, Malaysia], (December 12-15, 2018), Thu1-4 (4), pp.132-137, (2018.12.13).     132 - 137  December 2018  [Refereed]

    In this research, we proposed and implemented a finger-pointing detection system with a short-range depth camera. The most widely used pointing device for a computer interface is a mouse, and as an alternative there is a depth sensor to detect hand and finger movement. However, in the present literature, since the user must operate the cursor at a relatively large distance from the detection device, the user needs to raise his arm and make wide movements, which is inconvenient over long periods of time. To solve this usability problem, we proposed a comfortable and easy-to-use method that narrows the distance between the keyboard and the pointing device. Next, we compared various depth sensors and selected one that can recognize even small movements. Additionally, we proposed a mapping method between the user's perceived cursor position and the real one pointed at by the index-finger direction. Furthermore, we compared our pointing method with a mouse and a touch pad for usability, accuracy, and working speed. The results showed that users have better performance in continuous operation of character input from the keyboard combined with cursor pointing.

    DOI

    Scopus

    Citations: 1 (Scopus)

  • Integration of Leap Motion Controller with Virtual Robot Module of RoboAnalyzer

    Rajeevlochana G. Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

    9th Asian Conference on Multibody Dynamics (ACMD 2018), [Xian, China] (August 19-23, 2018)    August 2018  [Refereed]

    In this paper, an integration of the Leap Motion controller with RoboAnalyzer software is proposed. Leap Motion is an inexpensive sensor which has three infrared (IR) emitters and two IR cameras and can track the 10 fingers of the two human hands. The device sends the data to the computer connected to it through its controller software, which can be accessed in a variety of programming languages. In the proposed setup, the position of the index finger of the right hand is tracked in a Visual C# server application and the coordinates are extracted accurately with respect to a frame attached at the center of the device. The coordinates are then sent to the Virtual Robot Module (client application), in which a coordinate system (marker) is mapped to the input it receives. Based on the movement of the index finger, the marker moves in the Virtual Robot Module (VRM). When the marker moves closer to the end-effector of the robot, the server application attaches the marker to the end-effector. Thereafter, any incremental Cartesian motion of the index finger is mapped to an equivalent Cartesian motion of the robot end-effector. This is achieved by finding the inverse kinematics solution for the new pose of the robot end-effector; of the eight solutions obtained through inverse kinematics, the solution closest to the current pose is selected as the appropriate solution for the new pose. The server application then updates the VRM with the new joint angles, and accordingly the robot moves instantly in the software. The setup connected to a laptop running the Leap Motion controller and VRM is shown in Figure 2(a), and the workflow is illustrated in Figure 2(b).

  • Path Planning of Sidewalks & Street Crossings in Pedestrian Environments Using 2D Map Visual Inference

    Ahmed Farid, Takafumi Matsumaru

    Vigen Arakelian, Philippe Wenger (eds): "ROMANSY 22 - Robot Design, Dynamics and Control", CISM International Centre for Mechanical Sciences (Courses and Lectures)   584   247 - 255  June 2018  [Refereed]

    This paper describes a path planning framework for processing 2D maps of given pedestrian locations to provide sidewalk paths and street crossing information. The intention is to allow mobile robot platforms to navigate in pedestrian environments without previous knowledge, using only the current location and destination as inputs. Depending on location, current path planning solutions on known 2D maps (e.g. Google Maps and OpenStreetMaps) from both research and industry do not always provide explicit information on sidewalk paths and street crossings, which is a common problem in suburban/rural areas. The framework's goal is to provide path planning by means of visual inference on 2D map images and search queries through downloadable map data. The results have shown both success and challenges in estimating viable city block paths and street crossings.

    DOI

    Scopus

    Citations: 2 (Scopus)

  • Measuring Performance of Aerial Projection of 3D Hologram Object (3DHO)

    Asyifa I. Septiana, Mahfud Jiono, Takafumi Matsumaru

    2017 IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO 2017), [Macau SAR, China], (December 5-8, 2017)     2081 - 2086  December 2017  [Refereed]

    The Aerial Projection of 3D Hologram Object (3DHO) which we have proposed, is a hand gesture-based control system with an interactive 3D hologram object floating in mid-air as the hand movement reference. This system mainly consists of the pyramid-shaped reflector which produces 3D hologram object, the parabolic mirror, and the leap motion controller for capturing hand gesture command. The user can control or interact with 3DHO by using their finger or a baton-shaped object. This paper is focusing on the evaluation of the 3DHO by comparing it to other 3D input devices (such as joystick with a slider, joystick without the slider, and gamepad) on five different positioning tasks. We also investigate the assessment of comfort, ease of learning, and user satisfaction by questionnaire survey. From the experimentation, we learn that 3DHO is not good to use in one-dimensional workspace task, but has a good performance in two-dimensional and three-dimensional workspace tasks. From questionnaire results, we found out that 3DHO is averagely comfortable but may cause some fatigues around arm and shoulder. It is also easy to learn but not satisfying enough to use.

    DOI

    Scopus

    Citations: 2 (Scopus)

  • Calligraphy-Stroke Learning Support System Using Projector and Motion Sensor

    Takafumi Matsumaru, Masashi Narita

    Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII) (Fuji Technology Press) (ISSN: 1343-0130(Print) / 1883-8014(Online))   21 ( 4 ) 697 - 708  July 2017  [Refereed]

    This paper presents a newly developed calligraphy-stroke learning support system. The system has the following functions: a) displaying brushwork, trajectory, and handwriting, b) recording and playback of an expert's calligraphy stroke, and c) teaching a learner a calligraphy stroke. The system has the following features, which show our contribution: (1) It is simple and compact, built with a sensor and a projector, so as to be easy to introduce into usual educational fields and practical learning situations. (2) A three-dimensional calligraphy stroke is instructed by presenting two-dimensional visual information. (3) A trajectory region is generated as continuous squares calculated using a brush model based on the brush position information measured by a sensor. (4) Handwriting is expressed by mapping a handwriting texture image according to the ink concentration and the brush handling state. The result of a trial experiment suggests the effectiveness of the learning support function regarding letter form and calligraphy stroke.

    DOI CiNii

  • Calibration and statistical techniques for building an interactive screen for learning of alphabets by children

    Riby Abraham Boby, Ravi Prakash, Subir Kumar Saha, Takafumi Matsumaru, Pratyusha Sharma, Siddhartha Jaitly

    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS   14 ( 3 ) 1 - 17  2017年05月  [査読有り]

     概要を見る

    This article focuses on the implementation details of a portable interactive device called Image-projective Desktop Varnamala Trainer. The device uses a projector to produce a virtual display on a flat surface. For enabling interaction, the information about a user's hand movement is obtained from a single two-dimensional scanning laser range finder in contrast with a camera sensor used in many earlier applications. A generalized calibration process to obtain exact transformation from projected screen coordinate system to sensor coordinate system is proposed in this article and implemented for enabling interaction. This permits production of large interactive displays with minimal cost. Additionally, it makes the entire system portable, that is, display can be produced on any planar surface like floor, tabletop, and so on. The calibration and its performance have been evaluated by varying screen sizes and the number of points used for calibration. The device was successfully calibrated for different screens. A novel learning-based methodology for predicting a user's behaviour was then realized to improve the system's performance. This has been experimentally evaluated, and the overall accuracy of prediction was about 96%. An application was then designed for this set-up to improve the learning of alphabets by the children through an interactive audiovisual feedback system. It uses a game-based methodology to help students learn in a fun way. Currently, it has bilingual (Hindi and English) user interface to enable learning of alphabets and elementary mathematics. A user survey was conducted after demonstrating it to school children. The survey results are very encouraging. Additionally, a study to ascertain the improvement in the learning outcome of the children was done. The results clearly indicate an improvement in the learning outcome of the children who used the device over those who did not.
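
    The screen-to-sensor calibration described above amounts to fitting a planar homography from point correspondences. Below is a minimal NumPy sketch of that idea using the standard Direct Linear Transform; the point values are made up and this is not the article's implementation.

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Direct Linear Transform: fit the 3x3 homography H with
    dst ~ H @ src (homogeneous), from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (smallest singular value) gives H up to scale.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Corners of the projected screen (pixels) and where the sensor sees them
# (hypothetical values for a slightly skewed projection).
screen = [(0, 0), (1280, 0), (1280, 800), (0, 800)]
sensor = [(12, 7), (1210, 31), (1195, 790), (25, 770)]
H = fit_homography(screen, sensor)
mapped = apply_homography(H, (640, 400))
```

    With exactly four correspondences the fit is exact; the article's generalized calibration evaluates how accuracy varies with screen size and the number of points used.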

    DOI

    Scopus

    1
    被引用数
    (Scopus)
  • ORB-SHOT SLAM: Trajectory correction by 3D loop closing based on bag-of-visual-words (BoVW) model for RGB-D visual SLAM

    Zheng Chai, Takafumi Matsumaru

    Journal of Robotics and Mechatronics   29 ( 2 ) 365 - 380  2017年04月  [査読有り]

     概要を見る

    This paper proposes the ORB-SHOT SLAM or OS-SLAM, which is a novel method of 3D loop closing for trajectory correction of RGB-D visual SLAM. We obtain point clouds from RGB-D sensors such as Kinect or Xtion, and we use 3D SHOT descriptors to describe the ORB corners. Then, we train an offline 3D vocabulary that contains more than 600,000 words by using two million 3D descriptors based on a large number of images from a public dataset provided by TUM. We convert new images to bag-of-visual-words (BoVW) vectors and push these vectors into an incremental database. We query the database for new images to detect the corresponding 3D loop candidates, and compute similarity scores between the new image and each corresponding 3D loop candidate. After detecting 2D loop closures using ORB-SLAM2 system, we accept those loop closures that are also included in the 3D loop candidates, and we assign them corresponding weights according to the scores stored previously. In the final graph-based optimization, we create edges with different weights for loop closures and correct the trajectory by solving a nonlinear least-squares optimization problem. We compare our results with several state-of-the-art systems such as ORB-SLAM2 and RGB-D SLAM by using the TUM public RGB-D dataset. We find that accurate loop closures and suitable weights reduce the error on trajectory estimation more effectively than other systems. The performance of ORB-SHOT SLAM is demonstrated by 3D reconstruction application.
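
    As a rough illustration of the bag-of-visual-words scoring step, the sketch below builds normalized word histograms and ranks loop candidates by cosine similarity; the vocabulary size, frame names, and threshold are invented for the example (the real system trains a 3D vocabulary of more than 600,000 words).

```python
import numpy as np

def bovw_vector(word_ids, vocab_size):
    """Histogram of visual-word occurrences, L2-normalized."""
    v = np.bincount(np.asarray(word_ids), minlength=vocab_size).astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def similarity(a, b):
    return float(a @ b)  # cosine similarity of normalized vectors

# Toy vocabulary of 8 "3D words"; frames described by their word ids.
database = {
    "frame_01": bovw_vector([0, 1, 1, 2, 5], 8),
    "frame_07": bovw_vector([3, 4, 4, 6, 7], 8),
}
query = bovw_vector([0, 1, 2, 2, 5], 8)
scores = {fid: similarity(query, vec) for fid, vec in database.items()}
# Keep frames above a similarity threshold as 3D loop candidates; their
# scores later serve as edge weights in the pose-graph optimization.
candidates = {fid: s for fid, s in scores.items() if s > 0.5}
```

    In the paper, 2D loop closures from ORB-SLAM2 are accepted only when they also appear among these 3D candidates, weighted by the stored scores.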

    DOI

    Scopus

    7
    被引用数
    (Scopus)
  • Interactive aerial projection of 3D hologram object

    Mahfud, Jiono, Matsumaru, Takafumi

    2016 IEEE International Conference on Robotics and Biomimetics, ROBIO 2016     1930 - 1935  2017年02月

     概要を見る

    In this paper we present an interactive aerial projection of 3D hologram objects by using the pyramid hologram and parabolic mirror system (for 3D hologram object reconstruction) and the Leap Motion sensor (as a finger movement detector of a user). This system not only can reconstruct and project the 3D hologram object in the mid-air, but also provides a way to interact with it by moving a user's finger. There are three main steps: the reconstruction of 3D object, the projection of 3D hologram object in the mid-air, and the interactive manipulation of 3D hologram object. The first step is realized by using pyramid hologram with LCD display. The second step is achieved with the parabolic mirror hologram. And Leap Motion sensor is used for the last step to detect user finger movement. This paper traces the design concept and confirms the system function of an interactive aerial projection of 3D hologram objects by a prototype demonstration.

    DOI

    Scopus

    11
    被引用数
    (Scopus)
  • Near-field touch interface using time-of-flight camera

    Lixing Zhang, Takafumi Matsumaru

    Journal of Robotics and Mechatronics   28 ( 5 ) 759 - 775  2016年10月  [査読有り]

     概要を見る

    The purpose of this study is to realize a near-field touch interface that is compact, flexible, and highly accurate. We applied a 3-dimensional image sensor (time-of-flight camera) to achieve the basic functions of conventional touch interfaces, such as clicking, dragging, and sliding, and we designed a complete projector-sensor system. Unlike conventional touch interfaces, such as those on tablet PCs, the system can sense the 3-dimensional positions of fingertips and 3-dimensional directions of fingers. Moreover, it does not require a real touch screen but instead utilizes a mobile projector for display. Nonetheless, the system is compact, with a working distance as short as around 30 cm. Our methods solve the shadow and reflection problems of the time-of-flight camera and can provide robust detection results. Tests have shown that our approach has a high success rate (98.4%) on touch/hover detection and a small standard error (2.21 mm) on position detection on average for different participants, which is the best performance we have achieved. Some applications, such as the virtual keyboard and virtual joystick, are also realized based on the proposed projector-sensor system.
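
    A minimal sketch of the touch/hover decision such a projector-sensor system might make from a single fingertip depth sample is shown below; the thresholds and function name are illustrative, not the values reported in the paper.

```python
def classify_contact(fingertip_mm, surface_mm, touch_thresh_mm=6.0,
                     hover_thresh_mm=40.0):
    """Classify a fingertip depth sample (distance from the ToF camera)
    against the known surface depth: 'touch' when the fingertip is within
    touch_thresh_mm of the surface, 'hover' when within hover_thresh_mm,
    otherwise 'away'. Threshold values here are hypothetical."""
    gap = surface_mm - fingertip_mm  # fingertip is nearer the camera
    if gap <= touch_thresh_mm:
        return "touch"
    if gap <= hover_thresh_mm:
        return "hover"
    return "away"

# Surface measured at 300 mm; three fingertip samples at decreasing depth.
states = [classify_contact(d, surface_mm=300.0) for d in (297.0, 280.0, 200.0)]
```

    The paper's actual pipeline is more involved, since it must first suppress the shadow and reflection artifacts of the time-of-flight camera before any such thresholding is reliable.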

    DOI

    Scopus

    5
    被引用数
    (Scopus)
  • Feature Tracking and Synchronous Scene Generation with a Single Camera

    Zheng Chai, Takafumi Matsumaru

    International Journal of Image, Graphics and Signal Processing (IJIGSP)   8 ( 6 ) 1 - 12  2016年06月  [査読有り]

     概要を見る

    This paper shows a method of tracking feature points to update the camera pose and generating a synchronous map for an AR (Augmented Reality) system. First, we select the ORB (Oriented FAST and Rotated BRIEF) [1] detection algorithm to detect feature points which have depth information to be markers, and we use the LK (Lucas-Kanade) optical flow [2] algorithm to track four of them. Then we compute the rotation and translation of the moving camera from the relationship matrix between the 2D image coordinates and 3D world coordinates, and update the camera pose. Lastly, we generate the map and draw some AR objects on it. If the feature points are missing, we can compute the same world coordinates as before the loss to recover tracking, using new corresponding 2D/3D feature points and camera poses at that time. There are three novelties in this study: an improved ORB detection which can obtain depth information, a rapid update of camera pose, and tracking recovery. Referring to PTAM (Parallel Tracking and Mapping) [3], we also divide the process into two parallel sub-processes: detecting and tracking (including recovery when necessary) the feature points and updating the camera pose is one thread; generating the map and drawing some objects is another thread. This parallel method saves time for the AR system and makes the process work in real time.

    DOI

  • Extraction of representative point from hand contour data based on laser range scanner for hand motion estimation

    Dai, Chuankai, Matsumaru, Takafumi

    2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015     2139 - 2144  2016年02月

     概要を見る

    This paper shows a novel method to extract hand Representative Point (RP) based on 2-dimensional laser range scanner for hand motion estimation on projecting system. Image-projecting Desktop Arm Trainer (IDAT) is a projecting system for hand-eye coordination training, in which a projector displays an exercise screen on the desktop, and a laser range scanner detects trainee's hand motion. To realize multi-user HMI and expand more entertainment functions in IDAT system, an Air Hockey application was developed in which the hand RP requires a high precision. To generate hand RP precisely, we proposed our method in two parts to solve the data error problem and changeable hand contour problem. In part one, a data modifier is proposed and a sub-experiment is carried out to establish a modifying function for correcting sensor original data. In part two, we proposed three RP algorithms and carried out an evaluation experiment to estimate the reliability of three algorithms under different conditions. From the result, we get the most reliable algorithm corresponding to different situations in which the error of hand RP is less than 9.6 mm.
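
    The sketch below illustrates the two ingredients in miniature: a range-correction modifier applied to raw scanner returns, and one candidate RP algorithm (the centroid of the hand-contour points). The bias value and function names are hypothetical stand-ins, not the paper's calibrated modifying function or its three evaluated algorithms.

```python
import math

def correct_range(r_mm, bias_mm=8.0, scale=1.0):
    """Toy stand-in for the paper's data modifier: correct a raw
    scanner range with a calibrated bias/scale (values hypothetical)."""
    return scale * r_mm - bias_mm

def representative_point(scan):
    """One simple RP algorithm: centroid of the hand-contour points.
    `scan` is a list of (angle_rad, range_mm) hits on the hand."""
    pts = [(correct_range(r) * math.cos(a), correct_range(r) * math.sin(a))
           for a, r in scan]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# Two hypothetical contour hits, 90 degrees apart at equal raw range.
rp = representative_point([(0.0, 508.0), (math.pi / 2, 508.0)])
```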

    DOI

    Scopus

  • Simulating and displaying of puck motion in virtual air hockey based on projective interface

    Dai, Chuankai, Matsumaru, Takafumi

    2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015     320 - 325  2016年02月

     概要を見る

    This paper shows the simulating and displaying of Puck motion in virtual Air Hockey based on projective interface for multi-user HRI. Image-projective Desktop Arm Trainer (IDAT) is an upper-limb rehabilitation system for hand-eye coordination training based on projective interface. To expand more entertainment functions and realize multi-user HRI, we want to develop a virtual Air Hockey application in IDAT. To develop this application, Puck motion should be simulated correctly and displayed smoothly on the screen. There are mainly 3 problems: the data updating rate problem, virtual collision calculation, and the individual's hand misrecognition problem. We proposed our method in three parts corresponding to the different problems. In part 1, we used multiple timers with shared memory to deal with the unsynchronized data updating rate problem. In part 2, an original physical engine is designed to calculate the Puck's velocity and detect virtual collisions. In part 3, to deal with the individual's hand misrecognition problem, a history-based hand owner recognition algorithm is implemented to distinguish users' hands.
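
    A toy version of the physics-engine step, advancing the puck by one tick and reflecting its velocity off the table walls, might look as follows; the table size and restitution coefficient are invented for the example and are not taken from the paper.

```python
def step_puck(pos, vel, dt, table=(0.0, 0.0, 900.0, 500.0), restitution=0.9):
    """Advance the puck one tick and reflect it off the table walls.
    `table` is (xmin, ymin, xmax, ymax) in mm; restitution damps the
    rebound. Both values are illustrative."""
    x, y = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    vx, vy = vel
    xmin, ymin, xmax, ymax = table
    if x < xmin or x > xmax:        # side-wall collision
        x = min(max(x, xmin), xmax)
        vx = -vx * restitution
    if y < ymin or y > ymax:        # end-wall collision
        y = min(max(y, ymin), ymax)
        vy = -vy * restitution
    return (x, y), (vx, vy)

# Puck near the right wall, moving right: one tick produces a rebound.
pos, vel = (890.0, 250.0), (200.0, 0.0)
pos, vel = step_puck(pos, vel, dt=0.1)
```

    In the real system this update runs on its own timer, decoupled from the slower sensor updates via shared memory.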

    DOI

    Scopus

  • Contour-based binary image orientation detection by orientation context and roulette distance

    Jian Zhou, Takafumi Matsumaru

    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences   E99A ( 2 ) 621 - 633  2016年02月  [査読有り]

     概要を見る

    This paper proposes a novel technique to detect the orientation of an image relying on its contour, which is noised to varying degrees. Most image-orientation detection methods address landscape images or images of a single object, where the contour is assumed to be immune to noise; this paper focuses on contours noised after image segmentation. A polar orientation descriptor, Orientation Context, is used as a feature to describe the coarse distribution of the contour points. This descriptor is verified, by theory and experiment, to be invariant to translation, isotropic scaling, and rotation. The relative orientation is given by the minimum Roulette Distance between the descriptor of a template image and that of a test image. The proposed method can detect direction over the interval from 0 to 359 degrees, which is wider than the earlier contour-based method (Distance Phase [1], from 0 to 179 degrees). Moreover, experiments show that not only does the normal binary image (Noise-0, Accuracy-1: 84.8%) achieve more accurate orientation, but the binary image with slight contour noise (Noise-1, Accuracy-1: 73.5%) also obtains more precise orientation compared to Distance Phase (Noise-0, Accuracy-1: 56.3%; Noise-1, Accuracy-1: 27.5%). Although the proposed method (O(op2)) takes more time to detect the orientation than Distance Phase (O(st)), it can be realized, including the preprocessing, in a real-time test at a frame rate of 30.
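
    The core idea, taking the minimum distance over circular shifts of a polar descriptor so that the argmin shift yields the relative orientation, can be sketched as follows, with plain angular histograms standing in for the full Orientation Context descriptor.

```python
def roulette_distance(template_hist, test_hist):
    """Minimum L1 distance between two angular histograms over all
    circular shifts; the argmin shift estimates the relative orientation
    (in histogram bins). A simplified stand-in for the paper's Roulette
    Distance over Orientation Context descriptors."""
    n = len(template_hist)
    best = (float("inf"), 0)
    for shift in range(n):
        d = sum(abs(template_hist[i] - test_hist[(i + shift) % n])
                for i in range(n))
        best = min(best, (d, shift))
    return best  # (distance, shift)

# A toy 8-bin contour histogram and the same histogram rotated by 3 bins.
template = [5, 1, 0, 2, 7, 3, 0, 1]
rotated = template[-3:] + template[:-3]
dist, shift = roulette_distance(template, rotated)
```

    Because the descriptor covers the full circle, the recovered shift spans 0 to n-1 bins, mirroring the method's 0 to 359 degree range.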

    DOI CiNii

    Scopus

    1
    被引用数
    (Scopus)
  • Touchless Human-Mobile Robot Interaction using a Projectable Interactive Surface

    R. Agarwal, P. Sharma, S. K. Saha, T. Matsumaru

    2016 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   WeP1E.6   723 - 728  2016年  [査読有り]

     概要を見る

    This paper showcases the development of a mobile robot integrated with a Projectable Interactive Surface to facilitate its interaction with human users. The system was designed to interact with users of any physical attributes, such as height and arm span, without re-calibration, and in such a way that the human need not come into physical contact with the robot to give it instructions. The system uses a projector to render a virtual display on the ground, allowing large displays to be projected. The Microsoft Kinect integrated into the system performs a dual function: tracking the user's movements and mapping the surrounding environment. The gestures of the tracked user are interpreted, and an audio-visual signal is projected by the robot in response.

    DOI

    Scopus

    2
    被引用数
    (Scopus)
  • SAKSHAR An Image-projective Desktop Varnamala Trainer (IDVT) for Interactive Learning of Alphabets

    Ravi Prakash Joshi, Riby Abraham Boby, Subir Kumar Saha, Takafumi Matsumaru

    Developing Countries Forum - ICRA 2015, [Seattle, Washington, USA], (May 26-30, 2015),   2(3)  2015年05月  [査読有り]

  • Calligraphy-Stroke Learning Support System Using Projection

    Masashi Narita, Takafumi Matsumaru

    2015 24TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN)   2015-November   640 - 645  2015年  [査読有り]

     概要を見る

    In this paper, a calligraphy learning support system is presented for supporting brushwork learning by using a projector. The system was designed to provide three kinds of training according to the learner's ability: copying training, tracing training, and a combination of them. In order to instruct the three-dimensional brushwork, such as the writing speed, pressure, and orientation of the brush, we proposed an instruction method that presents the information at the brush tip only. This method can visualize the brush position and orientation. In addition, a copying experiment was performed using the proposed method, and the method's efficiency was examined through the experiment.

    DOI

    Scopus

    13
    被引用数
    (Scopus)
  • Projectable Interactive Surface Using Microsoft Kinect V2: Recovering Information from Coarse Data to Detect Touch

    P. Sharma, R. P. Joshi, R. A. Boby, S. K. Saha, T. Matsumaru

    2015 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   SuD4.3   795 - 800  2015年  [査読有り]

     概要を見る

    An Image-projective Desktop Varnamala Trainer (IDVT) called SAKSHAR has been designed to improve learning by children through an interactive audio-visual feedback system. This device uses a projector to render a virtual display, which permits production of large interactive displays with minimal cost. The user's hand is recognized with the help of a Microsoft Kinect Version 2. The entire system is portable, i.e., it can be projected on any planar surface. Since the Kinect does not give precise 3D coordinates of points for detecting a touch, a model recognizing a touch purely from the contact of the user's hand with the surface would not yield accurate results. We have instead modeled the touch action by using multiple points along the trajectory of the tracked point of the user's hand while the hand makes contact with the surface. Fitting a curve through these points and analyzing the errors makes the detection of touch accurate.
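
    The trajectory-based decision can be sketched as follows: fit a low-degree polynomial through several depth samples of the tracked hand point and accept a touch only when the fit is clean (small residuals) and its deepest point approaches the surface. The tolerances and data are invented for the example; this is not the authors' exact criterion.

```python
import numpy as np

def is_touch(depths, surface_mm, fit_deg=2, residual_tol_mm=3.0,
             contact_tol_mm=5.0):
    """Decide a touch from several depth samples along the tracked hand
    point's trajectory rather than one noisy sample: fit a low-degree
    polynomial, require a clean fit whose deepest point approaches the
    surface. Tolerance values are hypothetical."""
    t = np.arange(len(depths), dtype=float)
    coeffs = np.polyfit(t, depths, fit_deg)
    fitted = np.polyval(coeffs, t)
    residual = float(np.max(np.abs(fitted - np.asarray(depths, dtype=float))))
    closest = float(np.max(fitted))  # deepest point = nearest the surface
    return residual <= residual_tol_mm and surface_mm - closest <= contact_tol_mm

# Hand descends smoothly to the surface (600 mm) and lifts off again.
touching = is_touch([560, 580, 596, 598, 589, 571], surface_mm=600.0)
# Jittery samples that never reach the surface: rejected.
missed = is_touch([520, 545, 540, 560, 530, 555], surface_mm=600.0)
```

    Analyzing the fit residuals is what compensates for the coarse Kinect depth data: a single noisy sample near the surface no longer triggers a false touch.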

    DOI

    Scopus

    9
    被引用数
    (Scopus)
  • Extraction of representative point from hand contour data based on laser range scanner for hand motion estimation

    Chuankai Dai, Takafumi Matsumaru

    2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015   WeA06.5   2139 - 2144  2015年  [査読有り]

     概要を見る

    This paper shows a novel method to extract hand Representative Point (RP) based on 2-dimensional laser range scanner for hand motion estimation on projecting system. Image-projecting Desktop Arm Trainer (IDAT) is a projecting system for hand-eye coordination training, in which a projector displays an exercise screen on the desktop, and a laser range scanner detects trainee's hand motion. To realize multi-user HMI and expand more entertainment functions in IDAT system, an Air Hockey application was developed in which the hand RP requires a high precision. To generate hand RP precisely, we proposed our method in two parts to solve the data error problem and changeable hand contour problem. In part one, a data modifier is proposed and a sub-experiment is carried out to establish a modifying function for correcting sensor original data. In part two, we proposed three RP algorithms and carried out an evaluation experiment to estimate the reliability of three algorithms under different conditions. From the result, we get the most reliable algorithm corresponding to different situations in which the error of hand RP is less than 9.6 mm.

    DOI

    Scopus

  • Simulating and Displaying of Puck Motion in Virtual Air Hockey based on Projective Interface

    Chuankai Dai, Takafumi Matsumaru

    2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO)     320 - 325  2015年  [査読有り]

     概要を見る

    This paper shows the simulating and displaying of Puck motion in virtual Air Hockey based on projective interface for multi-users HRI. Image-projective Desktop Arm Trainer (IDAT) is an upper-limb rehabilitation system for hand-eye coordination training based on projective interface. To expand more entertainment functions and realize multi-user HRI, we want to develop a virtual Air Hockey application in IDAT. To develop this application, Puck motion should be simulated correctly and displayed smoothly on the screen. There are mainly 3 problems: Data updating rate problem, virtual collision calculation and individual's hand misrecognition problem. We proposed our method into three parts corresponding to different problems. In part 1, we used multiple timers with shared memory to deal with unsynchronized data updating rate problem. In part 2, an original physical engine is designed to calculate Puck's velocity and detect virtual collision. In part 3, to deal with the individual's hand misrecognition problem, a history-based hand owner recognition algorithm is implemented to distinguish users' hands.

    DOI

    Scopus
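
The original physics engine of part 2 is not detailed in the abstract; a minimal sketch of the kind of per-timestep update it must perform (axis-aligned table walls, hypothetical restitution parameter) could look like:

```python
def step_puck(pos, vel, dt, width, height, restitution=1.0):
    """Advance the puck by one timestep and reflect its velocity when it
    crosses a table edge (simple wall-collision detection)."""
    x, y = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    vx, vy = vel
    if x < 0.0 or x > width:           # left/right wall collision
        vx = -vx * restitution
        x = min(max(x, 0.0), width)
    if y < 0.0 or y > height:          # top/bottom wall collision
        vy = -vy * restitution
        y = min(max(y, 0.0), height)
    return (x, y), (vx, vy)
```

Mallet-puck collisions and friction would extend the same update step.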

  • Projectable Interactive Surface Using Microsoft Kinect V2: Recovering Information from Coarse Data to Detect Touch

    P. Sharma, R. P. Joshi, R. A. Boby, S. K. Saha, T. Matsumaru

    2015 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)     795 - 800  2015年  [査読有り]

    An Image-projective Desktop Varnamala Trainer (IDVT) called SAKSHAR has been designed to improve learning by children through an interactive audio-visual feedback system. This device uses a projector to render a virtual display, which permits the production of large interactive displays at minimal cost. The user's hand is recognized with the help of a Microsoft Kinect Version 2. The entire system is portable, i.e., it can be projected on any planar surface. Since the Kinect does not give precise 3D coordinates of points for detecting a touch, a model of touch recognition based purely on the contact of the user's hand with the surface would not yield accurate results. We have instead modeled the touch action by using multiple points along the trajectory of the tracked point of the user's hand while the hand makes contact with the surface. Fitting a curve through these points and analyzing the errors makes the touch detection accurate.

    DOI

    Scopus

    9
    被引用数
    (Scopus)
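
The touch model above fits a curve through trajectory points of the tracked hand and analyzes the fitting errors. A hedged sketch of that idea, using a straight-line fit and hypothetical thresholds (not the authors' exact model or values):

```python
def line_fit_residual(points):
    """Least-squares line fit z = a*t + b through (t, z) samples;
    returns the RMS fitting residual (assumes distinct t values)."""
    n = len(points)
    st = sum(t for t, _ in points)
    sz = sum(z for _, z in points)
    stt = sum(t * t for t, _ in points)
    stz = sum(t * z for t, z in points)
    a = (n * stz - st * sz) / (n * stt - st * st)
    b = (sz - a * st) / n
    mse = sum((z - (a * t + b)) ** 2 for t, z in points) / n
    return mse ** 0.5

def is_touch(trajectory, surface_z, z_tol=10.0, fit_tol=2.0):
    """Declare a touch when every tracked point lies within z_tol of the
    surface plane AND the recent trajectory fits a line with small RMS
    error (both thresholds hypothetical, in millimetres)."""
    near = all(abs(z - surface_z) <= z_tol for _, z in trajectory)
    return near and line_fit_residual(trajectory) <= fit_tol
```

Averaging over the trajectory suppresses the per-frame depth noise of the Kinect that a single-point contact test would inherit.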

  • A Walking Training System with Customizable Trajectory Designing

    Shiyang Dong, Takafumi Matsumaru

    Paladyn. Journal of Behavioral Robotics (ISSN: (Online)2081-4836)   5 ( 1 ) 35 - 52  2014年06月  [査読有り]

    This paper shows a novel walking training system for foot-eye coordination. To design customizable trajectories for different users conveniently in walking training, a new system is developed which can track and record the actual walking trajectories of a tutor and can use these trajectories for the walking training of a trainee. We set four items as its human-robot interaction design concept: feedback, synchronization, ingenuity and adaptability. A foot model is proposed to define the position and direction of a foot. The errors in the detection method used in the system are less than 40 mm in position and 15 deg in direction. On this basis, three parts are structured to achieve the system functions: Trajectory Designer, Trajectory Viewer and Mobile Walking Trainer. According to the experimental results, we have confirmed the system works as intended and designed, such that the steps recorded in Trajectory Designer could be used successfully as the footmarks projected in Mobile Walking Trainer and foot-eye coordination training would be conducted smoothly.

    DOI

    Scopus

    2
    被引用数
    (Scopus)
  • Human-Machine Interaction using the Projection Screen and Light Spots from Multiple Laser Pointers

    Jian Zhou, Takafumi Matsumaru

    2014 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)     16 - 21  2014年  [査読有り]

    A multi-user laser pointer system, in which more than one person can use a laser pointer concurrently, has promising applications such as group discussion, appliance control for several handicapped persons, and control of large displays by a few users. Conventional methods employed in the above-mentioned applications are the common mouse and gesture control using the Kinect sensor or Leap Motion, as well as the single laser pointer. A common mouse and a single laser pointer cannot meet the requirement of letting several users operate simultaneously, and gesture control is limited at a far distance. Compared to multi-user laser pointers, the majority of research focuses on the single laser pointer. However, multiple users then face the dilemma that each one is not able to point at a target on the screen at any time, at a far distance, because the laser pointer is controlled by only one person. This paper proposes a novel way to make it possible for three laser pointers to point at the screen without interfering with each other. Firstly, the foreground with all the dynamic spots is extracted. On the foreground, the spots are tracked continuously by grasping their contours; hence various information about each spot, such as its coordinates, pixel value, and area, is obtained. Secondly, a square image containing the whole spot is used as the input of a designed back-propagation neural network. The BPNN output indicates the category to which the laser pointer belongs. An experiment verifies that it works well under certain light conditions (12-727 lux) if green laser pointers are used.

    DOI

    Scopus

    2
    被引用数
    (Scopus)
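
The first stage above grasps the contours of the dynamic spots to obtain each spot's coordinates, pixel value, and area. Before any BPNN classification, that amounts to connected-component extraction on a thresholded frame; a hedged pure-Python sketch (hypothetical helper, not the paper's implementation):

```python
def find_spots(frame, thresh):
    """Extract 4-connected bright regions (candidate laser spots) from a
    grayscale frame; returns each region's centroid (x, y) and pixel area."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= thresh and not seen[y][x]:
                seen[y][x] = True
                stack, pixels = [(y, x)], []
                while stack:                      # flood-fill one region
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx] and frame[ny][nx] >= thresh):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                area = len(pixels)
                spots.append({
                    "centroid": (sum(px for _, px in pixels) / area,
                                 sum(py for py, _ in pixels) / area),
                    "area": area,
                })
    return spots
```

A square crop around each centroid would then feed the BPNN classifier described in the abstract.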
  • Image-projecting desktop arm trainer for hand-eye coordination training

    Matsumaru, Takafumi, Liu, Yang, Jiang, Y, Dai, Chuankai

    Journal of Robotics and Mechatronics   26 ( 6 ) 704 - 717  2014年01月  [査読有り]

    © 2014, Fuji Technology Press. All rights reserved. This paper presents a novel arm-training system, known as the image-projecting desktop arm trainer (IDAT), which is aimed at hand-eye coordination training. The projector displays an exercise image on a desktop in front of a seated patient, and the scanning range finder measures the behavior of the patient as he/she performs the exercise. IDAT is non-invasive and does not constrain the patient. Its efficiency is based on the voluntary movements of the patient, although it offers neither the physical assistance nor tactile feedback of some conventional systems. Three kinds of training content have been developed: “mole-hitting,” “balloon-bursting,” and “fish-catching.” These games were designed for training hand-eye coordination in different directions. A patient and/or medical professional can set a suitable training level, that is, the training time, speed of movement of the objects, and number of objects to appear at any one time, based on the patient’s condition and ability. A questionnaire survey was carried out to evaluate IDAT-3, and the results showed that it was highly acclaimed in terms of user-friendliness, fun, and usefulness.

    DOI

    Scopus

    5
    被引用数
    (Scopus)
  • Human detecting and following mobile robot using a laser range sensor

    Cai, Jianzhao, Matsumaru, Takafumi

    Journal of Robotics and Mechatronics   26 ( 6 ) 718 - 734  2014年01月  [査読有り]

    © 2014, Fuji Technology Press. All rights reserved. To meet the higher requirements of human-machine interface technology, a robot with human-following capability, a classic but significant problem, is discussed in this paper. We first propose a human detection method that uses only a single laser range scanner to detect the waist of the target person. Second, owing to the limited speed of a robot and the potential risk of obstructions, a new human-following algorithm is proposed. The speed and acceleration of the robot are adaptive to the human walking speed and the distance between the human and the robot. Finally, the performance of the proposed control system is successfully verified through a set of experimental results obtained using a two-wheel mobile robot working in a real environment under different scenarios.

    DOI

    Scopus

    11
    被引用数
    (Scopus)
  • Real-time Gesture Recognition with Finger Naming by RGB Camera and IR Depth Sensor

    Phonpatchara Chochai, Thanapat Mekrungroj, Takafumi Matsumaru

    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS IEEE-ROBIO 2014     931 - 936  2014年  [査読有り]

    This paper introduces a real-time finger-naming method and illustrates how hand languages can be recognized from hand gestures and the relations between fingertips. Moreover, this paper provides information on how to locate and name each finger, then uses the arm to dynamically adjust and improve stability while the hand is moving. Supported by the study of the relations between fingertips, palms, and arms, the proposed method can recognize hand gestures and translate these signs into numbers according to the standard sign language. The approach used in this paper relies on the depth image and the RGB image to identify hands, arms and fingertips. Then, the relation between each part is used to recognize a finger name regardless of the direction of movement. Also, this report describes how to implement the proposed method with the ASUS Xtion as the sensor.

    DOI

    Scopus

    2
    被引用数
    (Scopus)
  • 巻頭言 半世紀を生きて,次の半世紀へ

    松丸隆文

    バイオメカニズム学会誌   37 ( 3 ) 151 - 151  2013年08月

    CiNii

  • Comparison of Displaying with Vocalizing on Preliminary-Announcement of Mobile Robot Upcoming Operation

    Takafumi Matsumaru

    Calin Ciufudean and Lino Garcia (ed.): "Advances in Robotics - Modeling, Control and Applications", ISBN 978-1-922227-05-8 (Hardcover) / 978-1-461108-44-3 (Paperback), iConcept Press   7   133 - 147  2013年01月  [査読有り]

  • Development and Evaluation of Operational Interface Using Touch Screen for Remote Operation of Mobile Robot

    Takafumi Matsumaru

    Calin Ciufudean and Lino Garcia (ed.): "Advances in Robotics - Modeling, Control and Applications", ISBN 978-1-922227-05-8 (Hardcover) 978-1-461108-44-3 (Paperback), iConcept Press   10   195 - 217  2013年01月  [査読有り]

  • Design and Evaluation of Throw-over Movement Informing a Receiver of Object Landing Distance

    Takafumi Matsumaru

    Calin Ciufudean and Lino Garcia (ed.): "Advances in Robotics - Modeling, Control and Applications", ISBN 978-1-922227-05-8 (Hardcover) 978-1-461108-44-3 (Paperback), iConcept Press   9   171 - 194  2013年01月  [査読有り]

  • Measuring the Performance of Laser Spot Clicking Techniques

    Romy Budhi Widodo, Takafumi Matsumaru

    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO)     1270 - 1275  2013年  [査読有り]

    Laser spot clicking technique is the term for a remote interaction technique between human and computer using a laser pointer as a pointing device. This paper focuses on the performance test of two laser spot clicking techniques. An off-the-shelf laser pointer has a toggle switch to generate a laser spot; the presence (ON) or absence (OFF) of this spot and combinations thereof are the candidates for the interaction technique. We conducted an empirical study that compared remote pointing performed using combinations of ON and OFF of the laser spot, ON-OFF and ON-OFF-ON, with a desktop mouse as a baseline comparison. We present a quantitative performance test based on Fitts' law using a one-direction tapping test following the ISO/TS 9241-411 procedure, and an assessment of comfort using a questionnaire. We hope this result contributes to the interaction technique using a laser pointer as a pointing device, especially in selecting the appropriate clicking technique for real applications. Our results suggest the ON-OFF technique has advantages over the ON-OFF-ON technique, such as in throughput and comfort.

    DOI

    Scopus

    9
    被引用数
    (Scopus)
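
The quantitative test follows the ISO 9241-411 one-direction tapping procedure, in which pointing performance is summarized as Fitts-law throughput. The standard computation, with the Shannon formulation of the index of difficulty, is:

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Throughput TP = ID / MT, with the Shannon index of difficulty
    ID = log2(D / W + 1) in bits; D is target distance, W target width,
    MT mean movement time in seconds."""
    index_of_difficulty = math.log2(distance / width + 1.0)
    return index_of_difficulty / movement_time
```

For example, D = 7W and MT = 1.5 s give ID = 3 bits and TP = 2 bit/s. Note that the full ISO procedure uses the effective target width computed from endpoint scatter; this sketch uses the nominal width for brevity.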
  • Robot human-following limited speed control

    Jianzhao Cai, Takafumi Matsumaru

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication   TuA1.1P.2   81 - 86  2013年  [査読有り]

    Robot human-following is an important part of interaction between robots and people. Generally, the speed of mobile robots is limited and far slower than human beings' natural walking speed. In order to catch up with the human rapidly, in this paper we introduce a control method which uses adaptive acceleration of the robot's speed. The speed of the robot mainly depends on the human's speed and the distance. Also, the robot's acceleration is adaptive to the distance between the human and the robot. The proposed control is successfully verified through experiments. © 2013 IEEE.

    DOI

    Scopus

    6
    被引用数
    (Scopus)
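
The control law above is described only qualitatively (speed depends on the human's speed and the distance, saturated by the robot's limit). One plausible minimal form, with hypothetical gains and limits that stand in for the paper's tuned values:

```python
def follow_speed(human_speed, distance, target_dist=1.0, gain=0.8, v_max=1.2):
    """Commanded robot speed = the human's walking speed plus a correction
    proportional to the gap error, saturated at the robot's speed limit
    (units m and m/s; all parameter values hypothetical)."""
    v = human_speed + gain * (distance - target_dist)
    return max(0.0, min(v, v_max))
```

At the target distance the robot matches the human's speed; a growing gap raises the command until it hits the saturation limit.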
  • Image-projective Desktop Arm Trainer IDAT for therapy

    Takafumi Matsumaru, Yi Jiang, Yang Liu

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication   ThA2T2.3   801 - 806  2013年  [査読有り]

    Image-projective Desktop Arm Trainer IDAT is designed to improve the upper limb therapy effect. This paper focuses on its design concept. The most important structural feature of IDAT is that it is non-invasive and unconstrained: set on the desk apart from the seated trainee, it makes no body contact. It works based on the trainee's voluntary movement, although it has neither the physical assistance nor the tactile feedback of conventional systems. IDAT is developed based on the design concept that direct interaction is critical in terms of increasing motivation. Instead of the joystick, handle, robotic arm and display screen of conventional systems, IDAT uses a screen projected on a desktop and produces the visual reaction at the time and on the spot where the trainee operates. It gives a vivid, lively and real feeling to the trainee. © 2013 IEEE.

    DOI

    Scopus

    6
    被引用数
    (Scopus)
  • Application of Step-on Interface to Therapy

    Takafumi Matsumaru, Shiyang Dong, Yang Liu

    IEEE/RSJ IROS 2012 Workshop on Motivational Aspects of Robotics in Physical Therapy, [Vilamoura, Algarve, Portugal], (October 7-12, 2012)     6 pages  2012年10月  [査読有り]

    This paper describes the application of the step-on interface (SOI) to therapy. SOI consists of a projector and a sensor such as a range scanner, and its special feature is using a projected screen as a bidirectional interface through which information is presented from a robot to a user and the user's instructions are delivered to the robot. The human-friendly amusing mobile robot HFAMRO, equipped with the SOI on a mobile platform, is a tag-playing robot which can be used for gait training. The image-projective desktop arm trainer IDAT is designed to be set on a desk in front of a seated trainee. This kind of system is adjustable and can be customized to each individual by setting parameters, using multimedia channels, or uploading the program. Therefore it can provide motivation and a sense of accomplishment to a trainee and maintain his/her enthusiasm and interest.

  • リレーエッセイ マイフェイバリット 18 夢をもって実現するために

    松丸隆文

    機械設計(日刊工業新聞社)   56 ( 7 ) 14  2012年06月

  • Interaction Using the Projector Screen and Spot-light from a Laser Pointer: Handling Some Fundamentals Requirements

    Romy Budhi Widodo, Weijen Chen, Takafumi Matsumaru

    2012 PROCEEDINGS OF SICE ANNUAL CONFERENCE (SICE)   WeA10-04   1392 - 1397  2012年  [査読有り]

    This paper presents one of the interaction models between humans and machines using a camera, a projector, and the spot-light from a laser pointer device. A camera was attached on top of the projector, and the projector projected a direction screen display on the wall, while the user pointed a laser pointer at the desired location on the direction screen display. It is confirmed that this system can handle some distortion conditions of the direction screen display, such as an oblique rectangle, horizontal trapezoid distortion, and vertical trapezoid distortion, as well as several surface illuminances - 127, 425, 630, and 1100 lux; and the system is designed to be used for static and moving objects. The coordinates obtained from the distorted screen can be used to give commands to a specific machine, robot, or application.

  • Development of Image-projective Desktop Arm Trainer, IDAT

    Yang Liu, Yi Jiang, Takafumi Matsumaru

    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   SP2-A.5   355 - 360  2012年  [査読有り]

    Aiming to improve the upper limb rehabilitation effect, we design and develop an Image-projective Desktop Arm Trainer (IDAT). Compared with conventional therapy, IDAT provides a more effective and interesting training method. IDAT's goal is to maintain and improve patients' upper limb function by training their eye-hand coordination. We select the step-on interface (SOI) as the input system, which lets trainees operate IDAT directly by hand. Trainees can make a customized training setting. It can provide motivation and accomplishment to trainees and maintain their enthusiasm and interest. Thus IDAT provides a human-robot interaction much different from previous upper limb rehabilitation robots, which are equipped with a joystick or controller for remote operation. We proposed this idea in 2007 and have applied the SOI to some mobile robots. Now we apply it to IDAT to make a new way to upper limb rehabilitation.

    DOI

    Scopus

    10
    被引用数
    (Scopus)
  • Applying Infrared Radiation Image Sensor to Step-on Interface: Touched Point Detection and Tracking

    Yi Jiang, Yang Liu, Takafumi Matsumaru

    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   MP1-B.4   752 - 757  2012年  [査読有り]

    We propose and implement a solution for applying an infrared radiation (IR) image sensor to the step-on interface (SOI). The SOI is a kind of natural human-robot interface that consists of a projector and a laser range scanner (LRS) sensor, and it enables interactive touch applications on a desktop or floor. We attempt to introduce an IR image sensor such as the ASUS Xtion to the SOI instead of the LRS sensor. In this paper, we describe the procedure for using the Xtion to detect touched points. We distinguish the user's hand from the background (surface) based on depth data from the ASUS Xtion, and detect a touching action when a finger comes almost close to the background. The proposed processes involve IR depth image acquisition, seeking the hand and its contours by thresholding, recognizing touched areas and computing their center positions. This research enables the ASUS Xtion to be applied to the SOI in a simple way. Moreover, this system can realize touch interaction on any surface.

    DOI

    Scopus

    4
    被引用数
    (Scopus)
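
The touch test above fires when a finger almost closes the gap to the background surface in the depth image. A hedged sketch of that depth-difference test (the 4-20 mm band is a hypothetical tuning, not the paper's values):

```python
def touched_points(depth, background):
    """Return (x, y) pixels whose depth sits just above the background
    surface: a finger 'almost closing' the gap counts as a touch candidate."""
    lo, hi = 4, 20                       # hypothetical touch band, millimetres
    hits = []
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            gap = background[y][x] - d   # height of this pixel above the surface
            if lo <= gap <= hi:
                hits.append((x, y))
    return hits
```

Clustering the returned pixels and taking centroids would yield the touched-area centers the abstract mentions.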
  • Laser Spotlight Detection and Interpretation of Its Movement Behavior in Laser Pointer Interface

    Romy Budhi Widodo, Weijen Chen, Takafumi Matsumaru

    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII)   MP1-C.4   780 - 785  2012年  [査読有り]

    A laser pointer can be used as an input interface in human-machine interaction. Such utilization, however, can be problematic, and one of the main issues is the lack of good reliability in laser spotlight detection. Another problem is how to interpret the user's movement of the spotlight into commands for the application. This paper proposes a method for laser spotlight detection. The aim is to improve the practicality and reliability of previous approaches. We use the maximum pixel value as a multiplier in determining the threshold. The maximum pixel value is obtained from the environment brightness at a specified time. For the second problem we propose a simple interpretation of incidents that allows the user to use the application, with three main events: laser-move, hover, and single-click. There is no need for the user and the program to wait a specified time span to be able to interact with each other, and the user can directly give commands to the application after the single-click event. These approaches result in better reliability, easier operation of the application by the user, and open up the opportunity to develop systems for rehabilitation, recreation, and input interface devices in the future.

    DOI

    Scopus

    9
    被引用数
    (Scopus)
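
The detection method above scales the binarization threshold by the maximum pixel value sampled from the current environment brightness. A hedged sketch of that adaptive thresholding (the multiplier `k` is a stand-in, not the paper's tuned value):

```python
def detect_spot(frame, k=0.9):
    """Adaptive laser-spot binarization: threshold = k * (maximum pixel value
    in the frame), so the cut-off tracks the current lighting. Returns the
    (x, y) pixels at or above the threshold."""
    peak = max(max(row) for row in frame)
    thresh = k * peak
    return [(x, y) for y, row in enumerate(frame)
            for x, v in enumerate(row) if v >= thresh]
```

Because the threshold is relative to the observed peak rather than fixed, the same code works across the illuminance range the paper targets.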
  • Design and Evaluation of Handover Movement Informing Receiver of Weight Load

    Takafumi Matsumaru

    15th National Conference on Machines and Mechanisms (NaCoMM 2011), [IIT Madras, India]   B-5-5  2011年11月  [査読有り]

  • W021003 生物の運動に学ぶロボット技術([W02100](バイオエンジニアリング部門,機械力学・計測制御部門,流体工学部門企画),生物に学ぶ機械工学 -生物の仕組みを機械に生かす-)

    松丸隆文

    年次大会 : Mechanical Engineering Congress, Japan   2011   W021003-1 - W021003-8  2011年09月

    This paper presents the informative motion study to make a human-coexistence robot useful. First, the usage, design, and marketing of a human-coexistence robot are considered. Next, the informative motion study that we are tackling to improve a human-coexistence robot's personal affinity is explained, advocating informative kinesics for human-machine systems. As an example of application deployment, a study on continuous movement (usual movement) and preliminary operation (prior operation) is shown.

    CiNii

  • 第31回バイオメカニズム学術講演会 SOBIM 2010 in Hamamatsu

    松丸隆文

    バイオメカニズム学会誌 = Journal of the Society of Biomechanisms   35 ( 1 ) 81 - 83  2011年02月

    CiNii

  • Design and evaluation of handover movement informing receiver of weight load

    Matsumaru, Takafumi

    15th National Conference on Machines and Mechanisms, NaCoMM 2011    2011年01月

    This paper presents the study results on the handover movement informing a receiver of the weight load as an example of the informative motion for the human-synergetic robot. To design and generate the movement depending on the weight load, the human movement is measured and analyzed, and four items are selected as the parameters to vary - the distance between target point and transferred point (in the front-back direction), the distance between highest point and transferred point (in the vertical direction), the elbow rotation angle, and the waist joint angle. The fitted curve of each parameter's variation depending on the weight load is obtained from the tendency of the subjects' movement data. The movement data for an arbitrary weight load are generated by processing the standard data at 0 kg of weight load so that each parameter follows the fitted curve. From the questionnaire survey, although it is difficult for a receiver to estimate the exact weight load, he may distinguish a heavy weight load from a light one so that the package will be received safely and certainly.
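
The generation step described above processes the 0 kg standard movement so that each of the four parameters follows its fitted curve over weight load. As a hedged sketch, a piecewise-linear stand-in for one such fitted curve (the curve values here are hypothetical, not measured data):

```python
def interp_param(weight, curve):
    """Piecewise-linear lookup of one movement parameter on a fitted
    weight-to-parameter curve; `curve` is a sorted list of (weight, value)
    pairs, clamped at both ends."""
    if weight <= curve[0][0]:
        return curve[0][1]
    if weight >= curve[-1][0]:
        return curve[-1][1]
    for (w0, v0), (w1, v1) in zip(curve, curve[1:]):
        if w0 <= weight <= w1:
            return v0 + (v1 - v0) * (weight - w0) / (w1 - w0)
```

Evaluating four such curves at the requested weight load yields the parameter set used to warp the standard movement.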

  • ステップ・オン・インタフェース技術を応用したユーザー・ロボット・インタラクション

    松丸隆文, 斎藤渉, 伊藤祐一

    日本バーチャルリアリティ学会論文誌 (ISSN:1344-011X)   15 ( 3 ) 335 - 345  2010年09月  [査読有り]

    The friendly amusing mobile (FAM) function, in which a robot and a user can interact through motion, is proposed by applying the mobile robot step-on interface (SOI), in which the user can direct robot movement or operation by stepping or pointing on a button that shows the desired content on an operation screen projected on the floor. The HFAMRO-2 mobile robot is developed to realize playing light tag, and three applications (animal-tail stepping, bomb-fuse stamping, and footprint stepping) are produced as a trial following the design policy.

    DOI CiNii

  • Truly-Tender-Tailed Tag-Playing Robot Through Friendly Amusing Mobile Function

    Takafumi Matsumaru, Yasutada Horiuchi, Kosuke Akai, Yuichi Ito

    Journal of Robotics and Mechatronics   22 ( 3 ) 301 - 307  2010年06月  [査読有り]

    To expand use of the mobile robot Step-On Interface (SOI), originally targeting maintenance, training, and recovery of human physical and cognitive functions, we introduce a “Truly-Tender-Tailed” (T3, pronounced tee-cube) tag-playing robot as a “Friendly Amusing Mobile” (FAM) function. Displaying a previously prepared bitmap (BMP) image and speeding up display make it easy to design button placement and other screen parameters using a painting software package. The BMP-image scope matrix simplifies step detection and recognition and the motion trajectory design editor facilitates robot behavior design.

    DOI

  • The Step-on Interface (SOI) on a Mobile Platform - Basic Functions

    Takafumi Matsumaru, Yuichi Ito, Wataru Saitou

    PROCEEDINGS OF THE 5TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2010)     343 - 344  2010年  [査読有り]

    This video shows the basic functions of HFAMRO-2 equipped with the step-on interface (SOI). In the SOI the projected screen is used as a bilateral interface. It not only presents information from the equipment to the user but also delivers the instructions from the user to the equipment. HFAMRO is intended to represent the concept based on which robots interact with users. It assumes, for example, the ability to play `tag' - in this case, playing tag with light, similar to `shadow' tag. The HFAMRO-2 mobile robot, developed to study the SOI's application with mobility, has two sets of the SOI consisting of a projector and a range scanner on a mobile platform. The projector displays a direction screen on a travel surface and the two-dimensional range scanner detects and measures the user's stepping to specify the selected button.

    DOI

  • The Step-on Interface (SOI) on a Mobile Platform - Rehabilitation of the Physically Challenged

    Takafumi Matsumaru, Yuichi Ito, Wataru Saitou

    PROCEEDINGS OF THE 5TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2010)     345 - 345  2010年  [査読有り]

    DOI

  • Friendly Amusing Mobile Function for Human-Robot Interaction

    Takafumi Matsumaru

    2010 IEEE RO-MAN     88 - 93  2010年  [査読有り]

    This paper introduces a tag-playing robot as a "Friendly Amusing Mobile" (FAM) function to expand the mobile robot Step-On Interface (SOI) use and to promote human-robot interaction through motion, targeting maintenance, training, and recovery of human physical and cognitive functions in the elderly, physically challenged, injured, etc. Displaying a previously prepared bitmap (BMP) image makes the display update rate faster and makes it easy to design button placement and other screen parameters using a painting software package. The scope matrix generated from the BMP image simplifies the step detection and recognition, and the motion trajectory design editor facilitates the robot behavior design.

    DOI

    Scopus

    3
    被引用数
    (Scopus)
  • Discrimination and Implementation of Emotions on Zoomorphic Robot Movements

    Takafumi Matsumaru

    SICE Journal of Control, Measurement, and System Integration   2 ( 6 ) 365 - 372  2009年11月  [査読有り]

    This paper discusses the discrimination and implementation of emotions in a zoomorphic robot's movements, aiming at designing emotional robot movements and improving robot friendliness. We consider four emotions: joy, anger, sadness, and fear. Two opposite viewpoints, performer and observer, are considered to acquire emotional movement data for analysis. The performer subjects produce emotional movements, and movements that are easy to recognize as expressions of the intended emotion are selected from the measured movements by the observer subjects. Discrimination of the emotion embedded in a movement is attempted using feature values based on the Laban movement analysis (LMA) and the principal component analysis (PCA). In the discrimination using PCA, the resultant rates of correct discrimination are about 70% for all four emotions. The features of emotional movements are presumed from the coefficients of the discrimination function obtained in the emotion discrimination using PCA. Emotions are implemented by converting a setup for basic movements employing a design principle based on the movement features. The result of the verification experiment suggests that the four emotional movements are divided into two groups: the joy and anger group and the sadness and fear group. The emotional movements of joy or anger are dynamic, large-scale and frequent, making it relatively easy to interpret the intended emotion, while the emotional movements of sadness or fear are static, small-scale and subtle, making it difficult to understand the features of the movements.

    DOI CiNii
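The PCA-based discrimination summarized in the abstract above can be sketched as follows. This is an illustrative reimplementation, not the paper's code: the feature vectors, the two-class setup, and the sign-of-projection decision rule are all hypothetical.

```python
# Illustrative sketch of discrimination by projecting motion-feature vectors
# onto the first principal component (PCA). Feature values and the two-class
# decision rule are hypothetical, not the paper's data.
import numpy as np

def first_pc(X):
    """Return the first principal axis and the mean of the rows of X."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return vt[0], mu

def classify(x, axis, mu):
    """Assign class +1 or -1 by the sign of the projection onto the axis."""
    return 1 if float((x - mu) @ axis) > 0.0 else -1

# Hypothetical feature vectors for two well-separated movement classes.
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.1],
              [-2.0, -2.0], [-3.0, -3.0], [-2.4, -3.2]])
axis, mu = first_pc(X)
labels = [classify(x, axis, mu) for x in X]
```

In the paper the discrimination function is built from several Laban-based features rather than two, but the mechanism, projection onto principal axes followed by a linear decision, is the same.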

  • Informative Motion Study to Improve Human-Coexistence Robot’s Personal Affinity

    Takafumi Matsumaru

    IEEE RO-MAN 2009 Workshop on Robot Human Synergetics, [Toyama International Conference Center, Japan]    2009年09月

  • Study on Handover Movement Informing Receiver of Weight Load as Informative Motion of Human-friendly Robot

    Takafumi Matsumaru, Shigehisa Suzuki

    International Journal of Factory Automation, Robotics and Soft Computing   2009 ( 3 ) 11 - 19  2009年07月  [査読有り]

  • モーション・メディアとインフォマティブ・モーション—モーションを基軸にしたシステム・インテグレーション—

    岩城敏, 松丸隆文

    計測と制御   48 ( 6 ) 443 - 447  2009年06月  [査読有り]

    CiNii

  • 荷物重量を受け手に伝える手渡し動作の検討—インフォマティブ・モーションの研究事例—

    松丸隆文

    計測と制御   48 ( 6 ) 508 - 512  2009年06月  [査読有り]

    CiNii

  • Dynamic Remodeling of Environmental Map using Range Data for Remote Operation of Mobile Robot

    Takafumi Matsumaru, Hiroshi Yamamori, Takumi Fujita

    Journal of Robotics and Mechatronics   21 ( 3 ) 332 - 341  2009年06月  [査読有り]

     概要を見る

    In studying dynamic remodeling of the environmental map around a remotely operated mobile robot, in which data measured by the robot's range sensor are sent from the robot to the operator, we introduce the Line & Hollow method and the Cell & Hollow method for environmental mapping. Results for three types of environmental situation clarify the features and effectiveness of our approach. In the Line & Hollow method, an isosceles triangle is set based on the range data: the base line is drawn to express the obstacle shape, and the inside is hollowed out to express vacant space. In the Cell & Hollow method, the value of the cell corresponding to a range reading is incremented, and an obstacle is assumed to exist if the cell value exceeds an ascending threshold. The cell values on the line between the cell indicated by the measured data and the cell at the sensor position are decremented, and the obstacle is deleted if the value drops below a descending threshold. We confirmed that the environmental map from either method reflects dynamic environmental change.

    DOI
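The Cell & Hollow update rule described above can be sketched as follows. This is an illustrative reimplementation rather than the authors' code; the sparse-dictionary grid representation and the threshold values are assumptions.

```python
# Illustrative sketch of the Cell & Hollow update: the cell indicated by a
# range reading is incremented, cells on the ray between sensor and reading
# are decremented, and obstacle flags follow ascending/descending thresholds.
# Grid representation and threshold values are assumed, not the authors' code.

def ray_cells(x0, y0, x1, y1):
    """Integer cells on the line from (x0, y0) to (x1, y1), excluding the
    endpoint (simple Bresenham; the endpoint is the measured cell itself)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        cells.append((x, y))
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x += sx
        if e2 <= dx:
            err += dx
            y += sy
    return cells

def update(values, obstacles, sensor, hit, asc=3, desc=1):
    """Apply one range reading: `values` holds per-cell counters,
    `obstacles` holds per-cell obstacle flags."""
    values[hit] = values.get(hit, 0) + 1
    if values[hit] >= asc:                 # ascending threshold: assume obstacle
        obstacles[hit] = True
    for c in ray_cells(*sensor, *hit):     # free space along the ray
        values[c] = values.get(c, 0) - 1
        if values.get(c, 0) <= desc and obstacles.get(c):
            obstacles[c] = False           # descending threshold: delete obstacle
    return values, obstacles
```

Repeated readings of the same cell raise its counter past the ascending threshold and mark an obstacle; later rays passing through that cell lower the counter until the obstacle is deleted, which is how the map can track a dynamic environment.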

  • Functions of Mobile-Robot Step-On Interface

    Takafumi Matsumaru, Kosuke Akai

    Journal of Robotics and Mechatronics   21 ( 2 ) 267 - 276  2009年04月  [査読有り]

     概要を見る

    To improve the maneuverability and safety of the HFAMRO-1 mobile robot, we added a step-on interface (SOI) for directing robotic or mechatronic tasks and operations (HFAMRO: "human-friendly amusing" mobile robot). A projector displays a direction screen on a floor or other surface, and an operator specifies a button showing the selected movement by stepping on or pointing at it. We modified the direction screen so that, among buttons displayed on two lines, stepped-on buttons directing movement are displayed only on the lower line. We also shortened the retention time and had the selected movement executed only when the foot was removed from the stepped-on button. The robot has two SOIs and multiple projection screens and can be controlled from either direction for the same function. We synchronized the direction and preliminary-announcement screens to inform passersby which way the robot would move. Using range-scanner data, the robot distinguishes feet from other objects based on size, and fusion control with autonomous movement for obstacle avoidance is implemented.

    DOI CiNii
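The size-based foot discrimination mentioned above can be sketched as follows. This is an illustrative sketch, not the robot's implementation; the gap and width thresholds are assumed values.

```python
# Illustrative sketch of distinguishing feet from other objects in 2-D
# range-scanner data by segment size. Gap and width thresholds (in metres)
# are assumed values, not those used on HFAMRO-1.
import math

def segments(points, gap=0.1):
    """Split an ordered list of (x, y) scan points into segments wherever
    the distance between neighbouring points exceeds `gap`."""
    segs, cur = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) > gap:
            segs.append(cur)
            cur = []
        cur.append(q)
    segs.append(cur)
    return segs

def is_foot(seg, min_w=0.05, max_w=0.35):
    """A segment counts as foot-sized if its end-to-end width lies in a
    shoe-like range."""
    return min_w <= math.dist(seg[0], seg[-1]) <= max_w
```

A scan containing a shoe and a wall yields two segments, of which only the shoe-sized one passes the test.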

  • Mobile Robot with Preliminary-announcement and Indication Function of Upcoming Operation just after the Present

    Takafumi Matsumaru

    International Journal of Factory Automation, Robotics and Soft Computing   2009 ( 1 ) 102 - 110  2009年01月  [査読有り]

  • A characteristics measurement of two-dimensional range scanner and its application

    Takafumi Matsumaru

    Open Automation and Control Systems Journal   2 ( 1 ) 21 - 30  2009年  [査読有り]

     概要を見る

    This paper presents the characteristics measurement of a two-dimensional active range scanner, the URG made by Hokuyo Automatic Co., Ltd., which was released in 2004 and is spreading widely as an external sensor for mobile robots. The following points were clarified from the characteristics measurement of the URG-X002S under various conditions. (1) When the object has a glossy or black surface, the error rate (the frequency at which the scanner judges a measurement to be impossible and outputs an error code) rises as the oblique angle of the object becomes large and the distance to the object becomes long. (2) When the object has a white or rough surface, the error rate is zero, the margin of error is within dozens of millimeters, and the variation is small, provided the oblique angle is smaller than 60 deg and the distance is shorter than 1 m. (3) The lateral error is negligibly small if the detection distance is shorter than 1 m. The paper also examines applying the range scanner to the Step-On Interface (SOI), in which the scanner detects and measures an operator's stepping. Based on the measured results, we designed the stepping-judgment method, the installation position of the scanner, and the placement of buttons on the direction screen for applying the range scanner to the SOI for mobile-robot operation.

    DOI

    Scopus

    13
    被引用数
    (Scopus)
  • Handover Movement Informing Receiver of Weight Load as Informative Motion Study for Human-Friendly Robot

    Takafumi Matsumaru

    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2     48 - 54  2009年  [査読有り]

     概要を見る

    This paper presents a study on handover movement that informs a receiver of the weight load, as an example of the informative motion of a human-coexistence (human-friendly) robot. The movement of the human body when handing a package to a person was first measured and analyzed, and the features of the variation of movement depending on the weight load were clarified. A questionnaire survey was then carried out to verify whether people could read the difference in weight load from the variation of movement, which was reproduced using a simulator. A receiver can judge whether a package is heavy or not, although it is difficult to estimate the exact weight load. Accordingly, if the variation at the noted points is incorporated into the design of the handover movement of a humanoid robot, people receiving a package will be able to estimate its weight load, making it easier to receive the package safely and reliably.

    DOI

    Scopus

    3
    被引用数
    (Scopus)
  • Discrimination of Emotion from Movement and Addition of Emotion in Movement to Improve Human-Coexistence Robot's Personal Affinity

    Takafumi Matsumaru

    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2     40 - 47  2009年  [査読有り]

     概要を見る

    This paper presents trials of discriminating emotion from movement and adding emotion to movement on a teddy-bear robot, aiming both at expressing a robot's emotion through movement and at improving the robot's personal affinity. We addressed four kinds of emotion: joy, anger, sadness, and fear. Two standpoints, a performer's and an observer's, were considered to establish the emotional-movement data used for analysis. The movement data were collected from the performer's standpoint and sorted from the observer's standpoint. In discriminating the emotion included in a movement, both a method using Laban's feature quantities and a method using principal component analysis were tried. With principal component analysis, a correct-discrimination rate of about 70% was obtained for all four emotions. The movement features by which each emotion can be interpreted were inferred from the coefficients of the discrimination function obtained with principal component analysis. Using these features, a design principle for adding emotion to a basic movement was defined. The verification experiment suggested that movements whose intended emotion people can interpret with relatively high probability can be produced for joy and anger. For fear and sadness, since the movements expressing those emotions contain small and few motions, it is difficult to distinguish their features and to produce clearly emotional movement.

    DOI

    Scopus

    7
    被引用数
    (Scopus)
  • Step-on interface on mobile robot to operate by stepping on projected button

    Takafumi Matsumaru, Kosuke Akai

    Open Automation and Control Systems Journal   2 ( 1 ) 85 - 95  2009年  [査読有り]

     概要を見る

    This paper proposes the step-on interface (SOI) for operating a mobile robot, in which a projector displays a direction screen on a floor and a user specifies a button showing the selected movement by stepping on or pointing at it. The HFAMRO-1 mobile robot was developed to demonstrate the SOI's potential (HFAMRO: "human-friendly amusing" mobile robot). The SOI of HFAMRO-1 consists of a projector and a range scanner on an omni-directional mobile platform. An operational comparison with other input interfaces confirms that we can direct the robot's movement using our own feet. We had students who do not specialize in robotic systems try to operate HFAMRO-1 in their shoes; all of them could specify the buttons satisfactorily, and everyone mastered the SOI immediately.

    DOI

    Scopus

    19
    被引用数
    (Scopus)
  • これからの動作の予告表示機能をもつ移動ロボットの対人模擬環境での評価実験

    松丸隆文

    ヒューマンインタフェース学会論文誌   10 ( 1 ) 11 - 20  2008年02月  [査読有り]

    CiNii

  • Experimental examination in simulated interactive situation between people and mobile robot with preliminary-announcement and indication function of upcoming operation

    Takafumi Matsumaru

    2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-9     3487 - 3494  2008年  [査読有り]

     概要を見る

    This paper presents the results of experimental examinations, "passing each other" and "positional prediction," in simulated interactive situations between people and a mobile robot. We have developed four prototype robots based on four proposed methods for preliminarily announcing and indicating to people the speed and direction of the upcoming movement of a mobile robot moving on a two-dimensional plane. In each experiment we observed a significant difference between the cases with and without the preliminary-announcement and indication (PAI) function, which demonstrates the effect of preliminarily announcing and indicating the upcoming operation. In addition, the features and effective usage of each PAI method were clarified. The method of announcing the state of operation just after the present is effective when a person must immediately judge which direction to take, because simple information can be transmitted quickly. The method of continuously indicating operations from the present to some future time is effective when a person wants to avoid contact or collision reliably and accurately, because complicated information can be transmitted precisely. We would like to verify these results under various conditions, such as traffic lines crossing obliquely.

    DOI

    Scopus

    6
    被引用数
    (Scopus)
  • プロジェクタを用いて次の動作を予告表示する機能をもつ移動ロボットの開発

    松丸隆文, 干場祐, 平岩慎司, 宮田康広

    日本ロボット学会誌   25 ( 3 ) 410 - 421  2007年04月  [査読有り]

     概要を見る

    This paper discusses the mobile robot PMR-5, which has a preliminary-announcement and display function that indicates forthcoming operations to people near the robot using a projector. The projector is mounted on the mobile robot, and a two-dimensional frame is projected onto the running surface. In the frame, not only the scheduled course but also the states of operation can be clearly announced as information about the movement. We examine the presentation of states of operation, such as stopping or going back, together with the time information of the scheduled course, on the developed robot. The scheduled course is expressed as arrows, chosen for intelligibility at a glance: an arrow directly expresses the direction of motion, and its length announces the speed of motion. Operations up to 3 seconds ahead are indicated; three arrows, color-coded per second, are connected and displayed so that they show the change of speed over the 3-second period. A sign for spot revolution and characters for stop and going back are also displayed. We exhibited the robot, and about 200 visitors completed a questionnaire evaluation. The averages of the 5-stage evaluation are 4.5 points for the direction of motion and 3.9 points for the speed of motion, so the display was evaluated as generally intelligible.

    DOI CiNii

  • Development of Four Kinds of Mobile Robot with Preliminary-Announcement and Indication Function of Upcoming Operation

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics,   19 ( 2 ) 148 - 159  2007年04月  [査読有り]

     概要を見る

    We propose approaches and equipment for preliminarily announcing and indicating to people the speed and direction of movement of mobile robots moving on a two-dimensional plane. We introduce four approaches, categorized into (1) announcing the state just after the present and (2) continuously indicating operations from the present to some future time. To realize the approaches, we use an omni-directional display (PMR-2), a flat-panel display (PMR-6), a laser pointer (PMR-1), and projection equipment (PMR-5) as the announcement units of the prototype robots. The four prototype robots were exhibited at the 2005 International Robot Exhibition (iREX05). We had visitors answer questionnaires with a 5-stage evaluation. The projector robot PMR-5 received the highest evaluation score of the four. An examination of differences by gender and age suggested that some people prefer simple information, friendly expressions, and a minimum of information presented at one time.

    DOI CiNii

  • Mobile robot with preliminary-announcement and indication function of forthcoming operation using flat-panel display

    Takafumi Matsumaru

    PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-10     1774 - 1781  2007年  [査読有り]

     概要を見る

    This research proposes methods and equipment for preliminarily announcing and indicating to surrounding people both the speed and the direction of motion of a mobile robot moving on a two-dimensional plane. This paper discusses the mobile robot PMR-6, in which a liquid crystal display (LCD) is mounted on the mobile unit and the state of operation 1.5 s before the actual motion is indicated. The display is based on an arrow, chosen for its intelligibility to people even at first sight. The speed of motion is expressed by the size (length and width) of the arrow and by its color, based on traffic signals. The direction of motion is described by the curvature of the arrow. The characters "STOP" are displayed in red when stopping. The robot was exhibited at the 2005 International Robot Exhibition held in Tokyo, where about 200 visitors answered questionnaires. The averages of the five-stage evaluation are 3.56 points for the speed and 3.97 points for the direction, so the method and expression were evaluated as comparatively intelligible. As for gender, females on the whole rated the speed-of-motion display higher than males did. Concerning age, some of the younger and older groups rated the direction-of-motion display higher than the middle-aged group did.

    DOI

    Scopus

    6
    被引用数
    (Scopus)
  • 重量物挙上動作におけるValsalva効果による腹圧増加分を考慮した解析モデルの提案

    松丸隆文, 福山聡, 佐藤智祐

    日本機械学会論文集C編   72 ( 724 ) 3863 - 3870  2006年12月  [査読有り]

     概要を見る

    This paper proposes a model to estimate the load on the lumbar vertebrae that considers not only the value presumed from posture but also the increase in abdominal pressure due to the Valsalva maneuver, extrapolated from vital capacity. Using the proposed model, the compressive force is estimated to be reduced by about 30% by the effect of the abdominal pressure. From the error of the presumed ground reaction force relative to the actual measurement, the accuracy of the lumbar-vertebra load presumed with the proposed model is thought to be within 10%. Furthermore, two operations with extreme starting postures were compared with respect to the transition of the compressive force and shear force on the lumbar vertebrae. It was shown that the starting posture significantly affects the maximum load on the lumbar vertebrae, which suggests that the optimal starting posture may lie between the two postures.

    CiNii

  • 光線を用いて予定経路を表示する機能をもつ移動ロボットの開発

    松丸隆文, 草田享, 岩瀬和也

    日本ロボット学会誌   24 ( 8 ) 976 - 984  2006年11月  [査読有り]

     概要を見る

    This paper discusses the design and basic characteristics of the mobile robot PMR-1, which has a preliminary-announcement and display function that shows the forthcoming operation (the direction and speed of motion) to people around the robot by drawing the scheduled course on the running surface with a light ray. A laser pointer is used as the light source, and its light is reflected by a mirror. The light ray is projected onto the running surface, and the scheduled course is drawn by rotating the reflector around the pan and tilt axes. The preliminary-announcement and display unit of the developed robot can indicate the operation up to 3 seconds ahead, so the robot moves while drawing the scheduled course from the present to 3 seconds later. An experiment on the coordination between the preliminary announcement and the movement was carried out, and we confirmed that the announced course corresponds to the robot's trajectory both when the movement path is given beforehand and when the robot is operated with manual joystick input in real time. We have thus validated the coordination algorithm between the preliminary announcement and the real movement.

    DOI CiNii

  • Examination on Lifting Motion with Different Start-on Posture, and Study on the Proper Operation using Minimum Jerk Model [in Japanese]

    松丸隆文, 福山聡, 島和義, 伊藤友孝

    日本機械学会論文集C編   72 ( 720 ) 2554 - 2561  2006年08月  [査読有り]

     概要を見る

    The aim of this research is to establish a quantitative evaluation method for lifting tasks and to clarify safe postures and optimal motion. This paper first examines operations from three different starting postures using four items: the maximum compressive force and the maximum shear force on the L5/S1 joint, the energy-efficiency index, and the degree of contribution of each joint. The results show that the load not only on the lumbar vertebrae but also on the knee joint should be emphasized, and that pose-C, in which the knee is flexed almost at a right angle and the upper body is raised, is an appropriate starting posture. Moreover, a simulation using the minimum jerk model yielded a proper operation, but the actual operation extends the lower body earlier than predicted. Therefore, to study optimal lifting motion, not only criteria on the whole operation or relative criteria but also absolute criteria focused on a specific part, such as the knee-joint moment, seem necessary.

    CiNii
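The minimum jerk model referred to above has a standard closed form (the fifth-order polynomial of Flash and Hogan); the following is a generic sketch of that profile, not the paper's simulation code.

```python
# Generic minimum-jerk position profile: a fifth-order polynomial with zero
# velocity and acceleration at both ends, commonly used to model smooth
# point-to-point motion. Not the paper's simulation code.
def min_jerk(x0, xf, T, t):
    tau = t / T  # normalized time in [0, 1]
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
```

The profile starts and ends at rest and, by symmetry, passes through the midpoint of the displacement at half the movement time.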

  • ヒューマノイド・ロボットの体形・動作の設計に関する一考察

    松丸隆文

    日本バーチャルリアリティ学会論文誌   11 ( 2 ) 283 - 292  2006年06月  [査読有り]

     概要を見る

    This paper discusses a method of designing the bodily shape and motion of a humanoid robot to raise not only its emotional but also its informative interpersonal affinity. Concrete knowledge and opinions are classified into movement prediction from configuration and movement prediction from continuous or preliminary motion, and are discussed with reference to their applications and usage. Specifically, we consider bodily shapes and motions that make it easier for surrounding people watching a humanoid robot to predict and understand its capability and performance and its following action and intention.

    DOI CiNii

  • 重量物挙上動作における動作姿勢の受容率を用いた評価

    松丸隆文, 島和義, 福山聡, 伊藤友孝

    計測自動制御学会論文集   42 ( 2 ) 174 - 182  2006年02月  [査読有り]

     概要を見る

    The aim of this research is to establish a quantitative evaluation method for lifting tasks and to clarify safe postures and optimal motion. This paper examines the motion from three kinds of starting posture using the acceptance rate. Operation-A starts from pose-A, in which the knee is extended to the maximum. Operation-B starts from pose-B, in which the knee is flexed to the maximum and the upper body is raised as much as possible. Operation-C starts from pose-C, in which the knee is flexed almost at a right angle and the upper body is raised. The acceptance rate is the estimated proportion of the population who can tolerate the joint moment during the lifting operation, based on the presumed moment and the coefficient of variation of the acceptable marginal moment at each joint. The compressive force on the lumbar vertebrae computed from the L5/S1 joint moment at an acceptance rate of 85% (about Av.-1SD) did not exceed the standard value of a previous study. We therefore set 85% as the safety-standard acceptance rate and settled on judging-A (over 95%, recommended), judging-B (85-95%, should note), and judging-C (under 85%, should modify). Although the ankle joint in operation-A and the knee joint in operation-B are judged as rank C, every joint in operation-C shows a high acceptance rate, so the validity of operation-C has been clarified quantitatively.

    DOI CiNii
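The acceptance rate defined above, the estimated proportion of people whose acceptable marginal moment exceeds the presumed moment, can be sketched under a normality assumption. The numbers below are hypothetical, not the paper's measurements.

```python
# Illustrative sketch of the acceptance rate: the share of a population whose
# acceptable marginal joint moment exceeds the presumed moment, assuming the
# marginal moment is normally distributed with a given mean and coefficient
# of variation. All values are hypothetical.
import math

def acceptance_rate(presumed, mean_limit, cv):
    """P(acceptable marginal moment > presumed moment) under normality."""
    sd = cv * mean_limit
    z = (presumed - mean_limit) / sd
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def judge(rate):
    """Three-rank judgment from the paper: A (over 95%, recommended),
    B (85-95%, should note), C (under 85%, should modify)."""
    if rate > 0.95:
        return "A"
    if rate >= 0.85:
        return "B"
    return "C"
```

A presumed moment one standard deviation below the mean limit gives a rate of about 84%, consistent with the abstract's note that the 85% level corresponds roughly to Av.-1SD.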

  • Mobile robot with preliminary-announcement and display function of forthcoming motion using projection equipment

    Takafumi Matsumaru

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication     443 - 450  2006年  [査読有り]

     概要を見る

    This paper discusses the mobile robot PMR-5, which has a preliminary-announcement and display function that indicates forthcoming operations to people near the robot using a projector. The projector is mounted on the mobile robot, and a two-dimensional frame is projected onto the running surface. In the frame, not only the scheduled course but also the states of operation can be clearly announced as information about the movement. We examine the presentation of states of operation, such as stopping or going back, together with the time information of the scheduled course, on the developed robot. The scheduled course is expressed as arrows, chosen for intelligibility at a glance: an arrow directly expresses the direction of motion, and its length announces the speed of motion. Operations up to 3 seconds ahead are indicated; three arrows, color-coded per second, are connected and displayed so that they show the change of speed over the 3-second period. A sign for spot revolution and characters for stop and going back are also displayed. We exhibited the robot, and about 200 visitors completed a questionnaire evaluation. The averages of the 5-stage evaluation are 3.9 points for the direction of motion and 4.5 points for the speed of motion, so the display was evaluated as generally intelligible.

    DOI

    Scopus

    32
    被引用数
    (Scopus)
  • Mobile robot with preliminary-announcement function of forthcoming motion using light-ray

    Takafumi Matsumaru, Takashi Kusada, Kazuya Iwase

    2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12     1516 - 1523  2006年  [査読有り]

     概要を見る

    This paper discusses the design and basic characteristics of the mobile robot PMR-1, which has a preliminary-announcement and display function that shows the forthcoming operation (the direction and speed of motion) to people around the robot by drawing the scheduled course on the running surface with a light ray. A laser pointer is used as the light source, and its light is reflected by a mirror. The light ray is projected onto the running surface, and the scheduled course is drawn by rotating the reflector around the pan and tilt axes. The preliminary-announcement and display unit of the developed robot can indicate the operation up to 3 seconds ahead, so the robot moves while drawing the scheduled course from the present to 3 seconds later. An experiment on the coordination between the preliminary announcement and the movement was carried out, and we confirmed that the announced course corresponds to the robot's trajectory both when the movement path is given beforehand and when the robot is operated with manual joystick input in real time. We have thus validated the coordination algorithm between the preliminary announcement and the real movement.

  • 特集「インフォマティブ・モーションから人間機械系の情報動作学へ」に寄せて

    松丸 隆文

    バイオメカニズム学会誌 = Journal of the Society of Biomechanisms   29 ( 3 ) 117 - 117  2005年08月

    DOI CiNii

  • 人間機械系の情報動作学の応用展開

    松丸隆文

    バイオメカニズム学会誌   29 ( 3 ) 139 - 145  2005年08月  [査読有り]

     概要を見る

    人工機械システムに対する恐怖感や違和感を排除するには,外見を人間型や動物型にするだけでなく,サイズと性能・機能の関係などにおいて,人間の常識,経験知・暗黙知に合致した形態・動作が必要だと考える.具体的には,移動ロボットやヒューマノイド・ロボットにおいて,その次の行動・意図が,それを見ている周囲の人間によって予測しやすい形態・動作を考えている.本稿は,まず人間機械系の情報動作学の応用展開における研究項目と手順を説明し,つぎにこれまでに入手した資料から得られている知見のいくつかを,(1)形態からの運動予測と(2)連続動作・予備動作からの運動予測に分類し,その応用・適用方法に言及しながら論じる.

    DOI CiNii

  • Mobile robot with eyeball expression as the preliminary-announcement and display of the robot's following motion

    T Matsumaru, K Iwase, K Akiyama, T Kusada, T Ito

    AUTONOMOUS ROBOTS   18 ( 2 ) 231 - 246  2005年03月  [査読有り]

     概要を見る

    This paper explains the PMR-2R (prototype mobile robot - 2 revised), a mobile robot with eyeball expression as the preliminary announcement and display of the robot's following motion. First, we indicate the importance of the preliminary-announcement and display function of a mobile robot's following motion for informational affinity between human beings and robots, explaining conventional methods and related work. We present the four proposed methods, categorized into two types: one that indicates the state just after the present, and one that displays the operation from the present to some future time continuously. We then introduce the PMR-2R, which has an omni-directional display, the magicball, on which an eyeball expresses the robot's following direction and speed of motion at the same time. From the evaluation experiment, we confirmed the effectiveness of the eyeball expression for transferring the information, and found that announcement around one or two seconds before the actual motion may be appropriate. Finally, we compare four types of eyeball expression: the one-eyeball type, the two-eyeball type, the will-o'-the-wisp type, and the armor-helmet type. From the evaluation experiment, we clarified the importance of making the robot's front more intelligible, especially for announcing the robot's direction of motion.

    DOI

    Scopus

    15
    被引用数
    (Scopus)
  • 移動ロボットの遠隔操作における手動操作と自律動作の融合制御手法のシミュレーションによる検討

    松丸隆文, 萩原潔, 伊藤友孝

    計測自動制御学会論文集   41 ( 2 ) 157 - 166  2005年02月  [査読有り]

     概要を見る

    This paper examines the combination control of manual operation and autonomous motion to improve the maneuverability and safety of mobile-robot teleoperation. Autonomous motion that supports manual operation by processing information from simple range sensors on the mobile robot is examined using a computer simulation. Three types of autonomous motion are proposed, revolution (RV), following (FL), and slowdown (SD), and ways to equip the system with them are examined. In revolution, the robot turns autonomously when it approaches an obstacle too closely. In following, the robot translates in parallel, keeping its orientation, to go along the form of the obstacle. In slowdown, the robot's translation speed is restricted according to the distance to the obstacle and the robot's translation speed. The features of each autonomous motion are clarified: transit time and mileage become shorter with revolution or following, and contact between the robot and obstacles is almost entirely avoided with slowdown. When the distance to an obstacle at which an autonomous motion is applied is adjusted according to the robot's translation speed, excessive application of autonomous motion against the operator's intention is reduced, so maneuverability improves. When revolution or following is combined with slowdown, the number of near misses with obstacles is reduced, even though the transit time is prolonged, so safety improves.

    DOI CiNii

  • 移動ロボットの遠隔操作における手動操作と自律動作の融合制御手法−状況に適した自律動作の検討−

    松丸隆文, 萩原潔, 伊藤友孝

    計測自動制御学会論文集   40 ( 9 ) 958 - 967  2004年09月  [査読有り]

     概要を見る

    This paper examines the combination control of manual operation and autonomous motion in the teleoperation of a mobile robot. The autonomous motion suitable for the situation in which the robot passes through a passage with bends is examined using a computer simulation to improve maneuverability. The situations in which the manually operated robot contacts the sidewall of a passage were investigated experimentally: contact tends to occur around the entrance and exit of bends, with the robot contacting the inside wall near the entrance of a bend and the outside wall around its exit. The situations in which the operator uses the robot's autonomous motion under combination control were also investigated experimentally: the operator tends to use autonomous following (FL) near the entrance, while autonomous revolution (RV) is effective for returning the robot to the center of the passage around the exit. From these situation analyses, selective revolution/following (S-R/F) was developed, in which the situation is estimated from the direction of the operator's manual input and the direction of the obstacle from the robot, and autonomous revolution or autonomous following is then selected and applied according to the estimated situation. The new technique was implemented in the simulation system, and it was confirmed that autonomous revolution against the operator's intention is not applied, so maneuverability improves.

    DOI CiNii

  • 人間・機械・情報系とロボティックバーチャルシステム

    松丸隆文

    計測と制御   43 ( 2 ) 116 - 121  2004年02月  [査読有り]

    DOI CiNii

  • 人間共存型移動ロボットの行動を予告表示する方法と有効性のシミュレーションによる検討

    松丸隆文, 工藤新之介, 遠藤久嗣, 伊藤友孝

    計測自動制御学会論文集   40 ( 2 ) 189 - 198  2004年02月  [査読有り]

     概要を見る

    This paper examines the preliminary-announcement and display function of a human-friendly robot's following action and intention, especially the direction and speed of motion of a mobile robot moving on a 2-dimensional plane. We proposed two types of method: indicating the state just after the present (lamp and party blowouts) and displaying the operation from the present to some future time continuously (beam of light and projector). A simulation system was developed to confirm the effectiveness of the preliminary announcement and display, which is evaluated by chasing the mobile robot. The mobile robot moves about at a random speed in a random direction, and the subject moves the operation robot with a joystick while watching the preliminary announcement on the mobile robot. The display method (lamp/blowout/beam) and the announcement timing (0.5-3.0 [s] before the actual motion of the robot) are thus evaluated numerically using the position/direction gap. The data are processed not only as averages and standard deviations but also with two-way ANOVA and t-screening, examined first for translation and rotation separately and then on a 2-dimensional plane.

    We found that the method displaying the operation from the present to some future time continuously (beam) is easy to understand, but the displayed path needs a certain length, which implies an appropriate timing depending on conditions. The optimal timing for indicating a state is about 1.0-1.5 [s] before the motion: if the time difference is too long, the position/direction gap becomes large due to poor memory and operational mistakes, and if it is too short, the operation is late owing to reaction delay. Moreover, it seems that people tend to understand information better from a transforming object than from a color-changing object, and continuous change is easier to understand than discrete change.

    DOI CiNii

  • 第8回ロボティクス・シンポジア

    松丸 隆文

    計測と制御   42 ( 6 ) 527 - 527  2003年

    DOI CiNii

  • Preliminary-announcement function of mobile robot's following motion by using omni-directional display

    T Matsumaru, K Iwase, T Kusada, K Akiyama, H Gomi, T Ito

    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS 2003, VOL 1-3     650 - 657  2003年  [査読有り]

     概要を見る

    Human-friendly robots, which support and assist people directly at their side, are anticipated as society ages and the birthrate declines. In addition to the safety function of avoiding danger to people, we consider the informational affinity function important, such as announcing the robot's following motion to surrounding people before it begins to move. A person can anticipate and guess another's motion by considering the human body's functional features and behavioral patterns as common sense. This paper discusses the preliminary-announcement and display function of a mobile robot's following motion, namely the direction and speed of motion, using an omni-directional display, magicball (R). We have been developing the mobile robot PMR-2, which tells its following action with eyeball expressions.

  • Eyeball Expression for Preliminary-Announcement of Mobile Robot’s Following Motion

    Takafumi Matsumaru, Kyouhei Akiyama, Kazuya Iwase, Takashi Kusada, Hirotoshi Gomi, Tomotaka Ito

    Proceedings of The 11th International Conference on Advanced Robotics (ICAR 2003), [University of Coimbra, Portugal]     797 - 803  2003年  [査読有り]

  • Simulation on preliminary-announcement and display of mobile robot's following action by lamp, party-blowouts, or beam-light

    T Matsumaru, S Kudo, T Kusada, K Iwase, K Akiyama, T Ito

    PROCEEDINGS OF THE 2003 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM 2003), VOLS 1 AND 2     771 - 777  2003年  [査読有り]

     概要を見る

    This paper examines the preliminary-announcement and display function of the robot's following action and intention, especially the direction and speed of motion for a mobile robot which moves on a 2-dimensional plane. Based on the results of the fundamental examination, the validity of the preliminary announcement and display at around 1 second before the actual motion has been verified on a 2-dimensional plane. Moreover, we have found that human beings tend to understand the change of speed of the mobile robot better with a transforming object than with a changing-color object. Comparing the two types of proposed methods, the preliminary announcement and display indicating continuous information, as by drawing with the beam of light, proved more efficient than displaying a single state by lighting a lamp or using blowouts. Moreover, we have found that the scheduled path needs some length for a person to understand the mobile robot's following action.

    DOI

    Scopus

    4
    被引用数
    (Scopus)
  • Robot-to-Human Communication of Mobile Robot’s Following Motion using Eyeball Expression on Omni-directional Display

    Takafumi Matsumaru, Kyouhei Akiyama, Kazuya Iwase, Takashi Kusada, Hirotoshi Gomi, Tomotaka Ito

    IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), [Kobe, Japan]     790 - 796  2003年  [査読有り]

    DOI

    Scopus

    4
    被引用数
    (Scopus)
  • Synchronization of Mobile Robot’s Movement and Preliminary-announcement using Omni-directional Display

    Takafumi Matsumaru, Kazuya Iwase, Takashi Kusada, Kyouhei Akiyama, Hirotoshi Gomi, Tomotaka Ito

    IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), [Kobe, Japan]     246 - 253  2003年  [査読有り]

    DOI

    Scopus

    1
    被引用数
    (Scopus)
  • Examination by software simulation on preliminary-announcement and display of mobile robot's following action by lamp or blowouts

    T Matsumaru, H Endo, T Ito

    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS     362 - 367  2003年  [査読有り]

     概要を見る

    This paper discusses the preliminary-announcement and display function of the robot's following action and intention, especially the direction and speed of motion for a mobile robot which moves on a 2-dimensional plane. We started with a software simulation of the announcement method indicating a state just after the present moment, with lighting lamps and with blowouts (elastic arrows), on translation and rotation separately. As a result, the following three remarks were obtained: not only the direction of motion but also the speed of motion is effective for estimating the following translation; not the rotation speed but the rotation angle (the target direction) is efficient for understanding the following rotation; and around 1 second before the actual motion is the optimal timing to recognize the mobile robot's following motion, both in translation and rotation. Moreover, we have verified the validity of the preliminary announcement and display also for general movement on a plane.

    DOI

  • Advanced autonomous action elements in combination control of remote operation and autonomous control

    T Matsumaru, K Hagiwara, T Ito

    IEEE ROMAN 2002, PROCEEDINGS     29 - 34  2002年  [査読有り]

     概要を見る

    This paper examines combination control, in which remote operation is combined with autonomous behaviors, with the aim of realizing remote operation of a mobile robot which moves in a human-coexisting environment. We consider the distance and direction to an obstacle and the speed of motion of the mobile robot for revolution, following, and slowdown, which we have proposed as the autonomous action elements in combination control. Fuzzy reasoning and vector components are used. From experiments with three subjects, almost the same results were obtained: when the distance and direction to an obstacle and the speed of motion of the mobile robot are considered, there seems to be no great difference in following, but mileage becomes shorter in revolution and transit time is reduced in slowdown.

    DOI

    Scopus

    3
    被引用数
    (Scopus)
  • Incorporation of autonomous control elements in combination control of remote operation and autonomous control

    T Matsumaru, K Hagiwara, T Ito

    IECON-2002: PROCEEDINGS OF THE 2002 28TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, VOLS 1-4     2311 - 2316  2002年  [査読有り]

     概要を見る

    This paper examined combination control, in which remote operation is combined with autonomous behaviours, with the aim of realizing remote operation of a mobile robot which moves in a human-coexisting environment. For revolution, following, and slowdown, which we had proposed as the autonomous action elements in combination control, the elements have been examined in consideration of the distance and direction to an obstacle and the speed of motion of the mobile robot at that time. Fuzzy reasoning and vector components are used to achieve this. Although there was no big difference in following, mileage became shorter in revolution, and transit time was shortened in slowdown. Furthermore, in order to completely avoid a near miss with an obstacle, slowdown is incorporated with revolution or following. A verification experiment in software simulation was carried out with the same subjects. Although it takes a slightly longer transit time, no other particular adverse effect appeared.

    DOI

    Scopus

  • Examination on Virtual Environment for Preliminary-Announcement and Display of Human-Friendly Mobile Robot

    Takafumi Matsumaru, Kiyoshi Hagiwara

    Proceedings of 6th Joint International Conference on Advanced Science and Technology (JICAST 2001), [Zhejiang University, Hangzhou, China]     169 - 172  2001年12月

    CiNii

  • Examination on Virtual Environment for Combination Control of Human-Friendly Remote-Operated Mobile Robot

    Takafumi Matsumaru, Shin’ichi Ichikawa

    Proceedings of 6th Joint International Conference on Advanced Science and Technology (JICAST 2001), [Zhejiang University, Hangzhou, China]     177 - 180  2001年12月

  • Combination Control of Remote Operation with Autonomous Behavior in Human-Friendly Mobile Robot

    Takafumi Matsumaru, Shin’ichi Ichikawa

    Proceedings of the 10th International Conference on Advanced Robotics (ICAR 2001), [Hotel Mercure Buda, Budapest, Hungary]     567 - 572  2001年08月  [査読有り]

    CiNii

  • Method and Effect of Preliminary-Announcement and Display for Translation of Mobile Robot

    Takafumi Matsumaru, Kiyoshi Hagiwara

    Proceedings of the 10th International Conference on Advanced Robotics (ICAR 2001), [Hotel Mercure Buda, Budapest, Hungary]     573 - 578  2001年08月  [査読有り]

    CiNii

  • Dynamic Brief-to-Precise Strategy for Human-Friendly NeuRobot

    Takafumi Matsumaru

    Proceedings of the 32nd ISR (International Symposium on Robotics), [Seoul, Korea]     526 - 531  2001年04月  [査読有り]

  • Preliminary announcement and display for human-friendly mobile robot

    T Matsumaru, Y Terasawa

    MOBILE ROBOT TECHNOLOGY, PROCEEDINGS     221 - 226  2001年  [査読有り]

     概要を見る

    This paper describes the preliminary announcement and display of the action and intention of a mobile robot which moves in a human-coexisting environment. First, the affinity needed to realize a human-friendly robot is mentioned. Next, the usefulness of a remote-operated human-friendly mobile robot is shown by explaining several applications. It is then stated, by comparison with conventional mobile machines and remote-operated mobile robots, that informational affinity, i.e. the function of announcing and displaying beforehand the robot's following action and intention to human beings, is necessary. Then the method using a "blowout" is proposed to inform both the speed and direction of motion simultaneously. Copyright (C) 2001 IFAC.

  • Preliminary-announcement and display for translation and rotation of human-friendly mobile robot

    T Matsumaru, K Hagiwara

    ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS     213 - 218  2001年  [査読有り]

     概要を見る

    This paper discusses the preliminary-announcement and display method of action and intention carried by a robot which moves in a human-coexisting environment. It is pointed out that the speed of motion and the direction of motion are important in announcing the action of a mobile robot. The method using a blowout is proposed to tell surrounding people these two kinds of information. The method of announcing the speed of motion in translation is examined using a software simulator for the cases in which a robot is equipped with a blowout, a lamp, or no apparatus. Consequently, the effectiveness of the preliminary announcement and display using the blowout and the importance of information about the speed of motion are confirmed. Moreover, the method of announcing the direction of motion in rotation is examined for the cases of announcing the target direction or the rotation speed. As a result, it is found that the target direction is efficient to announce in rotation of a mobile robot.

    DOI

    Scopus

  • Trial Experiment of the Learning by Experience System on Mechatronics using LEGO MindStorms

    Takafumi Matsumaru, Chieko Komatsu, Toshikazu Minoshima

    Proceedings of International Conference on Machine Automation (ICMA2000), [Osaka Institute of Technology, Osaka, Japan]     207 - 212  2000年09月  [査読有り]

  • Action strategy for remote operation of mobile robot in human coexistence environment

    T Matsumaru, A Fujimori, T Kotoku, K Komoriya

    IECON 2000: 26TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, VOLS 1-4     1 - 6  2000年  [査読有り]

     概要を見る

    This paper proposes three action strategies for remote operation of a mobile robot in a human-coexistence environment, as an application of network robotics toward human-coexistence robots. Navigating a mobile robot from a remote site using only picture information is very difficult, and realizing a human-coexistence type robot is a challenging subject. So we propose (1) a combination control method of remote operation and autonomous motion with priority on the remote operation, (2) a method extending the task-based data exchange which we have proposed, and (3) a method of sharing information between the coexisting human and the mobile robot by preliminary-announcement display of the mobile robot's action intention.

    DOI

  • 通信回線を介したロボットの遠隔操作におけるタスク規範型データ伝送手法

    松丸隆文, 川端俊一, 神徳徹雄, 松日楽信人, 小森谷清, 谷江和雄, 高瀬國克

    日本ロボット学会誌   17 ( 8 ) 1114 - 1125  1999年11月  [査読有り]

     概要を見る

    This paper proposes task-based data exchange for teleoperation systems through a communication network as an efficient method of transmitting data between an operation device and a remote robot. In task-based data exchange, the more important information, according to the contents and conditions of the task which the robot performs, is given transmission priority, for example by altering the contents of the transmitted data. We have built an experimental system in which the master arm in Tsukuba and the slave arm in Kawasaki are connected through N-ISDN, and standard techniques such as TCP/IP, sockets, and JPEG are utilized. A series of experimental tasks was effectively carried out using task-based data exchange, namely a crank operation consisting of grasping and revolution. The communication network with capacity limitations was used effectively, and high maneuverability in real time with bilateral servo control was realized. The effectiveness of task-based data exchange has thus been confirmed.

    DOI CiNii

  • Remote Collaboration Through Time Delay in Multiple Teleoperation

    Kohtaro Ohba, Shun'ichi Kawabata, Nak Young Chong, Kiyoshi Komoriya, Takafumi Matsumaru, Nobuto Matsuhira, Kunikatsu Takase, Kazuo Tanie

    Proceedings of IEEE/RSJ Interntaional Conference on Intelligent Robots and Systems (IROS'99), [Kyongju, Korea]     1866 - 1871  1999年10月  [査読有り]

    DOI

  • Workability Estimation of remote operation through communication circuit

    Takafumi Matsumaru, Shin’ichi Kawabata, Tetsuo Kotoku, Nobuto Matsuhira, Kiyoshi Komoriya, Kazuo Tanie, Kunikatsu Takase

    Proceedings of The 9th International Conference on Advanced Robotics ('99ICAR), [Keidanren Hall, Tokyo, Japan]     231 - 238  1999年10月

  • 通信回線ISDNを介したロボットの遠隔操作

    松丸隆文

    日本ロボット学会誌   17 ( 4 ) 481 - 485  1999年05月  [査読有り]

    DOI CiNii

  • Task-based data exchange for remote operation system through a communication network

    T Matsumaru, S Kawabata, T Kotoku, N Matsuhira, K Komoriya, K Tanie, K Takase

    ICRA '99: IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-4, PROCEEDINGS     557 - 564  1999年  [査読有り]

     概要を見る

    This paper proposes task-based data exchange for teleoperation systems through a communication network as an efficient method of transmitting data between an operation device and a remote robot. In task-based data exchange, the more important information, according to the contents and conditions of the task which the robot performs, is given transmission priority, for example by altering the contents of the transmitted data. We have built an experimental system in which the master arm in Tsukuba and the slave arm in Kawasaki, separated by about 100 km, are connected through N-ISDN, and standard techniques such as TCP/IP, sockets, and JPEG are utilized. A series of experimental tasks was effectively carried out using task-based data exchange, namely a crank operation consisting of grasping and revolution. The communication network with capacity limitations was used effectively, and high maneuverability in real time with bilateral servo control was realized. The effectiveness of task-based data exchange has been confirmed.

    DOI

  • 人間共存型ロボットシステムにおける技術課題

    人間共存型ロボット研究専門委員会, 野崎武敏, 山田陽滋, 小笠原司, 菅野重樹, 藤江正克, 松丸隆文

    日本ロボット学会誌   16 ( 3 ) 288 - 294  1998年04月  [査読有り]

    DOI CiNii

  • モジュラー・マニピュレータの構成・形状認識と作業遂行可/不可判定の方法に関する検討

    松丸隆文, 松日楽信人

    日本ロボット学会誌   15 ( 3 ) 408 - 416  1997年04月  [査読有り]

     概要を見る

    A design concept of TOMMS, the TOshiba Modular Manipulator System, has already been proposed to achieve a modular manipulator system which can be assembled into any desired configuration to provide adaptability to tasks, using as few kinds and types of modules as possible, without special handling such as modification of control software. To realize the concept, we developed a constitution and configuration recognition method for the assembled manipulator using electric resistance, which is simple, practical, and reliable. Moreover, to realize a system which can offer the manipulator constitution and configuration best suited to the desired task, we developed a workability judgment method considering the degeneracy of degrees of freedom (d.o.f.) of the manipulator and the conditions of the desired task. These methods were applied to the trial system TOMMS-1 and their efficiency and practicality were confirmed.

    DOI CiNii

  • Modular Design Scheme for Robot Manipulator Systems

    Takafumi Matsumaru

    The 3rd International Symposium on Distributed Autonomous Robotic Systems (DARS'96), [Wakoh, Japan]    1996年10月  [査読有り]

  • Design Disquisition on Modular Robot Systems

    Takafumi Matsumaru

    Journal of Robotics and Mechatronics   8 ( 5 ) 408 - 419  1996年10月  [査読有り]

    DOI

  • Corresponding-to-Operation-Motion Type Control Method for Remote Master-Slave Manipulator System

    Takafumi Matsumaru

    Proceedings of the 3rd International Conference on Motion and Vibration Control: MOVIC, [Makuhari, Japan]     204 - 208  1996年09月

  • モジュラー・マニピュレータTOMMSの設計と制御

    松丸隆文, 松日楽信人

    日本ロボット学会誌   14 ( 3 ) 428 - 435  1996年04月  [査読有り]

     概要を見る

    The TOshiba Modular Manipulator System, TOMMS, consists of joint modules, link modules, and a control unit with a joystick. In the trial manufacture, a manipulator with 3 d.o.f. is assembled from three joint modules and optional link modules into any desired configuration and shape, for instance a horizontal type or a vertical type. The assembled manipulator is connected to the control unit, and the position of the end tip of the manipulator is controlled using the joystick without special handling. There is only one type of joint module and one type of link module. There are three input ports and two output ports on the joint module. The distance between the fore side and the back side of the link module is adjustable. The Jacobian matrix is applied in the control software. Control experiments were carried out and the efficiency of the TOMMS design concept for mechanical hardware and control software was confirmed.

    DOI CiNii

  • 押し付け力制御マニピュレータの遠隔操作手法

    松丸隆文, 松日楽信人

    日本ロボット学会誌   14 ( 2 ) 255 - 262  1996年03月  [査読有り]

     概要を見る

    This paper describes a control method for manipulators which work by pressing the end-effector against the workpiece under constant force (e.g. grinding and cleaning) while positioning the end tip anywhere on the workpiece using the operation device. Based on ergonomics, "the operator coordinate system" is introduced, which is determined from the operator's line of sight to the workpiece and the operator's two eyes. Further, "the corresponding-to-operational-motion type control method" is proposed, in which the direction of motion of the operation device and the resulting direction of motion of the end-effector are made to correspond in the operator coordinate system. Especially for workpieces with a wave shape, "the corresponding-to-objective-shape type control method" is designed, in which the winding line and the valley line of the wave are recognized during the work and the directions of motion of the operation device are made to correspond to these lines. These methods have been applied to a remote control system including the joystick and a lightweight manipulator, and their efficiency has been confirmed.

    DOI CiNii

  • Recognition of constitution/configuration and workability judgment for the modular manipulator system, TOMMS

    T Matsumaru

    PROCEEDINGS OF THE 1996 IEEE IECON - 22ND INTERNATIONAL CONFERENCE ON INDUSTRIAL ELECTRONICS, CONTROL, AND INSTRUMENTATION, VOLS 1-3     493 - 500  1996年  [査読有り]

     概要を見る

    A design concept of TOMMS, the TOshiba Modular Manipulator System, has already been proposed to achieve a modular manipulator system which can be assembled into any desired constitution and configuration to provide adaptability to tasks, using as few kinds and types of modules as possible, without special handling such as modification of control software. To realize the concept, we developed a constitution and configuration recognition method for the assembled manipulator using electric resistance, which is simple, practical, and reliable. Moreover, we developed a workability judgment method considering the degeneracy of degrees of freedom (d.o.f.) of the manipulator and the conditions of the desired task. These methods were applied to the trial system TOMMS-1 and their efficiency and practicality were confirmed.

    DOI

  • Design and control of the modular robot system: TOMMS

    T MATSUMARU

    PROCEEDINGS OF 1995 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3     2125 - 2131  1995年  [査読有り]

    DOI

    Scopus

    76
    被引用数
    (Scopus)
  • 風防ガラス・クリーニング・ロボットの開発

    松丸隆文, 松日楽信人

    日本ロボット学会誌   12 ( 5 ) 743 - 750  1994年07月  [査読有り]

     概要を見る

    This paper describes the development of the Windshield Cleaning Robot System (WSC). This system is intended to be used on Boeing 747 aircraft, commonly called jumbo jets, parked at airports prior to service. The objects to be cleaned are spots on windshields caused by collision with dust, insects, and birds during takeoff and landing. The intention of this new system is that one operator performs the whole job in 10 minutes. The system consists of the manipulator (the arm and the cleaning device), the installation unit, the control unit, and the operation unit. A position and force control method is applied to this system: the target position of the arm tip is modified using the signals from the force sensor and the joystick. With this control method, the pressing force is kept constant and the tip is moved so as to follow the shape of the windshields. The various safety features provided include an interference limit to restrict the area of movement. System experiments were carried out and the effectiveness of applying a lightweight manipulator with long arms to this work was confirmed.

    DOI CiNii

  • WINDSHIELD CLEANING ROBOT SYSTEM - WSC

    T MATSUMARU, N MATSUHIRA, M JINNO

    IROS '94 - INTELLIGENT ROBOTS AND SYSTEMS: ADVANCED ROBOTIC SYSTEMS AND THE REAL WORLD, VOLS 1-3     1964 - 1971  1994年  [査読有り]

    DOI


書籍等出版物

  • "Design and Evaluation of Handover Movement Informing Receiver of Weight Load", in S. Bandyopadhyay, G. Saravana Kumar, et al (Eds): "Machines and Mechanisms"

    Takafumi Matsumaru

    Narosa Publishing House (New Delhi, India)  2011年11月 ISBN: 9788184871920

  • "Study on Handover Movement Informing Receiver of Weight Load as Informative Motion of Human-friendly Robot", in Salvatore Pennacchio (ed.): "Emerging Technologies, Robotics and Control Systems - Third edition"

    Takafumi Matsumaru, Shigehisa Suzuki

    INTERNATIONALSAR (Palermo, Italy, EU)  2009年06月 ISBN: 9788890192883

  • "Mobile Robot with Preliminary-announcement and Indication Function of Upcoming Operation just after the Present", in Salvatore Pennacchio (ed.): "Recent Advances in Control Systems, Robotics and Automation- Third edition Volume 2"

    Takafumi Matsumaru

    INTERNATIONALSAR (Palermo, Italy, EU)  2009年01月 ISBN: 9788890192876

  • "生体機械機能工学(バイオメカニズム学会編 バイオメカニズム・ライブラリー)"

    松丸隆文

    東京電機大学出版局  2008年10月 ISBN: 9784501417505

  • "Chapter 18 - Mobile Robot with Preliminary-Announcement and Indication of Scheduled Route and Occupied Area using Projector", in Aleksandar Lazinica (ed.): "Mobile Robots Motion Planning, New Challenges"

    Takafumi Matsumaru

    I-Tech Education and Publishing (Vienna, Austria, EU)  2008年07月 ISBN: 9783902613356

  • "Chapter 4 - Preliminary-Announcement Function of Mobile Robots’ Upcoming Operation", in Xing P. Guô (ed.): "Robotics Research Trends"

    Takafumi Matsumaru

    Nova Science Publishers (Hauppauge, NY, USA)  2008年05月 ISBN: 1600219977

  • "Lesson10 遠隔操作システム", in "Webラーニングプラザ「事例に学ぶロボティックス」"

    松丸隆文, 伊藤友孝

    (独)科学技術振興機構JST  2002年03月

  • "Granularity and Scaling in Modularity Design for Manipulator Systems", in H.Asama, T.Fukuda, T.Arai, I.Endo (Eds.): "Distributed Autonomous Robotic Systems 2"

    Takafumi Matsumaru

    Springer-Verlag  1996年11月 ISBN: 4431701907


Misc

  • 2A2-H07 プロジェクタを用いた書道のための筆遣い学習支援システムの開発(第1報) : システムの提案と構築

    成田 昌史, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2015   "2A2 - H07(1)"-"2A2-H07(4)"  2015年05月

     概要を見る

    In this paper, a calligraphy learning support system for supporting brushwork learning using a projector is presented. The system was designed to provide three kinds of training according to the learner's ability: copying training, tracing training, and a combination of the two. In order to instruct three-dimensional brushwork such as the writing speed, pressure, and orientation of the brush, we proposed an instruction method that presents the information at the brush tip only. This method can visualize the brush position and orientation. In addition, a copying experiment was performed using the proposed method, and the efficiency of the proposed method was examined through the experiment.

    CiNii

  • 1P1-I01 人間共存型ロボットのネットワーク遠隔操作に関する研究(第55報) : ジョイスティックの操作と二輪機構の動作の関係の検討(車輪型/クローラ型移動ロボット)

    井上 亮平, 加納 裕章, 神谷 亮介, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2011   "1P1 - I01(1)"-"1P1-I01(4)"  2011年05月

     概要を見る

    We propose control schemes for operating a robot with a two-wheel mechanism using a joystick. We carried out driving experiments on several control schemes with the cooperation of subjects, collected the data, and analyzed those experiments. This paper reports the results of examining the relation between the operation of the joystick and the movement of the two-wheel mechanism.

    CiNii

  • 1P1-H01 人間共存型ロボットの操作手法に関する研究(第10報) : ステップ・オン・インタフェース利用したタッチゲーム・アプリケーションの開発(VRとインタフェース)

    黄川田 昌和, 齋藤 渉, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2011   "1P1 - H01(1)"-"1P1-H01(4)"  2011年05月

     概要を見る

    We propose a touch-game application as a new function of the Step-on Interface. It is an application developed for recreation: users operate the screen projected by the projector by touching it by hand. We considered projecting the screen onto a desk so that elderly people could take part in recreation safely. This paper shows the development of a recreation application that elderly people can use happily and safely at any time.

    CiNii

  • 2P1-I07 人間共存型ロボットの遠隔操作に関する研究(第56報) : タッチパネルを用いた操作インタフェースによる二輪移動機構の操作(ネットワークロボティクス)

    加納 裕章, 神谷 亮介, 井上 亮平, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2011   "2P1 - I07(1)"-"2P1-I07(4)"  2011年05月

     概要を見る

    This paper shows evaluation results of the touch screen interface for mobile robot remote operation. Joystick function, button function, and route indication function are developed with an environmental map in the Cell & Hollow method. Features and efficiency are confirmed according to the experimental results in which a mobile robot is remotely operated using the touch screen interface with the developed function to pass through the slalom course or the crank course.

    CiNii

  • 2A1-D04 人間共存型ロボットの操作手法に関する研究(第9報) : ステップ・オン・インタフェースにおける制御機能の検討

    伊藤 祐一, 齋藤 渉, 原田 俊太郎, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2010   "2A1 - D04(1)"-"2A1-D04(4)"  2010年

     概要を見る

    We propose the step-on interface (SOI), in which instructions are given by stepping on an operation screen projected on the ground surface. We developed the HFAMRO-2 (human-friendly amusing mobile robot), in which two sets of SOI are mounted on a two-wheel-drive mobile platform. This paper presents two methods, based on real-time range data and on an environmental map respectively, for avoiding obstacles and returning to the original pathway while giving priority to the given instructions.

    CiNii

  • 1A2-G30 人間共存型ロボットのインフォマティブ・モーションに関する研究(第11報) : 到達距離を受け手に伝える物体投げ渡し動作の実験・評価

    原田 俊太郎, 鈴木 崇之, 斎藤 渉, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2010   "1A2 - G30(1)"-"1A2-G30(4)"  2010年

     概要を見る

    This research aims at realizing a robot throw-over movement from which a user can easily predict the landing distance of the object when the robot throws it to the user, by implementing human movement characteristics. This paper shows the evaluation experiment to examine the reliability of the selected representative data, in which a moving image of a throw-over motion by a virtual humanoid robot developed on a Windows PC is shown to the subject, who replies with the estimated landing distance.

    CiNii

  • 1A2-G29 人間共存型ロボットのインフォマティブ・モーションに関する研究(第10報) : 到達距離を受け手に伝える物体投げ渡し動作の計測・解析

    原田 俊太郎, 鈴木 崇之, 斎藤 渉, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2010   "1A2 - G29(1)"-"1A2-G29(4)"  2010年

     概要を見る

    This research aims at realizing a robot throw-over movement from which a user can easily predict the landing distance of the object when the robot throws it to the user, by implementing human movement characteristics. This paper shows the movement data classification after the measurement and analysis of human movement, and the selection of representative data for the evaluation experiment. It then presents the experimental examination of how the movement changes depending on the landing distance.

    CiNii

  • 1A1-G18 人間共存型ロボットの遠隔操作に関する研究(第54報) : タッチパネルを用いた操作インタフェースの開発と評価

    小松 純也, 宮 慶良, 加納 裕章, 斎藤 渉, 原田 俊太郎, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2010   "1A1 - G18(1)"-"1A1-G18(4)"  2010年

     概要を見る

    This paper shows evaluation results of the touch screen interface for mobile robot remote operation. Button function, joystick function, and route designation function are developed with an environmental map in the Line & Hollow or the Cell & Hollow method. Features and efficiency are confirmed according to the experimental results in which a mobile robot is remotely operated using the touch screen interface with the developed function to pass through a straight passage or slalom course.

    CiNii

  • 2A1-B18 人間共存型ロボットの遠隔操作に関する研究(第53報) : タッチパネルを用いた移動ロボットの操作インタフェース

    宮 慶良, 伊藤 祐一, 赤井 孝輔, 山下 航, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2009   "2A1 - B18(1)"-"2A1-B18(4)"  2009年05月

     概要を見る

    With the touch-panel type interface, the operator can not only view environmental information around the robot but also control the remote robot and camera using buttons on the touch screen. In addition, the operator can direct the robot to move along a course drawn on the environmental map on the touch screen. Both touch-screen and joystick interfaces are tested on a developed mobile robot remote operation system and discussed in terms of maneuverability and operating accuracy.

    CiNii

  • 2A1-B17 人間共存型ロボットの遠隔操作に関する研究(第51報) : レンジスキャナを用いた環境地図の作成

    山下 航, 宮 慶良, 伊藤 祐一, 赤井 孝輔, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2009   "2A1 - B17(1)"-"2A1-B17(4)"  2009年05月

     概要を見る

    We have been researching remote-operated robots, for which it is difficult for the operator to understand the situation around the robot from the camera image alone. Adequate safety and maneuverability cannot be secured, and the danger of causing accidents increases. As a solution, we have proposed methods to construct an environmental map that presents the operator with the situation around the robot. A performance trial of two range scanners was also executed and is reported.

    CiNii

  • 2A1-B19 人間共存型ロボットの遠隔操作に関する研究(第52報) : 音声認識を用いた移動ロボットの操作インタフェース

    伊藤 祐一, 林 寛之, 赤井 孝輔, 宮 慶良, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2009   "2A1-B19(1)"-"2A1-B19(4)"  2009年05月

     概要を見る

    The number of human-coexistence robots operating near people is increasing, so a comprehensible and safe interface is required for people operating an advanced robot for the first time. This paper shows a trial voice interface for operating a mobile robot. It reports the examination made in developing the voice interface, especially the implemented functions, and the results of experiments comparing it with the keyboard and the step-on interface.

    CiNii

  • 1P1-C18 人間共存型ロボットの操作手法に関する研究(第7報) : ステップ・オン・インタフェースを搭載した移動ロボットの利用方法

    堀内 康匡, 赤井 孝輔, 伊藤 祐一, 宮 慶良, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2009   "1P1-C18(1)"-"1P1-C18(4)"  2009年05月

     概要を見る

    We propose a new input method for operation: instructions are given to the robot by stepping on part of an operation screen projected onto the running surface by a projector mounted on the robot. This time we designed the human-friendly amusing mobile (HFAM) function, a new usage of the robot equipped with the SOI, with a scenario in which the robot and a child can play together. In this paper, the outline of the HFAM function and its technology are discussed.

    CiNii

  • 1P1-C17 人間共存型ロボットの操作手法に関する研究(第6報) : ステップ・オン・インタフェースを搭載した移動ロボットの制御機能向上

    赤井 孝輔, 堀内 康匡, 伊藤 祐一, 宮 慶良, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2009   "1P1-C17(1)"-"1P1-C17(4)"  2009年05月

     概要を見る

    We propose a new input method for operation: instructions are given to the robot by stepping on part of an operation screen projected onto the running surface by a projector mounted on the robot. This paper examines the layout of the instruction buttons on the operation screen, and also discusses the discrimination of obstacles from a stepping foot, applied to autonomous movement and path search for obstacle avoidance.

    CiNii

  • 1A1-K06 人間共存型ロボットのインフォマテイブ・モーションに関する研究(第8報) : 到達位置を受け手に伝える物体投げ渡し動作の特徴抽出

    鈴木 崇之, 福永 大輝, 鈴木 重央, 伊藤 祐一, 宮 慶良, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2009   "1A1-K06(1)"-"1A1-K06(4)"  2009年05月

     概要を見る

    This research aims at realizing a robot throw-over movement from which a person can easily predict the distance to the targeted position of an object thrown from a robot to a person, by implementing the movement characteristics of people. After measurement and analysis, this paper shows an experimental examination of the points of the movement that change depending on the distance. In addition, we selected the movement data of one subject for a future evaluation experiment.

    CiNii

  • 1A2-D13 人間共存型ロボットの動作予告に関する研究(第3報) : 音声予告と表示予告の比較実験

    林 寛之, 赤井 孝輔, 伊藤 祐一, 宮 慶良, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2009   "1A2-D13(1)"-"1A2-D13(4)"  2009年05月

     概要を見る

    In today's aging society with a declining birthrate, a human-friendly robot that can support and assist human beings directly is expected. We think it is important for a mobile robot to have not only a safety function to avoid contact or collision with human beings but also a function to announce its forthcoming motion to surrounding people before it begins to move. This paper reports the results of a side-by-side test of voice and display announcements.

    CiNii

  • 1A1-K07 人間共存型ロボットのインフォマテイブ・モーションに関する研究(第7報) : 荷物重量を受け手に伝える手渡し動作の作成

    鈴木 重央, 木村 章夫, 鈴木 崇之, 伊藤 祐一, 宮 慶良, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2009   "1A1-K07(1)"-"1A1-K07(4)"  2009年05月

     概要を見る

    The deliverer in a handover task moves so that the weight of the load is conveyed to the receiver. This study aims to clarify the characteristics of human handover movements and to implement them on a humanoid robot so that it performs naturally. The movements of people performing handover tasks were measured and analyzed, suggesting that a receiver can judge whether the load is heavy or not. A method of designing handover movements depending on the weight of the load is discussed and tested experimentally.

    CiNii

  • 1P1-G18 人間共存型ロボットのインフォマティブ・モーションに関する研究(第2報) : 荷物情報を受け手に伝える物体手渡し動作の検討(インフォマティブ・モーションとモーション・メディア-ロボットの身体性と運動-)

    鈴木 重央, 杉浦 達大, 伊藤 祐一, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2008   "1P1-G18(1)"-"1P1-G18(2)"  2008年06月

     概要を見る

    When a person performs a handover task, the deliverer's movement contains information about the load even without spoken remarks, and the receiver picks up its meaning, prepares for the load, and receives it safely and reliably. This research aims to achieve a natural handover task from robot to human by designing the robot's body movement. This paper shows the results of analyzing human handover tasks as an early stage.

    CiNii

  • 1P1-G17 人間共存型ロボットのインフォマティブ・モーションに関する研究(第3報) : 物体手渡し動作を再現する動作シミュレーションソフトウェアの開発(インフォマティブ・モーションとモーション・メディア-ロボットの身体性と運動-)

    杉浦 達大, 鈴木 重央, 伊藤 祐一, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2008   "1P1-G17(1)"-"1P1-G17(3)"  2008年06月

     概要を見る

    This research aims to achieve a smooth handover task from robot to person without making the person feel stress or discomfort. We captured handover tasks between two persons and analyzed the features of the human movement in order to use them in designing the robot's movement. This paper presents the developed simulation software for a humanoid robot's movement, and then the results of an experiment on estimating weight from the handover movement.

    CiNii

  • 1A1-F21 人間共存型ロボットの遠隔操作に関する研究(第48報) : 遠隔環境提示のためのレンジセンサを利用したライン&ホロー方式およびセル&ホロー方式の検討(ネットワークロボティクス)

    藤田 匠, 林 民通, 伊藤 祐一, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2008   "1A1-F21(1)"-"1A1-F21(4)"  2008年06月

     概要を見る

    This paper presents methods of indicating an unknown environment as an environmental map to the operator of a remote operation system for a mobile robot, in order to improve maneuverability. The Line & Hollow method and the Cell & Hollow method, both based on range-sensor data, are examined. The results of three kinds of experiments (recognition of an aisle, understanding of a changing environment, and passing a person) show the features and effectiveness of these methods.

    CiNii

  • 1A1-F22 人間共存型ロボットの遠隔操作に関する研究(第49報) : タッチパネルを利用した移動ロボットの遠隔操作インタフェースの検討(ネットワークロボティクス)

    林 民通, 藤田 匠, 伊藤 祐一, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2008   "1A1-F22(1)"-"1A1-F22(4)"  2008年06月

     概要を見る

    This paper discusses the touch screen as a user interface for the teleoperation system of a mobile robot, in order to improve maneuverability. Button, joystick, and route-designation functions are developed with the Line & Hollow method as the environmental map. Their features and efficiency are confirmed by experiments in which the mobile robot is remotely operated, using the touch-screen interface with the developed functions, to pass through a passage with orthogonal cranks.

    CiNii

  • 2P2-E09 人間共存型ロボットの操作手法に関する研究(第3報) : ステップ・オン・インタフェースのための測域センサの性能測定(VRとインタフェース)

    伊藤 祐一, 赤井 孝輔, 鈴木 重央, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2008   "2P2-E09(1)"-"2P2-E09(3)"  2008年06月

     概要を見る

    We are planning to develop HFAMRO-2, which can play with people using the step-on interface. We therefore want to improve the mobility performance and replace the range sensor used in HFAMRO-1. This paper shows the modified design of the mobile unit, which will be able to double the movement speed, and compares the new range sensor with the old one based on performance measurements. The new sensor can detect and measure a stepping foot more precisely.

    CiNii

  • 2P2-E10 人間共存型ロボットの操作手法に関する研究(第4報) : ステップ・オン・インタフェースを搭載した移動ロボットの検討(VRとインタフェース)

    赤井 孝輔, 伊藤 祐一, 鈴木 重央, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2008   "2P2-E10(1)"-"2P2-E10(3)"  2008年06月

     概要を見る

    We propose a new input method for operation: instructions are given to the robot by stepping on part of an operation screen projected onto the running surface by a projector mounted on the robot. This paper examines the layout of the instruction buttons on the operation screen. Additional functions are also discussed: preliminary announcement and indication of the forthcoming movement, and discrimination of obstacles from a stepping foot, applied to autonomous movement and mouse-cursor operation.

    CiNii

  • 2P1-A37 人間共存型ロボットの遠隔操作に関する研究(第34報) : プロジェクタを用いた動作予告機能付き移動ロボットの開発と評価

    松丸 隆文, 干場 祐, 宮田 康広, 平岩 慎司

    ロボティクス・メカトロニクス講演会講演概要集   2006   "2P1-A37(1)"-"2P1-A37(4)"  2006年

     概要を見る

    In today's aging society with a declining birthrate, a human-friendly robot that can support and assist human beings directly is expected. We think it is important for a mobile robot to have not only a safety function to avoid collision with human beings but also a function to announce its forthcoming motion to surrounding people before it begins to move. This paper discusses a preliminary-announcement function for a mobile robot using projection. We developed the PMR-5R, proposed expressions using projection, and confirmed their effect through experiments.

    CiNii

  • 2P1-A35 人間共存型ロボットの遠隔操作に関する研究(第33報) : レーザーポインタを用いた動作予告機能付き移動ロボットの開発と評価

    松丸 隆文, 干場 祐, 宮田 康広, 平岩 慎司

    ロボティクス・メカトロニクス講演会講演概要集   2006   "2P1-A35(1)"-"2P1-A35(4)"  2006年

     概要を見る

    In today's aging society with a declining birthrate, a human-friendly robot that can support and assist human beings directly is expected. We think it is important for a mobile robot to have not only a safety function to avoid collision with human beings but also a function to announce its forthcoming motion to surrounding people before it begins to move. This paper discusses a preliminary-announcement function for a mobile robot using a laser pointer. We developed the PMR-1, proposed expressions using a laser pointer, and confirmed their effect through experiments.

    CiNii

  • 2P1-A31 人間共存型ロボットの遠隔操作に関する研究(第31報) : 全方向ディスプレイを用いた動作予告機能付き移動ロボットの開発と評価

    松丸 隆文, 干場 祐, 宮田 康広, 平岩 慎司

    ロボティクス・メカトロニクス講演会講演概要集   2006   "2P1-A31(1)"-"2P1-A31(4)"  2006年

     概要を見る

    In today's aging society with a declining birthrate, a human-friendly robot that can support and assist human beings directly is expected. We think it is important for a mobile robot to have not only a safety function to avoid collision with human beings but also a function to announce its forthcoming motion to surrounding people before it begins to move. This paper discusses a preliminary-announcement function for a mobile robot using an omni-directional display. We developed the PMR-2R, proposed expressions using the omni-directional display, and confirmed their effect through experiments.

    CiNii

  • 1P1-S-034 人間共存型ロボットの遠隔操作に関する研究(第26報) : 単眼カメラを用いた移動ロボットの自己位置推定方法の検討(視覚移動ロボット2,生活を支援するロボメカ技術のメガインテグレーション)

    長谷川 翔, 松丸 隆文, 伊藤 友孝

    ロボティクス・メカトロニクス講演会講演概要集   2005   82 - 82  2005年06月

    CiNii

  • 1A1-S-050 人間共存型ロボットの遠隔操作に関する研究(第25報) : プロジェクタを用いた移動ロボットの次の行動予告(ITSとロボット技術,生活を支援するロボメカ技術のメガインテグレーション)

    干場 祐, 山守 浩史, 松丸 隆文, 伊藤 友孝

    ロボティクス・メカトロニクス講演会講演概要集   2005   42 - 42  2005年06月

    CiNii

  • 1A1-S-047 人間共存型ロボットの遠隔操作に関する研究(第24報) : 迷路走行に対応する融合制御アルゴリズムの検討(ITSとロボット技術,生活を支援するロボメカ技術のメガインテグレーション)

    岩瀬 和也, 山守 浩史, 松丸 隆文, 伊藤 友孝

    ロボティクス・メカトロニクス講演会講演概要集   2005   42 - 42  2005年06月

    CiNii

  • 重量物挙上動作における躍度最小モデルを用いた最適動作の検討(コミュニケーション,身体運動の計測と解析2)

    松丸 隆文, 島 和義, 福山 聡, 伊藤 友孝

    福祉工学シンポジウム講演論文集   2004   145 - 148  2004年09月

     概要を見る

    To determine the optimal initial posture for lifting a heavy load, we compared measurements with simulation results in which each joint angle of a seven-link rigid-body model was varied according to the minimum-jerk model, and examined which evaluation criteria determine the optimal initial posture. The results showed that, in heavy-load lifting, not only the load on the lumbar vertebrae but also the load on the knees is important.
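    The minimum-jerk model used here to vary each joint angle has a standard closed form; a minimal sketch for a single joint angle (the seven-link model's parameters are not reproduced) might look like this:

```python
def min_jerk(q0, qf, t, T):
    """Minimum-jerk interpolation of a joint angle from q0 to qf over
    duration T: q(t) = q0 + (qf - q0) * (10 s^3 - 15 s^4 + 6 s^5),
    where s = t / T. Velocity and acceleration vanish at both ends."""
    s = t / T
    return q0 + (qf - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
```

    At the midpoint (t = T/2) the joint is exactly halfway between q0 and qf, and the profile starts and ends at rest.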

    CiNii

  • 人間共存型ロボットの遠隔操作に関する研究(第18報) : 全方向ディスプレイを使った移動予告のための表示コンテンツの検討(情緒・感性・身体性)

    山崎 辰夫, 岩瀬 和也, 松丸 隆文, 伊藤 友孝

    ロボティクス・メカトロニクス講演会講演概要集   2004   94 - 95  2004年06月

    CiNii

  • 人間共存型ロボットの遠隔操作に関する研究(第16報) : 距離センサによる環境認識とその情報提示手法の検討(安心・安全を実現するダイナミックセンシング技術(ダイナミックセンシング研究会))

    秋山 京平, 岩瀬 和也, 松丸 隆文, 伊藤 友孝

    ロボティクス・メカトロニクス講演会講演概要集   2004   211 - 211  2004年06月

    CiNii

  • 人間共存型ロボットの遠隔操作に関する研究 (第19報)-ネットワーク条件による移動ロボットの操作性の比較検討-

    岩瀬, 松丸 隆文, 伊藤 友孝

    日本機械学会ロボティクスメカトロニクス講演会'04     83 - 83  2004年

    CiNii

  • 人間共存型ロボットの遠隔操作に関する研究 (第17報)-遠隔操作型移動ロボットにおける行動予告と行動制御の協調手法の検討-

    草田享, 岩瀬 和也, 松丸 隆文, 伊藤 友孝

    日本機械学会 ロボティクス メカトロニクス講演会'04 (ROBOMEC04) 講演論文集     165 - 166  2004年

    CiNii

  • 人間の身体運動の解析-重量物挙上における適切な動作姿勢の検討-

    島, 福山 聡, 松丸 隆文, 伊藤 友孝

    日本機械学会東海支部第52期総会 講演会講演論文集, 2003   326   163 - 164  2003年

     概要を見る

    This paper examines the optimal motion in a lifting task to prevent lumbago during the movement. We set three different postures from which to start the motion and adopted four evaluation criteria: compressive force on the lumbar vertebrae, shearing force on the lumbar vertebrae, total efficiency, and the contribution rate of each joint.

    CiNii

  • 人間共存型ロボットの遠隔操作に関する研究 (第15報)-レーザー光線を使った行動予告装置におけるステッピングモータによる反射鏡駆動装置の開発-

    五味寛敏, 草田 亨, 秋山 京平, 岩瀬 和也, 松丸 隆文, 伊藤 友孝

    日本機械学会東海支部第52期総会 講演会講演論文集, 2003     319 - 320  2003年

     概要を見る

    Human-friendly robots, which live and work in the same space as humans, are becoming popular. We think the function of telling surrounding people the robot's next action is as important as its safety functions. We report a preliminary-announcement device for a mobile robot that draws the scheduled path on the ground with a light beam, using a reflector driving unit actuated by a stepping motor.

    CiNii

  • 2P2-E04 人間共存型ロボットの遠隔操作に関する研究 : 第 9 報 : 人間共存型移動ロボット開発のための基礎検討

    森 寛光, 大石 弘展, 萩原 潔, 伊藤 友孝, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2002   110 - 111  2002年

     概要を見る

    We are developing a mobile robot for a "remote operation system for human-symbiotic mobile robots". The assumed use is remote experience and appreciation, such as viewing a museum, by elderly or disabled people who have difficulty going out and who teleoperate the mobile robot. We report our examination of two important issues for moving in a human-coexistence environment: displaying the mobile robot's behavioral intention to surrounding people in advance, and recognizing surrounding people with simple sensors.

    CiNii

  • 2P1-K02 人間共存型ロボットの遠隔操作に関する研究 : 第 8 報 : 移動ロボットの行動意図を予告表示する方法の 3 次元シミュレーションによる評価

    遠藤 久嗣, 工藤 新之介, 萩原 潔, 伊藤 友孝, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2002   99 - 99  2002年

     概要を見る

    This research aims to propose methods of announcing the moving direction and speed of a mobile robot, especially a teleoperated one, to surrounding people as simply and comprehensibly as possible. With lamp lighting, several lamps arranged on the robot's circumference indicate the direction of travel when lit. A blowout (party horn) mounted on a turning stage indicates the moving speed by its extended length and the moving direction by its orientation. We developed a 3D computer simulation to evaluate and examine these methods.

    CiNii

  • 1P1-K08 レゴ・マインドストームを用いたメカトロニクス体験学習の検討 : 第 3 報 : プログラムの流れの理解を主とした学習方法の開発

    藤田 和俊, 工藤 新之介, 萩原 潔, 伊藤 友孝, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2002   58 - 58  2002年

     概要を見る

    We examine a method of learning and understanding programming in hands-on mechatronics learning with LEGO Mindstorms. Specifically, we propose a textbook that expresses the flow of control in four ways: a flowchart, NQC, RCX Code, and the robot's motion. Studying the control flow from four different viewpoints deepens understanding and teaches basic programming. Furthermore, while the robot executes a program, the instruction being executed is displayed step by step on the program shown on the PC screen, so the program can be understood more deeply.

    CiNii

  • 2A1-C04 人間共存型ロボットの遠隔操作に関する研究 : 第 7 報 : 融合制御における自律行動の複合化の評価

    萩原 潔, 伊藤 友孝, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2002   65 - 65  2002年

     概要を見る

    In the fused control of remote operation and autonomous behavior, the effect of compounding multiple autonomous behaviors is evaluated by computer simulation. For a mobile robot teleoperated in a human-coexistence environment, we take up turning, following, and deceleration as autonomous behaviors for dealing with obstacles, and for each we develop a method that considers the distance to the obstacle, the angle to the obstacle, and the robot's moving speed at the time. We then compound them on the basis of deceleration, examining deceleration plus turning and deceleration plus following.

    CiNii

  • 2P2-N2 人間共存型ロボットの遠隔操作に関する研究 : 第3報 移動ロボットの行動意図を予告する方法と効果の検討(33. VRとインターフェース)

    萩原 潔, 寺沢 義則, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2001   80 - 80  2001年06月

    CiNii

  • 2P2-K5 人間共存型ロボットの遠隔操作に関する研究 : 第2報 自律移動と遠隔操作の融合による行動制御手法の検討(32. ネットワークロボティクス・メカトロニクスにおけるマルチメディアネットワークI)

    市川 慎一, 萩原 潔, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2001   77 - 77  2001年06月

    CiNii

  • レゴ マインドストームを用いたメカトロニクス体験学習の検討

    中島, 萩原 潔, 松丸 隆文

    日本機械学会ロボティクス メカトロニクス講演会, 2001     39 - 39  2001年

    CiNii

  • 狭隘部作業ロボットの構造と制御に関する研究

    松丸 隆文

    日本ロボット学会誌   18 ( 4 ) 513 - 513  2000年05月

    CiNii

  • 2P1-34-043 ネットワーク遠隔操作ロボットのシミュレーションシステムの検討

    蓑島 利和, 小松 千枝子, 松丸 隆文

    ロボティクス・メカトロニクス講演会講演概要集   2000   82 - 82  2000年

     概要を見る

    We are studying the operation of a robot at a remote site through a network. To examine communication and control methods, we are building a simulation system. Two network-connected PCs are equipped with an operation device and a slave robot (simulated by computer graphics), respectively, and the operator controls the slave robot with the operation device while viewing the transmitted images. The functions of the system are examined.

    CiNii

  • 通信回線ISDNを介したロボットの遠隔操作(基礎実験)

    松丸隆文, 川端俊一, 神徳徹雄, 朝倉誠, 小森谷清, 吉見卓, 谷江和雄

    日本ロボット学会学術講演会予稿集   14th ( 3 )  1996年

    J-GLOBAL


Works(作品等)

  • 早稲田オープン・イノベーション・フォーラム2021(WOI’21)(主催:早稲田大学オープンイノベーション戦略研究機構) [オンライン開催]

    バイオ・ロボティクス&ヒューマン・メカトロニクス(松丸隆文)研究室  芸術活動 

    2021年03月
     
     

  • 早稲田オープン・イノベーション・フォーラム2020(WOI’20)(主催:早稲田大学オープンイノベーション戦略研究機構) [早稲田アリーナ]

    松丸隆文研究室  芸術活動 

    2020年03月
     
     

  • 2017国際ロボット展・RT産学共創マッチング補助事業 RT交流プラザ(主催:(一社)日本ロボット工業会/日刊工業新聞社) [東京国際展示場 東1~6ホール], RT09, (2017.11.29(水)~12.02(土) 10h00-17h00).

    松丸隆文研究室  芸術活動 

    2017年11月
    -
    2017年12月

     概要を見る

    (1) 3DAHII (3D Aerial Holographic Image Interface: 三次元空中ホログラフィック画像インタフェース) .
    [*] Asyifa Imanda SEPTIANA (M2), Duc THAN (M2), Ahmed FARID (M2), 堀内 一希 (M1), 松丸 隆文.

  • 2015国際ロボット展・RT産学共創マッチング補助事業 RT交流プラザ(主催:(一社)日本ロボット工業会/日刊工業新聞社) [東京国際展示場 東ホール], RT-04, (2015.12.02(水)~05(土) 10h00-17h00).

    早稲田大学大学院情報生産システム研究科, 松丸隆文研究室  芸術活動 

    2015年12月
     
     

     概要を見る

    (1) IDAT-3 (Image-projective Desktop Arm Trainer: 画像投射式卓上型上肢訓練装置) .
    (2) CSLSS-1 (Calligraphy-Stroke Learning Support System: 書道運筆学習支援システム).

  • 第41回国際福祉機器展H.C.R.2014(主催:(社福)全国社会福祉協議会,(一財)保健福祉広報協会) [東京国際展示場 東ホール], (2014.10.01(水)〜03(金) 10h00-17h00).

    松丸隆文研究室  芸術活動 

    2014年10月
    -
     

  • 2013国際ロボット展・RT産学共創マッチング補助事業 RT交流プラザ(主催:(一社)日本ロボット工業会,共催:日刊工業新聞社) [東京国際展示場], SRT-21, (2013.11.06(水)〜09(土) 10h00-17h00).

    芸術活動 

    2013年11月
    -
     

  • 北九州学研都市第13回産学連携フェア(主催:北九州学研都市産学連携フェア実行委員会,公益財団法人北九州産業学術推進機構)[北九州学術研究都市](2013.10.23(水)〜25(金) 10h00-17h00)

    芸術活動 

    2013年10月
    -
     

  • 北九州学術研究都市第12回産学連携フェア(主催:北九州学研都市産学連携フェア実行委員会,公益財団法人北九州産業学術推進機構)[北九州学研都市](2012.10.17(水)〜19(金) 10h00-17h00)

    芸術活動 

    2012年10月
    -
     

  • 第49回 日本リハビリテーション医学会学術集会(主催:(社)日本リハビリテーション医学会) [福岡国際会議場], (2012.05.31(木)〜06.02(土))

    芸術活動 

    2012年05月
    -
     

  • 北九州学術研究都市 報道記者見学ツアー(北九州産業学術推進機構FAIS) [北九州学術研究都市会議場]

    芸術活動 

    2012年02月
    -
     

  • 2011国際ロボット展 RT交流プラザ(主催:(社)日本ロボット工業会,共催:日刊工業新聞社) [東京国際展示場], SRT-5, (2011.11.09(水)〜11.12(土) 10h00-17h00)

    芸術活動 

    2011年11月
    -
     

  • 北九州学研都市第11回産学連携フェア(主催:北九州学研都市産学連携フェア実行委員会,北九州産業学術推進機構)[北九州学研都市](2011.10.19(水)〜21(金) 10h00-17h00)

    芸術活動 

    2011年10月
    -
     

  • 第51回西日本総合機械展 ロボット産業マッチングフェア北九州(主催:(財)西日本産業貿易コンベンション協会) [西日本総合展示場 新館], (2011.06.23-06.25)

    芸術活動 

    2011年06月
    -
     

  • 2009国際ロボット展 RT交流プラザ(主催:(社)日本ロボット工業会,日刊工業新聞社) [東京国際展示場], (2009.11.25(水)〜11.28(土))

    芸術活動 

    2009年11月
    -
     

  • 第4回モーションメディアコンテンツコンテスト (主催: (社)計測自動制御学会SI部門モーションメディア部会) [東京 家の光会館], (2008.07.04(金)).

    芸術活動 

    2008年07月
    -
     

  • 2007国際ロボット展 RT交流プラザ(主催:(社)日本ロボット工業会,日刊工業新聞社) [東京国際展示場], (2007.11.28(水)〜12.01(土))

    芸術活動 

    2007年11月
    -
     

  • イノベーション・ジャパン2007−大学見本市(展示会) (主催:(独)科学技術振興機構+(独)新エネルギー・産業技術総合開発機構) [東京国際フォーラム], (2007.09.12(水)〜14(金))

    芸術活動 

    2007年09月
    -
     

  • 高校生のためのハイテクイベント (主催: (社)日本機械学会東海支部) [産業技術記念館 大ホール], (2007.08.03(金)).

    芸術活動 

    2007年08月
    -
     

  • 第2回モーションメディアコンテンツコンテスト (主催: (社)計測自動制御学会SI部門モーションメディア調査研究会) [電気通信大学 総合研究棟], (2006.06.28(水)).

    芸術活動 

    2006年06月
    -
     

  • 2005国際ロボット展 RT交流プラザ (主催: (社)日本ロボット工業会, 共催:日刊工業新聞社) [東京国際展示場東1,2ホール], (2005.11.30(水)〜12.03(土)).

    芸術活動 

    2005年11月
    -
     

  • キャラロボ2005 (主催: (社)大阪国際見本市委員会) [インテックス大阪], (2005.07.16(土)〜07.17(日)).

    芸術活動 

    2005年07月
    -
     

  • 第1回モーションメディアコンテンツコンテスト(主催: (社)計測自動制御学会SI部門モーションメディア調査研究会) [NTT武蔵野研究開発センター], (2005.06.22)

    芸術活動 

    2005年06月
    -
     

  • 国際新技術フェア2002 RT交流プラザ (主催: (社)日本ロボット工業会&日刊工業新聞社) [東京ビックサイト東4ホール], (2002.09.25(水)〜09.27(金)).

    芸術活動 

    2002年09月
    -
     

  • ロボット・ステーション2002(主催: (財)日本玩具文化財団&NHKサービスセンター) [松坂屋静岡店], (2002.08.01(木)〜08.06(火)).

    芸術活動 

    2002年08月
    -
     


受賞

  • Senior member

    2021年02月   IEEE  

  • Excellent Poster Presentation Award (ISIPS 2018)

    2018年11月   ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)   "Fingertip pointing interface by hand detection using Short range depth camera"  

    受賞者: Kazuki Horiuchi, Takafumi Matsumaru

  • Excellent Oral Presentation Award (ISIPS 2018)

    2018年11月   ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)   "Intuitive Control of Virtual Robots using Leap Motion Sensor"  

    受賞者: Rajeevlochana G. Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

  • Excellent Paper Award (ISIPS 2017)

    2017年11月   11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan]   Usability Study of Aerial Projection of 3D Hologram Object in 3D Workspace  

    受賞者: Septiana Asyifa I, Jiono Mahfud, Kazuki Horiuchi, Takafumi Matsumaru

  • Excellent Paper Award (ISIPS 2016)

    2016年11月   10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)  

    受賞者: R. P. Joshi, P. Sharma, R. A. Boby, S. K. Saha, T. Matsumaru

  • Excellent Poster Presentation Award (ISIPS 2015)

    2015年11月   ISIPS 2015  

  • Excellent Paper Award (ISIPS 2015)

    2015年11月   ISIPS 2015  

  • Excellent Paper Award (ISIPS2014)

    2014年11月   ISIPS 2014  

  • Excellent Poster Presentation Award (7th IPS-ICS)

    2013年11月   7th IPS-ICS  

  • Excellent Paper Award (7th IPS-ICS)

    2013年11月   7th IPS-ICS  

  • Excellent Poster Presentation Award (6th IPS-ICS)

    2012年11月   6th IPS-ICS  

  • (社)日本機械学会ロボティクス・メカトロニクス部門, ロボティクス・メカトロニクス講演会2009における優秀講演ノミネーション(78件/約350件/1035件)

    2009年05月   JSME ROBOMEC 2009  

  • (社)日本機械学会ロボティクス・メカトロニクス部門, ロボティクス・メカトロニクス講演会2008における優秀講演ノミネーション(59件/1054件)

    2008年06月   JSME ROBOMEC 2008  

  • (社)計測自動制御学会第6回システムインテグレーション部門講演会, SI2005ベストセッション講演賞

    2005年12月   SICE SI2005  

  • (社)日本機械学会ロボティクス・メカトロニクス部門, ロボティクス・メカトロニクス講演会’00における優秀講演の認定(169件/653件)

    2000年05月   JSME ROBOMEC 2000  


共同研究・競争的資金等の研究課題

  • ヒューマン・ロボット・インタラクションのための2D映像-3D動作マッピング手法

    研究期間:

    2017年04月
    -
    2020年03月
     

     概要を見る

    Aiming to establish methods of mutual mapping between 2D images and 3D motion, we are approaching the essence of the problem through the following three case studies. (1) 3D aerial image interface (3DAII) (mainly motion to image): as an application of a new interface whose feature is that the user can directly manipulate a 3D image projected in mid-air by overlapping his or her fingers on it, we are attempting to reproduce the process of shaping a vessel from clay on a potter's wheel (forming and trimming). Virtual objects are represented with the Unity game engine (ver. 5.6.1f1), and part of the interaction function that reproduces the shaping was realized by deforming the virtual object with the 3D finger motion detected by a Leap Motion sensor. (2) Cursor operation by air pointing (mainly motion to image): to switch smoothly between key input and cursor operation in computer work, we proposed a system that operates the cursor in a 2D image using only the 3D motion of a fingertip, without taking the wrist off the keyboard. We confirmed that the operation is intuitive, not only reducing arm fatigue but also matching the plane in which the cursor moves with the direction in which the fingertip moves. (3) Brand-by-brand sorting of PET bottles (mainly image to motion): toward a robot-arm system that grasps and lifts a specified brand of PET bottle chilled in a water tank, the 3D motion of the robot arm is planned from the 2D images of an RGB camera. Because the objects are recognized underwater through the water surface, active sensing such as structured-light pattern projection (SLPP) or time of flight (ToF) is difficult, so a passive RGB camera is used. We confirmed that object segmentation and brand classification are possible by machine learning.

    Using these three case studies as a breakthrough, we are deepening the research and development. (1) 3D aerial image interface: an original paper has been submitted to a journal and is under review; by accumulating experience through various applications, we are approaching design principles for interaction between 2D and 3D. (2) Air pointing: literature surveys have clarified the differences from conventional methods and the novelty of the proposal, and performance measurements of the basic functions and comparative experiments with conventional methods were carried out after careful consideration of what to measure and how. (3) PET-bottle sorting: there are few attempts to recognize randomly piled objects underwater from above the water surface and pick them up, so the research subject itself is novel.

    We will continue the research and development through these case studies. (1) 3D aerial image interface: complete the potter's-wheel reproduction at an early stage; as a next application we plan 3D image reproduction of an object in 3D motion (feasibility has been confirmed); we also hope for an early decision on the journal paper now under review. (2) Air pointing: comparative experiments with conventional methods (computer mouse, touchpad, touch panel, etc.) are being conducted; we will organize the data and prepare a journal submission. (3) PET-bottle sorting: we will continue to explore object recognition based on 2D camera images for generating the robot's 3D motion. Interesting problems remain, such as (a) how the accuracy of object segmentation and brand classification changes with the proportion of the whole object visible in the obtained image (e.g., 80%, 50%, 30%) and with the visible part (e.g., top, middle, bottom), and (b) efficient generation of training images and the quantity needed for machine learning of a new brand.
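    As an illustration of the air-pointing idea in case study (2), mapping fingertip motion to cursor motion might be sketched as follows; the gain and screen size are assumed values for illustration, not parameters from the project:

```python
def move_cursor(cursor, prev_tip, curr_tip, gain=2000.0,
                width=1920, height=1080):
    """Air-pointing sketch: fingertip displacement (metres) in the x-y
    plane is mapped to cursor displacement (pixels) in the matching
    screen plane; depth (z) is ignored, so the cursor plane and the
    plane of fingertip motion coincide. The cursor is clamped to the
    screen. 'gain' is an assumed pixels-per-metre scale."""
    x = cursor[0] + (curr_tip[0] - prev_tip[0]) * gain
    y = cursor[1] + (curr_tip[1] - prev_tip[1]) * gain
    return (min(max(x, 0), width - 1), min(max(y, 0), height - 1))
```

    Ignoring the z axis is what keeps the mapping intuitive: the cursor moves in the same plane as the fingertip.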

  • ステップ・オン・インタフェースの高機能化とさまざまな形態での実現

    研究期間:

    2011年04月
    -
    2014年03月
     

     概要を見る

    We enhanced the functionality (reliability and accuracy) of the step-on interface (SOI) for operating robot and mechatronics devices and realized it in various forms. For higher functionality, we enabled an SOI based on camera image processing (recognition of the projected screen and spot-light points with OpenCV) and on depth-data processing (using a 3D depth sensor). For diversification, we realized screen operation (click/drag) by on/off gestures of a laser pointer, real-time tracking of the projected screen by controlling a pan-tilt camera with zoom, recognition of a fingertip's contact or non-contact with the background surface based on depth data, and applications to a virtual keyboard and a virtual xylophone.
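    The contact/non-contact judgment of a fingertip against the background surface can be illustrated with a minimal sketch; the tolerance below is an assumed value for illustration, not a figure from the study:

```python
def fingertip_touching(fingertip_depth_mm, surface_depth_mm, tol_mm=15.0):
    """Depth-based contact judgment sketch for the SOI: the fingertip is
    judged to touch the projected surface when its measured depth is
    within tol_mm of the background depth at the same pixel. tol_mm is
    an assumed threshold absorbing sensor noise and finger thickness."""
    return abs(surface_depth_mm - fingertip_depth_mm) <= tol_mm
```

    A virtual keyboard or xylophone then only needs to map the touched pixel to the key under it.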

  • ステップ・オン・インタフェースを用いたロボットに関する先行技術調査(科学技術振興機構)

    研究期間:

    2007年
    -
     
     

  • ALS患者のための視線検出機能付HMDの開発と高度双方向コミュニケーションの実現(静岡大学工学部)

    研究期間:

    2005年
    -
    2006年
     

  • 無軌道走行移動体に関する研究(矢崎化工(株))

    研究期間:

    2004年
    -
     
     

  • MERG(マルチメディア教育に関する研究グループ)(東京農工大学工学部)

    研究期間:

    2003年
    -
     
     

  • 無線式台車型ロボットに関する研究(静岡県静岡工業技術センター,(有)サンテクニカル)

    研究期間:

    2002年
    -
     
     

  • 温室メロン遠隔制御に関する研究(沢田工業(株),静岡県農業試験場)

    研究期間:

    2002年
    -
     
     

  • インテリジェント移動ロボットの誘導制御に関する研究(静岡大学工学部機械工学科)

    研究期間:

    2000年
    -
     
     

  • 通信ネットワーク接続したロボットの遠隔操作に関する研究(通商産業省工業技術院機械技術研究所)

    研究期間:

    1999年
    -
    2000年
     

  • 遠隔体験鑑賞システムを目的とした人間共存型移動ロボットの遠隔と自律の融合制御手法

     概要を見る

    Assuming a "remote experience and appreciation system", this research addresses safety and maneuverability between the operating human and the robot as technical issues in realizing a "remote operation system for a human-symbiotic mobile robot", and examines fused control methods that effectively adopt the advantages of autonomous behavior and remote operation while compensating for their respective weaknesses; that is, it proposes and evaluates concrete computer-control behaviors for so-called shared control. The operator drives the robot with a joystick while watching images from a camera mounted on the mobile robot. Three autonomous behaviors are considered: turning, following, and deceleration. We continued software simulations of how to combine the autonomous behaviors and how to recognize the environment and judge the situation. With a method that takes deceleration as the base and compounds it with turning or following, the number of near misses with obstacles was halved, confirming improved safety. Regarding autonomous behaviors suited to the situation, for passing corners while traveling along a corridor, we clarified in what situations autonomous behaviors should be actively used and what kinds of behavior are easy to use. A method that classifies the surrounding environment and the robot's situation from the operator's joystick input and the robot's range-sensor data, and selectively adds turning or following, realized motion that does not feel unnatural to the operator and improved maneuverability. On the hardware side, we are developing an omnidirectional platform with a four-wheel drive mechanism using omni wheels (Fuji Manufacturing) and preparing a range sensor (Hokuyo PB9) that can scan distances to objects over a range of about 170 degrees in front. We want to complete a mobile robot combining these as soon as possible, then examine methods of presenting information and feedback to the operator and conduct verification experiments of the proposed methods on the integrated hardware system.
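    The deceleration behavior and its compounding with turning, as described above, might be sketched as follows; all distances and the avoidance turn rate are assumed values for illustration, not figures from the study:

```python
def decelerate(speed_cmd, obstacle_dist, stop_dist=0.4, slow_dist=1.5):
    """Deceleration behaviour sketch: the operator's speed command is
    scaled down linearly as the nearest obstacle approaches, reaching
    zero at stop_dist. Distances (metres) are assumed values."""
    if obstacle_dist <= stop_dist:
        return 0.0
    if obstacle_dist >= slow_dist:
        return speed_cmd
    return speed_cmd * (obstacle_dist - stop_dist) / (slow_dist - stop_dist)

def fused_command(speed_cmd, turn_cmd, obstacle_dist, obstacle_angle):
    """Compound behaviour sketch with deceleration as the base: speed is
    limited by obstacle distance, and a small turn away from the side
    the obstacle is on is blended into the operator's turn command."""
    speed = decelerate(speed_cmd, obstacle_dist)
    avoid = 0.0 if obstacle_dist >= 1.5 else (-0.2 if obstacle_angle > 0 else 0.2)
    return speed, turn_cmd + avoid
```

    The point of compounding on a deceleration base is that the robot always slows near obstacles, and turning or following is only layered on top when the situation calls for it.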

  • ALS患者のための視線検出機能付HMDの開発と双方向コミュニケーションの実現

     概要を見る

    The purpose of this research is to enable ALS (amyotrophic lateral sclerosis) patients to input characters and select menus using their gaze, a remaining function, thereby enabling communication with surrounding people and teleoperation of human-symbiotic robots. The developed head-mounted display (HMD) with gaze detection is worn over one eye, and its feature is that the viewed position (viewpoint) on the displayed image can be determined accurately even if the relative position between the HMD and the eye shifts. The HMD displays images from a PC; an LED illuminates the eye with near-infrared light, a CCD captures the eye image, and image processing detects the centers of the pupil image and of the corneal reflection of the LED light source in real time, so that the viewpoint is obtained from their relative position. The FY2005 prototype suffered from a ghost of the light source that made gaze detection difficult, so in FY2006 we identified the cause of the ghost and eliminated the problem by rearranging the light source. Although the overall size became somewhat larger, and the detected viewpoint position fluctuates so that a moving average is used to obtain it, we demonstrated the characteristic that the viewpoint can be corrected against shifts in the relative position between the HMD and the eye. Using this HMD as the input/output interface of a teleoperation system, we conducted remote-operation experiments with a mobile robot. The robot is an omnidirectional platform equipped with a camera for remote images and a range sensor for distance information. Although the operation screen presented on the HMD has a low resolution of 640x480, we created an operation interface containing the five minimum functions needed to teleoperate the robot: a remote environmental map, live video, robot operation buttons, camera operation buttons, and a camera-direction display. For safety, a robot operation button is executed only after the cursor has rested on it for two seconds, and, considering the characteristics of gaze detection, stop buttons are placed at the four corners of the screen. Operation experiments confirmed that the mobile robot and the camera could be operated.
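    The moving-average smoothing of the detected viewpoint and the two-second dwell rule for buttons can be sketched as follows; the window size and frame rate are assumptions for illustration, not values from the study:

```python
from collections import deque

class GazeSmoother:
    """Moving-average filter over the last n detected gaze points
    (x, y in pixels on the 640x480 operation screen), with a dwell
    counter implementing the two-second button rule. n and the frame
    rate implied by dwell_frames are assumed values."""
    def __init__(self, n=5, dwell_frames=60):
        self.buf = deque(maxlen=n)
        self.dwell_frames = dwell_frames  # e.g. 2 s at 30 fps
        self.dwell = 0
    def update(self, point, on_button=False):
        """Return the smoothed gaze point and whether the dwelled
        button should fire on this frame."""
        self.buf.append(point)
        self.dwell = self.dwell + 1 if on_button else 0
        xs = [p[0] for p in self.buf]
        ys = [p[1] for p in self.buf]
        fire = self.dwell >= self.dwell_frames
        return (sum(xs) / len(xs), sum(ys) / len(ys)), fire
```

    Averaging trades a little latency for stability, which is why a dwell time, rather than a blink or an instantaneous fixation, triggers the buttons.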


講演・口頭発表等

  • Chinese Number Gestures Recognition using Finger Joint Detection and Hand Shape Description

    Yingchuang YANG, Takafumi MATSUMARU

    ISIPS 2020 (14th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan] (12-13 November, 2020)  

    発表年月: 2020年11月

    開催年月:
    2020年11月
     
     
  • Recognition of Football’s Handball Foul Based on Depth Data in Real-time

    Pukun JIA, Takafumi MATSUMARU

    ISIPS 2020 (14th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan] (12-13 November, 2020)  

    発表年月: 2020年11月

    開催年月:
    2020年11月
     
     
  • Image Segmentation and Brand Recognition in the Robot Picking Task

    Chen Zhu, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS   (北九州)  

    発表年月: 2018年11月

     概要を見る

    Computer-vision-guided picking systems have been developed for many decades; however, picking randomly ordered multiple objects is not handled well by existing software. In this research, drinking bottles of six brands, placed at random, are to be picked up using a 6-degree-of-freedom robot. The bottles need to be classified by brand before being picked from the container. In this article, Mask R-CNN, a deep-learning-based image segmentation network, is used to process images taken by a normal camera, and Inception v3 is used for the brand-recognition task. Mask R-CNN is trained on the COCO dataset to detect the bottles and generate masks on them. For the brand-recognition task, 150-200 images were first taken or found for each brand and augmented to 1000 images per brand. As a result, the segmented images can be labeled with the brand name with at least 80% accuracy in the experimental environment.
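    The two-stage structure described above (segmentation, then brand classification with an acceptance threshold) can be sketched as follows; the classifier here is a stand-in for the trained networks, and min_conf is an assumed threshold rather than a value from the paper:

```python
def pick_candidates(crops, classify, target_brand, min_conf=0.8):
    """Two-stage pipeline sketch for the picking task: 'crops' stands in
    for the bottle regions segmented in stage 1 (Mask R-CNN in the
    paper); 'classify' stands in for the stage-2 brand classifier
    (Inception v3), returning a (brand, confidence) pair. Crops whose
    brand matches the target with sufficient confidence are returned
    for the robot to pick."""
    picks = []
    for crop in crops:
        brand, conf = classify(crop)
        if brand == target_brand and conf >= min_conf:
            picks.append(crop)
    return picks
```

    Keeping segmentation and brand recognition as separate stages means a new brand only requires retraining the lightweight classifier, not the detector.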

  • Control Mobile Robot using Single 2d-Camera with New Proposed Camera Calibration Method

    Haitham Al Jabri, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS   (北九州)  

    発表年月: 2018年11月

     概要を見る

    This paper presents a summary of mobile robot control using 2D vision feedback with a newly proposed camera calibration method. The main goal is to highlight 2D vision feedback, analyzed and used as the main pose feedback of a mobile robot. We use a mobile robot with two active omni wheels and a single 2D webcam in our experiments. Our main approach is to tackle the present limitations of using feature points from a single 2D camera for pose estimation, such as non-static environments and data stability. The results discuss the issues and point out the strengths and weaknesses of the techniques used. First, we use ORB (Oriented FAST and Rotated BRIEF) feature detection and a Brute-Force (BF) matcher to detect and match points in different frames. Second, we use FAST (Features from Accelerated Segment Test) corners and Lucas-Kanade (LK) optical flow to detect corners and track their flow across frames. These points and corners are then used for pose estimation through an optimization process with (a) Zhang's calibration method using a chessboard pattern and (b) our proposed method using reinforcement learning in offline mode.

  • Intuitive Control of Virtual Robots using Leap Motion Sensor

    Rajeevlochana Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS   (北九州)  

    発表年月: 2018年11月

     概要を見る

    Serial robots used in industry can be controlled by various means, such as joint and Cartesian jogging using dedicated teach pendants, or by offline and online programming in a software environment. They can also be controlled by the master-slave manipulation technique, where an exoskeleton acts as the master and the robot as the slave, mimicking the motion of the exoskeleton worn by a user. The recently developed Leap Motion sensor can also detect the motion of a user's hands with sub-millimeter accuracy. In the proposed work, a Leap Motion sensor is used to track one of the user's hands, and the incremental motion of the tip of the index finger is used as the Cartesian increment of the end-effector of a virtual robot in the RoboAnalyzer software. Work is currently underway to detect the orientation of two or three fingers and accordingly control the orientation of the robot's end-effector. An exoskeleton has to be worn by the user, and excessive usage may cause fatigue; the proposed application of Leap Motion relieves the user of this fatigue while retaining good accuracy, and hence can serve as an intuitive control method.
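    The incremental fingertip-to-end-effector mapping described above might be sketched as follows; the scale factor and per-axis step clamp are assumed values for illustration, not parameters of the actual system:

```python
def jog_end_effector(effector_xyz, prev_tip_xyz, curr_tip_xyz,
                     scale=0.5, max_step=0.01):
    """Incremental-control sketch: the fingertip displacement reported
    by a Leap-Motion-like tracker (metres) is scaled and applied as a
    Cartesian increment to the virtual end-effector; each axis step is
    clamped to max_step for safety. scale and max_step are assumed."""
    pose = []
    for e, p, c in zip(effector_xyz, prev_tip_xyz, curr_tip_xyz):
        step = scale * (c - p)
        step = max(-max_step, min(max_step, step))  # clamp per axis
        pose.append(e + step)
    return tuple(pose)
```

    Using increments rather than absolute positions means the user can reposition the hand freely between motions, much like lifting a mouse.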

  • On the Applicability of Mobile Robot Traversal in Pedestrian Settings Without Utilizing Pre-prepared Maps

    Ahmed Farid, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS   (北九州)  

    発表年月: 2018年11月

    This paper discusses the prospect of mobile robots navigating in pedestrian settings (e.g. sidewalks and street crossings) without utilizing feature-rich maps provided beforehand (e.g. from SLAM-based algorithms). The main motivation is to mimic the human way of interpreting 2D maps (e.g. the widely available Google Maps), which would negate the need for pre-mapping a given location. The paper starts by summarizing previous literature on robotic navigation in pedestrian settings, leading up to outcomes of our own research. We aim to present results in path planning and real-world scene interpretation, and finally address remaining problems and future prospects.
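    The planning stage implied above — interpreting a 2D map into traversable sidewalk segments and crossings — ultimately reduces to a least-cost search on a graph. A toy illustration (the graph, weights, and crossing penalty are invented for the example, not taken from the paper) could be:

    ```python
    import heapq

    # Toy sidewalk graph: nodes are sidewalk corners of two city blocks,
    # edges carry a cost; street-crossing edges are penalized so the
    # planner prefers staying on sidewalks. All values are illustrative.

    def dijkstra(graph, start, goal):
        """Least-cost path in a weighted graph {node: [(neighbor, cost), ...]}."""
        queue = [(0.0, start, [start])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, w in graph.get(node, []):
                if nbr not in seen:
                    heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
        return float("inf"), []

    # A-B-C along one sidewalk, E-F on the opposite side; crossing edges
    # (B-E, C-F) cost 5 versus 1-2 for walking along a block.
    graph = {
        "A": [("B", 1)], "B": [("A", 1), ("C", 1), ("E", 5)],
        "C": [("B", 1), ("F", 5)], "E": [("B", 5), ("F", 2)],
        "F": [("E", 2), ("C", 5)],
    }
    cost, path = dijkstra(graph, "A", "F")
    ```

    With these weights the planner walks the full block (A-B-C) and crosses once at the end, rather than crossing early, which mirrors the "which side of the road" decision the text describes.
    
    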

  • Fingertip pointing interface by hand detection using Short range depth camera

    Kazuki Horiuchi, Takafumi Matsumaru

    ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)   (北九州)  Waseda-IPS  

    発表年月: 2018年11月

    Computer mice and keyboards are the most widely used pointing and typing devices for personal computers. As alternatives to a computer mouse, there are pointing devices that use a depth sensor or camera for pointing detection. However, current implementations are uncomfortable to use because a user must raise his or her arm(s) to control pointing, and must stay relatively far from the sensing device. To solve these usability problems, we propose a pointing device that narrows the distance within which users can comfortably move their wrists between the keyboard and the pointing device itself. For this system, we compared the performance of various depth sensors and eventually chose the Intel RealSense sensor. Additionally, we performed a comparative study of our proposed system's performance against other conventional input devices. Although the total time to complete the experimental tasks using our system was longer than with conventional input devices, our proposed system was the fastest when switching between pointing and typing (i.e. moving the hand between mouse and keyboard).

  • 3次元空中ホログラフィック画像インターフェースを用いたリアルタイム遠隔投影

    堀内一希, セプティアナ アシファイ マンダ, 松丸隆文

    日本機械学会 ロボティクス・メカトロニクス講演会2018 (ROBOMECH 2018 in Kitakyushu), [北九州国際コンベンションゾーン] (2018.06.02-06.05), 2P2-H17 (4 pages), (2018.06.05).   (北九州)  日本機械学会ロボティクスメカトロニクス部門  

    発表年月: 2018年06月

    The three-dimensional aerial holographic image interface (3DAHII) is a system for the aerial projection of varying 3D objects. The projected objects can be viewed from multiple directions without the use of special devices such as eyeglasses, gas or vapor for projection, rotating parts, and so on. In this research, we propose a system to capture video of a human body from different perspectives in real time and project the result using 3DAHII. Because of the real-time capturing and projection, users can interact and communicate directly. To realize this goal, we used a smaller prototype system in which four cameras capture video of a physical object inside a frame, and the result is remotely projected in real time. The smaller prototype was developed to evaluate and confirm the proposed system's operation.

  • 手検出深度カメラを用いた近距離指先ポインティング

    堀内一希, 松丸隆文

    日本機械学会 ロボティクス・メカトロニクス講演会2018 (ROBOMECH 2018 in Kitakyushu), [北九州国際コンベンションゾーン] (2018.06.02-06.05), 主催:日本機械学会ロボティクスメカトロニクス部門, 1P2-G11 (4 pages), (2018.06.04).   (北九州)  日本機械学会ロボティクスメカトロニクス部門  

    発表年月: 2018年06月

    Computer mice and keyboards are the most widely used pointing and typing devices for personal computers. As alternatives to a computer mouse, there are pointing devices that use a depth sensor or camera for pointing detection. However, current implementations are uncomfortable to use because a user must raise his or her arm(s) to control pointing, and must stay relatively far from the sensing device. To solve these usability problems, we propose a pointing device that narrows the distance within which users can comfortably move their wrists between the keyboard and the pointing device itself. For this system, we compared the performance of various depth sensors and eventually chose the Intel RealSense sensor. Additionally, we performed a comparative study of our proposed system's performance against other conventional input devices. Although the total time to complete the experimental tasks using our system was longer than with conventional input devices, our proposed system was the fastest when switching between pointing and typing (i.e. moving the hand between mouse and keyboard). Key Words: Hand gesture recognition, Motion capture camera, Pointing device

  • A Progressively Adaptive Approach for Tactile Robotic Hand to Identify Object Handling Operations

    Duc Anh Than, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2C-2:(#019), pp.65-67, (2017.11.15.Wed).   (Kitakyushu) 

    発表年月: 2017年11月

    As humans, we learn the operations that typically belong to an object by touching it; from sensory feedback data, we form particular object-handling skills, such as which hand pose to use to grasp the object, how much force to apply, and which hand movement to make. During development in babyhood, we all start learning how to handle objects by hand through repetitive interactions, from which we learn our own skills of what behaviors we can perform with an object and how to handle it. By equipping a touch-oriented robot hand with tactile sensors and using the collected sensory feedback data, our research targets forming the underlying link between a particular object and the specific operations acted on it that help feasibly accomplish a task ordered by a human. Besides, a classification of those operations on the object is presented, from which the optimal ones that best suit the human task requirements are determined. Specifically, in this paper we propose a machine-learning-based approach combined with an evolutionary method to help progressively build up hand-based object cognitive intelligence. Overall, the proposed scheme exploits and reveals the robot hand's potential touch-based abilities of object interaction and manipulation, apart from the existing visually built-up cognition.

  • Outdoor Navigation for Mobile Robot Platforms with Path Planning of Sidewalk Motion Using Internet Maps

    Ahmed Farid, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2E-4:(#085), pp.298-301, (2017.11.15.Wed).   (Kitakyushu) 

    発表年月: 2017年11月

    This paper describes a path planning system that processes 2D color maps of a given location to provide sidewalk paths, street-crossing landmarks, and instructions for a robot platform or user to navigate, using only the current location and destination as inputs. For the navigation of robots, and especially of disabled people, in outdoor pedestrian environments, path planning that explicitly takes into account sidewalks and street crossings is of great importance. Current path planning solutions on known 2D maps (e.g. Google Maps), from both research and industry, do not always provide explicit information on sidewalk paths and street crossings, which is a common problem in suburban/rural areas. The path planner's test results are shown for the location of our campus.

  • Dynamic Precise Localization of Multi-Mobile Robots in 2D Plane

    Haitham Khamis Mohammed Al Jabri, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O3C-2:(#020), pp.68-70   (Kitakyushu) 

    発表年月: 2017年11月

    A Small Mobile Robot (SMR) moves inside another, wider Frame Mobile Robot (FMR), and one of them stops at a time to serve as a reference for the other, using multiple laser pointers fixed on the FMR. These robots are designed to form a closed system that minimizes the errors accumulated while a robot moves, by correcting the robots' positions regularly. Several applications motivate highly precise and accurate motion of a mobile robot. For example, printers and plotters of various sizes are available nowadays, but each can print only on a limited paper size. This limitation could be overcome by a mobile robot that moves precisely. However, can such a robot print over an unlimited area? What about the errors that inevitably accumulate while the robot moves? At what resolution can it print? The system proposed in this paper is a closed system in which different robots collaborate, introducing an efficient method for localizing mobile robots that exploits the straightness of laser beams. The main two strengths of the research are achieving precise and accurate mobile robot coordinates by minimizing accumulated errors, and managing the robots' cooperation in localization and in performing different tasks.

  • Dynamic Precise Localization of Multi-Mobile Robots in 2D Plane

    Haitham Khamis Mohammed Al Jabri, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2E-3:(#070), pp.255-257   (Kitakyushu) 

    発表年月: 2017年11月

    A Small Mobile Robot (SMR) moves inside another, wider Frame Mobile Robot (FMR), and one of them stops at a time to serve as a reference for the other, using multiple laser pointers fixed on the FMR. These robots are designed to form a closed system that minimizes the errors accumulated while a robot moves, by correcting the robots' positions regularly. Several applications motivate highly precise and accurate motion of a mobile robot. For example, printers and plotters of various sizes are available nowadays, but each can print only on a limited paper size. This limitation could be overcome by a mobile robot that moves precisely. However, can such a robot print over an unlimited area? What about the errors that inevitably accumulate while the robot moves? At what resolution can it print? The system proposed in this paper is a closed system in which different robots collaborate, introducing an efficient method for localizing mobile robots that exploits the straightness of laser beams. The main two strengths of the research are as follows: (1) achieving precise and accurate mobile robot coordinates by minimizing accumulated errors, and (2) managing the mobile robots' cooperation in localizing and performing different tasks at a certain time. The robots' movements are checked discretely by the multiple laser pointers and corrected accordingly, which contributes to accurate and precise coordinates and thus minimizes accumulated errors. Moreover, the ability to collaborate with other robots can enhance the system: for instance, progress can be accelerated by adding other SMRs or even FMRs to finish a big job in less time, with the system dividing the tasks among the available robots so that any assigned job finishes earlier.
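    The closed-system idea — one robot standing still as a laser-measured reference while the other moves, so odometry error cannot grow without bound — can be illustrated with a 1-D simulation. The noise level and the assumption of an exact reference measurement are simplifications for the example:

    ```python
    import random

    def simulate(steps, correct_every=None, seed=1):
        """1-D dead reckoning: each commanded unit step executes with noise.
        With correct_every set, the pose estimate is periodically reset from
        an external reference (modeled here as an exact laser measurement,
        an idealization of the FMR's laser pointers)."""
        rng = random.Random(seed)
        true_pos, est_pos = 0.0, 0.0
        for k in range(1, steps + 1):
            noise = rng.gauss(0.0, 0.02)   # wheel slip, encoder error, etc.
            true_pos += 1.0 + noise        # what actually happened
            est_pos += 1.0                 # what odometry believes
            if correct_every and k % correct_every == 0:
                est_pos = true_pos         # re-reference against the FMR
        return abs(true_pos - est_pos)     # final localization error

    open_loop = simulate(200)                      # drift accumulates freely
    closed_loop = simulate(200, correct_every=10)  # drift is bounded per cycle
    ```

    The open-loop error grows with the square root of the step count, while the periodically corrected run can only accumulate error between two reference checks — the property the closed system is designed around.
    
    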

  • Usability Study of Aerial Projection of 3D Hologram Object in 3D Workspace

    Septiana Asyifa I, Jiono Mahfud, Kazuki Horiuchi, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O2C-1:(#033), pp.115-118, (2017.11.14.Tue).   (Kitakyushu) 

    発表年月: 2017年11月

    We have proposed a hand-gesture-based control system with an interactive 3D hologram object floating in mid-air as the hand-movement reference, named the Aerial Projection of 3D Hologram Object (3DHO). The system consists of the hologram projector and a sensor to capture hand gestures. A Leap Motion sensor captures the hand-gesture commands while the user manipulates the 3DHO, which is produced by a pyramid-shaped reflector and two parabolic mirrors. We evaluate the 3DHO's performance by comparing it with other 3D input devices, such as a joystick with a slider, a joystick without a slider, and a gamepad, on several pointing tasks. Comfort and user satisfaction were also assessed with a questionnaire survey. We found that the 3DHO works well for tasks in a three-dimensional workspace. From the experiments, participants found the 3DHO easy to learn, but they felt some fatigue in their hands, so the 3DHO is not yet satisfying enough to use. Adding haptic feedback and making a wider 3DHO should improve its performance.

  • Active Secondary Suspension of a Railway Vehicle for Improving Ride Comfort using LQG Optimal Control Technique

    Kaushalendra K Khadanga, Takafumi Matsumaru

    11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O1C-2:(#061), pp.226-229   (Kitakyushu) 

    発表年月: 2017年11月

    Passenger comfort has been paramount in the design of suspension systems for high-speed rail cars. The main objective of this paper is to reduce the vertical and pitch accelerations of a half-car rail model. A rigid half-car high-speed passenger vehicle with 10 degrees of freedom has been modelled to study the vertical and pitch accelerations. A state-space mathematical approach is used to model the rail input that takes in track vibrations. An augmented track-and-vehicle model is then designed, and an active secondary suspension system based on the Linear Quadratic Gaussian (LQG) optimal control method is developed. Vehicle performance measures such as vertical and pitch accelerations, front and rear suspension travel, and control forces have been studied and compared with those of a passive system.
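    An LQG controller combines a Kalman filter with LQR state feedback; the LQR half can be sketched in pure Python via backward Riccati iteration. The 2-state model below is a toy double integrator, not the paper's 10-degree-of-freedom half-car model, and the Kalman-filter half is omitted:

    ```python
    # Discrete-time LQR gain via Riccati value iteration -- the control half
    # of an LQG design, on an illustrative 2-state (position, velocity) model.

    def mat_mul(X, Y):
        """Multiply two matrices given as nested lists."""
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    def transpose(X):
        return [list(col) for col in zip(*X)]

    def lqr_gain(A, B, Q, R, iters=300):
        """Iterate P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA and return
        K = (R + B'PB)^-1 B'PA. B is a single column, so the matrix
        inverse reduces to a scalar division."""
        n = len(A)
        P = [row[:] for row in Q]
        K = [[0.0] * n]
        for _ in range(iters):
            BtP = mat_mul(transpose(B), P)        # 1 x n
            s = R + mat_mul(BtP, B)[0][0]         # scalar R + B'PB
            K = [[v / s for v in mat_mul(BtP, A)[0]]]
            AtPA = mat_mul(transpose(A), mat_mul(P, A))
            P = [[Q[i][j] + AtPA[i][j] - s * K[0][i] * K[0][j]
                  for j in range(n)] for i in range(n)]
        return K

    dt = 0.1
    A = [[1.0, dt], [0.0, 1.0]]                   # discretized double integrator
    B = [[0.5 * dt * dt], [dt]]
    K = lqr_gain(A, B, Q=[[1.0, 0.0], [0.0, 1.0]], R=1.0)

    # Closed-loop simulation x <- (A - B K) x from an initial offset.
    x = [1.0, 0.0]
    for _ in range(300):
        u = -(K[0][0] * x[0] + K[0][1] * x[1])
        x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0][0] * u,
             A[1][0] * x[0] + A[1][1] * x[1] + B[1][0] * u]
    ```

    In practice one would use `scipy.linalg.solve_discrete_are` rather than hand iteration; the recursion is shown only to make the structure of the design explicit.
    
    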

  • 3D Hologram Object Manipulation

    Jiono Mahfud, Takafumi Matsumaru

    10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)   (Kitakyushu) 

    発表年月: 2016年11月

  • Use of Kinect Sensor for Building an Interactive Device

    R. P. Joshi, P. Sharma, R. A. Boby, S. K. Saha, T. Matsumaru

    10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)   (Kitakyushu) 

    発表年月: 2016年11月

  • Introduction to Robotics and Mechatronics

    松丸隆文  [招待有り]

    『北九州ゆめみらいワーク2016』 みらい教室   (西日本総合展示場新館, 北九州市)  主催: 北九州市,運営: (株)マイナビ  

    発表年月: 2016年08月

  • プロジェクタを用いた書道のための筆遣い学習支援システムの開発(第2報) ―毛筆の位置情報からの軌跡生成方法の提案―

    成田昌史, 松丸隆文

    日本機械学会 ロボティクス・メカトロニクス講演会2016(ROBOMECH 2016 in Yokohama), [パシフィコ横浜] (2016.06.08-06.11), 2P1-12a3 (2pages)  

    発表年月: 2016年06月

  • 移動ロボット接近時における動作予告を用いた恐怖感低減に関する検討

    廣井富, 前田彰大, 田中佑季, 松丸隆文, 伊藤彰則

    日本機械学会 ロボティクス・メカトロニクス講演会2016(ROBOMECH 2016 in Yokohama), [パシフィコ横浜] (2016.06.08-06.11), 2P1-11b2 (3pages)  

    発表年月: 2016年06月

  • Extraction of Representative Point from Hand Contour Based on Laser Range Scanner

    Chuankai Dai, Kaito Yano, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.1-4  

    発表年月: 2015年11月

  • An Economical Version of SAKSHAR-IDVT: Image-projective Desktop Varnamala Trainer

    Pratyusha Sharma, Vinoth Venkatesan, Ravi Prakash Joshi, Riby Abraham Boby, Takafumi Matsumaru, Subir Kumar Saha

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.5-8  

    発表年月: 2015年11月

  • Brushwork Learning Support System Using Projection

    Masashi Narita, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), p.9  

    発表年月: 2015年11月

  • Feature Tracking and Synchronous Scene Generation with a Single Camera

    Zheng Chai, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.38-39  

    発表年月: 2015年11月

  • Real-time hand side discrimination based on hand orientation and wrist point localization sensing by RGB-D sensor

    Thanapat Mekrungroj, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), p.67  

    発表年月: 2015年11月

  • Facial Expression Recognition based on Neural Network using Extended Curvature Gabor Filter Bank

    Pengcheng Fang, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.68-69  

    発表年月: 2015年11月

  • Obstacle avoidance based on improved Artificial Potential Field for mobile robot

    Sihui Zhou, Takafumi Matsumaru

    9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.242-243  

    発表年月: 2015年11月

  • プロジェクタを用いた書道のための筆遣い学習支援システムの開発(第1 報) —システムの提案と構築—

    日本機械学会 ロボティクス・メカトロニクス講演会2015(ROBOMECH 2015 in Kyoto), [京都市勧業館「みやこめっせ」]  

    発表年月: 2015年05月

  • Human-Machine Interaction using Projection Screen and Multiple Light Spots

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS9-2, (2014.11.13), PS6-2, (2014.11.13)  

    発表年月: 2014年11月

  • Development of calligraphy-stroke learning support system using projection

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS5-6, (2014.11.13)  

    発表年月: 2014年11月

  • Virtual Musical Instruments based on Interactive Multi-Touch system sensing by RGB Camera and IR Sensor

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS6-7, (2014.11.13)  

    発表年月: 2014年11月

  • Screen-Camera Position Calibration and Projected Screen Detection

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS6-10, (2014.11.13)  

    発表年月: 2014年11月

  • Touching Accuracy Improvement and New Application Development for IDAT

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-1, (2014.11.12), PS5-1, (2014.11.13)  

    発表年月: 2014年11月

  • SAKSHAR: An Image-projective Desktop Varnamala Trainer (IDVT) for Interactive Learning of Alphabets

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-2, (2014.11.12), PS5-2, (2014.11.13)  

    発表年月: 2014年11月

  • Multi-finger Touch Interface based on ToF camera and webcam

    8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-3, (2014.11.12), PS5-3, (2014.11.13)  

    発表年月: 2014年11月

  • 画像投射式卓上型上肢訓練装置(IDAT)を用いた片麻痺患者上肢の訓練−予備的研究−

    第39回日本脳卒中学会総会 (STROKE2014),[大阪国際会議場], (2014.3.13-15)  

    発表年月: 2014年03月

  • Introduction of Bio-Robotics and Human-Mechatronics Laboratory

    Takafumi MATSUMARU  [招待有り]

    Invited Talk   (Beijing Institute of Technology)  Beijing Institute of Technology  

    発表年月: 2014年02月

  • Human Detection and Following Mobile Robot Control System Using Laser Range Sensor

    Takafumi MATSUMARU  [招待有り]

    Joint meeting of Peking University and Waseda University   (Peking University)  Peking University  

    発表年月: 2014年02月

  • パンチルトズームカメラによる投影画面とのサイズ調整と追従制御

    第14回 計測自動制御学会 システムインテグレーション部門講演会 (SICE SI2013), [神戸国際会議場], (2013.12.18-20), 1J1-2, pp.703-708.  

    発表年月: 2013年12月

  • Relative Position Calibration using Pan-tilt-zoom Camera for Projection Interface

    7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 October, 2013), PS-502  

    発表年月: 2013年11月

  • Human Detection and Following Mobile Robot Control System Using Range Sensor

    7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 October, 2013), OS4-2, PS-511  

    発表年月: 2013年11月

  • Kinect Sensor Application to Control Mobile Robot by Gesture, Facial movement and Speech

    7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 October, 2013), PS-515  

    発表年月: 2013年11月

  • Contact/non-contact Interaction System using Camera and Depth Sensors

    7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 October, 2013), PS-504  

    発表年月: 2013年11月

  • Human-robot interaction on training

    International Workshop on Machine Vision for Industrial Innovation (MVII2013 ), [Kitakyushu, Japan], (20-21 October, 2013), O003, pp.41-46,  

    発表年月: 2013年10月

  • Using Laser Pointer for Human-Computer Interaction

    International Workshop on Machine Vision for Industrial Innovation (MVII2013 ), [Kitakyushu, Japan], (20-21 October, 2013), P013, p.116  

    発表年月: 2013年10月

  • Automatic Adjustment and Tracking of Screen Projected by Using Pan-tilt-zoom Camera

    International Workshop on Machine Vision for Industrial Innovation (MVII2013 ), [Kitakyushu, Japan], (20-21 October, 2013), P014, p.117  

    発表年月: 2013年10月

  • Robot Human-following Limited Speed Control

    International Workshop on Machine Vision for Industrial Innovation (MVII2013 ), [Kitakyushu, Japan], (20-21 October, 2013), P015, p.118  

    発表年月: 2013年10月

  • Mobile robot control system based on gesture, speech and face track using RGB-D-S sensor

    International Workshop on Machine Vision for Industrial Innovation (MVII2013 ), [Kitakyushu, Japan], (20-21 October, 2013), P016, p.119  

    発表年月: 2013年10月

  • Human-robot interaction based on contact/non-contact sensing by camera and depth sensors

    International Workshop on Machine Vision for Industrial Innovation (MVII2013 ), [Kitakyushu, Japan], (20-21 October, 2013), P017, p.120  

    発表年月: 2013年10月

  • Plenary talk on Image-projective Desktop Arm Trainer and Touch Interaction Based on IR Image Sensor

    Takafumi MATSUMARU  [招待有り]

    Joint Seminar of Peking University and Waseda University   (Peking University)  Peking University  

    発表年月: 2013年02月

  • Development of Walking Training Robot with Customizable Trajectory Design System

    6th IPS International Collaboration Symposium (IPS-ICS), [Kitakyushu, Fukuoka, Japan], (14-16 November, 2012), PS-22  

    発表年月: 2012年11月

  • Development of Image-Projective Desktop Arm Trainer and Measurement of Trainee’s Performance

    6th IPS International Collaboration Symposium (IPS-ICS), [Kitakyushu, Fukuoka, Japan], (14-16 November, 2012), PS-23  

    発表年月: 2012年11月

  • Recent Advance on SOI (Step-On Interface) Applications

    International Workshop on Image & Signal Processing and Retrieval (IWISPR2012), [Kitakyushu]  

    発表年月: 2012年10月

  • Development of Walking Training Robot with Customizable Trajectory Design System

    International Workshop on Image & Signal Processing and Retrieval (IWISPR2012), [Kitakyushu]  

    発表年月: 2012年10月

  • Development of Image-Projective Desktop Arm Trainer and Measurement of Trainee’s Performance

    International Workshop on Image & Signal Processing and Retrieval (IWISPR2012), [Kitakyushu]  

    発表年月: 2012年10月

  • どこでも楽しく上肢機能訓練ー画像投射式卓上型上肢訓練装置 Image-projective Desktop Arm Trainer,IDATの開発ー

     [招待有り]

    第19回産業医科大学リハビリテーション医療研究会 [産業医科大学2号館 2305講義室]  

    発表年月: 2012年07月

  • バイオ・ロボティクス&ヒューマン・メカトロニクス研究室

     [招待有り]

    旭興産グループ奨学金授与式 [旭興産門司工場]  

    発表年月: 2012年03月

  • Introduction of Bio-Robotics and Human-Mechatronics Laboratory

    Workshop co-hosted by Waseda University and Peking University, [Peking University]  

    発表年月: 2012年02月

  • Human-Robot Interaction Design on Mobile Robot with Step-On Interface

    5th IPS International Collaboration Symposium 2011, [Kitakyushu, Japan]  

    発表年月: 2011年11月

  • Prototype of Touch Game Application on Development of Step-On Interface

    5th IPS International Collaboration Symposium 2011, [Kitakyushu, Japan]  

    発表年月: 2011年11月

  • Human-Robot Interaction using Projection Interface

    International Workshop on Target Recognition and Tracking, [Kitakyushu, Japan]  

    発表年月: 2011年10月

  • Human-Robot Interaction Design on Mobile Robot with Step-On Interface

    International Workshop on Target Recognition and Tracking (IWTRT2011), [Kitakyushu]  

    発表年月: 2011年10月

  • Prototype of Touch game Application on Development of Step-On Interface

    International Workshop on Target Recognition and Tracking (IWTRT2011), [Kitakyushu]  

    発表年月: 2011年10月

  • 生物の運動に学ぶロボット技術

    日本機械学会 2011年度年次大会 [東京工業大学大岡山キャンパス]  

    発表年月: 2011年09月

  • 人間共存型ロボットの遠隔操作に関する研究(第55報) −ジョイスティックの操作と二輪機構の動作の関係の検討—

    日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]  

    発表年月: 2011年05月

  • 人間共存型ロボットの遠隔操作に関する研究(第56報) −タッチパネルを用いた操作インタフェースによる二輪移動機構の操作—

    日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]  

    発表年月: 2011年05月

  • 人間共存型ロボットの操作手法に関する研究(第10報) —ステップ・オン・インタフェースを利用したタッチゲーム・アプリケーションの開発—

    日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]  

    発表年月: 2011年05月

  • 人間共存型ロボットの操作手法に関する研究(第11報) −レーザー・ポインタによるステップ・オン・インタフェースの操作の検討—

    日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]  

    発表年月: 2011年05月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第14報) —到達距離を受け手に伝える物体投げ渡し動作の生成—

    日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]  

    発表年月: 2011年05月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第15報) —到達距離を受け手に伝える物体投げ渡し動作の評価—

    日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]  

    発表年月: 2011年05月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第12報) —到達距離を受け手に伝える物体投げ渡し動作の解析—

    第31回バイオメカニズム学術講演会(SOBIM2010)  

    発表年月: 2010年11月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第13報) —到達距離を受け手に伝える物体投げ渡し動作の作成—

    第31回バイオメカニズム学術講演会(SOBIM2010)  

    発表年月: 2010年11月

  • 人間共存型ロボットの遠隔操作に関する研究(第54報) —タッチパネルを用いた操作インタフェースの開発と評価—

    日本機械学会 ロボティクス・メカトロニクス講演会2010(ROBOMEC2010)  

    発表年月: 2010年06月

  • 人間共存型ロボットの操作手法に関する研究(第9報) —ステップ・オン・インタフェースにおける制御機能の検討—

    日本機械学会 ロボティクス・メカトロニクス講演会2010(ROBOMEC2010)  

    発表年月: 2010年06月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第10報) —到達距離を受け手に伝える物体投げ渡し動作の計測・解析—

    日本機械学会 ロボティクス・メカトロニクス講演会2010(ROBOMEC2010)  

    発表年月: 2010年06月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第11報) —到達距離を受け手に伝える物体投げ渡し動作の実験・評価—

    日本機械学会 ロボティクス・メカトロニクス講演会2010(ROBOMEC2010)  

    発表年月: 2010年06月

  • ステップ・オン・インタフェースを搭載した移動ロボットと人とのインタラクションの設計

    第59回ヒューマンインタフェース学会研究会「インタラクションのデザイン(特集:デザインとアート)および一般」  

    発表年月: 2010年03月

  • 人間共存型ロボットの操作手法に関する研究(第8報) —ステップ・オン・インタフェースを搭載した移動ロボットの応用検討—

    計測自動制御学会 第10回システムインテグレーション部門講演会(SI2009)  

    発表年月: 2009年12月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第9報) —到達距離を受け手に伝える物体投げ渡し動作の特徴—

    計測自動制御学会 第10回システムインテグレーション部門講演会(SI2009)  

    発表年月: 2009年12月

  • 荷物重量を受け手に伝える手渡し動作の生成と評価

    第21回バイオメカニズムシンポジウム  

    発表年月: 2009年08月

  • 人間共存型ロボットの遠隔操作に関する研究(第51報) —レンジスキャナを用いた環境地図の作成—

    日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)  

    発表年月: 2009年05月

  • 人間共存型ロボットの遠隔操作に関する研究(第52報) —音声認識を用いた移動ロボットの操作インタフェース—

    日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)  

    発表年月: 2009年05月

  • 人間共存型ロボットの遠隔操作に関する研究(第53報) —タッチパネルを用いた移動ロボットの操作インタフェース—

    日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)  

    発表年月: 2009年05月

  • 人間共存型ロボットの動作予告に関する研究(第3報) —音声予告と表示予告の比較実験—

    日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)  

    発表年月: 2009年05月

  • 人間共存型ロボットの操作手法に関する研究(第6報) —ステップ・オン・インタフェースを搭載した移動ロボットの制御機能向上—

    日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)  

    発表年月: 2009年05月

  • 人間共存型ロボットの操作手法に関する研究(第7報) —ステップ・オン・インタフェースを搭載した移動ロボットの利用方法—

    日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)  

    発表年月: 2009年05月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第7報) —荷物重量を受け手に伝える手渡し動作の作成—

    日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)  

    発表年月: 2009年05月

  • 人間共存型ロボットのインフォマティブ・モーションに関する研究(第8報) —到達位置を受け手に伝える物体投げ渡し動作の特徴抽出—

    日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)  

    発表年月: 2009年05月

学内研究費(特定課題)

  • 人とロボットのインタラクションの高度化に関する研究

    2021年   HE, Xin

    ヒトとロボットのインタラクションの高度化のひとつとして,2021年度は,特にヒトとロボットの協調作業における再現機能に取り組んだ.物体の形状を変形加工する作業において,その場で具体的に教示をしなくとも,変形の前後の状態を観察させるだけで,同じ作業をロボットに代行・再現させるものである.学術上の問題は,変形後の状態の認識と変形過程の推定(認識問題)と,ロボットによる作業工程の生成と再現(計画問題)である.これまで,折り畳まれた平坦な物体の変形過程(折り紙)や平坦で柔軟な物体(ハンドタオルなど)の変形過程(摘まみ上げ)を推定してきた.そこで対象物を高さのある立体や三次元的な変形に拡張することを試みている.

  • 人とロボットのインタラクションの高度化に関する研究

    2020年  

     ヒトとロボットの共存・協働・インタラクション(物理的+情報的)を高度化・深化させることを目的として研究開発を進めた.[1] An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel. Sensors 20(11): 3091 (2020). [2] Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction. Sensors 21(1): 105 (2021).

  • ヒトとロボットの協働作業の高度化に関する研究

    2020年  

     ヒトとロボットの共存・協働・インタラクション(物理的+情報的)を高度化・深化させることを目的として研究開発を進めた.[1] An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel. Sensors 20(11): 3091 (2020). [2] Estimation of Flat Object Deformation Using RGB-D Sensor for Robot Reproduction. Sensors 21(1): 105 (2021).

  • 歩道通行や交差点横断を特徴とする屋外における歩行者誘導ロボティック・システム

    2018年  

    Outdoor pedestrian-navigation robotic system characterized by passing along sidewalks and crossing intersections: We studied path planning for pedestrians in outdoor environments using two-dimensional digital maps. Such maps are obtained over the network, e.g. Google Maps and OpenStreetMap. However, they do not record all the data about crosswalks and pedestrian paths, for example outside urban areas. Therefore, a path planning method for pedestrians should be realized that depends neither on preliminarily recorded map data nor on large amounts of data, such as those needed to execute SLAM (simultaneous localization and mapping). Once given the departure and the destination, the system obtains the map data around and between them. First, it performs image processing (contour detection) to visually recognize city blocks. Next, graph theory is applied to deduce a pedestrian path from the departure to the destination. In trials using actual map data, it was possible to plan a reasonable path, including which side of the road to walk on, with a success rate of 70 to 80%. In the future, we plan to detect pedestrian crossings and footbridges from satellite images and merge them into the graph data, and to guide pedestrians using a mobile robot in real environments.
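    The step that visually recognizes city blocks can be approximated, in toy form, by connected-component labeling of a binarized map grid. The grid, symbols, and 4-connectivity choice below are invented for the example; the actual system applies contour detection to 2D map imagery:

    ```python
    from collections import deque

    # Toy stand-in for "recognize city blocks": 4-connected component
    # labeling of a binarized map grid, where '#' marks block area and
    # '.' marks road. Each labeled component is one candidate city block.

    def label_blocks(grid):
        """Return (component count, parallel grid of ids; -1 for road cells)."""
        h, w = len(grid), len(grid[0])
        labels = [[-1] * w for _ in range(h)]
        next_id = 0
        for sy in range(h):
            for sx in range(w):
                if grid[sy][sx] != "#" or labels[sy][sx] != -1:
                    continue
                queue = deque([(sy, sx)])     # flood fill from an unlabeled cell
                labels[sy][sx] = next_id
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] == "#" and labels[ny][nx] == -1):
                            labels[ny][nx] = next_id
                            queue.append((ny, nx))
                next_id += 1
        return next_id, labels

    n, labels = label_blocks(["##.##",
                              "##.##",
                              ".....",
                              "##.##"])
    ```

    Once blocks are labeled, the road cells between them become graph edges, which is where the graph-theoretic path deduction described above takes over.
    
    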

  • Research on methods for teaching three-dimensional motions to humans using two-dimensional projected images

    2016

     Summary

    Research and development activities have been continued in order to deepen human-robot interaction technology in the field of robotics and mechatronics, with the following results. (1) Visual SLAM using an RGB-D sensor: ORB-SHOT SLAM, a trajectory correction by 3D loop closing based on the Bag-of-Visual-Words (BoVW) model for RGB-D visual SLAM, has been newly proposed -> JRM article (to appear). (2) Learning system using an RGB-D sensor and projector: calibration and statistical learning techniques for building an interactive screen for children have been proposed, and trials in a school have been conducted -> IEEE/SICE SII conference paper (presented) and IJARS article (to appear). (3) Learning system using an LM sensor and projector: a calligraphy-stroke learning support system using a projector and motion sensor has been proposed -> JACIII article (to appear). (4) Interactive interface using an LM sensor and 3D hologram: interactive aerial projection of a 3D hologram object has been proposed -> IEEE/ROBIO conference paper (presented).
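    BoVW-based loop closing of the kind used in item (1) ultimately compares visual-word histograms between keyframes and accepts a loop candidate when their similarity is high. A minimal sketch of that comparison, with made-up histograms and an illustrative acceptance threshold (these are assumptions, not the values used in ORB-SHOT SLAM):

```python
import math

def cosine(u, v):
    """Cosine similarity between two bag-of-visual-words histograms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical word-frequency histograms for three keyframes.
kf_old = [4, 0, 3, 1, 0]
kf_mid = [0, 5, 0, 0, 2]
kf_new = [3, 1, 3, 0, 0]

THRESHOLD = 0.8  # illustrative loop-closure acceptance score
candidates = [("kf_old", cosine(kf_new, kf_old)),
              ("kf_mid", cosine(kf_new, kf_mid))]
loops = [name for name, score in candidates if score > THRESHOLD]
print(loops)  # ['kf_old'] — the revisited place is recognized
```

    Accepted candidates would then be verified geometrically (e.g., with a 3D rigid-body fit) before the trajectory is corrected.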

  • Touch-interaction technology under image projection using depth-image information

    2015

     Summary

    <1> Research and development of a near-field touch interface using a time-of-flight camera: The purpose of this study is to apply a 3-dimensional image sensor (a time-of-flight camera) to a projector-sensor system, to achieve the basic functions of a conventional touch interface (clicking, dragging, and sliding), and to extend it with new functions such as finger-direction detection. The research items are as follows: (1) a near-field hand-extraction method, (2) high-accuracy touch/hover detection, (3) an integrated and complete projector-sensor system, and (4) evaluation experiments. <2> Research and development of a calligraphy-brushwork learning support system: A calligraphy learning support system was proposed for supporting brushwork learning using a projector. The system provides three kinds of training according to the learner's ability: copying training, tracing training, and a combination of both. To instruct three-dimensional brushwork such as writing speed, pressure, and brush orientation, we proposed an instruction method that presents information about the brush tip, visualizing its position, orientation, and moving direction. A preliminary learning experiment was performed and the efficiency of the proposed method was examined through the experimental results.
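    The touch/hover detection above ultimately reduces to thresholding the fingertip's height over the projection surface as measured by the depth camera. A minimal sketch, with illustrative thresholds that are assumptions rather than the system's actual values:

```python
# Illustrative thresholds, not the values used in the actual system.
TOUCH_MM = 10.0   # fingertip within 10 mm of the surface counts as a touch
HOVER_MM = 50.0   # within 50 mm counts as a hover

def classify(surface_depth_mm, finger_depth_mm):
    """Classify a fingertip by its height above the projection surface.

    The camera looks down on the surface, so a smaller depth reading
    means the fingertip is closer to the camera, i.e. higher up.
    """
    height = surface_depth_mm - finger_depth_mm
    if height < 0:
        height = 0.0  # sensor noise may report the tip "below" the surface
    if height <= TOUCH_MM:
        return "touch"
    if height <= HOVER_MM:
        return "hover"
    return "none"

print(classify(800.0, 795.0))  # touch
print(classify(800.0, 770.0))  # hover
print(classify(800.0, 700.0))  # none
```

    A real pipeline would first segment the hand, estimate the surface plane per pixel, and smooth the classification over several frames to suppress flicker.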

  • Performance and functionality enhancement and application development of image-projection interfaces

    2014

     Summary

     In this project, we developed a virtual touch screen on which multiple objects can make contact simultaneously and which identifies the contacting objects. Specifically, based on measurement data from an Xtion sensor, we realized a virtual keyboard and a virtual xylophone that perform not only contact/non-contact sensing but also finger/tool discrimination and contact-velocity detection. We examined a method for mounting the Xtion sensor on the device and various algorithms: (1) finger recognition, (2) contact recognition, (3) mapping of contact positions, (4) discrimination between fingers and tools (mallets), and (5) detection of contact velocity. We believe these functions are useful for both edutainment and entertainment.
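     The contact-velocity detection in item (5) can be illustrated by differencing the mallet-tip height between successive depth frames and mapping the approach speed to a note loudness. All names and constants below are assumptions for illustration, not the implemented algorithm:

```python
# Illustrative sketch: estimate the contact velocity of a mallet tip from two
# successive depth frames and map it to a MIDI-style note velocity.
FRAME_DT = 1.0 / 30.0  # assumed 30 fps depth sensor

def contact_velocity(height_prev_mm, height_curr_mm, dt=FRAME_DT):
    """Downward speed of the tip in mm/s just before contact."""
    return max(0.0, (height_prev_mm - height_curr_mm) / dt)

def midi_velocity(speed_mm_s, full_scale=1500.0):
    """Clamp the approach speed into the MIDI velocity range 0-127."""
    return min(127, int(127 * speed_mm_s / full_scale))

speed = contact_velocity(40.0, 5.0)  # tip dropped 35 mm in one frame
print(round(speed), midi_velocity(speed))  # 1050 88
```

     A faster strike thus produces a louder virtual xylophone note; the `full_scale` constant is a tuning parameter that would be calibrated per sensor frame rate.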

  • Research on advanced, highly functional projected-screen interfaces

    2014

     Summary

     As an advancement of the projected-screen interface, in which an operation-panel screen projected onto a wall or other surface is used to operate robotic and mechatronic devices, we conducted research and development to improve the versatility of the Step-on Interface (SOI), a bidirectional interface projected onto an arbitrary surface. Specifically, we aimed to realize a system in which a device is operated by designating, with a laser pointer, operation buttons on a screen projected onto a wall. In this project, we proposed a new method that allows three laser pointers to operate on a single screen without interfering with one another.
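     The summary does not spell out how the three pointers are distinguished; one common approach to this problem is to modulate each pointer with a distinct on/off blink code and match the observed spot history against the known codes. A sketch under that assumption (the codes and names are hypothetical, not the proposed method):

```python
# Hypothetical per-pointer blink codes, one bit per camera frame.
BLINK_CODES = {
    "pointer1": (1, 1, 0, 0),
    "pointer2": (1, 0, 1, 0),
    "pointer3": (1, 0, 0, 1),
}

def identify(history):
    """Return the pointer whose blink code matches the observed on/off frames."""
    for name, code in BLINK_CODES.items():
        if tuple(history) == code:
            return name
    return None  # spot did not match any registered pointer

# A spot seen on, off, on, off over four frames matches pointer2.
print(identify([1, 0, 1, 0]))  # pointer2
```

     In a real system the camera and pointers would need frame synchronization, and codes would be chosen so that no code is a cyclic shift of another.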

  • Functional improvement and trial evaluation of the image-projection tabletop upper-limb training device IDAT

    2013

     Summary

    Evaluation of the image-projection tabletop upper-limb training device IDAT-3 by questionnaire (April 1, 2014)

    1. Purpose: To obtain evaluations from men and women of various ages on the following three items: (1) operability (ease of use, usability, intuitiveness, etc.), (2) amusingness (enjoyment, sense of fulfillment, sense of achievement), and (3) usefulness (effectiveness, practicality, convenience); and, by collecting opinions and impressions, to understand the shortcomings of the current IDAT-3 as input for future performance improvements and new functions.

    2. Method and procedure: IDAT-3 was exhibited at two events. (1) Exhibition 1: the 13th Industry-Academia Collaboration Fair at Kitakyushu Science and Research Park (http://fair.ksrp.or.jp/), October 23-25, 2013, 10:00-17:00, in the gymnasium of Kitakyushu Science and Research Park; visitors were experts, professionals, students, and citizens. (2) Exhibition 2: the 2013 International Robot Exhibition (http://www.nikkan.co.jp/eve/irex/), November 6-9, 2013, 10:00-17:00, Tokyo Big Sight, East Halls 1-3; visitors were experts, professionals, students, and citizens. The procedure was: 1) explain to visitors the background and purpose of the R&D and the configuration and functions of IDAT, and demonstrate it; 2) let visitors try the training programs (whack-a-mole, balloon popping, fish catching); 3) after they have tried as much as they like, ask them to fill in the questionnaire. The questionnaire asked: 1. Gender (male/female); 2. Age (-10s through 80s-); 3. Country (Japan/other); 4. User-friendliness, rated from 9 (good) through 5 (average) to 1 (bad), with a comment; 5. Amusingness, same scale; 6. Usefulness, same scale; 7. Feedback and opinions. Respondents were asked to rate IDAT-3 on the three items as subjective absolute values, not relative ones. From seven days of exhibition in total, 88 responses were obtained: 67 male and 21 female; by age, 40 teens or younger, 30 in their 20s, 4 in their 30s, 4 in their 40s, and 6 aged 50 or over.

    3. Scores: The mean (standard deviation) for each item was as follows. (1) Operability: teens or younger 7.25 (1.80), 20s 7.23 (1.45), 30s 7.38 (1.32), 40s 7.75 (0.83), 50s 9.00 (0.00), 60s 6.33 (0.94), 70s 7.00 (0.00); male 7.28 (1.54), female 7.29 (1.72); total 7.28 (1.59). (2) Amusingness: teens or younger 8.25 (1.34), 20s 7.93 (1.18), 30s 8.25 (0.97), 40s 6.75 (1.09), 50s 9.00 (0.00), 60s 5.67 (0.94), 70s 7.00 (0.00); male 7.88 (1.37), female 8.33 (1.17); total 7.99 (1.34). (3) Usefulness: teens or younger 7.60 (1.50), 20s 7.67 (1.45), 30s 7.38 (1.32), 40s 8.00 (0.71), 50s 9.00 (0.00), 60s 8.33 (0.94), 70s 7.00 (0.00); male 7.60 (1.51), female 7.95 (1.27); total 7.67 (1.46). The overall means for all three items lie between 7 and 8. Operability had the smallest mean (7.28) and the largest standard deviation (1.59) of the three; amusingness had the largest mean (7.99) and the smallest standard deviation (1.34). Because there were few female respondents, examining differences between genders is difficult, so below we examine differences by age.

    3.1 Operability: Grouping ages into young (-10s), middle (20s-40s), and older (50s-), the scores are 7.25 (1.80), 7.31 (1.39), and 7.33 (1.37), respectively. No significant difference between age groups was found: the scores are nearly identical, so age has little effect on the feel of operating IDAT. However, operability was not rated as highly as the other two items; the problems are examined later based on respondents' comments.

    3.2 Amusingness: With the same grouping, the scores are 8.25 (1.34), 7.89 (1.20), and 7.00 (1.63). No significant difference between age groups was found, but scores tend to fall with age. The young group's score of 8.25 is high, suggesting the system captured their interest; game-like training content easily attracts young people, but attracting the elderly is not so simple.

    3.3 Usefulness: With the same grouping, the scores are 7.60 (1.50), 7.64 (1.46), and 8.33 (0.94). No significant difference between age groups was found, but usefulness tends to be rated more highly with age. This is probably because each respondent was told that the aim of IDAT training is hand-eye coordination: elderly people are conscious of declining physical and cognitive functions and readily recognize the significance of such training.

    4. Opinions and impressions: The comments received are grouped by topic and discussed below.

    4.1 Display: "It might be more fun if the moles appeared three-dimensional rather than as flat pictures; how about projecting on a wall?" "What happens when the projection surface is sloped?" How much realism the imagery of a rehabilitation training device needs should be considered carefully. Meanwhile, the performance of LSIs realizing real-time 3D computer graphics for home game consoles keeps improving, so introducing high-definition, high-quality 3D imagery is a future possibility, cost-performance permitting. Also, as long as the unit is installed perpendicular to the surface, operation is unaffected even if that surface is inclined; use on a wall instead of a desk also becomes possible with a modified device configuration.

    4.2 Response sensitivity: "The touch response is a little slow." "The response is sluggish." "Please improve the response speed." "The time lag is noticeable." The search region for detecting obstacles is a plane about 1 cm above the work surface, so the system also reacts to a hand that is not yet touching the surface. This should work in favor of responsiveness, yet a delay is indeed felt, with possible causes in both hardware and software. First, signal transfer: the sensor-data transfer rate is too slow for detecting moving objects, and the cycle time for acquiring sensor data needs to be shortened, including reviewing the computer's processing capacity. Second, programming: the hit-detection computation takes too long, and the algorithm needs to be reviewed and optimized.

    4.3 Detection accuracy: "Please improve the accuracy of hit detection." "Consecutive hit detection is sometimes slow." To achieve adequate responsiveness, the system does not recognize the exact size and shape of obstacles, so the computed obstacle position varies somewhat with hand size (child/adult, ethnicity, etc.), hand state (closed or open), and the use of a toy hammer. To handle all situations more accurately, methods such as measuring the trainee's striking motion before training starts and computing correction values need to be considered.

    4.4 Miniaturization and cost reduction: "I look forward to a smaller version." "It is expensive." A general-purpose computer is currently used for research and development, but for commercialization a one-chip microcomputer could be integrated into the unit. We are also separately examining a low-cost RGB-D sensor to replace the expensive laser range finder (URG) while providing the same function.

    4.5 Overall comments: Many positive and encouraging responses were received: "I think rehabilitation would progress enjoyably with this." "Easy to operate, enjoyable for all kinds of people; I would love to try it again." "Best for rehabilitation; fun; practical." "I look forward to future developments." Many visitors thus recognized the importance and necessity of our R&D and rated it highly, while also providing valuable opinions that show us where improvement is needed. Based on these suggestions, we will continue research toward an even better system.
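    As a consistency check on the figures above, the pooled operability mean (7.28) can be reproduced by weighting the three age-group means by the group sizes implied by the reported age breakdown (40, 38, and 6 respondents; the four responses without a matching age are assumed excluded):

```python
# (group size, group mean) for young (-10s), middle (20s-40s), older (50s-).
groups = [(40, 7.25), (38, 7.31), (6, 7.33)]

n_total = sum(n for n, _ in groups)
pooled = sum(n * m for n, m in groups) / n_total
print(n_total, round(pooled, 2))  # 84 7.28
```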

  • Research and development of a function-maintenance and recovery training system for elderly and disabled people using a projection interface

    2011

     Summary

     This project built a simple portable device, consisting only of a PC and the SOI (Step-on Interface: a new interface that uses a projector's projected screen not only to present information from the device to the person but also to accept command input from the person to the device), for training to maintain and recover upper-body motor and cognitive functions, and developed game applications as training content.

     In the first half of FY2011, we developed applications on the mobile robot HFAMRO-2, which mounts two SOI units at the front and rear of a two-wheeled mobile platform, establishing prospects for "balloon popping (targets move upward)", "fish catching (targets move left and right)", and "whack-a-mole (targets appear at random positions)". From the second half of FY2011, the project was also adopted by the city-originated robot creation project of the Kitakyushu Robot Forum and could be carried out as joint research, with the Kitakyushu Foundation for the Advancement of Industry, Science and Technology as secretariat and with the Department of Rehabilitation Medicine of the University of Occupational and Environmental Health, the Faculty of Engineering of Kyushu Sangyo University, Leaf Co., Ltd., and Core Corp. Kyushu Company as members. Through this, we have so far prototyped unit 1 (reflection type: a common projector combined with a mirror to gain projection distance while keeping the device height low), unit 2 (direct-projection type: a somewhat more expensive short-throw projector installed at a relatively low position projects directly onto the desk), and unit 3 (clinical-trial prototype: the latest projector, aiming at storability and portability), and were able to bring them to the rehabilitation department of the University of Occupational and Environmental Health and to the special nursing home Mojimi-en for feedback. Meanwhile, to make adding and changing screen designs easy and to allow detailed adjustment, we are redeveloping the application program with completely revised algorithms and implementation; it does not yet work completely and remains insufficient. At the end of FY2011, approval was obtained from the ethics review committees on research involving human subjects at both the University of Occupational and Environmental Health and Waseda University, and clinical trials for rehabilitation (university rehabilitation department) and recreation (nursing home) are to begin in FY2012, so continued funding is needed.


 

Courses currently taught
