Updated on 2024/04/24


 
MORI, Tatsuya
 
Affiliation
Faculty of Science and Engineering, School of Fundamental Science and Engineering
Job title
Professor
Degree
Doctor of Information Science (2005.03, Waseda University)
Homepage URL
Profile

When I worked on an industrial R&D team, my research focused on measurement and analysis of the Internet. I became interested in security research during my 2007 stay at the University of Wisconsin–Madison as a visiting scholar. Currently, I am working on a variety of research topics in security and privacy, ranging from hardware to humans. I particularly enjoy studying security and privacy issues in emerging technologies, as well as interdisciplinary research that fuses different disciplines.

Research Experience

  • 2019.04
    -
    Now

    National Institute of Information and Communications Technology   Cybersecurity Research Institute   Guest Researcher

  • 2018.05
    -
    Now

    RIKEN   Center for Advanced Intelligence Project   Visiting researcher

  • 2018.04
    -
    Now

    Waseda University   Department of Computer Science and Communication Engineering   Professor

  • 2021.09
    -
    2022.09

    Waseda University   Department of Computer Science and Communication Engineering   Head of the department

  • 2020.09
    -
    2022.09

    Waseda University   Communication Engineering Department   Head of the department

  • 2013.04
    -
    2018.03

    Waseda University   Department of Computer Science and Communication Engineering   Associate professor

  • 2011.04
    -
    2013.03

    The University of Electro-Communications

  • 2011.04
    -
    2013.03

    NTT Network Technology Laboratories   Senior researcher

  • 2010.07
    -
    2011.03

NTT Service Integration Laboratories   Senior Research Engineer

  • 2007.04
    -
    2010.06

NTT Service Integration Laboratories   Research Engineer

  • 2007.02
    -
    2008.03

University of Wisconsin–Madison   Visiting Researcher

  • 2003.04
    -
    2007.03

NTT Service Integration Laboratories   Researcher

  • 1999.04
    -
    2003.03

NTT Information Sharing Platform Laboratories   Researcher


Education Background

  • 2002.09
    -
    2005.03

    Waseda University  

  • 1997.04
    -
    1999.03

    Waseda University  

  • 1993.04
    -
    1997.03

    Waseda University   School of Science and Engineering  

Committee Memberships

  • 2024.04
    -
    Now

The Network and Distributed System Security Symposium (NDSS 2025)  Program Committee Member

  • 2024.01
    -
    Now

    The ACM Conference on Computer and Communications Security 2024 (CCS 2024)  Program Committee Member

  • 2023.12
    -
    Now

    The ACM Internet Measurement Conference (IMC 2024)  Program Committee Member

  • 2023.05
    -
    Now

Information Processing Society of Japan, Special Interest Group on Computer Security  Executive Committee Member

  • 2022.09
    -
    Now

    The annual Privacy Enhancing Technologies Symposium (PETS)  Artifact Review Committee Member

  • 2021.04
    -
    Now

JST PRESTO "Strengthening ICT Infrastructure for Social Transformation"  Research Area Advisor

  • 2020.04
    -
    Now

    National center of Incident readiness and Strategy for Cybersecurity (NISC)  Committee member of the Cybersecurity Research and Development Strategy

  • 2020.01
    -
    Now

    European Workshop on Usable Security (EuroUSEC)  Program Committee

  • 2022.12
    -
    Now

Center for Research and Development Strategy, Japan Science and Technology Agency (JST)  Committee Member (Communication Research Area)

  • 2016.05
    -
    Now

IEICE Technical Committee on Information Security  Committee Member

  • 2014.05
    -
    Now

IEICE Technical Committee on Information and Communication System Security (ICSS)  Committee Member

  • 2023.01
    -
    2023.10

    The ACM Conference on Computer and Communications Security 2023 (CCS 2023)  Program Committee Member

  • 2021.06
    -
    2023.06

    ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS)  Program committee member

  • 2021.04
    -
    2023.04

IPSJ Special Interest Group on Computer Security  Steering Committee Member

  • 2021.10
    -
    2022.04

    The Passive and Active Measurement (PAM) conference 2022  Program committee member

  • 2018
    -
    2022

    European Symposium on Research in Computer Security (ESORICS)  Program Committee

  • 2020.12
    -
    2021.03

Drone Security Working Group (commissioned by the Ministry of Economy, Trade and Industry)  Committee Member

  • 2020.12
    -
    2021.03

Ministry of Internal Affairs and Communications, Study Group on Research and Development of AI Security  Committee Member

  • 2020.07
    -
    2021.03

    National center of Incident readiness and Strategy for Cybersecurity (NISC) Research, Industry-Academia-Government Collaboration Strategy Working Group  Chief Investigator

  • 2019.08
    -
    2021.03

Japan Society for the Promotion of Science, University-Industry Cooperative Research Committees, the 192nd Committee on Cybersecurity  Committee Member

  • 2019.08
    -
    2021.03

JEITA Expert Panel on the Smart Home Cybersecurity Survey Project  Committee Member

  • 2019.06
    -
    2021.03

IT Instructor Training Program Development Committee (commissioned by the Ministry of Health, Labour and Welfare)  Committee Member

  • 2017.05
    -
    2021.03

IPSJ Special Interest Group on Computer Security  Secretary

  • 2020.10
     
     

Computer Security Symposium (CSS 2020)  Program Chair

  • 2018.05
    -
    2020.02

    Elsevier Computers & Security  Editorial board member

  • 2018.03
    -
    2019.03

IPSJ-ONE Organizing Committee, IPSJ  Secretary

  • 2017
    -
    2019

    ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS)  Program Committee

  • 2016.05
    -
    2017.04

IPSJ Special Interest Group on Computer Security  Steering Committee Member

  • 2013.05
    -
    2017.04

IEICE Technical Committee on Internet Architecture (IA)  Committee Member

  • 2011.05
    -
    2016.05

    IEICE  Associate editor


Professional Memberships

  •  
     
     

    IPSJ

  •  
     
     

    IEICE

  •  
     
     

    ACM

  •  
     
     

    IEEE

Research Areas

  • Information security / Computer system

Research Interests

  • Network

  • Privacy

  • Security

Awards

  • IPSJ SIG-ITS Outstanding Paper Award

    2023.05   The 93rd Joint ITS Technical Meeting   Evaluation of the Impact of Malicious Artificial Fog on LiDAR Object Detection Models

    Winner: 田中優奈, 野本一輝, 小林竜之輔, 森達哉

  • CSS 2022 Outstanding Paper Award

    2022.10   IPSJ Computer Security Symposium (CSS 2022)   A Proposal of an Authentication Method Using Voltage Generated by Eye Blinks

    Winner: 飯島 涼, 竹久 達也, 大木 哲史, 森 達哉

  • Best Paper Award

    2021.10   European Symposium on Usable Security (EuroUSEC 2021)   "Careless Participants Are Essential For Our Phishing Study: Understanding the Impact of Screening Methods"

    Winner: T. Matsuura, A. Hasegawa, M. Akiyama, T. Mori

  • CSS 2021 Best Paper Award

    2021.10   IPSJ Computer Security Symposium (CSS 2021)   "Evaluation of and Countermeasures against Attacks That Identify Positive Cases in Contact-Tracing Frameworks"

    Winner: 野本 一輝, 秋山 満昭, 衛藤 将史, 猪俣 敦夫, 森 達哉

  • Distinguished Paper Award

    2020.02   Network and Distributed System Security Symposium (NDSS 2020)   "Melting Pot of Origins: Compromising the Intermediary Web Services that Rehost Websites"

    Winner: T. Watanabe, E. Shioji, M. Akiyama, T. Mori

  • UWS 2019 Outstanding Paper Award

    2019.10   IPSJ Special Interest Group on Computer Security   Evaluating the Effectiveness of Password-Generation Assistance: A Replication Study of Users across Different Language Spheres

    Winner: 森啓華, 長谷川彩子, 渡邉卓弥, 笹崎寿貴, 秋山満昭, 森 達哉

  • CSS 2019 Best Paper Award

    2019.10   IPSJ Computer Security Symposium (CSS 2019)   "A Large-Scale Study of Voice Assistant Apps"

    Winner: 刀塚敦子, 飯島涼, 渡邉卓弥, 秋山満昭, 酒井哲也, 森達哉

  • CSS 2018 Best Paper Award

    2018.10   IPSJ Computer Security Symposium (CSS 2018)   "Attacking Voice Recognition Devices with Separately Emitted Ultrasound: A User-Study Evaluation and Proposed Countermeasures"

    Winner: 飯島涼, 南翔汰, シュウインゴウ, 竹久達也, 高橋健志, 及川靖広, 森達哉

  • Best student paper award

    2018.08   USENIX Workshop on Offensive Technologies (WOOT 2018)   "A Feasibility Study of Radio-frequency Retroreflector Attack"

    Winner: S. Wakabayashi, S. Maruyama, T. Mori, S. Goto, M. Kinugawa, Y. Hayashi

  • CSS 2017 Outstanding Paper Award (2)

    2017.10   IPSJ Special Interest Group on Computer Security   A Study of Privacy Risks and User Perceptions in Online Auctions

    Winner: 長谷川彩子, 秋山満昭, 八木毅, 森達哉

  • CSS 2017 Outstanding Paper Award (1)

    2017.10   IPSJ Special Interest Group on Computer Security   The Threat of Adversarial Interference with Capacitive Touchscreens

    Winner: 丸山誠太, 若林哲宇, 森達哉

  • CSS 2017 Best Paper Award

    2017.10   IPSJ Computer Security Symposium (CSS 2017)   "The Light and Shadow of User Blocking: Constructing a Side Channel to Identify Social Accounts"

    Winner: 渡邉卓弥, 塩治榮太朗, 秋山満昭, 笹岡京斗, 八木毅, 森達哉

  • Best paper award

    2017.07   International Conference on Applications and Technologies in Information Security (ATIS 2017)   "Characterizing Promotional Attacks in Mobile App Store"

    Winner: B. Sun, X. Luo, M. Akiyama, T. Watanabe, T. Mori

  • MWS 2016 Outstanding Paper Award

    2016.10   IPSJ Special Interest Group on Computer Security   An Automated Detection System for Promotional Attacks in Mobile App Stores

    Winner: 孫博, 秋山満昭, 森達哉

  • CSS 2016 Best Paper Award

    2016.10   IPSJ Computer Security Symposium (CSS 2016)   "Trojan of Things: Evaluating the Threat of Malicious NFC Tags Embedded in Everyday Objects"

    Winner: 丸山誠太, 星野遼, 森達哉

  • SCIS 2016 Innovation Paper Award

    2016.06   IEICE Symposium on Cryptography and Information Security (SCIS 2016)   Evaluating Information Leakage Caused by Hardware Trojans Implementable in IC Peripheral Circuits and Wiring

    Winner: 林優一, 衣川昌宏, 森達哉

  • PWS 2015 Outstanding Paper Award

    2015.10   IPSJ Special Interest Group on Computer Security   RouteDetector: A Location-Tracking Attack Using 9-Axis Sensor Data

    Winner: 渡邉卓弥, 秋山満昭, 森達哉

  • MWS 2013 Outstanding Paper Award

    2013.10   IPSJ Special Interest Group on Computer Security   Analysis of Large Volumes of API Call Logs Collected by an Automated Dynamic Malware Analysis System

    Winner: 藤野朗稚, 森達哉

  • The Telecom System Technology Award

    2010.03   The Telecommunication Advancement Foundation   Identifying Heavy-Hitter Flows from Sampled Flow Statistics

    Winner: Tatsuya Mori, Tetsuya Takine, Jianping Pan, Ryoichi Kawahara, Masato Uchida, Shigeki Goto

  • Best paper award

    2010.01   IEEE/ACM COMSNETS 2010   On the effectiveness of IP reputation for spam filtering

    Winner: Holly Esquivel, Aditya Akella, Tatsuya Mori

  • IEICE Transactions Best Paper Award

    2009.05   IEICE   Identifying Heavy-Hitter Flows from Sampled Flow Statistics

    Winner: Tatsuya MORI, Tetsuya TAKINE, Jianping PAN, Ryoichi KAWAHARA, Masato UCHIDA, Shigeki GOTO

  • CSS 2022 Student Paper Award

    2022.10   IPSJ Computer Security Symposium (CSS 2022)   Obfuscating UEFI Modules by Packing

    Winner: 松尾 和輝, 丹田 賢, 川古谷 裕平, 森 達哉

  • CSS 2022 Student Paper Award

    2022.10   IPSJ Computer Security Symposium (CSS 2022)   Magai no Classifier: Countermeasures against Projection Attacks Targeting Autonomous Drones

    Winner: 大山 穂高, 飯島 涼, 森 達哉

  • ICSS Research Award

    2021.06   IEICE Technical Committee on Information and Communication System Security (ICSS)   Evaluation of and Countermeasures against Privacy Risks Posed by the Exposure Notification Framework

    Winner: 野本一輝, 秋山満昭, 衛藤将史, 猪俣敦夫, 森 達哉

  • CSS 2019 Concept Research Award

    2019.10   IPSJ Special Interest Group on Computer Security   Feasibility Evaluation of Homoglyph Attacks on Programming Languages

    Winner: 鈴木宏彰, 米谷嘉郎, 森達哉

  • CSS 2019 Student Paper Award

    2019.10   IPSJ Special Interest Group on Computer Security   Detecting Phishing Sites through Server Certificate Analysis

    Winner: 櫻井悠次, 渡邉卓弥, 奥田哲矢, 秋山満昭, 森達哉

  • CSS 2018 Student Paper Award (3)

    2018.10   IPSJ Special Interest Group on Computer Security   Comparing Password Generation and Management Tendencies across Language Spheres

    Winner: 森啓華, シュウインゴウ, 森達哉

  • CSS 2018 Student Paper Award (2)

    2018.10   IPSJ Special Interest Group on Computer Security   A Large-Scale Study of IDN Homograph Attacks: Trends and Countermeasures

    Winner: 鈴木宏彰, 森達哉, 米谷嘉朗

  • CSS 2018 Student Paper Award (1)

    2018.10   IPSJ Special Interest Group on Computer Security   SeQR: A Shoulder-Surfing-Resistant QR Code Generation Method

    Winner: 笹崎寿貴, シュウインゴウ, 丸山誠太, 森達哉

  • MWS 2017 Student Paper Award (2)

    2017.10   IPSJ Special Interest Group on Computer Security   A Study of How Mobile App Developers Respond to Vulnerabilities

    Winner: 安松達彦, 金井文宏, 渡邉卓弥, 塩治榮太朗, 秋山満昭, 森達哉

  • CSS 2017 Student Paper Award (1)

    2017.10   IPSJ Special Interest Group on Computer Security   A Practicality Evaluation of Radio-Frequency Retroreflector Attacks

    Winner: 若林哲宇, 丸山誠太, 星野遼, 森達哉

  • CSS 2016 Student Paper Award (2)

    2016.10   IPSJ Special Interest Group on Computer Security   A Large-Scale Survey of Digital Signatures in Android Applications

    Winner: 吉田奏絵, 今井宏謙, 芹沢奈々, 森達哉, 金岡晃

  • CSS 2016 Student Paper Award (1)

    2016.10   IPSJ Special Interest Group on Computer Security   Building a Web-Tracking Detection System and Surveying Third-Party Tracking Sites

    Winner: 芳賀夢久, 高田雄太, 秋山満昭, 森達哉

  • Internet Architecture Research Award

    2016.06   IEICE Technical Committee on Internet Architecture (IA)   Inferring the Number of Accesses to Internet Services using DNS Traffic

    Winner: A. Shimoda, K. Ishibashi, S. Harada, K. Sato, M. Tsujino, T. Inoue, M. Shimura, T. Takebe, K. Takahashi, T. Mori, S. Goto

  • Information and Communication System Security Research Award

    2016.06   IEICE Technical Committee on Information and Communication System Security (ICSS)   Detecting Malicious Domain Names Based on the Temporal Variation Characteristics of Attack Infrastructure

    Winner: 千葉大紀, 八木毅, 秋山満昭, 森達哉, 矢田健, 針生剛男, 後藤滋樹

  • MWS 2015 Student Paper Award

    2015.10   IPSJ Special Interest Group on Computer Security   A Large-Scale Analysis of Android Clone Apps

    Winner: 石井悠太, 渡邉卓弥, 秋山満昭, 森達哉

  • Information and Communication System Security Research Award

    2015.06   IEICE Technical Committee on Information and Communication System Security (ICSS)   Understanding Android Apps That Mimic Legitimate Apps

    Winner: 石井悠太, 渡邉卓弥, 秋山満昭, 森達哉

  • MWS 2014 Student Paper Award

    2014.10   IPSJ Special Interest Group on Computer Security   Correlation Analysis of Android App Descriptions and Privacy Data Access

    Winner: 渡邉卓弥, 秋山満昭, 酒井哲也, 鷲崎弘宜, 森達哉

  • CSS 2014 Student Paper Award

    2014.10   IPSJ Special Interest Group on Computer Security   Correlation Analysis of Android App Descriptions and Privacy Data Access

    Winner: 渡邉卓弥, 秋山満昭, 酒井哲也, 鷲崎弘宜, 森達哉

  • Best paper award

    2014.06   World Telecommunications Congress 2014 (WTC2014)   Loss Recovery Method for Content Pre-distribution in VoD Service

    Winner: N. Kamiyama, R. Kawahara, T. Mori

  • Best poster award

    2014.06   ACM ASIACCS 2014   Understanding the consistency between words and actions for Android apps

    Winner: T. Watanabe, T. Mori

  • CSS 2013 Student Paper Award

    2013.10   IPSJ Special Interest Group on Computer Security   Analysis of Large Volumes of API Call Logs Collected by an Automated Dynamic Malware Analysis System

    Winner: 藤野朗稚, 森達哉

  • Internet Architecture Research Award

    2012.06   IEICE Technical Committee on Internet Architecture (IA)   Combining the outcomes of IP reputation services

    Winner: 森 達哉, 佐藤一道, 高橋洋介, 木村達明, 石橋圭介

  • Information Networks Research Award

    2012.03   IEICE Technical Committee on Information Networks (IN)   Correlation Analysis of TCP Quality Metrics Using Measured Traffic

    Winner: 池田泰弘, 上山憲昭, 川原亮一, 木村達明, 森 達哉

  • Best student paper award

    2011.08   APAN (Asia Pacific Advanced Network) Network Research Workshop 2011   Analysis of Redirection Caused by Web-based Malware

    Winner: Yuta Takata, Shigeki Goto, Tatsuya Mori

  • Best student paper award

    2010.07   IEEE/IPSJ SAINT 2010   Sensor in the Dark: Building Untraceable Large-scale Honeypots using Virtualization Technologies

    Winner: Akihiro Shimoda, Tatsuya Mori, Shigeki Goto

  • Internet Architecture Research Award

    2010.06   Understanding the large-scale spamming botnet

Winner: Tatsuya Mori, Holly Esquivel, Aditya Akella, Akihiro Shimoda, Shigeki Goto

  • Network Systems Research Award

    2009.03   IEICE Technical Committee on Network Systems (NS)   Performance Evaluation of ISP-Operated CDNs

    Winner: 上山 憲昭, 森 達哉, 川原 亮一, 長谷川 治久

  • Network Systems Research Award

    2006.03   IEICE Technical Committee on Network Systems (NS)   A Method for Classifying Flow Characteristics Using a Naive Bayes Classifier

    Winner: 森 達哉, 川原 亮一, 上山 憲昭

  • Telecommunication Management Research Award

    2005.03   IEICE Technical Committee on Telecommunication Management (TM)   A Method for Detecting TCP Flow-Level Performance Degradation Using Sampled Packet Data

    Winner: 川原 亮一, 石橋 圭介, 森 達哉, 阿部 威郎


 

Papers

  • DeGhost: Unmasking Phantom Intrusions in Autonomous Recognition Systems

    H. Oyama, R. Iijima, T. Mori

    Proceedings of the 9th IEEE European Symposium on Security and Privacy 2024 (EuroS&P 2024)    2024.07  [Refereed]

    Authorship:Last author, Corresponding author

  • The Catcher in the Eye: Recognizing Users by their Blinks

    Ryo Iijima, Tetsuya Takehisa, Tetsushi Ohki, Tatsuya Mori

    Proceedings of the 19th ACM ASIA Conference on Information, Computer and Communications Security (ASIACCS 2024)    2024.07  [Refereed]

    Authorship:Last author

  • Browser Permission Mechanisms Demystified

    Kazuki Nomoto, Takuya Watanabe, Eitaro Shioji, Mitsuaki Akiyama, Tatsuya Mori

    Proceedings of the Network and Distributed System Security Symposium (NDSS 2023)    2023.02  [Refereed]

    Authorship:Last author, Corresponding author

  • Understanding the Behavior Transparency of Voice Assistant Applications Using the ChatterBox Framework

    Atsuko Natatsuka, Ryo Iijima, Takuya Watanabe, Mitsuaki Akiyama, Tetsuya Sakai, Tatsuya Mori

    Proceedings of the 25th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2022)     143 - 159  2022.10  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

  • On the Feasibility of Linking Attack to Google/Apple Exposure Notification Framework

    Kazuki Nomoto, Mitsuaki Akiyama, Masashi Eto, Atsuo Inomata, Tatsuya Mori

    Proceedings of the 22nd Privacy Enhancing Technologies Symposium (PETS 2022)   2022 ( 4 ) 140 - 161  2022.08  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

  • Audio Hotspot Attack: An Attack on Voice Assistance Systems Using Directional Sound Beams and its Feasibility

    Ryo Iijima, Shota Minami, Yunao Zhou, Tatsuya Takehisa, Takeshi Takahashi, Yasuhiro Oikawa, Tatsuya Mori

    IEEE Transactions on Emerging Topics in Computing   9 ( 4 ) 2004 - 2018  2021.10  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

  • Melting Pot of Origins: Compromising the Intermediary Web Services that Rehost Websites

    T. Watanabe, E. Shioji, M. Akiyama, T. Mori

    Proceedings of the 26th Network and Distributed System Security Symposium (NDSS 2020)     1 - 15  2020.02  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

    16 citations (Scopus)

  • EIGER: automated IOC generation for accurate and interpretable endpoint malware detection.

    Yuma Kurogome, Yuto Otsuki, Yuhei Kawakoya, Makoto Iwamura, Syogo Hayashi, Tatsuya Mori, Koushik Sen

    Proceedings of the 35th Annual Computer Security Applications Conference, ACSAC 2019, San Juan, PR, USA, December 09-13, 2019     687 - 701  2019.12  [Refereed]

    DOI

    Scopus

    17 citations (Scopus)

  • ShamFinder: An Automated Framework for Detecting IDN Homographs.

Hiroaki Suzuki, Daiki Chiba, Yoshiro Yoneya, Tatsuya Mori, Shigeki Goto

    Proceedings of the Internet Measurement Conference, IMC 2019, Amsterdam, The Netherlands, October 21-23, 2019     449 - 462  2019.10  [Refereed]

    Authorship:Corresponding author

    DOI

    Scopus

    25 citations (Scopus)

  • Tap 'n Ghost: A Compilation of Novel Attack Techniques against Smartphone Touchscreens.

    Seita Maruyama, Satohiro Wakabayashi, Tatsuya Mori

    2019 IEEE Symposium on Security and Privacy, SP 2019, San Francisco, CA, USA, May 19-23, 2019     620 - 637  2019.05  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

    23 citations (Scopus)

  • Don't throw me away: Threats Caused by the Abandoned Internet Resources Used by Android Apps.

    Elkana Pariwono, Daiki Chiba, Mitsuaki Akiyama, Tatsuya Mori

    Proceedings of the 2018 on Asia Conference on Computer and Communications Security, AsiaCCS 2018, Incheon, Republic of Korea, June 04-08, 2018     147 - 158  2018.06  [Refereed]

    Authorship:Last author, Corresponding author

  • User Blocking Considered Harmful? An Attacker-Controllable Side Channel to Identify Social Accounts.

    Takuya Watanabe, Eitaro Shioji, Mitsuaki Akiyama, Keito Sasaoka, Takeshi Yagi, Tatsuya Mori

    2018 IEEE European Symposium on Security and Privacy, EuroS&P 2018, London, United Kingdom, April 24-26, 2018     323 - 337  2018.04  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

    8 citations (Scopus)

  • A First Look at Brand Indicators for Message Identification (BIMI).

Masanori Yajima, Daiki Chiba, Yoshiro Yoneya, Tatsuya Mori

    Passive and Active Measurement - 24th International Conference(PAM)     479 - 495  2023

    DOI

    Scopus

    1 citation (Scopus)

  • Understanding Non-Experts’ Security- and Privacy-Related Questions on a Q&A Site

    Ayako A. Hasegawa, Naomi Yamashita, Tatsuya Mori, Daisuke Inoue, Mitsuaki Akiyama

    Proceedings of the Eighteenth Symposium on Usable Privacy and Security (SOUPS 2022)     39 - 56  2022.08  [Refereed]

  • Cyber-physical firewall: monitoring and controlling the threats caused by malicious analog signals.

    Ryo Iijima, Tatsuya Takehisa, Tatsuya Mori

    Proceedings of the 19th ACM International Conference on Computing Frontiers (CF’22)     296 - 304  2022.05  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

    1 citation (Scopus)

  • Know Your Victim: Tor Browser Setting Identification via Network Traffic Analysis.

    Chun-Ming Chang, Hsu-Chun Hsiao, Timothy M. Lynar, Tatsuya Mori

    Companion of The Web Conference 2022     201 - 204  2022

    DOI

    Scopus

    1 citation (Scopus)

  • Experiences, Behavioral Tendencies, and Concerns of Non-Native English Speakers in Identifying Phishing Emails.

    Ayako Akiyama Hasegawa, Naomi Yamashita, Mitsuaki Akiyama, Tatsuya Mori

    Journal of Information Processing   30   841 - 858  2022

    DOI

    Scopus

  • Measuring Adoption of DNS Security Mechanisms with Cross-Sectional Approach

Masanori Yajima, Daiki Chiba, Yoshiro Yoneya, Tatsuya Mori

    Proceedings of the IEEE Global Communications Conference: Communication & InformationSystems Security (Globecom 2021)     1 - 6  2021.12  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

    1 citation (Scopus)

  • A First Look at COVID-19 Domain Names: Origin and Implications

    Ryo Kawaoka, Daiki Chiba, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori

    Passive and Active Measurement     39 - 53  2021.04  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

    4 citations (Scopus)

  • Why They Ignore English Emails: The Challenges of Non-Native Speakers in Identifying Phishing Emails.

    Ayako Akiyama Hasegawa, Naomi Yamashita, Mitsuaki Akiyama, Tatsuya Mori

    Seventeenth Symposium on Usable Privacy and Security     319 - 338  2021

  • Careless Participants Are Essential for Our Phishing Study: Understanding the Impact of Screening Methods.

    Tenga Matsuura, Ayako Akiyama Hasegawa, Mitsuaki Akiyama, Tatsuya Mori

    EuroUSEC     36 - 47  2021  [Refereed]  [International journal]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

    3 citations (Scopus)

  • Identifying the Phishing Websites Using the Patterns of TLS Certificates.

Yuji Sakurai, Takuya Watanabe, Tetsuya Okuda, Mitsuaki Akiyama, Tatsuya Mori

    J. Cyber Secur. Mobil.   10 ( 2 ) 451 - 486  2021  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

    Scopus

    2 citations (Scopus)

  • Discovering HTTPSified Phishing Websites Using the TLS Certificates Footprints

    Yuji Sakurai, Takuya Watanabe, Tetsuya Okuda, Mitsuaki Akiyama, Tatsuya Mori

    2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW)     522 - 531  2020.09  [Refereed]

    Authorship:Last author, Corresponding author

    DOI

  • Study on the Vulnerabilities of Free and Paid Mobile Apps Associated with Software Library.

    Takuya Watanabe, Mitsuaki Akiyama, Fumihiro Kanei, Eitaro Shioji, Yuta Takata, Bo Sun, Yuta Ishii, Toshiki Shibahara, Takeshi Yagi, Tatsuya Mori

    IEICE Trans. Inf. Syst.   103-D ( 2 ) 276 - 291  2020  [Refereed]

    Authorship:Last author, Corresponding author

  • CLAP: Classification of Android PUAs by Similarity of DNS Queries.

    Mitsuhiro Hatada, Tatsuya Mori

    IEICE Trans. Inf. Syst.   103-D ( 2 ) 265 - 275  2020

    DOI

    Scopus

  • Follow Your Silhouette: Identifying the Social Account of Website Visitors through User-Blocking Side Channel.

Takuya Watanabe, Eitaro Shioji, Mitsuaki Akiyama, Keito Sasaoka, Takeshi Yagi, Tatsuya Mori

    IEICE Trans. Inf. Syst.   103-D ( 2 ) 239 - 255  2020

    DOI

    Scopus

    1 citation (Scopus)

  • Discovering Malicious URLs Using Machine Learning Techniques

    Bo Sun, Takeshi Takahashi, Lei Zhu, Tatsuya Mori

    Intelligent Systems Reference Library   177   33 - 60  2020  [Refereed]

     View Summary

Security specialists have been developing and implementing many countermeasures against security threats, a necessity given that the number of new threats continues to grow. In this chapter, we introduce an approach for identifying hidden security threats, using Uniform Resource Locators (URLs) as an example dataset, with a method that automatically detects malicious URLs by leveraging machine learning techniques. We demonstrate the effectiveness of the method through performance evaluations.

    DOI

    Scopus

    2 citations (Scopus)

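As a rough illustration of the first stage such an ML-based detector typically performs, the sketch below extracts simple lexical features from a URL. This is an illustrative example only; the feature set is an assumption, not the chapter's actual feature list.

```python
import math
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Extract simple lexical features of the kind a malicious-URL
    classifier might be trained on."""
    parsed = urlparse(url)
    host, path = parsed.netloc, parsed.path

    def entropy(s: str) -> float:
        # Shannon entropy of the character distribution.
        if not s:
            return 0.0
        probs = [s.count(c) / len(s) for c in set(s)]
        return -sum(p * math.log2(p) for p in probs)

    return {
        "url_len": len(url),                              # long URLs are suspicious
        "num_digits": sum(c.isdigit() for c in url),
        "num_subdomains": host.count("."),
        "has_ip_host": host.replace(".", "").isdigit(),   # raw-IP hosts
        "path_depth": path.count("/"),
        "host_entropy": round(entropy(host), 3),          # DGA-like hosts score high
    }
```

A real pipeline would compute such vectors for many labeled URLs, feed them to a classifier, and evaluate precision and recall, as the chapter does.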
  • Comparative Analysis of Three Language Spheres: Are Linguistic and Cultural Differences Reflected in Password Selection Habits?

    Keika Mori, Takuya Watanabe, Yunao Zhou, Ayako Akiyama Hasegawa, Mitsuaki Akiyama, Tatsuya Mori

    2019 IEEE European Symposium on Security and Privacy Workshops, EuroS&P Workshops 2019, Stockholm, Sweden, June 17-19, 2019     159 - 171  2019  [Refereed]

    DOI

    Scopus

    8 citations (Scopus)

  • Understanding the Origins of Weak Cryptographic Algorithms Used for Signing Android Apps.

    Kanae Yoshida, Hironori Imai, Nana Serizawa, Tatsuya Mori, Akira Kanaoka

    J. Inf. Process.   27   593 - 602  2019  [Refereed]

    DOI

    Scopus

    1 citation (Scopus)


  • Understanding the Responsiveness of Mobile App Developers to Software Library Updates.

    Tatsuhiko Yasumatsu, Takuya Watanabe, Fumihiro Kanei, Eitaro Shioji, Mitsuaki Akiyama, Tatsuya Mori

    Proceedings of the Ninth ACM Conference on Data and Application Security and Privacy, CODASPY 2019, Richardson, TX, USA, March 25-27, 2019     13 - 24  2019  [Refereed]

    DOI

    Scopus

    8 citations (Scopus)

  • DomainProfiler: toward accurate and early discovery of domain names abused in future.

    Daiki Chiba, Takeshi Yagi, Mitsuaki Akiyama, Toshiki Shibahara, Tatsuya Mori, Shigeki Goto

    Int. J. Inf. Sec.   17 ( 6 ) 661 - 680  2018.11  [Refereed]

  • Understanding the Inconsistency between Behaviors and Descriptions of Mobile Apps.

    Takuya Watanabe, Mitsuaki Akiyama, Tetsuya Sakai, Hironori Washizaki, Tatsuya Mori

    IEICE Transactions   101-D ( 11 ) 2584 - 2599  2018.11  [Refereed]

  • Stay On-Topic: Generating Context-Specific Fake Restaurant Reviews.

    Mika Juuti, Bo Sun, Tatsuya Mori, N. Asokan

    Computer Security - 23rd European Symposium on Research in Computer Security, ESORICS 2018, Barcelona, Spain, September 3-7, 2018, Proceedings, Part I     132 - 151  2018.09  [Refereed]

    DOI

    Scopus

    17 citations (Scopus)

  • Understanding the Origins of Weak Cryptographic Algorithms Used for Signing Android Apps.

    Kanae Yoshida, Hironori Imai, Nana Serizawa, Tatsuya Mori, Akira Kanaoka

    2018 IEEE 42nd Annual Computer Software and Applications Conference, COMPSAC 2018, Tokyo, Japan, 23-27 July 2018, Volume 2     713 - 718  2018.08  [Refereed]

    DOI

    Scopus

    1 citation (Scopus)

  • DomainChroma: Building actionable threat intelligence from malicious domain names

    Daiki Chiba, Mitsuaki Akiyama, Takeshi Yagi, Kunio Hato, Tatsuya Mori, Shigeki Goto

    Computers and Security   77   138 - 161  2018.08  [Refereed]

     View Summary

    Since the 1980s, domain names and the domain name system (DNS) have been used and abused. Although legitimate Internet users rely on domain names as indispensable infrastructures for using the Internet, attackers use or abuse them as reliable, instantaneous, and distributed attack infrastructures. However, there is a lack of complete understanding of such domain-name abuses and methods for coping with them. In this study, we designed and implemented a unified analysis system combining current defense solutions to build actionable threat intelligence from malicious domain names. The basic concept underlying our system is malicious domain name chromatography. Our analysis system can distinguish among mixtures of malicious domain names for websites. On the basis of this concept, we do not create a hodgepodge of current solutions but design separation of abused domain names and offer actionable threat intelligence or defense information by considering the characteristics of malicious domain names as well as the possible defense solutions and points of defense. Finally, we evaluated our analysis system and defense-information output using a large real dataset to show the effectiveness and validity of our system.

    DOI

    Scopus

    18 citations (Scopus)

  • Detecting malware-infected devices using the HTTP header patterns

    Sho Mizuno, Mitsuhiro Hatada, Tatsuya Mori, Shigeki Goto

    IEICE Transactions on Information and Systems   E101D ( 5 ) 1370 - 1379  2018.05  [Refereed]

     View Summary

Damage caused by malware has become a serious problem. The recent rise of evasive malware has made it difficult to detect at the pre-infection stage; malware detection at the post-infection stage is a promising approach that fills this gap. Given this background, this work aims to identify likely malware-infected devices from measurements of Internet traffic. The advantage of the traffic-measurement-based approach is that it enables us to monitor a large number of end hosts. If we find an end host acting as a source of malicious traffic, that end host is likely a malware-infected device. Since the majority of malware today uses the web to communicate with the C&C servers that reside on external networks, we leverage information recorded in HTTP headers to discriminate between malicious and benign traffic. To make our approach scalable and robust, we develop an automatic template generation scheme that drastically reduces the amount of information to be kept while achieving high classification accuracy; since it does not rely on any domain knowledge, the approach should be robust against changes in malware. We apply several classifiers, including machine learning algorithms, to the extracted templates and classify traffic into two categories: malicious and benign. Our extensive experiments demonstrate that our approach discriminates between malicious and benign traffic with up to 97.1% precision while keeping the false positive rate below 1.0%.

    DOI

    Scopus

    1 citation (Scopus)

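The template idea described in the abstract above — masking the variable parts of HTTP header values so that requests produced by the same software collapse onto one compact template — can be sketched roughly as follows. The masking rules here are illustrative assumptions, not the paper's actual template-generation algorithm.

```python
import re
from collections import Counter

def headerize(value: str) -> str:
    """Collapse an HTTP header value into a coarse template by masking
    long hex identifiers and numeric tokens."""
    t = re.sub(r"[0-9a-f]{8,}", "<HEX>", value, flags=re.IGNORECASE)
    return re.sub(r"\d+", "<NUM>", t)

def template_counts(values) -> Counter:
    """Count how many observed header values map onto each template;
    with real traffic this shrinks the data to keep dramatically."""
    return Counter(headerize(v) for v in values)
```

Features derived from such templates could then be fed to the classifiers the paper evaluates.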
  • Audio Hotspot Attack: An Attack on Voice Assistance Systems Using Directional Sound Beams.

Ryo Iijima, Shota Minami, Yunao Zhou, Tatsuya Takehisa, Takeshi Takahashi, Yasuhiro Oikawa, Tatsuya Mori

    Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS 2018, Toronto, ON, Canada, October 15-19, 2018     2222 - 2224  2018  [Refereed]

    DOI

  • A Feasibility Study of Radio-frequency Retroreflector Attack.

    Satohiro Wakabayashi, Seita Maruyama, Tatsuya Mori, Shigeki Goto, Masahiro Kinugawa, Yu-ichi Hayashi

    12th USENIX Workshop on Offensive Technologies, WOOT 2018, Baltimore, MD, USA, August 13-14, 2018.    2018  [Refereed]

  • PADetective: A systematic approach to automate detection of promotional attackers in mobile app store

    Bo Sun, Xiapu Luo, Mitsuaki Akiyama, Takuya Watanabe, Tatsuya Mori

    Journal of Information Processing   26   212 - 223  2018.01  [Refereed]

     View Summary

Mobile app stores, such as Google Play, play a vital role in the ecosystem of mobile device software distribution platforms. When users find an app of interest, they can acquire useful data from the app store to inform their decision regarding whether to install the app. This data includes ratings, reviews, number of installs, and the category of the app. The ratings and reviews are user-generated content (UGC) that affects the reputation of an app. Therefore, miscreants can leverage such channels to conduct promotional attacks; for example, a miscreant may promote a malicious app by endowing it with a good reputation via fake ratings and reviews to encourage would-be victims to install the app. In this study, we have developed a system called PADetective that detects miscreants who are likely to be conducting promotional attacks. Using a 1,723-entry labeled dataset, we demonstrate that the true positive rate of the detection model is 90%, with a false positive rate of 5.8%. We then applied our system to an unlabeled dataset of 57M reviews written by 20M users for 1M apps to characterize the prevalence of threats in the wild. The PADetective system detected 289K reviewers as potential promotional attackers. These potential attackers posted reviews for 136K apps, which included 21K malicious apps. We also report that our system can be used to identify potentially malicious apps that have not been detected by anti-virus checkers.

    DOI

    Scopus

    3 citations (Scopus)

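The per-reviewer detection PADetective performs can be pictured with a couple of toy features computed over one reviewer's posting history. The features and thresholds below are hypothetical, chosen only to illustrate the idea; the paper trains a classifier on a labeled dataset rather than using fixed rules.

```python
from statistics import mean

def reviewer_features(reviews):
    """reviews: (app_id, rating_1_to_5, day) tuples for one reviewer."""
    ratings = [r for _, r, _ in reviews]
    days = sorted(d for _, _, d in reviews)
    return {
        "n_reviews": len(reviews),
        # 0..1: how far ratings sit from the neutral score of 3
        "rating_extremity": mean(abs(r - 3) for r in ratings) / 2,
        # reviews per day over the reviewer's active span
        "burstiness": len(reviews) / (days[-1] - days[0] + 1),
    }

def looks_promotional(feats, min_reviews=5):
    # Hypothetical thresholds, for illustration only: many reviews,
    # all extreme ratings, posted in a tight burst.
    return (feats["n_reviews"] >= min_reviews
            and feats["rating_extremity"] > 0.8
            and feats["burstiness"] > 2.0)
```

In the paper's setting, vectors like these (over a richer feature set) would be scored by the trained model instead of fixed thresholds.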
  • Automatically Generating Malware Analysis Reports Using Sandbox Logs.

Bo Sun, Akinori Fujino, Tatsuya Mori, Tao Ban, Takeshi Takahashi, Daisuke Inoue

    IEICE Trans. Inf. Syst.   101-D ( 11 ) 2622 - 2632  2018  [Refereed]

    DOI

    Scopus

    7 citations (Scopus)

  • DomainProfiler: toward accurate and early discovery of domain names abused in future

    Daiki Chiba, Takeshi Yagi, Mitsuaki Akiyama, Toshiki Shibahara, Tatsuya Mori, Shigeki Goto

    International Journal of Information Security     1 - 20  2017.12  [Refereed]

     View Summary

    Domain names are at the base of today’s cyber-attacks. Attackers abuse the domain name system (DNS) to mystify their attack ecosystems; they systematically generate a huge volume of distinct domain names to make it infeasible for blacklisting approaches to keep up with newly generated malicious domain names. To solve this problem, we propose DomainProfiler for discovering malicious domain names that are likely to be abused in future. The key idea with our system is to exploit temporal variation patterns (TVPs) of domain names. The TVPs of domain names include information about how and when a domain name has been listed in legitimate/popular and/or malicious domain name lists. On the basis of this idea, our system actively collects historical DNS logs, analyzes their TVPs, and predicts whether a given domain name will be used for malicious purposes. Our evaluation revealed that DomainProfiler can predict malicious domain names 220 days beforehand with a true positive rate of 0.985. Moreover, we verified the effectiveness of our system in terms of the benefits from our TVPs and defense against cyber-attacks.

    DOI · Scopus (6 citations)
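    The temporal variation pattern (TVP) idea, encoding how and when a domain has appeared on popular or malicious lists over time, can be sketched as a binary feature vector over time windows (a minimal illustration under assumed inputs, not the paper's exact feature set):

```python
def tvp_features(listed_days, windows):
    """One binary feature per time window: was the domain on the list then?
    listed_days: days (ints) on which the domain appeared on a given list.
    windows: list of (start, end) half-open day ranges."""
    return [int(any(s <= d < e for d in listed_days)) for s, e in windows]
```

    A classifier trained on such vectors, one per list (popular, blacklisted, etc.), can then score a domain before it is ever abused.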
  • Network event extraction from log data with nonnegative tensor factorization

    Tatsuaki Kimura, Keisuke Ishibashi, Tatsuya Mori, Hiroshi Sawada, Tsuyoshi Toyono, Ken Nishimatsu, Akio Watanabe, Akihiro Shimoda, Kohei Shiomoto

    IEICE Transactions on Communications   E100B ( 10 ) 1865 - 1878  2017.10  [Refereed]

     View Summary

    Network equipment, such as routers, switches, and RADIUS servers, generate various log messages induced by network events such as hardware failures and protocol flaps. In large production networks, analyzing the log messages is crucial for diagnosing network anomalies; however, it has become challenging due to the following two reasons. First, the log messages are composed of unstructured text messages generated in accordance with vendor-specific rules. Second, network events that induce the log messages span several geographical locations, network layers, protocols, and services. We developed a method to tackle these obstacles consisting of two techniques: statistical template extraction (STE) and log tensor factorization (LTF). The former leverages a statistical clustering technique to automatically extract primary templates from unstructured log messages. The latter builds a statistical model that collects spatial-temporal patterns of log messages. Such spatial-temporal patterns provide useful insights into understanding the impact and patterns of hidden network events. We evaluate our techniques using a massive amount of network log messages collected from a large operating network and confirm that our model fits the data well. We also investigate several case studies that validate the usefulness of our method.

    DOI · Scopus (2 citations)
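    The paper factorizes a log-message tensor; the core nonnegative-factorization step can be illustrated in the simpler matrix case with the classic multiplicative updates (a sketch of plain NMF, not the paper's tensor model):

```python
import numpy as np

def nmf(V, rank, iters=300, eps=1e-9):
    """Nonnegative matrix factorization V ~= W @ H via multiplicative
    updates (Lee & Seung style); all factors stay nonnegative."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

    In the tensor setting each factor corresponds to one mode (e.g., time, location, template), which is what exposes spatial-temporal event patterns.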
  • Eating moment recognition using heart rate responses

    Shinji Hotta, Tatsuya Mori, Daisuke Uchida, Kazuho Maeda, Yoshinori Yaginuma, Akihiro Inomata

    UbiComp/ISWC 2017 - Adjunct Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers     69 - 72  2017.09  [Refereed]

     View Summary

    There are many studies for recognizing eating moments using a wide range of modalities (e.g., arm motion). However, their accuracy and robustness need to be improved for practical use in daily life. In this paper, we propose a novel recognition method using bimodal heart rate responses caused by eating. Our method combines (i) short-term and (ii) long-term features of heart rate changes. The proposed method was evaluated for recognizing eating moments with the free-environment dataset (9 participants, 604 days), and achieved 98.6% accuracy and 56.9% F-score. The proposed features related to ingestion and digestion contribute to robust eating moment recognition.

    DOI

  • DomainChroma: Providing Optimal Countermeasures against Malicious Domain Names

    Daiki Chiba, Mitsuaki Akiyama, Takeshi Yagi, Takeshi Yada, Tatsuya Mori, Shigeki Goto

    Proceedings - International Computer Software and Applications Conference   1   643 - 648  2017.09  [Refereed]

     View Summary

    Domain names and domain name system (DNS) have been used and abused for over 30 years since the 1980s. Although legitimate Internet users rely on domain names as their indispensable infrastructures for using the Internet, attackers use or abuse them as reliable, instantaneous, and distributed attack infrastructure. However, there is a lack of complete understanding of such domain name abuses and the methods for coping with them. In this paper, we design and implement a unified and objective analysis pipeline combining the existing defense solutions to realize practical and optimal defenses against today's malicious domain names. The basic concept underlying our novel analytical approach is malicious domain names' chromatography. Our new analysis pipeline can distinguish among mixtures of malicious domain names for websites. On the basis of this concept, we do not create a hodgepodge of existing solutions but design separation of abused domain names and offer defense information by considering the characteristics of malicious domain names as well as the possible defense solutions and points of defense. Finally, we evaluate our analysis pipeline and output defense information using a large and real dataset to show the effectiveness and validity of our proposed approach.

    DOI · Scopus (3 citations)
  • Detecting and Classifying Android PUAs by Similarity of DNS queries

    Mitsuhiro Hatada, Tatsuya Mori

    Proceedings - International Computer Software and Applications Conference   2   590 - 595  2017.09  [Refereed]

     View Summary

    This work develops a method of detecting and classifying 'potentially unwanted applications' (PUAs) such as adware or remote monitoring tools. Our approach leverages DNS queries made by apps. Using a large sample of Android apps from third-party marketplaces, we first reveal that DNS queries can provide useful information for the detection and classification of PUAs. Next, we show that existing DNS blacklists are ineffective to perform these tasks. Finally, we demonstrate that our methodology performed with high accuracy.

    DOI · Scopus (6 citations)
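    The similarity of DNS queries between two apps can be measured, for instance, with the Jaccard index over the sets of queried domain names (an illustrative sketch; the paper's exact similarity metric may differ):

```python
def dns_jaccard(queries_a, queries_b):
    """Jaccard similarity between two apps' sets of queried domain names."""
    a, b = set(queries_a), set(queries_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```

    Apps whose query sets are highly similar to a known PUA's set become candidates for the same PUA family.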
  • Understanding the security management of global third-party Android marketplaces.

    Yuta Ishii, Takuya Watanabe, Fumihiro Kanei, Yuta Takata, Eitaro Shioji, Mitsuaki Akiyama, Takeshi Yagi, Bo Sun, Tatsuya Mori

    Proceedings of the 2nd ACM SIGSOFT International Workshop on App Market Analytics, WAMA@ESEC/SIGSOFT FSE 2017, Paderborn, Germany, September 5, 2017     12 - 18  2017.09  [Refereed]

    DOI · Scopus (10 citations)
  • APPraiser: A Large Scale Analysis of Android Clone Apps

    Yuta Ishii, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E100D ( 8 ) 1703 - 1713  2017.08  [Refereed]

     View Summary

    Android is one of the most popular mobile device platforms. However, since Android apps can be disassembled easily, attackers inject additional advertisements or malicious codes into the original apps and redistribute them. There are a non-negligible number of such repackaged apps. We generally call those malicious repackaged apps "clones." However, there are apps that are not clones but are similar to each other. We call such apps "relatives." In this work, we developed a framework called APPraiser that extracts similar apps and classifies them into clones and relatives from a large dataset. We used the APPraiser framework to study over 1.3 million apps collected from both official and third-party marketplaces. Our extensive analysis revealed the following findings: In the official marketplace, 79% of similar apps were attributed to relatives, while in the third-party marketplace, 50% of similar apps were attributed to clones. The majority of relatives are apps developed by prolific developers in both marketplaces. We also found that in the third-party market, of the clones that were originally published in the official market, 76% of them are malware.

    DOI · Scopus (4 citations)
  • Finding New Varieties of Malware with the Classification of Network Behavior

    Mitsuhiro Hatada, Tatsuya Mori

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E100D ( 8 ) 1691 - 1702  2017.08  [Refereed]

     View Summary

    An enormous number of malware samples pose a major threat to our networked society. Antivirus software and intrusion detection systems are widely implemented on the hosts and networks as fundamental countermeasures. However, they may fail to detect evasive malware. Thus, setting a high priority for new varieties of malware is necessary to conduct in-depth analyses and take preventive measures. In this paper, we present a traffic model for malware that can classify network behaviors of malware and identify new varieties of malware. Our model comprises malware-specific features and general traffic features that are extracted from packet traces obtained from a dynamic analysis of the malware. We apply a clustering analysis to generate a classifier and evaluate our proposed model using large-scale live malware samples. The results of our experiment demonstrate the effectiveness of our model in finding new varieties of malware.

    DOI · Scopus (1 citation)
  • Tracking the Human Mobility Using Mobile Device Sensors

    Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E100D ( 8 ) 1680 - 1690  2017.08  [Refereed]

     View Summary

    We developed a novel, proof-of-concept side-channel attack framework called RouteDetector, which identifies a route for a train trip by simply reading smart device sensors: an accelerometer, magnetometer, and gyroscope. All these sensors are commonly used by many apps without requiring any permissions. The key technical components of RouteDetector can be summarized as follows. First, by applying a machine-learning technique to the data collected from sensors, RouteDetector detects the activity of a user, i.e., "walking," "in moving vehicle," or "other." Next, it extracts departure/arrival times of vehicles from the sequence of the detected human activities. Finally, by correlating the detected departure/arrival times of the vehicle with timetables/route maps collected from all the railway companies in the rider's country, it identifies potential routes that can be used for a trip. We demonstrate that the strategy is feasible through field experiments and extensive simulation experiments using timetables and route maps for 9,090 railway stations of 172 railway companies.

    DOI · Scopus (4 citations)
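    The final correlation step, matching sensor-derived departure/arrival times against published timetables, can be sketched as a tolerance-based comparison (an illustrative simplification of the paper's matching; names and data shapes are assumed):

```python
def candidate_routes(observed, timetables, tol=60):
    """Return route names whose scheduled departure/arrival times (epoch
    seconds) all fall within `tol` seconds of the sensor-derived ones.
    observed: [(dep, arr), ...]; timetables: {route: [(dep, arr), ...]}."""
    return [
        route for route, legs in timetables.items()
        if len(legs) == len(observed)
        and all(abs(od - sd) <= tol and abs(oa - sa) <= tol
                for (od, oa), (sd, sa) in zip(observed, legs))
    ]
```

    The longer the observed trip, the fewer routes survive the filter, which is why even coarse activity detection suffices to identify a route.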
  • Building a Scalable Web Tracking Detection System: Implementation and the Empirical Study

    Yumehisa Haga, Yuta Takata, Mitsuaki Akiyama, Tatsuya Mori

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E100D ( 8 ) 1663 - 1670  2017.08  [Refereed]

     View Summary

    Web tracking is widely used as a means to track users' behavior on websites. While web tracking provides new opportunities for e-commerce, it also entails certain risks such as privacy infringement. Therefore, analyzing such risks in the wild Internet is meaningful to make users' privacy transparent. This work aims to understand how web tracking has been adopted by prominent websites. We also aim to understand their resilience to ad-blocking techniques. Web tracking-enabled websites collect the information called web browser fingerprints, which can be used to identify users. We develop a scalable system that can detect fingerprinting by using both dynamic and static analyses. If a tracking site makes use of many and strong fingerprints, the site is likely resilient to ad-blocking techniques. We also analyze the connectivity of the third-party tracking sites, which are linked from multiple websites. The link analysis allows us to extract the group of associated tracking sites and understand how influential these sites are. Based on the analyses of 100,000 websites, we quantify the potential risks of the web tracking-enabled websites. We reveal that there are 226 websites that adopt fingerprints that cannot be detected with most off-the-shelf anti-tracking tools. We also reveal that a major, resilient third-party tracking site is linked to 50.0% of the top-100,000 popular websites.

    DOI · Scopus

  • Analyzing the ecosystem of malicious URL redirection through longitudinal observation from honeypots

    Mitsuaki Akiyama, Takeshi Yagi, Takeshi Yada, Tatsuya Mori, Youki Kadobayashi

    COMPUTERS & SECURITY   69   155 - 173  2017.08  [Refereed]

     View Summary

    Today, websites are exposed to various threats that exploit their vulnerabilities. A compromised website will be used as a stepping-stone and will serve attackers' evil purposes. For instance, URL redirection mechanisms have been widely used as a means to perform web based attacks covertly; i.e., an attacker injects a redirect code into a compromised website so that a victim who visits the site will be automatically navigated to a malware distribution site. Although many defense operations against malicious websites have been developed, we still encounter many active malicious websites today. As we will show in the paper, we infer that the reason is associated with the evolution of the ecosystem of malicious redirection.
    Given this background, we aim to understand the evolution of the ecosystem through long-term measurement. To this end, we developed a honeypot-based monitoring system, which specializes in monitoring the behavior of URL redirections. We deployed the monitoring system across four years and collected more than 100K malicious redirect URLs, which were extracted from 776 distinct websites. Our chief findings can be summarized as follows: (1) Click-fraud has become another motivation for attackers to employ URL redirection, (2) The use of web-based domain generation algorithms (DGAs) has become popular as a means to increase the entropy of redirect URLs to thwart URL blacklisting, and (3) Both domain flux and IP-flux are concurrently used for deploying the intermediate sites of redirect chains to ensure robustness of redirection.
    Based on the results, we also present practical countermeasures against malicious URL redirections. Security/network operators can leverage useful information obtained from the honeypot-based monitoring system. For instance, they can disrupt infrastructures of web based attack by taking down domain names extracted from the monitoring system. They can also collect web advertising/tracking IDs, which can be used to identify the criminals behind attacks.

    DOI · Scopus (26 citations)
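    One of the reported evasion trends is raising the entropy of redirect URLs to thwart blacklisting; Shannon entropy over the characters of a URL string is a standard way to quantify this (an illustrative sketch, not the paper's measurement code):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character of the string's empirical character distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    DGA-style redirect paths score noticeably higher than human-chosen ones, which makes entropy a cheap triage signal.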
  • BotDetector: A robust and scalable approach toward detecting malware-infected devices

    Sho Mizuno, Mitsuhiro Hatada, Tatsuya Mori, Shigeki Goto

    IEEE International Conference on Communications     1 - 7  2017.07  [Refereed]

     View Summary

    Damage caused by malware is a serious problem that needs to be addressed. The recent rise in the spread of evasive malware has made it difficult to detect it at the pre-infection timing. Malware detection at post-infection timing is a promising approach that fulfills this gap. Given this background, this work aims to identify likely malware-infected devices from the measurement of Internet traffic. The advantage of the traffic-measurement-based approach is that it enables us to monitor a large number of clients. If we find a client as a source of malicious traffic, the client is likely a malware-infected device. Since the majority of malware today makes use of the web as a means to communicate with the C&C servers that reside on the external network, we leverage information recorded in the HTTP headers to discriminate between malicious and legitimate traffic. To make our approach scalable and robust, we develop the automatic template generation scheme that drastically reduces the amount of information to be kept while achieving the high accuracy of classification; since it does not make use of any domain knowledge, the approach should be robust against changes of malware. We apply several classifiers, which include machine learning algorithms, to the extracted templates and classify traffic into two categories: malicious and legitimate. Our extensive experiments demonstrate that our approach discriminates between malicious and legitimate traffic with up to 97.1% precision while maintaining the false positive rate below 1.0%.

    DOI · Scopus (13 citations)
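    The automatic template generation can be pictured as masking variable tokens in HTTP request strings so that structurally identical requests collapse into one template (a rough sketch under assumed masking rules, digits and long hex identifiers; the paper's scheme is statistical rather than rule-based):

```python
import re

def templatize(request_line):
    """Collapse variable tokens so similar requests share one template."""
    s = re.sub(r'[0-9a-f]{8,}', '<HEX>', request_line, flags=re.IGNORECASE)
    s = re.sub(r'\d+', '<NUM>', s)
    return s
```

    Requests from the same bot family then map to a handful of templates, which is what keeps the stored state small while preserving discriminative power.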
  • Understanding the origins of mobile app vulnerabilities: A large-scale measurement study of free and paid apps

    Takuya Watanabe, Mitsuaki Akiyama, Fumihiro Kanei, Eitaro Shioji, Yuta Takata, Bo Sun, Yuta Ishii, Toshiki Shibahara, Takeshi Yagi, Tatsuya Mori

    IEEE International Working Conference on Mining Software Repositories     14 - 24  2017.06  [Refereed]

     View Summary

    This paper reports a large-scale study that aims to understand how mobile application (app) vulnerabilities are associated with software libraries. We analyze both free and paid apps. Studying paid apps was quite meaningful because it helped us understand how differences in app development/maintenance affect the vulnerabilities associated with libraries. We analyzed 30k free and paid apps collected from the official Android marketplace. Our extensive analyses revealed that approximately 70%/50% of vulnerabilities of free/paid apps stem from software libraries, particularly from third-party libraries. Somewhat paradoxically, we found that more expensive/popular paid apps tend to have more vulnerabilities. This comes from the fact that more expensive/popular paid apps tend to have more functionality, i.e., more code and libraries, which increases the probability of vulnerabilities. Based on our findings, we provide suggestions to stakeholders of mobile app distribution ecosystems.

    DOI · Scopus (30 citations)
  • Evaluation of EM Information Leakage caused by IEMI with Hardware Trojan

    Kinugawa Masahiro, Hayashi Yu-ichi, Mori Tatsuya

    IEEJ Transactions on Fundamentals and Materials   137 ( 3 ) 153 - 157  2017  [Refereed]

     View Summary

    Hardware Trojans (HT) that are implemented at the time of manufacturing ICs are being reported as a new threat that could destroy the IC or degrade its security under specific circumstances, and are becoming a key security challenge that must be addressed. On the other hand, since it is also common to use components manufactured or bought via third parties in portions outside of the substrate on which the IC is mounted or communication lines connecting the IC and the substrate, there is a possibility that HTs may also be set in the peripheral circuits of the IC in the same manner as in the IC. In this paper, we developed an HT that could be implemented in the peripheral circuits and wiring of an IC, investigated the possibility of being able to acquire information processed inside a device by measuring the electromagnetic waves generated and leaked by Intentional Electromagnetic Interference (IEMI) with the HT outside the device, and investigated detection methods for cases where such HTs are implemented.

    DOI · CiNii · Scopus (3 citations)
  • Continuous real-time measurement method for heart rate monitoring using face images

    Daisuke Uchida, Tatsuya Mori, Masato Sakata, Takuro Oya, Yasuyuki Nakata, Kazuho Maeda, Yoshinori Yaginuma, Akihiro Inomata

    Communications in Computer and Information Science   690   224 - 235  2017  [Refereed]

     View Summary

    This paper investigates fundamental mechanisms of brightness changes in heart rate (HR) measurement from face images through three kinds of experiments: (i) measurement of light reflection from a cheek covered with/without copper film, (ii) spectroscopy measurement of reflection light from the face, and (iii) simultaneous measurement of face images and laser speckle images. The brightness changes of the face skin are found to be caused by both the green-light absorption variation due to blood volume changes and the light reflection variation due to pulsatory face movements. The Real-time Pulse Extraction Method (RPEM), designed to extract the variation of light absorption by removing motion noise, is corroborated for robustness by comparing the RPEM with the pulse wave of the ear photoplethysmography. The RPEM is also applied to heart rate measurements of seven participants during office work under a non-controlled condition in order to evaluate continuous real-time HR monitoring. An RMSE of 6.7 bpm is achieved as an average result over seven participants in five days, with an HR measurement rate of 44% with respect to the number of reference HRs from the electrocardiogram while the face is detected. The result indicates that the RPEM enables HR monitoring in daily life.

    DOI

  • Characterizing promotional attacks in mobile app store

    Bo Sun, Xiapu Luo, Mitsuaki Akiyama, Takuya Watanabe, Tatsuya Mori

    Communications in Computer and Information Science   719   113 - 127  2017  [Refereed]

     View Summary

    Mobile app stores, such as Google Play, play a vital role in the ecosystem of mobile apps. When users look for an app of interest, they can acquire useful data from the app store to facilitate their decision on whether or not to install the app. This data includes ratings, reviews, number of installs, and the category of the app. The ratings and reviews are the user-generated content (UGC) that affect the reputation of an app. Unfortunately, miscreants also exploit such channels to conduct promotional attacks (PAs) that lure victims to install malicious apps. In this paper, we propose and develop a new system called PADetective to detect miscreants who are likely to be conducting promotional attacks. Using a dataset with 1,723 labeled samples, we demonstrate that the true positive rate of the detection model is 90%, with a false positive rate of 5.8%. We then applied PADetective to a large dataset for characterizing the prevalence of PAs in the wild and found 289 K potential PA attackers who posted reviews to 21 K malicious apps.

    DOI · Scopus (2 citations)
  • Statistical estimation of the names of HTTPS servers with domain name graphs

    Tatsuya Mori, Takeru Inoue, Akihiro Shimoda, Kazumichi Sato, Shigeaki Harada, Keisuke Ishibashi, Shigeki Goto

    COMPUTER COMMUNICATIONS   94   104 - 113  2016.11  [Refereed]

     View Summary

    Adoption of SSL/TLS to protect the privacy of web users has become increasingly common. In fact, as of September 2015, more than 68% of top-1M websites deploy SSL/TLS to encrypt their traffic. The transition from HTTP to HTTPS has brought a new challenge for network operators who need to understand the hostnames of encrypted web traffic for various reasons. To meet the challenge, this work develops a novel framework called SFMap, which estimates names of HTTPS servers by analyzing precedent DNS queries/responses in a statistical way. The SFMap framework introduces domain name graph, which can characterize highly dynamic and diverse nature of DNS mechanisms. Such complexity arises from the recent deployment and implementation of DNS ecosystems; i.e., canonical name tricks used by CDNs, the dynamic and diverse nature of DNS TTL settings, and incomplete and unpredictable measurements due to the existence of various DNS caching instances. First, we demonstrate that SFMap establishes good estimation accuracies and outperforms a state-of-the-art approach. We also aim to identify the optimized setting of the SFMap framework. Next, based on the preliminary analysis, we introduce techniques to make the SFMap framework scalable to large-scale traffic data. We validate the effectiveness of the approach using large-scale Internet traffic.

    DOI · Scopus (9 citations)
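    The precedent-DNS idea, attributing a later HTTPS flow to a hostname via the DNS response that supplied its server IP, can be sketched as a TTL-aware map (an illustrative toy; SFMap's domain name graph handles CNAME chains, caching, and statistical estimation on top of this):

```python
class DnsNameMap:
    """Map server IPs to hostnames learned from observed DNS responses,
    honoring each record's TTL."""
    def __init__(self):
        self._m = {}

    def observe(self, ts, hostname, ip, ttl):
        # Record that `ip` served `hostname`, valid until ts + ttl.
        self._m[ip] = (hostname, ts + ttl)

    def resolve(self, ts, ip):
        rec = self._m.get(ip)
        return rec[0] if rec and ts <= rec[1] else None
```

    The hard cases are exactly those this toy ignores: flows arriving after the TTL expires, shared CDN IPs, and queries answered from caches the monitor never sees.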
  • POSTER

    Bo Sun, Akinori Fujino, Tatsuya Mori

    Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security    2016.10

    DOI

    Scopus

    7
    Citation
    (Scopus)
  • Domainprofiler: Discovering domain names abused in future

    Daiki Chiba, Takeshi Yagi, Mitsuaki Akiyama, Toshiki Shibahara, Takeshi Yada, Tatsuya Mori, Shigeki Goto

    Proceedings - 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2016     491 - 502  2016.09  [Refereed]

     View Summary

    Cyber attackers abuse the domain name system (DNS) to mystify their attack ecosystems; they systematically generate a huge volume of distinct domain names to make it infeasible for blacklisting approaches to keep up with newly generated malicious domain names. As a solution to this problem, we propose a system for discovering malicious domain names that will likely be abused in future. The key idea with our system is to exploit temporal variation patterns (TVPs) of domain names. The TVPs of domain names include information about how and when a domain name has been listed in legitimate/popular and/or malicious domain name lists. On the basis of this idea, our system actively collects DNS logs, analyzes their TVPs, and predicts whether a given domain name will be used for malicious purposes. Our evaluation revealed that our system can predict malicious domain names 220 days beforehand with a true positive rate of 0.985.

    DOI · Scopus (32 citations)
  • Automating URL Blacklist Generation with Similarity Search Approach

    Bo Sun, Mitsuaki Akiyama, Takeshi Yagi, Mitsuhiro Hatada, Tatsuya Mori

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E99D ( 4 ) 873 - 882  2016.04  [Refereed]

     View Summary

    Modern web users may encounter a browser security threat called drive-by-download attacks when surfing on the Internet. Drive-by-download attacks make use of exploit codes to take control of user's web browser. Many web users do not take such underlying threats into account while clicking URLs. URL Blacklist is one of the practical approaches to thwarting browser-targeted attacks. However, URL Blacklist cannot cope with previously unseen malicious URLs. Therefore, to make a URL blacklist effective, it is crucial to keep the URLs updated. Given these observations, we propose a framework called automatic blacklist generator (AutoBLG) that automates the collection of new malicious URLs by starting from a given existing URL blacklist. The primary mechanism of AutoBLG is expanding the search space of web pages while reducing the amount of URLs to be analyzed by applying several pre-filters such as similarity search to accelerate the process of generating blacklists. AutoBLG consists of three primary components: URL expansion, URL filtration, and URL verification. Through extensive analysis using a high-performance web client honeypot, we demonstrate that AutoBLG can successfully discover new and previously unknown drive-by-download URLs from the vast web space.

    DOI · Scopus (20 citations)
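    The similarity-search pre-filter can be approximated cheaply with MinHash signatures over URL token sets (an assumed concrete instantiation for illustration; the paper only states that similarity search is used as a pre-filter):

```python
import hashlib

def minhash(tokens, k=8):
    """k-row MinHash signature of a token set (deterministic via MD5)."""
    return [min(int(hashlib.md5(f"{row}:{t}".encode()).hexdigest(), 16)
                for t in tokens)
            for row in range(k)]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature rows estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

    Candidate URLs whose signatures are close to known-malicious ones are forwarded to the expensive honeypot verification stage; the rest are discarded early.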
  • Clone or Relative?: Understanding the Origins of Similar Android Apps

    Yuta Ishii, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori

    IWSPA'16: PROCEEDINGS OF THE 2016 ACM INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS     25 - 32  2016  [Refereed]

     View Summary

    Since it is not hard to repackage an Android app, there are many cloned apps, which we call "clones" in this work. As previous studies have reported, clones are generated for bad purposes by malicious parties, e.g., adding malicious functions, injecting/replacing advertising modules, and piracy. Besides such clones, there are legitimate, similar apps, which we call "relatives" in this work. These relatives are not clones but are similar in nature; i.e., they are generated by the same app-building service or by the same developer using the same template. Given these observations, this paper aims to answer the following two research questions: (RQ1) How can we distinguish between clones and relatives? (RQ2) What is the breakdown of clones and relatives in the official and third-party marketplaces? To answer the first research question, we developed a scalable framework called APPraiser that systematically extracts similar apps and classifies them into clones and relatives. We note that our key algorithms, which leverage the sparseness of the data, have a time complexity of O(n) in practice. To answer the second research question, we applied the APPraiser framework to over 1.3 million apps collected from official and third-party marketplaces. Our analysis revealed the following findings: In the official marketplace, 79% of similar apps were attributed to relatives while, in the third-party marketplace, 50% of similar apps were attributed to clones. The majority of relatives are apps developed by prolific developers in both marketplaces. We also found that in the third-party market, of the clones that were originally published in the official market, 76% of them are malware. To the best of our knowledge, this is the first work that clarified the breakdown of "similar" Android apps and quantified their origins using a huge dataset equivalent to the size of the official market.

    DOI · Scopus (9 citations)
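    The clone/relative distinction in the abstract, repackaged apps versus same-origin similar apps, can be caricatured with the signing identity (a simplified decision rule for illustration; APPraiser's actual classification uses richer signals than the signer alone):

```python
def classify_pair(similarity, signer_a, signer_b, threshold=0.8):
    """Label a pair of similar apps. Clones are similar apps republished
    under a different signer; relatives share the same origin."""
    if similarity < threshold:
        return "unrelated"
    return "relative" if signer_a == signer_b else "clone"
```

    Because Android apps must be re-signed when repackaged, a mismatched signer on two near-identical apps is strong evidence of cloning.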
  • Continuous Real-time Heart Rate Monitoring from Face Images.

    Tatsuya Mori, Daisuke Uchida, Masato Sakata, Takuro Oya, Yasuyuki Nakata, Kazuho Maeda, Yoshinori Yaginuma, Akihiro Inomata

    Proceedings of the 9th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2016) - Volume 4: BIOSIGNALS, Rome, Italy, February 21-23, 2016.     52 - 56  2016  [Refereed]

    DOI

  • Clone or Relative?: Understanding the Origins of Similar Android Apps

    Yuta Ishii, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori

    IWSPA'16: PROCEEDINGS OF THE 2016 ACM INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS     25 - 32  2016  [Refereed]

     View Summary

    Since it is not hard to repackage an Android app, there are many cloned apps, which we call "clones" in this work. As previous studies have reported, clones are generated for bad purposes by malicious parties, e.g., adding malicious functions, injecting/replacing advertising modules, and piracy. Besides such clones, there are legitimate, similar apps, which we call "relatives" in this work. These relatives are not clones but are similar in nature; i.e., they are generated by the same app-building service or by the same developer using the same template. Given these observations, this paper aims to answer the following two research questions: (RQ1) How can we distinguish between clones and relatives? (RQ2) What is the breakdown of clones and relatives in the official and third-party marketplaces? To answer the first research question, we developed a scalable framework called APPraiser that systematically extracts similar apps and classifies them into clones and relatives. We note that our key algorithms, which leverage the sparseness of the data, have a time complexity of O(n) in practice. To answer the second research question, we applied the APPraiser framework to over 1.3 million apps collected from official and third-party marketplaces. Our analysis revealed the following findings: in the official marketplace, 79% of similar apps were attributed to relatives, while in the third-party marketplace, 50% of similar apps were attributed to clones. The majority of relatives are apps developed by prolific developers in both marketplaces. We also found that, in the third-party market, of the clones that were originally published in the official market, 76% are malware. To the best of our knowledge, this is the first work that clarifies the breakdown of "similar" Android apps and quantifies their origins using a huge dataset equivalent in size to the official market.
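    A minimal sketch of how sparseness can keep similar-app extraction near O(n): index apps by their sparse features (e.g., resource-file hashes) and compare only apps that share a feature. This illustrates the general inverted-index technique under assumed inputs; it is not the actual APPraiser implementation, and the feature names are made up.

```python
from collections import defaultdict

def group_similar_apps(apps, min_shared=2):
    """Find app pairs sharing at least `min_shared` features.

    An inverted index (feature -> apps) avoids the O(n^2) all-pairs
    comparison: candidate pairs arise only from shared features, so the
    cost stays near-linear when the feature data is sparse.
    """
    index = defaultdict(set)
    for app, feats in apps.items():
        for f in feats:
            index[f].add(app)

    shared = defaultdict(int)  # (app_a, app_b) -> number of shared features
    for members in index.values():
        ordered = sorted(members)
        for i, a in enumerate(ordered):
            for b in ordered[i + 1:]:
                shared[(a, b)] += 1

    return {pair for pair, n in shared.items() if n >= min_shared}

# Hypothetical feature sets: app1 and app2 share two resource hashes.
apps = {
    "app1": {"res1", "res2", "res3"},
    "app2": {"res2", "res3", "res9"},
    "app3": {"res7"},
}
print(group_similar_apps(apps))  # {('app1', 'app2')}
```

Worst-case cost degrades if a single feature is shared by very many apps; the O(n) claim in the abstract is an in-practice bound under sparseness, which this sketch mirrors.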

    DOI

    Scopus

    9
    Citation
    (Scopus)
  • Understanding the Inconsistencies between Text Descriptions and the Use of Privacy-sensitive Resources of Mobile Apps.

    Takuya Watanabe, Mitsuaki Akiyama, Tetsuya Sakai, Tatsuya Mori

    Eleventh Symposium On Usable Privacy and Security, SOUPS 2015, Ottawa, Canada, July 22-24, 2015.     241 - 255  2015.08  [Refereed]

    Authorship: Last author, Corresponding author

  • RouteDetector: Sensor-based Positioning System That Exploits Spatio-Temporal Regularity of Human Mobility.

    Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori

    9th USENIX Workshop on Offensive Technologies, WOOT '15, Washington, DC, USA, August 10-11, 2015.    2015  [Refereed]

  • SFMap: Inferring Services over Encrypted Web Flows Using Dynamical Domain Name Graphs

    Tatsuya Mori, Takeru Inoue, Akihiro Shimoda, Kazumichi Sato, Keisuke Ishibashi, Shigeki Goto

    TRAFFIC MONITORING AND ANALYSIS, TMA 2015   9053   126 - 139  2015  [Refereed]

     View Summary

    Most modern Internet services are carried over the web. A significant amount of web transactions is now encrypted, and the transition to encryption has made it difficult for network operators to understand traffic mix. The goal of this study is to enable network operators to infer hostnames within HTTPS traffic, because hostname information is useful to understand the breakdown of encrypted web traffic. The proposed approach correlates HTTPS flows and DNS queries/responses. Although this approach may appear trivial, recent deployment and implementation of DNS ecosystems have made it a challenging research problem; i.e., canonical name tricks used by CDNs, the dynamic and diverse nature of DNS TTL settings, and incomplete measurements due to the existence of various caching mechanisms. To tackle these challenges, we introduce the domain name graph (DNG), which is a formal expression that characterizes the highly dynamic and diverse nature of DNS mechanisms. Furthermore, we have developed a framework called ServiceFlow map (SFMap) that works on top of the DNG. SFMap statistically estimates the hostname of an HTTPS server, given a pair of client and server IP addresses. We evaluate the performance of SFMap through extensive analysis using real packet traces collected from two locations with different scales. We demonstrate that SFMap establishes good estimation accuracy and outperforms a state-of-the-art approach.
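    The core correlation idea (attributing an HTTPS flow to hostnames whose DNS records recently resolved to the server IP, honoring TTLs) can be sketched as follows. The real SFMap adds the domain name graph and statistical estimation; the class and names below are illustrative assumptions only.

```python
from collections import defaultdict

class HostnameInference:
    """Toy DNS/HTTPS-flow correlation (not the actual SFMap).

    DNS A-record responses seen on the wire are recorded with their TTL;
    a later HTTPS flow to a server IP is attributed to the hostname(s)
    whose unexpired records resolved to that IP.
    """
    def __init__(self):
        self.candidates = defaultdict(dict)  # ip -> {hostname: expiry time}

    def observe_dns(self, hostname, ip, ttl, now):
        # Record (or refresh) an A record observed at time `now`.
        self.candidates[ip][hostname] = now + ttl

    def infer(self, server_ip, now):
        # Return hostnames whose records for this IP are still live.
        return [h for h, exp in self.candidates[server_ip].items() if exp >= now]

inf = HostnameInference()
inf.observe_dns("www.example.com", "93.184.216.34", ttl=300, now=0)
print(inf.infer("93.184.216.34", now=100))  # ['www.example.com']
print(inf.infer("93.184.216.34", now=500))  # [] (record expired)
```

Caching layers mean the monitor may miss the DNS exchange entirely, which is exactly the incomplete-measurement problem the abstract highlights.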

    DOI

    Scopus

    13
    Citation
    (Scopus)
  • Inferring Popularity of Domain Names with DNS Traffic: Exploiting Cache Timeout Heuristics.

    Akihiro Shimoda, Keisuke Ishibashi, Kazumichi Sato, Masayuki Tsujino, Takeru Inoue, Masaki Shimura, Takanori Takebe, Kazuki Takahashi, Tatsuya Mori, Shigeki Goto

    2015 IEEE Global Communications Conference, GLOBECOM 2015, San Diego, CA, USA, December 6-10, 2015     1 - 6  2015  [Refereed]

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Increasing the Darkness of Darknet Traffic

    Yumehisa Haga, Akira Saso, Tatsuya Mori, Shigeki Goto

    2015 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM)     1 - 7  2015  [Refereed]

     View Summary

    A Darknet is a passive sensor system that monitors traffic routed to unused IP address space. Darknets have been widely used as tools to detect malicious activities such as propagating worms, thanks to the useful feature that most packets observed by a darknet can be assumed to have originated from non-legitimate hosts. Recent commoditization of Internet-scale survey traffic originating from legitimate hosts could overwhelm the traffic that was originally supposed to be monitored with a darknet. Based on this observation, we posed the following research question: "Can the Internet-scale survey traffic become noise when we analyze darknet traffic?" To answer this question, we propose a novel framework, ID2, to increase the darkness of darknet traffic, i.e., ID2 discriminates between Internet-scale survey traffic originating from legitimate hosts and other traffic potentially associated with malicious activities. It leverages two intrinsic characteristics of Internet-scale survey traffic: a network-level property and some form of footprint explicitly indicated by surveyors. When we analyzed darknet traffic using ID2, we saw that Internet-scale survey traffic can indeed be noise. We also demonstrated that the discrimination of survey traffic exposes hidden traffic anomalies, which are invisible without using our technique.
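    The two discrimination signals named in the abstract (a network-level property and an explicit surveyor footprint) can be sketched as a simple filter. The prefix list and footprint strings below are placeholders for illustration, not the actual ID2 rules.

```python
import ipaddress

# Hypothetical signals: source prefixes of known measurement projects
# (network-level property) and strings surveyors leave in reverse DNS
# (explicit footprint). Both lists are made-up examples.
SURVEY_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]
SURVEY_FOOTPRINTS = ("researchscan", "internet-census")

def is_survey_traffic(src_ip, reverse_dns=""):
    """Classify a darknet packet source as Internet-scale survey traffic."""
    ip = ipaddress.ip_address(src_ip)
    if any(ip in net for net in SURVEY_PREFIXES):
        return True
    return any(tag in reverse_dns.lower() for tag in SURVEY_FOOTPRINTS)

print(is_survey_traffic("198.51.100.7"))                     # True
print(is_survey_traffic("203.0.113.5", "host.example.net"))  # False
```

Filtering such sources out of a darknet trace is what "increasing the darkness" refers to: what remains is more likely to be genuinely malicious activity.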

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Discovering Similar Malware Samples Using API Call Topics

    Akinori Fujino, Junichi Murakami, Tatsuya Mori

    2015 12TH ANNUAL IEEE CONSUMER COMMUNICATIONS AND NETWORKING CONFERENCE     140 - 147  2015  [Refereed]

     View Summary

    To automate malware analysis, dynamic malware analysis systems have attracted increasing attention from both the industry and research communities. Of the various logs collected by such systems, the API call is a very promising source of information for characterizing malware behavior. This work aims to extract similar malware samples automatically using the concept of "API call topics," which represent sets of API calls that are intrinsic to a specific group of malware samples. We first convert Win32 API calls into "API words." We then apply non-negative matrix factorization (NMF) clustering analysis to the corpus of the extracted API words. NMF automatically generates the API call topics from the API words. The contributions of this work can be summarized as follows. We present an unsupervised approach to extract API call topics from a large corpus of API calls. Through analysis of the API call logs collected from thousands of malware samples, we demonstrate that the extracted API call topics can detect similar malware samples. The proposed approach is expected to be useful for automating the process of analyzing a huge volume of logs collected from dynamic malware analysis systems.

    DOI

    Scopus

    27
    Citation
    (Scopus)
  • AutoBLG: Automatic URL Blacklist Generator Using Search Space Expansion and Filters

    Bo Sun, Mitsuaki Akiyama, Takeshi Yagi, Mitsuhiro Hatada, Tatsuya Mori

    2015 IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATION (ISCC)     625 - 631  2015  [Refereed]

     View Summary

    Modern web users are exposed to a browser security threat called drive-by-download attacks that occur by simply visiting a malicious Uniform Resource Locator (URL) that embeds code to exploit web browser vulnerabilities. Many web users tend to click such URLs without considering the underlying threats. URL blacklists are an effective countermeasure to such browser-targeted attacks. URLs are frequently updated; therefore, collecting fresh malicious URLs is essential to ensure the effectiveness of a URL blacklist. We propose a framework called automatic blacklist generator (AutoBLG) that automatically identifies new malicious URLs using a given existing URL blacklist. The key idea of AutoBLG is expanding the search space of web pages while reducing the number of URLs to be analyzed by applying several pre-filters to accelerate the process of generating blacklists. AutoBLG comprises three primary primitives: URL expansion, URL filtration, and URL verification. Through extensive analysis using a high-performance web client honeypot, we demonstrate that AutoBLG can successfully extract new and previously unknown drive-by-download URLs.

    DOI

    Scopus

    9
    Citation
    (Scopus)

  • Loss Recovery Method for Content Pre-distribution in VoD Service

    N. Kamiyama, R. Kawahara, T. Mori

    Proceedings of the World Telecommunications Congress     1 - 6  2014.06  [Refereed]

  • Spatio-temporal Factorization of Log Data for Understanding Network Events

    Tatsuaki Kimura, Keisuke Ishibashi, Tatsuya Mori, Hiroshi Sawada, Tsuyoshi Toyono, Ken Nishimatsu, Akio Watanabe, Akihiro Shimoda, Kohei Shiomoto

    2014 PROCEEDINGS IEEE INFOCOM     610 - 618  2014  [Refereed]

     View Summary

    Understanding the impacts and patterns of network events such as link flaps or hardware errors is crucial for diagnosing network anomalies. In large production networks, analyzing the log messages that record network events has become a challenging task due to the following two reasons. First, the log messages are composed of unstructured text messages generated by vendor-specific rules. Second, network equipment such as routers, switches, and RADIUS servers generate various log messages induced by network events that span across several geographical locations, network layers, protocols, and services. In this paper, we have tackled these obstacles by building two novel techniques: statistical template extraction (STE) and log tensor factorization (LTF). STE leverages a statistical clustering technique to automatically extract primary templates from unstructured log messages. LTF aims to build a statistical model that captures spatial-temporal patterns of log messages. Such spatial-temporal patterns provide useful insights into understanding the impacts and root cause of hidden network events. This paper first formulates our problem in a mathematical way. We then validate our techniques using a massive amount of network log messages collected from a large operating network. We also demonstrate several case studies that validate the usefulness of our technique.
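    The idea behind template extraction (turning a cluster of unstructured messages into a template with variable fields) can be sketched as follows; the position-wise frequency threshold here is a toy stand-in for the paper's statistical clustering, and the log lines are invented.

```python
from collections import Counter

def extract_template(messages, var_threshold=0.5):
    """Derive a template from same-length messages of one cluster.

    Split each message into words; a position whose most common value
    covers at most `var_threshold` of the messages is treated as a
    variable field and replaced by '*'.
    """
    rows = [m.split() for m in messages]
    width = len(rows[0])
    template = []
    for pos in range(width):
        values = Counter(row[pos] for row in rows)
        top, cnt = values.most_common(1)[0]
        template.append(top if cnt / len(rows) > var_threshold else "*")
    return " ".join(template)

logs = [
    "link down on interface ge-0/0/1",
    "link down on interface ge-0/0/2",
    "link down on interface xe-1/2/0",
]
print(extract_template(logs))  # link down on interface *
```

Once messages are reduced to template IDs, they can be counted per (template, location, time) cell, which is the tensor that a factorization step like LTF would then decompose.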

    DOI

    Scopus

    68
    Citation
    (Scopus)
  • Optimally Identifying Worm-Infected Hosts

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara, Shigeaki Harada

    IEICE TRANSACTIONS ON COMMUNICATIONS   E96B ( 8 ) 2084 - 2094  2013.08  [Refereed]

     View Summary

    We have proposed a method of identifying superspreaders by flow sampling and a method of filtering legitimate hosts from the identified superspreaders using a white list. However, the problem of how to optimally set the parameters phi (the measurement period length), m* (the identification threshold of the flow count m within phi), and H* (the identification probability for hosts with m = m*) remained unsolved. These three parameters seriously impact the ability to identify the spread of infection. Our contributions in this work are two-fold: (1) we propose a method of optimally designing these three parameters to satisfy the condition that the ratio of the number of active worm-infected hosts divided by the number of all vulnerable hosts is bounded by a given upper limit during the time T required to develop a patch or an anti-worm vaccine, and (2) the proposed method can optimize the identification accuracy of worm-infected hosts by maximally using the limited memory resources of monitors.

    DOI

    Scopus

  • Network Failure Detection and Root-Cause Analysis Technology Based on Syslog and SNS Analysis (in Japanese)

    Tatsuaki Kimura, Kei Takeshita, Tsuyoshi Toyono, Masahiro Yokota, Ken Nishimatsu, Tatsuya Mori

    NTT Technical Journal   25 ( 7 ) 20 - 24  2013.04

  • Network failure detection and diagnosis by analyzing syslog and SNS data: Applying big data analysis to network operations

    Tatsuaki Kimura, Kei Takeshita, Tsuyoshi Toyono, Masahiro Yokota, Ken Nishimatsu, Tatsuya Mori

    NTT Technical Review   11 ( 11 )  2013.04

  • Mean-variance relationship of the number of flows in traffic aggregation and its application to traffic management

    Ryoichi Kawahara, Tetsuya Takine, Tatsuya Mori, Noriaki Kamiyama, Keisuke Ishibashi

    COMPUTER NETWORKS   57 ( 6 ) 1560 - 1576  2013.04  [Refereed]

     View Summary

    We consider the mean-variance relationship of the number of flows in traffic aggregation, where flows are divided into several groups randomly, based on a predefined flow aggregation index, such as source IP address. We first derive a quadratic relationship between the mean and the variance of the number of flows belonging to a randomly chosen traffic aggregation group. Note here that the result is applicable to sampled flows obtained through packet sampling. We then show that our analytically derived mean-variance relationship fits well with those observed in actual packet trace data sets. Next, we present two applications of the mean-variance relationship to traffic management. One is an application to detecting network anomalies through monitoring a time series of traffic. Using the mean-variance relationship, we determine the traffic aggregation level in traffic monitoring so that it meets two predefined requirements on false positive and false negative ratios simultaneously. The other is an application to load balancing among network equipment that requires per-flow management. We utilize the mean-variance relationship for estimating the processing capability required of each device. (C) 2013 Elsevier B.V. All rights reserved.

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Analyzing influence of network topology on designing ISP-operated CDN

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara, Shigeaki Harada, Haruhisa Hasegawa

    TELECOMMUNICATION SYSTEMS   52 ( 2 ) 969 - 977  2013.02  [Refereed]

     View Summary

    The transmission bandwidth consumed by delivering rich content, such as movie files, is enormous, so it is urgent for ISPs to design an efficient delivery system minimizing the amount of network resources consumed. To serve users rich content economically and efficiently, an ISP itself should provide servers with huge storage capacities at a limited number of locations within its network. Therefore, we have investigated the content deployment method and the content delivery process that are desirable for this ISP-operated content delivery network (CDN). We have also proposed an optimum cache server allocation method for an ISP-operated CDN. In this paper, we investigate the properties of the topological locations of nodes at which cache placement is effective using 31 network topologies of actual ISPs. We also classify the 31 networks into two types and evaluate the optimum cache count in each network type.

    DOI

    Scopus

    5
    Citation
    (Scopus)
  • Autonomic load balancing of flow monitors

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara

    COMPUTER NETWORKS   57 ( 3 ) 741 - 761  2013.02  [Refereed]

     View Summary

    In monitoring flows at routers for flow analysis or deep packet inspection, the monitor calculates hash values from the flow ID of each packet arriving at the input port of the router. Therefore, the monitors must update the flow table at the transmission line rate, so high-speed and high-cost memory, such as SRAM, is used for the flow table. This requires the monitors to limit the monitoring target to just some of the flows. However, if the monitors randomly select the monitoring targets, multiple routers on the route will sometimes monitor the same flow, or no monitors will monitor a flow. To maximize the number of monitored flows in the entire network, the monitors must select the monitoring targets while maintaining a balanced load among them. We propose an autonomous load-balancing method where monitors exchange information on monitor load only with adjacent monitors. Numerical evaluations using the actual traffic matrix of Internet2 show that the proposed method improves the total monitored flow count by about 50% compared with that of independent sampling. Moreover, we evaluate the load-balancing effect on 36 backbone networks of commercial ISPs. (c) 2012 Elsevier B.V. All rights reserved.
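    The neighbor-only information exchange can be illustrated with generic diffusion load balancing, in which each monitor repeatedly shifts a fraction of its load difference toward each less-loaded neighbor using only local information. This is a textbook-style sketch under assumed topology and parameters, not the paper's exact protocol.

```python
def balance(loads, neighbors, alpha=0.5, rounds=50):
    """Diffusion-style load balancing on a monitor graph.

    Each round, every monitor u compares its load with each neighbor v
    and moves alpha * (load_u - load_v) / deg(u) toward v when u is the
    more loaded one. Total load is conserved; loads converge toward the
    mean on a connected graph.
    """
    loads = dict(loads)
    for _ in range(rounds):
        new = dict(loads)
        for u, nbrs in neighbors.items():
            for v in nbrs:
                diff = loads[u] - loads[v]
                if diff > 0:
                    move = alpha * diff / len(nbrs)
                    new[u] -= move
                    new[v] += move
        loads = new
    return loads

# Line topology A - B - C with all load initially at monitor A.
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
final = balance({"A": 90.0, "B": 0.0, "C": 0.0}, neighbors)
print({k: round(v) for k, v in final.items()})  # roughly 30 each
```

In the paper's setting the quantity being balanced is the number of flows each monitor samples, so equalizing load translates into maximizing the total number of distinct flows monitored.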

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Analyzing Spatial Structure of IP Addresses for Detecting Malicious Websites

    Chiba Daiki, Tobe Kazuhiro, Mori Tatsuya, Goto Shigeki

    IMT   8 ( 3 ) 855 - 866  2013

     View Summary

    Web-based malware attacks have become one of the most serious threats that need to be addressed urgently. Several approaches that have attracted attention as promising ways of detecting such malware include employing one of several blacklists. However, these conventional approaches often fail to detect new attacks owing to the versatility of malicious websites. Thus, it is difficult to maintain up-to-date blacklists with information for new malicious websites. To tackle this problem, this paper proposes a new scheme for detecting malicious websites using the characteristics of IP addresses. Our approach leverages the empirical observation that IP addresses are more stable than other metrics such as URLs and DNS records. While the strings that form URLs or DNS records are highly variable, IP addresses are less variable, i.e., IPv4 address space is mapped onto 4-byte strings. In this paper, a lightweight and scalable detection scheme that is based on machine learning techniques is developed and evaluated. The aim of this study is not to provide a single solution that effectively detects web-based malware but to develop a technique that compensates for the drawbacks of existing approaches. The effectiveness of our approach is validated by using real IP address data from existing blacklists and real traffic data on a campus network. The results demonstrate that our scheme can expand the coverage/accuracy of existing blacklists and also detect unknown malicious websites that are not covered by conventional approaches.
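    The observation that an IPv4 address is a 4-byte string suggests simple byte- and prefix-level features for a classifier. The feature set below is an illustrative assumption, not the paper's actual feature design.

```python
def ip_features(ip):
    """Byte-level features from a dotted-quad IPv4 address.

    The four octets plus /8, /16, and /24 prefix values give a
    coarse-to-fine encoding of where the address sits in the IPv4
    space, which a machine-learning model could consume directly.
    """
    octets = [int(o) for o in ip.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return {
        "octets": octets,
        "prefix8": octets[0],
        "prefix16": octets[0] * 256 + octets[1],
        "prefix24": (octets[0] * 256 + octets[1]) * 256 + octets[2],
    }

print(ip_features("192.0.2.10")["prefix16"])  # 49152
```

Because malicious hosting tends to cluster in address space, prefix-level features like these let a model generalize from blacklisted addresses to nearby, not-yet-listed ones.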

    DOI CiNii

  • Few-mode fiber for optical MIMO transmission with low computational complexity

    T. Sakamoto, T. Mori, T. Yamamoto, F. Yamamoto

    Proceedings of SPIE - The International Society for Optical Engineering   8647  2013  [Refereed]

     View Summary

    This paper introduces our recent results on mode-division multiplexing transmission with MIMO processing. We have been studying coherent optical MIMO transmission systems and developing few-mode fibers to reduce the complexity of MIMO processing, for example, by using multi-step index fibers to control the differential mode delay (DMD) of the fibers and to compensate for the total DMD. We also investigated a transmission system using reduced-complexity MIMO processing. Finally, we review our latest 2×2 WDM-MIMO transmission experiments with low MIMO processing complexity. © 2012 SPIE.

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • A periodic combined-content distribution mechanism in peer-assisted content delivery networks.

    Naoya Maki, Ryoichi Shinkuma, Tatsuya Mori, Noriaki Kamiyama, Ryoichi Kawahara

    Proceedings of the 2013 ITU Kaleidoscope: Building Sustainable Communities, Kyoto, Japan, April 22-24, 2013     1 - 8  2013  [Refereed]

  • Expected Traffic Reduction by Content-oriented Incentive in Peer-assisted Content Delivery Networks

    Naoya Maki, Takayuki Nishio, Ryoichi Shinkuma, Tatsuro Takahashi, Tatsuya Mori, Noriaki Kamiyama, Ryoichi Kawahara

    2013 INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING (ICOIN)     450 - 455  2013  [Refereed]

     View Summary

    Content services that deliver large-volume content files have been growing rapidly. In these services, it is crucial for the service provider and the network operator to minimize traffic volume in order to lower the cost charged for bandwidth and the cost for network infrastructure, respectively. To reduce the traffic, traffic localization has been discussed; network traffic is localized when requested content files are served by another nearby altruistic client instead of the source servers. With this mechanism, the concept of the peer-assisted content delivery network (CDN) can localize the overall traffic and enable service providers to minimize traffic without deploying or borrowing distributed storage. To localize traffic effectively, content files that are likely to be requested by many clients should be cached locally. We present a traffic engineering scheme for peer-assisted CDN models. Its key idea is to control the behavior of clients by using a content-oriented incentive mechanism. This approach optimizes traffic flows by letting altruistic clients download content files that are most likely to contribute to localizing network traffic. To let altruistic clients request the desired files, we combine content files while keeping the price equal to that of a single content item. We discuss the performance of our proposed algorithm considering the cache replacement algorithms.

    DOI

    Scopus

    5
    Citation
    (Scopus)
  • Analyzing spatial structure of IP addresses for detecting malicious websites

    Daiki Chiba, Kazuhiro Tobe, Tatsuya Mori, Shigeki Goto

    Journal of Information Processing   21 ( 3 ) 539 - 550  2013  [Refereed]

     View Summary

    Web-based malware attacks have become one of the most serious threats that need to be addressed urgently. Several approaches that have attracted attention as promising ways of detecting such malware include employing one of several blacklists. However, these conventional approaches often fail to detect new attacks owing to the versatility of malicious websites. Thus, it is difficult to maintain up-to-date blacklists with information for new malicious websites. To tackle this problem, this paper proposes a new scheme for detecting malicious websites using the characteristics of IP addresses. Our approach leverages the empirical observation that IP addresses are more stable than other metrics such as URLs and DNS records. While the strings that form URLs or DNS records are highly variable, IP addresses are less variable, i.e., IPv4 address space is mapped onto 4-byte strings. In this paper, a lightweight and scalable detection scheme that is based on machine learning techniques is developed and evaluated. The aim of this study is not to provide a single solution that effectively detects web-based malware but to develop a technique that compensates for the drawbacks of existing approaches. The effectiveness of our approach is validated by using real IP address data from existing blacklists and real traffic data on a campus network. The results demonstrate that our scheme can expand the coverage/accuracy of existing blacklists and also detect unknown malicious websites that are not covered by conventional approaches. © 2013 Information Processing Society of Japan.

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • Optimally designing ISP-operated CDN

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara, Haruhisa Hasegawa

    IEICE Transactions on Communications   E96-B ( 3 ) 790 - 801  2013  [Refereed]

     View Summary

    Recently, the number of users downloading video content on the Internet has dramatically increased, and it is highly anticipated that downloading huge size, rich content such as movie files will become a popular use of the Internet in the near future. The transmission bandwidth consumed by delivering rich content is enormous, so it is urgent for ISPs to design an efficient delivery system that minimizes the amount of network resources consumed. To deliver web content efficiently, a content delivery network (CDN) is often used. CDN providers collocate a huge number of servers within multiple ISPs without being informed of detailed network information, i.e., network topologies, from ISPs. Minimizing the amount of network resources consumed is difficult because a CDN provider selects a server for each request based on only rough estimates of response time. Therefore, an ordinary CDN is not suited for delivering rich content. P2P-based delivery systems are becoming popular as scalable delivery systems. However, by using a P2P-based system, we still cannot obtain the ideal delivery pattern that is optimal for ISPs because the server locations depend on users behaving selfishly. To provide rich content to users economically and efficiently, an ISP itself should optimally provide servers with huge storage capacities at a limited number of locations within its network. In this paper, we investigate the content deployment method, the content delivery process, and the server allocation method that are desirable for this ISP-operated CDN. Moreover, we evaluate the effectiveness of the ISP-operated CDN using the actual network topologies of commercial ISPs. Copyright © 2013 The Institute of Electronics, Information and Communication Engineers.

    DOI

    Scopus

    9
    Citation
    (Scopus)
  • Analyzing characteristics of TCP quality metrics with respect to type of connection through measured traffic data

    Yasuhiro Ikeda, Ryoichi Kawahara, Noriaki Kamiyama, Tatsuaki Kimura, Tatsuya Mori

    IEICE Transactions on Communications   E96-B ( 2 ) 533 - 542  2013  [Refereed]

     View Summary

We analyze measured traffic data to investigate the characteristics of TCP quality metrics such as packet retransmission rate, round-trip time (RTT), and throughput of connections classified by their type (client-server (C/S) or peer-to-peer (P2P)) or by the location of the connection host (domestic or overseas). Our findings are as follows. (i) The TCP quality metrics of the measured traffic data are not necessarily consistent with a theoretical formula proposed in a previous study. However, the average RTT and retransmission rate are negatively correlated with the throughput, which is consistent with this formula. Furthermore, the maximum idle time, which is defined as the maximum length of the packet interarrival times, is negatively correlated with throughput. (ii) Each TCP quality metric of C/S connections is higher than that of P2P connections. Here, "higher quality" means that either the throughput is higher or the other TCP quality metrics lead to higher throughput; for example, the average RTT is lower or the retransmission rate is lower. Specifically, the median throughput of C/S connections is 2.5 times higher than that of P2P connections in the incoming direction of domestic traffic. (iii) The characteristics of TCP quality metrics depend on the location of the host of the TCP connection. There are cases in which overseas servers might use a different TCP congestion control scheme. Even if we eliminate these servers, there is still a difference between domestic and overseas traffic in the degree of impact the average RTT has on throughput. One reason for this is thought to be the difference in the maximum idle time; another is that the congestion levels of these types of traffic differ even when their average RTTs are the same. Copyright © 2013 The Institute of Electronics, Information and Communication Engineers.
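The "theoretical formula proposed in a previous study" is not reproduced in the abstract. A widely cited formula of this kind is the Mathis approximation, which relates TCP throughput to MSS, RTT, and loss rate; the sketch below is illustrative only and is not necessarily the exact formula the paper tests against.

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation: throughput ~ (MSS / RTT) * C / sqrt(p),
    with C = sqrt(3/2). Returns bytes per second. Illustrative only; the
    paper's reference formula may differ in constants and assumptions."""
    c = math.sqrt(3.0 / 2.0)
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

# Throughput falls as RTT or the retransmission (loss) rate rises,
# matching the negative correlations reported in the abstract.
t_good = mathis_throughput(1460, 0.05, 0.001)  # 50 ms RTT, 0.1% loss
t_poor = mathis_throughput(1460, 0.20, 0.01)   # 200 ms RTT, 1% loss
assert t_good > t_poor
```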

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Effect of Limiting Pre-Distribution and Clustering Users on Multicast Pre-Distribution VoD

    Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Haruhisa Hasegawa

    IEICE TRANSACTIONS ON COMMUNICATIONS   E96B ( 1 ) 143 - 154  2013.01  [Refereed]

     View Summary

In Video on Demand (VoD) services, the demand for content items changes greatly over the course of the day. Because service providers are required to maintain a stable service during peak hours, they need to design system resources on the basis of peak-time demand, so reducing the server load at peak times is important. To reduce the peak load of a content server, we propose multicasting popular content items to all users independently of actual requests as well as providing on-demand unicast delivery. With this solution, however, the hit ratio of pre-distributed content items is small, and large-capacity storage is required at each set-top box (STB). We can expect to cope with this problem by limiting the number of pre-distributed content items or by clustering users based on their viewing histories. We evaluated the effect of these techniques by using actual VoD access log data. We also evaluated the total cost of the multicast pre-distribution VoD system with the two proposed techniques.

    DOI

    Scopus

  • Analyzing spatial structure of IP addresses for detecting malicious websites

    Daiki Chiba, Kazuhiro Tobe, Tatsuya Mori, Shigeki Goto

    Journal of Information Processing   21 ( 3 ) 539 - 550  2013  [Refereed]

     View Summary

Web-based malware attacks have become one of the most serious threats that need to be addressed urgently. Several approaches that have attracted attention as promising ways of detecting such malware include employing one of several blacklists. However, these conventional approaches often fail to detect new attacks owing to the versatility of malicious websites. Thus, it is difficult to maintain up-to-date blacklists with information for new malicious websites. To tackle this problem, this paper proposes a new scheme for detecting malicious websites using the characteristics of IP addresses. Our approach leverages the empirical observation that IP addresses are more stable than other metrics such as URLs and DNS records. While the strings that form URLs or DNS records are highly variable, IP addresses are less variable, i.e., the IPv4 address space is mapped onto 4-byte strings. In this paper, a lightweight and scalable detection scheme that is based on machine learning techniques is developed and evaluated. The aim of this study is not to provide a single solution that effectively detects web-based malware but to develop a technique that compensates for the drawbacks of existing approaches. The effectiveness of our approach is validated by using real IP address data from existing blacklists and real traffic data on a campus network. The results demonstrate that our scheme can expand the coverage/accuracy of existing blacklists and also detect unknown malicious websites that are not covered by conventional approaches. © 2013 Information Processing Society of Japan.
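As an illustration of the general idea of learning on IP address structure, the toy sketch below maps IPv4 addresses onto their four octets and applies a nearest-centroid rule. The feature set, learner, and addresses (IETF documentation ranges) are hypothetical and far simpler than the scheme the paper evaluates.

```python
from statistics import mean

def ip_features(ip):
    # Map a dotted-quad IPv4 address onto its four octets (the "4-byte string").
    return [int(octet) for octet in ip.split(".")]

def centroid(ips):
    return [mean(col) for col in zip(*(ip_features(ip) for ip in ips))]

def classify(ip, bad_centroid, good_centroid):
    # Nearest-centroid rule in octet space using squared Euclidean distance.
    f = ip_features(ip)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return "malicious" if dist(bad_centroid) < dist(good_centroid) else "benign"

# Hypothetical training addresses (documentation ranges, not real blacklists).
bad = centroid(["203.0.113.10", "203.0.113.55", "203.0.113.200"])
good = centroid(["198.51.100.20", "198.51.100.30", "198.51.100.90"])
assert classify("203.0.113.77", bad, good) == "malicious"
```

The point of the sketch is only that nearby IP addresses produce nearby feature vectors, which is the stability property the paper exploits.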

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • Traffic Engineering of Peer-Assisted Content Delivery Network with Content-Oriented Incentive Mechanism

    Naoya Maki, Takayuki Nishio, Ryoichi Shinkuma, Tatsuya Mori, Noriaki Kamiyama, Ryoichi Kawahara, Tatsuro Takahashi

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E95D ( 12 ) 2860 - 2869  2012.12  [Refereed]

     View Summary

In content services where people purchase and download large-volume content, minimizing network traffic is crucial for the service provider and the network operator since they want to lower the cost charged for bandwidth and the cost for network infrastructure, respectively. Traffic localization is an effective way of reducing network traffic. Network traffic is localized when a client can obtain the requested content files from a nearby altruistic client instead of the source servers. The concept of the peer-assisted content distribution network (CDN) can reduce the overall traffic with this mechanism and enable service providers to minimize traffic without deploying or borrowing distributed storage. To localize traffic effectively, content files that are likely to be requested by many clients should be cached locally. This paper presents a novel traffic engineering scheme for peer-assisted CDN models. Its key idea is to control the behavior of clients by using a content-oriented incentive mechanism. This approach enables us to optimize traffic flows by letting altruistic clients download content files that are most likely to contribute to localizing traffic among clients. In order to let altruistic clients request the desired files, we combine content files while keeping the price equal to that of a single content item. This paper presents a solution for optimizing the selection of content files to be combined so that cross traffic in a network is minimized. We also give a model for analyzing the upper-bound performance and present numerical results.

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • Analyzing and Reducing the Impact of Traffic on Large-Scale NAT

    Ryoichi Kawahara, Tatsuya Mori, Takeshi Yada, Noriaki Kamiyama

    IEICE TRANSACTIONS ON COMMUNICATIONS   E95B ( 9 ) 2815 - 2827  2012.09  [Refereed]

     View Summary

We investigate the impact of traffic on the performance of large-scale NAT (LSN), since it has been attracting attention as a means of better utilizing the limited number of global IPv4 addresses. We focus on the number of active flows because they drive up the LSN memory requirements in two ways: more flows must be held in LSN memory, and more global IPv4 addresses must be prepared. Through traffic measurement data analysis, we found that more than 1% of hosts generated more than 100 TCP flows or 486 UDP flows at the same time, and on average, there were 1.43-3.99 active TCP flows per host when the inactive timer used to clear the flow state from a flow table was set to 15 s. When the timer is changed from 15 s to 10 min, the number of active flows increases more than tenfold. We also investigate how to reduce the above impact on LSN in terms of saving memory space and accommodating more users for each global IPv4 address. We show that to save memory space, regulating network anomalies can reduce the number of active TCP flows on an LSN by a maximum of 48.3% and by 29.6% on average. We also discuss the applicability of a batch flow-arrival model for estimating the variation in the number of active flows, taking into account that the variation is needed to prepare an appropriate memory space. One way to allow each global IPv4 address to accommodate more users is to better utilize destination IP address information when mapping a source IP address from a private address to a global IPv4 address. This can effectively reduce the required number of global IPv4 addresses by 85.9% for TCP traffic and 91.9% for UDP traffic on average.
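The last point, stretching the global address pool by exploiting destination information, can be illustrated with a toy endpoint-dependent NAT table. The class, addresses, and port pool below are hypothetical and are not the LSN design analyzed in the paper.

```python
# A toy endpoint-dependent NAT table (illustrative only). Keying translations
# on (private source, destination) lets one global (IP, port) pair be reused
# simultaneously for flows to different destinations, shrinking the global
# address/port pool an LSN must reserve.

class EndpointDependentNAT:
    def __init__(self, global_ip="192.0.2.1", ports=(40000, 40001)):
        self.global_ip = global_ip
        self.ports = list(ports)
        self.table = {}  # (private src, dst) -> (global ip, global port)

    def map_flow(self, src, dst):
        if (src, dst) not in self.table:
            # A port is busy only if it is already mapped for this destination.
            used = {p for (s, d), (_, p) in self.table.items() if d == dst}
            port = next(p for p in self.ports if p not in used)
            self.table[(src, dst)] = (self.global_ip, port)
        return self.table[(src, dst)]

nat = EndpointDependentNAT()
# Two private hosts talking to different destinations can share port 40000.
a = nat.map_flow("10.0.0.1", "198.51.100.7")
b = nat.map_flow("10.0.0.2", "203.0.113.9")
assert a == b == ("192.0.2.1", 40000)
```

With a source-only mapping, the second flow would have consumed a second port; the per-destination keying is what yields the large address savings the abstract reports.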

    DOI

    Scopus

  • Extended Darknet: Multi-Dimensional Internet Threat Monitoring System

    Akihiro Shimoda, Tatsuya Mori, Shigeki Goto

    IEICE TRANSACTIONS ON COMMUNICATIONS   E95B ( 6 ) 1915 - 1923  2012.06  [Refereed]

     View Summary

Internet threats caused by botnets/worms are one of the most important security issues to be addressed. Darknet, also called a dark IP address space, is one of the best solutions for monitoring anomalous packets sent by malicious software. However, since darknet is deployed only on an inactive IP address space, it is an inefficient way of monitoring a working network that has a considerable number of active IP addresses. The present paper addresses this problem. We propose a scalable, lightweight malicious packet monitoring system based on a multi-dimensional IP/port analysis. Our system significantly extends the monitoring scope of darknet. In order to extend the capacity of darknet, our approach leverages the active IP address space without affecting legitimate traffic. Multi-dimensional monitoring enables the monitoring of TCP ports with firewalls enabled on each of the IP addresses. We focus on delays of TCP SYN/ACK responses in the traffic. We locate SYN/ACK-delayed packets and forward them to sensors or honeypots for further analysis. We also propose a policy-based flow classification and forwarding mechanism and develop a prototype of a monitoring system that implements our proposed architecture. We deploy our system on a campus network and perform several experiments for the evaluation of our system. We verify that our system can cover 89% of the IP addresses while darknet-based monitoring only covers 46%. On our campus network, our system monitors twice as many IP addresses as darknet.

    DOI

    Scopus

    3
    Citation
    (Scopus)
  • Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability

    Keisuke Ishibashi, Ryoichi Kawahara, Tatsuya Mori, Tsuyoshi Kondoh, Shoichiro Asano

    IEICE TRANSACTIONS ON COMMUNICATIONS   E95B ( 2 ) 466 - 476  2012.02  [Refereed]

     View Summary

We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. Those parameters also affect the monitoring burden, so network operators face a trade-off between the monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution, which is parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observation of the backbone traffic, which is spatially uncorrelated and exhibits temporal long-range dependence. Then we derive the equations for detectability. With those equations, we can answer the practical questions that arise in actual network operations: what sampling rate to set to find a given volume of anomaly or, if that rate is too high for actual operation, what granularity is optimal for a given lower limit of the sampling rate.
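The derived equations themselves are in the paper; a toy calculation in the same spirit (Gaussian normal traffic, a k-sigma threshold, and an assumed scaling of mean and standard deviation under sampling) can illustrate the trade-off. All parameter values and the scaling assumption below are illustrative, not the paper's.

```python
import math

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def detection_rates(mu, sigma, anomaly, p, k=3.0):
    """Toy version of the trade-off described above (not the paper's equations).

    Assumptions: normal traffic per interval ~ N(mu, sigma^2); sampling at
    rate p scales the mean to p*mu and the std to sqrt(p)*sigma; an anomaly
    adds p*anomaly to the sampled volume; the detector flags any interval
    above mu_s + k*sigma_s."""
    mu_s, sigma_s = p * mu, math.sqrt(p) * sigma
    threshold = mu_s + k * sigma_s
    fpr = 1.0 - gauss_cdf(threshold, mu_s, sigma_s)          # false positives
    fnr = gauss_cdf(threshold, mu_s + p * anomaly, sigma_s)  # missed anomalies
    return fpr, fnr

# The same anomaly becomes harder to detect as the sampling rate drops.
_, fnr_hi = detection_rates(mu=1e6, sigma=1e5, anomaly=5e5, p=0.1)
_, fnr_lo = detection_rates(mu=1e6, sigma=1e5, anomaly=5e5, p=0.001)
assert fnr_lo > fnr_hi
```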

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Detecting Malicious Websites by Learning IP Address Features

    Daiki Chiba, Kazuhiro Tobe, Tatsuya Mori, Shigeki Goto

    2012 IEEE/IPSJ 12TH INTERNATIONAL SYMPOSIUM ON APPLICATIONS AND THE INTERNET (SAINT)     29 - 39  2012  [Refereed]

     View Summary

Web-based malware attacks have become one of the most serious threats that need to be addressed urgently. Several approaches that have attracted attention as promising ways of detecting such malware include employing various blacklists. However, these conventional approaches often fail to detect new attacks owing to the versatility of malicious websites. Thus, it is difficult to maintain up-to-date blacklists with information regarding new malicious websites. To tackle this problem, we propose a new method for detecting malicious websites using the characteristics of IP addresses. Our approach leverages the empirical observation that IP addresses are more stable than other metrics such as URLs and DNS records. While the strings that form URLs or domain names are highly variable, IP addresses are less variable, i.e., the IPv4 address space is mapped onto 4-byte strings. We develop a lightweight and scalable detection scheme based on machine learning techniques. The aim of this study is not to provide a single solution that effectively detects web-based malware but to develop a technique that compensates for the drawbacks of existing approaches. We validate the effectiveness of our approach by using real IP address data from existing blacklists and real traffic data on a campus network. The results demonstrate that our method can expand the coverage/accuracy of existing blacklists and also detect unknown malicious websites that are not covered by conventional approaches.

    DOI

    Scopus

    38
    Citation
    (Scopus)
  • Autonomic Load Balancing for Flow Monitoring

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara

    2012 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC)     2684 - 2688  2012  [Refereed]

     View Summary

Monitoring flows at routers for flow analysis or deep packet inspection requires the monitors to update monitored flow information at the transmission line rate and needs to use high-speed memory such as SRAM. Therefore, it is difficult to measure all flows, and the monitors need to limit the monitoring target to a subset of the flows. However, if monitoring targets are randomly selected, an identical flow may be monitored at multiple routers on its route, or a flow may not be monitored at any router on its route. To maximize the number of flows monitored in the entire network, the monitors are required to select the monitoring targets while maintaining a balanced load among the monitors. In this paper, we propose an autonomous load balancing method in which monitors exchange load information only with adjacent monitors.

    DOI

    Scopus

  • A sophisticated ad hoc cloud computing environment built by the migration of a server to facilitate distributed collaboration

    Tatsuya Mori, Makoto Nakashima, Tetsuro Ito

    Proceedings - 26th IEEE International Conference on Advanced Information Networking and Applications Workshops, WAINA 2012     1196 - 1202  2012  [Refereed]

     View Summary

A sophisticated ad hoc cloud computing environment (SpACCE) providing the calculation capacity of PCs is proposed to facilitate distributed collaboration. Distributed collaboration is now indispensable in daily work and mainly occurs ad hoc in offices and laboratories. However, computer resources in offices and laboratories are under-utilized, while conventional cloud computing environments composed of dedicated servers are not suited to flexibly deploying applications ad hoc. A SpACCE can be built according to the needs that occur at any given time on a set of personal, i.e., non-dedicated, PCs and can dynamically migrate a server for application sharing to another PC. CollaboTray, an application-sharing system that can share any application without modification, is employed to realize server migration. By migrating a server, the redundant calculation capacity of PCs used for individual work can be utilized to produce a sophisticated ad hoc cloud computing environment, where the response time of the application shared among the users is improved. The level of calculation capacity required to execute server migration and the effectiveness of the migration were clarified by building a SpACCE in a university research room. © 2012 IEEE.

    DOI

  • SpACCE: a sophisticated ad hoc cloud computing environment built by server migration to facilitate distributed collaboration.

    Tatsuya Mori, Makoto Nakashima, Tetsuro Ito

    IJSSC   2 ( 4 ) 230 - 239  2012  [Refereed]

    DOI

  • Fundamental Study for Controlling Environment using Biological signal.

    Tatsuya Mori, Yoshikazu Maekawa, Yoko Akiyama, Fumihito Mishima, Koichi Sutani, Sunao Iwaki, Shigehiro Nishijima

    Control. Intell. Syst.   40 ( 3 )  2012  [Refereed]

    DOI

  • Detection accuracy of network anomalies using sampled flow statistics

    Ryoichi Kawahara, Keisuke Ishibashi, Tatsuya Mori, Noriaki Kamiyama, Shigeaki Harada, Haruhisa Hasegawa, Shoichiro Asano

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT   21 ( 6 ) 513 - 535  2011.11  [Refereed]

     View Summary

    We investigated the detection accuracy of network anomalies when using flow statistics obtained through packet sampling. Through a case study based on measurement data, we showed that network anomalies generating a large number of small flows, such as network scans or SYN flooding, become difficult to detect during packet sampling. We then developed an analytical model that enables us to quantitatively evaluate the effect of packet sampling and traffic conditions, such as anomalous traffic volume, on detection accuracy. We also investigated how the detection accuracy worsens when the packet sampling rate decreases. In addition, we show that, even with a low sampling rate, spatially partitioning monitored traffic into groups makes it possible to increase detection accuracy. We also developed a method of determining an appropriate number of partitioned groups, and we show its effectiveness. Copyright (C) 2011 John Wiley & Sons, Ltd.

    DOI

    Scopus

    5
    Citation
    (Scopus)
  • Parallel video streaming optimizing network throughput

    Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Shigeaki Harada, Haruhisa Hasegawa

    COMPUTER COMMUNICATIONS   34 ( 10 ) 1182 - 1194  2011.07  [Refereed]

     View Summary

In the Internet, video streaming services, in which users can enjoy videos at home, are becoming popular. Video streaming with high definition TV (HDTV) or ultra high definition video (UHDV) quality will also be provided and widely demanded in the future. However, the transmission bit-rate of high-quality video streaming is quite large, so the generated traffic flows will cause link congestion. In the Internet, the routes that packets take are determined using static link weights, so the network productivity, i.e., the maximum throughput achievable by the network, is determined by the capacity of the bottleneck link with the maximum utilization, although the utilizations of many links remain at a low level. Therefore, when providing streaming services of rich content, i.e., videos with HDTV or UHDV quality, it is important to flatten the link utilization, i.e., reduce the maximum link utilization. We propose that ISPs use multiple servers to deliver rich content to balance the link utilization, and we propose server allocation and server selection methods for parallel delivery. We evaluate the effect of parallel delivery using 23 actual commercial ISP networks. (C) 2010 Elsevier B.V. All rights reserved.

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • Network Load Analysis of MapReduce Systems

    Tatsuya Mori, Tatsuaki Kimura, Yasuhiro Ikeda, Noriaki Kamiyama, Ryoichi Kawahara

    Operations Research as a Management Science (オペレーションズ・リサーチ : 経営の科学)   56 ( 6 ) 331 - 338  2011.06

     View Summary

    Toward constructing a performance evaluation model of large-scale distributed systems that execute MapReduce, this article presents a case study analyzing, from a network perspective, the load imposed on the entire system when large-scale data processing is run with MapReduce.

    CiNii

  • Optimally designing caches to reduce P2P traffic

    Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Shigeaki Harada, Haruhisa Hasegawa

    COMPUTER COMMUNICATIONS   34 ( 7 ) 883 - 897  2011.05  [Refereed]

     View Summary

Traffic caused by P2P services dominates a large part of traffic on the Internet and imposes significant loads on the Internet, so reducing P2P traffic within networks is an important issue for ISPs. In particular, a huge amount of traffic is transferred within backbone networks; therefore, reducing P2P traffic is important for transit ISPs to improve the efficiency of network resource usage and reduce network capital cost. To reduce P2P traffic, it is effective for ISPs to implement cache devices at some router ports and reduce the hop length of P2P flows by delivering the required content from caches. However, the design problem of cache locations and capacities has not been well investigated, although the effect of caches strongly depends on the cache locations and capacities. We propose an optimum design method for cache capacity and location that minimizes the total amount of P2P traffic based on dynamic programming, assuming that transit ISPs provide caches at transit links to access ISP networks. We apply the proposed design method to 31 actual ISP backbone networks and investigate the main factors determining cache efficiency. We also analyze the properties of network structures in which deploying caches is effective in reducing P2P traffic for transit ISPs. We show that transit ISPs can reduce the P2P traffic within their networks by about 50-85% by optimally designing caches at the transit links to the lower networks. (C) 2010 Elsevier B.V. All rights reserved.
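The paper's dynamic program is not reproduced in the abstract; its underlying structure, splitting a total cache budget across candidate links to maximize traffic reduction, is a classic resource-allocation DP, which might be sketched as follows (the reduction curves are hypothetical, not the paper's model):

```python
def allocate_cache(reduction, total):
    """Generic resource-allocation DP (a sketch, not the paper's exact method).

    reduction[i][c] = traffic reduction when link i receives c units of cache
    (non-decreasing in c, with reduction[i][0] == 0). Returns the best total
    reduction achievable with at most `total` units split across all links."""
    best = [0.0] * (total + 1)   # best[c]: optimum using the links seen so far
    for r in reduction:
        nxt = best[:]
        for c in range(total + 1):
            for give in range(1, min(c, len(r) - 1) + 1):
                nxt[c] = max(nxt[c], best[c - give] + r[give])
        best = nxt
    return best[total]

# Two hypothetical links with diminishing returns per cache unit.
r1 = [0, 10, 15, 18]   # link 1: first unit saves 10, next 5, next 3
r2 = [0, 8, 14, 17]
assert allocate_cache([r1, r2], 3) == 24   # 1 unit on link 1, 2 on link 2
```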

    DOI

    Scopus

    6
    Citation
    (Scopus)
  • Traffic engineering using overlay network

    Ryoichi Kawahara, Shigeaki Harada, Noriaki Kamiyama, Tatsuya Mori, Haruhisa Hasegawa, Akihiro Nakao

    IEEE International Conference on Communications    2011  [Refereed]

     View Summary

Due to integrated high-speed networks accommodating various types of services and applications, the quality of service (QoS) requirements for those networks have also become diverse. The network resources are shared by the individual service traffic in the integrated network. Thus, the QoS of all the services may be degraded indiscriminately when the network becomes congested due to a sudden increase in traffic for a particular service if there is no traffic engineering taking into account each service's QoS requirement. To resolve this problem, we present a method of controlling individual service traffic by using an overlay network, which makes it possible to flexibly add various functionalities. The overlay network provides functionalities to control individual service traffic, such as constructing an overlay network topology for each service, calculating the optimal route for the service's QoS, and caching content to reduce traffic. Specifically, we present a method of overlay routing that is based on the Hedge algorithm, an online learning algorithm that guarantees an upper bound on the difference from the optimal performance. We show the effectiveness of our overlay routing through simulation analysis for various network topologies. © 2011 IEEE.

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Multicast pre-distribution in VoD services

    Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Haruhisa Hasegawa

    2011 IEEE International Workshop Technical Committee on Communications Quality and Reliability, CQR 2011    2011  [Refereed]

     View Summary

The number of users of VoD services in which users can request content delivery on demand has increased dramatically. In VoD services, the demand for content changes greatly over the course of the day. Because service providers are required to maintain a stable service during peak hours, they need to design the system resources based on the demand at the peak time, so reducing the server load at the peak time is an important issue. Although multicast delivery, in which multiple users requesting the same content are served by one delivery session, is effective for suppressing the server load during peak hours, the response time experienced by users increases significantly. A P2P-assisted delivery system in which users download content from other users watching the same content is also effective for reducing the server load. However, the system performance depends on selfish user behavior, and optimizing the usage of system resources is difficult. Moreover, complex operations, i.e., switching the delivery multicast tree or source peers, are necessary to support VCR operations. In this paper, we propose to reduce the server load without increasing user response time by multicasting popular content to all users independently of actual requests as well as providing on-demand unicast delivery. Through numerical evaluation using actual VoD access log data, we clarify the effectiveness of the proposed method. © 2011 IEEE.

    DOI

    Scopus

    7
    Citation
    (Scopus)
  • Performance evaluation of peer-assisted content distribution

    Ryoichi Kawahara, Noriaki Kamiyama, Tatsuya Mori, Haruhisa Hasegawa

    2011 IEEE Consumer Communications and Networking Conference, CCNC'2011     725 - 729  2011  [Refereed]

     View Summary

Peer-assisted content distribution technologies have been attracting attention. By using not only server resources but also the resources of end hosts (i.e., peers), we can reduce the offered load on servers as well as the utilization of the servers' access bandwidth. However, the traffic offered to the network may increase because the traffic exchanged between peers passes across the network. Specifically, if individual peers send traffic disregarding the underlay network topology and traffic conditions, the peer-assisted content distribution method may cause excessive traffic to be offered to the network and poor application performance. We thus investigated the impact of traffic caused by peer-assisted content distribution on the underlay network. We found that although peer-assisted content distribution disregarding the underlay network topology causes 80-120% additional traffic compared with the optimal case, i.e., content distribution using cache servers allocated optimally in the network, using underlay network topology information enables us to achieve almost the same efficiency of network resource utilization as the optimal case. We also found that the peer-assisted approach can adaptively cope with changes in the traffic demand matrix because uploaders in the network are generated according to the demand matrix in a self-organizing manner. This is because peers that have downloaded the content become uploaders, so many uploaders are generated in areas where a large number of content requests exist according to the traffic conditions; therefore, the content delivery traffic can be localized. © 2011 IEEE.

    DOI

    Scopus

  • Limiting pre-distribution and clustering users on multicast pre-distribution VoD

    Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Haruhisa Hasegawa

    Proceedings of the 12th IFIP/IEEE International Symposium on Integrated Network Management, IM 2011     706 - 709  2011  [Refereed]

     View Summary

In Video on Demand (VoD) services, the demand for content items changes greatly from day to day, so reducing the server load at the peak time is an important issue for ISPs seeking to reduce server cost. To achieve this goal, we proposed reducing the server load by multicasting popular content items to all users independently of actual requests as well as providing on-demand unicast delivery. In this solution, however, the hit ratio of pre-distributed content items is small, and large-capacity storage is required at the set-top box (STB). We might be able to cope with this problem by limiting the number of pre-distributed content items or by clustering users based on their viewing histories. We evaluate the effect of these techniques using actual VoD access log data. We clarify that the required storage capacity at the STB can be halved while keeping the server-load reduction at about 80% by limiting pre-distributed content items, and that user clustering is effective only when the cluster count is about two. © 2011 IEEE.

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Traffic engineering using overlay network

    Ryoichi Kawahara, Shigeaki Harada, Noriaki Kamiyama, Tatsuya Mori, Haruhisa Hasegawa, Akihiro Nakao

    2011 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC)     1 - 6  2011  [Refereed]

     View Summary

Due to integrated high-speed networks accommodating various types of services and applications, the quality of service (QoS) requirements for those networks have also become diverse. The network resources are shared by the individual service traffic in the integrated network. Thus, the QoS of all the services may be degraded indiscriminately when the network becomes congested due to a sudden increase in traffic for a particular service if there is no traffic engineering taking into account each service's QoS requirement. To resolve this problem, we present a method of controlling individual service traffic by using an overlay network, which makes it possible to flexibly add various functionalities. The overlay network provides functionalities to control individual service traffic, such as calculating the optimal route for each service's QoS. Specifically, we present a method of overlay routing that is based on the Hedge algorithm, an online learning algorithm that guarantees an upper bound on the difference from the optimal performance. We show the effectiveness of our overlay routing through simulation analysis for various network topologies.
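The Hedge algorithm referenced above maintains a weight per candidate route and multiplicatively down-weights routes that incur loss. A minimal sketch follows; the learning rate and loss values are illustrative and are not the paper's parameters or exact scheme.

```python
import math

def hedge_route(losses, eta=0.5):
    """Hedge / multiplicative weights over candidate overlay routes (a sketch
    of the online-learning idea only, not the paper's exact routing scheme).

    losses[t][i] is route i's observed cost (e.g. normalized delay in [0, 1])
    at round t. Returns the final probability distribution over routes."""
    weights = [1.0] * len(losses[0])
    for round_losses in losses:
        # Exponentially down-weight each route in proportion to its loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    total = sum(weights)
    return [w / total for w in weights]

# Route 0 is consistently congested, route 1 consistently fast:
# the probability mass shifts onto route 1.
p = hedge_route([[0.9, 0.1]] * 10)
assert p[1] > 0.9
```

The regret bound mentioned in the abstract comes from exactly this multiplicative update: the cumulative loss of sampling routes from `p` stays within a provable additive gap of the single best route in hindsight.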

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • How is e-mail sender authentication used and misused?

    Tatsuya Mori, Yousuke Takahashi, Kazumichi Sato, Keisuke Ishibashi

    ACM International Conference Proceeding Series     31 - 37  2011  [Refereed]

     View Summary

E-mail sender authentication is a promising way of verifying the sources of e-mail messages. Since today's primary e-mail sender authentication mechanisms are designed as a fully decentralized architecture, it is crucial for e-mail operators to know how other organizations are using and misusing them. This paper addresses the question "How is the DNS Sender Policy Framework (SPF), which is the most popular e-mail sender authentication mechanism, used and misused in the wild?" To the best of our knowledge, this is the first extensive study addressing this fundamental question. This work targets both legitimate and spamming domain names and correlates them with multiple data sets, including e-mail delivery logs collected from medium-scale enterprise networks and various IP reputation lists. We first present the adoption and usage of DNS SPF from both global and local viewpoints. Next, we present empirically why and how spammers leverage the SPF mechanism in an attempt to pass a simple SPF authentication test. We also show that a non-negligible volume of legitimate messages originating from legitimate senders will be rejected or marked as potential spam under the SPF policy set by owners of legitimate domains. Our findings will help provide (1) e-mail operators with useful insights for setting adequate sender or receiver policies and (2) researchers with detailed measurement data for understanding the feasibility, fundamental limitations, and potential extensions of e-mail sender authentication mechanisms. Copyright © 2011 ACM.
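At its core, SPF publishes the IP ranges authorized to send mail for a domain in a DNS TXT record; a minimal check of the "ip4" mechanism might look like the sketch below (not a full RFC 7208 evaluator; the record is hypothetical).

```python
import ipaddress

def spf_allows(spf_record, sender_ip):
    """Minimal illustration of SPF's "ip4" mechanism (a sketch only: no
    include/redirect/a/mx mechanisms, no DNS lookups, and no qualifier
    handling, all of which a full RFC 7208 evaluator would require)."""
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False  # a real evaluator falls through to the record's "all" term

# A hypothetical domain authorizing one netblock to send its mail:
record = "v=spf1 ip4:192.0.2.0/24 -all"
assert spf_allows(record, "192.0.2.25")
assert not spf_allows(record, "198.51.100.9")
```

The misuse and misconfiguration the paper measures happen around exactly these records: spammers publish permissive records for their own domains, while overly strict records from legitimate owners reject legitimate mail.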

    DOI

    Scopus

    9
    Citation
    (Scopus)
  • Effect of temporal monitoring granularity on detection accuracy of anomalous traffic

    ISHIBASHI Keisuke, KAWAHARA Ryoichi, MORI Tatsuya, KONDOH Tsuyoshi, ASANO Shoichiro

    IEICE technical report   110 ( 260 ) 43 - 48  2010.10

     View Summary

    In this paper, we quantitatively evaluate how the temporal granularity of traffic monitoring (the monitoring interval) affects the detectability of anomalous traffic. We build equations to calculate the False Negative Ratio (FNR) for a given sampling rate, statistics of normal traffic, and volume of the anomaly to be detected. With these equations, we can answer questions that arise for actual network operators and had not yet been answered, such as which monitoring granularity is appropriate for finding a given volume of anomaly. We also evaluate how well the equations estimate the FNR by using actual traffic data.

    CiNii

  • Detecting Anomalous Traffic using Communication Graphs

    K.Ishibashi, T. Kondoh, S. Harada, T. Mori, R. Kawahara, S. Asano

    World Telecommunications Congress (WTC) 2010    2010.09  [Refereed]

    CiNii

  • Analyzing influence of network topology on designing ISP-operated CDN

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara, Shigeaki Harada, Haruhisa Hasegawa

    Proceedings of 2010 14th International Telecommunications Network Strategy and Planning Symposium, Networks 2010    2010  [Refereed]

     View Summary

    The transmission bandwidth consumed by delivering rich content, such as movie files, is enormous, so it is urgent for ISPs to design an efficient delivery system minimizing the amount of network resources consumed. To serve users rich content economically and efficiently, an ISP itself should provide servers with huge storage capacities at a limited number of locations within its network. Therefore, we have investigated the content deployment method and the content delivery process that are desirable for this ISP-operated content delivery network (CDN). We have also proposed an optimum cache server allocation method for an ISP-operated CDN. In this paper, we investigate the properties of the topological locations of nodes at which cache placement is effective using 31 network topologies of actual ISPs. We also classify the 31 networks into two types and evaluate the optimum cache count in each network type. ©2010 IEEE.

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Optimally designing capacity and location of caches to reduce P2P traffic

    Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Shigeaki Harada, Haruhisa Hasegawa

    IEEE International Conference on Communications    2010  [Refereed]

     View Summary

    Traffic caused by P2P services dominates a large part of traffic on the Internet and imposes significant loads on the Internet, so reducing P2P traffic within networks is an important issue for ISPs. In particular, a huge amount of traffic is transferred within backbone networks; therefore reducing P2P traffic is important for transit ISPs to improve the efficiency of network resource usage and reduce network capital cost. To reduce P2P traffic, it is effective for ISPs to implement cache devices at some router ports and reduce the hop length of P2P flows by delivering the required content from caches. However, the design problem of cache locations and capacities has not been well investigated, although the effect of caches strongly depends on the cache locations and capacities. We propose an optimum design method of cache capacity and location for minimizing the total amount of P2P traffic based on dynamic programming, assuming that transit ISPs provide caches at transit links to access ISP networks. We apply the proposed design method to 31 actual ISP backbone networks. ©2010 IEEE.
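
    The dynamic-programming idea behind this kind of cache design can be illustrated with a generic resource-allocation DP: given a (hypothetical) table of how much traffic each link's cache would save at each capacity, it splits a total capacity budget across links optimally. This is a sketch of the technique, not the paper's exact formulation.

```python
def allocate_caches(saving, total_cap):
    """Dynamic-programming allocation of cache capacity across links.

    saving[i][c] = traffic reduction when link i receives cache capacity
    c (assumed non-decreasing in c).  Returns (best_saving, allocation).
    """
    n = len(saving)
    # best[c] = max saving using total capacity c over links seen so far
    best = [0.0] * (total_cap + 1)
    choice = [[0] * (total_cap + 1) for _ in range(n)]
    for i in range(n):
        new = [0.0] * (total_cap + 1)
        for c in range(total_cap + 1):
            for give in range(min(c, len(saving[i]) - 1) + 1):
                v = best[c - give] + saving[i][give]
                if v > new[c]:
                    new[c], choice[i][c] = v, give
        best = new
    # Backtrack to recover the per-link capacities.
    alloc, c = [0] * n, total_cap
    for i in range(n - 1, -1, -1):
        alloc[i] = choice[i][c]
        c -= alloc[i]
    return best[total_cap], alloc
```

For example, with two links whose savings for capacities 0/1/2 are [0, 5, 6] and [0, 4, 8], a budget of 2 is best split one unit each.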

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • Characterizing Traffic Flows Originating from Large-Scale Video Sharing Services

    Tatsuya Mori, Ryoichi Kawahara, Haruhisa Hasegawa, Shinsuke Shimogawa

    TRAFFIC MONITORING AND ANALYSIS, PROCEEDINGS   6003   17 - 31  2010  [Refereed]

     View Summary

    This work attempts to characterize network traffic flows originating from large-scale video sharing services such as YouTube. The key technical contributions of this paper are twofold. We first present a simple and effective methodology that identifies traffic flows originating from video hosting servers. The key idea behind our approach is to leverage the addressing/naming conventions used in large-scale server farms. Next, using the identified video flows, we investigate the characteristics of network traffic flows of video sharing services from a network service provider view. Our study reveals the intrinsic characteristics of the flow size distributions of video sharing services. The origin of the intrinsic characteristics is rooted on the differentiated service provided for free and premium membership of the video sharing services. We also investigate temporal characteristics of video traffic flows.

    DOI

    Scopus

    18
    Citation
    (Scopus)
  • Sensor in the dark: Building untraceable large-scale honeypots using virtualization technologies

    Akihiro Shimoda, Tatsuya Mori, Shigeki Goto

    Proceedings - 2010 10th Annual International Symposium on Applications and the Internet, SAINT 2010     22 - 30  2010  [Refereed]

     View Summary

    A honeypot is a system that aims to detect and analyze malicious attacks attempted on a network in an interactive manner. Because the primary objective of a honeypot is to detect enemies without being known to them, it is important to hide its existence. However, as several studies have reported, exploiting the unique characteristics of hosts working on a consecutive IP address range easily reveals the existence of honeypots. In fact, there exist some anti-honeypot tools that intelligently probe IP address space to locate Internet security sensors, including honeypots. In order to tackle this problem, we propose a system called DarkPots, which consists of a large number of virtualized honeypots using unused and nonconsecutive IP addresses in a production network. DarkPots enables us to deploy a large number of honeypots within an active IP space used for a production network; thus detection is difficult using existing probing techniques. In addition, by virtually classifying the unused IP addresses into several groups, DarkPots enables us to perform several monitoring schemes simultaneously. This function is meaningful because we can adopt more than one monitoring scheme and compare their results in an operating network. We design and implement a prototype of DarkPots and empirically evaluate its effectiveness and feasibility by concurrently performing three independent monitoring schemes in a high-speed campus network. The system successfully emulated 7,680 virtualized honeypots on a backbone link that carries 500 Mbps - 1 Gbps of traffic without affecting legitimate traffic. Our key findings suggest: (1) active and interactive monitoring schemes provide quantitatively more in-depth insights into malicious attacks than a passive monitoring approach, and (2) randomly distributed allocation of IP addresses has an advantage over concentrated allocation in that it can collect more information from malware. These features are crucial in monitoring security threats. © 2010 IEEE.

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Impact of Topology on Parallel Video Streaming

    Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Shigeaki Harada, Haruhisa Hasegawa

    PROCEEDINGS OF THE 2010 IEEE-IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM     607 - 614  2010  [Refereed]

     View Summary

    Video streaming with HDTV or UHDV quality will be provided and widely demanded in the future. However, the transmission bit-rate of high-quality video streaming is quite large, so generated traffic flows will cause link congestion. Therefore, when providing streaming services of rich content, it is important to flatten the link utilization, i.e., reduce the maximum link utilization. To achieve this goal, parallel video streaming in which ISPs use multiple servers to deliver rich content is effective. However, the effect of parallel video streaming depends on the network topology and link capacities. In this paper, we investigate the impact of network topologies on the effect of parallel video streaming using 23 actual commercial ISP networks, when optimally designing server locations and optimally selecting servers.

    DOI

    Scopus

    6
    Citation
    (Scopus)
  • Optimally Designing Capacity and Location of Caches to Reduce P2P Traffic

    Noriaki Kamiyama, Ryoichi Kawahara, Tatsuya Mori, Shigeaki Harada, Haruhisa Hasegawa

    2010 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS     1 - 6  2010  [Refereed]

     View Summary

    Traffic caused by P2P services dominates a large part of traffic on the Internet and imposes significant loads on the Internet, so reducing P2P traffic within networks is an important issue for ISPs. In particular, a huge amount of traffic is transferred within backbone networks; therefore reducing P2P traffic is important for transit ISPs to improve the efficiency of network resource usage and reduce network capital cost. To reduce P2P traffic, it is effective for ISPs to implement cache devices at some router ports and reduce the hop length of P2P flows by delivering the required content from caches. However, the design problem of cache locations and capacities has not been well investigated, although the effect of caches strongly depends on the cache locations and capacities. We propose an optimum design method of cache capacity and location for minimizing the total amount of P2P traffic based on dynamic programming, assuming that transit ISPs provide caches at transit links to access ISP networks. We apply the proposed design method to 31 actual ISP backbone networks.

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • On the effectiveness of IP reputation for spam filtering.

    Holly Esquivel, Aditya Akella, Tatsuya Mori

    Second International Conference on Communication Systems and Networks, COMSNETS 2010, Bangalore, India, January 5-9, 2010     1 - 10  2010  [Refereed]

    DOI CiNii

    Scopus

    28
    Citation
    (Scopus)
  • Adaptive bandwidth control to handle long-duration large flows

    Ryoichi Kawahara, Tatsuya Mori, Noriaki Kamiyama, Shigeaki Harada, Haruhisa Hasegawa

    IEEE International Conference on Communications    2009  [Refereed]

     View Summary

    We describe a method of adaptively controlling bandwidth allocation to flows to reduce the file transfer time of short flows without decreasing the throughput of long-duration large flows. With the rapid increase in Internet traffic volume, effective traffic engineering is increasingly required. Specifically, the traffic of long-duration large flows, caused for example by peer-to-peer applications, is a problem. Most conventional QoS controls allocate a fair-share bandwidth to each flow regardless of its duration. Thus, a long-duration large flow (such as a P2P flow) is allocated the same bandwidth as a short-duration flow (such as data from a Web page), for which the user is more sensitive to response time, i.e., file transfer time. As a result, long-duration large flows consume bandwidth over a long period and increase the response times of short-duration flows, and conventional QoS methods do nothing to prevent this. In this paper, we therefore investigate a different approach: a new form of bandwidth control that achieves better performance when handling short-duration flows while maintaining performance when handling long-duration flows. The basic idea is to tag packets of long-duration large flows according to traffic conditions and to give temporarily higher priority to nontagged packets during network congestion. We also show the effectiveness of our method through simulation. ©2009 IEEE.

    DOI

    Scopus

  • ISP-Operated CDN

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara, Shigeaki Harada, Haruhisa Hasegawa

    IEEE INFOCOM 2009 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS     49 - 54  2009  [Refereed]

     View Summary

    The transmission bandwidth consumed by delivering rich content, such as movie files, is enormous, so it is urgent for ISPs to design an efficient delivery system minimizing the amount of network resources consumed. To deliver web content efficiently, content delivery networks (CDNs) have been widely used. CDN providers collocate a huge number of servers within multiple ISPs without being informed of detailed network information, i.e., network topologies, by the ISPs. Minimizing the amount of network resources consumed is difficult because a CDN provider selects a server for each request based only on rough estimates of response time. To serve users rich content economically and efficiently, an ISP itself should optimally provide servers with huge storage capacities at a limited number of locations within its network. In this paper, we investigate the content deployment method, the content delivery process, and the server allocation method that are desirable for this ISP-operated CDN. Moreover, we evaluate the effectiveness of the ISP-operated CDN using network topologies of actual ISPs.

  • Adaptive bandwidth control to handle long-duration large flows

    Ryoichi Kawahara, Tatsuya Mori, Noriaki Kamiyama, Shigeaki Harada, Haruhisa Hasegawa

    2009 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, VOLS 1-8     2241 - 2246  2009  [Refereed]

     View Summary

    We describe a method of adaptively controlling bandwidth allocation to flows to reduce the file transfer time of short flows without decreasing the throughput of long-duration large flows. With the rapid increase in Internet traffic volume, effective traffic engineering is increasingly required. Specifically, the traffic of long-duration large flows, caused for example by peer-to-peer applications, is a problem. Most conventional QoS controls allocate a fair-share bandwidth to each flow regardless of its duration. Thus, a long-duration large flow (such as a P2P flow) is allocated the same bandwidth as a short-duration flow (such as data from a Web page), for which the user is more sensitive to response time, i.e., file transfer time. As a result, long-duration large flows consume bandwidth over a long period and increase the response times of short-duration flows, and conventional QoS methods do nothing to prevent this. In this paper, we therefore investigate a different approach: a new form of bandwidth control that achieves better performance when handling short-duration flows while maintaining performance when handling long-duration flows. The basic idea is to tag packets of long-duration large flows according to traffic conditions and to give temporarily higher priority to nontagged packets during network congestion. We also show the effectiveness of our method through simulation.

    DOI

    Scopus

  • Improving deployability of peer-assisted CDN platform with incentive

    Tatsuya Mori, Noriaki Kamiyama, Shigeaki Harada, Haruhisa Hasegawa, Ryoichi Kawahara

    GLOBECOM 2009 - 2009 IEEE GLOBAL TELECOMMUNICATIONS CONFERENCE, VOLS 1-8     2076 - 2082  2009  [Refereed]

     View Summary

    As a promising solution for managing the huge workload of large-scale VoD services, managed peer-assisted CDN systems such as P4P [25] have attracted attention. Although the approach works well in theory or in a controlled environment, to the best of our knowledge, there have been no general studies addressing how actual peers can be incentivized in the wild Internet; thus, the deployability of the system with respect to incentives to users has been an open issue. With this background in mind, we propose a new business model that aims to make peer-assisted approaches more feasible. The key idea of the model is that users sell their idle resources back to ISPs. In other words, ISPs can leverage the resources of cooperative users by giving them explicit incentives, e.g., virtual currency. We show a high-level framework for designing the optimal amount of incentive for users. We also analyze how incentives and other external factors affect the efficiency of the system through simulation. Finally, we discuss other fundamental factors that are essential for the deployability of the managed peer-assisted model. We believe that the new business model and the insights obtained through this work are useful for assessing the practical design and deployment of managed peer-assisted CDNs.

    DOI

    Scopus

    8
    Citation
    (Scopus)
  • Design and implementation of scalable, transparent threads for multi-core media processor.

    Takeshi Kodaka, Shunsuke Sasaki, Takahiro Tokuyoshi, Ryuichiro Ohyama, Nobuhiro Nonogaki, Koji Kitayama, Tatsuya Mori, Yasuyuki Ueda, Hideho Arakida, Yuji Okuda, Toshiki Kizu, Yoshiro Tsuboi, Nobu Matsumoto

    Design, Automation and Test in Europe, DATE 2009, Nice, France, April 20-24, 2009     1035 - 1039  2009  [Refereed]

    DOI

  • Traffic Measurement and Analysis Methods Using Sampled Packet Information

    KAWAHARA Ryoichi, MORI Tatsuya, TAKINE Tetsuya, ASANO Shoichiro

    Operations Research as a Management Science   53 ( 6 ) 328 - 333  2008.06  [Refereed]

     View Summary

    Techniques for detecting and controlling, through traffic measurement, anomalous traffic that wastes network resources and degrades quality on the Internet are essential for providing safe and comfortable communication services. Meanwhile, as networks grow larger and faster, measurement based on packet sampling has been attracting attention. This article introduces our work on traffic measurement and analysis methods for detecting anomalous traffic from sampled packet information, together with a review of related research trends. Evaluation results for each method using real traffic data are also presented.

    CiNii

  • Finding cardinality heavy-hitters in massive traffic data and its application to anomaly detection

    Keisuke Ishibashi, Tatsuya Mori, Ryoichi Kawahara, Yutaka Hirokawa, Atsushi Kobayashi, Kimihiro Yamamoto, Hitoaki Sakamoto, Shoichiro Asano

    IEICE TRANSACTIONS ON COMMUNICATIONS   E91B ( 5 ) 1331 - 1339  2008.05  [Refereed]

     View Summary

    We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities, such as worm scans, spam distribution, or botnet control, or normal activities, such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host, we need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using a limited-sized memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable to large traffic data covering a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
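
    The idea of a fixed-size "cardinality summary" that estimates rather than exactly counts can be illustrated with a register-based sketch in the style of Flajolet-Martin / HyperLogLog; the paper's actual summary structure may differ, and the register count here is an illustrative choice.

```python
import hashlib

def cardinality_summary(items, num_regs=64):
    """Build a fixed-size cardinality summary: each item is hashed, routed
    to one register, and the register keeps the maximum rank (position of
    the lowest set bit, geometric under a uniform hash) seen so far.
    Memory stays at num_regs small counters regardless of how many
    distinct items appear.
    """
    regs = [0] * num_regs
    for item in items:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16)
        reg, rest = h % num_regs, h // num_regs
        rank = 1
        while rest & 1 == 0 and rank < 64:   # count trailing zero bits
            rank += 1
            rest >>= 1
        regs[reg] = max(regs[reg], rank)
    return regs

def estimate_cardinality(regs):
    # HyperLogLog-style harmonic-mean estimate (0.709 is the bias
    # constant for 64 registers; range corrections omitted).
    m = len(regs)
    return 0.709 * m * m / sum(2.0 ** -r for r in regs)
```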

    DOI

    Scopus

    4
    Citation
    (Scopus)
  • Packet sampling TCP flow rate estimation and performance degradation detection method

    Ryoichi Kawahara, Tatsuya Mori, Keisuke Ishibashi, Noriaki Kamiyama, Hideaki Yoshino

    IEICE TRANSACTIONS ON COMMUNICATIONS   E91B ( 5 ) 1309 - 1319  2008.05  [Refereed]

     View Summary

    Managing performance at the flow level through traffic measurement is crucial for effective network management. With the rapid rise in link speeds, collecting all packets has become difficult, so packet sampling has been attracting attention as a scalable means of measuring flow statistics. In this paper, we first propose a method of estimating the TCP flow rates of sampled flows through packet sampling, and then develop a method of detecting performance degradation at the TCP flow level from the estimated flow rates. In the rate estimation method, we use the sequence numbers of sampled packets, which makes it possible to markedly improve the accuracy of estimating the flow rates of sampled flows. Using both an analytical model and measurement data, we show that this method gives accurate estimates. We also show that, by observing the estimated rates of sampled flows, we can detect TCP performance degradation. The detection method is based on two findings: (i) sampled flows tend to have high flow rates, and (ii) when a link becomes congested, the performance of high-rate flows is degraded first. These characteristics indicate that sampled flows are sensitive to congestion, so we can detect the performance degradation of congestion-sensitive flows by observing the rates of sampled flows. We also show the effectiveness of our method using measurement data.
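
    The leverage that sequence numbers give under sparse sampling can be sketched as follows: the byte span between the first and last sampled packets counts all bytes the sender transmitted in between, not just the sampled ones. This is an illustration of the general idea only, ignoring wraparound and retransmissions.

```python
def estimate_rate(samples):
    """Estimate a TCP flow's rate from sampled packets via sequence
    numbers.  samples: iterable of (time, seq) pairs.  The difference
    seq_last - seq_first covers every byte sent between the two sampled
    packets, so even a handful of samples yields a rate estimate.
    (Sketch only: assumes monotone sequence numbers, no wraparound.)
    """
    (t0, s0), (t1, s1) = min(samples), max(samples)
    if t1 == t0:
        raise ValueError("need samples spanning a nonzero time interval")
    return (s1 - s0) / (t1 - t0)   # bytes per second
```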

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Integrated method for loss-resilient multicast source authentication and data reconstruction

    Tatsuya Mori, Hideki Tode, Koso Murakami

    IEEE International Conference on Communications     5844 - 5848  2008  [Refereed]

     View Summary

    Multicast is an efficient transfer scheme for distributing content to a large number of clients, but security issues must be addressed. Source authentication is an important technique for protecting against malicious users attempting eavesdropping, masquerading, and so on. Although many authentication schemes have been proposed, most of them are not suitable for practical multicast networks: the design of a scheme should be robust against unreliable networks. In this paper, we extend an existing authentication scheme using erasure codes and propose a novel control mechanism that cooperates with the data reconstruction process. In addition, we show the effectiveness of our proposal by computer simulation. ©2008 IEEE.

    DOI

  • Optimum Identification of Worm-Infected Hosts

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara, Shigeaki Harada

    IP OPERATIONS AND MANAGEMENT, PROCEEDINGS   5275   103 - 116  2008  [Refereed]

     View Summary

    The authors have previously proposed a method of identifying superspreaders by flow sampling and a method of extracting worm-infected hosts from the identified superspreaders using a white list. However, the problem of how to optimally set three parameters (phi, the measurement period length; m*, the identification threshold on the flow count m within phi; and H*, the identification probability for hosts with m = m*) remains unsolved. These three parameters seriously affect the worm-spreading property. In this paper, we propose a method of optimally designing these three parameters to satisfy the condition that the ratio of the number of active worm-infected hosts to the number of all vulnerable hosts is bounded by a given upper limit during the time T required to develop a patch or an anti-worm vaccine.

    DOI

    Scopus

  • Integrated Method for Loss-Resilient Multicast Source Authentication and Data Reconstruction.

    Tatsuya Mori, Hideki Tode, Koso Murakami

    Proceedings of IEEE International Conference on Communications, ICC 2008, Beijing, China, 19-23 May 2008     5844 - 5848  2008  [Refereed]

    DOI

  • A Method of Detecting Network Anomalies in Cyclic Traffic

    Shigeaki Harada, Ryoichi Kawahara, Tatsuya Mori, Noriaki Kamiyama, Haruhisa Hasegawa, Hideaki Yoshino

    GLOBECOM 2008 - 2008 IEEE GLOBAL TELECOMMUNICATIONS CONFERENCE     2057 - 2061  2008  [Refereed]

     View Summary

    We present a method of detecting network anomalies, such as DDoS (distributed denial of service) attacks and flash crowds, automatically in real time. We evaluated this method using measured traffic data and found that it successfully differentiated suspicious traffic. In this paper, we focus on cyclic traffic, which has a daily and/or weekly cycle, and show that the differentiation accuracy is improved by utilizing such a cyclic tendency in anomaly detection. Our method differentiates suspicious traffic that has different statistical characteristics from normal traffic. At the same time, it learns about cyclic large-volume traffic, such as traffic for network operations, and finally considers it to be legitimate.

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Identifying heavy-hitter flows from sampled flow statistics

    Tatsuya Mori, Tetsuya Takine, Jianping Pan, Ryoichi Kawahara, Masato Uchida, Shigeki Goto

    IEICE TRANSACTIONS ON COMMUNICATIONS   E90B ( 11 ) 3061 - 3072  2007.11  [Refereed]

     View Summary

    With the rapid increase of link speeds in recent years, packet sampling has become a very attractive and scalable means of collecting flow statistics; however, it also makes inferring the original flow characteristics much more difficult. In this paper, we develop techniques and schemes to identify flows with a very large number of packets (also known as heavy-hitter flows) from sampled flow statistics. Our approach follows a two-stage strategy: we first parametrically estimate the original flow length distribution from sampled flows. We then identify heavy-hitter flows with Bayes' theorem, where the flow length distribution estimated at the first stage is used as the a priori distribution. Our approach is validated and evaluated with publicly available packet traces. We show that our approach provides a very flexible framework for striking an appropriate balance between false positives and false negatives when the sampling frequency is given.
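
    The second stage, identifying heavy hitters with Bayes' theorem, can be sketched as follows; a toy two-point prior stands in for the flow length distribution estimated in the first stage, and the thresholds are illustrative.

```python
from math import comb

def posterior_heavy(k, p, prior, threshold):
    """P(original flow length >= threshold | k packets of the flow were
    sampled): Bayes' theorem with a binomial sampling likelihood, where
    p is the per-packet sampling probability.

    prior: dict mapping flow length -> prior probability.  In the paper
    this prior is the parametrically estimated flow length distribution;
    the two-point prior below is just a toy example.
    """
    def likelihood(n):
        # Binomial probability of sampling exactly k out of n packets.
        return comb(n, k) * p**k * (1 - p) ** (n - k) if n >= k else 0.0
    joint = {n: pr * likelihood(n) for n, pr in prior.items()}
    total = sum(joint.values())
    return sum(v for n, v in joint.items() if n >= threshold) / total

# Toy prior: 90% short (10-packet) flows, 10% heavy (1000-packet) flows.
prior = {10: 0.9, 1000: 0.1}
```

With a 1% sampling rate, seeing even three sampled packets of one flow already makes "heavy" far more likely than the prior alone suggests.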

    DOI

    Scopus

    36
    Citation
    (Scopus)
  • Effect of sampling rate and monitoring granularity on anomaly detectability

    Keisuke Ishibashi, Ryoichi Kawahara, Tatsuya Mori, Tsuyoshi Kondoh, Shoichiro Asano

    2007 IEEE GLOBAL INTERNET SYMPOSIUM     25 - +  2007  [Refereed]

     View Summary

    In this paper, we quantitatively evaluate how sampling decreases the detectability of anomalous traffic. We build equations to calculate the false positive ratio (FPR) and false negative ratio (FNR) for given values of the sampling rate, statistics of normal traffic, and volume of anomalies to be detected. We show that by changing the measurement granularity, we can detect anomalies even with a low sampling rate, and we give the equation for deriving the optimal granularity using the relationship between the mean and variance of aggregated flows. With those equations, we can answer the practical questions that arise in actual network operations: what sampling rate to set in order to find a given volume of anomaly, or, if that sampling rate is too high for actual operation, what granularity is optimal for finding the anomaly under a given lower limit of the sampling rate.
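
    The flavor of such equations can be seen in a minimal special case: if an anomaly consists of v packets and each packet is sampled independently with rate p, the probability that the anomaly leaves no sampled packet at all is (1 - p)^v. This is an illustrative simplification, not the paper's full model, which also accounts for the statistics of normal traffic.

```python
def miss_probability(anomaly_packets, sampling_rate):
    """Probability that an anomaly of the given packet count is missed
    entirely (none of its packets sampled) under independent per-packet
    sampling: (1 - p)^v.  A crude contribution to the false negative
    ratio, showing why small anomalies vanish as the rate drops.
    """
    return (1 - sampling_rate) ** anomaly_packets
```

For a 100-packet anomaly, dropping the rate from 1% to 0.1% raises this miss probability from about 0.37 to about 0.90.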

  • Estimating scale of peer-to-peer file sharing applications using multilayer partial measurement

    Satoshi Kamei, Masato Uchida, Tatsuya Mori, Yutaka Takahashi

    ELECTRONICS AND COMMUNICATIONS IN JAPAN PART I-COMMUNICATIONS   90 ( 3 ) 54 - 63  2007  [Refereed]

     View Summary

    Autonomous distributed systems comprising an overlay network on the Internet are proliferating as a new technology responsible for the next-generation Web. Due to the properties of such a large-scale system, it is difficult to measure the characteristics of the entire system without modifying the internal protocol of the applications. In this paper, by using a measurement method applicable to P2P applications and measured results obtained by applying this measurement method in part, a general-purpose method for estimating the size and behavior of the entire P2P network is proposed. Further, the present method is applied to a real P2P network and its effectiveness is presented by a specific example. (C) 2006 Wiley Periodicals, Inc.

    DOI

    Scopus

  • 2-D bitmap for summarizing inter-host communication patterns

    Keisuke Ishibashi, Tatsuya Mori, Ryoichi Kawahara, Katsuyasu Toyama, Shunichi Osawa, Shoichiro Asano

    SAINT - 2007 International Symposium on Applications and the Internet - Workshops, SAINT-W     83  2007  [Refereed]

     View Summary

    We propose a tool for summarizing communication patterns between multiple hosts from traffic data within a small memory space, using a 2-D bitmap. Here, we focus on the communication patterns between pairs of source-destination hosts; these represent spatial communication patterns. By analyzing communication patterns using the bitmap, we can identify a super spreader, which is a host that sends packets to many destinations, or analyze the relationship between two source hosts. For the latter purpose, we present an application of the bitmap that calculates the similarity of hosts based on their peer-host patterns. © 2007 IEEE.
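
    A minimal 2-D bitmap of the kind described might look like the following sketch (the dimensions and hash choice are illustrative assumptions): each observed (source, destination) pair sets one bit, and counting the set bits in a source's row approximates its fan-out, flagging super spreaders in constant memory.

```python
import hashlib

class Bitmap2D:
    """Fixed-size 2-D bitmap summarizing which (source, destination)
    pairs were seen.  Hashing both addresses keeps memory constant;
    rows * cols must be a multiple of 8.
    """
    def __init__(self, rows=256, cols=256):
        self.rows, self.cols = rows, cols
        self.bits = bytearray(rows * cols // 8)

    @staticmethod
    def _hash(addr):
        return int(hashlib.md5(addr.encode()).hexdigest(), 16)

    def add(self, src, dst):
        # Set the bit at (hash(src) row, hash(dst) column).
        i = (self._hash(src) % self.rows) * self.cols + self._hash(dst) % self.cols
        self.bits[i // 8] |= 1 << (i % 8)

    def fanout(self, src):
        """Approximate number of distinct destinations seen for src:
        the population count of its row (hash collisions undercount)."""
        start = (self._hash(src) % self.rows) * self.cols
        return sum(
            (self.bits[(start + c) // 8] >> ((start + c) % 8)) & 1
            for c in range(self.cols)
        )
```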

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • A study on detecting network anomalies using sampled flow statistics

    Ryoichi Kawahara, Tatsuya Mori, Noriaki Kamiyama, Shigeaki Harada, Shoichiro Asano

    SAINT - 2007 International Symposium on Applications and the Internet - Workshops, SAINT-W     81  2007  [Refereed]

     View Summary

    We investigate how to detect network anomalies using flow statistics obtained through packet sampling. First, we show that network anomalies generating a huge number of small flows, such as network scans or SYN flooding, become difficult to detect when we execute packet sampling. This is because such flows are more unlikely to be sampled than normal flows. As a solution to this problem, we then show that spatially partitioning the monitored traffic into groups and analyzing the traffic of individual groups can increase the detectability of such anomalies: We also show the effectiveness of the partitioning method using network measurement data. © 2007 IEEE.

    DOI

    Scopus

    17
    Citation
    (Scopus)
  • Simple and adaptive identification of superspreaders by flow sampling

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara

    INFOCOM 2007, VOLS 1-5     2481 - +  2007  [Refereed]

     View Summary

    Abusive traffic caused by worms is increasing severely on the Internet. In many cases, worm-infected hosts generate a huge number of small flows during a short time. To suppress the abusive traffic and prevent worms from spreading, it is important to identify these "superspreaders" as soon as possible and cope with them, e.g., by disconnecting them from the network. This paper proposes a simple and adaptive method of identifying superspreaders by flow sampling. While satisfying the given memory size and the requirement for the processing time, the proposed method can adaptively optimize its parameters according to changes in traffic patterns.

    DOI

    Scopus

    43
    Citation
    (Scopus)
  • Efficient timeout checking mechanism for traffic control

    Noriaki Kamiyama, Tatsuya Mori, Ryoichi Kawahara, Eng Keong Lua

    PROCEEDINGS - 16TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS, VOLS 1-3     327 - 333  2007  [Refereed]

     View Summary

    Traffic flow measurement is essential for implementing QoS control in the Internet. A flow monitoring system collects and stores sampled flow states in a flow table (FT), and the entries are renewed at every packet sampling. Entries in the FT are checked and removed when no packets are sampled within a predetermined timeout. We propose an efficient timeout checking mechanism based on checking a small number of entries selected randomly from the FT. Our method aims to dramatically reduce the number of memory accesses while keeping the memory size small. We evaluate our method against the conventional method that periodically checks all flow entries of the FT. Our simulation results show that our method reduces the number of memory accesses by a factor of 1000 with a small increase in memory size of approximately 10 percent.
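
    The random-probing idea can be sketched as follows; the class name, probe count, and eviction details are illustrative assumptions, not the paper's exact mechanism.

    ```python
    import random
    import time

    class FlowTable:
        """Flow table with randomized timeout checking.

        Instead of periodically scanning every entry, each update probes a
        few randomly chosen entries and evicts those that have been idle
        longer than `timeout` seconds, spreading the scan cost over time.
        """

        def __init__(self, timeout=60.0, probes_per_update=2):
            self.timeout = timeout
            self.probes = probes_per_update
            self.last_seen = {}  # flow key -> timestamp of last sampled packet

        def update(self, flow_key, now=None):
            now = time.time() if now is None else now
            self.last_seen[flow_key] = now
            # Probe a handful of random entries instead of the whole table.
            probe = random.sample(list(self.last_seen),
                                  min(self.probes, len(self.last_seen)))
            for key in probe:
                if now - self.last_seen[key] > self.timeout:
                    del self.last_seen[key]
    ```

    With a small constant number of probes per sampled packet, the amortized checking cost is O(1) per update; stale entries linger slightly longer than under a full periodic scan, which is the modest memory overhead the evaluation reports.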

    DOI

    Scopus

  • Detection accuracy of network anomalies using sampled flow statistics

    Ryoichi Kawahara, Keisuke Ishibashi, Tatsuya Mori, Noriaki Kamiyama, Shigeaki Harada, Shoichiro Asano

    GLOBECOM 2007: 2007 IEEE GLOBAL TELECOMMUNICATIONS CONFERENCE, VOLS 1-11     1959 - +  2007  [Refereed]

     View Summary

    We investigate the detection accuracy of network anomalies when we use flow statistics obtained through packet sampling. We have already shown, through a case study based on measurement data, that network anomalies generating a huge number of small flows, such as network scans or SYN flooding, become hard to detect when we perform packet sampling. In this paper, we first develop an analytical model that enables us to quantitatively evaluate the effect of packet sampling on the detection accuracy and then investigate why detection accuracy worsens when the packet sampling rate decreases. In addition, we show that, even with a low sampling rate, spatially partitioning the monitored traffic into groups makes it possible to increase the detection accuracy. We also develop a method of determining an appropriate number of partitioned groups and show its effectiveness.

    DOI

    Scopus

    7
    Citation
    (Scopus)
  • QoS control to handle long-duration large flows and its performance evaluation

    Ryoichi Kawahara, Tatsuya Mori, Takeo Abe

    IEEE International Conference on Communications   2   579 - 584  2006  [Refereed]

     View Summary

    A method of controlling the rate of long-duration large flows and its performance evaluation is described in this paper. Most conventional QoS controls allocate a fair-share bandwidth to each flow regardless of its duration. Thus, a long-duration large flow (such as a P2P flow) is allocated the same bandwidth as a short-duration flow (such as data from a Web page), for which the user is more sensitive to response time. As a result, long-duration flows will occupy the bandwidth over a long period and worsen the response times of short-duration flows, and conventional QoS methods do nothing to prevent this. We have, therefore, proposed a new form of QoS control that takes flow duration into account and assigns higher priority to the acceptance of shorter-duration flows. In this paper, we show through simulation that our method achieves high performance for short-duration flows without degrading the performance of long-duration flows. We also explain how to set the parameters used in our method. Furthermore, we discuss the applicability of a packet-sampling technique to improve the method's scalability. © 2006 IEEE.

    DOI

    Scopus

  • Simple and accurate identification of high-rate flows by packet sampling

    Noriaki Kamiyama, Tatsuya Mori

    25TH IEEE INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS, VOLS 1-7, PROCEEDINGS IEEE INFOCOM 2006     2836 - 2848  2006  [Refereed]

     View Summary

    Unfairness among best-effort flows is a serious problem on the Internet. In particular, UDP flows or unresponsive flows that do not obey the TCP flow control mechanism can consume a large share of the available bandwidth. High-rate flows seriously affect other flows, so it is important to identify them and limit their throughput by selectively dropping their packets. As link transmission capacity increases and the number of active flows increases, however, capturing all packet information becomes more difficult. In this paper, we propose a novel method of identifying high-rate flows by using sampled packets. The proposed method simply identifies flows from which Y packets are sampled without timeout. The identification principle is very simple and the implementation is easy. We derive the identification probability for flows with arbitrary flow rates and obtain an identification curve that clearly demonstrates the accuracy of identification. The characteristics of this method are determined by three parameters: the identification threshold Y, the timeout coefficient K, and the sampling interval N. To match the experimental identification probability to the theoretical one and to simplify the identification mechanism, we should set K to the maximum allowable value. Although increasing Y improves the identification accuracy, both the required memory size and the processing power grow as Y increases. Numerical evaluation using an actual packet trace demonstrated that the proposed method achieves very high identification accuracy with a much simpler mechanism than that of previously proposed methods.
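
    A rough sketch of the identification rule described above; the parameter names Y, K, and N follow the abstract, but the exact timeout definition (a gap of more than K * N packet arrivals between consecutive samples of a flow) is an assumption made for this example.

    ```python
    class HighRateFlowIdentifier:
        """Identify high-rate flows from periodically sampled packets.

        Every N-th packet is sampled; a flow is reported once Y of its
        packets have been sampled with no gap between consecutive samples
        longer than K * N packet arrivals.
        """

        def __init__(self, Y=3, K=10, N=100):
            self.Y, self.N = Y, N
            self.timeout = K * N      # timeout measured in packet arrivals
            self.state = {}           # flow -> (sample_count, last_sample_index)
            self.packet_index = 0
            self.identified = set()

        def observe(self, flow):
            self.packet_index += 1
            if self.packet_index % self.N:
                return                # not a sampled packet
            count, last = self.state.get(flow, (0, self.packet_index))
            if self.packet_index - last > self.timeout:
                count = 0             # gap too long: restart counting
            self.state[flow] = (count + 1, self.packet_index)
            if count + 1 >= self.Y:
                self.identified.add(flow)
    ```

    A flow sending a large fraction of the traffic is sampled often enough to accumulate Y samples before timing out, while a low-rate flow almost never is; increasing Y sharpens this separation at the cost of memory and processing, as the abstract notes.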

    DOI

    Scopus

    35
    Citation
    (Scopus)
  • Estimating top N hosts in cardinality using small memory resources

    Keisuke Ishibashi, Tatsuya Mori, Ryoichi Kawahara, Yutaka Hirokawa, Atsushi Kobayashi, Kimihiro Yamamoto, Hitoaki Sakamoto

    ICDEW 2006 - Proceedings of the 22nd International Conference on Data Engineering Workshops     29  2006  [Refereed]

     View Summary

    We propose a method to find the N hosts that have the N highest cardinalities, where cardinality is the number of distinct items such as flows, ports, or peer hosts. The method also estimates their cardinalities. While existing algorithms for finding the top N frequent items can be directly applied to find the N hosts that send the largest numbers of packets in a packet data stream, finding the hosts with the N highest cardinalities requires a table of previously seen items for each host to check whether an item in an arriving packet is new, which requires a lot of memory. Even if we use existing cardinality estimation methods, we still need cardinality information about each host. In this paper, we use a property of cardinality estimation: the cardinality of the intersection of multiple data sets can be estimated from cardinality information of each data set. Using this property, we propose an algorithm that does not need to maintain a table for each host, but only for partitioned addresses, and estimates the cardinality of a host from the intersection of the cardinalities of its partitioned addresses. We also propose a method to find the top N hosts in cardinality, which are to be monitored to detect anomalous behavior in networks. We evaluate our algorithm on actual backbone traffic data. While the estimation accuracy of our scheme degrades for small cardinalities, for the top 100 hosts the accuracy of our algorithm with 4,096 tables is almost the same as that of maintaining tables for every host.

    DOI

    Scopus

    6
    Citation
    (Scopus)
  • Estimating flow rate from sampled packet streams for detection of performance degradation at TCP flow level

    Ryoichi Kawahara, Tatsuya Mori, Keisuke Ishibashi, Noriaki Kamiyama, Takeo Abe

    GLOBECOM 2006 - 2006 IEEE GLOBAL TELECOMMUNICATIONS CONFERENCE    2006  [Refereed]

     View Summary

    A method of estimating the rates of TCP flows sampled through packet sampling is described in this paper. We use the sequence numbers of sampled packets, which makes it possible to markedly improve the accuracy of estimating flow rates. Using an analytical model, we investigate how to set parameters such as the packet sampling probability used in this estimation method. Remarkably, we show that the estimation accuracy improves as the sampling probability decreases. Using measured data, we also show that this method gives accurate estimates and enables us to detect performance degradation at the TCP flow level.
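
    The core of the sequence-number trick can be shown in a few lines: the bytes carried between the first and last sampled packets of a TCP flow can be read off the sequence numbers themselves, so packets missed by sampling do not bias the estimate. This is an illustrative simplification that ignores sequence wraparound and retransmissions.

    ```python
    def estimate_tcp_flow_rate(samples):
        """Estimate a TCP flow's byte rate from sampled packets.

        samples: (timestamp_sec, tcp_sequence_number) pairs for packets
        sampled from one flow, in arrival order. The byte count between the
        first and last samples is recovered from the sequence numbers, so
        the estimate does not depend on how many packets were missed in
        between.
        """
        if len(samples) < 2:
            return None               # need at least two samples
        (t0, s0), (t1, s1) = samples[0], samples[-1]
        if t1 <= t0:
            return None
        return (s1 - s0) / (t1 - t0)  # bytes per second
    ```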

    DOI

    Scopus

    2
    Citation
    (Scopus)
  • Method of bandwidth dimensioning and management for aggregated TCP flows with heterogeneous access links

    R Kawahara, K Ishibashi, T Mori, T Ozawa, T Abe

    IEICE TRANSACTIONS ON COMMUNICATIONS   E88B ( 12 ) 4605 - 4615  2005.12  [Refereed]

     View Summary

    We propose a method of dimensioning and managing the bandwidth of a link on which flows with heterogeneous access-link bandwidths are aggregated. We use a processor-sharing queue model to develop a formula approximating the mean TCP file-transfer time of flows on an access link in such a situation. This only requires the bandwidth of the access link carrying the flows on which we are focusing and the bandwidth and utilization of the aggregation link, each of which is easy to set or measure. We then extend the approximation to handle various factors affecting actual TCP behavior, such as the round-trip time and restrictions other than the access-link bandwidth and the congestion of the aggregation link. To do this, we define the virtual access-link bandwidth as the file-transfer speed of a flow when the utilization of the aggregation link is negligibly small. We apply the virtual access-link bandwidth in our approximation to estimate the TCP performance of a flow with increasing utilization of the aggregation link. This method of estimation is used as the basis for a method of dimensioning the bandwidth of a link such that the TCP performance is maintained, and for a method of managing the bandwidth by comparing the measured link utilization with an estimated threshold indicating degradation of the TCP performance. The accuracy of the estimates produced by our method is evaluated through both computer simulation and actual measurement.

    DOI

    Scopus

    1
    Citation
    (Scopus)
  • Flow analysis of internet traffic: World wide web versus peer-to-peer

    Tatsuya Mori, Masato Uchida, Shigeki Goto

    Systems and Computers in Japan   36 ( 11 ) 70 - 81  2005.10  [Refereed]

     View Summary

    Peer-to-peer (P2P) applications have been expanding rapidly in recent years, and the contribution of P2P to present Internet traffic is close to that of the World Wide Web (WWW). In this study, the flow of WWW and P2P traffic is analyzed by network measurement. The characteristics of WWW and P2P are examined, especially in terms of the flow arrival interval, flow duration, flow size, and flow rate. Based on the results of the analysis, the effect of a P2P flow increase on the overall traffic characteristics is investigated. The results of this study will be utilized in network design, proposals for control procedures, and traffic modeling, considering the traffic characteristics of particular applications. © 2005 Wiley Periodicals, Inc.

    DOI

    Scopus

    21
    Citation
    (Scopus)
  • A method of detecting performance degradation at TCP flow level from sampled packet streams

    R Kawahara, K Ishibashi, T Mori, T Abe

    2005 Workshop on High Performance Switching and Routing     157 - 161  2005  [Refereed]

     View Summary

    Managing performance at the flow level through traffic measurement is crucial for effective network management. On the other hand, with the rapid rise in link speeds, collecting all packets has become difficult, so packet sampling has been attracting attention as a scalable means of measuring flow statistics. We have therefore established a method of detecting performance degradation at the TCP flow level from sampled flow behaviors. The proposed method is based on the following two flow characteristics: (i) sampled flows tend to have high flow rates, and (ii) when a link becomes congested, the performance of high-rate flows degrades first. These characteristics indicate that sampled flows are sensitive to congestion, so we can detect performance degradation of congestion-sensitive flows by observing the rates of sampled flows. We also show the effectiveness of our method using measured data.

  • On the characteristics of Internet traffic variability: Spikes and elephants

    T Mori, R Kawahara, S Naito, S Goto

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E87D ( 12 ) 2644 - 2653  2004.12  [Refereed]

     View Summary

    Analysis and modeling of traffic play a vital role in designing and controlling networks effectively. To construct a practical traffic model that can be used for various networks, it is necessary to characterize both aggregated traffic and user traffic. This paper investigates these characteristics and their relationship. Our analyses are based on a huge number of packet traces from five different networks on the Internet. We found that: (1) the marginal distributions of aggregated traffic fluctuations follow positively skewed (non-Gaussian) distributions, which leads to the existence of "spikes", where a spike corresponds to an extremely large value of momentary throughput, (2) the amount of user traffic in a unit of time has a wide range of variability, and (3) flows within spikes are more likely to be "elephant flows", where an elephant flow is an IP flow with a high volume of traffic. These findings are useful in constructing a practical and realistic Internet traffic model.

  • Identifying elephant flows through periodically sampled packets.

    Tatsuya Mori, Masato Uchida, Ryoichi Kawahara, Jianping P, Shigeki Goto

    Proceedings of the 4th ACM SIGCOMM Internet Measurement Conference, IMC 2004, Taormina, Sicily, Italy, October 25-27, 2004     115 - 120  2004.10  [Refereed]

    DOI

    Scopus

    155
    Citation
    (Scopus)
  • A method of bandwidth dimensioning and management for aggregated TCP flows with heterogeneous access links

    Ryoichi Kawahara, Keisuke Ishibashi, Tatsuya Mori, Toshihisa Ozawa, Shuichi Sumita, Takeo Abe

    Networks 2004 - 11th International Telecommunications Network Strategy and Planning Symposium     15 - 20  2004  [Refereed]

     View Summary

    We propose a method of dimensioning and managing the bandwidth of a link on which flows arriving on access links that have heterogeneous bandwidths are aggregated. We start by developing a formula that approximates the mean TCP file-transfer time of a flow in such a situation. This only requires the bandwidth of the access link carrying the flow and the bandwidth and utilization of the aggregation link, each of which is easy to set or measure. We then extend the approximation to handle various factors that affect actual TCP behavior, such as round-trip time and restrictions other than the access-link bandwidth and congestion of the aggregation link in the end-to-end path of the flow. To do this, we define the virtual access-link bandwidth as the file-transfer speed of the flow when utilization of the aggregation link is negligibly small. We apply the virtual access-link bandwidth in the approximation to estimate the TCP performance of the flow with increasing utilization of the aggregation link. We use this method of estimation as the basis for a method of dimensioning the bandwidth of the link such that TCP performance is maintained and a method of managing bandwidth by comparing measured link utilization with the estimated threshold that indicates degradation of TCP performance. We also use simulation to analyze the accuracy of the estimates produced by our method.

  • On the characteristics of Internet traffic variability: Spikes and elephants

    T Mori, R Kawahara, S Naito, S Goto

    2004 INTERNATIONAL SYMPOSIUM ON APPLICATIONS AND THE INTERNET, PROCEEDINGS     99 - 106  2004  [Refereed]

     View Summary

    Analysis and modeling of traffic play a vital role in designing and controlling networks effectively. To construct a practical traffic model that can be used for various networks, it is necessary to characterize both aggregated traffic and user traffic. This paper investigates these characteristics and their relationship. Our analyses are based on a huge number of packet traces from five different networks on the Internet. We found that: (1) the marginal distributions of aggregated traffic fluctuations follow positively skewed (non-Gaussian) distributions, which leads to the existence of "spikes", where a spike corresponds to an extremely large value of momentary throughput, (2) the amount of user traffic in a unit of time has a wide range of variability, and (3) flows within spikes are more likely to be "elephant flows", where an elephant flow is an IP flow with a high volume of traffic. These findings are useful in constructing a practical and realistic Internet traffic model.

    DOI

    Scopus

    28
    Citation
    (Scopus)

▼display all

Research Projects

  • Security Assessment and Countermeasures for AI-Driven Cyber-Physical Systems

    JST  CREST

    Project Year :

    2023.10
    -
    2029.03
     

  • Understanding the impact of adversarial inputs on autonomous driving systems and developing countermeasure technologies

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research Grant-in-Aid for Scientific Research (A)

    Project Year :

    2022.04
    -
    2025.03
     

    Tatsuya Mori, Jun Sakuma, Takeshi Sugawara, Kenji Sawada

  • Developing real-world oriented authentication technologies

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research Grant-in-Aid for Challenging Research (Exploratory)

    Project Year :

    2022.06
    -
    2024.03
     

    Tatsuya Mori, Tetsushi Ohki

  • Assessment of the impact of malicious input data on automated driving systems

    National Institute of Informatics  ROIS NII Open Collaborative Research 2022

    Project Year :

    2022.07
    -
    2023.03
     

    Tatsuya Mori, Shunsuke Aoki

  • Research on Security of Characters

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research

    Project Year :

    2020.07
    -
    2022.03
     

  • Context-aware Approaches for Securing Appified IoT Devices

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research Grant-in-Aid for Scientific Research (B)

    Project Year :

    2019.04
    -
    2022.03
     

    Tatsuya Mori, Katsunari Yoshioka, Toshihiro Yamauchi, Koichi Mouri, Akira Kanaoka

     View Summary

    This research project focused on the security and privacy issues of applications running on IoT platforms, and worked on methods for analyzing and controlling the behavior of applications based on the context in which they are used. Specifically, we conducted (1) a large-scale measurement study of security threats and issues in application-oriented IoT platforms, (2) development of context inspection techniques for IoT application behavior, and (3) development of access control and emergency handling mechanisms for IoT platforms.

  • Pioneering research on voice security

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research Grant-in-Aid for Challenging Research (Exploratory)

    Project Year :

    2018.06
    -
    2020.03
     

    Tatsuya Mori

  • Malware Informatics as a Power Base of Cyber Security Analysis

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research Grant-in-Aid for Scientific Research (B)

    Project Year :

    2016.04
    -
    2019.03
     

    Goto Shigeki, MORI Tatsuya

     [International coauthorship]

     View Summary

    In the modern networked society, cyber attacks are among the most severe threats, and there is significant demand for defense technologies against them. There have been many research projects on cyber attacks; however, they deal with specific kinds of attacks individually, and their methods involve some manual operations. This project proposes Malware Informatics, which covers a large-scale database of malware (malicious software). It also presents feature engineering, which is useful and powerful in data analysis, and proposes a new method for evaluating the machine learning algorithms that play central roles in our data-science approach to cyber defense technology. We have published many papers and describe detailed results on the web page of our research project.

  • Demonstrating and countering threats arising from correlation analysis of human mobility traces and sensor data

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research

    Project Year :

    2015.04
    -
    2017.03
     

    Tatsuya Mori

  • Research on fine-grained monitoring technologies for ultra-high-speed networks

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research

    Project Year :

    2013.08
    -
    2016.03
     

    Tatsuya Mori

▼display all

Misc

  • Cybersecurity in the AI era: 5. Recent security research trends on attacks against and defenses for AI-powered systems

    Tatsuya Mori

      63 ( 10 ) 26 - 34  2022.09  [Invited]

    Authorship:Lead author

    Article, review, commentary, editorial, etc. (scientific journal)  

    DOI

  • Machine Learning and Offensive Security

    Tatsuya Mori

    Journal of the Japan Society of Security Management (Web)   33 ( 3 )  2020  [Invited]

    Authorship:Lead author

    Article, review, commentary, editorial, etc. (scientific journal)  

    J-GLOBAL

  • Poster: Toward automating the generation of malware analysis reports using the sandbox logs

    Bo Sun, Akinori Fujino, Tatsuya Mori

    Proceedings of the ACM Conference on Computer and Communications Security   24-28-   1814 - 1816  2016.10  [Refereed]

     View Summary

    In recent years, the number of new examples of malware has continued to increase. To create effective countermeasures, security specialists often must manually inspect vast sandbox logs produced by dynamic analysis. Meanwhile, antivirus vendors usually publish malware analysis reports on their websites. Because malware analysis reports and sandbox logs have no direct connection, security specialists analyzing sandbox logs cannot benefit from the information described in such expert reports. To address this issue, we developed a system called ReGenerator that automates the generation of reports related to sandbox logs by making use of existing reports published by antivirus vendors. Our system combines several techniques, including the Jaccard similarity, natural language processing (NLP), and natural language generation (NLG), to produce concise human-readable reports describing malicious behavior for security specialists.
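
    The Jaccard-based matching step might look roughly like this; the token-set representation and the function names are illustrative assumptions, and the NLP/NLG stages of ReGenerator are not reproduced here.

    ```python
    def jaccard(a, b):
        """Jaccard similarity |A ∩ B| / |A ∪ B| between two sets."""
        if not a and not b:
            return 0.0
        return len(a & b) / len(a | b)

    def best_matching_report(sandbox_log, reports):
        """Return the title of the vendor report most similar to a sandbox log.

        Both the log and each report body are reduced to lowercase token
        sets and compared with the Jaccard similarity.
        """
        log_tokens = set(sandbox_log.lower().split())
        return max(reports,
                   key=lambda title: jaccard(log_tokens,
                                             set(reports[title].lower().split())))
    ```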

    DOI

  • "I’m Stuck, Too!" Revisiting Difficulties of Using Web Authentication Mechanisms for Visually Impaired Person

    Yuta Ota, Akira Kanaoka, Tatsuya Mori

    The twelfth Symposium on Usable Privacy and Security (SOUPS 2016) Poster Session    2016.06  [Refereed]

    Other  

  • A Security Framework for Detecting and Regulating Threats Caused by Analog Signals

    Proceedings of the Computer Security Symposium 2021 (CSS 2021)     79 - 86  2021.10

    CiNii

  • A First Look at COVID-19 Domain Names: Origin and Implications

    Ryo Kawaoka, Daiki Chiba, Takuya Watanabe, Mitsuaki Akiyama, Tatsuya Mori

    CoRR   abs/2102.05290  2021.02

     View Summary

    This work takes a first look at domain names related to COVID-19 (Cov19doms in short), using a large-scale database of registered Internet domain names, which accounts for 260M distinct domain names registered under 1.6K distinct top-level domains. We extracted 167K Cov19doms registered between the end of December 2019 and the end of September 2020. We attempt to answer the following research questions through our measurement study: RQ1: Is the number of Cov19doms registrations correlated with the COVID-19 outbreaks? RQ2: For what purpose do people register Cov19doms? Our chief findings are as follows: (1) similar to the global COVID-19 pandemic observed around April 2020, the number of Cov19doms registrations also experienced drastic growth, which, interestingly, preceded the COVID-19 pandemic by about a month, (2) 70% of active Cov19doms websites with visible content provided useful information such as health, tools, or product sales related to COVID-19, and (3) a non-negligible number of registered Cov19doms were used for malicious purposes. These findings imply that it has become more challenging to distinguish domain names registered for legitimate purposes from others, and that it is crucial to pay close attention to how Cov19doms will be used or misused in the future.
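
    The extraction step can be illustrated with a simple keyword filter over registered domain names; the keyword list and matching rule here are assumptions made for the example, not the study's actual criteria.

    ```python
    import re

    # Illustrative keyword list; the study's matching rules may differ.
    COVID_PATTERN = re.compile(r"covid|corona|sarscov|pandemic", re.IGNORECASE)

    def extract_cov19doms(domains):
        """Return domain names whose registrable label matches a COVID keyword."""
        return [d for d in domains
                if COVID_PATTERN.search(d.rsplit(".", 1)[0])]
    ```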

  • Application of Adversarial Examples to Physical ECG Signals.

    Taiga Ono, Takeshi Sugawara, Jun Sakuma, Tatsuya Mori

    CoRR   abs/2108.08972  2021

  • Poster: A First Look at the Privacy Risks of Voice Assistant Apps.

    Atsuko Natatsuka, Ryo Iijima, Takuya Watanabe, Mitsuaki Akiyama, Tetsuya Sakai, Tatsuya Mori

    Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS 2019, London, UK, November 11-15, 2019.     2633 - 2635  2019  [Refereed]

    DOI

  • Measurement security and future directions: the spiral development of attacks and evaluations

    Tsutomu Matsumoto, Tatsuya Mori, Tatsuya Takehisa, Takeshi Fujino, Daisuke Suzuki

    Proceedings of the IEICE Conference (CD-ROM)   2019  2019

    J-GLOBAL

  • ShamFinder: An Automated Framework for Detecting IDN Homographs.

    Hiroaki Suzuki, Daiki Chiba, Yoshiro Yoneya, Tatsuya Mori, Shigeki Goto

    CoRR   abs/1909.07539  2019

    Authorship:Corresponding author

    Internal/External technical report, pre-print, etc.  

  • A Study of Purchase History Leakage on Auction Sites and Users' Expectations

      59 ( 9 ) 1689 - 1698  2018.09

     View Summary

    An online purchase history is privacy-sensitive information that can indirectly indicate who a buyer is. To protect buyers from the leakage of their purchase histories, online auction sites have adopted privacy protection mechanisms such as anonymization of buyers' IDs. However, Minkus et al. demonstrated that it is possible to reconstruct an online purchase history on a specific online auction site. In this paper, we extend their work and demonstrate that a purchase history attack can also work on other auction sites with more powerful privacy protection mechanisms. In our experiment on an actual auction site, we confirmed that our extended attack is able to reveal 97.2% of users' online purchase histories. Additionally, we study users' expectations regarding their privacy on online auction sites and reveal that many users are not aware of the possibility of the leakage of their purchase histories. This result indicates that there is a discrepancy between the potential privacy risk on online auction sites and users' expectations. Finally, we make recommendations toward better privacy for auction users and service providers, respectively.

    CiNii

  • Understanding the new security threats against IoT devices-An approach of offensive security and user study-

    Tatsuya Mori

    IEICE Technical Report   118 ( 192(CQ2018 46-62)(Web) )  2018

    J-GLOBAL

  • Stay On-Topic: Generating Context-specific Fake Restaurant Reviews.

    Mika Juuti, Bo Sun, Tatsuya Mori, N. Asokan

    CoRR   abs/1805.02400   132 - 151  2018

    Internal/External technical report, pre-print, etc.  

    DOI

  • User Blocking Considered Harmful? An Attacker-controllable Side Channel to Identify Social Accounts.

    Takuya Watanabe, Eitaro Shioji, Mitsuaki Akiyama, Keito Sasaoka, Takeshi Yagi, Tatsuya Mori

    CoRR   abs/1805.05085  2018

    Internal/External technical report, pre-print, etc.  

  • POSTER: Is Active Electromagnetic Side-channel Attack Practical?

    Satohiro Wakabayashi, Seita Maruyama, Tatsuya Mori, Shigeki Goto, Masahiro Kinugawa, Yu-ichi Hayashi

    Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30 - November 03, 2017     2587 - 2589  2017  [Refereed]

    DOI

  • A Study on the Vulnerabilities of Mobile Apps associated with Software Modules.

    Takuya Watanabe, Mitsuaki Akiyama, Fumihiro Kanei, Eitaro Shioji, Yuta Takata, Bo Sun, Yuta Ishii, Toshiki Shibahara, Takeshi Yagi, Tatsuya Mori

    CoRR   abs/1702.03112  2017

    Internal/External technical report, pre-print, etc.  

  • Trojan of Things: Embedding Malicious NFC Tags into Common Objects.

    Seita Maruyama, Satohiro Wakabayashi, Tatsuya Mori

    CoRR   abs/1702.07124  2017

    Internal/External technical report, pre-print, etc.  

  • B-16-2 Feature Analysis of Transmission Activities of Spam E-mail Containing Malware

    Shimura Masaki, Hatada Mitsuhiro, Mori Tatsuya, Goto Shigeki

    Proceedings of the IEICE General Conference   2016 ( 2 ) 490 - 490  2016.03

    CiNii

  • Accessibility evaluation of web authentication interfaces for visually impaired users

    Yuta Ota, Akira Kanaoka, Tatsuya Mori

    IPSJ SIG Technical Report: Security Psychology and Trust (SPT)   2016-SPT-17 ( 19 ) 1 - 6  2016.02

    Research paper, summary (national, other academic conference)  

  • Exposing Hidden Traffic Using Name Information

      115 ( 370 ) 19 - 24  2015.12

    CiNii

  • Exposing Hidden Traffic Using Name Information

    Tatsuya Mori, Takeru Inoue, Akihiro Shimoda, Kazumichi Sato, Shigeaki Harada, Keisuke Ishibashi, Yumehisa Haga, Akira Saso, Shigeki Goto

    IEICE Technical Report   115 ( 371 ) 1 - 6  2015.12  [Invited]

    Research paper, summary (national, other academic conference)  

    CiNii J-GLOBAL

  • B-16-8 Inferring the number of users' access by analyzing DNS query intervals

    Shimoda Akihiro, Ishibashi Keisuke, Tsujino Masayuki, Inoue Takeru, Mori Tatsuya, Goto Shigeki

    Proceedings of the Society Conference of IEICE   2015 ( 2 ) 333 - 333  2015.08

    CiNii

  • Understanding Android apps that are similar to legitimate ones

    ISHII Yuta, WATANABE Takuya, AKIYAMA Mitsuaki, MORI Tatsuya

    IEICE technical report. Information and communication system security   114 ( 489 ) 187 - 192  2015.03

     View Summary

    Because it is not hard to repackage Android apps, there are many repackaged apps in the wild. Previous studies have reported that many such repackaged apps were created for bad purposes, e.g., inserting advertising modules that are not present in the original version or inserting malicious code that steals privacy-sensitive information. This paper aims to understand the characteristics of repackaged apps. To this end, we develop a method that automatically extracts and classifies repackaged apps. Our analysis of 10K+ Android apps collected from official and third-party marketplaces revealed that there is a non-negligible number of repackaged apps in third-party markets, and clarified the characteristics of the malicious modules inserted into the original apps.

    CiNii

  • Estimation of hostnames of HTTPS communication using DNS queries/responses

    MORI Tatsuya, INOUE Takeru, SHIMODA Akihiro, SATO Kazumichi, ISHIBASHI Keisuke, GOTO Shigeki

    IEICE technical report. Information networks   114 ( 478 ) 255 - 260  2015.03

     View Summary

    Most modern Internet services are carried over the web. A significant amount of web transactions is now encrypted and the transition to encryption has made it difficult for network operators to understand traffic mix. The goal of this study is to enable network operators to infer hostnames within HTTPS traffic because hostname information is useful to understand the breakdown of encrypted web traffic. The proposed approach correlates HTTPS flows and DNS queries/responses. We introduce domain name graph (DNG), which is a formal expression that characterizes the highly dynamic and diverse nature of DNS mechanisms. Furthermore, we have developed a framework called Service-Flow map (SFMap) that works on top of the DNG. SFMap statistically estimates the hostname of an HTTPS server, given a pair of client and server IP addresses. We evaluate the performance of SFMap through extensive analysis using real packet traces collected from two locations with different scales. We demonstrate that SFMap establishes good estimation accuracies and outperforms a state-of-the-art approach.

    CiNii

  • B-7-53 Inferring Traffic Volume of Internet Services using Flows and DNS Logs

    Shimoda Akihiro, Sato Kazumichi, Ishibashi Keisuke, Inoue Takeru, Mori Tatsuya, Goto Shigeki

    Proceedings of the IEICE General Conference   2015 ( 2 ) 203 - 203  2015.02

    CiNii

  • Searching malicious URL from vast webspace

    SUN Bo, AKIYAMA Mitsuaki, YAGI Takeshi, MORI Tatsuya

    IEICE technical report. Information and communication system security   114 ( 340 ) 61 - 66  2014.11

     View Summary

    Many web-based attacks, such as drive-by downloads and phishing scams, are triggered simply by accessing a landing page URL. Most web users click such URLs without being aware of the underlying danger, and as a result suffer material or financial damage from web-based attacks. A URL blacklist is one effective approach to preventing web-based attacks; however, blacklist updates cannot keep up with the appearance of new malicious URLs. Given this situation, the main objective of this research is to extend URL blacklists with a variety of new malicious URLs as quickly as possible. We propose the Magnet system, which gathers new URLs that are similar, in terms of static features, to a few existing malicious URLs given as queries. In this paper, we describe the details of the Magnet system and our experimental conclusions.

    CiNii

  • Seven years in MWS: Experiences of sharing datasets with anti-malware research community in Japan

    Mitsuhiro Hatada, Masato Terada, Tatsuya Mori

    Proceedings of the ACM Conference on Computer and Communications Security     1433 - 1435  2014.11  [Refereed]

     View Summary

    In 2008, the anti-Malware engineering WorkShop (MWS) was organized in Japan. The main objective of MWS is to accelerate and expand the activities of anti-malware research. To this end, MWS aims to attract new researchers and stimulate new research by lowering the technical obstacles associated with collecting the datasets that are crucial for addressing recent cyber threats. Moreover, MWS hosts intimate research workshops where researchers can freely discuss their results obtained using MWS and other datasets. This paper presents a quantitative accounting of the effectiveness of the MWS community by tracking the number of papers and new researchers that have arisen from the use of our datasets. In addition, we share the lessons learned from our experiences over the past seven years of sharing datasets with the community. Copyright is held by the owner/author(s).

    DOI

  • Detecting Malware with Machine Learning Reloaded

      2014 ( 2 ) 827 - 834  2014.10

    CiNii

  • Detection of Android apps that secretly abuse the camera

    WATANABE Takuya, MORI Tatsuya, SAKAI Tetsuya

    IEICE technical report. Information and communication system security   113 ( 502 ) 119 - 124  2014.03

     View Summary

    We propose a method for detecting Android apps that may secretly abuse the camera to leak private or important information of the user. Our key idea is to combine two approaches: (1) analysis of disassembled code of application package files and (2) text analysis of the natural language descriptions that explain the details of apps. In our experiment using 10,855 Android apps collected from third-party markets, our method successfully extracted 43 samples that likely abuse the camera secretly. We manually applied dynamic analysis to the 43 samples and revealed that at least 28 samples had proper reasons to use the camera and were thus negatives, while two samples exhibited unnatural content and behaviour and require detailed static code analysis for further investigation. We also found that 18 of the 43 samples were detected as malware; hence, a large fraction of the samples detected with our framework, which aims to extract apps with inconsistency between description and code, were actually malware.

    CiNii

  • B-7-71 Inferring Services over Encrypted Web Flows

    Mori Tatsuya, Inoue Takeru

    Proceedings of the IEICE General Conference   2014 ( 2 ) 246 - 246  2014.03

    CiNii

  • Analyzing Spatial Structure of IP Addresses for Detecting Malicious Websites

    Daiki Chiba, Kazuhiro Tobe, Tatsuya Mori, Shigeki Goto

      54 ( 6 )  2013.06

    CiNii

  • Visualizing Network Logs for Diagnosing Large-scale Networks Problems

    KIMURA Tatsuaki, MORI Tatsuya, TOYONO Tsuyoshi, ISHIBASHi Keisuke, SHIOMOTO Kohei

    IEICE technical report   112 ( 463 ) 495 - 498  2013.03

     View Summary

    Network logs, such as router syslogs, are useful for understanding faulty events or current network states in detail. However, analyzing these logs is difficult because of their diversity, which accompanies the increased number of network elements and the multivendor environment. In this research, we present a new method for visualizing large-scale logs efficiently for network operation, without any prior knowledge about the logs. The proposed method provides a whole view of large-scale logs by using data compression techniques such as template extraction and log grouping. In addition, it helps network operators find abnormal logs by extracting abnormality from the occurrence patterns of logs. Finally, we evaluate our method using data collected on an experimental network and demonstrate its efficiency.

    CiNii

  • B-6-80 Detecting anomalous network events based on the log data generation patterns

    Kimura Tatsuaki, Mori Tatsuya, Toyono Tsuyoshi, Ishibashi Keisuke, Shiomoto Kouhei

    Proceedings of the IEICE General Conference   2013 ( 2 ) 80 - 80  2013.03

    CiNii

  • Simulation Study of Traffic Reduction by Combining Content Files in Peer-assisted Content Delivery Networks

    MAKI Naoya, SHINKUMA Ryoichi, MORI Tatsuya, KAMIYAMA Noriaki, KAWAHARA Ryoichi, TAKAHASHI Tatsuro

    Technical report of IEICE. CQ   112 ( 218 ) 1 - 6  2012.09

     View Summary

    In content delivery services, minimizing the amount of generated traffic is important for service providers since the volumes of content files tend to be quite large. To solve this problem, a peer-assisted content delivery network (CDN) localizes network traffic by letting a client obtain requested content files from a nearby altruistic client instead of the source server. We have proposed a traffic engineering scheme for peer-assisted CDN models that combines content files likely to contribute to localizing traffic while keeping the price equal to the single-content price, in order to induce altruistic clients to request combined content. However, we can expect further traffic localization if multiple altruistic clients cache each combined content. In this paper, we propose and validate a combined-content distribution mechanism that determines when combined content should be offered.

    CiNii

  • B-14-7 Mining log association rules from large scale network logs

    Kimura Tatsuaki, Mori Tatsuya, Ishibashi Keisuke, Shiomoto Kouhei

    Proceedings of the Society Conference of IEICE   2012 ( 2 ) 344 - 344  2012.08

    CiNii

  • B-7-16 Extracting event templates from large scale network logs

    Kimura Tatsuaki, Mori Tatsuya, Ishibashi Keisuke, Shiomoto Kouhei

    Proceedings of the IEICE General Conference   2012 ( 2 ) 177 - 177  2012.03

    CiNii

  • Impact of the Interconnection Network Structure on Shuffle Completion Time in MapReduce Processing

    MATSUKI Tatsuma, KIMURA Tatsuaki, MORI Tatsuya, TAKINE Tetsuya

    IEICE technical report. Information networks   111 ( 469 ) 377 - 382  2012.03

     View Summary

    MapReduce processing, a typical distributed processing scheme in data centers, includes a shuffle operation in which a massive amount of data is transferred between computation servers. In this article, we investigate the impact of the interconnection network structure on the shuffle completion time. For this purpose, we consider a simple tree structure and a fat-tree structure, and investigate their impact on the shuffle completion time through theoretical examination and simulation experiments.

    CiNii

  • Mining Network Logs for Diagnosing Large-scale Networks Problems

    KIMURA Tatsuaki, MORI Tatsuya, ISHIBASHI Keisuke, SHIOMOTO Kohei

    IEICE technical report   111 ( 468 ) 261 - 264  2012.03

     View Summary

    On recent large IP networks where various services are deployed, more detailed and complicated network operations are required. In particular, log information, such as router syslogs, produced by network elements is useful for understanding faulty events or current network states in detail. However, analyzing these logs is difficult because of the diversity of these large-scale logs, which accompanies the increased number of network elements and the multivendor environment. In this research, we consider techniques to identify meaningful information in vast amounts of stored logs by extracting log templates and grouping them, without any prior knowledge about the logs.

    CiNii

  • Boosting IP Reputation Services

    MORI T., Kimura Tatsuaki, Takahashi Yousuke, Sato kazumichi, Ishibashi Keisuke

    IEICE General Conference, March 2012     S-1 - S-2  2012

    CiNii

  • Impact of Multicast Pre-distribution on Networks

    KAMIYAMA Noriaki, YOKOTA Kenji, KAWAHARA Ryoichi, MORI Tatsuya

    IEICE technical report   111 ( 202 ) 1 - 6  2011.09

     View Summary

    VoD services, in which users can request content delivery on demand, have been widely used. In VoD services, the demand for content changes widely on a daily scale. Because service providers are required to maintain stable service during peak hours, reducing the server load at peak hours is an important problem. Therefore, we proposed to reduce the server load without increasing user response time by multicasting content to all users prior to requests, independently of actual requests. Through numerical evaluation using actual VoD access log data, we clarified the effectiveness of the proposed method in reducing the server load. However, the influence of multicast pre-distribution on the network load has not been investigated. Therefore, in this paper, we evaluate the impact of multicast pre-distribution on the network load using several ISP network topologies.

    CiNii

  • Traffic Reduction by Content-oriented Incentive Mechanism in Peer-assisted Content Delivery Network

    MAKI Naoya, NISHIO Takayuki, SHINKUMA Ryoichi, MORI Tatsuya, KAMIYAMA Noriaki, KAWAHARA Ryoichi, TAKAHASHI Tatsuro

    IEICE technical report   111 ( 202 ) 13 - 18  2011.09

     View Summary

    Minimizing network traffic is an important issue for content services that deliver large-volume content files, in order to lower the cost charged for bandwidth and the network infrastructure. Traffic localization is an effective way of reducing network traffic. A peer-assisted content delivery network (CDN) localizes network traffic when a client can obtain requested content files from a nearby altruistic client instead of the source server. To localize traffic effectively, content files likely to be requested by many clients should be cached locally. This paper presents a novel traffic engineering scheme for peer-assisted CDN models. The key idea is to combine content files while keeping the price equal to the single-content price, in order to induce altruistic clients to request the desired files to be cached. In this paper, we present a solution for determining content combinations that localize download traffic in a network and discuss its upper-bound performance.

    CiNii

  • B-7-44 Mean-variance characteristics of number of flows and its application to traffic management

    Kawahara Ryoichi, Takine Tetsuya, Mori Tatsuya, Kamiyama Noriaki, Ishibashi Keisuke

    Proceedings of the Society Conference of IEICE   2011 ( 2 ) 137 - 137  2011.08

    CiNii

  • BS-6-42 Combining IP reputation services(BS-6. Planning, Control and Management on Networks and Services)

    Mori Tatsuya, Sato Kazumichi, Takahashi Yousuke, Kimura Tatsuaki, Ishibashi Keisuke

    Proceedings of the Society Conference of IEICE   2011 ( 2 ) S-112 - S-113  2011.08

    CiNii

  • Analysis of Malicious Traffic Based on TCP Fingerprinting

      52 ( 6 ) 2009 - 2018  2011.06

     View Summary

    Modern kernel malware contains its own network drivers and uses them directly from kernel mode to conceal its activities from anti-malware tools. Since these network drivers have specific characteristics, we can detect traffic flows originating from them by analyzing parameters recorded in TCP headers. On the basis of these characteristics, we apply a fingerprinting technique to collect IP addresses of hosts that are likely infected with kernel malware. Using this method, we also aim to understand the characteristics of hosts infected with kernel malware and their communications, using network measurement data collected in several production networks.

    CiNii

  • Combining the outcomes of IP reputation services

    MORI Tatsuya, SATO Kazumichi, TAKAHASHI Yosuke, KIMURA Tatsuaki, ISHIBASHI Keisuke

    IEICE technical report   111 ( 82 ) 1 - 6  2011.06

     View Summary

    IP reputation systems provide a service that reports the "reputation" of IP addresses, primarily based on past measurement. For instance, several DNSBL (Domain Name System Block List) services provide a list of IP addresses published through the DNS; i.e., they return a negative reputation for IP addresses that are potential origins of e-mail spam messages. As there are many independent DNSBL services available on the Internet, it is crucial to combine the outcomes of these reputation systems. This paper provides simple methods that attempt to extract an accurate decision based on multiple outcomes of IP reputation systems. Using e-mail delivery logs collected at a middle-scale enterprise network, we evaluate the effectiveness of the approaches and compare their advantages and disadvantages.

    CiNii

  • Analyzing Correlation among TCP Quality Metrics on Measured Traffic Data

    IKEDA Yasuhiro, KAMIYAMA Noriaki, KAWAHARA Ryoichi, KIMURA Tatsuaki, MORI Tatsuya

    IEICE technical report   111 ( 67 ) 51 - 56  2011.05

     View Summary

    With the increase in demand for quality-of-service guarantees due to integrated IP networks accommodating various types of services, monitoring network performance is crucial. In this paper, through analysis of traffic measurement data, we investigate the characteristics of TCP quality metrics of flows classified by the type of connection, such as client-server (C/S) or peer-to-peer (P2P), and the correlation among TCP quality metrics, where the metrics are packet retransmission rate, average RTT, and throughput. First, we show that each TCP quality metric of C/S connections is higher than that of P2P connections. Specifically, the median throughput of C/S connections is 2.5 times higher than that of P2P connections in the incoming direction of domestic traffic. Second, we show that throughput is higher when the packet retransmission rate and the average RTT are smaller, and that throughput is affected more by the average RTT than by the packet retransmission rate. We also show that there is a difference between domestic and international traffic in the degree of impact of the average RTT on throughput. This is because their congestion levels differ even when their average RTTs are the same. In particular, we show that, while the TCP quality of domestic connections is higher than that of international connections in many cases, there are cases where the quality of international connections is higher when the average RTT is around 100 msec.

    CiNii

  • Traffic Localization by Incentive Content-Download Control in Peer-assisted Content Distribution

    MAKI Naoya, NISHIO Takayuki, SHINKUMA Ryoichi, MORI Tatsuya, KAMIYAMA Noriaki, KAWAHARA Ryoichi, TAKAHASHI Tatsuro

    IEICE technical report   111 ( 43 ) 29 - 34  2011.05

     View Summary

    Traffic is localized when a user can obtain the requested content from another user instead of the source server. The concept of peer-assisted content distribution can reduce overall traffic using this mechanism. To localize traffic effectively, content files likely to be requested by many users should be cached locally. This paper proposes incentive content-download control, which controls the content files downloaded by users considering content popularity and cache location. We particularly discuss a price-based control mechanism. Computer simulations validate our control mechanism for traffic localization.

    CiNii

  • Performance Evaluation of Autonomic Load Balancing for Flow Measurement

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi

    IEICE technical report   111 ( 1 ) 5 - 10  2011.04

     View Summary

    When measuring flows at routers for flow analysis or DPI, measurement devices are required to select measurement targets while balancing the load among devices so as to maximize the number of flows measured in the entire network. Therefore, we proposed an autonomous load balancing method in which measurement devices exchange information only with adjacent nodes. In this paper, we evaluate the load balancing effect of this method using 36 ISP network topologies and investigate its effectiveness compared with other measurement methods. Moreover, we clarify the influence of network topologies on the effect of this autonomic load balancing method.

    CiNii

  • Redirect analysis of web-based malware infections

    Yuta Takata, Tatsuya Mori, Shigeki Goto

    Proceedings of the IPSJ National Convention   2011 ( 1 ) 497 - 499  2011.03

     View Summary

    Web-based malware is rampant. Infection with such malware leads to theft of personal information and defacement of websites. Some malware is delivered via drive-by download attacks, which exploit web browser vulnerabilities to seize control, redirect visitors of a website through multiple sites, and make them download and install malware. This paper analyzes the redirects caused by drive-by download attacks on the basis of packet capture data. Using the results, we can identify the sites that distribute malware, the sites used as stepping stones, and the entry sites. By compiling information on such sites into a URL blacklist, we can detect cases in which a user appears to be accessing a benign site but is in fact being redirected to a malicious URL.

    CiNii

  • An SVM-based method for classifying malicious IP traffic

    Daiki Chiba, Tatsuya Mori, Shigeki Goto

    Proceedings of the IPSJ National Convention   2011 ( 1 ) 491 - 493  2011.03

     View Summary

    Damage caused by malware (malicious software) activity is spreading on the Internet, and malware is expected to become even more complex and diverse. To reduce the damage caused by malware, it is necessary not only to defend on the basis of information about known malware but also to take countermeasures against unknown attacks. This paper proposes a method that uses SVM (Support Vector Machine), a supervised machine learning technique, to learn the characteristics of past malicious traffic in advance and to predict and classify the maliciousness of future traffic. This method has an advantage in classifying unknown attack traffic that is difficult to detect or filter with existing signature-based rules.

    CiNii

  • B-7-38 A method of identifying TCP performance degradation using flow information

    Kawahara Ryoichi, Mori Tatsuya, Kamiyama Noriaki, Ishibashi Keisuke

    Proceedings of the IEICE General Conference   2011 ( 2 ) 202 - 202  2011.02

    CiNii

  • B-7-65 Validation of Peer-assisted Content Distribution with Incentive Mechanism

    MAKI Naoya, NISHIO Takayuki, SHINKUMA Ryoichi, MORI Tatsuya, KAMIYAMA Noriaki, KAWAHARA Ryoichi, TAKAHASHI Tatsuro

    Proceedings of the IEICE General Conference   2011 ( 2 ) 229 - 229  2011.02

    CiNii

  • Autonomic Load Balancing for Flow Measurement

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi

    IEICE technical report   110 ( 448 ) 687 - 692  2011.02

     View Summary

    When measuring flows at routers for flow analysis or DPI, measurement devices need to update monitored flow information at the transmission line rate and therefore need to use high-speed memory such as SRAM. Consequently, it is difficult to measure all flows, and measurement devices need to limit their measurement targets to a subset of flows. However, if they select their targets randomly, an identical flow might be measured at multiple routers on its route, while another flow might not be measured at any router on its route. Moreover, to maximize the number of flows measured in the entire network, measurement devices are required to select measurement targets while balancing the load among devices. In this paper, we propose an autonomous load balancing method in which measurement devices exchange information only with adjacent nodes.

    CiNii

  • A method of estimating RTT using hash based sampling and bloom filter and its evaluation

    KAWAHARA Ryoichi, KAWAGUTI Ginga, MORI Tatsuya, KAMIYAMA Noriaki, ISHIBASHI Keisuke

    IEICE technical report   110 ( 341 ) 31 - 36  2010.12

     View Summary

    Instead of performing active measurements between a pair of particular end-hosts, we propose a method of estimating quality of service (QoS), such as latency between hosts, through passive measurements, which enables us to grasp network-wide QoS states such as a QoS matrix. In this paper, we focus on round-trip time (RTT). To achieve scalability, we utilize hash-based sampling and a Bloom filter to estimate RTTs without analyzing all packets. We also show evaluation results of our method using actual measurement data.

    CiNii

  • Understanding the Characteristics of Network Workload for MapReduce

    MORI Tatsuya, KIMURA Tatsuaki, IKEDA Yasuhiro, KAMIYAMA Noriaki, KAWAHARA Ryoichi

    IEICE technical report   110 ( 287 ) 5 - 10  2010.11

     View Summary

    This work studies the workloads of a distributed computing system that is used to execute MapReduce programs for processing large-scale data. Especially, we focus our attention on the network workload. A Hadoop cluster consisting of 12 nodes is used for our analysis. MapReduce job traces are collected on a master server and slave servers. To study the detailed characteristics of network load, we also collect packet header traces on the slave servers. First, through a case study analysis, we reveal the correlation between MapReduce tasks and workloads on the underlying network. Next, we show the parameter configuration of MapReduce could affect the properties of TCP flows used for the communication and data copying among servers. Finally, we discuss the implications on network measurement schemes for MapReduce-like systems.

    CiNii

  • Loss-Recovery Method on Content Pre-distribution in VoD Service

    KAMIYAMA Noriaki, YOKOTA Kenji, KAWAHARA Ryoichi, MORI Tatsuya

    IEICE technical report   110 ( 287 ) 11 - 16  2010.11

     View Summary

    To reduce the peak load of the content server, the authors have proposed multicasting popular content to all users independently of user requests, in addition to delivering content on demand. However, a recovery method for lost packets has not been investigated. In this paper, we propose a loss-recovery method suited for a multicast pre-distribution VoD system. Because a large part of the content items pre-distributed to STBs is not viewed by users, we propose to deliver just the lost packets of the requested content item on demand at the time of each user request. Using access log data from an actual VoD system, we compare the proposed method with existing loss-recovery methods for multicast delivery and clarify the superiority of the proposed method.

    CiNii

  • Analysis of impact of traffic on large-scale NAT

    KAWAHARA Ryoichi, YADA Takeshi, MORI Tatsuya

    IEICE technical report   110 ( 224 ) 75 - 80  2010.10

     View Summary

    We investigate the impact of TCP traffic on a large-scale NAT (LSN), which has been attracting attention as a means of leveraging the limited number of global IPv4 addresses. Through traffic measurement data analysis, we found the following. More than 1% of hosts generated more than 100 flows at the same time. The number of active flows depended on the measurement point: at one measurement point there were on average 1.43 - 1.83 active flows per host, while at the other there were on average 3.10 - 3.98 active flows. When the inactive timer used to clear flow state from a flow table is changed from 15 s to 10 min, the number of active flows becomes more than 10 times larger. We also investigate how to reduce the above impact on an LSN in terms of saving memory space and accommodating more users per global IPv4 address. For saving memory space, we found that we can reduce the number of active flows on an LSN by a maximum of 48% by regulating network anomalies. For accommodating more users per global IPv4 address, when mapping a source IP address from a private to a global IPv4 address, leveraging destination IP address information can effectively reduce the required number of global IPv4 addresses by 86% on average.

    CiNii

  • B-7-17 A method of estimating RTT using bloom filter and packet sampling

    Kawahara Ryoichi, Kawaguti Ginga, Mori Tatsuya, Kamiyama Noriaki, Ishibashi Keisuke

    Proceedings of the Society Conference of IEICE   2010 ( 2 ) 94 - 94  2010.08

    CiNii

  • B-11-12 Effect of Limiting Pre-distribution Content in VoD Services

    Kamiyama Noriaki, Kawahara Ryoichi, Mori Tatsuya, Hasegawa Haruhisa

    Proceedings of the Society Conference of IEICE   2010 ( 2 ) 279 - 279  2010.08

    CiNii

  • RL-005 Analysis of the Effect of Honeypots on neighbor IP Addresses

    Shimoda Akihiro, Mori Tatsuya, Goto Shigeki

      9 ( 4 ) 25 - 30  2010.08

    CiNii

  • On the use and misuse of E-mail sender authentication mechanisms

    MORI Tatsuya

    IEICE technical report   110 ( 113 ) 101 - 106  2010.06

     View Summary

    E-mail sender authentication is a promising way of verifying the sources of e-mail messages. Since today's primary e-mail sender authentication mechanisms are designed as fully decentralized architecture, it is crucial for e-mail operators to know how other organizations are using and misusing them. This paper aims to address the question "How is the DNS Sender Policy Framework (SPF), which is the most popular e-mail sender authentication mechanism, used and misused in the wild?" To the best of our knowledge, this is the first extensive study addressing the fundamental question. This work targets both legitimate and spamming domain names and correlates them with multiple data sets, including the e-mail delivery logs collected from medium-scale enterprise networks and various IP reputation lists. We first present the adoption and usage of DNS SPF from both global and local viewpoints. Next, we present empirically why and how spammers leverage the SPF mechanism in an attempt to pass a simple SPF authentication test. We also present that non-negligible volume of legitimate messages originating from legitimate senders will be rejected or marked as potential spam with the SPF policy set by owners of legitimate domains. Our findings will help provide (1) e-mail operators with useful insights for setting adequate sender or receiver policies and (2) researchers with the detailed measurement data for understanding the feasibility, fundamental limitations, and potential extensions to e-mail sender authentication mechanisms.

    CiNii

  • R&D Hot Corner (Solutions): Traffic controllability technology based on overlay networks

    Ryoichi Kawahara, Noriaki Kamiyama, Tatsuya Mori

    NTT Technical Journal   22 ( 6 ) 32 - 35  2010.06

    CiNii

  • Applications of IP Flow Measurement Technologies

    KAWAHARA Ryoichi, MORI Tatsuya, KAMIYAMA Noriaki

    The Journal of the Institute of Electronics, Information and Communication Engineers   93 ( 4 ) 287 - 292  2010.04

    CiNii

  • BS-3-15 Controlling Overlays with Overlay : Traffic Engineering through Cooperation between Overlay and Underlay

    Kawahara Ryoichi, Harada Shigeaki, Kamiyama Noriaki, Mori Tatsuya, Hasegawa Haruhisa, Nakao Akihiro

    Proceedings of the IEICE General Conference   2010 ( 2 ) S-52 - S-53  2010.03

    CiNii

  • B-6-100 Performance Comparison of Load Reduction Methods in VoD Services

    Kamiyama Noriaki, Kawahara Ryouichi, Mori Tatsuya, Hasegawa Haruhisa

    Proceedings of the IEICE General Conference   2010 ( 2 ) 100 - 100  2010.03

    CiNii

  • Effect of Limiting Pre-distribution Content and User Clustering on Content Pre-distribution

    KAMIYAMA Noriaki, KAWAHARA Ryoichi, MORI Tatsuya, HASEGAWA Haruhisa

    IEICE technical report   109 ( 448 ) 171 - 176  2010.02

     View Summary

    To reduce the peak load of the content server, the authors have proposed broadcasting content to all users in addition to delivering content on demand. However, this requires large-capacity storage at the STB. We may be able to cope with this problem by limiting the number of pre-distributed content items or by clustering users based on their content-viewing history. Therefore, in this paper, we evaluate the effect of these techniques using actual VoD log data. We clarify that, by limiting pre-distributed content, the required storage capacity at the STB can be halved while bounding the degradation of the server load reduction to about 20%, and that user clustering is effective only when the cluster count is about two.

    CiNii

  • Broadcast Pre-distribution in VoD Services

    KAMIYAMA Noriaki, KAWAHARA Ryoichi, MORI Tatsuya, HASEGAWA Haruhisa

    IEICE technical report   109 ( 398 ) 83 - 88  2010.01

     View Summary

    VoD services, in which users can request content delivery on demand, have been widely used. In VoD services, the demand for content changes widely on a daily scale. Because service providers are required to maintain stable service during peak hours, reducing the server load at peak hours is an important problem. Although multicast delivery, in which multiple users requesting the same content are served by one delivery session, is effective in suppressing the server load at peak hours, it seriously increases the response time of users. A P2P-assisted delivery system, in which users download content from other users watching the same content, is also effective in reducing the server load. However, the system performance depends on selfish user behavior, and global optimization is difficult. Moreover, complex operations, i.e., switching the delivery multicast tree or source peers, are necessary to support VCR operations. In this paper, we propose to reduce the server load without increasing user response time by broadcasting content to all users prior to requests, independently of actual requests. Through numerical evaluation using actual VoD access log data, we clarify the effectiveness of the proposed method.

    CiNii

  • A malware detection method based on learning strings contained in executable files

    Kazuhiro Tobe, Tatsuya Mori, Daiki Chiba, Akihiro Shimoda, Shigeki Goto

    anti-Malware engineering WorkShop 2010 (MWS 2010)     777 - 782  2010

  • Understanding Large-Scale Spamming Botnets From Internet Edge Sites

    Tatsuya Mori, Holly Esquivel, Aditya Akella, Akihiro Shimoda, Shigeki Goto

    Proceedings of Seventh Conference on Email and Anti-spam (CEAS 2010)     1 - 8  2010

  • Characterizing workload of large-scale video sharing services

    MORI Tatsuya, KAWAHARA Ryoichi, HASEGAWA Haruhisa, SHIMOGAWA Shinsuke

    IEICE technical report   109 ( 273 ) 33 - 38  2009.11

     View Summary

    This work attempts to characterize the workload of large-scale video sharing services such as YouTube and Nico Nico Douga (Smiley Videos). The key technical contributions of this paper are twofold. We first propose a simple and effective methodology that identifies traffic flows originating from video hosting servers. The key idea behind our approach is to leverage the addressing/naming conventions used in large-scale server farms. Next, using the identified video flows, we study the workload of video sharing services from the viewpoint of an Internet edge site. We reveal intrinsic characteristics of the flow size distributions of video sharing services that have not been known before. We show that the origin of these characteristics is rooted in the differentiated service provided to free and premium members of the video sharing services. We also discuss the implications of these intrinsic characteristics.

    CiNii

  • B-7-11 A method of estimating QoS using flow sampling

    Kawahara Ryoichi, Ishibashi Keisuke, Mori Tatsuya, Kamiyama Noriaki, Hasegawa Haruhisa

    Proceedings of the Society Conference of IEICE   2009 ( 2 ) 79 - 79  2009.09

    CiNii

  • BS-5-5 Performance evaluation of peer assisted data dissemination

    Kawahara Ryoichi, Kamiyama Noriaki, Mori Tatsuya, Hasegawa Haruhisa

    Proceedings of the Society Conference of IEICE   2009 ( 2 ) S-17 - S-18  2009.09

    CiNii

  • BS-5-6 Toward scalable byte caching on high-speed core routers

    Mori Tatsuya, Nakao Akihiro, Kamiyama Noriaki, Hasegawa Haruhisa, Kawahara Ryoichi

    Proceedings of the Society Conference of IEICE   2009 ( 2 ) S-19 - S-20  2009.09

     View Summary

    Toward a scalable byte caching mechanism that operates on high-speed core routers, this paper proposes a selective packet filtration mechanism based on a probabilistic approach. We develop a fast approximation algorithm for deciding probabilistically whether the router should cache bytes. Theoretical and experimental analyses demonstrate that our technique achieves a sufficiently good approximation for both small and large selection probabilities, which can be used to perform accurate filtration, i.e., with a low false positive ratio or a low false negative ratio.

    CiNii
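
    The probabilistic filtration idea in the abstract above can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's algorithm: it uses a hash-based selection rule so that every router makes the same, consistent caching decision for the same byte chunk, selecting roughly a fraction p of distinct chunks.

```python
import hashlib

def should_cache(chunk: bytes, p: float) -> bool:
    """Decide, consistently across routers, whether to cache this chunk.

    Hash-based selection: the same chunk always yields the same decision,
    and a fraction of roughly p of distinct chunks is selected.
    """
    # Interpret the first 8 bytes of the digest as a uniform 64-bit integer.
    h = int.from_bytes(hashlib.sha256(chunk).digest()[:8], "big")
    return h < p * 2**64
```

    Because the decision depends only on the chunk's content, two routers observing the same bytes reach the same verdict without coordination.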

  • Understanding the large-scale spamming botnet

    MORI Tatsuya, ESQUIVEL Holly, AKELLA Aditya, SHIMODA Akihiro, GOTO Shigeki

    IEICE technical report   109 ( 137 ) 53 - 58  2009.07

     View Summary

    On November 11, 2008, McColo, the primary web hosting company for the command-and-control servers of the Srizbi botnet, was shut down by its upstream ISPs. Subsequent reports claimed that the volume of spam dropped significantly everywhere on that very day. In this work, we aim to understand the world's worst spamming botnet, Srizbi, and to study the effectiveness of targeting the botnet's command-and-control servers, i.e., the McColo shutdown, from the viewpoint of Internet edge sites. We conduct an extensive measurement study that consists of e-mail delivery logs and packet traces collected at three vantage points. The total measurement period spans from July 2007 to April 2009, which includes the day of the McColo shutdown. We employ passive TCP fingerprinting on the collected packet traces to identify Srizbi bots and the spam messages sent from them. The main contributions of this work are summarized as follows. We first estimate the global scale of the Srizbi botnet in a probabilistic way. Next, we quantify the volume of spam sent from Srizbi and the effectiveness of the McColo shutdown from an edge-site perspective. Finally, we reveal several findings that are useful in understanding the growth and evolution of spamming botnets. We detail the rise and steady growth of the Srizbi botnet, as well as the version transition of Srizbi after the McColo shutdown.

    CiNii
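
    Passive TCP fingerprinting, as used in the study above, classifies a sender from features of its SYN packets (window size, initial TTL, TCP options). The following sketch is illustrative only; the signature entries are hypothetical and not the actual Srizbi signature.

```python
# Hypothetical signature database: (window size, TTL bucket, options order).
# Real fingerprints (e.g. p0f-style) use many more fields.
SIGNATURES = {
    (65535, 128, ("mss", "nop", "ws")): "os-family-A",
    (5840, 64, ("mss", "sackOK", "ts", "nop", "ws")): "os-family-B",
}

def ttl_bucket(ttl):
    """Round an observed TTL up to its likely initial value (32/64/128/255)."""
    for initial in (32, 64, 128, 255):
        if ttl <= initial:
            return initial
    return 255

def fingerprint(window, ttl, options):
    """Return the matching label for a SYN's features, or None if unknown."""
    key = (window, ttl_bucket(ttl), tuple(options))
    return SIGNATURES.get(key)
```

    The TTL bucketing step compensates for the hops a packet has already traversed, which is what makes the method usable on passively collected traces.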

  • Analyzing Impact of Topology on Effect of Parallel Video Streaming

    KAMIYAMA Noriaki, KAWAHARA Ryoichi, MORI Tatsuya, HARADA Shigeaki, HASEGAWA Haruhisa

    IEICE technical report   109 ( 129 ) 79 - 84  2009.07

     View Summary

    The transmission bit rate of high-quality video streaming is quite large, so the generated traffic flows will cause link congestion. Therefore, when providing streaming services for rich content, it is important to improve the maximum network throughput by flattening the link utilization, i.e., reducing the maximum link utilization. We have thus proposed that ISPs use parallel download to deliver rich content and balance the link utilization, and we have proposed optimum server allocation and server selection methods for parallel video streaming. In this paper, using 23 actual commercial ISP networks, we investigate the actual settings of link capacities and clarify the influence of link weight settings on the maximum network throughput. We also analyze the impact of network topologies on the effect of parallel video streaming.

    CiNii

  • Improving deployability of peer-assisted CDN platform with incentive

    MORI Tatsuya, KAMIYAMA Noriaki, HARADA Shigeaki, HASEGAWA Haruhisa, KAWAHARA Ryoichi

    IEICE technical report   109 ( 102 ) 7 - 12  2009.06

     View Summary

    As a promising solution for managing the huge workload of large-scale VoD services, managed peer-assisted CDN systems such as P4P have attracted attention. Although the approach works well in theory and in controlled network testbeds, to the best of our knowledge there have been no general studies that address how actual peers can be incentivized in the wild Internet; thus, the deployability of such systems with respect to user incentives has been an open issue. With this background in mind, we propose a new business model that aims to make peer-assisted approaches more feasible. The key idea of the model is that users sell their idle resources back to ISPs. In other words, ISPs can leverage the resources of cooperative users by giving them explicit incentives, e.g., virtual currency. We analyze through simulation how incentives and other external factors affect the efficiency of the system.

    CiNii

  • Analysis of Topological Impact on Caches for Reducing P2P Traffic

    KAMIYAMA Noriaki, KAWAHARA Ryoichi, MORI Tatsuya, HARADA Shigeaki, HASEGAWA Haruhisa

    IEICE technical report   109 ( 102 ) 1 - 6  2009.06

     View Summary

    Traffic caused by P2P services accounts for a large part of Internet traffic, so reducing P2P traffic within their networks is an important issue for ISPs. To reduce P2P traffic, it is effective for ISPs to implement cache devices at some router ports and reduce the hop length of P2P flows by delivering the requested content from caches. Hence, we have proposed an optimum cache design method that minimizes the total amount of P2P traffic based on dynamic programming, assuming that transit ISPs provide caches at peering points with stub networks. In this paper, we apply this method to 31 commercial ISP networks and numerically investigate the properties of nodes where cache deployment is effective and the topology structures in which caches are effective.

    CiNii

  • Parallel Video Streaming Maximizing Maximum Network Throughput

    KAMIYAMA Noriaki, KAWAHARA Ryoichi, MORI Tatsuya, HARADA Shigeaki, HASEGAWA Haruhisa

    IEICE technical report   109 ( 36 ) 7 - 12  2009.05

     View Summary

    In the Internet, video streaming services in which users can enjoy videos at home are becoming popular. Video streaming with HDTV- or UHDV-class quality will also be provided and widely demanded in the future. However, the transmission bit rate of high-quality video streaming is quite large, so the generated traffic flows will cause link congestion. In the Internet, the routes that packets take are determined using static link weights, so the maximum network throughput, i.e., the maximum throughput achievable by the network, is determined by the capacity of the bottleneck link with the maximum utilization, even though the utilization of many links remains low. Therefore, when providing streaming services for rich content, it is important to improve the maximum network throughput by flattening the link utilization, i.e., reducing the maximum link utilization. In this paper, we propose that ISPs use parallel download to deliver rich content and balance the link utilization, and we propose optimum server allocation and server selection methods for parallel download.

    CiNii

  • BS-4-14 How Incentive Helps in Making Peer-assisted CDN Deployable? (BS-4. System, control and design technologies for emerging network)

    Mori Tatsuya, Kamiyama Noriaki, Harada Shigeaki, Hasegawa Haruhisa, Kawahara Ryoichi

    Proceedings of the IEICE General Conference   2009 ( 2 ) S-27 - S-28  2009.03

    CiNii

  • B-7-43 On traffic optimization through cooperation of overlay and underlay networks

    Kawahara Ryoichi, Kamiyama Noriaki, Mori Tatsuya, Harada Shigeaki, Hasegawa Haruhisa, Nakao Akihiro

    Proceedings of the IEICE General Conference   2009 ( 2 ) 187 - 187  2009.03

     View Summary

    Overlays and P2P technologies have evolved as vehicles to enable wide-area deployment of network services such as various kinds of content delivery services. On the other hand, disregarding underlay network topology, the current P2P applications may cause inefficient network resource utilization and poor application performance, which poses a significant problem as the rapid increase in the content delivery traffic. We thus investigate a way to optimize application traffic through cooperation between overlay and underlay networks to improve application performance as well as resource utilization in network providers.

    CiNii

  • B-7-120 Optimum Cache Design for Reducing P2P Traffic

    Kamiyama Noriaki, Kawahara Ryoichi, Mori Tatsuya, Harada Shigeaki, Hasegawa Haruhisa

    Proceedings of the IEICE General Conference   2009 ( 2 ) 264 - 264  2009.03

    CiNii

  • B-7-44 A Study of Route Selection Methods in Overlay Networks (B-7. Information Networks, General Session)

    HARADA Shigeaki, KAWAHARA Ryoichi, MORI Tatsuya, KAMIYAMA Noriaki, HASEGAWA Haruhisa

    Proceedings of the IEICE General Conference   2009 ( 2 ) 188 - 188  2009.03

    CiNii

  • Analysis of Relation between Volume and Periodic Characteristics in the Internet Traffic

    HARADA Shigeaki, KAWAHARA Ryoichi, MORI Tatsuya, KAMIYAMA Noriaki, HASEGAWA Haruhisa, TOKUHISA Masaki

    IEICE technical report   108 ( 458 ) 213 - 218  2009.02

     View Summary

    Many reports show that Internet traffic measured in large-scale networks, such as backbone networks, exhibits strong cyclic characteristics, such as diurnal and weekly patterns. The authors have previously proposed a method that accurately detects anomalous sudden changes, caused by DDoS (Distributed Denial of Service) attacks or network failures, by utilizing such cyclic tendencies. In that method, we need to distinguish manually whether the traffic has cyclic characteristics or not. However, to detect network anomalies in a large-scale network, it is important to distinguish automatically whether traffic has a cyclic tendency. In this paper, aiming to establish a method for automatically detecting cyclic tendencies, we apply the Fourier transform to traffic time-series data measured in a large-scale network and show the relation between the cyclic tendency and the traffic volume.

    CiNii
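
    The Fourier-based test for a cyclic tendency described above can be sketched as follows: compute the spectral power of the mean-removed traffic time series and check how much of it sits at the frequency of the suspected cycle. The normalization (fraction of non-DC power) is an assumption for illustration, not the paper's exact criterion.

```python
import cmath

def cycle_strength(series, period):
    """Fraction of (non-DC) spectral power at the given period's frequency.

    Values near 1 indicate a strong cyclic tendency (e.g. a diurnal
    pattern in hourly traffic with period=24); near 0, none.
    """
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]  # remove the DC component

    def power_at(k):
        # Squared magnitude of the k-th DFT coefficient.
        return abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                       for t in range(n))) ** 2

    total = sum(power_at(k) for k in range(1, n // 2 + 1))
    return power_at(n // period) / total if total else 0.0
```

    For example, one week of hourly traffic with a clean daily cycle concentrates nearly all power at the 24-hour frequency bin.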

  • Optimum Cache Design for Reducing P2P Traffic

    KAMIYAMA Noriaki, KAWAHARA Ryoichi, MORI Tatsuya, HARADA Shigeaki, HASEGAWA Haruhisa

    IEICE technical report   108 ( 457 ) 129 - 134  2009.02

     View Summary

    Traffic caused by P2P services accounts for a large part of Internet traffic, so reducing P2P traffic within their networks is an important issue for ISPs. To reduce P2P traffic, it is effective for ISPs to implement cache devices at some router ports and reduce the hop length of P2P flows by delivering the requested content from caches. However, the cache design problem for P2P traffic has not been well investigated, although the effect of caches strongly depends on cache locations and capacities. In this paper, we propose an optimum cache design method that minimizes the total amount of P2P traffic based on dynamic programming, assuming that transit ISPs provide caches at peering points with stub networks. Moreover, we numerically show the results of applying the proposed cache design method to 31 actual ISP backbone networks.

    CiNii

  • PrBL : Probabilistic BlackList for E-mail Spammers

    MORI Tatsuya

    IEICE technical report   108 ( 457 ) 15 - 20  2009.02

     View Summary

    The recent drastic increase in the number of spam messages has caused significant overload on e-mail delivery systems. IP reputation services such as DNSBLs (DNS BlackLists) have been widely used as an effective way to lower the overhead of e-mail delivery systems by restricting SMTP connections based on the reputation listed in the blacklists. Since these reputation services require only IP address lookups, they are the most lightweight and scalable anti-spam solution. However, these approaches have fundamental limitations, namely in flexibility, extensibility, locality, and the explicit modeling of spamicity and legitimacy. In this work, we attempt to relax the limitations of existing IP reputation-based approaches by leveraging statistical techniques; hence, we call our method PrBL (Probabilistic BlackList). The key idea of our approach is to make use of properties of e-mail senders in terms of geographical and logical network locations, together with intrinsic signatures derived from the analysis of TCP headers, all of which are independent of e-mail content. A machine-learning tool is used to build a probabilistic classifier of e-mail senders. We validate the performance of PrBL through the analysis of SMTP logs collected at an enterprise e-mail server over a four-month period. We also show that by tuning the policy parameter, PrBL can achieve much better accuracy (i.e., fewer false positives) than widely used DNSBLs.

    CiNii
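
    A minimal sketch of the probabilistic-classification idea behind PrBL, assuming a logistic model with hand-picked illustrative weights; the actual feature set and learner used in the paper differ. The point is the tunable policy threshold, which trades recall for fewer false positives.

```python
import math

# Illustrative sender features (NOT the paper's actual feature set):
# AS "spamminess" prior, whether reverse DNS looks dynamic, TCP-header score.
WEIGHTS = {"as_spam_ratio": 3.0, "dynamic_rdns": 1.5, "tcp_anomaly": 2.0}
BIAS = -2.5

def spam_probability(features):
    """Logistic score mapping sender features to a spam probability."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def decide(features, policy_threshold=0.9):
    """Reject the SMTP connection only above the policy threshold;
    raising the threshold yields fewer false positives."""
    return "reject" if spam_probability(features) >= policy_threshold else "accept"
```

    Unlike a binary blacklist lookup, the continuous score lets each site pick its own operating point on the accuracy curve.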

  • Analysis of Malicious Traffic Using TCP Fingerprinting

    KISAMORI Kota, SHIMODA Akihiro, MORI Tatsuya, GOTO Shigeki

    Computer Security Symposium 2009 (CSS 2009)     553 - 558  2009

  • P2P CDN Architecture Based on Explicit Incentive to End-Users

    MORI Tatsuya, KAMIYAMA Noriaki, HARADA Shigeaki, HASEGAWA Haruhisa, KAWAHARA Ryoichi

    IEICE technical report   108 ( 286 ) 81 - 86  2008.11

     View Summary

    P2P CDNs have attracted more and more attention as a means of establishing flexible and efficient engineering of network resources, such as network links or server CPU loads. Many researchers have validated the effectiveness of P2P CDN approaches; however, to the best of the authors' knowledge, there have been no studies that address how peers can be incentivized in such systems. Thus, the deployability of the system with respect to user incentives remains an open problem. This paper attempts to address this problem by presenting a business model in which the incentive to users is provided by the ISP. We first show a framework for how an ISP can design the optimum incentive while making a good trade-off between cost and reward. Next, we show a possible design of a P2P-based CDN architecture based on this business model. We also discuss its benefits and potential problems.

    CiNii

  • Performance Evaluation of ISP-Operated CDN

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi, HASEGAWA Haruhisa

    IEICE technical report   108 ( 258 ) 43 - 48  2008.10

     View Summary

    It is highly anticipated that downloading rich content of huge size, such as movies, will become a popular service on the Internet in the near future. The transmission bandwidth consumed by delivering rich content is enormous, so it is an urgent matter for ISPs to design an efficient delivery system that minimizes the amount of network resources consumed. To provide users rich content economically and without stress, it is effective for an ISP itself to optimally provide servers with huge storage capacity at a limited number of locations within its network. The authors have therefore investigated the content deployment method, the delivery process, and the server allocation method desirable for such an ISP-operated CDN. In this paper, we evaluate the content deployment process, the estimation accuracy of the average hop length, and the reduction in total traffic over the topologies of 31 actual ISPs, and we clarify the viability of the ISP-operated CDN.

    CiNii

  • ISP-Operated CDN

    KAMIYAMA Noriaki, KAWAHARA Ryoichi, MORI Tatsuya, HASEGAWA Haruhisa

    IEICE technical report   108 ( 203 ) 63 - 68  2008.09

     View Summary

    In recent years, the number of users downloading video content on the Internet has dramatically increased, and it is highly anticipated that downloading rich content of huge size, such as movies, will become a popular service on the Internet in the near future. The transmission bandwidth consumed by delivering rich content is enormous, so it is an urgent matter for ISPs to design an efficient delivery system that minimizes the amount of network resources consumed. As a way of efficiently delivering web content, CDNs (content delivery networks) have been widely used. With a CDN, however, it is difficult to minimize the amount of network resources consumed, because the CDN provider selects a server for each request based on rough estimates of response time. Moreover, CDN providers collocate a huge number of servers within multiple ISPs, so it is difficult to increase the storage capacity of each server because of the total storage cost. Therefore, an ordinary CDN is not suited to delivering rich content. On the other hand, P2P-based delivery systems are becoming popular as scalable delivery systems. However, with a P2P-based system, we still cannot obtain the delivery pattern that is optimum for ISPs, because the server function depends on users, who behave selfishly. To provide users rich content economically and without stress, it is effective for an ISP itself to optimally provide servers with huge storage capacity at a limited number of locations within its network. In this paper, we investigate the content deployment method, the delivery process, and the server allocation method desirable for this ISP-operated CDN.

    CiNii

  • B-7-9 Path identification using elephant flows

    ISHIBASHI Keisuke, KOBAYASHI Atsushi, MORI Tatsuya, KAWAHARA Ryoichi, KONDOH Tsuyoshi

    Proceedings of the IEICE General Conference   2008 ( 2 ) 86 - 86  2008.03

    CiNii

  • BS-5-3 Optimum Parameter Setting in Identifying and Quarantining Worm-Infected Hosts

    Kamiyama Noriaki, Mori Tatsuya, Kawahara Ryoichi, Harada Shigeaki

    Proceedings of the IEICE General Conference   2008 ( 2 ) S-62 - S-63  2008.03

    CiNii

  • B-7-11 Mean-variance characteristic of spatially partitioned traffic and its applications to estimation of traffic variations

    Kawahara Ryoichi, Ishibashi Keisuke, Mori Tatsuya, Kamiyama Noriaki, Harada Shigeaki, Asano Shoichiro

    Proceedings of the IEICE General Conference   2008 ( 2 ) 88 - 88  2008.03

    CiNii

  • Anomalous Traffic Measurement and Analysis Methods (Special Issue: R&D of a Wide-Area Anomalous Traffic Detection and Control System)

    KAWAHARA Ryoichi, MORI Tatsuya, HARADA Shigeaki

    NTT Technical Journal   20 ( 3 ) 21 - 25  2008.03

    CiNii

  • Internet Traffic Measurement and Analysis Methods and Anomalous Traffic Detection

    KAWAHARA Ryoichi, HARADA Shigeaki, MORI Tatsuya, KAMIYAMA Noriaki

    The 59th Symposium of the Operations Research Society of Japan, "The Internet and OR", 2008.3     31 - 45  2008

     View Summary

    Technologies that detect and control anomalous traffic, which wastes network resources and degrades service quality on the Internet, are indispensable for providing safe and reliable communication services. This paper describes our research on traffic measurement and analysis methods for detecting anomalous traffic, together with a review of related work, and presents evaluation results for each method using real data.

    CiNii

  • BS-8-8 Extracting Worm-Infected Hosts Using White List

    Kamiyama Noriaki, Mori Tatsuya, Kawahara Ryoichi, Harada Shigeaki, Yoshino Hideaki

    Proceedings of the Society Conference of IEICE   2007 ( 2 ) S-92 - S-93  2007.08  [Refereed]

     View Summary

    In the Internet, the rapid spread of worms is a serious problem. In many cases, worm-infected hosts generate a huge number of small flows to search for other target hosts by scanning. Therefore, we defined hosts generating many flows, i.e., more than or equal to a threshold during a measurement period, as superspreaders, and we proposed a method of identifying superspreaders by flow sampling. However, some legitimate hosts generating many flows, such as DNS servers, can also be superspreaders. Therefore, if we simply regulate all the identified superspreaders, e.g., by limiting their flow generation rate or quarantining them, legitimate hosts identified as superspreaders are also regulated. Legitimate hosts generating many flows tend to be superspreaders over multiple consecutive measurement periods. In this paper, we propose a method of extracting worm-infected hosts from the identified superspreaders using a white list. We define two network states: a normal state and a worm-outbreak state. During the normal state, the IP addresses of identified superspreaders are inserted into the white list. During the worm-outbreak state, worm-infected hosts are extracted from the identified superspreaders by comparing them with the host entries stored in the white list. Using an actual packet trace and simulated abusive traffic, we demonstrate that many legitimate hosts are filtered out of the identified superspreaders while suppressing the increase in worm-infected hosts that are incorrectly left unextracted. © 2008 IEEE.

    DOI CiNii
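
    The white-list scheme in this abstract can be sketched directly: superspreaders observed in the normal state are memorized as legitimate, and during an outbreak only superspreaders absent from the list are flagged. The threshold and host names below are illustrative.

```python
class SuperspreaderFilter:
    """Toy version of the white-list idea: hosts that are superspreaders
    during the normal state are assumed legitimate (e.g. DNS servers);
    during an outbreak, only superspreaders NOT on the list are flagged."""

    def __init__(self, threshold):
        self.threshold = threshold  # flows per measurement period
        self.whitelist = set()

    def superspreaders(self, flow_counts):
        """Hosts whose flow count reaches the threshold this period."""
        return {h for h, n in flow_counts.items() if n >= self.threshold}

    def observe_normal(self, flow_counts):
        """Normal state: remember legitimate heavy-hitters."""
        self.whitelist |= self.superspreaders(flow_counts)

    def extract_infected(self, flow_counts):
        """Outbreak state: flag superspreaders not seen as legitimate."""
        return self.superspreaders(flow_counts) - self.whitelist
```

    The same object can keep accumulating whitelist entries over many normal periods before an outbreak occurs.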

  • Optimum Parameter Setting in Identifying and Quarantining Worm-Infected Hosts

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi, HARADA Shigeaki

    IEICE technical report   107 ( 148 ) 13 - 18  2007.07

     View Summary

    Abusive traffic caused by worms is a serious problem in the Internet because it consumes a large portion of network resources. In many cases, worm-infected hosts generate a huge number of small flows to search for other target hosts by port scanning. We therefore proposed a method of identifying suspicious worm-infected hosts by flow sampling. Given B, the amount of memory, and three control parameters, φ, the measurement period, m^*, the identification threshold on the flow count m within φ, and H^*, the identification probability for hosts with m=m^*, this method automatically optimizes all the other parameters to maximize the identification accuracy for hosts with m≧m^*. However, how to optimally set the three parameters φ, m^*, and H^* remains unsolved. In this paper, we propose a design method for these three parameters that ensures the ratio of active worm-infected hosts to all vulnerable hosts is bounded by a given upper limit during the time T required to develop a patch or an anti-worm vaccine.

    CiNii

  • Identifying Worm-Infected Hosts Using White List

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi, HARADA Shigeaki, YOSHINO Hideaki

    IEICE technical report   107 ( 98 ) 79 - 84  2007.06

     View Summary

    In the Internet, the rapid spread of worms is a serious problem. In many cases, worm-infected hosts generate a huge number of small flows to search for other target hosts by port scanning. We therefore defined hosts generating flows more than or equal to a threshold during a measurement period as superspreaders, and proposed a method of identifying superspreaders by flow sampling. However, some normal hosts generating many flows, such as DNS servers, can also be superspreaders. Therefore, if we regulate all identified superspreaders, e.g., by limiting their flow generation rate or quarantining them, these normal hosts are also regulated. Normal hosts generating many flows tend to be superspreaders over multiple consecutive measurement periods. In this paper, we therefore propose a method of identifying worm-infected hosts using a white list. During a normal period, we insert the IP addresses of identified hosts into the white list. During a worm-outbreak period, we identify worm-infected hosts by comparing the identified hosts with those stored in the white list.

    CiNii

  • B-7-72 Detection accuracy of network anomaly detection using sampled flow statistics

    Kawahara Ryoichi, Ishibashi Keisuke, Mori Tatsuya, Kamiyama Noriaki, Harada Shigeaki, Asano Shoichiro

    Proceedings of the IEICE General Conference   2007 ( 2 ) 162 - 162  2007.03

    CiNii

  • B-7-73 Detection accuracy of network anomaly detection using sampled packet statistics

    ISHIBASHI Keisuke, KAWAHARA Ryoichi, MORI Tatsuya, KONDO Tsuyoshi, ASANO Shoichiro

    Proceedings of the IEICE General Conference   2007 ( 2 ) 163 - 163  2007.03

    CiNii

  • Effect of sampling rate and monitoring granularity on anomaly detectability

    ISHIBASHI Keisuke, KAWAHARA Ryoichi, MORI Tatsuya, KONDOH TSUYOSHI, ASANO Shoichiro

    IEICE technical report   106 ( 578 ) 125 - 130  2007.03

     View Summary

    In this paper, we quantitatively evaluate how sampling decreases the detectability of anomalous traffic. We derive equations for calculating the false positive ratio (FPR) and false negative ratio (FNR) given the sampling rate, the statistics of normal traffic, and the volume of the anomaly to be detected. We then show that, by changing the measurement granularity, we can detect anomalies even at a low sampling rate by using the relationship between the mean and variance of aggregated flows. With these equations, we can answer questions that arise in actual network operation and had not yet been answered, such as which sampling rate to set in order to find a given volume of anomaly, or, if that rate is too high for actual operation, which granularity is optimal for finding the anomaly given a lower limit on the sampling rate.

    CiNii
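
    A small sketch of the sampling arithmetic underlying this kind of analysis, assuming independent per-packet sampling: a flow of s packets survives sampling at rate p with probability 1 - (1 - p)^s, which is why anomalies made of many single-packet flows nearly vanish at low rates while large flows are almost always seen.

```python
def flow_sampling_prob(size, p):
    """Probability that at least one packet of a size-packet flow is
    captured under independent packet sampling with rate p."""
    return 1.0 - (1.0 - p) ** size

def expected_observed_flows(n_flows, size, p):
    """Expected number of n same-sized flows that appear at all
    in the sampled data."""
    return n_flows * flow_sampling_prob(size, p)
```

    At p = 0.01, a 1000-packet flow is seen almost surely, but 99% of single-packet scan flows disappear, illustrating the detectability gap the abstract describes.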

  • Detection accuracy of network anomaly detection using sampled flow statistics

    KAWAHARA Ryoichi, ISHIBASHI Keisuke, MORI Tatsuya, KAMIYAMA Noriaki, HARADA Shigeaki, ASANO Shoichiro

    IEICE technical report   106 ( 578 ) 131 - 136  2007.03

     View Summary

    We have shown previously that network anomalies generating a huge number of small flows, such as network scans or SYN flooding, become hard to detect when packet sampling is applied, because such flows are less likely to be sampled than normal flows. In this paper, we thus investigate the effect of packet sampling on the detection accuracy. We also compare the detection accuracy under packet sampling with that under flow sampling. In addition, we develop a method of spatially partitioning the traffic into groups to increase detectability, as well as a way of determining the appropriate number of groups.

    CiNii

  • B-7-116 A method of detecting network anomalies and determining their termination

    Harada Shigeaki, Kawahara Ryoichi, Mori Tatsuya, Kamiyama Noriaki, Hirokawa Yutaka, Yamamoto Kimihiro

    Proceedings of the IEICE General Conference     206 - 206  2007

    CiNii

  • BS-8-5 A method of detecting network anomalies for periodic traffic

    Harada Shigeaki, Kawahara Ryoichi, Mori Tatsuya, Kamiyama Noriaki, Yoshino Hideaki

    Proceedings of the Society Conference of IEICE   107 ( 222 ) "S - 86"-"S-87"  2007

     View Summary

    We present a method of detecting network anomalies, such as DDoS attacks and flash crowds, automatically and in real time. We evaluated this method using measured traffic data and found that it successfully differentiates suspicious traffic. In this paper, we focus on periodic traffic with daily and/or weekly cycles, and we show that the accuracy of differentiation is improved by exploiting such periodic tendencies in anomaly detection. Our method differentiates suspicious traffic whose statistical characteristics differ from those of normal traffic. At the same time, our method learns periodic large-volume traffic, such as operational traffic, and ultimately considers it legitimate. Therefore, our method has fewer false positives than the original methods, which do not consider periodic tendencies.

    CiNii

  • A method of detecting network anomalies and determining their termination

    HARADA Shigeaki, KAWAHARA Ryoichi, MORI Tatsuya, KAMIYAMA Noriaki, HIROKAWA Yutaka, YAMAMOTO Kimihiro

    IEICE technical report   106 ( 420 ) 115 - 120  2006.12

     View Summary

    Detecting network anomalies such as DDoS attacks has become more and more crucial these days. When monitoring large-scale networks, it is essential to improve the accuracy of anomaly detection while keeping the operational overhead as small as possible. Moreover, it is necessary to determine properly when network anomalies have terminated, because detecting network anomalies and determining their termination serve as triggers for starting causal analysis and ending network controls. In this work, we develop an anomaly detection method that detects anomalies automatically and determines their termination in real time. Furthermore, we evaluate our method using measured traffic data.

    CiNii

  • A study on detecting network anomalies using sampled flow statistics

    KAWAHARA Ryoichi, MORI Tatsuya, ISHIBASHI Keisuke, KAMIYAMA Noriaki, HARADA Shigeaki, ASANO Shoichiro

    IEICE technical report   106 ( 357 ) 7 - 12  2006.11

     View Summary

    We investigate how to detect network anomalies using flow statistics obtained through packet sampling. First, we show that network anomalies generating a huge number of small flows, such as network scans or SYN flooding, become hard to detect when packet sampling is applied, because such flows are less likely to be sampled than normal flows. As a solution to this problem, we then show that we can increase the detectability of such anomalies by spatially partitioning the monitored traffic into groups so that anomalous flows are concentrated in particular groups. We also show its effectiveness using actual measurement data.

    CiNii
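
    The spatial-partitioning idea can be sketched by hashing flows on the source address, so a scanner's many small flows all land in one group while background traffic spreads out. The grouping key is an assumption for illustration; the papers above also discuss how to choose the number of groups.

```python
def partition(flows, n_groups):
    """Hash (src, dst) flows into groups by source address.

    An anomaly such as a scan (one source, many destinations) is
    concentrated in a single group, where it stands out against the
    thinned background traffic instead of being diluted across all of it.
    """
    groups = [[] for _ in range(n_groups)]
    for src, dst in flows:
        groups[hash(src) % n_groups].append((src, dst))
    return groups
```

    With destination-based hashing instead, a DDoS target's flows would concentrate the same way, so the key can be chosen per anomaly class.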

  • A study on detecting network anomalies using sampled flow statistics

    KAWAHARA Ryoichi, MORI Tatsuya, ISHIBASHI Keisuke, KAMIYAMA Noriaki, HARADA Shigeaki, ASANO Shoichiro

    IEICE technical report   106 ( 356 ) 37 - 42  2006.11

     View Summary

    We investigate how to detect network anomalies using flow statistics obtained through packet sampling. First, we show that network anomalies generating a huge number of small flows, such as network scans or SYN flooding, become hard to detect when packet sampling is applied, because such flows are less likely to be sampled than normal flows. As a solution to this problem, we then show that we can increase the detectability of such anomalies by spatially partitioning the monitored traffic into groups so that anomalous flows are concentrated in particular groups. We also show its effectiveness using actual measurement data.

    CiNii

  • Inferring original traffic pattern from sampled flow statistics

    MORI Tatsuya, KAWAHARA Ryoichi, KAMIYAMA Noriaki, ISHIBASHI Keisuke, HARADA Shigeaki

    IEICE technical report   106 ( 357 ) 13 - 18  2006.11  [Refereed]

     View Summary

    Packet sampling has become a practical and indispensable means of measuring flow statistics. Nowadays, most major ISPs monitor their networks based on sampled flow statistics collected at main routers. Recent studies have demonstrated that analyzing traffic patterns is crucial for detecting network anomalies. For example, a sharp increase in the number of small flows may be related to an anomalous event such as a worm outbreak. We may not be able to infer the original traffic pattern correctly from sampled flow statistics, because the sampling process wipes out much of the information about small flows, which play a vital role in determining the characteristics of traffic patterns. In this paper, we first show an example of how the sampling process wipes out the original statistics, using measured data. Then, we show empirical examples indicating that the original traffic pattern cannot be inferred correctly even if we apply a statistical inference method for incomplete data, i.e., the EM algorithm, to the sampled flow statistics. Finally, we show that additional information about the original flow statistics, namely the number of unsampled flows, is helpful in tracking changes in the original traffic pattern from sampled flow statistics.

    DOI CiNii

  • Source Authentication Method Combined with Data Reconstruction for the Multicast Distribution

    MORI Tatsuya, TAKAHASHI Jun, TODE Hideki, MURAKAMI Koso

    IEICE technical report   106 ( 236 ) 5 - 8  2006.09

     View Summary

    Multicast is an efficient transfer scheme for content distribution in terms of lighter load on the network and server, but it must also address security issues. In particular, source authentication is one of the important techniques for protection from malicious users who attempt eavesdropping, masquerading, and so on. On the other hand, data reconstruction is also necessary, because distributed data packets are often lost in the network, and without data reconstruction such loss can cause fatal deterioration of quality. Many schemes for source authentication and data reconstruction have been proposed individually, but no scheme combining the two has been proposed. In this paper, we use FEC (Forward Error Correction) techniques and propose an efficient source authentication method combined with FEC-based data reconstruction.

    CiNii

  • Performance Evaluation of Flow Hog Identification Method

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi

    IEICE technical report   106 ( 237 ) 97 - 102  2006.09

     View Summary

    Worm-infected hosts generate a large number of flows during a short time. We previously proposed a method for identifying hosts that generate many flows, i.e., flow hogs, using flow sampling. This method consists of a Bloom filter that detects new flows and a host table that stores the sampled flow count of each host. We also proposed an optimum memory allocation method for each module to minimize the false negative ratio. To obtain the optimum identification threshold, we need to appropriately estimate the median flow count of flow hogs. In this paper, we propose a method for accurately estimating the median from the host set identified in the previous measurement period. We also show the results of performance comparisons with other methods.

    CiNii
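
The two-module design described above, a Bloom filter that decides whether a sampled flow key is new plus a host table counting new flows per source host, can be sketched as follows. This is our own simplified illustration, not the paper's implementation; the table sizes and flow keys are invented:

```python
# Sketch of flow-hog counting: the Bloom filter deduplicates flow keys so
# each sampled flow increments its source host's counter at most once.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int, k_hashes: int):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, key: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add_if_new(self, key: str) -> bool:
        """Insert key; return True iff it was (probably) not seen before."""
        new = False
        for pos in self._positions(key):
            if not (self.bits[pos // 8] >> (pos % 8)) & 1:
                new = True
                self.bits[pos // 8] |= 1 << (pos % 8)
        return new

def count_flows(sampled_flows, m_bits=1 << 16, k=4):
    """sampled_flows: iterable of (src_host, flow_key) pairs."""
    bf, host_table = BloomFilter(m_bits, k), {}
    for host, key in sampled_flows:
        if bf.add_if_new(key):
            host_table[host] = host_table.get(host, 0) + 1
    return host_table

# A scanner opening many distinct flows dominates the host table, while a
# host sending one busy flow counts only once.
flows = [("10.0.0.1", f"10.0.0.1->192.168.0.{i}:445") for i in range(200)]
flows += [("10.0.0.2", "10.0.0.2->198.51.100.7:80")] * 50
table = count_flows(flows)
print(table)
```

The Bloom filter can give occasional false positives (a new flow judged as old), which is why the papers' memory allocation between the two modules matters.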

  • B-7-55 Identifying Variable-Rate Large-Size Flows by Packet Sampling

    Kamiyama Noriaki, Mori Tatsuya, Kawahara Ryoichi

    Proceedings of the Society Conference of IEICE   2006 ( 2 ) 115 - 115  2006.09

    CiNii

  • NetDelta : Method for detailed, long-term analysis of massive amount of traffic data

    MORI Tatsuya, ISHIBASHI Keisuke, KAMIYAMA Noriaki, KAWAHARA Ryoichi, ASANO Shoichiro

    IEICE technical report   106 ( 236 ) 133 - 138  2006.09

     View Summary

    This paper presents a novel network measurement method, NetDelta, to establish DEtailed, Long-Term Analysis of a massive amount of traffic data. NetDelta analyzes detailed traffic summaries such as volume and cardinality for each key element of interest, e.g., source IP address, destination IP address, and destination port number. The key idea is to apply bitmap-based counting techniques to measure and store the cardinality statistics. Traffic summaries are stored in small binary strings, which conserves memory and disk consumption; long-term analysis of traffic data therefore becomes feasible. Another unique and valuable feature of NetDelta is that it can change the granularity of traffic summaries and keys, e.g., to a longer time scale or a larger network address space. In general, we do not know in advance which scale is relevant to the analysis. Accordingly, being able to change the scale of the stored traffic summaries and keys to identify the relevant scale is crucial, and NetDelta meets this demand. We evaluate the method using traffic data measured at a high-speed backbone link.

    CiNii
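
A toy sketch of bitmap-based counting in the spirit of the summary above. We assume the classic linear-counting estimator n ≈ -m·ln(z/m), where z is the number of zero bits in an m-bit map; this is one standard bitmap technique, and the paper does not specify its exact variant:

```python
# A small bitmap estimates distinct-key counts: hash each key to a bit,
# then estimate the cardinality from the fraction of bits still zero.
# Duplicates hit the same bit and so do not inflate the estimate.
import hashlib
import math

M = 4096  # bitmap size in bits (assumed for illustration)

def make_bitmap() -> bytearray:
    return bytearray(M // 8)

def add(bitmap: bytearray, key: str) -> None:
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big") % M
    bitmap[h // 8] |= 1 << (h % 8)

def estimate(bitmap: bytearray) -> float:
    zeros = sum(8 - bin(b).count("1") for b in bitmap)
    return -M * math.log(zeros / M)

bm = make_bitmap()
for i in range(1000):
    key = f"dst-{i}"
    add(bm, key)
    add(bm, key)  # adding the same key twice changes nothing
print(round(estimate(bm)))
```

The 512-byte bitmap stands in for a full table of seen keys, which is the memory saving NetDelta exploits for long-term storage.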

  • Optimum Memory Allocation in Identifying Flow Hogs

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi

    IEICE technical report   106 ( 167 ) 97 - 100  2006.07

     View Summary

    Hosts infected by worms generate a huge number of flows during a short time, so it is important to identify these flow hogs as soon as possible. We previously proposed a method for identifying flow hogs using flow sampling. This method consists of a Bloom filter that detects new flows and a host table that stores the sampled flow count for each host. In this paper, we propose an optimum memory allocation method for each module to minimize the false negative ratio.

    CiNii

  • QoS control to handle long-duration large flows and its performance evaluation

    KAWAHARA Ryoichi, MORI Tatsuya, ABE Takeo

    IEICE technical report   106 ( 9 ) 69 - 74  2006.04

     View Summary

    This paper describes a method of controlling the rate of long-duration large flows and evaluates its performance. Most conventional QoS controls allocate a fair-share bandwidth to each flow regardless of its duration. Thus, a long-duration large flow (such as a P2P flow) is allocated the same bandwidth as a short-duration flow (such as data from a Web page), for which the user is more sensitive to response time. As a result, long-duration flows occupy the bandwidth over a long period and worsen the response times of short-duration flows. We have therefore proposed a QoS control that takes flow duration into account and assigns higher priority to the acceptance of shorter-duration flows. In this paper, we explain how to set the parameters used in our method and show its effectiveness through simulation analysis. We also discuss the applicability of a packet-sampling technique to improve the method's scalability, and show that our method can also handle unresponsive high-rate flows.

    CiNii

  • B-6-93 A Study on Loss-Resilient Source Authentication Method in the Multicast Distribution

    Mori Tatsuya, Takahashi Jun, Tode Hideki, Murakami Koso

    Proceedings of the IEICE General Conference   2006 ( 2 ) 93 - 93  2006.03

    CiNii

  • NetHost : Aggregation of Traffic Summary Per-Host

    MORI Tatsuya, ISHIBASHI Keisuke, KAMIYAMA Noriaki, KAWAHARA Ryoichi

    IEICE technical report   105 ( 627 ) 5 - 8  2006.03

     View Summary

    Knowing the statistics of hosts in a managed network is crucial for network operators. Among such statistics, incremental values such as the number of packets or bytes sent by a host are easy to count. On the other hand, counting a cardinality, such as the number of distinct destination hosts for a source host, is not an easy task. This paper develops a novel method, NetHost, which monitors and aggregates per-host statistics, including cardinalities. Our approach is to use a probabilistic counting algorithm for counting cardinalities and to aggregate multiple traffic summaries for a host. We implement NetHost and validate its performance using measured traffic data.

    CiNii

  • Identification of Flow Hogs by Flow Sampling

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi

    IEICE technical report   105 ( 628 ) 165 - 170  2006.03

     View Summary

    Abusive traffic caused by worms, viruses, DDoS attacks, etc., is a serious problem in the current Internet because it consumes a large portion of network resources. In many cases, hosts targeted by DDoS attackers or infected with a worm or virus generate a huge number of small flows during a short time. It is therefore important to identify these "flow hogs" as soon as possible and cope with their behavior, for example by disconnecting them. This paper proposes a method for accurately identifying flow hogs using flow sampling.

    CiNii

  • BS-5-1 Estimating Top-N Hosts in Cardinality and its Application to Anomaly Detection

    ISHIBASHI Keisuke, MORI Tatsuya, KAWAHARA Ryoichi, HIROKAWA Yutaka, KOBAYASHI Atsushi, YAMAMOTO Kimihiro, SAKAMOTO Hitoaki

    Proceedings of the IEICE General Conference   106 ( 14 ) S-35 - S-36  2006

     View Summary

    We propose a method to find the N hosts that have the N highest cardinalities, where cardinality is the number of distinct items such as flows, ports, or peer hosts; the method also estimates their cardinalities. Finding the hosts with the N highest cardinalities normally requires a table of previously seen items for each host, to check whether an item in an arriving packet is new, which consumes a lot of memory. In this paper, we use a property of cardinality estimation: the cardinality of the intersection of multiple data sets can be estimated from the cardinality information of each data set. Using this property, we propose an algorithm that does not need to maintain a table for each host, but only tables for partitioned addresses of hosts, and estimates the cardinality of a host as the intersection of the cardinalities of its partitioned addresses. We evaluate our algorithm using actual backbone traffic data.

    CiNii
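
The key property the abstract relies on (intersection cardinalities obtainable from per-set cardinality information) can be illustrated with mergeable bitmap sketches and inclusion-exclusion. This is our own simplified example, not the paper's partitioned-address algorithm, and the bitmap size and key sets are invented:

```python
# Bitmap sketches can be merged by bitwise OR to estimate a union, so an
# intersection cardinality follows from inclusion-exclusion without
# keeping explicit item tables:  |A ∩ B| = |A| + |B| - |A ∪ B|.
import hashlib
import math

M = 8192  # bits per sketch (assumed for illustration)

def add(bm: bytearray, key: str) -> None:
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big") % M
    bm[h // 8] |= 1 << (h % 8)

def estimate(bm) -> float:
    zeros = sum(8 - bin(b).count("1") for b in bm)
    return -M * math.log(zeros / M)  # linear-counting estimator

a, b = bytearray(M // 8), bytearray(M // 8)
for i in range(800):          # A = flows 0..799
    add(a, f"flow-{i}")
for i in range(500, 1300):    # B = flows 500..1299, so |A ∩ B| = 300
    add(b, f"flow-{i}")

union = bytearray(x | y for x, y in zip(a, b))
inter = estimate(a) + estimate(b) - estimate(union)
print(round(inter))
```

The estimate lands close to the true overlap of 300; the error grows when the intersection is small relative to the sets, a known limitation of inclusion-exclusion on sketches.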

  • Simple timeout mechanism in traffic measurement

    KAMIYAMA Noriaki, MORI Tatsuya, KAWAHARA Ryoichi

    IEICE technical report   105 ( 472 ) 97 - 102  2005.12

     View Summary

    In traffic measurement for flows, packet sampling is used to reduce the number of flows captured. Flow statistics are managed in a flow table (FT), and a flow entry created in the FT is removed when no more packets are sampled from the flow during the timeout length. To remove timed-out flow entries, the router normally checks all flow entries in the FT periodically to see whether they have timed out. With this mechanism, the growth of the timeout-processing load becomes a problem as the number of flows increases. In this paper, we propose checking just V entries randomly selected from the FT. Numerical evaluation clarifies that the proposed method dramatically reduces the required number of memory accesses while keeping the increase in required memory size small.

    CiNii

  • Detection of Worm-Infected Hosts by Communication Pattern Analysis

    MORI Tatsuya, KAWAHARA Ryoichi, KAMIYAMA Noriaki, ISHIBASHI Keisuke, ABE Takeo

    IEICE technical report   105 ( 407 ) 1 - 6  2005.11

     View Summary

    This paper develops a new method that detects worm-infected hosts through analysis of the communication pattern of each host. Our approach follows a two-stage strategy. We first introduce a quantitative definition of communication pattern, and show through cluster analysis that worm-infected hosts exhibit intrinsic communication-pattern characteristics that distinguish them from other hosts. We then propose a method for detecting worm-infected hosts by applying the defined communication pattern to a Naive Bayesian Classifier (NBC). We validate the accuracy of our method with measured traffic data.

    CiNii

  • Detection of Worm-Infected Hosts by Communication Pattern Analysis

    MORI Tatsuya, KAWAHARA Ryoichi, KAMIYAMA Noriaki, ISHIBASHI Keisuke, ABE Takeo

    IEICE technical report   105 ( 405 ) 13 - 18  2005.11

     View Summary

    This paper develops a new method that detects worm-infected hosts through analysis of the communication pattern of each host. Our approach follows a two-stage strategy. We first introduce a quantitative definition of communication pattern, and show through cluster analysis that worm-infected hosts exhibit intrinsic communication-pattern characteristics that distinguish them from other hosts. We then propose a method for detecting worm-infected hosts by applying the defined communication pattern to a Naive Bayesian Classifier (NBC). We validate the accuracy of our method with measured traffic data.

    CiNii

  • Estimating flow rate from sampled packet streams for detection of performance degradation at TCP flow level

    KAWAHARA Ryoichi, MORI Tatsuya, ISHIBASHI Keisuke, KAMIYAMA Noriaki, ABE Takeo

    IEICE technical report   105 ( 405 ) 19 - 24  2005.11

     View Summary

    This paper describes a method of estimating the TCP flow rates of sampled flows through packet sampling. We use the sequence numbers of sampled packets, which markedly improve the accuracy of the flow-rate estimates. Using an analytical model, we investigate how to set parameters such as the packet sampling probability used in this estimation method. A remarkable result is that the estimation accuracy improves as the sampling probability decreases. Using measured data, we show that this method gives accurate estimates, and that it enables us to detect performance degradation at the TCP flow level.

    CiNii
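
The core idea above, that TCP sequence numbers count bytes and so two sampled packets of a flow bound the bytes sent between them, can be sketched as follows, with hypothetical packet records (not data from the paper):

```python
# Two sampled packets from one flow give rate ≈ Δseq / Δt in bytes/s,
# independently of how many packets the sampling missed in between.

def estimate_rate(samples):
    """samples: list of (timestamp_sec, tcp_seq) from one sampled flow."""
    samples = sorted(samples)  # order by timestamp
    (t0, s0), (t1, s1) = samples[0], samples[-1]
    if t1 == t0:
        raise ValueError("need samples spanning a time interval")
    return (s1 - s0) / (t1 - t0)  # bytes per second

# Hypothetical flow sending 1500-byte packets every 1 ms (~1.5 MB/s),
# of which only three packets happened to be sampled:
sampled = [(0.004, 6000), (0.513, 769500), (1.204, 1806000)]
print(estimate_rate(sampled))  # ≈ 1.5e6 bytes/s
```

A real implementation would also have to handle sequence-number wraparound and retransmissions, which this sketch ignores.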

  • Estimating Scale of Peer-to-Peer File Sharing Applications Using Multi-Layer Partial Measurement

    KAMEI Satoshi, UCHIDA Masato, MORI Tatsuya, TAKAHASHI Yutaka

    The IEICE transactions on communications B   88 ( 11 ) 2171 - 2180  2005.11

    CiNii

  • Applying Naive Bayesian Classifier to Network Management

    MORI Tatsuya, KAWAHARA Ryoichi, KAMIYAMA Noriaki

    IEICE technical report   105 ( 357 ) 17 - 20  2005.10

     View Summary

    This paper develops a new method for detecting hosts infected by worms. The key idea is to use a Naive Bayesian Classifier (NBC), as used for spam filtering. Using learned statistics, which can be obtained from a priori data or measured statistics, the method probabilistically estimates the class to which newly measured hosts may belong. Since the method can estimate the class of hosts that have unknown attributes, it is robust. We evaluate the accuracy of the method using measured packet traces.

    CiNii

  • BS-9-2 Performance evaluation of QoS control to handle long-duration large flows(BS-9. Latest Trends on Information Networking Technologies)

    Kawahara Ryoichi, Kaneko Hidefumi, Mori Tatsuya, Abe Takeo

    Proceedings of the Society Conference of IEICE   2005 ( 2 ) SE-3 - SE-4  2005.09

    CiNii

  • B-7-7 Performance Comparison of High-Rate Flow Identifiers.

    Kamiyama Noriaki, Mori Tatsuya

    Proceedings of the Society Conference of IEICE   2005 ( 2 ) 134 - 134  2005.09

    CiNii

  • B-7-32 Analysis of impact of link congestion in reverse direction on TCP performance

    Kawahara Ryoichi, Mori Tatsuya, Ishibashi Keisuke, Abe Takeo

    Proceedings of the Society Conference of IEICE   2005 ( 2 ) 159 - 159  2005.09

    CiNii

  • Accurate Identification of High-Rate Flows

    KAMIYAMA Noriaki, MORI Tatsuya

    IEICE technical report. Information networks   105 ( 178 ) 167 - 172  2005.07

     View Summary

    The authors previously proposed a method named "short timeout" that identifies high-rate flows using sampled packets: flows from which two packets are sampled without a timeout are identified as high-rate flows. In this paper, we generalize this identification mechanism to identify flows from which Y packets are sampled without a timeout. We analytically derive the identification probability and clarify that the identification accuracy improves as Y grows. However, we also find that increasing Y requires a larger memory and faster processing.

    CiNii

  • Classifying Flow Characteristics using Naive Bayesian Classifier

    MORI Tatsuya, KAWAHARA Ryoichi, KAMIYAMA Noriaki

    IEICE technical report   105 ( 12 ) 9 - 12  2005.04

     View Summary

    The statistics of a flow, which is the unit of traffic produced by each user or application, give us meaningful insight for practical network management. That is, if we could rapidly identify anomalous flows that might significantly affect network performance, we could immediately take adequate action against such flows to protect our networks; it would also enable effective control schemes and troubleshooting. This paper develops a new method toward this objective, using a Naive Bayesian Classifier. Using learned statistics obtained from measurement, the method probabilistically estimates the class to which newly arrived flows may belong. Since the method does not maintain per-flow statistics, it scales well. We evaluate the accuracy of the method using measured packet traces.

    CiNii
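
A toy Naive Bayesian Classifier over coarse flow attributes, in the spirit of the summary above; the features, labels, and training data are invented for illustration and are not the paper's feature set:

```python
# Learn P(class) and P(attribute | class) from labeled flows, then score
# new flows under the naive independence assumption, with no per-flow state.
from collections import Counter, defaultdict
import math

def train(labeled_flows):
    """labeled_flows: list of (class_label, attribute tuple)."""
    prior, cond = Counter(), defaultdict(Counter)
    for label, attrs in labeled_flows:
        prior[label] += 1
        for i, a in enumerate(attrs):
            cond[label][(i, a)] += 1
    return prior, cond

def classify(prior, cond, attrs, alpha=1.0, n_values=4):
    """Most probable class (alpha: Laplace smoothing; n_values: assumed
    number of possible values per attribute)."""
    total = sum(prior.values())
    best, best_lp = None, -math.inf
    for label, n in prior.items():
        lp = math.log(n / total)
        for i, a in enumerate(attrs):
            lp += math.log((cond[label][(i, a)] + alpha) / (n + alpha * n_values))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# attributes per flow: (protocol, destination-port class, size class)
data = [("anomalous", ("tcp", "ephemeral", "tiny"))] * 40 \
     + [("normal",    ("tcp", "web",       "large"))] * 50 \
     + [("normal",    ("udp", "dns",       "tiny"))] * 10
model = train(data)
print(classify(*model, ("tcp", "ephemeral", "tiny")))  # "anomalous"
print(classify(*model, ("tcp", "web", "large")))       # "normal"
```

Laplace smoothing keeps unseen attribute values from zeroing out a class, which is what gives the method its robustness to hosts or flows with unknown attributes.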

  • A method of estimating TCP flow statistics through packet sampling and its evaluation

    KAWAHARA Ryoichi, MORI Tatsuya, ISHIBASHI Keisuke, KAMIYAMA Noriaki, ABE Takeo

    Technical report of IEICE. TM   104 ( 706 ) 19 - 24  2005.03

     View Summary

    Managing performance at the flow level through traffic measurement is crucial for effective network management. On the other hand, with the rapid rise in link speeds, collecting all packets has become difficult, so packet sampling has been attracting attention as a scalable means of measuring flow statistics. We have therefore established a method of detecting performance degradation at the TCP flow level through ordinary packet sampling. We have also proposed a new sampling method to estimate TCP flow distributions in terms of flow size, flow rate, and flow duration. The proposed method is based on the characteristic that the SYN packet of each flow is sampled with equal probability. We also show the effectiveness of our methods using measured data.

    CiNii
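
Why SYN-based sampling yields unbiased flow statistics, sketched with synthetic flows (our own illustration, not the paper's data): every TCP flow carries exactly one SYN, so sampling SYNs with probability p selects each flow with probability p regardless of its size.

```python
# With per-SYN sampling, the sampled flows' size distribution is an
# unbiased view of the original one, and total flow count ≈ n_sampled / p.
import random

random.seed(7)
# synthetic flow sizes: many mice, few elephants
flows = [2] * 9000 + [1000] * 1000
p = 0.1  # assumed SYN sampling probability

syn_sampled = [s for s in flows if random.random() < p]  # one SYN per flow

est_total = len(syn_sampled) / p
frac_large = sum(1 for s in syn_sampled if s >= 1000) / len(syn_sampled)
print(round(est_total), round(frac_large, 3))
```

Contrast this with per-packet sampling, where the 1000-packet flows would be vastly over-represented among sampled flows.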

  • Traffic Measurement and Analysis Techniques for Ultra-High-Speed Networks

    KAWAHARA Ryoichi, MORI Tatsuya, ISHIBASHI Keisuke, ABE Takeo

    Operations Research as a Management Science   50 ( 3 ) 163 - 168  2005.03

     View Summary

    This article introduces two traffic measurement and analysis techniques for ultra-high-speed networks, both of which estimate statistics useful for traffic control and quality management from sampled packets alone. First, we describe a method for identifying users who occupy a large share of the link bandwidth; when such heavy users squeeze the communications of ordinary users, the method makes it possible to isolate and control them quickly. Next, we describe a method for detecting quality degradation across the entire user population from the behavior of only those users extracted by packet sampling. We also report the results of verifying the effectiveness of each method through analysis of measured data.

    CiNii

  • B-7-26 Analyzing Flow Characteristics for Building a Scalable Flow Management Scheme

    Mori Tatsuya, Kawahara Ryoichi, Kamiyama Noriaki, Ishibashi Keisuke, Abe Takeo

    Proceedings of the IEICE General Conference     180 - 180  2005

    CiNii

  • B-7-4 Classification of Internet hosts based on communication pattern

    Mori Tatsuya, Kawahara Ryoichi, Kamiyama Noriaki

    Proceedings of the Society Conference of IEICE     131 - 131  2005

    CiNii

  • B-7-2 A method of estimating TCP flow statistics through SYN packet sampling

    Kawahara Ryoichi, Mori Tatsuya, Kamiyama Noriaki, Abe Takeo

    Proceedings of the IEICE General Conference     156 - 156  2005

    CiNii

  • Traffic Measurement and Analysis Techniques for Ultra-High-Speed Networks (Industry Case Study Session (2))

    KAWAHARA Ryoichi, MORI Tatsuya, ABE Takeo

    Abstracts of the Fall Research Presentation Meeting of the Operations Research Society of Japan   2004   22 - 23  2004.09

    CiNii

  • A New Traffic Control Paradigm using Overlay Networks

    KIMURA Takumi, UCHIDA Masato, KAWAHARA Ryoichi, KAMEI Satoshi, MORI Tatsuya, NOGAMI Shinya, ABE Takeo

    IEICE technical report. Information networks   104 ( 182 ) 7 - 12  2004.07

     View Summary

    In this paper, we propose a traffic control paradigm using overlay networks which are logical networks over IP networks in the Internet scale. Its key technologies are dynamic topology configuration and Quality of Service (QoS) routing based on IP-layer information measured in real time. The control paradigm is expected to enable end-to-end QoS and global load-balancing through multiple Internet Service Providers (ISPs).

    CiNii

  • Identifying elephant flows through periodically sampled packets

    MORI Tatsuya, UCHIDA Masato, KAWAHARA Ryoichi, PAN Jianping, GOTO Shigeki

    IEICE technical report. Information networks   104 ( 181 ) 31 - 37  2004.07

     View Summary

    Identifying elephant flows is very important in developing effective and efficient traffic engineering schemes. In addition, obtaining the statistics of these flows is very useful for network operation and management. On the other hand, with the rapid growth of link speeds in recent years, packet sampling has become a very attractive and scalable means of measuring flow statistics; however, it also makes identifying elephant flows much more difficult. Based on Bayes' theorem, this paper develops techniques and schemes to identify elephant flows from periodically sampled packets. We show that our basic framework is very flexible in making appropriate trade-offs between false positives (misidentified flows) and false negatives (missed elephant flows) for a given sampling frequency. We further validate and evaluate our approach using publicly available traces. Our schemes are generic and require no per-packet processing; hence, they allow a very cost-effective implementation for deployment in large-scale high-speed networks.

    CiNii
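
A compact sketch of the Bayesian decision the abstract describes, under an assumed two-class flow-size model; the sizes and prior are made up for illustration, and the paper's actual model is richer:

```python
# Given that j packets of a flow were sampled at rate p, compare
# P(elephant | j) with P(mouse | j) via Bayes' theorem and a binomial
# sampling model, then threshold on the sampled count j.
import math

def binom_pmf(j, s, p):
    return math.comb(s, j) * p**j * (1 - p) ** (s - j)

def posterior_elephant(j, p, size_mouse=10, size_eleph=10_000,
                       prior_eleph=0.01):
    """P(elephant | j sampled packets) for a two-class flow-size model."""
    like_e = binom_pmf(j, size_eleph, p) * prior_eleph
    like_m = binom_pmf(j, size_mouse, p) * (1 - prior_eleph)
    return like_e / (like_e + like_m)

p = 1 / 1000  # assumed sampling rate
for j in range(4):
    print(j, round(posterior_elephant(j, p), 4))
# The posterior rises sharply with j, so a small threshold on the sampled
# count (around j = 3 with these numbers) already identifies elephants
# with high confidence.
```

Moving the threshold on j trades false positives against false negatives, which is the flexibility the abstract refers to.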

  • A method of detecting performance degradation at TCP flow level from sampled packet streams

    KAWAHARA Ryoichi, ISHIBASHI Keisuke, MORI Tatsuya, ABE Takeo

    Technical report of IEICE. TM   104 ( 165 ) 37 - 42  2004.07

     View Summary

    With the rapid growth of link speed, packet sampling has become a very attractive and scalable means to measure flow statistics. We thus establish a method of detecting performance degradation at the TCP flow level from sampled flow behaviors. The proposed method is based on the following two flow characteristics: (i) the flow rates of sampled flows tend to be higher than those of all flows, and (ii) when the link becomes congested, the performance of high-rate flows becomes degraded first. We also show the effectiveness of our method using measured data.

    CiNii

  • On the Flow Analysis of the Internet Traffic : Web vs. P2P

    MORI Tatsuya, UCHIDA Masato, GOTO Shigeki

    The Transactions of the Institute of Electronics,Information and Communication Engineers.   87 ( 5 ) 561 - 571  2004.05

    CiNii

  • Identifying elephant flows from sampled packet stream

    MORI Tatsuya, UCHIDA Masato, KAWAHARA Ryoichi, GOTO Shigeki

    IEICE technical report   104 ( 18 ) 17 - 20  2004.04

     View Summary

    In a network link with a sufficient volume of traffic, a small number of flows consisting of a large number of packets occupy a large part of the whole aggregated traffic. Such flows are called "elephant flows". Identifying and controlling them is useful for constructing efficient and effective traffic engineering schemes. Meanwhile, with the recent growth in the bandwidth of network links, packet sampling has been widely noted as a scalable technology for measuring and managing networks. In this paper, we propose a new method for identifying elephant flows from a sampled packet stream. We also evaluate the method using measured data.

    CiNii

  • A method of estimating TCP performance for aggregated flows with heterogeneous access links and its evaluation

    KAWAHARA Ryoichi, ISHIBASHI Keisuke, MORI Tatsuya, OZAWA Toshihisa, SUMITA Shuichi, ABE Takeo

    IEICE technical report   104 ( 18 ) 29 - 32  2004.04

     View Summary

    We propose a method of estimating TCP performance in a link on which flows with heterogeneous access-link bandwidths are aggregated. It has been reported that mean TCP file-transfer time can be evaluated using processor-sharing model when all flows have the same access link bandwidth. We thus start by developing a formula that approximates the mean TCP file-transfer time of a flow under a heterogeneous access-link condition. We then extend the approximation to handle various factors that limit actual transfer speed of a TCP flow besides access-link bandwidth. We also develop a method of bandwidth dimensioning and management based on this method of estimation.

    CiNii

  • B-7-102 A scalable QoS control to handle long-lived elephant flows

    Kawahara Ryoichi, Mori Tatsuya, Sumita Shuichi, Abe Takeo

    Proceedings of the IEICE General Conference   2004 ( 2 ) 311 - 311  2004.03

    CiNii

  • B-7-115 A Proposal of Decentralized Topology Control for Unstructured Overlay Networks

    Kimura Takumi, Kamei Satoshi, Mori Tatsuya, Uchida Masato, Sumita Shuichi, Abe Takeo

    Proceedings of the IEICE General Conference   2004 ( 2 ) 324 - 324  2004.03

    CiNii

  • B-7-117 Identifying elephant flows using packet sampling

    Mori Tatsuya, Uchida Masato, Kawahara Ryoichi, Goto Shigeki

    Proceedings of the IEICE General Conference   2004 ( 2 ) 326 - 326  2004.03

    CiNii

  • B-7-121 An Approximation of TCP File-Transfer Time for Heterogeneous Access Links

    ISHIBASHI Keisuke, KAWAHARA Ryoichi, MORI Tatsuya, OZAWA Toshihisa, AIDA Masaki

    Proceedings of the IEICE General Conference   2004 ( 2 ) 330 - 330  2004.03

    CiNii

  • Survey of P2P File Sharing (Special Issue: P2P Technologies and Services)

    OOI Keita, KAMEI Satoshi, MORI Tatsuya

    NTT Technical Journal   16 ( 3 ) 18 - 21  2004.03

    CiNii

  • Evaluation of Flow Statistics Constructed from Sampled Packets

    MORI Tatsuya

    Proceedings of the 2004 IEICE Society Conference    2004

    CiNii

  • A Method of Estimating TCP Quality and a Bandwidth Dimensioning and Management Method for Links Aggregating TCP Flows with Heterogeneous Speeds

    KAWAHARA Ryoichi, ISHIBASHI Keisuke, MORI Tatsuya, OZAWA Toshihisa, SUMITA Shuichi, ABE Takeo

    Spring Research Presentation Meeting of the Operations Research Society of Japan, 2004     98 - 99  2004

    CiNii

  • Analysis of Peer-to-Peer File Sharing Applications

    OOI Keita, KAMEI Satoshi, MORI Tatsuya

    IPSJ SIG Notes   114 ( 87 ) 17 - 24  2003.08

     View Summary

    As Internet access-line bandwidth has increased, peer-to-peer applications have proliferated and have had a great impact on networks. In particular, the spread of peer-to-peer file sharing applications raises copyright concerns. However, it is difficult to gather much information because files are not transmitted via a server. In this paper, we measure and analyze the peer-to-peer file sharing applications WinMX, Gnutella, and Winny using heuristic methods, in order to clarify the scale of peer-to-peer file sharing and the details of shared files.

    CiNii

  • Status and Traffic Issues of Peer-to-Peer File Sharing Applications : for traffic measurement, traffic control, network design, and operation

    KAMEI Satoshi, MORI Tatsuya, OOI Keita

    Technical report of IEICE. CQ   103 ( 178 ) 39 - 46  2003.07

     View Summary

    As Internet access-line bandwidth has increased, peer-to-peer applications have proliferated and have had a great impact on networks. In this paper, we measure and analyze the peer-to-peer file sharing applications with the greatest impact on networks (WinMX, Gnutella, and Winny) from the viewpoints of the network layer and the application layer. We then discuss the traffic issues that peer-to-peer traffic growth raises for traffic measurement, traffic control, network design, and operation.

    CiNii

  • On the relationship between the Pareto rule of flow rate distribution and network traffic variability

    Mori Tatsuya, Kawahara Ryoichi, Naito Shozo

    Proceedings of the IEICE General Conference   2003 ( 2 ) S-3 - S-4  2003.03

    CiNii

  • P2P Traffic Separation Method and Its Evaluation

    KAMEI Satoshi, MORI Tatsuya, KIMURA Takumi

    Proceedings of the Society Conference of IEICE     200 - 200  2003

    CiNii

  • Analysis of network traffic focusing on per-time-block flow statistics

    MORI Tatsuya, KAWAHARA Ryoichi, NAITO Shozo

    IEICE technical report. Information networks   101 ( 716 ) 1 - 8  2002.03

     View Summary

    Recently, a number of studies have shown that the marginal distributions of network traffic exhibit a non-Gaussian nature, and that this property is crucial for modeling realistic network traffic. In this work, to study the causal mechanisms behind the non-Gaussian nature of the marginal distributions, we defined "per-time-block flows" and investigated the flow size, hop count, and application breakdown of each per-time-block flow, as well as their relationships.

    CiNii

  • Analysis of Non-Gaussian Nature of Network Traffic

    Tatsuya Mori, Ryoichi Kawahara, Shozo Naito

    CoRR   cs.NI/0201004  2002

    Internal/External technical report, pre-print, etc.  

  • Analysis of Non-Gaussian Nature of Network Traffic and its Implication on Network Performance

    Tatsuya Mori, Ryoichi Kawahara, Shozo Naito

    CoRR   cs.NI/0209004  2002  [Refereed]

    Internal/External technical report, pre-print, etc.  

  • A Study on the Difference in Marginal Distributions of Network Traffic

    MORI Tatsuya, KAWAHARA Ryoichi

    IEICE technical report. Information networks   101 ( 414 ) 1 - 7  2001.11

     View Summary

    To study the causal mechanisms that explain the differences in the marginal distributions of network traffic fluctuations, we analyzed IP flow statistics in detail. We found that (1) an increase in the average number of active flows did not necessarily make the marginal distribution Gaussian, (2) the distribution of flow size per time block followed a power law, and (3) this flow structure was correlated with the marginal-distribution properties of the aggregated traffic and is crucial for network traffic modeling and performance evaluation.

    CiNii

  • SB-10-3 Is the Hurst Parameter Sufficient for Evaluating the Performance of Bursty Network Traffic?

    Mori Tatsuya, Kawahara Ryoichi

    Proceedings of the Society Conference of IEICE   2001 ( 2 ) 21 - 22  2001.08

    CiNii

  • A study on the differences in characteristics of self-similar traffic with similar Hurst parameters

    MORI Tatsuya

    Proceedings of the IEICE General Conference   2001 ( 2 ) 206 - 206  2001.03

    CiNii

  • 7p-K-9 General property of fluctuation in the limit of controlling chaos

    Mori Tatsuya, Aizawa Yoji

    Meeting Abstracts of the Physical Society of Japan   52 ( 0 ) 782 - 782  1997

    DOI CiNii


Industrial Property Rights


Other

  • Cybersecurity Encouragement Prizes of Minister for Internal Affairs and Communications

    2023.03
    -
     

     View Summary

    https://www.soumu.go.jp/main_sosiki/joho_tsusin/eng/pressrelease/2023/2/28_03.html

  • The 9th WASEDA e-Teaching Award, Grand Prize

    2021.03
    -
     

     View Summary

    https://www.waseda.jp/inst/ches/news/2021/03/25/3199/

  • Waseda University Teaching Award (Spring semester, AY 2020)

    2021.02
    -
     

     View Summary

    https://www.waseda.jp/inst/ches/news/2021/01/12/3157/

  • Computer Security Symposium (CSS 2020), Candle Star Session (CSS×2.0), First Star Award

    2020.10
    -
     
 

Syllabus


 

Sub-affiliation

  • Faculty of Science and Engineering   Graduate School of Fundamental Science and Engineering

Research Institute

  • 2022
    -
    2024

    Waseda Research Institute for Science and Engineering   Concurrent Researcher

  • 2022
    -
    2024

    Global Information and Telecommunication Institute   Concurrent Researcher

Internal Special Research Projects

  • Context-Aware Security Control Techniques toward the App-ification of IoT

    2018   YASUMATSU Tatsuhiko, AKIYAMA Mitsuaki, NATATSUKA Atsuko, WATANABE Takuya, IIJIMA Ryo

     View Summary

    In FY2018, taking apps for Android devices as one example of IoT apps, we analyzed such apps and studied how developers respond to vulnerabilities; the results were presented at ACM CODASPY 2019. As another example of IoT apps, we conducted a large-scale survey of cloud apps running on AI speakers, clarifying how apps invoked by users' voices behave and what information they collect. These results are scheduled for external publication in FY2019.

  • Creation of Malware Informatics

    2015   GOTO Shigeki

     View Summary

    In FY2015, toward establishing malware informatics, we deferred the issues of large-scale data collection to the following year (for FY2016, a KAKENHI Grant-in-Aid for Scientific Research (B) on the same theme was awarded; PI: Prof. Shigeki Goto) and worked mainly on classifying malware samples and their communications using machine learning techniques. Specifically, we addressed the following six topics: (1) classification of malware samples based on dynamic analysis logs; (2) classification of malware communications based on dynamic analysis; (3) analysis of malware attached to targeted-attack e-mails; (4) automatic generation of reports for new malware samples by learning from expert malware analysis reports; (5) methods for detecting pirated copies of mobile apps; and (6) techniques for detecting malicious apps based on user review information. The results were presented in one international conference paper, three international conference posters, and nine domestic workshop presentations. Although all of these topics were launched this year, they yielded good results; in particular, the mobile-related work was highly evaluated, resulting in an international conference acceptance (acceptance rate: 30.0%) and an excellent student paper award at a domestic workshop. Much of this work is also reflected in joint research with companies and is expected to be put to practical use in the future.
    The concrete results for topic (1) are as follows:
    ・武部嵩礼, 後藤滋樹, "A method for estimating malware variants using Paragraph Vector," IEICE General Conference, D-19-16, Mar. 2016
    ・青木一樹, 後藤滋樹, "Hierarchical classification of malware using API call information," IEICE General Conference, D-19-17, Mar. 2016
    For topic (2):
    ・水野翔, 畑田充弘, 森達哉, 後藤滋樹, "A statistical method for discriminating communications by malware-infected hosts," IEICE Technical Report, vol. 115, no. 488, ICSS2015-66, pp. 117-122, Mar. 2016
    ・畑田充弘, 森達哉, "A survey of malware communications toward detecting unknown malware," Proceedings of Computer Security Symposium 2015 (CSS 2015), vol. 2015, no. 3, pp. 520-527, Oct. 2015
    For topic (3):
    ・志村正樹, 畑田充弘, 森達哉, 後藤滋樹, "Characteristic analysis of the sending activity of spam mail with malware attachments," IEICE General Conference, B-16-2, Mar. 2016
    ・志村正樹, 畑田充弘, 森達哉, 後藤滋樹, "Analysis of spam mail with malware attachments using a spamtrap," Proceedings of Computer Security Symposium 2015 (CSS 2015), vol. 2015, no. 3, pp. 1243-1250, Oct. 2015
    ・M. Shimura, M. Hatada, T. Mori, and S. Goto, "Analysis of Spam Mail Containing Malicious Attachments using Spamtrap" (poster presentation), The 18th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2015)
    For topic (4):
    ・藤野朗稚, 森達哉, "Correlation analysis of expert malware analysis reports and dynamic analysis logs," Proceedings of Computer Security Symposium 2015 (CSS 2015), vol. 2015, no. 3, pp. 702-709, Oct. 2015
    For topic (5):
    ・Y. Ishii, T. Watanabe, M. Akiyama, and T. Mori, "Clone or Relative?: Understanding the Origins of Similar Android Apps," Proceedings of the ACM International Workshop on Security And Privacy Analytics (IWSPA 2016), pp. 25-32, Mar. 2016
    ・石井悠太, 渡邉卓弥, 秋山満昭, 森達哉, "Large-scale analysis of Android clone apps," Proceedings of Computer Security Symposium 2015 (CSS 2015), vol. 2015, no. 3, pp. 207-214, Oct. 2015 (MWS Student Paper Award)
    ・Y. Ishii, T. Watanabe, M. Akiyama, and T. Mori, "Understanding the Origins of Similar Android Apps" (poster presentation), The 18th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2015)
    For topic (6):
    ・孫博, 渡邉卓弥, 秋山満昭, 森達哉, "Analysis of unnatural ratings and reviews on Android app stores," Proceedings of Computer Security Symposium 2015 (CSS 2015), vol. 2015, no. 3, pp. 655-662, Oct. 2015
    ・B. Sun, T. Watanabe, M. Akiyama, and T. Mori, "Seeing is Believing? The analysis of unusual ratings and reviews on Android app store" (poster presentation), The 18th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2015)

  • Probabilistic Hash Tables for Detailed Monitoring of Ultra-High-Speed Networks

    2013  

     View Summary

    [Outline of research objectives] We study fundamental and applied techniques for detailed, real-time observation of terabit-class ultra-high-speed networks. Real-time network observation requires storing and looking up, at high speed, the correspondences among a huge number of variables such as addresses, names, and observation times. On the other hand, perfectly error-free observation results are not always required. Focusing on this point, we study probabilistic associative arrays that tolerate a small error rate in exchange for greatly compressed memory usage and higher speed, and we develop concrete applications based on this fundamental technique.
    [Summary of results] In FY2013, we examined data structures and algorithms for realizing probabilistic hash tables (associative arrays), and, as a concrete application of such probabilistic associative arrays, studied SFMap, a method that annotates each traffic flow with the corresponding service name by referencing and analyzing DNS queries. The purpose of SFMap is to infer the service behind traffic flows whose contents are hidden by encryption, by analyzing DNS query/response pairs; this is useful for network operators who want to understand how their networks are used. For probabilistic associative arrays, we established the Matrix Filter (MF), a scheme that builds a matrix by combining two Bloom filters. Performance evaluation through theory and numerical computation confirmed that MF performs well compared with existing schemes: given a target error rate, it can register and look up {key, value} tuples quickly while keeping memory consumption low, and it is particularly advantageous when the number of keys is huge and the number of values is relatively small. For SFMap, accuracy evaluation using real traffic data also confirmed good performance. We presented SFMap at a domestic conference in a joint paper with Takeru Inoue of JST ERATO, and developed software that reads real traffic data and performs the desired processing.
    [Future plans] Having confirmed that the fundamental scheme and the concrete application work well, in FY2014 we plan to compile the results of the MF and SFMap studies and submit them as full papers. We are also considering releasing the software implementing these schemes as open-source software. This research is supported by a KAKENHI Grant-in-Aid for Research Activity Start-up and will continue in FY2014; we also plan to conduct joint research with private companies based on these results.